E-book: Practical Security Tips

Don’t become the next security breach headline. “Practical Security Tips” will show you how to stay safe.

In this e-book, Chesapeake Systems’ security expert, Terry Melton, takes you through key steps for keeping your systems, accounts, and digital assets secure.

Some of the most effective protections are the least exotic and most easily implemented options out there. Have you crossed these off your security checklist?

This e-book outlines:
• Basic security – what questions should you be asking?
• Authentication management
• Encryption
• Access and protection policies
• Antivirus and anti-malware protection
• Patching schedules
• Backup fundamentals

Planning Ahead for 2020: Digital Content Challenges for Post Production

Devising digital media workflow solutions with the future in mind is what we do here at Chesapeake Systems – diving deep into the latest products and technologies and thinking about their implications for the road ahead. Now that we are halfway through 2019, business planning for 2020 will soon be in full swing for many in post-production, and that means planning for the continuing increase in digital content.

A recent study by Cisco predicts that an unprecedented 82% of all Internet traffic will be video by 2020. Furthermore, as more streaming services launch – Disney+, Apple TV+, and more – it's clear that content will continue to be produced at record levels. Even more pressing for networks, news outlets, and political influencers is coverage of the 2020 Presidential race, which promises to be sweeping in scope and detail and will leverage video content by any and all means possible.

What does this mean for post-production facilities, departments, or in-house post teams? You will run out of space faster. Accurately assessing the bandwidth you will need is vital to planning for the future, and it's important to have a clear understanding of your provider's offerings in both upload and download speeds. The ability to bring media back quickly has become a key factor in the technical post-production equation. Many providers tout high download speeds, but upload speeds have to be equivalent so that creatives and engineers working behind the scenes can move and deliver assets in a timely manner.
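As a rough illustration of that assessment, here is the kind of back-of-the-envelope math worth running; the asset size, link speeds, and efficiency factor below are assumptions for the example, not recommendations.

```python
# Back-of-the-envelope transfer-time estimate for bandwidth planning.
# The asset size, link speeds, and 80% efficiency factor are illustrative assumptions.

def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to move an asset of size_gb over a link_mbps connection,
    assuming real-world throughput is about 80% of the advertised rate."""
    size_megabits = size_gb * 8 * 1000  # GB -> megabits (decimal units)
    return size_megabits / (link_mbps * efficiency) / 3600

asset_gb = 500  # e.g., a batch of camera masters to push to a remote colorist

for mbps in (100, 500, 1000):
    print(f"{asset_gb} GB over a {mbps} Mbps uplink: ~{transfer_hours(asset_gb, mbps):.1f} hours")
```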

Along with the significant increase in both quantity and resolution of media being created comes the constant challenge of media management. Questions like whether storage should be cloud-based, on-premises, or both; how team members will locate files; and where renders will live have motivated some of the leading solutions providers in post to integrate media asset management, once not considered a significant part of the post deliverable equation. For example, Adobe has beefed up its platform with the expectation that MAM will now be core to every workflow.

Another trend we are seeing is investment in the editing process. Platforms like Blackmagic's DaVinci Resolve are adding tools beyond color grading, shifting tasks like simple VFX work onto the editor's list of responsibilities. Tracking all the revisions made by the director, studio, VFX artists, and editors must become intuitive within the MAM process, or version control will not only disrupt the workflow but frustrate the people collaborating on a project, who are often working from locations around the world. In the political realm, for example, a controlled and streamlined environment is key to enjoying all the benefits of a quick response to news events. Getting clicks wins the news cycle race, and that translates to viewers and dollars. But you can't be first without understanding the analytics: how are your videos performing, and how is the content affecting people? Integrating that analytical response into the workflow and MAM, alongside fast upload and download speeds, is essential for translating success into profit margins and, ultimately, for having your voice rise above competitors.

Furthermore, once 5G is implemented, editing in the cloud will become even more prevalent, and interest in physical drives will fade except among those worried about security. Collaborators on a project won't care whether the footage they are working on is cloud-based or not; as long as it uploads and downloads quickly, creatives will be happy. However, the fact that 5G will be easily accessible to the masses also makes it a security concern: if people can access content more quickly, security teams have less time to react, and two minutes is an eternity during a breach. Implementing a solution without thorough attention to the proper permissions, copyrights, and licensing needed to thwart security risks is a recipe for disaster.

We are also experiencing growth in the prevalence and popularity of collaborative workflows and the platforms that cater to them. The evolving ability of creative teams to interact with the many members of the post team in real time is changing the pace of finishing in post-production. Here at Chesapeake, we are working on ways to build an instantaneous, collaborative process into highly flexible and functional workflows, which should be on everyone's hotlist.

Advancements in Internet technology should also be taken into consideration when planning for 2020. IP-based approaches, along with rapid enhancements in disk storage technology, have raised the performance of NAS devices. The speed and agility of NAS setups, once available only with a SAN, mean that media-rich color correction can now be done on a NAS, opening new doors for how post teams can operate. While this path will be a viable new option for some, post and IT teams need to understand the ideal environment for this type of move; hurdles in security or scalability can be serious complications in making the transition successful.

In summary, there is a multitude of moving parts to consider as you evaluate your media workflow management needs for 2020 and beyond. We are thinking ahead, and the experts here at Chesapeake Systems are poised to make this evolution the most successful one you can imagine.

Cloud Storage for M&E: Don’t Let the Buzz Sting You

Considering the changes in the business of media and entertainment, and in the technologies associated with managing the digital media supply chain, it's not surprising that many of our clients want to take a fresh look at their video storage workflows based on the value of assets to the organization.

When the 2011 tsunami in Japan put videotape in short supply, companies were forced to put their media in a digital format on high-performance digital storage platforms, which, in turn, led to substantial investments in production storage. With the advent of new technologies and the cloud, there is now serious attention focused on ways to mitigate the need for expensive, unnecessary hardware and find more efficient and less costly ways of managing video.

Clearly, the market is full of storage options that promise to be everything to everyone, but these promises do not take into account the value of content to your company. For example, content that's in use or that needs to be instantly accessible by your teams can be kept on high-availability production storage, while one-off media that's ready for long-term archiving can be migrated to less expensive storage platforms.

While you may be aware of the different storage technologies available, and are probably finding business reasons to use them, it can be tough to understand and stay abreast of the value proposition that each technology offers your business. It’s tempting to want a one-size-fits-all solution – after all, that’s what production storage has been for all these years.

However, using a single storage technology to serve multiple workflows is, to put it simply, no longer best practice.

Here is where our conversation starts – by talking about what your company does and how you do it. Do you create and push content on a daily basis? Does content need to be instantly accessible? If so, keeping media on production storage might be the best solution. Or does your content consist mainly of one-offs or programs to which you have limited rights? If that’s the case, you’ll want to move it to a secondary tier of storage – you don’t need to invest in the kind of easy access production storage offers.

Which solutions are right for you? You’ve probably heard about object storage, SATA and SAS drives, Flash and the cloud. It’s easy to gravitate to the latest buzzword or the industry’s “next big thing,” but that may not be what you really need.

Cloud vendors, in particular, have done a very good job of marketing themselves, and the cloud's appeal is understandable. Storage requires significant resources: you need a data center, the power to run and cool it, and IT and broadcast engineers to keep the system up and running. If you work in New York, Los Angeles, or London, real estate is at a premium. The idea of having storage off-site, in the cloud, is awfully inviting to corporate officers looking to cut costs.

The cloud is also advertised as costing less than a penny per gigabyte to store media, but if you need to push and pull content from the cloud frequently, it gets very expensive, very quickly. Many suffer sticker shock over the cost of recalling large, high-bit-rate video files from the cloud, and for that reason the cloud is still not considered a truly viable active storage tier.

On the other hand, for situations like disaster recovery, cloud storage is often ideal. When disaster strikes your headquarters, you’ll be grateful for the geographical distance between your primary site and your data center.

Middleware may also be a key part of your solution. Typically part of an overall media asset management solution, middleware can help you utilize different technologies within your ecosystem. You simply set certain policies, and middleware acts like a traffic cop, triggering different storage workflows based on those policies. For example, if content on production storage hasn't been touched in six months, it's tiered off automatically to nearline disk storage, tape, or the cloud. If content needs to be recalled for production, it's pulled back into object storage or a less expensive pool of storage behind the production server.
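A minimal sketch of that kind of policy logic is below; the asset fields and the move_to_tier() helper are hypothetical stand-ins for whatever middleware or MAM API is actually in play.

```python
from datetime import datetime, timedelta

# Illustrative tiering policy: move assets off production storage after 180 days
# of inactivity, and pull them back when a recall is requested.
# The asset fields and move_to_tier() are hypothetical, not a real product's API.

INACTIVITY_WINDOW = timedelta(days=180)

def move_to_tier(asset, tier):
    print(f"Moving {asset['id']} from {asset['tier']} to {tier}")
    asset["tier"] = tier

def apply_tiering_policy(assets, now=None):
    now = now or datetime.utcnow()
    for asset in assets:
        idle_time = now - asset["last_accessed"]
        if asset["tier"] == "production" and idle_time > INACTIVITY_WINDOW:
            move_to_tier(asset, "nearline")    # could equally be tape or a cloud tier
        elif asset["tier"] != "production" and asset.get("recall_requested"):
            move_to_tier(asset, "production")  # bring it back for active work

apply_tiering_policy([
    {"id": "promo_2019_v2", "tier": "production",
     "last_accessed": datetime.utcnow() - timedelta(days=300)},
])
```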

As storage options grow, the question of what's best for your media-centric business or workflow can seem overwhelming. It's Chesapeake's goal to help you discover the technologies that best fit your workflows and to show you how to utilize them as part of your daily operations, as well as in response to your short- and long-term storage needs. We're here to help you navigate the maze of options on the market. Let us help you choose the options that meet your business goals.

An Exploration of Object Storage for Media-Centric Workflows

Object storage offers a new paradigm. As a data storage system, it is something that can be installed on site, but it is also the basis for most of the storage available on the public cloud. However, its use as a valuable technology in M&E — both for active workflow storage and long-term asset preservation — is less understood. This tutorial will explain why it is so useful, how it works, the problems it solves, and how it differs from other approaches.

Like many things in IT, the way data is stored, especially in a shared storage environment, can be thought of as a "solution stack." At a base level, data is stored on storage devices, such as hard drives and flash drives, as blocks of data. Every individual file gets broken up into several blocks of data, each block being a particular number of bytes. These data blocks are mapped to regions of the data storage device, such as sectors on a hard drive. The mappings are stored in a file system: a database of those block mappings, along with metadata about the files, such as access rights, creation and modification dates, etc. The file system is layered onto the raw storage device when a drive is formatted.
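To make the block layer concrete, here is a toy calculation of how a single media file fans out into block mappings the file system must track; the 4 KiB block size and the file size are assumptions for illustration only.

```python
import math

# Toy illustration: how many fixed-size blocks a single file occupies.
# The 4 KiB block size and 250 MiB file size are assumptions, not recommendations.
block_size_bytes = 4 * 1024          # 4 KiB blocks
file_size_bytes = 250 * 1024 ** 2    # a 250 MiB video clip, for example

blocks = math.ceil(file_size_bytes / block_size_bytes)
print(f"{file_size_bytes / 1024**2:.0f} MiB file -> {blocks:,} block mappings in the file system")
```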

File systems are organized into hierarchies of directories, or folders, and within these directories are files. They are certainly useful for organization, and many users and workgroups come up with elaborate hierarchies with strong naming conventions for directories and files. We have the means of sharing these file systems out to workgroups of users, server systems and associated software platforms, via SANs and file servers.

But there is something rigid about hierarchies. Perhaps there is a better paradigm than the traditional file system.

Blocks, Files, and Objects, Oh My!

Object storage answers this by borrowing the notion of a data object from areas such as object-oriented programming and databases. So, what is a data object? It is the data — any data but probably a file or other captured data stream — referred to by an arbitrary set of attributes, usually expressed as a number of key-value pairs. “File name” would be a key, and the name of the file would be the value for that key. “File creation date” and “file modification date” would be other keys, with their own values.

What object storage gives you that traditional file systems do not is the ability to create your own sets of key-value pairs to associate with the data objects you are storing, which can be integrated, more or less, any way you please through a software application interface. Think of the key-value metadata pairs you may have come up with for different classes of assets stored in a MAM database. You can come up with whatever you want, and they are not inherently hierarchically arranged. This means you could have a search engine application integrate with the metadata index of an object storage system, based on a customized query looking to bring back a list of all files that adhere to that particular set of criteria.
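A sketch of that idea follows; the key names and the simple in-memory filter are invented for illustration and stand in for whatever metadata index or search interface a real object store exposes.

```python
# Hypothetical key-value metadata attached to stored objects. The keys below are
# invented for illustration; they are not any particular vendor's schema.
objects = [
    {"object_id": "0a1f3c", "file_name": "promo_v3.mov", "campaign": "spring",
     "asset_class": "promo", "approved": True},
    {"object_id": "7b9c2e", "file_name": "interview_raw.mxf", "campaign": "spring",
     "asset_class": "raw", "approved": False},
]

def find(criteria):
    """Return every object whose metadata matches all key-value pairs in criteria."""
    return [o for o in objects if all(o.get(k) == v for k, v in criteria.items())]

# e.g., "all approved promo assets from the spring campaign"
for obj in find({"asset_class": "promo", "campaign": "spring", "approved": True}):
    print(obj["object_id"], obj["file_name"])
```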

It is an exceptionally flexible way to organize things on an object store. Which might mean it is not really structured at all. You may find it more useful to keep an object's searchable metadata in a totally separate place, like your MAM database. What the MAM and the object store both need to track is the file's object ID, which the object system assigns to each file stored on it. This ID is what a MAM or other software application passes to the object store via a GET API call, for example, in order to pull the file back to a file system for processing or use.
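As a sketch of that round trip, assuming a plain HTTP-style object interface: the MAM record, endpoint URL, and object key below are hypothetical, and authentication is omitted for brevity.

```python
import requests

# Hypothetical MAM record: the MAM only needs to remember the object ID (key).
mam_record = {"asset_title": "Episode 12 master", "object_id": "masters/ep12_v2.mxf"}

# Pull the object back to a local file system via an HTTP GET.
# The endpoint is a placeholder; authentication and error handling are omitted.
endpoint = "https://objectstore.example.com/production-archive"
url = f"{endpoint}/{mam_record['object_id']}"

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("/tmp/ep12_v2.mxf", "wb") as local_file:
        for chunk in resp.iter_content(chunk_size=8 * 1024 * 1024):
            local_file.write(chunk)
```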

Workflow Considerations

Many media software applications today cannot modify data in place on an object store via that system's APIs, because they are written to work with file systems on local storage or file systems shared via SAN and NAS technologies. Your NLE cannot really edit off object storage. Your transcoder cannot really decode from and encode to your object store. Not via the native APIs, anyway. However, many object storage systems do offer file-sharing protocol "front ends" in order to support applications or users that need that interface to the data today. These do work decently, but they tend not to match the performance of traditional shared file systems for media processing workloads.

Where this is starting to change is in the public cloud space. Providers like Amazon Web Services, whose platforms are built around files being stored on object storage such as S3, also offer media-specific services like transcoding. They have been motivated to adapt and build media tool sets that can work with data on an object store, and these capabilities will likely, over time, "trickle down" to on-premises deployment models.

For on-premises object storage deployments, the object storage platform is usually the second tier of a two-tier data storage setup: SAN or NAS for processing, and object for longer-term storage. Even smaller object stores can compete performance-wise with very large tape library installations. Tape may still be useful as part of a disaster recovery, or DR, strategy, but object storage seems set to supplant it for many so-called "active archive" applications, that is, archives that store data that is actually utilized on a regular basis.

Preservation at Scale

Another strength of many object storage platforms is how well they scale. Many can grow to tens and hundreds of petabytes of capacity per “namespace,” or unified object storage system. Many traditional shared file system technologies fall apart at this scale, but object truly is “cloud-scale.” We generate a lot of data in media these days, and by the looks of things, data rates and storage requirements are only going to keep increasing with 4K, 8K, HDR, volumetric video, 360-degree video, etc.

But what's especially exciting about object for those of us in media is that it's not just storage at scale, but storage with exceptional preservation qualities for the underlying data. Data integrity is usually talked about in terms of "durability," expressed as some number of nines (much like data availability, which, unlike durability, speaks to whether data is accessible at a given moment). Durability for object storage is achieved through a few mechanisms, and together they make it very unlikely you will ever see data go "bad" or be lost due to bit-flipping, drive loss, or other failures, even when storing many petabytes.

The first way this is achieved is through erasure coding algorithms. Similar to RAID, they generate extra data based on all file data that lands on the system. Unlike the generation of RAID parity data in RAID 5 and 6, however, erasure coding does not require costly specialized controllers; rather, it uses the CPU of the host server to do the calculations.

Erasure coding can also tolerate more loss per disk set than RAID 6, which can only lose two of its constituent drives; when a third fails before the rebuild is complete, all data on the RAID set is lost. As such, it is imperative to limit the total number of disks in a traditional RAID set in order to balance the storage space you want against the risk of multiple disk failures causing data loss. Erasure coding algorithms are much more flexible. You can, for example, assign six out of 18 drives in a set to parity, so one third of the raw capacity is unavailable for actual storage; in exchange, six drives per set can be lost without any data loss. Other ratios, which affect overall system durability and storage efficiency, can also be selected.
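The trade-off in that 18-drive example can be restated with simple arithmetic; the sketch below only compares usable capacity and tolerated drive losses for the layouts described above, and is not a durability model.

```python
# Restating the article's example layouts: an 18-drive RAID 6 set vs. 12+6 erasure coding.
# This compares only capacity overhead and tolerated drive failures.

def summarize(name, total_drives, parity_drives):
    usable = (total_drives - parity_drives) / total_drives
    print(f"{name}: {total_drives} drives, survives {parity_drives} drive failures, "
          f"{usable:.0%} usable capacity")

summarize("RAID 6 (18-drive set)", 18, 2)    # a third simultaneous failure loses the set
summarize("Erasure coding (12+6)", 18, 6)    # one third of the set holds parity data
```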

Another mechanism some object storage systems use to achieve durability, even in the case of entire site failures, is the ability to build erasure-coded disk sets across subsystems housed in multiple geographic locations. There are often some fairly stringent networking requirements between sites, but it is reassuring to be able to have, for instance, a single object store spread between three locations, erasure coded in a way where all data can be maintained even if one of the three locations is wiped off the map, all while still achieving the storage efficiency of a single erasure code. If that is a larger geographic footprint than you want, a direct one-to-one replica between systems in different locations is often also an option; this really means two separate object stores doing a full replication between one another. There are some whispers of two-site erasure coding becoming an option in some future systems, so things may improve down the road from a storage efficiency perspective for two-site setups.

Finally, as far as preservation technologies go, some object stores feature ongoing data integrity checks, via checksum (or hash value comparison) techniques. Hashing algorithms generate a unique value, based on the original set of bits in a file. If the hash is run again at a later time, and the hash value generated is the same as the original, you know that all of the bits in the file are identical to the originals. If the hash value changes, however, you know at least one bit in the file has changed — and that is enough to consider a file ruined in most circumstances.
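A minimal fixity check in that spirit, using a standard hashing library; the choice of SHA-256 and the file path are assumptions, and a real object store would run this internally rather than over a mounted path.

```python
import hashlib

def file_hash(path, algorithm="sha256", chunk_size=8 * 1024 * 1024):
    """Stream a file through a hash function and return its hex digest."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# On ingest, record the digest alongside the asset; re-run it later and compare.
original_digest = file_hash("/archive/ep12_v2.mxf")
# ... months later ...
if file_hash("/archive/ep12_v2.mxf") != original_digest:
    print("At least one bit has changed; repair from an uncorrupted copy.")
```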

Thankfully, because multiple copies of bits are stored utilizing the previously-described erasure coding techniques, object stores that offer this kind of feature are capable of figuring out which versions of a file’s bits are uncorrupted and can repair the corrupted bits using these uncorrupted copies. This kind of operation can be set up to run periodically for the entire data set, so such checksums are performed every six or 12 months. When considered in tandem with overall bit-flipping probabilities, this can lead to a stable system. While some data tape systems do offer such checksum features, these are much slower to complete due to the inherent latencies and bandwidth limitations of the data tape format.

So Really, in a Nutshell

Object storage is metadata-friendly, and thus extremely flexible when it comes to organization and discovery of data. It is very easy to integrate with applications like MAMs and media supply chain platforms. It offers quick accessibility of data and can scale to extremely large capacities while protecting against data loss due to mechanical part or subsystem failure, data corruption, or even site loss. It is not wedded to hard drive technology: you can, if you want, build object stores out of flash drives (we do not advise this, but it is possible). You can own it and host it yourself, lease it via the cloud and a broadband connection, or in some cases create hybrid systems of these approaches. And it is not subject to the risks of RAID when deployed at scale.

I think a common model will emerge for many media-centric companies: build a single-site object store that is relied on for daily use, with a copy of all data also placed into a very low-cost public cloud storage tier as a disaster recovery backup. Because you are working from the copy of the data on your own system, egress and recovery fees for the public cloud tier stay minimal, other than in a disaster scenario.

There is finally something new under the sun in data storage that offers real value. Object storage is available for everyone, and we look forward to seeing how we can use it to build smarter, more efficient systems for our clients over the coming years.

The Cloud: Inherently, Unpredictably Transformative

At Chesapeake Systems, we’ve been moving “the edges” of our clients’ infrastructure to the cloud for some time now. Some use cloud storage to collect assets from the field, which are then fed into the central management platform (often still found on-premises). Others use the cloud to host a review and approve web application, which might tie into a post-production workflow. The cloud is obviously used for delivery to both partners and the public at large, and all we have to do is look to YouTube to see how much of a shakeup to traditional M&E that has caused.

This makes the cloud transformative. It is more than "someone else's server," even though on one level it is that. But I believe that technologies as fundamental as "the cloud" are often inherently, unpredictably transformative. It is difficult to imagine the kinds of shakeups they foretell. And this notion is exemplified by the famously controversial invasion of West Coast cities (and beyond, including Baltimore) by electric scooters.

For those catching up, Uber-like services have been renting electric scooters for short, "last-mile" trips in major American cities. In classic "disrupt-at-all-costs" style, companies like Bird Rides and Lime dropped their rentable scooters off in metro area test markets by the hundreds (and maybe even thousands), without any permitting whatsoever. And Santa Monica, California, embraced them full on. These scooters are EVERYWHERE! Ditched on sidewalks. Being ridden down streets and bike lanes. I can only compare it to such "future shock" moments as watching people play Pokémon GO in NYC the week it was first released. Essentially, one day there really wasn't anyone tooling around on electric scooters other than maybe as an occasional novelty, and then BAM! Scooters scooters everywhere, scooting to and fro.

What was the confluence of factors that aligned the stars for rentable electric scooters to accelerate from minor influence to “THEY’RE EVERYWHERE!” practically overnight?

It's simple. The scooters are easy to use. The consumer downloads the service's app, which shows the location of all rentable scooters nearby and their battery charge levels. Thanks to built-in GPS and cellular networking technologies, scanning the scooter's unique QR code with the iOS or Android app "unlocks" the scooter, with payment kicking in via credit card, Apple Pay, etc. The scooter is not literally locked to anything, but it will not function until you pay for it with the app, which is connected, of course, to your identity (you even have to scan your driver's license when setting up an account). These services are affordable. And when you're done, you finish your ride in the app, which locks the scooter, and you put it… wherever. The scooters are gathered up every night, charged, and redistributed around the city by non-employee contractors, akin to how Uber or Lyft contracts automobile drivers.

With lithium-ion battery technology reaching certain performance and pricing levels, GPS and cellular data tech expanding, high smartphone ownership (over 75% in the U.S.), easy mobile payment processing, and QR code technology, scooters went from zero to near ubiquity overnight.

But not without the cloud. What does the cloud have to do with electric scooters? The cloud brings to a startup operation the ability to weave the aforementioned technologies and services into a cohesive system without having to make a major technology infrastructure investment. And for a widely-distributed system, it makes the most sense to put the IT backbone in the cloud. It’s scalable and can easily talk to a fleet of thousands and thousands of mobile devices that happen to also be modes of transportation.

I would submit that without the cloud there would be a diminished – or even nonexistent – rentable electric scooter craze. It's a major supporting piece of the puzzle.

Similarly, that is what the cloud is doing to the media technology space.

Now we can begin to plan how to put more of the "guts" of a client's infrastructure up there, and on-premises systems will soon enough exist only for the pieces that make sense to house at the edges of a deployment. Maybe your MAM's database, viewing proxies, and application stack will be next to go up to the cloud. Maybe the cloud will house your disaster-recovery data set.

It’s even fairly easy to imagine more of high-performance post-production taking place without any significant on-premises infrastructure beyond a workstation. Or will that, too, become a virtualized cloud system? We can already do this, in fact, in a way that works for some workflows.

What’s further out? Here’s just one scenario:

In five years, significantly more of the software application and platform stack that we all rely on today will be "containerized," and thus ripe for cloud and hybrid-cloud deployments in a much more sophisticated way than is currently done (in our M&E world, at least; other industries already do this). Software containers tend to use a technology called Docker. You can think of a Docker container almost like a VM, but it carries no operating system of its own, just a piece of the overall software stack (a "microservice") and whatever software dependencies that piece has.

Management platforms, such as the popular Kubernetes (from Google), allow one to manage the containers that make up a software platform, even auto-scaling these microservices as needed on a microservice-by-microservice basis. Say the transcoder element of your solution needs to scale up to meet a short-term spike in demand: Kubernetes can spin up more container instances housing the transcoder microservices your solution relies on. The same could go for a database that needs to scale, or workflow processing nodes, and on and on.
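As a toy illustration of the scaling decision itself (just the arithmetic an autoscaler applies, not the Kubernetes API), assuming a pending-job queue drives how many transcoder replicas are wanted:

```python
# Toy autoscaler logic: size the transcoder replica count to the pending-job queue.
# Real platforms (e.g., Kubernetes' Horizontal Pod Autoscaler) apply this kind of
# rule against observed metrics; the throughput and limits here are invented.

JOBS_PER_REPLICA = 4           # assumed throughput of one transcoder container
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def desired_replicas(pending_jobs: int) -> int:
    wanted = -(-pending_jobs // JOBS_PER_REPLICA)   # ceiling division
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

for queue_depth in (0, 3, 17, 200):
    print(f"{queue_depth} pending jobs -> {desired_replicas(queue_depth)} transcoder replica(s)")
```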

All of this is basically a complicated way of engineering an infrastructure that on a fine-grained basis can automatically spin up (and likewise, spin down) just the necessary portions of a unified system, incurring additional costs just at the moments those services are needed. This is, as we know, essentially the opposite of a major on-premises capital buildout project as we currently envision it.

What I described above, by itself, is going to be extremely disruptive to our industry. That’s not to say it’s a bad thing, but it will significantly impact which vendors we work with, which ones are with us a decade from now, how we fund projects, what type of projects can even be “built,” who’s building them, etc.

The notion of a “pop-up broadcaster” with significantly greater capabilities than today’s OTT-only players becomes possible. Want to see if a major broadcasting operation can be sustainable? Rent some studio space and production gear, and essentially the rest of the operation can be leased, short term, and scaled in any direction you’d like.

Many, many organizations do the above every single day. Facebook, Google/YouTube, Amazon, and others do this just to deal with the traffic loads on their websites. In fact, you don't build a mass-scale, contemporary website without using the approaches described above.

What’s more interesting than the above, or pretty much anything else we can think of today? What will be more challenging? It’ll be our “scooter” moments. It’ll be the confluence of cloud technologies and many others that will lead to innovators coming up with ideas and permutations that we can’t yet anticipate. One day we’ll be doing things in a way we couldn’t even sort of predict. One day, seemingly out of nowhere … the “scooters” will be everywhere.

Artificial Intelligence: Should You Take the Leap?

In Hollywood, the promise of artificial intelligence is all the rage: who wouldn't want a technology that applies the magic of smarter computers for an instant solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we're told. If you've been to NAB or CES or any number of conferences, you've heard the pitch: it's an "easy" button that's simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.

But should you take the leap? Before you sign on the dotted line, let’s take a closer look at the technology behind AI and what it can – and can’t – do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken word, important events within a period of time, or information about the production (e.g., people, location, props), and there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine-learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized, and unstructured media data is unrealistic until you know what you’re looking for.

AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them: with AI, a generally accurate, automatically generated transcription saves real time. However, it's important to note that AI tools still have limitations. An AI tool known as "sentiment analysis" could theoretically look for the emotional undertones in spoken word, but it first requires another tool to generate a transcript for analysis. And no matter how good the algorithms are, they won't give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won't tell you the facial expressions of the people being spoken to, the tone of voice, pacing, and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem down into components ensures the parameters you actually need will be addressed and solved.
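To make the chaining concrete, here is a heavily simplified sketch: the transcribe() step is a hypothetical placeholder for a speech-to-text service, and the "sentiment" scoring is a crude keyword count rather than a real model.

```python
# Two-step pipeline: generate a transcript first, then score sentiment over the text.
# transcribe() is a hypothetical stand-in for a speech-to-text service, and the word
# lists below are a toy scorer, not a real sentiment-analysis engine.

POSITIVE = {"great", "love", "wonderful", "excited"}
NEGATIVE = {"terrible", "hate", "awful", "worried"}

def transcribe(media_path: str) -> str:
    raise NotImplementedError("call your speech-to-text service of choice here")

def crude_sentiment(transcript: str) -> float:
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# transcript = transcribe("interview_raw.mxf")  # in a real pipeline
transcript = "I love this project but I am worried about the schedule"
print(f"sentiment score: {crude_sentiment(transcript):+.2f}")
# Note: this captures none of the tone, pacing, or body language a human would notice.
```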

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That's impressive and can be quite useful. But it's another story if you want to find footage that shows two people drinking from what are clearly Coke-shaped bottles with the logo obscured.

That’s because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from it. There are a bevy of algorithms and engines out there. If you license a service that will find a specific logo, then you haven’t solved your problem for finding objects that represent the product as well. Even with the right engine, you’ve got to think about how this information fits in your pipeline, and there are a lot of workflow questions to be explored.

Let's say you've generated speech-to-text from your audio media. But have you figured out how someone can search the results? There are several options. Some vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on-premises or plan to purchase. There are also vendors that don't provide machine learning themselves but act as a third-party service organizing the engines.

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. It’s important to understand whether occasional inaccuracies will be acceptable or deal breakers for your company. Testing samples of your core content in different scenarios for which you need to solve becomes another crucial step. And many vendors are happy to test footage in their systems.
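One practical consequence is that detections come back with confidence scores, and you decide how to act on them; the sketch below routes uncertain hits to a human reviewer, with the scores and thresholds invented for illustration.

```python
# Probabilistic detections with a review threshold: anything the engine is not
# sufficiently sure about goes to a human. Scores and thresholds are invented.

AUTO_FLAG_ABOVE = 0.95      # treat as a confirmed hit
HUMAN_REVIEW_ABOVE = 0.60   # uncertain band: queue for a reviewer

detections = [
    {"frame": 1042, "label": "nudity", "confidence": 0.98},
    {"frame": 2311, "label": "nudity", "confidence": 0.71},
    {"frame": 4120, "label": "nudity", "confidence": 0.22},
]

for d in detections:
    if d["confidence"] >= AUTO_FLAG_ABOVE:
        action = "flag automatically"
    elif d["confidence"] >= HUMAN_REVIEW_ABOVE:
        action = "queue for human review"
    else:
        action = "ignore (below review threshold)"
    print(f"frame {d['frame']}: {d['label']} at {d['confidence']:.2f} -> {action}")
```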

Although machine learning is still in its nascent stages, I’m encouraged that clients are interested in using it. At Chesapeake Systems, we have been involved in AI for a long time and have partnerships with many of those companies pushing the technology forward. We have the expertise to help you define your needs, sift through the thousands of solution vendors to find the ones who match those needs, and integrate those solutions into your pipeline to be fully useable.

Machine learning/artificial intelligence isn't (yet, anyway) a magic "easy" button. But it can still do some magical things, and we're here to help you break down your needs and create an effective custom solution to suit them.

To learn more about what AI can do for you, contact Chesapeake at prosales@chesa.com

So you think you need an RFP

Over the years, Chesapeake Systems has responded to many RFPs, each with its own unique DNA. As a company that prides itself on being an engaged and enthusiastic partner to our clients, we’ve thought a lot about how best to establish that tone of partnership from the beginning of the relationship, including through the RFP process. We’re sharing our experience here in the hope that it will benefit both prospective issuers and respondents.

We believe there are three critical ideas in establishing the kind of relationship that both parties will want to stay in: collaboration, transparency, and communication.

Collaboration.
A collaborative orientation on the part of both parties is critical to a successful RFP process. The goal of the process is to find someone you want to partner with, not just to stage a rigorous competition. In the most successful RFPs, the issuing organization is as helpful as possible to respondents, because that helpfulness yields the best responses. Careful preparation and honest communication pay dividends down the line for both partners.

Share who you are, not just what you know, and expect the same from your respondents. Get acquainted with one another. Make time for more than one respondent to present to you. On a project of the scale that requires an RFP, you're likely to be in the relationship for a long time. Don't go in blind – make sure you're choosing people who can communicate with you and whom you want to work with for the foreseeable future.

Knock down the walls. Sometimes RFPs read as if they've been written with the intention of keeping the relationship as sterile as possible. Communication becomes stifled in pursuit of impartiality, or its appearance – and while impartiality is a worthy goal, problems are not solved by withholding information. Ultimately, the success of the RFP process, like the eventual project work, will be determined by the combined efforts of all parties participating.

Remember, the tone of your relationship is set by the tone of your selection process.

Transparency.
Be honest about where you stand in your process. If you’re not ready to do a procurement, or are already narrowing in on your vendor, or if you don’t have executive support and budget approval, consider whether the time is right to issue a formal RFP. Prospective vendors are happy to respond to a less formal RFI (Request for Information) or sit down to talk about the potential project without a formal process. Those processes can naturally evolve into a complete, focused, well-reasoned RFP when the time is right.

Communication.
Be clear in your approach to the RFP. Articulate the problem and use the RFP platform to outline the issues. Your mastery of the problems and their nuances in the RFP gives top-tier respondents the opportunity to dig in while affording them the opportunity to offer their own perspectives and solutions.

Provide as much relevant information as humanly possible in the RFP. If you know something, say it; if you don’t know it yet, say that. Regardless of whether a third-party firm is involved in drafting the RFP, be sure to gather input from everyone who would come into contact with the system you’re bidding out and make sure all of that input makes it into the document.

Consider reserving the longest chunk of your RFP timeline for after you have answered the respondents' questions – that's where the work really begins, because the full scope and specifics of the project have been conveyed and are more likely to be fully understood by the respondents.

In addition to resulting in robust, detailed responses that you can actually use, evidence that you’ve carefully thought the project through attracts responses from strong contenders whom you would eventually want to work with. No desirable vendor wants to put hundreds of hours of effort into an RFP process without some assurance the issuer is both clear on what they’re doing and candid in communicating it.

Once the draft RFP feels complete, and before you distribute, read through the entirety from the respondent’s perspective. Ask yourself what you would need to know and what would help you provide the best possible response. Is the document designed to get you what you’re looking for?

Taking a step back to include all of these steps may feel like doubling the work to issue an RFP. However, putting in the effort on the front end will mean a smarter, faster evaluation process, because the responses will really get at the heart of the project and address your specific needs. Furthermore, a well-run RFP process yields one other valuable benefit: you will understand your organization, the problem, and the industry far better than when you began.

A Year of Growth and Change

2017 was a big year for Chesapeake Systems, as it was for the industry at large.

We’ve been charting our path through the expansion of public, private and hybrid cloud services alongside many of you, and we are thrilled to announce our certification as a Consulting Partner for Amazon Web Services (AWS). This qualification means we are “Amazon approved” in our expert guidance to customers in designing, architecting, building, migrating, and managing their workloads and applications on AWS.

We are also excited about new roles at the company. Mark Dent, Chesapeake’s co-founder and owner, has shepherded the company through every twist and turn of the past 22 years. He has now stepped into the CFO role. His dedication to our field remains steadfast, including his unwavering commitment to guaranteeing the company’s stellar reputation for service. And after 10 years fulfilling duties at Chesapeake from sales and engineering to project management and professional services, it was an honor for me to take the reins as CEO in April. I’m grateful for the opportunity, and thrilled to work with Mark to continue to position Chesapeake as the preeminent media technology and workflow solutions architects in the industry.

Furthermore, in response to our growing media and entertainment client base on the West Coast, we have expanded our footprint and support offerings with the addition of Sarah Shechner and Drew Hall in the Los Angeles area. Sarah is thrilled to be strengthening our connections to the tech community and providing account management expertise at a regional level. And as a Senior Systems Engineer, Drew brings over 15 years of video-centric data storage expertise to his role. We are excited to offer this additional level of service to our clients in the West.

Chesapeake's ongoing participation with important industry organizations that drive progress in media and technology continues to flourish. One of the year's highlights for Nick was serving as conference chair of the Association of Moving Image Archivists' (AMIA) Digital Asset Symposium in May, where experts in the community shared their knowledge and experiences across a cross-section of disciplines. He also co-programmed Bits by the Bay for the Society of Motion Picture and Television Engineers (SMPTE) Washington, DC section, and spoke on a panel at the UCLA MEMES Big Data Conference, presented by the Anderson School of Management. Nick renewed our relationships with many of the leading-edge thinkers in our industry and came away with new perspectives to inform the work we do with our clients.

As we reflect on the close of the year, we are reminded of our good fortune to be working with the best of the best. Our clients stretch us, challenge us, and expect no less from us than we do from ourselves. It is a pleasure and a privilege to be working with you, and we look forward to what 2018 will bring. Stay tuned for more in the new year!

Happy Holidays from all of us at Chesapeake Systems.

DAS 2017 Highlight: Video is the Language of the 21st Century

On May 5, 2017, the Association of Moving Image Archivists (AMIA) hosted their annual Digital Asset Symposium (DAS) at the Museum of Modern Art in New York City. This event brought together all aspects of the industry and covered a variety of Media Asset Management topics.

Attendees were encouraged to ask questions and leverage the community around them. To facilitate further conversation, a reception was held afterward at Viacom's White Box.

During the welcome, Nick Gold, Chief Revenue Officer and Solutions Consultant at Chesapeake Systems and Program Chair of the 2017 DAS, stated: "Video is the language of the 21st century." This spoke to the underlying theme of the event: the need not only to capture this critical point in history but to preserve it and pass it on to future generations.

If you would like to hear Nick's welcome (his remarks begin at 5:58) or revisit any of the sessions, videos are posted on the DAS site.

NAB is Nigh

The Desert Beckons!
Yes, it’s that time of the year, when many in our sphere converge on the illusory land of Las Vegas for that annual celebration of all things video technology, the NAB Show (April 24 – 27, 2017). As always, the Chesapeake Systems gang will be in attendance, bouncing around the convention center and city at large through all hours of the day (and often well into the night), so we can keep our finger on the pulse of our industry.

NAB can be maddening in its scope. There is never enough time over the course of the five days we spend in Nevada each year to see and experience everything the show has to offer. We use our time there as best we can, however. Our team joins dozens of meetings and other events, so we can stay in sync with our clientele, as well as our current vendor partners.

One of the other important aspects of attending NAB is, of course, to engage with vendors we do not currently work with, but whose exciting technologies might be useful additions to our bag of tricks, that is to say, our portfolio of solutions that we can apply to the technology and workflow challenges we face every day across our client base.

Areas of Focus for Us?
Obviously Media Asset Management and associated technologies, which have largely become our hallmark as a consultancy and integration firm. There are always new players in the MAM space, and it is our goal to be as familiar with as many as possible, as deeply as possible. Each platform and associated developer has its strengths, as well as “areas that could use improvement.” It’s critical for us at CHESA to know these ins and outs, because sometimes subtle functionalities (or lack thereof) can make or break a successful implementation.

Storage technologies as always are a foundational part of our catalog, and there is much activity in this space as well. Production SAN and NAS shared storage systems are important to our clients, but increasingly, folks are investing in longer-term archival data repositories. But in our world, archives must be “active archives,” making it trivially easy to recall a snippet of video or other media for a project, no matter what tier of storage it may be on. The choices here are as expansive as ever. We’ve always used data tape and will for some time, but other options have emerged that are worthy of exploration, such as private “object storage” systems (which typically need to be addressed via API calls, and do not present a mountable file system to browse through, like a local drive, SAN or NAS volume). Another option on more organizations’ radars than ever before is public cloud storage, such as Amazon S3 or Microsoft Azure. Like private object stores, these cloud options almost always require some type of software platform to “put” files into them or “get” files out (these being two of the most common types of API, or “Application Programming Interface” commands for addressing object storage systems).
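For instance, with Amazon S3 the "put" and "get" operations look roughly like the sketch below; the bucket name, object key, and local paths are placeholders, and credentials and error handling are omitted.

```python
import boto3

# Placeholders throughout: substitute your own bucket, object key, and file paths.
s3 = boto3.client("s3")
bucket = "example-archive-bucket"
key = "projects/spot_2017/final_master.mxf"

# "put" a finished master into the cloud archive tier...
s3.upload_file("/Volumes/Production/final_master.mxf", bucket, key)

# ...and "get" it back later when the project is reopened.
s3.download_file(bucket, key, "/Volumes/Production/restore/final_master.mxf")
```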

And Then All of the Other Stuff:
Transcoding systems, workflow automation platforms, client-side creative applications from Adobe and others. Let’s not forget the fun stuff: 360-degree video camera rigs, airborne drones, maybe finally airborne drones equipped with 360-degree video cameras? A man can dream.

If you’re going to be out in Las Vegas for NAB, don’t be a stranger! It’s always fun to see friends and colleagues (a thankfully almost totally overlapping Venn diagram) out in the land of make-believe. Feel free to drop us a line ahead of the show, as we’re always happy to meet up and share our show-floor experiences. If you are not attending NAB, but there’s something you’ve got your eyes open for, let us know, and we’ll do what digging we can on your behalf while we’re out there.