Categories
Uncategorized

An Exploration of Object Storage for Media-Centric Workflows

Object storage offers a new paradigm. It can be installed on site as a data storage system, and it is also the basis for most of the storage available in the public cloud. However, its value in M&E — both for active workflow storage and long-term asset preservation — is less well understood. This tutorial will explain why it is so useful, how it works, the problems it solves, and how it differs from other approaches.

Like many things in IT, the way data is stored, especially in a shared storage environment, can be thought of as a “solution stack.” At a base level, data is stored on storage devices, such as hard drives and flash drives, as blocks of data. Every individual file gets broken up into several blocks of data with each block being a particular number of bytes. These data blocks are mapped to regions of the data storage device, such as sectors on a hard drive. These mappings are stored in a file system, which is a database of these mappings, along with metadata about the files, such as access rights, creation and modification dates, etc. The file system is layered onto the raw storage device when a drive is formatted.

File systems are organized into hierarchies of directories, or folders, and within these directories are files. They are certainly useful for organization, and many users and workgroups come up with elaborate hierarchies with strong naming conventions for directories and files. These file systems can be shared out to workgroups of users, server systems, and associated software platforms via SANs and file servers.

But there is something rigid about hierarchies. Perhaps there is a better paradigm than the traditional file system.

Blocks, Files, and Objects, Oh My!

Object storage answers this by borrowing the notion of a data object from areas such as object-oriented programming and databases. So, what is a data object? It is the data — any data but probably a file or other captured data stream — referred to by an arbitrary set of attributes, usually expressed as a number of key-value pairs. “File name” would be a key, and the name of the file would be the value for that key. “File creation date” and “file modification date” would be other keys, with their own values.

What object storage gives you that traditional file systems do not is the ability to create your own sets of key-value pairs to associate with the data objects you are storing, which can be integrated, more or less, any way you please through a software application interface. Think of the key-value metadata pairs you may have come up with for different classes of assets stored in a MAM database. You can come up with whatever you want, and they are not inherently hierarchically arranged. This means you could have a search engine application integrate with the metadata index of an object storage system, based on a customized query looking to bring back a list of all files that adhere to that particular set of criteria.

It is an exceptionally flexible way to organize things on an object store: so flexible, in fact, that it is barely structured at all. You may find it more useful to keep an object's searchable metadata in a totally separate place, like your MAM database. What the MAM and the object store both need to track is the file's object ID, which the object store assigns to each file stored on it. This ID is what a MAM or other software application passes to the object store via a GET API call, for example, in order to pull the file back to a file system for processing or use.
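
That round trip can be sketched in a few lines of Python. This is a toy, in-memory stand-in rather than any vendor's actual API: it just illustrates an object store assigning an ID on ingest, attaching arbitrary key-value metadata, answering metadata queries, and returning the data for a GET by ID.

```python
import uuid

class ToyObjectStore:
    """A toy, in-memory stand-in for an object store (not a real vendor API)."""

    def __init__(self):
        self._objects = {}  # object ID -> (data bytes, metadata dict)

    def put(self, data, metadata):
        """Store a blob with arbitrary key-value metadata; return its object ID."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, dict(metadata))
        return object_id

    def get(self, object_id):
        """Retrieve the blob for an object ID -- what a MAM would do via a GET call."""
        data, _ = self._objects[object_id]
        return data

    def search(self, **criteria):
        """Return IDs of all objects whose metadata matches every criterion."""
        return [oid for oid, (_, md) in self._objects.items()
                if all(md.get(k) == v for k, v in criteria.items())]

# A MAM-style workflow: store a clip, track only its object ID, fetch it later.
store = ToyObjectStore()
oid = store.put(b"...essence bytes...", {"show": "News at 6", "codec": "ProRes"})
assert store.get(oid) == b"...essence bytes..."
assert store.search(show="News at 6") == [oid]
```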

Workflow Considerations

Many media software applications today cannot modify data in place on an object store via that system's APIs, because they are written to use file systems on local storage, or shared via SAN and NAS file-sharing technologies. Your NLE cannot really edit off object storage. Your transcoder cannot really decode from and encode to your object store. Not via the native APIs, anyway. However, many object storage systems do offer file-sharing protocol "front ends" in order to support applications or users that need that interface to the data today. These work decently, but tend not to match the performance of traditional shared file systems for media processing workloads.

This is starting to change in the public cloud space. Some providers, like Amazon Web Services, also offer media-specific services such as transcoding. These cloud providers are built around files being stored on their object platforms, like S3, so they have been motivated to adapt and build media tool sets that can work with data on an object store. These capabilities will likely, over time, "trickle down" to on-premises deployment models.

For on-premises object storage deployments, the object storage platform is usually the second tier of a two-tier data storage setup: SAN or NAS for processing, and object for longer-term storage. Even smaller object stores can compete performance-wise with very large tape library installations. Tape may still be useful as part of a disaster recovery (DR) strategy, but object storage seems set to supplant it for many so-called "active archive" applications — archives that store data that is actually utilized on a regular basis.

Preservation at Scale

Another strength of many object storage platforms is how well they scale. Many can grow to tens and hundreds of petabytes of capacity per “namespace,” or unified object storage system. Many traditional shared file system technologies fall apart at this scale, but object truly is “cloud-scale.” We generate a lot of data in media these days, and by the looks of things, data rates and storage requirements are only going to keep increasing with 4K, 8K, HDR, volumetric video, 360-degree video, etc.

But what’s especially exciting about object for those of us in media is that it’s not just storage at scale, but storage with exceptional preservation qualities for the underlying data. Data integrity is usually discussed in terms of “durability,” expressed as a number of nines — much like data availability, which, unlike durability, describes whether data is accessible at a given moment. Durability for object storage is achieved through a few mechanisms, and together they make it very unlikely you will ever experience data going “bad” or being lost due to bit-flipping, drive loss, or other failures, even when storing many petabytes of data.
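
As a rough illustration of what those nines mean, here is the arithmetic, assuming for simplicity that durability is quoted as a per-object annual figure (vendors define it in varying ways):

```python
import math

def nines(durability):
    """Count the nines in a durability figure, e.g. 0.99999999999 -> 11."""
    return round(-math.log10(1.0 - durability))

def expected_annual_losses(durability, object_count):
    """Expected objects lost per year, treating durability as a per-object annual figure."""
    return object_count * (1.0 - durability)

eleven_nines = 0.99999999999
assert nines(eleven_nines) == 11

# With a billion stored objects, eleven nines implies roughly 0.01 expected
# object losses per year -- one loss per century, on average.
print(expected_annual_losses(eleven_nines, 1_000_000_000))
```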

The first way this is achieved is through erasure coding algorithms. Similar to RAID, they generate extra data based on all file data that lands on the system. Unlike the generation of RAID parity data in RAID 5 and 6, however, erasure coding does not require costly specialized controllers; rather, it uses the CPU of the host server to do the calculations.

Erasure coding can also tolerate more drive loss per disk set than RAID. A RAID 6 set can only lose two of its constituent drives; when a third fails before a rebuild completes, all data on the set is lost. As such, it is imperative to limit the number of total disks when creating a traditional RAID set, in order to balance total storage space against the risk of multiple disk failures causing data loss. Erasure coding algorithms are much more flexible: you could assign six out of 18 drives in a set to parity, so one third is unavailable for actual storage, but six drives per set can then be lost without any data loss. Other ratios, which affect overall system data durability and storage efficiency, can also be selected.
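
Production systems use Reed-Solomon-style codes, but the single-parity XOR scheme below (essentially RAID 5, the one-parity-shard case) shows the principle of recovering a lost shard from the survivors, along with the efficiency arithmetic for the 18-drive example above:

```python
def storage_efficiency(data_shards, parity_shards):
    """Fraction of raw capacity available for data in a k+m erasure-code layout."""
    return data_shards / (data_shards + parity_shards)

# The 18-drive example from the text: 12 data + 6 parity survives any six
# drive failures at two-thirds storage efficiency.
assert storage_efficiency(12, 6) == 2 / 3

def xor_parity(shards):
    """Compute a single parity shard: byte-wise XOR of all data shards."""
    parity = bytes(len(shards[0]))
    for shard in shards:
        parity = bytes(a ^ b for a, b in zip(parity, shard))
    return parity

def recover(surviving_shards, parity):
    """Rebuild one lost data shard by XORing the survivors with the parity shard."""
    return xor_parity(surviving_shards + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)

# Lose shard 1; XORing the remaining shards with parity reconstructs it.
assert recover([data[0], data[2]], parity) == data[1]
```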

Another mechanism some object storage systems use to achieve durability — even in the case of entire site failures — is the ability to create erasure-coded disk sets across subsystems housed in multiple geographic locations. There are often some fairly stringent networking requirements between sites, but it sure is reassuring to have, for instance, a single object store spread across three locations, erasure coded in a way where all data can be maintained even if one of the three locations is wiped off the map — all while still achieving the storage efficiency of a single erasure code. If that is a larger geographic footprint than you want, a direct one-to-one replica between systems in different locations is often also an option; this really means two separate object stores performing a full replication between one another. There are whispers of two-site erasure coding becoming an option in some future systems, so things may improve down the road from a storage efficiency perspective for two-site setups.

Finally, as far as preservation technologies go, some object stores feature ongoing data integrity checks via checksum (or hash value comparison) techniques. Hashing algorithms generate a value that is, for all practical purposes, unique to the original set of bits in a file. If the hash is run again at a later time, and the value generated matches the original, you know that all of the bits in the file are identical to the originals. If the hash value changes, however, you know at least one bit in the file has changed — and that is enough to consider a file ruined in most circumstances.
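
A minimal sketch of that check, using Python's standard hashlib (real systems may use different hash algorithms, but the comparison works the same way):

```python
import hashlib

def fingerprint(data):
    """SHA-256 hash of a file's bytes; any single flipped bit changes the value."""
    return hashlib.sha256(data).hexdigest()

original = b"10-bit 4:2:2 essence..."
stored_hash = fingerprint(original)

# A later integrity check: identical bytes produce an identical hash.
assert fingerprint(original) == stored_hash

# Flip a single bit and the fingerprint no longer matches.
corrupted = bytes([original[0] ^ 0x01]) + original[1:]
assert fingerprint(corrupted) != stored_hash
```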

Thankfully, because redundant data is stored using the previously described erasure coding techniques, object stores that offer this kind of feature can determine which of a file's bits are uncorrupted and repair the corrupted bits from that redundancy. This kind of operation can be set up to run periodically over the entire data set, so such checksums are performed every six or 12 months. When considered in tandem with overall bit-flipping probabilities, this leads to a very stable system. While some data tape systems do offer such checksum features, they are much slower to complete due to the inherent latencies and bandwidth limitations of the data tape format.
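
The verify-then-repair loop can be sketched as follows. This is a deliberate simplification (real object stores rebuild from erasure-coded shards rather than whole-file replicas), but the cycle of checking each copy against a known-good hash and repairing the bad ones is the same idea:

```python
import hashlib

def sha256(data):
    """Reference fingerprint for a blob of bytes."""
    return hashlib.sha256(data).hexdigest()

def scrub(copies, reference_hash):
    """Find an intact copy by hash, then repair every corrupted copy from it.

    Simplified: assumes at least one copy still matches the reference hash.
    """
    good = next(c for c in copies if sha256(c) == reference_hash)
    return [good if sha256(c) != reference_hash else c for c in copies]

original = b"master file bits"
ref = sha256(original)

# Copy 1 suffered a bit flip; the scrub pass detects and repairs it.
copies = [original, b"mastor file bits", original]
repaired = scrub(copies, ref)
assert all(sha256(c) == ref for c in repaired)
```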

So Really, in a Nutshell

Object storage is metadata-friendly, and thus extremely flexible when it comes to organization and discovery of data. It is very easy to integrate with applications like MAMs and media supply chain platforms. It offers quick accessibility of data and can scale to extremely large capacities while protecting against data loss due to mechanical part or subsystem failure, data corruption, or even site loss. It is not wedded to hard drive technology — you can, if you want, build object stores out of flash drives (we do not advise this, but it is possible). You can own it and host it yourself, lease it via the cloud and a broadband connection, and in some cases create hybrid systems of these approaches. And it’s not subject to the risks of RAID when deployed at scale.

I think a common model that will emerge for many media-centric companies will be to build a single-site object store that is relied on for daily use, with a copy of all data also placed in a very low-cost public cloud storage tier as a disaster recovery backup. Because day to day you are using the copy of the data on your own system, egress and recovery fees for the public cloud tier stay minimal, other than in a disaster scenario.

There is finally something new under the sun in data storage that offers real value. Object storage is available for everyone, and we look forward to seeing how we can use it to build smarter, more efficient systems for our clients over the coming years.


The Cloud: Inherently, Unpredictably Transformative

At Chesapeake Systems, we’ve been moving “the edges” of our clients’ infrastructure to the cloud for some time now. Some use cloud storage to collect assets from the field, which are then fed into the central management platform (often still found on-premises). Others use the cloud to host a review and approve web application, which might tie into a post-production workflow. The cloud is obviously used for delivery to both partners and the public at large, and all we have to do is look to YouTube to see how much of a shakeup to traditional M&E that has caused.

This makes the cloud transformative. It is more than “someone else’s server,” even though on one level it is that. But I believe that technologies as fundamental as “the cloud” are often inherently, unpredictably transformative. It is difficult to imagine the kinds of shakeups they foretell. And this notion is exemplified by the famously controversial invasion of West Coast cities (and beyond, including Baltimore) by electric scooters.

For those catching up, Uber-like services have been renting electric scooters for short, “last-mile” type trips in major American cities. In classic “disrupt-at-all-costs” style, companies like Bird Rides and Lime dropped their rentable scooters off in metro area test markets by the hundreds (and maybe even thousands), without any permitting whatsoever. And Santa Monica, California, embraced them full on. These scooters are EVERYWHERE! Ditched on sidewalks. Being ridden down streets and bike lanes. I can only compare it to such “future shock” moments as watching people play Pokémon GO in NYC, the week it was first released. Essentially, one day there really wasn’t anyone tooling around on electric scooters other than maybe as an occasional novelty, and then BAM! Scooters, scooters everywhere, scooting to and fro.

What was the confluence of factors that aligned the stars for rentable electric scooters to accelerate from minor influence to “THEY’RE EVERYWHERE!” practically overnight?

It’s simple: the scooters work easily. The consumer downloads the service’s app, which shows the location of all rentable scooters nearby and their battery charge levels. Thanks to built-in GPS and cellular networking technologies, scanning the scooter’s unique QR code with the iOS or Android app “unlocks” the scooter, with payment kicking in via credit card, Apple Pay, etc. The scooter is not literally locked to anything, but it will not function until you pay for it with the app, which is connected, of course, to your identity (you even have to scan your driver’s license when setting up an account). These services are affordable. When you’re done, you end your ride in the app, which locks the scooter, and you put it… wherever. The scooters are gathered up every night, charged, and redistributed around the city by non-employee contractors, akin to how Uber or Lyft contracts automobile drivers.

With lithium-ion battery technology reaching certain performance and pricing levels, GPS and cellular data tech expanding, high smartphone ownership (over 75% in the U.S.), easy mobile payment processing, and QR code technology, scooters went from zero to near ubiquity overnight.

But not without the cloud. What does the cloud have to do with electric scooters? The cloud brings to a startup operation the ability to weave the aforementioned technologies and services into a cohesive system without having to make a major technology infrastructure investment. And for a widely-distributed system, it makes the most sense to put the IT backbone in the cloud. It’s scalable and can easily talk to a fleet of thousands and thousands of mobile devices that happen to also be modes of transportation.

I would submit that without the cloud there would be less of a – or even nonexistent – rentable electric scooter craze. It’s a major supporting piece of the puzzle.

Similarly, that is what the cloud is doing to the media technology space.

Now, we can begin to plan out how to put more of the “guts” of a client’s infrastructure up there, and on-premises systems will soon enough be there only to touch systems which make sense to house at the edges of a deployment. Maybe your MAM’s database, viewing proxies, and application stack will be next to go up to the cloud. Maybe the cloud will house your disaster-recovery data set.

It’s even fairly easy to imagine more of high-performance post-production taking place without any significant on-premises infrastructure beyond a workstation. Or will that, too, become a virtualized cloud system? We can already do this, in fact, in a way that works for some workflows.

What’s further out? Here’s just one scenario:

In five years, significantly more of the software application and platform stack that we all rely on today will be “containerized,” and thus ripe for cloud and hybrid-cloud deployments in a much more sophisticated way than is currently done (in our M&E world, at least — other industries already do this). Software containers tend to use a technology called Docker. You can think of a Docker container almost like a VM, but it has no operating system, just a piece of the overall software stack (a “microservice”) and any software dependencies that piece of the overall stack has.

Management platforms, such as the popular Kubernetes (from Google), allow one to manage the containers that make up a software platform, even auto-scaling these microservices as needed on a microservice-by-microservice basis. Say the transcoder element of your solution needs to scale up to meet demand, but short term? Kubernetes can help spin up more container instances that house the transcoder microservices your solution relies on. Same could go for a database that needs to scale, or workflow processing nodes, and on and on.
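
As a hypothetical illustration of what that looks like in practice, a Kubernetes HorizontalPodAutoscaler manifest along these lines would scale a containerized transcoder microservice up and down with load. The Deployment name and thresholds here are illustrative, not taken from any particular product:

```yaml
# Hypothetical example: scale a transcoder microservice between 1 and 20
# replicas based on observed CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: transcoder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: transcoder
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```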

All of this is basically a complicated way of engineering an infrastructure that on a fine-grained basis can automatically spin up (and likewise, spin down) just the necessary portions of a unified system, incurring additional costs just at the moments those services are needed. This is, as we know, essentially the opposite of a major on-premises capital buildout project as we currently envision it.

What I described above, by itself, is going to be extremely disruptive to our industry. That’s not to say it’s a bad thing, but it will significantly impact which vendors we work with, which ones are with us a decade from now, how we fund projects, what type of projects can even be “built,” who’s building them, etc.

The notion of a “pop-up broadcaster” with significantly greater capabilities than today’s OTT-only players becomes possible. Want to see if a major broadcasting operation can be sustainable? Rent some studio space and production gear, and essentially the rest of the operation can be leased, short term, and scaled in any direction you’d like.

Many, many organizations do the above every single day. Facebook does this, as do Google/YouTube, Amazon, and others, just to deal with the traffic loads on their websites. In fact, you don’t build a mass-scale, contemporary website without the approaches described above.

What’s more interesting than the above, or pretty much anything else we can think of today? What will be more challenging? It’ll be our “scooter” moments. It’ll be the confluence of cloud technologies and many others that will lead to innovators coming up with ideas and permutations that we can’t yet anticipate. One day we’ll be doing things in a way we couldn’t even sort of predict. One day, seemingly out of nowhere … the “scooters” will be everywhere.



Artificial Intelligence: Should You Take the Leap?

In Hollywood, the promise of artificial intelligence is all the rage: who wouldn’t want a technology that promises an instant, AI-powered solution to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs, while simplifying operations … or so we’re told. If you’ve been to NAB or CES or any number of conferences, you’ve heard the pitch: it’s an “easy” button that’s simple to add to the workflow and foolproof to operate, turning your massive amounts of uncategorized footage into metadata.

But should you take the leap? Before you sign on the dotted line, let’s take a closer look at the technology behind AI and what it can – and can’t – do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken word, important events within a period of time, or information about the production (e.g., people, location, props), and there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine-learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized, and unstructured media data is unrealistic until you know what you’re looking for.

AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them: AI can produce a generally accurate transcript automatically, saving considerable time. However, it’s important to note that AI tools still have limitations. An AI tool known as “sentiment analysis” could theoretically look for the emotional undertones in spoken word, but it first requires another tool to generate a transcript for analysis. And no matter how good the algorithms are, they won’t give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won’t tell you the facial expressions of the people being spoken to, or the tone of voice, pacing, and volume level of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem down into components ensures the parameters you need will actually be addressed.

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That’s impressive and can be quite useful. But it’s another story if you want to find footage that shows two people drinking what are clearly Coke-shaped bottles with the logo obscured.

That’s because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from it. There are a bevy of algorithms and engines out there. If you license a service that will find a specific logo, then you haven’t solved your problem for finding objects that represent the product as well. Even with the right engine, you’ve got to think about how this information fits in your pipeline, and there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text from audio media. But have you figured out how someone can search the results? There are several options. Some vendors have their own front end for searching. Others may offer an export option from one engine into a MAM that you either already have on premises or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party service organizing the engines.

It’s important to remember that none of these AI solutions are accurate all the time. You might get a nudity detection filter, for example, but these vendors rely on probabilistic results. If having one nude image slip through is a huge problem for your company, then machine learning alone isn’t the right solution for you. It’s important to understand whether occasional inaccuracies will be acceptable or deal breakers for your company. Testing samples of your core content against the scenarios you need to solve for becomes another crucial step. And many vendors are happy to test footage in their systems.
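
Here is a sketch of what working with such probabilistic results looks like in practice. The detection records are hypothetical (each engine returns its own schema), but nearly all attach a confidence score to each hit, and the threshold you choose trades false alarms against missed detections:

```python
def filter_detections(detections, threshold):
    """Keep only detections at or above a confidence threshold.

    Hypothetical records: real engines use vendor-specific schemas, but
    virtually all attach a confidence score to each hit.
    """
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"label": "logo", "timecode": "00:01:12", "confidence": 0.97},
    {"label": "logo", "timecode": "00:04:03", "confidence": 0.61},
    {"label": "logo", "timecode": "00:09:45", "confidence": 0.32},
]

# A high threshold means fewer false alarms but more missed hits;
# a low threshold means the reverse.
assert len(filter_detections(detections, 0.9)) == 1
assert len(filter_detections(detections, 0.5)) == 2
```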

Although machine learning is still in its nascent stages, I’m encouraged that clients are interested in using it. At Chesapeake Systems, we have been involved in AI for a long time and have partnerships with many of those companies pushing the technology forward. We have the expertise to help you define your needs, sift through the thousands of solution vendors to find the ones who match those needs, and integrate those solutions into your pipeline to be fully useable.

Machine learning/artificial intelligence isn’t (yet, anyway) a magic “easy” button. But it can still do some magical things, and we’re here to help you break down your needs and create an effective custom solution to suit them.

To learn more about what AI can do for you, contact Chesapeake at prosales@chesa.com


So you think you need an RFP

Over the years, Chesapeake Systems has responded to many RFPs, each with its own unique DNA. As a company that prides itself on being an engaged and enthusiastic partner to our clients, we’ve thought a lot about how best to establish that tone of partnership from the beginning of the relationship, including through the RFP process. We’re sharing our experience here in the hope that it will benefit both prospective issuers and respondents.

We believe there are three critical ideas in establishing the kind of relationship that both parties will want to stay in: collaboration, transparency, and communication.

Collaboration.
A collaborative orientation on the part of both parties is critical to a successful RFP process. The goal of the process is to find someone you want to partner with, not just to stage a rigorous competition. In the most successful RFPs, the issuing organization is as helpful as possible to respondents, because it will result in the best responses. Careful preparation and honest communication pays dividends down the line for both partners.

Share who you are, not just what you know, and expect the same from your respondents. Get acquainted with one another. Make time for more than one respondent to present to you. On a project of the scale that requires an RFP, you’re likely to be in the relationship for a long time. Don’t go in blind––make sure you’re choosing people who can communicate with you and whom you want to work with for the foreseeable future.

Knock down the walls. Sometimes RFPs read as if they’ve been written with the intention of keeping the relationship as sterile as possible. Communication becomes stifled in pursuit of impartiality, or its appearance––and while impartiality is a worthy goal, problems are not solved by withholding information. Ultimately, the success of the RFP process, like the eventual project work, will be determined by the combined efforts of all parties participating.

Remember, the tone of your relationship is set by the tone of your selection process.

Transparency.
Be honest about where you stand in your process. If you’re not ready to do a procurement, or are already narrowing in on your vendor, or if you don’t have executive support and budget approval, consider whether the time is right to issue a formal RFP. Prospective vendors are happy to respond to a less formal RFI (Request for Information) or sit down to talk about the potential project without a formal process. Those processes can naturally evolve into a complete, focused, well-reasoned RFP when the time is right.

Communication.
Be clear in your approach to the RFP. Articulate the problem and use the RFP platform to outline the issues. Your mastery of the problems and their nuances in the RFP gives top-tier respondents the opportunity to dig in while affording them the opportunity to offer their own perspectives and solutions.

Provide as much relevant information as humanly possible in the RFP. If you know something, say it; if you don’t know it yet, say that. Regardless of whether a third-party firm is involved in drafting the RFP, be sure to gather input from everyone who would come into contact with the system you’re bidding out and make sure all of that input makes it into the document.

Consider reserving the longest chunk of your RFP timeline for after you have answered the respondents’ questions––that’s where the work really begins, because the full scope and specifics of the project have been conveyed and are more likely to be fully understood by the respondents.

In addition to resulting in robust, detailed responses that you can actually use, evidence that you’ve carefully thought the project through attracts responses from strong contenders whom you would eventually want to work with. No desirable vendor wants to put hundreds of hours of effort into an RFP process without some assurance the issuer is both clear on what they’re doing and candid in communicating it.

Once the draft RFP feels complete, and before you distribute, read through the entirety from the respondent’s perspective. Ask yourself what you would need to know and what would help you provide the best possible response. Is the document designed to get you what you’re looking for?

Taking a step back to include all of these steps may feel like doubling the work to issue an RFP. However, putting in the effort on the front end will mean a smarter, faster evaluation process, because the responses will really get at the heart of the project and address your specific needs. Furthermore, a well-run RFP process yields one other valuable benefit: you will understand your organization, the problem, and the industry far better than when you began.


A Year of Growth and Change

2017 was a big year for Chesapeake Systems, as it was for the industry at large.

We’ve been charting our path through the expansion of public, private and hybrid cloud services alongside many of you, and we are thrilled to announce our certification as a Consulting Partner for Amazon Web Services (AWS). This qualification means we are “Amazon approved” in our expert guidance to customers in designing, architecting, building, migrating, and managing their workloads and applications on AWS.

We are also excited about new roles at the company. Mark Dent, Chesapeake’s co-founder and owner, has shepherded the company through every twist and turn of the past 22 years. He has now stepped into the CFO role. His dedication to our field remains steadfast, including his unwavering commitment to guaranteeing the company’s stellar reputation for service. And after 10 years fulfilling duties at Chesapeake from sales and engineering to project management and professional services, it was an honor for me to take the reins as CEO in April. I’m grateful for the opportunity, and thrilled to work with Mark to continue to position Chesapeake as the preeminent media technology and workflow solutions architects in the industry.

Furthermore, in response to our growing media and entertainment client base on the West Coast, we have expanded our footprint and support offerings with the addition of Sarah Shechner and Drew Hall in the Los Angeles area. Sarah is thrilled to be strengthening our connections to the tech community and providing account management expertise at a regional level. And as a Senior Systems Engineer, Drew brings over 15 years of video-centric data storage expertise to his role. We are excited to offer this additional level of service to our clients in the West.

Chesapeake’s ongoing participation with important industry organizations that drive progress in media and technology continues to flourish. One of the year’s highlights for Nick was serving as conference chair of the Association of Moving Image Archivists’ (AMIA) Digital Asset Symposium in May, where experts in the community shared their knowledge and experiences across a cross-section of disciplines. He also co-programmed Bits by the Bay for the Society of Motion Picture and Television Engineers (SMPTE) Washington DC section, and spoke on a panel at the UCLA MEMES Big Data Conference, presented by the Anderson School of Management. Nick renewed our relationships with many of the leading-edge thinkers in our industry and came away with new perspectives to inform the work we do with our clients.

As we reflect on the close of the year, we are reminded of our good fortune to be working with the best of the best. Our clients stretch us, challenge us, and expect no less from us than we do from ourselves. It is a pleasure and a privilege to be working with you, and we look forward to what 2018 will bring. Stay tuned for more in the new year!

Happy Holidays from all of us at Chesapeake Systems.


DAS 2017 Highlight: Video is the Language of the 21st Century

On May 5, 2017, the Association of Moving Image Archivists (AMIA) hosted their annual Digital Asset Symposium (DAS) at the Museum of Modern Art in New York City. This event brought together all aspects of the industry and covered a variety of Media Asset Management topics.

Attendees were encouraged to ask questions and leverage the community around them. To facilitate further conversation, a reception was held afterward at Viacom’s White Box.

During the welcome, Nick Gold, Chief Revenue Officer and Solutions Consultant of Chesapeake Systems and Program Chair of the 2017 DAS, stated: “Video is the language of the 21st century.” This spoke to the underlying theme of the event: the need not only to capture this critical point in history, but to preserve it and pass it on to future generations.


If you would like to revisit any or all of the sessions that were held, videos are posted on the DAS site.


NAB is Nigh

The Desert Beckons!
Yes, it’s that time of the year, when many in our sphere converge on the illusory land of Las Vegas for that annual celebration of all things video technology, the NAB Show (April 24 – 27, 2017). As always, the Chesapeake Systems gang will be in attendance, bouncing around the convention center and city at large through all hours of the day (and often well into the night), so we can keep our finger on the pulse of our industry.

NAB can be maddening in its scope. There is never enough time over the course of the five days we spend in Nevada each year to see and experience everything the show has to offer. We use our time there as best we can, however: our team attends dozens of meetings and other events so we can stay in sync with our clientele, as well as our current vendor partners.

One of the other important aspects of attending NAB is, of course, to engage with vendors we do not currently work with, but whose exciting technologies might be useful additions to our bag of tricks, that is to say, our portfolio of solutions that we can apply to the technology and workflow challenges we face every day across our client base.

Areas of Focus for Us?
Obviously Media Asset Management and associated technologies, which have largely become our hallmark as a consultancy and integration firm. There are always new players in the MAM space, and it is our goal to be as familiar with as many as possible, as deeply as possible. Each platform and associated developer has its strengths, as well as “areas that could use improvement.” It’s critical for us at CHESA to know these ins and outs, because sometimes subtle functionalities (or lack thereof) can make or break a successful implementation.

Storage technologies, as always, are a foundational part of our catalog, and there is much activity in this space as well. Production SAN and NAS shared storage systems are important to our clients, but increasingly, folks are investing in longer-term archival data repositories. In our world, though, archives must be “active archives,” making it trivially easy to recall a snippet of video or other media for a project, no matter what tier of storage it may be on. The choices here are as expansive as ever. We’ve always used data tape and will for some time, but other options have emerged that are worthy of exploration, such as private “object storage” systems, which typically need to be addressed via API calls and do not present a mountable file system to browse through, like a local drive, SAN, or NAS volume. Another option on more organizations’ radars than ever before is public cloud storage, such as Amazon S3 or Microsoft Azure. Like private object stores, these cloud options almost always require some type of software platform to “put” files into them or “get” files out (these being two of the most common API, or “Application Programming Interface,” commands for addressing object storage systems).
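To make that “put”/“get” model concrete, here is a toy, in-memory sketch in Python of how object storage is typically addressed: a flat namespace of keys, each mapping to a blob plus arbitrary key-value metadata, with no mountable directory tree. The `ObjectStore` class, key, and metadata fields below are all illustrative inventions, not any vendor’s actual SDK.

```python
# Toy in-memory model of an object store. Keys may look like paths,
# but the namespace is flat: there are no directories to browse.
# Real systems (e.g. Amazon S3) expose the same put/get verbs over
# HTTP via vendor SDKs rather than a mountable file system.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data bytes, metadata dict)

    def put(self, key, data, metadata=None):
        """Store a blob under a key, along with arbitrary key-value metadata."""
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        """Retrieve the blob and its metadata by key."""
        data, metadata = self._objects[key]
        return data, metadata


store = ObjectStore()
store.put(
    "projects/promo/clip-042.mov",          # a key, not a file path
    b"\x00\x01...",                         # stand-in for the media payload
    metadata={"codec": "ProRes", "fps": "29.97"},
)
data, meta = store.get("projects/promo/clip-042.mov")
print(meta["codec"])  # ProRes
```

The point of the sketch is the shape of the interface: applications address objects by key through API calls, and any organizational scheme lives in the metadata rather than in a directory hierarchy.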

And Then All of the Other Stuff:
Transcoding systems, workflow automation platforms, client-side creative applications from Adobe and others. Let’s not forget the fun stuff: 360-degree video camera rigs, airborne drones, maybe finally airborne drones equipped with 360-degree video cameras? A man can dream.

If you’re going to be out in Las Vegas for NAB, don’t be a stranger! It’s always fun to see friends and colleagues (a thankfully almost totally overlapping Venn diagram) out in the land of make-believe. Feel free to drop us a line ahead of the show, as we’re always happy to meet up and share our show-floor experiences. If you are not attending NAB, but there’s something you’ve got your eyes open for, let us know, and we’ll do what digging we can on your behalf while we’re out there.


Managing Equipment Lifecycle

The Lifecycle
IT equipment might not be alive in the sense you and I are, but that does not mean it does not have a lifecycle. This is a very important concept to embrace as you build out your technical infrastructure. Networking equipment, servers running various types of services, RAIDs and other devices that make up your storage environment, and the rest of your gear are typically large investments and form the technology and workflow backbone of your operation. It is especially important to know what type of lifespan to expect out of these things.

Let’s look at a RAID, for example. These mass storage devices are marvels of our modern technological age. With today’s RAID systems, we can cram half a petabyte (500 terabytes) of hard drive-based storage into only 4 rack units of space! That’s 56 or so hard drives spinning away furiously around the clock, each at perhaps 7,200 or 10,000 revolutions per minute (RPM) and each storing perhaps 8TB of data, nearly 200,000 times the capacity of my first 42 megabyte hard drive in 1993. The miniaturized electronics in a hard drive employ nano-scale technologies. Every bit of data is microscopically small, and yet so integral to you, the user. If a single bit in a critical file gets corrupted, it can mean a massive problem; the file might not even open properly any more.
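The capacity arithmetic in that example is easy to sanity-check in a few lines (using decimal units, where 1 TB = 10^12 bytes):

```python
# Back-of-the-envelope capacity comparison, in decimal units.
drive_1993_mb = 42        # a 42 MB hard drive from 1993
drive_today_tb = 8        # a modern 8 TB drive

drive_1993_bytes = drive_1993_mb * 10**6
drive_today_bytes = drive_today_tb * 10**12

ratio = drive_today_bytes / drive_1993_bytes
print(round(ratio))       # 190476 -> roughly 200,000x the capacity

# The RAID chassis from the example: 56 drives x 8 TB each.
chassis_tb = 56 * drive_today_tb
print(chassis_tb)         # 448 TB, i.e. roughly half a petabyte in 4U
```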

Nothing Lasts Forever
Does this sound like the type of thing that would last forever? Just think about the levels of complexity involved in each subsystem of the RAID, the “hacking” of the laws of physics on display to cram so much information into one of these things. Fast speeds, infinitesimally small parts, and no real margin for error. Nothing about any of that suggests something that is going to last forever. I can imagine getting good use out of the coffee table my cousin made for me for “forever,” in a sense that matters to me. But a RAID? If one little capacitor in the RAID controller goes, that controller is not going to work any more. The whole point of RAIDs is that they use drives redundantly, so that when (not if) drives fail, you can replace them and not lose data. The failure of subcomponents is designed into the product, with many RAIDs offering redundant controllers, power supplies, and all main system subcomponents other than the chassis itself.

You might think, “Well, I will just replace failed pieces of the RAID as they keep dying, and I will be fine.” This sounds great, except that in six years there is a good chance nobody will manufacture 8TB drives any longer (yes, hard drives are going to keep getting more storage-dense for a while). Your RAID manufacturer may also stop manufacturing replacement RAID controllers after a number of years, and will probably run out of stock at some point. Another question is:

“Why would you want to take up a whole 4U of rack space on a measly 448 terabytes in the year 2025 when, maybe, you will be able to store 10 times as much in the same space using the state of the art?”

That may sound ridiculous, but in 2025, virtual reality (VR) and 360 video will probably feel more “real” than they do today, have much wider mainstream adoption, and you will likely be producing content for them. (Hint: Shooting video in many directions at once takes up a lot more space than a single direction! And imagine if each of those sensors is itself 8K or 16K.)

There are many factors that drive equipment lifecycle in any given use case: the needs of the environment evolving, equipment aging beyond a supportable life, and various combinations and permutations. Many of our clients’ environments have dozens of such pieces of infrastructure that need to be thought of as having lifecycles so that proper planning and budgeting can be done. Each individual piece of infrastructure is part of a larger, more complex “workflow” that needs to keep humming along. It is a lot to keep up with!

How to Keep Up
Many of Chesapeake Systems’ (CHESA) customers are under what CHESA calls “MSAs,” or “Maintenance & Support Agreements.” These allow us to provide ongoing proactive maintenance and reactive support services, sometimes for an entire operation’s technical infrastructure. They also serve to augment manufacturer-level support agreements, as well as any on-premises staff who may be charged with keeping those operations going.

Over the coming months, CHESA will begin rolling out a Technology Lifecycle Guide for our MSA customers. We will include all the key infrastructure that falls under our MSA and plan out a five-year map of how viable all of that equipment will be under average circumstances, based on a host of criteria. This gives us a template for working with your team and generating budgetary quotes that can be slotted in for equipment that is getting closer to “aging out.” This, in turn, allows our clientele to have years of advance notice as to when they may need to replace things, what the priorities are, how one equipment change may impact another piece of the infrastructure, etc. We think this will be a very useful part of our service, and we look forward to getting these in front of our customers to get feedback and improve the information we present.

We will put some energy into the layout, but imagine a list of all your key gear, a five-year map that will be updated over the coming years, and color-coded boxes that indicate when a piece of equipment is “safe,” when it is starting to show its age, and when it is no longer easily supportable. We think this will be a very handy “living document” for our MSA customers, so stay tuned!


Solar Panels Help Chesapeake Systems See the Bright Side of Renewable Energy

Chesapeake Systems is a company that thrives on cutting edge technology, strong customer service and modern workflows. As we head into 2017, we are also a company that’s looking to the sun.

The Chesapeake Systems workplace is already unique. Our offices are located in a former Methodist Church in Baltimore’s Hampden neighborhood. When the church was destroyed by fire in 2008, we jumped in and adapted the building for modern use. With the interior renovation long since complete, our eyes turned to the outside of the structure, which has an expansive, south-facing roof, making it perfectly positioned to reap the benefits of solar panels.

Combine that with the fact that we could reduce our environmental footprint, gain more control of our energy usage, and take advantage of substantial tax credits, and the decision to invest in solar panels became both pragmatic and exciting.

The Logistics
Solar panels work by using photovoltaic cells to capture the sun’s energy, which is transformed into Direct Current (DC). From there, an inverter converts the DC to Alternating Current (AC). Having the panels installed correctly was important to us, which is why we chose to work with a Maryland-based commercial solar installer, Pfister Energy. Fortunately, I had an established connection with the president, William Cole, who also owns a roofing company. As our expansive (and expensive) roof was installed as recently as 2010, it was important to me to work with a reputable company whose industry expertise I trusted. An added benefit was having the roof renderings readily available, which helped Pfister Energy with the installation process.

Purchasing and installing solar panels is not a quick process. From vetting the right company to securing the appropriate permits to the actual installation of some 80 solar panels, the project took over six months.

There were a few stumbling blocks along the way, but overall the process was extremely smooth, and I can’t say enough about Pfister and what a great job they did. In addition to professionally and discreetly installed solar panels (they are not visible to those driving down the main thoroughfare in Hampden), we can expect our gas and electric bill, currently about $1,800 a month, to decrease by about $5,000 per year.

While the numbers speak for themselves about why this was a worthwhile investment, I really think this reflects the Chesapeake Systems’ philosophy of ownership and taking control. Many of our competitors rely on subcontractors, but at Chesapeake Systems we strive to hire and train our own workforce. This empowers us to control the cost, quality and results as much as possible – in the same way that owning solar panels enables us to control our energy usage as much as we can.

For any company considering going the solar route, I would recommend the following tips:
Start the process as early as possible, and allow yourself as long as six months to complete it; you only receive the tax credits for the years the panels are placed into service, and things like weather and the permit process must be factored in.
Your power may be down for a brief period during the installation, and the process will generate quite a bit of noise as workers bolt in equipment.
Doing an outright purchase, as opposed to renting, means substantial out-of-pocket costs, but you may see an ROI in roughly six tax years, depending on your roof size and the size of your installation.
Consider the lifespan of your roof and time it correctly as you begin this project. You don’t want to install solar panels with a lifespan of 25 years if your roof only has 10 viable years remaining.
In Maryland, businesses can benefit from SREC (Solar Renewable Energy Credit) revenue. State law requires power companies to purchase a certain percentage of energy from alternative sources (from either other companies or individuals who create renewable energy), so for every 1,000 kilowatt hours of clean, renewable, solar energy, one credit can be sold.
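The SREC arithmetic works out to one credit per 1,000 kWh generated. As a quick illustration (the annual generation figure below is hypothetical, chosen only for the example):

```python
# SREC arithmetic: one credit per 1,000 kWh of solar generation.
# The generation figure is hypothetical, for illustration only.
annual_generation_kwh = 100_000
KWH_PER_SREC = 1_000

srecs_earned = annual_generation_kwh // KWH_PER_SREC
print(srecs_earned)  # 100 credits available to sell
```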
In Conclusion
Seeing the meters spin less will be exciting. I like the idea of not being so reliant on outside sources, and I think the real excitement will come when I see a gas and electric bill that is $5,000 a year cheaper. But overall, being in control of my destiny is the coolest part of the whole thing.

Have questions about our solar project or want to learn more about what Chesapeake Systems does? Give me a call at 410-752-7729.


Clients – Here’s How to Make Your Project a Success

It’s been said the job of a project manager is like riding a bike. Except the bike is on fire. You’re on fire. Everything is on fire.

As someone who has completed more than 100 projects for Chesapeake Systems, I can certainly appreciate the challenges that come with the role of project manager. I also know that there are actions our clients, and we, can take that will set everyone up for success.

Here are three ways a client can help ensure the most efficient, effective project from Chesapeake Systems.

1. Be invested in the project both financially and emotionally.

A client that is really invested emotionally in what they’re about to buy, whether it’s $200 or $200K or $1 million, really sets themselves and Chesapeake Systems up for success. One of our clients, a media company, is simultaneously moving to a new facility and significantly upgrading its existing storage infrastructure. During the course of this company’s new facility buildout, they came to us and said that in addition to their electrical, plumbing and HVAC contractors, they also consider Chesapeake Systems as one of their primary contractors for the project. We were involved from the beginning in terms of facility planning. We had the opportunity to say, you’ll need this amount of electrical power, you’ll need this kind of cabling and you’ll require this amount of rack space. The fact that the client was willing to make that investment in involving us in project planning guaranteed as smooth a transition as possible and helped set them up for future success.

2. Know what you’re buying – and want it.

A lot of times clients will come to us looking for an asset management system or shared storage, and we will educate them and present them with multiple options, so they can see the benefits of various approaches. What we love to see from the organizations we work with, and what can help ensure success, is for them to want to understand how the product works.

We want our clients to:

Ask questions

Be available for demos and in-depth conversations

Make sure all key stakeholders are involved in the discussions, including managers and end users

Understand how the new technology will interact with their environment and impact their work style (How will it integrate with their existing infrastructure and workflow?)

There are several reasons why we desire this, but one of the main ones is that once we walk away, the client will be responsible for this new system in their environment. We’ll show up, we’ll install it, and we’ll train them on it, but they’re responsible for the day-to-day interaction with this new product that they purchased. We want the client to have an emotional investment in what they’re buying and to appreciate it. After all, these systems will serve as the cornerstone of many aspects of their operation, and they will rely on them to help fulfill their responsibilities and do their job.

3. Be prepared.

Whether it’s making sure all the stakeholders have fully bought into the project before signing the contract, ensuring the right people in the company prioritize and are available for key project milestones, or simply ensuring all the items on our prerequisite list have been checked off before the project begins, being prepared makes all the difference.

It’s important to keep in mind that the project begins long before the customer signs on the dotted line. It begins with our first conversation.

Because the projects we’re involved with rely on our professional services team for their implementation, the quotes that we generate include labor line items that inform the project’s scope of work. Our SOW outlines who you are as a client, what systems you currently have in place, what we’re selling and installing for you, and how it’s going to integrate into your environment. Also included in that scope of work is a list of prerequisites that we expect the client to have performed ahead of the project, such as making sure adequate power and rack space for new equipment are available and logistics have been handled that will allow our staff timely access to facilities. We expect that our customers’ signature on this SOW document indicates they are aware of and have followed through on the prerequisites and have asked any pertinent questions to ensure the project runs smoothly.

The more our customers invest in those pre-deployment phases, the more successful the project will be.

For every project, Chesapeake Systems really strives to understand what the needs are and to talk to the end users. If you don’t know what questions to ask, tell us. We’re professionals. You came to us for a reason – utilize us. If you’re not sure about what you’re buying, ask for a demo.

Another reason a successful project is beneficial is that, oftentimes, our relationship with our customers extends beyond the project as we segue into supporting the post deployment environment. The smoother the project runs, the more seamless this transition is to the day-to-day usage.

At Chesapeake Systems, we love nothing more than to educate and inform and empower our clients. Are you interested in speaking to another organization that was facing a similar challenge or made a similar transition? Ask us, we’ll put you in touch with them!