Artificial Intelligence: Should You Take the Leap?

In Hollywood, the promise of artificial intelligence is all the rage: who wouldn’t want a technology that promises instant solutions to tedious, time-intensive problems? With artificial intelligence, anyone with abundant rich media assets can easily churn out more revenue or cut costs while simplifying operations … or so we’re told. If you’ve been to NAB or CES or any number of conferences, you’ve heard the pitch: it’s an “easy” button that’s simple to add to the workflow and foolproof to operate, turning massive amounts of uncategorized footage into metadata.

But should you take the leap? Before you sign on the dotted line, let’s take a closer look at the technology behind AI and what it can – and can’t – do for you.

First, it’s important to understand the bigger picture of artificial intelligence in today’s marketplace. Taking unstructured data and generating relevant metadata from it is something that other industries have been doing for some time. In fact, many of the tools we embrace today started off in other industries. But unlike banking, finance or healthcare, our industry prioritizes creativity, which is why we have always shied away from tools that automate. The idea that we can rely on the same technology as a hedge fund manager just doesn’t sit well with many people in our industry, and for good reason.

In the media and entertainment industry, we’re looking for various types of metadata that could include a transcript of spoken word, important events within a period of time, or information about the production (e.g., people, location, props), and there’s no single machine-learning algorithm that will solve for all these types of metadata parameters. For that reason, the best starting point is to define your problems and identify which machine-learning tools may be able to solve them. Expecting to parse reams of untagged, uncategorized, and unstructured media data is unrealistic until you know what you’re looking for.

AI has become pretty good at solving some specific problems for our industry. Speech-to-text is one of them. A generally accurate automated transcription saves real time compared with manual logging. However, it’s important to note that AI tools still have limitations. Sentiment analysis, for example, could theoretically identify the emotional undertones in spoken word, but it first requires another tool to generate a transcript to analyze. And no matter how good the algorithms are, they won’t give you the qualitative data that a human observer would provide, such as the emotions expressed through body language. They won’t tell you the facial expressions of the people being spoken to, the tone, pacing, and volume of the speaker, or what is conveyed by a sarcastic tone or a wry expression. There are sentiment analysis engines that try to do this, but breaking the problem down into its components ensures the parameters you actually need will be addressed.
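To make that dependency concrete, here is a minimal, hypothetical Python sketch of the chaining: the transcribe() stub stands in for whichever speech-to-text engine you license, and the toy word-list scorer is nobody’s real sentiment model.

```python
# Sentiment analysis can only run on a transcript, so speech-to-text must
# come first. transcribe() is a stub for a licensed engine; the word lists
# are toy examples, not a real sentiment model.

POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"terrible", "hate", "awful"}

def transcribe(audio_path: str) -> str:
    # Stub: a vendor's speech-to-text API would be called here.
    return "I love the new cut, but the color grade is terrible."

def score_sentiment(text: str) -> float:
    # Toy scorer: (positive - negative) / total sentiment words found.
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

transcript = transcribe("interview_take3.wav")
print(score_sentiment(transcript))  # 0.0: one positive and one negative cue
```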

Another task at which machine learning has progressed significantly is logo recognition. Certain engines are good at finding, for example, all the images with a Coke logo in 10,000 hours of video. That’s impressive and can be quite useful. But it’s another story if you want to find footage that shows two people drinking what are clearly Coke-shaped bottles with the logo obscured.

That’s because machine-learning engines tend to have a narrow focus, which goes back to the need to define very specifically what you hope to get from them. There is a bevy of algorithms and engines out there. If you license a service that finds a specific logo, you haven’t also solved the problem of finding objects that merely represent the product. And even with the right engine, you’ve got to think about how its output fits into your pipeline; there are a lot of workflow questions to be explored.

Let’s say you’ve generated speech-to-text from your audio media. Have you figured out how someone can search the results? There are several options. Some vendors have their own front end for searching. Others offer an export from the engine into a MAM that you either already have on premises or plan to purchase. There are also vendors that don’t provide machine learning themselves but act as a third-party layer organizing the various engines.
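As a hedged illustration of that search question, here is a bare-bones sketch of timed transcript segments and a naive term search; a vendor front end or a MAM would index this far more robustly.

```python
# Speech-to-text engines typically return timed segments. Holding them as
# (start-time, text) pairs makes even a naive search return timecodes an
# editor can jump to. Sample data is invented.
from dataclasses import dataclass

@dataclass
class Segment:
    start_seconds: float
    text: str

def search_transcript(segments: list[Segment], term: str) -> list[float]:
    """Return the start times of every segment mentioning the term."""
    term = term.lower()
    return [s.start_seconds for s in segments if term in s.text.lower()]

segments = [
    Segment(12.0, "Welcome back to the broadcast."),
    Segment(47.5, "Our guest tonight directed the documentary."),
]
print(search_transcript(segments, "documentary"))  # -> [47.5]
```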

It’s important to remember that none of these AI solutions is accurate all the time. You might license a nudity detection filter, for example, but these tools deliver probabilistic results. If a single nude image slipping through is a huge problem for your company, then machine learning alone isn’t the right solution for you. Understand up front whether occasional inaccuracies will be acceptable or deal breakers. Testing samples of your core content against the specific scenarios you need to solve becomes another crucial step, and many vendors are happy to run test footage through their systems.
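Here is a small illustrative sketch of what probabilistic results mean in practice; the detections are invented sample data, and the threshold is the knob that trades missed detections against false alarms.

```python
# Every detection carries a confidence score. Where you set the threshold
# decides what slips through -- exactly the risk described above if a
# single miss is a deal breaker. Sample data is invented.

detections = [
    {"frame": 1042, "label": "nudity", "confidence": 0.97},
    {"frame": 2310, "label": "nudity", "confidence": 0.62},
    {"frame": 5877, "label": "nudity", "confidence": 0.31},
]

def flag_for_review(detections, threshold=0.5):
    # Anything under the threshold is silently passed.
    return [d for d in detections if d["confidence"] >= threshold]

print(flag_for_review(detections))       # frames 1042 and 2310
print(flag_for_review(detections, 0.9))  # only frame 1042
```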

Although machine learning is still in its nascent stages, I’m encouraged that clients are interested in using it. At Chesapeake Systems, we have been involved in AI for a long time and have partnerships with many of those companies pushing the technology forward. We have the expertise to help you define your needs, sift through the thousands of solution vendors to find the ones who match those needs, and integrate those solutions into your pipeline to be fully useable.

Machine learning/artificial intelligence isn’t (yet, anyway) a magic “easy” button. But it can still do some magical things, and we’re here to help you break down your needs and build an effective solution customized to them.

To learn more about what AI can do for you, contact Chesapeake at prosales@chesa.com.

So you think you need an RFP

Over the years, Chesapeake Systems has responded to many RFPs, each with its own unique DNA. As a company that prides itself on being an engaged and enthusiastic partner to our clients, we’ve thought a lot about how best to establish that tone of partnership from the beginning of the relationship, including through the RFP process. We’re sharing our experience here in the hope that it will benefit both prospective issuers and respondents.

We believe there are three critical ideas in establishing the kind of relationship that both parties will want to stay in: collaboration, transparency, and communication.

Collaboration.
A collaborative orientation on the part of both parties is critical to a successful RFP process. The goal of the process is to find someone you want to partner with, not just to stage a rigorous competition. In the most successful RFPs, the issuing organization is as helpful as possible to respondents, because that helpfulness yields the best responses. Careful preparation and honest communication pay dividends down the line for both partners.

Share who you are, not just what you know, and expect the same from your respondents. Get acquainted with one another. Make time for more than one respondent to present to you. On a project of the scale that requires an RFP, you’re likely to be in the relationship for a long time. Don’t go in blind: make sure you’re choosing people who can communicate with you and whom you want to work with for the foreseeable future.

Knock down the walls. Sometimes RFPs read as if they’ve been written with the intention of keeping the relationship as sterile as possible. Communication becomes stifled in pursuit of impartiality, or its appearance––and while impartiality is a worthy goal, problems are not solved by withholding information. Ultimately, the success of the RFP process, like the eventual project work, will be determined by the combined efforts of all parties participating.

Remember, the tone of your relationship is set by the tone of your selection process.

Transparency.
Be honest about where you stand in your process. If you’re not ready to do a procurement, or are already narrowing in on your vendor, or if you don’t have executive support and budget approval, consider whether the time is right to issue a formal RFP. Prospective vendors are happy to respond to a less formal RFI (Request for Information) or sit down to talk about the potential project without a formal process. Those processes can naturally evolve into a complete, focused, well-reasoned RFP when the time is right.

Communication.
Be clear in your approach to the RFP. Articulate the problem and use the RFP platform to outline the issues. Demonstrating your mastery of the problems and their nuances gives top-tier respondents the opportunity to dig in, while affording them room to offer their own perspectives and solutions.

Provide as much relevant information as humanly possible in the RFP. If you know something, say it; if you don’t know it yet, say that. Regardless of whether a third-party firm is involved in drafting the RFP, be sure to gather input from everyone who would come into contact with the system you’re bidding out and make sure all of that input makes it into the document.

Consider reserving the longest chunk of your RFP timeline for after you have answered the respondents’ questions––that’s where the work really begins, because the full scope and specifics of the project have been conveyed and are more likely to be fully understood by the respondents.

In addition to resulting in robust, detailed responses that you can actually use, evidence that you’ve carefully thought the project through attracts responses from strong contenders whom you would eventually want to work with. No desirable vendor wants to put hundreds of hours of effort into an RFP process without some assurance the issuer is both clear on what they’re doing and candid in communicating it.

Once the draft RFP feels complete, and before you distribute, read through the entirety from the respondent’s perspective. Ask yourself what you would need to know and what would help you provide the best possible response. Is the document designed to get you what you’re looking for?

Taking a step back to include all of these steps may feel like doubling the work to issue an RFP. However, putting in the effort on the front end will mean a smarter, faster evaluation process, because the responses will really get at the heart of the project and address your specific needs. Furthermore, a well-run RFP process yields one other valuable benefit: you will understand your organization, the problem, and the industry far better than when you began.

A Year of Growth and Change

2017 was a big year for Chesapeake Systems, as it was for the industry at large.

We’ve been charting our path through the expansion of public, private and hybrid cloud services alongside many of you, and we are thrilled to announce our certification as a Consulting Partner for Amazon Web Services (AWS). This qualification means we are “Amazon approved” to provide expert guidance to customers in designing, architecting, building, migrating, and managing their workloads and applications on AWS.

We are also excited about new roles at the company. Mark Dent, Chesapeake’s co-founder and owner, has shepherded the company through every twist and turn of the past 22 years. He has now stepped into the CFO role. His dedication to our field remains steadfast, including his unwavering commitment to guaranteeing the company’s stellar reputation for service. And after 10 years fulfilling duties at Chesapeake from sales and engineering to project management and professional services, it was an honor for me to take the reins as CEO in April. I’m grateful for the opportunity, and thrilled to work with Mark to continue to position Chesapeake as the preeminent media technology and workflow solutions architects in the industry.

Furthermore, in response to our growing media and entertainment client base on the West Coast, we have expanded our footprint and support offerings with the addition of Sarah Shechner and Drew Hall in the Los Angeles area. Sarah is thrilled to be strengthening our connections to the tech community and providing account management expertise at a regional level. And as a Senior Systems Engineer, Drew brings over 15 years of video-centric data storage expertise to his role. We are excited to offer this additional level of service to our clients in the West.

Chesapeake’s participation in the industry organizations that drive progress in media and technology continues to flourish. One of the year’s highlights for Nick Gold was serving as conference chair of the Association of Moving Image Archivists’ (AMIA) Digital Asset Symposium in May, where experts in the community shared their knowledge and experiences across a cross-section of disciplines. He also co-programmed Bits by the Bay for the Society of Motion Picture and Television Engineers (SMPTE) Washington, DC section, and spoke on a panel at the UCLA MEMES Big Data Conference, presented by the Anderson School of Management. Nick renewed our relationships with many of the leading-edge thinkers in our industry and came away with new perspectives to inform the work we do with our clients.

As we reflect on the close of the year, we are reminded of our good fortune to be working with the best of the best. Our clients stretch us, challenge us, and expect no less from us than we do from ourselves. It is a pleasure and a privilege to be working with you, and we look forward to what 2018 will bring. Stay tuned for more in the new year!

Happy Holidays from all of us at Chesapeake Systems.

DAS 2017 Highlight: Video is the Language of the 21st Century

On May 5, 2017, the Association of Moving Image Archivists (AMIA) hosted their annual Digital Asset Symposium (DAS) at the Museum of Modern Art in New York City. This event brought together all aspects of the industry and covered a variety of Media Asset Management topics.

Attendees were encouraged to ask questions and leverage the community around them. To facilitate further conversation, a reception was held afterward at Viacom’s White Box.

During the welcome, Nick Gold, Chief Revenue Officer and Solutions Consultant of Chesapeake Systems and Program Chair of the 2017 DAS, stated: “Video is the language of the 21st century.” This spoke to the underlying theme of the event: the need not only to capture this critical point in history but to preserve it and pass it on to future generations.

If you would like to revisit any or all of the sessions that were held, videos are posted on the DAS site.

NAB is Nigh

The Desert Beckons!
Yes, it’s that time of the year, when many in our sphere converge on the illusory land of Las Vegas for that annual celebration of all things video technology, the NAB Show (April 24 – 27, 2017). As always, the Chesapeake Systems gang will be in attendance, bouncing around the convention center and city at large through all hours of the day (and often well into the night), so we can keep our finger on the pulse of our industry.

NAB can be maddening in its scope. There is never enough time over the course of the five days we spend in Nevada each year to see and experience everything the show has to offer. We use our time there as best we can, however. Our team joins dozens of meetings and other events, so we can stay in sync with our clientele, as well as our current vendor partners.

One of the other important aspects of attending NAB is, of course, to engage with vendors we do not currently work with, but whose exciting technologies might be useful additions to our bag of tricks, that is to say, our portfolio of solutions that we can apply to the technology and workflow challenges we face every day across our client base.

Areas of Focus for Us?
Obviously Media Asset Management and associated technologies, which have largely become our hallmark as a consultancy and integration firm. There are always new players in the MAM space, and it is our goal to be as familiar with as many as possible, as deeply as possible. Each platform and associated developer has its strengths, as well as “areas that could use improvement.” It’s critical for us at CHESA to know these ins and outs, because sometimes subtle functionalities (or lack thereof) can make or break a successful implementation.

Storage technologies as always are a foundational part of our catalog, and there is much activity in this space as well. Production SAN and NAS shared storage systems are important to our clients, but increasingly, folks are investing in longer-term archival data repositories. In our world, though, archives must be “active archives,” making it trivially easy to recall a snippet of video or other media for a project, no matter what tier of storage it may be on. The choices here are as expansive as ever. We’ve always used data tape and will for some time, but other options have emerged that are worthy of exploration, such as private “object storage” systems (which typically need to be addressed via API calls, and do not present a mountable file system to browse through, like a local drive, SAN or NAS volume). Another option on more organizations’ radars than ever before is public cloud storage, such as Amazon S3 or Microsoft Azure. Like private object stores, these cloud options almost always require some type of software platform to “put” files into them or “get” files out (“put” and “get” being two of the most common API, or Application Programming Interface, commands for addressing object storage systems).
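For the curious, here is a minimal sketch of what those “put” and “get” calls look like against Amazon S3 using the boto3 SDK; the bucket name and file paths are placeholder assumptions, and in practice a MAM or archive platform usually issues these calls on your behalf.

```python
# Minimal "put" and "get" against Amazon S3 via boto3. Bucket name and
# keys are placeholders; credentials come from your AWS configuration.
import boto3

s3 = boto3.client("s3")

# "put": copy a finished master into object storage
s3.upload_file("master_v2.mov", "example-archive-bucket",
               "projects/master_v2.mov")

# "get": pull it back to a real file system before any application can use it
s3.download_file("example-archive-bucket", "projects/master_v2.mov",
                 "restored_master_v2.mov")
```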

And Then All of the Other Stuff:
Transcoding systems, workflow automation platforms, client-side creative applications from Adobe and others. Let’s not forget the fun stuff: 360-degree video camera rigs, airborne drones, maybe finally airborne drones equipped with 360-degree video cameras? A man can dream.

If you’re going to be out in Las Vegas for NAB, don’t be a stranger! It’s always fun to see friends and colleagues (a thankfully almost totally overlapping Venn diagram) out in the land of make-believe. Feel free to drop us a line ahead of the show, as we’re always happy to meet up and share our show-floor experiences. If you are not attending NAB, but there’s something you’ve got your eyes open for, let us know, and we’ll do what digging we can on your behalf while we’re out there.

Managing Equipment Lifecycle

The Lifecycle
IT equipment might not be alive in the sense you and I are, but that does not mean it does not have a lifecycle. This is an important concept to embrace as you build out your technical infrastructure. Networking equipment, servers running various services, the RAIDs and other devices that make up your storage environment, and the rest of your gear are typically large investments, and they form the technology and workflow backbone of your operation. It is especially important to know what lifespan to expect from these things.

Let’s look at a RAID, for example. These mass storage devices are marvels of our modern technological age. With today’s RAID systems, we can cram nearly half a petabyte (448 terabytes) of hard drive-based storage into only 4 rack units of space! That’s 56 or so hard drives spinning away furiously around the clock, each at perhaps 7,200 or 10,000 revolutions per minute (RPM), each storing perhaps 8TB of data – roughly 200,000 times the capacity of my first hard drive, a 42-megabyte unit from 1993. The miniaturized electronics in a hard drive employ nano-scale technologies. Every bit of data is microscopic, and yet integral to you, the user. If a single bit in a critical file gets corrupted, the file might not even open properly any more.

Nothing Lasts Forever
Does this sound like the type of thing that would last forever? Just think about the levels of complexity involved in each subsystem of the RAID, the “hacking” of the laws of physics on display in cramming so much information into one of these things. Fast speeds, infinitesimally small pieces, and no real margin for error. Nothing about any of that makes me think this is something that is going to last forever. I can imagine getting good use out of the coffee table my cousin made for me for “forever” in a sense that matters to me. But a RAID? If one little capacitor in the RAID controller goes up, that RAID controller is not going to work any more. The whole point of RAIDs is that they use drives redundantly so that when (not if) they fail, you can replace them and not lose data. The failure of subcomponents is designed into the product, with many RAIDs offering redundant controllers, power supplies, and all main system subcomponents other than the chassis itself.
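To put numbers on that, here is a hedged back-of-the-envelope sketch: the 56-drive, 8TB figures come from the example above, while the RAID-6 grouping is purely an assumption for illustration; real systems vary in group size and spare policy.

```python
# Back-of-the-envelope capacity math for the 4U unit described above.
# The 4 x RAID-6 grouping is an illustrative assumption.
DRIVES, DRIVE_TB = 56, 8
GROUPS, PARITY_PER_GROUP = 4, 2  # assume four RAID-6 groups of 14 drives

raw_tb = DRIVES * DRIVE_TB                                  # 448 TB raw
usable_tb = raw_tb - GROUPS * PARITY_PER_GROUP * DRIVE_TB   # 384 TB usable

print(f"raw: {raw_tb} TB, usable after RAID-6 parity: {usable_tb} TB")
```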

You might think, “Well, I’ll just replace failed pieces of the RAID as they die, and I’ll be fine.” That sounds great, except that in six years there is a good chance nobody will manufacture 8TB drives any longer (yes, hard drives are going to keep getting more storage-dense for a while). Your RAID manufacturer may also stop making replacement RAID controllers after a number of years, and will probably run out of stock at some point. Another question is,

“Why would you want to take up a whole 4U of rack space on a measly 448 terabytes in the year 2025 when, maybe, you will be able to store 10 times as much in the same space using the state of the art?”

That may sound ridiculous, but by 2025, virtual reality (VR) and 360 video will probably feel more “real” than they do today and enjoy much wider mainstream adoption, and you will likely be producing content for them. (Hint: Shooting video in many directions at once takes up a lot more space than a single direction – and imagine if each of those sensors is itself 8K or 16K.)

There are many factors that drive equipment lifecycle in any given use case: the needs of the environment evolving, equipment aging beyond a supportable life, and various combinations and permutations. Many of our clients’ environments have dozens of such pieces of infrastructure that need to be thought of as having life cycles so that proper planning and budgeting can be done. Each individual piece of infrastructure is part of a larger, more complex “workflow” that needs to keep humming along. It is a lot to keep up with!

How to Keep Up
Many of Chesapeake Systems’ (CHESA) customers are under what CHESA calls “MSAs,” or “Maintenance & Support Agreements.” These allow us to provide ongoing proactive maintenance and reactive support services for, sometimes, an entire operation’s technical infrastructure. An MSA also augments manufacturer-level support agreements, as well as any on-premises staff who may be charged with keeping those operations going.

Over the coming months, CHESA will begin rolling out a Technology Lifecycle Guide for our MSA customers. We will include all the key infrastructure that falls under the MSA and map out how viable each piece of equipment will be over five years under average circumstances, based on a host of criteria. This gives us a template for working with your team and for generating budgetary quotes that can be slotted in as equipment gets closer to “aging out.” Our clientele, in turn, will have years of advance notice as to when they may need to replace things, what the priorities are, how one equipment change may impact another piece of the infrastructure, and so on. We think this will be a very useful part of our service, and we look forward to getting these guides in front of our customers to gather feedback and improve the information we present.

We will put some energy into the layout, but imagine a list of all your key gear, a five-year map that will be updated over the coming years, and color-coded boxes that indicate when a piece of equipment is “safe,” when it is starting to show its age, and when it is no longer easily supportable. We think this will be a very handy “living document” for our MSA customers, so stay tuned!
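As a purely illustrative sketch (not the actual Guide format), the underlying idea might look like this; the equipment names, install dates, and support window are invented examples.

```python
# Illustrative lifecycle map: each piece of gear gets a color-coded status
# for a given year. Names, dates, and the 5-year window are invented.
from datetime import date

STATUS_COLOR = {"safe": "green", "aging": "yellow", "unsupportable": "red"}

def lifecycle_status(install_year: int, supported_years: int = 5,
                     year: int = date.today().year) -> str:
    age = year - install_year
    if age < supported_years - 1:
        return "safe"
    return "aging" if age < supported_years else "unsupportable"

gear = {"production RAID": 2014, "core switch": 2016, "MAM server": 2012}
for name, installed in gear.items():
    status = lifecycle_status(installed, year=2017)
    print(f"{name}: {status} ({STATUS_COLOR[status]})")
```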

Solar Panels Help Chesapeake Systems See the Bright Side of Renewable Energy

Chesapeake Systems is a company that thrives on cutting-edge technology, strong customer service and modern workflows. As we head into 2017, we are also a company that’s looking to the sun.

The Chesapeake Systems workplace is already unique. Our offices are located in a former Methodist Church in Baltimore’s Hampden neighborhood. When the church was destroyed by fire in 2008, we jumped in and adapted the building for modern use. With the interior renovation long since complete, our eyes turned to the outside of the structure, which has an expansive, south-facing roof, making it perfectly positioned to reap the benefits of solar panels.

That, combined with the chance to reduce our environmental footprint, gain more control of our energy usage, and take advantage of substantial tax credits, made the decision to invest in solar panels both pragmatic and exciting.

The Logistics
Solar panels work by using photovoltaic cells to capture the sun’s energy, which is converted to Direct Current (DC). From there, an inverter transforms the DC to Alternating Current (AC). Having the panels installed correctly was important to us, which is why we chose to work with a Maryland-based commercial solar installer, Pfister Energy. Fortunately, I had an established connection with the president, William Cole, who also owns a roofing company. As our expansive (and expensive) roof was installed as recently as 2010, it was important to me to work with a reputable company whose industry expertise I trusted. An added benefit was having the roof renderings readily available, which helped Pfister Energy with the installation process.

Purchasing and installing solar panels is not a quick process. From vetting the right company to securing the appropriate permits to the actual installation of some 80 solar panels, the project took over six months.

There were a few stumbling blocks along the way, but overall the process was extremely smooth, and I can’t say enough about Pfister and what a great job they did. In addition to professionally and discreetly installed solar panels (they are not visible to those driving down the main thoroughfare in Hampden), we can expect our gas and electric costs – roughly $1,800 per month – to decrease by about $5,000 per year.

While the numbers speak for themselves about why this was a worthwhile investment, I really think this reflects the Chesapeake Systems’ philosophy of ownership and taking control. Many of our competitors rely on subcontractors, but at Chesapeake Systems we strive to hire and train our own workforce. This empowers us to control the cost, quality and results as much as possible – in the same way that owning solar panels enables us to control our energy usage as much as we can.

For any company considering going the solar route, I would recommend the following tips:
Start the process as early as possible, and allow as many as six months to complete it; you only receive the tax credits for the years the panels are placed into service, and things like weather and the permit process must be factored in.
Your power may be down for a brief period during the installation, and the process will involve quite a bit of noise as workers bolt in equipment.
An outright purchase, as opposed to renting, means substantial out-of-pocket costs, but you can see an ROI in as little as six tax years, depending on the size of your roof and of your install; it could be more or less.
Consider the lifespan of your roof and time the project accordingly. You don’t want to install solar panels with a lifespan of 25 years if your roof only has 10 viable years remaining.
In Maryland, businesses can also benefit from SREC (Solar Renewable Energy Credit) revenue. State law requires power companies to purchase a certain percentage of their energy from alternative sources (either other companies or individuals who create renewable energy), so for every 1,000 kilowatt hours of clean, renewable, solar energy you generate, one credit can be sold – see the quick sketch after this list.
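As a quick, hedged sketch of that SREC math (the annual output figure is an invented example; real output depends on system size, orientation, and weather):

```python
# One SREC per 1,000 kWh generated. The annual output is hypothetical.
annual_kwh = 30_000                  # invented example for a roof this size
srecs_earned = annual_kwh // 1_000   # one credit per megawatt-hour
print(f"{annual_kwh} kWh/year -> {srecs_earned} SRECs available to sell")
```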
In Conclusion
Seeing the meters spin less will be exciting. I like the idea of not being so reliant on outside sources, and the real excitement will come when I see a gas and electric bill that’s $5,000 a year cheaper. But overall, being in control of my destiny is the coolest part of the whole thing.

Have questions about our solar project or want to learn more about what Chesapeake Systems does? Give me a call at 410-752-7729.

Clients – Here’s How to Make Your Project a Success

It’s been said the job of a project manager is like riding a bike. Except the bike is on fire. You’re on fire. Everything is on fire.

As someone who has completed more than 100 projects for Chesapeake Systems, I can certainly appreciate the challenges that come with the role of project manager. But I also know that there are actions our clients, and we, can take that will set everyone up for success.

Here are three ways a client can help ensure the most efficient, effective project from Chesapeake Systems.

1. Be invested in the project both financially and emotionally.

A client that is really invested emotionally in what they’re about to buy, whether it’s $200 or $200K or $1 million, really sets themselves and Chesapeake Systems up for success. One of our clients, a media company, is simultaneously moving to a new facility and significantly upgrading its existing storage infrastructure. During the course of the new facility buildout, they came to us and said that, in addition to their electrical, plumbing and HVAC contractors, they considered Chesapeake Systems one of their primary contractors for the project. We were involved from the beginning in terms of facility planning. We had the opportunity to say: you’ll need this amount of electrical power, this kind of cabling, this amount of rack space. The client’s willingness to involve us in project planning guaranteed as smooth a transition as possible and helped set them up for future success.

2. Know what you’re buying – and want it.

A lot of times clients will come to us looking for an asset management system or shared storage, and we will educate them and present them with multiple options, so they can see the benefits of various approaches. What we love to see from the organizations we work with, and what can help ensure success, is for them to want to understand how the product works.

We want our clients to:

Ask questions

Be available for demos and in-depth conversations

Make sure all key stakeholders are involved in the discussions, including managers and end users

Understand how the new technology will interact with their environment and impact their work style (How will it integrate with their existing infrastructure and workflow?)

There are several reasons why we desire this, but one of the main ones is that once we walk away, the client will be responsible for this new system in their environment. We’ll show up, we’ll install it, and we’ll train them on it, but they’re responsible for the day-to-day interaction with this new product that they purchased. We want the client to have an emotional investment in what they’re buying and to appreciate it. After all, these systems will serve as the cornerstone of many aspects of their operation, and they will rely on them to help fulfill their responsibilities and do their job.

3. Be prepared.

Whether it’s making sure all the stakeholders have fully bought into the project before signing the contract, ensuring the right people in the company prioritize and are available for key project milestones, or simply ensuring all the items on our prerequisite list have been checked off before the project begins, being prepared makes all the difference.

Because the projects we’re involved with rely on our professional services team for their implementation, the quotes we generate include labor line items that inform the project’s scope of work (SOW). Our SOW outlines who you are as a client, what systems you currently have in place, what we’re selling and installing for you, and how it’s going to integrate into your environment. Also included in that scope of work is a list of prerequisites we expect the client to have completed ahead of the project, such as making sure adequate power and rack space are available for new equipment and that logistics have been handled to allow our staff timely access to facilities. Our customers’ signature on the SOW document indicates they are aware of and have followed through on the prerequisites and have asked any pertinent questions to ensure the project runs smoothly.

It’s important to keep in mind that the project begins long before the customer signs on the dotted line. It begins with our first conversation. The more our customers invest in those pre-deployment phases, the more successful the project will be.

For every project, Chesapeake Systems really strives to understand what the needs are and to talk to the end users. If you don’t know what questions to ask, tell us. We’re professionals. You came to us for a reason – utilize us. If you’re not sure about what you’re buying, ask for a demo.

Another benefit of a successful project is that, oftentimes, our relationship with our customers extends beyond the project as we segue into supporting the post-deployment environment. The smoother the project runs, the more seamless this transition to day-to-day usage.

At Chesapeake Systems, we love nothing more than to educate and inform and empower our clients. Are you interested in speaking to another organization that was facing a similar challenge or made a similar transition? Ask us, we’ll put you in touch with them!

Modern LTO: mLogic Revolutionizes Long-term Storage

History of LTO
LTO tape is a data storage format our clients have been using for years for longer-term preservation of media files, and it continues to be as relevant as ever to the needs of video and rich media producers. The tapes are relatively inexpensive in terms of price per terabyte relative to hard drives, and similarly dense – a small, light media format that is easy to store and transport. Of course, what makes LTO (which stands for Linear Tape-Open) particularly appealing is its data preservation capability. While certainly not infallible, LTO tapes are, as a technology, better than hard drives at preserving data for long periods without the data corrupting or the media itself failing. It is what the tapes are designed for, after all.

We tell our clients that it’s always best to archive data onto a pair of LTO tapes rather than a single tape, in case you do experience a media failure. With that said, if kept in the proper environmental conditions, LTO tapes can work perfectly fine after sitting on the shelf for ten or more years. This is totally unlike, say, a pile of desktop hard drives sitting on a shelf unused for years at a time. Hard drives are particularly prone to failure after long periods of inactivity, and any hard drive that’s been “shelved” has very high odds of failing if you try to fire it up for the first time in five years, hoping to pull some files off for a new project. There is a high chance the drive will fail to spin up, and the read/write head may literally have stuck to the platter due to lubricants congealing over time. Hard drives are made to spin, and they work better when they are spun up regularly.

A Different Way
We at Chesapeake Systems have long sought a more “approachable” entry into LTO technology, and a few ventures over the years have tried to bring a less expensive desktop implementation of the LTO tape drive to market, with less than total success. Most LTO tape systems come in the form of larger “library” units, which often feature a handful of LTO drives to read from and write to multiple tapes at once. Tapes are stored in these libraries by the dozens, hundreds, or even thousands, and automated robotics (controlled by one of a handful of tape archive software packages, such as Archiware P5 Archive or Quantum’s Storage Manager) handle the job of moving tapes between inactive “slots” and the actual tape drives which the tapes need to be inserted into in order for their content to be accessible, or for the tape to be available to have more data written to it. Libraries can fill half a server rack, be the size of one or more server racks, or even fill an entire data center. These sophisticated tape library systems are fantastic, but out of the budget for smaller shops. Some folks also look to LTO as a useful format for, say, sending data from a shoot back to the post-production operation, or to send a completed program from post over to a broadcaster for delivery (it is quite common these days for broadcasters to request LTFS-formatted LTO tapes as a file-based delivery master format replacement for the video tapes of yore). For these more basic data transport scenarios, a large tape library system managed by third-party archive management middleware is just too “big” a solution.

Tabletop LTO drives have been a reality for a number of years, but have not been embraced by many media producers due to, I believe, the complexity of their setup and use. First, you need to connect the LTO drive to your computer system, and that up until recently has required putting a SAS (serial-attached SCSI) PCIe card into your desktop computer, and then connecting the interface card to the tabletop drive. That’s a practically old-school SCSI level of annoyance for today’s users, many of whom have “come up” in the era of USB, Firewire and Thunderbolt interface standards. Many media producers are on the Mac platform, and no Mac has shipped for at least 3 years that even offers an internal PCIe slot that could accommodate a SAS card.

Additionally, the software used to control these tabletop LTO drives has been a mixed bag, with some people using third-party archive software to control even a single-drive tabletop LTO system, while others used LTO drive control software that was more “native” to the drive itself, available directly from the manufacturer for use under either macOS or Windows. These simplified software packages, which typically format LTO tapes as LTFS (Linear Tape File System), were the most promising approach we’d yet seen to controlling tabletop LTO drives. LTFS is a neat format – it keeps an index of all the files on a tape immediately available on the tape itself, which can be read by a system, allowing it to present a list of the tape’s files to an operating system. With the right software, the operating system can be essentially “tricked” into thinking of the LTO tape as a “drive,” and can display it as a file path in the Finder on macOS or in Windows Explorer. Files can be dragged onto the tape “drive” that appears to be mounted to copy data onto it, and dragged off to copy data off. Essentially a desktop hard drive-like workflow, with a data tape. You can’t play a file directly off of the tape, because that requires actually pulling data from tape back to a file system, which the tape is not (it is only masquerading as one, via the LTO tape drive software). All in all this has worked well, but again, these days, most media producers don’t have a ton of use for a tape drive solution that requires a PCIe SAS card.
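Here is a hedged sketch of that drag-and-drop workflow, assuming the drive software has already pseudo-mounted an LTFS tape at a hypothetical path (mount points vary by OS and software).

```python
# The LTFS tape masquerades as a mounted volume, so ordinary file copies
# work. The mount point and file paths are assumptions for illustration.
import shutil

TAPE_MOUNT = "/Volumes/LTFS_TAPE_001"  # the tape posing as a volume

# Copying onto the "volume" writes to tape...
shutil.copy("final_master.mov", f"{TAPE_MOUNT}/final_master.mov")

# ...but to use the file you must copy it back to a real filesystem first;
# applications cannot play media directly off the tape.
shutil.copy(f"{TAPE_MOUNT}/final_master.mov",
            "/Users/editor/restores/final_master.mov")
```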

New Solution
Last year we became aware of a solution to this problem, and toward the end of 2016 we finally had the chance to test it out; I’d like to share what we found. We learned about a new manufacturer of tabletop LTO drives aimed at media professionals, called mLogic, and they were kind enough to send one of their tabletop drives for us to test this past autumn. mLogic’s tabletop drives are similar to others we’d worked with in the past that used SAS to connect to a host system, with one major difference: they feature a native Thunderbolt interface! It seemed like the right solution for a customer we were considering it for, but we wanted to test it in the shop to make sure it would work as expected. I spoke with Roger Mabon, mLogic’s CEO, and he was willing to send us a loaner unit to evaluate for our prospective customer, and for our customer base in general. What we found was encouraging.

Like the tabletop LTO drives that use SAS interfaces, mLogic’s drives use Mac or Windows software that allow LTO tapes to be formatted as LTFS, and basically treated like a mounted filesystem, even though they are really not, and the same limitations I described above apply – you cannot work with data directly from the tape, as it must first be copied back to a real filesystem, which your applications are designed to work off of.

You plug the drive in via a single Thunderbolt cable, which of course couldn’t be easier. mLogic’s Thunderbolt 2 devices can work with the latest Thunderbolt 3-equipped Mac and Windows machines, so long as you use a Thunderbolt 3 to Thunderbolt 2 adapter cable – no big deal. You install the Mac or Windows software, pop in a tape, format it, tell your computer how you’d like to pseudo-mount the tape as a volume or directory, and there it appears, ready for data. Pop a tape with data into another computer with an mLogic drive (or nearly any setup that can read LTFS-formatted LTO tapes), and you will be able to see your files and pull them off to disk. It really is that easy.

I asked Roger at mLogic what inspired him to release their line of products (they have a few models, including a rackmount dual-drive unit that allows you to clone data to two tapes at once – something you can also do by daisy-chaining two of the single-drive desktop units together and enabling the feature in their software). Roger reminded me of something that had totally skipped my mind: he was the guy who founded G-Technology! Who in the world of media hasn’t used G-Tech drives at some point over the years? Chesapeake has sold probably thousands of them, and I told Roger I had missed this connection. He went on to tell me that when he would visit various production shops and see stacks of his G-RAID drives on shelves or in closets, it would kill him, because he knew they were not a reliable long-term data storage medium; that is not what desktop hard drives are for. “So I decided to do something about it,” Roger said, and he founded mLogic to improve access to the much more appropriate longer-term storage technology of LTO. I found this anecdote both funny and familiar, as I have felt that same terror when seeing stacks of desktop drives. For Roger it must have been even more visceral, since, more often than not, they were stacks of his own drives! I have to give the guy credit for really seeing the need and doing something about it.

“We’ve never before had such a compelling desktop archive – and even desktop archive plus MAM – solution in our portfolio.”

A workflow we will be implementing soon using the mLogic drive will also involve another piece of software, the MAM CatDV – specifically its “archive plugin,” which is built into some versions of the desktop CatDV client application. CatDV’s archive option was originally developed for another tabletop LTO solution from a number of years ago that never achieved its promise, but we found that it works very well with the mLogic drive, its software, and LTFS-formatted LTO tapes. The archive plugin takes an asset or assets from CatDV and, while keeping their metadata (and proxy, if you generated one) on disk, shuffles the full-resolution assets back and forth between their original disk path as tracked by CatDV and essentially any other external “file path” location. The good news is, LTFS tapes that are mimicking a file path, when using the mLogic software, work fine as a destination for these archive operations! And of course, the CatDV archive plugin is used to restore the data as well.

This is a nice extra layer for people who want to tag assets with searchable rich metadata, and perhaps keep viewing proxies available online, even when the bigger original media files are offline on tape. What’s even more interesting is that even a purely desktop implementation of CatDV (with no back-end database residing on a separate server) should be able to handle this workflow with an mLogic drive. This brings MAM down to a truly desktop technology, with an all-in cost of less than $10K for hardware, software, and some time for setup and basic training. Even better, if your needs expand, metadata captured in the desktop version of CatDV can be migrated to the full database version of the software down the road.

We’ve never before had such a compelling desktop archive – and even desktop archive plus MAM – solution in our portfolio. We plan to make sure our customers know about the mLogic drives, and that it’s never been so easy, or so cheap, to embrace the fantastic medium of LTFS-formatted LTO tapes. A tabletop LTO-7 drive from mLogic, which can hold roughly 6TB of data per tape, runs around $5K. If this is something that interests you and you’d like to learn more, please be in touch!
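To make that archive pattern concrete, here is a conceptual Python sketch – emphatically not CatDV’s actual plugin API – of moving a full-resolution file to an LTFS path while the proxy and metadata stay on disk; all paths are hypothetical.

```python
# Conceptual archive/restore pattern, not CatDV's API: the hi-res file
# moves to the tape's pseudo-mounted path, the proxy and metadata stay on
# disk, and the record remembers where the original went.
import shutil
from pathlib import Path

def archive_asset(asset: dict, tape_mount: str) -> None:
    src = Path(asset["original_path"])
    dest = Path(tape_mount) / src.name
    shutil.move(src, dest)               # full-res media goes to tape
    asset["archived_path"] = str(dest)   # metadata and proxy stay behind

def restore_asset(asset: dict) -> None:
    shutil.move(asset["archived_path"], asset["original_path"])
    del asset["archived_path"]

asset = {"original_path": "/media/raid/interview_cam_a.mov",
         "proxy_path": "/media/proxies/interview_cam_a.mp4"}
# archive_asset(asset, "/Volumes/LTFS_TAPE_002")
# restore_asset(asset)
```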

As Codec Competition Heats Up, Which Direction Should You Go?

With all the dozens of codecs complicating video operations and workflows, why would anyone want another one? Quite simply, bandwidth is money. Video now accounts for most Internet traffic, and consumer appetite continues to grow. Expectations of video quality are growing too: users have little patience for buffering delays, and HD is rapidly becoming the minimum expected resolution, with Ultra HD on the horizon. Netflix, for example, already requires original content it purchases to be shot and delivered in 4K Ultra HD, signaling its future intentions. More and more video is being served; as the infographic below from Statista (based on a study by Sandvine) shows, Netflix and YouTube alone account for about half of all US Internet traffic.

[Infographic: Netflix and YouTube Are America's Biggest Traffic Hogs | Statista]


To enable new services like Ultra HD, and HD over 4G, three new video codecs are vying to become the successor to the current MPEG-2 and H.264 standards: High Efficiency Video Coding (HEVC, or H.265), VP9, and AV1. The Moving Picture Experts Group (MPEG) ratified the specification for HEVC in 2013, and it promises to reduce bandwidth for a given video quality by about 50%. That 50% reduction is a crucial improvement: it means being able to deliver 720p HD over 4G networks. Even with the savings, though, Ultra HD still requires about double the bandwidth of HD.

Google has a competitor to HEVC called VP9. While not as efficient as HEVC, it comes at a very compelling price – free. VP9 is distributed as open source under a BSD-style license. While Google was working on VP10, the successor to VP9, it decided to join forces with Amazon, Cisco, Intel, Netflix, Mozilla and Microsoft. The collaborative effort is called the Alliance for Open Media, and the new codec, AV1, includes work done on VP10 as well as Cisco’s Thor and Mozilla’s Daala codecs. The AV1 license will be open source, with no requirement that licensees disclose their own code. The initial release is expected in 2017.
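As a rough sketch of that bandwidth arithmetic, the following assumes illustrative H.264 baseline bitrates (not any service’s published numbers) and HEVC’s claimed ~50% savings:

```python
# Illustrative bitrate math: halving H.264 bitrates per HEVC's claim.
# Baseline numbers are assumptions for the sake of the arithmetic.
H264_BITRATES_MBPS = {"720p": 5.0, "1080p": 8.0, "2160p (UHD)": 32.0}
HEVC_SAVINGS = 0.5

for res, h264 in H264_BITRATES_MBPS.items():
    hevc = h264 * (1 - HEVC_SAVINGS)
    print(f"{res}: H.264 ~{h264} Mbps -> HEVC ~{hevc} Mbps")
# Even halved, UHD (~16 Mbps here) still needs roughly double the
# bandwidth of 1080p over H.264 -- the point made above.
```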

A Brief History

The first MPEG standard, MPEG-1, was created to deliver video on CDs. It was approved in 1992 and was widely adopted for many digital video applications. It remains popular due to its universality.

MPEG-2, first approved in 1995, is similar to MPEG-1 but adds some key features, including support for interlaced television video and better audio. (The ubiquitous MP3 format is MPEG Audio Layer III, defined in MPEG-1 and extended in MPEG-2 Part 3.) The MPEG-2 standard is used for DVDs and is widely used in broadcast applications, cable television and the Internet.

H.264 (MPEG-4 AVC) came out in 2003, and was designed to halve the bitrate needed by MPEG-2 for a given video quality. It provides other features, as well, such as provisions for Digital Rights Management for protecting content. H.264 is incorporated into modern Blu-ray discs (MPEG-2 is also supported), and is widely used in Internet applications such as YouTube and the iTunes store, as well as Adobe Flash, Microsoft Silverlight and Apple QuickTime.

VP8 was released by On2 Technologies in 2008. Google acquired On2 in 2010 and subsequently released VP8 under a modified BSD open source license. Its performance is generally considered comparable to H.264. Despite being open source, VP8 saw slower adoption because it arrived after H.264, as well as for competitive reasons.

VP9 was finalized in June 2013. Its first commercial use was in the Chrome browser.

HEVC was released as a final draft standard in January 2013.

Adoption

Early adopters of the new codecs will be the services that have some control over both ends. With PCs, tablets and mobile devices that perform video decoding in software, only a software update is needed. Google now supports VP9 in Chrome and on YouTube. Netflix is still extensively testing both HEVC and VP9. Even with the bandwidth savings of HEVC, Ultra HD would use more than twice the bandwidth of Netflix’s current ‘Super HD’ streaming format.

Another major factor potentially slowing HEVC adoption is licensing. Many companies with patents on the underlying technology, 25 at last count, have joined the MPEG LA patent pool to provide licensees with one-stop shopping and predictable fees. Notable patent holders not yet in the pool include Microsoft, Nokia, AT&T and Motorola. In March 2015, a second patent pool, HEVC Advance, was created; its members included GE, Dolby, Philips, Mitsubishi and Technicolor. Technicolor has since left, deciding to license independently.

Both HEVC and VP9 have lined up considerable vendor support. Google’s VP9 partners include ARM, Broadcom, Intel, LG, Marvell, MediaTek, Nvidia, Panasonic, Philips, Qualcomm, RealTek, Samsung, Sigma, Sharp, Sony and Toshiba. HEVC’s support is broader still, with decoders in silicon from Broadcom, Entropic, MediaTek, MStar, Qualcomm, Sigma and ViXS targeted at STBs, DTVs and consumer products; other HEVC partners include Elemental, Harmonic, ATEME, Envivio, Rovi, Samsung, LG, Sharp and Sony.

I love standards. There are so many to choose from!

The old saying certainly holds true for video codecs. While the promise of an efficient, open codec is certainly alluring, vendors and services seem resigned to having to support HEVC despite the continuing licensing questions. As a joint standard, HEVC has a head start in hardware support for encoding and playback. And then there are the strategic considerations. Apple is a member of MPEG LA, putting native VP9 or AV1 support in iOS and Safari in doubt. But because AV1 has Google and other powerful companies behind it, clearly neither standard can win it all. Many vendors and engineering departments will end up supporting both.

To try out VP9 for yourself, run Chrome from the command line with the switch --enable-vp9-playback, then search YouTube for various VP9 test videos.
