Inside the Broadcast Revolution: Key Takeaways from DCMUG’s Night at Monumental Sports

On March 9th, the DC Media Users Group (DCMUG) gathered for what turned out to be one of its best events yet. Hosted by CHESA and a roster of top-tier technology sponsors, the evening kicked off at Clyde’s of Gallery Place before moving into an exclusive behind-the-scenes tour of the Monumental Sports Network Production Studio and Control Rooms at Capital One Arena. The night was capped off with a suite for the Washington Capitals vs. Calgary Flames matchup.

The real main event, though, happened in between the tour and the puck drop: a wide-open Q&A session featuring a panel of practitioners who pulled no punches about what it actually takes to build, run, and future-proof a modern broadcast facility. The conversation touched on IP infrastructure, workforce evolution, cybersecurity, and the age-old question of when cutting-edge technology is the right call, and when it isn’t.

Here’s a deep dive into everything that came out of the room.

MEET THE VOICES IN THE ROOM

Leading the discussion was Jon Bednar, Founder and Principal Consultant of Codeso, and the architect behind the Monumental Sports Network facility, which the group had just toured. Jon is SMPTE 2110 certified and a former Navy broadcast engineer and instructor. He has designed IP-based broadcast environments for clients ranging from the United Nations and the NFL to HHS and the US Department of State. His real-world candor set the tone for the entire conversation.

The CHESA team, rounding out the panel, included:

  • Patrick Johnson, Director of Federal Sales at CHESA, opened the event and kept the conversation moving.
  • Jason Paquin, CEO of CHESA, moderated the Q&A and brought context from years of client-facing discovery and integration work.
  • Jason “Pep” Pepino, Director of Media Systems Design & Engineering at CHESA, weighed in on the technical and design side throughout.
  • Roger Sherman, Senior Solutions Consultant at CHESA and former Chief Broadcast Technology Officer at Voice of America, offered a rare federal practitioner’s perspective on the 2110 decision.

The audience was a mix of federal agency broadcast professionals, including teams from HHS and HUD, and commercial media operators from around the DC metro area. The back-and-forth was as honest as it gets.

THE HARDEST PART OF A 2110 BUILD ISN’T THE TECHNOLOGY

When Jason Pepino asked Jon Bednar what the biggest challenge was in upgrading the Monumental Sports facility, a project that ran from conceptual design in 2021 through its launch in May 2024, the answer was immediate.

“Honestly, the people.”

The facility had been the NBC RSN operation, running on legacy SDI infrastructure for 10 to 12 years. The engineering team knew that world intimately. The upgrade took them from a traditional Grass Valley-based routing infrastructure to Panasonic’s Kairos production switcher and an EVS-based IP routing environment. It would be hard to find a more dramatic technology transition in the broadcast world.

“A lot of legacy engineers lived in the SDI world for so long, and then everything changed,” Jon explained. “You can install the best, most well-engineered platform in the world, but if you don’t have people that can operate and maintain it, it’s only as good as the people.”

The philosophical shift between SDI and 2110 is significant. In an SDI environment, troubleshooting is reactive and tactile: plug in a meter, see the video, hear the audio, wait for something to break, and fix it. In 2110, that approach doesn’t work.

“With 2110, you have to be proactive. You constantly have to monitor and massage it. If something breaks, you can’t just put a meter on it. You have to know where the packet goes, where it was lost, what the fail rate is, whether it’s the red or the blue side.”
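To make that contrast concrete, here is a minimal sketch in Python (emphatically not any vendor's tool) of what "proactive" looks like at the packet level: join a single multicast flow, watch the RTP sequence numbers, and report loss before anyone sees a glitch on the wall. The multicast address and port are illustrative.

```python
# A minimal sketch of proactive 2110-style monitoring: count RTP sequence gaps
# on one multicast flow and report the loss rate. Addresses are illustrative.
import socket
import struct

GROUP, PORT = "239.1.1.10", 5004   # hypothetical red-path multicast flow
IFACE = "0.0.0.0"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((IFACE, PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(IFACE))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

expected = None
received = lost = 0
while True:
    data, _ = sock.recvfrom(2048)
    seq = struct.unpack("!H", data[2:4])[0]    # RTP sequence number lives in bytes 2-3
    if expected is not None and seq != expected:
        lost += (seq - expected) % 65536        # a gap means dropped packets
    expected = (seq + 1) % 65536
    received += 1
    if received % 10000 == 0:
        rate = lost / (received + lost)
        print(f"flow {GROUP}:{PORT} loss rate {rate:.6f} ({lost} lost)")
```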

What training methodology worked best? Shadowing. Walking engineers through the commissioning process in real time, letting them ask questions, observe, and build intuition alongside the system as it came to life, proved far more effective than formal classroom instruction alone.

The goal Jon described for a well-designed 2110 environment is elegantly simple: an operator who has spent their entire career on SDI should be able to sit down at the console and not know the difference. The route button makes the route. Switch takes the camera. Fader up means louder. The IP world underneath is invisible to the production operator.

But for the engineers maintaining it? They need to think like network professionals. And some people’s brains, he acknowledged honestly, simply aren’t wired for that — and that’s okay. Those individuals can still contribute in production engineering roles that don’t require deep packet-level troubleshooting.

MONITORING IN A 2110 WORLD: A LAYERED APPROACH

Jason Paquin pushed Jon to talk specifics about monitoring, because monitoring in a 2110 environment looks fundamentally different from the old SDI playbook. What Jon described across both the Capital One Arena and COA North (the off-site production facility) was a layered diagnostic stack, each tool serving a distinct purpose.

EVS serves as the orchestration platform and sits at the foundation. Its APIs integrate with Cisco NDFC and Arista EOS, providing the first level of visibility: bandwidth utilization per port, multicast flow tracking, and signal routing analytics. When a route fails to take, Jon’s first check is bandwidth saturation — “if it’s at 96%, you know why the route dropped.”
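As a back-of-the-napkin illustration of that first check, here is a minimal sketch that estimates link utilization the generic Linux way, by sampling an interface's byte counter twice, rather than through the EVS, NDFC, or EOS APIs the facility actually uses. The interface name and link speed are assumptions for the example.

```python
# A minimal sketch of "is the link saturated?" using plain Linux counters.
# Interface name and line rate are illustrative, not the facility's setup.
import time
from pathlib import Path

IFACE = "eth1"                 # hypothetical media-network interface
LINK_BPS = 100_000_000_000     # assume a 100 GbE link

def tx_bytes() -> int:
    return int(Path(f"/sys/class/net/{IFACE}/statistics/tx_bytes").read_text())

before = tx_bytes()
time.sleep(1)
delta = tx_bytes() - before
utilization = (delta * 8) / LINK_BPS * 100
print(f"{IFACE} egress at {utilization:.1f}% of line rate")
if utilization > 96.0:
    print("saturated -- that is why the route dropped")
```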

Telestream’s Prism Inspect is the next layer. When a signal looks off, routing it into Inspect immediately reveals the full ST 2110 flow: SDP file comparisons between the red and blue redundant paths, audio presence, and stream metadata. With the ability to monitor roughly 32 signals simultaneously, it provides a broad at-a-glance health check.

TAG sits on the monitor wall, delivering alarm-based monitoring with penalty boxes and configurable thresholds, with nearly 1,000 alarms available out of the box. It gives operators a broad “something’s wrong” signal. Crucially, though, TAG tells you that you lost video, not why. That’s where the next layer comes in.

Providius handles deeper network-level packet analysis, called in when packet drops or RTP errors need investigation at the multicast level.

And underlying all of it: PTP timing. Precision Time Protocol is the heartbeat of any 2110 plant, and as Jon put it with a laugh, “It’s easy until it’s not.” A disproportionate number of mysterious signal issues can be traced back to PTP drift.
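For the PTP piece, here is a minimal sketch of the kind of quiet drift watch Jon is describing, assuming a Linux host running linuxptp's ptp4l: tail its log output and flag when the reported master offset wanders past a threshold. The unit name, log source, and threshold are all illustrative.

```python
# A minimal sketch of a PTP drift watcher: follow ptp4l's log lines and alarm
# when the "master offset" exceeds a threshold. Values are illustrative.
import re
import subprocess

OFFSET_NS_LIMIT = 1000  # alarm threshold in nanoseconds (illustrative)
pattern = re.compile(r"master offset\s+(-?\d+)")

# Follow the system journal for the ptp4l unit; adjust to wherever logs land.
proc = subprocess.Popen(
    ["journalctl", "-fu", "ptp4l", "-o", "cat"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    match = pattern.search(line)
    if not match:
        continue
    offset = int(match.group(1))
    if abs(offset) > OFFSET_NS_LIMIT:
        print(f"PTP drift alarm: master offset {offset} ns")
```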

Roger Sherman offered a sharp observation: SDI alarm systems would often just flag “illegal video” — technically accurate but diagnostically useless. The 2110 monitoring ecosystem, by contrast, doesn’t always hand you the answer, but it does point you toward where to start digging.

CONSTRUCTION DELAYS, THE CAPS’ WIN STREAK, AND ADAM SANDLER

One of the lighter, but genuinely instructive, threads of the evening was Jon’s account of what it took to actually complete the build.

The conceptual design started in 2021. The facility launched in May 2024. The biggest delays? Construction, on the heels of COVID, with its material and parts shortages. Arista had lead times of up to 11 months at one point.

But the arena portion of the build had a uniquely Washington problem. The agreement with the city required maintaining operational continuity throughout construction. The start date was locked. It could not move.

“Every time the Caps won, our timeline got shorter and shorter,” Jon said. “My wife would ask what was wrong, and I’d say, ‘They won again.’ Every win pushed us further.”

When the Capitals finally lost, he was the only person in Washington celebrating.

The first live event in the newly completed facility? An Adam Sandler show. Not exactly a stress test for the broadcast infrastructure, but the team used it to run some routing and cameras, a warm-up before the real thing.

Monumental has since built a Verizon dark fiber loop connecting the arena, the Capitals’ practice facility, and the Mystics’ arena. JPEG XS is traversing that loop today, with plans to eventually move all 2110 traffic across all facilities from a centralized production hub.

2110, SDI, OR NDI: THE HONEST ANSWER IS “IT DEPENDS”

One of the most valuable portions of the evening was a direct question from Jason Paquin: setting budget aside, what are the actual deciding factors between going 2110 and staying with SDI?

The three panelists each offered a different lens.

Jason Pepino’s answer was scalability. In a traditional SDI router environment, a 128×128 frame is essentially maxed out on day one. Adding capacity means a second router, tie lines between them, and rapidly escalating costs. With 2110, scaling is adding a network switch. For organizations with growth ambitions, that flexibility is meaningful even if the upfront investment is higher.

Roger Sherman’s answer came from experience at Voice of America. The driving factors there weren’t prestige or future-proofing for its own sake; they were practical. Distributing gateways and endpoints across the facility meant they weren’t pulling all signal paths back to a single central location, saving on copper runs, core holes, and installation labor. A second driver was resolution flexibility: some VOA services operated in SD, while others, particularly Eastern European bureaus, were pushing 4K. A single 2110 environment handled both simultaneously.

But Sherman was equally clear about the limits. He recalled a conversation with TV Martí, a much smaller operation that wanted to pursue 2110. His advice was direct: don’t. “It was prohibitive for their scale and their needs.”

Jon Bednar’s framework was the simplest: always ask why. If a client says they want 2110, his first question is what problem are they trying to solve. He described a client in New York City who wanted a full IP infrastructure, and when pressed, couldn’t articulate why. They ended up with NDI and SDI, and it worked perfectly for them. “They have no roadmap to go to 4K. They’re not scaling across multiple facilities. Save the money and put it somewhere else.”

For organizations making major capital investments, particularly federal customers who may not see a comparable budget for a decade or more, Jon and the panel were aligned on one thing: invest in the fiber backbone now, regardless of your current technology decision. The labor of the pull is the dominant cost. The incremental cost of pulling 512 strands instead of 96 is comparatively small, and fiber is future-proof in a way that no endpoint device is. The Monumental team pulled 512 strands to each redundant rack. They needed 96 on day one.

THE ENTERPRISE NETWORKING PROBLEM: BROADCAST AND IT STILL DON’T SPEAK THE SAME LANGUAGE

An attendee from a federal agency raised a challenge that clearly resonated with most of the room: their broadcast and networking teams are siloed, they’re operating on enterprise networks not designed for video, and getting approval for the specific switches needed for a media production environment, even for NDI, is an uphill battle.

Jason Pepino was direct: broadcast media networks and enterprise IT networks have to be physically separated. Not VLANs; separate switches. The bandwidth profiles differ, the multicast requirements differ, and the update cadence for broadcast systems (where an OS may be intentionally frozen to maintain certification and stability) conflicts directly with enterprise IT’s security patching cycles.

“Corporate IT guys are going to ask why you’re passing so much bandwidth. And you still have to keep up on security, but some of these systems can only get to a certain point because the provider only brought the OS up so far.”

Jon added a practical tool for navigating internal budget conversations: engage Cisco and Arista directly. Both companies have media-specific technical teams with documentation that explicitly explains why a general-purpose enterprise switch won’t work on a broadcast media network, and why the media-optimized variant is required. That documentation can be decisive when you’re trying to make the case to an IT procurement team or an agency budget officer.

Roger Sherman reframed the underlying problem: it’s a trust and language issue as much as a technical one. If a broadcast engineer can walk into a conversation with enterprise IT and demonstrate security fluency, speak to how the media network is segmented, how threats are mitigated, what the exposure surface actually looks like, they have a much better chance of getting the hardware and support they need.

“Once you can speak the language, you can get them to trust you. Work with them together.” He also noted the challenge that many in the room nodded at: just when you build that trust with someone on the IT side, they get promoted or leave.

CYBERSECURITY: THE CONVERSATION THE BROADCAST INDUSTRY CANNOT IGNORE

Perhaps the most sobering thread of the evening was cybersecurity. As broadcast infrastructure migrates to IP, the attack surface expands, and bad actors are already active.

“Do you guys witness bad actors frequently?” someone asked.

“Frequently,” Jon replied. “Every facility I’ve ever worked at, there are metrics where it was 10,000 hits a day on the external firewall.”

This isn’t theoretical. A 2110 plant is not a closed SDI environment with copper everywhere. The orchestration platform that used to be a massive, dedicated piece of hardware is now a virtual machine. A single compromised VM could take down an entire broadcast infrastructure — audio, control, tally, routing, everything. If the facility generates revenue through live events or chargebacks, the business impact of a successful breach is severe.

Roger Sherman outlined a pragmatic approach to segmentation: certain assets, particularly ingest encoders taking feeds via SDI, can be placed in a DMZ outside the inner firewall. If someone compromises that encoder, the incoming signal is already SDI. The blast radius is limited. “I don’t care if you hack that encoder,” he said. “Put it outside the firewall. I’ve got fewer ports to worry about. You have fewer ports to worry about. And we can proceed.”

The architecture Jon used at Monumental started with the broadcast network as a complete island. Third-party signal delivery (like connectivity to Encompass in Atlanta) went over dedicated dark fiber with no shared firewall exposure — a direct line, touching nothing else. As operational needs grew and facility-to-facility connectivity became necessary, proper dual-firewall segmentation was added. Today, anything that crosses the public internet (Zixi feeds and the like) passes through two firewalls. Monumental also hired dedicated cybersecurity staff specifically for broadcast and 2110 security.

Jason Paquin connected the cybersecurity conversation to a historical pattern: when facilities moved from SD to HD, broadcast engineering and IT had to merge for the first time, and the friction was real. He recalled being a young engineer watching broadcast and IT teams fight across a conference table at WABC New York during a SAN installation, neither side willing to acknowledge the other’s expertise. The current transition is the same collision, but at a higher level of complexity, with cybersecurity now in the mix.

His framing for the discovery conversation resonated throughout the room: if a client’s plan is to have their general IT team manage the broadcast network switches, someone needs to stop and calculate the cost of being down, and the cost of chasing issues with people who don’t have the right expertise. When that number starts approaching the cost of the proper solution, the conversation changes.

THE WORKFORCE IS CHANGING — AND THAT’S JUST THE TRUTH

Woven through every topic of the evening was a theme that nobody introduced directly, but that kept surfacing: the broadcast engineering workforce is in the middle of a generational shift, and the industry is moving whether people are ready or not.

“Legacy engineers leave, and they’re going to be backfilled by a broadcast IT guy,” Jon said. “A lot of the hires I see now for day-two support, it’s not the broadcast engineer from ABC. It’s a 25-year-old with an IT degree who also streams. That’s the perfect candidate for broadcast IT engineering. They understand enough about video. They understand more about networking. That’s just the blunt truth.”

The job postings already reflect this. Almost universally, broadcast engineering roles now require Cisco, Arista, and Layer 3 networking experience. They don’t ask whether you can troubleshoot an SDI frame.

Jon’s advice to his own teams, going back to when he ran AV integration in Baltimore: go on three interviews a year. Not necessarily to leave, but to read the market. See what skills employers are asking for. The job requirements tell you where the industry is headed more clearly than any conference keynote.

The through line, as Jason Paquin framed it, is that IP migration isn’t just a technology change — it’s an operational, staffing, and cultural change, all at once. Organizations that treat it as a technology procurement project and ignore the people side will find themselves with a world-class system they can’t fully operate or maintain.

THANK YOU TO OUR SPONSORS

This DCMUG event was made possible by the generous support of our sponsors. Here’s a brief introduction to each:

Backlight

Josh Norman (President & CRO) and Alex Burke joined us, representing Backlight, makers of Iconik — one of the leading media asset management platforms in the broadcast industry. Iconik was referenced throughout the evening as a go-to MAM solution for media organizations managing large volumes of content.

EVS

Bevan Gibson (North American Operations) and Will Walz (Northeast) represented EVS. You likely know EVS for sports replay — if you’ve watched any live sport, you’ve seen their technology at work. EVS has a significant installation at Monumental Sports, including the EVS Neuron conversion platform that Jon discussed extensively during the Q&A. They also provide control systems, orchestration infrastructure, and robotics.

LiveU

Mike Mahoney (VP of Growth Markets, US & Canada) and Jared Brody represented LiveU. Best known for broadcast-grade bonded cellular encoding and transmission, LiveU is now pushing into bonded IP over WAN and LEO satellite connectivity, with AI-enhanced workflows in development. If you’re watching a live news report from a field location, there’s a good chance LiveU is how it’s getting back to the studio.

Studio Network Solutions (SNS)

Chance Hayworth (Northeast Territory Manager & DoD Territory Manager) represented SNS, a company specializing in high-performance shared storage and complete workflow solutions. SNS also serves as the OEM manufacturer for Ross Video devices and works closely with the CHESA Federal team on a range of opportunities.

Telestream

Bob Barnshaw and engineer Dave Norman represented Telestream. As Jason Pepino noted to close out the sponsor introductions: “You’d be hard pressed to find a broadcast facility without something Telestream inside.” Their Prism Inspect platform was central to the monitoring discussion all evening. They also offer transcoding, test and measurement tools, and Stanza, their captioning application.

LucidLink

Rich Warren introduced LucidLink — a cloud-based storage collaboration platform that mounts as local, shared storage and is globally accessible. The short version: it puts everyone in the same studio, regardless of where they physically are.

ABOUT DCMUG

The DC Media Users Group holds quarterly events in the DC metro area, bringing together federal and commercial broadcast professionals to share what’s working and what isn’t. The format is deliberately practitioner-focused: not vendor pitches, but real conversations from people in the trenches.

Coming up: DCMUG will have a presence at NAB in Las Vegas, followed by an event alongside the Bits by the Bay Conference, held right on the Chesapeake. If you haven’t been to Bits by the Bay before, it’s worth looking into.

If you work in broadcast, media production, or AV integration in the DC metro area — whether in a federal agency, a commercial facility, or somewhere in between — this is a community worth being part of. The conversations are real, the people have done the work, and you’ll almost certainly walk out with something you can use.

Well, and there’s usually a sports game or concert involved. That doesn’t hurt either.


When Machines Enter the Control Room

The final vendor panel at Chesafest 2026 saved its most time-sensitive question for last.

For the first three panels, AI was a topic you could discuss at a philosophical remove. The file system will evolve over years. MAM architectures will shift over decades. Human oversight in media operations will be negotiated gradually. But in live broadcast, there is no gradually. A shot is taken or it isn’t. A graphic fires or it doesn’t. Audio goes to air or it goes silent. The decisions happen in real time, and the consequences arrive just as fast.

Moderated by Jason “Pep” Pepino, Director of Media Systems Design and Engineering at CHESA, Panel 4 brought together representatives from LiveU, Vizrt, Netgear AV, and AI Media to answer a question that gets more urgent every year: should AI be allowed to make real-time production decisions inside a live control loop, or should it remain strictly advisory?

The answer, as it usually does, landed somewhere in the middle. But the journey to get there was worth the trip.

MEET THE PANEL

Chuck Davidson — Partner Account Manager, LiveU

Chuck describes himself as an optimist, with a “glass half full” orientation toward technology in general and AI in particular. LiveU’s work in the bonded IP and remote production space means AI decisions at the transmission layer carry real operational stakes, and Chuck brought that weight to the conversation while maintaining his characteristic forward-looking energy.

Dan Griffin — Territory Manager, Netgear AV

Dan’s background is in live production audio, which gives him a different instinct than most people in a broadcast technology conversation. He showed up to Chesafest as a self-described skeptic (his wife, who works in tech, had been actively working on his conversion) and moved visibly toward realist over the course of the discussion. His perspective on AI in network design and audio mixing was among the most practically grounded of the session.

Kyle Phillips — VP of Sales Enablement, AI Media

Kyle acknowledged upfront that his pro-AI position at a company called AI Media was not exactly a surprise. What he brought beyond the predictable enthusiasm was specificity: real deployment context for live caption automation, guardrails design, and the practical limits of what AI can handle when breaking news or live sports throws something unexpected at the system.

Steve Cooperman — Sales Manager, Vizrt

Steve came in as the panel’s pragmatist, with 20 years of experience across Panasonic, Canon, and now Vizrt spanning cameras, live production, and software. He’s seen enough real-world deployments to know where AI delivers and where it overpromises, and he wasn’t shy about either.

Jason “Pep” Pepino — Director of Media Systems Design and Engineering, CHESA (Moderator)

Pep opened by declaring himself an accelerationist, the most enthusiastic position on the AI spectrum, and framed the panel accordingly. He had just finished building CHESA’s first SMPTE 2110 studio and had personally entered thousands of IP addresses by hand. His enthusiasm for an AI agent that could do that work someday was, as he put it, “real.”

SHOULD AI MAKE REAL-TIME PRODUCTION DECISIONS?

Pep opened by laying out what AI can already do in a live production environment. It can identify key moments. It can select camera angles. It can trigger graphics automatically. It can translate and localize audio in real time and adjust levels based on speaker detection. The question isn’t capability anymore. It’s authority.

Should the AI decide, or should it advise?

Steve Cooperman came in with a real-world example that illustrated both sides of the question simultaneously. Vizrt’s Viz Libero product brings sports analytics into live production, powering the kind of on-screen overlays that have become standard in sports broadcasts. But beyond data visualization, the platform also handles AI cutouts: cleanly separating a player from the background in real time, handling edge cases like dark uniforms on grass, enabling 3D effect automation without a compositor doing it by hand.

“AI is really helpful for that cutout, and then automating it. Of course, we could always override it. But that’s a real-world example of production applications that some sports productions are using today.”

The override option is the tell. Even in a case where the AI is clearly adding value, the ability to override it is treated as non-negotiable. The automation runs unless a human says otherwise.

Chuck Davidson framed LiveU’s approach as one of intentional flexibility. Their CTO’s current development focus is something called an AI connector, essentially a configurable entry point into the LiveU ecosystem that lets each customer define which AI agent they want to use and how much authority it gets. The premise: there is no universal right answer for how much AI authority is appropriate. It depends on the customer, the content, and what’s at stake.

“We can’t assume that everybody’s going to want to have the same parameters or the same mindset for how they want to integrate AI.”

Dan Griffin brought the audio mixing perspective, and it was one of the most honest assessments of the session. When it comes to managing microphone levels for a group of talking heads, he said flatly, machines can do it better than humans. They react faster. They don’t get fatigued. They don’t miss a cough. For that specific task, in that specific context, AI authority isn’t a philosophical question. It’s just more reliable.

But the right answer shifts dramatically when you change the context. Life-critical broadcasts, high-stakes live events, anything where a muted microphone could mean something goes out wrong or doesn’t go out at all: those require human readiness to intervene, even if AI is handling the moment-to-moment operation.

Kyle Phillips introduced a concept that became one of the most useful frameworks of the session: bounded autonomy. You define the space in which AI is allowed to act, and the machine operates confidently within that space. The boundaries are the human decision. The execution within them is the machine’s.

“You design what it’s able to do. When you can replace manual, repetitive tasks with AI, you get efficiency and speed. But you give it parameters. It can adjust levels a few decibels, but it can’t go from zero to twenty all at once.”

The design phase is where human judgment lives. The operational phase is where the machine works. Keeping those two things clearly separated is the architectural foundation of responsible AI deployment in live production.
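A minimal sketch of what that separation can look like in practice, using Kyle's audio example: the machine proposes fader moves, and a human-designed guardrail decides how far any single move may go. The class name, limits, and values here are illustrative, not anyone's shipping implementation.

```python
# A minimal sketch of "bounded autonomy" applied to audio levels: AI proposes a
# fader move, the guardrail clamps it to human-designed limits. Values are illustrative.
from dataclasses import dataclass

@dataclass
class GainGuardrail:
    max_step_db: float = 3.0     # the AI may nudge a few dB at a time...
    floor_db: float = -60.0      # ...inside an absolute range set by humans
    ceiling_db: float = 0.0

    def apply(self, current_db: float, proposed_db: float) -> float:
        # Clamp the size of the move, then clamp the absolute level.
        step = max(-self.max_step_db, min(self.max_step_db, proposed_db - current_db))
        return max(self.floor_db, min(self.ceiling_db, current_db + step))

guard = GainGuardrail()
level = -20.0
for ai_proposal in (-18.0, -5.0, 0.0, -70.0):   # one sane move, then wilder ones
    level = guard.apply(level, ai_proposal)
    print(f"AI asked for {ai_proposal:+.1f} dB, guardrail settled on {level:+.1f} dB")
```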

WHEN AI FAILS: WHO’S RESPONSIBLE?

Pep shifted the conversation to accountability, and the panel’s first response was telling: someone immediately said “the systems integrator,” then immediately acknowledged they were not supposed to say that.

The laughter that followed said something real. In the live production chain, when something goes wrong, the question of who owns it is genuinely complicated.

Kyle Phillips was direct: if the AI is failing, it’s ultimately on the vendor. But the more important variable is how the system was designed and what parameters were set. You can’t blame the machine for operating within the boundaries someone gave it. The accountability traces back to whoever set those boundaries.

Chuck Davidson took a different angle. LiveU’s acquisition of Actus (a compliance monitoring platform) took on new relevance for him in this context during the panel. Actus was built for FCC compliance monitoring, essentially an automated oversight layer that watches what goes to air and flags violations. As AI takes on more production authority, a compliance layer like that becomes part of the answer to the accountability question. It’s governance infrastructure for an AI-driven environment.

Pep offered his own ground-level perspective: having just spent significant time manually entering IP addresses to configure CHESA’s 2110 studio, he’s acutely aware of how much room there is for AI to help with the configuration and commissioning process, and equally aware of how much human verification that work currently requires. The AI can assist. The engineer still has to verify.

GUARDRAILS: WHAT MUST EXIST IF AI IS IN THE CONTROL LOOP?

The panel’s final formal question was the most practical: if AI is operating inside the signal chain, what guardrails must be in place?

The consensus was rapid and clear: operator override is non-negotiable. Every panelist said some version of it.

Steve Cooperman used Vizrt’s gaze correction feature on the TriCaster as a live illustration. The feature automatically adjusts a speaker’s eye line to maintain direct-to-camera contact even when they’re looking down at a monitor. It works well most of the time. It does not work well when someone is moving erratically, and a malfunctioning gaze correction in the middle of a live broadcast creates a deeply unsettling viewer experience. The human has to be able to turn it off. Immediately. Without friction.

“You need a human, presumably, to be able to override or to monitor. If any technology goes bad, you want the ability to turn it off if it’s not working in that environment properly.”

Kyle Phillips described the guardrails in AI Media’s captioning deployments in specific terms. AI handles the placement of captions in real time, dynamically repositioning them so they don’t block on-screen text like lower thirds or score bugs. That’s clean, bounded automation. But then there’s the harder layer: topic models and content filters that prevent certain words from appearing in captions when a speaker has a particular accent or when a live sports moment generates unexpected language. Those filters need to be configurable, auditable, and human-adjustable in the moment.
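Here is a minimal sketch of that first, bounded layer, the caption placement logic: if the default caption region would overlap a declared keep-out area such as a lower third or a score bug, move the captions to an alternate position. The coordinates are normalized and purely illustrative of the logic, not AI Media's code.

```python
# A minimal sketch of bounded caption placement: keep captions out of declared
# keep-out regions (lower thirds, score bugs). Coordinates are illustrative.
def overlaps(a, b):
    # Rectangles as (x, y, width, height); True if they intersect at all.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_caption(keep_out_regions):
    bottom = (0.1, 0.80, 0.8, 0.15)   # default: lower part of the frame
    top = (0.1, 0.05, 0.8, 0.15)      # fallback: top of the frame
    for region in keep_out_regions:
        if overlaps(bottom, region):
            return top
    return bottom

lower_third = (0.05, 0.78, 0.6, 0.18)
print(place_caption([]))              # clear frame: captions stay at the bottom
print(place_caption([lower_third]))   # lower third active: captions jump to the top
```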

Dan Griffin brought it back to the network layer. Netgear’s value-add includes free network design services. AI can help design a network much faster than doing it manually. But an engineer still puts eyes on every design before it goes to a customer. Not because the AI is unreliable, but because the stakes of a poorly configured live production network are too high to skip the human review, regardless of how good the last ten designs were.

“The thing to fear is getting too comfortable. You always have to look and make sure you’re monitoring what it’s doing and providing feedback as needed.”

Chuck Davidson closed the guardrails discussion with the framing that felt most honest about where the industry actually is: this is a change management problem as much as a technology problem. The resistance to AI in live production isn’t always about legitimate technical concerns. Sometimes it’s just that change is scary, and AI is a category of change that the industry has no prior template for. The tape-to-digital transition felt just as existential at the time. Many broadcasters refused to let go of physical tape long after digital was the better answer.

“Part of the challenge is that change is scary, and AI is a very powerful tool that this industry has never seen before. Part of our job on the technology side is how do we harness it and how do we manage it to eliminate the fear.”

WHERE ARE WE ON THE INNOVATION CURVE?

An audience question brought the session to its most forward-looking moment: where are we on the innovation curve for visual AI in live broadcast?

Pep didn’t hesitate: we’re at the very beginning. The capabilities visible today will look primitive compared to what five to ten years will produce. The panel agreed.

Dan Griffin noted that even the most basic AI research tools (looking up someone’s background before a meeting, pulling a bio from the web) were significantly worse just six months ago than they are now. The trajectory of improvement is steep. Broadcast-specific applications are more complex and more critical than general research tools, which means they’ll take longer to mature. But the same rate of improvement will get there.

Steve Cooperman pushed back slightly on “very beginning.” In live sports specifically, the volume of AI-driven sports tech visible on broadcast in the last year is roughly ten times what it was before. Not all of it is AI in the strict sense, but the category of computer-assisted production technology has exploded, and AI is a meaningful part of that acceleration.

Kyle Phillips connected this to the economics of linear broadcast. Traditional linear television is under revenue pressure, and that pressure is creating urgency around monetizing existing content in new ways. Old archives, up-rezzed to modern quality, localized for new markets, offered on emerging platforms (he pointed to retro TV services running on antenna signals with commercials as a surprisingly significant revenue generator) represent a category of AI-driven value that is very real, very current, and still early.

A voice from the audience painted one of the most vivid pictures of where this heads. Imagine taking old television series and not just up-rezzing them, but giving them new language tracks where the original actors’ voices are preserved but given the phonetic quality of the target language’s native speakers. Not a new voice actor. The original voice, delivered as if the original actor had learned to speak Hindi or Spanish natively. The legal questions around that are still being worked out. The technology to do it exists now.

Chuck Davidson offered the most memorable real-world deployment of the session: the NYPD’s Drones as First Responder program in New York City. Drones dispatched autonomously in response to 911 calls, giving officers visual intelligence on a scene before they arrive. Operational today. No lab demo. No pilot program. Running in the city.

“If you’ve never seen it, I would encourage you to watch it. It’s a great example of where we are from a technology perspective.”

It’s not a broadcast example. But it’s the clearest illustration of what bounded AI autonomy looks like when it works: a machine operating within carefully designed parameters, doing something faster and more effectively than any human alternative, with humans ready to act on what it finds.

THE THROUGH LINE ACROSS ALL FOUR PANELS

Across the four Chesafest vendor panels, the same idea surfaced in every room, in different forms and different vocabularies, and it’s worth naming it clearly.

The question was never really “AI or humans.” It was always “what do we want the humans to do?”

In storage and file systems, we want humans setting governance policy, not manually moving files. In MAM, we want humans defining taxonomy and verifying results, not logging 20-year-old archives by hand. In media operations, we want humans deciding what the work is and evaluating what comes out, not checking codec values on every ingest. In live production, we want humans making editorial decisions and ready to override, not manually adjusting audio levels for twelve talking heads who all speak at different volumes.

The machines are getting better at the things humans shouldn’t have to do. The work of the industry right now is figuring out exactly where that line is, drawing it deliberately, and building the guardrails to hold it.

That’s not a 2030 problem. It’s a now problem. And it was the right note to end Chesafest 2026 on.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space. It blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic, with Jason Whetstone, CHESA | Moderated by Felix Coats, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the live production and AI authority conversation is in your world, the other sessions are worth your time too.


Automation, AI, and the Limits of Machine Decision-Making

The third vendor panel at Chesafest 2026 started with a question that sounds deceptively simple: how much of what media operations teams do today will be done by machines by 2030?

The answers ranged from 50% to 99%. And the real conversation was everything in between.

Moderated again by Felix Coats of CHESA, Vendor Panel 3 brought together practitioners from Telestream, Adobe, HelmutUS, Hiscale, and Scale Logic, alongside CHESA’s own Jason Whetstone, for a conversation about automation, accountability, and the specific kinds of decisions that still need a human in the room. The panel covered everything from the philosophy of machine morality to a story about a guy downloading Python at the gym and submitting the output to his boss without checking a single line.

It was a good panel.

MEET THE PANEL

Scott Eik — Senior Application Engineer, Scale Logic

Scott has been in the industry for about 16 years, moving between MAM systems, archive systems, and the customer side. He joined Scale Logic at NAB the prior year and brought a grounded, operational perspective to every question.

Dave Helmly — Director of Professional Video and Audio, Adobe

Dave has been at Adobe for 30 years and leads a workflow strategy and development team of 22, the only team of its kind embedded in Adobe’s engineering organization. His philosophy: trust your customers to tell you how to make your software. He’s been working with CHESA for most of his time there.

Greg Holick — VP of Business and Channel Development, HelmutUS

Greg has been in the M&E industry for over 25 years, with deep experience helping large customers architect and orchestrate complex media workflows. He came in as the voice of measured optimism: enthusiastic about AI’s potential, clear-eyed about the things it still can’t do.

Sarah Semlear — US Sales Lead, Hiscale

Sarah came to Hiscale after spending time on the client side, deploying MAMs and transcode systems from the inside. She showed up at Chesafest the prior year as a client. She brought the most infectious energy to the panel and consistently redirected the conversation toward what matters: whether any of this is actually making work more fun.

Erik Zindulka — Senior Sales Engineer, Telestream

Erik spent eight or nine years as a Telestream customer before joining the company. He described himself as “the MAM nerd in some circles at Telestream” and brought a practitioner’s sensibility to questions about automation, enrichment, and where AI fits into workflows people are already building.

 

Jason Whetstone — Product Development Engineer, CHESA

Jason has been at CHESA for 12 years and in the media industry for close to 18. He brought a developer’s precision to the panel: focused on what “done” actually means, why AI needs humans to define the work, and what pair programming has to teach us about working with AI tools.

Felix Coats — Solutions Consultant, CHESA (Moderator)

Felix moderated his second panel of the day and, per his own admission, had prepared a full list of questions that the panelists proceeded to answer before he could ask them. He pivoted gracefully throughout and introduced the gym story that became the thread everyone kept pulling on.

BY 2030, WHAT PERCENTAGE OF MEDIA OPERATIONS WILL BE FULLY AUTOMATED?

Felix opened with a clean, direct question and asked each panelist to answer it honestly: by 2030, what percentage of media operations in your space will be fully automated?

The answers were telling.

Dave Helmly went first and went highest: 99%. His reasoning was precise. Adobe’s AI work, particularly with Firefly Services, is focused on productivity and batch automation (resizing, reformatting, localization across 400 output variants from a single source). The jobs nobody wants. A creative still starts the job, still reviews the rejections, still makes the final call. But the volume of mechanical work being handed to machines is already enormous, and it’s only going in one direction.
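As a rough illustration of that kind of fan-out, here is a minimal sketch that turns one approved master into several platform-specific renditions using plain ffmpeg; it is not Firefly Services, just the shape of the batch work Dave is describing. The file names and the variant list are illustrative.

```python
# A minimal sketch of batch rendition fan-out: one master, many variants.
# Assumes ffmpeg is installed; names and filter chains are illustrative.
import subprocess

SOURCE = "hero_spot_master.mov"   # hypothetical approved master
VARIANTS = {
    "instagram_1x1.mp4":  "scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080",
    "story_9x16.mp4":     "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920",
    "broadcast_16x9.mp4": "scale=1920:1080",
}

for name, vf in VARIANTS.items():
    # Re-scale the video, pass the audio through untouched.
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-vf", vf, "-c:a", "copy", name],
        check=True,
    )
    print(f"rendered {name}")
```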

Scott Eik landed at 70 to 80%, acknowledging that some human interaction will persist but that the trend is unmistakably toward automation for the operational layer.

Greg Holick took a longer view and came in at 50 to 70%. His reasoning was rooted in what AI currently lacks: creative intent, cultural inference, the subtle judgment calls that define the difference between technically correct and actually good. He’s watched the industry’s AI capabilities grow and believes they’ll continue to grow, but maintains that the creative mind brings things to the table that can’t be encoded.

Sarah Semlear declined to give a number. Her answer was better than a number: if we want the future of media to be fun, there has to be human interaction. The machines should own the tedious, horrible tasks. The calculator analogy she returned to repeatedly was perfect: a calculator doesn’t replace the mathematician. It removes the arithmetic so the mathematician can think.

“Let the machines do the tedious, horrible tasks that we don’t want to do. Then we’re focusing on the really awesome, juicy, creative, fun stuff. That’s not Skynet. That’s a utopia.”

Erik Zindulka pointed out that the “extreme majority” of media operations tasks that AI is being asked to automate are things that customers have wanted machines to handle for years. A file lands in a folder. Twenty things should happen to it automatically. Nobody should be sitting in a cubicle checking the codec and moving it to the right directory. AI is the natural continuation of automation logic the industry has been building for decades.
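A minimal sketch of that decades-old automation logic follows; the folder paths, codec whitelist, and routing rule are all illustrative, and it assumes ffprobe is installed. It simply watches a folder, checks the codec, and moves the file on, so nobody has to.

```python
# A minimal watch-folder sketch: check the codec of each arriving file and
# route it automatically. Paths and the accept list are illustrative.
import shutil
import subprocess
import time
from pathlib import Path

WATCH = Path("/media/ingest")          # hypothetical drop folder
ACCEPT = Path("/media/ready")
REJECT = Path("/media/quarantine")

def codec_of(path: Path) -> str:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

seen = set()
while True:
    for f in WATCH.glob("*.mov"):
        if f in seen:
            continue
        seen.add(f)
        target = ACCEPT if codec_of(f) in {"prores", "h264"} else REJECT
        shutil.move(str(f), target / f.name)
        print(f"{f.name}: routed to {target}")
    time.sleep(5)
```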

Jason Whetstone offered the most structurally precise answer: as long as humans are creating content and consuming content, the system can never be fully automated, and shouldn’t be. The human role shifts, but it doesn’t disappear. The job becomes defining the work, being clear with the machines about what “done” means, and reducing the exceptions that fall outside the automation envelope.

“Our job as humans is determining what the work actually is and being very clear with the machines about what the work is and how we want it done.”

WHERE HUMAN JUDGMENT IS NON-NEGOTIABLE

Felix pushed the panel on a harder question: are there operational decisions that cannot safely be automated today? And will they ever be able to be?

Erik Zindulka surfaced a quote that became a reference point for the rest of the panel, a placard from an IBM training program from 1979 that read: “A computer cannot be held accountable, therefore it cannot make a management decision.”

That sentence from nearly 50 years ago maps almost perfectly onto the AI governance debate happening right now. Accountability is the line. Wherever a decision has legal consequences, creative stakes, or reputational exposure, a human needs to be in the chain, not because machines can’t generate an answer, but because machines can’t be held responsible for the answer they generate.

Sarah Semlear picked up the accountability thread with a specific point about morality. The industry often talks about training AI to be ethical or unbiased. But morality isn’t a universal constant. It varies by culture, country, context, and situation. You can’t hand a one-size-fits-all moral framework to an AI and consider the problem solved.

Greg Holick added the copyright and compliance dimension: AI in a media environment has access to enormous volumes of protected content. Should it? The legal exposure of an AI system pulling the wrong ad, using the wrong asset, or making a rights decision it can’t justify is enormous. And the entity that gets held responsible isn’t the machine.

Dave Helmly extended this into the personalization and content consumption space: AI is already learning individual users well enough to feed them content they’ll react to. By 2030, it will know users dramatically better than it does now. That creates an obligation on the human side to question what’s being surfaced, why, and whether the information environment being constructed serves the person or just the engagement metric.

Jason Whetstone brought it back to something clean and practical: the decision to publish. You can automate the upload. You can automate the metadata. But the decision to put content in front of an audience should require a human making a deliberate choice.

“The decision to actually publish to the public should be on a human.”

Dave Helmly also noted where compliance automation actually adds value: territory-specific edits, regional restrictions, content standards for different markets. These are the jobs that no one wants to do anyway, that currently require enormous manual effort, and where AI can do the work reliably because the rules are known and explicit.

Scott Eik grounded the whole discussion with a production operations lens: someone has to QC what comes out the back end before it goes to air or to print. That checkpoint is a human checkpoint. The question isn’t whether the QC role exists; it’s whether AI can support it by catching more before it reaches the human reviewer.

THE GYM STORY: LOW CODE, UNMANAGED RISK, AND THE GUY WHO SUBMITTED THE PYTHON SCRIPT

Felix opened the third segment with a story that generated more discussion than any formal question could have.

He overheard two finance professionals at the gym. One of them had been asked by his boss to produce some charts. He didn’t know how. He asked ChatGPT. ChatGPT told him to download Python. He asked how. ChatGPT told him. He installed it, ran the script, and submitted the output to his boss. His boss said great job. He was proud of himself.

Felix’s internal reaction was a list of questions he didn’t say out loud: Did you validate the code? Did you confirm it wasn’t also accessing your financial records from the last decade? Did you check what it was touching?

This is the low code moment the industry is living in right now. The tools have gotten accessible enough that people with no technical background are generating and running code that touches real systems and real data. The gap between capability and comprehension has never been wider.

Scott Eik was direct: you have unmanaged risk the moment you don’t understand what’s happening in the background. And when something goes wrong, the person who ran the script without understanding it is not equipped to diagnose or fix it.

Dave Helmly raised the IP dimension: code generated by AI may have been derived from copyrighted source material. If you don’t know math, you can’t validate the logic. If you don’t know code, you can’t validate its origins. The people who are safe in this environment, he argued, are the ones with 10,000 hours in their specialty. They’re the ones qualified to judge what the AI produced.

Greg Holick brought it back to responsibility: automation and AI are extraordinary productivity tools, but they change who’s responsible for the outcome. The ownership lands on the person who ran the process. If you deployed code that touched data you shouldn’t have touched, the fact that an AI wrote it doesn’t reduce your exposure.

“Just because you can do it doesn’t mean you should. Automation and AI change your responsibility. The ownership is still on the person doing that.”

Sarah Semlear offered the most optimistic frame. She compared the current moment to the early days of YouTube, when traditional media companies were horrified by the chaos of user-generated video flooding the internet. People posting content they shouldn’t, no standards, no guardrails. It looked like a disaster. It became an industry. The wild west always calms down.

“Everything always calms down. It’ll be fine. We’ll get to the place where it’s actually that super powerful calculator we really need.”

Erik Zindulka pushed toward the practical design goal: the end state for low code in a media environment isn’t Python scripts generated in a gym. It’s a visual workflow builder where an operator draws a flowchart, describes the production logic they want, and the system handles the execution. Bring-your-own-code for edge cases, yes. But the default should be intuitive enough that nobody has to think about scripting at all.

Jason Whetstone added the concept of AI context: an AI system is results-driven and will generate an answer as fast as possible, even if it doesn’t have all the information it needs to get the right answer. If it’s missing context, it guesses. That’s where the human has to step in: not to do the work, but to be clear about what the work actually is.

He described two models of working with AI tools. The substitutive model: you outsource a task to AI and don’t particularly care how it gets done. The assistive model, which he prefers, is pair programming. Two people working shoulder to shoulder through a problem, each learning from the other. You understand the problem. The AI understands aspects of the code you don’t. You teach each other. The outcome is better because both parties are engaged in the process.

“I have to help the AI understand what the problem is that I’m trying to solve, what I’m not trying to solve, what good results look like, what success means, and what done means.”

THE FUTURE OF HUMAN OVERSIGHT: AI MONITORING AI?

Felix closed the formal portion of the session with a question about where human oversight goes as AI-native workflows mature. Do you create new roles to supervise AI output? Do you build AI to monitor AI? Or does the oversight layer gradually get automated away too?

The panel converged on a few consistent positions.

Scott Eik: in the near term, you want humans checking everything that comes out of AI. As trust is established over time, that check can become more targeted and less constant. The progression is gradual. You don’t just flip a switch.

Dave Helmly: AI is going to take some jobs. Photoshop took jobs too. But Photoshop created entirely new categories of work. The pattern holds. The people who lose jobs will be the ones who tried to use AI as a shortcut without understanding the underlying craft. The ones who keep their jobs, and build new ones, will be the ones who can judge what the machine produced.

Sarah Semlear: you don’t need to reinvent the wheel. The organizations that respond to AI by blowing up their org charts and starting over are making the same mistake people make with every major technology shift. Find the efficiencies. Add the roles where they’re needed. Check your sources, which is not a new skill requirement. Keep humans in the loop and keep it interesting.

“If you just take a base answer of anything and you don’t look into it, if you Google one thing and go with the first result, you should probably be fired for that too. This is not something new in humanity.”

Erik Zindulka offered one of the most forward-looking points of the session: AI enrichment isn’t a one-time event. Archives and libraries persist for decades. An archive enriched by one AI tool today will be enriched again five years from now by a better one. And again five years after that. Each pass adds another layer of metadata, another dimension of searchability, another tier of context. The result, over time, is a media archive richer than anything that could have been produced by human logging alone.
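A minimal sketch of what "another layer" can mean structurally: each enrichment pass is appended as a dated, attributed record rather than overwriting the last one, so search can draw on every pass ever run. The field names and tool labels are illustrative, not any particular MAM's schema.

```python
# A minimal sketch of layered, repeatable enrichment: every pass appends a new
# attributed record to the asset. Fields and tool names are illustrative.
from datetime import date

asset = {"filename": "gala_opening_1998.mxf", "enrichment": []}

def enrich(asset, tool, version, metadata):
    # Append a dated, attributed layer rather than replacing earlier passes.
    asset["enrichment"].append({
        "tool": tool, "version": version,
        "run_on": date.today().isoformat(), "metadata": metadata,
    })

enrich(asset, "speech-to-text", "2026.1",
       {"transcript": "ladies and gentlemen, welcome"})
enrich(asset, "object-detection", "2031.0",
       {"labels": ["podium", "orchestra", "balloon drop"]})

# Search can then draw on every layer ever written for this asset.
for layer in asset["enrichment"]:
    print(layer["tool"], layer["version"], layer["run_on"], layer["metadata"])
```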

Greg Holick closed with a framing that landed well: AI changes the shape of human responsibility, but not its existence. Someone still has to set up the guardrails. Someone still has to evaluate what comes out. The pre-checking that happens before automation runs may matter as much as the post-checking that happens after.

Felix added one more thought before closing: the industry might start seeing something like a “production AI supervisor,” a new role whose job is specifically to QC AI output before it hits a downstream system or a human audience. Not a developer. Not a traditional post supervisor. Something in between. It’s not here yet, but the logic is sound.

A CLOSING QUESTION FROM THE CEO

As the session wound down, Jason Paquin stepped in with one last question for the group: what guidance do you have for someone building their career in this space right now?

It was a good question to end on, and Nina Smith gave the best answer.

She said that the greatest gift you can give anyone you’re talking to is the ability to truly listen. Not to have the answer ready before the question is finished. Not to perform expertise before you’ve understood the problem. Holding back, listening, and offering real perspective when you actually have something to contribute will take you further than sounding smart ever will.

“Know who you’re dealing with. If someone wants to talk fluff, talk fluff. If someone wants to talk truth, talk truth. You will go much further by listening and learning and offering your advice when you really know something, not when you’re guessing.”

That’s good advice in any era. In an industry moving as fast as this one, it’s essential.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space. It blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic, with Jason Whetstone, CHESA | Moderated by Felix Coats, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the automation and AI accountability conversation resonates with your world, the other sessions are worth your time too.

Categories
Events & Trade Shows

The Next Evolution of Media Asset Management

The 4th Annual Chesafest didn’t slow down after its opening session. Vendor Panel 2 took the intellectual temperature in the room and raised it by a few degrees.

The question on the table: Is structured metadata still enough to run a modern media asset management system, or is the rise of vector databases and AI-driven semantic retrieval about to fundamentally reshape how media organizations find, govern, and work with their content?

It sounds like an infrastructure question. It turned out to be a conversation about users, governance, trust, library science, Star Trek, and the surprisingly stubborn challenge of teaching a machine to know what you actually meant.

Moderated by Felix Coats of CHESA, the panel brought together practitioners and vendors from across the MAM ecosystem, a mix of perspectives that produced one of the most substantive conversations of the day.

MEET THE PANEL

Jason Patton, VP of Production Technology, Sesame Workshop

Jason was a late addition to the panel; he’s also a great duckpin bowler. He’s not a vendor; he’s a client, and his real-world perspective on what it actually means to manage a deep archive of beloved children’s content grounded every abstract technology debate in something concrete. His candor was a consistent highlight throughout.

Tim Ayris — Head of Channel Partnerships, VIDA

Tim brought a content operations lens to the conversation. VIDA’s customers use the platform to push and manage content at scale, which means the governance question isn’t theoretical; it’s something they have to solve every day.

Jeff Herzog — Director of Product Management, EditShare

Jeff came in with a product-depth perspective and a healthy skepticism about the pace of vendor hype versus the pace of actual customer adoption. His point that many customers are skeptical of MAM value, and that AI enhancement layers could change that permanently, set a useful frame early.

Jim Cavedo — VP of Global Solutions, OrangeLogic

OrangeLogic occupies a unique position: a single platform with both DAM and MAM capabilities. Jim brought the agentic AI angle to the conversation and was consistent on one point throughout: the user shouldn’t know or care whether the system is querying a relational database or a vector database. That’s the vendor’s problem to solve.

Sofia Fernandez — Channel Manager, Backlight

Sofia offered clear, precise framing throughout, including one of the best analogies of the session, which involved a coffee machine. She brought a measured view of how the transition from structured to semantic metadata needs to be paced carefully to avoid breaking the users who depend on deterministic search today.

Eduardo Mancz — President and CEO, Fonn Group (Mimir)

Eduardo’s company builds Mimir, a MAM platform well known in the broadcast and media space. He pushed the conversation toward the practical: the complexity of metadata that organizations are already struggling to manage, and the risk of chasing AI capabilities without solving for portability and platform evolution.

Felix Coats — Solutions Consultant, CHESA (Moderator)

Felix opened with a technical level-set that would have impressed a database administrator, covering the core difference between relational and vector databases with enough clarity that the conversation could actually go somewhere. He kept the panel honest and on-topic throughout, and closed with a Star Trek reference that was far more apt than it had any right to be.

THE SETUP: TWO VERY DIFFERENT WAYS OF KNOWING THINGS

Felix opened by drawing a distinction that the industry tends to collapse into buzzwords. A relational database, he explained, is like a well-organized spreadsheet. You know what you’re looking for, you query it precisely, and you get back an exact match. Tomato is a vegetable. Find all videos from 1994. Return assets with active rights for North America.

A vector database works on a completely different principle. It doesn’t retrieve based on declared, structured facts; it retrieves based on similarity and meaning. A cat and a dog aren’t the same animal, but they share enough dimensional proximity in a vector space that a search for “pet” could surface both. It’s powerful for finding things you can’t precisely describe. It’s problematic when you need to know for certain.
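For anyone who’d rather see the distinction than read about it, here’s a minimal sketch of the two retrieval models, using an in-memory SQLite table for the relational side and plain cosine similarity for the vector side. The schema, embeddings, and queries are purely illustrative, not any particular MAM’s design.

    import sqlite3
    import math

    # Relational: declared facts, exact answers.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE assets (id TEXT, title TEXT, year INTEGER, region TEXT)")
    db.execute("INSERT INTO assets VALUES ('a1', 'Street Scene', 1994, 'NA')")
    db.execute("INSERT INTO assets VALUES ('a2', 'Harbor B-Roll', 2003, 'EU')")

    # "Find all videos from 1994 with North America rights" is deterministic:
    rows = db.execute(
        "SELECT id, title FROM assets WHERE year = 1994 AND region = 'NA'"
    ).fetchall()
    print(rows)  # [('a1', 'Street Scene')] -- the same answer every time

    # Vector: meaning and proximity, ranked answers.
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # Toy embeddings; a real system would use a learned embedding model.
    embeddings = {
        "cat playing in yard": [0.9, 0.8, 0.1],
        "dog at the park":     [0.8, 0.9, 0.2],
        "stock market recap":  [0.1, 0.2, 0.9],
    }
    query = [0.85, 0.85, 0.15]  # pretend this encodes the word "pet"

    ranked = sorted(embeddings, key=lambda k: cosine(embeddings[k], query), reverse=True)
    print(ranked)  # cat and dog surface first; neither literally matches "pet"

The relational query either matches or it doesn’t. The vector query always returns something, ranked by how close it is to what you probably meant, which is exactly why it is powerful for discovery and uncomfortable for governance.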

The question Felix posed: MAM systems have been built for decades on the declared-truth model of relational databases, structured schemas, and deterministic queries. Now users expect systems to understand intent. Can these two models coexist? Or are they philosophically incompatible?

The panel’s answer, reached almost immediately and reinforced throughout: they don’t just coexist, they depend on each other.

“THEY’RE GOING TO HAVE TO LIVE TOGETHER”

Jason Patton got there first, and said it most plainly. A unique identifier, the foundational record that says this asset exists and relates to these other assets, is never going away. That’s relational. That’s structural. That has to be right. But layered on top of that, and running alongside it, is where vector search lives: helping a new generation of users who have grown up talking to chatbots, who don’t know the naming convention, who have a fuzzy idea of what they’re looking for and want the system to meet them there.

“There’s going to be a whole new crop of users whose only experience is talking to a chatbot. They’re going to be like, ‘I don’t know what I want.’ They want the system to come back and say, here are things that are like what we think you’re saying.”

Tim Ayris agreed, adding a dimension specific to VIDA’s user base: the creative users who are doing production work don’t want to learn a taxonomy. They want to type something that approximates what they’re looking for and get results. But the operational users, the ones pushing content, managing distribution, handling rights, need the precision that only a relational database can provide. The same platform has to serve both.

Jeff Herzog came at it from a MAM adoption angle. Many of EditShare’s customers have MAM access but don’t fully use it. They’re skeptical. The value isn’t obvious enough yet. His contention: AI enhancement layers change that equation. Once semantic search makes finding content genuinely effortless, the reluctant users become converts.

“You won’t be able to afford not to use MAM once these enhancement layers come in.”

And Jim Cavedo put the capstone on the opening round with a point that would echo throughout the entire session: the user should never know which database is serving their query. The agentic layer on top of both systems figures that out. The user types a question. The agent decides whether it requires a relational query, a vector search, or some combination of both, and returns a single, coherent result.

“The user has no idea where any of this exists. They just want one pane of glass, one simple chat experience.”
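To make Jim’s point a bit more concrete, here is a rough sketch of what that routing decision could look like under the hood. The heuristics and names are hypothetical; a real agentic layer would lean on an LLM or classifier rather than keyword rules, but the shape of the decision is the same.

    import re
    from dataclasses import dataclass

    @dataclass
    class QueryPlan:
        use_relational: bool   # exact filters (dates, rights, IDs)
        use_vector: bool       # fuzzy, intent-driven retrieval
        reason: str

    def plan_query(user_text: str) -> QueryPlan:
        """Very rough intent routing; placeholder heuristics only."""
        text = user_text.lower()
        words = set(re.findall(r"[a-z0-9]+", text))
        # Rights/date language suggests a deterministic relational filter.
        structured = bool(words & {"rights", "expires", "region"}) or bool(
            re.search(r"\b(19|20)\d{2}\b", text)
        )
        # Open-ended or "feels like" language suggests semantic retrieval.
        fuzzy = bool(words & {"like", "feels", "similar", "something"}) or not structured
        return QueryPlan(
            use_relational=structured,
            use_vector=fuzzy,
            reason="structured cues found" if structured else "open-ended request",
        )

    # The user never sees any of this -- they just type a question.
    print(plan_query("show me assets with rights valid through 2027"))
    print(plan_query("something that feels like a summer afternoon"))

The first request routes to the relational layer, the second to the vector layer, and the user experience on top is identical either way.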

THE GOVERNANCE PROBLEM: WHEN “GOOD ENOUGH” ISN’T

The second major thread of the session was governance, and this is where the conversation got genuinely uncomfortable in the best way.

Vector databases, by their nature, are not deterministic. They don’t always return the same result for the same query. They can hallucinate connections. They can’t trace their own reasoning the way a relational query can. And in regulated industries (news, legal, medical, and to a significant degree entertainment with its rights and talent participation obligations) that traceability isn’t optional.

Jeff Herzog made the point precisely: a search against a relational database is auditable. You can see exactly why it returned what it returned. A vector search isn’t.

“These vector searches aren’t, by definition, traceable. You can’t see the work in the way that a relational database search is deterministic; there are facts behind it.”

Jim Cavedo went further: if you’re depending on AI to make a rights decision, and you’re challenged on that decision, you need to be able to point to something and say “the data said I could do this.” An unexplainable vector result won’t hold up.

Eduardo Mancz raised a cost dimension that rarely gets discussed: when new models emerge, and they will, you have to re-vectorize your entire dataset. Re-indexing is expensive, time-consuming, and technically demanding. The industry talks constantly about AI capabilities. It talks almost never about the infrastructure cost of maintaining them over time.

“There are going to be needs for new re-indexation of everything, and it has a huge cost associated. Very few discussions about this are actually happening.”
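A quick back-of-the-envelope illustrates the scale Eduardo was pointing at. Every number below is an assumption made for the sake of arithmetic, not real vendor pricing, but the structure of the bill is the part that matters: it repeats with every model change.

    # Back-of-the-envelope: what re-vectorizing an archive costs when the model changes.
    # Every figure here is an illustrative assumption, not real pricing or throughput.

    hours_of_content = 500_000          # size of the archive
    embeddings_per_hour = 3_600         # e.g. one segment-level embedding per second
    tokens_per_embedding = 200          # transcript/description tokens per segment
    usd_per_million_tokens = 0.10       # hypothetical embedding price
    embeddings_per_second = 2_000       # hypothetical sustained indexing throughput

    total_embeddings = hours_of_content * embeddings_per_hour
    total_tokens = total_embeddings * tokens_per_embedding

    usd = total_tokens / 1_000_000 * usd_per_million_tokens
    days = total_embeddings / embeddings_per_second / 86_400

    print(f"{total_embeddings:,} embeddings to regenerate")
    print(f"~${usd:,.0f} in embedding compute over ~{days:.0f} days of processing")
    # Vectors from different models live in different spaces, so there is no partial
    # migration: every model upgrade repeats this bill, plus re-indexing the vector store.

And that sketch only counts the embedding pass; the storage, re-indexing, and validation work around it is where the real time goes.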

Jason Patton offered a nuanced real-world example from Sesame Workshop. Their archive carries curriculum and educational metadata that human researchers carefully log alongside production content. That metadata is structured, governed, and critical. But it was created by humans who sometimes missed things, especially in content from 30 years ago. Vector-based enrichment can help fill those gaps, but only as a complement to the relational layer, never as a replacement. A human still verifies. The vector layer helps close the coverage gap.

“It’s enrichment, but to a good enough level. And ‘good enough’ only works because there’s a human verifying what’s happening.”

Sofia Fernandez framed the “good enough” debate cleanly: for some industries and some use cases, “good enough” is genuinely acceptable. For others (legal, news, medical) it never will be. The answer isn’t one database winning. It’s designing the system to know which tool to use and when.

Tim Ayris landed the governance thread with a warning: if you haven’t built solid structural metadata foundations today, you’re not going to go back and build them later. Organizations that skip the taxonomy work will leapfrog directly into semantic search, and when semantic breaks, it breaks quietly but confidently, in ways that are very hard to audit or correct.

THE USER EXPERIENCE IMPERATIVE: ONE PANE OF GLASS

A recurring theme throughout the session, and a point of genuine tension, was whether users can or should be trained to understand the difference between structured and semantic search.

Jeff Herzog’s view: yes, to some degree. Users need to understand that a filter (“show me assets with rights valid through 2027”) is a different kind of query than a semantic search (“show me something that feels like a summer afternoon”). Mixing the two requires user literacy.

Jim Cavedo pushed back: users don’t want to be trained. Full stop. The benchmark the industry has to hit is the iPhone. People don’t think about whether their iPhone is making a cellular or WiFi call. They just make the call. The infrastructure decision should be invisible.

Sofia Fernandez offered the most memorable analogy of the session: a coffee machine. The milk is stored in one compartment, the coffee in another. The internal architecture is separate and distinct. But the user presses one button that says “latte” and gets exactly what they want. The underlying complexity is invisible. That’s the design goal for a MAM that bridges relational and vector search: both components working together, neither exposed to the user.

Jason Patton took this a step further, suggesting that the system itself needs to surface explanations when searches fail, not blaming the user, but offering probabilistic guidance on why nothing came back and what might help. An intelligent failure mode is part of the experience.

Jim Cavedo connected this back to the agentic layer: when AI agents are orchestrating queries across multiple databases simultaneously, interpreting intent, routing to the right system, and returning results with context, the user doesn’t need to understand any of it. They just need to get the right answer. That’s the world the panel agreed they’re moving toward. The question is how fast.

LIBRARY SCIENCE BECOMES DATA SCIENCE

One of the most intellectually interesting moments came from Terry Melton in the audience, who raised the concept of vector drift and the role of traditional library science. Over time, a vector database’s internal representation of data can drift; the mathematical relationships between items shift as new content is added, as models update, and as the index ages. Run the same search twice in a row and you might get different results. That non-determinism is a feature for discovery but a bug for governance.

His question: can library science, the discipline that has spent decades thinking about taxonomy, controlled vocabularies, and the principled organization of information, help solve this?

Jim Cavedo’s answer resonated: library science doesn’t disappear. It migrates. It becomes data science. The skills that used to go into building a controlled vocabulary now go into building prompts, tuning embeddings, and designing the logic that drives how an agentic system navigates between retrieval modes. Human judgment doesn’t leave the system, it moves upstream.

“Library science moves into data science. It’s about how you become better at driving the prompts and the values that drive a better result set. And then, as technology gets added to your vector databases, you’re constantly reevaluating those human-led prompts.”

BEYOND SEARCH: WHAT AI ACTUALLY UNLOCKS

The panel didn’t spend all its time on the architecture. Jason Patton pushed the conversation toward what AI-enhanced MAM actually enables beyond better search, and the answers were genuinely exciting.

Sesame Workshop is exploring using semantic analysis for audio description: feeding what the AI knows about a piece of media directly into accessibility workflows, generating descriptions for the visually impaired without human logging. It’s a workflow that would have required thousands of hours of manual work. With a well-indexed archive and a capable AI layer, it becomes something closer to automated.

Jim Cavedo picked that up: if you have good vector embeddings generating rich contextual descriptions, those feed back into better structured metadata. Better transcripts. More accurate automated tags. Which in turn improve the vector layer. The two systems become genuinely codependent, each making the other more capable over time.

“At some point, nobody’s going to be manually tagging content. That goes away completely.”

Eduardo Mancz emphasized that this future only works if organizations maintain ownership of their enriched metadata through platform transitions. As companies move between MAM systems, which they do every few years, the AI-generated enrichment they’ve accumulated needs to travel with them. Portability of vector data and AI-generated metadata isn’t a solved problem, and it’s one that will define which platforms win long-term trust.

THE CLOSING QUESTION: HOW DOES STRUCTURED METADATA EVOLVE?

Felix closed the session by asking each panelist: as AI-native workflows increase, what actually happens to structured metadata in your world?

The answers landed in a consistent place. Structured metadata doesn’t disappear, but the ratio shifts dramatically. Jeff Herzog put it starkly: the sheer volume of vector data generated by AI (transcripts, embeddings, contextual descriptions, frame-level analysis) will dwarf the structured metadata that organizations have been painstakingly logging for decades. Not ten to one. More like a hundred to one. The structured layer remains essential. It’s just no longer the majority of what the system knows.

Jason Patton’s advice, drawn from a real initiative at Sesame Workshop: before you start down the AI enrichment path, get your taxonomy right. Clean up your relational structure. It’s unglamorous work, but if your structured metadata is a mess when you add the AI layer, the AI layer inherits and amplifies that mess. Good structured data makes the vector layer smarter. Bad structured data makes everything worse.

Tim Ayris sounded the warning that no one else in the room wanted to say out loud: for organizations that haven’t done the taxonomy work and don’t have the budget to do it now, the uncomfortable truth is that they’re going to leapfrog straight to semantic search and skip the structured foundation entirely. That might work for discovery. For governance, it’s a slow-motion problem.

And Jim Cavedo brought it home with a line that could be the thesis of the entire panel:

“Today they’re codependent. And our job is to create the user experience where it doesn’t matter to the user. That’s probably the hardest part, because when users can’t figure it out, they abandon the system altogether.”

DATA AND THE USS ENTERPRISE: A MODERATOR’S SENDOFF

Felix closed with a thought experiment that earned the session a proper ending. He’d been trying to think of a perfect metaphor for the marriage of relational and vector databases, something that showed both systems working in harmony. He landed on Data from Star Trek.

Data has to track the ship’s inventory, crew assignments, and mission parameters: all relational. All structured. All exact. But he also has to read facial expressions, interpret emotional states, and infer intent from behavior: all vector. All probabilistic. All high-dimensional.

The goal isn’t to pick one. The goal is to be Data: a system that pulls from both databases simultaneously, serves a human experience that feels unified and natural, and does it all without making the user think about which database answered their question.

“That’s what we’re trying to do: take the human and merge it with the computer, until we’re all just Data, navigating through space.”

Naturally, that landed well in a room full of people who’ve been in media technology long enough to appreciate a good Trek reference.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space, an event that blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic | Moderated by Jason Whetstone, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the MAM and AI metadata conversation is in your world, the other sessions are worth your time too.

Categories
Events & Trade Shows

Is the File System Dying?

The 4th Annual Chesafest brought together some of the sharpest minds in media technology for a day of panels, conversations, and honest debate in Towson, Maryland. One of the first sessions on the agenda, and arguably the one that set the intellectual tone for the entire day, was a vendor panel with a deliberately provocative question at its center:

Is the file system dying?

It sounds like a simple question. Far from it.

Moderated by Tom Kehn, VP of Solutions Consulting at CHESA, the panel brought together representatives from Backblaze, LucidLink, Suite, and Spectra Logic, four companies that, taken together, represent nearly every layer of the modern media storage stack. What followed was a candid, technically rich conversation about where object storage is headed, what role the file system actually plays, what “archive” even means anymore, and what happens when the next generation of media professionals doesn’t know what a file is.

(Chessie, CHESA’s Chief Acorn Procurement Officer, was also in attendance. His contributions, while enthusiastic, were not transcribed.)

MEET THE PANEL

Dave Simon — Sr. Director, Technology Analysis, Backblaze

Dave has spent years working in the MAM and media space and joined Backblaze just over a year before Chesafest 2026. Last year’s Chesafest was his first CHESA channel partner event. He brings a grounded, user-behavior-focused lens to storage conversations that cuts through a lot of the vendor hype in the space.

Ryan Servant — Sr. Director, Channel and Alliances, Suite

Ryan came to Suite after working at Iconik, drawn in by what Suite was building. He’s the first to admit he’s not the most technical person on any panel, and somehow, that usually makes him the clearest communicator in the room.

Richard “Rich” Warren — Senior Solutions Engineer, LucidLink

Rich joined LucidLink back in 2019 with a specific kind of conviction: he saw the technology, quit his job, and went to work there. That’s the kind of origin story that tends to make for good panelists. He’s been making the case for the file system as an abstraction layer ever since.

Nathan Halverson — Manager, Solutions Architecture, Spectra Logic

Nathan has been with Spectra for 14 years, managing their US solutions architecture team. He brought the deep archive and lifecycle management perspective to the panel, a view of storage that most people don’t think about until they desperately need it.

Tom Kehn — VP, Solutions Consulting, CHESA (Moderator)

Tom opened by setting the table clearly: this panel wasn’t about on-prem vs. cloud, tape vs. disk, or cost per terabyte. It was about something more fundamental, the file system itself, and whether the rise of object storage is quietly making it obsolete.

SETTING THE STAGE: WHAT ARE WE ACTUALLY DEBATING?

Tom framed the question well from the start. For decades, the file system has been the center of gravity in the media universe. Now the landscape looks something like this: native on-premises file systems, file system layers sitting over object storage (that’s where LucidLink and Suite live), pure object storage underneath (that’s Backblaze’s domain), and deep archive infrastructure behind legacy applications (Spectra’s world).

The provocation: if applications like NLEs evolve to talk directly to object storage, if Premiere and the rest of the Adobe suite can read S3 natively, does the file system layer become unnecessary? Does it quietly disappear? And what does that mean for the companies whose products live at that layer?
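For readers less steeped in the storage side, the difference looks roughly like this to an application: one path goes through a mounted file system (whatever happens to sit underneath it), the other addresses the object directly over an S3-compatible API. The paths, bucket name, and endpoint below are placeholders, not anyone’s production configuration.

    import boto3

    # Today: the NLE (or any application) reads through a file system abstraction.
    # Whether that path is local disk, a SAN, or a LucidLink/Suite layer over object
    # storage underneath is invisible to the application.
    with open("/Volumes/production/project_x/scene_04.mxf", "rb") as f:
        header = f.read(1024)

    # A "native S3" future: the application addresses the object itself, no mount point.
    # The endpoint and bucket are placeholders for any S3-compatible store
    # (AWS, Backblaze B2, on-prem object storage, and so on).
    s3 = boto3.client("s3", endpoint_url="https://s3.example-object-store.com")
    obj = s3.get_object(
        Bucket="production-media",
        Key="project_x/scene_04.mxf",
        Range="bytes=0-1023",   # object stores support partial reads too
    )
    header = obj["Body"].read()

The second path removes the mount point, but it also removes everything the mount point quietly provided: caching, locking, familiar folder structures, and a single place to hang governance.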

Tom threw it open to the panel. Rich Warren bit first.

“IT’S THE ABSTRACTION LAYER.” AND THAT’S NOT GOING AWAY

Rich’s answer was quick and consistent throughout the entire conversation: the file system isn’t dying because the file system is the abstraction layer. The same way virtualization abstracts hardware, the file system abstracts storage. Object storage will continue to grow, the economics and scalability are undeniable, but something still has to stand between the raw object layer and the humans and applications trying to use it.

“You’re going to get further growth in object, scalability and economics underneath, of course. But the actual abstraction layer is the file system, no different than if you looked at virtualization.”

Dave Simon added a dimension that’s easy to underestimate: users. Specifically, the deeply embedded human habit of organizing things into folders with names that make sense to them. He pointed to sports teams, often staffed with younger, less technically seasoned crews, who just want to see their files, organized logically, in something that doesn’t feel like a web application.

“As long as users continue to exist, the file system is not quite dead. And I don’t think it’s going to die, at least not in this generation.”

Ryan Servant, true to form, agreed, and then added a layer of his own. The expectations of end users, especially creative teams, have actually gone up. They want to see everything, all at once, instantly, across every application. The file system isn’t less important; it’s just that the burden of delivering that experience now falls more heavily on the people designing the infrastructure.

“The file system is probably more important for guys like you at CHESA, where you have to come up with really creative ways to design that and make sure the customer is getting that experience.”

In other words: the file system isn’t dying. It’s just getting harder to build well.

THE EXISTENTIAL QUESTION: WHAT IF ADOBE GOES NATIVE S3?

Tom pushed the panel toward a scenario that felt genuinely uncomfortable for at least a moment. What if Adobe announced that Premiere, After Effects, and the rest of the suite could now talk directly to object storage? What happens to the file system layer, and to the companies whose products live there, if the biggest NLEs no longer need it?

Rich’s answer was measured: even if Adobe goes native S3, Adobe isn’t the only application touching that data. The abstraction layer still serves everything else. You can’t design infrastructure around one application’s access pattern.

Dave Simon took a more practical angle. Think about a field production workflow: camera cards come off set carrying gigabytes, sometimes approaching a terabyte, of raw footage. Getting that into object storage, particularly cloud object storage, means an upload step that adds significant time before anyone can start working. The file system layer is what lets work start immediately on local or near-local storage while the underlying data lives wherever it needs to live.

“You still have to be able to support multiple disk tiers, multiple storage mediums. If it can link to an S3 bucket, that’s great, but also maintain that mount point for your day-to-day operations.”

The takeaway: even in a future where object native becomes common, the performance tier doesn’t disappear. Craft editing, finishing, and anything requiring extreme IOPS still needs fast local or near-local storage. The file system isn’t going away; it’s being complemented.

ARCHIVE WITHOUT ARCHIVE: IS EVERYTHING JUST “ONLINE” NOW?

One of the most interesting threads of the session was Tom’s question about archive itself. As object storage gets faster and cheaper, and as lifecycle management tools get more sophisticated, does “archive” stop being a meaningful category? Won’t it all eventually just be online?

Nathan Halverson had the most nuanced answer on this one. Yes, lifecycle management and tiering have transformed how data moves through the storage stack. Yes, object storage, both on-prem and in the cloud, has made data more readily accessible than tape or cold archive ever could. But the complexity underneath hasn’t gone away; it’s just moved.

“Everyone says S3 is S3, but it’s a lot more complex than that. We have to be very strategic in lifecycle management, understanding where data needs to be and how it interacts with the applications that are touching it.”

The implication for Spectra, which has spent 14 years helping organizations manage exactly that lifecycle complexity, is clear: the job hasn’t gotten simpler. It’s gotten more invisible, and invisible complexity is often the hardest kind to manage.
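To give a sense of what strategic lifecycle management means at the API level, here is a minimal AWS-style lifecycle rule expressed in boto3. The bucket, prefixes, thresholds, and storage class names are illustrative; every provider’s tiering options differ, which is part of Nathan’s point.

    import boto3

    s3 = boto3.client("s3")

    # One bucket, three lives: hot for active work, infrequent access after the
    # project wraps, deep archive after a year. The thresholds here are arbitrary.
    s3.put_bucket_lifecycle_configuration(
        Bucket="production-media",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-finished-projects",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "projects/"},
                    "Transitions": [
                        {"Days": 90, "StorageClass": "STANDARD_IA"},
                        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )
    # The hard part isn't writing this rule; it's knowing which applications will
    # still expect that data to be instantly readable on day 366.

Writing the rule takes minutes. Knowing where data needs to be, and which workflows will break when it moves, is the work Nathan was describing.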

Ryan Servant connected this directly to Suite’s product direction. Suite’s announcement of going S3-native, the ability to interact with object storage the same way any other application does, without proprietary hooks or workarounds, is the natural progression. One fewer variable in the workflow. Creatives see their files. They interact with them. They don’t know or care what tier the data is on. That’s the goal.

“The creatives tend to not own the budget, so they don’t know everything can’t be tier one. But their experience? They want it to be.”

TAMS, LIVE READ, AND WHERE THINGS ARE ACTUALLY HEADING

Some of the most technically interesting moments came from the audience. Dave Helmly, Director of Professional Video and Audio at Adobe, raised the concept of TAMS (Time-Addressable Media Store) and the role it plays in this evolving ecosystem. TAMS is an emerging standard that allows applications to address media at a sub-file level, essentially treating a piece of media not as a monolithic file but as a set of time-indexed segments that can be read, streamed, and edited without ever fully downloading the source. It’s a critical piece of how the industry gets to a true object-native editing workflow without sacrificing performance.

“We have to have a way to read a proxy, not the real file, onto the timeline while it talks to Suite or Iconik or LucidLink, wherever the original media is. We have to have that balance.”
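The core idea is easy to sketch: instead of one opaque file, the media is described as time-indexed segments, and a consumer pulls only the span it needs. The sketch below is a conceptual illustration of that model, not the published TAMS specification, and the flow and key names are made up.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        start: float        # seconds from the start of the flow
        end: float
        object_key: str     # where this chunk lives in object storage

    # A "file" becomes a list of addressable chunks rather than one monolith.
    flow = [
        Segment(0.0, 10.0, "flows/game7/seg-0000"),
        Segment(10.0, 20.0, "flows/game7/seg-0001"),
        Segment(20.0, 30.0, "flows/game7/seg-0002"),
    ]

    def segments_for(flow, t_in, t_out):
        """Return only the segments overlapping the requested time range."""
        return [s for s in flow if s.end > t_in and s.start < t_out]

    # An editor scrubbing 12s-25s on the timeline triggers two small reads,
    # not a download of the whole asset.
    needed = segments_for(flow, 12.0, 25.0)
    print([s.object_key for s in needed])  # ['flows/game7/seg-0001', 'flows/game7/seg-0002']

The balance Dave described lives in that gap: the timeline reads a proxy or a handful of segments while the full-resolution media stays wherever it already is.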

Dave Simon picked that thread up and pointed to Backblaze’s Live Read capability, the ability to read a growing file straight out of object storage as it’s being written. It’s not segmented the way TAMS is, but it lives in the same spirit: getting the media into the workflow without waiting for a complete ingest cycle.

“Backblaze is very much still focused in the media space, thinking about media and supporting workflows beyond just static object storage.”

The through line here is important: the performance tier isn’t being replaced by object storage. It’s being rebuilt on top of it. The file system remains, but the file itself is becoming more fluid, addressable by time, readable in motion, distributed across tiers in ways that the application (and the user) never has to see.

THE NEXT GENERATION DOESN’T KNOW WHAT A FILE IS

One of the sharpest questions of the session came from Jason Whetstone, Product Development Engineer at CHESA, who raised something that’s been quietly unsettling practitioners across the industry: the next generation of media professionals doesn’t organize their work in file systems. They organize it in apps.

Their footage is in Frame.io or in their phone’s camera roll. Their projects are in SaaS platforms like Canva. Their reference material is in Notion or Google Drive. When you ask them where a file is, they give you a blank look, because to them, files don’t exist. There are just things in apps.

Tom Kehn validated the concern immediately: this is what gives archivists headaches. When media lives inside twenty different SaaS platforms instead of on a governed file system with a MAM on top of it, the governance problem becomes enormous. It’s the Dropbox problem of a decade ago, multiplied by every generative AI tool, every cloud collaboration platform, and every creative SaaS platform that’s been adopted without IT oversight.

Ryan Servant’s response was both honest and forward-looking: the answer isn’t to force the next generation to care about file systems. The answer is to make the infrastructure so seamless that they never have to. The file is there. It’s governed. It’s accessible. They just don’t know it, and they shouldn’t have to.

“We need to make it so it’s okay if they don’t know where the file is or don’t care where the file is. And then it’s up to you guys to make sure there’s some governance around that.”

Nina Smith from the audience added a grounding point that resonated: the solutions on this panel are powerful, but not every organization needs the full stack. Understanding who is actually using the system (editors, archivists, compliance teams, executives) and designing around their specific needs and permissions is more important than any single technology decision.

“Seeking to understand who is using your system and who this is best for. If all you do is archive, some of this may not be for you.”

It was a good reminder that the most technically sophisticated solution isn’t always the right one, and that the organizations best served by vendors like these are the ones who do the discovery work first.

WHERE DOES THIS ALL LAND?

Tom closed the session with a thought worth sitting with. He’d told the panel this discussion would be the foundation of a CHESA blog series; they wanted to hear the real conversation before putting anything in writing. And the real conversation, it turned out, landed somewhere more nuanced than the provocative title suggested.

The file system isn’t dying. But it is transforming. Object storage is becoming the underlying substrate for nearly everything, and the file system is evolving from a storage mechanism into a true abstraction and governance layer, the interface between the raw economics of object storage and the humans and applications that need to work with data.

The companies on this panel (Backblaze, LucidLink, Suite, and Spectra Logic) each hold a different piece of that puzzle. Backblaze provides the scalable, cost-effective object storage foundation, with media-specific capabilities like Live Read that keep it relevant in active workflows. LucidLink and Suite each build the abstraction layer that makes that object storage feel like local, familiar, collaborative storage to the people who use it every day. And Spectra provides the lifecycle management and deep archive infrastructure that ensures data is governed, preserved, and accessible across its entire life, even decades into the future.

The center of gravity, as Nathan Halverson put it, has always lived at the application layer. That’s not changing. What’s changing is everything underneath it.

And that, it turns out, is a pretty good reason to keep talking about it.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space, an event that blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic | Moderated by Jason Whetstone, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the file system and object storage conversation is in your world, the other sessions are worth your time too.