When Machines Enter the Control Room
The final vendor panel at Chesafest 2026 saved its most time-sensitive question for last.
For the first three panels, AI was a topic you could discuss at a philosophical remove. The file system will evolve over years. MAM architectures will shift over decades. Human oversight in media operations will be negotiated gradually. But in live broadcast, there is no gradually. A shot is taken or it isn’t. A graphic fires or it doesn’t. Audio goes to air or it goes silent. The decisions happen in real time, and the consequences arrive just as fast.
Moderated by Jason “Pep” Pepino, Director of Media Systems Design and Engineering at CHESA, Panel 4 brought together representatives from LiveU, Vizrt, Netgear AV, and AI Media to answer a question that gets more urgent every year: should AI be allowed to make real-time production decisions inside a live control loop, or should it remain strictly advisory?
The answer, as it usually does, landed somewhere in the middle. But the journey to get there was worth the trip.
MEET THE PANEL
Chuck Davidson — Partner Account Manager, LiveU
Chuck describes himself as an optimist, with a “glass half full” orientation toward technology in general and AI in particular. LiveU’s work in the bonded IP and remote production space means AI decisions at the transmission layer carry real operational stakes, and Chuck brought that weight to the conversation while maintaining his characteristic forward-looking energy.
Dan Griffin — Territory Manager, Netgear AV
Dan’s background is in live production audio, which gives him a different instinct than most people in a broadcast technology conversation. He showed up to Chesafest as a self-described skeptic (his wife, who works in tech, had been actively working on his conversion) and moved visibly toward realist over the course of the discussion. His perspective on AI in network design and audio mixing was among the most practically grounded of the session.
Kyle Phillips — VP of Sales Enablement, AI Media
Kyle acknowledged upfront that his pro-AI position at a company called AI Media was not exactly a surprise. What he brought beyond the predictable enthusiasm was specificity: real deployment context for live caption automation, guardrails design, and the practical limits of what AI can handle when breaking news or live sports throws something unexpected at the system.
Steve Cooperman — Sales Manager, Vizrt
Steve came in as the panel’s pragmatist, with 20 years of experience across Panasonic, Canon, and now Vizrt spanning cameras, live production, and software. He’s seen enough real-world deployments to know where AI delivers and where it overpromises, and he wasn’t shy about either.
Jason “Pep” Pepino — Director of Media Systems Design and Engineering, CHESA (Moderator)
Pep opened by declaring himself an accelerationist, the most enthusiastic position on the AI spectrum, and framed the panel accordingly. He had just finished building CHESA’s first SMPTE 2110 studio and had personally entered thousands of IP addresses by hand. His enthusiasm for an AI agent that could do that work someday was, as he put it, “real.”
SHOULD AI MAKE REAL-TIME PRODUCTION DECISIONS?
Pep opened by laying out what AI can already do in a live production environment. It can identify key moments. It can select camera angles. It can trigger graphics automatically. It can translate and localize audio in real time and adjust levels based on speaker detection. The question isn’t capability anymore. It’s authority.
Should the AI decide, or should it advise?
Steve Cooperman came in with a real-world example that illustrated both sides of the question simultaneously. Vizrt's Viz Libero product brings sports analytics into live production, powering the kind of on-screen overlays that have become standard in sports broadcasts. But beyond data visualization, the platform also handles AI cutouts: cleanly separating a player from the background in real time, handling edge cases like dark uniforms on grass, enabling 3D effect automation without a compositor doing it by hand.
“AI is really helpful for that cutout, and then automating it. Of course, we could always override it. But that’s a real-world example of production applications that some sports productions are using today.”
The override option is the tell. Even in a case where the AI is clearly adding value, the ability to override it is treated as non-negotiable. The automation runs unless a human says otherwise.
Chuck Davidson framed LiveU’s approach as one of intentional flexibility. Their CTO’s current development focus is something called an AI connector, essentially a configurable entry point into the LiveU ecosystem that lets each customer define which AI agent they want to use and how much authority it gets. The premise: there is no universal right answer for how much AI authority is appropriate. It depends on the customer, the content, and what’s at stake.
“We can’t assume that everybody’s going to want to have the same parameters or the same mindset for how they want to integrate AI.”
Dan Griffin brought the audio mixing perspective, and it was one of the most honest assessments of the session. When it comes to managing microphone levels for a group of talking heads, he said flatly, machines can do it better than humans. They react faster. They don’t get fatigued. They don’t miss a cough. For that specific task, in that specific context, AI authority isn’t a philosophical question. It’s just more reliable.
But the right answer shifts dramatically when you change the context. Life-critical broadcasts, high-stakes live events, anything where a muted microphone could mean something goes out wrong or doesn’t go out at all: those require human readiness to intervene, even if AI is handling the moment-to-moment operation.
Kyle Phillips introduced a concept that became one of the most useful frameworks of the session: bounded autonomy. You define the space in which AI is allowed to act, and the machine operates confidently within that space. The boundaries are the human decision. The execution within them is the machine’s.
“You design what it’s able to do. When you can replace manual, repetitive tasks with AI, you get efficiency and speed. But you give it parameters. It can adjust levels a few decibels, but it can’t go from zero to twenty all at once.”
The design phase is where human judgment lives. The operational phase is where the machine works. Keeping those two things clearly separated is the architectural foundation of responsible AI deployment in live production.
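The decibel example in Kyle's quote can be sketched in a few lines. This is an illustrative toy, not any vendor's API: the function, class, and threshold values are all hypothetical, but they show the shape of bounded autonomy, where humans set the envelope at design time and the machine acts freely inside it.

```python
# Illustrative sketch of "bounded autonomy" for an AI audio leveler:
# the machine may nudge a fader, but only within human-defined limits.
# All names and thresholds are hypothetical, not from any vendor API.

from dataclasses import dataclass


@dataclass
class GainBounds:
    max_step_db: float = 3.0    # largest change allowed per adjustment
    floor_db: float = -20.0     # absolute lower operating limit
    ceiling_db: float = 6.0     # absolute upper operating limit


def apply_bounded_gain(current_db: float, requested_db: float,
                       bounds: GainBounds) -> float:
    """Clamp an AI-requested gain change to the human-designed envelope."""
    # Limit how far a single adjustment can move the fader.
    step = max(-bounds.max_step_db,
               min(bounds.max_step_db, requested_db - current_db))
    # Keep the result inside the absolute operating range.
    return max(bounds.floor_db, min(bounds.ceiling_db, current_db + step))


# The model asks to jump from 0 dB to +20 dB; the guardrail allows +3 dB.
print(apply_bounded_gain(0.0, 20.0, GainBounds()))  # 3.0
```

The design-time decisions live entirely in `GainBounds`; the runtime code never gets to renegotiate them, which is the whole point of the pattern.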
WHEN AI FAILS: WHO’S RESPONSIBLE?
Pep shifted the conversation to accountability, and the panel's first response was telling: someone immediately said "the systems integrator," then quickly acknowledged they were not supposed to say that.
The laughter that followed said something real. In the live production chain, when something goes wrong, the question of who owns it is genuinely complicated.
Kyle Phillips was direct: if the AI is failing, it’s ultimately on the vendor. But the more important variable is how the system was designed and what parameters were set. You can’t blame the machine for operating within the boundaries someone gave it. The accountability traces back to whoever set those boundaries.
Chuck Davidson took a different angle. LiveU's acquisition of Actus (a compliance monitoring platform) took on new significance for him in this context during the panel. Actus was built for FCC compliance monitoring, essentially an automated oversight layer that watches what goes to air and flags violations. As AI takes on more production authority, a compliance layer like that becomes part of the answer to the accountability question. It's governance infrastructure for an AI-driven environment.
Pep offered his own ground-level perspective: having just spent significant time manually entering IP addresses to configure CHESA’s 2110 studio, he’s acutely aware of how much room there is for AI to help with the configuration and commissioning process, and equally aware of how much human verification that work currently requires. The AI can assist. The engineer still has to verify.
GUARDRAILS: WHAT MUST EXIST IF AI IS IN THE CONTROL LOOP?
The panel’s final formal question was the most practical: if AI is operating inside the signal chain, what guardrails must be in place?
The consensus was rapid and clear: operator override is non-negotiable. Every panelist said some version of it.
Steve Cooperman used Vizrt’s gaze correction feature on the TriCaster as a live illustration. The feature automatically adjusts a speaker’s eye line to maintain direct-to-camera contact even when they’re looking down at a monitor. It works well most of the time. It does not work well when someone is moving erratically, and a malfunctioning gaze correction in the middle of a live broadcast creates a deeply unsettling viewer experience. The human has to be able to turn it off. Immediately. Without friction.
“You need a human, presumably, to be able to override or to monitor. If any technology goes bad, you want the ability to turn it off if it’s not working in that environment properly.”
Kyle Phillips described the guardrails in AI Media’s captioning deployments in specific terms. AI handles the placement of captions in real time, dynamically repositioning them so they don’t block on-screen text like lower thirds or score bugs. That’s clean, bounded automation. But then there’s the harder layer: topic models and content filters that prevent certain words from appearing in captions when a speaker has a particular accent or when a live sports moment generates unexpected language. Those filters need to be configurable, auditable, and human-adjustable in the moment.
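The "clean, bounded automation" half of that description, repositioning captions so they don't cover other graphics, reduces to a simple geometric check. The sketch below is hypothetical (the function names, slot coordinates, and rectangle convention are mine, not AI Media's), but it captures the logic: try the preferred caption position, and move to a fallback if it would collide with a known on-screen element.

```python
# Hypothetical sketch of dynamic caption placement: use the first
# candidate position whose box avoids all known on-screen graphics.
# Coordinates and names are illustrative, not AI Media's actual API.

def overlaps(a, b):
    """Axis-aligned rectangle overlap; rects are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def place_caption(caption_size, graphics, candidates):
    """Return the first candidate position whose box clears every graphic."""
    w, h = caption_size
    for (x, y) in candidates:
        box = (x, y, w, h)
        if not any(overlaps(box, g) for g in graphics):
            return (x, y)
    return candidates[0]  # nothing clears; fall back to the default slot


# A lower third occupies the bottom strip, so the caption moves to the top.
lower_third = (0, 880, 1920, 200)       # x, y, width, height in pixels
slots = [(160, 900), (160, 60)]         # preferred bottom, fallback top
print(place_caption((1600, 120), [lower_third], slots))  # (160, 60)
```

The harder layer Kyle described, content filters that must be adjustable live, doesn't reduce to geometry like this; that's exactly why it needs the human-in-the-loop controls he listed.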
Dan Griffin brought it back to the network layer. Netgear’s value-add includes free network design services. AI can help design a network much faster than doing it manually. But an engineer still puts eyes on every design before it goes to a customer. Not because the AI is unreliable, but because the stakes of a poorly configured live production network are too high to skip the human review, regardless of how good the last ten designs were.
“The thing to fear is getting too comfortable. You always have to look and make sure you’re monitoring what it’s doing and providing feedback as needed.”
Chuck Davidson closed the guardrails discussion with the framing that felt most honest about where the industry actually is: this is a change management problem as much as a technology problem. The resistance to AI in live production isn’t always about legitimate technical concerns. Sometimes it’s just that change is scary, and AI is a category of change that the industry has no prior template for. The tape-to-digital transition felt just as existential at the time. Many broadcasters refused to let go of physical tape long after digital was the better answer.
“Part of the challenge is that change is scary, and AI is a very powerful tool that this industry has never seen before. Part of our job on the technology side is how do we harness it and how do we manage it to eliminate the fear.”
WHERE ARE WE ON THE INNOVATION CURVE?
An audience question brought the session to its most forward-looking moment: where are we on the innovation curve for visual AI in live broadcast?
Pep didn’t hesitate: we’re at the very beginning. The capabilities visible today will look primitive compared to what the next five to ten years will produce. The panel agreed.
Dan Griffin noted that even the most basic AI research tools (looking up someone’s background before a meeting, pulling a bio from the web) were significantly worse just six months ago than they are now. The trajectory of improvement is steep. Broadcast-specific applications are more complex and more critical than general research tools, which means they’ll take longer to mature. But the same rate of improvement will get there.
Steve Cooperman pushed back slightly on “very beginning.” In live sports specifically, the volume of AI-driven sports tech visible on broadcast in the last year is roughly ten times what it was before. Not all of it is AI in the strict sense, but the category of computer-assisted production technology has exploded, and AI is a meaningful part of that acceleration.
Kyle Phillips connected this to the economics of linear broadcast. Traditional linear television is under revenue pressure, and that pressure is creating urgency around monetizing existing content in new ways. Old archives, up-rezzed to modern quality, localized for new markets, offered on emerging platforms (he pointed to retro TV services running on antenna signals with commercials as a surprisingly significant revenue generator) represent a category of AI-driven value that is very real, very current, and still early.
A voice from the audience painted one of the most vivid pictures of where this heads. Imagine taking old television series and not just up-rezzing them, but giving them new language tracks where the original actors’ voices are preserved but given the phonetic quality of the target language’s native speakers. Not a new voice actor. The original voice, delivered as if the original actor had learned to speak Hindi or Spanish natively. The legal questions around that are still being worked out. The technology to do it exists now.
Chuck Davidson offered the most memorable real-world deployment of the session: the NYPD’s Drones as First Responder program in New York City. Drones dispatched autonomously in response to 911 calls, giving officers visual intelligence on a scene before they arrive. Operational today. No lab demo. No pilot program. Running in the city.
“If you’ve never seen it, I would encourage you to watch it. It’s a great example of where we are from a technology perspective.”
It’s not a broadcast example. But it’s the clearest illustration of what bounded AI autonomy looks like when it works: a machine operating within carefully designed parameters, doing something faster and more effectively than any human alternative, with humans ready to act on what it finds.
THE THROUGH LINE ACROSS ALL FOUR PANELS
Across the four Chesafest vendor panels, the same idea surfaced in every room, in different forms and different vocabularies, and it’s worth naming it clearly.
The question was never really “AI or humans.” It was always “what do we want the humans to do?”
In storage and file systems, we want humans setting governance policy, not manually moving files. In MAM, we want humans defining taxonomy and verifying results, not logging 20-year-old archives by hand. In media operations, we want humans deciding what the work is and evaluating what comes out, not checking codec values on every ingest. In live production, we want humans making editorial decisions and ready to override, not manually adjusting audio levels for twelve talking heads who all speak at different volumes.
The machines are getting better at the things humans shouldn’t have to do. The work of the industry right now is figuring out exactly where that line is, drawing it deliberately, and building the guardrails to hold it.
That’s not a 2030 problem. It’s a now problem. And it was the right note to end Chesafest 2026 on.
ABOUT CHESAFEST
Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space. It blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.
Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.
The four vendor panels from Chesafest 2026:
Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World
Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA
Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?
Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA
Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations
Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic, with Jason Whetstone, CHESA | Moderated by Felix Coats, CHESA
Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production
Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA
This blog series covers each panel in depth. If the live production and AI authority conversation is in your world, the other sessions are worth your time too.