Automation, AI, and the Limits of Machine Decision-Making

The third vendor panel at Chesafest 2026 started with a question that sounds deceptively simple: how much of what media operations teams do today will be done by machines by 2030?

The answers ranged from 50% to 99%. And the real conversation was everything in between.

Moderated again by Felix Coats of CHESA, Vendor Panel 3 brought together practitioners from Telestream, Adobe, HelmutUS, Hiscale, and Scale Logic, alongside CHESA’s own Jason Whetstone, for a conversation about automation, accountability, and the specific kinds of decisions that still need a human in the room. The panel covered everything from the philosophy of machine morality to a story about a guy downloading Python at the gym and submitting the output to his boss without checking a single line.

It was a good panel.

MEET THE PANEL

Scott Eik — Senior Application Engineer, Scale Logic

Scott has been in the industry for about 16 years, moving between MAM systems, archive systems, and the customer side. He joined Scale Logic at NAB the prior year and brought a grounded, operational perspective to every question.

Dave Helmly — Director of Professional Video and Audio, Adobe

Dave has been at Adobe for 30 years and leads a workflow strategy and development team of 22, the only team of its kind embedded in Adobe’s engineering organization. His philosophy: trust your customers to tell you how to make your software. He’s been working with CHESA for most of his time there.

Greg Holick — VP of Business and Channel Development, HelmutUS

Greg has been in the M&E industry for over 25 years, with deep experience helping large customers architect and orchestrate complex media workflows. He came in as the voice of measured optimism: enthusiastic about AI’s potential, clear-eyed about the things it still can’t do.

Sarah Semlear — US Sales Lead, Hiscale

Sarah came to Hiscale after spending time on the client side, deploying MAMs and transcode systems from the inside. She showed up at Chesafest the prior year as a client. She brought the most infectious energy to the panel and consistently redirected the conversation toward what matters: whether any of this is actually making work more fun.

Erik Zindulka — Senior Sales Engineer, Telestream

Erik spent eight or nine years as a Telestream customer before joining the company. He described himself as “the MAM nerd in some circles at Telestream” and brought a practitioner’s sensibility to questions about automation, enrichment, and where AI fits into workflows people are already building.

Jason Whetstone — Product Development Engineer, CHESA

Jason has been at CHESA for 12 years and in the media industry for close to 18. He brought a developer’s precision to the panel: focused on what “done” actually means, why AI needs humans to define the work, and what pair programming has to teach us about working with AI tools.

Felix Coats — Solutions Consultant, CHESA (Moderator)

Felix moderated his second panel of the day and, per his own admission, had prepared a full list of questions that the panelists proceeded to answer before he could ask them. He pivoted gracefully throughout and introduced the gym story that became the thread everyone kept pulling on.

BY 2030, WHAT PERCENTAGE OF MEDIA OPERATIONS WILL BE FULLY AUTOMATED?

Felix opened with a clean, direct question and asked each panelist to answer it honestly: by 2030, what percentage of media operations in your space will be fully automated?

The answers were telling.

Dave Helmly went first and went highest: 99%. His reasoning was precise. Adobe’s AI work, particularly with Firefly Services, is focused on productivity and batch automation (resizing, reformatting, localization across 400 output variants from a single source). The jobs nobody wants. A creative still starts the job, still reviews the rejections, still makes the final call. But the volume of mechanical work being handed to machines is already enormous, and it’s only going in one direction.
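To make the shape of that work concrete, here is a minimal sketch of a batch fan-out in Python, using ffmpeg as a generic stand-in renderer. The variant axes are made up, and none of this is Adobe's API; Firefly Services exposes its own interfaces. Three sizes, three locales, and two formats already fan out to 18 deliverables; 400 is just more axes.

```python
import itertools
import subprocess
from pathlib import Path

# Hypothetical variant axes -- stand-ins, not Firefly's actual parameters.
SIZES = ["1920x1080", "1080x1920", "1080x1080"]
LOCALES = ["en-US", "de-DE", "ja-JP"]
FORMATS = ["mp4", "webm"]

def render_variants(source: Path, out_dir: Path) -> None:
    """Fan one master out into every size/locale/format combination."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for size, locale, fmt in itertools.product(SIZES, LOCALES, FORMATS):
        target = out_dir / f"{source.stem}_{locale}_{size}.{fmt}"
        # ffmpeg as a generic renderer; in a real pipeline, localization
        # (subtitles, dubbed audio tracks) would hang off the locale axis.
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(source), "-s", size, str(target)],
            check=True,
        )

render_variants(Path("master.mov"), Path("variants"))
```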

Scott Eik landed at 70 to 80%, acknowledging that some human interaction will persist but that the trend is unmistakably toward automation for the operational layer.

Greg Holick took a longer view and came in at 50 to 70%. His reasoning was rooted in what AI currently lacks: creative intent, cultural inference, the subtle judgment calls that define the difference between technically correct and actually good. He’s watched the industry’s AI capabilities grow and believes they’ll continue to grow, but maintains that the creative mind brings things to the table that can’t be encoded.

Sarah Semlear declined to give a number. Her answer was better than a number: if we want the future of media to be fun, there has to be human interaction. The machines should own the tedious, horrible tasks. The calculator analogy she returned to repeatedly was perfect: a calculator doesn’t replace the mathematician. It removes the arithmetic so the mathematician can think.

“Let the machines do the tedious, horrible tasks that we don’t want to do. Then we’re focusing on the really awesome, juicy, creative, fun stuff. That’s not Skynet. That’s a utopia.”

Erik Zindulka pointed out that the “extreme majority” of media operations tasks that AI is being asked to automate are things that customers have wanted machines to handle for years. A file lands in a folder. Twenty things should happen to it automatically. Nobody should be sitting in a cubicle checking the codec and moving it to the right directory. AI is the natural continuation of automation logic the industry has been building for decades.
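That pattern is old enough to sketch from memory. A toy version in Python, with hypothetical paths and ffprobe doing the codec check (a naive polling loop stands in for whatever watch-folder mechanism a real MAM provides):

```python
import json
import shutil
import subprocess
import time
from pathlib import Path

WATCH = Path("/watch/incoming")                       # hypothetical paths
ROUTES = {"h264": Path("/media/proxies"), "prores": Path("/media/masters")}
FALLBACK = Path("/media/review")    # anything unrecognized goes to a human

def codec_of(clip: Path) -> str:
    """Ask ffprobe for the first video stream's codec name."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", str(clip)],
        capture_output=True, text=True, check=True,
    ).stdout
    streams = json.loads(out).get("streams", [])
    video = next((s for s in streams if s.get("codec_type") == "video"), None)
    return video["codec_name"] if video else "unknown"

while True:  # naive polling; real systems use filesystem events
    for clip in WATCH.glob("*.mov"):
        dest = ROUTES.get(codec_of(clip), FALLBACK)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(clip), dest / clip.name)
    time.sleep(5)
```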

Jason Whetstone offered the most structurally precise answer: as long as humans are creating and consuming content, the system can never be fully automated, and shouldn’t be. The human role shifts, but it doesn’t disappear. The job becomes defining the work, being clear with the machines about what “done” means, and reducing the exceptions that fall outside the automation envelope.

“Our job as humans is determining what the work actually is and being very clear with the machines about what the work is and how we want it done.”

WHERE HUMAN JUDGMENT IS NON-NEGOTIABLE

Felix pushed the panel on a harder question: are there operational decisions that cannot safely be automated today? And will it ever be safe to automate them?

Erik Zindulka surfaced a quote that became a reference point for the rest of the panel, a placard from a 1979 IBM training presentation that read: “A computer can never be held accountable, therefore a computer must never make a management decision.”

That sentence from nearly 50 years ago maps almost perfectly onto the AI governance debate happening right now. Accountability is the line. Wherever a decision has legal consequences, creative stakes, or reputational exposure, a human needs to be in the chain, not because machines can’t generate an answer, but because machines can’t be held responsible for the answer they generate.

Sarah Semlear picked up the accountability thread with a specific point about morality. The industry often talks about training AI to be ethical or unbiased. But morality isn’t a universal constant. It varies by culture, country, context, and situation. You can’t hand a one-size-fits-all moral framework to an AI and consider the problem solved.

Greg Holick added the copyright and compliance dimension: AI in a media environment has access to enormous volumes of protected content. Should it? The legal exposure of an AI system pulling the wrong ad, using the wrong asset, or making a rights decision it can’t justify is enormous. And the entity that gets held responsible isn’t the machine.

Dave Helmly extended this into the personalization and content consumption space: AI is already learning individual users well enough to feed them content they’ll react to. By 2030, it will know users dramatically better than it does now. That creates an obligation on the human side to question what’s being surfaced, why, and whether the information environment being constructed serves the person or just the engagement metric.

Jason Whetstone brought it back to something clean and practical: the decision to publish. You can automate the upload. You can automate the metadata. But the decision to put content in front of an audience should require a human making a deliberate choice.

“The decision to actually publish to the public should be on a human.”
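In pipeline terms, that’s a gate that refuses to fire without a recorded human sign-off. A minimal sketch, with hypothetical names; everything upstream can run unattended, but the last step demands a person:

```python
from dataclasses import dataclass

@dataclass
class Approval:
    approver: str   # a named, accountable human
    note: str

class UnapprovedPublishError(Exception):
    pass

def publish(asset_id: str, approval: Approval | None) -> None:
    """Upload and metadata can run unattended; publishing cannot."""
    if approval is None:
        raise UnapprovedPublishError(
            f"{asset_id}: no human has signed off on publishing this asset"
        )
    print(f"publishing {asset_id}, approved by {approval.approver}")

publish("asset-0042", Approval("j.whetstone", "cut reviewed, rights cleared"))
# publish("asset-0043", approval=None)   # raises UnapprovedPublishError
```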

Dave Helmly also noted where compliance automation actually adds value: territory-specific edits, regional restrictions, content standards for different markets. These are the jobs that no one wants to do anyway, that currently require enormous manual effort, and where AI can do the work reliably because the rules are known and explicit.
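That’s also why this layer automates well: the rules can be written down and checked mechanically. A toy illustration, with made-up market rules:

```python
# Made-up market rules -- the point is that they're explicit and checkable.
MARKET_RULES = {
    "DE": {"max_ad_minutes": 12, "blur_third_party_logos": True},
    "JP": {"max_ad_minutes": 18, "blur_third_party_logos": False},
}

def compliance_edits(market: str, ad_minutes: int) -> list[str]:
    """Return the edit list a regional master needs before delivery."""
    rules = MARKET_RULES[market]
    edits = []
    if ad_minutes > rules["max_ad_minutes"]:
        edits.append(f"trim ads to {rules['max_ad_minutes']} minutes")
    if rules["blur_third_party_logos"]:
        edits.append("blur third-party logos")
    return edits

print(compliance_edits("DE", ad_minutes=14))
# ['trim ads to 12 minutes', 'blur third-party logos']
```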

Scott Eik grounded the whole discussion with a production operations lens: someone has to QC what comes out the back end before it goes to air or to print. That checkpoint is a human checkpoint. The question isn’t whether the QC role exists; it’s whether AI can support it by catching more before it reaches the human reviewer.

THE GYM STORY: LOW CODE, UNMANAGED RISK, AND THE GUY WHO SUBMITTED THE PYTHON SCRIPT

Felix opened the third segment with a story that generated more discussion than any formal question could have.

He overheard two finance professionals at the gym. One of them had been asked by his boss to produce some charts. He didn’t know how. He asked ChatGPT. ChatGPT told him to download Python. He asked how. ChatGPT told him. He installed it, ran the script ChatGPT wrote for him, and submitted the output to his boss without checking a line of it. His boss said great job. He was proud of himself.

Felix’s internal reaction was a list of questions he didn’t say out loud: Did you validate the code? Did you confirm it wasn’t also accessing your financial records from the last decade? Did you check what it was touching?

This is the low-code moment the industry is living in right now. The tools have gotten accessible enough that people with no technical background are generating and running code that touches real systems and real data. The gap between capability and comprehension has never been wider.

Scott Eik was direct: you have unmanaged risk the moment you don’t understand what’s happening in the background. And when something goes wrong, the person who ran the script without understanding it is not equipped to diagnose or fix it.

Dave Helmly raised the IP dimension: code generated by AI may have been derived from copyrighted source material. If you don’t know math, you can’t validate the logic. If you don’t know code, you can’t validate its origins. The people who are safe in this environment, he argued, are the ones with 10,000 hours in their specialty. They’re the ones qualified to judge what the AI produced.

Greg Holick brought it back to responsibility: automation and AI are extraordinary productivity tools, but they change who’s responsible for the outcome. The ownership lands on the person who ran the process. If you deployed code that touched data you shouldn’t have touched, the fact that an AI wrote it doesn’t reduce your exposure.

“Just because you can do it doesn’t mean you should. Automation and AI change your responsibility. The ownership is still on the person doing that.”

Sarah Semlear offered the most optimistic frame. She compared the current moment to the early days of YouTube, when traditional media companies were horrified by the chaos of user-generated video flooding the internet. People posting content they shouldn’t, no standards, no guardrails. It looked like a disaster. It became an industry. The wild west always calms down.

“Everything always calms down. It’ll be fine. We’ll get to the place where it’s actually that super powerful calculator we really need.”

Erik Zindulka pushed toward the practical design goal: the end state for low code in a media environment isn’t Python scripts generated in a gym. It’s a visual workflow builder where an operator draws a flowchart, describes the production logic they want, and the system handles the execution. Bring-your-own-code for edge cases, yes. But the default should be intuitive enough that nobody has to think about scripting at all.
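Reduced to code, the design he’s describing separates the workflow (data a visual builder could emit) from the engine that executes it, with bring-your-own-code as one step type among many. A hypothetical sketch; the step names and handlers are invented:

```python
from typing import Callable

# Built-in steps the engine knows how to run; handlers are stand-ins.
STEP_HANDLERS: dict[str, Callable[[dict, str], str]] = {
    "transcode": lambda cfg, f: f"{f}: transcoded to {cfg['profile']}",
    "qc":        lambda cfg, f: f"{f}: QC pass ({cfg['level']})",
    "deliver":   lambda cfg, f: f"{f}: delivered to {cfg['target']}",
}

# What a visual flowchart builder would serialize: data, not a script.
WORKFLOW = [
    {"step": "transcode", "profile": "xdcam50"},
    {"step": "qc", "level": "baseband"},
    # Bring-your-own-code escape hatch for the one edge case the builder
    # doesn't cover; every other step never touches a script.
    {"step": "custom", "run": lambda cfg, f: f"{f}: burned-in slate added"},
    {"step": "deliver", "target": "playout"},
]

def execute(workflow: list[dict], media_file: str) -> None:
    """Walk the flowchart; the operator never sees this code."""
    for node in workflow:
        handler = node["run"] if node["step"] == "custom" else STEP_HANDLERS[node["step"]]
        print(handler(node, media_file))

execute(WORKFLOW, "promo_v3.mxf")
```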

Jason Whetstone added the concept of AI context: an AI system is results-driven and will generate an answer as fast as possible, even if it doesn’t have all the information it needs to get the right answer. If it’s missing context, it guesses. That’s where the human has to step in: not to do the work, but to be clear about what the work actually is.

He described two models of working with AI tools. In the substitutive model, you outsource a task to AI and don’t particularly care how it gets done. In the assistive model, which he prefers, the relationship is pair programming: two people working shoulder to shoulder through a problem, each learning from the other. You understand the problem. The AI understands aspects of the code you don’t. You teach each other. The outcome is better because both parties are engaged in the process.

“I have to help the AI understand what the problem is that I’m trying to solve, what I’m not trying to solve, what good results look like, what success means, and what done means.”

THE FUTURE OF HUMAN OVERSIGHT: AI MONITORING AI?

Felix closed the formal portion of the session with a question about where human oversight goes as AI-native workflows mature. Do you create new roles to supervise AI output? Do you build AI to monitor AI? Or does the oversight layer gradually get automated away too?

The panel converged on a few consistent positions.

Scott Eik: in the near term, you want humans checking everything that comes out of AI. As trust is established over time, that check can become more targeted and less constant. The progression is gradual. You don’t just flip a switch.

Dave Helmly: AI is going to take some jobs. Photoshop took jobs too. But Photoshop created entirely new categories of work. The pattern holds. The people who lose jobs will be the ones who tried to use AI as a shortcut without understanding the underlying craft. The ones who keep their jobs, and build new ones, will be the ones who can judge what the machine produced.

Sarah Semlear: you don’t need to reinvent the wheel. The organizations that respond to AI by blowing up their org charts and starting over are making the same mistake people make with every major technology shift. Find the efficiencies. Add the roles where they’re needed. Check your sources, which is not a new skill requirement. Keep humans in the loop and keep it interesting.

“If you just take a base answer of anything and you don’t look into it, if you Google one thing and go with the first result, you should probably be fired for that too. This is not something new in humanity.”

Erik Zindulka offered one of the most forward-looking points of the session: AI enrichment isn’t a one-time event. Archives and libraries persist for decades. An archive enriched by one AI tool today will be enriched again five years from now by a better one. And again five years after that. Each pass adds another layer of metadata, another dimension of searchability, another tier of context. The result, over time, is a media archive richer than anything that could have been produced by human logging alone.
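The data-model implication is worth making explicit: each pass appends a versioned layer rather than overwriting the last one, so a 2026 tagger and a 2031 tagger can coexist on the same asset. A sketch, with hypothetical field and model names:

```python
import datetime

# Each enrichment pass appends a layer; nothing overwrites earlier passes.
asset = {"id": "archive-1987-0113", "enrichment": []}

def enrich(asset: dict, model: str, metadata: dict) -> None:
    asset["enrichment"].append({
        "model": model,
        "run_at": datetime.date.today().isoformat(),
        "metadata": metadata,
    })

enrich(asset, "speech-to-text-v2", {"transcript": "..."})
enrich(asset, "scene-tagger-v5", {"tags": ["press conference", "night"]})

def matches(asset: dict, term: str) -> bool:
    """Search across every layer ever produced for this asset."""
    return any(
        term in str(layer["metadata"]).lower() for layer in asset["enrichment"]
    )

print(matches(asset, "night"))  # True
```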

Greg Holick closed with a framing that landed well: AI changes the shape of human responsibility, but not its existence. Someone still has to set up the guardrails. Someone still has to evaluate what comes out. The pre-checking that happens before automation runs may matter as much as the post-checking that happens after.

Felix added one more thought before closing: the industry might start seeing something like a “production AI supervisor,” a new role whose job is specifically to QC AI output before it hits a downstream system or a human audience. Not a developer. Not a traditional post supervisor. Something in between. It’s not here yet, but the logic is sound.

A CLOSING QUESTION FROM THE CEO

As the session wound down, Jason Paquin stepped in with one last question for the group: what guidance do you have for someone building their career in this space right now?

It was a good question to end on, and Nina Smith gave the best answer.

She said that the greatest gift you can give anyone you’re talking to is the ability to truly listen. Not to have the answer ready before the question is finished. Not to perform expertise before you’ve understood the problem. Holding back, listening, and offering real perspective when you actually have something to contribute will take you further than sounding smart ever will.

“Know who you’re dealing with. If someone wants to talk fluff, talk fluff. If someone wants to talk truth, talk truth. You will go much further by listening and learning and offering your advice when you really know something, not when you’re guessing.”

That’s good advice in any era. In an industry moving as fast as this one, it’s essential.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space. It blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic, with Jason Whetstone, CHESA | Moderated by Felix Coats, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the automation and AI accountability conversation resonates with your world, the other sessions are worth your time too.