The 4th Annual Chesafest didn’t slow down after its opening session. Vendor Panel 2 took the intellectual temperature in the room and raised it by a few degrees.
The question on the table: Is structured metadata still enough to run a modern media asset management system, or is the rise of vector databases and AI-driven semantic retrieval about to fundamentally reshape how media organizations find, govern, and work with their content?
It sounds like an infrastructure question. It turned out to be a conversation about users, governance, trust, library science, Star Trek, and the surprisingly stubborn challenge of teaching a machine to know what you actually meant.
Moderated by Felix Coats of CHESA, the panel brought together practitioners and vendors from across the MAM ecosystem, a mix of perspectives that produced one of the most substantive conversations of the day.
MEET THE PANEL
Jason Patton — VP of Production Technology, Sesame Workshop
Jason was a late addition to the panel (and, for the record, a formidable duckpin bowler). He’s not a vendor; he’s a client, and his real-world perspective on what it actually means to manage a deep archive of beloved children’s content grounded every abstract technology debate in something concrete. His candor was a consistent highlight throughout.
Tim Ayris — Head of Channel Partnerships, VIDA
Tim brought a content operations lens to the conversation. VIDA’s customers use the platform to push and manage content at scale, which means the governance question isn’t theoretical; it’s something they have to solve every day.
Jeff Herzog — Director of Product Management, EditShare
Jeff came in with a product-depth perspective and a healthy skepticism about the pace of vendor hype versus the pace of actual customer adoption. His point that many customers are skeptical of MAM value, and that AI enhancement layers could change that permanently, set a useful frame early.
Jim Cavedo — VP of Global Solutions, OrangeLogic
OrangeLogic occupies a unique position: a single platform with both DAM and MAM capabilities. Jim brought the agentic AI angle to the conversation and was consistent on one point throughout: the user shouldn’t know or care whether the system is querying a relational database or a vector database. That’s the vendor’s problem to solve.
Sofia Fernandez — Channel Manager, Backlight
Sofia offered clear, precise framing throughout, including one of the best analogies of the session, which involved a coffee machine. She brought a measured view of how the transition from structured to semantic metadata needs to be paced carefully to avoid breaking the users who depend on deterministic search today.
Eduardo Mancz — President and CEO, Fonn Group (Mimir)
Eduardo’s company builds Mimir, a MAM platform well known in the broadcast and media space. He pushed the conversation toward the practical: the complexity of metadata that organizations are already struggling to manage, and the risk of chasing AI capabilities without solving for portability and platform evolution.
Felix Coats — Solutions Consultant, CHESA (Moderator)
Felix opened with a technical level-set that would have impressed a database administrator, covering the core difference between relational and vector databases with enough clarity that the conversation could actually go somewhere. He kept the panel honest and on-topic throughout, and closed with a Star Trek reference that was far more apt than it had any right to be.
THE SETUP: TWO VERY DIFFERENT WAYS OF KNOWING THINGS
Felix opened by drawing a distinction that the industry tends to collapse into buzzwords. A relational database, he explained, is like a well-organized spreadsheet. You know what you’re looking for, you query it precisely, and you get back an exact match. Tomato is a vegetable. Find all videos from 1994. Return assets with active rights for North America.
A vector database works on a completely different principle. It doesn’t retrieve based on declared, structured facts; it retrieves based on similarity and meaning. A cat and a dog aren’t the same animal, but they share enough dimensional proximity in a vector space that a search for “pet” could surface both. It’s powerful for finding things you can’t precisely describe. It’s problematic when you need to know for certain.
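The distinction Felix drew can be made concrete with a minimal sketch. Everything below is invented for illustration (a three-item toy catalog, made-up three-dimensional embedding vectors; real embeddings run to hundreds of dimensions), but it shows the two retrieval styles side by side: an exact relational filter versus a cosine-similarity search in which “cat” and “dog” sit close enough in the space to both answer a query for “pet.”

```python
import math

# Toy asset catalog: structured fields plus a made-up embedding vector per asset.
assets = [
    {"id": "A1", "year": 1994, "tags": ["cat"], "vec": [0.9, 0.1, 0.0]},
    {"id": "A2", "year": 1994, "tags": ["dog"], "vec": [0.8, 0.2, 0.1]},
    {"id": "A3", "year": 2003, "tags": ["car"], "vec": [0.0, 0.1, 0.9]},
]

def relational_query(year):
    """Deterministic, exact-match retrieval: same input, same output, fully auditable."""
    return [a["id"] for a in assets if a["year"] == year]

def cosine(u, v):
    """Cosine similarity: how close two vectors point, regardless of magnitude."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def semantic_query(query_vec, k=2):
    """Similarity-based retrieval: ranks by proximity in the space, not declared facts."""
    ranked = sorted(assets, key=lambda a: cosine(query_vec, a["vec"]), reverse=True)
    return [a["id"] for a in ranked[:k]]

print(relational_query(1994))        # exact match: ['A1', 'A2']
pet_vec = [0.85, 0.15, 0.05]         # imagined embedding for "pet"
print(semantic_query(pet_vec))       # nearest neighbors: the cat and dog assets surface together
```

The relational path will return the same rows every time and can explain why; the semantic path returns whatever happens to be nearest in the space, which is exactly the power and the governance problem the panel spent the rest of the session on.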
The question Felix posed: MAM systems have been built for decades on the declared-truth model of relational databases, structured schemas, and deterministic queries. Now users expect systems to understand intent. Can these two models coexist? Or are they philosophically incompatible?
The panel’s answer, reached almost immediately and reinforced throughout: they don’t just coexist, they depend on each other.
“THEY’RE GOING TO HAVE TO LIVE TOGETHER”
Jason Patton got there first, and said it most plainly. A unique identifier, the foundational record that says this asset exists and relates to these other assets, is never going away. That’s relational. That’s structural. That has to be right. But layered on top of that, and running alongside it, is where vector search lives: helping a new generation of users who have grown up talking to chatbots, who don’t know the naming convention, who have a fuzzy idea of what they’re looking for and want the system to meet them there.
“There’s going to be a whole new crop of users whose only experience is talking to a chatbot. They’re going to be like, ‘I don’t know what I want.’ They want the system to come back and say, here are things that are like what we think you’re saying.”
Tim Ayris agreed, adding a dimension specific to VIDA’s user base: the creative users who are doing production work don’t want to learn a taxonomy. They want to type something that approximates what they’re looking for and get results. But the operational users, the ones pushing content, managing distribution, handling rights, need the precision that only a relational database can provide. The same platform has to serve both.
Jeff Herzog came at it from a MAM adoption angle. Many of EditShare’s customers have MAM access but don’t fully use it. They’re skeptical. The value isn’t obvious enough yet. His contention: AI enhancement layers change that equation. Once semantic search makes finding content genuinely effortless, the reluctant users become converts.
“You won’t be able to afford not to use MAM once these enhancement layers come in.”
And Jim Cavedo put the capstone on the opening round with a point that would echo throughout the entire session: the user should never know which database is serving their query. The agentic layer on top of both systems figures that out. The user types a question. The agent decides whether it requires a relational query, a vector search, or some combination of both, and returns a single, coherent result.
“The user has no idea where any of this exists. They just want one pane of glass, one simple chat experience.”
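Jim’s “one pane of glass” can be sketched as a toy router. Everything here is hypothetical: a real agentic layer would use an LLM-based planner rather than a keyword heuristic, and would actually execute the queries, but the shape is the same: one entry point, one coherent result, with the backend decision hidden from the user.

```python
import re

# Crude stand-in for intent classification. A production agent would reason about
# the query; this just pattern-matches words that signal a structured, auditable lookup.
STRUCTURED_HINTS = re.compile(r"\b(rights|year|before|after|valid|from \d{4})\b", re.I)

def route(query: str) -> str:
    """Decide which backend should serve this query."""
    if STRUCTURED_HINTS.search(query):
        return "relational"   # exact and auditable: filters, rights, dates
    return "vector"           # fuzzy intent: "something that feels like a summer afternoon"

def answer(query: str) -> dict:
    """One entry point. Both paths return the same result shape, so the user never sees the seam."""
    backend = route(query)
    return {"query": query, "served_by": backend, "results": []}

print(answer("show assets with rights valid through 2027")["served_by"])   # relational
print(answer("something that feels like a summer afternoon")["served_by"]) # vector
```

The design choice worth noticing is that the routing decision lives entirely behind the single `answer` function; the user-facing contract never changes no matter which database does the work.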
THE GOVERNANCE PROBLEM: WHEN “GOOD ENOUGH” ISN’T
The second major thread of the session was governance, and this is where the conversation got genuinely uncomfortable in the best way.
Vector databases, by their nature, are not deterministic. They don’t always return the same result for the same query. They can hallucinate connections. They can’t trace their own reasoning the way a relational query can. And in regulated industries (news, legal, medical, and to a significant degree entertainment with its rights and talent participation obligations) that traceability isn’t optional.
Jeff Herzog made the point precisely: a search against a relational database is auditable. You can see exactly why it returned what it returned. A vector search isn’t.
“These vector searches aren’t, by definition, traceable. You can’t see the work in the way that a relational database search is deterministic; there are facts behind it.”
Jim Cavedo went further: if you’re depending on AI to make a rights decision, and you’re challenged on that decision, you need to be able to point to something and say “the data said I could do this.” An unexplainable vector result won’t hold up.
Eduardo Mancz raised a cost dimension that rarely gets discussed: when new models emerge, and they will, you have to re-vectorize your entire dataset. Re-indexing is expensive, time-consuming, and technically demanding. The industry talks constantly about AI capabilities. It talks almost never about the infrastructure cost of maintaining them over time.
“There are going to be needs for new re-indexation of everything, and it has a huge cost associated. Very few discussions about this are actually happening.”
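Eduardo’s re-indexation point can be illustrated with a toy sketch. The “models” below are invented stand-ins (real embedding models produce hundreds or thousands of dimensions), but they demonstrate the core problem: vectors produced by different models live in incompatible spaces, so a model upgrade forces a full, costly re-embed of the entire archive.

```python
# Two stand-in "embedding models". Their outputs don't even agree on dimensionality,
# let alone on what each dimension means -- which is why old vectors can't be reused.

def embed_v1(text: str) -> list:
    """Stand-in for the old embedding model (3 dimensions)."""
    return [len(text) % 7, text.count("a"), text.count("e")]

def embed_v2(text: str) -> list:
    """Stand-in for a newer model: different dimensionality, different space."""
    return [len(text) % 5, text.count("a"), text.count("e"), text.count(" ")]

archive = ["elmo learns to share", "big bird counts to ten", "oscar and the worms"]

# The index built under the old model...
index_v1 = {doc: embed_v1(doc) for doc in archive}

# ...cannot answer queries embedded with v2: the spaces are incompatible.
# The only fix is re-embedding every asset, which scales with archive size.
index_v2 = {doc: embed_v2(doc) for doc in archive}

print(f"re-embedded {len(index_v2)} of {len(archive)} assets under the new model")
```

In this toy the re-embed is instant; against millions of hours of media, that same loop is the “huge cost” Eduardo says almost nobody is budgeting for.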
Jason Patton offered a nuanced real-world example from Sesame Workshop. Their archive carries curriculum and educational metadata that human researchers carefully log alongside production content. That metadata is structured, governed, and critical. But it was created by humans who sometimes missed things, especially in content from 30 years ago. Vector-based enrichment can help fill those gaps, but only as a complement to the relational layer, never as a replacement. A human still verifies. The vector layer helps close the coverage gap.
“It’s enrichment, but to a good enough level. And ‘good enough’ only works because there’s a human verifying what’s happening.”
Sofia Fernandez framed the “good enough” debate cleanly: for some industries and some use cases, “good enough” is genuinely acceptable. For others (legal, news, medical) it never will be. The answer isn’t one database winning. It’s designing the system to know which tool to use and when.
Tim Ayris landed the governance thread with a warning: if you haven’t built solid structural metadata foundations today, you’re not going to go back and build them later. Organizations that skip the taxonomy work will leapfrog directly into semantic search, and when semantic breaks, it breaks quietly but confidently, in ways that are very hard to audit or correct.
THE USER EXPERIENCE IMPERATIVE: ONE PANE OF GLASS
A recurring theme throughout the session, and a point of genuine tension, was whether users can or should be trained to understand the difference between structured and semantic search.
Jeff Herzog’s view: yes, to some degree. Users need to understand that a filter (“show me assets with rights valid through 2027”) is a different kind of query than a semantic search (“show me something that feels like a summer afternoon”). Mixing the two requires user literacy.
Jim Cavedo pushed back: users don’t want to be trained. Full stop. The benchmark the industry has to hit is the iPhone. People don’t think about whether their iPhone is making a cellular or WiFi call. They just make the call. The infrastructure decision should be invisible.
Sofia Fernandez offered the most memorable analogy of the session: a coffee machine. The milk is stored in one compartment, the coffee in another. The internal architecture is separate and distinct. But the user presses one button that says “latte” and gets exactly what they want. The underlying complexity is invisible. That’s the design goal for a MAM that bridges relational and vector search: both components working together, neither exposed to the user.
Jason Patton took this a step further, suggesting that the system itself needs to surface explanations when searches fail, not blaming the user, but offering probabilistic guidance on why nothing came back and what might help. An intelligent failure mode is part of the experience.
Jim Cavedo connected this back to the agentic layer: when AI agents are orchestrating queries across multiple databases simultaneously (interpreting intent, routing to the right system, returning results with context), the user doesn’t need to understand any of it. They just need to get the right answer. That’s the world the panel agreed they’re moving toward. The question is how fast.
LIBRARY SCIENCE BECOMES DATA SCIENCE
One of the most intellectually interesting moments came from Terry Melton in the audience, who raised the concept of vector drift and the role of traditional library science. Over time, a vector database’s internal representation of data can drift; the mathematical relationships between items shift as new content is added, as models update, as the index ages. Run the same search twice in a row and you might get different results. That non-determinism is a feature for discovery but a bug for governance.
His question: can library science, the discipline that has spent decades thinking about taxonomy, controlled vocabularies, and the principled organization of information, help solve this?
Jim Cavedo’s answer resonated: library science doesn’t disappear. It migrates. It becomes data science. The skills that used to go into building a controlled vocabulary now go into building prompts, tuning embeddings, and designing the logic that drives how an agentic system navigates between retrieval modes. Human judgment doesn’t leave the system, it moves upstream.
“Library science moves into data science. It’s about how you become better at driving the prompts and the values that drive a better result set. And then, as technology gets added to your vector databases, you’re constantly reevaluating those human-led prompts.”
BEYOND SEARCH: WHAT AI ACTUALLY UNLOCKS
The panel didn’t spend all its time on the architecture. Jason Patton pushed the conversation toward what AI-enhanced MAM actually enables beyond better search, and the answers were genuinely exciting.
Sesame Workshop is exploring using semantic analysis for audio description: feeding what the AI knows about a piece of media directly into accessibility workflows, generating descriptions for the visually impaired without human logging. It’s a workflow that would have required thousands of hours of manual work. With a well-indexed archive and a capable AI layer, it becomes something closer to automated.
Jim Cavedo picked that up: if you have good vector embeddings generating rich contextual descriptions, those feed back into better structured metadata. Better transcripts. More accurate automated tags. Which in turn improve the vector layer. The two systems become genuinely codependent, each making the other more capable over time.
“At some point, nobody’s going to be manually tagging content. That goes away completely.”
Eduardo Mancz emphasized that this future only works if organizations maintain ownership of their enriched metadata through platform transitions. As companies move between MAM systems, which they do every few years, the AI-generated enrichment they’ve accumulated needs to travel with them. Portability of vector data and AI-generated metadata isn’t a solved problem, and it’s one that will define which platforms win long-term trust.
THE CLOSING QUESTION: HOW DOES STRUCTURED METADATA EVOLVE?
Felix closed the session by asking each panelist: as AI-native workflows increase, what actually happens to structured metadata in your world?
The answers landed in a consistent place. Structured metadata doesn’t disappear, but the ratio shifts dramatically. Jeff Herzog put it starkly: the sheer volume of vector data generated by AI (transcripts, embeddings, contextual descriptions, frame-level analysis) will dwarf the structured metadata that organizations have been painstakingly logging for decades. Not ten to one. More like a hundred to one. The structured layer remains essential. It’s just no longer the majority of what the system knows.
Jason Patton’s advice, drawn from a real initiative at Sesame Workshop: before you start down the AI enrichment path, get your taxonomy right. Clean up your relational structure. It’s unglamorous work, but if your structured metadata is a mess when you add the AI layer, the AI layer inherits and amplifies that mess. Good structured data makes the vector layer smarter. Bad structured data makes everything worse.
Tim Ayris sounded the warning that no one else in the room wanted to say out loud: for organizations that haven’t done the taxonomy work and don’t have the budget to do it now, the uncomfortable truth is that they’re going to leapfrog straight to semantic search and skip the structured foundation entirely. That might work for discovery. For governance, it’s a slow-motion problem.
And Jim Cavedo brought it home with a line that could be the thesis of the entire panel:
“Today they’re codependent. And our job is to create the user experience where it doesn’t matter to the user. That’s probably the hardest part, because when users can’t figure it out, they abandon the system altogether.”
DATA AND THE USS ENTERPRISE: A MODERATOR’S SENDOFF
Felix closed with a thought experiment that earned the session a proper ending. He’d been trying to think of a perfect metaphor for the marriage of relational and vector databases, something that showed both systems working in harmony. He landed on Data from Star Trek.
Data has to track the ship’s inventory, crew assignments, mission parameters: all relational. All structured. All exact. But he also has to read facial expressions, interpret emotional states, infer intent from behavior: all vector. All probabilistic. All high-dimensional.
The goal isn’t to pick one. The goal is to be Data: a system that pulls from both databases simultaneously, serves a human experience that feels unified and natural, and does it all without making the user think about which database answered their question.
“That’s what we’re trying to do: take the human and merge it with the computer, until we’re all just Data, navigating through space.”
Naturally, that landed well in a room full of people who’ve been in media technology long enough to appreciate a good Trek reference.
ABOUT CHESAFEST
Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space, an event that blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.
Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.
The four vendor panels from Chesafest 2026:
Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World
Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA
Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?
Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA
Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations
Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic | Moderated by Jason Whetstone, CHESA
Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production
Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA
This blog series covers each panel in depth. If the MAM and AI metadata conversation is in your world, the other sessions are worth your time too.

