The Next Evolution of Media Asset Management

By Tom Kehn, VP, Solutions Consulting | March 12, 2026

CHESA Fest 2026 brought together technology vendors, media organizations, and workflow architects to explore the architectural shifts reshaping modern content infrastructure. As part of
the event, a series of vendor panels examined the deeper technical debates emerging across storage, asset management, and AI-driven workflows.

This discussion focused on one of those debates: whether the traditional model of structured, relational metadata remains the foundation of modern media asset management, or if emerging
vector-based semantic retrieval and AI-driven discovery are reshaping how organizations search, understand, and govern their media libraries.

Is Structured Metadata Enough in the Age of Vector Intelligence?

For decades, media asset management systems were built on declared truth.

Structured metadata fields.

Relational databases.

Deterministic queries.

If an asset had the correct tags and identifiers, the system could retrieve it with precision. If it didn’t, the asset might as well have been invisible.

But that model is being challenged.

As AI-driven workflows gain traction, users increasingly expect systems to understand intent, similarity, and context, not just keywords. Instead of searching for exactly what has been declared, they expect systems to infer what they mean.

At CHESA Fest 2026, Vendor Panel 2 explored the architectural tension emerging inside modern MAM platforms: does structured relational metadata remain the foundation of media asset management, or do vector-based semantic systems fundamentally reshape how assets are discovered and managed?

The answer from the panel was neither simple nor unanimous. But a clear theme emerged: the future of asset management isn’t relational versus vector.

It’s relational and vector, working together in ways users may never see.

The Panel

The conversation was moderated by Felix Coats, Solutions Consultant at CHESA, and brought together a mix of technology vendors and practitioners who are actively shaping the next generation of media asset management systems.

Panelists included:

  • Jason Pattan, Media Asset Manager at Sesame Workshop, representing the client perspective from one of the most iconic media libraries in the world
  • Tim Ayris, Head of Partnerships at VIDA
  • Jeff Herzog, Director of Product Management at EditShare
  • Jim Cavedo, VP of Global Solutions at Orange Logic
  • Sofia Fernandez, Channel Manager at Backlight (Iconik)
  • Eduardo Mancz, President and CEO of Fonn Group (Mimir)

Rather than focusing on product capabilities or feature comparisons, the panel examined a deeper architectural question: how the rise of semantic search, embeddings, and vector databases may reshape the role of structured metadata inside modern MAM systems.

The discussion quickly revealed that the industry isn’t debating whether vector intelligence will arrive in media asset management; it already has.

The Foundation Still Matters

Before the discussion even began, one reality became clear: abandoning structured metadata entirely isn’t realistic.

Jason Pattan of Sesame Workshop, who joined the panel as a client practitioner rather than a vendor, framed it bluntly.

“I can’t imagine a vector-only database for a MAM,” Pattan said. “You still need things like unique identifiers. That’s the foundation of the system.”

In other words, relational metadata provides the factual backbone of asset management: IDs, rights information, timestamps, licensing rules, and governance controls.

Those attributes aren’t fuzzy concepts. They are deterministic facts.

Trying to retrieve them through semantic similarity would introduce ambiguity where none can exist.

Jeff Herzog, Director of Product Management at EditShare, echoed that distinction.

“There’s a whole set of metadata, like rights management, camera metadata, and UUIDs, that can’t be fuzzy,” Herzog explained. “A fuzzy search on a UUID doesn’t make sense.”

Structured metadata, in other words, still governs the operational truth of a media asset.

But that doesn’t mean it governs discovery.

Search Is Changing

The real disruption lies not in how assets are stored, but in how users expect to find them. For decades, the people searching MAM systems were the same people who built them. Editors, archivists, and media managers understood naming conventions and metadata structures because they created them.

That generation is disappearing.

Pattan pointed out that the next generation of users approaches search completely differently.

“There’s a whole new crop of users whose only experience is talking to a chatbot,” he said. “They don’t know naming conventions. They don’t know identifiers. They just describe what they’re looking for.”

Instead of typing a specific filename or metadata tag, a user might search for something like:

“Clips where Elmo is counting with kids.”

That type of request cannot be answered by structured metadata alone.
Vector-based search, using embeddings and semantic similarity, allows systems to retrieve assets based on meaning rather than declared fields. Images, transcripts, and video context become searchable in ways that traditional schemas cannot support.
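The mechanics behind that kind of request can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the `embed()` function below is a character-frequency stand-in for a real embedding model (such as a sentence transformer), and the asset IDs and descriptions are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": letter frequencies of the text. A real system would
    # call a learned model that maps transcripts, images, or video to vectors.
    return Counter(ch for ch in text.lower() if ch.isalpha())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, assets: dict, top_k: int = 3):
    """Rank assets by similarity of meaning proxy, not by declared tags."""
    q = embed(query)
    scored = [(aid, cosine(q, embed(desc))) for aid, desc in assets.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical catalog entries; no filename or tag mentions "counting".
assets = {
    "EP-1024": "Elmo counts to ten with a group of children",
    "EP-0877": "Big Bird reads a bedtime story in the nest",
    "EP-1533": "Cookie Monster bakes cookies in the kitchen",
}
results = semantic_search("clips where Elmo is counting with kids", assets)
```

The point of the sketch is the retrieval style: nothing in the query has to match a declared field exactly, because ranking happens in a shared similarity space.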

Tim Ayris, Head of Partnerships at VIDA, summarized the shift succinctly.

“If that semantic search capability isn’t there,” he said, “the pressure on the MAM will be huge.”

Complement, Not Collision

Despite the headline tension, most panelists agreed that relational and vector systems are not competing architectures. They are complementary layers.

Jim Cavedo, VP of Global Solutions at Orange Logic, described the relationship as codependent.

Users shouldn’t have to think about which system they’re querying. Instead, the platform should dynamically determine how to answer the question.

“If someone asks for Sesame Street from 1969 with rights that expire in three years, that’s relational,” Cavedo explained. “If they’re asking for a video with a certain type of moment or feeling, that’s semantic.”

The system’s job is to translate the user’s intent into the appropriate retrieval method.
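Cavedo's routing example can be sketched as a thin classification layer in front of the two retrieval paths. The keyword heuristic below is purely illustrative; the signal words are assumptions, and a production system would more likely use a query parser or an LLM-based intent classifier.

```python
import re

# Illustrative signals only: terms that suggest declared, deterministic fields.
RELATIONAL_SIGNALS = ("rights", "expire", "license", "uuid")

def route(query: str) -> str:
    """Decide whether a request should hit relational or semantic retrieval."""
    q = query.lower()
    # Rights language or an explicit year points at structured metadata.
    if any(sig in q for sig in RELATIONAL_SIGNALS) or re.search(r"\b(19|20)\d{2}\b", q):
        return "relational"
    # Descriptive, feeling-oriented requests fall through to vector search.
    return "semantic"

route("Sesame Street from 1969 with rights that expire in three years")  # relational
route("a video with a hopeful, uplifting feeling")                       # semantic
```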

From the user’s perspective, the experience should be seamless: a single interface that abstracts the complexity underneath.

Eduardo Mancz, CEO of Fonn Group (Mimir), emphasized the same principle.

“From the user perspective, who cares what database it is?” he said. “They just want to find their content.”

The Governance Problem

While semantic discovery may improve search, it introduces a new challenge: governance.

Relational databases are deterministic. A query returns the same result every time because it operates on declared data.

Vector systems behave differently.

Similarity searches are probabilistic. Two searches may produce slightly different results depending on weighting, context, or embedding updates.

That distinction matters in regulated environments.

“Good enough is the problem,” Cavedo said. “In regulated industries, good enough is never good enough.”

Legal rights management, embargo dates, licensing restrictions, and union participation rules require deterministic enforcement.

Those systems cannot rely on probabilistic retrieval.
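One way to picture that layering, in a hedged sketch with invented field names, IDs, and dates: the semantic layer proposes candidates, and a deterministic relational check decides what survives.

```python
from datetime import date

# Hypothetical rights records keyed by asset ID.
catalog = {
    "EP-1024": {"rights_expire": date(2031, 1, 1), "embargoed": False},
    "EP-0877": {"rights_expire": date(2024, 6, 1), "embargoed": False},
    "EP-1533": {"rights_expire": date(2030, 3, 1), "embargoed": True},
}

def enforce_rights(candidate_ids, today=date(2026, 3, 12)):
    """Deterministic gate: drop expired or embargoed assets.

    Same inputs, same output, every run; nothing probabilistic survives here.
    """
    return [
        asset_id for asset_id in candidate_ids
        if (rec := catalog.get(asset_id))
        and not rec["embargoed"]
        and rec["rights_expire"] > today
    ]

# The similarity layer proposed three candidates; the relational gate keeps one.
enforce_rights(["EP-1024", "EP-0877", "EP-1533"])  # → ["EP-1024"]
```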

Herzog added that explainability is another concern. In relational systems, you can trace the logic behind a query result. In vector systems, that traceability becomes harder.

“You can’t always see the work behind the answer,” he noted.

This is why governance layers are likely to remain anchored in relational systems, even as semantic discovery expands.

The Metadata Quality Crisis

Another uncomfortable reality surfaced during the discussion: many organizations don’t actually have good structured metadata to begin with.

Ayris described a scenario his team sees regularly.

Customers migrate decades of archival content into a new MAM platform, only to discover the metadata is incomplete, inconsistent, or simply wrong.

“The metadata is terrible,” he said. “If you don’t have those foundations in place, it becomes much harder to audit or govern anything.”

Vector-based enrichment may help compensate for those gaps by generating transcripts, object detection, and contextual descriptions automatically.

But that raises its own risks.

Ayris warned that relying on semantic enrichment to replace missing structured metadata could create governance blind spots.

“If you haven’t built the foundation today,” he said, “you may leapfrog straight to semantic systems.”

Convenient, perhaps, but potentially dangerous from a compliance standpoint.

Users Don’t Want to Learn Databases

One of the more entertaining moments of the panel came when the discussion turned toward user experience.

Herzog suggested that users may still need training to understand the difference between semantic search and structured filtering.

Cavedo disagreed.

“Users don’t want to be trained,” he said. “The world is an iPhone.”

In other words, users expect systems to work intuitively. They shouldn’t have to understand the architectural layers beneath the interface.

Sofia Fernandez of Backlight offered a helpful metaphor.

She compared the system to a coffee machine.

“You store the milk in one place and the coffee somewhere else,” she said. “But when the user presses ‘latte,’ the machine figures out how to combine them.”

In modern MAM architecture, relational metadata may be the milk while vector intelligence supplies the coffee.

But the user should only see the latte.

The Cost of Intelligence

While the conversation often focused on capability, Mancz raised a less-discussed issue: cost.

Vector search systems rely on embeddings that must be stored, updated, and occasionally regenerated as models evolve.

That process, known as re-indexing or re-vectorization, can become computationally expensive as libraries grow.

“Very few discussions are happening about how much this will cost,” Mancz said.

In large archives containing millions of assets, refreshing embeddings or retraining models could become a significant operational consideration.
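The shape of that cost is easy to sketch. Every figure below (archive size, clip length, embedding throughput, GPU price) is an assumed placeholder, not a number from the panel; the point is that a full re-embedding pass scales linearly with the archive.

```python
# Back-of-envelope re-vectorization cost under assumed inputs.
assets = 5_000_000                    # clips in the archive (assumption)
avg_minutes_per_asset = 4             # average clip duration (assumption)
gpu_minutes_per_media_minute = 0.5    # embedding throughput (assumption)
gpu_hour_cost = 2.00                  # cloud GPU price in USD/hour (assumption)

gpu_hours = assets * avg_minutes_per_asset * gpu_minutes_per_media_minute / 60
cost = gpu_hours * gpu_hour_cost
print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost:,.0f} per full re-embedding pass")
```

Even with generous assumptions, refreshing embeddings across millions of hours of media is a budget line item, not a background task, which is why the question Mancz raised deserves more airtime.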

This reinforces the idea that vector intelligence will augment existing metadata structures rather than replace them outright.

Metadata Isn’t Disappearing

In the closing round, the panel returned to the original question: how does structured metadata evolve as AI-native workflows expand?

The consensus was clear.

Structured metadata isn’t going away.

But its role is changing.

Instead of being the primary mechanism for discovery, it becomes the framework that ensures governance, identity, and operational truth.

Pattan shared how Sesame Workshop recently revisited its own taxonomy to prepare for this shift.

“If we get the structure right,” he said, “then we can leverage it in any system; relational or AI-driven.”

Vector intelligence may generate massive volumes of contextual data (transcripts, object detection, sentiment analysis), but that information still needs structured anchors to connect it to the operational world.

So, Is Structured Metadata Enough?

No.

But it’s still essential.

Vector-based retrieval is transforming how media assets are discovered. Semantic search allows systems to surface content based on meaning, context, and similarity rather than explicit tagging.

Yet governance, rights management, compliance, and operational workflows still rely on deterministic data structures.

The future of MAM isn’t a choice between relational or vector architectures.

It’s a layered system where relational metadata defines truth, vector intelligence expands discovery, and applications orchestrate both behind the scenes.

Users may never see the difference.

But under the hood, the architecture of media asset management is quietly evolving.

 
