Is the File System Dying?

The 4th Annual Chesafest brought together some of the sharpest minds in media technology for a day of panels, conversations, and honest debate in Towson, Maryland. One of the first sessions on the agenda, and arguably the one that set the intellectual tone for the entire day, was a vendor panel with a deliberately provocative question at its center:

Is the file system dying?

It sounds like a simple question. Far from it.

Moderated by Tom Kehn, VP of Solutions Consulting at CHESA, the panel brought together representatives from Backblaze, LucidLink, Suite, and Spectra Logic, four companies that, taken together, represent nearly every layer of the modern media storage stack. What followed was a candid, technically rich conversation about where object storage is headed, what role the file system actually plays, what “archive” even means anymore, and what happens when the next generation of media professionals doesn’t know what a file is.

(Chessie, CHESA’s Chief Acorn Procurement Officer, was also in attendance. His contributions, while enthusiastic, were not transcribed.)

MEET THE PANEL

Dave Simon — Sr. Director, Technology Analysis, Backblaze

Dave has spent years working in the MAM and media space and joined Backblaze just over a year before Chesafest 2026. Last year’s Chesafest was his first CHESA channel partner event. He brings a grounded, user-behavior-focused lens to storage conversations that cuts through a lot of the vendor hype in the space.

Ryan Servant — Sr. Director, Channel and Alliances, Suite

Ryan came to Suite after working at Iconik, drawn in by what Suite was building. He’s the first to admit he’s not the most technical person on any panel, and somehow, that usually makes him the clearest communicator in the room.

Richard “Rich” Warren — Senior Solutions Engineer, LucidLink

Rich joined LucidLink back in 2019 with a specific kind of conviction: he saw the technology, quit his job, and went to work there. That’s the kind of origin story that tends to make for good panelists. He’s been making the case for the file system as an abstraction layer ever since.

Nathan Halverson — Manager, Solutions Architecture, Spectra Logic

Nathan has been with Spectra for 14 years, managing their US solutions architecture team. He brought the deep archive and lifecycle management perspective to the panel, a view of storage that most people don’t think about until they desperately need it.

Tom Kehn — VP, Solutions Consulting, CHESA (Moderator)

Tom opened by setting the table clearly: this panel wasn’t about on-prem vs. cloud, tape vs. disk, or cost per terabyte. It was about something more fundamental, the file system itself, and whether the rise of object storage is quietly making it obsolete.

SETTING THE STAGE: WHAT ARE WE ACTUALLY DEBATING?

Tom framed the question well from the start. For decades, the file system has been the center of gravity in the media universe. Now the landscape looks something like this: native on-premises file systems, file system layers sitting over object storage (that’s where LucidLink and Suite live), pure object storage underneath (that’s Backblaze’s domain), and deep archive infrastructure behind legacy applications (Spectra’s world).

The provocation: if applications like NLEs evolve to talk directly to object storage, if Premiere and the rest of the Adobe suite can read S3 natively, does the file system layer become unnecessary? Does it quietly disappear? And what does that mean for the companies whose products live at that layer?

Tom threw it open to the panel. Rich Warren bit first.

“IT’S THE ABSTRACTION LAYER.” AND THAT’S NOT GOING AWAY

Rich’s answer was quick and consistent throughout the entire conversation: the file system isn’t dying because the file system is the abstraction layer. The same way virtualization abstracts hardware, the file system abstracts storage. Object storage will continue to grow, the economics and scalability are undeniable, but something still has to stand between the raw object layer and the humans and applications trying to use it.

“You’re going to get further growth in object, scalability and economics underneath, of course. But the actual abstraction layer is the file system, no different than if you looked at virtualization.”
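Rich's virtualization analogy can be made concrete. Object stores have no real folders; the "file system" is the layer that maps hierarchical paths onto a flat namespace of object keys and presents key prefixes as directories. A minimal sketch of that mapping (hypothetical function names, not any vendor's implementation):

```python
# Minimal sketch of a file-system abstraction over flat object keys.
# In an object store, a path like /projects/promo/a.mov is just the
# key "projects/promo/a.mov"; the abstraction layer is what makes
# prefixes behave like directories.

def path_to_key(path: str) -> str:
    """Map a POSIX-style path to a flat object key."""
    return path.lstrip("/")

def list_directory(keys: list[str], path: str) -> set[str]:
    """Present the immediate children of a 'directory' by grouping keys
    on the next '/' after the prefix, the way a delimiter listing does."""
    prefix = path.strip("/")
    prefix = prefix + "/" if prefix else ""
    children = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        name, sep, _ = rest.partition("/")
        children.add(name + "/" if sep else name)
    return children

keys = [
    "projects/promo/a.mov",
    "projects/promo/b.mov",
    "projects/doc/cut1.mov",
    "archive/2024/master.mxf",
]
print(list_directory(keys, "/projects"))  # {'promo/', 'doc/'}
```

Everything a user experiences as folders, renames, and browsing is synthesized at this layer, which is exactly why it persists even as the storage underneath goes object-native.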

Dave Simon added a dimension that’s easy to underestimate: users. Specifically, the deeply embedded human habit of organizing things into folders with names that make sense to them. He pointed to sports teams, often staffed with younger, less technically seasoned crews, who just want to see their files, organized logically, in something that doesn’t feel like a web application.

“As long as users continue to exist, the file system is not quite dead. And I don’t think it’s going to die, at least not in this generation.”

Ryan Servant, true to form, agreed, and then added a layer of his own. The expectations of end users, especially creative teams, have actually gone up. They want to see everything, all at once, instantly, across every application. The file system isn’t less important; it’s just that the burden of delivering that experience now falls more heavily on the people designing the infrastructure.

“The file system is probably more important for guys like you at CHESA, where you have to come up with really creative ways to design that and make sure the customer is getting that experience.”

In other words: the file system isn’t dying. It’s just getting harder to build well.

THE EXISTENTIAL QUESTION: WHAT IF ADOBE GOES NATIVE S3?

Tom pushed the panel toward a scenario that felt genuinely uncomfortable for at least a moment. What if Adobe announced that Premiere, After Effects, and the rest of the suite could now talk directly to object storage? What happens to the file system layer, and to the companies whose products live there, if the biggest NLEs no longer need it?

Rich’s answer was measured: even if Adobe goes native S3, Adobe isn’t the only application touching that data. The abstraction layer still serves everything else. You can’t design infrastructure around one application’s access pattern.

Dave Simon took a more practical angle. Think about a field production workflow: camera cards come off set carrying gigabytes, sometimes approaching a terabyte, of raw footage. Getting that into object storage, particularly cloud object storage, means an upload step that adds significant time before anyone can start working. The file system layer is what lets work start immediately on local or near-local storage while the underlying data lives wherever it needs to live.

“You still have to be able to support multiple disk tiers, multiple storage mediums. If it can link to an S3 bucket, that’s great, but also maintain that mount point for your day-to-day operations.”

The takeaway: even in a future where object native becomes common, the performance tier doesn’t disappear. Craft editing, finishing, and anything requiring extreme IOPS still needs fast local or near-local storage. The file system isn’t going away; it’s being complemented.

ARCHIVE WITHOUT ARCHIVE: IS EVERYTHING JUST “ONLINE” NOW?

One of the most interesting threads of the session was Tom’s question about archive itself. As object storage gets faster and cheaper, and as lifecycle management tools get more sophisticated, does “archive” stop being a meaningful category? Won’t it all eventually just be online?

Nathan Halverson had the most nuanced answer on this one. Yes, lifecycle management and tiering have transformed how data moves through the storage stack. Yes, object storage, both on-prem and in the cloud, has made data more readily accessible than tape or cold archive ever could. But the complexity underneath hasn’t gone away; it’s just moved.

“Everyone says S3 is S3, but it’s a lot more complex than that. We have to be very strategic in lifecycle management, understanding where data needs to be and how it interacts with the applications that are touching it.”

The implication for Spectra, where Nathan has spent 14 years helping organizations manage exactly that lifecycle complexity, is clear: the job hasn’t gotten simpler. It’s gotten more invisible, and invisible complexity is often the hardest kind to manage.
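The "strategic lifecycle management" Nathan describes is usually expressed as tiering policy: rules that transition data to colder storage classes as it ages. A hedged sketch of how such a policy resolves an object's tier (the day thresholds and class names are illustrative examples, not a recommendation):

```python
# Illustrative lifecycle policy: transition media to colder tiers as it
# ages. The thresholds below are hypothetical, not best practice.
LIFECYCLE_RULES = [
    {"after_days": 30,  "storage_class": "STANDARD_IA"},
    {"after_days": 90,  "storage_class": "GLACIER"},
    {"after_days": 365, "storage_class": "DEEP_ARCHIVE"},
]

def tier_for_age(age_days: int, rules=LIFECYCLE_RULES) -> str:
    """Return the storage class an object of this age would occupy:
    the last transition rule whose threshold has passed wins."""
    tier = "STANDARD"  # hot tier until the first transition fires
    for rule in sorted(rules, key=lambda r: r["after_days"]):
        if age_days >= rule["after_days"]:
            tier = rule["storage_class"]
    return tier

print(tier_for_age(10))   # STANDARD
print(tier_for_age(120))  # GLACIER
```

The complexity Nathan points to lives in choosing those thresholds per workflow, and in knowing which applications will come looking for data after it has moved.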

Ryan Servant connected this directly to Suite’s product direction. Suite’s announcement of going S3-native, the ability to interact with object storage the same way any other application does, without proprietary hooks or workarounds, is the natural progression. One fewer variable in the workflow. Creatives see their files. They interact with them. They don’t know or care what tier the data is on. That’s the goal.

“The creatives tend to not own the budget, so they don’t know everything can’t be tier one. But their experience? They want it to be.”

TAMS, LIVE READ, AND WHERE THINGS ARE ACTUALLY HEADING

Some of the most technically interesting moments came from the audience. Dave Helmly, Director of Professional Video and Audio at Adobe, raised the concept of TAMS (Time-Addressable Media Store) and the role it plays in this evolving ecosystem. TAMS is an emerging standard that allows applications to address media at a sub-file level, essentially treating a piece of media not as a monolithic file but as a set of time-indexed segments that can be read, streamed, and edited without ever fully downloading the source. It’s a critical piece of how the industry gets to a true object-native editing workflow without sacrificing performance.

“We have to have a way to read a proxy, not the real file, onto the timeline while it talks to Suite or Iconik or LucidLink, wherever the original media is. We have to have that balance.”
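The core idea behind time-addressable media can be sketched independently of the actual TAMS API. In this hypothetical segment index (illustrative only, not the specification's data model), a clip becomes a list of time-stamped segments in object storage, and a player fetches only the segments that overlap its timeline window:

```python
# Hypothetical time-addressable index: a clip as time-stamped segments
# rather than one monolithic file. (Illustrative only; not the TAMS spec.)
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds from clip start
    end: float
    object_key: str   # where this segment lives in object storage

def segments_for_window(index: list["Segment"], t0: float, t1: float) -> list[str]:
    """Return the object keys needed to play [t0, t1) without
    downloading the whole clip."""
    return [s.object_key for s in index if s.start < t1 and s.end > t0]

index = [
    Segment(0.0, 2.0, "clip/seg-000"),
    Segment(2.0, 4.0, "clip/seg-001"),
    Segment(4.0, 6.0, "clip/seg-002"),
]
print(segments_for_window(index, 1.5, 4.5))
# ['clip/seg-000', 'clip/seg-001', 'clip/seg-002']
```

Scrubbing to minute forty of an hour-long master means pulling a handful of segments, not the whole file, which is why this model matters for object-native editing.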

Dave Simon picked that thread up and pointed to Backblaze’s Live Read capability, the ability to read a growing file straight out of object storage as it’s being written. It’s not segmented the way TAMS is, but it lives in the same spirit: getting the media into the workflow without waiting for a complete ingest cycle.

“Backblaze is very much still focused in the media space, thinking about media and supporting workflows beyond just static object storage.”

The through line here is important: the performance tier isn’t being replaced by object storage. It’s being rebuilt on top of it. The file system remains, but the file itself is becoming more fluid, addressable by time, readable in motion, distributed across tiers in ways that the application (and the user) never has to see.

THE NEXT GENERATION DOESN’T KNOW WHAT A FILE IS

One of the sharpest questions of the session came from Jason Whetstone, Product Development Engineer at CHESA, who raised something that’s been quietly unsettling practitioners across the industry: the next generation of media professionals doesn’t organize their work in file systems. They organize it in apps.

Their footage is in Frame.io or in their phone’s camera roll. Their projects are in SaaS platforms like Canva. Their reference material is in Notion or Google Drive. When you ask them where a file is, they give you a blank look, because to them, files don’t exist. There are just things in apps.

Tom Kehn validated the concern immediately: this is what gives archivists headaches. When media lives inside twenty different SaaS platforms instead of on a governed file system with a MAM on top of it, the governance problem becomes enormous. It’s the Dropbox problem of a decade ago, multiplied by every generative AI tool, every cloud collaboration platform, and every creative SaaS platform that’s been adopted without IT oversight.

Ryan Servant’s response was both honest and forward-looking: the answer isn’t to force the next generation to care about file systems. The answer is to make the infrastructure so seamless that they never have to. The file is there. It’s governed. It’s accessible. They just don’t know it, and they shouldn’t have to.

“We need to make it so it’s okay if they don’t know where the file is or don’t care where the file is. And then it’s up to you guys to make sure there’s some governance around that.”

Nina Smith from the audience added a grounding point that resonated: the solutions on this panel are powerful, but not every organization needs the full stack. Understanding who is actually using the system (editors, archivists, compliance teams, executives) and designing around their specific needs and permissions is more important than any single technology decision.

“Seeking to understand who is using your system and who this is best for. If all you do is archive, some of this may not be for you.”

It was a good reminder that the most technically sophisticated solution isn’t always the right one, and that the organizations best served by vendors like these are the ones who do the discovery work first.

WHERE DOES THIS ALL LAND?

Tom closed the session with a thought worth sitting with. He’d told the panel this discussion would be the foundation of a CHESA blog series; they wanted to hear the real conversation before putting anything in writing. And the real conversation, it turned out, landed somewhere more nuanced than the provocative title suggested.

The file system isn’t dying. But it is transforming. Object storage is becoming the underlying substrate for nearly everything, and the file system is evolving from a storage mechanism into a true abstraction and governance layer, the interface between the raw economics of object storage and the humans and applications that need to work with data.

The companies on this panel (Backblaze, LucidLink, Suite, and Spectra Logic) each hold a different piece of that puzzle. Backblaze provides the scalable, cost-effective object storage foundation, with media-specific capabilities like Live Read that keep it relevant in active workflows. LucidLink and Suite each build the abstraction layer that makes that object storage feel like local, familiar, collaborative storage to the people who use it every day. And Spectra provides the lifecycle management and deep archive infrastructure that ensures data is governed, preserved, and accessible across its entire life, even decades into the future.

The center of gravity, as Nathan Halverson put it, has always lived at the application layer. That’s not changing. What’s changing is everything underneath it.

And that, it turns out, is a pretty good reason to keep talking about it.

ABOUT CHESAFEST

Chesafest is CHESA’s annual gathering of team members, technology partners, clients, and practitioners in the media, broadcast, and AV space, an event that blends the energy of a partner kickoff with substantive, practitioner-driven conversation about where the industry is actually headed.

Now in its 4th year, Chesafest has grown into something genuinely distinct: a program where CHESA’s team, its vendor partners, and its clients are all in the same room at the same time, participating in the same conversations. The panels are designed to surface real disagreement, real tradeoffs, and real-world insight. The 4th Annual Chesafest took place on February 25, 2026 in Towson, Maryland, drawing 19 vendor partners and a cross-section of CHESA’s client community.

The four vendor panels from Chesafest 2026:

Vendor Panel 1: Is the File System Dying? The Performance Tier in an Object-Native World

Featuring: Backblaze, LucidLink, Suite, and Spectra Logic | Moderated by Tom Kehn, CHESA

Vendor Panel 2: The Next Evolution of Media Asset Management: Is Structured Metadata Enough in the Age of Vector Intelligence?

Featuring: Backlight, Fonn Group, OrangeLogic, EditShare, and VIDA | With client perspective from Jason Patton, Sesame Workshop | Moderated by Felix Coats, CHESA

Vendor Panel 3: Automation, AI, and the Limits of Machine Decision-Making: Where Human Judgment Still Matters in Media Operations

Featuring: Telestream, Hiscale, HelmutUS, Adobe, and Scale Logic | Moderated by Jason Whetstone, CHESA

Vendor Panel 4: When Machines Enter the Control Room: AI, Authority, and Real-Time Decision-Making in Live Production

Featuring: LiveU, Vizrt, Netgear AV, and AI Media | Moderated by Jason “Pep” Pepino, CHESA

This blog series covers each panel in depth. If the file system and object storage conversation is in your world, the other sessions are worth your time too.