Some of the most useful tools we've built are ones that didn't exist on a brief. This is one of them.
As we took on more client work - automations, integrations, internal tools - we kept running into the same problem: every project that needed to store and serve files ended up with its own ad-hoc S3 setup. Different bucket configurations, different access policies, no shared conventions. Files were getting served directly from S3 with no caching layer. There was no way to see what was actually being accessed. And handing file access to a client or a non-technical team member meant either sharing AWS credentials or building something bespoke just for that project.
The Problem
Each engagement accumulated its own storage configuration. Files uploaded during one project had no connection to anything else. Public download URLs pointed at raw S3 endpoints with no analytics and no clean interface for browsing what was stored.
The failure mode wasn't dramatic - no production outage, no data loss. It was a slow accumulation of friction: "where did that file go?", "can you re-share the upload instructions?", "is anyone still downloading this asset?". None of it visible. All of it manual.
What We Built
We built a lightweight file platform: a minimal API backed by S3, a CDN delivery layer, and a management dashboard.
Upload and retrieval: Files are uploaded to a named bucket and path via a single API call. Public downloads are served through a CDN endpoint - fast, consistent, and cacheable. The original file metadata is preserved. Any project that needs to store and serve files points at the same infrastructure.
Access analytics: Every download is tracked in a non-blocking pipeline that writes back to S3 without adding latency to the request. Per-file stats include total download count, first and last access time, and a rolling history. Aggregated views show total files across the platform, overall download volume, and daily timelines. A top-files ranking surfaces what's actually being used.
Management dashboard: A web dashboard provides a file explorer built around a folder tree and file grid - the way people think about files, not the flat namespace S3 exposes. From the dashboard: browse by folder, upload files, copy CDN URLs, delete files, and view per-file analytics including access history and upload metadata. A global stats view shows platform-wide activity.
The Result
One API key. One CDN domain. One place to look when a client asks whether their assets are being accessed.
New projects no longer need their own storage configuration. Files uploaded across any engagement are accessible through the same dashboard and tracked under the same analytics. When a question comes up about file delivery - "is anyone downloading this?", "when was this last accessed?" - the answer is in the dashboard.
What Made It Work
The analytics pipeline was the piece that could have easily made this slow to use. Tracking every download means doing I/O on every request, and synchronous writes to S3 would have added unacceptable latency. The solution was a channel-based queue that accepts events without blocking the response, batches them in memory, and flushes to S3 every 30 seconds or when the queue reaches a threshold. From the outside, the download endpoint responds at CDN speed. The tracking happens behind it.
The dashboard interface was the other decision worth explaining. S3 is a flat key-value store - there are no real folders, just object keys with slashes in the name. Most S3 management tools expose that flat namespace and ask you to think in prefixes. The explorer in this dashboard reconstructs a folder tree from object key prefixes so you can navigate it the way you'd navigate a filesystem. It's a small thing, but it's the difference between a tool that gets used and one that gets abandoned.