This is essentially S3FS using EFS (AWS's managed NFS service) as a cache layer for active data and small random accesses. Unfortunately, this also means that it comes with some of EFS's eye-watering pricing:
— All writes cost $0.06/GB, since everything is first written to the EFS cache. For write-heavy applications, this could be a dealbreaker.
— Reads hitting the cache get billed at $0.03/GB. Large reads (>128kB) get directly streamed from the underlying S3 bucket, which is free.
— Cache is charged at $0.30/GB/month. Even though everything is written to the cache (for consistency purposes), it seems like it's only used for persistent storage of small files (<128kB), so this shouldn't cost too much.
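Not an official calculator, but as a rough back-of-the-envelope you can turn those three line items into a monthly estimate. A minimal Python sketch, using the prices quoted above and a made-up workload; it ignores regular S3 storage/request costs and the large reads that stream straight from the bucket:

```python
# Rough monthly cost estimate for the EFS-backed cache layer.
# Prices are the figures quoted above; verify against current AWS pricing.
WRITE_PER_GB = 0.06                 # every write lands in the cache first
CACHED_READ_PER_GB = 0.03           # reads served from the cache
CACHE_STORAGE_PER_GB_MONTH = 0.30   # persistent cache (mostly files < 128 kB)

def monthly_cost(gb_written, gb_read_from_cache, gb_small_files_cached):
    """Illustrative only: large reads streamed directly from S3 are free."""
    return (gb_written * WRITE_PER_GB
            + gb_read_from_cache * CACHED_READ_PER_GB
            + gb_small_files_cached * CACHE_STORAGE_PER_GB_MONTH)

# Hypothetical workload: 500 GB written, 200 GB of cached reads,
# 50 GB of small files resident in the cache.
print(f"${monthly_cost(500, 200, 50):.2f}/month")  # -> $51.00/month
```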
> For example, suppose you edit /mnt/s3files/report.csv through the file system. Before S3 Files synchronizes your changes back to the S3 bucket, another application uploads a new version of report.csv directly to the S3 bucket. When S3 Files detects the conflict, it moves your version of report.csv to the lost and found directory and replaces it with the version from the S3 bucket.
> The lost and found directory is located in your file system's root directory under the name .s3files-lost+found-file-system-id.
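If you want to notice when a conflicted copy has been quietly shunted aside, something like this sketch could work. The directory name pattern comes from the quote above (globbed, since the file-system id varies), and the mount path is just a hypothetical example:

```python
import glob
import os

# Hypothetical mount point; the lost and found directory sits in the file
# system's root as ".s3files-lost+found-<file-system-id>" per the docs
# quoted above, so glob for the prefix rather than hard-coding the id.
MOUNT = "/mnt/s3files"

for lost_dir in glob.glob(os.path.join(MOUNT, ".s3files-lost+found-*")):
    for root, _dirs, files in os.walk(lost_dir):
        for name in files:
            print("conflicted copy preserved:", os.path.join(root, name))
```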
I wish they offered some managed bridging to local NVMe storage. AWS NVMe is super fast compared to EBS, and EBS (node-exclusive access as a block device) is faster than EFS (multi-node access). I imagine this can go fast if you put some kind of further-cache-to-NVMe FS on top, but a completely vertically integrated option would be much better.
It appears that they put an actual file system in front of S3 (AWS EFS, basically) and then perform transparent syncing. The blog post discusses a lot of caveats (consistency, for example) and object naming (inconsistencies are surfaced to customers as events).
Having been a fan of S3 for such a long time, I really like this design. It's a good compromise, and kudos to whoever managed to push it through.
Because people will use it as a filesystem regardless of the original intent, because it is a very convenient abstraction. So might as well do it in an optimal and supported way, I guess?
They found a way to make money on it by putting a cache in front of it. Less load for them, better performance for you. Maybe you save money, maybe you don't.
Because without significant engineering effort (see the blog post), the mismatch between object-store semantics and file semantics means you will probably Have A Bad Time. In much earlier eras of S3, there were also some implementation specifics like throughput limits based on key prefixes (that one vanished circa 2016) that made it even worse to use for hierarchical directory shapes.
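To make the semantics mismatch concrete: S3 has no rename, so a filesystem-style `mv` of a "directory" has to be emulated as one copy plus one delete per key, which is neither atomic nor cheap. A minimal boto3 sketch (bucket name and prefixes are made up):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket

def rename_prefix(old_prefix, new_prefix):
    """Emulate `mv old/ new/`: one copy + one delete per object.
    Not atomic -- a crash partway through leaves both prefixes half-populated."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=old_prefix):
        for obj in page.get("Contents", []):
            old_key = obj["Key"]
            new_key = new_prefix + old_key[len(old_prefix):]
            s3.copy_object(Bucket=BUCKET, Key=new_key,
                           CopySource={"Bucket": BUCKET, "Key": old_key})
            s3.delete_object(Bucket=BUCKET, Key=old_key)

rename_prefix("reports/2023/", "archive/2023/")
```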
Eagerly awaiting the first blog post where developers didn't read the eventual-consistency part, lost data, and built some "genius" workaround with the help of the LLM that got them into that spot in the first place.
This is pretty different from s3fs. s3fs is a FUSE file system that is backed by S3.
This means that all of the non-atomic operations that you might want to do on S3 (including edits to the middle of files, renames, etc.) are run on the machine running s3fs. As a result, if your machine crashes, it's not clear what will show up in your S3 bucket or whether it will be corrupted.
s3fs is also slow, because the next stop after your machine is S3 itself, which isn't suitable for many file-based applications.
What AWS has built here is different: using EFS as the middle layer means there's a safe, durable place for your file system operations to go while they're being assembled into object operations. It also means that the performance should be much better than s3fs (it's talking to SSDs where data is 1ms away instead of HDDs where data is 30ms away).
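To illustrate the earlier point about mid-file edits: an object store only lets you replace whole objects, so a FUSE layer has to download the object, patch it in memory, and re-upload it, and the window in between is exactly where a crash or a racing writer leaves the bucket in an unclear state. A rough boto3 sketch, with a made-up bucket and key:

```python
import boto3

s3 = boto3.client("s3")

def patch_bytes(bucket, key, offset, new_bytes):
    """There is no partial-write API for S3 objects, so "editing the middle
    of a file" means download everything, patch, re-upload everything."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    patched = body[:offset] + new_bytes + body[offset + len(new_bytes):]
    # If the process dies between get_object and put_object, the bucket still
    # holds the old version; if another writer raced us, their change is lost.
    s3.put_object(Bucket=bucket, Key=key, Body=patched)

patch_bytes("example-bucket", "data/report.csv", 128, b"REVISED")
```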
I was thinking: "No way this has existed for decades." But the earliest reference I can find is 2008. Strictly speaking not decades, but much closer to it than I expected.
> we locked a bunch of our most senior engineers in a room and said we weren’t going to let them out till they had a plan that they all liked.
That's one way to do it.
> When you create or modify files, changes are aggregated and committed back to S3 roughly every 60 seconds as a single PUT. Sync runs in both directions, so when other applications modify objects in the bucket, S3 Files automatically spots those modifications and reflects them in the filesystem view automatically.
That sounds about right given the above. I have trouble seeing this as anything other than a giant "hack." I already don't enjoy projecting costs for new types of S3 access patterns, and I feel like this has the potential to double the complication I already experience here.
Maybe I'm too frugal, but I've been in the cloud for a decade now, and I've worked very hard to prevent any "surprise" bills from showing up. This seems like a great feature, if you don't care what your AWS bill is each month.
There is a staggering number of users doing this with extra steps using FSx for Lustre; their lives got greatly simplified today (unless they use GPUDirect Storage, I guess).
The way AWS keeps their pricing section completely separate from their system and architecture docs, despite architecture being the primary driver of cost, is a major contributor to this.
not everything should or needs to be some article geared towards the audience's convenience, or selling something to the audience. pretty much all allthingsdistributed articles are long form articles covering highly technical systems and contain a decent whack of detail/context. in my mind, they veer closer to "computer scientist does blog posts" compared to "5 ways React can boost your page visits" listicles.
edited slightly ... i really need to turn 10 minute post delay back on.
Built-in cache, CDN compatible, JSON metadata, concurrency safe, and it targets all S3-compatible storage systems.
I thought that would be their https://github.com/awslabs/mountpoint-s3. But no mention of this one either.
S3 Files does have the advantage of having a "shared" cache via EFS, but then that would probably also make the cache slower.
Single PUT per file, I assume?
We run data lakes using DuckLake, and this sounds really useful. GCP should follow suit quickly.
Sell the benefits.
I have around 9 TB in 21m files on S3. How does this change benefit me?