My NFS story: In my first job, we used NFS to maintain the developer desktops. They were all FreeBSD and remote mounted /usr/local. This worked great! Everyone worked in the office with fast local internet, and it made it easy for us to add or update apps and have everyone magically get it. And when the NFS server had a glitch, our devs could usually just reboot and fix it, or wait a bit. Since they were all systems developers they all understood the problems with NFS and the workarounds.
What I learned though was that NFS was great until it wasn't. If the server hung, all work stopped.
When I got to reddit, solving code distribution was one of the first tasks I had to take care of. Steve wanted to use NFS to distribute the app code. He wanted to have all the app servers mount an NFS mount, and then just update the code there and have them all automatically pick up the changes.
This sounded great in theory, but I told him about all the gotchas. He didn't believe me, so I pulled up a bunch of papers and blog posts, and actually set up a small cluster to show him what happens when the server goes offline, and how none of the app servers could keep running as soon as they had to get anything off disk.
To his great credit, he trusted me after that when I said something was a bad idea based on my experience. It was an important lesson for me that even with experience, trust must be earned when you work with a new team.
I set up a system where app servers would pull fresh code on boot and we could also remotely trigger a pull or just push to them, and that system was reddit's deployment tool for about a decade (and it was written in Perl!)
I was at Apple around 15 years ago working as a sysadmin in their hardware engineering org, and everything - and I mean everything - was stored on NFS. We ran a ton of hardware simulation, all the tools and code were on NFS as well as the actual designs and results.
At some point a new system came around that was able to make really good use of the hardware we had, and it didn’t use NFS at all. It was more “docker” like, where jobs ran in containers and had to pre-download all the tools they needed before running. It was surprisingly robust, and worked really well.
The designers wanted to support all of our use cases in the new system, and came to us about how to mount our NFS clusters within their containers. My answer was basically: let’s not. Our way was the old way, and their way was the new way, and we shouldn’t “infect” their system with our legacy NFS baggage. If engineers wanted to use their system they should reformulate their jobs to declare their dependencies up front, use a local cache, and accept all the other reasonable constraints their system had. They were surprised by my answer, but it was the impetus for things to finally move off the legacy infrastructure, and it worked out well in the end.
I remember that era of NFS.
NFS volumes (for home dirs, SCM repos, tools, and data) were a godsend for workstations with not enough disk, and when not everyone had a dedicated workstation (e.g., university), and for diskless workstations (which we used to call something rude, and now call "thin clients"), and for (an ISV) facilitating work on porting systems.
But even when you needed a volume only very infrequently, if there was a server or network problem, then even doing an `ls -l` in the directory where the volume's mount point was would hang the kernel.
Now that we often have 1TB+ of storage locally on a laptop workstation (compared to the 100MB default of an early SPARCstation), I don't currently need NFS for anything. But NFS is still a nice tool to have in your toolbox, for some surprise use case.
> To his great credit, he trusted me after that when I said something was a bad idea based on my experience. It was an important lesson for me that even with experience, trust must be earned when you work with a new team.
True, though, on a risky moving-fast architectural decision, even with two very experienced people, it might be reasonable to get a bit more evidence.
And in that particular case, it might be that one or both of you were fairly early in your career, and couldn't just tell that they could bet on the other person's assessment.
Though there are limits to needing to re-earn trust from scratch with a new team. For example, the standard FAANG-bro interview, where everyone has to start from scratch for credibility, as if they are fresh out of school with zero track record and there were zero better ways to assess them, is ridiculous. The only thing more ridiculous is when companies that pay vastly less try to mimic that interview style. Every time I see that, I think that this company apparently doesn't have experienced engineers on staff who can get a better idea just by talking with someone, rather than running a fratbro hazing ritual.
Don't know about FreeBSD, but hard hanging on a mounted filesystem is configurable (if it's essential, configure it that way; otherwise don't). To this day I see plenty of code written that hangs forever if a remote resource is unavailable.
Hi, could you give some pointers about this? Thanks!
It's down to the mount options: use 'soft' and the program trying to access the (inaccessible) server gets an error return after a while, or 'intr' if you want to be able to kill the hung process.
The caveat is that a lot of software is written to assume that calls like fread(), fopen(), etc. will either quickly fail or succeed. However, if the file is over a network, things can obviously go wrong, so the common default behaviour is to wait for the server to come back online. The same issue applies to any other network filesystem; different OSes (and even the same OS with different configs) handle the situation differently.
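For concreteness, here's roughly what that looks like on a Linux client; the server address and paths are just placeholders, and note that on modern Linux kernels 'intr' is accepted but effectively a no-op (hung NFS operations can already be killed with a fatal signal):

```
# Default 'hard' behaviour: processes block until the server comes back.
mount -t nfs -o hard 192.168.1.10:/export/data /mnt/data

# 'soft': retry retrans times with a timeout of timeo (in tenths of a
# second), then return an I/O error to the application instead of hanging.
mount -t nfs -o soft,timeo=150,retrans=2 192.168.1.10:/export/data /mnt/data

# Equivalent /etc/fstab entry:
# 192.168.1.10:/export/data  /mnt/data  nfs  soft,timeo=150,retrans=2  0  0
```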
> What I learned though was that NFS was great until it wasn't. If the server hung, all work stopped.
Sheds a tear for AFS (Andrew File System).
We had a nice, distributed file system that even had solid security and didn't fail in these silly ways--everybody ignored it.
Morgan Stanley was a heavy user of AFS for deploying software and might still be for all I know.
"Most Production Applications run from AFS"
"Most UNIX hosts are dataless AFS clients"
https://web.archive.org/web/20170709042700/http://www-conf.s...
AFS implements weak consistency, which may be a bit surprising. It also seems to share objects, not block devices. Judging by its features, it seems to make most sense when there is a cluster of servers. It looks cool though, a bit more like S3 than like NFS.
Looks like these guys are still truckin' along?
https://www.openafs.org/
I find it fascinating that the way NFS mounts hang the process when they don't work is due to the broken I/O model Unix historically employed.
See, unlike some other more advanced contemporary operating systems like VMS, Unix (and early versions of POSIX) did not support async I/O, only nonblocking I/O. Furthermore, it assumed that disk-based I/O was "fast" (operations could always complete, or fail, in a reasonably brief period of time, because if the disks weren't connected and working you had much bigger problems than the failure of one process) and that network or pipe I/O was "slow" (operations could take arbitrarily long, or even fail outright after a long wait); so nonblocking I/O was not supported for file system access in the general case. Well, when you mount your file system over a network, you get the characteristics of "slow" I/O combined with the lack of nonblocking support of "fast" I/O.
A sibling comment mentions that FreeBSD has some clever workarounds for this. And of course it's largely not a concern for modern software because Linux has io_uring and even the POSIX standard library has async I/O primitives (which few seem to use) these days.
And this is one of those things that VMS (and Windows NT) got right, right from the jump, with I/O completion ports.
But issues like this, and the unfortunate proliferation of the C programming language, underscore the price we've paid as a result of the Unix developers' decision to build an OS that was easy and fun to hack, rather than one that encouraged correctness of the solutions built on top of it.
It wasn’t until relatively recently that approaches like await became commonplace. Imagine all the software that wouldn’t have been written if developers had been forced to use async primitives before languages were ready for them.
Synchronous IO is nice and simple.
I use NFS as a keystone of a pretty large multi-million data center application. I run it on a dedicated 100Gb network with 9k frames and it works fantastic. I'm pretty sure it is still in use in many, many places because... it works!
I don't need to "remember NFS", NFS is a big part of my day!
On a smaller scale, I run multiple PCs in-house diskless with NFS root; it's so easy to just create copies on the server and boot into them as needed that it's almost one image per bloated app these days (the server also boots PCs into Windows using iSCSI/SCST, and old DOS boxes from the 386 onwards with etherboot/samba). Probably a bit biased due to doing a lot of hardware hacking, where virtualisation solutions take so much more effort, but I've got to agree NFS (from V2 through V4) just works.
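For anyone curious what that kind of diskless setup involves, a minimal sketch on Linux might look like this (paths, addresses, and the DHCP/PXE side are assumptions, not the poster's actual setup):

```
# On the server, export a per-machine root filesystem
# (in /etc/exports; an NFS root generally needs no_root_squash):
#   /srv/nfsroot/pc1  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
exportfs -ra

# Kernel command line handed to the client by its netboot loader:
#   root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot/pc1 ip=dhcp rw
```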
My introduction to NFS was first at Berkeley, and then at Sun. It more or less just worked. (Some of the early file servers at Berkeley were drastically overcapacity with all the diskless Sun-3/50s connected, but still.)
And of course I still use it every day with Amazon EFS; Happy Birthday, indeed!
PornHub's origin clusters serve petabytes of files off of NFS mounts - it's still alive and well in lots of places.
NFS is the backbone of my home network servers, including file sharing (books, movies, music), local backups, source code and development, and large volumes of data for hobby projects. I don't know what I'd do without it. Haven't found anything more suitable in 15+ years.
Same. The latest thing I did was put SNES state and save files on NFS so I can resume the same game from my laptop, to the RetroPie (TV), and even on the road over WireGuard.
Ah Network Failure System, good memories.
A good time to plug my NFSv4 client in Go: https://github.com/Cyberax/go-nfs-client :) It's made for EFS, but works well enough with other servers.
What are most people using today for file serving? For our little lan sftp seems adequate, since ssh is already running.
NFS v4.2. Easy to set up if you don't need authentication. Very good throughput, at least so long as your network gear isn't the bottleneck. I think it's the best choice if your clients are Linux or similar. The only bummer for me is that mounting NFS shares from Android file managers seems to be difficult or impossible (let alone NFSv4).
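To give a sense of how little setup that is, here is a minimal sketch for a Linux server and client without Kerberos (package names, paths, and the subnet are Debian-ish assumptions):

```
# Server:
sudo apt install nfs-kernel-server
echo '/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# Client:
sudo mount -t nfs -o vers=4.2 192.168.1.10:/srv/share /mnt/share
```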
NFSv4 over WireGuard for file systems
WebDAV shares of the NFS shares for things that need that view
sshfs for when I need a quick and dirty solution where performance and reliability don't matter
9p for file system sharing via VMs
SMB2 for high-performance writable shares, WebDAV for high-performance read-only shares, also firewall-friendly.
SFTP is useful, but is pretty slow, only good for small amounts of data and a small number of files. (Or maybe I don't know how to cook it properly.)
SMB is great for LAN, but its performance over the internet is poor. That leaves SFTP and WebDAV in that case. SFTP would be my choice, if there is client support.
I suspect that NFS over the internet is also not the most brilliant idea; I assumed the LAN setting.
SMB has always worked great for me.
Depends on the use-case. Myself I'm using NFS, iCloud, and BitTorrent.
NFS! At least on my localnet.
> What are most people using today for file serving?
Google Drive. Or Dropbox, OneDrive, yada yada. I mean, sure, that's not the question you were asking. But for casual per-user storage and sharing of "file" data in the sense we've understood it since the 1980's, cloud services have killed local storage cold dead. It's been buried for years, except in weird enclaves like HN.
The other sense of "casual filesystem mounting" even within our enclave is covered well already by fuse/sshfs at the top level, or 9P for more deeply integrated things like mounting stuff into a VM.
No one wants to serve files on a network anymore.
> There is also a site, nfsv4bat.org [...] However, be careful: the site is insecure
I just find this highly ironic considering this is NFS we are talking about. Also, do they fear their ISPs changing the 40-year-old NFS specs on the fly or what? Why even mention this?
I have really mixed feelings about things like NFS, remote desktop, etc. The idea of having everything remote to save resources (or for other reasons) does sound really appealing in theory, and, when it works, is truly great. However, in practice it's really hard to make these things worth it, because of latency. E.g. for network block storage and for NFS, latency is usually abysmal compared to even a relatively cheap modern SSD, and many applications now expect a low-latency file system and perform really poorly otherwise.
I can saturate both 1 and 2.5 Gbps links with WireGuard-encrypted NFSv4 on thin clients that are relatively old.
I also use it for shared storage for my cluster and NAS, and I don't think NFS itself has ever been the bottleneck.
Latency-wise, the overhead is negligible over the LAN, though it can be noticeable when doing big builds or running VMs.
Fairly obviously a 1Gbps network is not going to compete with 5Gbps SATA or 20Gbps NVMe. Having said that, for real performance we load stuff over the network into local RAM and then generally run from that (faster than all other options). On the internal network the server also has a large RAM disk shared over NFS/SMB, and the performance PCs have plenty of RAM, so really it's a tradeoff, and the optimum is going to depend on how the system is used.
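A sketch of the RAM-disk-over-NFS trick, in case anyone wants to try it; the size, paths, and subnet are made up, and note that tmpfs has no UUID, so the export needs an explicit fsid:

```
# On the server: a RAM-backed filesystem, then export it like any directory.
sudo mount -t tmpfs -o size=64G tmpfs /srv/ramdisk
echo '/srv/ramdisk 192.168.1.0/24(rw,async,fsid=1,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
```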
want to emphasize, for those who haven't been following, a nice used 25Gb ethernet card is like $25 now
But how much is a 25GbE (or 40GbE) switch?
10gb is cheap as free → CRS304-4XG-IN
If I needed more than that, I’d probably do a direct link.
maybe around $400 for 25Gbe depending on your noise and power tolerance, and 40Gbe is dirt cheap now
if you only have two or three devices that need a fast connection you can just do point to point, of course
I'm considering NFS with RDMA for a handful of CFD workstations plus one file server on a 25GbE network. Anyone know if this will perform well? Will be using XFS with some NVMe disks as the base FS on the file server.
Yes. You might want to tune your NFS parameters, stick to NFSv4.2, consider whether caching is appropriate for your workloads and at what level, and think about how much of your NFS + networking you can keep in kernel space if you decide to further upgrade or expand your network's throughput.
Also consider what your server and client machines will be running; some NFS clients suck. Linux on both ends works really well.
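As a starting point (not a recommendation), client-side mounts for the two transport options might look something like this; the numbers are only there to benchmark against, server and paths are placeholders, and 20049 is just the conventional NFS/RDMA port:

```
# Plain TCP, several connections, large transfer sizes:
mount -t nfs -o vers=4.2,nconnect=8,rsize=1048576,wsize=1048576 server:/export /mnt/scratch

# NFS over RDMA (needs RDMA-capable NICs and kernel support on both ends):
mount -t nfs -o vers=4.2,proto=rdma,port=20049 server:/export /mnt/scratch
```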
Quite some time ago I implemented NFS for a small HPC cluster on a 40GbE network. A colleague set up RDMA later, since at the start it didn't work with the Ubuntu kernel available. Full NVMe on the file server too. While the raw performance using ZFS was kind of underwhelming (mdadm+XFS was about 2x faster), network performance was fine, I'd argue: serial transfers easily hit ~4GB/s on a single node, and 4K benchmarking with fio was comparable to a good SATA SSD (IOPS + throughput) on multiple clients in parallel!
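For reference, the kind of 4K fio run being described is roughly this (the parameters are my guesses, not the original benchmark; direct=1 keeps the client page cache out of the picture):

```
fio --name=nfs-randread --directory=/mnt/scratch --rw=randread \
    --bs=4k --size=4G --numjobs=4 --iodepth=32 --ioengine=libaio \
    --direct=1 --group_reporting
```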
Consider BeeGFS. Had good results with it using infiniband.
If only I could mount an NFS share on Android...
I looked into this a while ago and was surprised to find that no file explorer on Android seems to support it[1]. However, I did notice that VLC for Android does support it, though unfortunately only NFSv3. I was at least able to watch some videos from the share with it, but it would be nice to have general access to the share on Android.
[1] Of course, I didn’t test every single app — there’s a bucketload of them on Google Play and elsewhere…
Been a while, but if you root your phone and have access to the kernel source in order to build the NFS modules, would you be able to mount NFS shares then?
Auto home! And jumpstart! Aah, the network is the computer!
ZFS includes NFS; it's built in and still very handy!
If you're talking about OpenZFS, that is a thin wrapper over knfsd and the exports file. They don't actually ship an NFS daemon in the OpenZFS code.
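Concretely, the sharenfs property just feeds options to the system's NFS server, so on Linux it ends up as an ordinary knfsd export (the dataset name is a placeholder):

```
zfs set sharenfs=on tank/data    # or export options, e.g. sharenfs='rw=@192.168.1.0/24'
zfs get sharenfs tank/data
exportfs -v                      # the dataset shows up like any other export
```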
ZeroFS uses NFS/9P instead of FUSE!
https://github.com/Barre/ZeroFS
We are still using it for some pretty large apps. Still have not found a good and simple alternative. I like the simplicity and performance. Scaling is a challenge though.
Unfortunately there doesn’t seem to be any decent alternative.
SMB is a nightmare to set up if your host isn’t running Windows.
sshfs is actually pretty good but it’s not exactly ubiquitous. Plus it has its own quirks and performs slower. So it really doesn’t feel like an upgrade.
Everything else I know of is either proprietary, or hard to set up. Or both.
These days everything has gone more cloud-oriented. Eg Dropbox et al. And I don’t want to sync with a cloud server just to sync between two local machines.
> SMB is a nightmare to set up if your host isn’t running Windows.
Samba runs fine on my FreeBSD host? All my clients are Windows though.
If I wanted to have a non-windows desktop client, I'd probably use NFS for the same share.
It runs fine but it's a nightmare to set up.
It's one of those tools that, unless you already know what you're doing, you can expect to sink several hours into trying to get the damn thing working correctly.
It's not the kind of thing you can throw at a junior and expect them to get working in an afternoon.
Whereas NFS and sshfs "just work". Albeit I will concede that NFSv4 was annoying to get working back when that was new too. But that's, thankfully, a distant memory.
What happened to Transarc's DFS?
I looked, found the link below, but it seems to just fizzle out without info.
https://en.wikipedia.org/wiki/DCE_Distributed_File_System
Anyway, we used it extensively in the UIUC engineering workstation labs (hundreds of computers) 20+ years ago, and it worked excellently. I set up a server farm of Sun SPARCs 20 years ago, but used NFS for that.
AFS (on which DFS was based) lives on as OpenAFS [0]. And there is a commercial evolution/solution from AuriStor [1].
[0]: https://openafs.org/
[1]: https://www.auristor.com/filesystem/
I used to administer AFS/DFS and braved the forest of platform ifdefs to port it to different unix flavors.
Plusses were security (Kerberos), better administrative controls, and a global file space.
Minuses were generally poor performance, middling small-file support, and awful large-file support, plus substantial administrative overhead. The wide-area performance was so bad that the global namespace thing wasn't really useful.
I guess it didn't cause as many actual multi-hour outages as NFS, but we used it primarily for home/working directories and left the servers alone, whereas the accepted practice at the time was to use NFS for roots and to cross-mount everything, so it easily got into a 'help, I've fallen and can't get up' situation.
that's very similar to what we were doing for the engineering workstations (hundreds of hosts across a very fast campus network)
(off topic, but great username)
> SMB is a nightmare to set up if your host isn’t running Windows.
It's very easy on illumos-based systems due to the integrated SMB/CIFS service.
SMB is not that terrible to set up (it definitely has its quirks), but Apple devices don't interoperate well in my experience. SMB from my Samba server performs very well from Linux and Windows clients alike, but the performance from a Mac is terrible.
NFS support was lacking on Windows when I last tried. I used NFS (v3) a lot in the past, but outside a highly static, high-trust environment it was worse to use than SMB (for me). Especially the user-id mapping story is something I'm not sure is solved properly. That was a PITA at homelab scale: having to set up NIS was really something I didn't like, a road-warrior setup didn't work well for me, and I quickly abandoned it.
Windows 10/11 have native support. Writes aren’t terribly performant iirc.
> SMB is not that terrible to set up
Samba can be. Especially when compared with NFS
> NFS support was lacking on windows when I last tried.
If you need to connect from Windows then your options are very limited, unfortunately.
I mean the decent alternative is object storage if you can tolerate not getting a filesystem. You can get an S3 client running anywhere with little trouble. There are lots of really good S3 compatible servers you can self-host. And you don't get the issue of your system locking up because of an unresponsive server.
I've always thought that NFS makes you choose between two bad alternatives: "stop the world and wait" or "fail in a way that apps are not prepared for."
If you don't need a filesystem, then your options are numerous. The problem is sometimes you do need exactly that.
I do agree that object storage is a nice option. I wonder if a FUSE-like object storage wrapper would work well here. I've seen mixed results for S3 but for local instances, it might be a different story.
AWS has this "mountpoint for s3" thingy https://github.com/awslabs/mountpoint-s3
True. But for a home server, for example, I absolutely love the simplicity. I have 6 Lenovo 720q machines, one of them as data storage, just running simple NFS for quick daily backups before it pushes them to a NAS.
Lustre is big in the HPC/AI training world. Amazing performance and scalability, but not for the faint of heart.
9P? Significantly simpler, at the protocol level, than NFS (to the point where you can implement a client/server in your language of choice in one afternoon).
I'd seen a proposal to use loopback NFS in place of FUSE:
https://github.com/xetdata/nfsserve
See also https://www.legitcontrol.com as presented at https://braid.org/meeting-118 for a beautiful example of "local NFS" as a wonderful replacement for FUSE!