FUSE: Filesystem in Userspace Explained
Understand FUSE (Filesystem in Userspace) - the framework that lets you implement filesystems without writing kernel code. Learn how NTFS, SSHFS, and cloud storage work on Linux.
What is FUSE?
FUSE (Filesystem in Userspace) is a software interface that lets non-privileged users create their own filesystems without editing kernel code. It acts as a bridge between the kernel's VFS layer and a userspace program that implements the filesystem logic.
Think of FUSE as a translator that converts kernel filesystem requests into function calls that your regular program can handle, then translates your program's responses back into kernel-speak.
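The translation idea can be sketched as a tiny dispatcher: the kernel sends an operation name plus arguments, and userspace looks up a handler and returns a reply. This is illustrative only — real FUSE speaks a binary protocol over /dev/fuse, and all names here are made up:

```python
# Toy model of FUSE's dispatch idea (not the real wire protocol):
# a "kernel request" becomes an ordinary function call in your program.

def handle_getattr(path):
    # Pretend every path is a 13-byte regular file
    return {"st_mode": 0o100644, "st_size": 13}

def handle_read(path, size, offset):
    data = b"Hello World!\n"
    return data[offset:offset + size]

HANDLERS = {"getattr": handle_getattr, "read": handle_read}

def dispatch(request):
    """Translate a 'kernel request' into a regular function call."""
    op = request["op"]
    return HANDLERS[op](*request["args"])

print(dispatch({"op": "getattr", "args": ("/hello.txt",)}))
print(dispatch({"op": "read", "args": ("/hello.txt", 5, 0)}))
```

The real libfuse library does exactly this lookup, using a table of function pointers (`struct fuse_operations`) instead of a dict.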
Why FUSE Exists
Traditional filesystem development requires:
- Writing kernel modules (dangerous: a crash can take down the entire system)
- Deep kernel knowledge
- Careful memory management
- Complex debugging
- Root privileges for testing
FUSE changes the game:
- Write filesystems as regular programs
- Use any programming language
- Debug with normal tools
- Crashes only affect the filesystem, not the kernel
- Rapid prototyping and development
FUSE Architecture
A filesystem request flows through these layers:
- Application — issues a system call such as read(fd, buffer, size)
- VFS Layer
- FUSE Kernel Module
- /dev/fuse request queue
- libfuse (userspace)
- Your filesystem implementation
Implementation Example:
```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <fuse.h>

// Simple passthrough read handler
static int my_read(const char *path, char *buf, size_t size,
                   off_t offset, struct fuse_file_info *fi)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -errno;      // FUSE expects a negative errno on failure
    ssize_t res = pread(fd, buf, size, offset);
    close(fd);
    if (res == -1)
        return -errno;
    return res;             // number of bytes actually read
}
```
FUSE vs Native Filesystem
Compared with a native filesystem such as ext4, the key points are:
- Crashes only affect the filesystem, not the kernel
- Implement in any language
- Some performance is traded for development ease
How FUSE Works
The Request Flow
- Application makes a system call (open, read, write, etc.)
- VFS Layer receives the request
- FUSE Kernel Module intercepts VFS operations for FUSE mounts
- Request Queue: Kernel queues the request
- FUSE Library (libfuse) in userspace reads the request from /dev/fuse
- Your Filesystem handles the request
- Response travels back through the same path
```
# Request flow through FUSE layers:
# Application
#   ↓ (system call)
# VFS Layer
#   ↓ (filesystem operation)
# FUSE Kernel Module
#   ↓ (serialized request via /dev/fuse)
# FUSE Library (libfuse)
#   ↓ (function call)
# Your Filesystem Implementation
#   ↓ (response)
# [Same path in reverse]
```
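The queue-and-reply cycle can be modeled as a loop: the daemon reads a serialized request, calls the matching handler, and writes the reply back. A minimal sketch, with plain dicts and in-process queues standing in for the real binary messages on /dev/fuse:

```python
from queue import Queue

# Stand-ins for the two directions of /dev/fuse traffic
kernel_to_daemon = Queue()
daemon_to_kernel = Queue()

def filesystem_handler(request):
    # "Your Filesystem" -- here it only answers read requests
    if request["op"] == "read":
        return {"unique": request["unique"], "data": b"hi"}
    return {"unique": request["unique"], "error": -38}  # -ENOSYS

def daemon_step():
    """One iteration of the daemon's event loop: read, handle, reply."""
    request = kernel_to_daemon.get()
    daemon_to_kernel.put(filesystem_handler(request))

# The kernel queues a request on behalf of an application's read()...
kernel_to_daemon.put({"unique": 1, "op": "read"})
daemon_step()
# ...and the reply travels back along the same path.
reply = daemon_to_kernel.get()
print(reply)
```

The `unique` field mirrors how real FUSE tags each request so replies can be matched to the operation that triggered them.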
Key Components
1. FUSE Kernel Module
```shell
# Check if the FUSE module is loaded
lsmod | grep fuse
# fuse                  139264  3

# The module provides the /dev/fuse device
ls -l /dev/fuse
# crw-rw-rw- 1 root root 10, 229 Jan  1 12:00 /dev/fuse
```
2. FUSE Library (libfuse)
Handles communication with kernel:
```c
#include <fuse.h>

static struct fuse_operations my_operations = {
    .getattr = my_getattr,
    .readdir = my_readdir,
    .open    = my_open,
    .read    = my_read,
    .write   = my_write,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &my_operations, NULL);
}
```
3. Your Filesystem
Implements the actual filesystem logic in userspace.
Popular FUSE Filesystems
NTFS-3G
Full read/write NTFS support on Linux:
```shell
# Mount a Windows partition
sudo mount -t ntfs-3g /dev/sda1 /mnt/windows

# Behind the scenes:
# VFS → FUSE → ntfs-3g process → NTFS operations
```
SSHFS
Mount remote directories over SSH:
```shell
# Mount a remote directory
sshfs user@server:/path /local/mount

# Work with remote files as if they were local
ls /local/mount
cp file.txt /local/mount/

# Unmount
fusermount -u /local/mount
```
EncFS
Encrypted filesystem:
```shell
# Create an encrypted/decrypted directory pair
encfs ~/.encrypted ~/decrypted

# Files in ~/.encrypted are stored encrypted
# Files in ~/decrypted appear decrypted
```
Cloud Storage Filesystems
rclone
Mount cloud storage (Google Drive, Dropbox, S3):
```shell
# Mount Google Drive
rclone mount gdrive: /mnt/gdrive --daemon

# Mount an S3 bucket
rclone mount s3:bucket /mnt/s3
```
s3fs
Amazon S3 as a filesystem:
```shell
s3fs mybucket /mnt/s3 -o passwd_file=~/.passwd-s3fs
```
GlusterFS
Distributed filesystem:
```shell
# Mount a GlusterFS volume
mount -t glusterfs server:/volume /mnt/gluster
```
MergerFS
Combine multiple drives into one:
```shell
mergerfs -o defaults,allow_other /disk1:/disk2:/disk3 /mnt/pool
```
Writing a Simple FUSE Filesystem
Hello World Filesystem in Python
```python
#!/usr/bin/env python3
import os
import sys
import errno

from fuse import FUSE, FuseOSError, Operations


class HelloFS(Operations):
    def __init__(self):
        self.files = {
            '/': dict(st_mode=0o755 | 0o040000),  # Directory
            '/hello.txt': dict(
                st_mode=0o644 | 0o100000,  # Regular file
                st_size=13,
                content=b'Hello World!\n'
            ),
        }

    def getattr(self, path, fh=None):
        """Get file attributes."""
        if path not in self.files:
            raise FuseOSError(errno.ENOENT)
        attrs = self.files[path].copy()
        attrs['st_nlink'] = 1
        attrs['st_uid'] = os.getuid()
        attrs['st_gid'] = os.getgid()
        return attrs

    def readdir(self, path, fh):
        """List directory contents."""
        if path == '/':
            return ['.', '..', 'hello.txt']
        raise FuseOSError(errno.ENOENT)

    def open(self, path, flags):
        """Open a file."""
        if path not in self.files:
            raise FuseOSError(errno.ENOENT)
        return 0

    def read(self, path, length, offset, fh):
        """Read from a file."""
        if path not in self.files:
            raise FuseOSError(errno.ENOENT)
        content = self.files[path].get('content', b'')
        return content[offset:offset + length]


if __name__ == '__main__':
    FUSE(HelloFS(), sys.argv[1], foreground=True)
```
Usage:
```shell
# Install fusepy
pip install fusepy

# Mount the filesystem
python3 hello_fs.py /mnt/hello

# Use it
ls /mnt/hello
cat /mnt/hello/hello.txt

# Unmount
fusermount -u /mnt/hello
```
More Complex Example: Memory Filesystem
```python
import errno

from fuse import FuseOSError, Operations


class MemoryFS(Operations):
    def __init__(self):
        self.files = {}
        self.data = {}
        self.fd = 0

    def create(self, path, mode, fi=None):
        """Create a new file."""
        self.files[path] = dict(
            st_mode=mode | 0o100000,
            st_nlink=1,
            st_size=0,
        )
        self.data[path] = b''
        self.fd += 1
        return self.fd

    def write(self, path, buf, offset, fh):
        """Write to a file."""
        if path not in self.data:
            raise FuseOSError(errno.ENOENT)
        self.data[path] = (
            self.data[path][:offset] +
            buf +
            self.data[path][offset + len(buf):]
        )
        self.files[path]['st_size'] = len(self.data[path])
        return len(buf)

    def unlink(self, path):
        """Delete a file."""
        del self.files[path]
        del self.data[path]
```
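The write() handler above splices the incoming buffer into the existing bytes at the given offset. That slicing arithmetic is easy to check outside FUSE; the sketch below uses a hypothetical helper with the same logic, plus one addition the minimal handler omits — zero-padding for writes that start past the current end of file:

```python
def splice_write(existing: bytes, buf: bytes, offset: int) -> bytes:
    """Overlay buf onto existing at offset -- same arithmetic as
    MemoryFS.write(), plus zero-padding for sparse writes."""
    # Pad with zeros if the write starts past the current end of file
    if offset > len(existing):
        existing = existing + b"\x00" * (offset - len(existing))
    return existing[:offset] + buf + existing[offset + len(buf):]

print(splice_write(b"hello world", b"WORLD", 6))   # b'hello WORLD'
print(splice_write(b"abc", b"Z", 1))               # b'aZc'
print(splice_write(b"", b"new", 0))                # b'new'
```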
FUSE Performance Characteristics
Context Switches
Every operation involves multiple context switches:
User Process → Kernel → FUSE daemon → Kernel → User Process
Benchmark Comparison
```
Native ext4 sequential read:   550 MB/s  ████████████████████
Native ext4 random read:       180 MB/s  ██████████
FUSE passthrough sequential:   420 MB/s  ███████████████
FUSE passthrough random:       140 MB/s  ████████
NTFS-3G sequential read:       380 MB/s  █████████████
NTFS-3G random read:            90 MB/s  █████
SSHFS (local network):         110 MB/s  ████
SSHFS (internet):               10 MB/s  ▌
```
Performance Optimizations
1. Kernel Caching
```shell
# Enable kernel caching of file data (default)
mount -o kernel_cache ...

# Disable caching for always-fresh data
mount -o direct_io ...
```
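The payoff of caching is fewer round-trips into the userspace daemon. A small TTL cache sketches the effect (this is a conceptual model of attribute/entry caching, not FUSE code; all names and the timeout value are illustrative):

```python
import time

class AttrCache:
    """Tiny TTL cache: skip the userspace round-trip while entries are fresh."""
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.entries = {}   # path -> (expiry, attrs)
        self.misses = 0

    def getattr(self, path, fetch):
        now = self.clock()
        hit = self.entries.get(path)
        if hit and hit[0] > now:
            return hit[1]            # served from cache, no daemon call
        self.misses += 1
        attrs = fetch(path)          # the expensive userspace round-trip
        self.entries[path] = (now + self.ttl, attrs)
        return attrs

# Fake clock so the demo is deterministic
t = [0.0]
cache = AttrCache(ttl=1.0, clock=lambda: t[0])
fetch = lambda path: {"st_size": 13}

cache.getattr("/hello.txt", fetch)   # miss: calls fetch
cache.getattr("/hello.txt", fetch)   # hit: served from cache
t[0] = 2.0                           # entry has expired by now
cache.getattr("/hello.txt", fetch)   # miss again
print(cache.misses)                  # 2
```

This mirrors the trade-off in the mount options: a longer TTL means fewer daemon calls but staler metadata, while direct_io is effectively a TTL of zero.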
2. Large Reads/Writes
```shell
# Increase the maximum read size per request
mount -o max_read=131072 ...
```
3. Splice Support
Zero-copy data transfer. Splice support is negotiated through connection capabilities in the init() handler (libfuse 3):
```c
#include <fuse.h>

// Request splice-based zero-copy I/O when the connection is set up
static void *my_init(struct fuse_conn_info *conn, struct fuse_config *cfg)
{
    conn->want |= FUSE_CAP_SPLICE_READ |
                  FUSE_CAP_SPLICE_WRITE |
                  FUSE_CAP_SPLICE_MOVE;
    return NULL;
}
```
4. Multi-threading
```shell
# Run the FUSE daemon with multiple worker threads
./myfs -o max_threads=16 /mountpoint
```
FUSE Options and Mount Flags
Common Mount Options
```shell
# Allow other users to access the mount
mount -o allow_other ...

# Allow root (but not other users) to access
mount -o allow_root ...

# Let the kernel enforce permission checks from file modes
mount -o default_permissions ...

# Present files with a fixed UID/GID
mount -o uid=1000,gid=1000 ...

# Read-only mount
mount -o ro ...

# Disallow execution of binaries
mount -o noexec ...
```
Debug Options
```shell
# Run in the foreground with debug output
# (-d prints every FUSE request and reply; it implies -f)
./myfs -f -d /mountpoint
```
FUSE Security Considerations
1. Privilege Escalation
FUSE filesystems run with user privileges:
```shell
# Check which user owns the FUSE daemon
ps aux | grep myfs
# user  12345  0.0  0.1 ... myfs /mountpoint
```
2. Mount Restrictions
```shell
# /etc/fuse.conf
# Allow non-root users to pass the allow_other option
user_allow_other
```
3. Denial of Service
Malicious FUSE filesystems can:
- Hang processes with infinite delays
- Consume excessive memory
- Create infinite directory loops
Protections:
```shell
# Make FUSE operations interruptible by signals (libfuse 2 option)
mount -o intr ...

# Cap the daemon's memory before starting it
ulimit -v 1048576   # 1 GB virtual memory limit
```
Advanced FUSE Features
1. Extended Attributes
```python
def getxattr(self, path, name, position=0):
    """Get an extended attribute."""
    attrs = self.files[path].get('xattrs', {})
    try:
        return attrs[name]
    except KeyError:
        raise FuseOSError(errno.ENODATA)

def setxattr(self, path, name, value, options, position=0):
    """Set an extended attribute."""
    if path not in self.files:
        raise FuseOSError(errno.ENOENT)
    if 'xattrs' not in self.files[path]:
        self.files[path]['xattrs'] = {}
    self.files[path]['xattrs'][name] = value
```
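The same get/set semantics, including the ENODATA case for a missing attribute, can be exercised without mounting anything. A standalone sketch with module-level functions instead of methods (names are illustrative):

```python
import errno

# Per-file metadata store, mirroring the handlers above
files = {"/hello.txt": {}}

def setxattr(path, name, value):
    files[path].setdefault("xattrs", {})[name] = value

def getxattr(path, name):
    try:
        return files[path].get("xattrs", {})[name]
    except KeyError:
        # A missing attribute is reported as ENODATA
        raise OSError(errno.ENODATA, "No such attribute")

setxattr("/hello.txt", "user.comment", b"demo")
print(getxattr("/hello.txt", "user.comment"))   # b'demo'
try:
    getxattr("/hello.txt", "user.missing")
except OSError as e:
    print(e.errno == errno.ENODATA)             # True
```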
2. File Locking
```python
def lock(self, path, fh, cmd, lock):
    """Handle file locking."""
    # Implement advisory locking here
    return 0  # Success
```
3. Polling and Notifications
```c
// Notify the kernel of changes (low-level API)
fuse_lowlevel_notify_poll(ph);
fuse_lowlevel_notify_inval_inode(se, ino, off, len);
```
FUSE Debugging and Troubleshooting
Debug Techniques
1. Verbose Logging
```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def read(self, path, size, offset, fh):
    logger.debug(f"READ: path={path}, size={size}, offset={offset}")
    # ... implementation ...
```
2. Strace the FUSE Process
```shell
# Trace system calls of the running daemon
strace -p $(pidof myfs)

# Trace specific operations while launching it
strace -e trace=open,read,write ./myfs /mountpoint
```
3. Monitor FUSE Communication
```shell
# /dev/fuse is a character device, not a network socket, so tcpdump
# cannot capture it. Dump requests by running the daemon in debug mode:
./myfs -d /mountpoint

# Or trace the daemon's reads/writes on /dev/fuse:
strace -e trace=read,write -p $(pidof myfs)
```
Common Issues
"Transport endpoint is not connected"
```shell
# The FUSE daemon crashed or stopped.
# Solution: unmount and remount
fusermount -u /mountpoint
./myfs /mountpoint
```
"Permission denied"
```shell
# Check allow_other in /etc/fuse.conf
# Check the mount options
mount | grep fuse
```
Poor Performance
```shell
# Check whether caching is enabled
mount | grep cache

# Monitor context switches
vmstat 1

# Profile the FUSE daemon
perf top -p $(pidof myfs)
```
FUSE vs Native Filesystems
When to Use FUSE
✅ Good for:
- Prototyping new filesystems
- Accessing remote resources (SSH, cloud)
- Special-purpose filesystems (encryption, compression)
- Cross-platform filesystem support
- Non-critical performance applications
❌ Not ideal for:
- High-performance I/O
- System-critical filesystems (root, /usr)
- Real-time applications
- Heavy concurrent access
Performance Comparison Table
Aspect | Native FS | FUSE FS |
---|---|---|
Context Switches | Minimal | Multiple per operation |
CPU Overhead | Low | Higher |
Memory Copies | 1-2 | 2-4 |
Caching | Kernel integrated | Optional |
Latency | Microseconds | Milliseconds |
Throughput | Full hardware speed | 60-80% typical |
Future of FUSE
FUSE 3.x Improvements
- Better performance
- Improved security model
- Enhanced caching
- Splice support by default
io_uring Integration
Future versions may use io_uring for:
- Reduced context switches
- Batch operations
- Better async I/O
virtiofs
For VMs, combining FUSE with virtio:
```shell
# Inside the VM: mount a host directory with near-native performance
mount -t virtiofs myfs /mnt/host
```
Best Practices
- Cache Aggressively: Use kernel_cache when data doesn't change frequently
- Batch Operations: Combine multiple small operations
- Async When Possible: Don't block on network/slow operations
- Handle Errors Gracefully: Return proper errno values
- Profile Performance: Identify bottlenecks early
- Security First: Validate all inputs, limit resource usage
- Document Limitations: Be clear about performance/feature constraints
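"Return proper errno values" deserves a concrete illustration. C handlers return negative errno codes; in fusepy you raise FuseOSError and the library does the conversion. The sketch below shows the mapping with an illustrative helper (`translate_error` is not part of any FUSE API):

```python
import errno

def translate_error(exc: Exception) -> int:
    """Map a Python exception to the negative errno a C FUSE handler
    would return. (Illustrative; fusepy does this via FuseOSError.)"""
    if isinstance(exc, FileNotFoundError):
        return -errno.ENOENT
    if isinstance(exc, PermissionError):
        return -errno.EACCES
    if isinstance(exc, IsADirectoryError):
        return -errno.EISDIR
    return -errno.EIO  # generic I/O error as a safe fallback

print(translate_error(FileNotFoundError()))   # -ENOENT
print(translate_error(PermissionError()))     # -EACCES
```

Returning a vague EIO for everything technically works but makes failures hard for applications to handle; precise codes let `ls`, `cp`, and friends print meaningful messages.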
Conclusion
FUSE democratizes filesystem development, allowing anyone to create custom filesystems without kernel programming. While it trades some performance for safety and ease of development, FUSE enables innovative solutions like:
- Cloud storage integration
- Encryption layers
- Network filesystems
- Archive mounting
- Database filesystems
Whether you're mounting remote servers with SSHFS, accessing Windows drives with NTFS-3G, or building your own specialized filesystem, FUSE provides the bridge between your ideas and the kernel's VFS layer. Its good-enough performance and excellent safety make it the right tool for extending Linux's filesystem capabilities.