Filesystem Snapshots: Time Travel for Your Data


Understand how modern filesystems create instant, space-efficient snapshots. Explore snapshot mechanics, rollback operations, and backup strategies through interactive visualizations.


What Are Snapshots?

A snapshot is a read-only, point-in-time copy of a filesystem or subvolume. Unlike traditional backups:

  • Instant: Created in milliseconds (not hours)
  • Space-efficient: Only changed blocks consume space
  • Transparent: Based on Copy-on-Write mechanism
  • Lightweight: Thousands of snapshots possible

Use cases:

  • Before system updates (easy rollback)
  • Hourly/daily backups (time-travel recovery)
  • Testing environments (instant clone)
  • Versioning (keep file history)
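The first use case can be sketched as a tiny wrapper that takes a read-only snapshot before running a risky command. This is an illustrative helper, not a standard tool; `run=echo` makes it a dry run so it only prints the snapshot command (clear `run` and run as root on a real Btrfs filesystem to execute it).

```shell
#!/bin/sh
# Snapshot-before-risky-operation wrapper (illustrative sketch).
# With run=echo this only prints the snapshot command it would run.
run=echo

snap_then() {
  stamp=$(date +%Y%m%d-%H%M%S)
  $run btrfs subvolume snapshot -r /data "/snapshots/data-$stamp"
  "$@"                      # then run the risky command itself
}

snap_then echo "apt upgrade would run here"
```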

How Snapshots Work: Interactive Exploration

See snapshot creation, modification tracking, and rollback in action:

[Interactive visualization: a five-step walkthrough of snapshot creation, modification tracking, and rollback. Step 1 shows the initial state: three files in blocks 100-102 (30GB total), each block referenced once and owned by the current filesystem; no snapshots exist yet, so space usage is 30GB current, 0GB snapshots.]

Snapshot vs Traditional Backup

Traditional Backup:

1. Read all files: 100GB
2. Copy to backup location: 100GB
3. Time: hours
4. Space: 200GB total (100GB original + 100GB backup)

CoW Snapshot:

1. Create a metadata reference
2. Time: milliseconds
3. Space: 0GB initially (blocks shared)
4. Only changed blocks consume space later

Read-Only vs Writable Snapshots

Read-Only Snapshots

  • Btrfs: Default snapshot type
  • ZFS: zfs snapshot tank/data@snap1
  • Purpose: Historical reference, backup
  • Cannot: Modify snapshot content
  • Use: Rollback, backup, archiving

Writable Snapshots (Clones)

  • Btrfs: Snapshots taken without -r are writable; a read-only snapshot can be flipped via its ro property
  • ZFS: zfs clone tank/data@snap1 tank/clone
  • Purpose: Testing, development branching
  • Can: Modify independently
  • Use: Test environment, alternative versions
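On Btrfs the read-only flag is just a subvolume property, so a read-only snapshot can be made writable in place with `btrfs property set`. A dry-run sketch (the `run=echo` stub is only for illustration; clear it and run as root to execute for real):

```shell
#!/bin/sh
# Flip a read-only Btrfs snapshot to writable via its `ro` property.
# run=echo prints the command instead of executing it.
run=echo

make_writable() {
  $run btrfs property set -ts "$1" ro false
}

make_writable /snapshots/data-20250101
# prints: btrfs property set -ts /snapshots/data-20250101 ro false
```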

Snapshot Space Accounting

How Space Is Used

Original filesystem: 100GB
Snapshot created: +0GB (all blocks shared)
After modifications:
  - 10GB modified → 10GB new blocks
  - 90GB unchanged → still shared
Space usage:
  - Original: 100GB (90GB shared + 10GB new)
  - Snapshot: references 100GB, of which 10GB is unique (the old copies of the modified blocks)
  - Total on disk: 110GB (not 200GB!)
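The accounting above reduces to addition over shared and unique blocks; a two-line sanity check:

```shell
#!/bin/sh
# CoW space accounting for the example above: the snapshot pins only the
# old copies of blocks that were rewritten after it was taken.
original=100                 # GB referenced by the live filesystem
rewritten=10                 # GB modified since the snapshot
snapshot_unique=$rewritten   # old copies kept alive only by the snapshot
total=$(( original + snapshot_unique ))
echo "$total GB on disk"     # 110 GB, not 200
```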

Multiple Snapshots

Original: 100GB
Take Snap1: +0GB at creation (all shared)
Modify 5GB, take Snap2: +0GB at creation
Modify another 5GB:
  - Current: 100GB (90GB unchanged + 10GB rewritten)
  - Snap1: references 100GB, 10GB of it old blocks no longer current
  - Snap2: references 100GB, 5GB of it old blocks no longer current (shared with Snap1)
  - Total on disk: 110GB (not 300GB!)

Snapshot Operations

Creating Snapshots

Btrfs:

# Read-only snapshot
sudo btrfs subvolume snapshot -r /data /snapshots/data-$(date +%Y%m%d)

# Writable snapshot (clone)
sudo btrfs subvolume snapshot /data /test-env

# List snapshots
sudo btrfs subvolume list -s /

ZFS:

# Create snapshot
sudo zfs snapshot tank/data@baseline

# List snapshots
sudo zfs list -t snapshot

# Access snapshot data
ls /tank/data/.zfs/snapshot/baseline/

Rolling Back

Btrfs:

# Delete current subvolume
sudo btrfs subvolume delete /data

# Restore from snapshot (a snapshot of a snapshot is writable)
sudo btrfs subvolume snapshot /snapshots/data-20250101 /data

# Or: rename snapshot to replace current
sudo mv /data /data.old
sudo mv /snapshots/data-20250101 /data

ZFS:

# Rollback (-r destroys any newer snapshots!)
sudo zfs rollback -r tank/data@baseline

# Safe: clone snapshot instead
sudo zfs clone tank/data@baseline tank/data-restored

Deleting Snapshots

Btrfs:

# Delete specific snapshot
sudo btrfs subvolume delete /snapshots/data-20250101

# Delete all snapshots in a directory older than 30 days
sudo find /snapshots -maxdepth 1 -type d -mtime +30 -exec btrfs subvolume delete {} \;

ZFS:

# Delete snapshot
sudo zfs destroy tank/data@baseline

# Delete range
sudo zfs destroy tank/data@snap1%snap10

# Recursive delete
sudo zfs destroy -r tank/data@old

Automated Snapshots

Using Snapper (Btrfs)

# Install
sudo apt install snapper

# Create config for /home
sudo snapper -c home create-config /home

# Configure timeline snapshots
sudo snapper -c home set-config "TIMELINE_CREATE=yes"
sudo snapper -c home set-config "TIMELINE_LIMIT_HOURLY=10"
sudo snapper -c home set-config "TIMELINE_LIMIT_DAILY=7"

# Manual snapshot
sudo snapper -c home create --description "Before upgrade"

# List snapshots
sudo snapper -c home list

# Rollback
sudo snapper -c home rollback 5

Using zfs-auto-snapshot (ZFS)

# Install
sudo apt install zfs-auto-snapshot

# Enable for dataset
sudo zfs set com.sun:auto-snapshot=true tank/important

# Disable for tmp
sudo zfs set com.sun:auto-snapshot=false tank/tmp

# Snapshots created automatically:
#   frequent: every 15 min (keep 4)
#   hourly:   every hour   (keep 24)
#   daily:    every day    (keep 7)
#   weekly:   every week   (keep 4)
#   monthly:  every month  (keep 12)

Send/Receive: Incremental Backup

Btrfs Send/Receive

# Initial send (source must be a read-only snapshot)
sudo btrfs send /snapshots/data@snap1 | sudo btrfs receive /backup/

# Incremental send (only changes since the parent)
sudo btrfs send -p /snapshots/data@snap1 /snapshots/data@snap2 | \
  sudo btrfs receive /backup/

# Over network
sudo btrfs send /snapshots/data@snap2 | \
  ssh backup-server sudo btrfs receive /mnt/backup/

ZFS Send/Receive

# Initial send
sudo zfs send tank/data@snap1 | sudo zfs receive backup/data

# Incremental send
sudo zfs send -i @snap1 tank/data@snap2 | sudo zfs receive backup/data

# Over network
sudo zfs send -i @snap1 tank/data@snap2 | \
  ssh backup-server sudo zfs receive tank/data

# Resume interrupted transfer
sudo zfs send -t <resume_token>

Snapshot Best Practices

1. Retention Policy

Hourly:  keep 24 (1 day)
Daily:   keep 7  (1 week)
Weekly:  keep 4  (1 month)
Monthly: keep 12 (1 year)
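Enforcing a retention policy mostly reduces to date arithmetic over snapshot names. A minimal sketch, assuming snapshots named `data-YYYYMMDD` and GNU `date` (the helper and naming scheme are illustrative, not a standard tool):

```shell
#!/bin/sh
# Print snapshots older than $1 days, reading names like data-YYYYMMDD
# from stdin. $2 is "today" as YYYYMMDD, injectable so the logic is testable.
# Pipe the output to `btrfs subvolume delete` (or `zfs destroy`) to prune.
prune_candidates() {
  keep_days=$1 today=$2
  now=$(date -d "$today" +%s)
  while read -r name; do
    d=${name##*-}                          # trailing date component
    age=$(( (now - $(date -d "$d" +%s)) / 86400 ))
    [ "$age" -gt "$keep_days" ] && echo "$name"
  done
}

printf 'data-20250101\ndata-20250110\n' | prune_candidates 5 20250112
# prints: data-20250101
```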

2. Monitor Space Usage

# Btrfs: check snapshot space (requires quotas: btrfs quota enable /)
sudo btrfs qgroup show /

# ZFS: space breakdown per dataset, and per snapshot
sudo zfs list -o space
sudo zfs list -t snapshot

3. Automation

  • Use systemd timers or cron
  • Implement cleanup policies
  • Test restores regularly
  • Monitor snapshot creation failures
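One way to wire up the automation above is a systemd timer pair; unit names and paths here are illustrative:

```ini
# /etc/systemd/system/snap-data.service (illustrative)
[Unit]
Description=Read-only snapshot of /data

[Service]
Type=oneshot
# %% escapes systemd's own specifier expansion
ExecStart=/bin/sh -c 'btrfs subvolume snapshot -r /data /snapshots/data-$(date +%%Y%%m%%d-%%H%%M)'
```

```ini
# /etc/systemd/system/snap-data.timer (illustrative)
[Unit]
Description=Hourly snapshot of /data

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `sudo systemctl enable --now snap-data.timer`.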

4. Before Risky Operations

# Before system upgrade
sudo btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

# Before major changes
sudo zfs snapshot -r tank/vm@pre-migration

# Test in a clone
sudo zfs clone tank/vm@pre-migration tank/vm-test

Snapshots vs Other Backup Methods

Method        Speed      Space     Incremental  Offsite  Recovery Time
Snapshots     Instant    Minimal   Yes          No*      Instant
rsync         Slow       2x+       Pseudo       Yes      Slow
tar/gzip      Slow       ~40%      No           Yes      Slow
LVM snapshot  Fast       Varies    No           No       Fast
Cloud backup  Very slow  Variable  Yes          Yes      Very slow

*Send/receive enables offsite replication

Limitations and Caveats

Space Constraints

  • Snapshots share pool/filesystem space
  • Need 20%+ free space for good performance
  • Many snapshots = complex space accounting

Not True Backups

  • Snapshots on same disk as original
  • Disk failure = lose snapshots too
  • Always maintain offsite backups

Performance Impact

  • Many snapshots can slow deletes (reference counting)
  • Btrfs: balance operations slower with snapshots
  • ZFS: scrub reads all snapshot data

Deletion Complexity

# Can't delete a snapshot if clones depend on it
sudo zfs destroy tank/data@snap1
# Error: cannot destroy snapshot: snapshot has dependent clones

# Must promote the clone first
sudo zfs promote tank/clone
sudo zfs destroy tank/data@snap1

Integration with Package Managers

Snapper + YaST/Zypper (openSUSE)

  • Automatic snapshots before/after package changes
  • Rollback via GRUB menu
  • Integrated with btrfs subvolumes

apt-btrfs-snapshot (Ubuntu)

sudo apt install apt-btrfs-snapshot
# Automatic snapshot before apt operations

timeshift (Generic)

sudo apt install timeshift
# GUI for Btrfs/rsync snapshots
# Creates system restore points

Key Takeaways

  • Instant Creation: Snapshots created in milliseconds via CoW
  • Space Efficient: Only changed blocks consume space
  • Rollback Capability: Restore to any snapshot point
  • Send/Receive: Incremental replication for offsite backup
  • Not Backups: Snapshots alone insufficient (same disk)
  • Automate: Use snapper/zfs-auto-snapshot for regular snapshots
