i bought a Mac Mini like a lot of people did for “openclaw” (lol). last year i also oversold myself on local LLMs for a few days and spent a few grand for no reason. but it worked out: with subagents + heavier MCP clients, the Mac Mini became my always-on local build machine, and i mostly “vibe code” from my MacBook into it.
the problem: macOS file sharing via SMB is ok for big files, but it’s painfully slow for dev trees full of tiny files (TypeScript, config, node_modules). i switched the “live filesystem” part to NFS and tuned it until it was both fast and reboot-safe.
also: this version avoids triple-backtick code fences on purpose, because Craft.do can sometimes “drop” a fence mid-page and then bash comments like # ... get parsed as headings.
why NFS over SMB (for this workload)
SMB on macOS can be rough on small-file workloads because:
Finder and friends do extra metadata work (including extended attributes)
SMB has more per-operation overhead (and macOS’s implementation doesn’t always feel optimized for “60k tiny files”)
directory listings can devolve into a lot of little round trips
NFS is simpler. on my LAN, tuned NFS made “listing big dirs” go from “why is this taking forever” to “ok, usable”.
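if you want a quick before/after on your own machine, timing a listing of one big directory over each mount tells you most of the story. a rough sketch; both paths are placeholders for wherever your shares actually mount:
# rough small-file smoke test; swap in your own SMB and NFS mount paths
time ls -la /Volumes/your-smb-share/project/node_modules > /dev/null
time ls -la /Volumes/yigitkonur/project/node_modules > /dev/null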
the setup i tested
server: Mac Mini (apple silicon), wired LAN, 192.168.1.200
client: MacBook (macOS), same network
workload: ~60,000 files / ~8GB, mostly TypeScript + config + node_modules
rtt: ~4ms (Wi‑Fi → switch → ethernet)
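the rtt number matters later (the per-file RPC math scales with it), so it’s worth measuring yours; a plain ping to the server is enough:
# round-trip time from the MacBook to the Mac Mini (IP from my setup)
ping -c 5 192.168.1.200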
server (Mac Mini): exports + nfsd tuning
/etc/exports
/Users/yigitkonur -alldirs -mapall=501:20 -network 192.168.1.0 -mask 255.255.255.0
what matters here:
-alldirs: lets you mount subdirectories, not only the export root
-mapall=501:20: maps all client access to a single local uid/gid (nice for single-user dev)
-network ... -mask ...: restricts access to your local subnet
note: 501:20 is common on macOS (first user + staff), but not guaranteed. if you reuse this, swap it for your own uid/gid.
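to find the uid/gid to put in -mapall and to validate the exports line, on the Mac Mini:
# find your own uid/gid (501:20 is just the common macOS default)
id -u
id -g
# validate /etc/exports syntax, then make nfsd re-read it
sudo nfsd checkexports
sudo nfsd update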
/etc/nfs.conf (server)
nfs.server.mount.require_resv_port = 0
nfs.server.require_resv_port = 0
nfs.server.nfsd_threads = 16
nfs.server.async = 1
nfs.server.fsevents = 0
nfs.server.wg_delay = 0
nfs.server.wg_delay_v3 = 0
nfs.server.reqcache_size = 512
nfs.server.request_queue_length = 512
nfs.server.export_hash_size = 256
nfs.server.tcp = 1
nfs.server.udp = 0
nfs.server.user_stats = 0
nfs.server.bonjour = 0
nfs.server.verbose = 0
quick “why” table:
| parameter | default | value | why |
|---|---|---|---|
| require_resv_port | 1 | 0 | macOS clients often mount from non-privileged ports; this avoids silent mount pain |
| nfsd_threads | 8 | 16 | more concurrency for lots of small file ops |
| async | 0 | 1 | faster writes; fine for dev where git is the source of truth |
| fsevents | 1 | 0 | reduces server-side overhead |
| wg_delay / wg_delay_v3 | 1000 / 0 | 0 / 0 | lowers latency for small-file writes |
| reqcache_size | 64 | 512 | better duplicate-request handling under retransmits |
| request_queue_length | 128 | 512 | avoids queue bottlenecks under bursts |
| export_hash_size | 64 | 256 | faster export lookup under load |
| bonjour | 1 | 0 | i connect by IP anyway |
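after editing /etc/exports and /etc/nfs.conf on the Mac Mini, restart nfsd and confirm the export actually shows up:
# restart nfsd so it re-reads its config, then sanity-check the export
sudo nfsd restart
sudo nfsd status
showmount -e localhost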
client (MacBook): automount + mount options + client tuning
/etc/auto_nfs
/Volumes/yigitkonur -vers=3,tcp,rw,hard,intr,async,noresvport,nfc,locallocks,nonegnamecache,rsize=1048576,wsize=1048576,readahead=16,noatime,retrans=5,timeo=30,actimeo=10,deadtimeout=600,rdirplus 192.168.1.200:/Users/yigitkonur
/etc/auto_master
add this at the end (without it, auto_nfs is ignored):
/- auto_nfs
/etc/nfs.conf (client)
nfs.client.access_for_getattr = 1
nfs.client.nfsiod_thread_max = 32
nfs.client.allow_async = 1
nfs.client.is_mobile = 0
nfs.client.access_cache_timeout = 60
nfs.client.statfs_rate_limit = 10
nfs.client.tcp_sockbuf = 16777216
nfs.client.readlink_nocache = 2
nfs.client.max_async_writes = 128
nfs.client.initialdowndelay = 2
nfs.client.nextdowndelay = 4
nfs.client.iosize = 1048576
mount options (what actually mattered)
| option | why |
|---|---|
| vers=3 | NFSv3 is the most stable on macOS. NFSv4 (macOS supports 4.0, not 4.1) has been flaky across releases |
| hard,intr | hard mounts don’t fail with random i/o errors; intr lets you ctrl+c stuck ops |
| async | allows buffering writes (needs nfs.client.allow_async = 1) |
| noresvport | common “why won’t NFS mount on macOS” fix |
| nfc | unicode normalization correctness on macOS |
| locallocks | avoids NLM overhead + stale lock weirdness |
| nonegnamecache | avoids phantom ENOENT when files are created/deleted frequently |
| rsize / wsize | bigger buffers = fewer trips for big reads/writes |
| noatime | avoids a write RPC on reads |
| actimeo=10 | fewer metadata RPCs; still reasonable freshness for dev |
| deadtimeout=600 | don’t declare the server dead too quickly (reboots happen) |
| rdirplus | big one for large dirs: fetch attrs with directory entries |
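once the volume is mounted, it’s worth confirming the options actually stuck; macOS’s nfsstat can print the parameters of each live NFS mount:
# check the active mount and its negotiated parameters
mount | grep /Volumes/yigitkonur
nfsstat -m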
client nfs.conf (the biggest win)
| parameter | default | value | why |
|---|---|---|---|
| nfs.client.access_for_getattr | 0 | 1 | biggest perf win i found: merges permission checks into getattr calls (less RPC spam) |
| nfs.client.nfsiod_thread_max | 16 | 32 | more concurrent i/o for lots of small files |
| nfs.client.allow_async | 0 | 1 | makes the async mount option actually do something |
| nfs.client.is_mobile | auto | 0 | prevents macOS from auto-unmounting “unresponsive” volumes on laptops |
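client-side nfs.conf changes generally apply when the mount is re-established, so refresh the automount after editing (same call the reconnect script below makes):
# re-read automounter maps and re-trigger the mount after nfs.conf edits
sudo automount -vc
ls /Volumes/yigitkonur > /dev/null && echo "mounted ok"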
optional macOS overhead to disable
# stop creating .DS_Store files on network shares
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
# disable Spotlight indexing on the NFS volume
sudo mdutil -i off /Volumes/yigitkonur
# disable quarantine/gatekeeper checks on network files (optional)
defaults write com.apple.LaunchServices LSQuarantine -bool NO
security note (because it matters): NFSv3 with sec=sys is basically “uid/gid auth”. no encryption, no signing, no kerberos. keep it on a trusted LAN.
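quick way to double-check those took effect:
# confirm Spotlight is off for the mount and the DS_Store setting stuck
mdutil -s /Volumes/yigitkonur
defaults read com.apple.desktopservices DSDontWriteNetworkStores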
making it reboot-safe (so it stops being a “works until it doesn’t” setup)
server side (Mac Mini)
enable nfsd:
sudo nfsd enable
optional: a tiny watchdog (cron is boring and reliable)
#!/bin/bash
# /usr/local/bin/nfsd-watchdog.sh
# cron's default PATH doesn't include /sbin or /usr/sbin, where nfsd and showmount live
export PATH="/sbin:/usr/sbin:/usr/bin:/bin"
if ! pgrep -x nfsd > /dev/null 2>&1; then
nfsd enable && nfsd start
fi
if ! showmount -e localhost 2>/dev/null | grep -q "/Users/yigitkonur"; then
nfsd update
fi
root crontab:
* * * * * /usr/local/bin/nfsd-watchdog.sh >> /tmp/nfsd-watchdog.log 2>&1
@reboot sleep 10 && /usr/local/bin/nfsd-watchdog.sh >> /tmp/nfsd-watchdog.log 2>&1
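to wire it up on the Mac Mini: make the script executable, then paste the two lines above into root’s crontab:
# install the watchdog; sudo crontab -e opens root's crontab
sudo chmod +x /usr/local/bin/nfsd-watchdog.sh
sudo crontab -e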
client side (MacBook)
a LaunchDaemon that checks every 30s and remounts if things go stale:
<!-- /Library/LaunchDaemons/com.supercmd.nfs-reconnect.plist -->
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.supercmd.nfs-reconnect</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/nfs-reconnect.sh</string>
</array>
<key>StartInterval</key>
<integer>30</integer>
<key>RunAtLoad</key>
<true/>
<key>StandardOutPath</key>
<string>/tmp/nfs-reconnect.log</string>
<key>StandardErrorPath</key>
<string>/tmp/nfs-reconnect.log</string>
</dict>
</plist>
and the reconnect script:
#!/bin/bash
# /usr/local/bin/nfs-reconnect.sh
MOUNT_POINT="/Volumes/yigitkonur"
NFS_SERVER="192.168.1.200"
EXPORT_PATH="/Users/yigitkonur"
# already mounted and working? done.
if mount | grep -q "$MOUNT_POINT" && ls "$MOUNT_POINT" > /dev/null 2>&1; then
exit 0
fi
# server reachable?
if ! ping -c 1 -t 2 "$NFS_SERVER" > /dev/null 2>&1; then
exit 0
fi
# force unmount stale mount
if mount | grep -q "$MOUNT_POINT"; then
umount -f "$MOUNT_POINT" 2>/dev/null
sleep 1
fi
mkdir -p "$MOUNT_POINT"
automount -vc 2>/dev/null
# fallback to manual mount if automount didn't pick it up
if ! mount | grep -q "$MOUNT_POINT"; then
mount -t nfs -o vers=3,tcp,rw,hard,intr,async,noresvport,nfc,locallocks,nonegnamecache,rsize=1048576,wsize=1048576,readahead=16,noatime,retrans=5,timeo=30,actimeo=10,deadtimeout=600,rdirplus \
"$NFS_SERVER:$EXPORT_PATH" "$MOUNT_POINT"
fi
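to install it on the MacBook (launchctl bootstrap is the current spelling; launchctl load -w also still works on older macOS):
# make the reconnect script executable and load the LaunchDaemon
sudo chmod +x /usr/local/bin/nfs-reconnect.sh
sudo chown root:wheel /Library/LaunchDaemons/com.supercmd.nfs-reconnect.plist
sudo launchctl bootstrap system /Library/LaunchDaemons/com.supercmd.nfs-reconnect.plist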
the big lesson: don’t rsync through the NFS mount
this surprised me at first, but the math is pretty unforgiving for small files:
LOOKUP → CREATE → WRITE → COMMIT = 4 RPCs × 4ms RTT ≈ 16ms minimum per file
for 60,000 files, that’s 60,000 × 16ms ≈ 16 minutes of pure protocol overhead before you even count real work. for bulk transfers, stream it instead:
tar cf - --exclude='.git' --exclude='.DS_Store' -C /local/project . \
| ssh mini "tar xf - -C ~/remote/project/"for me: 60,000 files in 1 min 57 sec. rsync-over-nfs was not close.
i wrapped it as a helper:
#!/bin/bash
# /usr/local/bin/nfs-sync.sh
# usage: nfs-sync.sh <local-dir> <remote-relative-dir>
LOCAL_DIR="${1:?Usage: nfs-sync.sh <local-dir> <remote-dir>}"
REMOTE_DIR="${2:?Usage: nfs-sync.sh <local-dir> <remote-dir>}"
FILE_COUNT=$(find "$LOCAL_DIR" -type f -not -path '*/.git/*' -not -name '.DS_Store' | wc -l | tr -d ' ')
echo "syncing $FILE_COUNT files: $LOCAL_DIR → mini:~/$REMOTE_DIR"
ssh mini "mkdir -p ~/$REMOTE_DIR"
tar cf - --exclude='.git' --exclude='.DS_Store' -C "$LOCAL_DIR" . \
| ssh mini "tar xf - -C ~/$REMOTE_DIR/"benchmark snapshot (what changed)
i reran the same tests as i tuned config:
| test | original | after tuning | after research | total gain |
|---|---|---|---|---|
| create 1000 files (1–10KB) | 101.7s (9/s) | 84.3s (11/s) | 61.9s (16/s) | +78% |
| read 1000 files | 25.1s (39/s) | 20.2s (49/s) | 10.3s (97/s) | +149% |
| stat 1000 files | 2.8s (363/s) | 1.9s (533/s) | 1.9s (535/s) | +47% |
| overwrite 500 files | 31.3s (15/s) | 18.8s (26/s) | 10.5s (47/s) | +213% |
| ls -la (1000 files) | 42.3s | 7.5s | 4.4s | 9.6x |
biggest contributors for me:
nfs.client.access_for_getattr = 1
switching from soft to hard,intr (stability)
actimeo=10 + rdirplus (metadata efficiency)
is_mobile=0 (persistence on a laptop)
NFSv3 vs NFSv4 on macOS: i wouldn’t bother with v4
my quick take after reading docs + community reports:
macOS supports NFSv4.0, not 4.1
vers=4.1 often falls back to v3 anyway
vers=4 has had regressions on some Sonoma / Sequoia builds
tuned v3 is already very good for dev workloads
NFS vs SMB vs alternatives
| protocol | small file perf | setup complexity | persistence | macOS support |
|---|---|---|---|---|
| NFS (tuned) | good | medium | excellent with watchdog | stable with v3 |
| SMB | meh for small files | easy | built-in reconnect | supported, but can be slow |
| SSHFS | moderate | easy | depends (FUSE) | project status varies |
| Mutagen | often strong | medium | good | active development |
| Syncthing | async sync | easy | excellent | good |
the perf test script i used
drop this in /tmp/nfs-perftest.sh:
#!/bin/bash
set -e
NFS_TARGET="/Volumes/yigitkonur/dev/my-tauri-apps/tauri-vibescroll"
TEST_DIR="$NFS_TARGET/.nfs-perftest-$$"
t() { perl -MTime::HiRes -e 'print Time::HiRes::time()'; }
log() { printf " %-35s %7.2fs %s\n" "$1" "$2" "$3"; }
cleanup() { rm -rf "$TEST_DIR" 2>/dev/null; }
trap cleanup EXIT
mkdir -p "$TEST_DIR"
echo; echo " NFS perf test"; echo " $(printf '%.0s-' {1..45})"
t0=$(t)
for i in $(seq 1 100); do dd if=/dev/urandom bs=$((1024+RANDOM%9216)) count=1 of="$TEST_DIR/f$i" 2>/dev/null; done; sync
d=$(echo "$(t) - $t0" | bc); log "create 100 files (1-10KB)" "$d" "$(echo "100/$d"|bc) files/s"
t0=$(t)
for f in "$TEST_DIR"/f*; do cat "$f">/dev/null; done
d=$(echo "$(t) - $t0" | bc); log "read 100 files" "$d" "$(echo "100/$d"|bc) files/s"
t0=$(t)
for f in "$TEST_DIR"/f*; do stat -f "%z" "$f">/dev/null; done
d=$(echo "$(t) - $t0" | bc); log "stat 100 files" "$d" "$(echo "100/$d"|bc) ops/s"
t0=$(t)
for i in $(seq 1 50); do echo "mod $i $RANDOM">"$TEST_DIR/f$i"; done; sync
d=$(echo "$(t) - $t0" | bc); log "overwrite 50 files" "$d" "$(echo "50/$d"|bc) files/s"
t0=$(t); ls -la "$TEST_DIR">/dev/null
d=$(echo "$(t) - $t0" | bc); log "ls -la (100 files)" "$d"
t0=$(t); dd if=/dev/zero of="$TEST_DIR/big" bs=1048576 count=10 2>/dev/null; sync
d=$(echo "$(t) - $t0" | bc); log "write 10MB sequential" "$d" "$(echo "10/$d"|bc) MB/s"
echo " $(printf '%.0s-' {1..45})"; echoappendix: if you post to Craft.do via API (auto-make markdown safe)
if you have a “github-flavored” version with triple-backtick code fences, you can convert it to Craft.do-safe markdown by turning fenced code blocks into indented code blocks before you POST.
here’s the jq filter (same idea as this post: no fences, no backticks required in the output):
def craft_safe_md:
gsub("\r\n"; "\n")
| (split("\n")) as $lines
| reduce $lines[] as $line (
{out: [], in_code: false};
if ($line | test("^[[:space:]]*\u0060{3}")) then .in_code = (.in_code | not)
elif .in_code then .out += ["    " + $line]
else .out += [$line] end
)
| .out | join("\n");
use it on the markdown string you’re about to send (single-pass jq; don’t round-trip through shell variables). the fence match is written as \u0060{3} (backtick’s unicode escape) so the filter itself never contains a literal backtick.
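one way to actually run it (the file name is just an example): save the def above plus a final line that just calls craft_safe_md into a .jq file, then feed the whole markdown document in as a single raw string:
# craft_safe.jq = the def above followed by a line containing only: craft_safe_md
jq -Rrs -f craft_safe.jq draft.md > draft.craft.md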
tldr
if SMB feels slow for small files on macOS, try NFSv3
on the client: hard,intr + noresvport + rdirplus + actimeo=10
on the client: nfs.client.access_for_getattr = 1 was the single biggest win
disable .DS_Store creation on network volumes + stop Spotlight indexing on the mount
make it reboot-safe with automount + a simple reconnect loop
don’t rsync through the mount for bulk copies; use tar | ssh