I currently need to prime a backup. There’s around 1.5TB of data on a Linux server in the cloud and a client wants regular backups of it to an OS X backup server they use for their media backups.
I have a local copy, so I thought I’d do the modern version of a station wagon full of tapes to reduce the bandwidth used. Unfortunately, this brings us to filesystem fun.
Filesystem | Linux | OS X |
---|---|---|
ext[234] | yes | via FUSE |
HFS+ | not well | yes |
FAT32/vfat | yes | yes |
UFS | kinda | kinda |
Of these, UFS is the most like the filesystem I’m copying from (ext4). But UFS is a rather fuzzy standard, Linux’s support for it isn’t great, and I’m not sure whether that support matches the variant of UFS that OS X handles.
FAT32/vfat is also a little fuzzy. To get it to fill the 3TB disk I’m using, I had to run the following:
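Roughly this, assuming mkfs.vfat from dosfstools (the device name and cluster size here are placeholders):

```
# FAT32 needs big clusters to span 3TB: -s 64 means 32KiB clusters,
# which keeps the cluster count well under FAT32's 2^28 limit.
mkfs.vfat -F 32 -s 64 /dev/sdXXX1
```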
The -s flag (sectors-per-cluster) is a little non-standard, so I’m hoping OS X can handle it. If not, I’ll give up on filesystems altogether and just dump a tar archive to the partition.
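If it comes to that, the fallback would look roughly like this (paths and device names are placeholders):

```
# On the Linux side: stream a tar archive straight onto the raw partition,
# no filesystem involved at all.
tar -cf - /srv/backup-data | dd of=/dev/sdXXX1 bs=4M

# Later, on the OS X side, read it back out (BSD dd wants a lowercase size suffix):
dd if=/dev/disk2s1 bs=4m | tar -xf -
```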
Speaking of partitions:
First I had to partition the disk, which meant parted, a tool I find tedious. So, just for my own future reference:
```
parted /dev/sdXXX
(parted) mklabel gpt
(parted) mkpart
Partition name?  []?
File system type?  [ext2]? fat32
Start? 2048s
End? 100%
```
That got me a whole-disk partition, properly aligned.
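For next time, the same thing can probably be done non-interactively; a sketch (device and partition names are placeholders, and the 2048s start keeps the 1MiB alignment):

```
# Scripted equivalent of the interactive session above
parted --script /dev/sdXXX mklabel gpt mkpart backup fat32 2048s 100%
# Sanity-check that partition 1 is optimally aligned
parted /dev/sdXXX align-check optimal 1
```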