About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 on my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, and the performance on my hardware is abysmal: I get only 50–100 MB/s versus the several hundred I got with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That’s sad to hear, as btrfs has had lots of time to mature over the last 8 years. I would never have considered it 5–6 years ago, but now it seems like a solid choice.

Anyone else pondering the switch, or already using btrfs?

  • Domi@lemmy.secnd.me · 29 days ago

btrfs has been the default file system for Fedora Workstation since Fedora 33, so there’s not much reason not to use it.

  • sem@lemmy.blahaj.zone · 30 days ago

Btrfs came default with my new Synology, where I have it in Synology’s RAID config (similar to RAID 1, I think), and I haven’t had any problems.

I don’t recommend the btrfs driver for Windows 10, though. I had a drive using it that would often become unreachable under load, but that is more a Windows problem than a btrfs problem.

  • cmnybo@discuss.tchncs.de · 30 days ago

    Don’t use btrfs if you need RAID 5 or 6.

    The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

    https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
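
    One piece of this is the classic RAID5 “write hole” that the power-failure caveat refers to. A toy Python sketch of the parity math (illustrative numbers only, nothing btrfs-specific):

        # Toy RAID5 stripe: parity is the XOR of the data blocks.
        data = [0b1010, 0b0110]
        parity = data[0] ^ data[1]  # parity block written alongside the data

        # Power fails after a data block is rewritten but before the
        # parity block is updated:
        data[0] = 0b1111

        # The stripe is now internally inconsistent -- the "write hole".
        # Reconstructing a lost block from this parity would yield garbage.
        assert parity != data[0] ^ data[1]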

  • vividspecter@lemm.ee · 1 month ago

No reason not to. Old reputations die hard, but it’s been many, many years since I’ve had an issue.

I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks; you can upgrade a btrfs array ad hoc.

I’ll add that you should avoid RAID 5/6, as that is still not considered safe, but you mentioned RAID 1, which has no such issues.

  • SRo@lemmy.dbzer0.com · 30 days ago

One time I had a power outage and one of my btrfs drives (not in a RAID) couldn’t be read anymore after reboot. Even with help from the (official) btrfs mailing list it was impossible to repair the file system. After a lot of low-level tinkering I was able to retrieve the files, but the file system itself was absolutely broken; no repair process was possible. I’ve since switched to ZFS; its emergency options are much more capable.

  • zarenki@lemmy.ml · 30 days ago

I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (and maybe others) like separate filesystems without them actually being separate partitions.
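
    For illustration, a minimal fstab sketch of that layout (the UUID and the @/@home subvolume names are placeholders, just one common naming convention; any names work):

        # Same btrfs filesystem mounted twice, once per subvolume
        UUID=<fs-uuid>  /      btrfs  subvol=@      0  0
        UUID=<fs-uuid>  /home  btrfs  subvol=@home  0  0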

I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that over to ZFS a couple of years ago because I wanted more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be stuck in a purgatory of never being fixed, so I moved to raidz1 instead.

One thing I miss is heterogeneous arrays: with btrfs I could gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it would still use all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive would be the same size, and to a capacity I felt confident would last me a long time, since growing it after the fact is a burden.
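
    The raid1 rule of thumb behind that arithmetic, as a rough Python sketch (not btrfs’s exact allocator):

        # Every raid1 chunk is mirrored on two different devices, so usable
        # space is half the raw total -- unless one drive is larger than all
        # the others combined, in which case its excess can't be mirrored.
        def btrfs_raid1_usable(drives_tb):
            total = sum(drives_tb)
            return min(total / 2, total - max(drives_tb))

        # The mixed array above: 2x12TB + 2x8TB + 1x4TB = 44TB raw
        print(btrfs_raid1_usable([12, 12, 8, 8, 4]))  # -> 22.0 TB usable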