
Rclone pcloud

I've been thinking about this quite a bit lately. It seems most backup solutions either take forever to scan your files for changes, or upload everything every time, or don't allow easy versioning. (Rclone doesn't seem to have any versioning, and Duplicati takes a long time to compile and send files for larger datasets.)

I was wondering if it might be possible to send ZFS snapshots directly to a cloud backup. Since snapshots are virtually instant, all the change detection is handled by ZFS itself, so you don't need lengthy checksumming or inaccurate timestamp comparisons. Snapshots can be diffed using tools built into ZFS by default, so filtering out only changed data is handled for free by ZFS as well. Versioning just comes with the territory when dealing with snapshots, so it seems like if there were a way to directly upload snapshots, that would drastically simplify backups.

Based on all the reading I've been doing lately on ZFS and rclone, I came up with a command that I think should work (I haven't been able to test it yet; I'm moving drives and will use the freed ones later for a test pool):

zfs send | gzip | gsplit --bytes=10M --filter='rclone rcat remote:path/to/$FILE'

The first command should send a full snapshot to stdout, with "longestlife" referring to a periodic snapshot that expires the latest (I would like to integrate this with periodic snapshotting), to create a baseline. Gzip should then compress the data stream coming out of zfs send and feed it into gsplit; gsplit takes the stream, chops it into 10 MB chunks, and hands each chunk off to rclone to upload to a path on your remote. I'd like some feedback on whether something like this could work as simply as I think it may.
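To make that a bit more concrete, here is a rough sketch of what the full and incremental versions of the pipeline might look like. The dataset name tank/data, the snapshot names @longestlife and @today, and the remote path remote:zfs-backups/ are all placeholders, not taken from any real setup; gsplit is GNU split as installed on macOS/BSD (on Linux the same tool is just split). Note that the --filter string has to be single-quoted so that $FILE (the chunk name split generates, e.g. xaa, xab) is expanded by split for each chunk rather than by your own shell up front.

    # Full baseline send: compress, chop into 10 MB chunks, upload each chunk as it is produced
    zfs send tank/data@longestlife \
      | gzip \
      | gsplit --bytes=10M --filter='rclone rcat remote:zfs-backups/full_$FILE'

    # Incremental send: only the blocks that changed between the two snapshots
    zfs send -i tank/data@longestlife tank/data@today \
      | gzip \
      | gsplit --bytes=10M --filter='rclone rcat remote:zfs-backups/incr_$FILE'

Restoring would be roughly the reverse: download the chunks, concatenate them in order, gunzip the result, and pipe it into zfs receive. None of this has been tested against pCloud specifically, so treat it as a starting point rather than a known-good recipe.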