• Re: What do you use for RPi backups?

    From Jan Panteltje@3:770/3 to droleary@2017usenet1.subsume.com on Fri Apr 8 17:15:07 2022
    On a sunny day (Fri, 8 Apr 2022 16:47:00 -0000 (UTC)) it happened Doc O'Leary
    , <droleary@2017usenet1.subsume.com> wrote in <t2pou4$n66$1@dont-email.me>:

    I’ve tried a few “common” packages, they all seem to fall down for me. I
    have 500K+ files to warehouse, ranging in sizes from tiny up to 10G+. The
    biggest problem seems to be RAM usage, and I’d like to have something that
    works on a 3B, and ideally a 0W.

    Does anyone have something that still scales up when scaling down to an RPi?

    Thanks.

    I have 3.4 TB USB harddisks connected to my Pi4.
    They show up as /dev/sda.
    I made a partition sda2, formatted it with an ext4 file system,
    and created the directories I normally work with (root, that is, in my case).
    Every now and then I copy all data with cp -rp or cp -urp to that sda2
    directory. I also wrote a script to back up mail to it,
    or back up from the laptop with scp -p.
    Never a problem.

    Backups go to both Pi4 harddisks, so even if you drop one USB harddisk
    you still have your data.
    Scripting is a good idea, prevents typing errors.
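
    A rough sketch of the kind of script described above; the mount points
    and paths are invented for illustration, not taken from Jan's setup:

      #!/bin/sh
      # Mirror working directories to both USB disks with cp -urp:
      # -u copy only when the source is newer, -r recurse,
      # -p preserve modes, ownership and timestamps.
      set -e

      SRC=/root/work
      for dst in /mnt/usb1/backup /mnt/usb2/backup; do
          mkdir -p "$dst"
          cp -urp "$SRC" "$dst"
      done

      # Pull data from the laptop too; scp -p preserves timestamps.
      scp -rp laptop:/home/me/data /mnt/usb1/backup/laptop-data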

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to All on Fri Apr 8 16:47:00 2022
    I’ve tried a few “common” packages, they all seem to fall down for me. I have 500K+ files to warehouse, ranging in sizes from tiny up to 10G+. The biggest problem seems to be RAM usage, and I’d like to have something that works on a 3B, and ideally a 0W.

    Does anyone have something that still scales up when scaling down to an RPi?

    Thanks.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Ahem A Rivet's Shot@3:770/3 to droleary@2017usenet1.subsume.com on Fri Apr 8 18:26:49 2022
    On Fri, 8 Apr 2022 16:47:00 -0000 (UTC)
    Doc O'Leary , <droleary@2017usenet1.subsume.com> wrote:

    I’ve tried a few “common” packages, they all seem to fall down for me. I
    have 500K+ files to warehouse, ranging in sizes from tiny up to 10G+.
    The biggest problem seems to be RAM usage, and I’d like to have something that works on a 3B, and ideally a 0W.

    Does rsync not get the job done ?

    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Martin Gregorie@3:770/3 to Ahem A Rivet's Shot on Fri Apr 8 18:56:33 2022
    On Fri, 8 Apr 2022 18:26:49 +0100, Ahem A Rivet's Shot wrote:


    Does rsync not get the job done ?

    Absolutely, but strange as it may seem, some folks haven't (yet) heard of
    rsync or rsnapshot.

    I use rsync for my weekly backups. Recommended, because it never takes
    longer or does more work than absolutely necessary. I make weekly backups
    of two Linux laptops, my RPi and my Linux-based house server to a cycle of
    two 1TB USB drives (WD Essentials). Currently the complete set of backups
    occupies about 40% of these disks. The process is manually controlled from
    an SSH login to the machine which has the current USB backup drive
    connected to it, and is immediately followed by a weekly software update.

    FWIW the house server uses a nightly cronjob to make an rsnapshot backup
    to a single 2TB USB disk - this is more for recovering any previous day's
    finger troubles than anything else.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Richard Falken@1:123/115 to Doc O'Leary , on Fri Apr 8 15:00:42 2022
    Re: What do you use for RPi backups?
    By: Doc O'Leary , to All on Fri Apr 08 2022 04:47 pm

    I’ve tried a few “common” packages, they all seem to fall down for me. I have 500K+
    files to warehouse, ranging in sizes from tiny up to 10G+. The biggest problem seems
    to be RAM usage, and I’d like to have something that works on a 3B, and ideally a 0W.

    Does anyone have something that still scales up when scaling down to an RPi?

    Thanks.

    Not necessarily a Pi specific solution, but I find rsync to be fine.

    Using tar with a pipe should not be very taxing on your resources, but big tar files
    are not very manageable.

    Definitely try the rsync time machine described in Linux Magazine #258 (which
    basically means you do $ rsync -a Source_Dir First_Backup the first time you take a
    backup, and then $ rsync -a --link-dest=First_Backup Source_Dir Second_Backup for the
    next ones). That way the first backup will take long, but the next ones will be very
    quick.
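
    A minimal sketch of that rotation as a script; the dated snapshot names
    and the "latest" symlink are illustrative choices, not from the magazine
    article:

      #!/bin/sh
      # rsync "time machine": unchanged files become hard links into the
      # previous snapshot, so each run stores only what actually changed.
      set -e

      SRC=/home/me/data
      ROOT=/mnt/backup
      NEW="$ROOT/$(date +%Y-%m-%d)"   # today's snapshot directory
      LAST="$ROOT/latest"             # symlink to the previous snapshot

      if [ -e "$LAST" ]; then
          rsync -a --link-dest="$LAST" "$SRC/" "$NEW/"
      else
          rsync -a "$SRC/" "$NEW/"    # first run: full copy
      fi
      ln -sfn "$NEW" "$LAST"          # advance the "latest" pointer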

    --
    gopher://gopher.richardfalken.com/1/richardfalken
    --- SBBSecho 3.15-Linux
    * Origin: Palantir * palantirbbs.ddns.net * Pensacola, FL * (1:123/115)
  • From Hermann Riemann@3:770/3 to All on Sat Apr 9 13:12:00 2022
    The directories which I want to save,
    I copy with scp
    to dedicated directories on PCs.

    The backup is then done by the PC.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to Ahem A Rivet's Shot on Sat Apr 9 17:10:36 2022
    For your reference, records indicate that
    Ahem A Rivet's Shot <steveo@eircom.net> wrote:

    Does rsync not get the job done ?

    Well, it gets *a* job done, but a good backup is more than just efficient
    file copying. Since I have multiple RPi devices (along with some non-RPi machines), my data is best managed with something that does deduping and snapshotting. Of all the things I tried, <https://github.com/bup/bup>
    had the most useful features, but it struggled to run on my RPi for the
    files I have.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From scott@alfter.diespammersdie.us@3:770/3 to droleary@2017usenet1.subsume.com on Mon Apr 11 15:30:47 2022
    Doc O'Leary , <droleary@2017usenet1.subsume.com> wrote:
    I’ve tried a few “common” packages, they all seem to fall down for me. I
    have 500K+ files to warehouse, ranging in sizes from tiny up to 10G+. The biggest problem seems to be RAM usage, and I’d like to have something that works on a 3B, and ideally a 0W.

    Does anyone have something that still scales up when scaling down to an RPi?

    Have you looked at rsnapshot? I use that to back up to S3-compatible offsite storage, but it has several storage backends available.

    --
    _/_
    / v \ Scott Alfter (remove the obvious to send mail)
    (IIGS( https://alfter.us/ Top-posting!
    \_^_/ >What's the most annoying thing on Usenet?

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to scott@alfter.diespammersdie.us on Tue Apr 12 18:54:21 2022
    For your reference, records indicate that
    scott@alfter.diespammersdie.us wrote:

    Have you looked at rsnapshot?

    My problem with rsync-based solutions is that they seem to do *way* too
    much processing in order to figure out how to do an efficient transfer.
    I mean, to my mind, if I change 3 files out of 500K for a total of 3MB
    out of 1TB of data, backing that up should be *fast*. I just don’t see
    that when I use rsync-based solutions.

    I *do* already use rsync extensively for my current “backup” needs, but
    it just doesn’t have the smarts I’d like to see in a proper backup tool. Like I said, bup has features that are more in line with what I need, but
    it seems to have trouble scaling down to an RPi sized server.

    I was just hoping that something out there with good support was better
    than the scripts I’ve written myself.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to All on Tue Apr 12 22:30:19 2022
    On 12/04/2022 19:54, Doc O'Leary wrote:

    For your reference, records indicate that
    scott@alfter.diespammersdie.us wrote:

    Have you looked at rsnapshot?

    My problem with rsync-based solutions is that they seem to do *way* too
    much processing in order to figure out how to do an efficient transfer.
    I mean, to my mind, if I change 3 files out of 500K for a total of 3MB
    out of 1TB of data, backing that up should be *fast*. I just don’t see
    that when I use rsync-based solutions.

    I make nightly rsync differential backups of the SD cards of 15
    Raspberry Pi's to SD card image files on a USB SSD drive connected to
    one of the Pi 4s. The Pi's are of various generations, from 2 to 4, with
    lots of Zero Ws; most are connected via WiFi, 3 on Ethernet, and a couple
    are at remote sites. On average there is 961MB of data in 4426 files
    transferred, and the time to back up all of them sequentially is 14m26s,
    which I don't think is too bad.

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From F. W.@3:770/3 to All on Wed Apr 13 10:04:23 2022
    Am 08.04.2022 um 18:47 schrieb Doc O'Leary:

    I’ve tried a few “common” packages, they all seem to fall down for
    me. I have 500K+ files to warehouse, ranging in sizes from tiny up
    to 10G+. The biggest problem seems to be RAM usage, and I’d like to
    have something that works on a 3B, and ideally a 0W.

    Does anyone have something that still scales up when scaling down to
    an RPi?


    Overgrive

    FW

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From DeepCore@3:770/3 to Doc O'Leary on Wed Apr 13 10:40:29 2022
    Doc O'Leary wrote on 2022-04-12 at 20:54:


    My problem with rsync-based solutions is that they seem to do *way* too
    much processing in order to figure out how to do an efficient transfer.
    I mean, to my mind, if I change 3 files out of 500K for a total of 3MB
    out of 1TB of data, backing that up should be *fast*. I just don’t see
    that when I use rsync-based solutions.

    According to the following article on Arstechnica, rsync has to inspect
    every file to determine which differences are to be sent over the
    wire... that's the processing you experience.

    https://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/

    Sometimes using ZFS is better ...

    I'm actually trying out this way with a Pi 4 ...

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Richard Kettlewell@3:770/3 to Theo on Wed Apr 13 11:53:32 2022
    Theo <theom+news@chiark.greenend.org.uk> writes:
    DeepCore <devap@deepcore.eu> wrote:
    Doc O'Leary wrote on 2022-04-12 at 20:54:

    My problem with rsync-based solutions is that they seem to do *way* too
    much processing in order to figure out how to do an efficient transfer.
    I mean, to my mind, if I change 3 files out of 500K for a total of 3MB
    out of 1TB of data, backing that up should be *fast*. I just don’t see
    that when I use rsync-based solutions.

    According to the following article on Arstechnica, rsync has to inspect
    every file to determine which differences are to be sent over the
    wire... that's the processing you experience.

    By default rsync just looks at file metadata: are the file's length, date, attributes the same? If so, skip it. If they differ, go through the file and work out what changed, then send the changes. That means it has to inspect every inode, but not every file. You can make it checksum the file contents for every file, rather than just those with differing metadata, which of course is a lot slower.

    That’s true, but there’s another way that rsync can be inefficient for backups, depending on what you’re trying to achieve.

    If your model is that the backup is a single tree, mutated by each
    successive backup, then the cost of rsync is reading all the metadata on
    both sides, and copying the changes. The downside is that you don’t get historical backups.

    If your model is that you make a fresh tree for each backup, with hardlinks
    between unchanged files, then you have the additional cost of creating
    all the directories and making links to unchanged files. Cheaper than
    copying everything but still relatively expensive.

    For me the fact that each backup is a complete tree, that can be
    navigated and restored with quite basic tooling, is enough of an
    advantage that I can accept that extra cost compared to a more efficient design.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Theo@3:770/3 to DeepCore on Wed Apr 13 11:32:37 2022
    DeepCore <devap@deepcore.eu> wrote:
    Doc O'Leary wrote on 2022-04-12 at 20:54:


    My problem with rsync-based solutions is that they seem to do *way* too much processing in order to figure out how to do an efficient transfer.
    I mean, to my mind, if I change 3 files out of 500K for a total of 3MB
    out of 1TB of data, backing that up should be *fast*. I just don’t see that when I use rsync-based solutions.

    According to the following article on Arstechnica, rsync has to inspect
    every file to determine which differences are to be sent over the
    wire... that's the processing you experience.

    By default rsync just looks at file metadata: are the file's length, date, attributes the same? If so, skip it. If they differ, go through the file
    and work out what changed, then send the changes. That means it has to
    inspect every inode, but not every file. You can make it checksum the file contents for every file, rather than just those with differing metadata,
    which of course is a lot slower.
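
    To make that concrete, the difference is visible in the flags; these are
    standard rsync options, though the paths here are hypothetical:

      # Default "quick check": compare size and mtime from metadata only;
      # contents are read just for files whose metadata differs.
      rsync -a /data/ backuphost:/backup/data/

      # Force a full content comparison: checksum every file on both
      # sides. Much slower, but catches changes that kept size and mtime.
      rsync -ac /data/ backuphost:/backup/data/

      # -i itemizes changes, showing why each file was transferred.
      rsync -ai /data/ backuphost:/backup/data/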

    If you want to avoid the inode inspection you need help from the filesystem
    to keep track of changes when you save them, rather than when you back up.
    Filesystems like ZFS inherently do that. It's also possible to install
    software that monitors file changes dynamically. That makes backups faster
    at the expense of making file accesses slower.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Jim Jackson@3:770/3 to Doc O'Leary on Wed Apr 13 17:48:19 2022
    On 2022-04-12, Doc O'Leary <droleary@2017usenet1.subsume.com> wrote:
    I *do* already use rsync extensively for my current “backup” needs, but
    it just doesn’t have the smarts I’d like to see in a proper backup tool.
    Like I said, bup has features that are more in line with what I need, but
    it seems to have trouble scaling down to an RPi sized server.

    What do you mean by "have trouble scaling down to an RPi sized server".
    Have you tried bup? What were the problems you had?

    I've not come across bup before and am curious.

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to druck on Wed Apr 13 21:01:19 2022
    On 12/04/2022 22:30, druck wrote:
    I make nightly rsync differential backups of the SD cards of 15
    Raspberry Pi's to SD card image files on a USB SSD drive connected to
    one of the Pi 4s. The Pi's are of various generations, from 2 to 4 with
    lots of Zero Ws, most are connected via WiFi, 3 on Ethernet a couple are
    at remote sites. On average there is 961MB data in 4426 files
    transferred, and the time to back up all of them sequentially is 14m26,
    which I don't think is too bad.

    And it does work. I had a Pi fail to reboot yesterday; I found the
    superblocks on the root partition had been corrupted, so unfixable. So I
    just used dd to overwrite it with last night's backup image, and it was
    working again within 5 minutes.
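
    That restore is a single, destructive command; the image path and the
    card's device name below are invented examples, so check the device
    with lsblk before running anything like it:

      # Write last night's image back onto a replacement SD card.
      # WARNING: everything on the of= device is overwritten.
      dd if=/mnt/ssd/images/pi-host.img of=/dev/sdb bs=4M \
          conv=fsync status=progress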

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to Theo on Wed Apr 13 19:57:13 2022
    For your reference, records indicate that
    Theo <theom+news@chiark.greenend.org.uk> wrote:

    If you want to avoid the inode inspection you need help from the filesystem to keep track of changes when you save them, rather than when you back up.

    I would expect that having a “smart” backup tool would also make it
    easier to track changes than simply rescanning everything every time
    like rsync does. The scripts I use now do some of that, and it’s not perfect, but it’s still a lot faster than just throwing rsync at a
    folder hierarchy and letting it work out what to do.

    It's also possible to install
    software that monitors file changes dynamically. That makes backups faster at the expense of making file accesses slower.

    I’m pretty sure inotify support is a default part of the Linux kernel already. I’ve certainly been thinking of tapping into it to make my
    scripts even more efficient, but I really was hoping there was some
    custom backup software that has already done the heavy lifting for me.
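
    For what it's worth, a sketch of that idea using inotifywait from the
    inotify-tools package; the watched path and list file are invented:

      #!/bin/sh
      # Maintain a "dirty list" of changed files so a backup pass only
      # revisits what actually changed since the last run.
      WATCH=/home/me/data
      DIRTY=/var/tmp/backup-dirty.list

      # -m monitor forever, -r watch subdirectories recursively.
      # Note: inotify watches are per-directory, so -r on a huge tree
      # can exhaust the kernel's watch limit (fanotify avoids this).
      inotifywait -m -r \
          -e close_write -e create -e delete -e move \
          --format '%w%f' "$WATCH" >> "$DIRTY" &

      # A backup run can then feed the accumulated list to rsync:
      #   sort -u "$DIRTY" | rsync -a --files-from=- / backuphost:/backup/
      # and truncate the list afterwards.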

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Richard Kettlewell@3:770/3 to droleary@2017usenet1.subsume.com on Wed Apr 13 21:12:13 2022
    Doc O'Leary , <droleary@2017usenet1.subsume.com> writes:
    I’m pretty sure inotify support is a default part of the Linux kernel already. I’ve certainly been thinking of tapping into it to make my scripts even more efficient, but I really was hoping there was some
    custom backup software that has already done the heavy lifting for me.

    inotify has been in place for years, but it doesn’t support
    whole-filesystem notifications. It looks like you want fanotify for
    that.

    --
    https://www.greenend.org.uk/rjk/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to Jim Jackson on Wed Apr 13 20:13:27 2022
    For your reference, records indicate that
    Jim Jackson <jj@franjam.org.uk> wrote:

    What do you mean by "have trouble scaling down to an RPi sized server".
    Have you tried bup? What were the problems you had?

    I didn’t dig deep into the root causes, but it started choking on large
    file sets. What it looked like to me in testing is that it was sucking
    up a ton of RAM for indexes/hashes/whatever when I threw a lot of files
    at it. Like it was designed with the assumption that “big data” necessarily required a big machine to handle it. I even gave it a ton
    of swap space so that it could complete rather than die on my 1GB RPi,
    but it churned so much and went so slow that I had to kill it anyway.

    I've not come across bup before and am curious.

    On the whole, I like its approach. I use git myself as a software
    developer and my own backup scripts borrow a lot of the same concepts
    as bup. If I had more time, I’d look into what needed to be
    rearchitected in bup to make it work for my use case. Until then, I’ll
    stick with my scripts and keep looking for someone to champion a leaner solution.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to druck on Wed Apr 13 20:20:11 2022
    For your reference, records indicate that
    druck <news@druck.org.uk> wrote:

    I make nightly rsync differential backups of the SD cards of 15
    Raspberry Pi's to SD card image files on a USB SSD drive connected to
    one of the Pi 4s. The Pi's are of various generations, from 2 to 4, with
    lots of Zero Ws; most are connected via WiFi, 3 on Ethernet, and a couple
    are at remote sites. On average there is 961MB of data in 4426 files
    transferred, and the time to back up all of them sequentially is 14m26s,
    which I don't think is too bad.

    That’s indeed fantastic; thank you for sharing those stats. All I can
    say is that the behavior I see from rsync isn’t nearly as impressive.
    I have single machines with *no* changed data that take over 15 minutes
    for rsync to process.


    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to F. W. on Wed Apr 13 20:27:31 2022
    For your reference, records indicate that
    "F. W." <me@home.com> wrote:

    Overgrive

    Just appears to be a Google Drive frontend. Even if I did use Google
    Drive, it isn’t clear to me it’s going to function any better/faster
    than something (even rsync) replicating to a local drive. How do you
    see it working better than dumb copies for backup?

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From F. W.@3:770/3 to All on Thu Apr 14 10:34:26 2022
    Am 13.04.2022 um 22:27 schrieb Doc O'Leary:

    Just appears to be a Google Drive frontend. Even if I did use Google
    Drive, it isn’t clear to me it’s going to function any better/faster
    than something (even rsync) replicating to a local drive. How do you
    see it working better than dumb copies for backup?

    Once my flat burned down and killed all my backups.

    FW

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Theo@3:770/3 to druck on Thu Apr 14 10:49:50 2022
    druck <news@druck.org.uk> wrote:
    On 12/04/2022 22:30, druck wrote:
    I make nightly rsync differential backups of the SD cards of 15
    Raspberry Pi's to SD card image files on a USB SSD drive connected to
    one of the Pi 4s. The Pi's are of various generations, from 2 to 4 with lots of Zero Ws, most are connected via WiFi, 3 on Ethernet a couple are
    at remote sites. On average there is 961MB data in 4426 files
    transferred, and the time to back up all of them sequentially is 14m26, which I don't think is too bad.

    And it does work. I had a Pi fail to reboot yesterday; I found the
    superblocks on the root partition had been corrupted, so unfixable. So I
    just used dd to overwrite it with last night's backup image, and it was
    working again within 5 minutes.

    Do you loopback mount the target images, so they keep their ext4 partition format and bootability? That's quite a neat idea...

    (if it were me I'd be tempted to keep a second copy of the files on the
    host's native FS, outside of the ext4 image. So if the target ext4 got corrupted in some way I could always recover the files)

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to F. W. on Thu Apr 14 18:13:54 2022
    For your reference, records indicate that
    "F. W." <me@home.com> wrote:

    Am 13.04.2022 um 22:27 schrieb Doc O'Leary:

    Just appears to be a Google Drive frontend. Even if I did use Google Drive, it isn’t clear to me it’s going to function any better/faster than something (even rsync) replicating to a local drive. How do you
    see it working better than dumb copies for backup?

    Once my flat burned down and killed all my backups.

    That speaks to a need for remote backups, not necessarily using cloud
    storage, let alone limiting yourself to Google as a sole provider.
    Again, my aim is to find a way to efficiently and safely warehouse all
    my data using an RPi. Whether or not that data is then replicated to additional locations or media is a separate solution layer.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to Theo on Thu Apr 14 21:46:32 2022
    On 14/04/2022 10:49, Theo wrote:
    druck <news@druck.org.uk> wrote:
    On 12/04/2022 22:30, druck wrote:
    I make nightly rysnc differential backups of the SD cards of 15
    Raspberry Pi's to SD card image files

    Do you loopback mount the target images, so they keep their ext4 partition format and bootability? That's quite a neat idea...

    Yes. After setting up a Pi, I take the SD card and make an initial
    manual copy of it to an image file with dd. Then my nightly backup
    script loopback mounts the image file and rsyncs over ssh to keep the
    image updated. When there is a failure, I can dd the image file onto a
    new SD card, and the Pi is up and running with the image from 4am the
    previous night.
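
    Filling in the details, that nightly update step might look roughly
    like this; the host name, image path and mount point are invented, and
    a stock Pi image layout is assumed (p1 boot, p2 root):

      #!/bin/sh
      # Keep a dd image of a Pi's SD card current by loopback-mounting
      # it and rsyncing the live root over ssh into it.
      set -e

      IMG=/mnt/ssd/images/pi-host.img
      MNT=/mnt/img-root

      LOOP=$(losetup --show -fP "$IMG")  # -P exposes ${LOOP}p1, ${LOOP}p2
      mkdir -p "$MNT"
      mount "${LOOP}p2" "$MNT"           # root partition

      # -x stays on one filesystem, so /proc, /sys etc. are skipped;
      # --delete stops the image accumulating removed files.
      # (The boot partition on ${LOOP}p1 can be synced the same way.)
      rsync -aHx --delete pi@pi-host:/ "$MNT/"

      umount "$MNT"
      losetup -d "$LOOP"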

    (if it were me I'd be tempted to keep a second copy of the files on the host's native FS, outside of the ext4 image. So if the target ext4 got corrupted in some way I could always recover the files)

    I protect against that by making weekly and monthly compressed copies of
    the image files. I run zerofree to blank the unused space, so they
    compress better, and use pigz to zip with all the cores. It takes a Pi4B
    with SSD about 1h30 to do this for the 15 images.
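
    The weekly compression pass could be sketched like this; zerofree needs
    the filesystem unmounted (or read-only), which holds here since the
    image is only attached, not mounted. Paths are examples:

      #!/bin/sh
      # Zero unused ext4 blocks so the image compresses well, then
      # compress with pigz on all cores.
      set -e

      IMG=/mnt/ssd/images/pi-host.img

      LOOP=$(losetup --show -fP "$IMG")
      zerofree "${LOOP}p2"          # fill unused blocks with zeros
      losetup -d "$LOOP"

      pigz -k "$IMG"                # -k keeps the original image
      mv "$IMG.gz" "/mnt/ssd/archive/pi-host-$(date +%Y%m%d).img.gz"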

    Additionally all the important programs and configuration on the Pi's
    are in git repos which are pushed on to a NAS drive. So if the worst
    happened I could burn a completely new Raspbian OS, and set up from that.

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From druck@3:770/3 to All on Thu Apr 14 21:54:04 2022
    On 13/04/2022 21:13, Doc O'Leary wrote:

    For your reference, records indicate that
    Jim Jackson <jj@franjam.org.uk> wrote:

    What do you mean by "have trouble scaling down to an RPi sized server".
    Have you tried bup? What were the problems you had?

    I didn’t dig deep into the root causes, but it started choking on large
    file sets. What it looked like to me in testing is that it was sucking
    up a ton of RAM for indexes/hashes/whatever when I threw a lot of files
    at it. Like it was designed with the assumption that “big data”
    necessarily required a big machine to handle it. I even gave it a ton
    of swap space so that it could complete rather than die on my 1GB RPi,
    but it churned so much and went so slow that I had to kill it anyway.

    Are you backing up a 1GB Pi using rsync *to* a remote system? I use a Pi
    4B with 8GB (although it never uses more than 4GB) and a local SSD to
    back up *from* the smaller 512MB and 1GB Pi's via ssh. That way the
    system with more performance and memory does all the hard work of
    comparing indexes.

    ---druck

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to druck on Fri Apr 15 20:09:07 2022
    For your reference, records indicate that
    druck <news@druck.org.uk> wrote:

    Are you backing up a 1GB Pi using rsync *to* a remote system?

    I have a bunch of computers that should all back up to at least one
    remote system. They include a variety of RPi from 0W up to a 400, but
    there are also Macs and Windows in the mix.

    Ideally, I want maximal redundancy. Everything *should* be able to
    back up everything else. No, I don’t expect a 0W to be a workhorse,
    but I don’t see any reason a good solution couldn’t scale down to at
    least be *functional* on it. If git works fine, and my git-inspired
    scripts for backup work fine, I don’t see why some larger, more well- supported backup tool wouldn’t be able to function.

    That way the system with more performance and memory does all the hard
    work of comparing indexes.

    But that really doesn’t take all that much CPU or RAM. I mean, having
    more resources certainly *helps*, but that doesn’t mean a backup system
    has to be architected in such a way as to *require* 4GB of memory (or
    more) to manage a data warehouse of 500K files totaling 2TB. I
    wouldn’t even call that big data.

    I do also use rsync for replication, but that’s just not the same as
    having a backup system that archives data in perpetuity. It’s sounding
    like what I’m looking for doesn’t exist and I should just stick with my scripts.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Mike Scott@3:770/3 to Doc O'Leary on Sat Apr 16 12:53:23 2022
    On 15/04/2022 21:09, Doc O'Leary wrote:
    ....

    But that really doesn’t take all that much CPU or RAM. I mean, having
    more resources certainly *helps*, but that doesn’t mean a backup system
    has to be architected in such a way as to *require* 4GB of memory (or
    more) to manage a data warehouse of 500K files totaling 2TB. I
    wouldn’t even call that big data.

    I do also use rsync for replication, but that’s just not the same as
    having a backup system that archives data in perpetuity. It’s sounding like what I’m looking for doesn’t exist and I should just stick with my scripts.



    Have you seen duplicity?

    I've several desktops and laptops, all running linux, plus a central
    server running freebsd.

    I use duplicity to back up the linux machines onto the server - just the
    user data, as a fresh install of the OS isn't too outlandish.

    Duplicity is highly configurable. It will recover files from a specific
    date if you need. I'd suggest taking a look.
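
    For anyone who wants to try it, the basic invocations look like this
    (standard duplicity CLI; the target URL is an invented example, and
    backups are GPG-encrypted by default):

      # First run makes a full backup; later runs are incremental.
      duplicity /home/me sftp://backup@server//srv/backups/desktop

      # Recover a single file as it existed at a specific date.
      duplicity restore --file-to-restore Documents/notes.txt \
          --time 2022-04-01 \
          sftp://backup@server//srv/backups/desktop /tmp/notes.txt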

    I just use dump to dump (levels 0 and 3 only) almost the /entire/ fbsd
    server onto one of the desktops. (which stood me in good stead this week
    when I completely trashed /var; ooops :-{ )
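
    In case it's useful to anyone, the level-0/level-3 pattern over the
    network is roughly this (dump(8) as on FreeBSD; the host and paths are
    invented):

      # Full (level 0) dump of /var, piped over ssh to a desktop.
      # -L snapshots the live filesystem first; -u records the dump in
      # /etc/dumpdates so later higher-level dumps are relative to it.
      dump -0 -L -u -a -f - /var | ssh desktop 'cat > /backups/var.0.dump'

      # Mid-cycle level 3: only what changed since the last lower level.
      dump -3 -L -u -a -f - /var | ssh desktop 'cat > /backups/var.3.dump'

      # Restore with restore(8), e.g. into a freshly created /var:
      #   ssh desktop 'cat /backups/var.0.dump' | restore -rf -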

    --
    Mike Scott
    Harlow, England

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Ahem A Rivet's Shot@3:770/3 to Mike Scott on Sat Apr 16 13:12:15 2022
    On Sat, 16 Apr 2022 12:53:23 +0100
    Mike Scott <usenet.16@scottsonline.org.uk.invalid> wrote:

    I just use dump to dump (levels 0 and 3 only) almost the /entire/ fbsd
    server onto one of the desktops. (which stood me in good stead this week
    when I completely trashed /var; ooops :-{ )

    ZFS snapshots YKIMS.

    My NAS runs striped ZFS mirrors, keeps extensive snapshots and replicates (zrepl) to an archive server running RAIDZ. Data loss ? What's
    that ? I am however looking carefully at TrueNAS Scale - it's not quite as
    good as OneFS but it's pretty close and free (unlike OneFS).
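
    The primitives behind that are pleasantly small; a sketch with invented
    pool and dataset names:

      # Cheap point-in-time snapshot of a dataset.
      zfs snapshot tank/data@2022-04-16

      # Replicate to the archive box; after the first full send, -i
      # sends only the delta between two snapshots.
      zfs send tank/data@2022-04-16 | ssh archive zfs receive -F backup/data
      zfs send -i tank/data@2022-04-16 tank/data@2022-04-23 | \
          ssh archive zfs receive backup/data

      # Snapshots are browsable read-only under .zfs/snapshot/.
      ls /tank/data/.zfs/snapshot/2022-04-16/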

    To be fair ZFS is only the second best snapshot solution I know -
    the prize for that goes to DragonFlyBSD's HAMMER - everything that hits the disk is a snapshot until it's pruned to reduce the history granularity.

    --
    Steve O'Hara-Smith
    Odds and Ends at http://www.sohara.org/

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to Mike Scott on Mon Apr 18 19:31:34 2022
    For your reference, records indicate that
    Mike Scott <usenet.16@scottsonline.org.uk.invalid> wrote:

    Have you seen duplicity?

    I had looked at it, but never gave it a try. Part of the issue I’m
    looking to solve is the fact that the machines I’m backing up share a substantial amount of data (~500GB, which isn’t even all *that* big
    these days). If they are treated as independent tarballs (and
    especially if they get encrypted on top of that), it leads to a lot
    of *unmanaged* duplication.

    Maybe I need to rethink my desire to use a single backup solution for
    all use cases. I could easily see using something like duplicity for
    one-off projects I’d use a 0W for. But, then, most of my needs in
    that regard are handled by using git and ansible to set them up, and
    a simple rsync is generally fine to pull down any generated data I
    want to archive.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)
  • From Doc O'Leary ,@3:770/3 to Ahem A Rivet's Shot on Mon Apr 18 19:45:41 2022
    For your reference, records indicate that
    Ahem A Rivet's Shot <steveo@eircom.net> wrote:

    On Sat, 16 Apr 2022 12:53:23 +0100
    Mike Scott <usenet.16@scottsonline.org.uk.invalid> wrote:

    I just use dump to dump (levels 0 and 3 only) almost the /entire/ fbsd server onto one of the desktops. (which stood me in good stead this week when I completely trashed /var; ooops :-{ )

    ZFS snapshots YKIMS.

    On this topic, I did look into using a more advanced filesystem that had
    all the modern bells and whistles built in. In the end I concluded that
    I really wanted a backup format that I could read/recover from on most
    any computer I had access to at the time (likely a spare RPi *without*
    network access).

    I do long for the day when all of this is baked into the OS (*every* OS)
    by default.

    --
    "Also . . . I can kill you with my brain."
    River Tam, Trash, Firefly

    --- SoupGate-Win32 v1.05
    * Origin: Agency HUB, Dunedin - New Zealand | Fido<>Usenet Gateway (3:770/3)