Fugaku cost $1.2 billion to build and has so far been used for research on COVID-19, diagnostics, therapeutics, and virus spread simulations.

So presumably all that plandemic data was wiped out. How convenient.



[–] 3 pts (edited )

Backups are really easy to fuck up, but backup failures are also really easy to mitigate.

What companies never bother to do is allocate time for their IT staff to do any bare-metal restores, so you wouldn't believe just how many departments are basically winging it.

77 TB isn't even that much: around £1K worth of hard drives and £200 of LTO-8 tape.

You can add 100 TB of cloud storage for about $1K a year, which could be used for incrementals.
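
Rough arithmetic behind those figures, as a sanity check. The per-unit prices are assumptions on my part; LTO-8's 12 TB native capacity is real:

```python
import math

# Back-of-envelope media costs for 77 TB (assumed street prices, not quotes)
data_tb = 77
lto8_native_tb = 12          # LTO-8 native (uncompressed) capacity
tape_gbp = 30                # assumed price per LTO-8 cartridge
hdd_gbp_per_tb = 14          # assumed price per TB of hard drive

tapes = math.ceil(data_tb / lto8_native_tb)  # 7 cartridges
print(f"tape: {tapes} x £{tape_gbp} = £{tapes * tape_gbp}")                      # ~£210
print(f"disk: {data_tb} TB x £{hdd_gbp_per_tb} = £{data_tb * hdd_gbp_per_tb}")  # ~£1,078
```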

I used to do a complete overnight copy of the server onto hard drive, because most missing files are noticed right away and restoring from disk is faster. Likewise a complete backup to LTO tape, then drop the week-ending and month-ending tapes out of the cycle. The month-ending ones were kept forever and the week-ending ones for at least a few months. These were all kept offsite.
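
The drop-out logic there is basically grandfather-father-son rotation. A sketch of the decision (it assumes the week ends on Friday, which I'm guessing at):

```python
import datetime

def tape_fate(d: datetime.date) -> str:
    """Decide which tapes leave the daily cycle, per the scheme above."""
    # Month-ending tapes are kept forever, offsite.
    if (d + datetime.timedelta(days=1)).month != d.month:
        return "month-ending: keep forever"
    # Week-ending tapes are kept for at least a few months, offsite.
    if d.weekday() == 4:  # Friday (an assumption)
        return "week-ending: keep a few months"
    # Everything else gets overwritten next cycle.
    return "daily: reuse next cycle"
```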

I never liked incremental backups, but if I'd had the resources then I'd have added them.

To lose that amount of data meant they were never restoring anything to bare metal, ever. Or the data they were saving was trash and never validated.

[–] 0 pt

Incremental is good as an addition to a normal full backup. It's nice to be able to roll back if you, say, deleted a directory and didn't realize it immediately. The main issue is the backup being in some special format (though Apple's Time Machine took an approach where each new incremental backup is a normal directory, with unchanged files represented by hard links to the actual file in a previous backup).
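
A minimal sketch of that hard-link approach (paths are hypothetical, and it assumes all the backups live on one filesystem, since hard links can't cross filesystems):

```python
import filecmp
import os
import shutil

def incremental_backup(src: str, prev: str, dest: str) -> None:
    """Copy changed files into dest; hard-link unchanged ones to prev."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            p = os.path.join(prev, rel, name)
            d = os.path.join(dest, rel, name)
            if os.path.isfile(p) and filecmp.cmp(s, p, shallow=False):
                os.link(p, d)       # unchanged: no extra disk space used
            else:
                shutil.copy2(s, d)  # new or changed: real copy
```

Every backup then looks like a full copy you can browse normally, but unchanged files cost no extra space.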

[–] 0 pt (edited )

Why wouldn't they just use a ZFS system with replication to remote servers and a snapshot system in place? I run a snapshot every 5 minutes during working hours, and those snapshots have a 48-hour life. I do hourly snapshots that live for two weeks. Finally, I do nightly snapshots that live for 12 months. Accidentally fucked up your spreadsheet after two weeks of work? No problem. Right click that bitch and choose previous versions to restore from 5 minutes ago.

Because ZFS snapshots are copy-on-write, there's no thrashing of disks or IO spike when a snapshot runs. Since they work at the block level, they don't take up any disk space except for the blocks that actually change after the snapshot.
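
A rough sketch of how that schedule could be driven from cron or systemd timers, shelling out to the stock `zfs` CLI. The dataset name is made up, and the retention counts match the schedule above (576 five-minute snapshots = 48 hours, 336 hourlies = two weeks, 365 nightlies = a year):

```python
import datetime
import subprocess

DATASET = "tank/office"  # hypothetical dataset name

def take_snapshot(tier: str) -> None:
    """Run with tier='frequent' every 5 min, 'hourly' hourly, 'nightly' nightly."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{tier}-{stamp}"],
                   check=True)

def prune(tier: str, keep: int) -> None:
    """Destroy snapshots in a tier beyond its retention count (keep > 0)."""
    names = subprocess.run(
        ["zfs", "list", "-t", "snapshot", "-H", "-o", "name",
         "-s", "creation", DATASET],
        check=True, capture_output=True, text=True).stdout.splitlines()
    tier_snaps = [n for n in names if f"@{tier}-" in n]
    for snap in tier_snaps[:-keep]:  # oldest first; keep the newest `keep`
        subprocess.run(["zfs", "destroy", snap], check=True)
```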

[–] 0 pt

> Right click that bitch and choose previous versions to restore from 5 minutes ago.

How are you doing that part with ZFS? Thanks.

Sanoid and syncoid are awesome if you aren't already using them.
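
For anyone wanting to try them: sanoid drives the snapshot schedule from /etc/sanoid/sanoid.conf. A sketch along the lines of the project's own example config (dataset name made up, counts chosen to match the schedule above):

```
[tank/office]
        use_template = production

[template_production]
        # 5-minute snapshots, kept 48 hours (576 of them)
        frequent_period = 5
        frequently = 576
        # hourlies kept two weeks, nightlies kept a year
        hourly = 336
        daily = 365
        autosnap = yes
        autoprune = yes
```

Replication to the remote box is then roughly `syncoid tank/office root@backuphost:tank/office` on a timer (hostnames here are hypothetical).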