Knowing nothing about the system(s) in question, is it more likely that a data loss of this size is down to operator error, i.e. people being told to delete stuff in error, or to some sort of batch update that got rid of a lot of stuff in one go?
Speaking as an ex-DBA (database administrator), I can think of many possible causes.
PEBCAK (Problem Exists Between Chair And Keyboard). As others have suggested, a user may have thought they were deleting a few records but had actually selected the whole dataset.
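To give a flavour of how easily that happens, here is a minimal sketch (the table name and data are made up, and I'm using SQLite purely to keep it self-contained) of a "delete a few rows" job where the filter goes missing:

```python
import sqlite3

# Hypothetical table, just to illustrate the failure mode.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO records (status) VALUES (?)",
                 [("archived",), ("active",), ("active",)])

# What the user meant to run: remove only the archived rows.
# conn.execute("DELETE FROM records WHERE status = 'archived'")

# What actually ran: the WHERE clause (or the GUI row selection) was missing,
# so every row goes.
conn.execute("DELETE FROM records")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM records").fetchone()[0])  # 0 rows left
```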
Programming error. New code was not fully tested and went wibble.
Batch or processing error. Data updates not run in the correct order, meaning records were overwritten or deleted rather than updated. (It depends on how data changes are applied - do they create a duplicate recordset then delete the original? Do they amend the existing dataset? I’ve seen both.)
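As a sketch of the "copy then delete the original" style going wrong (invented table names again, SQLite just for illustration), running the steps out of order is all it takes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE live (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO live (payload) VALUES (?)", [("a",), ("b",), ("c",)])

def apply_batch(steps):
    """Run the batch steps in the order they are handed to us."""
    for sql in steps:
        conn.execute(sql)
    conn.commit()

copy_step = "INSERT INTO staging SELECT * FROM live"
delete_step = "DELETE FROM live"

# Correct order: copy first, then clear the original.
# apply_batch([copy_step, delete_step])

# Wrong order: the original is cleared before the copy runs, so the
# staging table fills up with nothing and the live data is gone.
apply_batch([delete_step, copy_step])

print(conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0])  # 0 - nothing survived
```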
Migration error. Were they moving data to another server or mainframe and the migration went wrong? This can often happen with “planned maintenance”. For example, some electricity supplies need to be tested and certified every year, meaning anything using that supply has to be turned off. If that something is the equipment hosting the data, and you must maintain access to that data for operational reasons, you have to move the data to alternative equipment that will still be powered up.
Virus. As an experiment, I once wrote a sort of virus that put itself into databases then, every time someone logged on, it picked a record at random and randomly changed one character within that record. It would easily look like a mis-type. However, if no one noticed then, over time, the whole database (and backups) would become garbage.
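The self-installing part is obviously not something I'd post, but a toy version of the corruption step (made-up table, SQLite again) shows why one character per logon is so hard to spot:

```python
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO people (name) VALUES (?)",
                 [("Smith",), ("Jones",), ("Patel",)])
conn.commit()

def corrupt_one_character():
    """Pick a random row and silently change one character in it."""
    row_id, name = random.choice(conn.execute("SELECT id, name FROM people").fetchall())
    pos = random.randrange(len(name))
    garbled = name[:pos] + random.choice(string.ascii_letters) + name[pos + 1:]
    conn.execute("UPDATE people SET name = ? WHERE id = ?", (garbled, row_id))
    conn.commit()

# One change per "logon" looks like a typo; hundreds of logons later the
# table (and every backup taken in between) is quietly full of garbage.
for _ in range(3):
    corrupt_one_character()

print(conn.execute("SELECT name FROM people").fetchall())
```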
Backup problem. Backups don’t always work and the backup operators don’t always notice. If that happens, and something then happens that requires a data restore, you’ll suddenly find huge amounts of data are missing / corrupted / out-of-date.
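This is also why backups need to be tested, not just taken. A rough sketch of the sort of sanity check that catches a silently failing backup (the path and age threshold here are invented) might look like this:

```python
import os
import sqlite3
import time

# Hypothetical nightly backup location and freshness limit.
BACKUP_PATH = "/backups/nightly.db"
MAX_AGE_HOURS = 26  # anything older means last night's run failed

def backup_looks_healthy(path: str) -> bool:
    if not os.path.exists(path):
        return False  # the backup was never written
    age_hours = (time.time() - os.path.getmtime(path)) / 3600
    if age_hours > MAX_AGE_HOURS:
        return False  # the backup is stale - last night's job didn't run
    try:
        conn = sqlite3.connect(path)
        # An integrity check proves the file can actually be read back.
        ok = conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
        conn.close()
        return ok
    except sqlite3.Error:
        return False  # the file exists but can't be opened as a database

if not backup_looks_healthy(BACKUP_PATH):
    print("ALERT: last backup failed or is unreadable - fix it before you need it")
```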
Given that the data is still missing, and everyone is staying tight-lipped about it, I’m leaning towards an initial cause being made worse by a problem with the backups. If the backups (and schedule) were good, they could have restored the data, losing only one day’s data.
Is the Home Secretary at fault? I’d say no. This looks like an operational error by users and/or administrators/contractors. The only way the Home Secretary could be at fault is if she made a deliberate decision to sign off a policy or contract that specifically removed the requirement for regular backups, or one that failed to include a service for restoring data from backups.