Data validation

There are three types of integrity verification in Magnus Box:

Referential integrity 

Referential integrity means that, for each snapshot, all of its referenced chunks exist; that all chunks are indexed; and so on. This is verified client-side every time the app runs a retention pass, so you should ensure that retention passes complete successfully from time to time.

Data file integrity at rest 

Data file integrity ensures that each file in the Storage Vault is readable and has not been corrupted at rest (e.g. hash mismatch / decrypt errors).

Magnus Box stores files inside the Storage Vault data location as opaque, encrypted, compressed files. The filenames are the SHA256 hash of the file content. Magnus Box automatically verifies file integrity client-side, every time a file is accessed during backup and restore operations (i.e. non-exhaustively) by calculating the SHA256 hash of the content and comparing it to the filename.
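As a minimal illustration of this naming scheme (using hypothetical sample data, not a real Magnus Box chunk), a file can be named after the SHA256 hash of its content and then checked against that name:

```shell
# Work in a scratch directory; mimic the Storage Vault naming scheme
# (hypothetical sample data, not a real Magnus Box chunk)
cd "$(mktemp -d)"
printf 'example chunk data' > chunk.tmp
h=$(sha256sum chunk.tmp | awk '{print $1}')
mv chunk.tmp "$h"

# Integrity check: recompute the hash and compare it to the filename
[ "$(sha256sum "$h" | awk '{print $1}')" = "$h" ] && echo "ok"
```

This is exactly the comparison the client performs when it accesses a file during a backup or restore operation.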

Corruption of files at rest is a rare scenario; it's unlikely you need to worry about this, unless you are using local storage and you believe your disk drives are failing. However, for additional peace of mind, you can verify the integrity of the files on disk at any time, by comparing each filename to the SHA256 hash of the file's content.

A future version of Magnus Box will add built-in functionality to verify file integrity in this way.

Example data validation commands 

The following equivalent commands read all files in the current directory, take the SHA256 hash, and compare it to the filename.

These commands exclude the config file, as it is known to be valid for other reasons.

These commands do not exclude any other temporary files (e.g. /tmp/ subdirectory, or ~-named files) that may be used by some storage location types for temporary uploaded data. Such temporary files will almost certainly cause a hash mismatch, but do not interfere with normal backup or restore operations.

On Linux, you can use the following command:

find . ! -name 'config' -type f -exec sha256sum '{}' \; | awk '{ sub("^.*/", "", $2) ; if ($1 == $2) { print $2,"ok" } else { print "[!!!]",$2,"MISMATCH",$1 } }'
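If your storage location keeps temporary upload files, a variant of the above command that also skips a tmp/ subdirectory and ~-named files may produce less noisy output. This is a sketch; adjust the exclusion patterns to match the temporary files your storage type actually creates:

```shell
# As above, but also skip ~-named files and a ./tmp/ subdirectory
# (exclusion patterns are illustrative; adapt them to your storage type)
find . -type f ! -name 'config' ! -name '*~' ! -path './tmp/*' \
    -exec sha256sum '{}' \; | awk '{ sub("^.*/", "", $2) ; if ($1 == $2) { print $2,"ok" } else { print "[!!!]",$2,"MISMATCH",$1 } }'
```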

On Windows, you can use the following PowerShell (4.0 or later) command:

Get-ChildItem -Recurse -File | Where-Object { $_.Name -ne "config" } | ForEach-Object {
    $h = (Get-FileHash -Path $_.FullName -Algorithm SHA256)
    if ($_.Name -eq $h.Hash) { echo "$($_.Name) ok"; } else { echo "[!!!] $($_.Name) MISMATCH $($h.Hash)"; }
}

Data file integrity at generation-time 

It is possible that a malfunctioning Magnus Box Backup client would generate bad data, and then save it into the Storage Vault with a valid hash and valid encryption. For instance, this could happen in some rare situations where the Magnus Box client is installed on a PC with malfunctioning RAM.

In this situation, a future Magnus Box backup or restore job would load data from the vault but fail to parse it, producing a couldn't load tree [...] hash mismatch error message or a Load(<index/...>): Decode [...] invalid character \x00 error message.

In this situation, it is possible to recover the Storage Vault by removing all the corrupted data. The remaining data is restorable. However, it's not possible to identify the corrupted data using the data validation commands above, because the corrupted content was hashed and encrypted correctly.

Data validation steps 

Different methods are available to identify the corrupted files.

Use the "Deep verify Vault contents" feature

  • This feature is available in Magnus Box 18.8.2 or later, via the Magnus Box Server web interface live connected device actions dialog when the "Advanced options" setting is enabled. It is not exposed to the client in the Magnus Box app.
  • This will cause the client to download parts of the Storage Vault and perform a deeper type of hash checking than is possible via the existing data validation steps. It should alert you to which data files are corrupt, and the Storage Vault can then be repaired following the existing documented steps.
  • There are two versions of the "deep verify" feature:

    - In Magnus Box 18.8.2, this feature downloads almost the entire content of the Storage Vault, which is a highly bandwidth-intensive operation. If you have the customer's password on file, it may be preferable to log in to their account as a new device from your own office, and control that device to run the command instead.

    - In Magnus Box 18.8.3 and later, the "deep verify" feature is much faster than 18.8.2; it downloads only index/tree parts of the Storage Vault, and caches temporary files to reduce total network roundtrips.

Files mentioned in error message

  • Data files (e.g. couldn't load tree [...] hash mismatch error message)

    - Magnus Box 18.8.2 updated the couldn't load tree error message to also indicate the exact corrupted pack file, if possible.

    - You can then delete the file from the /data/ subdirectory, and run a retention pass to validate the remaining content, as described below. This may assist with repairing the Storage Vault.

    - However, this only detects the corrupted directory trees that were immediately referenced by a running backup job; other past and future backup jobs may still be unrestorable.
  • Index files (e.g. Load(<index/...>): Decode [...] invalid character \x00)

    - The index files contain only non-essential metadata to accelerate performance. Index files can be safely regenerated via the "Rebuild indexes" option on a Storage Vault. This is a relatively fast operation.
  • Compared to the "Deep verify Vault contents" feature, repairing single files in this way does avoid the immediate bandwidth-intensive step of downloading the entire vault content; however, it is not a guarantee that all data in the Vault is safe. Use of this method should be coupled with a (bandwidth-equivalent) complete test restore.

Files by modification date

  • New backup jobs only add additional files into the Storage Vault. Another possible way to repair the Storage Vault is therefore to assume that all files modified after a given point in time are affected.
  • This is only an option if your Storage Vault type exposes file modification timestamps (e.g. local disk or SFTP, and a limited number of cloud storage providers).
  • Specifically:
  1. Ensure that no backup/restore/retention operations are currently running to the Storage Vault
  2. The corrupted data was created by the job prior to the one in which errors were first reported; find the start time of that prior job
  3. Delete all files in the Storage Vault with a modification time later than when that job started. If you used the "Rebuild indexes" option or ran a retention pass since the errors began, the contents of the /index/ directory may have been consolidated into fewer files. If there are no files in the /index/ subdirectory, you should then initiate a "Rebuild indexes" operation
  4. Initiate a retention pass afterward, to ensure referential integrity of the remaining files, as described below
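On local disk (or SFTP-mounted) storage, step 3 can be performed with find's -newermt test. The sketch below uses a throwaway demo directory and a hypothetical cutoff time; in practice, point it at your real vault path, and always review the dry-run listing before deleting anything:

```shell
# Demo vault with one file older and one newer than the cutoff
# (in practice, set VAULT to the real Storage Vault data location)
VAULT=$(mktemp -d)
touch -d '2024-01-01 00:00:00' "$VAULT/old-chunk"
touch -d '2024-02-01 00:00:00' "$VAULT/new-chunk"

# Hypothetical cutoff: the start time of the job prior to the first errors
CUTOFF='2024-01-15 03:00:00'

# Dry run first: list files modified after the cutoff
find "$VAULT" -type f -newermt "$CUTOFF" -print

# Once the listing looks correct, delete those files:
# find "$VAULT" -type f -newermt "$CUTOFF" -delete
```

Note that -newermt and touch -d as used here are GNU extensions; on other platforms the equivalent flags may differ.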

The other alternative is to start a new Storage Vault.