HCI storage that doesn't need its own team.
The Storage Scale provides hyperconverged distributed storage across your Tophan cluster. Every node contributes its local disks to a shared pool with deduplication, compression, and erasure coding — no dedicated storage hardware required.
| Feature | Description | Status |
|---|---|---|
| Deduplication | Block-level dedup across the entire cluster. Identical data stored once regardless of which VM or application wrote it. | Beta |
| Compression | Inline compression with LZ4 (fast) or Zstd (dense). Per-volume configuration. | Beta |
| Erasure Coding | Distributed parity across nodes. Survive disk or node failures without full replication overhead. | Beta |
| Tiering | Automatic data placement across NVMe, SSD, and HDD tiers based on access patterns. Hot data on fast storage, cold data on dense storage. | Planned |
| Thin Provisioning | Allocate 1TB, use 50GB. Space consumed on write, not on allocation. | Stable |
| Snapshots | Instant point-in-time snapshots. Copy-on-write, space-efficient, and fast to create or rollback. | Beta |
| Clones | Instant writable copies of any volume. Full independence from the source after clone. | Beta |
| iSCSI Target | Export volumes as iSCSI LUNs for external consumers. CHAP authentication, multipath support. | Beta |
| Replication | Synchronous or asynchronous replication between clusters for disaster recovery. | Planned |
| Encryption | At-rest encryption with per-volume keys managed by the Vault Scale. | Beta |
| Self-Healing | Automatic data reconstruction when disks or nodes fail. No manual intervention. | Beta |
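The erasure-coding row above trades parity math for replication overhead. The arithmetic behind that trade-off can be sketched as follows; the 4+2 data/parity geometry is an assumed example, since the table does not specify which layouts the Storage Scale supports.

```python
# Illustrative storage-overhead arithmetic for the protection schemes above.
# The 4+2 erasure-coding geometry is an assumption, not a documented default.

def replication_overhead(copies: int) -> float:
    """Raw bytes consumed per logical byte with full replication."""
    return float(copies)

def erasure_overhead(data: int, parity: int) -> float:
    """Raw bytes consumed per logical byte with data+parity erasure coding."""
    return (data + parity) / data

# Both schemes below survive two simultaneous failures:
print(replication_overhead(3))   # 3-way replication -> 3.0x raw capacity
print(erasure_overhead(4, 2))    # 4+2 erasure coding -> 1.5x raw capacity
```

Half the raw capacity for the same failure tolerance is why erasure coding is the default choice once a cluster has enough nodes to spread parity across.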
Block-level deduplication operates across the entire cluster, not per-node. When any workload on any node writes a block that already exists anywhere in the cluster, no additional storage is consumed.
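The mechanism can be modeled as a content-addressed block index: blocks are keyed by a hash of their contents, so a duplicate write (from any node or VM) only bumps a reference count. This is a minimal single-process sketch; the real index is distributed across the cluster's metadata layer.

```python
import hashlib

# Minimal sketch of cluster-wide block-level dedup: blocks are keyed by
# content hash, so identical data is stored once no matter who wrote it.
# Class and method names are illustrative, not the Storage Scale API.

class DedupPool:
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks: dict[bytes, bytes] = {}   # hash -> physical block
        self.refcount: dict[bytes, int] = {}   # hash -> reference count

    def write(self, data: bytes) -> list[bytes]:
        """Split data into blocks; store each unique block exactly once."""
        handles = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).digest()
            if digest not in self.blocks:
                self.blocks[digest] = block        # first copy: consume space
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            handles.append(digest)                 # duplicate: index entry only
        return handles

pool = DedupPool()
pool.write(b"A" * 8192)   # two identical 4 KiB blocks from one "VM"
pool.write(b"A" * 4096)   # same content again, from a "different VM"
print(len(pool.blocks))   # 1 -> one physical copy, three references
```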
This has significant practical implications. Combined with thin provisioning, you can safely over-allocate storage at 5:1 or higher for typical workloads.
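A back-of-the-envelope check shows why 5:1 overcommit is safe under thin provisioning: only written (and post-dedup) bytes consume pool capacity. All figures below are hypothetical, not measured Storage Scale numbers.

```python
# Hypothetical overcommit arithmetic: thin provisioning means allocated
# space costs nothing until written, and dedup shrinks what is written.

pool_raw_tb = 100            # usable pool capacity
allocated_tb = 500           # sum of provisioned volume sizes (5:1 overcommit)
written_fraction = 0.4       # share of allocated space actually written
dedup_ratio = 2.5            # logical bytes per physical byte after dedup

physical_tb = allocated_tb * written_fraction / dedup_ratio
print(physical_tb)                    # 80.0 TB actually consumed
print(physical_tb <= pool_raw_tb)     # True -> the 5:1 overcommit fits
```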
```
┌──────────────────────────────────┐
│ Storage Scale API                │ Volume management
├──────────────────────────────────┤
│ Distributed Metadata             │ Block map, dedup index
├─────────┬─────────┬──────────────┤
│ Node 1  │ Node 2  │ Node N       │ Local storage pools
│ NVMe    │ SSD     │ HDD          │ Tiered media
└─────────┴─────────┴──────────────┘
```
Every node runs a storage agent that manages local disks and participates in the distributed metadata layer. Volumes are striped across nodes for performance and protected by erasure coding for resilience. The metadata layer tracks block checksums for dedup and integrity verification.
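The recovery path behind self-healing can be illustrated with the simplest possible erasure code, a single XOR parity shard: XOR-ing the surviving shards reconstructs any one lost shard. Real Storage Scale geometries are presumably wider, but the principle is the same.

```python
from functools import reduce

# Toy single-parity stripe: one data shard per node plus an XOR parity
# shard on another node. Losing any one shard is recoverable. Shard
# contents and the 3+1 layout here are illustrative only.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_shards = [b"node1dat", b"node2dat", b"node3dat"]  # striped across nodes
parity = reduce(xor, data_shards)                      # stored on a 4th node

# Simulate losing node 2, then rebuild its shard from the survivors:
survivors = [data_shards[0], data_shards[2], parity]
rebuilt = reduce(xor, survivors)
print(rebuilt == data_shards[1])    # True -> the lost shard is recovered
```

Checksums recorded in the metadata layer tell the agent which reconstructed shard is correct, so repair needs no manual intervention.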