


The Technologies Inside Syneto Storage OS

Syneto Storage OS uses a new and revolutionary smart filesystem. The features that make this filesystem unique are its unified volume and filesystem management, end-to-end data integrity, protection against silent data corruption (bit rot, phantom writes, DMA parity errors, driver bugs), virtually unlimited scalability, and a copy-on-write transactional model.



Software RAIDs

Syneto Storage OS offers software RAID through its RAID-Z and MIRROR topologies. Software RAID is more cost-effective because it eliminates proprietary hardware RAID controllers. When data is updated in a RAID stripe, its parity must be updated too, but two or more disks cannot be updated atomically; a power outage between the data write and the parity write therefore leaves the stripe corrupted. This is known as the RAID-5 write hole, and Syneto Storage OS’s filesystem is immune to it.
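The write hole can be seen in a few lines of Python. This is a conceptual sketch (not Syneto code): single-parity RAID stores the XOR of a stripe's data blocks, so any one lost block can be rebuilt from the rest — unless the data and parity writes were torn apart by a crash.

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all data blocks of a stripe together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild(surviving: list[bytes], stripe_parity: bytes) -> bytes:
    """Recover a single missing block: XOR of the survivors and the parity."""
    return parity(surviving + [stripe_parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks in one stripe
p = parity(data)

# Normal recovery: lose disk 1, rebuild it from the others plus parity.
assert rebuild([data[0], data[2]], p) == b"BBBB"

# The write hole: disk 0 is updated, but the system crashes before the
# parity is rewritten. The stale parity now silently rebuilds garbage.
data[0] = b"ZZZZ"
assert rebuild([data[0], data[2]], p) != b"BBBB"
```

A copy-on-write filesystem avoids this by never updating a live stripe in place, so there is no window in which data and parity disagree.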


Scalability

As a 128-bit filesystem, Syneto Storage OS has no practical limit on space expansion. It uses a pooled storage approach, in which the pool can grow transparently at any time, without downtime or interruptions.


End-to-end data integrity

With Syneto Storage OS, each block is checksummed and the checksum is stored in the pointer to that block, not in the data block itself. Checksums propagate all the way up the filesystem hierarchy to the root node (the uberblock), which is itself checksummed. When data is read, its checksum is recalculated and compared against the stored value. On a mismatch, a self-healing mechanism repairs the block using its checksum together with the redundant copies written in the RAID-Z/MIRROR configuration.
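The self-healing read path can be sketched as follows. This is a conceptual illustration, not Syneto code; the class and function names are hypothetical. The key idea is that the checksum lives in the block *pointer*, so a bad copy is detected before its data is trusted, and a good mirror copy heals it.

```python
import hashlib

class BlockPointer:
    """A pointer to a data block that carries the block's checksum."""
    def __init__(self, data: bytes):
        self.checksum = hashlib.sha256(data).digest()

def read_block(ptr: BlockPointer, mirrors: list[bytes]) -> bytes:
    """Return the first mirror copy whose checksum matches the pointer,
    and self-heal any copy that does not."""
    good = next(d for d in mirrors
                if hashlib.sha256(d).digest() == ptr.checksum)
    for i, d in enumerate(mirrors):
        if hashlib.sha256(d).digest() != ptr.checksum:
            mirrors[i] = good          # repair the corrupted copy in place
    return good

data = b"important payload"
ptr = BlockPointer(data)
mirrors = [data, b"bit-rotted junk!!"]   # second copy silently corrupted
assert read_block(ptr, mirrors) == data  # reader never sees the bad copy
assert mirrors[1] == data                # corrupted mirror was healed
```

Because the checksum sits one level above the data, a disk that returns wrong bytes without reporting an error (a phantom write, for example) is still caught.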


Copy-on-write

Syneto Storage OS applies a copy-on-write transactional model when writing data blocks. Blocks that contain data are never overwritten; instead, a new block is allocated and all connected metadata and the block tree structure are updated to point to it. The copy-on-write model allows snapshots to be taken, which let the system revert to the state it was in when they were created. The same technique also allows the creation of instant clones, without any additional space requirements.
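The copy-on-write model can be sketched in a few lines of Python. This is a conceptual illustration, not Syneto code; all names are hypothetical. A "write" allocates a new block and produces a new tree root, and a snapshot is just a saved root, so old data stays reachable at no extra cost until it diverges.

```python
class CowStore:
    """Toy copy-on-write store: blocks are immutable, roots are cheap."""
    def __init__(self):
        self.blocks = {}     # block id -> data (never overwritten)
        self.root = {}       # live tree: name -> block id
        self.next_id = 0

    def write(self, name: str, data: bytes) -> None:
        self.blocks[self.next_id] = data                    # new block
        self.root = dict(self.root, **{name: self.next_id}) # new tree root
        self.next_id += 1

    def snapshot(self) -> dict:
        return dict(self.root)   # a snapshot is just a copy of the root

    def read(self, root: dict, name: str) -> bytes:
        return self.blocks[root[name]]

fs = CowStore()
fs.write("file", b"v1")
snap = fs.snapshot()            # instant: no data is copied
fs.write("file", b"v2")         # new block; the old one is untouched
assert fs.read(fs.root, "file") == b"v2"
assert fs.read(snap, "file") == b"v1"   # snapshot still sees the old state
```

A clone works the same way: it starts from a snapshot's root and shares every block with the original until either side writes new data.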


Hybrid data pools

Syneto Storage OS supports combining SSD, SAS and SATA devices in the same data pool without sacrificing data access. This mix allows high-performance devices to be used as a fast tier, creating a tiered storage architecture for increased performance.


Variable block size

Syneto Storage OS supports variable block sizes for volumes and filesystems. This makes Syneto Storage OS adaptable to any application-side block size requirement, balancing performance and space usage.


Integrated hypervisor

Syneto Storage OS integrates an on-board hypervisor. This turns the kernel itself into a hypervisor, capable of creating and hosting on-appliance VMs, and allows for better scheduling, shorter I/O paths and reduced network latency. The networking layer is also virtualised.


Cloning

Local virtual machines running on the integrated hypervisor benefit from cloning technology that allows instant cloning and snapshotting of virtual machines without any additional space overhead. Thanks to the copy-on-write filesystem architecture, virtual machines can also share any unchanged data blocks.


VMware plugin

Volumes and filesystems residing on Syneto Storage OS can be shared over iSCSI/FC, NFS and InfiniBand with any VMware hypervisor. Syneto Storage OS can take memory-consistent snapshots of hosted VMware VMs; VMs are automatically quiesced during datastore snapshots to ensure consistency. Additional datastore space provisioning is done automatically, and expanding a Syneto Storage OS LUN will also expand its attached datastore.


Active-active clustering

Syneto’s cluster solution uses a highly scalable software architecture. It provides an active-active clustering mechanism with automatic balancing of storage pools between nodes. Syneto Storage OS integrates ALUA, which allows LUNs to be seen on all storage nodes present within the cluster, creating multiple access paths. The HA architecture provides split-brain fencing mechanisms, including Gratuitous ARP checks and SCSI-2 reservations. Storage pools can be actively migrated between nodes to achieve load balancing.


Snapshot replication

Snapshots taken with Syneto Storage OS can be replicated incrementally at the block level. Only block-level differences between snapshots are synchronised, making on-site and off-site replication very efficient from a network-usage standpoint. Snapshots can be scheduled using different SLAs.
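The idea behind incremental replication can be sketched as follows (a conceptual illustration, not Syneto code): given two snapshots modelled as block maps, only the blocks that differ need to cross the wire.

```python
def block_diff(old: dict, new: dict) -> dict:
    """Blocks (offset -> data) that are new or changed in `new` vs `old`."""
    return {off: data for off, data in new.items() if old.get(off) != data}

snap1 = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}
snap2 = {0: b"AAAA", 1: b"XXXX", 2: b"CCCC", 3: b"DDDD"}

delta = block_diff(snap1, snap2)
assert delta == {1: b"XXXX", 3: b"DDDD"}   # only 2 of 4 blocks are sent

# The receiver applies the delta to its copy of snap1 to rebuild snap2.
replica = {**snap1, **delta}
assert replica == snap2
```

In a copy-on-write filesystem this delta is especially cheap to compute, because blocks that changed between two snapshots are, by construction, newly allocated blocks.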


Compression

Syneto Storage OS uses in-line compression while data is being written to disk, instead of running compression afterwards. Compression algorithms range from space-efficient ones such as GZIP-9 to performance-oriented ones such as LZ4.
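The trade-off between the two families of algorithms can be demonstrated with Python's standard-library `zlib` (LZ4 itself is not in the standard library, so zlib level 1 stands in here for a fast codec and level 9 for a space-efficient one like GZIP-9):

```python
import zlib

payload = b"syneto storage os " * 1000   # repetitive, compresses well

fast = zlib.compress(payload, level=1)   # cheap on CPU, larger output
tight = zlib.compress(payload, level=9)  # more CPU, smaller output

assert len(tight) <= len(fast) < len(payload)
assert zlib.decompress(fast) == zlib.decompress(tight) == payload
```

Because compression happens in-line, a fast codec can even speed up I/O overall: fewer bytes hit the disk than the application wrote.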


Deduplication

Storage OS provides block-level in-line deduplication using cryptographically strong 256-bit checksums such as SHA-256. Deduplication is performed synchronously, using available CPU power, across the entire storage pool. Storage OS also provides granular deduplication, allowing it to be enabled on a per-dataset basis.
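In-line deduplication can be sketched like this (a conceptual illustration, not Syneto code; the class name is hypothetical): each block is hashed with SHA-256 on write, and a block whose hash is already known is stored only once and referenced again.

```python
import hashlib

class DedupStore:
    """Toy block store that deduplicates on write via SHA-256."""
    def __init__(self):
        self.store = {}   # sha256 digest -> data (stored exactly once)
        self.refs = {}    # sha256 digest -> reference count

    def write(self, block: bytes) -> bytes:
        key = hashlib.sha256(block).digest()
        if key not in self.store:          # unseen block: store it
            self.store[key] = block
        self.refs[key] = self.refs.get(key, 0) + 1
        return key                         # caller keeps only the reference

    def unique_bytes(self) -> int:
        return sum(len(b) for b in self.store.values())

ds = DedupStore()
for block in [b"A" * 128, b"B" * 128, b"A" * 128, b"A" * 128]:
    ds.write(block)
assert ds.unique_bytes() == 256   # 4 writes, only 2 unique blocks stored
```

A 256-bit cryptographic hash makes accidental collisions between different blocks astronomically unlikely, which is why the hash alone can stand in for the block's identity.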


Dynamic tracing

For observability, performance analysis and tuning, Syneto Storage OS uses DTrace technology, which can instrument live, running code on the fly. Syneto Storage OS ships with various preconfigured DTrace scripts that provide easy observability for common usage scenarios.


Fault detection

Syneto Storage OS ships with a range of fault-management modules that check the integrity of different hardware and software components. Notifications can be sent via email or SNMP traps.


Management layer

The management layer of Syneto Storage OS is built upon the latest web technologies and leverages a scalable multicomponent messaging architecture. It has a user-friendly Web GUI geared towards usability.