Introducing HPD Connect

Powered by EdgeFS: an open-source-at-the-core, decentralized data fabric for Edge/IoT and data-intensive computing


Enable Edge-Native Kubernetes Apps

As the number of deployed Edge/IoT devices grows and 5G networks spread, multiple challenges arise: data security concerns, the complexity and cost of operating at geographic scale, and, as a result, degraded application performance.

The lack of an open, edge-native data fabric solution limits Kubernetes workload orchestration capabilities. We love Kubernetes and open source, and we are on a mission to defy data gravity and enable a new class of Kubernetes applications: Edge-Native Applications.



Save up to 50x on egress and storage costs with WAN/LAN deduplication and a highly efficient decentralized cache


Applications control versioned datasets through a Trusted API. Once recorded, the data in any given block cannot be altered retroactively
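The retroactive-immutability guarantee can be illustrated with a content-addressed version chain: each version record embeds the digest of its predecessor, so changing any earlier block invalidates every later one. This is a minimal sketch of the general technique, not the actual Trusted API; all names here are illustrative.

```python
import hashlib
import json

def record_version(chain, payload):
    """Append an immutable version record. Each record embeds the
    digest of its predecessor, so any retroactive edit breaks the
    hash links of every later record."""
    prev = chain[-1]["digest"] if chain else None
    body = {"prev": prev, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"digest": digest, **body})
    return digest

def verify(chain):
    """Re-hash every record and check the back-links."""
    prev = None
    for rec in chain:
        body = {"prev": rec["prev"], "payload": rec["payload"]}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

chain = []
record_version(chain, {"dataset": "sensor-logs", "rev": 1})
record_version(chain, {"dataset": "sensor-logs", "rev": 2})
assert verify(chain)

# A retroactive edit of an earlier block is immediately detectable:
chain[0]["payload"]["rev"] = 99
assert not verify(chain)
```

Because every consumer can re-verify the chain independently, no single party has to be trusted to keep history honest.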


Blazing-fast, always-local segment I/O. Access data across geographies transparently, via standard storage protocols


HPD Connect

Decentralized Data Fabric as a Service


Build a cost-efficient, highly available, decentralized namespace

Connected Star

Connect EdgeFS endpoints into a decentralized CDN


Enable AI/ML pipeline processing for data-intensive workloads

Designed for Edge/IoT Computing and AI/ML

An open-source-at-the-core, decentralized data fabric for Edge/IoT computing, available for easy integration with Kubernetes applications across a variety of data-intensive use cases.

In the modern multi-cloud era, where latency, egress, and data storage costs are becoming limiting factors, developers need a solution.

An easy-to-use, Kubernetes-native decentralized data layer service greatly simplifies cross-region, multi-cloud, cloud-native app and API development. Kubernetes applications' PVs/PVCs gain performance, mobility, disaster recovery, and self-healing capabilities while significantly cutting cloud costs.
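From the application's point of view, consuming the data layer would look like claiming any other Kubernetes volume. A sketch of such a PersistentVolumeClaim, expressed as a plain Python manifest, is below; the storage class name "hpd-connect" is an assumed placeholder, not a documented value.

```python
import json

# Hypothetical PVC requesting a shared volume from the data fabric's
# CSI-backed storage class. The PVC fields are standard Kubernetes;
# only "hpd-connect" is an assumption made for this sketch.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "training-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],       # shared across pods/sites
        "storageClassName": "hpd-connect",      # assumed class name
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

The application itself stays unchanged: it mounts the claim like any local volume, while the data fabric handles placement and replication behind it.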

Training autonomous vehicle models consumes enormous quantities of data. Developers need to connect external data sources while still tracking provenance for forensic debugging.

An easy-to-use, Kubernetes-native service greatly simplifies the operation of decentralized data flow topologies. External data sources are versioned at very high speed and synchronized automatically. An intelligent cache and a high-performance design enable AI/ML processing without data copies. The data fabric handles network partitioning gracefully, a key capability for a variety of automotive use cases.
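The "no data copy" idea can be sketched as a content-addressed read-through cache: a dataset version is fetched from its source once, verified against its digest, and every subsequent read of that version is served locally. This is a toy illustration of the general pattern, not the product's cache implementation; all names are made up for the sketch.

```python
import hashlib

class ReadThroughCache:
    """Toy read-through cache keyed by content digest. A version is
    fetched once, verified, and then served locally -- later reads
    trigger no further transfers or copies."""
    def __init__(self, fetch):
        self.fetch = fetch        # callable: digest -> bytes
        self.store = {}
        self.misses = 0

    def get(self, digest):
        if digest not in self.store:
            self.misses += 1
            blob = self.fetch(digest)
            # Integrity check: the content must match its address.
            assert hashlib.sha256(blob).hexdigest() == digest
            self.store[digest] = blob
        return self.store[digest]

# Illustrative "remote" source:
blob = b"lidar frame 0001"
digest = hashlib.sha256(blob).hexdigest()
cache = ReadThroughCache(lambda d: blob)
cache.get(digest)
cache.get(digest)
print(cache.misses)  # prints 1: only the first read leaves the site
```

Because entries are keyed by digest rather than by path, two pipelines reading the same dataset version share one cached copy automatically.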

With the wider availability of 5G and high-precision video cameras, the amount of data that must be processed at the edge frontier is growing significantly. It cannot all be sent to the core or the cloud; it has to be processed where it is collected. Developers need a solution that provides consistent, decentralized processing of data at edge locations.

The decentralized data fabric enables isolated computations with cryptographically strong guarantees on the versioned datasets used by AI/ML algorithms. Developers do not need to worry about data-access efficiency: edge frontier I/O is always local and therefore blazingly fast. The data fabric takes care of decentralized metadata consistency and ensures optimal utilization of network and storage.

Supports a variety of storage protocols on top of the same geo-distributed namespace: S3, NoSQL, SQL, NFS, and iSCSI.

Data placement is replicated and can optionally be erasure-encoded offline. Fault-tolerant, with built-in self-healing and advanced snapshotting capabilities.
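The self-healing property of erasure-coded placement can be shown with the simplest possible scheme, a 2+1 XOR parity code: losing any one fragment leaves enough information to rebuild it. Real deployments use more sophisticated codes; this sketch only demonstrates the principle.

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data fragments plus one parity fragment (a 2+1 scheme).
d0 = b"fragment-0 data."
d1 = b"fragment-1 data."
parity = xor_bytes(d0, d1)

# Suppose the node holding d1 fails. Self-healing rebuilds it from
# the surviving fragment and the parity:
recovered = xor_bytes(d0, parity)
assert recovered == d1
```

The trade-off versus plain replication is the usual one: parity costs far less capacity than a full extra copy, at the price of a reconstruction computation on failure.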

Deep Kubernetes and CSI integration. Deploys in minutes on AWS, Azure, GCP, on-prem, or Edge/IoT, and can run on top of existing SAN/NAS.

Geographically aware deduplication and on-the-fly compression reduce cross-site link utilization and noticeably speed up decentralized data access.

Strong server-side, in-software encryption for selected buckets, objects, files, or LUNs.

Geographically aware multi-tenancy with a QoS feature set at file, object, or block granularity.

No single point of failure (SPoF), thanks to a fully distributed, immutable metadata design with no need for dedicated metadata servers.

Architected with microsecond-resolution I/O failover guarantees, using a location-independent data placement and retrieval technique.
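Location-independent placement is commonly built on techniques like rendezvous (highest-random-weight) hashing: every participant can compute a chunk's replica set from its ID alone, so a failover target is calculated rather than looked up in a metadata server. The sketch below shows that general technique, not necessarily the exact algorithm used here.

```python
import hashlib

def placement(chunk_id, nodes, replicas=3):
    """Rendezvous hashing: rank nodes by hash(chunk_id, node) and
    take the top `replicas`. Every participant computes the same
    answer independently -- no metadata-server lookup required."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{chunk_id}:{n}".encode()).digest(),
        reverse=True,
    )
    return ranked[:replicas]

nodes = ["edge-a", "edge-b", "edge-c", "edge-d", "edge-e"]
primary, *backups = placement("chunk-0042", nodes)

# If the primary fails, the next holder is already known: recomputing
# placement over the survivors promotes an existing backup.
survivors = [n for n in nodes if n != primary]
assert placement("chunk-0042", survivors)[0] in backups
```

Because a node failure changes the ranking only for chunks that ranked the failed node highest, failover involves no global reshuffle, which is what makes very fast failover practical.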