The Data State Layer
for Object Storage

Video · Sensors · Physical AI · Medical Imaging · Documents

Versioned datasets and automatic lineage - shared operational memory for humans and agents.

Petabytes stored. No shared understanding.

Object storage accumulates artifacts - not context. Here's what that costs you.

Datasets without context

Same work, repeated.

Features extracted. Embeddings computed. No persistent record. Work starts over.

Tribal knowledge.

Context lives in Slack, notebooks, and people's heads. When they're unavailable, progress stops.

Agents operating blindly.

No catalog to search. No lineage to inspect. No versioned state to reuse. Just hallucinated pipelines.

Empowering companies

Aicon · Billie · Cyclica · Degould · Hugging Face · Inlab Digital · UBS · Mantis · Papercup · Pieces · Sicara · UKHO · XP Inc · Kibsi · Summer Sports · Motorway

Six lines. Context included.

No SQL. No ETL. No data movement. Just Python.

import datachain as dc

(
 dc.read_storage("s3://acme-robots/runs/**/*.mp4", type="video")
 .filter(dc.C("file.size") > 1000)
 .settings(parallel=8, prefetch=5, workers=150)
 .map(obstacles=detect_obstacles)
 .save("obstacle_detections")
)
1. Point at storage: Connect to any S3, GCS, or Azure bucket. No data copying, no ingestion step.

2. Transform with Python: Filter, map, and enrich using plain Python - LLMs, CV models, or any function.

3. Save as a dataset: Automatically versioned, lineage tracked, fully queryable.

Every operation deposits context - metadata, lineage, and versioned state.

The same code that runs on your laptop runs on a 150-node cluster.
DataChain handles the parallelism, async download, checkpointing, and lineage.
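As a rough mental model for what `settings(parallel=8)` asks for locally, here is a plain standard-library sketch: apply one function across many files concurrently. This is an illustrative analogy only, not DataChain's actual engine, which layers async prefetch, checkpointing, and lineage on top; `detect_obstacles_stub` is a hypothetical stand-in for the `detect_obstacles` function in the example above.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_obstacles_stub(path: str) -> int:
    # Stand-in for a real CV model; here we just "score" a file by the
    # length of its path so the sketch stays self-contained.
    return len(path)

# A batch of video paths, like the s3://acme-robots/runs/**/*.mp4 glob.
paths = [f"runs/session_{i}.mp4" for i in range(16)]

# Fan the function out over 8 workers, roughly what parallel=8 means.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(detect_obstacles_stub, paths))
```

The key property the page is selling is that this fan-out shape stays the same whether the pool is 8 local threads or 150 cluster workers; only the `settings` change.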

What changes when your storage accumulates context.

Impossible with raw object storage. Automatic with DataChain.

Find any file. Without asking anyone.

No more Slack archeology. Anyone on the team can search, filter, and trace data to its source.

Agents operate on shared state

Claude Code and other tools stop hallucinating pipelines - they reuse real datasets instead of creating duplicates.

Reproduce anything. Instantly.

Every file and transformation is versioned. Debugging goes from days to minutes.
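A hedged sketch of what reproducing a past run might look like, assuming the dataset saved in the example above. The dataset name carries over from that example; the version string is illustrative, and the exact version format and availability of `read_dataset` depend on your DataChain release.

```python
import datachain as dc

# Load the exact dataset version a past pipeline produced
# ("obstacle_detections" and "1.0.0" are assumptions, not fixed names).
detections = dc.read_dataset("obstacle_detections", version="1.0.0")

# The chain is queryable like any other, so downstream steps can be
# re-run against the identical inputs the original run saw.
detections.show()
```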

One workspace. Everyone in sync.

Shared operational memory for researchers, engineers, QA, and agents.

Open source to start. Studio to scale.

Same SDK. Same concepts. The difference is scale and collaboration.

Open Source

For individuals and small teams building pipelines over object storage.

  • Python SDK for S3/GCS/Azure
  • Pydantic-native schemas
  • Dataset versioning & lineage
  • Local parallel execution
  • LLM & ML model integration
  • Apache 2.0 license
pip install datachain

Studio

For teams that need shared operational memory across the organization.

  • Everything in Open Source
  • Web UI & dataset registry
  • Team collaboration & access control
  • Distributed cloud compute (BYOC)
  • MCP server for IDE & agent access
  • Enterprise support & SLAs
Book a Demo

Start locally. Scale to shared operational memory when your team or data grows - no rewrite required.

What our customers say

We realized we were solving a problem we shouldn't be solving. With DataChain, what used to require data engineers is now handled seamlessly by researchers - and the whole team moved to the next level.

Yoni Svechinsky

Director of Research | brain.space

What surprised me was how easily researchers adopted DataChain - data tools are usually hard for non-engineers. What surprised me more was when hardware and QA started asking for access too.

Sharon Kohen

Principal Data Engineering | brain.space

Trusted partnerships with global industry leaders

NVIDIA · GitHub · Databricks · Nebius · HashiCorp

Your data never leaves your cloud.

Your Cloud

  • Data stays in your S3/GCS/Azure bucket
  • Compute runs in your VPC (BYOC)
  • No data copying or egress
  • You control access and encryption

DataChain

  • Metadata and lineage indexed - never raw data
  • Control plane, not data plane
  • Role-based access and audit logs
  • SSO & SAML integration

Compliance

  • SOC 2 Type II certified
  • GDPR-ready data processing
  • On-prem deployment available
  • Enterprise security reviews