Fluid Forge
Home
Get Started
  • Local (DuckDB)
  • GCP (BigQuery)
  • Snowflake Team Collaboration
  • Declarative Airflow
  • Orchestration Export
  • Jenkins CI/CD
  • Universal Pipeline
CLI Reference
  • Overview
  • Architecture
  • GCP (BigQuery)
  • AWS (S3 + Athena)
  • Snowflake
  • Local (DuckDB)
  • Custom Providers
  • Roadmap
GitHub
Provider Architecture

Providers are the execution layer of Fluid Forge. They translate your declarative YAML contract into concrete platform operations — creating tables in DuckDB locally, provisioning BigQuery datasets on GCP, or deploying schemas in Snowflake.

This page explains how the provider system works under the hood. If you want to build your own provider, see Creating Custom Providers.

How It Works

Every Fluid Forge command follows the same flow: contract → provider → plan → apply → result.

┌─────────────────────────────────────────────────────┐
│              FLUID Contract (YAML)                  │
│  id, name, version, consumes[], builds[], exposes[] │
└──────────────────────┬──────────────────────────────┘
                       │
                 ┌─────▼─────┐
                 │  fluid    │  CLI parses the contract
                 │  plan     │  and resolves the provider
                 └─────┬─────┘
                       │
           ┌───────────▼───────────┐
           │   Provider Registry   │  Discovers all available
           │   (auto-discovery)    │  providers at startup
           └───────────┬───────────┘
                       │
     ┌──────────────────┼──────────────────┐
     ▼                  ▼                  ▼
┌──────────┐       ┌──────────┐      ┌───────────┐
│  Local   │       │   GCP    │      │ Snowflake │  ...
│ (DuckDB) │       │(BigQuery)│      │           │
└────┬─────┘       └────┬─────┘      └─────┬─────┘
     │                  │                  │
plan() → actions  plan() → actions  plan() → actions
apply() → result  apply() → result  apply() → result

This design gives you:

  • One contract, multiple targets — the same YAML runs locally for development, then deploys to any cloud in production
  • Deterministic planning — plan() is pure with no side effects; the same contract always produces the same actions
  • Idempotent apply — apply() is safe to re-run; it converges toward the desired state
  • Extensibility — add a new provider without changing contracts or the CLI

The Two Required Methods

Every provider must implement exactly two methods:

plan(contract) → actions

Reads the contract and returns a list of actions — plain Python dicts describing what needs to happen:

actions = provider.plan(contract)
# [
#   {"op": "load_data", "path": "data/customers.csv", "table_name": "customers"},
#   {"op": "execute_sql", "sql": "SELECT * FROM customers WHERE active", ...},
#   {"op": "materialize", "source_table": "result", "path": "out/active.csv"}
# ]

Planning makes no network calls and has no side effects. It's just data transformation: contract in, action list out.

apply(actions) → ApplyResult

Executes each action against the target platform and returns a structured result:

result = provider.apply(actions)
# ApplyResult(
#   provider="local",
#   applied=3, failed=0,
#   duration_sec=0.142,
#   timestamp="2026-03-05T10:30:00Z",
#   results=[
#     {"i": 0, "status": "ok", "op": "load_data"},
#     {"i": 1, "status": "ok", "op": "execute_sql"},
#     {"i": 2, "status": "ok", "op": "materialize"}
#   ]
# )
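Putting the two methods together, a minimal provider might look like the following sketch. The contract field names (`path`, `name`, `sql`) are illustrative assumptions, and a plain dict stands in for the SDK's ApplyResult:

```python
import time
from datetime import datetime, timezone

class EchoProvider:
    """Illustrative provider: plans trivially and 'applies' by recording each op."""

    name = "echo"

    def plan(self, contract):
        # Pure transformation: contract dict in, action list out. No I/O.
        actions = []
        for source in contract.get("consumes", []):
            actions.append({"op": "load_data", "path": source["path"],
                            "table_name": source["name"]})
        for step in contract.get("builds", []):
            actions.append({"op": "execute_sql", "sql": step["sql"],
                            "output_table": step["name"]})
        return actions

    def apply(self, actions):
        # Plain dict used here in place of the SDK's ApplyResult.
        start = time.monotonic()
        results = [{"i": i, "status": "ok", "op": a["op"]}
                   for i, a in enumerate(actions)]
        return {
            "provider": self.name,
            "applied": len(results), "failed": 0,
            "duration_sec": round(time.monotonic() - start, 3),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "results": results,
        }

contract = {"consumes": [{"path": "data/customers.csv", "name": "customers"}],
            "builds": [{"sql": "SELECT * FROM customers", "name": "result"}]}
provider = EchoProvider()
actions = provider.plan(contract)
result = provider.apply(actions)
```

Because plan() touches nothing but its input, it can be unit-tested without any platform credentials.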

Provider Discovery

When you run any fluid command, the CLI automatically discovers all available providers. You never need to configure this — it just works.

How Discovery Finds Providers

Discovery runs a 4-layer pipeline, in order:

  1. Entry points: scans pip-installed packages for fluid_build.providers entry points. Use case: third-party providers installed via pip install.
  2. Built-in modules: imports the curated defaults: local, gcp, aws, snowflake, odps. Use case: the providers that ship with Fluid Forge.
  3. Subpackage scan: scans fluid_build/providers/* for any remaining modules. Use case: catches providers added to the package tree.
  4. Fallback: re-attempts imports if the registry is still empty. Use case: recovers from import ordering issues.

Discovery is lazy (runs on first access), idempotent (subsequent calls are no-ops), and thread-safe.
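Those three properties can be sketched in a few lines of Python (simplified: the real implementation also scans entry points and subpackages, and the loader callable here is a stand-in):

```python
import threading

class ProviderRegistry:
    """Lazy, idempotent, thread-safe provider registry (simplified sketch)."""

    _BUILTINS = ["local", "gcp", "aws", "snowflake", "odps"]  # layer-2 defaults

    def __init__(self, loader):
        self._loader = loader          # callable: name -> provider or None
        self._providers = {}
        self._discovered = False
        self._lock = threading.Lock()

    def _discover(self):
        with self._lock:               # thread-safe: one discovery at a time
            if self._discovered:       # idempotent: later calls are no-ops
                return
            for name in self._BUILTINS:
                provider = self._loader(name)
                if provider is not None:
                    self._providers[name] = provider
            self._discovered = True

    def get(self, name):
        self._discover()               # lazy: discovery runs on first access
        return self._providers[name]

    def names(self):
        self._discover()
        return sorted(self._providers)

# Fake loader: pretend every builtin except odps imports successfully.
registry = ProviderRegistry(loader=lambda name: object() if name != "odps" else None)
```

Providers whose import fails are simply absent from the registry rather than crashing the CLI.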

Selecting a Provider

The CLI resolves which provider to use in this order:

  1. The --provider flag: fluid --provider gcp plan contract.yaml
  2. The FLUID_PROVIDER environment variable: export FLUID_PROVIDER=gcp
# List all discovered providers
fluid providers

# Restrict discovery to specific providers (advanced)
FLUID_PROVIDERS="local,gcp" fluid providers
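The flag-over-environment precedence can be expressed as a small resolver. This is an illustrative sketch; in particular, falling back to "local" as the default is an assumption, not documented behavior:

```python
import os

def resolve_provider(cli_flag=None, env=None, default="local"):
    """Resolve the provider name: the --provider flag wins, then FLUID_PROVIDER.

    The final fallback ("local" here) is an assumption for illustration.
    """
    if env is None:
        env = os.environ
    if cli_flag:                       # 1. explicit --provider flag
        return cli_flag
    if env.get("FLUID_PROVIDER"):      # 2. FLUID_PROVIDER environment variable
        return env["FLUID_PROVIDER"]
    return default
```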

Built-in Providers

Fluid Forge ships with these providers:

  • Local (DuckDB): development, testing, CSV/Parquet workflows
  • GCP (BigQuery + GCS): Google Cloud production deployments
  • AWS (S3 + Athena + Glue): Amazon Web Services deployments
  • Snowflake: enterprise data warehouse deployments
  • ODPS (standards export): data product interoperability (ODPS v4.1)
# Local development
fluid --provider local apply contract.yaml --yes

# Deploy to GCP
fluid --provider gcp apply contract.yaml --project my-gcp-project

# Deploy to Snowflake
fluid --provider snowflake apply contract.yaml

# Deploy to AWS
fluid --provider aws apply contract.yaml --region us-east-1

The Action System

Actions are the intermediate representation between planning and execution. Each action is a plain dict with an op field that identifies the operation.

Standard Action Types

  • load_data: import a file into the query engine. Key fields: path, table_name, format.
  • execute_sql: run a SQL transformation. Key fields: sql, output_table, resource_id.
  • materialize: write results to an output file. Key fields: source_table, path, format.
  • copy: copy or export data. Key fields: source, destination, format.
  • noop: placeholder (no operation). No fields.

Cloud providers define their own ops (e.g., ensure_dataset, ensure_table, create_view, grant_role).
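Because actions are plain dicts keyed by op, apply() typically reduces to a dispatch table. The following is a hypothetical sketch, not the SDK's actual dispatcher:

```python
def apply_actions(actions, handlers):
    """Execute actions in order, dispatching each one to its op handler."""
    results = []
    for i, action in enumerate(actions):
        handler = handlers.get(action["op"])
        if handler is None:
            results.append({"i": i, "status": "error",
                            "reason": f"unknown op {action['op']!r}"})
            continue
        handler(action)
        results.append({"i": i, "status": "ok", "op": action["op"]})
    return results

# Toy handlers that record what they would do instead of touching a platform.
log = []
handlers = {
    "load_data":   lambda a: log.append(("load", a["table_name"])),
    "execute_sql": lambda a: log.append(("sql", a["output_table"])),
    "noop":        lambda a: None,
}
results = apply_actions(
    [{"op": "load_data", "path": "data/x.csv", "table_name": "x"},
     {"op": "execute_sql", "sql": "SELECT 1", "output_table": "y"}],
    handlers,
)
```

Adding a provider-specific op is then just a matter of registering one more handler.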

Dependency Resolution

The planner builds a dependency graph and uses topological sorting to determine execution order. Data must be loaded before transformations run, and transformations must complete before materialization:

load_data(customers.csv)  ──┐
                             ├──▶  execute_sql(transform)  ──▶  materialize(output.csv)
load_data(orders.csv)     ──┘
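Python's standard-library graphlib can express exactly this ordering. Here the dependency edges for the example above are written out by hand for illustration; the planner derives them from the contract:

```python
from graphlib import TopologicalSorter

# Map each action to the set of actions it depends on.
deps = {
    "load_data(customers.csv)": set(),
    "load_data(orders.csv)": set(),
    "execute_sql(transform)": {"load_data(customers.csv)",
                               "load_data(orders.csv)"},
    "materialize(output.csv)": {"execute_sql(transform)"},
}

# static_order() yields the actions in a valid execution order:
# loads first, then the transform, then materialization.
order = list(TopologicalSorter(deps).static_order())
```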

Capabilities

Providers advertise what they support through a capabilities object. The CLI uses this to enable or disable features dynamically:

def capabilities(self):
    return ProviderCapabilities(
        planning=True,       # Can generate execution plans
        apply=True,          # Can execute actions
        render=False,        # Can export to external formats
        graph=False,         # Can generate lineage graphs
        auth=False,          # Requires authentication
    )

Check capabilities from the CLI:

fluid providers         # Shows capabilities for all providers

Error Handling

Providers use a two-tier error model:

  • ProviderError: user-fixable problems (bad contract, missing resource). The user sees a friendly error message.
  • ProviderInternalError: bugs or environment failures (API outage). The user sees a full traceback in debug mode.

from fluid_provider_sdk import ProviderError, ProviderInternalError

# User error — they can fix this
raise ProviderError("Dataset 'analytics' not found in project 'my-project'")

# Internal error — something unexpected broke
raise ProviderInternalError(f"BigQuery API returned unexpected status: {status}")
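On the CLI side, the two tiers can be handled differently. The sketch below defines stand-in exception classes so it is self-contained; the real classes come from fluid_provider_sdk, and the handler itself is hypothetical:

```python
import traceback

# Stand-in exception classes for illustration; the real ones come from
# fluid_provider_sdk.
class ProviderError(Exception):
    """User-fixable problem: show a friendly message."""

class ProviderInternalError(Exception):
    """Bug or environment failure: show a traceback in debug mode."""

def run_with_error_handling(fn, debug=False):
    try:
        return fn()
    except ProviderError as exc:
        return f"Error: {exc}"                     # friendly, no traceback
    except ProviderInternalError:
        if debug:
            return traceback.format_exc()          # full traceback in debug mode
        return "Internal error: re-run with --debug for details"

def bad():
    raise ProviderError("Dataset 'analytics' not found")

def boom():
    raise ProviderInternalError("unexpected API status")

friendly = run_with_error_handling(bad)
internal = run_with_error_handling(boom)
debug_out = run_with_error_handling(boom, debug=True)
```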

Environment Variables

  • FLUID_PROVIDER: default provider (e.g. local, gcp, snowflake)
  • FLUID_PROJECT: cloud project or account (e.g. my-gcp-project)
  • FLUID_REGION: deployment region (e.g. us-central1)
  • FLUID_PROVIDERS: restrict which providers to discover (e.g. local,gcp)

Next Steps

  • Build your own provider: Creating Custom Providers
  • Use a specific provider: GCP · AWS · Snowflake · Local
  • See what's coming: Provider Roadmap
Last Updated: 3/30/26, 3:30 PM
Contributors: khanya_ai, fas89