DuckLake supports multi-writer just fine — but only if your catalog is a real database, like Postgres (there’s some interest in SQL Server support too). But if all you have is object storage and a SQLite or DuckDB file as the catalog, you’re stuck with single-writer: object stores aren’t real filesystems, so the DB file can’t be locked. Nothing stops two processes from writing to it at the same time and corrupting it.
If single-writer is enough for you (one notebook, one pipeline, one user), you don’t need to stand up a database server. You just need accidental concurrent runs to fail fast.
The trick: take a blob lease
OneLake speaks the ADLS API, so you can take a lease on a blob — a mutex for free (it seems S3 needs DynamoDB and GCS needs a homemade lock object). Each run does:
Acquire a lease on metadata.db in abfss://.
Download it to the notebook's local disk.
Point DuckLake at the local copy and do the work.
Upload the modified file under the lease.
Release the lease.
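In code, the whole dance is only a few lines. Here is a minimal sketch with azure-storage-blob (the endpoint, workspace, and lakehouse names are placeholders, and auth via DefaultAzureCredential is my assumption):

```python
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Placeholder workspace/lakehouse names; OneLake also speaks the blob endpoint.
blob = BlobClient(
    account_url="https://onelake.blob.fabric.microsoft.com",
    container_name="MyWorkspace",
    blob_name="MyLakehouse.Lakehouse/Files/metadata.db",
    credential=DefaultAzureCredential(),
)

try:
    lease = blob.acquire_lease(lease_duration=-1)  # -1 = infinite lease
except HttpResponseError:
    raise SystemExit("Lease already held: another run is writing. Exiting.")

try:
    # 1. Download the catalog to the notebook's local disk.
    with open("metadata.db", "wb") as f:
        f.write(blob.download_blob().readall())

    # 2. ... point DuckLake at the local metadata.db and do the work ...

    # 3. Upload the modified file under the lease.
    with open("metadata.db", "rb") as f:
        blob.upload_blob(f, overwrite=True, lease=lease)
finally:
    lease.release()
```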
A second notebook that starts while the lease is held fails immediately on acquire_lease. It can't even read a stale copy, and you can't delete the file through the UI. I can already see some other use cases here :)
What about crashed runs?
ADLS leases are either 15–60 seconds fixed, or infinite. Fixed leases need a heartbeat — annoying inside a notebook. Infinite leases work until something crashes — then the file is stuck.
The fix: take an infinite lease, but stamp acquired_at = <utc iso> into the blob’s own metadata when you acquire. When the next run hits a lease conflict, read that timestamp. Older than 12 hours? Call break_lease and re-acquire. A crashed run self-heals within 12 hours. You can shorten that window, or break the lease manually with a one-line script if you can’t wait — there’s a snippet in the README.
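Roughly what that looks like in code (a sketch, not the README snippet; the acquired_at metadata key and the 12-hour window are from above, everything else is illustrative):

```python
from datetime import datetime, timedelta, timezone
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobClient, BlobLeaseClient

STALE_AFTER = timedelta(hours=12)

def acquire_or_break(blob: BlobClient) -> BlobLeaseClient:
    """Take an infinite lease, breaking it first if the holder looks dead."""
    try:
        lease = blob.acquire_lease(lease_duration=-1)
    except HttpResponseError:
        # Lease conflict: check when the current holder acquired it.
        acquired_at = datetime.fromisoformat(
            blob.get_blob_properties().metadata["acquired_at"]
        )
        if datetime.now(timezone.utc) - acquired_at < STALE_AFTER:
            raise SystemExit("Live lease held by another run. Exiting.")
        # Older than 12 hours: assume a crashed run and break the lease.
        BlobLeaseClient(blob).break_lease(lease_break_period=0)
        lease = blob.acquire_lease(lease_duration=-1)
    # Stamp the acquisition time into the blob's own metadata.
    blob.set_blob_metadata(
        {"acquired_at": datetime.now(timezone.utc).isoformat()}, lease=lease
    )
    return lease
```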
TL;DR: Iceberg's value is sociological, not technical. And if you care about lightweight, single-process engines like DataFusion and DuckDB, it's probably your best shot at first-class lakehouse support with wide interoperability.
The first real data engineering work I did was an ingestion pipeline built on pandas and Parquet with Hive-style partitioning — an environment where 512 MB of memory was a genuine architectural constraint, not a rounding error. That experience shaped how I think about data tooling: the engine matters, but so does the ability to swap it out. Engine independence is something I care about more than most people I know, which is probably why I find myself paying close attention to Iceberg. Not for the reasons most people cite, though. It’s not the spec. It’s where the engineering hours are landing.
Getting query engines and catalogs to talk to each other is genuinely hard work. Most of it is unglamorous: error envelope parsing, metadata round-tripping, commit response shapes, partition spec edge cases, auth token quirks between vendors. None of it ships a feature anyone demos. None of it makes a good blog post. It’s the maintenance work that quietly determines whether your stack actually functions.
This is the part that’s easy to miss. Standards don’t converge because the spec is good. They converge because enough people, at enough companies, decide to put sustained hours into the interop bugs — year after year, across release cycles, through personnel changes and shifting priorities.
Look at the Iceberg committer list: Netflix, Apple, Databricks, Snowflake, AWS, Dremio, Microsoft. No single employer controls what gets merged. The incentive to fix cross-vendor interoperability bugs is distributed across the committer base itself. The governance isn’t just a formality — it’s what makes it possible for engineers from genuinely different setups to find, reproduce, and fix the same bug together.
There is one specific layer worth watching: the Iceberg REST catalog specification. It has become the canonical standard for how engines and catalogs communicate. Adoption is real: Polaris, Lakekeeper, Gravitino, and a growing list of vendor-managed catalogs implement it.
But adoption and interoperability are not the same thing.
In practice, vendors still interpret parts of the specification differently. Engines end up handling quirks like slightly different response shapes, undocumented authentication flows, or inconsistent error handling. The nearest analogy is ODBC — a real standard, widely implemented, and still years of painful work before the “connect to anything” promise actually held up in practice.
The Iceberg REST catalog ecosystem feels earlier in that curve. The gap between specification and implementation is exactly where a lot of the maintenance work is happening right now. And closing that gap is precisely the kind of work Iceberg’s governance model is designed to support, because the people hitting the bugs are often the same people with commit access to fix them.
This is where the stakes become concrete, especially for lightweight engines.
For cloud warehouses and large JVM-based systems, the maintenance burden is manageable. There are full-time teams paid to absorb it. For the newer generation of small, single-process engines, the situation is very different. These are compact teams building engines with a specific focus: query latency, memory efficiency, embedded analytics, local execution.
Every hour spent chasing interoperability edge cases is an hour not spent improving the engine itself.
Several of these engines already support Iceberg in some form. But broad, reliable lakehouse support depends on the ecosystem doing its part: stable specifications, faithful implementations, bugs surfaced and fixed upstream.
A well-maintained standard is not just a convenience for these projects. It’s what makes serious lakehouse support achievable without hollowing out the team building the engine.
There is also a broader cost to fragmentation that rarely gets discussed directly. Every hour the ecosystem spends maintaining incompatible metadata layers is an hour not spent making lakehouse systems actually better. That cost doesn’t show up clearly in any individual issue tracker, but it accumulates across the entire ecosystem.
That’s the real argument for Iceberg.
Not that it’s a particularly clever format. Formats are mostly boring by design.
The real advantage is that Iceberg has assembled the right kind of maintenance coalition: enough companies with genuinely different incentives, governance that distributes merge authority, and enough independent implementations that bugs surface from the edges instead of only the center.
Whether that coalition survives long term as the market consolidates is still an open question. But right now, Iceberg is the ecosystem where the boring interoperability work is most likely to get done by someone other than you.
And in infrastructure, that’s close to everything.
That’s also why this feels personal to me.
The 512 MB pipeline I started with wrote Parquet files and hoped for the best — no transactions, no snapshot isolation, just partitions and careful scheduling to avoid stepping on yourself.
What I actually wanted, and couldn't realistically have at the time, was proper ACID semantics with snapshot isolation end to end from something small and cheap. A cloud function. A tiny process with almost no memory to spare.
Iceberg is the closest thing to a realistic path toward that today. Not because the specification is especially elegant, but because it’s where the maintenance work is happening.
And eventually, ecosystems catch up to where the maintenance happens.
Special thanks to Raki Rahman for a few conversations that genuinely reshaped how I think about this space.
A complete data pipeline running on Microsoft Fabric that downloads public data, transforms it into a star schema, exports it as Delta Lake tables, and serves it through a Power BI semantic model with Direct Lake — all from a single Python notebook and using pure SQL
DuckDB as the compute engine — could have been Polars or Lakesail, just a personal preference to be honest
dbt as the transformation framework
A Python script to deploy everything via Fabric REST API
GitHub for source control, documentation, and testing
Note: DuckDB is not officially supported by Microsoft Fabric. Every effort is made to ensure compatibility with OneLake.
Overall Architecture
Why DuckDB + Delta Export
Microsoft Fabric’s lakehouse uses Delta Lake or Apache Iceberg as its table format. Power BI’s Direct Lake mode reads the data directly from OneLake. So whatever engine you use, you need to produce Delta Lake files on OneLake.
DuckDB cannot write Delta Lake natively (write support is experimental at this stage). It has its own table format via the DuckLake extension, but DuckLake writes Parquet files with a DuckDB/SQLite/PostgreSQL metadata catalog.
The OneLake catalog has Iceberg read support only, so that's not an option for now.
The solution: delta_export, a community DuckDB extension that exports DuckLake tables as Delta Lake. The pipeline works like this:
dbt transforms data into DuckLake tables (Parquet + metadata)
ducklake_rewrite_data_files and ducklake_merge_adjacent_files compact the Parquet files
CALL delta_export() converts every DuckLake table into a proper Delta Lake table on OneLake
Without delta_export, DuckLake is not useful in this context. DuckLake manages tables internally, but Fabric has no idea what a SQLite metadata catalog is. It needs Delta transaction logs.
DuckLake stores table metadata in a database and writes data as Parquet files to any storage backend (local, S3, Azure). The DuckDB connection looks like this:
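(Reconstructed sketch; the original snippet is not reproduced here, and the workspace/lakehouse names in the abfss URL are placeholders.)

```python
import duckdb

METADATA_LOCAL_PATH = "/lakehouse/default/Files/metadata.db"
DATA_PATH = (
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Tables/aemo"
)

conn = duckdb.connect()
conn.sql("INSTALL ducklake;")
# SQLite catalog on the OneLake mount, Parquet data on abfss://
conn.sql(f"""
    ATTACH 'ducklake:sqlite:{METADATA_LOCAL_PATH}' AS lake
        (DATA_PATH '{DATA_PATH}');
""")
conn.sql("USE lake;")
```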
METADATA_LOCAL_PATH points to /lakehouse/default/Files/metadata.db — the Files section of the OneLake lakehouse. In a Fabric notebook, /lakehouse/default/ is a local mount of the lakehouse storage. The SQLite file lives right there on OneLake, persisting across notebook runs without any special sync logic. data_path points to the Tables section on OneLake (abfss://...). DuckDB computes in memory, DuckLake tracks what’s in each table via SQLite, and Parquet files land on OneLake.
The single-writer limitation. DuckLake, when used with a file-based DB, is basically a single-writer architecture. Only one process can write to a DuckLake database at a time. This means:
No parallel pipeline runs
No concurrent notebooks writing to the same tables
The Fabric pipeline is set to concurrency: 1 specifically because of this
For this use case, it’s fine — one notebook runs every hour, processes new files, and exits. But if you need concurrent writers, DuckLake is not the right choice.
Edit – May 2026: Updated the pipeline to enforce a single writer at any time — accidental duplicate runs exit cleanly instead of racing. Thanks to OneLake exposing primitives like leases and mutexes, DuckLake’s single-writer model becomes a much more viable option in practice.
Obviously you can use PostgreSQL as a catalog, but that makes the architecture more complex.
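For reference, that swap is a one-line change to the ATTACH (a sketch; the connection string and paths are illustrative):

```python
# Postgres catalog instead of the SQLite file; everything else stays the same.
conn.sql("""
    ATTACH 'ducklake:postgres:dbname=ducklake host=myserver.example.com user=me'
        AS lake (DATA_PATH 'abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/MyLakehouse.Lakehouse/Tables/aemo');
""")
```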
dbt as the Orchestrator
dbt does everything here, not just transformations. The on-run-end hook compacts files and exports Delta, while the on-run-start hook handles all the ingestion:
Fetching daily SCADA and price reports from AEMO’s website
Fetching intraday 5-minute dispatch data
Downloading generator reference data
Archiving everything as partitioned ZIPs on OneLake
Maintaining a csv_archive_log.parquet file for deduplication
The 8 dbt models then process this data:
stg_csv_archive_log — view over the archive log
dim_calendar — date dimension (one-time load)
dim_duid — generator unit reference (smart refresh: only rebuilds when new generators appear)
fct_scada, fct_price — daily historical data, incremental by file
fct_scada_today, fct_price_today — intraday data, incremental by file
fct_summary — combined fact table exposed to Power BI
Every fact model uses file-based incremental processing. Pre-hooks query the archive log, filter out already-processed files, and set DuckDB VARIABLEs with the remaining ZIP paths. The model’s SQL reads from those paths. Next run, those files are skipped.
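Stripped of the dbt Jinja, the pattern looks roughly like this (a sketch; the archive log's column names are assumptions):

```python
# Pre-hook: collect ZIP paths the model has not processed yet
# (zip_path and processed are illustrative column names).
conn.sql("""
    SET VARIABLE new_files = (
        SELECT list(zip_path)
        FROM stg_csv_archive_log
        WHERE processed = false
    );
""")

# Model body: read only those paths; next run they are skipped.
conn.sql("""
    SELECT *
    FROM read_csv(getvariable('new_files'), filename = true);
""")
```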
The Semantic Model: AI-Generated from Thin Air
This is the part that surprises me the most. The model.bim file — the Power BI semantic model definition — was generated entirely by AI (Claude). No Power BI Desktop. No click-through wizards. No SSDT.
The model.bim is a JSON file in TMSL (Tabular Model Scripting Language) format. It defines:
3 tables exposed to Power BI: dim_calendar, dim_duid, fct_summary
5 hidden tables (raw layer, not needed for reporting)
{{ONELAKE_URL}} is a placeholder. The deploy script substitutes it with the actual OneLake URL at deploy time.
Each table partition maps to a Delta table on OneLake:
```json
{
  "mode": "directLake",
  "source": {
    "type": "entity",
    "entityName": "fct_summary",
    "expressionSource": "DirectLake",
    "schemaName": "aemo"
  }
}
```
This maps to Tables/aemo/fct_summary/ — exactly where DuckLake + delta_export writes the Delta files.
AI generated all of this by reading the dbt schema definitions (column names, types, descriptions) and understanding the Direct Lake requirements. No manual TMSL authoring. No reverse engineering from Power BI Desktop. The entire semantic model is version-controlled, diffable, and deployable via API.
Poor Man's CI/CD, No Service Principal
deploy_to_fabric.py is a single Python script that deploys everything to Fabric using the REST API. It has 6 steps:
lakehouse — Create the OneLake lakehouse (with schema support)
files — Upload all dbt project files to Files/dbt/
notebook — Create a 2-cell notebook (install deps + run dbt)
pipeline — Create a pipeline that runs the notebook
schedule — Set up hourly cron schedule
semantic_model — Deploy model.bim with Direct Lake config + refresh
You can run any subset: python deploy_to_fabric.py semantic_model deploys just the BIM.
Authentication uses az login — your browser opens, you sign in, done. The script reads from the production git branch (clones it into a temp directory) so what you deploy is always what’s been merged to production.
python deploy_to_fabric.py # deploy everything
python deploy_to_fabric.py semantic_model # just the semantic model
python deploy_to_fabric.py files notebook # just files + notebook
And here is the script in action:
CI/CD
Assuming you got past the app registration in Azure, GitHub Actions handles CI on every push and pull request to production.
Q&A
Why deploy to Fabric from local instead of from GitHub Actions?
CI (testing, docs, DAG) runs in GitHub Actions — no cloud credentials needed, just Azurite. But Fabric deployment requires authenticating to the Fabric REST API, which means a service principal.
This is just my personal experience working in different companies. As a business user, there is almost zero chance IT will give you permission to register an app. And even if a miracle happens, you still need to convince a Fabric admin. This is not a technical limitation; it is human behaviour.
Instead, deploy_to_fabric.py uses AzureCliCredential — you run az login, your browser opens, you sign in, done. The script picks up your existing identity. You already have the Fabric permissions. No secrets to store, no service principal to manage.
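That's genuinely all the auth code there is (a sketch; the scope and endpoint are the public Fabric REST API ones):

```python
import requests
from azure.identity import AzureCliCredential

# Reuses your existing `az login` session; no secrets stored anywhere.
token = AzureCliCredential().get_token("https://api.fabric.microsoft.com/.default")
resp = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
print(resp.json())
```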
The tradeoff is that deployment requires a human at a keyboard. For a single-person or small-team project, that’s fine — you deploy when you’re ready, not on every push.
Why not just use Datawarehouse/Spark/Dataflow etc? It’s built into Fabric.
All those tools in Fabric are awesome, but it is a lakehouse, and the whole point of a lakehouse is to use whatever you want as long as it produces Parquet and Delta/Iceberg metadata, ideally sorted and with a decent row group size of 2M to 16M rows.
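In DuckDB, for instance, you control both at write time (a sketch; the table, sort column, and path are illustrative):

```python
# Sorted output plus an explicit Parquet row-group size (in rows).
conn.sql("""
    COPY (SELECT * FROM fct_summary ORDER BY settlementdate)
    TO '/lakehouse/default/Files/export/fct_summary.parquet'
    (FORMAT parquet, ROW_GROUP_SIZE 2000000);
""")
```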
Why DuckLake instead of Delta or Iceberg?
DuckDB Delta write support is still experimental.
OneLake Catalog supports Iceberg read only.
If we had Iceberg write, that would be my first preference.
Why is the semantic model AI-generated?
Because it is cool 🙂 and it is unbelievable that AI managed to write it out of thin air and did cool stuff like generating descriptions so Power BI AI behaves better.
What happens if the pipeline fails mid-run?
The DuckLake metadata DB lives on OneLake (Files section). If the run fails mid-way:
Downloaded source files are already archived on OneLake (no re-download needed)
DuckLake metadata reflects whatever was committed before the failure
Next run picks up where it left off using the archive log
The pipeline has a 1-hour timeout. If it hangs, Fabric kills it and the next hourly run starts fresh.
Can this scale?
Python notebooks scale to half a TB of RAM. If you need more, then you are reading the wrong blog 🙂
Where is TMDL?
I could not deploy using TMDL, even after feeding AI all of Kurt Buhler's articles 🙂 BIM seems to be better understood, at least for now.
Why use SQLite instead of DuckDB to store the metadata DB?
The Files section of OneLake is not a real POSIX filesystem. It is not like your local disk — it basically uses FUSE. All Python engines think it is a real filesystem, but I noticed SQLite works better than DuckDB for this. It flushes data more reliably.
What is a skill?
In this case, a skill is simply a way to capture what was learned during the work so the AI can reuse that knowledge later.
I wrote the skill after finishing the task, then asked the AI to summarize the key learnings and steps. The idea is that next time the AI runs a similar task, it will be better informed and produce better results.
This is not specific to Claude. The same approach works with Copilot as well. The format is different, but the idea is exactly the same: capture the knowledge once so the AI can reuse it later.
Parting Thoughts
Everything you have heard about AI is pretty much true. The only wrong part was the timing. We all knew about AI’s potential, but in my experience something changed around December 2025. Suddenly AI became genuinely useful — less hallucination, and it just works well enough. Especially when you can test the outcome. And that is the key insight: data engineering is, in a sense, just software engineering. AI writes the code, AI does everything. Your job as a user is to make sure the tests are comprehensive. Contrary to what you hear from professional software engineers, you don’t need to care about the general case. If it is solid enough and it works for your use case, that is all that matters. Nothing more.
There is another aspect worth mentioning. There is a real market for business users who are not programmers. There is enormous value in using your laptop as your main dev and test environment. You open VSCode, you talk to your favorite AI agent, you run dbt run, and you see results in seconds. That feedback loop changes everything. Data platforms like Fabric become a hosting environment with security boundaries, governance, and all that.
And if you are still reading: dbt tests are just awesome!!!
Same 22 TPC-H queries. Same Delta Lake data on OneLake (SF10). Same single Fabric node. Five Python SQL engines: DuckDB (Delta Classic), DuckDB (Iceberg REST), LakeSail, Polars, DataFusion. You can download the notebook here.
Unfortunately, neither Daft nor chDB supports reading from OneLake over ABFSS.
DuckDB's Iceberg read support is not new, but it used to be very slow. The next version, 1.5, brings massive improvements, and it is now slightly faster than Delta.
They all run the same SQL now
All five engines executed the exact same SQL. No dialect tweaks, no rewrites. The one exception: Polars failed on Query 11 with:
`SQLSyntaxError: subquery comparisons with '>' are not supported`
Everything else just worked. SQL compatibility across Python engines is basically solved in 2026. The differentiators are elsewhere.
Freshness vs. performance is a trade-off you should be making
```python
import duckdb

conn = duckdb.connect()
conn.sql("INSTALL delta_classic FROM community;")
conn.sql("LOAD delta_classic;")
```
Your dashboard doesn’t need sub-second freshness. Your reporting query doesn’t care about the last 30 seconds of ingestion. Declaring a staleness budget upfront – predictable, explicit – is not a compromise. It’s the right default for analytics.
Object store calls are the real bottleneck
Every engine reads from OneLake over ABFSS. Every Parquet file is a network call. It doesn’t matter how fast your engine scans columnar data in memory if it makes hundreds of HTTP calls to list files and read metadata before it starts.
– DuckDB Delta Classic (PIN_SNAPSHOT): caches the Delta log and file list at attach time. Subsequent queries skip the metadata round-trips.
– DuckDB Iceberg (MAX_TABLE_STALENESS): caches the Iceberg snapshot from the catalog API. Within the staleness window, no catalog calls.
– LakeSail: native OneLake catalog integration (SAIL_CATALOG__LIST). You point it at the lakehouse and it discovers tables and schemas through the catalog. Metadata resolution is handled by the catalog layer, not by scanning storage paths, but there is no cache: every query calls the OneLake Catalog API.
– Polars, DataFusion: resolve the Delta log on every query. Every query pays the metadata tax.
An engine that caches metadata will beat a “faster” engine that doesn’t. Every time, especially at scale.
How about writes?
You can write to OneLake today using Python deltalake or pyiceberg; that works fine. But native SQL writes (CREATE TABLE AS, INSERT INTO) through the engine's catalog integration itself? That's still the gap. LakeSail can write Delta just fine, but only via a path (see the sketch after this list).
– LakeSail and DuckDB Iceberg both depend on OneLake’s catalog adding write support. The read path works through the catalog API, but there’s no write path yet. When it lands, both engines get writes for free.
– DuckDB Delta Classic has a different bottleneck: DuckDB’s Delta extension itself. Write support exists but is experimental and not usable for production workloads yet.
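Here is what the path-based write looks like with the deltalake package (a sketch; the URL is a placeholder, and the storage options are the ones delta-rs documents for OneLake):

```python
import pandas as pd
from deltalake import write_deltalake

df = pd.DataFrame({"duid": ["GEN1"], "mw": [42.0]})

# Path-based write: no catalog involved, just the Delta log on OneLake.
# Adjust auth to your environment; <token> is a placeholder.
write_deltalake(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/MyLakehouse.Lakehouse/Tables/demo",
    df,
    mode="overwrite",
    storage_options={"bearer_token": "<token>", "use_fabric_endpoint": "true"},
)
```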
The bottom line
Raw execution speed will converge. These are all open source projects, developers read each other’s code, there’s no magical trick one has that others can’t adopt. The gap narrows with every release.
Catalog integration and caching are the real differentiators. And I'd argue that even *reading* from OneLake is nearly solved now.
—
Full disclosure: I authored the DuckDB Delta Classic extension and the LakeSail OneLake integration (both with the help of AI), so take my enthusiasm for catalog integration with a grain of bias