Testing TPC-DS 10GB Hosted in Fabric OneLake Using Python Data Engines

This is not an official benchmark—just an exercise to experiment with the new Fabric Python notebook.

You can download the notebook and the results here

There is a growing belief that most structured data will eventually be stored in an open table format within object stores, with users leveraging various engines to query that data. The idea of data being tied to a specific data warehouse (DWH) may soon seem absurd, as everything becomes more open and interoperable.

While I can’t predict the future, 2024 will likely be remembered as the year when the lakehouse concept decoupled from Spark. It has become increasingly common for “traditional” DWHs, or any database for that matter, to support open table formats out of the box. Fabric DWH, for instance, uses a native storage layer based on Parquet and publishes Delta tables for consumption by other engines. Snowflake now supports Iceberg, and BigQuery is slowly adding support as well.

I’m not particularly worried about those DWH engines; they have thousands of engineers and ample resources, and they will be doing just fine.

My interest lies more in the state of open-source Python engines, such as Polars and DataFusion, and how they behave in a resource-constrained environment.

Benchmarking Bias

Any test inherently involves bias, whether conscious or unconscious. For interactive queries, SQL is the right choice for me. I’m aware of the various DataFrame APIs, but I’m not inclined to learn a new API solely for testing. For OLAP-type queries, TPC-DS and TPC-H are the two main benchmarks. This time, I chose TPC-DS for reasons explained later.

Benchmark Setup

All data is stored in OneLake’s Melbourne region, approximately 1,400 km from my location. The code checks whether the data already exists and generates it if not, so the whole thing is fully reproducible.
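For illustration, the existence check and generation step might look roughly like this (a minimal sketch; the mounted table path is hypothetical, and I’m assuming DuckDB’s tpcds extension for data generation):

import os
import duckdb
from deltalake import write_deltalake

tables_root = "/lakehouse/default/Tables/tpcds"  # hypothetical mounted OneLake path

con = duckdb.connect()
if not os.path.exists(f"{tables_root}/store_sales"):
    # generate TPC-DS scale factor 10 with DuckDB's tpcds extension
    con.sql("INSTALL tpcds")
    con.sql("LOAD tpcds")
    con.sql("CALL dsdgen(sf=10)")
    # export every generated table to OneLake as a Delta table
    for (table_name,) in con.sql("SHOW TABLES").fetchall():
        write_deltalake(f"{tables_root}/{table_name}", con.table(table_name).arrow(), mode="overwrite")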

I ran each query only once, ensuring that the DuckDB cache, which is temporary, was cleared between sessions. This ensures a fair comparison.

I explicitly used the smallest available hardware since larger setups could mask bottlenecks. Additionally, I have a specific interest in the Fabric F2 SKU.

While any Python library can be used, as of this writing, only two libraries—DuckDB and DataFusion—support:

  • Running the 99 TPC-DS queries (DataFusion supports 95, which is sufficient for me).
  • Native Delta reads from abfss paths, or at least from local paths.
  • Python APIs, as they are required to run queries in a notebook.

Other libraries like ClickHouse, Databend, Daft, and Polars lack either mature Delta support or compatibility with complex SQL benchmarks like TPC-DS.

Why TPC-DS?

TPC-DS presents a significantly greater challenge than TPC-H, with 99 queries compared to TPC-H’s 22. Its more complex schema, featuring multiple fact and dimension tables, provides a richer and more demanding testing environment.

Why 10GB?

The 10GB dataset reflects the type of data I encountered as a Power BI developer. My focus is more on scaling down than scaling up. For context:

  • The largest table contains 133 million rows.
  • The largest table by size is 1.1GB.

Admittedly, TPC-DS 10GB is overkill since my daily workload was around 1GB. However, running it on 2 cores and 16GB of RAM highlights DuckDB’s engineering capabilities.

By the way, I did run the same test using 100GB, and the Python notebook with 16 GB of RAM worked just fine, though it took 45 minutes.

OneLake Access Modes

You can query OneLake using either abfss or mounted storage. I prefer the latter, as it simulates a local path and libraries don’t require authentication or knowledge of abfss. Moreover, it caches data on runtime SSDs, which is an order of magnitude faster than reading from remote storage. Transactions are also included in the base capacity unit consumption, eliminating extra OneLake costs.
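To make the two modes concrete, here is roughly what they look like from DuckDB’s delta extension (a sketch; the workspace, lakehouse, and table names are hypothetical, and the abfss route additionally assumes the Azure credentials are set up):

import duckdb

con = duckdb.connect()
con.sql("INSTALL delta")
con.sql("LOAD delta")

# mounted storage: looks like a local path, no authentication required,
# and the Parquet files end up cached on the runtime SSD
con.sql("SELECT count(*) FROM delta_scan('/lakehouse/default/Tables/tpcds/store_sales')").show()

# abfss: goes straight to OneLake and requires Azure credentials (e.g. a DuckDB secret) on the client side
con.sql("""
    SELECT count(*)
    FROM delta_scan('abfss://myworkspace@onelake.dfs.fabric.microsoft.com/mylakehouse.Lakehouse/Tables/tpcds/store_sales')
""").show()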

It’s worth noting that disk storage in a Fabric notebook is volatile and only available during the session, while OneLake provides permanent storage.

You can read more about how to leverage DuckDB’s native storage format as a cache layer here
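The rough idea: materialize the Delta tables you hit repeatedly into a DuckDB database file on the local SSD, then query that copy (a sketch; the paths and table name are hypothetical):

import duckdb

# persistent DuckDB file on the notebook's local SSD (volatile: it lives only for the session)
con = duckdb.connect("/tmp/cache.duckdb")
con.sql("INSTALL delta")
con.sql("LOAD delta")

# copy the Delta table into DuckDB's native format once...
con.sql("""
    CREATE TABLE IF NOT EXISTS store_sales AS
    SELECT * FROM delta_scan('/lakehouse/default/Tables/tpcds/store_sales')
""")

# ...subsequent queries then read the local native file instead of OneLake
con.sql("SELECT count(*) FROM store_sales").show()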

OneLake Open Internet Throughput

My internet connection is not too bad, but not great either; I managed to reach a peak of 113 Mbps. Note that the extra compute on my laptop does not help much here, as the bottleneck is network access.

Results

The table below summarizes the results across different modes, running both in Fabric notebooks and on my laptop.

  • DuckDB Disk caching yielded the shortest durations but the worst individual query performance, as copying large tables to disk takes time.
  • Delta_rs SQL performance was somewhat erratic.
  • Performance on my laptop was significantly slower, influenced by my internet connection speed.
  • Mounted storage offered the best overall experience, caching only the Parquet files needed for queries.

And here is the geomean

Key Takeaways

  • For optimal read performance, use mounted storage.
  • For write operations, use the abfss path.
  • Having a data center next to your laptop is probably a very good idea 🙂

Due to network traffic, querying within the same region will be faster than querying from the web (I know, it is a pretty obvious observation).

But is OneLake throughput good?

I guess that’s the core question. To answer it, I changed the Python notebook to use 8 cores and ran the same test from my laptop, using the same data stored on my SSD with no calls to OneLake, and the results are just weird:

Reading from OneLake using mounted storage in a Fabric notebook is faster than reading the same data from my laptop!

Looking Ahead to 2025

2024 has been an incredible year for Python engines, evolving from curiosities to tools supported by major vendors. However, as of today, no single Python library supports disk caching for remote storage queries. This remains a gap, and I hope it’s addressed in 2025.

As for Polars and Daft: seriously, work on better SQL support.

Smart Data Pipeline Design: Check for Delta Table Changes with Minimal Overhead

Scenario

I have a notebook that processes hot data every 5 minutes. Meanwhile, another pipeline processes historical data, and I want to create a summary table that uses the hot data incrementally but refreshes entirely when the historical data changes.

Problem

Checking for changes in historical data every 5 minutes is inefficient, slows down the hot data pipeline, and increases costs. There are many potential solutions for this use case, but one approach I used has been working well.

Solution

Using Delta Table Version

Delta tables provide a variety of functions to access metadata without reading actual data files. For instance, you can retrieve the latest table version, which is highly efficient and typically takes less than a second.

dt = try_get_deltatable(f'/lakehouse/default/Tables/{schema}/scada', storage_options=storage_options)
if dt is None:
    # the table does not exist yet: use a sentinel version
    current_version = -1
else:
    # latest committed version of the historical (scada) table, read from metadata only
    current_version = dt.version()

Storing Custom Metadata

You can store arbitrary metadata, such as a Python dictionary, when writing a Delta table. Storing this metadata does not modify the Parquet files, and it can contain information like who wrote the table or any other custom data. In my case, I store the version of the historical table used to build my summary table.

write_deltalake(Summary_table_path,
                df,
                mode="overwrite",
                storage_options=storage_options,
                # record which version of the historical (scada) table this summary was built from
                custom_metadata={'scada': str(current_version)},
                engine='rust')

And here is how this custom metadata is stored:

Combining Both Methods

The hot data pipeline incrementally adds data and checks the version of the historical table, storing it in the summary table. If the stored version differs from the latest version, this indicates a change, triggering a full refresh of the summary table.
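Here is roughly how that check might look inside the hot pipeline (a sketch under my assumptions: I’m assuming the 'scada' key written above can be read back from the summary table’s latest commit info via history(), and full_refresh() / incremental_append() are placeholders for the two actual code paths):

from deltalake import DeltaTable

# latest version of the historical table (metadata only, typically sub-second)
hist = DeltaTable(f'/lakehouse/default/Tables/{schema}/scada', storage_options=storage_options)
latest_version = hist.version()

# version of the historical table the summary was last built from,
# read back from the custom metadata stored in the summary table's latest commit
summary = DeltaTable(Summary_table_path, storage_options=storage_options)
stored_version = int(summary.history(1)[0].get('scada', -1))

if stored_version != latest_version:
    full_refresh()        # placeholder: historical data changed, rebuild the summary table from scratch
else:
    incremental_append()  # placeholder: no change, just append the latest hot data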

Example Scenarios

  • When the Historical Table Has Not Changed
  • When a Change is Detected in the Historical Table

Key Takeaway

The Python Delta package is a versatile tool that can solve complex data engineering challenges efficiently.

You can download the two notebooks here

Horizontal Calculation in DuckDB SQL

I was doing some visualization in a Fabric Python notebook and ended up with this report. Don’t worry, it is not the source table; my data is normalized. It is just a pivot table, not a Delta table.

It makes sense visually; I can easily see the numbers side by side. That’s a decent pivot table, and there is something deeply human about seeing the raw numbers.

Note: Don’t read too much into the data itself; it is not very rigorous (the OS cache is messing everything up).

SQL, Excel Style

Then I wanted to add a column showing the biggest number for every row. Yes, I know how to do it the proper way 🙂 but why can’t I do it here without unpivoting and so on? After all, it is trivial in Excel, right? It turns out it is now possible using DuckDB SQL (development build):

# greatest() unpacked over every column except "query" computes the row-wise maximum
xx = duckdb.sql("select *, greatest(*columns(* exclude (query))) as max from result")
xx

Yes, that’s right, it’s finding the highest number horizontally! I think it is awesome. It seems only greatest and least are supported so far, but I would not be surprised if they added sum and other operations; after all, there is no natural law that dictates a sum in SQL should accept only one column as a parameter.

I want to normalize the numbers so the worst performance becomes 1. I can do that by dividing each column’s values by the max column. Again, I am doing calculations at the row level; an Excel user will probably wonder what the big deal is 🙂
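Something like this, as a sketch (it assumes the pivot relation xx from above, with a query column, one column per engine, and the computed max):

# divide every engine column by the row-wise max: the worst engine per query becomes 1
normalized = duckdb.sql('select query, columns(* exclude (query, "max")) / "max" from xx')
normalized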

I want to count the worst query by engine, so I will just count all the rows with the number 1.
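A sketch of one way to do it, counting directly against the row-wise max (equivalent to counting the 1s in the normalized table):

# for each engine column, count the queries where that engine hit the row maximum (i.e. was the worst)
duckdb.sql('select sum((columns(* exclude (query, "max")) = "max")::int) from xx')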

Now let’s use the original report, “results”, to see the total duration for all engines.

And the geometric mean.
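A sketch of both, assuming the long-format results relation has engine and duration columns (the column names are illustrative):

# total duration per engine
duckdb.sql("select engine, sum(duration) as total_duration from results group by all order by total_duration")

# geometric mean per engine (geomean is a built-in DuckDB aggregate)
duckdb.sql("select engine, geomean(duration) as geomean_duration from results group by all order by geomean_duration")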

Cast multiple columns in one go

This one was really helpful, as I had a table with nearly 40 columns to cast to double. That is a trivial operation with a DataFrame API but a nightmare in SQL, since you need to type every column name. With columns(), it is trivial:

cast(columns(* exclude (DUID, UNIT, SETTLEMENTDATE, I, XX, filename)) as double)
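In context, the full query might look something like this (a sketch; raw_scada is a hypothetical source table name):

duckdb.sql("""
    select DUID, UNIT, SETTLEMENTDATE, I, XX, filename,
           cast(columns(* exclude (DUID, UNIT, SETTLEMENTDATE, I, XX, filename)) as double)
    from raw_scada
""")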

Takeaway

As an Excel user who tries to play a bit with SQL, sometimes I ask myself: surely there must be a better way; why can’t SQL do this or that easily? I remember asking the same question of an industry veteran a couple of years ago, and his answer was basically that the current status quo is the natural order of things and that I should use a DataFrame API for that. Now I know that is not true: in theory SQL can do anything; it’s just that no one had bothered to do the hard work. I had the same reaction when I “discovered” T-SQL, which acts literally like a programming language with loops and if/then!

Having said that, I feel the industry has more appetite for change; notice, for example, how quickly DWH vendors adopted GROUP BY ALL. Maybe it is more practical to enhance SQL than to hope for the adoption of a standard DataFrame API?

And by the way, if you want to do analysis using SQL, do yourself a favor and use a notebook 🙂

Some Observations on Single-Node vs Distributed Systems

I was experimenting in a Fabric notebook, running TPC-H benchmarks and increasing the scale from 1 GB to 1 TB, using Spark and DuckDB (reading both Parquet files and DuckDB’s native format). It is not a very rigorous test; I was more interested in the overall system behavior than in specific numbers, and again, benchmarks mean nothing; the only thing that matters is the actual workload.

The notebook cluster has 1 driver and up to 9 executors; Spark dynamically adds and removes executors. DuckDB is a single-node system and runs only inside the driver.

Some observations

  • When the data fits on one machine, it will run faster than on a distributed system (assuming the engines have comparable performance). In this test, up to 100 GB, DuckDB on Parquet is faster than Spark.
  • It is clear that DuckDB is optimized for its native file format, and there are a lot of opportunities to improve Parquet performance.
  • At 700 GB, DuckDB’s native format is still competitive with Spark running on multiple nodes. That is a very interesting technical achievement, although not very useful in the Fabric ecosystem, as only DuckDB can read that format.
  • At 1 TB, DuckDB on Parquet timed out and the performance of the native format degraded significantly, an indication that we need a bigger machine.

Although I am clearly a DuckDB fan, I appreciate Spark’s dynamic resource allocation; Spark is popular for a reason 🙂

Yes, I could have used a bigger machine for DuckDB, but that is a manual process and changes based on the specific workload. One can imagine a world where a single node gets resources dynamically based on the workload.

I think the main takeaway is that if a workload fits on a single machine, then that will give you the best performance you can get.

Edit: To be clear, so far the main use case for something like DuckDB inside Fabric is cheap ETL. I have been using it for more than a year and it works great, especially since single-node notebooks in Fabric start in less than 10 seconds.