Experimenting with Text-to-SQL: Lessons from Optimizing Product Return Analysis

🌟 Introduction

While testing the DuckDB ODBC driver, which is getting better and better (not production ready, but far less broken than it was two years ago), I noticed something unexpected: running queries through Power BI in DirectQuery mode was actually faster than executing them directly in the DuckDB native UI.

Naturally, that does not make sense!

What followed was an investigation that turned into a fun and insightful deep dive into text-to-SQL generation, Power BI's query behavior, and the enduring relevance of manual SQL tuning.

🧩 The Goal: Find the Worst Product by Return Rate

The task was straightforward:

Calculate total sales, total returns, and return rate by product. Rank the products and find the top 5 with the highest return rates.

To make it interesting, I decided to try:

  1. Letting an LLM generate the SQL by loading the semantic model.
  2. Using Power BI in DirectQuery mode.
  3. Finally, manually tuning the query.

๐Ÿ“ Step 1: LLM-generated SQL โ€” Clean and Understandable

ChatGPT generated a good starting point:

WITH sales_by_product AS (
    SELECT
        i.i_product_name AS product_name,
        SUM(ss.ss_sales_price * ss.ss_quantity) AS total_sales
    FROM store_sales ss
    JOIN item i ON ss.ss_item_sk = i.i_item_sk
    WHERE i.i_product_name IS NOT NULL
    GROUP BY i.i_product_name
),

returns_by_product AS (
    SELECT
        i.i_product_name AS product_name,
        SUM(sr.sr_return_amt) AS total_returns
    FROM store_returns sr
    JOIN item i ON sr.sr_item_sk = i.i_item_sk
    WHERE i.i_product_name IS NOT NULL
    GROUP BY i.i_product_name
),

combined AS (
    SELECT
        COALESCE(s.product_name, r.product_name) AS product_name,
        COALESCE(s.total_sales, 0) AS total_sales,
        COALESCE(r.total_returns, 0) AS total_returns
    FROM sales_by_product s
    FULL OUTER JOIN returns_by_product r
        ON s.product_name = r.product_name
)

SELECT
    product_name,
    ROUND((total_returns / NULLIF(total_sales, 0)) * 100, 2) AS return_rate
FROM combined
WHERE total_sales > 0 -- Avoid divide by zero
ORDER BY return_rate DESC
LIMIT 5;

✅ Pros:

  • Clean and easy to read.
  • Logically sound.
  • Good for quick prototyping.

๐Ÿ” Observation: However, it used product_name (a text field) as the join key in the combined table, initially I was testing using TPC-DS10, the performance was good, but when I changed it to DS100, performance degraded very quickly!!! I should know better but did not notice that product_name has a lot of distinct values.

For perspective: at that scale the sales table is nearly 300 million rows, about 26 GB of highly compressed data, which is not too bad for a laptop.
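A quick sanity check would have caught this early. Here is a minimal sketch, assuming the TPC-DS tables live in a local DuckDB file (the file name is hypothetical):

import duckdb

con = duckdb.connect("tpcds.duckdb")  # hypothetical local TPC-DS database

# A join key with millions of distinct values makes the hash tables behind
# the GROUP BY and FULL OUTER JOIN far more expensive on text than on integers.
print(con.sql("""
    SELECT
        approx_count_distinct(i_product_name) AS distinct_product_names,
        approx_count_distinct(i_item_sk)      AS distinct_item_sks
    FROM item
""").fetchone())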

📊 Step 2: Power BI DirectQuery Surprises

Power BI automatically generates SQL queries based on the data model. Basically, you define measures in DAX and add a visual, which generates a DAX query that gets translated to SQL. Based on some complex logic, Power BI may or may not push just one query to the source system; in this case, it generated multiple SQL queries and stitched the results together.
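I can't show Power BI's exact generated SQL here, but as a hedged illustration, the split roughly amounts to pushing one aggregation query per measure and letting Power BI's formula engine join the result sets itself (the database file name below is hypothetical):

import duckdb

con = duckdb.connect("tpcds.duckdb")  # hypothetical local TPC-DS database

# One pushed query per measure, both grouped by the visible field;
# Power BI's formula engine then stitches the two result sets together.
sales = con.sql("""
    SELECT i.i_product_name, SUM(ss.ss_sales_price * ss.ss_quantity) AS total_sales
    FROM store_sales ss JOIN item i ON ss.ss_item_sk = i.i_item_sk
    GROUP BY i.i_product_name
""").arrow()

returns = con.sql("""
    SELECT i.i_product_name, SUM(sr.sr_return_amt) AS total_returns
    FROM store_returns sr JOIN item i ON sr.sr_item_sk = i.i_item_sk
    GROUP BY i.i_product_name
""").arrow()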

๐Ÿ” Insight: Power BI worked exactly as designed:

  • It split measures into independent queries.
  • It grouped by product_name, because that was the visible field in my model.
  • And surprisingly, it was faster than running the same query directly in the DuckDB CLI!

Here's my screenshot showing Power BI results and DAX Studio:


🧩 Step 3: DuckDB CLI — Slow with Text Joins

Running the same query directly in the DuckDB CLI was noticeably slower: 290 seconds!


⚙️ Step 4: Manual SQL Tuning — Surrogate Keys Win

To fix this, I rewrote the SQL manually:

  • Switched to item_sk, a surrogate integer key.
  • Delayed lookup of human-readable fields.

Here's the optimized query:

WITH sales_by_product AS (
    SELECT
        ss.ss_item_sk AS item_sk,
        SUM(ss.ss_sales_price * ss.ss_quantity) AS total_sales
    FROM store_sales ss
    GROUP BY ss.ss_item_sk
),

returns_by_product AS (
    SELECT
        sr.sr_item_sk AS item_sk,
        SUM(sr.sr_return_amt) AS total_returns
    FROM store_returns sr
    GROUP BY sr.sr_item_sk
),

combined AS (
    SELECT
        COALESCE(s.item_sk, r.item_sk) AS item_sk,
        COALESCE(s.total_sales, 0) AS total_sales,
        COALESCE(r.total_returns, 0) AS total_returns
    FROM sales_by_product s
    FULL OUTER JOIN returns_by_product r ON s.item_sk = r.item_sk
)

SELECT
    i.i_product_name AS product_name,
    ROUND((combined.total_returns / NULLIF(combined.total_sales, 0)) * 100, 2) AS return_rate
FROM combined
LEFT JOIN item i ON combined.item_sk = i.i_item_sk
WHERE i.i_product_name IS NOT NULL
ORDER BY return_rate DESC
LIMIT 5;

🚀 Result: Huge performance gain! From 290 seconds down to 41 seconds.

Check out the improved runtime in DuckDB CLI:


๐ŸŒ In real-world models, surrogate keys arenโ€™t typically used

Unfortunately, in real life people still use text as a join key; luckily, Power BI seems to do better there!


🚀 Final Thoughts

LLMs are funny. When I asked ChatGPT why it did not suggest a better SQL query, I got this answer 🙂

I guess the takeaway is this:


If you're writing SQL queries, always prefer integer types for your keys!

And maybe, just maybe, DuckDB (and databases in general) could get even better at optimizing joins on text columns. 😉

But perhaps the most interesting question is:
What if, one day, LLMs not only generate correct SQL queries but also fully performance-optimized ones?

Now that would be exciting.

You can download the data here; it uses a very small scale factor: https://github.com/djouallah/Fabric_Notebooks_Demo/tree/main/SemanticModel

Edit: running EXPLAIN ANALYZE shows that it is the GROUP BY that is taking most of the time, not the joins.
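If you want to reproduce that check, prefix the query with EXPLAIN ANALYZE to get per-operator timings. A minimal sketch (the database file name is hypothetical):

import duckdb

con = duckdb.connect("tpcds.duckdb")  # hypothetical local TPC-DS database

# EXPLAIN ANALYZE executes the query and annotates the plan with timings;
# in this case the HASH GROUP BY operators dominated, not the joins.
plan = con.sql("""
    EXPLAIN ANALYZE
    SELECT ss_item_sk, SUM(ss_sales_price * ss_quantity) AS total_sales
    FROM store_sales
    GROUP BY ss_item_sk
""").fetchall()
print(plan[0][1])  # the rendered query tree with per-operator timings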

The optimized query already assumed that i.i_item_sk is unique; it is not obvious for DuckDB to rewrite the query that way without knowing the nature of the joins! I guess LLMs still have a lot to learn.
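Verifying that uniqueness assumption is cheap; a sketch (same hypothetical database file):

import duckdb

con = duckdb.connect("tpcds.duckdb")  # hypothetical local TPC-DS database

# The rewrite is only equivalent if item has exactly one row per i_item_sk.
print(con.sql("""
    SELECT COUNT(*) = COUNT(DISTINCT i_item_sk) AS item_sk_is_unique
    FROM item
""").fetchone())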

How to Read a Delta Table with Deletion Vectors and Column Mapping in Python

When attempting to read a Delta table using Python with the deltalake library (Delta_rs, not Spark), you may encounter the following error:

from deltalake import DeltaTable

DeltaTable('/lakehouse/default/Tables/xxx').to_pyarrow_dataset()

DeltaProtocolError: The table has set these reader features: {'deletionVectors'} but these are not yet supported by the deltalake reader.

Alternative: Using DuckDB

A simple alternative is to use DuckDB:

import duckdb

duckdb.sql("SELECT COUNT(*) FROM delta_scan('/lakehouse/default/Tables/xxx')")

Tested with a file that contains deletion vectors.

Column Mapping

The same approach applies to column mapping as well.

Upgrading DuckDB

Currently, Fabric notebooks come preinstalled with DuckDB version 1.1.3. To use the latest features, you need to upgrade to the latest stable release (1.2.1):

!pip install duckdb --upgrade
import sys
sys.exit(0)

Note: installing packages using %pip install does not restart the kernel when you run the notebook; you need to use sys.exit(0) to apply the changes, as some packages may already be loaded into memory.

import duckdb
duckdb.sql("FORCE INSTALL delta FROM core_nightly")
duckdb.sql("FROM delta_scan('/lakehouse/default/Tables/dbo/evolution_column_change')")

The Future of Delta Rust

Currently, there are two Rust-based implementations of Delta:

  1. Delta_rs: The first and more mature implementation, developed by the community. It is an independent implementation of the Delta protocol and utilizes DataFusion and PyArrow (which will soon be deprecated) as its engine. However, Delta_rs does not support deletion vectors or column mapping, though it does support writing Delta tables.
  2. Delta Kernel_rs: The newer, "official" implementation of Delta, providing a low-level API for query engines. It is currently being adopted by DuckDB (and ClickHouse, apparently), with more engines likely to follow. However, it is still a work in progress and does not yet support writing.

There are ongoing efforts to merge Delta_rs with Delta Kernel_rs to streamline development and reduce duplication of work.

Note: although they are written in Rust, we mainly care about the Python API 🙂

Conclusion

At least for now, in my personal opinion, the best approach is to:

  • Use DuckDB for reading Delta tables
  • Use Python Deltalake (Delta_rs) for writing
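A minimal sketch of that split, with hypothetical table paths:

import duckdb
from deltalake import write_deltalake

# Read with DuckDB's delta extension (handles deletion vectors and column mapping).
tbl = duckdb.sql(
    "SELECT * FROM delta_scan('/lakehouse/default/Tables/source_table')"
).arrow()

# Write with delta_rs (the deltalake package).
write_deltalake("/lakehouse/default/Tables/target_table", tbl, mode="append")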

TPC-DS 100GB with Only 2 Cores and 16 GB of RAM

As the year comes to a close, I decided to explore a fun yet somewhat impractical challenge: can DuckDB run the TPC-DS benchmark using just 2 cores and 16 GB of RAM? The answer is yes, but with a caveat: it's slow. Despite the limitations, it works!

Note: I am using lakehouse mounted storage; for background on the different access modes, you can read the previous blog.

Data Generation Challenges

Initially, I encountered an out-of-memory error while generating the dataset. Upgrading to the development release of DuckDB resolved this issue. However, the development release currently lacks support for reading Delta tables, as Delta functionality is provided as an extension available only in the stable release.

Here are some workarounds:

  1. Increase the available RAM.
  2. Use the development release to generate the data, then switch back to version 1.1.3 for querying.
  3. Wait for the upcoming version 1.2, which should resolve this limitation.

The data is stored as Delta tables in OneLake; it was exported as Parquet files by DuckDB and converted to Delta tables using delta_rs (the conversion was very quick, as it is a metadata-only operation).
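Roughly, the pipeline looked like the following hedged sketch; the table subset and paths are illustrative, and convert_to_deltalake is the delta_rs helper that performs the metadata-only conversion:

import duckdb
from deltalake import convert_to_deltalake

con = duckdb.connect("tpcds100.duckdb")  # hypothetical working database

# Generate TPC-DS at scale factor 100 with DuckDB's tpcds extension.
con.sql("INSTALL tpcds")
con.sql("LOAD tpcds")
con.sql("CALL dsdgen(sf=100)")

# Export each table as Parquet into OneLake, then add Delta metadata on top.
for t in ["store_sales", "store_returns", "item"]:  # ...and the remaining tables
    con.sql(f"""
        COPY {t} TO '/lakehouse/default/Tables/{t}'
        (FORMAT PARQUET, PER_THREAD_OUTPUT true)
    """)
    convert_to_deltalake(f"/lakehouse/default/Tables/{t}")  # metadata-only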

Query Performance

Running all 99 TPC-DS queries worked without errors, albeit very slowly (again, using only 2 cores).

I also experimented with different configurations:

4, 8, and 16 cores: Predictably, performance improved as more cores were utilized.

For comparison, I ran the same test on my laptop, which has 8 cores and reads from local SSD storage. The data was generated using the same notebook.

Results

Python notebook compute consumption is straightforward: 2 cores = 1 CU. The cheapest option is the one that consumes the fewest capacity units, assuming speed of execution is not a priority.

  • Cheapest configuration: 8 cores offered a good balance between cost and performance.
  • Fastest configuration: 16 cores delivered the best performance.
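To make the cost trade-off concrete: consumption scales with cores times duration, so a faster run on more cores can still be cheaper overall. A toy calculation with illustrative runtimes (not my measured numbers):

# 2 cores = 1 CU, so CU-seconds = (cores / 2) * duration in seconds.
# The runtimes below are illustrative, not the measured results.
runs = {2: 5400, 4: 3000, 8: 1400, 16: 900}  # cores -> total seconds

for cores, seconds in runs.items():
    cu_seconds = (cores / 2) * seconds
    print(f"{cores:>2} cores: {cu_seconds:,.0f} CU-seconds")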

Interestingly, the performance of a Fabric notebook with 8 cores reading from OneLake was comparable to my laptop with 8 cores and an SSD. This suggests that OneLake's throughput is competitive with local SSDs.

Honestly, It's About the Experience

At the end of the day, it's not just about the numbers. There's a certain joy in using a Python notebook; it just feels right. DuckDB paired with Python creates an intuitive, seamless experience that makes analytical work enjoyable. It's simply a very good product.

Conclusion

While this experiment may not have practical applications, it highlights DuckDB's robustness and adaptability. Running TPC-DS with such limited resources showcases its potential for lightweight analytical workloads.

You can download the notebook for this experiment here:

Testing TPC-DS 10GB Hosted in Fabric OneLake Using Python Data Engines

This is not an official benchmark, just an exercise to experiment with the new Fabric Python notebook.

You can download the notebook and the results here

There is a growing belief that most structured data will eventually be stored in an open table format within object stores, with users leveraging various engines to query that data. The idea of data being tied to a specific data warehouse (DWH) may soon seem absurd, as everything becomes more open and interoperable.

While I can't predict the future, 2024 will likely be remembered as the year when the lakehouse concept decoupled from Spark. It has become increasingly common for "traditional" DWHs, or any database for that matter, to support open table formats out of the box. Fabric DWH, for instance, uses a native storage layer based on Parquet and publishes Delta tables for consumption by other engines. Snowflake now supports Iceberg, and BigQuery is slowly adding support as well.

I'm not particularly worried about those DWH engines: they have thousands of engineers and ample resources; they will be doing just fine.

My interest lies more in the state of open-source Python engines, such as Polars and DataFusion, and how they behave in a limited-resource environment.

Benchmarking Bias

Any test inherently involves bias, whether conscious or unconscious. For interactive queries, SQL is the right choice for me. I'm aware of the various DataFrame APIs, but I'm not inclined to learn a new API solely for testing. For OLAP-type queries, TPC-DS and TPC-H are the two main benchmarks. This time, I chose TPC-DS for reasons explained later.

Benchmark Setup

All data is stored in OneLake's Melbourne region, approximately 1,400 km from my location. The code checks whether the data exists and generates it otherwise, so the whole thing is fully reproducible.

I ran each query only once, ensuring that the DuckDB cache, which is temporary, was cleared between sessions. This ensures a fair comparison.

I explicitly used the smallest available hardware since larger setups could mask bottlenecks. Additionally, I have a specific interest in the Fabric F2 SKU.

While any Python library can be used, as of this writing only two libraries, DuckDB and DataFusion, support:

  • Running the 99 TPC-DS queries (DataFusion supports 95, which is sufficient for me).
  • Native Delta reads for abfss or at least local paths.
  • Python APIs, as they are required to run queries in a notebook.

Other libraries like ClickHouse, Databend, Daft, and Polars lack either mature Delta support or compatibility with complex SQL benchmarks like TPC-DS.
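For DuckDB, the harness boils down to exposing the Delta tables as views and looping over the 99 queries. A minimal sketch, with hypothetical paths; tpcds_queries() is the table function shipped by DuckDB's tpcds extension:

import time
import duckdb

con = duckdb.connect()
con.sql("INSTALL tpcds")
con.sql("LOAD tpcds")  # also ships the 99 benchmark queries

# Expose the OneLake Delta tables under their TPC-DS names (paths hypothetical).
for t in ["store_sales", "store_returns", "item", "date_dim"]:  # ...and the rest
    con.sql(f"CREATE OR REPLACE VIEW {t} AS FROM delta_scan('/lakehouse/default/Tables/{t}')")

# Run every query once and record wall-clock time.
for nr, query in con.sql("FROM tpcds_queries()").fetchall():
    start = time.time()
    con.sql(query).fetchall()
    print(f"query {nr}: {time.time() - start:.1f}s")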

Why TPC-DS?

TPC-DS presents a significantly greater challenge than TPC-H, with 99 queries compared to TPC-H's 22. Its more complex schema, featuring multiple fact and dimension tables, provides a richer and more demanding testing environment.

Why 10GB?

The 10GB dataset reflects the type of data I encountered as a Power BI developer. My focus is more on scaling down than scaling up. For context:

  • The largest table contains 133 million rows.
  • The largest table by size is 1.1GB.

Admittedly, TPC-DS 10GB is overkill since my daily workload was around 1GB. However, running it on 2 cores and 16GB of RAM highlights DuckDBโ€™s engineering capabilities.

By the way, I did run the same test using 100GB; the Python notebook with 16 GB of RAM worked just fine, but it took 45 minutes.

OneLake Access Modes

You can query OneLake using either abfss or mounted storage. I prefer the latter, as it simulates a local path and libraries don't require authentication or knowledge of abfss. Moreover, it caches data on runtime SSDs, which is an order of magnitude faster than reading from remote storage. Transactions are also included in the base capacity unit consumption, eliminating extra OneLake costs.

It's worth noting that disk storage in Fabric notebooks is volatile and only available during the session, while OneLake provides permanent storage.
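Here is what the two access modes look like from a library's point of view; the abfss URL shape is illustrative, with placeholder workspace and lakehouse names:

import duckdb

# Mounted storage: looks like a local path; reads are cached on the runtime SSD.
duckdb.sql(
    "SELECT COUNT(*) FROM delta_scan('/lakehouse/default/Tables/store_sales')"
).show()

# abfss: a remote URL, so the library has to handle authentication itself.
duckdb.sql("""
    SELECT COUNT(*)
    FROM delta_scan('abfss://myworkspace@onelake.dfs.fabric.microsoft.com/mylakehouse.Lakehouse/Tables/store_sales')
""").show()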

You can read more about how to leverage DuckDB's native storage format as a cache layer here.

OneLake Open Internet Throughput

My internet connection is not too bad, but not great either; I managed to peak at 113 Mbps. Note that the extra compute on my laptop will not help much here, as the bottleneck is network access.

Results

The table below summarizes the results across different modes, running both in Fabric notebooks and on my laptop.

  • DuckDB disk caching yielded the shortest total durations but the worst individual query performance, as copying large tables to disk takes time.
  • Delta_rs SQL performance was somewhat erratic.
  • Performance on my laptop was significantly slower, influenced by my internet connection speed.
  • Mounted storage offered the best overall experience, caching only the Parquet files needed for queries.

And here is the geomean:

Key Takeaways

  • For optimal read performance, use mounted storage.
  • For write operations, use the abfss path.
  • Having a data center next to your laptop is probably a very good idea 🙂

Due to network traffic, querying inside the same region will be faster than querying from the web (I know, it is a pretty obvious observation).

But is OneLake throughput good?

I guess that's the core question. To answer it, I changed the Python notebook to use 8 cores and ran the test from my laptop using the same data stored on my SSD, with no calls to OneLake, and the results are just weird:

Reading from OneLake using mounted storage in a Fabric notebook is faster than reading the same data from my laptop!

Looking Ahead to 2025

2024 has been an incredible year for Python engines, evolving from curiosities to tools supported by major vendors. However, as of today, no single Python library supports disk caching for remote storage queries. This remains a gap, and I hope it's addressed in 2025.

For Polars and Daft: seriously, work on better SQL support.