PowerBI Incremental Refresh of Parquet Files, Without a Database

TL;DR: you can incrementally refresh a PowerBI dataset using Parquet files stored in Azure Storage without any database in the middle. You can download a sample pbix here.

I am using this blog post by Gilbert Quevauvilliers, which is based on a technique from Rafael Mendonça. Please read it first.

You may also want to read this post; it uses Synapse Serverless, but it has a section on partitioning your data to Parquet using Python.

1- Add a new Table, Parquet

Make sure it is not loaded. Here is the M code:

let
     // connect to the storage account and list the files in the "parquet" container
     Source = AzureStorage.DataLake("https://xxxxxx.core.windows.net/parquet"),
     // keep only the file content and its folder path
     #"Removed Other Columns" = Table.SelectColumns(Source,{"Content", "Folder Path"}),
     // extract the partition value (the date) from the folder path
     #"Inserted Text Between Delimiters" = Table.AddColumn(#"Removed Other Columns", "Text Between Delimiters", each Text.BetweenDelimiters([Folder Path], "D", "/", 1, 0), type text),
     #"Renamed Columns" = Table.RenameColumns(#"Inserted Text Between Delimiters",{{"Text Between Delimiters", "Date"}}),
     #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns",{{"Date", type datetime}}),
     #"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"Folder Path"})
 in
     #"Removed Columns"

Here is the result:

3- Merge using Inner Join

To read the parquet file content we use this function. Notice we used an inner join in the previous step to avoid reading null Content, which generates errors when you refresh in the service:

Parquet.Document([Content])

And here is the final table.

We configure incremental refresh to refresh the last 2 days.
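For reference, here is a minimal sketch of what the fact table query could look like once the standard RangeStart/RangeEnd incremental refresh parameters are added; I am assuming the staging query above is called Parquet, and the exact merge steps in the sample pbix may differ:

let
     // start from the staging query above (which is not loaded)
     Source = Parquet,
     // incremental refresh filter on the partition date
     #"Filtered Rows" = Table.SelectRows(Source, each [Date] >= RangeStart and [Date] < RangeEnd),
     // read the content of each remaining parquet file
     #"Read Parquet" = Table.AddColumn(#"Filtered Rows", "Data", each Parquet.Document([Content])),
     #"Removed Content" = Table.RemoveColumns(#"Read Parquet", {"Content"})
     // finally, expand the "Data" column to get the columns of your parquet files
 in
     #"Removed Content"

Because only the rows between RangeStart and RangeEnd keep their Content, only those files are actually downloaded and parsed during a refresh.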

4- Testing in PowerBI Service

As you can see, the second refresh is way faster than the first one.

Here is the partition table:

Now let's check the transaction history from Azure Storage; I refreshed again just to be sure.

The second refresh read substantially less data, as only two files were read.

I think with PowerBI Desktop supporting Parquet, we will see more exciting scenarios; I can't wait for Dataflow to support export to Parquet!

If you are still reading, I would appreciate a vote on this idea: having an option in Dataflow to export to a dynamic file name.

Load Data to PowerBI Push Dataset using Easymorph

Easymorph is a very powerful data preparation tool for business users. You can export your results to csv, to a growing list of databases, or to a PowerBI Push dataset.

I think with the new composite models, the Push dataset has become rather interesting, as it behaves more or less like a regular dataset (see limitations here): you can add relationships to other tables, etc.

No Code Data Pipeline

Just to test it, I loaded 7 csv files, applied the usual transformations (filters, select columns, unpivot), then generated two PowerBI datasets: Fact and Dimension.

As Push datasets append by default, I first delete all the rows in the PowerBI dataset before loading new data, just to avoid duplicates. The third export is a csv file (I wish one day PowerQuery will have that option without using hacks).

And here is the model in the service.

PowerBI Support

There are two types of support:

Power BI Command

You can list workspaces, refresh datasets and dataflows, etc. See the documentation.

Export Data to PowerBI

Microsoft provides an API only for Push datasets. See the documentation.

Easymorph has a very generous free tier, and the license is reasonably priced; I think it is worth having a look.

PowerBI has to be more Open

Personally, I think PowerBI will gain more by being more open to third-party tools. I hope one day the Vertipaq engine will be as open as SQL Server, in the sense that any tool can write and read data; it is a database after all. I am not suggesting making it open source or free, you obviously still need to pay for a license, but adding a PowerBI REST API for regular datasets would be a good start.

Now maybe I am dreaming, but when I see file formats like Parquet, I wonder why we don't have an open API to load and read from the Vertipaq engine storage format: it has amazing compression, it is columnar, and it supports multiple schemas in the same dataset.

Marco Russo has expressed this idea more gracefully here.

Pushdown Filters to Parquet From PowerBI Using Synapse Serverless

TL;DR: we use the filepath() function to get the partition date, which can be filtered from PowerBI when used with Synapse Serverless, potentially making substantial cost savings when used with incremental refresh, as fewer files will be scanned.

Edit: there is another technique to incrementally load parquet files without a database.

This blog is just me experimenting with the possibility of passing the filters from PowerBI to a Parquet file using Synapse Serverless.

When you deal with on-demand query engines like Athena, BigQuery, Synapse Serverless, etc., the fewer files read, the less cost incurred, so if you manage to read only the files that contain the data you need, without scanning all your folders, you can make substantial savings.

To be clear, the first time I used Parquet was yesterday, so I have very basic knowledge at this stage.

1- Get the Source Data

I have a csv file with 3 million rows; here is the schema:

2- Partition The Data

As far as I understand the Synapse Serverless engine, when you filter by date, for example, the engine will scan the whole file (maybe if the column is sorted it will scan less data; I don't know and have not tried it yet). Instead, we are going to partition the table by date, which simply generates a lot of files split by the date values. I imagine generating a lot of small files is not optimal either: it reduces cost but potentially makes queries slower (at least this is how other engines work).

I am using this Python script to generate the partitions:

import pandas as pd

# read the csv and parse the settlement date
df = pd.read_csv('box.csv', parse_dates=True)
df['SETTLEMENTDATE'] = pd.to_datetime(df['SETTLEMENTDATE'])
# derive a Date column to partition by
df['Date'] = df['SETTLEMENTDATE'].dt.date
# write one folder of parquet files per Date value
df.to_parquet('nem.parquet', partition_cols=['Date'], allow_truncated_timestamps=True)

And here is the result: a folder of parquet files grouped by Date,

which I can read using PowerBI Desktop.

I just casually read a parquet file without any programming language (kudos to the product team); this is only to test that the parquet files were properly generated.
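If you prefer to do the same test from a blank query instead of the Desktop UI, a minimal sketch looks like this (the local path and file name are just placeholders for one of the generated partition files):

let
     // read a single parquet file from one of the Date=... partition folders
     Source = Parquet.Document(File.Contents("C:\data\nem.parquet\Date=2018-04-11\part-0.parquet"))
 in
     Source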

3- Load The files to Azure Storage

This is only a showcase; in a production workflow you may be using something like ADF or Python cloud functions. Anyway, to upload a lot of files, I am using Azure Storage Explorer.

Now we can start using Synapse Analytics.

4- Create a View In Synapse Serverless

We are going to leverage the filepath() function to get the partition date (please see the documentation). Here is the query to create a view:

USE [test];
GO
DROP VIEW IF EXISTS parquet;
GO

-- expose the partitioned parquet folder as a view;
-- filepath(1) returns the value matched by the first wildcard (the Date=* folder)
CREATE VIEW parquet AS
SELECT
 *, convert(Datetime, result.filepath(1), 120) as date
FROM
    OPENROWSET(
        BULK 'https://xxxxxxxxxx.dfs.core.windows.net/parquet/nem.parquet/Date=*/*.parquet',
        FORMAT='PARQUET'
    )  result

5- Test if the Partition Filter Works

Let's run some tests to see how much data is scanned.

No filter, all the files are scanned: data processed 48 MB.

Now with a filter, only Date 11/04/2018: data processed 1 MB.

6- Test in PowerBI

I built this small model: the Date table is import and Parquet is DirectQuery (be cautious when using DirectQuery, PBI can be very chatty and generate a lot of SQL queries; currently it is better to use only import mode, until cache support is added).

Case 1: View all Dates

Data processed: 43 MB

Case 2: Filter Some Dates

Let's filter only 2 days.

Note: please use the query reduction option in PowerBI Desktop, otherwise every time you move the slicer a query will be generated.

Here is the result

I was really excited when I saw the result: 1 MB. Synapse Serverless was smart enough to scan only 2 files.

7- Incremental Refresh

Instead of DirectQuery, let's try a more practical use case: I configured incremental refresh to refresh only the last 3 days.
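For reference, here is a minimal sketch of what the source query of the incremental refresh table could look like; the server name is a placeholder, and the key point is that the RangeStart/RangeEnd filter folds into a WHERE clause on the view's date column, so Synapse Serverless only scans the matching partitions:

let
     // connect to the Synapse Serverless SQL endpoint (placeholder server name)
     Source = Sql.Database("xxxxx-ondemand.sql.azuresynapse.net", "test"),
     parquet = Source{[Schema="dbo", Item="parquet"]}[Data],
     // this filter folds to WHERE date >= ... AND date < ..., so only the matching partition folders are read
     #"Filtered Rows" = Table.SelectRows(parquet, each [date] >= RangeStart and [date] < RangeEnd)
 in
     #"Filtered Rows"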

And here is the list of queries generated by PowerBI.

I only have data for 2018 and 2019; the second refresh (which started at 3:45 PM) just refreshed the data for the last three days, and because there are no files for those dates, the queries processed 0 MB, which is great.

Another nice behaviour: the select top 100 query, which PowerQuery generates to check field types, scanned only 1 MB!

Just to be sure, I did another refresh at 4:24 PM and checked the partitions using SSMS.

Only the last three partitions were refreshed, and PQ sent only 6 queries (for each day, one selects the data and the other checks the field types).

I think Azure Storage + Synapse Analytics Serverless + PowerBI incremental refresh may end up being a very powerful pattern.

Push Dataset in PowerBI Composite Model: Near Real Time Made Easy!

I was always intrigued by the Push dataset in PowerBI, but it had a big showstopper: you couldn't mix it with other tables, which made it of little use for me. That changed in Dec 2020, when it became possible to use it in a composite model; see this excellent introduction.

To see how it works, I tried it with this dataset: the power generation in Australia, published every 5 minutes.

1- Create Dataset in PowerBI service

2- Get the URL

3- Use some tool to start streaming data

You need some tool to start pushing data into that table; for example, you can use Easymorph. Personally, I have an existing Python script that downloads the data and pushes it to a DB; I only had to add a couple of lines of code to make it push to the PowerBI dataset directly. I copied the code from this function.

Please make sure the datetime format is correct, otherwise it will not work.

# post to PowerBI (df is the pandas DataFrame downloaded earlier in the script)
from datetime import datetime
import urllib.request as urllib2  # the original snippet used the Python 2 urllib2 module

# the push API expects ISO 8601 datetime strings, otherwise the rows are rejected
df['SETTLEMENTDATE'] = [datetime.strftime(item, "%Y-%m-%dT%H:%M:%SZ") for item in df['SETTLEMENTDATE']]

# push URL obtained from the dataset in the PowerBI service (step 2)
REST_API_URL = "https://api.powerbi.com/beta/XXXXXXXXXXXXXXXXXXXXXX"
body = bytes(df.to_json(orient='records'), encoding='utf-8')
req = urllib2.Request(REST_API_URL, body)
response = urllib2.urlopen(req)
print("PowerBI: HTTP {0} {1}\n".format(response.getcode(), response.read()))

4- Build a Composite Model in PowerBI Desktop

DirectQuery to the Push dataset and a dimension table as import. We need the dimension table to know the fuel source of every generator (wind, coal, etc.); the dimension table changes very rarely, less than 20 new rows per year!

And here is the report

Maybe not a big deal for a lot of users, but Export to web does not currently work with composite models (as of 5-Jan-2021).

5- Refresh the Page

Please notice: even if the data source refreshes, you still need to click refresh to get the latest data on the visuals. When I try to add automatic page refresh, I get this error; I am using a Pro license, not PPU!

Edit: Fred Kaffenberger suggested a very clever workaround, using a hidden Play Axis custom visual to force the refresh, and indeed it works very well!

Takeaway

To be very clear, pushing data every 5 minutes is not something new, but it was a relatively complex task; with the new composite model it becomes nearly trivial, and that's the magic of PowerBI.