Dynamic Weekend using Parameter in PowerBI

I have had a perfectly working PowerBI report for the last 7 months. The data is refreshed daily, but we need to see the weekly quantity on a rolling basis over the last 7 days.

When the data refreshes, a calculated column generates a new weekly grouping based on the latest update of the fact table “Quantity Installed”; see the example here.


Everything worked well, until someone asked if he could filter to a previous day: let's say the report is updated on Monday, but he wants to see the data only up to the previous Thursday, which is the official cut-off for the client reporting.

Obviously it is easy to just filter the data, but visually it is annoying, as the week end date is still Monday; calculated columns change only when you refresh the report, then they are fixed.

To show an example, this report shows the total fatalities due to COVID-19 per week. The cut-off is 03/11/2020, which is a Tuesday, so the weekly calendar is based on Tuesday as the end of the week.

Now if you filter the data to the previous Friday, the week end does not change, hence you get the impression that the total numbers for the current week are going down, which is not correct.

So I asked on the PowerBI forum how to do that, and Brian Maher's solution was: since columns do not change, we need to change the DAX. See the link to the original solution.

“The measure below checks to see if our date on the X-axis is the same day of the week as your slicer. If not it returns BLANK( ), if it is the same day of week, it sums the qty for the previous 7 days”

 Qty at 'week end' =
 VAR SelectedSlicerDate =
     SELECTEDVALUE ( 'Slicer Dates'[Date] )
 VAR ThisDateOnAxis =
     SELECTEDVALUE ( 'Calendar'[Date] )
 RETURN
     IF (
         WEEKDAY ( SelectedSlicerDate ) <> WEEKDAY ( ThisDateOnAxis ),
         BLANK (),
         CALCULATE (
             SUM ( 'Data Table'[qty] ),
             DATESBETWEEN ( 'Calendar'[Date], ThisDateOnAxis - 6, ThisDateOnAxis )
         )
     )

It is basically using a well-known technique in DAX: show all the dimension values, then anything that returns BLANK() gets filtered out of the visual.

The DAX works perfectly and solved the user request, but in my real case I had a lot of measures; changing all of them, and risking regressions in a perfectly working data model, just for this small request was not worth it.

All I wanted was to somehow make the column Weekend dynamic at runtime, which was not possible until… October 2020. We now have a new option: we can finally change the dimension dynamically and keep the DAX simple!

If you have not heard about the new parameter feature, a good place to start is here.

My first attempt was to get the day of the week in PowerQuery, but unfortunately query folding stopped working, and to be honest there is no clear way to know what is supported or not, as it is data source specific; here I am using BigQuery.

Instead, I created a simple calendar table with a column Day and seven columns with all the possible week ends (Saturday, Sunday, etc.).

Then I created a new parameter, with a default day of Thursday.
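For reference, the parameter definition in PowerQuery M looks roughly like this (a sketch only; I am calling the parameter WeekEndDay here, since the original is shown as a screenshot):

 // rough sketch of a text parameter listing the seven possible week-end days, with Thursday as the current value
 "Thursday" meta [
     IsParameterQuery = true,
     Type = "Text",
     IsParameterQueryRequired = true,
     List = {"Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"}
 ]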

Then I created a new column “DynamicWeek” using this formula.

I think we can agree the formula is trivial: a simple if-then-else.
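For illustration, a minimal sketch of that step in M, assuming the parameter is called WeekEndDay and the calendar query already has one week-end column per day (the exact names in the original differ):

 // pick the pre-computed week-end column that matches the parameter value
 AddDynamicWeek = Table.AddColumn(
     Calendar,
     "DynamicWeek",
     each if WeekEndDay = "Saturday" then [WE_Saturday]
         else if WeekEndDay = "Sunday" then [WE_Sunday]
         else if WeekEndDay = "Monday" then [WE_Monday]
         else if WeekEndDay = "Tuesday" then [WE_Tuesday]
         else if WeekEndDay = "Wednesday" then [WE_Wednesday]
         else if WeekEndDay = "Friday" then [WE_Friday]
         else [WE_Thursday],
     type date
 )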

To make the parameter dynamic, we create a new disconnected table (not connected to any other table in the model).

Then we bind the value of the column Day to the parameter.

When the user changes the value of the disconnected table using a slicer,

the value gets passed to the parameter, which changes the value of the column “DynamicWeek”.

And here is the result

Update

A couple of hours after I published this blog, I got an idea that actually did work.

Query folding does not work with the day-of-the-week function, but Text.Middle does fold, so all I had to do was create a new disconnected table like this.

The same approach adds a calculated column in the calendar table.
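The exact column is only shown as a screenshot, but the idea is something like the sketch below, assuming the source exposes the date as text with the day name at a fixed position (for example “2020-11-03, Tuesday”):

 // Text.Middle folds back to the source query, unlike the day-of-week function,
 // so the day name can be extracted without breaking query folding
 AddDayName = Table.AddColumn(
     MstDates,
     "DayName",
     each Text.Middle([DateText], 12, 9),
     type text
 )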

Just to be fancy, for the parameter I used a query that always returns the latest date as the default value.
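Something along these lines, using the calendar table (a sketch only; the original query is shown as a screenshot):

 // a query that returns the most recent date, used as the parameter default
 let
     Source = MstDates,
     LatestDate = List.Max(Source[Date])
 in
     LatestDate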

And a view of the data model:

I created a new measure that uses the disconnected table as a cut-off filter, so the dates get filtered:

Death_cutoff =
VAR Sum_ =
    SUM ( covid19new[daily_deaths] )
RETURN
    IF (
        MIN ( MstDates[Date] ) > SELECTEDVALUE ( MstDates_Table_Cutoff[date] ),
        BLANK (),
        Sum_
    )

I imagine you could, for example, check the date text to see if it is the end of the month and generate a monthly calendar; there is a lot of flexibility.

Notice that because the date table is DirectQuery (the fact table and the parameter selection table are import), at least when using BigQuery, the response time for the first selection is around 3 seconds; but when you make the same selection again it becomes instantaneous, as PowerBI caches the query result.

Now for the exciting part, which we don't have yet but is apparently coming next year, at least for SnowflakeDB and Redshift: we will be able to write custom SQL queries with parameters in DirectQuery mode, and then we will finally have “dynamic calculated columns”. OK, if we get them in import mode it will be even better, but let's not get too excited 🙂

DAX is extremely versatile; combine that with dynamic columns and we have a very powerful new tool. The future is exciting.

Dynamic Date Granularity using Parameter in Google Data Studio

I created this dynamic bar chart, which changes the granularity based on the date range selected: if the selected range is longer than a year, it groups the data per year and you can drill down to month; likewise, if it is a month you can drill down to week, and so on. See the example here.

It was just for fun, but someone asked me how it works, so I thought it was worth a quick blog.

As of 27 October 2020, date parameters are only available if you connect to BigQuery.

OK, basically you just need to create a custom query that calculates the duration between the start date parameter and the end date parameter, which are passed by the calendar control.

This example uses average Australian wholesale electricity market prices; obviously it works with any dataset that contains a date.

WITH
  xx AS (
    SELECT
      * EXCEPT(YEAR),
      CAST(DATE_TRUNC(DATE(SETTLEMENTDATE), YEAR) AS timestamp) AS YEAR,
      CAST(DATE_TRUNC(DATE(SETTLEMENTDATE), MONTH) AS timestamp) AS MONTH,
      CAST(DATE_TRUNC(DATE(SETTLEMENTDATE), WEEK) AS timestamp) AS WEEK,
      CAST(DATE(SETTLEMENTDATE) AS timestamp) AS date,
      DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE),
                PARSE_DATE('%Y%m%d', @DS_START_DATE), DAY) + 1 AS nbrdays
    FROM
      xxxxxxx.PRICEARCHIVE
    WHERE
      DATE(SETTLEMENTDATE) >= PARSE_DATE('%Y%m%d', @DS_START_DATE)
      AND DATE(SETTLEMENTDATE) <= PARSE_DATE('%Y%m%d', @DS_END_DATE)
      AND UNIT = "DUNIT" )
SELECT
  REGIONID,
  RRP,
  nbrdays,
  CASE
    WHEN nbrdays <= 1 THEN SETTLEMENTDATE
    WHEN nbrdays <= 7 THEN date
    WHEN nbrdays <= 31 THEN WEEK
    WHEN nbrdays <= 365 THEN MONTH
    ELSE YEAR
  END AS newdate,
  CASE
    WHEN nbrdays <= 1 THEN SETTLEMENTDATE
    WHEN nbrdays <= 7 THEN SETTLEMENTDATE
    WHEN nbrdays <= 31 THEN date
    WHEN nbrdays <= 365 THEN WEEK
    ELSE MONTH
  END AS newdate2
FROM
  xx

Change Dimension Dynamically using Parameter in PowerBI

At last, PowerBI added support for parameters that can be changed by the end user. I guess from a business perspective it is mostly useful when you deal with big data loads and want to control exactly the query generated at the data source, but in this short blog I will show how some use cases that were hard or clunky using DAX become extremely easy using parameters.

I think it is wise to read the documentation here first.

Chris Webb has a great use case using Azure Data Explorer here.

The pbix and report are available for download here.

Update: I added a new use case here, changing the week end date dynamically.

We want to change a dimension based on a user selection from a slicer. Currently only DirectQuery is supported, and to be honest the documentation does not tell you which data sources work; we know SQL Server is not one of them (thanks to Alex for the clarification). Luckily BigQuery works (that was a very nice surprise, to be honest).

I am using the COVID-19 dataset as an example (as it is free and doesn't incur any charge until September 2021); we want to switch dynamically between countries and continents.

1- Load the main table in import mode

2- Create a parameter “Level_Details”
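In PowerQuery M, the parameter definition looks roughly like this (a sketch; the original is shown as a screenshot):

 // rough sketch of a text parameter holding the two grouping levels
 "Countries" meta [
     IsParameterQuery = true,
     Type = "Text",
     IsParameterQueryRequired = true,
     List = {"Countries", "Continent"}
 ]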

3- Import the dimension table with the values countries and continent in DirectQuery mode:

I created a view in BigQuery; PowerQuery stopped folding when I tried to remove duplicates. Although it is a free data source, it is important to use DirectQuery only with dimension tables, to reduce cost and data volume.

4- Include the parameter logic in the dimension table

I created a new column “Grouping_Details” based on the parameter value; it will take either countries or continents.
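A minimal sketch of that step in M (the dimension query name and the two source column names are assumptions):

 // conditional column driven by the Level_Details parameter
 AddGrouping = Table.AddColumn(
     Dimension,
     "Grouping_Details",
     each if Level_Details = "Continent" then [Continent] else [Countries],
     type text
 )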

5- Create a new table that contains all the possible values for the parameter

By the way, you can use any table, either imported or generated using DAX; this is a very clever implementation by the PowerBI team compared to other BI tools.
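For example, built directly in PowerQuery it could be as simple as this (a sketch; any table with the two values will do):

 // disconnected slicer table holding the possible parameter values
 #table(
     type table [Selection = text],
     {{"Countries"}, {"Continent"}}
 )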

6- Bind the value of the column “Selection” to the Parameter

Here is a view of the data model:

It is very important that “Selection_Details” stays a disconnected table, otherwise it will add extra filter conditions to the generated queries, which we don't want; it would still work, but we want to control exactly the query generated by PowerBI.

And the report:

The feature is in preview and I am sure they will introduce more data sources and functionality. By adding support for BigQuery, Microsoft sent a clear message: PowerBI is the best data analytics tool and they will support any third-party data warehouse, even if it is a direct competitor.

Personally, I am very excited by the thought that we are very close to finally having parameter actions in PowerBI, which will introduce a new class of visual analytics interactions that were not even possible before. Please, we need some votes here.

By the way, if you use BigQuery with PowerBI, I would appreciate some votes here; we need support for custom SQL queries with parameters.

Using PowerBI with Azure Synapse Serverless, First Look

Recently I came across a new use case where I thought Azure Synapse serverless might make sense. If you have never heard about it before, here is a very good introduction.

TL;DR: an interesting new tool! I will definitely have another serious look when they support caching for repeated queries.

Basically, a new file arrives daily in Azure storage and needs to be processed and later consumed in PowerBI.

The setup is rather easy; here is an example of the user interface. This is not a step-by-step tutorial, just my first impressions.

I will use AEMO (Australian Energy Market Operator) data as an example; the raw data is located here.

Load Raw Data

First I load the CSV file as-is, defining the columns to be loaded from 1 to 44. Make sure you load only one file while you experiment, then when you are ready change this line

'https://xxxxxxxx.dfs.core.windows.net/tempdata/PUBLIC_DAILY_201804010000_20180402040501.CSV',
'https://xxxxxxxx.dfs.core.windows.net/tempdata/PUBLIC_DAILY_*.CSV',

Then it will load all the files. Notice that when you use filename(), it adds a column with the file name, which is very handy.

USE [test];
GO

DROP VIEW IF EXISTS aemo;
GO

CREATE VIEW aemo AS
SELECT
result.filename() AS [filename],
     *
FROM
    OPENROWSET(
        BULK 'https://xxxxxxxx.dfs.core.windows.net/tempdata/PUBLIC_DAILY_201804010000_20180402040501.CSV',
        FORMAT = 'CSV',
        PARSER_VERSION='2.0'
    )
    with (
c1   varchar(255),
c2   varchar(255),
c3   varchar(255),
c4   varchar(255),
c5   varchar(255),
c6   varchar(255),
c7   varchar(255),
c8   varchar(255),
c9   varchar(255),
c10   varchar(255),
c11   varchar(255),
c12   varchar(255),
c13   varchar(255),
c14   varchar(255),
c15   varchar(255),
c16   varchar(255),
c17   varchar(255),
c18   varchar(255),
c19   varchar(255),
c20   varchar(255),
c21   varchar(255),
c22   varchar(255),
c23   varchar(255),
c24   varchar(255),
c25   varchar(255),
c26   varchar(255),
c27   varchar(255),
c28   varchar(255),
c29   varchar(255),
c30   varchar(255),
c31   varchar(255),
c32   varchar(255),
c33   varchar(255),
c34   varchar(255),
c35   varchar(255),
c36   varchar(255),
c37   varchar(255),
c38   varchar(255),
c39   varchar(255),
c40   varchar(255),
c41   varchar(255),
c42   varchar(255),
c43   varchar(255),
c44   varchar(255)
     )
 AS result

The previous query creates a view that reads the raw data.

Create a View for the Clean Data

As you can imagine, raw data by itself is not very useful; we will create another view that references the raw data view and extracts a nice table (in this case, the power generation every 30 minutes).

USE [test];
GO

DROP VIEW IF EXISTS TUNIT;
GO

CREATE VIEW TUNIT AS
SELECT
    [_].[filename] AS [filename],
    CONVERT(datetime, [_].[c5], 120) AS [SETTLEMENTDATE],
    [_].[c7] AS [DUID],
    CAST([_].[c8] AS DECIMAL(18, 4)) AS [INITIALMW]
FROM [dbo].[aemo] AS [_]
WHERE ([_].[c2] = 'TUNIT' AND [_].[c2] IS NOT NULL)
  AND ([_].[c4] = '1' AND [_].[c4] IS NOT NULL)
  AND ([_].[c1] = 'D' AND [_].[c1] IS NOT NULL)

Connecting PowerBI

Connecting to Azure Synapse is extremely easy; PowerBI just sees it as a normal SQL Server.

Here is the M script:

let
    Source = Sql.Databases("xxxxxxxxxxx-ondemand.sql.azuresynapse.net"),
    test = Source{[Name = "test"]}[Data],
    dbo_GL_Clean = test{[Schema = "dbo", Item = "TUNIT"]}[Data]
in
    dbo_GL_Clean

And the SQL query generated by PowerQuery (which folds):

select [$Table].[filename] as [filename],
[$Table].[SETTLEMENTDATE] as [SETTLEMENTDATE],
[$Table].[DUID] as [DUID],
[$Table].[INITIALMW] as [INITIALMW]
from [dbo].[TUNIT] as [$Table]

Click refresh and, perfect, here are 31 files loaded.

Everything went rather smoothly; there was nothing to set up, and I now have an enterprise-grade data warehouse in Azure. How cool is that!

How Much Does it Cost?

The Azure Synapse serverless pricing model is based on how much data is processed.

First let's try with only one file, running the query from the Synapse workspace. The file is 85 MB; good so far, the data processed is 90 MB, the file size plus some metadata.

Now let's see with the queries generated by PowerBI. In theory my files total 300 MB, so I should be paying only for 300 MB; let's have a look at the metrics.

My first reaction was that there must be a bug: 2.4 GB! I refreshed again and it is the same number!

A look at the PowerQuery diagnostics and a clear picture emerges. PowerBI's SQL connectors are famous for being “chatty”: in this case you would expect PowerQuery to send only one query, but in reality it sends multiple queries, at least one of them checking the top 1000 rows to define the field types; 2.4 GB is roughly eight full scans of the 300 MB of data.

Keep in mind Azure Synapse serverless has no cache (they are working on it), so if you run the same query multiple times, even with the same data, it will “scan” the files multiple times; and as there are no data statistics, a select of 1000 rows will read all the files, even without an ORDER BY.

Obviously, I was using import mode; as you can imagine, using DirectQuery would generate substantially more queries.

Just to be sure, I tried a refresh on the service.

The same: it is still 2.4 GB. I think it is fair to say there is no way to control how many times PowerQuery sends a SQL query to Synapse.

Edit 17 October 2020:

I got feedback that my PowerBI Desktop was probably still open when I ran the test on the service; it turns out that was true. I tried again with the desktop closed and it worked as expected: one refresh generates one query.

Note that even if the CSV files were compressed, it would not make a difference; Azure Synapse bills on uncompressed data.

Parquet files would have made a difference, as only the columns used would be charged, but I did not want to use another tool in this example.

Takeaway

It is an interesting technology: the integration with Azure cloud storage is straightforward, the setup is easy, you can do transformations using only SQL, you pay only for what you use, and Microsoft is investing a lot of resources in it.

But the lack of a cache is a showstopper!

I will definitely check it again when they add caching and cost controls; after all, it is still in preview 🙂