Dynamic Change of Dimension using Google Data Studio Parameter

At last, Google Data Studio has added the option to let the user change the value of a parameter, which makes some new scenarios possible. I will try to show a few cases that previously were either extremely painful or simply not possible.

In this report, I added some cases where I think it is useful; for this blog post, I will start with a very common scenario.

The report shows the daily electricity produced in the eastern states of Australia; just by using a slicer, the level of detail changes to Region, Technology, or individual Generators.

Currently it does not work with the parameter in the formula engine; when I tried, I got an error (Nimantha has a solution using regex which does not require BigQuery; you can see his report here).

Update as of 26 August 2020

Riccardo from the dev team suggested a simple workaround:

let’s just create a dummy dimension that takes its value from the parameter.

(I swear I tried this before and it was not working; anyway, calculated fields in GDS are still a mystery to me.)

Now you can use it in your calculation.
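As a rough sketch (the dummy field name detail_dummy is my own; the parameter is the detail parameter described below, and the field names match the BigQuery query), the dummy dimension is simply the calculated field

detail

and the dynamic dimension is then a CASE over that dummy field:

CASE
  WHEN detail_dummy = "Region" THEN Region
  WHEN detail_dummy = "FuelSourceDescriptor" THEN FuelSourceDescriptor
  WHEN detail_dummy = "StationName" THEN StationName
  WHEN detail_dummy = "Technology" THEN Technology
END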

Alternatively, you can use a custom query from BigQuery; luckily it is accelerated by BI Engine, so it is fast and uses the free 1 GB of memory provided by Data Studio.

SELECT
  *,
  CASE
    WHEN @detail = "Region" THEN Region
    WHEN @detail = "FuelSourceDescriptor" THEN FuelSourceDescriptor
    WHEN @detail = "StationName" THEN StationName
    WHEN @detail = "Technology" THEN Technology
    ELSE NULL
  END AS Level_detail
FROM
  datastudio.today_view_MT

Then you add the values to the parameter. Notice that you can’t populate the list of values from a data source; you have to type the values manually.

Now the column “Level_detail” will dynamically switch to the column “Technology”, “Region”, etc. based on the value selected in the parameter “detail”.

Now you add the parameter “detail” to a filter control. Notice that parameters are colour-coded pink, a nice visual clue!

Now you use your dynamic column in a visual.

And here is the final result.

Personally, I think it is a game changer for Data Studio, especially when you combine it with the power of BI Engine. Interesting times ahead.

Connect Streamlit to the PowerBI service using the XMLA endpoint

Streamlit is a new framework for building data web apps using only Python; you don’t need any knowledge of JavaScript/HTML.

Connecting to PowerBI using Python is well documented; see these excellent tutorials here by David Eldersveld.

Using this code, I managed to build a small app that, given an existing XMLA endpoint, first extracts the existing models and then lets you run arbitrary DAX queries.

Please note that, as of August 2020, the XMLA endpoint is a PowerBI Premium-only feature.

The main connection string and the code to export the results to a dataframe were copied from this answer on Stack Overflow.

import adodbapi as ado
import numpy as np
import pandas as pd
import streamlit as st

def get_df(data):
    ar = np.array(data.ado_results)       # turn ADO results into a numpy array
    df = pd.DataFrame(ar).transpose()     # create a dataframe from the array
    df.columns = data.columnNames.keys()  # set column names
    return df

# ask for the XMLA endpoint and list the models (catalogs) it exposes
source = st.sidebar.text_input('Write your XMLA endpoint')
if source:
    with ado.connect("Provider=MSOLAP.8; Data Source=" + source) as con:
        with con.cursor() as cur:
            cur.execute('select * from $SYSTEM.DBSCHEMA_CATALOGS')
            data = cur.fetchall()
            catalogue = get_df(data)
            catalogue_Select = st.sidebar.selectbox('Select Models', catalogue['catalog_name'])

# run an arbitrary DAX query against the selected model and display the result
dax = st.text_area('Write your DAX Query:')
if dax and source:
    with ado.connect("Provider=MSOLAP.8; Data Source=" + source + " ;Initial catalog=" + catalogue_Select) as con:
        with con.cursor() as cur:
            cur.execute(dax)
            data = cur.fetchall()
            df = get_df(data)
            st.write(df)

And here is the result.

Unfortunately, adodbapi requires Windows, which makes deploying the app a bit harder. You can try Azure Web App, which has a Windows runtime; I wish it were as easy as Heroku!

The good news is that Microsoft recently added support for .NET Core, so hopefully I will update the blog with a cross-platform solution.

To run the app on your laptop, just type:

streamlit run app.py

It is a proof of concept, but I see a lot of use cases; an obvious one is to build web apps for visualizations not supported by PowerBI, like massive-dataset maps or 3D viz.
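For example, here is a rough sketch (not part of the original app) of how the dataframe returned by the DAX query could be rendered as a Deck.gl map with pydeck inside the same Streamlit script, assuming the query returns columns named lng and lat:

import pydeck as pdk

# hypothetical extension, placed inside the "if dax:" block above;
# assumes the DAX result df has "lng" and "lat" columns
layer = pdk.Layer(
    "ScatterplotLayer",
    data=df,
    get_position='[lng, lat]',
    get_radius=100,
)
st.pydeck_chart(pdk.Deck(layers=[layer]))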

How to plot a Digital Elevation Model in Data Studio

TL;DR: a sample dataset with x, y, z, red, green, blue and a custom viz in Google Data Studio using the Deck.GL point cloud; see the example here.

I added a new dataset so you can test it yourself; you can either load it using BigQuery or use the file upload connector in Data Studio.

Section 1 explains how we got the data; if you are only interested in testing the visual, go to section 2.

1-How to get the Data

For some reason, it is extremely painful to get a dataset with x, y, z, r, g, b.

Luckily, a couple of days ago I was on Twitter and saw this tweet by Michael Sumner.

It turns out extracting coordinates and elevation is extremely easy using R: all you need is the centre location and the dimensions of the area you are interested in, and the ceramic package will extract x, y, z automatically into a nice dataframe. I then took that data, uploaded it to BigQuery using the bigrquery package, and plotted it using a custom viz I built with Deck.gl (see the linked report).

Here is the script I used:

library(raster)
library(ceramic)
library(bigrquery)

bq_auth("XXXXXXXX.json")   # BigQuery service account key
Sys.setenv(MAPBOX_API_KEY = "DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD")

# download the imagery and the elevation around the centre point (lon, lat)
cc <- ceramic::cc_location(cbind(14.428778, 40.822973), buffer = c(2000, 2000), zoom = 15)
el <- ceramic::cc_elevation(cc)
el1 <- resample(el, cc, method = "bilinear")   # align elevation to the imagery grid

# convert both rasters to dataframes and join them on the x, y coordinates
df1 <- as.data.frame(cc, xy = TRUE)
df2 <- as.data.frame(el1, xy = TRUE)
df <- merge(x = df1, y = df2, by = c("x", "y"), all.x = TRUE)

# roughly rescale the projected coordinates and map the three image bands to colour columns
df <- transform(df, lng = x / 100000, lat = y / 100000, red = layer.1, blue = layer.2, green = layer.3)
df <- df[c("lng", "lat", "layer", "red", "blue", "green")]

# upload the result to BigQuery (dataset GIS, table VOLCANO)
job <- insert_upload_job("PROJECT_ID",
                         "GIS",
                         "VOLCANO",
                         df,
                         create_disposition = "CREATE_IF_NEEDED",
                         write_disposition = "WRITE_TRUNCATE")
wait_for(job)

2-Plot the Data using Point Cloud Viz

The custom viz address is:

Or you can just copy the report and use your own data.

All fields are required except tooltips; by default, it will show the coordinates.

I used Mount Tahat as an example; it is the highest mountain in the south of Algeria, an extremely beautiful area.

Data Studio limits the number of rows passed to a custom visual to 1 million; here I made sure it is less than 750K, as that is the maximum that can be downloaded from the visual.
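If your table is larger than that, one simple way to cap what the visual receives is a plain LIMIT in a custom query (just a sketch, using the table created by the R script above):

SELECT
  lng, lat, layer, red, green, blue
FROM
  `PROJECT_ID.GIS.VOLCANO`
LIMIT
  750000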

3-The end Results

Mount Uluru in Australia

Volcano Vesuvius in Italy

Using the new Convexhull function in BigQuery to reduce Geometry complexity

BigQuery recently introduced two new GIS functions, ST_CONVEXHULL and ST_DUMP.

Read the announcement here; when I saw it, I immediately thought about this use case.

The Problem

Although showing maps in BI software has improved dramatically in the last couple of years, unless you use Tableau there is always a hard limit on how much data you can show on a map. Even if you can show more, it is better to reduce the volume of data just for performance’s sake; users are so spoiled these days that they complain when their report does not show up in less than 2 seconds.

Although the example used here is very specific, I am sure it can be extended to other use cases.

Let’s say you want to show a lot of points with one colour-coded attribute; in many cases, the end user only wants to see the distribution of the attribute, not the individual points, see here.

That’s a lot of points (my real case is 58 thousand).

Convexhull to the rescue

Convex hull is very handy: the input is a group of points and the output is a closed polygon. I use it a lot in QGIS, but the killer feature here is that, because it is SQL and the attributes are dynamic (in my use case they change daily), you can write a query that dynamically generates new geometry: polygons, linestrings, or even the original points if they can’t be grouped.
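To get a feel for the function, here is a minimal sketch (the table and column names are hypothetical, with x and y being longitude and latitude) that collapses each group of points into a single convex-hull polygon:

SELECT
  group_id,
  ST_ASTEXT(ST_CONVEXHULL(ST_UNION_AGG(ST_GEOGPOINT(x, y)))) AS wkt
FROM
  `my_dataset.points`
GROUP BY
  group_id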

Now the trick is to group by status and the existing grouping; for example, in this dataset:

  1. Check whether all the points in one area have the same status using a count distinct; if an area has a single status, it generates a polygon.
  2. If one area has multiple statuses, and hence multiple colours, then fine, we drop to the row level and generate line strings.
  3. If one line string has multiple colours, then we drop to points.

I built this SQL view with the help of Mikhail Berlyant; the source data is here, just replace “xxxxxxx.SolarFarm” with your table.

WITH
  -- build a point from x, y and count the distinct statuses per row and per area
  source AS (
    SELECT
      *,
      ST_GEOGFROMTEXT(CONCAT("POINT (", x, " ", y, ")")) AS POINT,
      COUNT(DISTINCT status) OVER (PARTITION BY ROW) AS multiple_status,
      COUNT(DISTINCT status) OVER (PARTITION BY area) AS multiple_status_area
    FROM
      `xxxxxxx.SolarFarm`),
  -- pick the grouping level: area if it has a single status, else row, else the individual point
  tt AS (
    SELECT
      id, pole_nr, color, area, ROW, status, POINT,
      CASE
        WHEN multiple_status_area = 1 THEN area
        WHEN multiple_status = 1 THEN row
        ELSE CAST(id AS string)
      END AS newgroup
    FROM
      source),
  -- collapse each group of points into one convex-hull geometry
  ff AS (
    SELECT
      newgroup,
      ST_ASTEXT(ST_CONVEXHULL(ST_UNION_AGG(POINT))) AS WKT
    FROM
      tt
    GROUP BY
      1),
  -- attach the status back to each generated geometry
  xx AS (
    SELECT
      tt.newgroup,
      wkt,
      tt.status
    FROM
      tt
    LEFT JOIN
      ff
    ON
      tt.newgroup = ff.newgroup)
SELECT
  newgroup,
  wkt,
  status
FROM
  xx
GROUP BY
  1,
  2,
  3

And here is the result side by side with the original data: from 3528 rows to 283 rows, that’s a big improvement.

As of July 2020, Google Data Studio does not support geometry, and the total number of points is limited to 10K; you can use other custom visuals, but currently map tiles are blocked.

If you are using PowerBI to view the data, you need to use the excellent Icon Map visual, as it supports WKT geometry.