Building a Complex Data Model Using Nested Data in Malloy

Last year, I wrote a blog on how to build a nested, complex data model using BigQuery; please read it first. Unfortunately, using standard SQL to nest and unnest data was a painful exercise, maybe due to my lack of experience, or maybe my brain is simply wired to think only about tabular data. I am happy to say that with the new data modeling language Malloy, the experience has become much simpler.

For a start, Malloy shows a nice expanded view of the nested table.

The model is self-explanatory.

Now you just need to write queries; Malloy will generate the SQL behind the scenes.

For example, the total values of all measures:
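The query looks something like this (a rough sketch; the source name budget and the measure names are made up, and Malloy syntax has evolved since this was written):

query: budget -> {
  aggregate: total_budget, total_actual
}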

I have no idea what the generated SQL means, to be honest.

Now grouped by category:
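With the same hypothetical names, this is just the previous sketch plus a group_by:

query: budget -> {
  group_by: category
  aggregate: total_budget, total_actual
}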

Now for something trickier: let's group by year. Notice that the budget has no date values, so the results are not correct!

Instead, you first group by year and then nest the second level of grouping, something like this:
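Roughly like this (hypothetical names again): nest runs the second level of grouping inside each year:

query: budget -> {
  group_by: year
  nest: by_category is {
    group_by: category
    aggregate: total_budget, total_actual
  }
}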

Malloy exports nested data structures natively, but for this exercise I want tabular results. You can easily flatten the previous query using project; notice how you can simply reference a previous query. The measure progress is simply actual/budget, as in the sketch below.
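A sketch of the flattening step (hypothetical names): chaining a second stage with -> and projecting the nested fields un-nests them back into a flat table:

query: budget -> {
  group_by: year
  nest: by_category is {
    group_by: category
    aggregate: total_budget, total_actual
  }
} -> {
  project:
    year
    by_category.category
    progress is by_category.total_actual / by_category.total_budget
}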

I think it is simple enough that a “normal” BI person can work with it. You can find the model here. The model currently works only with BigQuery; I found some bugs with the DuckDB connector, which will hopefully be fixed in the next update of Malloy.

If you work with nested data, Malloy makes it nearly easy 🙂

Update:

Malloy just added an “all” semantic to ignore a group-by dimension; see the same query using the new syntax below. This is amazing!
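A rough sketch of what that looks like (hypothetical names; check the Malloy docs, as the language moves fast): all() evaluates a measure while ignoring the group_by:

query: budget -> {
  group_by: year
  aggregate:
    total_actual
    pct_of_total is total_actual / all(total_actual)
}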

First Look at Google Malloy

Malloy is a new modeling language created by the original author of Looker. It was released last year under an open-source license and made available to Windows users just last week. As someone who is enthusiastic about data modeling, I thought it was worth having a look.

Currently, Malloy is available as a free extension for Visual Studio Code. Malloy doesn't have any “calculation engine” of its own: you build your model in a text file where you define source tables, measures, relationships, and dimensions, and then you write queries. Malloy parses those queries and generates SQL code, which is more or less how every BI tool works behind the scenes.

Interestingly, the Malloy extension has DuckDB installed by default, which means we have a full semantic layer and a fast OLAP engine as an open-source offering. That's a very big deal!

I spent some time building a simple model, just one fact table and two dimensions; you can see the code here.
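For reference, a model of that shape looks roughly like this. This is a sketch from memory, not the actual code: the connection prefix, table, and column names are all made up:

source: products is table('duckdb:products.parquet') { primary_key: product_id }
source: stores is table('duckdb:stores.parquet') { primary_key: store_id }

source: sales is table('duckdb:sales.parquet') {
  join_one: products with product_id
  join_one: stores with store_id
  measure:
    quantity is sum(qty)
    order_count is count()
}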

When you run a query, it shows the results in tabular format and you can see the generated SQL; you can even define some basic visuals like a bar chart, etc.

BI Engine not supported

Malloy has excellent support for BigQuery (for obvious reasons) and supports PostgreSQL too, but the generated SQL is non-trivial. BigQuery's default engine renders the SQL extremely fast, no problem there, but BI Engine struggles: basically, any non-trivial calculations (cross joins, correlated variables, etc.) are not supported.

Measure Behavior

Let's create three trivial measures:

measure:
  red is Quantity { where: color = 'red' }
  black is Quantity { where: color = 'BLACK' }  -- note the upper case: the match is case sensitive
  red_or_black is Quantity { where: color = 'red' or color = 'black' }

Then I run a simple group by:
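Something like this (the source name sales is made up):

query: sales -> {
  group_by: color
  aggregate: red, black, red_or_black
}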

As a DAX user, the results are rather unexpected:

  • Measure black: returns 0; the comparison is case sensitive. Maybe Malloy should add a setting for DuckDB to ignore case (or it is by design, I don't know).
  • Measure red: returns 2 only for the red row and 0 elsewhere; in DAX, by default, it would return 2 everywhere.
  • red_or_black: I was expecting to see the sum of both red and black, which is 4, repeated in all rows.

Here is the same using DAX (you can change that behavior by using KEEPFILTERS, but I am interested only in the default behavior).

I don't know enough about the language yet, but it would be useful to have an option in the measure to ignore the group by (it seems it is coming).

Why You Should Care

For some historical reason, all modeling languages were proprietary and based only on vendor implementations; as far as I know, this is the first fully open-source implementation. I am sure Google has a long-term vision for Malloy and it will show up in more services; I would not be surprised if BigQuery somehow integrated Malloy as a free semantic layer. After all, it works well in a consumption model. I have not used it enough to have any good intuition, but I like the direction of the product. Good on Google for making it open source, and on DuckDB for being such an awesome SQL engine.

Advanced Geospatial Analysis Using a Location Parameter with Streamlit

This blog is a POC of something that I always wanted to have in a BI tool. I tried Tableau, Power BI, and Data Studio without success (I am not interested in adding an invisible grid as a hack). The idea is extremely simple yet very powerful: retrieve data when you click on a map. You may think it should be simple, but while BI tools are good at retrieving data based on filters, it is very hard to push a parameter from a map back to the source data.

Traditionally, if you want this kind of interactivity, you need to write code. To be honest, the idea of writing JavaScript and learning how to deploy a web server was not very appealing to me, but luckily we have a new option in Streamlit.

Streamlit is a code-first web app platform using only Python; web pages are generated behind the scenes, there are a lot of components that require a minimum of code, and deployment is absolutely trivial using Streamlit Cloud. Because it is open source, you can also deploy using alternative approaches like Cloud Run or Azure.

I came across the Streamlit-Folium component recently, and it is magnificent work: when you click on a map, it provides variables back with the last location clicked, zoom, bounds, etc., all for free, with almost no code required!
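The basic pattern is only a few lines of Python; a minimal sketch (the coordinates and width are arbitrary):

import folium
import streamlit as st
from streamlit_folium import st_folium

# build a folium map centred on the initial location
m = folium.Map(location=[-27.467276, 153.024198], zoom_start=16)

# st_folium renders the map and returns the interaction state as a dict
map_data = st_folium(m, width=700)

# the component reports the last point clicked, plus zoom, bounds, etc.
if map_data.get('last_clicked'):
    st.write(map_data['last_clicked']['lat'], map_data['last_clicked']['lng'])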

All I have done is copy the code from the component's sample and build a SQL query that takes the last clicked point and filters all the “cafes” within a radius of 500 m; the SQL code is copied from this previous blog.

The source data is nearly half a million rows; as you can imagine, plotting a massive dataset just to see a small portion of it is a waste of computing resources.

Here is the final result.

Here is an example of a generated SQL query.

State management

I added the code here; again, it was very easy to write, as I nearly copied everything from the component's sample code. The tricky part was how to update the value of a variable that was already declared; Streamlit has a brilliant solution using state management, and it is very simple.

Assign a default value when the Streamlit app runs for the first time:

import streamlit as st

# initialise the session state on the first run of the session
if 'key' not in st.session_state:
    st.session_state.key = '( 153.024198,-27.467276)'  # last clicked point
    st.session_state.key1 = [-27.467276, 153.024198]   # initial map centre
    st.session_state.key2 = 16                         # initial zoom level
point_clicked = st.session_state.key
location_ini  = st.session_state.key1
zoom_Start    = st.session_state.key2

Update the values when a user clicks on the map; the next run in the same session will use those new values:

# overwrite the saved state; the next rerun picks up these values
st.session_state.key = point_clicked
st.session_state.key1 = location_ini
st.session_state.key2 = map_data['zoom']

Currently, I don't know how to stop Streamlit from redrawing the map, as I am only interested in updating the markers.

Database

It works with any database as long as it has minimum support for GIS functions. Currently I use BigQuery BI Engine, as I am familiar with it and, to speak freely :), it is very cheap for this kind of workload: small data and potentially a lot of concurrency 🙂

I tried Power BI Datamart, but it seems Python access is blocked, and DuckDB doesn't support GIS functions yet. I am sure you can reproduce the results using only SQL, but I did not bother; the core of the filter is just one predicate:

ST_DWithin(ST_GeogPoint(lng,lat),params.center,params.maxdist_m)
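For what it is worth, the parameterized version could look like this from Python against BigQuery (a sketch; the project, dataset, and column names are made up):

from google.cloud import bigquery

client = bigquery.Client()

# find points of interest within @maxdist_m metres of the clicked point
sql = '''
SELECT name, lat, lng
FROM `my-project.poi.cafe`
WHERE ST_DWithin(ST_GeogPoint(lng, lat), ST_GeogPoint(@lng, @lat), @maxdist_m)
'''
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter('lng', 'FLOAT64', 153.024198),
        bigquery.ScalarQueryParameter('lat', 'FLOAT64', -27.467276),
        bigquery.ScalarQueryParameter('maxdist_m', 'FLOAT64', 500),
    ]
)
df = client.query(sql, job_config=job_config).to_dataframe()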

Take Away

I think there is a third way between no code and code only; Streamlit managed to create a new category, maybe “simple code” 🙂 Having said that, BI vendors should up their game: a location parameter should not be that hard to implement.

Delta Lake with Python, Local Storage, and DuckDB

TL;DR: I added a Streamlit app here.

Experimental support for writing the Delta storage format using only Python was added recently, and I thought it was a nice opportunity to play with it.

Apache Spark has had native support since day one, but personally, the volume of data I deal with does not justify running Spark, hence the excitement when I learned we can finally just use Python.

Instead of another hot take on how Delta works, I just built a Python notebook that downloads files from a website (the Australian Energy Market), creates a Delta table, and then uses DuckDB and Vega-Lite to show a chart. All you need to do is define the location of the Delta table. I thought it might be a useful example; all the code is located here.
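The core of it fits in a few lines. A minimal sketch, assuming the deltalake and duckdb packages are installed (paths and column names are made up, not the actual notebook code):

import duckdb
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# write a DataFrame to a local Delta table using the new Python writer
df = pd.DataFrame({'settlement_date': ['2022-07-01'], 'demand_mw': [7500.0]})
write_deltalake('./energy_delta', df, mode='append')

# read it back as a pyarrow dataset...
arrow_ds = DeltaTable('./energy_delta').to_pyarrow_dataset()

# ...which DuckDB can scan directly through its Arrow integration
print(duckdb.query(
    "SELECT settlement_date, sum(demand_mw) AS mw FROM arrow_ds GROUP BY 1"
).to_df())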

And I added a Power BI report using the Delta connector.

Some Observations

Currently, DuckDB doesn't support Delta natively; instead, we first read the Delta table using pyarrow, which DuckDB can read automatically. At this stage, I am not sure if DuckDB can push down filter selection or read the stats saved in the log file, and currently it seems only AWS S3 is supported for remote storage.
