PowerBI: Generate Multiple Print-Quality Maps Using R

In this blog, I will present a workflow I have been using for the last two years, with rather good feedback.

Obviously, I like interactive dashboards. I want everyone to log in to the PowerBI service and start doing their own analysis, but to my dismay, not everyone is interested in doing that; a lot of users want only a report that they can print. It took me a while to understand that there is nothing wrong with that, and in a lot of use cases a printed report is the best medium to convey information.

In my case, we do a lot of maps, and users want print-quality maps, and because the data changes daily, you need automation.

In a previous blog, I wrote about how to integrate PowerQuery with R; in the current blog, I will show how to generate multiple PDFs with a customized map, using the R custom visual.

The PowerBI team has done a fantastic job: all you have to do is add the R script visual, add the fields you need (which automatically creates a dataframe), and write your code, and with one click you can edit your code in RStudio!

The RStudio integration works by creating a temporary CSV file that holds the dataframe data.

I personally prefer RStudio, but you can use any IDE.

There are two caveats, though:

  1. The dataframe passed to the R visual has a maximum of 150K rows.
  2. When you work in the desktop, it uses your local R installation, so all packages are supported; but when you publish to the service, only the packages in this list are supported (ceramic is not supported; I think packages that download external data are not supported). I found a workaround.

Let’s generate some maps.

I am using the excellent tmap package for the mapping; you can customize any aspect of the map: layout, text size, legend, titles. It is really an amazing package and shows the power of R. For the tiles I am using ceramic.

You need a Mapbox token (their free tier is very generous). I will use South Australia car crash data as an example, filtered to:

  • Fatalities >1

No code is needed; just add a filter in the visual.

The code

library(sf)
library(raster)
library(dplyr)
library(tmap)
library(tmaptools)
library(ceramic)

# 'dataset' is the dataframe created by the R script visual
dataset = rename(dataset, y = lat, x = lng, status = "Crash Type", labels = "Total Fats")
dataset$color  <- as.character(dataset$color)
dataset$labels <- as.character(dataset$labels)

map <- st_as_sf(dataset, coords = c("x", "y"), crs = 4326)

Sys.setenv(MAPBOX_API_KEY = "get your own key")
background <- cc_location(map)            # download the basemap tiles for the data extent

dataset[dataset == ""] <- NA
new_DF <- filter(dataset, !is.na(labels)) # keep only the points that have a label
map1 <- st_as_sf(new_DF, coords = c("x", "y"), crs = 4326)

chartlegend <- unique(dataset[c("status", "color")])

m2 <- tm_shape(background) +
  tm_rgb() +
  tm_shape(map) +
  tm_symbols(col = "color", size = 0.04, shape = 19) +
  tm_shape(map1) +
  tm_text(text = "labels", col = "white") +
  tm_add_legend(type = "fill", labels = chartlegend$status, col = chartlegend$color)

tmap_save(m2, "C:/Users/mimoune.djouallah/pdf/happyValey.pdf", width = 3508, height = 4961)
m2

  • Copy the same custom visual and just change the filters.

Here we go.

The best part: the PDF files.

Now you can share those files by email or save them in a shared folder. The map shows only dots, but you can load polygons if you need to; see this blog for further details.

You can download the pbix here; you need R to be installed, and your own Mapbox token.

PowerBI Incremental refresh using Python or R

In this blog, I will show how to leverage Python (or R) to implement an incremental refresh in PowerBI using PowerQuery and Python. Nothing here is really new (I am sure Imke and Maxim have blogged about it before).

In a previous blog, I showed how to use the R & Python integration to load data into a database.

This approach makes sense only when you do a lot of heavy transformation and your data source changes based on time.

As an example, in my previous job, we received a new Excel file every Monday (300K rows), and this file got approved and corrected every Thursday.

The workflow was:

Save the files in a folder and do the transformation. That was fine, but after the first year there were around 52 files, and although technically you only need to do the transformation for the last file, PowerBI did not support incremental refresh, so twice a week we redid everything. After two years, the refresh took nearly 30 minutes and we sometimes got out-of-memory errors.

In the big picture, half an hour was not that bad (we had a desktop just for refreshes); the worst part was that you would refresh the model and, once you finished, you would get a new revision and have to refresh again.

Now, using a Python/R script, the idea is that every file gets transformed only once, regardless of how many times you refresh, simply by exporting the result of each file's transformation as a CSV into a staging folder.

  • The first run is slow, as it will process all the existing files in Source Data, but subsequent runs will transform only new files (a rough sketch of this skip logic is shown after this list).
  • Let’s say File 2 was revised; all you need to do is delete File2.csv and it will be transformed again, but only that file.
  • Ok, if you look at step 4, the files are reloaded each time. I am not too worried about that, as batch loading CSV files from a folder using PowerQuery is relatively fast (yes, a bit slow compared to R); the bottleneck is rather the transformation.
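
To make the "only new files" idea concrete, here is a minimal Python sketch. It assumes a 'filename' column in the dataframe and the C:/results staging folder used in the script below; the downloadable PBIX may implement the skip differently (for example in PowerQuery), so treat this as an illustration rather than the exact implementation.

import os

# 'dataset' holds the input data for this script
staging = "C:/results/"
already_done = set(os.listdir(staging))            # CSVs written by earlier refreshes
for filename, filename_df in dataset.groupby("filename"):
    out_name = filename.replace("zip", "csv")
    if out_name in already_done:                   # already transformed in a previous run, skip it
        continue
    filename_df.to_csv(staging + out_name, index=False)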

The code for the Python script is below. As you can see, the PowerQuery integration is amazing: just add a new step and you get a dataframe, that’s all.

# 'dataset' holds the input data for this script
df_by_filename = dataset.groupby("filename")

for (filename, filename_df) in df_by_filename:
    filename = filename.replace("zip", "csv")
    filename = filename.replace("PUBLIC_DAILY", "UNIT_PUBLIC_DAILY")
    filename_df.to_csv("C:/results/" + filename, index=False)

The script splits the dataframe by the column filename and then exports each file separately. Currently it is saving into a local folder, but you can easily save those files into cloud storage, as sketched below.
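
For example, here is a minimal sketch of pushing the same CSVs to a Google Cloud Storage bucket; the bucket name and paths are hypothetical, and you would need the google-cloud-storage package installed and credentials configured.

import os
from google.cloud import storage

client = storage.Client()                      # relies on GOOGLE_APPLICATION_CREDENTIALS being set
bucket = client.bucket("my-staging-bucket")    # hypothetical bucket name
for out_name in os.listdir("C:/results/"):
    # upload each exported CSV next to (or instead of) the local staging copy
    bucket.blob("staging/" + out_name).upload_from_filename("C:/results/" + out_name)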

To test it, I built a quick workflow using public data (PBIX here). The source data is zip files on a public website, with a new zip file daily, and the transformation is relatively complex, as you need to unzip the file, split it, delete some columns, etc. The first run is slow, as it processes all the files (62 files), but the next run will process just 1 file. You can simulate that by deleting some CSV files in the staging folder; when you refresh again, only the deleted files will be processed.

I think the main takeaway is that the Python and R integrations are amazing tools for implementing possibilities that are not necessarily available in PowerBI, and you don’t need to be a programmer to use them; a serious search on Stack Overflow will get you started quickly.

How to Keep Your Primavera P6 Clean?

This article is addressed to all the schedulers and project professionals who import schedules into scrubbing P6 databases, remove undesired data, export the cleaned XER, and then import it into a production database or share it with third parties such as contractors or sub-contractors.

If you go through such a process, you might want to read this article and see a better way to “clean” an XER file, prevent external data from corrupting your database, and thus maintain security and keep schedule integrity.

You can achieve this with a simple tool called ScheduleCleaner.

Now, I want to explain how the tool works and how you can benefit from it.

How to get started with ScheduleCleaner?

ScheduleCleaner is a desktop application for the Windows operating system. It is not connected to a database and does not require an internet connection.

The “cleaning” process of an XER file can be achieved in 5 steps as explained below.

  1. Launch the software;
  2. Add an XER File;
  3. Select the output folder;
  4. Click on the categories of data you want to remove;
  5. Click “Clean” button.

As you can see, there is no manual work, no editing of an XER file in Notepad, and no scrubbing databases.

Steps to "clean" XER file with ScheduleCleaner

The software is intuitive, easy-to-use, and works offline as a standalone desktop application.

What’s more important, the software does not modify the original project plan. Instead, it creates a copy, and the modifications are saved in the new file. The original project plan remains untouched.

Now, let’s see in more detail what you can accomplish with this tool.

Removing POBS

If it takes a lot of time to import an XER file into a Primavera P6 database, POBS data might be the reason.

Overall, the POBS defect affects the performance of the application, and users lose valuable time during the import operation. According to Oracle, the POBS data is not used yet:

“We do not utilize the POBS table yet we export/import the data from this table when completing XER Export/Import. The XER export/import should be written to exclude this data with XER export/import operations of P6 Professional.”

The removal of POBS data can be done manually, but the process is prone to errors and can be time-consuming.
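
To give an idea of what the manual route involves, here is a rough Python sketch that strips the POBS table from an XER file. It assumes the standard tab-delimited XER layout, where %T marks a table name, %F its field list and %R its data rows; the file names and encoding are illustrative, and maintaining this kind of hand-rolled script is exactly the error-prone work a dedicated tool avoids.

def strip_pobs(src_path, dst_path):
    skipping = False
    with open(src_path, encoding="cp1252") as src, open(dst_path, "w", encoding="cp1252") as dst:
        for line in src:
            if line.startswith("%T"):
                # start skipping at the POBS table, stop at the next table
                skipping = line.split("\t")[1].strip().upper() == "POBS"
            elif line.startswith("%E"):
                skipping = False
            if not skipping:
                dst.write(line)

strip_pobs("project.xer", "project_no_pobs.xer")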

The impact of all these errors when managing global data in an enterprise will ultimately result in a polluted database and inadvertent mistakes at the project level.

So using a tool for removing POBS data is desirable.

You can see a significant difference in file size before and after cleaning POBS, which greatly affects the time needed to import the XER file into a Primavera P6 database.

Imagine the time that can be saved for larger XER files.

Remove Units, Rates, Cost, Pricing, Progress

As the purpose of exporting data files in XER format is to transmit project data to another database, in many cases data should be kept private. For example, a general contractor may want to send the project to a sub-contractor, but without the cost of resources.

Another example is related to the GDPR regulation: project schedulers and managers share files that contain sensitive information, such as resource names, that can breach the GDPR guidelines.

To be GDPR compliant, companies need to hide/anonymize confidential information, and ScheduleCleaner is the perfect tool to easily and securely protect sensitive information.

Just by clicking checkboxes, users who want to share the XER schedule can pick the categories of data they want removed from the schedule before sending it to third parties or uploading it to a Primavera P6 database.

Before “cleaning” prices
After “cleaning” prices

Mask Project Data

Similar to removing certain categories of data, you can also mask project data.

The only difference is that with masking, you can add custom codes, labels or text for the specific categories.

Add Prefix/Suffix

Inserting a prefix or suffix into different categories in the project plan can give additional information to the person who reads the schedule and acts according to it.

To add a Prefix/Suffix, you need to select the template that will contain the Prefix/Suffix, select the appropriate category, and add the terms that will be the words’ prefix or suffix.

Then, you go to the “Clean” ribbon and click on the “Batch” button. The end result of adding a prefix/suffix is shown in the image below.

Converting Data

The software features an option to convert Global and EPS activity codes to Project Activity codes and EPS to Global Activity Codes. The activity codes are important to schedulers and planning engineers when creating different types of work performance reports.

So here are the types of conversions that can be done with ScheduleCleaner:

  • Convert Global/EPS to Project Activity Codes.
  • Convert EPS to Global Activity Codes

Moreover, you can convert global calendars that are used in the project plan into project and shared resource calendars. In this way, you will avoid errors when importing the XER file into a P6 database.

Save time with process automation

Who doesn’t want automation? Automation saves time and gives a sense of comfort and security.

Here, it’s not full automation, because you still need to click a button to perform an action or combination of actions. But it is quite useful when you have a set of actions that need to be done on a daily basis, such as sending a daily progress report to top management or uploading recent progress into a database.

Automation in ScheduleCleaner is achieved by creating templates, saving them, and applying them to imported XER files.

Batch Clean

“Batch Clean” is a feature that works with templates. Users must create at least one template and assign it to a file in order to use batch file cleaning.

“Quick Clean”, on the other hand, is more suitable when a user wants to modify a very small number of project files, while “Batch Clean” is useful when a large number of data files, usually located in different folders, need to be modified.

Final Words

ScheduleCleaner enables you to quickly remove or anonymize confidential data in XER data files exported from Primavera P6, while keeping the schedule integrity.

It replaces much of the manual work involved in “cleaning” an XER file prior to sharing it or importing it into a production database.

As the manual process of removing or anonymizing project data is time-consuming and unreliable, performing Batch Clean in combination with Templates can speed up the process.

Organizations can significantly improve their productivity, communication and security by integrating ScheduleCleaner in their working environment.

How to Export data from PowerQuery to BigQuery

Today I was playing with a report in PowerBI and got the idea of exporting data to BigQuery from PowerQuery. Let me tell you something: it is very easy and it works rather well. PowerQuery is an amazing technology (and it is free).

In PowerBI, you can export from R or Python visuals, but there is a limitation of 150K rows; if you use PowerQuery instead, there is no limitation (I tried with a table of 23 million records and it works).

Here is the code using Python, but you could use R instead.

import pandas as pd
import os
from google.cloud import bigquery
dataset['SETTLEMENTDATE']=pd.to_datetime(dataset['SETTLEMENTDATE'])
dataset['INITIALMW']=pd.to_numeric(dataset['INITIALMW'])
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:/BigQuery/test-990c2f64d86d.json"
client = bigquery.Client()
dataset_ref = client.dataset('work')
table_ref = dataset_ref.table('test')
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.schema = [
bigquery.SchemaField("SETTLEMENTDATE", "TIMESTAMP"),
bigquery.SchemaField("DUID", "STRING"),
bigquery.SchemaField("INITIALMW", "FLOAT"),
bigquery.SchemaField("UNIT", "STRING")]
job = client.load_table_from_dataframe(dataset, table_ref, job_config=job_config)
job.result() # Waits for table load to complete.

Interestingly, after the Python step we get a table back; simply expand it.

Here is the total row count of the table in PowerBI:

And the results in BigQuery:

OK, a PowerQuery query can execute multiple times during a refresh; that is a piece of black-magic knowledge only a handful of people know about. But in this case it does not matter: the BigQuery job truncates the table every time, so there is no risk of data duplication.
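
That behaviour comes from the write disposition set in the load job config above. As a side note, if you ever switched it to append (for example to accumulate daily extracts), the repeated executions would start to matter and you would need your own de-duplication logic:

# WRITE_TRUNCATE replaces the table contents on every load, so re-running the PowerQuery step is harmless
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
# WRITE_APPEND would insert the same rows again on every execution
# job_config.write_disposition = bigquery.WriteDisposition.WRITE_APPEND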

You may ask why do this when there are a lot of data preparation tools that natively support BigQuery. Based on my own experience, most of my data sources are Excel files, and PowerQuery is just very powerful and versatile, especially if you deal with “dirty” formats.

The second question is probably: what’s the added value? Why not just load the data directly into PowerBI? The answer is very easy: data ubiquity.

I want everyone to be able to access the data, regardless of the front end tools.

Construction Progress Report – PowerBI – by Darrin Kinney

A quick and easy construction progress and schedule dashboard.

I have previously outlined an approach that can be used for Engineering Progress.

This post is an extension of that one: instead of looking at engineering model development, it looks at construction development. I don’t want to delve too much into the details of exactly how this was built (again, see the post above).

One big difference is that I have used a resource assignment view in addition to the date metrics. This allows resource histograms and progress curves to be quickly sorted down to an activity level. This approach also follows a prior post, Resource Analysis Dashboard.

Construction02

The data

Construction01

The underlying data is very similar to our engineering progress example. We can use a flat file export direct from P6 with a standard set of columns. As I have mentioned before, you can achieve this with a SQL query as part of a larger data model, although, as with everything, a delicate balance is needed (balancing database formalism against an easy Excel solution).

We will also have the resource assignment data

Construction06data.JPG

The WBS Slicer and Area Selection

Construction03_wbs

This design element doesn’t work for projects with too many WBS elements. For this example, each major area has only about 10 WBS elements, so I could pull this off with no drama. I really prefer this selection style over drop-downs, where it is often difficult to quickly make a selection.

The Pie and Metrics

Construction04pies

Here we follow much of the look and feel I used with the engineering progress; however, instead of just using activity count metrics, I have also inserted hours and percent-complete metrics. There is nothing fancy about these.

The Data Table

Construction05table.JPG

I’ll sound like a broken record again: when you have a good design for one aspect of a project, you can likely take that and run with it for many other areas. In a following post, I will detail how this systems engineering aspect applies to nearly everything we touch.

Obviously, the key inclusion in the table is the budget units and percentages. I still prefer these table views over the Gantt views. Having clear visibility into the last month's dates, the prior month's dates, and the variances is the purpose of this view.

The Future

Again, the extensions of this are endless. At this stage, we are starting to see how pre-filtered views provide a more focused dashboard compared to a one-size-fits-all view. Sitting in an EPCM world, most of the detailed activities and schedules are managed by our contractors. Thus, this construction view is better suited to using an export from a contractor's Level 4 schedule.

At some point, we will need to begin discussing an overarching design where a user can navigate to our various dashboards in a logical way.

Happy data wrangling!

Custom SQL in Google Datastudio

In the last 12 months, Google Datastudio has added many interesting new features, especially the integration with the BigQuery BI Engine and custom SQL queries.

Obviously, I am a huge PowerBI fan, and I think it is the best thing that has happened to analytics since Excel, but if you want to share a secure report without requiring a license for every user, Datastudio is becoming a valid option.

I have already blogged about building a near-real-time dashboard using BigQuery and Datastudio, but in this quick blog, I will try to show that using SQL one can create reports with more complex business logic.

I am using a typical dataset we have in our industry: a lot of fact tables with different granularity. The fact tables don’t all update at the same time; planned values change only when there is a program revision, while actuals change every day.

Instead of writing the steps here, please view the report, which includes the how-to and the results.

The approach is pretty simple. All modern BI software works more or less the same way (at least PowerBI and Qlik; Tableau is coming soon): you load data into different tables, you model the data by creating relationships between the tables, and then you create measures. When you click on a filter, for example, or when you add a dimension to a chart, the software generates a SQL query to the data source based on the relationships defined in the data model. It is really amazing; even without knowing any SQL you can do very complicated analysis.

DataStudio is no different from other tools. Its data modeling is called Blending; it links all the tables together using left joins, which is a big limitation, because if some values exist in one table and not in the others, you will miss data.

The idea is to bypass the modeling layer and write some SQL code, and to make it dynamic by using parameters. It is not an ideal solution for an average business user (we don’t particularly like code), but it is a workaround until DataStudio improves its offering.

Engineering Progress Report – PowerBI – by Darrin Kinney

In this article, I will run through all the steps required to produce an elegant Engineering Progress Report.

Eng12

The intent is not to delve into the manner in which the progress or schedule are updated; I have assumed you have a schedule and a progress status for each key area. It is quite amazing how easy it is to generate this dashboard, and also how many extensions are available to use this not just for engineering, but for fabrication, material deliveries, major milestones, contractor key activities, etc.

I will outline the format for our 2 key datasets and then follow with the creation of 2 dashboards: an Overall Status Gauge, and the full-detail EPR Dashboard.

P6 Schedule Data

Below is the data set we want to use. This data set has been specifically tailored to our resulting visual. Thus, instead of linking directly to an XER, importing into a data model, and performing perhaps too much data work, a nice trick is to define specific VIEWS inside P6, so that you can easily copy-paste directly into Excel and then import directly into your dashboard. The table below can therefore be quickly regenerated each schedule update cycle.

Eng_9

A very nice aspect of this data set is the field “TYPE”. It is good practice to tag activities of a specific type (this ties into my belief in using a framework approach to project controls). Thus, in theory, you can export the entire schedule and drive many different dashboards by just filtering on different TYPE values. In this example I have used:

  • M090 = 90% Model Review
  • M100 = 100% AFC

Also, consider tagging every concrete pour activity in your schedule with C010. You can then use that code to drive a similar dashboard for concrete pours. Or use F100 for module fabrication, where we tag the completion activity of each module for use in a dashboard. Ultimately you create a catalog of TYPE codes and can go dashboard crazy with how easy this turns out to be.

This data does not have all the fields we will need in our dashboard. Specifically, we will want to create several measures that will allow for a few metrics. We will need to know if an activity is “FINISHED”, “NOT FINISHED”, or “Critically LATE”. Because these fields are dependent on your target audience, it's best to leave their generation to code (because everyone can code, right!). If you wanted to display metrics on “Started”, then your source data would need to include the start date and perhaps the activity status field from P6. Again, it's important to understand the relationship between your visual and your data. In this example, I am treating these activities effectively as milestones, in which case the concept of “started” doesn’t apply. Key conceptual discussions such as this are vital.

Progress Data

The progress data in this example is only overall progress. The intent is just to show an overview for the entire project and quick metrics for model reviews. Ultimately, you would want a “WBS-specific” dashboard that would display more information over the entire lifecycle of that WBS. In that view, you could present the engineering curve and perhaps EVMS metrics.

Strategy – Do not do everything in one place – keep focus

Too often, I see users pushing design features into dashboards for what appears to be whimsical value. Dashboards are not meant to answer 100 questions. It's easier to have 100 dashboards each displaying a key metric than 1 dashboard displaying 100 metrics. Keep your approach CLEAN and FOCUSED.

Ideally, our progress data will include fields such as Area and WBS. In this example I have pulled data with just 1 data date and only 1 data series (Engineering_Overall). Your back-end progress data will likely have data from multiple cut-off dates and for multiple series.

Our progress data will look like this. The full data set also contains a series for “Construction_Overall”. This will be used on our summary page to outline the power of using this approach to progress data.

Eng_02

Linking our Data into PowerBI

In this example, both data files are simply Excel-based files with the data converted to a table. This allows for the easiest importing (and also allows for a quick refresh of the data). Housing the data in Excel files can also facilitate a movement to a more digital way of thinking (more on that in another article).

VISUALIZATION 1 – SUMMARY GAUGE

I am a firm believer in overall project flash reports. So, when we think about dashboards, our starting point should be our overall project status. Thus, the elements presented here are only a key subset of the metrics and visuals I would expect on a Project Status Report dashboard.

In this example, looking at engineering progress, we want to see our actual % complete and how it compares against our planned % complete.

A gauge is a good way to provide a quick visual (bullet charts are another, and really, the sky's the limit).

Eng_Gauge

To generate this we need to create 2 measures: Actual % and Planned %. This is where you really need to understand how dashboards and databases work. If you feed a computer a data source, it has no innate way of knowing something as simple as “What is the current %?”. Therefore, we need to write some code.

Because of the format of our progress data, we can search for the maximum data date, then find the value of our actual % field on that date. We can follow an identical approach for the Planned %. Depending on your data, you would need to custom-build these measures.

Code to generate our measure for Current %

M_Progress_Actual =
CALCULATE (
    SUM ( data1[Actual] ),
    FILTER (
        data1,
        data1[DataDate] = MAX ( data1[DataDate] )
            && data1[Date] = MAX ( data1[DataDate] )
    )
)

Code to generate our measure for Planned % (similarly we could also pull in our Plan late)

M_Progress_Plan =
CALCULATE (
    SUM ( data1[BL_Early] ),
    FILTER (
        data1,
        data1[DataDate] = MAX ( data1[DataDate] )
            && data1[Date] = MAX ( data1[DataDate] )
    )
)

The required fields for the gauge are obviously these 2 measures.

  • Value = M_Progress_Actual
  • Target Value = M_Progress_Plan

We will also need to add a filter where Series = “Engineering_Overall” (note that this gauge can now be easily reproduced to showcase planned vs. actual for every series inside our data source). Obviously, in the image above you can see I created 2 gauges, each with a filter for a specific data series. Ultimately, if your back-end data has multiple progress series sliced and diced different ways, all you have to do is adjust the filter and you can display an endless series of gauges. Or you can get fancy with smart slicers too.

VISUALIZATION 2 – Engineering Progress Report

This is perhaps the easiest-to-read, most interactive, and most intuitive view into engineering I have ever seen. We can immediately filter to see which areas are complete and which are critical, scroll to see upcoming deliverables, and see an overall graph.

Eng12

It might seem we have a lot going on here, but again, this is all driven off 2 quite simple data sources, and for this page, almost everything comes from the single schedule-driven table.

The Data Table

The table is just pulling from our schedule data (although I have inserted a page-level filter to only include activities with TYPE = M100 or M090). Our fields are as follows:

Eng_4

In the above image, you can see I have had to insert a few measures; I don’t want to go into them all. I’ve inserted some conditional formatting into the Actual/Forecast date column. To achieve this, I created a measure, Activity_Status_Num:

Activity_Status_Num = IF(ISBLANK(Schedule[Float]),1,IF(Schedule[Float]<1,2, 0))

Then, with these values, I can select formatting specific to that column in the table. This is a very nice feature of the tables in PowerBI that can add a nice level of polish.

The Donut Charts

Eng_6

A nice feature of the donut chart is the count metric in the middle. It is generated from a nice little bit of code, as seen below. We have 2 donut charts: one for our 90% activities and another for the 100%. Thus, all we need to do is place a visual-level filter on each.

IsFinished = IF(ISBLANK(Schedule[Float]),1,0)

DonutCounts = SUM(Schedule[IsFinished]) & "/" & COUNT(Schedule[ActivityDesc])

In the above, there are 2 measures: “IsFinished” and “DonutCounts”. Again, if you want anything to display on a dashboard in a digital world, you are going to have to write this type of code.

The real power of the donut chart is that it allows very quick sorting; after all, we want to see the critically late activities, right! Just click the red section of the 90% or 100% chart.

Eng13

 

Progress Graph

Eng_7

We have a progress graph too. This is effectively a static, page-level graph: it is not linked to a specific progress series for each WBS, so it will not auto-update, and our data model does not link these tables. Still, the graph adds context to the overall page; deviations from the plan curve should be mirrored by a growing number of critically late packages.

Care needs to be taken whenever we look at schedule dates and progress graphs. We do not typically create progress graphs at an activity level (although you can certainly consider it; I would offer caution against going down that route).

EXTENSIONS

This example has shown the power simple data sets can have to improve visibility into our projects. It only showcased a few engineering-based activities; however, if you read between the lines, you will understand there is nothing “engineering specific” about what I have done. This approach is completely universal. Given this example also included a progress data set for construction, obviously the easiest extension will be to link in a few construction activities in the same way.