In the last 12 months, Google Data Studio has added many interesting new features, especially the BigQuery BI Engine integration and custom SQL queries.
Obviously, I am a huge PowerBI fan, and I think it is the best thing that has happened to analytics since Excel, but if you want to share a secure report without requiring a license for every user, Data Studio is becoming a valid option.
I have already blogged about building a near real-time dashboard using BigQuery and Data Studio, but in this quick post I will try to showcase how, using SQL, one can build reports with more complex business logic.
I am using a typical dataset from our industry: a lot of fact tables with different granularity, and the fact tables don't all update at the same time; planned values change only when there is a program revision, while actuals change every day.
Instead of writing the steps here, please view the report, which includes the how-to and the results.
The approach is pretty simple. All modern BI software works more or less the same way (at least PowerBI and Qlik; Tableau is coming soon): you load data into different tables, you model the data by creating relationships between the tables, then you create measures. When you click on a filter, for example, or add a dimension to a chart, the software generates a SQL query against the data source based on the relationships defined in the data model. It is really amazing: even without knowing any SQL you can do very complicated analysis.
Data Studio is no different to other tools; its data modeling is called Blending, and it links all the tables together using left joins. That is a big limitation: if some values exist in one table and not in the others, you will miss data.
The idea is to bypass the modeling layer and write some SQL, and to make it dynamic, use parameters. It is not an ideal solution for the average business user (we don't particularly like code), but it is a workaround until Data Studio improves its offering.
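As a minimal sketch (the project, dataset, and column names here are all hypothetical), a custom query can do the join itself – a FULL OUTER JOIN avoids the data loss of Blending's left join – and stay dynamic through Data Studio's built-in date-range parameters:

SELECT
  COALESCE(p.wbs, a.wbs) AS wbs,
  SUM(p.planned_value) AS planned,
  SUM(a.actual_value) AS actual
FROM `myproject.dataset.planned` p
FULL OUTER JOIN `myproject.dataset.actual` a
  ON p.wbs = a.wbs AND p.report_date = a.report_date
WHERE COALESCE(p.report_date, a.report_date)
      BETWEEN PARSE_DATE('%Y%m%d', @DS_START_DATE)
          AND PARSE_DATE('%Y%m%d', @DS_END_DATE)
GROUP BY wbs

The @DS_START_DATE and @DS_END_DATE parameters are supplied by Data Studio's date-range control as YYYYMMDD strings, so the report date picker drives the SQL directly.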
In this article, I will run through all the steps required to produce an elegant Engineering Progress Report.
The intent is not to delve into the manner in which the progress or schedule are updated; I have assumed you have a schedule and a progress status for each key area. It is quite amazing how easy it is to generate this dashboard, and how readily it extends beyond engineering to fabrication, material deliveries, major milestones, contractor key activities, etc.
I will outline the format for our 2 key datasets and then follow with the creation of 2 dashboards: an Overall Status Gauge, and the full-detail EPR dashboard.
P6 Schedule Data
Below is the data set we want to use. It has been specifically tailored to our resulting visual. So, instead of linking directly to an XER, importing into a data model, and performing perhaps too much data work, a nice trick is to define specific VIEWS inside P6 so that you can easily copy-paste directly into Excel and then import straight into your dashboard. The below can thus be quickly regenerated each schedule update cycle.
A very nice aspect of this data set is the field "TYPE". It is good practice to tag activities of a specific type (this ties into my belief in using a framework approach to project controls). In theory, you can export the entire schedule and drive many different dashboards by just filtering on different TYPE values. In this example I have used
M090 = 90% Model Review
M100 = 100% AFC
Consider also tagging every concrete pour activity in your schedule with C010; you can then use that code to drive a similar dashboard for concrete pours. Or use F100 for module fabrication, where we tag the completion activity for each module for use in a dashboard. Ultimately you create a catalog of TYPE codes and can go dashboard crazy with how easy this turns out to be.
This data does not have all the fields we will need in our dashboard. Specifically, we will want to create several measures that will allow for a few metrics. We will need to know if an activity is "FINISHED", "NOT FINISHED", or "Critically LATE". Because these fields depend on your target audience, it's best to leave their generation to code (because everyone can code, right?). If you wanted to display metrics on "Started", then your source data would need to include the start date and perhaps the activity status field from P6. Again, it's important to understand the relationship between your visual and your data. In this example, I am treating these activities as effectively milestones, in which case the concept of "started" doesn't apply. Key conceptual discussions such as this are vital.
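As a minimal sketch – assuming a schedule table called data2 where ActualFinish stays blank until an activity completes and DataDate holds the status date (all assumptions) – the status could be derived as a calculated column in DAX:

Activity_Status =
SWITCH (
    TRUE (),
    NOT ISBLANK ( data2[ActualFinish] ), "FINISHED",
    data2[Finish] < data2[DataDate], "Critically LATE",  // forecast finish already past the status date
    "NOT FINISHED"
)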
The progress data in this example is only overall progress. The intent is just to show an overview for the entire project and quick metrics for model reviews. Ultimately, you would want a "WBS-specific" dashboard that displays more information over the entire lifecycle of that WBS. In that view, you could present the engineering curve and perhaps EVMS metrics.
Strategy – Do not do everything in one place – keep focus
Too often, I see users pushing design features into dashboards for what appears to be just whimsical value. Dashboards are not meant to answer 100 questions. It's easier to have 100 dashboards each displaying a key metric than 1 dashboard displaying 100 metrics. Keep your approach CLEAN and FOCUSED.
Ideally, our progress data will include fields such as Area and WBS. In this example I have pulled data with just 1 data date and only 1 data series (Engineering_Overall). Your backend progress data will likely have data from multiple cut-off dates and for multiple series.
Our progress data will look like this. The full data set also contains a series for "Construction_Overall", which will be used on our summary page to outline the power of this approach to progress data.
Linking our Data into PowerBI
In this example, both data files are simply Excel-based files with the data converted to tables. This allows for the easiest importing (and also allows for quick refresh of data). Housing the data in Excel files can also facilitate a movement to a more digital way of thinking (more on that in another article).
VISUALIZATION 1 – SUMMARY GAUGE
I am a firm believer in overall project flash reports. So, when we think about dashboards, our starting point should be overall project status. The elements presented here are only a key subset of the metrics and visuals I would expect on a Project Status Report dashboard.
In this example, looking at engineering progress, we want to see our Actual % Complete and how it compares against our Planned % Complete.
A gauge is a good way to provide a quick visual (bullet charts are another, and really, the sky's the limit).
To generate this we need to create 2 measures: Actual % and Planned %. This is where you really need to understand how dashboards and databases work. If you feed a computer a data source, it has no innate way of knowing something as simple as "what is the current %?". Therefore, we need to write some code.
Because of the format of our progress data, we can search for the maximum data date, then find the value of our actual % field on that date. We can follow an identical approach for the Planned %. Depending on your data, you would need to custom build these measures.
Code to generate our measure for Current % (the original snippet was missing its closing parentheses; the filter also needs a FILTER wrapper, since DAX does not allow an aggregate such as MAX directly in a boolean filter):

M_Progress_Actual =
CALCULATE (
    SUM ( data1[Actual] ),
    FILTER ( data1, data1[DataDate] = MAX ( data1[DataDate] )
        && data1[Date] = MAX ( data1[DataDate] ) )
)
Code to generate our measure for Planned % (similarly, we could also pull in our Plan Late):

M_Progress_Plan =
CALCULATE (
    SUM ( data1[BL_Early] ),
    FILTER ( data1, data1[DataDate] = MAX ( data1[DataDate] )
        && data1[Date] = MAX ( data1[DataDate] ) )
)
The required fields for the gauge are obviously these 2 measures.
Value = M_Progress_Actual
Target Value = M_Progress_Plan
We will also need to provide a filter where Series = "Engineering_Overall" (note that this gauge can now be easily reproduced to showcase planned vs actual for every series inside our data source). In the image above you can see I created 2 gauges, each with a filter for a specific data series. Ultimately, if your back-end data has multiple data series for progress, sliced and diced different ways, all you have to do is adjust your filter and you can display an endless series of gauges. Or, you can get fancy with slicers too.
VISUALIZATION 2 – Engineering Progress Report
This is perhaps the easiest-to-read, most interactive and intuitive view into engineering I have ever seen. We can immediately filter to which areas are complete and which are critical, scroll to see upcoming deliverables, and see an overall graph.
It might seem we have a lot going on here but, again, this is all driven off 2 quite simple data sources, and for this page almost everything comes from one schedule-driven table.
The Data Table
The Table is just pulling from our schedule data (although I have inserted a page-level filter to only include activities with TYPE = M100 or M090). Our fields are as follows.
In the image above, you can see I have had to insert a few measures; I won't go into them all. I have added conditional formatting to the Actual/Forecast date column. To achieve this, I created a measure, Activity_Status_Num:
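The measure itself isn't reproduced here, but a minimal sketch – reusing the hypothetical Activity_Status column from earlier, with arbitrary numeric codes – could be:

Activity_Status_Num =
SWITCH (
    SELECTEDVALUE ( data2[Activity_Status] ),
    "FINISHED", 1,
    "Critically LATE", 3,
    2  // default: NOT FINISHED
)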
Then, with these numeric values, I can select formatting specific to just that column in the table. This is a very nice feature of tables in PowerBI that adds a nice level of polish.
The Donut Charts
A nice feature of the donut chart is the count metric in the middle. It is generated from a nice little bit of code as seen below. We have 2 Donut Charts. One for our 90% activity and another for the 100%. Thus, all we need to do is place a visual level filter on each.
In the above, there are 2 measures: "IsFinished" and "DonutCount". Again, if you want anything to display on a dashboard in a digital world, you are going to have to write this type of code.
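The original code isn't reproduced here, but a minimal sketch under the same assumptions as before might be:

// calculated column flagging completion
IsFinished =
IF ( ISBLANK ( data2[ActualFinish] ), "NOT FINISHED", "FINISHED" )

// measure for the count shown in the middle of the donut
DonutCount =
CALCULATE ( COUNTROWS ( data2 ), data2[IsFinished] = "FINISHED" )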
The real power of the donut chart is to allow for very quick sorting – after all, we want to see the critically late activities, right? Just click the red section on the 90% or 100% donut.
We have a progress graph too. This is effectively a dumb page-level graph: it is not linked to a specific progress series for each WBS, so it will not auto-update, and our data model does not link these tables. Still, the graph should add context to the overall page. Deviations from the plan curves should be mirrored by a growing number of critically late packages.
Care is needed whenever we look at schedule dates and progress graphs together. We do not typically create progress graphs at an activity level (although you can certainly consider it; I would offer caution against going down that route).
This example has shown the power simple data sets can have to improve visibility into our projects. It only showcased a few engineering-based activities. However, if you read between the lines, you will see there is nothing "engineering specific" about what I have done; the approach is completely universal. Given this example also included a progress data set for Construction, the easiest extension would be to link in a few construction activities in the same way.
Can you imagine using Twitter in the world of Construction – I can! Read on..
Recently it has hit me just how poor our project management world is. The ability to clearly communicate and understand what is occurring on a project appears completely lost. Planners have absolutely amazing schedules (I do believe this), cost engineers have incredible detail about cost build-ups sliced and diced every which way, our document management systems are full of every manner of communication – BUT we are still left sitting in meetings where everyone is confused, with key information either not available, buried inside someone's own Excel file, bounced across 100 email threads, or a million other permutations.
Issue – Visibility into our Projects Sucks
Why has this happened? In my view, our leadership teams have failed at pushing good practices of project management. Leadership teams have transitioned from “manage the work” to “manage the people” (thank you Edin M for this quote).
Solution – Get Back to Basics
Project Management is task management (you can argue, but even all the new people-management issues are still tasks and can be managed in the same way).
We are building something – let's focus on the activities required to build (and engineer, and contract, and procure). All the activities that exist in our Primavera schedules! We need these activities front and center and OWNED by project management. Honestly, how can we even have a management discussion and NOT have the schedule front and center? The schedule says what we should be working on, the schedule says when we should finish something, the schedule says what comes next!
I believe that schedules have been forgotten in project management because they are too unwieldy, too abstract, and can only be run by P6 jockeys, not the project at large. We need to get our schedules into the hands of those that actually manage the scope.
In the past, our leadership was more attuned to schedules and this synergy was easier. However, I fear that in today's world our leadership has lost the tools for dealing with our schedules (no thanks to our reliance on antiquated tools like P6 that perpetuate the need for designated planning teams to operate the software in pure isolation from the real PM teams).
Issue – We need better Project Management tools.
Answer – They exist everywhere!
Commercially built, off-the-shelf project management software has risen to be one of the dominant fields of software development. The construction world needs to embrace these tools and get back to basics. It is odd in that, 20 years ago, the leader in project management was the construction world. However, when the technology world sprang up, they didn't have the knowledge we did, so they built their own tools and approaches. I now believe the tide has turned to the point that construction project management severely trails the rest of the business world.
So How Does This Work?
First, the corporate strategy teams need to decide on a platform (hint – USE JIRA). Really, though, the choice of software isn't that critical; it's the work processes you are going to change – the new online PM tools are structurally all the same. What we are pushing is simply "clarity in what we are doing". We are pushing the management oversight of what we do into the hands of those that actually manage the scope. Don't hide your schedule, don't hide weekly and daily reports inside your document control system – embed everything into what is effectively a social media platform.
In this example I am using OpenProject.org; however, keep in mind there are a lot of systems that all work similarly.
The starting point for me would be to add your P6 activities to your tool. This is the natural place to begin. Your schedule already has a structure and usually a very good balance in level of detail.
In the above, I have added a typical task that will exist in our P6 schedules. Right off the bat, we can see we are operating in a distributed, web-based environment. We have a nice detailed description for this activity, and we have the ability to assign the task to a person.
Up to this point, we are a little overlapped with P6. However, what is lacking in P6 is the ability to really discuss and communicate and UPDATE information associated with a task. The ability to pull the task into a proper project management discussion.
The above example says more than you can find in any weekly or monthly report. A picture tells a 1000 stories! The picture is also properly assigned to the activity it represents. The activity has a clearly visible Finish date that can be live-edited 24/7.
Our new tools are not meant to replace P6. They are meant to force our discussions into properly structured slices of the project. They are meant to clearly communicate the status of activities. They are meant to get everyone onto the same playing field when discussing something, so that 5 different people do not end up with 10 different dates.
An activity only has 1 start and finish date; an activity only has one percent complete. It is maddening when a project manager asks me to insert the contractual dates into a report. Honestly, when you are building something, the contract date is useless in helping you decide "when will this finish"; it was only the starting point. When you get people out of their office view and into "I need to manage this scope", you quickly understand that the contract date, or even the contractor's weekly reports, are useless. You have to make a determination of when an activity will finish based on what you know at that time – and be proactive in actively editing the dates when required.
Empower people to update activities!
This is So Simple?
I sit and look at this capability – something I built in 30 minutes on a Sunday morning – and really wonder why our project management is leading us down avenues that may bring no real improvement to projects. Does our project leadership have the vision to accept such simple solutions to improve our communication?
I have discussed this before, digital transformation is all about Keywords – not project management. Real digital transformation is about altering the way we work – not building a dashboard or a database. This is why digital transformation is not working. Provide tools and process to manage the work, enable your staff to manage their own scope, and clearly communicate and update their tasks.
Really think about how you manage your scope and how implementing a more social platform could break down the walls of communication. Understand how this is disruptive to our old ways of working (not updating schedules). Get people talking off just one play-sheet!
Twitter in Construction
I’d like to end this with what I thought was the most amazing application of this new management approach.
I kid you not. During construction, these guys posted nearly daily pictures and updates.
While working on this project from the home office, I got better updates from the project Twitter feed than I did from the project manager. Yes – solutions exist, creative solutions, that can bring construction into the new digital world!
TL;DR: the report is here. Please note that my experience with BigQuery and the Google stack is rather limited; this is just my own perspective as a business user.
Edit, 20 Sept 2019: Data Studio now uses BI Engine by default when connecting to BigQuery, and the report now contains the historical data too.
I had already built a dashboard that tracks AEMO data using PowerBI, and it is nearly perfect, except that the maximum number of refreshes is 8 per day, which is quite OK (DirectQuery is not an option, as it is not supported when you publish to web). But for some reason I kept wondering: how hard would it be to build a dashboard that always shows the latest data?
Edit, 23 Sept 2019: actually, my go-to solution for near real-time reporting is now Google Data Studio; once you get used to real time, you can't go back.
The requirements are:
Very minimal cost – it is just a hobby.
Near real time (the data is published every 5 minutes).
Export to CSV.
Free to share.
Ideally not too technical – I don't want to build something from scratch.
I got some advice from a friend who works on this kind of scenario, and it seems the best option is to build a web app with a database like PostgreSQL and a front end in the likes of Apache Superset or RStudio Shiny, hosted on a cheap DigitalOcean VM. I may eventually do that, but I thought let's give BigQuery a try: the free tier is very generous, 1 TB of free queries per month is more than enough, and Data Studio is totally free and uses a live connection by default.
Unlike PowerBI, which is a whole self-service BI solution in one package, Google's offering is split into three separate streams: ETL, the data warehouse (BigQuery), and the reporting tool (Data Studio). The pricing is pay-per-usage.
For the ETL, Dataprep would be the natural choice for me (the service is provided by Trifacta), but to my surprise, apparently you can't import data from a URL. Actually, I was a bit unfair to Trifacta: the data has to be in Google Cloud Storage first, which is fine, but the lack of support for zip is hard to understand; at least in the type of business I work in, everyone is using zip.
I tried to use Data Fusion, but it involves spinning up a new Spark cluster, and the price is around $3,000 per month!
I think I will stick with Python for the moment.
The first thing you should do after creating a new project in BigQuery is to set up cost controls.
The minimum I could get for BigQuery is 0.5 TB per day.
The source files are located here: very simple CSV files, compressed with zip. I care only about three fields:
SETTLEMENTDATE: timestamp
DUID: generator ID (power station, solar, wind farm, etc.)
SCADAVALUE: electricity produced in MW
Add a table partitioned by day and clustered by the field DUID.
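A minimal sketch of the DDL (the project and dataset names are hypothetical):

CREATE TABLE `myproject.aemo.scada`
(
  SETTLEMENTDATE TIMESTAMP,
  DUID STRING,
  SCADAVALUE FLOAT64
)
PARTITION BY DATE(SETTLEMENTDATE)
CLUSTER BY DUID;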
Write a Python script that loads the data to BigQuery; you can have a look at the code used here, and hopefully I will blog about it separately.
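The actual code is linked above; as a rough sketch of just the load step (the table name and local path are assumptions, and the CSV is assumed already downloaded and unzipped):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # header row
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
# append the 5-minute readings to the partitioned table
with open("/tmp/scada.csv", "rb") as f:
    job = client.load_table_from_file(f, "myproject.aemo.scada", job_config=job_config)
job.result()  # wait for the load job to complete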
Schedule the script to run every 5 minutes. I am a huge fan of Azure WebJobs; to be honest, I first tried Google Cloud Functions, but you can't write anything to the local folder by default – the container has to be stateless – and I just find it easy when I can write temporary data to a local folder (I have a limited understanding of Cloud Functions; that was my first impression anyway). Now I am using Cloud Functions and Cloud Scheduler; Cloud Functions provides a /tmp folder you can write to, though it consumes some memory resources.
I added a dimension table that shows the full description for the generator id, region, etc. I have the coordinates too, but strangely, the Data Studio map visual does not support tiles!
Create a view that joins the two tables, removes any duplicates, and filters out the rows where there is no production (SCADAVALUE = 0); if there is no full description yet for a generator id, use the id instead.
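A minimal sketch of the view (the dimension table and its columns are assumptions):

CREATE OR REPLACE VIEW `myproject.aemo.v_production` AS
SELECT DISTINCT  -- drops duplicate readings
  s.SETTLEMENTDATE,
  s.DUID,
  s.SCADAVALUE,
  COALESCE(d.STATION_NAME, s.DUID) AS STATION_NAME,  -- fall back to the id
  d.REGION
FROM `myproject.aemo.scada` s
LEFT JOIN `myproject.aemo.duid_dim` d ON s.DUID = d.DUID
WHERE s.SCADAVALUE <> 0;  -- filter out rows with no production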
Notice that although it is a view, the filter per partition still works. There is a minimum of 10 MB per table regardless of the data scanned, and for billing BigQuery uses the uncompressed size! One very good thing though: query results are cached for 1 day, so if you run the same query again, it is free!
Create the Data Studio report. I create two connections. Live connection: pull only today's data; each query costs 20 MB, as it hits only one date partition (2 tables), and the speed is satisfactory – make sure to deactivate the cache.
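A minimal sketch of that live query (view name as above); filtering on today's date means only a single partition is scanned:

SELECT SETTLEMENTDATE, STATION_NAME, REGION, SCADAVALUE
FROM `myproject.aemo.v_production`
WHERE DATE(SETTLEMENTDATE) = CURRENT_DATE('Australia/Brisbane');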
But to confuse everyone, there are two types of caches. The implication is that sometimes you get different updates depending on whether your selection hits the cache or not. As the editor of the report this is not an issue – I can manually click refresh – but for the viewer, to be honest, I am not even sure how it works; sometimes when I test it in incognito mode I get the latest data, sometimes not.
Import connection: it is called an extract; it loads the data into Data Studio's in-memory database (it uses BI Engine). Just be careful, as the maximum that can be imported is 100 MB (uncompressed), which is rather small (OK, it is free, so I can't really complain). Once I was very confused why the data did not match; it turned out Data Studio truncates the import without warning. To optimise this 100 MB, I extract a summary of the data, removing the time dimension and filtering to the last 14 days only, and I schedule the extract to run every day at 12:30 AM; notice that today's data is not included.
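A minimal sketch of that summarised extract (again with the hypothetical names above): drop the time-of-day dimension, keep 14 days, and stop at yesterday:

SELECT
  DATE(SETTLEMENTDATE) AS day,
  STATION_NAME,
  REGION,
  AVG(SCADAVALUE) AS avg_mw
FROM `myproject.aemo.v_production`
WHERE DATE(SETTLEMENTDATE)
      BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 14 DAY)
          AND DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
GROUP BY day, STATION_NAME, REGION;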
Note: because both datasets use the same data source, cross-filtering works by default; if you use two different sources (say, CSV and Google Search), you need some awkward workarounds to make it work.
Voilà, the live report 😊. A nice feature shown here (sorry for the GIF quality) is the export to Sheets.
Schedule email delivery
Although the report is very simple, I must admit I find it very satisfying; there is a little pleasure in watching real-time data. Still, there are some missing features I would love to have:
An option to deactivate all the caches, or bring back the option to let the viewer manually refresh the report.
An option to trigger email delivery based on an alert (for example, when a measure reaches a maximum value), or at least to schedule email delivery multiple times per day.
Make the Data Studio website mobile-friendly; it is hard to select a report from the list of available reports.
Google Data Studio support for maps is nearly non-existent; that's a showstopper for a lot of business scenarios.
Kepler.gl is a geospatial data analysis tool by Uber! In this short post we show how to build a map with a time attribute and a time playback slider. It is a very simple and straightforward process: all you need is to format the data as a CSV.
This data was randomised. As an example, I show the main steps to build a solar farm – piling, tracker, module, then electrical works – but obviously you can try it with road or pipeline projects, or really any project that has geographic data.
Please note the data is rendered on your local computer, so it is totally safe.
The map is here; keep reading if you want to understand how it was built.
I attached a sample dataset here; the fields are self-explanatory. Just be careful with Excel: for some reason, when you open the CSV file it changes the date format, so make sure the dates stay in a format like YYYY-MM-DD hh:mm:ss.
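The attached sample isn't reproduced here, but a minimal sketch of the expected shape (values made up) is four rows per pile, one per stage:

id,lat,lng,status,date
PILE-00001,-23.5321,148.1623,Piling,2020-03-01 08:00:00
PILE-00001,-23.5321,148.1623,Tracker,2020-03-18 10:30:00
PILE-00001,-23.5321,148.1623,Module,2020-04-02 14:00:00
PILE-00001,-23.5321,148.1623,Electrical,2020-04-20 09:00:00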
And because a medium-sized solar farm has around 50,000 piles, if you want to show just 4 main stages, the total number of rows will be 50,000 × 4 = 200,000 rows; PowerBI will not support that (Tableau shines in this scenario).
The documentation is very clear, please read it to experiment with your own data
1- Load the data: if your dataset has field names like X_lat and X_lng, Kepler.gl will automatically recognise them as a point layer.
2- Define the legend: we select colour based on the field status (and pick the palette of your choice).
3- Select the time field: the field should be recognised as a time type; otherwise the playback slider will not be available.
4- Add satellite as a new map style: for some reason it is not there by default, but it is trivial to add.
5- Play with the slider, either moving the end date or a moving window – up to you.
6- Now you have multiple options for sharing.
You need to have a Mapbox account; it is virtually free for medium usage!
For this example I will share it using Dropbox, but you can also export to HTML and share it securely, either by email or hosted in SharePoint.
This is one of the best pieces of software I have had the privilege to use: nothing to install, free, simple, and straight to the point!
I have been asked to produce a simple construction report: we need to show the last 4 weeks of actual progress data and 6 weeks of forecast, and, to make things a little more complex, the average installation rate since the start of the project. Nothing special – three measures: average to date, installed per week, and forecast per week.
Obviously, this is trivial to do in Excel using named sets; if you don't know what named sets and cube formulas are, you are missing the most powerful reporting paradigm in Excel – a good introduction is here, and there are plenty of resources here.
Unfortunately, named sets are not yet supported in PowerBI; you can vote for the idea here.
Just for demonstration purposes: if you try to add those three measures to a matrix visual, PowerBI simply repeats them for every time period. Obviously that's not good at all – the actual installation makes sense only in the past, and the forecast has to be in the future. There is no option to hide a measure when a column has no value, and even if it were possible, we need to show the average installation independently of the time period. Anyway, this is the report when you add the three measures.
Because I had already learned a new trick for dynamically adding measures to a matrix visual in PowerBI, I was tempted to see if it works in this scenario.
So, let's see how it can be done using a disconnected table.
Create a disconnected table with two columns: Order and Status.
Add a calculated column. As the cut-off date changes at least three times a week, the week numbers change accordingly, so we can't simply hard-code the dates. Instead, let's add a new calculated column that looks up the week date from a master calendar table based on the order; when the order is -4, it returns "average to date". I added a dummy 0.5 order just to create an empty space between actual and forecast (cosmetics are important).
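A minimal sketch of that calculated column (the calendar table, its WeekOffset column, and the formatting are all assumptions):

Week_Label =
SWITCH (
    TRUE (),
    Disconnected[Order] = -4, "Average to date",
    Disconnected[Order] = 0.5, " ",  // dummy spacer between actual and forecast
    FORMAT (
        LOOKUPVALUE (
            'Calendar'[WeekEnding],
            'Calendar'[WeekOffset], Disconnected[Order]
        ),
        "dd mmm"
    )
)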
This is by no means a fully comprehensive guide on this subject. It is instead just a primer to get us thinking about how we feed data into our reports.
Who doesn't love the glossy Level 1 reports our mega construction projects produce? But really, when you look into these beauties, do you really understand the difficulty that goes into them? What follows is first a description of what a typical Level 1 report is, and then how we can structure our Excel-based data to be a bit smarter (which is the real message of this article).
Interspersed throughout will hopefully be a few key strategy points to guide your work.
I'll then showcase how you can take what will now be structured data and load it into a PowerBI visual (although the process to capture the data into any database and drive any visualization tool would be the same).
Strategy – Don’t be afraid to use excel (not everything needs to be automated)
Key Elements of a Level 1 Report
Cost and Progress
Here we are presented with:
Overall progress curve
Cost & Commitment curves
Some may argue over what to lead with – for me it's always %. No value highlights where you are more than "what % complete are we?". Not displayed in the image above is a data series reflecting how many people we have, compared against plan. People achieve progress; it's impossible to talk progress without talking about how many people we have. The graphs provide enough context to allow for discussions about productivity without having to muddy the waters.
The cost sections should include visibility into what our final forecast costs will be (and comparison against baseline). Underneath that key metric are a few sub items such as how much contingency we have, a few cost curves associated with spend profiles and commitment profiles.
Schedule and Narrative
The schedule aspects of a Level 1 report are always tricky. Do we need to only display the final project milestone? For me, on major projects no single DATE has any meaning. Thus even on a Level 1, I still prefer to include 10-15 dates that represent some key aspect of the project. All dates should be compared against what we said last month to highlight current month variances, and dates should be compared against our project baseline (or whatever current approved version thereof).
The narrative section of a Level 1 can nearly always be updated by simply reading the progress, cost and schedule tables. Just put words to the graphs. Key adders here are insights into RISKs. What may come in the future that will alter what we are saying today!
As always, safety metrics are also usually front and center. For me, this has always been a difficult aspect of our jobs. A political correctness that is forced into our reporting. Don’t get me wrong, safety is the most important aspect of a project. So, including a safety table somewhere on the Level 1 is always done. For this article, I want to instead focus on the key project control elements and data integration.
Level 1 Data Structures
So, we all know what a Level 1 report looks like, and I would fathom we can all mostly agree these are the elements included and can be rolled out as a standard for any major construction contractor. Most of our reports likely already report this information in some manner or another. The entire point of this article is that we should really focus on entering the data in a smart data centric way so that if you want to automate anything down the line, you have the foundations to do so.
At this stage, I don’t want to talk about the source data used to generate your summaries. We can leave that for a later discussion.
Key Data Domains
We are aiming towards consistency here and want to actually represent all the data required for our key Level 1 chart to be housed in a database. Therefore we need to have structure.
Strategy – Do not focus on systems, focus on DATA
A critical strategic element in my approach is that I do not care what systems you use. Our reporting is not a function of our systems (at least in this step-1 phase). We instead need a structure from which we can extract data and, as easily as possible, move that data into a template or format from which we can drive our Level 1.
If you go down the path to seamlessly integrate source systems with a Level 1, you unwittingly constrain yourself.
Typically our (time phased) progress data will be sourced from Primavera. There are other systems where the progress data may live, but again, that isn't the focus of this article – I don't care where it lives and neither will any seasoned project controls manager. We just need to know it exists and has a common structure.
Here are a few key notes: use a consistent data format. The structure above is how all your progress data should be housed, not just Level 1. All time-phased data, all the way down to Level 5 detail items, should be managed in a data structure, not a fancy formatted Excel file. Trust me, updating a table such as the above will serve you in the long run. Even if your data is fully managed inside a system such as P6 or PRISM or ECOSYS or COBRA, you should be able to at least extract Level 1 into the format defined above.
You guessed it: we can capture our Level 1 cost data in exactly the same format.
In the graphs we are building, there are only 11 specific datasets. Only 4 of these require update on a period basis. So again, we really boil this down to something simple.
Strategy – Do not over complicate anything in your Level 1 layer
The implementation of the specific data model I have outlined above fits the strategic approach of keeping your Level 1 simple. Any project can implement this data model for Level 1 without any integration into source systems. Level 1 can be updated by the project controls team doing a few copy-pastes into Excel to capture project-wide data. Again, I would assume your teams already do this, but perhaps end up copying this data into various other corporate systems as well.
Again, we are keeping a simple approach and only capture the required information.
Here, we are forced into a different structure. So whereas the cost and progress data can fit the same data model (as seen above), we will need a different template for schedule dates. We will typically be using Primavera, as such this model fits P6, but the idea is universal.
I do not believe this information can ever be fully automated from our scheduling systems. These paths will continually be adjusted. The planning lead will always refine what activities are being tracked to be displayed on the Level 1. Behind the scenes, there are tricks upon tricks to pull the dates, however, again, we are talking about the data layer here, not necessarily HOW you get the data into this format.
It is entirely possible to have the assignments encoded into P6 activity codes. Therefore, it would be possible to integrate your Level 1 data directly with either the source P6 database or an XER export. In my experience, any automation attempted in this arena (for Level 1 data) is futile. We are only talking 10-15 key activities. Let your lead planner sort out how they get the data into this format. Again, our strategy is to not over-complicate this. If the data is provided to a digital team in the format above, you are for all intents and purposes done.
The model above only captures the finish dates. If additional visuals with simplified Gantt charts are needed in your Level 1 (to be discussed in my next, Level 2, article), you would have to extend the above.
The nice value of the above structure is that we have effectively created an interface – an integration layer – between what will be P6 data and our dashboard. The list of tracked activities can easily be edited by way of a SharePoint list. Then, in your data model, you can link on scheduleID to pull the relevant date data (I suspect many do this).
Too often, narrative comments are shuffled between parties via email, entered into several documents, edited, customized, etc. The project controls team is always struggling to source commentary from the various parties, and in my experience, we end up entering something ourselves.
Level 1 data structures have to fit into these complications. In this realm, sharepoint offers a canned solution by way of sharepoint lists.
Strategy – If Technology already exists, use it
Strategy – Technology can be used in innovative ways – use a mashup mindset to use existing technology in a new way
I find that sharepoint lists offer unparalleled capabilities for commentary. However, for lists to be really functional, they need to be embedded into FORMS or some routines that provide export functionality
In this example, I have mocked up a simple INFOPATH form that could represent our sharepoint fields. The sky is the limit when it comes to existing technology that can automate the capture of this type of commentary.
The value-adder here is that instead of allowing unstructured comments (via email or manually marking up a Word, Excel, or PowerPoint file), we have structured comments that are housed in a database, and that database can be updated in a distributed manner using web-based technologies.
The above would be a web based form which will be updated by the associated responsible parties. However, we can’t quite import a form into our data model. When the above form is filled out, the data will be stored in a data model (which we will have to design first before we can even build the form above). Thus, what we are looking for is something akin to the below
The above is just a table in an excel file, but again, when we house data in this format, it can naturally flow into a database. That is what we need to focus on. Even in our excel reporting world, if you can capture commentary in this tabular data centric way, you can still link to it from your main dashboard tabs to be “smarter” in how information is managed.
Strategy – Focus on the DATA! (I can’t say this enough)
Everything we do can be captured in a data model. Every report we design should be able to pull direct data out of a data structure. Thus, before we add anything to reports, first consider the entire flow of data required.
Putting it all together
At no point in time in the above have I had to rely on a source system. However, I have been able to take a typical Level 1 report and extract everything from it. I have taken this data and outlined a data model (in simple form) that can drive not just 1 project, but an entire corporate endeavor in this space.
As with everything, nothing novel here. Many companies already have systems that capture some of this information. This is more just a thought experiment for those that perhaps do not have a clear data model that supports level 1 reporting. It also highlights the discussion topics of “what are the manual steps” – because there will be manual steps in getting the data into the right format.
For me, everything above has to be manual at some point up or down the food chain. Your projects and portfolios need to have the discussions about where this type of Level 1 data is housed. If all projects already have this data in consistent databases, all you need to do is query that source. Everything discussed here is system independent. You can easily generate these data tables by querying a source system directly (if you can), but I have neither limited nor required that approach.
Strategy – Whatever you do, allow for flexibility
Even though my data model is entirely Excel-based, the data structure is very powerful. I can, in quite automated steps, import and convert these datasets into a more database-like model and thus gain value from dashboards that wouldn't be custom to your project but could drive an entire portfolio (and when you see how this scales to Level 2 data and beyond, the world's your oyster).
If you actually want to proceed with a dashboard, and if you have your data as outlined above, here is what you can do with it. In fact, I would recommend that the source tab in Excel driving your dashboard looks like the below.
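The screenshot isn't reproduced here, but a minimal sketch of that source tab (column names inferred from the transformation steps below, values made up) is one row per series per reporting period, with the time phasing running across the columns:

ProjectID | SeriesName      | As-Of      | Jan-19 | Feb-19 | Mar-19
P001      | Progress_Actual | 2019-03-31 | 2.1    | 5.4    | 9.8
P001      | Progress_Plan   | 2019-03-31 | 2.5    | 6.0    | 11.2
P001      | Cost_Actual     | 2019-03-31 | 1.2    | 3.1    | 6.4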
The above data isn't "immediately" friendly for digital reporting; a few transformations are required. The key steps involved are as follows (the below was done just as an example using PowerQuery):
Unpivot the Timephase date columns
Pivot the "SeriesName" column to create a unique column for each dataset (this is needed to create unique lines on our dashboard graphs; a sketch of both steps follows)
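A minimal sketch of those two steps in Power Query M (the table and column names are assumptions):

let
    Source = Excel.CurrentWorkbook(){[Name = "Level1Data"]}[Content],
    // 1. unpivot the time-phased date columns into Period/Value pairs
    Unpivoted = Table.UnpivotOtherColumns(
        Source, {"ProjectID", "SeriesName", "As-Of"}, "Period", "Value"),
    // 2. pivot SeriesName so each data series becomes its own column
    Pivoted = Table.Pivot(
        Unpivoted, List.Distinct(Unpivoted[SeriesName]), "SeriesName", "Value")
in
    Pivoted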
At this stage we have a nicely formatted table and we can now import it into PowerBI. The intent here is not to showcase a beautiful Level 1 dashboard in PowerBI; my intent is more to showcase the data structures needed to drive a dashboard. With the above data, we can pull each data series into graphs, tables, cards, KPI metrics, etc.
Our model has tagged each record with an "As-Of" date. Thus you can utilize this structure to have your dashboard display ALL prior months by way of a slider or selector. Given more advanced skills, you can also pull out metrics about current incremental values vs what we said last month. Although, I feel those metrics are best served in a Level 2 report where more detail is available.
Apologies for the look and feel below, I just pulled in the data to showcase that indeed you can drive a dashboard with what is effectively just a few lines of data that every project already has. We can bring together cost, schedule, progress, and commentary quite easily and in a very data friendly way.
For me, there is no substitute for an Excel-based dashboard. The value in this for me is ensuring that when I produce a Level 1 dashboard (in Excel), I give consideration to structuring my data appropriately. This gives us a fighting chance to perhaps go down the path of creating a more digital world. It also allows for more flexibility in dealing with Level 2 data, to maybe have some real automation in rolling up data.
Level 2, obviously. I hope to showcase how the same ideas and concepts can also help you structure your raw Excel-based Level 2 data to be better utilized in a more digital world.