Using WebJobs to scrape a public website and copy data to Azure Blob Storage

When I started the AEMO Dashboard, I had a hard time dealing with the PowerBI gateway: it just sits there, and my laptop has to be online whenever a scheduled refresh runs. That annoyed me, and I could not understand why cloud-based data needs an on-premises gateway anyway. Obviously I later learned that, strictly speaking, it is not required; there is just an undocumented trick to get away from it (the trick is in the first blog post, thanks @Rad_Reza).

But before I was aware of that, I went down some rabbit holes dealing with new tools that are outside my comfort zone, and I think they are worth sharing.

My first thought was: instead of accessing the data directly from the website, let's copy it to cloud storage and read it from there. I already have a Google Cloud Storage account, which is very generous with 5 GB of free storage, and my data is not big, around 2 GB of zipped CSV.

First setback: there is no native PowerBI connector for Google Cloud Storage, and even if there were, we have something called egress fees. In a nutshell, cloud storage is really low cost and loading data in is free, but getting your data out is not free unless it stays with the same provider in the same region; most cloud vendors use the same model. As my data will be processed in PowerBI, the clear choice is Azure Blob Storage.

Azure Blob Storage

The setup is very simple; I used the following options:

  • the same region as my PowerBI region (otherwise you pay egress fees)
  • for replication I used LRS (locally redundant storage)

As PowerBI doesn't support Data Lake Storage Gen2, I used classic Blob Storage.

Let’s move some data 

Anyway, now that I have my storage, I need a tool to copy the files from http://nemweb.com.au/Reports/Current/Daily_Reports/ to my storage account.

Azure data factory

When you want to copy data, the official tool in Azure is Data Factory. I played around with the copy activity; it is straightforward, and my first attempt did work and it was fast, actually too fast 😊: no zip files were transferred, just the HTML page.

The copy activity probably handles this case just fine, but when you are using your own credit card on a cloud tool and you don't know what you are doing, it is better to step back and take the time to understand how it works. I deleted the newly created resources and went to the second option: Python!

 PYTHON

Normally I would go with R, but Azure Blob Storage has no SDK for R. I have very limited experience with Python, having used it only for the excellent package Altair, so let's try something new.

I was very pleasantly surprised: the amount of documentation for Python is just amazing. At one point I asked a question on Stack Overflow and got a very succinct answer in less than a minute; no one was judgemental or downvoted my question (and it was very basic). The only drawback is that sometimes example code works only for Python 2, while I am using Python 3. Anyway, enough talking, let's show some pseudo code.

Step 1: get a list of file names from the website

import re
from urllib.request import urlopen

url = "web address where the files are saved"
result = urlopen(url).read().decode("utf-8")
list_website = re.findall(r"[\w.]*\.zip", result)

Here is a snapshot of the result; the full list has 60 items.

['PUBLIC_DAILY_201904260000_20190427040503.zip', 'PUBLIC_DAILY_201904260000_20190427040503.zip', 'PUBLIC_DAILY_201904270000_20190428040502.zip']

Step 2: get a list of file names from the blob storage

In the first run the list is empty, as we have not loaded anything yet; I loaded a couple of files manually just to test that it works. The API for blob storage is very simple: you only need to provide your storage account name and key, and I love that.

from azure.storage.blob import BlockBlobService

container_name = "xxxxxx"
block_blob_service = BlockBlobService(account_name="xxxxxx", account_key="xxxxxx")
generator = block_blob_service.list_blobs(container_name, prefix="current/")
list_azure = [blob.name for blob in generator]

Same as before, you get a list of names.

Step 3: get the list of files that exist on the website but not in the storage

The code in Python is very simple: it is just the subtraction of two sets, converted back to a list using the list() function (I get why people like Python).

files_to_upload = list(set(list_website) - set(list_azure))
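
For illustration, here is a minimal sketch of that comparison using the file names shown earlier (the lists themselves are hypothetical, and if the blobs are stored under a virtual folder such as current/, the prefix would need stripping before the comparison):

# two files published on the website, only one already copied to storage
list_website = ["PUBLIC_DAILY_201904260000_20190427040503.zip",
                "PUBLIC_DAILY_201904270000_20190428040502.zip"]
list_azure = ["PUBLIC_DAILY_201904260000_20190427040503.zip"]

# the set difference keeps only the names present on the website but not yet in storage
files_to_upload = list(set(list_website) - set(list_azure))
print(files_to_upload)  # ['PUBLIC_DAILY_201904270000_20190428040502.zip']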

Step 4: upload the new files to Azure Blob Storage

Same here, the Azure API is very simple and clear. I had only one issue: when the script uploads in a loop, it does not wait until the transfer is completed before jumping to the next file. My workaround was just to poll and sleep (a synchronous copy is supported, but not in this scenario where the source is a URL); anyway, I got the answer on Stack Overflow.

import time

for x in files_to_upload:
    block_blob_service.copy_blob(container_name, x, url + x)
    # poll the copy status and wait until the transfer is completed before moving to the next file
    copy_status = block_blob_service.get_blob_properties(container_name, x)
    while str(copy_status.properties.copy.status) != "success":
        time.sleep(1)
        copy_status = block_blob_service.get_blob_properties(container_name, x)
Basically, wait until the status of the copy is success before moving to the next item (did I say I love Python syntax?).

The full script is here.
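
For reference, here is a minimal end-to-end sketch of the same idea (a sketch, not the exact script); it assumes the legacy azure-storage SDK that exposes BlockBlobService, and the account name, key and container name are placeholders:

import re
import time
from urllib.request import urlopen

from azure.storage.blob import BlockBlobService

URL = "http://nemweb.com.au/Reports/Current/Daily_Reports/"
CONTAINER = "nemweb"  # placeholder container name

# step 1: list the zip files published on the website
html = urlopen(URL).read().decode("utf-8")
files_on_website = set(re.findall(r"[\w.]*\.zip", html))

# step 2: list the blobs already copied to the storage account
service = BlockBlobService(account_name="xxxxxx", account_key="xxxxxx")
files_in_storage = set(blob.name for blob in service.list_blobs(CONTAINER))

# step 3: keep only the files that are not in the storage account yet
files_to_upload = files_on_website - files_in_storage

# step 4: start a server-side copy for each missing file, then poll until it is done
for name in sorted(files_to_upload):
    service.copy_blob(CONTAINER, name, URL + name)
    props = service.get_blob_properties(CONTAINER, name)
    while props.properties.copy.status == "pending":
        time.sleep(1)
        props = service.get_blob_properties(CONTAINER, name)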

WebJobs: a Free Job Scheduler

OK, so we have a script that works; now we need to run it on a schedule, once per day at 5 AM. Keep in mind the whole purpose of this workflow is to avoid on-premises software, so I just need a service that runs a script in the cloud on a schedule. As I am already on Azure, let's stick to that ecosystem.

And since it is a personal project, I prefer a free solution; my script runs only once every 24 hours, for a couple of minutes. A quick Google search and I found this little treasure. I will not repeat the steps here; WebJobs is a service that does just that.

Note that azure-storage-blob is not a base Python package; you need to install it first in WebJobs. The scheduling functionality is very flexible as it uses CRON expressions (something like 0 0 5 * * * for a daily 5 AM run); I wish we had something like that in PowerBI dataflows.

End result

Every day at exactly 5 AM, a new file shows up in the Azure storage. Although I don't strictly need those files (I am now using another approach to load the files directly in PowerBI), it is important to build a data lake (yes, I just said that; I am joking, a data lake is a folder in the cloud where you save the raw files, nothing more). Storage is cheap, but most importantly the requirements may change: I may need to report on another dimension, and it is crucial to keep the raw unprocessed data.

Takeaways

  • Python is awesome
  • The Azure APIs for Python are straightforward
  • Azure is awesome
  • Be careful of egress fees
  • CRON is awesome; I wish it were supported in PowerBI dataflows
  • I wish PowerBI dataflows could save raw files; PowerQuery is amazing, but it does not copy raw files
  • I wish WebJobs added support for R

Tracking AEMO data using PowerBI

I was looking for the power production of a particular solar farm, and I couldn't find any public dashboard that shows this level of detail; all I could find was high-level aggregated data. (Later, after I built the dashboard, I found this excellent resource, Nemlog.)

The dashboard is published here: https://djouallah.github.io/AEMO-POWERBI/. It is refreshed every day at 5 AM.

How it works

The Australian Energy Market Operator (AEMO) publishes all kinds of datasets; one, I believe, is real time (and requires a subscription), but for my particular use case I am interested in this dataset:

http://www.nemweb.com.au/#daily-reports

There are two folders:

  1. Current: the last 60 days of data (current day not included, updated at 4 AM)
  2. Archive: the last 13 months of data (current month not included, updated monthly)

Pulling data from a website and building a dashboard in PowerBI is straightforward; it took me a couple of hours on a weekend. The problem is how to maintain it.

Ideally, you build a dashboard and all the refreshes are done by the service, which was not the case here:

  • Pulling the data directly from the archive is very slow: it takes nearly 3 hours (unzipping and filtering only the data we are interested in). It is also not sustainable, as the earliest month keeps being removed from the website; I like to keep the history, and it is really bad practice to download the same data every day
  • To keep the history, we need to save the archive somewhere else; too easy, just save it on a local laptop
  • History issue solved, but now we have created a new problem: on-premises data requires a gateway, which basically means installing software on your laptop, and obviously the laptop must be on when the refresh runs

After playing around with some options, I came up with this workflow:

  • Create a local folder that contains all the archive files
  • Create a PowerBI data model on the desktop just to process the archive data
  • Export the two clean tables (price and production) to CSV using DAX Studio
  • Load the CSVs to Azure Blob Storage (to get rid of the gateway)
  • Load the current zip files from the website; this does not require the gateway, but you need the following considerations: use a relative path in the Web.Contents function (see Chris Webb's blog and @TheBIccountant), as shown below

Web.Contents("http://www.nemweb.com.au/REPORTS/CURRENT/",[RelativePath = "Daily_Reports/" ])

Don't use the Web.Page function; parse the result as XML or CSV instead (thanks Reda Rad for the advice).

So you can use something like this:

Table.FromColumns({Lines.FromBinary(Web.Contents("http://www.nemweb.com.au/REPORTS/CURRENT/",[RelativePath = "Daily_Reports/" ]), null, null, 65001)})

  • Append the data from Azure Blob Storage and the Current folder from the website; the refresh is now very fast, as PowerBI just reads the CSVs without any transformation
  • Publish to web

Good so far: I managed to get rid of the gateway, and the dashboard is refreshed automatically in the service, with no maintenance for 60 days.

As the Current folder contains data for the last 60 days only, you need to update the initial CSV files:

  • Download the pbix from the service, export the CSVs, and upload them to blob storage; you only need to do that once every 60 days.

Pros

  • PowerBI Publish to web is an amazing service and it is totally free
  • A powerful solution without writing any code
  • PowerBI free license is free 🙂

Cons

  • Publish to web is not suitable for real time, as it takes nearly 1 hour to propagate an update to the website; that's why I can't publish the current-day data, which is updated every 5 minutes
  • Publish to web does not support exporting data from a visual
  • Pricing for Azure Blob Storage can be tricky: storage itself is very cheap, data upload is free, and download within the same region is free (for example, blob to the PowerBI service), but when you read data from the blob into PowerBI Desktop you incur charges, so just be careful; it is not the OneDrive model, where download is free

We showed here a simple workflow using a PowerBI free license and Azure Blob Storage (or Dropbox); it is very easy, but with one inconvenience: you need a manual operation once every two months, which is a bit annoying.

Edit 23-June-2019: after I published this blog, I got excellent feedback from Maxim Zelensky. Using PowerBI dataflows (requires a Pro license), we can fully automate the whole process, as with dataflows we can have a self-referencing query; I am not going to repeat it here, go and read it.

Edit 24-June-2019: as it is a personal project and the data is public, I am not really excited about using a paid service to host the CSV files, so I moved the two CSVs from blob storage to Dropbox, which is totally free. The whole dashboard infrastructure is now free. Good work, Microsoft.

Edit 26-June-2019: a proper solution would be to save the raw data in a data lake, see here.

Using Or Conditions between Slicers to filter Primavera Schedule using DAX

The interactive report is published here; you can download the PBIX here.

In PowerBI, Tableau, or any BI solution, by default when you have multiple slicers the filter condition is always AND. In this particular case, the user wants to filter a schedule based on the start date OR the finish date OR the activity status; basically, how do we reproduce the following Primavera filter using DAX?

[Screenshot: the equivalent filter dialog in Primavera P6]

Using normal slicers will not work, as they will simply show the activities where all the conditions are met; instead, we will use some DAX techniques (disconnected slicers and filtering on the result of a measure) to get the new behaviour.

 

Let's build a simple data model.

1- The main table is task: it contains Activity ID, Activity Status, Start, Finish.

2- Add a date table, Start_Date; don't link it to any table, use it in a slicer. The table is generated using PowerQuery.

3- Add another date table, Finish_Date; don't link it either, use it in a slicer.

[Screenshot: the data model with the disconnected tables]

4- Add an Activity Status table; don't link it, use it in a slicer.

5- Add this measure:

Filter_measure =
CALCULATE (
    COUNTROWS ( task ),
    FILTER (
        task,
        ( [Start] <= LASTDATE ( Start_Date[Date] )
            && [Start] >= FIRSTDATE ( Start_Date[Date] ) )
            || ( [finish] <= LASTDATE ( Finish_Date[Date] )
                && [finish] >= FIRSTDATE ( Finish_Date[Date] ) )
            || IF (
                NOT ( ISFILTERED ( 'Activity status'[Activity status] ) ),
                BLANK (),
                'task'[Activity status] IN ALLSELECTED ( 'Activity status'[Activity status] )
            )
    )
)

 

Add this measure to the matrix visual as a filter only, set the filter condition to "is not blank", and voila.

Loading polygon geometry into the PowerBI data model

I am starting a new project that involves plotting a relatively big map (3000 polygons). It should have been straightforward, as I have done it before, but a limitation in my preferred PowerBI custom visuals led to an interesting result.

1- Synoptic Panel: in my view it is the best visual to show a custom map (floor plan, general layout, etc.), but unfortunately not this time. My new map has 3000 polygons, and when I tried to plot some attributes, I got the PowerBI equivalent of the Windows blue screen of death: too many values!

That's bad. I filed a bug report, and the author was kind enough to reply that it will be fixed in a future release (something to do with incremental loading). Fine, let's try other options.

2- Shape Map: it did not even render the polygons correctly, and to be honest the visual has not received any update since the first time I used it, 2 years ago.

3- Mapbox: showstopper; to load your own shapefile and keep it private, you need a business plan.

I am stuck; I can't plot a 3000-polygon map in PowerBI, so let's try Rstats.

4- Leaflet: a very versatile mapping engine. I currently use it in other projects (outside of PowerBI) and it plots 60K points and nearly 500 polygons in under 2 seconds, but there is a problem: Leaflet's output is HTML, which is not supported in PowerBI, and I never managed to make a decent static screenshot, something to do with the zoom.

5- ggplot2 + sf: a quick Google search and I got this code:

library(sf)
library(tidyverse)

# read the polygons from a local geojson file
map_shp <- read_sf('C:/Users/mimoune.djouallah/test.geojson')

# join the PowerBI dataset (dataframe) to the geometry and plot it
map <- dataset %>%
  left_join(map_shp, ., by = c('id' = 'id')) %>%
  ggplot() +
  geom_sf(aes(fill = type), lwd = 0)

map

Great, I have my map now (this is only a portion; I can't show the whole layout as it is proprietary). There is only one little problem: the code takes 25 seconds to render, which is very annoying, especially as in PowerBI, whenever you touch anything, all the visuals render again.

My first thought was that maybe the join between the PowerBI dataset (dataframe) and the geojson is slow. The beauty of the R integration in PowerBI is: you click on the R icon, a new window opens with an empty dataset, then you add the columns you want to analyse; yes, that simple, no ETL, no messing with data, you have access to the mighty data model.

Loading geometry data into the PowerBI data model?

 

After reading the sf documentation, I learned that all I need is a dataframe with a column that stores the geometry as WKT (well-known text; a polygon looks something like POLYGON((0 0, 1 0, 1 1, 0 0))). Too easy: just use QGIS, save as CSV, and make sure to select geometry as WKT.

Now I load the CSV file into the data model and join it with the attribute table.

Here is the new code; notice that we don't load any external file, all the data comes from the data model:

library(sf)
library(tidyverse)

# the geometry column comes from the data model as WKT text
map_file <- st_as_sf(dataset, wkt = "geometry")

map <- ggplot(map_file) +
  geom_sf(aes(fill = type), lwd = 0)

map

Basically, you just need to tell sf which column represents the geometry. Unfortunately it is still slow, maybe now it is 24 seconds.

Another round of googling and I found the issue: ggplot2 is very slow to render. No problem, this is R after all; there are at least a couple dozen packages that do the same thing 🙂

6- Pure sf

It turns out sf is not only for managing geometric dataframes, it can plot too (who would have thought that).

library(sf)

# sf can plot directly, no ggplot2 needed
map <- st_as_sf(dataset, wkt = "geometry")
plot(map["type"])

It renders in less than 2 seconds, happy dance.

 

Bonus point: cross-filtering geometry

My purpose in loading the geometry was just to speed up ggplot2, but hold on: the geometry is in the data model, so not only do the attributes change (installed quantities, category, etc.), I can even filter polygons on the fly! This is quite amazing; I can load all the layers and filter out any shape I want.

HTML map using R, Part 1

I had always tried to learn R, but I never succeeded; I did not like it. After enjoying the GUI of PowerQuery, using R felt like self-inflicted torture. But obviously I was wrong, as usual: R is not only for data cleaning.

At a PowerBI user group, I heard someone talking about R, and I made the mistake of asking him what R can do that PowerBI cannot. All I can say is I wish I had not asked that question. Anyway, in defending R, he said something about generating HTML files, and boom, that's exactly what I was looking for: a file that anyone can read, that is interactive, and that works offline.

My problem is that I build very nice interactive map visualizations, but for wide distribution I just print them to PDF, which obviously defeats the purpose. Not everyone has access to the PowerBI service, and for external parties (clients, subcontractors) you can't reasonably expect them to subscribe to the PowerBI service just to see a map :). Using Excel 2016 was an option, but unfortunately some functionality is still missing (it can't show a label), and honestly the integration of 3D Maps with Excel looks like an afterthought.

The good news is that R has excellent integration with PowerBI, so instead of reinventing the wheel, just use R for the stuff that cannot be done natively in PowerBI.

I hope to publish multiple blogs to document my workflow.

In my real scenario, which I cannot publish for obvious reasons, we need to view the status (completed or not) of nearly 50,000 piles. Showing all the piles would clutter the view; instead, I found by accident that a library called Leaflet (the engine used by Mapbox) has the ability to show a layer only when you zoom in.

I will use dummy data (literally foundations on a mountain in New Zealand as an example).

The data sources are an Excel file with the pile locations and a geojson file with the foundation locations, all generated using the excellent QGIS.
The R code is:

library(readxl)
library(leaflet)
library(htmlwidgets)
#read Excel Data
data <- read_excel("pile.xlsx", col_types = c("text","text", "numeric", "numeric"))
#read json
map_data <- geojsonio::geojson_read("foundation.json",what = "sp")
######################################
map <-leaflet(map_data) %>%
addTiles(urlTemplate = "https://mts1.google.com/vt/lyrs=s&hl=en&src=app&x={x}&y={y}&z={z}&s=G", attribution = 'Google') %>%
addPolygons()%>%
addCircleMarkers(lng = data$x, lat = data$y,radius = 3)
map
saveWidget(map,file="foundation.html",selfcontained=TRUE,title = "foundation")

The result is a self-contained HTML file; at the moment it shows only 2 foundations and some piles. In the next blog post, we will add zoom-dependent layers, filtering of layers, labels and more.

The result is here, and the source data is located in this GitHub repository.

 

ScheduleReader

Introduction to ScheduleReader™

Companies and organizations always strive to stimulate employee motivation, improve company performance and, ultimately, achieve project success.

Since all these aspirations depend on different factors, it is essential to adopt practices that improve the working environment. For example, for companies working on complex projects in large-scale sectors such as construction, industrial manufacturing, oil and gas, and public utilities, it is imperative to practice project management.

The process of project management consists of several phases that are critical for successfully finishing a project, so each needs to be carefully and properly executed. The project management flow also needs to be continually improved so that key project deliverables are achieved. One essential factor in the project management process is the use of good project management software.

Organizations in the above-mentioned sectors generally plan their projects in Oracle® Primavera P6. Accessing these project plans, which are stored in Oracle's database, requires a license. Obtaining licenses for all project participants might be costly, so many decide to buy a couple and use a PDF reader as the tool for reading project plans.

But upper management isn't always fully aware that this can be a risky way of managing a project, which can cause the project to fail. It can hamper the performance of all project participants and consequently hinder the progress or accomplishment of the project.

ScheduleReader is a software solution which can replace the PDF reader and simplify the process of managing a project. It is an effective solution that contributes to all four phases of the project management process. By implementing the software in a working environment, every project participant will be able to access the project plans without the need for an expensive Oracle license, making the management of the project faster, more comfortable and definitely more functional.

The implementation of ScheduleReader in your environment can seem a challenging and unpleasant change for your project team; however, it is extremely simple, as ScheduleReader is a completely standalone solution, and in the end it will certainly bring a positive impact to your organization.

It is developed as an independent standalone software application, with the objective of offering a simple way of viewing project data without having a Primavera P6 license. It supports project schedules exported in XER, XML and XLS file formats. The viewer is designed with a highly customizable interface and an overall organizational structure that resembles Oracle® Primavera P6.

This enables users to easily navigate through different tasks in the project. Choosing ScheduleReader™ as a software solution offers advantages to all project participants by giving them a dynamic overview of the project, simple comparison of project baselines, easy navigation through activities and prompt reporting on project status.

Key benefits of ScheduleReader™ that boost success

As a must-have addition to your P6 project management software suite, ScheduleReader™ offers improved visualisation of project data and sharing of project information.

When it comes to the company's and project teams' benefits, it helps establish better communication and collaboration. When it comes to the project itself, ScheduleReader™ enables better budget and time management, facilitates monitoring and improves control, contributing to on-schedule project delivery.

ScheduleReader Standard and PRO

There are two versions of the ScheduleReader software that are developed to answer different company and project needs. ScheduleReader Standard is designed to modernize the work and provide benefits for all project teams and stakeholders who are currently viewing the plans in PDF or XLS.

On the other hand, for experienced project professionals who need to analyze schedules, create reports, and better visualize and present summaries of complex project data, ScheduleReader PRO was developed: a version consisting of ScheduleReader Standard plus additional features for report generation and KPI analysis that allow you to quickly generate rich visual reports with a single click of a button.

What can you do with this software?

There are many key features that help position ScheduleReader™ as the optimal solution for viewing project plans created in Oracle® Primavera P6. Some of them are the following:

  • Activity View

Allows users to have a clear overview and a detailed graphical representation of all open project activities. With this customizable window, users get an organized view of the project's specifics; they can create groups, filters and sorts with multiple codes and UDFs, save the views as a layout, and even customize the bars in the Gantt chart.

  • Trace Logic View

Combined with the Activity View, this feature can be used for in-depth analysis of project schedule activities. For example, when an activity is selected in the Activity View, the user can view its predecessor and successor activities in the Trace Logic View. For users who need to compare up to four baselines for a particular project schedule and get a graphical display of the project's specifics, ScheduleReader™ provides the Baselines View feature.

  • Progress Update

A very important feature which makes it possible to give constructive feedback, as well as to propose updates for % Complete, Actual Start, Actual Finish, Activity Status and Activity Codes, which can be accepted or rejected by superiors. This feature is perfect for field work, as it gives you the opportunity to propose activity updates without interfering with the original project schedule file.

  • Resource Usage Profile View

By setting the timescale for displaying data values, users can view the quantity information for resources or roles, or analyze their costs.

  • S-Curves

Helps with the graphical presentation of cumulative costs and units. With this feature, users can see how planned and actual quantities are plotted.

  • Bar Customization

Enables adding new bar types, removing unnecessary ones or making modifications. A user can create a view that is specific to their working team and share it with team members.

  • Graphical Reports Generation

Create default and custom reports, analyze project KPIs and schedule quality, measure the project progress and communicate information more effectively with all project stakeholders.

Note: this is a guest post by ScheduleReader.

Float Erosion report between two XER

I just added a new report to the XER reader to show the float erosion between two XER files. I am using the "Harbour Pointe Assisted Living Center" project from the Primavera P6 default installation as an example.

1- Download the pbix file here

2- Save the two XER files in the folder C:\XER

3- Refresh PowerBI Desktop

4- Review the report, and export to Excel for sharing/further analysis.

5- If you have the PowerBI service in your company, then publish it there so other users can interact with the reports.

Let me know if you want to see other reports.