Vacation Intelligence! Solving Holiday Challenges with Power BI: Part 1

Not My Usual Blog Post

Despite a career in Business Intelligence approaching twenty years, I had never used data in any meaningful way in my own life until now, which is why I feel the need to blog about it!

I was planning a trip to Tokyo and wanted to identify the best locations in which to book hotels based on local attractions, with the intention of staying at several places over a fortnight.

However, Tokyo is huge and the volume of entertainment is enormous, so it became obvious that some sort of map was needed, and the generic tourist maps available online are not personalised to my interests.

Once developed, and the hotels booked, I want to add information about places/activities of interest and fine-tune the holiday.  It is too far to fly to miss anything!

Power BI is the reporting tool chosen for this task due to its easy mapping capabilities and because I happen to be doing some work with it currently.

The Data

The data is limited to the following fields:

  • Name
  • Type
  • Favourite
  • Latitude
  • Longitude
  • Opening Hours

…which is all I need at this point to identify ideal hotel locations, especially as I am manually inputting all this data.

The data itself is very personal to me and is unlikely to make a quality list for anyone else.

(A good example of this is how Denny’s, McDonald’s and KFC are not in the data, as I’m not flying that far east to eat Western fast food! Yet I have Starbucks listed, because I bought a Starbucks mug in Kyoto many years ago and would love one from Tokyo.)

The locations which are in the data set are not all “must sees”, in fact, most are not. The data set is a list of things that may be worth considering with the ‘Favourite’ field used to identify things of particular interest.

The ‘Type’ field was quite challenging, due to masses of entertainment in Tokyo being too unique to fit any particular type!  I decided to use ‘Japan’ as an ‘Other’ category to contain things like the cosplay Mario karts or the Ninja Cafe!

All in all, I have over two hundred rows of data from which to work.

Map Steps in Power BI

I have not done anything remotely clever with Power BI mapping. I’ve just used the standard functionality, which is enough for the task at hand.

Here are the key points to creating the map aspects of the dashboard:

  • Configure Latitude and Longitude columns.
  • Insert a Map.
  • Drag and drop the co-ordinates to their respective fields.
  • Add a Slicer to filter, and a Table to show the details of a specific map item, to finish off the dashboard.


The Result 

At this stage, the report is just a map with a table showing the name and opening hours. Currently this is all I need.

The full view map above shows centralised clusters of attractions and a few outliers which will need evaluating as to whether they are worth the effort to see.  (I expect the Ghibli Museum to justify any amount of travel!)

The Dashboard 

The following screen prints from the report show the two main clustered areas that are the prime candidates for accommodation, and an outlier that will require a dedicated trip.

Map 1: Cluster of Tourist Spots 


Map 2: Another Cluster of Tourist Spots 


Map 3: Outlier: Ghibli Museum 


With these clusters of attractions it was a breeze to identify the ideal hotel locations and book accommodation accordingly.

A few other insights emerged, such as many of the novelty cafes being clustered together, making it unlikely I will visit more than a few of them, especially when there is an endless list of ‘proper’ restaurants to be visited!

Further Work…

With months to go before the trip, and the hotels intelligently booked, it is now a case of enriching the existing data set over time.

I need to identify which activities require booking in advance, and some will doubtless not be of interest once I investigate further.  For example, the aforementioned cosplay Mario karts looked fun at first glance, but after reading how they annoy locals, I decided against them.

A Cry for Help!!! 

Currently my knowledge of ‘cool stuff’ in Tokyo is (very) limited by what I can find on Google.  With that in mind, anyone out there with any suggested activities and/or eateries in Tokyo, please leave a comment on this blog post!

Posted in Power BI

Snapshot Reporting: The Lingering Threat to GDPR Compliance


The implementation of the General Data Protection Regulation (GDPR) has caused the business community to reconsider how it collects, holds and distributes personal data. The GDPR will impact every business operating in the EU, and with the effective date of May 25, 2018 fast approaching, cyber security professionals are working tirelessly to amend their practices in order to comply with the new legislation.

The GDPR provides a number of rights for individuals whose data you may be handling; these include the right to be informed; the right of access; the right to rectification; the right to restrict processing; the right to data portability; the right to object; and rights in relation to automated decision making and profiling.

On top of this the right to erasure means that the data subject has the right to request erasure of all personal data related to them on any one of a number of grounds.

With all the new compliance methods being introduced, one thing seems to slip through the cracks – the fact that it is not uncommon for data to be moved around an organisation outside of official controls, even when established data channels are available.

This paper will look at the risk this rogue data poses to GDPR compliance, how it can be brought into alignment with the GDPR, and the real, day-to-day business benefits that can make this alignment work cost-effective and justifiable regardless of the GDPR.

What is Snapshot Reporting? 

The term ‘snapshot’ is often used in reporting to describe a data extract for a specific moment in time. This does not automatically mean the snapshot is a GDPR liability… but it doesn’t mean it isn’t, either!

Why Snapshot Reporting Happens 

In many cases, changes to the SQL script used to export the data is all that is required to turn a snapshot into a repeatable report.  And once that logic is in place, the extract can be refreshed when required for the full data set, rather than accumulating a collection of historic extracts to build the same set.
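As a minimal sketch of that change (using an invented incident table in SQLite; the table, columns and figures are purely illustrative), the difference is simply parameterising the extract by reporting period instead of pulling "everything as of now":

```python
import sqlite3

# Hypothetical source data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incident (id INTEGER, opened_at TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO incident VALUES (?, ?, ?)",
    [(1, "2018-01-10", "Resolved"), (2, "2018-02-05", "Open"), (3, "2018-02-20", "Open")],
)

# Snapshot style: "everything open right now". The result changes as the
# source data changes, so recipients end up hoarding historic extracts.
snapshot = conn.execute("SELECT id FROM incident WHERE status = 'Open'").fetchall()

# Repeatable style: parameterise the reporting period, so the same call for
# the same period returns the same rows whenever it is run.
def monthly_extract(conn, month):
    return conn.execute(
        "SELECT id FROM incident WHERE strftime('%Y-%m', opened_at) = ? ORDER BY id",
        (month,),
    ).fetchall()

print(monthly_extract(conn, "2018-02"))  # [(2,), (3,)]
```

Because the month is a parameter, re-running the extract for February always returns the same rows, so there is no need to keep historic copies.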


Common situations that can make this challenging include:

  • It’s not SQL!

Most off-the-shelf software is packaged with some form of built-in reporting.  This may take the form of a report collection that covers the more basic queries the vendor thinks are important, or a method of report configuration that allows bespoke reporting… but usually with only a subset of functionality for basic report development.

The functionality of these built-in reporting utilities varies widely from product to product.  Some do cater for the extended logic required for non-snapshot reporting, some do not.

  • Not Enough Time Stamps

The key to reusable extract logic is being able to consistently identify when something has happened/changed in the source data.  When timestamps do not exist for certain events it becomes impossible to know when or what has changed.
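For illustration, here is a rough Python sketch of why a change timestamp matters (the records and the updated_at column are invented): with it, an extract can reliably pick up only what changed since the last run; without it, there is nothing to filter on.

```python
# Hypothetical rows: without the "updated_at" timestamp there would be no way
# to tell which records changed since the previous extract.
rows = [
    {"id": 1, "status": "Resolved", "updated_at": "2018-03-01T09:00"},
    {"id": 2, "status": "Open",     "updated_at": "2018-03-05T14:30"},
    {"id": 3, "status": "Open",     "updated_at": "2018-03-07T08:15"},
]

def changed_since(rows, last_extract):
    """Return only the records touched after the previous extract ran.
    ISO-formatted timestamps compare correctly as strings."""
    return [r for r in rows if r["updated_at"] > last_extract]

print([r["id"] for r in changed_since(rows, "2018-03-04T00:00")])  # [2, 3]
```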

  • No Access to Specialist Reporting Software

Not having access to specialised reporting software with which to build dynamic reports is more common than many would suspect.  Sure, your organisation has an embedded reporting service but it takes forever going through the official channels and the data warehouse doesn’t have quite what you need anyway.

In this scenario, a snapshot direct from the source data looks like it will solve the problem nicely.  And it will, until the source data is updated and the extracts are out of alignment.

Ironically, this snapshot data invariably finds itself being sliced and diced in MS Excel, a product that is perfectly capable of direct data source access and applying the logical approach needed to avoid snapshot reporting.

  • In-house Expertise

Every instance of snapshot reporting I have encountered has been a ‘best endeavours’ solution created by well-meaning employees trying to fulfil a business need that would otherwise be impossible.  And it is not uncommon for key business activities to be dependent on said snapshots. (It needs pointing out however that for an experienced SQL user, this sort of change is a logic issue that is well within their skillset to resolve.)

If any variation of the above scenarios exists within your organisation, there is a good chance the accuracy of current reporting is compromised regardless of the GDPR.  Fortunately, none of them are necessarily showstoppers in the handling of snapshot reports, but they can change the focus of the problem from being purely logical to encompassing technical challenges, which we’ll look at in the next section.

The Risks

Non-repeatable snapshots may have a minimal GDPR risk if they are deleted after use, but are still a challenge to Data Governance and auditing.

It is worth considering the risk snapshot reporting brings to the organisation, before we look at the GDPR implications:

  • Manually Intensive

Snapshots do not always require manual activity to derive value, but it is usually the case, as work normally done within a data warehouse still needs to be applied.  This can range from summary operations to merging multiple snapshots together for historic trends.

  • Single Point of Failure

It is bad enough when an employee with extensive, specialist knowledge leaves the organisation, but when their private stash of snapshot extracts vanishes with them the risk is exacerbated.  And in the likely event that the snapshot extracts have had undocumented manual transformations applied, that knowledge is lost too.

  • Outside of Data Governance

If the logic used to create a data extract produces different results today than it did last Thursday, the extract recipient is likely to collect historic snapshots and build an unofficial data repository that is unknown to the Data Protection Officer (DPO).

The wider organisation and its associated data controls cannot know what snapshots have been collected for later use or what additional transformations have been applied.  This situation creates a risk for audit trails, data lineage and ‘one version of the truth’ as well as GDPR compliance.

The GDPR Risk

The illustration below shows how personal data can be extracted out of Data Governance and GDPR compliance through a snapshot, only to resurface months later.


What are the GDPR Implications?

Not honouring a request to be forgotten, or using personal data for anything other than its permissible purpose, opens the organisation to GDPR risk.

There are two levels of financial penalties related to GDPR:

  • Up to €10 million or 2% of the organisation’s global annual turnover from the previous financial year (whichever is higher).
  • Up to €20 million or 4% of the organisation’s global annual turnover from the previous financial year (whichever is higher).

DPO Responsibilities

The expectation under the GDPR is that the Data Protection Officer (DPO) will have the organisation’s data repositories documented and know exactly where to locate any instances of personal information.

With snapshot extracts being shared outside of any formal controls on distribution and content, the DPO will not have a view of these repositories, which can hold data that has been removed from the rest of the organisation.

The GDPR may exist because of deliberate exploitation of personal data and shoddy security practices, but this does not make responsible organisations immune to getting caught in the crossfire.

What is the Solution?

The rest of this section outlines two possible approaches to handling the risk of snapshot reporting; their suitability will depend on current reporting practices and available resources.

When a Snapshot is not a Snapshot (Logical Approaches)

Consider a scenario where the only way to get data for the previous month is to run a snapshot extract on the first day of the current month: the recipient will collect those extracts in order to build quarterly and yearly summaries.

Most high GDPR risk snapshots are the result of this approach to report building and they are easily addressed once identified.

In this specific example, the recipient requires two changes to their current practices:

  1. The option to extract the data for any full month at any time and receive the same result.
  2. The option to extract the data for any previous quarter (or year) at any time and receive the same result, as well as aligning to the results of the aggregated monthly extracts for the same period.

These two changes and a policy of any reports being refreshed with current data before distribution will go a long way to minimise the GDPR risk without putting all data extracts into existing Data Governance.
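The alignment in point 2 can be sketched as a quick check (the ticket counts are invented for illustration): a quarterly figure should always equal the sum of its three monthly figures, however and whenever it is extracted.

```python
# Hypothetical ticket counts keyed by (year, month).
opened = {(2018, 1): 40, (2018, 2): 35, (2018, 3): 50}

def monthly_total(data, year, month):
    return data.get((year, month), 0)

def quarterly_total(data, year, quarter):
    # Quarter 1 covers months 1-3, quarter 2 covers 4-6, and so on.
    months = range(3 * quarter - 2, 3 * quarter + 1)
    return sum(data.get((year, m), 0) for m in months)

# The repeatability requirement: the quarterly extract must align with the
# aggregated monthly extracts for the same period.
assert quarterly_total(opened, 2018, 1) == sum(
    monthly_total(opened, 2018, m) for m in (1, 2, 3)
)
print(quarterly_total(opened, 2018, 1))  # 125
```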

Extended Enterprise Data Governance

The ideal solution for handling snapshot reporting is to shut it down completely and provision all data through a governed service.

This is likely to be expensive, but still cheaper than the possible GDPR fines!

Unfortunately, it is also likely to be time consuming, so applying the logical approach outlined above as an immediate safeguard is highly recommended.

The specifics on how this can be implemented will vary dramatically from organisation to organisation and is beyond the scope of this paper.

What to Do Next

The following bullet points outline the investigatory activities that should be undertaken if there is a suspicion of snapshot reporting within an organisation.  The amount of effort required depends greatly on what data governance is already in place and the volume of reporting in use.

  • Review Known Reports/Extracts

The people using snapshot extracts within an organisation are doing so to meet a business need.  If that business need can be fulfilled by a more stable, less manually intensive activity it should be welcomed.

  • Review Unknown Reports/Extracts

Finding snapshot extracts that are currently in use but not known to the organisation is its own challenge.  The ease of auditing what data is being extracted, and how, will vary depending on the technical approach used: it may be as simple as refreshing a canned audit report, or as involved as manually surveying activities.

  • Fix

Fixing may mean bringing a snapshot under strict Data Governance through an agreed method, such as provisioning it through a data warehouse, or simply improving the snapshot extract logic.  Either way, the fix is a success if there is no longer a need to store data extracts outside of Data Governance.

  • Remove and/or Refresh

Once troublesome snapshots are identified and new data provisions are in place, any existing snapshot extracts should be deleted, with the user of those extracts confident that the data will be available when they need it (i.e. the user has a process they can invoke to get the data they need from the governed data service).

  • Amend Existing Data Governance Processes

With all the above done, it is important to improve or introduce official processes that stop snapshot reporting from reappearing over time.

Regular audits to ensure Data Governance is maintained should be considered good practice in general, and are essential for continued data health.

The variation between organisations and technologies makes detailed technical or procedural recommendations impossible.  So…

Get in Touch!

One of the biggest challenges in addressing snapshot reporting is having available resource with the required skillset to carry out the work: a mix of analysis and in-depth knowledge of data handling (including SQL).

In-house talent may be lacking or, more likely, too busy with their day job for extra work, and hiring external expertise for a two-week piece of work can be difficult and expensive.

Get in touch to see how I can help out, or just to chat around the issues raised in this paper:

About the Author

I have two decades in the IT industry, consulting for multiple companies and public sector organisations, always with a focus on data in one way or another.  I’ve done everything from greenfield data warehouse implementation to statistical analytics, and everything in between.

I am always more than happy to undertake any work that leads to the demise of snapshot reporting!

Posted in Business Intelligence

How to Chart Open Incidents in Power BI for ServiceNow or Any ITSM Using DAX

I have written at length about the logical approach required to accurately measure how many tickets (Incidents or otherwise) are open during a reporting period without resorting to hideous ‘snap-shot’ reporting.

My other writing has always been SQL-centric, but I recently faced this challenge with Power BI: which required a very different approach, so I thought I’d share!

The Issue

The data associated with an Incident contains Open Date and Resolved Date fields (I’m ignoring Closed Dates, but the logic is the same). If these dates span two or more months, there is no field to capture those points in time from which to chart.

Reporting on what isn’t there is always tricky!

The first choice for this sort of calculation is to push it to the data warehouse to calculate (which is where my earlier writing will help!) but Power BI is everywhere and often used directly from a relational data source or an extracted data set.  In many cases, this extra data will have to be generated within Power BI.

The Solution

Using DAX (Power BI’s built-in scripting language), we are going to create a new table derived from a cut-down mock-up of an ITIL/Incident extract that could easily be created using ServiceNow’s built-in reporting capabilities, or those of most other ITSM tools out there.

The bad news is that Power BI DAX is limited in its methods of creating new data, as opposed to summarising or reformatting existing data.  To be fair, this should be expected for reporting software!

To keep the focus on the matter at hand, the data set has been kept to the absolute minimum.  In the real world, the data set would be far more comprehensive.

Step 1

First, we need to know how many months (as most ITIL reporting is monthly) an Incident is open before we create the new table.  With the DATEDIFF function, this is straightforward until we get to Incidents that are currently open and have no Resolved Date value.

We just compare the Open Date to the Resolved Date using monthly intervals to create a Calculated Field.

For the context of showing a historic trend of open Incidents, an Incident being resolved after the reporting period is the same as it being open during that timeframe.  With this in mind, we shall treat a blank ResolvedDate as the current date for the sake of this calculation, like so:

MonthsCount =
    IF(
        Sheet1[ResolvedDate] = BLANK(),
        DATEDIFF(Sheet1[OpenDate], TODAY(), MONTH),
        DATEDIFF(Sheet1[OpenDate], Sheet1[ResolvedDate], MONTH)
    )

A couple of points to note:

  • An Incident that was opened and resolved in the same month will return zero, which is just what we want.
  • Treating an unresolved Incident as resolved on the current date shows the Incident as open from its inception until the end of the reporting period (i.e. the last twelve full months).
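For anyone more comfortable outside DAX, the same MonthsCount logic can be sketched in Python (dates invented for illustration; like DATEDIFF with a MONTH interval, the calculation counts month boundaries crossed):

```python
from datetime import date

def months_count(open_date, resolved_date, today):
    """Whole month boundaries crossed, mirroring DATEDIFF(..., MONTH).
    A blank (None) resolved date is treated as the current date."""
    end = resolved_date if resolved_date is not None else today
    return (end.year - open_date.year) * 12 + (end.month - open_date.month)

# Opened and resolved in the same month -> 0.
print(months_count(date(2018, 3, 2), date(2018, 3, 30), date(2018, 6, 1)))  # 0
# Still open: counted up to "today".
print(months_count(date(2018, 1, 15), None, date(2018, 4, 1)))  # 3
```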

Step 2

With this newly created MonthsCount field, we can generate the Calculated Table upon which to populate charts.

To achieve this, we will use the following functions:

  • GENERATE
  • GENERATESERIES
  • SELECTCOLUMNS
  • FILTER
  • EDATE
Altogether, it looks like this:

OpenTicketDates =
    GENERATE(
        Sheet1,
        FILTER(
            SELECTCOLUMNS(
                GENERATESERIES(0, 120),    // any upper bound exceeding the largest MonthsCount will do
                "Value", [Value],
                "ChartMonth", EDATE(Sheet1[OpenDate], [Value])
            ),
            [Value] <= Sheet1[MonthsCount]
        )
    )


The above code creates a new table and inserts as many rows as identified by the MonthsCount field, while using EDATE to increment the month for each row.


  • One gotcha: DATEADD will not return a result that isn’t in the current date range… it is far easier to use EDATE!

Below is the resulting table, which now contains a row for each month each Incident is open:

Fig. 1: The dynamically generated trend table.
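To make the row-generation step concrete, here is a rough Python equivalent of what the Calculated Table does (the incident and the helper function are illustrative): each Incident is expanded into one row per month it is open, with the month incremented much as EDATE does.

```python
from datetime import date

def add_months(d, n):
    """Minimal EDATE-style helper: shift a date by n months. The day is
    clamped to 1 for simplicity, as charting only needs the month."""
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, 1)

def explode(incidents):
    """One output row per month each Incident is open, like the GENERATE step:
    Value runs from 0 up to and including the months_count."""
    return [
        (inc["id"], v, add_months(inc["open_date"], v))
        for inc in incidents
        for v in range(inc["months_count"] + 1)
    ]

rows = explode([{"id": 1, "open_date": date(2018, 1, 15), "months_count": 2}])
print(rows)
# [(1, 0, date(2018, 1, 1)), (1, 1, date(2018, 2, 1)), (1, 2, date(2018, 3, 1))]
```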

Step 3

Use the Calculated Table to populate the required chart(s).

Below is my end result, presented to show the dynamic trend data, rather than win any design awards:

Fig. 2: The trend data showing the results for the full months in the current year.

Fig. 3: The same charts filtered for the full year.

And that is it!


This solution is simple enough, with only a few gotchas along the way!  The most difficult thing is the switch in thinking away from an SQL mindset and into the wonderful world of Power BI’s Excel-esque approach to data management.  Personally, I love it!  But I’ve been using Excel as a supporting BI tool my entire career, and most of Power BI/DAX makes perfect sense to me… and then there is DATEADD!

Of course, the real beauty of this solution is that it is built into the report and works from the original data, thus removing any manual interaction, reliance on a data warehouse or snap-shot reporting.










Posted in Business Intelligence, ITIL, Power BI, ServiceNow

Making the Most of That Last Interview Question (Guest Post)

So much of the interview advice bandied about on the internet is about damage limitation. Advice like “Don’t be late” or “Dress smartly” is great if you want to avoid sabotaging your interview, but if you want to get ahead of the crowd, advice gets thin on the ground.

This article is my humble attempt to share a true value-add interview tip, so with no more fluff or fuss…

Just the Tip

The interview is virtually over and it probably feels like the job is won or lost already, with just that pointless final question left:

“Do you have any questions for us?”

The tip is to answer this with:

“Is there anything I can do between now and starting the job to make me even better suited?”

That line puts the interviewer in a position where they have to verbally say you are perfect for the job (which is no small thing), or name some specific shortcoming and give you another chance at convincing them of your abilities or experience.

Of course, if you are genuinely weak in the named area this is not going to help much. But it is often a case of accidental omission on the part of the interviewee: it is all too easy not to give enough coverage to a particular subject the interviewer is interested in, especially if the job requires numerous skills or a range of experience.

My Personal Experience

I would never advise anything unless I had used it successfully myself.

And this tip is gold! Since being told about it by a fellow freelancer a few years ago, I have used this approach ten to fifteen times (as a freelancer I tend to interview every six to nine months) and it has been well received every time.

The usual response is a nod and wry smile from the interviewer(s) as they acknowledge the brilliance of the question. I have even had interviewers compliment me on it and admit they will use it themselves at their next interview.

Does it guarantee you’ll get the job?

No, but nothing will.

There are always factors outside our control. But personally, I can directly credit it with securing at least three jobs where concerns were raised around my abilities or experience and I was able to address those fears.

Obviously, making this ‘tip’ public potentially weakens it as a strategy for me, so I’ll probably have to return to asking if it is okay to use the disabled parking if I’m hungover!

About the Author

Dean Paumme is an active freelancer in the Business/IT arena and author of “The Secret Route to Riches: How to make millions with your current career”, a no nonsense, tip laden guide to making the most of your skillset.

Get his book here.

The Secret Route to Riches by Dean Paumme


Posted in Career, Uncategorized

Helpful ServiceNow Reporting Articles

I thought putting together a contents list of the ServiceNow reporting blogs I’ve written over the last couple of years for easy reference would be helpful.

So, here it is!
ServiceNow Reporting and Why Direct ODBC Reporting is the Only option

A few Random Tips for ServiceNow Reporting

Reporting Software and ServiceNow

How to Overcome the Top 5 Challenges for ServiceNow Reporting

Example ServiceNow Report

This post is more around generic ITIL reporting, but may still be interesting:
The Top Three ITIL Reporting Pitfalls (pt 1)

…and this last one has nothing to do with ServiceNow reporting, but I’m including it anyway as someone typically asks:
How to Become an IT Consultant for Fame and Profit

Posted in ITIL, ServiceNow, Uncategorized

How to Become an IT Consultant for Fame and Profit

I seem to get more emails asking how to become a self-employed consultant than about anything I actually blog about.

I’ve not written that much recently so thought I would put up a quick post with some advice around this subject.

So, my top piece of advice (and only piece of advice!) is to buy this book:

The Secret Route to Riches by Dean Paumme

(Please note, that is not a paid link, just a graphic I’m not getting paid for!)

The above book costs virtually nothing as a kindle download and covers all the things you need to know to get started as a consultant, whether this is as a Business Intelligence expert or any other IT discipline.

I received this book as a gift, about a decade too late for me, but it would have been a life saver (OK, money saver!) back when I was starting out. It’s packed with practical advice on everything from finding contracts to the correct mindset to make the most of them.

Posted in Business Intelligence, CodeProject

How to Overcome the Top 5 Challenges for ServiceNow Reporting

The Stuff I Don’t Need to Tell You

I am pretty sure that if you’re reading this you already know why good reporting is crucial for successful ITIL and why control beyond the Report Module in ServiceNow is a necessity.

If not, look here.

And as if that isn’t grim enough, nearly two thirds of all Reporting/Business Intelligence projects fail or, at the least, don’t succeed fully.

Your Top 5 Enemies

I have worked on numerous Business Intelligence projects of all shapes and sizes and the same few issues arise across all types of reporting and all business sectors.

1.  Learning the Database

Which data lives in which tables and the different permutations on how they can all link together depending on the business need.

Figuring out all this from scratch is difficult. So difficult I wrote this post to try and help you out.

2.  Organisational Thirst for Information

Businesses need information, and a lengthy BI project can take months to complete.  This can lead to end-user attempts at report development in Excel and Access… which can be great, but often isn’t!

3.  Budget

A full BI Project is expensive and requires a range of skilled individuals to bring it to fruition before you even look at software licenses and hardware.

4.  Skills Shortages

Good BI people are constantly busy.

Finding the right person with the right software knowledge and business sector experience can be a huge challenge in its own right, and ITIL is one of the more complex subjects to report on.

5.  Specialist Software

Reporting software can get very expensive, but even when it doesn’t, it can take a long time to buy through company purchasing procedures making the wait for information last even longer.

Help Me Help You

If you are worried about any or all of the above… you should be!  And these problems are generic; ITIL and ServiceNow come with their own challenges on top.

Let me help.

I know the data(1) having developed ServiceNow reports for a global market since 2010 (and ITIL reporting since 2004) and have a starter set of 20+ top quality reports that can be used almost instantly(2) for a low price(3) developed by myself(4) in Crystal Reports(5ish!).

An example of the clear and concise ServiceNow reports available.

This pack contains a library of ODBC-driven, clear and meaningful ServiceNow reports which span Incident, Problem and Change management, covering SLA/OLA/KPI measurement as well as volume and frequency for a holistic view of the service management slice of your ITIL implementation.

If required, these reports can be branded with your company colour(s) and logo at no extra charge.

Another example of the clear and concise ServiceNow reports available.

Included in the price is one day of offsite consultancy to aid any reconfiguration needed to match your company’s unique setup.

On-going Support

Crystal Reports is a common skillset, and with this starter pack as the foundation, most Crystal Reports developers should be able to pick it up and expand the suite.


At the least, simply use the SQL and table structures from Crystal Reports as a massive head start in both analysis and development in whatever reporting software you prefer.


Do both! Use the Crystal Reports to satisfy the need for data almost instantly while a more lengthy solution is developed.

I am available for future support, amendments or additional development in Crystal Reports, QlikView, SSRS, BO/WebI and even Excel/Access (which are comfortably capable of meeting BI requirements in the right hands).

I provide offsite support starting at half a day.


Another example of the clear and concise ServiceNow reports available.

Annual Fees Can Kiss My SAAS

So, the actual cost: FREE with only one day of reasonably priced offsite consultancy!

This is a one off fee and support can be hired as and when you need it.  This may seem an odd approach, but I’ve chosen this route for three reasons:

1.  One day (eight hours) should be enough support to get you up and running with the reports.

2.  It makes it very easy to see what a great deal it is, as it would probably cost 2-5 days of consultancy per report.

3.  It is usually easier to get sign-off for a consultant than to purchase a product.

While I would like to sell as many of these packs as possible, I only have so much bandwidth for support and further development and want to ensure I provide a top class service.

To this end, only a limited number of these packs will be released at any one time.

Act Now!

If you want to discuss this offering in more depth, such as the specific reports that make up this library (or anything else around ServiceNow reporting for that matter!), please email me, I love this stuff and am happy to chat about it:

PS: Check out my various ServiceNow blog posts which will be of great help to any organisation looking at professional reporting.

PPS: Of course, I am also available for bespoke reporting in a wide variety of software (namely Crystal Reports, QlikView, SSRS, BO/WebI, even Excel/Access).

Posted in Business Intelligence, ITIL, ServiceNow

4 Tips on Creating an Effective BI Team (Guest Post!)

Thanks to software vendors investing heavily in making their applications easier-to-use, accessible via the Web and more affordable, Business intelligence (BI) software is now a viable option for employees in sales, marketing, operations, and other departments to utilize.
But with the proliferation of these tools, business leaders will have to rethink how they address business intelligence governance, or the roles, responsibilities and guidelines it provides its users to ensure BI tools are utilized correctly and appropriately.
When it comes to setting up these new BI teams for success, leaders should consider these four steps.
1. Establish Structure, Guidance and Boundaries
The first step is to establish a hierarchy of roles and responsibilities. For example, larger organizations with a strict chain-of-command would be well served putting together a “BI Bible”: written guidelines that define who should initiate a project or report, and when. Smaller organizations can drive the direction of their BI users with regular meetings to discuss current and future analysis.
2. Appoint Internal and External Domain Experts
While users in sales and marketing positions may have more domain knowledge, they often lack the technical expertise to use BI tools to their full potential. For this reason, it is important to identify technical experts to assist users with selecting the right regression to run or the most appropriate data to pull.
3. Allow Business Users to Manage Data
Organizations should ensure that they are correctly balancing resources to accurately capture data. It is hard for IT to know where to push for accuracy and where to be more lenient when it comes to data collection and cleansing standards. That is why it is important to bring business users into the IT side of data collection to help make these decisions.
4. Use Successful BI Processes to Solve Bigger Problems
Once you establish a successful BI process, apply these methods to find (and answer) new questions within your organization. Businesses that provide guidance and leadership, and set up BI analysts for success, have the potential to create a valuable strategic group within the organization.
Has your organization dealt with these issues? How did you address them and create effective business intelligence teams? Please leave a comment with any additional tips or suggestions below! In addition, you can check out more on this topic over on the Software Advice BI Blog at: 4 Steps to Creating Effective BI Teams.
Posted in Business Intelligence | Leave a comment

Business Intelligence Book Review (part 1)

For a bit of variety, I am going to start posting the occasional blog based on the most common questions I receive from readers.  This first blog in the (potential) series is in response to a question I get repeatedly via this blog, while training or while consulting:

“Can you recommend a good book for….”

Sure, we can all get by and survive by looking up things on the internet as we struggle on specific issues, but to thoroughly learn a subject and become an expert there really is no substitute for a good book.

In particular, I get regular emails asking me which Crystal Reports books I recommend to complement my own.  It seems many people buy my book after struggling with formula writing, then start wondering what else there is to know and which books to read to find out more.

So in this blog I have decided to review some of the better books out there, Crystal Reports and others.

Please note: the cover illustrations are links, should you want to update your own library.

CRCP Crystal Reports Certified Professional All-in-One by Annette Harper

This book is based on Crystal Reports 10.  We are numerous versions past that.  The CRCP is now called BOCP and exam guides are notoriously bad for anything other than studying for that specific exam.

However, this book is actually my favourite Crystal Reports book.

This book covers everything to an intermediate level and is a great reference for those things you don’t use often (like Mapping).  If you want a reference guide beyond the built-in help files, this book cannot be beaten.

It is for version 10, but the differences between releases are so slight and this book is so good that it is still the best option.

Ignore the terrible and misleading title and invest!

The Complete Reference Guides (Crystal Reports) By George Peck

The Complete Reference Guides exist for a number of IT related subjects and vary in quality.  George’s Crystal Reports books are some of the better ones.

Personally, overall, I prefer Annette’s book, but the Complete References really do cover all the functionality in great depth and are easy to use as a reference book after the initial read.

An added bonus is that they exist in numerous incarnations to match the Crystal Reports versions.

I can’t imagine anyone ever buying more than one of them, but if you want to learn the basic functionality and building blocks in detail, I thoroughly recommend getting the Complete Reference that matches your release of Crystal Reports.


Information Dashboard Design (The Effective Visual Communication of Data) by Stephen Few

This book focuses on the presentation and formatting side of reporting.  I personally would not have thought this subject could justify a full book, but Stephen Few proved my misconception wrong.

Firstly, don’t be put off by the low page count.  It is only 200 pages long, but those pages are larger than standard, glossy and in full colour.  And while it says “Dashboard” in the title of the book, the methods and tips presented are of great help to all report formatting.

Mainly, the book focuses on what data summaries to include within a dashboard and the best methods of displaying data for maximum clarity.

Chapter 3 shows an illustrated list of the worst examples of report formatting.   There is nothing quite like seeing one of your own reports on that list to spur you on to make changes.

The problem with all the new reporting software is that it has so many bells and whistles that it is an effort not to include them.  Many times, these serve more to detract from the presentation than complement the information you are trying to display.

The other issue is that many reports which look beautiful at first glance are actually terrible at displaying data clearly.  This can be due to either the content or the actual formatting, and Stephen carefully deconstructs all the components to provide guidance on what to do and what not to do.

This book is not essential for a Business Intelligence expert, but for those who want to ensure they are producing the most useful reports they can, this book is a blessing.

Xcelsius 2008 Dashboard Best Practices by Loren Abdulezer

When it comes to books for software, there is one core thing they should all provide: more information than the help files in the software itself.

This book does not really measure up in this regard.  This is partly due to Xcelsius having a pretty good tutorial built in which takes some beating, but also because it glosses over any of the Excel side of things.

As Xcelsius is so heavily reliant on the Excel data being in a certain format and style, as well as on the general approach of charts being based on Excel, it is of utmost importance that these aspects be covered in any book on Xcelsius. They are not.

‘Xcelsius 2008 Dashboard Best Practices’ is mainly filled with a lot of generic talk and a distinct lack of practical examples or actual “how to” explanations.

It has some value for those who use Xcelsius exclusively and is more a “things to consider” guide than a specific ‘how-to’.

All that said, for those who want to know all there is to know about Xcelsius, this book is still worth a read as it may fill in a few gaps and the author obviously knows their stuff.

Lastly…Crystal Reports Formulas Explained.

My book!  Obviously it is a bit unfair to review my own book, but it’s my blog and it seems stupid not to at least put a link!

The cheapest way to buy this book is directly from the publisher, just follow the link attached to the illustration below and use this code: JB665Y to get the Crystal Reports Formulas Explained for the bargain price of £19.99!

Posted in CodeProject, Crystal Reports | 2 Comments

Qlikview 2: Measuring Against KPIs

In Part 1 we set up some basic measurements in Qlikview to summarize the Incidents for the previous month.

The method used to identify how many were opened, closed, etc., was to use the Qlikview scripting language in a rudimentary fashion, but one that is easy to read and applicable to most reporting software.

Now we are going to compare our results to the KPIs stored on the Database.  This will also be done with the standard Scripting language.

There is a cool feature in Qlikview called Set Analysis, which can be used in this scenario.  But for what we are doing here there is little benefit.

Set Analysis allows another method of identifying a particular group (or Set) of data to summarize.  In the next tutorial, we will look at reporting historical service data on a monthly basis which makes full use of Set Analysis.

So, let’s set up our first criteria for Incidents that were responded to in time.

Measuring Elapsed Time

Now things get a little tricky!

The Interval() function, which is used to find the difference between two date-times, returns a value that is stored as a date-time.

I personally have found converting this result into an integer to be very problematic.

But the SLAs/KPIs are usually stored in the database as the number of minutes, which is also the case in our sample database.

So, rather than change the Interval() result into a number, we shall represent the SLA as a numeric version of a Date-time Field.   This is not as straightforward as with…any software you care to mention!

When a Date-time is turned into a number, the whole number part counts days, while Hours, Minutes and Seconds are treated as a fraction of a day.

Therefore 0.5 is noon/12:00, ie: half a day.

And therefore 0.000694 (one 1440th of a day) is a minute.

From this, by multiplying the number of minutes in the SLA field on the database by 0.000694, we can get a comparable value for the report.  It is not neat, or pretty, but it works.
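As a sanity check, this day-fraction arithmetic can be sketched outside Qlikview.  This is illustrative Python, not Qlikview script, and the helper name is my own:

```python
# Qlikview stores date-times as numbers: the whole part counts days,
# so one minute is 1/1440 of a day (approximately 0.000694).
MINUTE_AS_DAY_FRACTION = 1 / (24 * 60)

def sla_minutes_to_day_fraction(sla_minutes):
    """Convert an SLA stored as minutes into day-unit numbers,
    comparable with a date-time difference."""
    return sla_minutes * MINUTE_AS_DAY_FRACTION

# A 720-minute (12 hour) target becomes half a day:
print(sla_minutes_to_day_fraction(720))  # 0.5
```

This is exactly the role the `* 0.000694` multiplication plays in the report expressions.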

And more importantly, I have not been able to find any other way of doing this!

Applying the Calculations

Now we know how to make sense of the values in the SLA Fields in the database, we can identify how many Incidents were Resolved in time and/or how many were Closed in time.

With these numbers, not only can we display the total of successful Resolved/Closed times, but we can easily calculate the success percentage for each.

We’ll look at the Resolved SLA in detail and expand our grid of summaries.

Resolved in SLA

Add a new Field to the Grid with the name: ‘Resolved in SLA’.

To determine whether an Incident has been Resolved within the SLA, we measure the elapsed time in Minutes between the Resolved Date and the Open Date (the first line of the code) then compare it to the Resolution Target Field which we modify into the same type of numeric representation (the last line of the code):


=sum(IF(InMonth(Resolved_Date,today(),-1),
if(interval(Resolved_Date-Open_Date,'mm')
<= Resolution_Target * 0.000694, 1,0),0))
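For readers less familiar with Qlikview syntax, the logic of this summary (count the Incidents whose elapsed time, held as day-unit numbers, is within the target) can be sketched in Python.  The dictionary keys are hypothetical stand-ins for the database fields:

```python
MINUTE = 1 / 1440  # one minute as a fraction of a day

def resolved_in_sla(incidents):
    """Count incidents whose open-to-resolve time met the SLA target.
    Dates are day-unit numbers; targets are minutes."""
    return sum(
        1 for i in incidents
        if (i["resolved"] - i["opened"]) <= i["target_minutes"] * MINUTE
    )

incidents = [
    {"opened": 0.0, "resolved": 0.10, "target_minutes": 240},  # 144 min: in SLA
    {"opened": 0.0, "resolved": 0.20, "target_minutes": 240},  # 288 min: breached
]
print(resolved_in_sla(incidents))  # 1
```

The real expression adds the InMonth() filter on top, restricting the count to last month's Incidents.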

Resolved in SLA (%)

Add a new Field to the Grid with the name: ‘Resolved in SLA (%)’.

To find the percentage of the Incidents resolved successfully within SLA, we need to repeat the code of the previous summary to find the total of successfully resolved Incidents (the first two lines of code).  Then, divide that result by the total number of resolved Incidents and multiply up into a percentage (the last line):

=num(((sum(IF(InMonth(Resolved_Date,today(),-1),if(interval(Resolved_Date-Open_Date,'mm') <= Resolution_Target * 0.000694, 1,0),0)))
/ (sum(IF(InMonth(Resolved_Date,today(),-1),1,0)))) * 100, 00.00)

The NUM function is used to format the percentage to two decimal places.

Note:  For simplicity’s sake (and because I created the dataset) I have not tested for any divide by zero errors.  In real life, this is very likely to occur, especially if the organisation does not have many high priority Incidents or you are splitting the results by certain teams.
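A minimal sketch of such a guard, in illustrative Python (the function name is my own, not from the tutorial): when nothing was resolved in the month, return 0 rather than dividing by zero.

```python
def sla_success_percent(resolved_in_sla, total_resolved):
    """Percentage of resolved incidents that met SLA, to two decimals.
    Guards against months with no resolved incidents."""
    if total_resolved == 0:
        return 0.0  # nothing resolved: report 0% instead of erroring
    return round(resolved_in_sla / total_resolved * 100, 2)

print(sla_success_percent(7, 9))  # 77.78
print(sla_success_percent(0, 0))  # 0.0
```

In Qlikview the equivalent guard would be an extra IF around the division, testing the denominator sum for zero first.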

Obviously, to create the same metrics for the Closed in SLA it is a simple case of repeating the above, but with different database fields.

Color Coding for Clarity

Currently, the summary grid is readable but it does not highlight when KPIs are being breached.

Color coding certain cells within the table based on their contents is a quick way to draw the audience’s attention to any trouble spots.

The standard approach to this style of color coding is to use a RAG (Red, Amber, Green) color scheme.

For simplicity, we shall just use Green or Red to indicate whether a KPI has been reached or not.

On a personal note: I am not a fan of implementing an Amber color band and find it often creates more issues than it solves.

The Priority table field ‘KPI Resolution’ contains the target percent for each priority.  We can use this field to control the background of a particular column.

There is a ‘Visual Cues’ tab in the table Properties which looks perfect for this job, but it does not work properly with database fields, only literal values which have to be manually entered and which apply to the whole column (rather than a dynamic measurement for a specific Priority like we need).

Instead, we shall use the ‘Background Color’ property which is accessible by expanding the ‘Resolved in SLA (%)’.

We can use the same code as for the ‘Resolved in SLA (%)‘ again to form most of the logic we need and wrap this up in an IF Statement in order to apply Green or Red color coding:

=if((num(((sum(IF(InMonth(Resolved_Date,now(),-1),if(interval(Resolved_Date-Open_Date,'mm') <= Resolution_Target * 0.000694, 1,0),0)))
/ (sum(IF(InMonth(Resolved_Date,now(),-1),1,0)))) * 100,00.00)) > KPI_Resolution,rgb(128,255,128),rgb(255,128,128))

The result is a color coded ‘Resolved in SLA (%)‘ column:

The exact same process we have just used for the Resolved column can be used for the Closed column. It is just a case of using the Resolved date instead of the Open date and the Closed date instead of the Resolved.

Once done, this can also be grouped together.  This is all the information we need, but looks a little ‘off’.

I particularly don’t like the total appearing at the top of the table. It just looks wrong to me and always has (Crystal Reports does the same thing).

This can be changed easily by checking the ‘Totals on Last Row’ on the Presentation tab for the table Properties.

To improve the table as a whole, go to the various tabs, click around and try out the different settings.

Beyond that, a few tweaks to alignment and font and it looks a whole lot better:


I demonstrated both variations of aggregation in this tutorial and, due to the simplicity of the dataset, both are as good as each other.

Anyone with a decent level of SQL experience will know that when more tables are added it is only a matter of time before parts of the returned data become duplicated.

The COUNT DISTINCT based on the Incident_ID removes all these worries by only counting each Incident_ID once.

There is a third option: using Set Analysis to pick out the required data. But in this situation, it suffers from the same issue as the SUM example and cannot detect repeated data.

Obviously, none of this is a concern when working with Excel spreadsheets, where the data tends not to be linked, but databases have additional requirements which need consideration.
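A tiny Python sketch (with made-up data) of why the distinct count is the safe choice once joins duplicate rows:

```python
# After joining Incidents to a child table (e.g. work notes), the same
# Incident_ID can appear on several rows. A plain row count over-counts;
# counting distinct IDs does not.
joined_rows = [
    {"incident_id": "INC001", "worknote": "first note"},
    {"incident_id": "INC001", "worknote": "second note"},  # duplicate incident
    {"incident_id": "INC002", "worknote": "only note"},
]

plain_count = len(joined_rows)                                     # counts rows
distinct_count = len({row["incident_id"] for row in joined_rows})  # counts incidents

print(plain_count, distinct_count)  # 3 2
```

The set comprehension plays the same role here as COUNT DISTINCT Incident_ID does in the Qlikview expression.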

Fitting It All In

Depending on the number of metrics associated with Incidents, this table can get a little too large and unclear.

Qlikview has an elegant solution to this problem.

Now, simply drag the ‘Resolve in SLA (%)’ over the ‘Resolve in SLA’, then drag the ‘Resolved’ Field onto either of those two.

Repeat the process for the Closed summaries and you will see this:

Click ok and the user can now select whether to see the count of Incidents resolved in SLA or the percent.

By repeating this simple maneuver and stacking the three Closed Fields together, we can turn this:

Into this:

This is a fantastic solution, as long as the target audience is aware they can cycle through the Columns: far more elegant than an endless grid of data.


That wraps up the tutorial on how to create this data grid, which is a mainstay of ITIL Service Management reporting.  This example has been based on Incident Management, but the techniques can be easily adapted to reporting on Problems, Changes, Service Requests or Availability.

I may revisit this tutorial and add a third installment on how to deal with measuring elapsed time only during Service Support Hours, rather than the 24 hour clock I have assumed here for brevity.

The natural next step for ITIL reporting is to add context to the one month which is the focus of the report.  This normally takes the form of showing the monthly history for a period of six to twelve months.

Several values are usually plotted on the same graph and this is a challenge in its own right as it goes against the usual charting/grouping logic.

This is where Set Analysis comes in particularly handy and will form the basis of the next tutorial.

Posted in CodeProject, ITIL, Qlikview | 2 Comments