Business Intelligence Book Review (part 1)

For a bit of variety, I am going to start posting the occasional blog based on the most common questions I receive from readers.  This first post in the (potential) series is in response to a question I am asked repeatedly through this blog, while training and while consulting:

“Can you recommend a good book for….”

Sure, we can all get by looking things up on the internet as we struggle with specific issues, but to thoroughly learn a subject and become an expert there really is no substitute for a good book.

In particular, I get regular emails asking me which Crystal Reports books I recommend to complement my own.  It seems many people buy my book after struggling with formula writing, then start wondering what else there is to know and which books to read to find out more.

So in this blog I have decided to review some of the better books out there, Crystal Reports and others.

Please note: the cover illustrations link directly to Amazon.com if you want to update your own library.

CRCP Crystal Reports Certified Professional All-in-One by Annette Harper

This book is based on Crystal Reports 10.  We are numerous versions past that.  The CRCP is now called BOCP and exam guides are notoriously bad for anything other than studying for that specific exam.

However, this book is actually my favourite Crystal Reports book.

This book covers everything to an intermediate level and is a great reference for those things you don’t use often (like Mapping).  If you want a reference guide beyond the built-in help files, this book cannot be beaten.

It is for version 10, but the differences between releases are so slight and this book is so good that it is still the best option.

Ignore the terrible and misleading title and invest!

The Complete Reference Guides (Crystal Reports) By George Peck

The Complete Reference Guides exist for a number of IT related subjects and vary in quality.  George’s Crystal Reports books are some of the better ones.

Personally, overall, I prefer Annette’s book, but the Complete References really do cover all the functionality in great depth and are easy to use as a reference book after the initial read.

An added bonus is that they exist in numerous incarnations to match the Crystal Reports versions.

I can’t imagine anyone ever buying more than one of them, but if you want to learn the basic functionality and building blocks in detail, I thoroughly recommend getting the Complete Reference that matches your release of Crystal Reports.


Information Dashboard Design (The Effective Visual Communication of Data) by Stephen Few

This book focuses on the presentation and formatting side of reporting.  I personally would not have thought this subject could justify a full book, but Stephen Few proves my misconception wrong.

Firstly, don’t be put off by the low page count.  It is only 200 pages long, but these pages are bigger than the standard and are all glossy and full colour.  And while it says “Dashboard” in the title of the book, the methods and tips presented are of great help to all report formatting.

Mainly, the book focuses on what data summaries to include within a dashboard and the best methods of displaying data for maximum clarity.

Chapter 3 shows an illustrated list of the worst examples of report formatting.   There is nothing quite like seeing one of your own reports on that list to spur you on to make changes.

The problem with all the new reporting software is that it has so many bells and whistles it is an effort not to include them.  Many times, these serve more to detract from the presentation than complement the information you are trying to display.

The other issue is that many reports which look beautiful at first glance are actually terrible at displaying data clearly.  This can be due to either the content or the actual formatting, and Stephen carefully deconstructs all the components to provide guidance on what to do and what not to do.

This book is not essential for a Business Intelligence expert, but for those who want to ensure they are producing the most useful reports they can, it is a blessing.

Xcelsius 2008 Dashboard Best Practices by Loren Abdulezer

When it comes to books for software, there is one core thing they should all provide: more information than the help files in the software itself.

This book does not really measure up in this regard.  This is partly due to Xcelsius having a pretty good tutorial built in which takes some beating, but also because it glosses over any of the Excel side of things.

As Xcelsius is so heavily reliant on the Excel data being in a certain format and style, and on its charts being driven from Excel in general, it is of the utmost importance that these aspects be covered in any book on Xcelsius.  They are not.

‘Xcelsius 2008 Dashboard Best Practices’ is mainly filled with a lot of generic talk and a distinct lack of practical examples or actual “how to” explanations.

It has some value for those who use Xcelsius exclusively and is more a “things to consider” guide than a specific ‘how-to’.

All that said, for those who want to know all there is to know about Xcelsius, this book is still worth a read as it may fill in a few gaps and the author obviously knows their stuff.

Lastly…Crystal Reports Formulas Explained.

My book!  Obviously it is a bit unfair to review my own book, but it’s my blog and it seems stupid not to at least put a link!

The cheapest way to buy this book is directly from the publisher, just follow the link attached to the illustration below and use this code: JB665Y to get the Crystal Reports Formulas Explained for the bargain price of £19.99!


Qlikview 2: Measuring Against KPIs

In Part 1 we set up some basic measurements in Qlikview to summarize the Incidents for the previous month.

The method used to identify how many were opened, closed and so on was to use the Qlikview scripting language in a rudimentary fashion, but one that is easy to read and applicable to most reporting software.

Now we are going to compare our results to the KPIs stored on the Database.  This will also be done with the standard Scripting language.

There is a cool feature in Qlikview called Set Analysis, which can be used in this scenario.  But for what we are doing here there is little benefit.

Set Analysis allows another method of identifying a particular group (or Set) of data to summarize.  In the next tutorial, we will look at reporting historical service data on a monthly basis which makes full use of Set Analysis.

So, let’s set up our first criteria for Incidents that were responded to in time.

Measuring Elapsed Time

Now things get a little tricky!

The Interval() function, which is used to find the difference between two date-times, returns a value that is stored as a date-time.

I personally have found converting this result into an integer to be very problematic.

But the SLAs/KPIs are usually stored in the database as the number of minutes, which is also the case in our sample database.

So, rather than change the Interval() result into a number, we shall represent the SLA as a numeric version of a Date-time Field.   This is not as straightforward as with…any software you care to mention!

When a Date-time is turned into a number, the whole-number part is counted as days, and Hours, Minutes and Seconds are treated as a fraction of a day.

Therefore 0.5 is noon/12:00, i.e. half a day.

And therefore one minute is 1/1440 of a day, or approximately 0.000694.

From this, by multiplying the number of minutes in the SLA field on the database by 0.000694, we can get a comparable value for the report.  It is not neat, or pretty, but it works.

And more importantly, I have not been able to find any other way of doing this!
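To make the conversion concrete, here is the arithmetic for an illustrative 240-minute SLA (the figure is just an example):

1 day = 1
1 hour = 1/24 ≈ 0.04167
1 minute = 1/1440 ≈ 0.000694
240-minute SLA = 240 * 0.000694 ≈ 0.1667 of a day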

Applying the Calculations

Now that we know how to make sense of the values in the SLA Fields in the database, we can identify how many Incidents were Resolved in time and/or how many were Closed in time.

With these numbers, not only can we display the total of successful Resolved/Closed times, but we can easily calculate the success percentage for each.

We’ll look at the Resolved SLA in detail and expand our grid of summaries.

Resolved in SLA

Add a new Field to the Grid with the name: ‘Resolved in SLA’.

To determine whether an Incident has been Resolved within the SLA, we measure the elapsed time between the Resolved Date and the Open Date (the first line of the code), then compare it to the Resolution Target Field, which we convert into the same type of numeric representation (the last line of the code):

=sum(IF(InMonth(Resolved_Date,today(),-1),if(interval(Resolved_Date-Open_Date,'mm')
<= Resolution_Target * 0.000694, 1,0),0))

Resolved in SLA (%)

Add a new Field to the Grid with the name: ‘Resolved in SLA (%)’.

To find the percentage of the Incidents resolved successfully within SLA, we need to repeat the code of the previous summary to find the total of successfully resolved Incidents (the first two lines of code).  Then, divide that result by the total number of resolved Incidents and multiply up into a percentage (the last line):

=num(((sum(IF(InMonth(Resolved_Date,today(),-1),if(interval(Resolved_Date-Open_Date,'mm') <= Resolution_Target * 0.000694, 1,0),0)))
/
(sum(IF(InMonth(Resolved_Date,today(),-1),1,0)))) * 100, '00.00')

The NUM function is used to format the percentage to two decimal places.

Note:  For simplicity’s sake (and because I created the dataset) I have not tested for any divide by zero errors.  In real life, this is very likely to occur, especially if the organisation does not have many high priority Incidents or you are splitting the results by certain teams.
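If you do want to guard against an empty denominator, one option (a sketch, not the only approach) is to wrap the whole expression in a further IF that tests the count of resolved Incidents first and returns 0 when there are none:

=if(sum(IF(InMonth(Resolved_Date,today(),-1),1,0)) = 0, 0,
num(((sum(IF(InMonth(Resolved_Date,today(),-1),if(interval(Resolved_Date-Open_Date,'mm') <= Resolution_Target * 0.000694, 1,0),0)))
/
(sum(IF(InMonth(Resolved_Date,today(),-1),1,0)))) * 100, '00.00'))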

Obviously, to create the same metrics for the Closed in SLA it is a simple case of repeating the above, but with different database fields.
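As a sketch of that substitution (assuming, as in the color coding section below, that the closure clock runs from the Resolved Date to the Closed Date, and that the Close Target is also stored in minutes), the ‘Closed in SLA’ count would look something like this:

=sum(IF(InMonth(Closed_Date,today(),-1),if(interval(Closed_Date-Resolved_Date,'mm')
<= Close_Target * 0.000694, 1,0),0))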

Color Coding for Clarity

Currently, the summary grid is readable but it does not highlight when KPIs are being breached.

Color coding certain cells within the table based on their contents is a quick way to draw the audience’s attention to any trouble spots.

The standard approach to this style of color coding is to use a RAG (Red, Amber, Green) color scheme.

For simplicity, we shall just use Green or Red to indicate whether a KPI has been reached or not.

On a personal note: I am not a fan of implementing an Amber color band and find it often creates more issues than it solves.

The Priority table field ‘KPI Resolution’ contains the target percent for each priority.  We can use this field to control the background of a particular column.

There is a ‘Visual Cues’ tab in the table Properties which looks perfect for this job, but it does not work properly with database fields, just literal values which have to be manually entered and which apply to the whole column (rather than the dynamic measurement for a specific Priority that we need).

Instead, we shall use the ‘Background Color’ property, which is accessible by expanding the ‘Resolved in SLA (%)’ Expression.

We can reuse the code from the ‘Resolved in SLA (%)’ Expression to form most of the logic we need, and wrap it up in an IF Statement in order to apply Green or Red color coding:

=if((num(((sum(IF(InMonth(Resolved_Date,today(),-1),if(interval(Resolved_Date-Open_Date,'mm') <= Resolution_Target * 0.000694, 1,0),0)))
/
(sum(IF(InMonth(Resolved_Date,today(),-1),1,0)))) * 100, '00.00')) > KPI_Resolution,rgb(128,255,128),rgb(255,128,128))
The result is a color coded ‘Resolved in SLA (%)’ column:

The exact same process we have just used for the Resolved column can be used for the Closed column.  It is just a case of using the Resolved date instead of the Open date and the Closed date instead of the Resolved date.

Once done, this can also be grouped together.  This is all the information we need, but looks a little ‘off’.

I particularly don’t like the total appearing at the top of the table. It just looks wrong to me and always has (Crystal Reports does the same thing).

This can be changed easily by checking the ‘Totals on Last Row’ on the Presentation tab for the table Properties.

To improve the table as a whole, go to the various tabs, click around and try out the different settings.

Beyond that, a few tweaks to alignment and font and it looks a whole lot better:

COUNT DISTINCT vs SUM

I demonstrated both variations of aggregation in these tutorials, and due to the simplicity of the dataset, both are as good as each other.

Anyone with a decent level of SQL experience will know that when more tables are added it is only a matter of time before parts of the returned data become duplicated.

The COUNT DISTINCT based on the Incident_ID removes all these worries by only counting each Incident_ID once.
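As a side-by-side sketch using the ‘Opened’ count (the SUM variant here is simply the COUNT DISTINCT expression from Part 1 rewritten in the SUM style):

// counts rows: an Incident is counted again for every duplicate row a join produces
=sum(IF(InMonth(Open_Date,ToDay(),-1),1,0))

// counts each Incident_ID once, however many rows it appears on
=COUNT(distinct IF(InMonth(Open_Date,ToDay(),-1),Incident_ID))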

There is a third option: using Set Analysis to pick out the required data.  But in this situation, it suffers from the same issue as the SUM example and cannot detect repeated data.

Obviously, none of this is a concern when working with Excel spreadsheets, where the data tends not to be linked, but databases have additional requirements which need consideration.

Fitting It All In

Depending on the number of metrics associated with Incidents, this table can get a little too large and unclear.

Qlikview has an elegant solution to this problem.

Now, simply drag the ‘Resolved in SLA (%)’ column over the ‘Resolved in SLA’ column, then drag the ‘Resolved’ Field onto either of those two.

Repeat the process for the Closed summaries and you will see this:

Click OK and the user can now select whether to see the count of Incidents resolved in SLA or the percentage.

By repeating this simple maneuver and stacking the three Closed Fields together, we can turn this:

Into this:

This is a fantastic solution, as long as the target audience is aware they can cycle through the columns, and it is far more elegant than an endless grid of data.

Summary

That wraps up the tutorial on how to create this data grid, which is a mainstay of ITIL Service Management reporting.  This example has been based on Incident Management, but the techniques can be easily adapted to reporting on Problems, Changes, Service Requests or Availability.

I may revisit this tutorial and add a third installment on how to deal with measuring elapsed time only during Service Support Hours, rather than the 24 hour clock I have assumed here for brevity.

The natural next step for ITIL reporting is to add context to the one month which is the focus of the report.  This normally takes the form of showing the monthly history for a period of six to twelve months.

Several values are usually plotted on the same graph and this is a challenge in its own right as it goes against the usual charting/grouping logic.

This is where Set Analysis comes in particularly handy and will form the basis of the next tutorial.


Crystal Reports: Mixed Page Orientation Formatting

I have started splitting this blog out by subject.  As a result, this Crystal Reports related post is now here:

http://crystalreportsexperttechniques.wordpress.com/2012/08/04/crystal-reports-mixed-page-orientation-formatting/


Qlikview ITIL Incident Management Report (pt 1)

As this is my first Qlikview post, it feels like some sort of introduction is required.

If Crystal Reports and Xcelsius had a baby, which evolved and improved over many generations, it would be called Qlikview.

All the power of SQL and its own scripting language, mixed with more interactivity and drilldown functionality than you could ever want.

Qlikview is fantastic.

And…

ITIL is fantastic.

So here is a straightforward tutorial on how to build a simple ITIL Incident throughput and KPI table in Qlikview.

This post actually follows on quite nicely from my previous one, as it focuses on extracting several cuts of data in the most efficient and accurate way.

This is the result we are aiming for…

…though we will add some formatting to allow a user to select which columns they want to see and shorten the number of columns…

…which is often the key focal point of an Incident Dashboard, or a separate report with a couple of complementary charts.

To keep this tutorial short and sweet (in two parts!!!), the table does not include some of the columns often associated with this style of report.  Things like: Incidents on hold, re-opened, cancelled or transferred to another Business Unit.

If any of those columns are added, it can impact the integrity of the existing information as it subtly changes the context of the entire report.  So think carefully before doing so.

This tutorial is based on a simple Access Database which has only three tables, but the examples I work through should be easily applied to any ‘real life’ Service Management application:

If you’d like the actual Access database I use here, drop me an email (jasondove@hotmail.com).  To follow this tutorial exactly, you will need both this database and to set up an ODBC driver pointing to it.

What this style of report shows

The most basic version of this report shows the volume of Incidents flowing through a one month period.

Essentially, if the ‘Brought Forward’ value is consistently less than the ‘Carried Forward’, the responsible Business Unit is struggling to keep up with demand.

We will develop this part of the report before extending it to include SLAs.

Getting the Data

The trick with this style of report is to get all the data required for accurate results while keeping the actual volume of data to a minimum.

I have found that if separate Tables are added to Qlikview via individual SQL, something odd happens with the linking and the results can be questionable.

With this in mind, the main body of the data for this report is going to be gathered by one SQL statement.

From this one set of data, each Incident will be counted one or more times as it is tested against the criteria for each summary column in the report.

1.  Open Qlikview and open a new report.

2.  Go to the menu File -> Edit Script.

3.  Click on the Connect button to point Qlikview at the database.

4.  Select your database from the list.  In this case it is ‘Dove_ITIL’.

5.  Enter the following SQL beneath the newly generated line of code for the ODBC:

SELECT Incident.Incident_ID,
Incident.Open_Date,
Incident.Resolved_Date,
Incident.Closed_Date,
Priority.Priority_Name,
Priority.Priority_ID,
Priority.Resolution_Target,
Priority.Close_Target,
Priority.KPI_Resolution,
Priority.KPI_Close
FROM Incident INNER JOIN Priority ON Incident.Priority_ID = Priority.Priority_ID
WHERE
(
Incident.Closed_Date IS NULL
OR
Incident.Closed_Date >= DateValue(Year(Now()) & ',' & Month(Now())-1 & ',01')
)
AND
Incident.Open_Date < DateValue(Year(Now()) & ',' & Month(Now()) & ',01');
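To make the WHERE clause concrete, here is what the two DateValue expressions evaluate to for an illustrative reload date of 15 August 2012:

DateValue(Year(Now()) & ',' & Month(Now())-1 & ',01') = 01 July 2012 (the start of last month)
DateValue(Year(Now()) & ',' & Month(Now()) & ',01') = 01 August 2012 (the start of this month)

So the filter keeps every Incident opened before 1 August that was either still open or was closed on or after 1 July.  (One caveat worth noting: if the reload happens in January, Month(Now())-1 evaluates to 0 and the month arithmetic would need adjusting.)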

6.  Click on the Save icon at the top of the window, and save the report.

7.  Click the Reload icon to populate the data into the report.

The report now has access to the data we need to build the table.

Displaying the Data

With the subset of data selected, we need a container to hold the summary information required to create the ITIL stats.

In this case, we are going to use the Straight Table option within the Charts.

8.  Click the Chart icon.

9.  Enter a title and select the ‘Straight Table’ icon:

10. Click Next.

Dimensions / Grouping the Data

Because we are interested in the SLAs for each different level of Priority, we shall use it as a Dimension to focus our summaries.

11. Select the ‘Priority ID’ and click on ‘Add’ to move it to the ‘Used Dimensions’ panel.

12. With the field still highlighted, click on the ‘Edit…’ button, and type the following code:

=Priority_ID & ' - ' & Priority_Name

This will display both the Priority ID and Name together in the table and make it more meaningful.

13. In the Label box, even though it is greyed out, type ‘Priority’ for the column’s title.

14. Click ‘Next’.

The next screen automatically opens an Expression Edit window for the creation of one summary.

Brought Forward

We shall add the first column now, save and check the report, then add the additional summaries.

First things first: the table starts with the number of Incidents which were opened before the start of the target month and are still open as the month begins.

There should be an Expression window open from step 14…

15. Enter the following code:

=SUM(IF(Open_Date < AddMonths(MonthStart(ToDay()),-1), IF(isnull(Resolved_Date) OR Resolved_Date >= AddMonths(MonthStart(ToDay()),-1),1,0),0))

What does it do?

Well, by using ToDay() to find the current date, MonthStart() to wind the time back to the beginning of the current month and AddMonths() to move an entire month at a time, we can calculate everything we could ever want for a monthly report.
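For example, assuming the expression is evaluated on 15 August 2012 (the date is purely illustrative), those functions return:

ToDay() = 15/08/2012
MonthStart(ToDay()) = 01/08/2012 (the start of the current month)
AddMonths(MonthStart(ToDay()),-1) = 01/07/2012 (the start of the previous, i.e. target, month)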

In this case, we want Incidents opened before the target month that were resolved during or after the month (or not resolved at all).  We put those conditions into an IF statement that evaluates to 1 if they are true (0 if not) and then SUM the 1s for the result.

16. Click OK to save the Expression and enter ‘Brought Forward’ as its Label:

17. Click Finish.

The result should look a lot like this:

It is certainly not the most impressive thing ever created in Qlikview, but is a step in the right direction for this ITIL table.

From this point we can edit the table and add the additional rows.  For the remainder of Part I of this tutorial, we shall focus on more basic summaries, similar to the Brought Forward column.

But first, we shall fix the Sort Order.

18. Right click on table and select ‘Properties…’.

19. Click on the ‘Sort’ tab and tick the ‘Text’ Sort By option.

20. Click on the ‘Expressions’ tab to be ready for the next part of the tutorial.

Open Incidents

This is a count of all the Incidents which were opened in the previous month and is one of the easiest summaries to calculate.

21. From within the Expression tab, click ‘Add’ and enter the following code:

=COUNT(distinct IF(InMonth(Open_Date,ToDay(),-1),Incident_ID))

This code works a little differently to the previous example.  The IF Statement returns the Incident ID if true (and nothing if it does not) and these are then COUNTed.

22. Enter ‘Opened’ as the Label for this expression.

Resolved Incidents

This is a count of all the Incidents which were resolved in the previous month and is very similar to the Open Incidents in how it is calculated.

23. Add another Expression with the following code:

=COUNT(distinct IF(InMonth(Resolved_Date,ToDay(),-1),Incident_ID))

24. Label this Expression as ‘Resolved’.

Closed Incidents

Closed Incidents, the most annoying part of Service Management: getting an official sign-off of a completed job from a user who no longer cares as they are now happily working again.

This is a count of all the Incidents which were closed in the previous month and is very similar to the previous two columns in how it is calculated.

25. Add another Expression with the following code:

=COUNT(distinct IF(InMonth(Closed_Date,ToDay(),-1),Incident_ID))

26. Label this Expression as ‘Closed’.

Carried Forward

And lastly (for the moment) the number of Incidents which were opened before the end of the target month and are still open as the month ends.

This calculation is almost the same as the Brought Forward summary we started with, and when viewing successive months, the Carried Forward from one month should equal the Brought Forward of the next.

27. Add another Expression with the following code:

=SUM(IF(Open_Date < MonthStart(ToDay()),IF(isnull(Resolved_Date) OR Resolved_Date >= MonthStart(ToDay()),1,0),0))

28. Label this Expression as ‘Carried Forward’.

29. Save the changes and view the Table.

And here it is:

Summary (so far)

These five columns provide the volume of Incidents and their throughput during the month.  Even if we stopped the development now, this report has some value.

As well as showing the volume of Incidents by Priority, it is easy to see if the Business Units responsible are coping or not, just by comparing the Brought Forward and Carried Forward fields.

Which is just as well, because I am going to end this tutorial here!

Part II is coming soon!  This will cover the SLA column scripts, formatting, tweaks and review the report as a whole.

Sign up on the right to get an email when it is available.


How Intelligent is Your Business Intelligence?

For a change from me bleating on about ServiceNow reporting, I thought I’d write something a lot more generic.

This is the number one recurring mistake I have seen while consulting, and it pops its ugly head up time and again across all business sectors, reporting software and methodologies.

It is particularly prevalent for ITIL service reporting, but appears whenever time periods are used as a basis for measuring multiple values.

The thought process tends to go something like this:

  1. Ok, we need to know how many Incidents were opened last week, so let’s filter the report for those records.
  2. Now we need to know the summary by Priority, so add a group for the Priority and count the Incidents in each group.
  3. What’s next?  Oh yeah, Incidents which were closed last week.  Easy stuff, let’s just count the Incidents with a Status of ‘Closed’.
  4. Next, outstanding Incidents that were not closed at the end of the week.
  5. Easy, just count the Incidents that do not have a Status of ‘Closed’.
  6. Done!

It’s not done!  It’s massively wrong, misleading and hides the worst Incident resolution stats!

What this report will show is the number of Incidents opened last week and the number of those opened Incidents which were closed.

Any Incident which was opened two weeks ago and closed last week will not appear in this report; not only that, it will not appear as closed in any report…ever.

Additionally, if the report for the last week is refreshed first thing on a Monday morning it will show one result, but if more Incidents from last week are closed after that and it is refreshed again: the results of the report will change.

This is horribly wrong for a report that should be a definitive record of the previous week’s results.  And a ‘solution’ of “run it once, first thing on a Monday” is amateurish in the extreme.

And if all that is not bad enough, with the above report it is possible for the KPIs for each week to be reached, but fail over a monthly or a yearly based report!

So What Went Wrong?

Quick answer: the date range.

Because the report developer’s focus was initially on the opened Incidents, the rest of the report metrics suffer.

To correct this, the date range filter needs to accommodate both open and closed Incidents within the last week.

This is what the report does currently (the solid lines depict which date ranges are evaluated in the filter):

Because the closed date is not used, only Incidents opened in the last week are selected with no regard as to whether or not they are closed.

The asterisks are there to illustrate my earlier point.  The green asterisk represents an Incident being opened towards the end of the week.  The red asterisk is the same Incident being closed in the following week.

The closing of this Incident does not fall into the report logic and will never be displayed, even though an Incident being closed (or resolved) is the best indicator of work being done.

The obvious solution may appear to be to use the closed date, rather than status, like this:

This works for the open and closed Incidents, but does not identify ALL the outstanding ones at the week’s end.

The problem is that an Incident can be opened before last week and remain open during the entirety of last week.

To cover all possibilities, a wider approach is needed:

In short, every Incident opened before the end of last week which was not closed before the start of last week.
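In the same SQL style as the Qlikview tutorial elsewhere on this blog (the field names are borrowed from its sample database, and the week boundaries are left as placeholders since they depend on your database’s date functions), the filter for that bucket looks something like this:

WHERE
(
Incident.Closed_Date IS NULL
OR
Incident.Closed_Date >= [start of last week]
)
AND
Incident.Open_Date < [start of this week]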

This logic will provide the correct bucket from which the separate metrics can be extracted.

In theory this means scanning the entire database every week for long term open Incidents which can be a massive reporting overhead.

The way to handle this is either to make an arbitrary decision as to what age open Incidents can be ignored in this style of report (ideally featuring them in a dedicated ‘Old Incidents’ report instead)…or to investigate the actual data, find the real cut-off date for Incidents which are not closed, and report from that point.

Finally

The above is focused purely on the conflict between measuring three simple values in the one report.

With other metrics added, the issue is compounded, especially when re-opened, re-allocated or suspended Incidents are thrown into the mix.

The key thing to keep in mind when designing any report is to base the date filters on the whole of the data required for ALL the metrics being featured in the report.

And finally, whenever adding a new metric to an existing report, re-examine the date filter in light of the new requirement.  It is sometimes the case that getting an accurate result for a new metric impacts every other metric already in the report.


Example ServiceNow Report

I posted this on Linkedin a while ago, but obviously things get pushed down in the group history until they are forgotten.

This is quite a clever Crystal Report (if I do say so myself) which displays, in a nice graphical way, when selected Services were impacted over a set time frame:

ServiceNow Report

(Obviously I deleted the service names on the left, but they will be replaced by your service names when the report is refreshed.)

If you want this report (free!), drop me an email at jasondove@hotmail.com.

For more cool information about ServiceNow Reporting, please look here!



A few Random Tips for ServiceNow Reporting

I have been developing reports directly from ServiceNow ever since the ODBC connection was made available (http://wiki.service-now.com/index.php?title=ODBC_Driver).

This new functionality allows ServiceNow to be analysed directly with greater freedom and accuracy than through the built-in reporting (mentioned in more detail elsewhere in this blog).

ServiceNow have done a fantastic job with the structure and naming conventions within the table schema and it is a delight compared to a lot of systems I have reported on.

These few tips may be of interest to anyone embarking on this route.

Field Naming conventions

The fields in the ServiceNow table structure are quite consistently named to the point where general rules can be applied:

If a field name begins with “DV_”, the DV stands for “Display Value”, which is the same thing as is shown in the ServiceNow application and what the audience of your reports will expect to see as labels for groups, charts and summary names.

There is nearly always a corresponding field without the DV_ prefix which holds the same data in a less accessible way.  For reporting, the DV_ fields should be used to label everything, but for any calculations (grouping, summaries, filtering etc.) the corresponding data fields should be used.

Top tip: Ordering the database fields alphabetically within your reporting software will group all the DVs together and makes it easier to find the field you want.

Please note that both date and system fields tend not to have a DV equivalent.

Comparing DB field names to ServiceNow

The actual field names used in the database are different to those used in the ServiceNow interface, which are different again from the built-in reporting field names.

This can make identifying the required fields difficult.

However, if a built-in report is exported from ServiceNow in a CSV format, the column headings match the database field names, while the names in the report template it came from match the ServiceNow interface.

When viewed side by side, it is a simple enough task to identify which is which and can save a lot of guess work and frustration.

Correct Linking

Tables have to be linked together in order to create a report, like Incident to Service to find out what Services have been impacted.  Having this freedom to link through tables and retrieve data in any order is the true power of ODBC reporting.

The quick and easy route is to link tables based on the DV name fields: it is obvious what is being linked, and you can actually look in the fields manually and compare values.

There are two problems with this:

1.  Content

There is no guarantee the content of a DV field is unique, so any filtering or grouping on these fields can lead to errors.

2.  Speed

DV fields do not have an Index (a database thing which makes any sort or search on the field massively quicker) and can make reporting painfully slow if they are used for anything other than display purposes.

Both of the above problems can be resolved by using the sys_ID field.  The sys_ID field is unique for a particular row of data, not just in a single table, but across the whole database.

And of course, the sys_ID is also Indexed, so any report based on this field will run as quickly as is possible (bad structure and excessive data still kills performance).
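As a rough sketch (the table and column names are illustrative, based on the standard Incident and Configuration Item tables as exposed over ODBC), linking on the sys_ID rather than on Display Values looks something like this:

SELECT Incident.number,
Incident.DV_cmdb_ci       -- Display Value: use only as a label in the report
FROM Incident
INNER JOIN cmdb_ci
ON Incident.cmdb_ci = cmdb_ci.sys_id   -- link on the indexed, globally unique sys_ID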

It’s De-Normalised for a reason!

Ok, time to get a bit (more) nerdy!  Databases are usually designed so each piece of data is unique and identifiable from an ID.  This is called Normalisation and is the foundation of both database design and professional report development.

What the ServiceNow table structure (and a lot of other databases) does is repeat the same information in other tables to save you the effort of linking.

A good example of this is the Incident table.  The Incident table contains fields to link to other tables to retrieve the affected business unit or the root cause.  But these also exist as DV fields that can be pulled straight from the Incident table, making reports quicker to develop and run.
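For instance (again a sketch, and the column name is purely illustrative), a denormalised Display Value can be read straight off the Incident table with no join at all:

SELECT Incident.number,
Incident.DV_assignment_group   -- denormalised Display Value: no join to the group table required
FROM Incident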

And Many More…

Of course, this handful of hints is just the tip of the iceberg of the knowledge needed to get the most from ServiceNow reporting.

For more cool information about ServiceNow Reporting, please look here!
