
Blog December 15, 2017

Do you suck at testing?

My initial guess would be "yes".

If we also narrow it down to testing of Business Intelligence solutions, I'm quite confident in saying "yeah, you suck!"

Welcome to an enlightening rant about testing of Analytics solutions!

Why do I know you suck? Well, not because of you specifically, but because that is what experience points to, and because testing is genuinely difficult.

Let me explain with an example. Customer John calls his dedicated (of course…) BI developer Johanna during his month-end closing, screaming "…everything's wrong! Do something!!". Johanna, being service minded, immediately tries to alleviate John's pain and starts error tracking EVERYTHING… only to arrive at the conclusion that there is nothing wrong. Well, nothing "wrong" at least. What John sees is what the GL system provided, so - technically - it is all OK. Johanna calls John back, satisfied with her finding of "nothing 'wrong'", but to her surprise John is still upset: "But it is not what I want to see!"

Any feelings of resemblance?

What is this testing scenario actually about? The most important part is communication. John is probably very happy that Johanna is helpful, but did she actually take the time to understand what John meant, rather than what he literally said?

Successful testing begins with:

  • Communication: Who communicates with whom, and how?
    • Do you understand what the customer actually means?
  • Expectations
    • What am I seeing: is it wrong, or is it “wrong”?
  • Responsibility: Who is responsible for what?
    • If expectations deviate from reality, whose responsibility is it to fix it?

These are big questions that most maintenance/DevOps teams should have (hopefully) ironed out early on. But what if not? And what about projects?

I mentioned earlier that testing of Analytics solutions is quite difficult, so let's break it down a bit. This is what most people think an Analytics project looks like:

[Figure: Magnus Blog 1 – the simple picture most people have of an Analytics project]

This is more accurate:

[Figure: the full data delivery chain behind an Analytics solution]

Even in a one-stop solution where the BI tool reads the data sources directly, e.g. Qlik, Tableau or Power BI, all of this is still going on behind the scenes. The difference between a full-blown EDW and a Qlik solution is that the functions tend to be more clearly separated, and that responsibility differs between functions throughout the data delivery chain.

Does it matter? Well, sometimes it does. For example when it comes to testing.

When a tester looks at a dashboard and finds a suspect number, what she sees is actually the result of the whole data delivery process. The "wrong" could be that a source system has been patched and now produces faulty exports, that the integration logic is wrong, or that the data preparation doesn't handle business rules correctly. Or simply that the formatting is wrong in the final presentation.

Good testing procedures recognise that testing needs to be split between functions. Why? First, because it saves time - A LOT of time - and testing done wrong will consume your entire budget. Secondly, it helps build trust in the solution.
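What could that split look like in practice? Here is a minimal sketch in Python, purely illustrative: the connection strings, the table names (gl_extract, gl_integrated, gl_prepared) and the idea of reconciling a row count and a summed amount per layer are all assumptions, not a prescription. The point is that each test compares one function's output against its own input, so a failure points at a single function instead of the whole chain.

```python
# A minimal sketch of layer-by-layer reconciliation tests, assuming each
# function in the delivery chain writes to a table we can aggregate.
# All connection strings, table and column names are made up.
import pandas as pd
from sqlalchemy import create_engine

staging = create_engine("postgresql://user:pass@host/staging")      # hypothetical
warehouse = create_engine("postgresql://user:pass@host/warehouse")  # hypothetical

def aggregate(engine, table):
    """Row count and summed amount for one layer's output."""
    df = pd.read_sql(f"SELECT COUNT(*) AS row_count, SUM(amount) AS total FROM {table}", engine)
    return int(df.iloc[0]["row_count"]), round(float(df.iloc[0]["total"]), 2)

def test_integration_matches_extract():
    # If this fails, the integration step is the suspect - not the source, not the dashboard.
    assert aggregate(warehouse, "gl_integrated") == aggregate(staging, "gl_extract")

def test_preparation_matches_integration():
    # If this fails, the data preparation / business-rule logic is the suspect.
    assert aggregate(warehouse, "gl_prepared") == aggregate(warehouse, "gl_integrated")
```

Run with a test runner of your choice; the exact aggregates matter less than the principle that each layer is tested against its neighbour only.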

If a user trusts the Analytics solution, she will never call the BI developer when something looks wrong or deviates from expectations; instead she will immediately check the operative source system to see what is actually going on. This is the level of confidence we aspire to when testing.

Efficient and successful testing requires the following:

  1. Common testing procedures, tailored to what function is tested
  2. A fixed state of the solution and its content/data
  3. Zone of control / Avoid testing data quality

The first two are quite obvious if you want to avoid full-blown chaos. The third is often forgotten, and it is related to what the project is responsible for delivering. 95% of all Analytics projects do not have any control over the source systems, meaning that how data quality issues in the sources are to be managed must be agreed on prior to project start. If data quality is NOT part of the project's responsibility to fix, then DON'T test for it. Really. Don't do it. Unless you have unlimited time and money and plan to never ever finish anything.

One and two are easy to achieve; the third is not. Not technically, but because of humans. Ever tried to make a tester ignore that a number differs from their expectations and just verify that the "faulty" number is handled correctly according to the stated business rules? Especially when the tester is a Controller or an Accountant. Succeed much? There you have lost at least one day of work already…
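One way to keep that discussion out of the test session is to write the agreed business rules down as explicit checks. The sketch below is purely illustrative (the rule, the column names and the helper apply_cost_centre_rule are invented): it verifies that a "faulty" source row is handled the way the rule says, without passing judgement on whether the source data itself is correct.

```python
# A minimal sketch of testing business-rule handling rather than data quality.
# Invented rule for illustration: transactions missing a cost centre are
# mapped to "UNALLOCATED" instead of being rejected.
import pandas as pd

def apply_cost_centre_rule(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical preparation step: fill missing cost centres per the agreed rule."""
    out = df.copy()
    out["cost_centre"] = out["cost_centre"].fillna("UNALLOCATED")
    return out

def test_missing_cost_centre_is_mapped_not_rejected():
    # The source row may look "wrong" to a Controller, but this test only
    # verifies that the rule handles it as agreed - fixing the source data
    # is the source system owner's job, not the project's.
    source = pd.DataFrame({"amount": [100.0], "cost_centre": [None]})
    prepared = apply_cost_centre_rule(source)
    assert prepared.loc[0, "cost_centre"] == "UNALLOCATED"
    assert prepared["amount"].sum() == 100.0  # nothing silently dropped
```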

How do you exclude data quality issues from the complexity of testing?

This is initially achieved by correctly setting expectations about what the project should deliver and which functions the project is responsible for. The next step is to make sure that the testing procedures also embrace this position. Finally, constant communication about sticking to the testing procedures is needed to avoid people falling back into old habits.

Did you do the thingy? Good! To know if you are in a good spot, your testers should behave like the end user who trusts the Analytics solution:

  • Something is wrong: I'll compare my findings to what the source system delivered
  • Result 1:
    • My dashboard has the same numbers as the source, so I'll ask the source side to check what is wrong
    • Done!
  • Result 2:
    • My dashboard does not have the same numbers as the source, so I'll check the data delivery chain function by function to see where we do things wrong

Since the majority of issues in an Analytics project are related to data quality (Result 1), this testing routine will easily cut 70% of your testing time.
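If you want to make the routine concrete, here is a minimal sketch of that decision as code. The numbers, the tolerance and the single-total comparison are all made up for illustration; in reality you would likely compare per account, per period, and so on.

```python
# A minimal, illustrative sketch of the triage routine described above:
# compare the dashboard to the source, and decide who should investigate.

def triage(dashboard_total: float, source_total: float, tolerance: float = 0.01) -> str:
    """Decide where a suspect number should be investigated."""
    if abs(dashboard_total - source_total) <= tolerance:
        # Result 1: the solution faithfully shows what the source delivered.
        return "Raise with the source system owner - not an Analytics defect."
    # Result 2: the delivery chain changed the number somewhere.
    return "Walk the delivery chain (extract -> integration -> preparation -> presentation)."

if __name__ == "__main__":
    print(triage(dashboard_total=1_204_500.00, source_total=1_204_500.00))  # Result 1
    print(triage(dashboard_total=1_204_500.00, source_total=1_198_700.00))  # Result 2
```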

So, do you suck at testing Analytics projects? Well, you do if you have forgotten about excluding data quality…

 Magnus Hagdahl,
Enfo Business Transformation