Mike Masciandaro: Part III - Eight Keys For Successful Reporting

This is the third part of a multi-part interview with Mike Masciandaro, a veteran BI practitioner who recently retired from Dow Chemical, where he served as director of BI for nearly 20 years. Our first two episodes focused on how to establish BI programs and teams, and how to partner with the business to ensure high levels of customer satisfaction and adoption. This episode focuses on how to create outstanding reports that users will love and use.

Masciandaro is a veteran business intelligence practitioner who recently retired from an illustrious career at Dow Chemical as director of BI. During that time, Mike saw and did just about everything there is to do in the world of BI, data, and analytics. He is now intent on sharing his hard-won knowledge with others.

If you would like to contact Mike, email him at [email protected]

Key Findings:

  • Two levels of drilling will usually suffice, but different users may need to start at different levels
  • The presentation of the data is often more important than the data itself
  • Never patch data downstream
  • Perception of accuracy can be a challenge that requires a cultural change
  • Properly securing data is a balancing act
  • Monitoring actions and their results is difficult, but essential

This is an excerpt from the third podcast between Wayne Eckerson and Mike Masciandaro.

Wayne: What are the keys to creating a great report that users will use on a consistent basis? 

Mike: So just broadly, I’ll go through an overview of them. The first one is ease of use. Your reporting has to be very easy to use. It has to be intuitive and it has to be catchy. User interface and user experience are important there.

The second one is drill down. When you have an insight, you have to be able to drill into that insight and get down to the action.

Third point is monitor. So once you take action, the question that always comes up later is whether that action is sufficient or not.  For example, everybody takes actions. Then next month they’re getting together and people are saying, ‘Hey, were those actions good or not? Or did some condition change so we have to change our action?’ We have to facilitate that process.

Accuracy is the next point. We’re responsible for taking something from source systems, and we have to make sure our ETL process doesn’t introduce inaccuracies. But there’s a lot more to accuracy than just ‘Did we do the ETL correctly?’ We have to be giving the right information. Some of that, interestingly, comes from human behavior. Sometimes it’s just perception of accuracy. People accept good news better than bad, for example, and you have to be prepared for that.

The next point is relevancy. Relevancy is different for every user. Analysts want a lot of detail, and that’s what’s relevant to them. A leader wants higher-level information, and that’s what’s relevant to them. When you’re building reports, you have to set context for users so that the report becomes relevant and timely for them right away, which, by the way, is my next point: timeliness.

So, timeliness is all about the latency between when events are happening or even predictive information that you’re getting from advanced analytics. You have to deliver that to users on a timely basis. We hear about real time, and we’re heading more towards that. But timeliness is very important, especially in the beginning when you’re seeing events and asking ‘What is the impact of those events?’ and ‘After you took those actions did those actions work?’

The last two are responsiveness and security. Responsiveness gets into your technical architecture and how well it’s delivering. When somebody does a click, how long does it take the system to respond? We take a lot of heat in the industry when our architecture or our infrastructure doesn’t respond quickly. This is tough when people query millions and millions of rows of data. It’s got to be responsive – in my world, five seconds or less.

A click response has to happen in five seconds every time. I realize that’s very difficult. We set metrics around that. We say 95+ percent of all clicks need to respond in less than five seconds.
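
As a rough illustration of the kind of service-level check Mike describes, here is a minimal Python sketch. The function, field names, and sample timings are invented for the example, not Dow’s actual telemetry.

```python
# Minimal sketch of tracking a click-response SLO: 95+ percent of
# clicks should come back in under five seconds. The sample data and
# names are illustrative assumptions, not a real BI tool's log format.

THRESHOLD_SECONDS = 5.0
TARGET_RATIO = 0.95

def slo_met(response_times_seconds):
    """Return (ratio_under_threshold, passed) for a batch of click timings."""
    if not response_times_seconds:
        return 0.0, False
    under = sum(1 for t in response_times_seconds if t < THRESHOLD_SECONDS)
    ratio = under / len(response_times_seconds)
    return ratio, ratio >= TARGET_RATIO

# Example: one day's click timings pulled from a usage log.
timings = [0.8, 1.2, 4.9, 6.3, 2.1, 0.5, 3.4, 1.1, 0.9, 2.7]
ratio, passed = slo_met(timings)
print(f"{ratio:.0%} of clicks under {THRESHOLD_SECONDS}s -> {'PASS' if passed else 'FAIL'}")
```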

Finally, as I mentioned before, security. You do all this stuff to get data out and in people’s hands and then you want to go lock it up. Security will come back to bite you because if you’ve done all those other things right and now you’re not secure, people will clam up and say, ‘We can’t have that stuff out there now, because it’s too important.’ That’s what you’re trying to do, right? You’re trying to make your data important and valuable for people to use, but once it’s really valuable, then people want to really make sure that their stuff is secure.

So those are my eight key concepts there, Wayne.

Wayne: Let’s drill down into each of these to get a little bit more detail that our audience will appreciate.

If we start at the top with ease of use, how simple should these dashboards be? There’s always a tradeoff between simplicity and complexity. Do you strive for making the dashboards so easy to use that you don’t have to train people how to use them?

Mike: I think it’s tough, when you’re trying to get dashboards in the hands of different user bases, to make something relevant and hit the right level of ease of use, but the point about training is a good one.

You should have a practice that goes out and understands what ease of use is all about. It’s not just simplicity. It’s screen design. It’s placement of important information. It’s testing whether users can find elements just by looking at the screen. Really sophisticated websites are doing this kind of activity to get their message out.

I believe that if something needs a lot of training, you’re never going to catch up to the demand. We have training, though. Training goes from something fairly detailed to light communication or help tips that are actually built into the tool, and we’re starting to do more of that kind of stuff. Hover text is the simplest case of that. Ultimately, though, training courses and overlays are not sufficient because those things get out-of-date very quickly. They’re hard to keep up to date, and people don’t really look at help documents much. They expect to figure stuff out on the screen. We actually had a practice going with people who went out and got trained on UI experience, and we had some outside help to analyze our tools.

Wayne: Did you have usability labs, or did you actually watch users using the software before you deployed it?

Mike: That sounds really formal when you say “the labs”. We absolutely have had a scientific approach towards getting users. We would pick a wide range of them, and get people who were very familiar users and users who were not familiar at all, and then go through a scientific process.

We didn’t have a formal lab, but we did have a formal process of taking new designs and getting people together and having usability testing. We delivered a lot of new information all the time, so it was difficult to keep up with all that new stuff.

Wayne: Do you show a different design or view of a report to someone who’s new versus someone who’s a veteran? The assumption here is that someone who’s new is easily overwhelmed with too much data, whereas someone who’s been around expects more data, more detail, more functionality. Is that something that came into your design paradigm?

Mike: We’ve played around with that concept a lot to have reports where the default view is something simple and then you could crack it open and make the default view more complex. Well, I should say more detailed.

We were constantly tweaking that kind of thing. So, what is your starting point? Then try to get your tools to remember where each user left off. So, if I was only worried about one geographical location or one country or one business segment, the system would remember that.

Starting points are difficult, though, because if you just have one default starting point, it never really hits the mark. So, then you start to break it down by personas of the user base and try to target the various kinds of personas upfront. Then when somebody new comes in, you could say, ‘Oh, you’re a user like this. Here’s how that gets set up for them.’
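
A minimal sketch of that persona pattern, assuming a hypothetical per-user store of saved filters and invented persona defaults:

```python
# Sketch of persona-based starting points with per-user memory.
# The personas, filters, and storage are hypothetical illustrations
# of the pattern, not a real tool's API.

PERSONA_DEFAULTS = {
    "analyst": {"level": "account", "columns": "detailed"},
    "leader":  {"level": "region",  "columns": "summary"},
}

saved_context = {}  # user_id -> last filters used, e.g. one country or segment

def starting_view(user_id, persona):
    """Last-used context wins; otherwise fall back to the persona default."""
    view = dict(PERSONA_DEFAULTS.get(persona, PERSONA_DEFAULTS["leader"]))
    view.update(saved_context.get(user_id, {}))
    return view

saved_context["u42"] = {"country": "Germany", "segment": "Coatings"}
print(starting_view("u42", "analyst"))  # remembered Germany/Coatings plus analyst detail
print(starting_view("u99", "leader"))   # brand-new user gets the persona default
```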

Wayne: How do you tailor a report to an individual persona or role, other than what you’ve just said? Are there any other tips that you could provide for doing that or is it completely tool-based?

Mike: It’s all about paying attention to the things that are not about the data itself. It’s about how you’re presenting the data to make people see things very quickly. I’ll give you one example.

If you tell somebody where things are and they click through it, when they come back a week later, can they find it again? Or do they struggle to figure it out a second time? That repeatability is an attribute you can score your system on for ease of use. That’s part of the art of it, but relevancy is another piece, and I broke these things down into eight key concepts, which overlap quite a bit. So there’s relevancy – we’re always trying to make it relevant so that it’s easier for people to use.

Wayne: How many levels should someone be able to drill, and what’s the best way to drill?

Mike: When you have multiple devices and you’re dealing with laptops and touch pads, it’s a little difficult because the gestures are different on each one of them. But the idea behind drillability is going as far as is necessary to allow the user to recognize where there’s a problem. I recognize where there’s an issue of interest, and now I want to drill down and go, ‘Oh, now I understand where, or from whom, the action has to come.’

Drillability has to be on the screen. There should be easy ways to do it. Some of them are right mouse clicks. Some of them are hovers, perhaps. Sometimes you click and a thing refreshes at another level of detail. Lots of tools handle this very well these days. The problem you get is that if you start putting click events everywhere, then you have to have undo click events, because somebody might click into an area and go, ‘Oh, no, that’s not where I wanted to go, and now I have to go back.’ You want to be able to explore, drill down a little bit, and get back. So, you can easily overdo it with drillability.

You mentioned how many levels. I think two levels of detail is probably good for each person. Bear in mind, people might be coming in at a different level. So somebody might be coming in at two levels lower than somebody else. And they’d still want to go two levels lower from where they started.
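
To make the “two levels from wherever you start” idea concrete, here is a small sketch; the hierarchy is invented for illustration.

```python
# Sketch of "two levels down from wherever you start." The hierarchy
# is an invented example; real dimensions would come from the data model.

HIERARCHY = ["global", "region", "country", "segment", "account"]

def drill_path(entry_level, depth=2):
    """Return the entry level plus the levels reachable by drilling."""
    start = HIERARCHY.index(entry_level)
    return HIERARCHY[start : start + depth + 1]

print(drill_path("global"))   # ['global', 'region', 'country']
print(drill_path("country"))  # ['country', 'segment', 'account'] - the same
                              # two levels of drill, different starting point
```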

Wayne: People are saying these days, ‘Why make people drill at all?’ Can’t we use algorithms to do root cause analysis and surface what they need before they have to do any work or right-clicking? Have you ever thought of that?

Mike: It’s possible to do some of that, but I don’t think we’re sophisticated enough that we’re going to take the human element out of these tools. You need analysts to analyze. You need people to point out important stuff, and they need to find out what’s going on. Algorithms can help us facilitate that, but not replace us.

Wayne: Let’s go to the next one: monitor actions. That’s a really powerful one, one that presumes that you know what actions to take. I don’t know, one, if your tools suggest actions; two, if the tool or someone is remembering those actions; and, three, whether you come back at a later time and actually review them. I don’t think I’ve ever heard of any organization doing that, so I’m curious how you do it.

Mike: It’s surprising how organizations try to do this. If all you did was give them a report, allowed them to drill down, and now they’re getting together in meetings and taking some actions – how exactly are they deciding on those actions?

They’re probably going out there with a SharePoint site that has a list of actions, and they’re manually documenting them. Then they’re trying to track actions and see if people did them. Sometimes the key is more about whether the action happened at all.

If you’re falling short of your benchmark and took actions, the question really is, ‘Did I improve the situation next time?’ If not, through monitoring you can see, ‘Oh, we took those actions, but they’re insufficient,’ or ‘We failed to take actions.’ It is not simple to thread together all those collaborative tools to make that happen, but this is happening in your organization, and the best thing you can do is have tools that facilitate it as broadly as possible, and we are starting to do that.
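
One way to picture that monitoring loop is a simple action log checked against the next period’s numbers. This sketch is a hypothetical illustration of the pattern, not a real collaboration tool’s schema.

```python
# Sketch of closing the loop on actions: log what was decided against a
# benchmark shortfall, then check the next period's number. All names
# and figures are invented for the example.

from dataclasses import dataclass

@dataclass
class Action:
    period: str        # period when the shortfall was observed
    metric: str
    benchmark: float
    observed: float
    description: str

def review(action, next_observed):
    """Did the situation improve after the action was taken?"""
    if next_observed >= action.benchmark:
        return "benchmark met - action looks sufficient"
    if next_observed > action.observed:
        return "improved but still short - revisit the action"
    return "no improvement - action insufficient or conditions changed"

a = Action("2020-03", "daily_sales", benchmark=100.0, observed=82.0,
           description="Expedite shipments in the coatings segment")
print(review(a, next_observed=91.0))  # improved but still short - revisit
```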

Wayne: I take it that you don’t record the actions taken in the tool or in your reports or dashboards today, but it seems like that’s an opportunity to close the last mile of BI, maybe something vendors can help provide. Although, is the BI tool the best place to do that or are there other tools, collaboration tools in the organization, where that should be done?

Mike: You want to be able to circle back, and there’s a lot of things that are changing. It’s not just your actions that you’re taking. There are world events happening. Raw material prices are going up and down. There’s all these variables happening all the time, and when you’re in business you’re trying to manage as many variables as possible to come up with the best possible result for your organization.

So the key is ‘What actions do I have to take? How can I at least take the actions that I think are prudent? Were they effective or not?’ That’s an ongoing process. If you can facilitate that, make it faster, give it good throughput, and let people see things ahead of time so you know where you’re going to land before you actually land, that’s a competitive advantage you bring to your organization.

Wayne: Let’s drill into accuracy. That’s always a big stumbling block for many companies. Either the data in the reports is inaccurate or users just perceive that to be the case. So the first question becomes: where do you clean the data? In the report, in the ETL, in the source? And then, how do you deal with those perceptions of cleanliness?

Mike: This has to be a cultural thing that you continue to work on. I agree that you want to clean the data at the source. We don’t want to patch it up downstream because somebody might always go back to the source and then refute your number. It’s not always easy to do, but you have to.

We used to stream data on daily sales, and if there was a mis-keyed transaction, such as somebody fat-fingering a receipt so that instead of receiving 1,000 pounds they received 10,000 pounds, then it was a big spike in daily sales for that particular segment. We didn’t always catch that before it went out. It’d be nice to be able to do that.

We had some things in there with thresholds, but some of these things slip through. If you let that stream through, it creates a real feedback loop in the organization, and those kinds of things point to areas for improvement. That’s where real-time reporting can be helpful at the transactional level. Have real-time reporting up on the wall so that when people are doing something they can see the result. That’s all part of accuracy as well.
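
As an illustration of the kind of threshold screen Mike mentions, here is a sketch that would flag the 1,000-versus-10,000 fat-finger case. The window size and multiplier are assumptions for the example, not values from Dow’s system.

```python
# Sketch of a simple threshold screen on streaming transactions.
# Flagging a value far above a trailing average catches gross
# fat-finger errors before they hit the daily sales feed.

from collections import deque

class SpikeScreen:
    def __init__(self, window=50, multiplier=8.0):
        self.recent = deque(maxlen=window)  # trailing window of clean values
        self.multiplier = multiplier

    def check(self, quantity):
        """Return True if the value should be held for review, not streamed."""
        if len(self.recent) >= 10:  # need some history before judging
            average = sum(self.recent) / len(self.recent)
            if quantity > average * self.multiplier:
                return True
        self.recent.append(quantity)
        return False

screen = SpikeScreen()
for qty in [950, 1020, 980, 1100, 990, 1005, 970, 1030, 1010, 995, 10000]:
    if screen.check(qty):
        print(f"held for review: {qty}")  # flags the fat-fingered receipt
```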

Perception of accuracy I find very interesting. If somebody is taking data offline to a data silo and it’s not giving the same result as the core system, people will refute it. We’ve seen that plenty of times, especially if that other source is giving them a better story. Then you go into it and there are all kinds of exceptions: ‘Oh, yeah, well, we didn’t include that guy because, you know, that’s not really our responsibility, so we just eliminated that bad guy.’

So other people are cleansing the data, and the bottom line for us was that this stuff has to roll up and get on your profit and loss statement and balance sheet. You can’t run away from all the transactions that are happening at some level of the company, and you can’t have the sum of the parts disagree with the whole. Everybody’s saying it’s great; then when you roll it up, it’s not so great, right?

It becomes cultural. In our system, when you looked at it and rolled it up, those were the numbers at the top of the house. Our commercial reporting tied with our financial reporting.
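
The roll-up discipline Mike describes amounts to a reconciliation check: the sum of the parts must tie to the top-of-house figure. A minimal sketch, with invented segments and figures:

```python
# Sketch of tying commercial reporting to financial reporting: the
# segment-level numbers must sum to the ledger total. Segments and
# figures are invented for illustration.

segment_sales = {"Coatings": 412.5, "Plastics": 387.0, "Performance": 200.5}
ledger_total = 1000.0  # what financial reporting shows at the top of the house

rolled_up = sum(segment_sales.values())
if abs(rolled_up - ledger_total) > 0.01:  # small tolerance for rounding
    raise ValueError(f"Commercial roll-up {rolled_up} does not tie "
                     f"to the ledger total {ledger_total}")
print("Commercial reporting ties to financial reporting.")
```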

Wayne: Let’s move into some more architectural considerations here in terms of timeliness and responsiveness. How did you develop a real-time architecture to support reports that needed real-time updating? How did you ensure queries would come back in five seconds or less? Those two things require some architectural design, right?

Mike: We paid a lot of attention to good architectural design in terms of the backend, the cube structures, and the hardware. We were an SAP shop using the latest technology, SAP HANA in-memory stuff, collapsing those various levels of the architecture so you don’t have so many layers, which is good. That reduces your latency, and we were very happy with the in-memory capability. You can’t beat some of the hardware advancements.

We invested heavily in it, and I’ll give a plug there for SAP because that really paid off for us. It’s not a cheap investment, I can tell you that, so you might want to start small in that area, but it worked out for us and got us to the point where we were reliably getting the data at click speed. For timeliness, not just responsiveness, reducing layers means you don’t have that throughput time, and all your batch streams run a lot better.

Wayne: On the security side of things, where did you secure the data? Did you do it in the database with row- and column-level security, in the application, or somewhere in between?

Mike: Primarily the application layer, but there are some levels of database-layer security that are important to us, especially when we have super critical requirements on the data. People used to call certain data the crown jewels: recipe information.

In the chemical business, recipes are important. We have bill of materials information in the system, because when we’re looking at costs we have the whole cost structure that makes up finished products and the percentages of each. We would secure that even tighter, not just at the application level.

We also have some legal requirements. Sometimes you have certain data that certain countries can’t be allowed to access. So we would get into the database, but primarily it’s at the application level. It’s important to strike the right balance of security without strangling yourself.

All the tools allow you to secure the data to the extent that you can completely hang yourself. It has to be simple enough that people can navigate through it, and if it’s not done right, it becomes a source of what people perceive as data inaccuracy. If my level of security is slightly different from yours, the results I see are going to be different, and people will interpret that as a data quality problem.
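
A small sketch makes that last point concrete: two users with slightly different row-level scopes sum the same metric and get different answers, which reads as a data quality problem. The roles, rows, and figures are invented for the example.

```python
# Sketch of application-layer row-level security, and why differing
# scopes look like inaccuracy: the same report yields different totals.

rows = [
    {"country": "US", "segment": "Coatings", "sales": 500},
    {"country": "DE", "segment": "Coatings", "sales": 300},
    {"country": "CN", "segment": "Plastics", "sales": 200},
]

user_scope = {
    "alice": {"US", "DE", "CN"},  # global scope
    "bob":   {"US", "DE"},        # no China access, e.g. a legal restriction
}

def visible_total(user):
    """Sum sales over only the rows this user is allowed to see."""
    allowed = user_scope[user]
    return sum(r["sales"] for r in rows if r["country"] in allowed)

print(visible_total("alice"))  # 1000
print(visible_total("bob"))    # 800 - same report, different "truth"
```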

So that’s my advice there. It has to be done. We wish we could just be open, but we can’t, and you have to strike the right balance.

Wayne: I think that’s a great place to end. We covered all eight principles in some detail, and I know there’s much more that you could provide there. I encourage people to contact you if they have additional questions. Mike, thank you once again for sharing your insights.

Mike: Thanks a lot, Wayne. I absolutely would be very happy to engage other people and give advice and so forth. Thanks very much.

Wayne Eckerson

Wayne Eckerson is an internationally recognized thought leader in the business intelligence and analytics field. He is a sought-after consultant and noted speaker who thinks critically, writes clearly and presents...
