
Wednesday, March 21, 2012

Bayesian Causal Analysis of Service Quality

Bayesian Causal Networks combine graphical representation, causal modeling, and Bayesian probability to provide a useful tool for service quality analysis. They allow probabilistic causal models to be constructed that produce probabilistic forecasts of future events and situations. This post introduces the concepts needed to apply causal modeling in a service quality analysis context: the principles of causation, the principles of probability, and an exposure to Bayes' theorem.

Generally, a causal model is an abstract model that uses cause and effect logic to describe the behavior of a system. A causal model is a specific type of model focusing on causal factors. The logic can be as simple as a Boolean, "if-then" model or as complicated as a Bayesian network. Causal modeling is related to, but not the same as, a variety of other mathematical techniques such as multiple regression. Multiple regression, for example, treats only one item as a dependent variable and tends to over-emphasize factors which are only of limited impact but appear more frequently.

Let's start with a simple example related to modeling causality. Consider the conjecture that "if it rains, I will get wet." Clearly, if it does not rain, I will not get wet. But, on the other hand, whether or not I get wet also depends on whether or not I go outside, whether or not I have an umbrella and use it, whether or not it is raining when I go outside, and maybe some other factors. This example illustrates that typically there is a chain of causal factors and also typically a combination of causal factors. As Figure 1 shows, even simple causal models can take on a variety of structures.
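The "getting wet" conjecture can be sketched as a minimal Boolean, if-then causal model. The factor names are illustrative, chosen to match the example above:

```python
# A Boolean "if-then" causal model of the getting-wet example: rain alone
# is not sufficient; I must also go outside, and not be using an umbrella.
def gets_wet(raining: bool, goes_outside: bool, uses_umbrella: bool) -> bool:
    return raining and goes_outside and not uses_umbrella

print(gets_wet(raining=True, goes_outside=True, uses_umbrella=False))   # True
print(gets_wet(raining=True, goes_outside=False, uses_umbrella=False))  # False
print(gets_wet(raining=False, goes_outside=True, uses_umbrella=False))  # False
```

Even this toy model shows the chain (rain, then going outside) and the combination (rain AND outside AND no umbrella) of causal factors.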

Figure 1
The model creator must understand enough of the relevant factors and relationships involved in the model for it to be a credible model. In an ideal modeling scenario, the collection of factors is "collectively exhaustive and mutually exclusive." That is to say, in this ideal situation all the relevant factors are known and are completely independent of each other. Of course, very few problems allow such analytical luxuries. In practice, especially in the context of human sentiment and behavior, a model is merely an approximation of what exists in real life, the causal factors are typically ambiguous and overlapping, and the model must be continuously modified as further (and hopefully better) data becomes available.

It is also essential to understand the difference between coincidence, correlation and causality. Two events may occur simultaneously, or coincide, and still be completely independent of each other. To a statistician there is high correlation between the two events, but in fact there may be absolutely no causal relationship. A common example of this error in the press is the confusion between correlation and causation in scientific and health-related studies. In theory, the two are easy to distinguish - an action or occurrence can cause another (such as smoking causes lung cancer), or it can correlate with another (such as smoking is correlated with alcoholism). If one action causes another, then they are most certainly correlated. But just because two things occur together does not mean that one caused the other, even if it seems to make sense.

Unfortunately, our intuition can lead us astray when it comes to distinguishing between causality and correlation. For example, eating breakfast has long been correlated with success in school for elementary school children. It would be easy to conclude that eating breakfast causes students to be better learners. It turns out, however, that those who don't eat breakfast are also more likely to be absent or tardy - and it is absenteeism that is playing a significant role in their poor performance. When researchers retested the breakfast theory, they found that, independent of other factors, breakfast only helps undernourished children perform better.
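A small simulation makes the breakfast example concrete. The probabilities below are invented for illustration: absenteeism (the confounder) drives both skipping breakfast and poor scores, while breakfast itself has no direct effect - yet breakfast and good scores still appear correlated.

```python
import random

random.seed(0)

# Hypothetical confounder simulation: being absent often lowers both the
# chance of eating breakfast and the chance of a good score. Breakfast
# has NO causal effect on scores in this model.
rows = []
for _ in range(10_000):
    absent_often = random.random() < 0.3
    eats_breakfast = random.random() < (0.4 if absent_often else 0.8)
    good_score = random.random() < (0.3 if absent_often else 0.7)
    rows.append((eats_breakfast, good_score))

def p_good(breakfast: bool) -> float:
    sub = [good for b, good in rows if b == breakfast]
    return sum(sub) / len(sub)

# Breakfast eaters score better on average - pure confounding, not causation.
print(round(p_good(True), 2), round(p_good(False), 2))
```

Conditioning on the true cause (absenteeism) would make the apparent breakfast effect vanish, which is exactly what the researchers found.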

Fundamentally, these errors ignore the notion of conditional probability. Bayesian methodology is based on conditional probabilities: if variables A and B are not independent, then the belief in A given that B is known is the conditional probability P(A|B) = P(A,B) / P(B). This formula simply expresses the degree of belief in the state of A when the state of B is known. Likewise, the probability of B given A can be calculated in the same manner, yielding what has come to be known as Bayes' law or Bayes' theorem:

P(A|B) = P(B|A) P(A) / P(B)
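A quick numeric sketch of the theorem, using made-up service quality numbers (the events and probabilities are assumptions for illustration only):

```python
# Hypothetical inputs: A = "a trouble ticket is opened",
# B = "network congestion is observed".
p_a = 0.05           # P(A): prior probability of a ticket
p_b = 0.10           # P(B): probability of observing congestion
p_b_given_a = 0.60   # P(B|A): congestion given that a ticket was opened

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # 0.3
```

Observing congestion raises the belief in a ticket from 5% to 30% - the "information updating" the next paragraph describes.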
This rule is at the very heart of Bayesian analysis. It allows information updating in response to new information. Three steps are involved in Bayesian modeling: (1) developing a probability model that incorporates existing knowledge about event probabilities, (2) updating the knowledge by adjusting the probabilities according to observed data, and (3) evaluating the model with respect to the data and the sensitivity of the conclusions to the assumptions.
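The three steps can be sketched end-to-end with a Beta-Binomial model. The scenario and all numbers are assumptions: the unknown quantity is the daily probability that a service call becomes a trouble ticket.

```python
# Step 1: a prior model encoding existing knowledge - Beta(2, 8),
# i.e. "roughly a 20% ticket rate, held with modest confidence".
alpha, beta = 2.0, 8.0

# Step 2: update the knowledge with observed data -
# 30 calls were observed, of which 9 became tickets.
tickets, calls = 9, 30
alpha_post = alpha + tickets
beta_post = beta + (calls - tickets)

# Step 3: evaluate the model - compare prior and posterior means, and
# check sensitivity to the assumptions by retrying with a flat Beta(1, 1).
prior_mean = alpha / (alpha + beta)                # 0.2
post_mean = alpha_post / (alpha_post + beta_post)  # 11/40 = 0.275
flat_post = (1 + tickets) / (2 + calls)            # 10/32 = 0.3125
print(prior_mean, post_mean, flat_post)
```

The posterior (0.275) sits between the prior belief (0.2) and the raw data rate (0.3), and the flat-prior rerun shows how much the conclusion depends on the prior - a simple sensitivity check.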

Figure 2

Consider the causal model of trouble ticket incidence in a communications service provider environment represented in Figure 2. In the first step, qualitative information is gathered (or documented) concerning the topic in question. Substantive knowledge of subject-matter experts on the key aspects of how and why trouble tickets are generated is collected in order to provide the basic structure of the model. In the second major step, observed data is brought in to represent the key factors in the model. For example, the "Network QoS" causal factor can be represented by real data variables measurable from the communications network elements. As seen in Figure 3, these could be packet loss, delay, and jitter.
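One way to sketch this step in code: summarize the measurable variables (packet loss, delay, jitter) into a discrete "Network QoS" state, then attach an assumed conditional probability of trouble-ticket incidence to each state. The thresholds and probabilities below are illustrative placeholders, not values from the model in the figures.

```python
# Hypothetical summary of measurable network variables into a QoS state.
def qos_state(packet_loss_pct: float, delay_ms: float, jitter_ms: float) -> str:
    if packet_loss_pct > 2.0 or delay_ms > 150 or jitter_ms > 30:
        return "degraded"
    return "good"

# Assumed conditional probability table: P(trouble ticket | Network QoS).
P_TICKET_GIVEN_QOS = {"good": 0.02, "degraded": 0.25}

def p_ticket(packet_loss_pct: float, delay_ms: float, jitter_ms: float) -> float:
    return P_TICKET_GIVEN_QOS[qos_state(packet_loss_pct, delay_ms, jitter_ms)]

print(p_ticket(0.5, 40, 5))   # healthy network
print(p_ticket(3.2, 40, 5))   # high packet loss
```

In a real model the table would be elicited from subject-matter experts in step one and then re-estimated from observed ticket data in step two.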


Figure 3

Now it is possible to complete the second step in the modeling process, updating the knowledge by adjusting the probabilities according to the observed data. With the updated probabilities in hand, we can perform the third step: evaluating the model with respect to the data and the sensitivity of the conclusions to the assumptions.
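The update-then-evaluate cycle can be illustrated with assumed numbers: after a trouble ticket arrives, invert the model with Bayes' theorem to update the belief that QoS is degraded, then probe sensitivity by varying the prior.

```python
# Assumed likelihoods: P(ticket | degraded QoS) and P(ticket | good QoS).
def posterior_degraded(p_degraded: float,
                       p_ticket_deg: float = 0.25,
                       p_ticket_good: float = 0.02) -> float:
    # Bayes' theorem: P(degraded | ticket)
    num = p_ticket_deg * p_degraded
    den = num + p_ticket_good * (1.0 - p_degraded)
    return num / den

# Sensitivity check: how much does the conclusion depend on the prior?
for prior in (0.05, 0.10, 0.20):
    print(prior, round(posterior_degraded(prior), 3))
```

Even a 5% prior on degraded QoS jumps to roughly a 40% posterior once a ticket is observed, and the spread across priors shows how sensitive that conclusion is to the starting assumption.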


About Me

Scott's leadership roles include innovation in analytic software and solutions at SDR Consulting, SAS Institute, and IBM Global Business Services; analytical organization building at leading companies such as GTE Wireless, AT&T Broadband, Nextel Communications, Vodafone, and The Home Depot. His experience includes defining customer data architecture, customer lifetime value modeling, predictive modeling, segmentation modeling, experimental designs, program evaluation, and other custom quantitative solutions. Having founded or built highly-skilled advanced analytics teams in multiple companies, Mr. Radcliffe has extensive experience in analytics process building, decision analytics systems, and strategies for customer value management.