SHEDDING LIGHT ON MEASUREMENT

by Martin R. Baird

In the last 10 to 15 years, I haven’t visited a casino that wasn’t doing some form of measurement.

They do everything from employee surveys to guest exit interviews. They use mystery shoppers to rate the guest experience and to measure the service standards the casino has in place. Some properties evaluate valet employees on whether they bring a guest’s car around within a predetermined amount of time. They measure how long it takes a restaurant entrée to arrive at the table after it’s been ordered. Basically, they do lots of surveys.

Collecting data on guests’ experiences is a wonderful thing. It’s very important for casino managers to quantify what is happening on the property and then be able to discuss it intelligently with each other and with the staff. But I know from experience that casinos need to be more careful about how they gather data.

I recently spent some time with an expert in the field of measurement and statistics. He earned his doctorate from the University of Washington and specializes in measuring “education outcomes.” If he worked in my world, he would measure whether employee training generates the desired outcome of guest service improvement on the casino floor. He would examine the results of his efforts and try to determine why training worked or why it may not have been as successful as desired.

This expert would be appalled at what my company occasionally encounters when doing research for a casino. From time to time, we are asked by clients to use a previous vendor’s research “instrument.” (“Instrument” is another name for “survey.”) When this happens, we sometimes find that the instrument is difficult to use for many reasons. For example, it may ask a researcher to rate employees’ “attitude.” It sounds like a good idea to have an unbiased third party evaluate attitudes, but how do you measure such a thing? If an employee gives our researcher what he’s looking for, does that employee have a good attitude? If the employee doesn’t demonstrate the desired attributes, does that mean the staff member has a bad attitude? And what if the behavior we consider a sign of a good attitude is against the property’s rules or beyond the scope of the employee’s job responsibilities?

The lesson here is to think very carefully about what you want to assess. Give it serious thought as you create your instrument. What’s the point of spending time and money to measure something that can’t be measured?

When I visited with this expert, he often used two terms: validity and reliability. Let’s take a look at them because they are important.

He defined validity as whether the instrument measures what it is supposed to measure. That sounds deceptively simple, but I have seen validity fly right out the window. Here’s an example. A casino will talk with us about doing a guest service survey. Everyone on the team is given an opportunity to suggest what the instrument should measure. By the time everyone gives their input, this so-called service survey has questions ranging from guest demographics to the perception of food portions. In other words, the instrument measures anything and everything. It is not a guest service survey.

I know why this happens. Each person wants to know different things about guests, and management figures it might as well gather all the data it can in one fell swoop. But if the instrument measures a range of different things, it is not valid for the purpose of measuring service. This is a problem because the instrument doesn’t help guide decisions about improving service, which was the reason for collecting data in the first place.

Reliability is an important concept because I think it speaks to our ability to train and improve based on the information. According to this expert, reliability describes whether the instrument will consistently provide information for making decisions. Here, the question is: could several researchers watch the same behavior and use the instrument in the same way? When the behavior being observed is clearly defined (e.g., did the employee say “thank you”?), different researchers will generally give the same rating. This is HUGE and I encourage you to re-read this paragraph.

Reliability is critical because when you get to the level of measuring clearly defined behaviors, it’s not only something that can be consistently observed, it’s also something that can be improved. In the above example, enlightening employees about how and why to say thank you is relatively simple, and it’s quantifiable. Reliability means you are removing some of the subjectivity, and that is what helps your front-line people consistently understand and demonstrate the behaviors associated with providing outstanding guest service. Think about that for a moment.
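To put rough numbers on the idea of consistency, here is a minimal sketch using invented ratings (this is my illustration, not something from the expert): two researchers score the same ten interactions on one clearly defined yes-or-no behavior, and we check how often they agree. Percent agreement and Cohen’s kappa are standard ways to look at inter-rater reliability.

    # Hypothetical example: two researchers score the same ten interactions
    # on one clearly defined yes/no behavior, such as
    # "Did the employee say thank you?" (1 = yes, 0 = no).

    def percent_agreement(rater_a, rater_b):
        """Share of observations where both raters gave the same answer."""
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return matches / len(rater_a)

    def cohens_kappa(rater_a, rater_b):
        """Agreement corrected for what would be expected by chance alone."""
        n = len(rater_a)
        observed = percent_agreement(rater_a, rater_b)
        # Chance agreement based on each rater's overall yes/no rates.
        p_yes = (sum(rater_a) / n) * (sum(rater_b) / n)
        p_no = (1 - sum(rater_a) / n) * (1 - sum(rater_b) / n)
        expected = p_yes + p_no
        return (observed - expected) / (1 - expected)

    rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # invented observations
    rater_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # invented observations

    print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 90%
    print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")       # about 0.78

The kappa correction exists because two raters guessing at random would still agree some of the time; a clearly defined behavior should score well above that level.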

We see instruments that have words such as “empowered,” “motivated” and “efficient.” How do you clearly define such things, and how can researchers consistently quantify and rate them?

Take this one step further and ask yourself: how could my company train people to be “empowered”? We can talk about empowerment in training, but it’s not an easily demonstrable behavior. If you measure what matters, you also want to improve what matters. Thus, what’s important must be clearly defined for long-term improvement and success.

My final take-away message from my expert is related to scoring systems.

This is not the Olympics or “Dancing With the Stars” where we use a 10-point scale for drama. If you are trying to measure clearly defined behaviors, your instrument probably should avoid the following type of scoring: 5 – Excellent, 4 – Good, 3 – OK, 2 – Fair, 1 – Poor. I see this all too often. How would you score an employee on saying “thank you”? Do they get a 5 if they are really sincere? What is the difference between an excellent and a good thank you?

I know I exaggerate, but at times casinos create instruments that are not far from this. To make such behavior easy to evaluate, it would be much better to rate it with a simple yes-or-no answer. This makes it far easier for researchers to provide consistent ratings. Judging how motivated a person is can be challenging, but saying they either are or are not motivated is relatively easy.
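As a quick, hypothetical illustration of that point (the numbers below are invented, not data from any casino), here is a sketch of how two researchers might score the same five interactions on a 5-point scale versus a yes-or-no item:

    # Hypothetical ratings: two researchers score the same five interactions.
    # On the 5-point scale their judgments drift apart by a point here and
    # there; on the yes/no item there is nothing to drift on.

    five_point_a = [5, 4, 3, 5, 2]   # Rater A: "Rate the thank-you, 1-5"
    five_point_b = [4, 4, 2, 5, 3]   # Rater B: same interactions
    yes_no_a     = [1, 1, 1, 1, 0]   # Rater A: "Did the employee say thank you?"
    yes_no_b     = [1, 1, 1, 1, 0]   # Rater B: same interactions

    def agreement(a, b):
        """Share of interactions where both raters gave the same rating."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    print(f"5-point scale agreement: {agreement(five_point_a, five_point_b):.0%}")  # 40%
    print(f"Yes/no agreement:        {agreement(yes_no_a, yes_no_b):.0%}")           # 100%

The design choice is simple: the fewer judgment calls a researcher has to make, the more consistent, and therefore more usable, the resulting data will be.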

What I learned from my visit with this measurement and statistics expert was invaluable. I hope what I have passed along encourages you to take a critical look at your existing research instruments. You may need to scrap some and start over. If you're not sure, perhaps you need a third-party review. Remember, the results are only as good as the instrument you use!

Date Posted: 13-Feb-2011

Martin R. Baird is chief executive officer of Robinson & Associates, Inc., a Boise, Idaho-based consulting firm to the global gaming industry that is dedicated to helping casinos improve their guest service so they can compete and generate future growth and profitability. Robinson & Associates is the world leader in casino guest experience measurement and improvement. For more information, visit the company’s Web sites at www.casinocustomerservice.com and www.advocatedevelopmentsystem.com or contact the company at 208-991-2037. Robinson & Associates is an associate member of the National Indian Gaming Association.
