The Quantitative Evidence That Reputation Management Works
At times, the research findings published in our industry seem a little suspect. After all, most of them are produced by companies with a stake in the results, and very few are accompanied by the kinds of bona fides that give us confidence in more established sources such as scientific papers and academic studies. Were surveys conducted with samples large enough to be meaningful? Are their questions free of bias? Are the marketing results claimed in case studies truly balanced, or do they leave out evidence of tactics that don't perform as expected?
This is why I’ve found it refreshing to discover that in one vertical in particular, there’s a body of academic research that speaks to exactly the kinds of questions we want answered about reputation management—namely, does online review monitoring and response really make a difference to a business’s bottom line?
The vertical to which I’m referring is hotels, where a body of interesting research exists going back several years. Two studies in particular have results that are especially worth highlighting. The first, authored by professors Davide Proserpio of USC and Giorgos Zervas of Boston University in 2016, is titled “Online Reputation Management: Estimating the Impact of Management Responses on Consumer Reviews.”
Analyzing tens of thousands of TripAdvisor reviews, the authors found that when hotels respond to consumer reviews, on average their review volume increases by 12% and their star rating goes up by 0.12 stars.
The authors note that 0.12 might not seem like a lot, but the effect can be more dramatic than it sounds. After all, TripAdvisor's ratings are displayed rounded to the nearest half star, so if your mathematical average goes from just 4.14 to 4.26, consumers will see a 4.5 rating where the business previously showed a 4.0. It's worth noting that the same rounding effect can boost the impact of incremental improvements on other sites as well, such as Google and Yelp.
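The rounding logic is simple enough to sketch. A minimal illustration (the function name is mine, not from any review site's documentation):

```python
def displayed_rating(avg: float) -> float:
    """Round a mathematical average to the nearest half star,
    the way review sites typically display ratings."""
    return round(avg * 2) / 2

# A 0.12-star gain can cross a display threshold:
print(displayed_rating(4.14))  # 4.0
print(displayed_rating(4.26))  # 4.5
```

Between 4.14 and 4.26 the underlying average moves only 0.12 stars, but the displayed rating jumps a full half star, which is what consumers actually see when scanning listings.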
Some details in the study's methodology make its conclusions especially compelling. For instance, the authors used Expedia reviews as a control, to ensure that no other factor was causing TripAdvisor reviews to improve. Because hotels responded much less frequently to Expedia reviews during the same time period and because Expedia reviews did not change significantly, Proserpio and Zervas were able to conclude that the improvements on TripAdvisor were likely due to review response alone.
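This control-group logic is a difference-in-differences comparison: the change in the treated group (TripAdvisor, where responses began) minus the change in the control group (Expedia, where they largely didn't). A toy sketch with invented numbers, not figures from the study:

```python
def diff_in_diff(treat_before: float, treat_after: float,
                 ctrl_before: float, ctrl_after: float) -> float:
    """Difference-in-differences estimate: the change in the treated
    group minus the change in the control group."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical average ratings, purely for illustration:
effect = diff_in_diff(treat_before=4.02, treat_after=4.14,   # TripAdvisor
                      ctrl_before=3.98, ctrl_after=3.99)     # Expedia
print(round(effect, 2))  # 0.11
```

Because the control group's change is subtracted out, any industry-wide trend affecting both sites equally cancels, leaving the effect attributable to the treatment itself.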
The authors also checked whether customers were simply happier because of improvements made by hotels as a result of TripAdvisor reviews. To do this, they compared hotel visitors whose reviews were written before and after the hotel started responding. The surprising conclusion: The mere fact that consumers saw the hotel responding caused reviews to be 0.1 stars higher. This and the other benefits applied when management responded to positive reviews as well as negative.
In a helpful summary of the study published earlier this year in Harvard Business Review, the authors offer the following conclusion: “While negative reviews are unavoidable, our work shows that managers can actively participate in shaping their firms’ online reputations. By monitoring and responding to reviews, a manager can make sure that when negative reviews come in—as they inevitably will—they can respond constructively and maybe even raise their firm’s rating along the way.”
A related line of research is well represented in a study by professors Xun Xu, Xuequn Wang, Yibai Li, and Mohammad Haghighi. Their study, entitled "Business intelligence in online customer textual reviews: Understanding consumer perceptions and influential factors," appeared in the International Journal of Information Management in 2017. The authors focus not on star ratings but on a thornier subject: the unstructured textual content of reviews.
The study reflects the important fact that emerging technologies designed to extract insight from so-called dark data, such as sentiment analysis and natural language processing, can be used to help businesses understand exactly what kinds of topics are driving consumers to write reviews and how review content can help businesses improve performance.
In this case, the authors use Latent Semantic Analysis (LSA) to examine thousands of hotel reviews from Booking.com, in order to test whether consumer sentiment is influenced by factors such as the purpose of travel (business or leisure) and hotel type (independent or chain). The authors find that hotel customers focus on different factors depending on their predetermined expectations. For example, customers of two-star hotels are more interested in basics like price and cleanliness, whereas four-star hotel customers expect a much more luxurious stay with special amenities and highly attentive service.
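To make the idea of topic extraction concrete, here is a deliberately crude stand-in for the study's approach: counting how many reviews touch each topic via keyword lists. (The study itself uses LSA, which infers topics statistically rather than from hand-picked keywords; the topic lists and sample reviews below are invented for illustration.)

```python
from collections import Counter

# Hand-picked keyword lists, a toy proxy for statistically inferred topics
TOPICS = {
    "price":       {"price", "cheap", "expensive", "value"},
    "cleanliness": {"clean", "dirty", "spotless"},
    "amenities":   {"pool", "spa", "breakfast", "wifi"},
    "service":     {"staff", "service", "friendly", "attentive"},
}

def topic_counts(reviews):
    """Count how many reviews mention each topic at least once."""
    counts = Counter()
    for review in reviews:
        words = set(review.lower().split())
        for topic, keywords in TOPICS.items():
            if words & keywords:
                counts[topic] += 1
    return counts

reviews = [
    "great value and a very clean room",
    "the staff were friendly but the pool was dirty",
    "expensive for what you get",
]
print(topic_counts(reviews))
```

Even this toy version shows the shape of the insight: aggregated over thousands of reviews, topic frequencies reveal which factors are driving guests to write, and how those factors differ by hotel segment.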
Hotels that understand the predetermined expectations of their guests will stand a better chance of exceeding those expectations and winning praise from future reviewers. Those that fall short can focus on the specific improvements that will mean the most to their guests.
But the specifics of the study, in this case, are less important to me than the general observation that topic analysis can uncover the deeper value of review content. As the authors write, “Online textual reviews can provide a way for businesses to understand customer needs and improve their products and services. Compared with customer ratings, online textual reviews can show more details about customers’ consumption experiences and customer perceptions because of their open structure. Thus, managers can obtain more insights regarding customers’ expectations and needs and their perceived quality of product and services.”
It’s on the basis of well-documented, professional research like this that reputation management companies can build a strong business case for brands to allocate budget to review monitoring, response, and analysis.