Articles

This Is How NLP Can Help Managers Better Understand Client Satisfaction

Forbes | 5 min read

When managers make strategic decisions, one question they need to answer is why, and to what extent, their clients are satisfied. Those answers should feed into decision-making processes aimed at increasing user satisfaction, and clients’ written comments are a valuable source for them. Reading every comment individually, however, is impractical, and no single reader can weigh all the factors shaping clients’ opinions. Artificial Intelligence (AI) has proven to be one of the most efficient ways to extract key findings from vast amounts of data, and digital technology has created new ways to collect user feedback (for example, using topic models to aggregate and summarize large datasets of user opinions). Traditional surveys remain valuable in some areas, but they are too limited to deliver representative, continuously updated insights on user satisfaction. Working with these newer digital methods brings its own challenges, as they demand a degree of data science expertise. In a recent study, Radoslaw Kowalski, Slava Jankin, and I unpack how to analyze large volumes of written user feedback using natural language processing and explore how text analysis with machine learning can help managers make better decisions based on user satisfaction data.

As mentioned, digital technologies have opened up many new opportunities to collect user feedback. Compared with traditional survey methods, these data sources can be far more insightful, as they capture the full range of user opinions about different services and products. However, they usually consist of unstructured text that is hard to summarize. So how should organizations process this text? One option would be to create a department that analyzes each user review individually, but in most cases the resources this would require are prohibitive. Instead, recent advances in data science and AI allow a different approach to these large unstructured datasets. In our study, we examine the obstacles associated with using this unstructured written feedback and present a way to summarize large numbers of reviews: natural language processing models.

One way to incorporate users’ opinions into decision-making is to process their voices with machine-learning techniques, an improvement over methods such as one-off data collection, experiments, or surveys. Surveys in particular are static tools: they cannot track changes taking place inside organizations and are unsuitable for continuous opinion monitoring. Users’ written reviews of the services or products they use are another way to capture their voice, and they can help managers incorporate feedback when deciding how to improve those same services or products. Written online reviews let managers analyze text with machine learning to make the most of user satisfaction data, and they enable dynamic performance monitoring of the decisions that result.

We tested this approach using a dataset of online reviews from the United Kingdom’s National Health Service (NHS) website to analyze citizens’ opinions about the public healthcare system in England. More specifically, we applied machine learning to online reviews of publicly funded primary care services in England, with the aim of comparing the survey instrument to unstructured online reviews. Between 2013 and 2017, anonymous users posted between 3,000 and 5,000 written comments per month, each accompanied by 5-point Likert-scale star ratings of six aspects of their experience with their general practitioners (GPs). The key topics that emerged from the written comments and were used for topic modeling included thanking doctors, complaining about reception staff, and commenting on the quality of GP facilities.
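To picture the resulting dataset, each review can be stored as one row pairing its topic shares with its star rating, then aggregated for monitoring. The schema below (practice IDs, column names, values) is entirely hypothetical:

```python
import pandas as pd

# Hypothetical schema: each review carries its topic proportions from
# the topic model plus the user's 1-5 star rating.
reviews = pd.DataFrame({
    "practice_id":        ["A", "A", "B", "B"],
    "month":              ["2015-01", "2015-02", "2015-01", "2015-02"],
    "topic_thank_doctor": [0.70, 0.10, 0.20, 0.60],
    "topic_reception":    [0.10, 0.80, 0.30, 0.20],
    "topic_facilities":   [0.20, 0.10, 0.50, 0.20],
    "overall_rating":     [5, 2, 3, 4],   # 5-point Likert star rating
})

# Aggregating to practice level supports continuous monitoring dashboards.
by_practice = reviews.groupby("practice_id")["overall_rating"].mean()
print(by_practice)
```

Because new reviews arrive every month, this table can be refreshed continuously — exactly the dynamic monitoring that one-off surveys cannot provide.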

The topic-model results from the online reviews suggest that, although patients often complain about difficulties in accessing GP services, access is not the most important determinant of health service (dis)satisfaction among users. Instead, the way GP staff members treat patients determines whether patients rate their service experiences higher or lower. These results therefore suggest that what would really increase public service user satisfaction is a change in the NHS’s communication style. In other words, topic modeling has identified a concrete strategy that could help managers improve the quality of public services and align them with users’ demands. By contrast, surveys that omit important issues identified by patients may lead the NHS toward suboptimal or less relevant decisions when trying to improve patients’ service experience.

We used machine-learning classification algorithms, such as random forests (RF), to study the relationship between unstructured data (online reviews summarized with topic models) and structured data (survey responses). An RF builds an ensemble of decision trees, each trained on a randomly drawn sample of the data using a small random subset of predictors. The results indicate that the topics generated from online reviews were indeed related to users’ survey responses. As mentioned, the relationship between service users and GP staff was among the most important satisfaction-related factors for GP services. Understanding the determinants of patient (dis)satisfaction through machine-learning tools may therefore prove useful in government efforts to increase user satisfaction and align the public sector’s objectives with citizens’ demands.
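A minimal sketch of this step, on simulated data: topic shares serve as predictors of the star rating, and the forest's feature importances indicate which topics matter most. The data generation below deliberately makes the staff-treatment topic dominant, echoing the study's finding, but the numbers are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical topic shares per review.
staff_tone = rng.uniform(0, 1, n)   # how positively staff treatment is discussed
access     = rng.uniform(0, 1, n)   # ease of getting an appointment
facilities = rng.uniform(0, 1, n)   # quality of the premises
X = np.column_stack([staff_tone, access, facilities])

# Simulated ratings driven mostly by staff treatment (an assumption
# built into this toy example, mirroring the study's result).
y = (staff_tone + 0.2 * access + 0.1 * rng.normal(0, 0.1, n) > 0.5).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Feature importances show which topics best predict the rating.
for name, imp in zip(["staff_tone", "access", "facilities"],
                     rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In the real analysis, a high importance for the staff-treatment topic is what links the unstructured review text back to the structured survey responses.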

Finally, we used fixed-effects models to evaluate how consistently the topics identified in the comments correlated with the star ratings users gave. Our results show that what patients write in their reviews is correlated with how they rate their experiences.

In conclusion, these AI methods could allow managers to take important steps forward to better understand client satisfaction, helping them to bring service and product users closer to the organization’s decision-making processes.