Finding fake feedback
Algorithm thwarts fake eCommerce feedback and rating attacks
Today is Cyber Monday, the Monday after Thanksgiving when brands encourage customers to shop online for deals. How can consumers trust the customer feedback posted on online shopping sites when making a purchasing decision? Conversely, how can a company protect its reputation from false negative feedback? Researchers in Australia hope to answer these questions with computer software that can detect false feedback and ensure the integrity of eCommerce trust management systems. They provide details in the International Journal of Trust Management in Computing and Communications.
Soon Keow Chong and Jemal Abawajy of the Parallel and Distributed Computing Lab at Deakin University, Geelong, Australia, explain that trust management is a vital component of any eCommerce site; it forms and maintains the relationships between trading partners. However, it relies on feedback proffered by the trading partners and as such is not infallible. There is always the potential for feedback to be manipulated strategically to the detriment of the site's reputation on a small scale, and in the worst case a site might suffer a "rating attack" that causes serious damage to brand and company image.
The team has now developed an algorithm that can identify and block falsified feedback before it reaches a site's trust management system, making the system more robust against rating manipulation attacks. The team points out that the algorithm can detect when an established, credible user who has built up trust on a system suddenly begins cheating, or when a multitude of new users push false feedback onto the site.
The team explains that the feedback verification scheme uses a clustering algorithm to group similar ratings together and define the majority rating. The trust value of the rater is based on his/her past behavior and the frequency of rating submissions. In order to determine the quality of a rating, the team uses a trust threshold which designates a minimum value required to establish the trust relationship. All ratings that fall within the majority cluster are combined with the trust value of the rater, the transaction frequency and the transaction value to determine the credibility of the ratings.
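The sketch below illustrates the general idea of that step in Python. The article does not publish the clustering method, trust formula, or threshold value, so grouping ratings by their integer score, the 20-rating saturation point, and the 0.5 threshold are all illustrative assumptions, not the authors' algorithm.

```python
from collections import defaultdict

# Hypothetical parameter; the paper does not publish the actual threshold.
TRUST_THRESHOLD = 0.5   # minimum rater trust needed to establish a trust relationship


def majority_cluster(ratings):
    """Group similar ratings together and return the majority rating.

    `ratings` is a list of (rater_id, score) pairs on a 1-5 scale.
    Grouping by identical score stands in for the clustering step.
    """
    clusters = defaultdict(list)
    for rater_id, score in ratings:
        clusters[score].append(rater_id)
    majority_score = max(clusters, key=lambda s: len(clusters[s]))
    return majority_score, clusters[majority_score]


def rater_trust(history):
    """Estimate a rater's trust value from past behaviour and rating frequency.

    `history` is a list of booleans: True where a past rating agreed with the
    majority cluster of its transaction. The formula is an illustrative stand-in.
    """
    if not history:
        return 0.0                                  # new raters have no accumulated trust
    agreement = sum(history) / len(history)          # past behaviour
    frequency = min(len(history) / 20.0, 1.0)        # saturates after ~20 submissions
    return agreement * frequency


# Example: ten raters score one product; two outliers push a false low score.
ratings = [("u1", 5), ("u2", 5), ("u3", 4), ("u4", 5), ("u5", 5),
           ("u6", 4), ("u7", 5), ("u8", 5), ("u9", 1), ("u10", 1)]
score, members = majority_cluster(ratings)
print(f"majority rating: {score}, supported by {len(members)} raters")

established = rater_trust([True] * 18 + [False] * 2)   # long, mostly honest history
newcomer = rater_trust([])                              # no history at all
print(f"established rater trust {established:.2f}, meets threshold: {established >= TRUST_THRESHOLD}")
print(f"new rater trust {newcomer:.2f}, meets threshold: {newcomer >= TRUST_THRESHOLD}")
```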
The algorithm then adds "weight" (credibility) depending on various factors: rating frequency, total submissions, low-value versus high-value transactions, total feedback on a given product and other parameters. It then determines whether any given piece of feedback falls below a set credibility threshold, flags those that do as false, and excludes them from the trust management system; rejected feedback also counts against the user's individual trust value.
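A minimal sketch of that weighting and rejection step is shown below. The individual weights, the 0.4 credibility threshold, and the 0.05 trust penalty are assumptions for illustration; the article only lists the factors that feed the score, not how they are combined.

```python
# Illustrative weights and threshold only; the published algorithm's exact
# values are not given in the article.
CREDIBILITY_THRESHOLD = 0.4


def rating_credibility(rater_trust, in_majority_cluster,
                       transaction_value, rating_frequency, total_feedback):
    """Combine the factors the article lists into a single credibility score in [0, 1]."""
    score = 0.0
    score += 0.4 * rater_trust                                # rater's past behaviour
    score += 0.2 * (1.0 if in_majority_cluster else 0.0)      # agreement with the majority rating
    score += 0.2 * min(transaction_value / 100.0, 1.0)        # high-value transactions weigh more
    score += 0.1 * min(rating_frequency / 10.0, 1.0)          # how often the rater submits ratings
    score += 0.1 * min(total_feedback / 50.0, 1.0)            # how much feedback the product has
    return score


def accept_rating(rating, rater, product):
    """Return True if the rating is added to the trust management system."""
    cred = rating_credibility(rater["trust"], rating["in_majority"],
                              rating["transaction_value"],
                              rater["rating_frequency"], product["total_feedback"])
    if cred < CREDIBILITY_THRESHOLD:
        # Rejected feedback also scores against the rater's own trust value.
        rater["trust"] = max(0.0, rater["trust"] - 0.05)
        return False
    return True


# A low-trust newcomer disagreeing with the majority on a cheap transaction is rejected.
rater = {"trust": 0.1, "rating_frequency": 1}
rating = {"in_majority": False, "transaction_value": 5.0}
product = {"total_feedback": 40}
print("accepted:", accept_rating(rating, rater, product))
print("rater trust after rejection:", rater["trust"])
```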