14/08/2015

By George Tye, Client Services Director at data consultancy CCR

Customer data is the lifeblood of any business and the amount of data we accumulate is growing exponentially, making it difficult for many organisations to manage it effectively. But what is the real cost of bad data? Recent research from Experian Data Quality shows that inaccurate data has a direct impact on the bottom line of 88% of companies, with the average company losing 12% of its revenue due to bad data.

One of the biggest data issues is duplication – where customers appear more than once on a database. This might not sound like the most pressing challenge but it is one that can have a lasting impact, not only on an organisation’s bottom line, but on its reputation too. The hidden costs of poorly managed data and duplicate records may be even greater than simply lost revenue — 28% of UK businesses that have had problems delivering direct mail or email campaigns say that customer service has suffered as a result, while 21% experienced reputational damage.

At a time when most marketers are striving to improve the performance and efficiency of their campaigns, eliminating duplicate records should be at the top of the ‘to do’ list. And yet, in our experience, as many as 30% of the records on an organisation’s database can be duplicates. So what impact can this have on a business?

How duplicate records harm your business:

The most obvious issue is marketing budget wastage. If you send five copies of the same mailpack to one person, you’re paying for duplicate print and postage costs, and the excess mailings drag down response rates and campaign ROI.

You can also harm your brand by bombarding one customer with multiple mailings — it looks unprofessional and any green credentials a brand has are totally discredited by repeat mailings to the same customers.

In addition, duplicates create reporting issues and render a Single Customer View unfeasible, making it difficult to get a clear picture of customers and their behaviour, which can lead to poor targeting and decision making. Duplicates can create problems in other areas of the business too: if someone calls with a customer service issue, it is much harder to resolve if you can’t identify who they are because the customer ID they ordered under is different to the one you find when you search your database.

How do duplicates happen?

The main root of the problem is human error. A customer may volunteer their information to your company twice, in slightly different ways. For example:

Tye George, CCR, Unit 4 Minton Distribution Park, London Rd, Amesbury, Wilts, SP4 7RT

versus

Tye G, CCR Data Ltd, London Road, Amesbury, Wiltshire, SP4 7RT

These may be subtle differences, but they’re enough to put the same customer on a database twice. Similarly, a keystroke error can creep in when data is entered manually. And if a telesales operator can’t find a customer record quickly when a customer rings with an order, they will often simply add a new record.
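To see how software can spot that those two entries belong to the same person, here is a minimal sketch in Python. The ‘surname plus postcode’ blocking key is a hypothetical simplification; real deduplication tools combine many such keys with postal and name reference data.

```python
import re

# UK postcode pattern (simplified; real validation is more involved)
POSTCODE_RE = re.compile(r"[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}", re.I)

def match_key(record: str) -> tuple[str, str]:
    """Hypothetical blocking key: surname plus postcode.

    Assumes the surname is the first word of the record, which holds
    for the two example entries above but not in general.
    """
    m = POSTCODE_RE.search(record)
    postcode = m.group().replace(" ", "").upper() if m else ""
    surname = record.split(",")[0].split()[0].lower()
    return (surname, postcode)

a = "Tye George, CCR, Unit 4 Minton Distribution Park, London Rd, Amesbury, Wilts, SP4 7RT"
b = "Tye G, CCR Data Ltd, London Road, Amesbury, Wiltshire, SP4 7RT"

# Despite the surface differences, both records reduce to the same key
assert match_key(a) == match_key(b)
```

Despite abbreviated names, different company names and spelled-out counties, both entries collapse to the same key, flagging them as a potential duplicate pair for review.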

What should you do if you have duplicates?

To counteract the problem, businesses need to carry out data deduplication. This blends human insight, data processing and algorithms to identify potential duplicates based on likelihood scores and common sense. A good data consultancy will be able to analyse your database and provide a free data audit identifying potential duplicate records. They can then unravel these to identify definite duplicates, or cases where, for instance, it’s different members of the same family living in the same house.

They will then help create deduplication rules and a bespoke deduplication strategy. The rules should take into account your decisions about how ‘strict’ you want to be with deduplication, in terms of maintaining the balance between losing valuable customer data and having a clean database. As part of this, data should be manually scanned and reviewed to check for any anomalies or duplicates that are obvious to the human eye.
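As an illustration of how a likelihood score and a strictness setting interact, here is a sketch using Python’s standard library. The character-level SequenceMatcher ratio is a stand-in for the field-weighted scoring a real deduplication engine would use, and the threshold values are arbitrary assumptions.

```python
from difflib import SequenceMatcher

def likelihood(rec_a: str, rec_b: str) -> float:
    """Crude likelihood score: character-level similarity of two records
    after lower-casing. A real engine would weight name, address and
    postcode fields separately."""
    return SequenceMatcher(None, rec_a.lower(), rec_b.lower()).ratio()

# The 'strictness' decision is, in effect, just the threshold you apply
STRICT, RELAXED = 0.90, 0.70  # illustrative thresholds, not recommendations

score = likelihood(
    "Tye George, CCR, London Rd, Amesbury, Wilts, SP4 7RT",
    "Tye G, CCR Data Ltd, London Road, Amesbury, Wiltshire, SP4 7RT",
)
print(f"likelihood score: {score:.2f}")
print("flagged under relaxed rules:", score >= RELAXED)
print("flagged under strict rules:", score >= STRICT)
```

Lowering the threshold catches more duplicates but risks merging genuinely different customers (such as family members at the same address); raising it does the reverse. That trade-off is exactly the ‘strictness’ decision described above.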

You also need to decide how to manage the duplicates. You may decide to flag and exclude duplicates from future campaigns, remove them entirely, or merge key information from across all the records into one unique ‘master record’. If you choose the latter, you will need to decide which record to keep as the ‘master’: do you keep the first record created, the most recent one, or the one with the highest number or value of purchases?
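The merge-to-master step can be sketched as follows. The field names, customer IDs and gap-filling rules here are hypothetical, chosen only to show the mechanics of keeping the most recent record while merging key information from the others.

```python
from datetime import date

# Hypothetical duplicate cluster: one customer held under three IDs
cluster = [
    {"id": 101, "name": "Tye George", "email": None,
     "created": date(2012, 3, 1), "lifetime_value": 250.0},
    {"id": 204, "name": "Tye G", "email": "g.tye@example.com",
     "created": date(2014, 6, 9), "lifetime_value": 90.0},
    {"id": 377, "name": "George Tye", "email": None,
     "created": date(2015, 1, 20), "lifetime_value": 0.0},
]

def merge_to_master(records, keep="most_recent"):
    """Pick a master record, fill its gaps from the others, and roll the
    purchase values up into one figure."""
    if keep == "most_recent":
        master = max(records, key=lambda r: r["created"]).copy()
    else:  # keep == "highest_value"
        master = max(records, key=lambda r: r["lifetime_value"]).copy()
    # Fill missing contact details from the records being retired
    for r in records:
        if master["email"] is None and r["email"]:
            master["email"] = r["email"]
    # Purchase history belongs to the customer, not the duplicate record
    master["lifetime_value"] = sum(r["lifetime_value"] for r in records)
    return master

master = merge_to_master(cluster)
```

Note that the master keeps the newest record’s ID and name but inherits the email address from an older record, and the purchase values of all three records are rolled into one, so no history is lost.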

However you decide, the result will be a clean and efficient marketing database.

Maintenance equates to success:

Once your database is clean, it’s essential to keep it maintained and free of duplicates as follows:

1. Create a rigorous process including stricter quality control on data capture or restrictions on who can create new data records.

2. Build duplicate management into your ongoing database strategy, assessing it regularly (and particularly prior to campaigns).

3. Recognise that the benefits of deduplicating your database outweigh the costs of getting your data clean. It’s worth the investment: consultancy fees are significantly lower than the revenue and reputation you stand to lose through poor CRM caused by duplicate records.
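Rule 1 above, stricter quality control at the point of data capture, might look something like this in code. The index structure and the surname-plus-postcode key are illustrative assumptions, not a prescribed design; the point is that the check happens before a new record is created, not after.

```python
# Hypothetical guard applied at the point of data capture: before a new
# customer record is created, look for an existing one under the same
# surname-plus-postcode key and force a review instead of a silent insert
existing_index = {("tye", "SP47RT"): 101}  # illustrative customer index

def create_record(surname: str, postcode: str, index: dict) -> int:
    """Create a new customer ID, refusing if a likely duplicate exists."""
    key = (surname.lower(), postcode.replace(" ", "").upper())
    if key in index:
        raise ValueError(
            f"Possible duplicate of customer {index[key]}; review before creating"
        )
    new_id = max(index.values(), default=100) + 1
    index[key] = new_id
    return new_id

# A genuinely new customer is accepted...
print(create_record("Smith", "SW1A 1AA", existing_index))
# ...while re-keying an existing one would raise an error for review
```

In practice this check would sit inside the order-entry or CRM system, so a telesales operator who cannot find a customer is prompted to review a likely match rather than quietly adding a fifth copy of the same person.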