Remove duplicate records from a table

A well-known problem: a table has an id as its primary key (PK), but some records share the same values in all the other fields. The script must remove all duplicates and keep the record with the smallest id.

For example, consider a Customers table with a SERIAL as PK but no UNIQUE constraint on the other fields. Two or more employees are able to create customers, so after a certain amount of time there is a fairly high probability of having duplicates.

The job itself is done with a single statement, but we first need to create a table with duplicates and compute some statistics in order to verify our script.

--
-- Create table, populate it, take a backup
--
drop table if exists dups_test;
drop table if exists dups_test_backup;

create table dups_test (
 id int primary key
 ,title text
 ,info int
);

insert into dups_test
 select i, 'Record #' || i::text, i
 from generate_series(1, 10000) i;

create table dups_test_backup as select * from dups_test;

By the way, we can see how the PostgreSQL function generate_series is used.

The first records of dups_test:

select * from dups_test order by id;

gives

id   title         info
---- ----------    -----
1    Record #1	   1
2    Record #2     2
3    Record #3	   3
...

Let’s now introduce some duplicates. The ids must be different because id is the PK, but the other fields, title and info, will be taken from the initial data. To guarantee uniqueness, we use a sequence starting at the size of the initial table + 1.

--
-- insert duplicates into the table
--
create sequence dups_seq minvalue 10001;

insert into dups_test (id, title, info)
 select nextval('dups_seq'), title, info 
 from dups_test 
 order by random()  
 limit 1000;

insert into dups_test (id, title, info)
 select nextval('dups_seq'), title, info 
 from dups_test 
 order by random()  
 limit 500;

insert into dups_test (id, title, info)
 select nextval('dups_seq'), title, info 
 from dups_test 
 order by random()  
 limit 250;

insert into dups_test (id, title, info)
 select nextval('dups_seq'), title, info 
 from dups_test 
 order by random() 
 limit 125;

drop sequence dups_seq;

By the way, this shows how to pick n random records from a table: SELECT * FROM table ORDER BY random() LIMIT n.
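As a quick aside, the same ORDER BY random() LIMIT n idiom also works in SQLite, so it can be tried without a PostgreSQL server — a minimal sketch using Python's built-in sqlite3 module, with a made-up demo table:

```python
import sqlite3

# In-memory SQLite database with a small demo table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("create table demo (id integer primary key, title text)")
conn.executemany("insert into demo values (?, ?)",
                 [(i, f"Record #{i}") for i in range(1, 101)])

# Pick 5 random rows: same ORDER BY random() LIMIT n idiom as in PostgreSQL.
rows = conn.execute(
    "select id, title from demo order by random() limit 5").fetchall()
print(rows)  # 5 rows picked at random
conn.close()
```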

We see that 1875 (1000 + 500 + 250 + 125) records have been added:

select count(*) from dups_test;

gives

count
-----
11875

Some records of dups_test:

select title, info, id from dups_test order by title, info, id;

gives

title          info   id
-----------    ----   -----
...
Record #1490   1490   1490
Record #1491   1491   1491
Record #1491   1491   11491
Record #1491   1491   32491
Record #1492   1492   1492
Record #1493   1493   1493
Record #1493   1493   11493
Record #1493   1493   21493
Record #1494   1494   1494
...

To see the distribution of the duplicates, we execute the following query

select distinct count(*) as count_records, t.countd as count_dups 
from (
 select info, count(*) as countd
 from dups_test
 group by (info)
 having count(*) > 1) as t
group by t.countd
order by t.countd;

that gives

count_records	count_dups
-------------   ----------
1498            2
167             3
13              4
1               5

This means that 1498 records appear 2 times with the same fields, 167 appear 3 times, and so on (the same title and info, but not the same id, since id is the PK). A record can end up duplicated more than once because each INSERT selects from a table that already contains the duplicates added by the previous INSERTs.
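We can cross-check these figures against the 1875 rows we inserted: a group counted under count_dups = k contains k - 1 extra copies of the original record. A few lines of Python confirm the arithmetic:

```python
# Distribution reported by the query: (count_records, count_dups) pairs.
distribution = [(1498, 2), (167, 3), (13, 4), (1, 5)]

# Each group of k identical rows contains k - 1 duplicates of the original.
extra_rows = sum(records * (dups - 1) for records, dups in distribution)
print(extra_rows)  # 1875, matching the four INSERT ... LIMIT statements
```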

And now, we delete duplicates with this tricky query:

--
-- remove duplicates
--
delete from dups_test 
 where id in (
  select id
   from (
    select 
     id
     ,row_number() over (partition by title,info order by id ) as r
    from dups_test ) t 
   where t.r > 1);

that gives

DELETE 1875

How does this tricky query work?

row_number() is a window function: a function that performs a calculation across a set of table rows that are somehow related to the current row, much like an aggregate function. But unlike a regular aggregate, a window function does not cause rows to be grouped into a single output row: the rows retain their separate identities. The OVER (PARTITION BY title, info ORDER BY id) clause defines the partition, i.e. the set of rows sharing the same title and info, and orders the rows within it by id.

row_number() gives the number of the current row within its partition, counting from 1.

Let’s execute the following query on the subset we examined before:

select 
  id
  ,title
  ,info
  ,row_number() over (partition by title,info ORDER BY id ) as r
 from dups_test
 where info in (1490, 1491, 1492, 1493, 1494)
 order by title, info

gives

id      title          info   r
----    ------------   ----   -
1490    Record #1490   1490   1 
1491    Record #1491   1491   1 
11491   Record #1491   1491   2 
32491   Record #1491   1491   3 
1492    Record #1492   1492   1 
1493    Record #1493   1493   1 
11493   Record #1493   1493   2 
21493   Record #1493   1493   3 
1494    Record #1494   1494   1

Deleting the records whose row number is greater than 1 in each partition removes the duplicates and keeps, in each group, the record with the smallest id.
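The whole technique can be replayed end to end outside PostgreSQL — here a minimal sketch in Python with the built-in sqlite3 module (SQLite 3.25 or later is assumed for window-function support; the toy data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table dups_test (id integer primary key, title text, info integer);
insert into dups_test values
  (1, 'Record #1', 1),
  (2, 'Record #2', 2),
  (3, 'Record #2', 2),   -- duplicate of id 2
  (4, 'Record #2', 2),   -- duplicate of id 2
  (5, 'Record #3', 3);
""")

# Delete every row whose row_number within its (title, info) group is > 1,
# keeping the smallest id of each group.
conn.execute("""
delete from dups_test
 where id in (
  select id from (
   select id,
          row_number() over (partition by title, info order by id) as r
   from dups_test) t
  where t.r > 1)
""")

remaining = [r[0] for r in conn.execute(
    "select id from dups_test order by id")]
print(remaining)  # [1, 2, 5]
conn.close()
```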

We now have to verify that all the duplicates have been deleted, and only the duplicates.

select count(*) from dups_test;

gives the initial number of rows:

count
-----
10000

Good !

And we now rerun the query that computes the distribution of the duplicates.

select distinct count(*) as count_records, t.countd as count_dups 
from (
 select info, count(*) as countd
 from dups_test 
 group by (info) having count(*) > 1) as t 
group by t.countd order by t.countd; 

gives

count_records	count_dups
-------------   ----------
(0 rows)

Good !

And then we check the final table against the backup:

select count(*)
from dups_test a
full outer join dups_test_backup b
on a.id = b.id
where a.title = b.title and a.info = b.info;

gives

count
-----
10000

This query, which compares all fields, returns the initial number of rows.

A little explanation about OUTER JOINs:

table1 LEFT OUTER JOIN table2 ON table1.id = table2.id gives all the records of table1, even those with no corresponding record in table2.

table1 RIGHT OUTER JOIN table2 ON table1.id = table2.id gives all the records of table2, even those with no corresponding record in table1.

table1 FULL OUTER JOIN table2 ON table1.id = table2.id gives all the records of both table1 and table2, matched on id where possible.
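These three behaviours can be sketched in plain Python over two small id-to-value mappings — a toy model of the join semantics only, not of any SQL engine (None plays the role of NULL):

```python
table1 = {1: "a", 2: "b"}          # ids 1, 2
table2 = {2: "x", 3: "y"}          # ids 2, 3

# LEFT OUTER JOIN: every id of table1, None where table2 has no match.
left = {i: (table1[i], table2.get(i)) for i in table1}

# RIGHT OUTER JOIN: every id of table2, None where table1 has no match.
right = {i: (table1.get(i), table2[i]) for i in table2}

# FULL OUTER JOIN: every id of either table, None on the side that misses it.
full = {i: (table1.get(i), table2.get(i))
        for i in table1.keys() | table2.keys()}

print(sorted(full))  # [1, 2, 3]
```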

Alternatively, the following query does the same job by searching for records having the same id but not the same title or info:

select count(*)
from dups_test a
full outer join dups_test_backup b
on a.id = b.id
where a.title != b.title or a.info != b.info;

gives

count
-----
0

It seems that everything is OK.

By the way, have a look at the last two queries. The only difference is the WHERE clause:

where a.title = b.title and a.info = b.info

giving all the records and

where a.title != b.title or a.info != b.info

giving no records.

It is a good example of De Morgan's laws:

The negation of a conjunction is the disjunction of the negations.
The negation of a disjunction is the conjunction of the negations.

or, in other words:

not (A AND B) = (not A) OR (not B)
not (A OR B) = (not A) AND (not B)
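These identities can be checked exhaustively over the four combinations of boolean inputs (note that in SQL the equivalence of our two WHERE clauses also relies on there being no NULLs involved, since comparisons with NULL yield NULL rather than false):

```python
from itertools import product

# Check both De Morgan identities over every combination of A and B.
checks = [
    (not (a and b)) == ((not a) or (not b)) and
    (not (a or b)) == ((not a) and (not b))
    for a, b in product([True, False], repeat=2)
]
print(all(checks))  # True
```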

And now, a little cleanup before we leave:

drop table dups_test_backup;

And, last but not least, a visualization of the distribution of duplicates!

When we compute statistics, it is always better to visualize the numbers. The R programming language is excellent for statistics and visualization, and R has a PostgreSQL interface.

In order to produce a nicer chart, a much larger number of duplicates has been added.

And in order to avoid putting SQL in the front-end application, as is good practice in a client-server architecture, we create the following view that embeds the query we used before:

create or replace view stat_on_dups as
 select distinct count(*) as count_records, t.countd as count_dups 
 from (
  select info, count(*) as countd
  from dups_test
  group by (info)
  having count(*) > 1) as t
 group by t.countd
 order by t.countd;

that gives

count_records	count_dups
-------------   ----------
2027            2
1469            3
1082            4
857             5
549             6
376             7
260             8
182             9
124             10
73              11
53              12
38              13
30              14
26              15
19              16
7               17
11              18
5               19
2               20
1               21
3               22
1               26

This R script (adapt the connection parameters to your own setup):

require("RPostgreSQL")

# establish the connection
driver <- dbDriver("PostgreSQL")
connexion <- dbConnect(driver, dbname = "higayone",
                       host = "localhost", port = 5432,
                       user = "mchl", password = "secret")

# query the database
list <- dbGetQuery(connexion, "select * from stat_on_dups;")

# create the plot
vx <- seq(0, 2000, 100)
vy <- seq(1, 30, 1)
plot(list, type = "h", col = "blue", lwd = 1,
     xlab = "Number of records",
     ylab = "Number of duplicates",
     main = "Distribution of duplicates",
     xaxt = "n", yaxt = "n")
axis(side = 2, at = vy)
axis(side = 1, at = vx)

# clean up before leaving
dbDisconnect(connexion)
dbUnloadDriver(driver)

displays:

[Rplot: histogram of the distribution of duplicates]

The x-axis shows the number of records having duplicates and the y-axis shows the number of duplicates for those records.

We see at a glance that we have a lot of records with few duplicates and few records with a lot of duplicates.
