1. Create table
/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : PostgreSQL
 Source Server Version : 110012
 Source Host           : localhost:5432
 Source Catalog        : postgres
 Source Schema         : public

 Target Server Type    : PostgreSQL
 Target Server Version : 110012
 File Encoding         : 65001

 Date: 30/07/2021 10:10:04
*/

-- ----------------------------
-- Table structure for test
-- ----------------------------
DROP TABLE IF EXISTS "public"."test";
CREATE TABLE "public"."test" (
  "id" int4 NOT NULL DEFAULT NULL,
  "name" varchar(255) COLLATE "pg_catalog"."default" DEFAULT NULL,
  "age" int4 DEFAULT NULL
);

-- ----------------------------
-- Records of test
-- ----------------------------
INSERT INTO "public"."test" VALUES (1, 'da', 1);
INSERT INTO "public"."test" VALUES (2, 'da', 12);
INSERT INTO "public"."test" VALUES (3, 'dd', 80);
INSERT INTO "public"."test" VALUES (4, 'dd', 80);
INSERT INTO "public"."test" VALUES (5, 'd1', 13);

-- ----------------------------
-- Primary Key structure for table test
-- ----------------------------
ALTER TABLE "public"."test" ADD CONSTRAINT "test_pkey" PRIMARY KEY ("id");
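As a quick sanity check (not part of the original dump), you can confirm the seed rows before deduplicating; the expected output below simply mirrors the INSERT statements above:

select * from test order by id;

-- id | name | age
--  1 | da   |   1
--  2 | da   |  12
--  3 | dd   |  80
--  4 | dd   |  80
--  5 | d1   |  13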
2. Find duplicates by name
First, let's see which rows are duplicated:
select name, count(1) from test group by name having count(1) > 1;
Output:
name count
da 2
dd 2
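If you also want to see which ids make up each duplicate group, array_agg (a standard PostgreSQL aggregate) can be added to the same query; this extra column is my addition, not part of the original check:

select name, count(1), array_agg(id) as ids
from test
group by name
having count(1) > 1;

-- name | count | ids
-- da   |     2 | {1,2}
-- dd   |     2 | {3,4}
-- (the order inside each array is not guaranteed without an ORDER BY inside array_agg)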
3. Delete all duplicate data
Note: query the rows to be removed into a derived ("third-party") table first, and then filter the DELETE against it.
delete from test where name in (select name from (select name, count(1) from test group by name having count(1) > 1) t);
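As a sketch of an equivalent formulation (my addition, not the original author's statement), the same delete can be written with EXISTS and no derived table; on the sample data it removes ids 1-4 and leaves only the non-duplicated row (5, 'd1', 13):

delete from test a
where exists (
  select 1
  from test b
  where b.name = a.name  -- another row shares this name
    and b.id <> a.id     -- and it is a different row
);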
4. Keep one row from each duplicate group
Here is the data we need to keep: among the duplicates, the row with the largest id is retained.
SELECT * FROM test WHERE id NOT IN (SELECT min(id) AS id FROM test GROUP BY name);
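Conversely, to preview what step 5 will actually delete (an extra check I am adding here, not in the original flow), flip the aggregate to max(id); with the sample data this returns the rows with id 1 and id 3:

select * from test
where id not in (select max(id) from test group by name);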
5. Delete the duplicates, keeping the largest id
DELETE FROM test WHERE id NOT IN (SELECT id FROM (SELECT max(id) AS id FROM test GROUP BY name) t);
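For reference, a common alternative in PostgreSQL is a self-join delete with USING; this sketch (my addition, not from the original article) has the same effect of keeping the largest id per name, and on tables without a usable key the system column ctid can be compared instead of id:

delete from test a
using test b
where a.name = b.name  -- same duplicate group
  and a.id < b.id;     -- a row with a larger id exists, so this one goes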
This concludes the walkthrough of deleting duplicate data in PostgreSQL. For more on removing duplicates in PostgreSQL, search my previous articles or browse the related articles below, and I hope you will continue to support me!