Remove duplicate rows in a CSV file

Below is a standalone example that shows how to filter out duplicate rows. The idea is to take the values of each dict and convert them into a tuple; because tuples are hashable, a set can then be used to track which rows have already been written.

import csv

csv_columns = ['No', 'Name', 'Country']
dict_data = [
    {'No': 1, 'Name': 'Alex', 'Country': ['India']},
    {'No': 1, 'Name': 'Alex', 'Country': ['India']},
    {'No': 1, 'Name': 'Alex', 'Country': ['India']},
    {'No': 1, 'Name': 'Alex', 'Country': ['India']},
    {'No': 2, 'Name': 'Ben', 'Country': ['USA']},
]
csv_file = "Names.csv"

with open(csv_file, 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=csv_columns)
    writer.writeheader()
    entries = set()  # keys of the rows that have already been written
    for data in dict_data:
        # Build a hashable key: join list values into a string, keep other values as-is.
        val = tuple(','.join(v) if isinstance(v, list) else v for v in data.values())
        if val not in entries:
            writer.writerow(data)  # write the row only the first time its key is seen
            entries.add(val)
print('done')

Names.csv

No,Name,Country
1,Alex,['India']
2,Ben,['USA']
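
The example above drops duplicates before the rows ever reach the file. If the duplicates are already inside an existing CSV file, the same set-of-tuples idea works on the rows read back from disk. The following is a minimal sketch of that variant; the input name Names.csv and the output name Names_dedup.csv are only placeholders for this example.

import csv

# Sketch: copy an existing CSV while skipping rows that were already seen.
# File names are placeholders for this example.
seen = set()
with open('Names.csv', newline='') as src, \
        open('Names_dedup.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        key = tuple(row)  # csv.reader yields lists of strings; a tuple is hashable
        if key not in seen:
            writer.writerow(row)
            seen.add(key)
print('done')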

