Getting a list of unique elements from a list that may contain duplicates in Python
We can use a wide range of approaches to solve this problem; common ones include:
set() (when the order of elements is not a concern)
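For example, a minimal sketch (`ori_list` here is the same sample list used later in this post):

```python
ori_list = [2, 2, 3, 5, 7, 11, 11, 11, 13]

# set() drops duplicates but does not promise any particular order
res = list(set(ori_list))
```

If a stable order is still wanted, `sorted(set(ori_list))` gives sorted (though not original) order.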
a simple loop over the original list, appending each element to the result list only if it is not already there; the loop can also be condensed into a list comprehension:
res = []
for e in ori_list:
    if e not in res:
        res.append(e)
# list comprehension
# ori_list = [2, 2, 3, 5, 7, 11, 11, 11, 13]
res = []
[res.append(e) for e in ori_list if e not in res]  # no assignment!
>>> res
[2, 3, 5, 7, 11, 13]
Using OrderedDict from the standard library (built-in, efficient)
from collections import OrderedDict
res =list(OrderedDict.fromkeys(ori_list))
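Since Python 3.7, the built-in dict also preserves insertion order, so a plain dict.fromkeys gives the same result without the import (a sketch reusing the sample ori_list):

```python
ori_list = [2, 2, 3, 5, 7, 11, 11, 11, 13]

# dict keys are unique and, since Python 3.7, keep insertion order
res = list(dict.fromkeys(ori_list))
# res is [2, 3, 5, 7, 11, 13]
```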
Using enumerate()
res = [v for i, v in enumerate(ori_list) if v not in ori_list[:i]]
Note that if the duplicates are always consecutive, we can also use the following tools from itertools:
groupby() + list comprehension
from itertools import groupby
res = [item[0] for item in groupby(ori_list)]
zip_longest() + list comprehension
from itertools import zip_longest
res = [i for i, j in zip_longest(ori_list, ori_list[1:]) if i != j]
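As a quick sanity check that the two itertools approaches agree (a sketch using the sample list from earlier in this post):

```python
from itertools import groupby, zip_longest

ori_list = [2, 2, 3, 5, 7, 11, 11, 11, 13]

# groupby() collapses each run of equal, adjacent elements into one key
by_group = [k for k, _ in groupby(ori_list)]

# zip_longest() pairs each element with its successor (None past the end)
# and keeps an element only when it differs from the next one
by_zip = [i for i, j in zip_longest(ori_list, ori_list[1:]) if i != j]

assert by_group == by_zip == [2, 3, 5, 7, 11, 13]
```

One caveat with the zip_longest variant: its default fillvalue is None, so it can silently drop a trailing None element from the input list; groupby() does not have this problem.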