Translated from: How do I find the duplicates in a list and create another list with them?
How can I find the duplicates in a Python list and create another list of the duplicates? The list only contains integers.
#1
>>> l = [1,2,3,4,4,5,5,6,1]
>>> set([x for x in l if l.count(x) > 1])
set([1, 4, 5])
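Note that l.count(x) rescans the whole list for every element, so the comprehension above is O(n²). Not from the original answer, but a minimal O(n) sketch using the standard library's collections.Counter gives the same values (output order may vary by Python version):
>>> from collections import Counter
>>> l = [1,2,3,4,4,5,5,6,1]
>>> [x for x, n in Counter(l).items() if n > 1]
[1, 4, 5]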
#2
list2 = [1, 2, 3, 4, 1, 2, 3]
lset = set()
dups = []
for item in list2:
    if item in lset:
        dups.append(item)   # already seen once, so this is a duplicate
    else:
        lset.add(item)
print(dups)
# [1, 2, 3]
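The same seen-set idea can be compressed into a comprehension; a sketch, relying on set.add returning None so that only already-seen items pass the filter (the same trick as the moooeeeep approach benchmarked in #5):
list2 = [1, 2, 3, 4, 1, 2, 3]
lset = set()
dups = [item for item in list2 if item in lset or lset.add(item)]
print(dups)
# [1, 2, 3]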
#3
A bit late, but maybe helpful for some. For a largish list, I found this worked for me.
>>> l = [1,2,3,5,4,1,3,1]
>>> s = set(l)
>>> d = []
>>> for x in l:
...     if x in s:
...         s.remove(x)
...     else:
...         d.append(x)
...
>>> d
[1, 3, 1]
This shows just the duplicates, keeping every repeated occurrence, and preserves order.
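If each duplicate value should instead appear only once, still in first-seen order, a small variation on the same idea (a sketch, not from the original answer):
l = [1,2,3,5,4,1,3,1]
seen = set()
dupes = []
for x in l:
    if x in seen and x not in dupes:
        dupes.append(x)     # first repeat of x
    seen.add(x)
print(dupes)
# [1, 3]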
#4
A one-line solution:
set([i for i in lst if sum([1 for a in lst if a == i]) > 1])
(Using lst rather than list avoids shadowing the built-in; the inner sum is just a count, so like the answer in #1 this is O(n²).)
#5
I came across this question whilst looking into something related, and wonder why no-one offered a generator-based solution? Solving this problem would be (getDupes_9 is defined with the other tested approaches below):
>>> print list(getDupes_9([1,2,3,2,1,5,6,5,5,5]))
[1, 2, 5]
I was concerned with scalability, so I tested several approaches, including naive items that work well on small lists but scale horribly as lists get larger (note: it would have been better to use timeit, but this is illustrative).
I included @moooeeeep for comparison (it is impressively fast: fastest if the input list is completely random) and an itertools approach that is even faster again for mostly sorted lists... Now includes the pandas approach from @firelynx: slow, but not horribly so, and simple. Note: the sort/tee/zip approach is consistently fastest on my machine for large, mostly ordered lists; moooeeeep is fastest for shuffled lists, but your mileage may vary.
Advantages
- very quick and simple to test for 'any' duplicates using the same code
Assumptions
- Duplicates should be reported once only
- Duplicate order does not need to be preserved
- Duplicate might be anywhere in the list
Fastest solution, 1m entries:
def getDupes(c):
    '''sort/tee/izip'''
    # pair each sorted element with its successor; equal pairs mark duplicates
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    for k, g in itertools.izip(a, b):
        if k != g: continue
        if k != r:          # report each duplicate value only once
            yield k
            r = k
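Not part of the original answer: itertools.izip was removed in Python 3, where the built-in zip is already lazy, so a straight port of the same generator might look like this (on Python 3.10+, itertools.pairwise(sorted(c)) could replace the tee/next pairing):
import itertools

def getDupes(c):
    '''sort/tee/zip (Python 3 sketch)'''
    a, b = itertools.tee(sorted(c))
    next(b, None)               # offset the second iterator by one
    r = None
    for k, g in zip(a, b):      # built-in zip is lazy on Python 3
        if k != g: continue
        if k != r:              # report each duplicate value only once
            yield k
            r = k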
Approaches tested
import itertools
import time
import random
def getDupes_1(c):
    '''naive'''
    for i in xrange(0, len(c)):
        if c[i] in c[:i]:
            yield c[i]

def getDupes_2(c):
    '''set len change'''
    s = set()
    for i in c:
        l = len(s)
        s.add(i)
        if len(s) == l:
            yield i

def getDupes_3(c):
    '''in dict'''
    d = {}
    for i in c:
        if i in d:
            if d[i]:
                yield i
                d[i] = False
        else:
            d[i] = True

def getDupes_4(c):
    '''in set'''
    s,r = set(),set()
    for i in c:
        if i not in s:
            s.add(i)
        elif i not in r:
            r.add(i)
            yield i

def getDupes_5(c):
    '''sort/adjacent'''
    c = sorted(c)
    r = None
    for i in xrange(1, len(c)):
        if c[i] == c[i - 1]:
            if c[i] != r:
                yield c[i]
                r = c[i]

def getDupes_6(c):
    '''sort/groupby'''
    def multiple(x):
        try:
            x.next()
            x.next()
            return True
        except StopIteration:
            return False
    for k, g in itertools.ifilter(lambda x: multiple(x[1]), itertools.groupby(sorted(c))):
        yield k

def getDupes_7(c):
    '''sort/zip'''
    c = sorted(c)
    r = None
    for k, g in zip(c[:-1],c[1:]):
        if k == g:
            if k != r:
                yield k
                r = k

def getDupes_8(c):
    '''sort/izip'''
    c = sorted(c)
    r = None
    for k, g in itertools.izip(c[:-1],c[1:]):
        if k == g:
            if k != r:
                yield k
                r = k

def getDupes_9(c):
    '''sort/tee/izip'''
    a, b = itertools.tee(sorted(c))
    next(b, None)
    r = None
    for k, g in itertools.izip(a, b):
        if k != g: continue
        if k != r:
            yield k
            r = k
def getDupes_a(l):
    '''moooeeeep'''
    seen = set()
    seen_add = seen.add
    # set.add returns None (falsy), so new items fall into seen via the 'or'
    # and only items already in seen get yielded
    for x in l:
        if x in seen or seen_add(x):
            yield x
def getDupes_b(x):
    '''iter*/sorted'''
    x = sorted(x)
    def _matches():
        for k,g in itertools.izip(x[:-1],x[1:]):
            if k == g:
                yield k
    for k, n in itertools.groupby(_matches()):
        yield k

def getDupes_c(a):
    '''pandas'''
    import pandas as pd
    vc = pd.Series(a).value_counts()
    i = vc[vc > 1].index
    for _ in i:
        yield _
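# Not part of the original benchmark: pandas also has Series.duplicated, which
# flags every occurrence after the first; .unique() then yields each duplicate
# value once. A sketch only -- untimed here.
def getDupes_c2(a):
    '''pandas/duplicated (sketch)'''
    import pandas as pd
    s = pd.Series(a)
    for v in s[s.duplicated()].unique():
        yield v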
def hasDupes(fn,c):
    try:
        if fn(c).next(): return True    # Found a dupe
    except StopIteration:
        pass
    return False

def getDupes(fn,c):
    return list(fn(c))

STABLE = True
if STABLE:
    print 'Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array'
else:
    print 'Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array'

for location in (50,250000,500000,750000,999999):
    for test in (getDupes_2, getDupes_3, getDupes_4, getDupes_5, getDupes_6,
                 getDupes_8, getDupes_9, getDupes_a, getDupes_b, getDupes_c):
        print 'Test %-15s:%10d - '%(test.__doc__ or test.__name__,location),
        deltas = []
        for FIRST in (True,False):
            for i in xrange(0, 5):
                c = range(0,1000000)
                if STABLE:
                    c[0] = location
                else:
                    c.append(location)
                    random.shuffle(c)
                start = time.time()
                if FIRST:
                    print '.' if location == test(c).next() else '!',
                else:
                    print '.' if [location] == list(test(c)) else '!',
                deltas.append(time.time()-start)
            print ' -- %0.3f '%(sum(deltas)/len(deltas)),
        print
    print
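As the note above says, timeit would have given steadier numbers than the ad-hoc time.time() loop; a minimal sketch of that, assuming one of the generators above (here getDupes_9) is in scope:
import random
import timeit

c = list(range(1000000))
random.shuffle(c)

# average of five full 'all duplicates' passes
t = timeit.timeit(lambda: list(getDupes_9(c)), number=5)
print(t / 5)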
The results for the 'all dupes' test were consistent, finding 'first' then 'all' duplicates in this array:
Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array
Test set len change : 500000 - . . . . . -- 0.264 . . . . . -- 0.402
Test in dict : 500000 - . . . . . -- 0.163 . . . . . -- 0.250
Test in set : 500000 - . . . . . -- 0.163 . . . . . -- 0.249
Test sort/adjacent : 500000 - . . . . . -- 0.159 . . . . . -- 0.229
Test sort/groupby : 500000 - . . . . . -- 0.860 . . . . . -- 1.286
Test sort/izip : 500000 - . . . . . -- 0.165 . . . . . -- 0.229
Test sort/tee/izip : 500000 - . . . . . -- 0.145 . . . . . -- 0.206 *
Test moooeeeep : 500000 - . . . . . -- 0.149 . . . . . -- 0.232
Test iter*/sorted : 500000 - . . . . . -- 0.160 . . . . . -- 0.221
Test pandas : 500000 - . . . . . -- 0.493 . . . . . -- 0.499
When the lists are shuffled first, the price of the sort becomes apparent: efficiency drops noticeably, the @moooeeeep approach dominates, and the set and dict approaches are similar but lesser performers:
Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array
Test set len change : 500000 - . . . . . -- 0.321 . . . . . -- 0.473
Test in dict : 500000 - . . . . . -- 0.285 . . . . . -- 0.360
Test in set : 500000 - . . . . . -- 0.309 . . . . . -- 0.365
Test sort/adjacent : 500000 - . . . . . -- 0.756 . . . . . -- 0.823
Test sort/groupby : 500000 - . . . . . -- 1.459 . . . . . -- 1.896
Test sort/izip : 500000 - . . . . . -- 0.786 . . . . . -- 0.845
Test sort/tee/izip : 500000 - . . . . . -- 0.743 . . . . . -- 0.804
Test moooeeeep : 500000 - . . . . . -- 0.234 . . . . . -- 0.311 *
Test iter*/sorted : 500000 - . . . . . -- 0.776 . . . . . -- 0.840
Test pandas : 500000 - . . . . . -- 0.539 . . . . . -- 0.540