Anonymous functions
lambda expressions:
def add(x, y):
    return x + y
is equivalent to
f = lambda x, y: x + y
f(1, 2)  # 3
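Lambdas are most useful where a short throwaway function is expected as an argument, e.g. as a sort key. A minimal sketch (the list contents are made up for illustration):

```python
# Sort (name, score) pairs by score, using a lambda as the key function.
pairs = [("a", 3), ("b", 1), ("c", 2)]
pairs_sorted = sorted(pairs, key=lambda p: p[1])
print(pairs_sorted)  # [('b', 1), ('c', 2), ('a', 3)]
```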
Ternary expression:
x = 1
y = 2
c = x if x > y else y  # c gets x when x > y, otherwise y
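The ternary form also fits naturally inside a lambda, combining the two ideas above; a small sketch:

```python
# max of two numbers via a lambda whose body is a ternary expression
bigger = lambda a, b: a if a > b else b
print(bigger(1, 2))  # 2
```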
map (applying a function to every element of a sequence):
###############
def square(x):
    return x * x
list_x = [1,3,10]
list_r = map(square,list_x)
###############
list_x = [1,3,10]
list_r = map(lambda x: x * x, list_x)
#################
for x in list_x:
    square(x)
Calling map() looks extremely fast compared with a for loop (the loop over a function might take ~50 ms while the map() call takes ~0.05 ms), but that is because map() in Python 3 is lazy: the call only builds an iterator and does not invoke the function at all. The real work happens when the result is consumed, which is why converting the map object to a list takes most of the time (roughly ~20 ms in the same comparison).
A map object cannot be indexed: 'map' object is not subscriptable.
But a map object can be iterated over with a for loop to retrieve the values.
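Both behaviours above, plus the laziness, can be seen in a short sketch:

```python
def square(x):
    return x * x

m = map(square, [1, 3, 10])
# m is a lazy iterator: no squaring has happened yet.
try:
    m[0]
except TypeError as e:
    print(e)  # 'map' object is not subscriptable
# Iterating (here via list()) is what actually runs square():
print(list(m))  # [1, 9, 100]
```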
Timing test (comparing a Python list with a for loop, map, numpy arrays, torch tensors, and torch tensors on CUDA):
import time
import numpy
import torch

f = lambda x: x * x * x
list_r = [i for i in range(1000000)]
print("device_count", torch.cuda.device_count())
t0 = time.time()
numpy_r = numpy.array(list_r)
t1 = time.time()
device = torch.device('cuda:0')  # keep the tensor on CPU for the non-CUDA run
torch_r = torch.tensor(list_r)
torch_r = torch_r.to(device)
t2 = time.time()
torch_r = torch_r * torch_r * torch_r
t3 = time.time()
numpy_r = numpy_r * numpy_r * numpy_r
t4 = time.time()
list_res = map(f, list_r)  # lazy: only builds the iterator
t5 = time.time()
print("t0:", (t1 - t0) * 1000)  # list -> numpy array
print("t1:", (t2 - t1) * 1000)  # list -> tensor (+ move to device)
print("t2:", (t3 - t2) * 1000)  # tensor cube
print("t3:", (t4 - t3) * 1000)  # numpy cube
print("t4:", (t5 - t4) * 1000)  # map() call only
##### result with CUDA
t0: 50.150156021118164
t1: 2310.8725547790527
t2: 0.3681182861328125
t3: 5.092620849609375
t4: 0.004291534423828125
### result without CUDA (tensor kept on CPU)
t0: 51.50794982910156
t1: 21.13056182861328
t2: 4.422187805175781
t3: 5.149364471435547
t4: 0.004291534423828125
CUDA speeds up the computation itself (t2 drops from 4.42 ms on CPU to 0.37 ms), but moving the data onto the GPU is expensive (t1: ~2311 ms vs ~21 ms, a figure that also includes one-time CUDA initialization). t4 is tiny in both runs only because it times the lazy map() call, not the actual function evaluations. Note also that torch CUDA kernels launch asynchronously, so precise GPU timing would call torch.cuda.synchronize() before reading the clock.
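The tiny t4 above can be reproduced with the standard library alone: timing the map() call and the list() conversion separately shows that essentially all the work happens at consumption time. A minimal sketch (the array size matches the test above):

```python
import time

f = lambda x: x * x * x
data = list(range(1000000))

t0 = time.time()
m = map(f, data)   # lazy: returns an iterator immediately
t1 = time.time()
res = list(m)      # this is where the million calls to f happen
t2 = time.time()

print("create map:  %.3f ms" % ((t1 - t0) * 1000))
print("consume map: %.3f ms" % ((t2 - t1) * 1000))
```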