TensorFlow tf.data.Dataset

Methods

apply(transformation_func)
batch(batch_size, drop_remainder=False): if drop_remainder is True, the last batch is dropped when it contains fewer than batch_size elements
cache(filename='')
concatenate(dataset)
enumerate(start=0): same as Python's built-in enumerate
filter(predicate): predicate is a function mapping an element to a boolean; elements for which it returns False are dropped
flat_map(map_func): maps map_func over the dataset and flattens the resulting datasets, e.g. to unnest a multi-dimensional tensor into scalars
from_generator(generator, output_types, output_shapes=None, args=None): static method, builds a dataset from a Python generator
from_tensor_slices(tensors): static method
from_tensors(tensors): static method
interleave(map_func, cycle_length=AUTOTUNE, block_length=1, num_parallel_calls=None): maps map_func across the input and interleaves the results, cycling through cycle_length inputs at a time
list_files(file_pattern, shuffle=None, seed=None)
map(map_func, num_parallel_calls=None)
options()
padded_batch(batch_size, padded_shapes, padding_values=None, drop_remainder=False)
prefetch(buffer_size)
range(*args): static method
reduce(initial_state, reduce_func)
repeat(count=None)
shard(num_shards, index)
shuffle(buffer_size, seed=None, reshuffle_each_iteration=None)
skip(count)
take(count): takes the first count elements
unbatch()
window(size, shift=None, stride=1, drop_remainder=False)
with_options(options)
zip(datasets): static method
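
Several of these methods (shuffle, repeat, batch, prefetch) are typically chained into a single input pipeline. A minimal runnable sketch, assuming TF 2.x and with made-up sizes:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10)            # 0..9
ds = ds.shuffle(buffer_size=10, seed=42)  # buffer >= dataset size => full shuffle
ds = ds.repeat(2)                         # iterate the data twice
ds = ds.batch(4, drop_remainder=True)     # 20 elements -> 5 batches of 4
ds = ds.prefetch(tf.data.AUTOTUNE)        # overlap production and consumption

batches = list(ds.as_numpy_iterator())    # 5 numpy arrays of length 4
```

The order of the calls matters: shuffling before repeat reshuffles each epoch independently, and prefetch is usually the last step in the chain.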

range/batch/from_tensor_slices/concatenate

a = Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 8)  # ==> [ 4, 5, 6, 7 ]
c = Dataset.range(8, 14).batch(2)  # ==> [ [8, 9], [10, 11], [12, 13] ]
d = Dataset.from_tensor_slices([14.0, 15.0, 16.0])

a.concatenate(b)  # ==> [ 1, 2, 3, 4, 5, 6, 7 ]

enumerate

a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }
a.enumerate(start=5) == { (5, 1), (6, 2), (7, 3) }
b.enumerate() == { (0, (7, 8)), (1, (9, 10)) }
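
The set-style listing above is shorthand from the API docs; a runnable version (assuming TF 2.x) might look like:

```python
import tensorflow as tf

a = tf.data.Dataset.from_tensor_slices([1, 2, 3])
b = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])

pairs_a = list(a.enumerate(start=5).as_numpy_iterator())
# pairs_a -> [(5, 1), (6, 2), (7, 3)]
pairs_b = list(b.enumerate().as_numpy_iterator())
# pairs_b -> [(0, [7, 8]), (1, [9, 10])]
```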

filter

d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.filter(lambda x: x < 3)  # ==> [1, 2]

flat_map

a = Dataset.from_tensor_slices([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
a.flat_map(lambda x: Dataset.from_tensor_slices(x + 1)) # ==>
#  [ 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
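
The difference from map is that flat_map's map_func must return a Dataset, whose elements are then spliced into the result. A runnable sketch contrasting the two on the same input:

```python
import tensorflow as tf

a = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# map preserves structure: still 3 elements, each a length-3 vector
mapped = [list(map(int, v)) for v in a.map(lambda x: x + 1).as_numpy_iterator()]
# -> [[2, 3, 4], [5, 6, 7], [8, 9, 10]]

# flat_map flattens each per-element dataset: 9 scalar elements
flat = [int(v) for v in
        a.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x + 1))
         .as_numpy_iterator()]
# -> [2, 3, 4, 5, 6, 7, 8, 9, 10]
```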

map

a = Dataset.range(1, 6)  # ==> [ 1, 2, 3, 4, 5 ]
a.map(lambda x: x + 1)  # ==> [ 2, 3, 4, 5, 6 ]

range

Dataset.range(5) == [0, 1, 2, 3, 4]
Dataset.range(2, 5) == [2, 3, 4]
Dataset.range(1, 5, 2) == [1, 3]
Dataset.range(1, 5, -2) == []
Dataset.range(5, 1) == []
Dataset.range(5, 1, -2) == [5, 3]

zip

a = Dataset.range(1, 4)  # ==> [ 1, 2, 3 ]
b = Dataset.range(4, 7)  # ==> [ 4, 5, 6 ]
c = Dataset.range(7, 13).batch(2)  # ==> [ [7, 8], [9, 10], [11, 12] ]
d = Dataset.range(13, 15)  # ==> [ 13, 14 ]

# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
Dataset.zip((a, b))  # ==> [ (1, 4), (2, 5), (3, 6) ]
Dataset.zip((b, a))  # ==> [ (4, 1), (5, 2), (6, 3) ]

# The `datasets` argument may contain an arbitrary number of
# datasets.
Dataset.zip((a, b, c))  # ==> [ (1, 4, [7, 8]),
                        #       (2, 5, [9, 10]),
                        #       (3, 6, [11, 12]) ]

# The number of elements in the resulting dataset is the same as
# the size of the smallest dataset in `datasets`.
Dataset.zip((a, d))  # ==> [ (1, 13), (2, 14) ]
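
padded_batch, from the method list above, has no example in the docs excerpted here; a minimal sketch for batching variable-length sequences, with illustrative values (output_signature requires TF >= 2.4):

```python
import tensorflow as tf

# Three sequences of different lengths, e.g. tokenized sentences
ds = tf.data.Dataset.from_generator(
    lambda: [[1], [2, 3], [4, 5, 6]],
    output_signature=tf.TensorSpec(shape=[None], dtype=tf.int32))

# Each batch is zero-padded to the longest sequence it contains
batch = next(iter(ds.padded_batch(3, padded_shapes=[None]))).numpy()
# batch -> [[1, 0, 0], [2, 3, 0], [4, 5, 6]]
```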