PyTorch Learning Notes 5

Python's isinstance() function

The isinstance() function checks whether an object is an instance of a given type, similar to type().

isinstance(object, classinfo)
Parameters:
object – the instance to check.
classinfo – a class name (direct or indirect), a built-in type, or a tuple made up of these.
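
For example:

print(isinstance(1, int))            # True
print(isinstance('a', (int, str)))   # True – a tuple of types also works
print(isinstance(True, int))         # True – unlike type(), isinstance() also accepts subclasses (bool subclasses int)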

Variable arguments

If you have used PHP or a similar language, you know that there the number of parameters of a function is not variable (although many can often be omitted, because default values are set in the function definition). Python is different: it allows variable-length argument lists, which appear in the parameter list in the form (*arg, **arg2).

Let's go through this usage in detail.

For example:

def foo1(arg1, arg2, key1=1, key2=2, *arg, **keywords):
    print("arg1 parameter is", arg1)
    print("arg2 parameter is", arg2)
    print("key1 parameter is", key1)
    print("key2 parameter is", key2)
    print("Arbitrary parameters are", arg)
    print("keywords parameter is", keywords)

foo1(1, 2, 3, 4, 5, 6, k1=1, k2=2, k3=3)

Output:

arg1 parameter is 1
arg2 parameter is 2
key1 parameter is 3
key2 parameter is 4
Arbitrary parameters are (5, 6)
keywords parameter is {'k1': 1, 'k2': 2, 'k3': 3}

The parameters fall into four parts:
arg1, arg2, key1, key2 – ordinary parameters
*arg – the extra non-keyword (positional) arguments, collected as a tuple
**keywords – the extra keyword arguments, collected as a dict

In the function declaration these four parts must appear in this order, but any of them may be omitted.

This feature makes Python function parameters very flexible and puts no limit on the number of arguments.

os-related functions

os.getcwd() returns the current working directory.

os.path.join() joins path components into a single path and accepts any number of arguments.

Joining starts from the last argument that begins with '/'; everything before it is discarded. A '/' at the end of an argument, or in the middle of one, is simply kept as part of that argument.

print(os.path.join('111', '000/333'))        # 111/000/333

print(os.path.join('/111', '/222', '333'))   # /222/333

os.path.abspath(path): returns the absolute path.
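
A minimal sketch of os.getcwd() and os.path.abspath() (the printed paths are only an example and depend on where the script is run):

import os

print(os.getcwd())                   # e.g. /home/user/project
print(os.path.abspath('data/img'))   # the relative path resolved against the cwd, e.g. /home/user/project/data/img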

class ImageFolder(DatasetFolder):
    """A generic data loader where the images are arranged in this way: ::

        root/dog/xxx.png
        root/dog/xxy.png
        root/dog/xxz.png

        root/cat/123.png
        root/cat/nsdf3.png
        root/cat/asd932_.png

    Args:
        root (string): Root directory path.
        transform (callable, optional): A function/transform that takes in a PIL image
            and returns a transformed version. E.g., ``transforms.RandomCrop``
        target_transform (callable, optional): A function/transform that takes in the
            target and transforms it.
        loader (callable, optional): A function to load an image given its path.
        is_valid_file (callable, optional): A function that takes the path of an image file
            and checks whether the file is valid (used to skip corrupt files)

     Attributes:
        classes (list): List of the class names sorted alphabetically.
        class_to_idx (dict): Dict with items (class_name, class_index).
        imgs (list): List of (image path, class_index) tuples
    """

Some uses of the json library:

Method – purpose
json.dumps() – encode a Python object into a JSON string
json.loads() – decode a JSON string into a Python object
json.dump() – serialize a Python object as JSON and write it to a file
json.load() – read the JSON in a file and parse it back into a Python object
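
A minimal sketch of the four functions (the file name config.json is arbitrary):

import json

data = {'name': 'resnet18', 'lr': 0.01}

s = json.dumps(data)          # Python object -> JSON string
restored = json.loads(s)      # JSON string -> Python object

with open('config.json', 'w') as f:
    json.dump(data, f)        # write the object to a file as JSON
with open('config.json') as f:
    loaded = json.load(f)     # read the JSON back into a Python object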

min() is not limited to two arguments; it can compare any number of values.
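
For example:

print(min(4, 1, 7))      # 1
print(min([3, 5, 0]))    # 0 – an iterable also works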

pickle

pickle provides a simple persistence mechanism: objects can be stored on disk as files.

The pickle module can only be used from Python, but almost every Python data type (lists, dicts, sets, class instances, ...) can be serialized with it.

Pickled data is not human-readable.

pickle.dump(obj, file[, protocol])
  Serializes the object and writes the resulting data stream to the file object. The protocol argument selects the serialization format: in Python 2 the default is 0, a text form, while 1 and 2 are binary formats; in Python 3 the default is already a binary protocol.

pickle.load(file)
  Deserializes: parses the data in the file back into a Python object.

Note that when calling load(file), Python must be able to find the class definition; otherwise an error is raised.

For example:

import pickle

class Person:
    def __init__(self, n, a):
        self.name = n
        self.age = a
    def show(self):
        print(self.name + "_" + str(self.age))

aa = Person("JGood", 2)
aa.show()
f = open('d:\\p.txt', 'wb')   # pickle needs binary mode in Python 3
pickle.dump(aa, f, 0)
f.close()
# del Person
f = open('d:\\p.txt', 'rb')
bb = pickle.load(f)
f.close()
bb.show()

If del Person is uncommented, so that the class is deleted before loading, pickle.load() raises an error because it can no longer find the class definition.

clear_memo()
  Clears the pickler's "memo". While a Pickler instance serializes objects, it remembers references to the objects it has already serialized, so calling dump(obj) repeatedly on the same object does not serialize it in full each time.
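
A minimal sketch of this behaviour, using an in-memory buffer:

import io
import pickle

obj = {'weights': list(range(5))}
buf = io.BytesIO()
p = pickle.Pickler(buf)

p.dump(obj)       # the object is serialized and remembered in the memo
p.dump(obj)       # only a short back-reference is written, not the full object again

p.clear_memo()    # forget everything serialized so far
p.dump(obj)       # the object is written out in full again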

torch.load()

Loads an object saved with torch.save() from a file.

map_location – a function, torch.device, or dict that specifies how to remap storage locations.

"""Loads an object saved with :func:`torch.save` from a file.

:func:`torch.load` uses Python's unpickling facilities but treats storages,
which underlie tensors, specially. They are first deserialized on the
CPU and are then moved to the device they were saved from. If this fails
(e.g. because the run time system doesn't have certain devices), an exception
is raised. However, storages can be dynamically remapped to an alternative
set of devices using the :attr:`map_location` argument.

If :attr:`map_location` is a callable, it will be called once for each serialized
storage with two arguments: storage and location. The storage argument
will be the initial deserialization of the storage, residing on the CPU.
Each serialized storage has a location tag associated with it which
identifies the device it was saved from, and this tag is the second
argument passed to :attr:`map_location`. The builtin location tags are ``'cpu'``
for CPU tensors and ``'cuda:device_id'`` (e.g. ``'cuda:2'``) for CUDA tensors.
:attr:`map_location` should return either ``None`` or a storage. If
:attr:`map_location` returns a storage, it will be used as the final deserialized
object, already moved to the right device. Otherwise, :func:`torch.load` will
fall back to the default behavior, as if :attr:`map_location` wasn't specified.

If :attr:`map_location` is a :class:`torch.device` object or a string containing
a device tag, it indicates the location where all tensors should be loaded.

Otherwise, if :attr:`map_location` is a dict, it will be used to remap location tags
appearing in the file (keys), to ones that specify where to put the
storages (values).

User extensions can register their own location tags and tagging and
deserialization methods using :func:`torch.serialization.register_package`.

Args:
    f: a file-like object (has to implement :meth:`read`, :meth:`readline`, :meth:`tell`, and :meth:`seek`),
        or a string or os.PathLike object containing a file name
    map_location: a function, :class:`torch.device`, string or a dict specifying how to remap storage
        locations
    pickle_module: module used for unpickling metadata and objects (has to
        match the :attr:`pickle_module` used to serialize file)
    pickle_load_args: (Python 3 only) optional keyword arguments passed over to
        :func:`pickle_module.load` and :func:`pickle_module.Unpickler`, e.g.,
        :attr:`errors=...`.
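
A minimal sketch of the map_location argument (the checkpoint file name is hypothetical):

import torch

# default behaviour: tensors go back to the device they were saved from
obj = torch.load('checkpoint.pth')

# remap every storage to the CPU, e.g. when the checkpoint was saved on a GPU machine
obj = torch.load('checkpoint.pth', map_location='cpu')

# callable form: called with (storage, location_tag); returning the storage keeps it on the CPU
obj = torch.load('checkpoint.pth', map_location=lambda storage, loc: storage)

# dict form: remap location tags, e.g. tensors saved on cuda:1 are loaded onto cuda:0
obj = torch.load('checkpoint.pth', map_location={'cuda:1': 'cuda:0'})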

net.train(), net.eval()

net.requires_grad_(): changes whether autograd should record operations on the parameters of this module.

def train(self: T, mode: bool = True) -> T:
    r"""Sets the module in training mode.

    This has any effect only on certain modules. See documentations of
    particular modules for details of their behaviors in training/evaluation
    mode, if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    Args:
        mode (bool): whether to set training mode (``True``) or evaluation
                     mode (``False``). Default: ``True``.

    Returns:
        Module: self
    """
    self.training = mode
    for module in self.children():
        module.train(mode)
    return self

def eval(self: T) -> T:
    r"""Sets the module in evaluation mode.

    This has any effect only on certain modules. See documentations of
    particular modules for details of their behaviors in training/evaluation
    mode, if they are affected, e.g. :class:`Dropout`, :class:`BatchNorm`,
    etc.

    This is equivalent with :meth:`self.train(False) <torch.nn.Module.train>`.

    Returns:
        Module: self
    """
    return self.train(False)

def requires_grad_(self: T, requires_grad: bool = True) -> T:
    r"""Change if autograd should record operations on parameters in this
    module.

    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self
    """

tqdm

A Python progress bar (the kind that updates in place).

Parameters:

- iterable: the iterable to wrap; not needed when updating manually
- desc: string shown to the left of the bar as a description
- total: total number of items
- leave: bool, whether to keep the bar on screen after the iteration finishes
- file: where the output goes, the terminal by default; usually no need to set it
- ncols: width of the bar; by default it adapts to the environment, and 0 means no bar, only the textual information
- unit: the word used for the processed items, 'it' by default (e.g. 100 it/s); set it to 'img' when processing images to get 100 img/s
- unit_scale: automatically scale the rate using SI prefixes, e.g. 100000 it/s becomes 100k it/s

Example:

import time
from tqdm import tqdm

# idle for 0.5 s
def action():
    time.sleep(0.5)

with tqdm(total=100000, desc='Example', leave=True, ncols=100, unit='B', unit_scale=True) as pbar:
    for i in range(10):
        # idle for 0.5 s
        action()
        # advance the progress bar
        pbar.update(10000)
Example: 100%|███████████████████████████████████████████████████| 100k/100k [00:05<00:00, 19.6kB/s]

VS Code can also run PyTorch programs just fine, and debugging works too; PyCharm simply has more features and is more comfortable.

zip() takes iterables as arguments, pairs up their corresponding elements into tuples, and returns those tuples (as a list in Python 2, as an iterator in Python 3).
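
For example:

names = ['cat', 'dog']
labels = [0, 1]
print(list(zip(names, labels)))   # [('cat', 0), ('dog', 1)]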

os operations

os.path.splitext() separates the file name from its extension.

os.path.split() returns the directory path and the file name.

# -*- coding:utf-8 -*-
"""
@author:lei
"""
import os

# os.path.join() joins the separate parts into one path
filename = os.path.join('/home/ubuntu/python_coding', 'split_func')
print(filename)
# output: /home/ubuntu/python_coding/split_func

# os.path.splitext() separates the file name from the extension
fname, fename = os.path.splitext('/home/ubuntu/python_coding/split_func/split_function.py')
print('fname is:', fname)
print('fename is:', fename)
# output:
# fname is: /home/ubuntu/python_coding/split_func/split_function
# fename is: .py

# os.path.split() returns the directory path and the file name
dirname, filename = os.path.split('/home/ubuntu/python_coding/split_func/split_function.py')
print(dirname)
print(filename)
# output:
# /home/ubuntu/python_coding/split_func
# split_function.py

# str.split()
# string.split(sep="", maxsplit=string.count(sep))[n]
# sep - the separator; by default any whitespace (spaces, newlines \n, tabs \t, ...)
# maxsplit - maximum number of splits
# [n] - pick the n-th piece
string = "hello.world.python"
print(string.split('.'))          # ['hello', 'world', 'python']
print(string.split('.', 1))       # ['hello', 'world.python']
print(string.split('.', 1)[0])    # hello
print(string.split('.', 1)[1])    # world.python
string2 = "hello<python.world>and<c++>end"
print(string2.split("<", 2)[2].split(">")[0])  # c++