Python Basic Syntax (demo 1-8)

Setting up the Python environment

· Install the interpreter

· Install PyCharm

1. Python Basic Syntax

1.1 Common data types

Numbers, lists, strings, tuples, dictionaries, sets

1.1.1 Lists

A list is similar to an array in C, but unlike an array, a list can store elements of different types.

Code example

# Create a list (note the mixed element types)
herolist = ['鲁班七号', '安其拉', '李白', '杜甫', 100, 10.01]
print(herolist)
# Summary: a list is created with []
# Why use a list? A list packs many elements into one container
# Common list operations:
# 1. Access an element: list_name[index]
print('The heroes are', herolist[1], herolist[0])
# 2. Add an element: append adds to the end of the list
herolist.append('I thought you were kidding me')
print('The list after appending:', herolist)
# 3. Modify
herolist[4] = '貂蝉'
print(herolist)
# 4. Delete
del herolist[5]
print('The list after deleting:', herolist)

# Build the list [0, 1, 2, ..., 20]
# This can be done with a loop
# Create an empty list
list1 = []
# Append repeatedly inside the loop
for i in range(21):
    list1.append(i)
print(list1)
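The append loop above can be collapsed into a list comprehension, a standard Python idiom (not shown in the original lesson):

```python
# Build the list [0, 1, ..., 20] in a single expression
list2 = [i for i in range(21)]
print(list2)
```

The comprehension reads left to right: for each i in range(21), put i into the new list.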

# Iterate over herolist
for hero in herolist:
    print(hero)
# len() returns the number of elements in a container
for i in range(len(herolist)):
    # print(herolist[i])
    if herolist[i] == '后裔':
        print('Congratulations, you picked the hidden hero')
    else:
        print('Not the hidden hero')
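Instead of scanning by index, Python's `in` operator performs the same membership test in one expression. A small sketch, using an assumed sample list:

```python
# Sample list, assumed for this sketch
herolist = ['鲁班七号', '安其拉', '李白', '杜甫', '貂蝉']

# 'in' does the same linear scan the index loop performs
found = '后裔' in herolist
if found:
    print('Congratulations, you picked the hidden hero')
else:
    print('Not the hidden hero')
```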

Building a progress bar in Python

# Install the tqdm library first:
# pip install tqdm
# Import the tqdm class
from tqdm import tqdm
import time

mylist = []
for i in range(10):
    mylist.append(i)

# Iterate over mylist with a progress bar
for i in tqdm(mylist):
    time.sleep(2)

1.1.2 Strings

# Strings
# Delimited by ' ' or " "
# Note:
name = 'k“o”be'
print(name)
# Indexing: a quote character inside the string also occupies one position
print(name[2])
# Modification: strings do not support item assignment
# name[1] = 'i'
# print(name)
# Rebinding the variable to a new string is fine
name = 'kobe'
print(name)

# Common operations
price = '¥9.9'
# Replace part of a string
price = price.replace('¥', ' ')
print(price)
# Multiply the price by 10
new_price = float(price) * 10
print(new_price)

# A "hundred-million-yuan" AI chatbot: echo the input back,
# turning every question ('吗?') into an assertion ('!')
# while True:
#     seg = input('')
#     seg = seg.replace('吗?', '!')
#     print(seg)

# strip() removes leading and trailing whitespace
name = ' neuedu '
print(len(name))
name = name.strip()
print(len(name))
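Beyond strip(), the one-sided variants lstrip() and rstrip() trim only one end of the string:

```python
name = ' neuedu '
# lstrip() removes leading whitespace only; rstrip() removes trailing only
print(repr(name.lstrip()))
print(repr(name.rstrip()))
```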

# join() turns a list of strings into a single string
li = ['你', '好', '帅']
disk_path = ['C:', 'Users', '张', 'Desktop']
path = '\\'.join(disk_path)
print(path)
li = ''.join(li)
print(li)
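split() is the inverse of join(): it cuts a string back into a list on a separator:

```python
path = 'C:\\Users\\张\\Desktop'
# Split the path string back into its components
parts = path.split('\\')
print(parts)
```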

1.1.3 Tuples

# Tuples
# Conversion functions: tuple(), list(), int(), str()
# Creation
# A tuple is very similar to a list, except that it cannot be modified
a = (1, '1', [''])
print(a)
print(type(a))
# Access
# print(a[2])
# a[2] = 6  # not allowed: tuples cannot be modified
# Why use tuples?
# 1. Write protection: safer; many Python built-in functions return tuples
# 2. Compared with lists, tuples use less memory and are more efficient
# Worth remembering:
# 1. A tuple with a single element needs a trailing comma
b = (100,)
print(type(b))
# A combination we use often:
# list2 = [('a', 22), ('b', 33), ('c', 99)]
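One concrete built-in that returns a tuple is divmod(), and a tuple's elements can be unpacked into separate variables in a single assignment:

```python
# divmod() returns the quotient and remainder as a tuple
result = divmod(17, 5)
print(result, type(result))
# Tuple unpacking: assign both elements at once
q, r = result
print(q, r)
```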

1.1.4 Dictionaries

# Dictionaries
# Create a dictionary of key-value pairs
info = {'name': '张打野', 'age': 18, 'gender': 'female'}
print(type(info))
# Access a value through its key
print(info['name'])
# Accessing a missing key raises an error
# print(info['addr'])  # KeyError
# get() returns a default value when the key is missing,
# and the stored value when it exists
print(info.get('addr', '抚顺市'))
# Modify
info['age'] = 3
print(info)
# Add: assigning to a key not yet in the dictionary adds it
info['addr'] = '大连市'
print(info)
# Delete
del info['age']
print(info)
# Iterate
for k, v in info.items():
    print(k, '---->', v)
# Get all the keys
print(list(info.keys()))
# Get all the values
print(list(info.values()))
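One more detail worth knowing: the `in` operator on a dictionary tests keys, not values. A sketch with a sample dictionary (assumed, matching the final state above):

```python
info = {'name': '张打野', 'gender': 'female', 'addr': '大连市'}
# 'in' checks the keys of a dictionary
print('name' in info)    # a key, so this is True
print('张打野' in info)   # a value, so this is False
```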

1.2 Conditionals and loops

· The formal start of our Python journey

· An introduction to the Python language

· Variables in Python

Code example

# Declare a variable
a = 100  # no type declaration needed, and no trailing ;
# Swap two variables
# In C: int a = 100; int b = 1000;
# ...then swap through a temporary variable
# In Python:
a = 100
b = 1000
# c = 0
# c = a
# a = b
# b = c
a, b = b, a
print(a, b)
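The swap works because the right-hand side is first packed into a tuple and then unpacked, which is why no temporary variable is needed. Making the intermediate tuple explicit:

```python
a = 100
b = 1000
# The right-hand side of 'a, b = b, a' is really this tuple
packed = b, a
print(packed, type(packed))
# Unpacking the tuple completes the swap
a, b = packed
print(a, b)
```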

Conditional statements

Format:

if condition:
    code to run when the condition holds

Compare with C:

if (a < b) {
    statements
}

Code example

age = input('Please enter your age: ')
# age = 18
print(type(age))
# Convert the input to an integer
age = int(age)
print(type(age))
if age >= 18:
    print('Congratulations, you are an adult; off to the Internet cafe')
else:
    print('Sorry, you are still a kid')

# elif plays the role of C's switch/case
score = 80
if score >= 90 and score <= 100:
    print('Your grade is A')
elif score >= 80 and score < 90:
    print('Your grade is B')
elif score >= 70 and score < 80:
    print('Your grade is C')
elif score >= 60 and score < 70:
    print('Your grade is D')
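Python also supports chained comparisons, so each `score >= X and score < Y` pair above can be written as a single expression:

```python
score = 80
# 'score >= 80 and score < 90' collapses into one chained comparison
is_grade_b = 80 <= score < 90
print(is_grade_b)
```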

Loops in Python

First, the for loop.

Format:

for loop_variable in iterable:
    loop body

Code example

name = 'neusoft'
for i in name:
    print(i)
    if i == 's':
        print('haha')

Where did the loop counter go? And what is this i? It is the loop variable; you do not declare it in advance, Python creates it for you automatically.

range(start, stop, step) controls the number of iterations. start defaults to 0 when omitted, and step defaults to 1.

Apologize to your girlfriend one hundred times:

for i in range(100):
    print('Sorry, I was wrong. This is apology number', i + 1)
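The three forms of range() described above, side by side:

```python
# range(stop): start defaults to 0 and step to 1
print(list(range(5)))
# range(start, stop)
print(list(range(1, 6)))
# range(start, stop, step)
print(list(range(0, 21, 5)))
```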

1.3 Functions

Functions belong to procedural programming; methods belong to object-oriented programming.

A function in Python:

def function_name():
    function body

Code example

def say_hello(name):
    print('hello', name)

say_hello('neusoft')

# Sum the integers from 1 to an arbitrary number; for 100 the answer is 5050
def calculate_num(num):
    sum_num = 0  # holds the running total
    for i in range(1, num + 1):
        sum_num = sum_num + i
    return sum_num

print(calculate_num(100))
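The loop-based sum can be cross-checked against the built-in sum(); the function is restated here so the snippet runs on its own:

```python
def calculate_num(num):
    # Sum the integers from 1 to num with an explicit loop
    sum_num = 0
    for i in range(1, num + 1):
        sum_num = sum_num + i
    return sum_num

# The built-in sum() over the same range gives the same answer
print(calculate_num(100), sum(range(1, 101)))
```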

To be continued...

