Web Scraping
linker6619
A data-analysis beginner whose code is a bit rough; please bear with me.
Web Scraping Basics (3)
import requests  # import the libraries used below
from bs4 import BeautifulSoup
import pandas as pd

ranks = []  # empty lists to collect each scraped field
names = []
name_englishs = []
fortunes = []
sources = []
areas = []

url = 'http://www.forbeschina.com/lists/1733'  # target page
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')  # the archived snippet is cut off at "soup"; this line is reconstructed from the pattern in the other posts

Original post · 2020-12-12 19:19:44 · 114 views · 0 comments
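The snippet above ends mid-stream, but the intended pattern (parse the page, collect each column into a list, assemble a DataFrame) can be sketched offline. The HTML fragment and the names in it are made-up stand-ins; the real Forbes page's markup may differ:

```python
# Minimal offline sketch of the same pattern: parse an HTML table into
# parallel lists, then assemble a pandas DataFrame.
from bs4 import BeautifulSoup
import pandas as pd

# Stand-in for the downloaded page (invented sample rows, not real data).
html = """
<table>
  <tr><td>1</td><td>Zhong Shanshan</td><td>546.3</td></tr>
  <tr><td>2</td><td>Ma Huateng</td><td>473.5</td></tr>
</table>
"""

ranks, names, fortunes = [], [], []
soup = BeautifulSoup(html, 'html.parser')
for row in soup.find_all('tr'):
    # Pull the text of every cell in the row, stripped of whitespace.
    cells = [td.get_text(strip=True) for td in row.find_all('td')]
    ranks.append(int(cells[0]))
    names.append(cells[1])
    fortunes.append(float(cells[2]))

df = pd.DataFrame({'rank': ranks, 'name': names, 'fortune': fortunes})
print(df.shape)  # (2, 3)
```

Collecting into parallel lists and building the DataFrame once at the end, as the post does, is cheaper than appending rows to a DataFrame inside the loop.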
Web Scraping Basic Training (2)
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.sequoiacap.com/china/companies/'
headers = {  # fixed: the original misspelled this as 'headefs'
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36'  # the archive truncates the string at "Safari/537"; ".36" restored from the standard UA format
}

Original post · 2020-12-12 18:40:51 · 253 views · 0 comments
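The headers dict is there because many sites reject requests that carry the default python-requests User-Agent. One way to check the header is attached, without touching the network, is to build a prepared request (the URL is the one from the post; nothing is actually sent):

```python
import requests

# Browser-like User-Agent so the site does not block the scraper.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36'}

# Build and prepare the request locally; no network traffic happens here.
req = requests.Request('GET', 'https://www.sequoiacap.com/china/companies/',
                       headers=headers)
prepared = req.prepare()

print(prepared.headers['User-Agent'])  # Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36
```

The actual fetch would then be `requests.get(url, headers=headers)`, exactly as the post's snippet sets up.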
Web Scraping Basic Training (1)
from bs4 import BeautifulSoup  # the beautifulsoup4 package is imported under the short name bs4
import requests
import pandas as pd

r = requests.get('http://blackarchitect.us/')
demo = r.text
soup = BeautifulSoup(demo, 'html.parser')  # parser: html.parser
data_city = soup.find_all('td', ...)  # the archived snippet is truncated here

Original post · 2020-12-11 21:10:11 · 105 views · 0 comments
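The `find_all('td', ...)` call above is cut off, but `find_all` accepts attribute filters after the tag name; a typical use is selecting `<td>` cells by class. A self-contained example (the HTML and the `city` class are invented for illustration):

```python
from bs4 import BeautifulSoup

# Invented fragment standing in for the downloaded page.
html = ('<table><tr>'
        '<td class="city">Chicago</td>'
        '<td class="city">Atlanta</td>'
        '<td>n/a</td>'
        '</tr></table>')

soup = BeautifulSoup(html, 'html.parser')

# class_ (with the trailing underscore) filters by CSS class;
# the underscore avoids clashing with Python's `class` keyword.
cells = soup.find_all('td', class_='city')

print([td.get_text() for td in cells])  # ['Chicago', 'Atlanta']
```

The same keyword style works for any attribute, e.g. `soup.find_all('td', id='name')` or a dict via `attrs={'data-role': 'value'}`.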