Web Scraping
linker6619
I'm new to data analysis and my code is a bit rough; please bear with me.
Web Scraping Basics (Part 3)
```python
import requests                  # HTTP requests
from bs4 import BeautifulSoup    # HTML parsing
import pandas as pd

# Empty lists to collect each scraped field
ranks = []
names = []
name_englishs = []
fortunes = []
sources = []
areas = []

url = 'http://www.forbeschina.com/lists/1733'  # target list page
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')  # (preview cut off after "soup"; parser call restored)
```

Original post · 2020-12-12 19:19:44
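The preview above cuts off before any parsing happens. Below is a minimal sketch of the collect-into-lists-then-DataFrame pattern the snippet sets up, run against an inline HTML sample; the real Forbes China markup is not shown in the post, so the table structure and placeholder rows here are hypothetical:

```python
from bs4 import BeautifulSoup
import pandas as pd

# Hypothetical stand-in for the real page (the actual Forbes China
# markup is not visible in the truncated preview).
html = """
<table>
  <tr><td>1</td><td>张三</td><td>Zhang San</td></tr>
  <tr><td>2</td><td>李四</td><td>Li Si</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')

ranks, names, name_englishs = [], [], []
for row in soup.find_all('tr'):
    # Pull the text of every cell in this row
    cells = [td.get_text(strip=True) for td in row.find_all('td')]
    ranks.append(cells[0])
    names.append(cells[1])
    name_englishs.append(cells[2])

# Combine the parallel lists into one table
df = pd.DataFrame({'rank': ranks, 'name': names, 'name_en': name_englishs})
print(df)
```

Appending to parallel lists and building the DataFrame once at the end is a common beginner-friendly pattern, and it matches the empty lists declared in the snippet.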
Web Scraping Practice (Part 2)
```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.sequoiacap.com/china/companies/'
headers = {  # was misspelled "headefs" in the original
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; WOW64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/80.0.3987.149 Safari/537.36')
}
# (preview cut off mid-string; the trailing ".36" is restored to match the WebKit token)
```

Original post · 2020-12-12 18:40:51
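The point of the headers dict is that some sites refuse the default `requests` User-Agent, so the snippet sends a browser-like one. A small sketch of that pattern, wrapped in a hypothetical `fetch` helper (not part of the original post) that also checks the HTTP status before returning the HTML:

```python
import requests

headers = {
    # A browser-like User-Agent; many sites block the default
    # "python-requests/x.y" identifier.
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; WOW64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/80.0.3987.149 Safari/537.36')
}

def fetch(url, headers=headers, timeout=10):
    """Fetch a page, raising on HTTP errors; returns the HTML text."""
    res = requests.get(url, headers=headers, timeout=timeout)
    res.raise_for_status()   # surface 403/404/500 instead of parsing an error page
    return res.text
```

A usage would be `html = fetch('https://www.sequoiacap.com/china/companies/')`; adding a `timeout` is good practice so a stalled connection does not hang the script.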
Web Scraping Practice (Part 1)
```python
from bs4 import BeautifulSoup    # the beautifulsoup4 package is imported as bs4
import requests
import pandas as pd

r = requests.get('http://blackarchitect.us/')
demo = r.text
soup = BeautifulSoup(demo, 'html.parser')   # parser: html.parser
data_city = soup.find_all('td')  # (second argument cut off in the preview)
```

Original post · 2020-12-11 21:10:11
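The `find_all('td', ...)` call is truncated before its filter argument, but a second argument is typically used to select only certain cells. A sketch of that technique against an inline sample; the class name and rows here are hypothetical, not taken from blackarchitect.us:

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the real page.
demo = """
<table>
  <tr><td class="city">Chicago</td><td class="state">IL</td></tr>
  <tr><td class="city">Atlanta</td><td class="state">GA</td></tr>
</table>
"""
soup = BeautifulSoup(demo, 'html.parser')

# Keyword class_ (trailing underscore, since "class" is a Python keyword)
# filters the matched tags by their class attribute.
data_city = soup.find_all('td', class_='city')
cities = [td.get_text() for td in data_city]
print(cities)
```

The same filtering can be written as `soup.find_all('td', {'class': 'city'})`, passing an attribute dict instead of the `class_` keyword.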