Notes on scraping web page data with Python

The practical part first.

Once Python is configured, run the following commands in cmd:

pip install lxml

pip install beautifulsoup4

pip install html5lib

pip install requests
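Installing both lxml and html5lib gives Beautiful Soup more than one parser to choose from (html.parser also ships with Python and needs no install). A minimal sketch of trying each one; the try/except just skips any parser that is not installed:

```python
from bs4 import BeautifulSoup

html = "<ul><li>Python</li><li>lxml</li></ul>"

# lxml is fast, html5lib parses the most like a browser,
# html.parser is pure-Python and always available.
for parser in ("lxml", "html5lib", "html.parser"):
    try:
        soup = BeautifulSoup(html, parser)
    except Exception:
        continue  # this parser is not installed; skip it
    print(parser, [li.get_text() for li in soup.find_all("li")])
```

All three should extract the same list items here; the differences only show up on badly broken HTML.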

Then the Python code, which scrapes 51job.com (前程无忧):

import csv
import requests
from bs4 import BeautifulSoup

url = "https://search.51job.com/list/030200%252C040000,000000,0000,00,9,99,%25E8%25BD%25AF%25E4%25BB%25B6%25E5%25BC%2580%25E5%258F%2591%25E5%25B7%25A5%25E7%25A8%258B%25E5%25B8%2588,2,21.html?lang=c&stype=1&postchannel=0000&workyear=99&cotype=99&degreefrom=99&jobterm=99&companysize=99&lonlat=0%2C0&radius=-1&ord_field=0&confirmdate=9&fromType=&dibiaoid=0&address=&line=&specialarea=00&from=&welfare="

r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")

# Find the header row of the result list, then walk its siblings:
# each following <div class="el"> is one job posting.
rows = soup.find("div", {"id": "resultList"}).find("div", {"class": "el title"}).next_siblings

with open("neituiWeb2.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for item in rows:
        try:
            t1 = item.find("p", class_="t1").a.text.strip()  # job title
            t2 = item.find("span", class_="t2").text         # company
            t3 = item.find("span", class_="t3").text         # location
            t4 = item.find("span", class_="t4").text         # salary
            t5 = item.find("span", class_="t5").text         # posting date
            writer.writerow([t1, t2, t3, t4, t5])
        except (AttributeError, TypeError):
            # next_siblings also yields bare strings and incomplete rows; skip them
            pass

Final takeaway: first use find to locate a single known element, then use find_all with a for loop to iterate over all of them.
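That find-then-find_all pattern can be sketched on a static snippet; the HTML below is a made-up stand-in for the 51job result list, not the real page:

```python
from bs4 import BeautifulSoup

# Made-up stand-in for the 51job result list.
html = """
<div id="resultList">
  <div class="el title">header row</div>
  <div class="el"><p class="t1"><a>Dev A</a></p><span class="t2">Corp A</span></div>
  <div class="el"><p class="t1"><a>Dev B</a></p><span class="t2">Corp B</span></div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Step 1: find() locates the one known container...
result_list = soup.find("div", {"id": "resultList"})

# Step 2: ...then find_all() plus a for loop walks every row inside it.
jobs = []
for row in result_list.find_all("div", class_="el"):
    if "title" in row["class"]:  # skip the header row
        continue
    title = row.find("p", class_="t1").a.get_text()
    company = row.find("span", class_="t2").get_text()
    jobs.append((title, company))
    print(title, "|", company)
```

The header-row check is needed because find_all("div", class_="el") also matches the header, whose class attribute contains "el" among its values.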

One more thing: the find("table", {"class": "giftList"}) attrs-dict form can cause a lot of problems; if you don't believe it, try rewriting find("p", class_='t1') in that style and see.
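The pitfalls mentioned above usually trace back to multi-valued class attributes (class="el title" is two classes, not one). A small sketch of how Beautiful Soup matches them, as far as I can tell; the class names here are illustrative:

```python
from bs4 import BeautifulSoup

html = '<div class="el title">header</div><div class="el">row</div>'
soup = BeautifulSoup(html, "html.parser")

# A single class name matches any tag carrying it, even alongside other classes.
print(len(soup.find_all("div", class_="el")))        # matches both divs

# The full multi-valued string only matches that exact attribute value.
print(len(soup.find_all("div", class_="el title")))  # matches only the header

# The attrs-dict form matches class the same way as class_.
print(len(soup.find_all("div", {"class": "el"})))    # matches both divs
```

So whichever form you use, be explicit about whether you mean one class name or the exact full attribute string.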
