[My First Crawler] Scraping 58.com company listings with Python and inserting them into a database

import urllib2
import re
from bs4 import BeautifulSoup
import MySQLdb as mdb

i = 1  # running index of the companies scraped so far

def GetOnePageUrl(url):
	"""Walk one listing page and visit every company-detail link on it."""
	global i
	flag = 0
	request = urllib2.Request(url)
	html = urllib2.urlopen(request)
	soup = BeautifulSoup(html, "lxml")
	# Company links look like http://qy.58.com/mq/<id>/ and each one
	# appears twice in the listing markup, so take every other match.
	for link in soup.find_all(name='a', attrs={"href": re.compile(r'^http://qy.58.com/mq/[0-9]*/$')}):
		if flag % 2 == 0:
			GetOneUrlInfo(link.get('href'))
			print i
			i += 1
		flag += 1

def GetOneUrlInfo(url):
	"""Fetch one company page and pull out its first five <td> fields."""
	global i
	request = urllib2.Request(url)
	html = urllib2.urlopen(request)
	soup = BeautifulSoup(html, "lxml")
	fiveinfo = soup.find_all(name='td', limit=5)
	if len(fiveinfo) == 0:  # the company page has no detail table; skip it
		return
	for info in fiveinfo:
		print info.string
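The every-other-link trick in `GetOnePageUrl` relies on each company URL appearing twice in the listing markup. A small self-contained check of the regex filter and the dedup logic, run against a fabricated HTML fragment (the markup below is illustrative, not real 58.com output):

```python
import re
from bs4 import BeautifulSoup

# Fabricated listing fragment: each company link appears twice,
# mimicking the duplicated anchors the crawler skips over.
html = """
<a href="http://qy.58.com/mq/111/">Logo</a>
<a href="http://qy.58.com/mq/111/">Company A</a>
<a href="http://qy.58.com/mq/222/">Logo</a>
<a href="http://qy.58.com/mq/222/">Company B</a>
<a href="http://qy.58.com/other/">unrelated</a>
"""

soup = BeautifulSoup(html, "html.parser")
# Same pattern as the crawler: only company-detail URLs match.
pattern = re.compile(r'^http://qy.58.com/mq/[0-9]*/$')
links = [a.get('href') for a in soup.find_all('a', attrs={"href": pattern})]
unique = links[::2]  # every other match, as in the crawler's flag % 2 check
print(unique)
```

The non-matching link is filtered out by the regex, and slicing with a step of 2 collapses the duplicated anchors to one URL per company.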
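The title promises a database insert, but that part of `GetOneUrlInfo` is cut off above. A minimal sketch of what the insert step could look like, using sqlite3 so it runs self-contained (the original imports MySQLdb; the `company` table name, its columns, and the sample values are assumptions, not from the post):

```python
import sqlite3

# Assumed schema: five text fields per company, matching the five <td>
# values the crawler extracts. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE company
                (name TEXT, addr TEXT, phone TEXT, contact TEXT, intro TEXT)""")

# Stand-in for the five strings pulled from one company page.
fiveinfo = ["Acme Ltd", "Hangzhou", "0571-000000", "Zhang San", "Widgets"]
conn.execute("INSERT INTO company VALUES (?, ?, ?, ?, ?)", fiveinfo)
conn.commit()

rows = conn.execute("SELECT name, addr FROM company").fetchall()
print(rows)
```

With MySQLdb the shape is the same: open a connection, execute a parameterized `INSERT` (MySQLdb uses `%s` placeholders instead of `?`), and commit.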
Sure — we can scrape 58.com rental listings with Python's requests and BeautifulSoup libraries. The steps:

1. Import the required libraries:

```python
import requests
from bs4 import BeautifulSoup
```

2. Build the URL and send the request:

```python
url = 'https://hz.58.com/chuzu/pn1/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'
}
response = requests.get(url, headers=headers)
```

3. Parse the HTML and extract the data:

```python
soup = BeautifulSoup(response.text, 'html.parser')
house_list = soup.select('.list > li')
for house in house_list:
    title = house.select('.des > h2 > a')[0].text.strip()
    price = house.select('.money > b')[0].text.strip()
    house_type = house.select('.room > p')[0].text.strip()
    location = house.select('.add > a')[0].text.strip()
    print(title, price, house_type, location)
```

The complete code:

```python
import requests
from bs4 import BeautifulSoup

url = 'https://hz.58.com/chuzu/pn1/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 Edge/16.16299'
}
response = requests.get(url, headers=headers)

soup = BeautifulSoup(response.text, 'html.parser')
house_list = soup.select('.list > li')
for house in house_list:
    title = house.select('.des > h2 > a')[0].text.strip()
    price = house.select('.money > b')[0].text.strip()
    house_type = house.select('.room > p')[0].text.strip()
    location = house.select('.add > a')[0].text.strip()
    print(title, price, house_type, location)
```

That completes the 58.com rental-listing scraper. Note that the collection and use of scraped data must comply with the relevant laws and regulations; you bear the consequences of any violation.
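One caveat about the extraction step above: the indexed `select(...)[0]` calls raise an IndexError whenever a listing is missing a field. A more defensive variant, shown against a fabricated fragment (the class names mirror the selectors above, but the markup itself is made up for illustration):

```python
from bs4 import BeautifulSoup

# Fabricated listing markup using the same class names as the scraper above.
html = """
<ul class="list">
  <li>
    <div class="des"><h2><a> Cozy flat near metro </a></h2></div>
    <div class="money"><b>3200</b></div>
  </li>
  <li><div class="des"></div></li>
</ul>
"""

def first_text(node, selector, default=""):
    """Return the stripped text of the first match, or a default."""
    found = node.select(selector)
    return found[0].text.strip() if found else default

soup = BeautifulSoup(html, "html.parser")
rows = []
for li in soup.select(".list > li"):
    rows.append((first_text(li, ".des > h2 > a"),
                 first_text(li, ".money > b")))
print(rows)
```

The second listing has no title or price, so the helper returns empty strings instead of crashing the whole page scrape.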
