Preface
A recent project required some web scraping, so I looked into crawlers and wrote code to scrape attractions and travel notes from Qunar, Mafengwo, and Ctrip.
Here I share the Mafengwo travel-note scraper. It is fairly rough and does not include any data cleaning.
For each travel note it collects the title, travel date, trip length, cost, travel companions, and other fields, as marked by the red boxes in the figure:
One last note: Mafengwo's anti-scraping measures are strict, so it is best to prepare several proxy IPs. (The code below is the version without proxies.)
Without further ado, here is the code:
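The field extraction works by parsing each note page with lxml and pulling the values out via XPath. Here is a minimal, self-contained sketch of that idea against a made-up HTML fragment; the class names and XPath expressions are placeholders, since Mafengwo's real markup differs and changes over time:

```python
from lxml import etree

# Hypothetical markup mimicking a travel-note header; the real
# Mafengwo page structure will differ, so adjust the XPaths.
html = """
<div class="note">
  <h1 class="title">A week in Yunnan</h1>
  <ul>
    <li class="time">2019-07-01</li>
    <li class="day">7 days</li>
    <li class="cost">3000</li>
    <li class="people">with friends</li>
  </ul>
</div>
"""

tree = etree.HTML(html)
# Each xpath() call returns a list of matching text nodes; take the first.
note = {
    "title":  tree.xpath('//h1[@class="title"]/text()')[0],
    "date":   tree.xpath('//li[@class="time"]/text()')[0],
    "days":   tree.xpath('//li[@class="day"]/text()')[0],
    "cost":   tree.xpath('//li[@class="cost"]/text()')[0],
    "people": tree.xpath('//li[@class="people"]/text()')[0],
}
```

In the real scraper the `html` string would come from `requests.get(url).text`, and a list of such `note` dicts can be handed straight to `pandas.DataFrame` for export.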
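If you do have proxies, plugging them in is a small change: `requests` accepts a `proxies` dict per request. A minimal sketch, assuming a hypothetical pool of proxy addresses (the IPs below are placeholders, not working proxies):

```python
import random

# Hypothetical proxy pool -- replace with proxy IPs you have verified.
PROXY_POOL = [
    "http://10.0.0.1:8888",
    "http://10.0.0.2:8888",
]

def pick_proxies():
    """Pick one proxy at random, in the dict format requests expects."""
    proxy = random.choice(PROXY_POOL)
    return {"http": proxy, "https": proxy}

# Usage (network call commented out -- it needs a live proxy):
# resp = requests.get(url, headers=HEADERS, proxies=pick_proxies(), timeout=10)
```

Rotating proxies per request, combined with the random User-Agent below and a polite `time.sleep()` between pages, is usually enough to keep the ban rate down.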
Code
# -*- coding:utf-8 -*-
import requests
import random
import time
import pandas as pd
from lxml import etree
# A small pool of User-Agent strings to rotate through
User_Agent = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36",
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36",
]
HEADERS = {
    'User-Agent': random.choice(User_Agent),  # pick a random UA per run
# 'User-Agent'