Crawling all the URLs of a website with a web crawler

Please credit the source when reposting: https://blog.csdn.net/weixin_45163516

import multiprocessing as mp
import time
from urllib.request import urlopen, urljoin
from bs4 import BeautifulSoup
import re


base_url = 'https://morvanzhou.github.io/'
restricted_crawl = True
#DON'T OVER CRAWL THE WEBSITE OR YOU MAY NEVER VISIT AGAIN


def crawl(url):
    # fetch a page and return its HTML as a decoded string
    response = urlopen(url)
    return response.read().decode()

def parse(html):
    soup = BeautifulSoup(html, 'lxml')
    # internal links are relative paths wrapped in slashes, e.g. '/tutorials/'
    urls = soup.find_all('a', {"href": re.compile('^/.+?/$')})
    title = soup.find('h1').get_text().strip()
    # resolve the relative hrefs against the base URL and de-duplicate them
    page_urls = set([urljoin(base_url, url['href']) for url in urls])
    # canonical URL of the page, taken from its og:url meta tag
    url = soup.find('meta', {'property': "og:url"})['content']
    return title, page_urls, url
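
Before running the full loop, it can help to sanity-check crawl and parse on a single page. A minimal sketch (assuming the target page still has an h1 heading and an og:url meta tag, which is what parse expects; the printed values depend on the live site):

# quick single-page check; output depends on the live site
html = crawl(base_url)
title, page_urls, url = parse(html)
print(title)            # text of the page's <h1>
print(url)              # canonical URL from the og:url meta tag
print(len(page_urls))   # number of internal links found on the page
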
# crawl in the plain, single-process way
unseen = set([base_url, ])
seen = set()

count, t1 = 1, time.time()

while len(unseen) != 0:  # still get some url to visit
    # limit the crawl to 20 pages
    if restricted_crawl and len(seen) > 20:
        break

    print('\nSequential Crawling...')
    htmls = [crawl(url) for url in unseen]

    print('\nSequential Parsing...')
    results = [parse(html) for html in htmls]

    print('\nAnalysing...')
    seen.update(unseen)  # seen the crawled
    unseen.clear()  # nothing unseen

    for title, page_urls, url in results:
        print(count, title, url)
        count += 1
        unseen.update(page_urls - seen)  # get new url to crawl
print('Total time: %.1f s' % (time.time() - t1,))  # 53 s
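
Since the goal is to collect all of the site's URLs, it may be worth writing the seen set to disk once the loop finishes. A minimal sketch (the file name urls.txt is just an example, not from the original post):

# save the collected URLs, one per line
with open('urls.txt', 'w') as f:
    for u in sorted(seen):
        f.write(u + '\n')
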

# crawl with multiprocessing (a pool of 4 worker processes)
unseen = set([base_url,])
seen = set()

pool = mp.Pool(4)
count, t1 = 1, time.time()
while len(unseen) != 0:                 # still get some url to visit
    if restricted_crawl and len(seen) > 20:
        break
    print('\nDistributed Crawling...')
    crawl_jobs = [pool.apply_async(crawl, args=(url,)) for url in unseen]
    htmls = [j.get() for j in crawl_jobs]                                       # request connection

    print('\nDistributed Parsing...')
    parse_jobs = [pool.apply_async(parse, args=(html,)) for html in htmls]
    results = [j.get() for j in parse_jobs]                                     # parse html

    print('\nAnalysing...')
    seen.update(unseen)         # seen the crawled
    unseen.clear()              # nothing unseen

    for title, page_urls, url in results:
        print(count, title, url)
        count += 1
        unseen.update(page_urls - seen)     # get new url to crawl
print('Total time: %.1f s' % (time.time()-t1, ))    # 16 s !!!
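
One caveat the original code does not show: on platforms that start worker processes with "spawn" (Windows, and macOS on newer Python versions), multiprocessing code must run under an if __name__ == '__main__': guard, and the pool should be closed when you are done with it. A minimal sketch of that wrapper (a hypothetical reorganization, same logic as above):

import multiprocessing as mp

def main():
    pool = mp.Pool(4)
    try:
        # run the multiprocessing crawl loop shown above here
        pass
    finally:
        pool.close()   # no new tasks will be submitted
        pool.join()    # wait for the worker processes to exit

if __name__ == '__main__':
    main()
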

Learned from: 莫烦Python (morvanzhou)
