Python crawlers (unfinished) and a little bit of HTTP

A brief overview of the HTTP protocol

w3school: HTTP status messages
w3school: GET vs. POST

Protocol versions

HTTP/1.0: uses port 80; one request per connection
HTTP/1.1: uses port 80; multiple requests can reuse one connection
HTTP transmits data in plaintext, which is insecure
HTTPS: uses port 443; data is transmitted encrypted

Request methods

HTTP/1.0: GET, POST, HEAD
HTTP/1.1 adds: OPTIONS, PUT, DELETE, TRACE, CONNECT, PATCH

Common HTTP request and response headers

User-Agent: browser/client version information (request header)
Accept-Encoding: the encodings the browser accepts (request header)
Referer: the page the current request was navigated from (request header)
Cookie: cookie data sent by the client (request header)
Location: where the client is redirected to (response header)
Set-Cookie: sets a cookie on the client (response header)
WWW-Authenticate: authentication challenge (response header)

HTTP response status codes

1XX: informational
2XX: success
3XX: redirection
4XX: client error
5XX: server error

Making HTTP requests in Python

import requests

GET

Without parameters: r = requests.get(url)
With parameters: r = requests.get(url=url, params={'key1': 'value1', 'key2': 'value2'})
r.url: inspect the final request URL (with the query string appended)

POST

Without parameters: r = requests.post(url)
With parameters: r = requests.post(url=url, data={'key1': 'value1', 'key2': 'value2'})

Custom request headers

headers = {'key1': 'value1'}
r = requests.get(url, headers=headers)

Other request methods

r = requests.put(url)
r = requests.delete(url)
r = requests.head(url)
r = requests.options(url)

Handling HTTP responses in Python

Get the response status code

r.status_code

Get the response body

r.content: the raw response body as bytes, not a string
r.text: the decoded body as a string; if Chinese characters are garbled, set r.encoding = "utf-8" before reading it

Get the headers

Response headers: r.headers
Request headers: r.request.headers

Get the cookies

r.cookies

Get the request URL

r.url (a combined example follows)

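Putting the response attributes above together, a minimal sketch (https://www.baidu.com is used only as an example target):

import requests

url = "https://www.baidu.com"
r = requests.get(url)

print(r.status_code)      # response status code
r.encoding = "utf-8"      # avoid garbled Chinese before reading r.text
print(r.text[:200])       # decoded body as a string
print(r.content[:200])    # raw body as bytes
print(r.headers)          # response headers
print(r.request.headers)  # request headers that were actually sent
print(r.cookies)          # cookies set by the server
print(r.url)              # final request URL
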
HTTP proxies in Python

Proxy configuration

http
https

Parameters

proxies: the proxy mapping
verify=False: HTTPS normally validates the server certificate; this parameter skips the check

import requests
url="https://www.baidu.com"
# proxy at 192.168.100.9
proxies = {'http': 'http://192.168.100.9:8080', 'https': 'https://192.168.100.9:8080'}
r = requests.get(url, proxies=proxies, verify=False)

Inspect the traffic with Burp Suite (BP)

Sessions in Python

Sessions that carry cookies

When you visit certain pages, the server sets a cookie via Set-Cookie so that subsequent requests automatically submit it for authentication.

python-Session

s = requests.Session()
r = s.get(url)
import requests
url = "https://www.baidu.com"
s = requests.Session()

r = s.get(url)
print(r.cookies)
print(r.request.headers)

r1=s.get(url)
print(r1.request.headers)

Directory scanning in Python

How directory scanning works

1. Read a wordlist file and append each entry to the base URL
2. Send an HTTP GET request to each URL
3. Check the status code; if status_code == 200, report the matching directory

Reading the wordlist file

# open the file for reading
with open("filename.txt", "r") as f:
    # read one line at a time
    f.readline()
    # read all lines into a list
    f.readlines()
    # read a characters
    f.read(a)

with open("dir.txt", "r") as f:
    for line in f.readlines():
        print(line.strip())  # strip() removes the trailing newline and surrounding whitespace

Writing the tool

Read the wordlist file
Send HTTP GET requests
Refine the parameters

import requests
import sys

url = sys.argv[1]  # taken from the command line
with open ("dir.txt","r") as f:
    for line in f.readlines():
        line = line.strip()
        r=requests.get(url+line)
        if r.status_code == 200:
            print("url:"+r.url+" exist")
Invocation: python a.py <url>
If the wordlist path is also taken from the command line, set dic = sys.argv[2]
and use with open(dic, "r") as f:
then the invocation becomes: python a.py <url> <wordlist> (see the sketch below)
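Combining both command-line arguments, a minimal sketch of the scanner (same file layout and output format as above):

import sys
import requests

# invocation: python a.py <url> <wordlist>
url = sys.argv[1]
dic = sys.argv[2]

with open(dic, "r") as f:
    for line in f.readlines():
        line = line.strip()
        r = requests.get(url + line)
        if r.status_code == 200:
            print("url:" + r.url + " exist")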

Passing command-line arguments

sys.argv[0] is the script name itself; the user-supplied arguments start at sys.argv[1]

File reading and writing

open(filename,mode)

r: read
w: write, truncating any existing content
a: append to the end of the file (short example below)
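A short example of the difference between the write modes (output.txt is just a placeholder file name):

# "w" truncates the file and writes from scratch
with open("output.txt", "w") as f:
    f.write("first line\n")

# "a" keeps existing content and appends to the end
with open("output.txt", "a") as f:
    f.write("second line\n")

# "r" reads the result back
with open("output.txt", "r") as f:
    print(f.read())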

Custom User-Agent

Use a legitimate browser User-Agent string to avoid anti-crawling measures; a sketch follows.
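A minimal sketch of sending a browser-like User-Agent with requests (the UA string is the Chrome one used later in settings.py; substitute any current browser string):

import requests

url = "http://172.16.1.129"  # example target reused from the sections above
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/84.0.4147.125 Safari/537.36"
}

r = requests.get(url, headers=headers)
print(r.request.headers["User-Agent"])  # confirm what was actually sent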

Python: IIS PUT vulnerability

How the tool works

WebDAV in IIS supports the standard HTTP methods plus some other powerful ones (e.g. MOVE); with WebDAV enabled, arbitrary files can be uploaded directly
The HTTP OPTIONS method reveals which methods the server supports

Writing the tool

1. Identify the target server
2. Send an OPTIONS request
3. Check whether the result lists MOVE and PUT

import requests

url="http://172.16.1.129"

r=requests.options(url)

print(r.headers)
print(r.headers["Allow"])
print(r.headers["Public"])
# methods enabled via WebDAV are listed in the Public header
result = r.headers['Public']
# str.find() returns -1 when the substring is absent, so use membership tests instead
if "PUT" in result and "MOVE" in result:
    print("exist iis put vul")
else:
    print("not exist")

Fetching HTTP server information with Python

Get the web server (middleware) information

IIS
Apache

Get the scripting-technology information

IIS:

import requests

url="http://172.16.1.129"

r=requests.get(url)
print(r.headers['Server'])        # web server software
print(r.headers['X-Powered-By'])  # server-side scripting technology

Vulnerability detection with Python

ms15-034

How the vulnerability works

After a vulnerability is disclosed, a proof-of-concept (PoC) is written from the vulnerability's root cause to verify whether a target is affected.

Writing the code

"GET / HTTP/1.1\r\nHost: stuff\r\nRange: bytes=0-18446744073709551615\r\n\r\n"

import requests

url = "http://172.16.1.130"

r=requests.get(url)
# check whether the server reports one of the vulnerable IIS versions (7.5 / 8.0)
remote_server = r.headers['Server']
if "7.5" in remote_server or "8.0" in remote_server:
    payload = {'Host': 'stuff', 'Range': 'bytes=0-18446744073709551615'}
    r1 = requests.get(url, headers=payload)
    if "Requested Range Not Satisfiable" in str(r1.content):
        print(url + " exists vuln ms15-034")
    else:
        print("no vuln")
else:
    print("Server is not a vulnerable version")

Validating the detection

Set up a vulnerable environment for testing
Compare the results with other tools

Building a site map

The first task when testing a web app

Obtain the site's complete set of directories and files

Techniques

1. Wordlist-based directory and file scanning: easy to do with python requests
2. Web-crawler-based discovery: done with python scrapy

Building it with Burp Suite (BP)

python-scrapy

Scrapy command-line help

Run scrapy with no arguments to view it

Creating a project

scrapy startproject <project name>

Project files and their roles

1. items.py: defines the fields to scrape
2. pipelines.py: defines how the scraped content is saved
3. settings.py: project settings, e.g. the User-Agent

Initializing the project

1. scrapy startproject Tencent
2. Add the fields to scrape in items.py:
a. scrapy.Field()
b. class TencentItem(scrapy.Item)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    position_name = scrapy.Field()
    position_link = scrapy.Field()
    position_type = scrapy.Field()
    position_number = scrapy.Field()
    position_site = scrapy.Field()
    position_time = scrapy.Field()

Creating the base spider

1. scrapy genspider tencentPosition "tencent.com"
2. tencentPosition is the spider name; tencent.com limits the crawl scope
3. Running the command creates tencentPosition.py in the spiders folder, where the spider logic is written (the generated skeleton looks roughly like the sketch below)
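For reference, the skeleton that scrapy genspider generates looks roughly like this (exact contents vary by Scrapy version):

# -*- coding: utf-8 -*-
import scrapy


class TencentpositionSpider(scrapy.Spider):
    name = 'tencentPosition'
    allowed_domains = ['tencent.com']
    start_urls = ['http://tencent.com/']

    def parse(self, response):
        # parsing logic goes here
        pass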

Start and end conditions

1. Initial URL
https://hr.tencent.com/position.php?start=
2. The offset increases by 10 per page; initialize offset = 0
3. Initialize the spider's start URLs: start_urls = [url + str(offset)]
4. End condition
https://hr.tencent.com/position.php?start=2870#a
offset<2870
offset = offset+10
5. Crawl in a loop
After processing each page's data, send a request for the next page:

yield scrapy.Request(self.url+str(self.offset),callback=self.parse)

Initializing the item object

1. Inside the class, access its own attributes and methods through self
self.<attribute>
self.<method>
2. Loop over the table rows

for each in response.xpath("//tr[@class='even'] | //tr[@class='odd']"):
# //tagname selects all matching tags anywhere in the document
# [@class='...'] keeps only the rows whose class is even or odd

3. Read each column

# initialize the item object
item = TencentItem()
# position name
item['position_name'] = each.xpath("./td[1]/a/text()").extract()[0]
# link to the position's detail page
item['position_link'] = each.xpath("./td[1]/a/@href").extract()[0]

Saving the scraped content

1. Open the output file

In the pipeline's __init__, set self.filename = open("tencent.json", "w")

2. Write each item to the file

text=json.dumps(dict(item),ensure_ascii=False)+",\n"
self.filename.write(text)  # the file is opened in text mode, so write the string directly

3. Close the file
In close_spider(self, spider), close it with self.filename.close()

Configuring settings.py

1. Set the default request headers
user-agent
accept

2. Enable the item pipeline (ITEM_PIPELINES)

3. Run the crawl:
scrapy crawl tencentPosition

4. Bug fixes
in tencentPosition.py
in pipelines.py

Complete code

1.items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TencentItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    position_name = scrapy.Field()
    position_link = scrapy.Field()
    position_type = scrapy.Field()
    position_number = scrapy.Field()
    position_site = scrapy.Field()
    position_time = scrapy.Field()

2.tencentPosition.py

# -*- coding: utf-8 -*-
import scrapy
from Tencent.items import TencentItem

class TencentpositionSpider(scrapy.Spider):
    name = 'tencentPosition'
    allowed_domains = ['tencent.com']
    url='https://hr.tencent.com/position.php?start='
    offset=0
    start_urls = [url+str(offset)]

    def parse(self, response):
        # main spider logic
        for line in response.xpath("//tr[@class='even'] | //tr[@class='odd']"):
            item = TencentItem()
            item['position_name'] = line.xpath("./td[1]/a/text()").extract()[0]
            item['position_link'] = line.xpath("./td[1]/a/@href").extract()[0]
            item['position_type'] = line.xpath("./td[2]/text()").extract()[0]
            item['position_number'] = line.xpath("./td[3]/text()").extract()[0]
            item['position_site'] = line.xpath("./td[4]/text()").extract()[0]
            item['position_time'] = line.xpath("./td[5]/text()").extract()[0]
            yield item
        
        if self.offset < 2870:
            self.offset = self.offset+10
            
        yield scrapy.Request(self.url+str(self.offset),callback=self.parse)
        

3.pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import json

class TencentPipeline(object):
    def __init__(self):
        self.filename = open("tencent.json","w")
        
    def process_item(self, item, spider):
        text=json.dumps(dict(item),ensure_ascii=False)+",\n"
        self.filename.write(str(text))
        return item
    def close_spider(self,spider):
        self.filename.close()

4.settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for Tencent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'Tencent'

SPIDER_MODULES = ['Tencent.spiders']
NEWSPIDER_MODULE = 'Tencent.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'Tencent (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
   'Accept-Language': 'en',
   'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'Tencent.middlewares.TencentSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'Tencent.middlewares.TencentDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'Tencent.pipelines.TencentPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Python and wordlists

How the tool works

Read entries from a wordlist file, append each to the URL, send an HTTP GET request, and use the response status code to decide whether the directory or file exists.

Tool outline

Get the command-line parameters
Read the wordlist
Request with multiple threads
Check status codes and report hits
Analyze the results

Tool initialization

Banner function

def banner(): introduces the tool and its name

Usage function

def usage():

Usage arguments: 1. url 2. thread count 3. dictionary (a sketch of both helpers follows)
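A minimal sketch of the two helper functions (the tool name dirscan.py and the exact wording are placeholders):

def banner():
    # introduce the tool and its name
    print("dirscan - a simple multithreaded directory scanner")

def usage():
    # how the tool is invoked: url, thread count, wordlist
    print("Usage: python dirscan.py -u <url> -t <threads> -d <dictionary>")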

Getting the command-line arguments

Modules

1. sys
sys.argv holds the arguments the script was started with
2. getopt
Python's built-in module for parsing command-line options

Obtaining the options

opts,args = getopt.getopt(sys.argv[1:],"u:t:d:")

Given the usage (three options, each followed by a value, plus the script name), len(sys.argv) must equal 7 before the tool runs. Wrap the option parsing in a start() function; a sketch follows.
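A sketch of a start() function built on getopt, assuming the banner() and usage() helpers sketched earlier and the -u/-t/-d options from the getopt call above:

import sys
import getopt

def start(argv):
    url = ""
    thread_count = 10
    dic = ""
    # 3 options, each followed by a value, plus the script name -> 7 argv entries
    if len(sys.argv) != 7:
        usage()
        sys.exit()
    try:
        opts, args = getopt.getopt(argv, "u:t:d:")
    except getopt.GetoptError:
        usage()
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-t":
            thread_count = int(arg)
        elif opt == "-d":
            dic = arg
    return url, thread_count, dic

if __name__ == "__main__":
    banner()
    url, thread_count, dic = start(sys.argv[1:])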

Reading the wordlist file

Reading the file in Python

with open(filename,mode) as f:
f.readlines()

Multithreading approach

Each thread processes a fixed number of wordlist entries
Build the per-thread wordlists; each chunk is stored as a list (a chunking sketch follows)
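A sketch of splitting the wordlist into per-thread chunks (function and variable names are illustrative):

def load_chunks(dic, thread_count):
    # read the wordlist and drop blank lines
    with open(dic, "r") as f:
        words = [line.strip() for line in f.readlines() if line.strip()]
    # give each thread a fixed-size slice of the wordlist
    chunk_size = len(words) // thread_count + 1
    return [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]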

Multithreaded requests

Threads in Python

threading.Thread(target=..., args=(...))
start()

Thread list

Each thread reads entries from its own chunk of the wordlist
Scan function scan() (a threaded sketch follows)
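A sketch of the scan() function and the thread list that ties it together (scan() and run_threads() are illustrative names; chunks is the per-thread list built above):

import threading
import requests

def scan(url, words):
    # each thread scans its own chunk of the wordlist
    for word in words:
        r = requests.get(url + word)
        if r.status_code == 200:
            print("url:" + r.url + " exist")

def run_threads(url, chunks):
    threads = []
    for chunk in chunks:
        t = threading.Thread(target=scan, args=(url, chunk))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # wait for every thread to finish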

Password brute forcing

Requires a sufficiently large wordlist

Online

Enumerate valid users and credentials against a live service's authentication

Offline

Crack captured ciphertext or password hashes offline

Cracking with Burp Suite

Brute forcing form-based logins and HTTP authentication
