Getting Familiar with Kali Tools: Intelligence Analysis

Intelligence Analysis

The tools covered are maltego, spiderfoot, spiderfoot-cli, and theharvester.
They collect useful information from various sources, such as details about a website.

What intelligence gathering is for

The information gathering here focuses on websites: details about a site are obtained
mainly from public providers such as domain registrars, apparently for purposes like building a social-engineering database.
  • Know your enemy and know yourself, and you will win every battle. The more information you have, the easier the job becomes; with enough of it you may even run straight into a password from some leaked database.

maltego: domain-related information gathering

    Maltego is an extremely powerful information-gathering and network-reconnaissance tool.
    Given just a domain name, Maltego can find a large amount of related information about the site (subdomains,
IP addresses, DNS services, associated email addresses). Maltego can also be used to investigate a person.

As for what this information is useful for:

  • Subdomains
    A subdomain is an address one level below the domain; it may be hosted on the same server as the main site, or on a different one.

  • IP
    The IP address needs no further explanation.

  • Other
    May serve as raw material for some kind of database.
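Maltego's workflow can be pictured as running "transforms" that expand one entity (say, a domain) into related entities such as subdomains, IPs, and email addresses. A minimal sketch of that idea, using invented local data rather than Maltego's real transform servers:

```python
# Toy model of Maltego-style "transforms": each transform maps one
# entity to a list of related entities. All data below is invented
# for illustration; real Maltego queries DNS, WHOIS, search engines, etc.

TRANSFORMS = {
    "domain->subdomains": {"example.com": ["www.example.com", "mail.example.com"]},
    "domain->emails": {"example.com": ["admin@example.com"]},
    "host->ip": {"www.example.com": ["93.184.216.34"]},
}

def run_transform(name, entity):
    """Return the entities related to `entity` under transform `name`."""
    return TRANSFORMS.get(name, {}).get(entity, [])

def expand(domain):
    """Expand a domain into a small entity graph, like one Maltego run."""
    graph = {
        "domain": domain,
        "subdomains": run_transform("domain->subdomains", domain),
        "emails": run_transform("domain->emails", domain),
    }
    # Chain transforms: resolve each discovered subdomain to its IPs.
    graph["ips"] = [ip for host in graph["subdomains"]
                    for ip in run_transform("host->ip", host)]
    return graph

print(expand("example.com"))
```

The point of the sketch is the chaining: the output of one transform (subdomains) becomes the input of the next (host-to-IP), which is how a single seed domain fans out into a whole graph.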

Command

root@kali:~# maltego -h
java is /usr/bin/java
found java executable in PATH
pwd: file:/usr/share/maltego/maltego-ui/modules/ext/Java_Config_App.jar
install conf: /usr/share/maltego/etc/maltego.conf
install version: v4.3.0
user conf: /root/.maltego/v4.3.0/etc/maltego.conf
current java: /usr/lib/jvm/java-11-openjdk-amd64
/usr/lib/jvm: /usr/lib/jvm
/usr/lib/jvm: /usr/lib/jvm/java-11-openjdk-amd64
/usr/lib/jvm: /usr/lib/jvm/default-java
/usr/lib/jvm: /usr/lib/jvm/openjdk-11
/usr/lib/jvm: /usr/lib/jvm/java-1.11.0-openjdk-amd64
does not exist: /usr/lib/jvm/openjdk-11/bin/java
not jre/jdk: /usr/lib/jvm/openjdk-11
does not exist: /usr/lib/jvm/bin/java
not jre/jdk: /usr/lib/jvm
/usr/lib/jvm/java-1.11.0-openjdk-amd64 VS /usr/lib/jvm/java-11-openjdk-amd64 (/usr/lib/jvm/java-11-openjdk-amd64/bin/java)
/usr/lib/jvm/java-1.11.0-openjdk-amd64 sym /usr/lib/jvm/java-11-openjdk-amd64
/usr/lib/jvm/java-1.11.0-openjdk-amd64 VS /usr/lib/jvm/default-java (/usr/lib/jvm/java-11-openjdk-amd64/bin/java)
/usr/lib/jvm/default-java dig /usr/lib/jvm/java-1.11.0-openjdk-amd64
trying path: /usr/lib/jvm/default-java
canonical: /usr/lib/jvm/java-11-openjdk-amd64
javaHome: /usr/lib/jvm/default-java
resource:com/paterva/maltego/java/config/jre/TestJDK.class -> /tmp/temp76285981688762274512573397534810/TestJDK.class
executing: /usr/lib/jvm/java-11-openjdk-amd64/bin/java -classpath /tmp/temp76285981688762274512573397534810 TestJDK, in: .
 result: 0
 command execution finished
 out: 11.0.16, 11.0.16+8-post-Debian-1, Debian, Linux, amd64
 runtime: 11.0.16 x64 Debian
runtimes: 1
selected java: /usr/lib/jvm/default-java
Detecting appropriate heap size...
resource:com/paterva/maltego/java/config/mem/TestMem.class -> /tmp/temp62886301052411038832573469880462/TestMem.class
memOut: 15651086336
pm: 14926/14926
7563...
11244...
13085...
14005...
14465...
14695...
14810...
14868...
14897...
14911...
selected heap size: 11440
canonical: /usr/lib/jvm/java-11-openjdk-amd64
javaHome: /usr/lib/jvm/default-java
executing: /usr/lib/jvm/java-11-openjdk-amd64/bin/java -classpath /tmp/temp76285981688762274512573397534810 TestJDK, in: .
 result: 0
 command execution finished
 out: 11.0.16, 11.0.16+8-post-Debian-1, Debian, Linux, amd64
./../platform/lib/nbexec: WARNING: environment variable DISPLAY is not set
Module reload options:
  --reload /path/to/module.jar  install or reinstall a module JAR file

Additional module options:
  -o, --open <arg1>...<argN> 
  -u, --updates <arg>        
  -s, --serverHttpAllowed    
  -h, --hub <arg>            
  -i, --import <arg>         
  -p, --automationPort <arg> 
  -m, --machine <arg>        

Core options:
  --laf <LaF classname> use given LookAndFeel class instead of the default
  --fontsize <size>     set the base font size of the user interface, in points
  --locale <language[:country[:variant]]> use specified locale
  --userdir <path>      use specified directory to store user settings
  --cachedir <path>     use specified directory to store user cache, must be different from userdir
  --nosplash            do not show the splash screen

spiderfoot: automated collection of information about a target

This package contains an open-source intelligence (OSINT) automation tool. Its goal is to automate the gathering of intelligence about a given target, which may be an IP address, domain name, hostname, network subnet, ASN, email address, or person's name.
SpiderFoot can be used offensively, i.e. as part of a black-box penetration test to gather information about the target, or defensively, to identify what information you or your organization may be handing to would-be attackers.

Command

root@kali:~# spiderfoot -h
usage: sf.py [-h] [-d] [-l IP:port] [-m mod1,mod2,...] [-M] [-C scanID]
             [-s TARGET] [-t type1,type2,...]
             [-u {all,footprint,investigate,passive}] [-T] [-o {tab,csv,json}]
             [-H] [-n] [-r] [-S LENGTH] [-D DELIMITER] [-f]
             [-F type1,type2,...] [-x] [-q] [-V] [-max-threads MAX_THREADS]

SpiderFoot 4.0.0: Open Source Intelligence Automation.

options:
  -h, --help            show this help message and exit
  -d, --debug           Enable debug output.
  -l IP:port            IP and port to listen on.
  -m mod1,mod2,...      Modules to enable.
  -M, --modules         List available modules.
  -C scanID, --correlate scanID
                        Run correlation rules against a scan ID.
  -s TARGET             Target for the scan.
  -t type1,type2,...    Event types to collect (modules selected
                        automatically).
  -u {all,footprint,investigate,passive}
                        Select modules automatically by use case
  -T, --types           List available event types.
  -o {tab,csv,json}     Output format. Tab is default.
  -H                    Don't print field headers, just data.
  -n                    Strip newlines from data.
  -r                    Include the source data field in tab/csv output.
  -S LENGTH             Maximum data length to display. By default, all data
                        is shown.
  -D DELIMITER          Delimiter to use for CSV output. Default is ,.
  -f                    Filter out other event types that weren't requested
                        with -t.
  -F type1,type2,...    Show only a set of event types, comma-separated.
  -x                    STRICT MODE. Will only enable modules that can
                        directly consume your target, and if -t was specified
                        only those events will be consumed by modules. This
                        overrides -t and -m options.
  -q                    Disable logging. This will also hide errors!
  -V, --version         Display the version of SpiderFoot and exit.
  -max-threads MAX_THREADS
                        Max number of modules to run concurrently.
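Per the help text above, SpiderFoot's default output format is tab-separated (`-o tab`), so its results are easy to post-process with a few lines of Python. The sample data and the (source, type, data) column layout below are assumptions for illustration, not SpiderFoot's exact schema:

```python
# Filter events of interest out of tab-separated scanner output.
# The sample and the three-column (source, type, data) layout are
# illustrative assumptions, not SpiderFoot's exact output schema.

SAMPLE = """sfp_dnsresolve\tIP_ADDRESS\t93.184.216.34
sfp_email\tEMAILADDR\tadmin@example.com
sfp_dnsbrute\tINTERNET_NAME\twww.example.com"""

def filter_events(tab_text, wanted_types):
    """Yield (type, data) pairs whose event type is in wanted_types."""
    for line in tab_text.splitlines():
        source, etype, data = line.split("\t", 2)
        if etype in wanted_types:
            yield etype, data

emails = [data for _, data in filter_events(SAMPLE, {"EMAILADDR"})]
print(emails)
```

In practice you would feed the file produced by a scan (e.g. one started with `-s TARGET -o tab`) into a filter like this instead of a hard-coded sample.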

spiderfoot-cli

root@kali:~# spiderfoot-cli -h
usage: sfcli.py [-h] [-d] [-s URL] [-u USER] [-p PASS] [-P PASSFILE] [-e FILE]
                [-l FILE] [-n] [-o FILE] [-i] [-q] [-k] [-b]

SpiderFoot: Open Source Intelligence Automation.

options:
  -h, --help   show this help message and exit
  -d, --debug  Enable debug output.
  -s URL       Connect to SpiderFoot server on URL. By default, a connection
               to http://127.0.0.1:5001 will be attempted.
  -u USER      Username to authenticate to SpiderFoot server.
  -p PASS      Password to authenticate to SpiderFoot server. Consider using
               -P PASSFILE instead so that your password isn't visible in your
               shell history or in process lists!
  -P PASSFILE  File containing password to authenticate to SpiderFoot server.
               Ensure permissions on the file are set appropriately!
  -e FILE      Execute commands from FILE.
  -l FILE      Log command history to FILE. By default, history is stored to
               ~/.spiderfoot_history unless disabled with -n.
  -n           Disable history logging.
  -o FILE      Spool commands and output to FILE.
  -i           Allow insecure server connections when using SSL
  -q           Silent output, only errors reported.
  -k           Turn off color-coded output.
  -b, -v       Print the banner w/ version and exit.

theHarvester: general information gathering

This package contains a tool for gathering subdomain names, email addresses, virtual hosts, open ports/banners, and employee names from different public sources (search engines, PGP key servers).

Ugh, understood; so apparently this one may have been deprecated.

Command

root@kali:~# theHarvester -h

*******************************************************************
*  _   _                                            _             *
* | |_| |__   ___    /\  /\__ _ _ ____   _____  ___| |_ ___ _ __  *
* | __|  _ \ / _ \  / /_/ / _` | '__\ \ / / _ \/ __| __/ _ \ '__| *
* | |_| | | |  __/ / __  / (_| | |   \ V /  __/\__ \ ||  __/ |    *
*  \__|_| |_|\___| \/ /_/ \__,_|_|    \_/ \___||___/\__\___|_|    *
*                                                                 *
* theHarvester 4.0.3                                              *
* Coded by Christian Martorella                                   *
* Edge-Security Research                                          *
* cmartorella@edge-security.com                                   *
*                                                                 *
******************************************************************* 


usage: theHarvester [-h] -d DOMAIN [-l LIMIT] [-S START] [-g] [-p] [-s]
                    [--screenshot SCREENSHOT] [-v] [-e DNS_SERVER]
                    [-t DNS_TLD] [-r] [-n] [-c] [-f FILENAME] [-b SOURCE]

theHarvester is used to gather open source intelligence (OSINT) on a company
or domain.

options:
  -h, --help            show this help message and exit
  -d DOMAIN, --domain DOMAIN
                        Company name or domain to search.
  -l LIMIT, --limit LIMIT
                        Limit the number of search results, default=500.
  -S START, --start START
                        Start with result number X, default=0.
  -g, --google-dork     Use Google Dorks for Google search.
  -p, --proxies         Use proxies for requests, enter proxies in
                        proxies.yaml.
  -s, --shodan          Use Shodan to query discovered hosts.
  --screenshot SCREENSHOT
                        Take screenshots of resolved domains specify output
                        directory: --screenshot output_directory
  -v, --virtual-host    Verify host name via DNS resolution and search for
                        virtual hosts.
  -e DNS_SERVER, --dns-server DNS_SERVER
                        DNS server to use for lookup.
  -t DNS_TLD, --dns-tld DNS_TLD
                        Perform a DNS TLD expansion discovery, default False.
  -r, --take-over       Check for takeovers.
  -n, --dns-lookup      Enable DNS server lookup, default False.
  -c, --dns-brute       Perform a DNS brute force on the domain.
  -f FILENAME, --filename FILENAME
                        Save the results to an XML and JSON file.
  -b SOURCE, --source SOURCE
                        anubis, baidu, bing, binaryedge, bingapi,
                        bufferoverun, censys, certspotter, crtsh, dnsdumpster,
                        duckduckgo, fullhunt, github-code, google,
                        hackertarget, hunter, intelx, linkedin,
                        linkedin_links, n45ht, omnisint, otx, pentesttools,
                        projectdiscovery, qwant, rapiddns, rocketreach,
                        securityTrails, spyse, sublist3r, threatcrowd,
                        threatminer, trello, twitter, urlscan, virustotal,
                        yahoo, zoomeye
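At its core, theHarvester scrapes search-engine result pages and similar sources for email addresses and hostnames belonging to the target domain. A minimal sketch of that extraction step, run against a canned page of text instead of live search results (the page content is made up for illustration):

```python
import re

# Extract emails and hostnames belonging to a target domain from a
# blob of text, the way theHarvester mines search-engine results.
# PAGE is invented sample text, not output from a real search.

PAGE = ("Contact sales@example.com or visit mail.example.com. "
        "Our careers page lists hr@example.com and jobs.example.com.")

def harvest(text, domain):
    """Return (emails, hosts) found in `text` for the given domain."""
    dom = re.escape(domain)
    emails = set(re.findall(r"[\w.+-]+@" + dom, text))
    hosts = set(re.findall(r"\b[\w-]+\." + dom, text))
    return sorted(emails), sorted(hosts)

emails, hosts = harvest(PAGE, "example.com")
print(emails)
print(hosts)
```

The real tool does the same kind of pattern matching across many sources (the `-b` list above) and deduplicates the results, which is exactly what the two `set()` calls stand in for here.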

restfulHarvest

root@kali:~# restfulHarvest -h
usage: restfulHarvest [-h] [-H HOST] [-p PORT] [-l LOG_LEVEL] [-r]

options:
  -h, --help            show this help message and exit
  -H HOST, --host HOST  IP address to listen on default is 127.0.0.1
  -p PORT, --port PORT  Port to bind the web server to, default is 5000
  -l LOG_LEVEL, --log-level LOG_LEVEL
                        Set logging level, default is info but
                        [critical|error|warning|info|debug|trace] can be set
  -r, --reload          Enable automatic reload used during development of the
                        api

These tools mainly gather assorted information about a website; the principle, presumably, is to pull the relevant data from public providers so you don't have to look everything up one item at a time, which is far more convenient.
