Set up Splunk on Linux

0. Download the Splunk installation package:

http://www.splunk.com/download/splunk-6.1.2-213098-linux-2.6-x86_64.rpm

1. Install Splunk Enterprise:

    rpm -i splunk-6.1.2-213098-linux-2.6-x86_64.rpm

2. Start Splunk:

    cd /opt/splunk/bin

    ./splunk start --accept-license

    ./splunk enable boot-start -user root

3. Install the universal forwarder:

    rpm -i splunkforwarder-6.1.2-213098-linux-2.6-x86_64.rpm

4. Start the Splunk forwarder:

    cd /opt/splunkforwarder/bin

    ./splunk start --accept-license

    ./splunk enable boot-start -user root

5. Change the forwarder admin password (the default credentials are admin:changeme; in this example the password is changed to forwardme):

    cd /opt/splunkforwarder/bin

    ./splunk edit user admin -password <new password> -role admin -auth admin:changeme

6. Configure the universal forwarder to act as a deployment client:

    ./splunk set deploy-poll 127.0.0.1:8089

7. Configure the universal forwarder to forward to a specific receiving indexer:

    ./splunk add forward-server 127.0.0.1:9997 -auth admin:forwardme

8. Configure the forwarder's inputs.conf:

   cd /opt/splunkforwarder/etc/system/local

   gedit inputs.conf

Add a monitor stanza (my sample directory: /home/aimqa/Desktop/SG_JobsResults; my sample sourcetype: sg_production):

[monitor://<the directory you would like to monitor>]

disabled = false

sourcetype = <the sourcetype name you set up on the server>
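Putting step 8 together, a complete inputs.conf using the sample values above (the sample path /home/aimqa/Desktop/SG_JobsResults and sourcetype sg_production from this guide) would look like:

```ini
# Sample inputs.conf -- the path and sourcetype are the samples from this
# guide; substitute your own directory and sourcetype name.
[monitor:///home/aimqa/Desktop/SG_JobsResults]
disabled = false
sourcetype = sg_production
```

After editing, restart the forwarder (./splunk restart) so the new input takes effect.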

9. Additional setting:

     If you want to clone your data to a second server as well, you can do so by editing outputs.conf:

    cd /opt/splunkforwarder/etc/system/local

    gedit outputs.conf


Before editing, outputs.conf should look like this:

    [tcpout]

    defaultGroup = <target_group>

    [tcpout:<target_group>]

    server = <receiving_server1>:<port>
    <attribute1> = <val1>
    <attribute2> = <val2>
 
To set up data cloning, modify outputs.conf to:

    [tcpout]

    defaultGroup = <target_group1>,<target_group2>

    [tcpout:<target_group1>]

    server = <receiving_server1>:<port>

    [tcpout:<target_group2>]

    server = <receiving_server2>:<port>
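For example, a cloning setup that forwards every event to two indexers might look like this (the group names and the second server address are hypothetical placeholders, not values from this guide):

```ini
[tcpout]
defaultGroup = primary_group,clone_group

# Hypothetical group names and addresses -- replace with your own indexers.
[tcpout:primary_group]
server = 127.0.0.1:9997

[tcpout:clone_group]
server = 192.168.1.20:9997
```

With two groups listed in defaultGroup, the forwarder sends a full copy of the data to each group.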

10. Add more monitored directories, each with a different sourcetype:

cd /opt/splunkforwarder/etc/system/local

   gedit inputs.conf

Add another monitor stanza (my sample: /home/aimqa/Desktop/SG_JobsResults):

[monitor://<the directory you would like to monitor>]
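For example, an inputs.conf with two monitored directories, each mapped to its own sourcetype (the second path and sourcetype are hypothetical placeholders):

```ini
[monitor:///home/aimqa/Desktop/SG_JobsResults]
disabled = false
sourcetype = sg_production

# Hypothetical second input -- replace the path and sourcetype with your own.
[monitor:///home/aimqa/Desktop/SG_OtherLogs]
disabled = false
sourcetype = sg_other
```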

11. Now you can set up a data-collection job that grabs the data you want to monitor and writes it into the monitored directory, so that the data is forwarded to the server for deeper searching.
 
    I used crontab on Linux to query data periodically and populate the monitored directory:

    /sbin/service crond stop
    crontab -e
    0 * * * * /bin/sh <your bash file .sh>
    /sbin/service crond start

    Before starting the scheduled job, I queried all the historical data and forwarded it to the server; then I started crond to query new data once an hour, on the hour.
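The bash file referenced in the crontab line can be sketched roughly like this (the directory and the placeholder "query" are assumptions; substitute your real data-collection command and monitored path):

```shell
#!/bin/sh
# Hypothetical sketch of the hourly job run from crontab.
# MONITOR_DIR and the placeholder "query" below are assumptions --
# replace them with your real monitored directory and collection command.
MONITOR_DIR="${MONITOR_DIR:-./SG_JobsResults}"
mkdir -p "$MONITOR_DIR"
# Placeholder query: append a timestamped line to the monitored log file.
date "+%Y-%m-%d %H:%M:%S query results placeholder" >> "$MONITOR_DIR/results.log"
```

Because the forwarder tails the monitored directory, each line appended by the job is picked up and forwarded to the indexer automatically.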







This study used the Sen+MK method to analyze ET (evapotranspiration) trends within a specific region, focusing on assessing spatial changes in ET using remote sensing data. The method combines the Sen slope estimator with the Mann-Kendall (MK) test, providing a robust framework for evaluating long-term trends while accounting for both temporal variation and statistical significance.

Main process and results:

1. ET trend visualization: using the ET data, the study shows the spatial and temporal variation of evapotranspiration across regions through ET-MK and ET trend maps. These maps represent different ET levels and their trends with color gradients.

2. Mann-Kendall test: the MK test was applied to assess the statistical significance of the ET trends. The results are presented as binary classification maps marking the significance of ET changes, helping identify regions with significant change.

3. Reclassification results: through reclassification, regions were categorized by the significance of their ET changes, focusing the analysis on regions with significant change. This ensures the analysis concentrates on practically meaningful findings.

4. Final output: the final results are presented as raster maps and PNG images, supporting applications such as policy planning, water resource management, and land-use change analysis, all based on detailed spatiotemporal analysis.

-------------------------------------------------------------------

Folder structure:

data folder: raw data, the base data supporting the analysis (MOD16A2H ET data for part of Ningxia).

results folder: analysis results and visualizations, presenting the research outcomes.

Sen+MK_optimized.py: the main analysis script, suited to batch data processing and automated analysis.

Sen+MK.ipynb: a Jupyter Notebook that reproduces the visualization maps.