dmidecode and megacli Command Usage Reference

Source: https://blog.csdn.net/signmem/article/details/42424695

Common dmidecode Commands


Machine model

  [root@test_raid ~]# dmidecode | grep "Product"
  Product Name: PowerEdge R720xd
  Product Name: 068CDY

Vendor

  [root@test_raid ~]# dmidecode | grep "Manufacturer"
  Manufacturer: Dell Inc.


Serial number information

  [root@test_raid ~]# dmidecode | grep -B 4 "Serial Number" | more
  System Information
  Manufacturer: Dell Inc.
  Product Name: PowerEdge R720xd
  Version: Not Specified
  Serial Number: 8V3Q342
  --
  Base Board Information
  Manufacturer: Dell Inc.
  Product Name: 068CDY
  Version: A01
  Serial Number: ..CN779214AR02CC.



CPU information

  [root@test_raid ~]# dmidecode | grep "CPU"
  Socket Designation: CPU1
  Version: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
  Socket Designation: CPU2
  Version: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz


Physical CPU count

  [root@test_raid ~]# dmidecode | grep "Socket Designation: CPU" | wc -l
  2
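Note that on boards with empty sockets, counting "Socket Designation" lines also counts unpopulated sockets. A hedged sketch that filters on the Status field instead; the sample text below stands in for `dmidecode -t processor` output so the script runs without root:

```shell
# Count only populated CPU sockets. Empty sockets still emit a
# "Socket Designation" line, so filter on "Status: Populated".
sample='Socket Designation: CPU1
Status: Populated, Enabled
Socket Designation: CPU2
Status: Unpopulated'
populated=$(printf '%s\n' "$sample" | grep -c 'Status: Populated')
echo "populated sockets: $populated"
```

On a real host, replace the embedded sample with `dmidecode -t processor` run as root.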


Manufacture date

  [root@test_raid ~]# dmidecode | grep "Date"
  Release Date: 07/09/2014

Common megacli Commands

Power (BBU) management

Charge status

  [root@test_raid ~]# megacli -AdpBbuCmd -GetBbuStatus -aALL | grep "Charger Status"
  Charger Status: Complete

Charge percentage

  [root@test_raid ~]# megacli -AdpBbuCmd -GetBbuStatus -aALL | grep "Relative State of Charge"
  Relative State of Charge: 100%
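The two BBU checks above can be combined into a small monitoring sketch. The sample output is embedded so it runs without a controller; on a real host you would pipe `megacli -AdpBbuCmd -GetBbuStatus -aALL` instead, and the 80% threshold is an arbitrary example:

```shell
# Parse the BBU charge from GetBbuStatus-style output and warn below
# a threshold. awk's $2+0 coerces "100 %" to the number 100.
sample='Charger Status: Complete
Relative State of Charge: 100 %'
charge=$(printf '%s\n' "$sample" | awk -F': ' '/Relative State of Charge/ {print $2+0}')
if [ "$charge" -lt 80 ]; then
  echo "WARN: BBU charge at ${charge}%"
else
  echo "OK: BBU charge at ${charge}%"
fi
```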


Current number of RAID groups

  [root@test_raid ~]# megacli -cfgdsply -aALL | grep "Number of DISK GROUPS:"
  Number of DISK GROUPS: 1


Information queries


RAID controller information

  [root@test_raid ~]# megacli -cfgdsply -aALL | more
  ==============================================================================
  Adapter: 0
  Product Name: PERC H710P Mini
  Memory: 1024MB
  BBU: Present
  Serial No: 49F033N
  ==============================================================================
  Number of DISK GROUPS: 1
  DISK GROUP: 0
  Number of Spans: 1
  SPAN: 0
  Span Reference: 0x00
  Number of PDs: 2
  Number of VDs: 1
  Number of dedicated Hotspares: 0
  Virtual Drive Information:
  Virtual Drive: 0 (Target Id: 0)
  Name: system_vd
  RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
  Size: 3.637 TB
  Mirror Data: 3.637 TB
  State: Optimal
  Strip Size: 64 KB
  Number Of Drives: 2
  Span Depth: 1
  Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Default Access Policy: Read/Write
  Current Access Policy: Read/Write
  Disk Cache Policy: Disk's Default
  Ongoing Progresses:
  Background Initialization: Completed 13%, Taken 63 min.
  Encryption Type: None
  Default Power Savings Policy: Controller Defined
  Current Power Savings Policy: None
  Can spin up in 1 minute: Yes
  LD has drives that support T10 power conditions: Yes
  LD's IO profile supports MAX power savings with cached writes: No
  Bad Blocks Exist: No
  Is VD Cached: Yes
  Cache Cade Type: Read Only
  Physical Disk Information:
  Physical Disk: 0
  Enclosure Device ID: 32
  Slot Number: 0
  Drive's postion: DiskGroup: 0, Span: 0, Arm: 0
  Enclosure position: 1
  Device Id: 0
  WWN: 5000C50062A960D0
  Sequence Number: 2
  Media Error Count: 0
  Other Error Count: 0
  Predictive Failure Count: 0
  Last Predictive Failure Event Seq Number: 0
  PD Type: SAS
  Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
  Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
  Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
  Firmware state: Online, Spun Up
  Device Firmware Level: GS0F
  Shield Counter: 0
  Successful diagnostics completion on: N/A
  SAS Address(0): 0x5000c50062a960d1
  SAS Address(1): 0x0
  Connected Port Number: 0(path0)
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABTC
  FDE Capable: Not Capable
  FDE Enable: Disable
  Secured: Unsecured
  Locked: Unlocked
  Needs EKM Attention: No
  Foreign State: None
  Device Speed: 6.0Gb/s
  Link Speed: 6.0Gb/s
  Media Type: Hard Disk Device
  Drive Temperature: 29C (84.20 F)
  PI Eligibility: No
  Drive is formatted for PI information: No
  PI: No PI
  Port-0:
  Port status: Active
  Port's Linkspeed: 6.0Gb/s
  Port-1:
  Port status: Active
  Port's Linkspeed: Unknown
  Drive has flagged a S.M.A.R.T alert: No
  Physical Disk: 1
  Enclosure Device ID: 32
  Slot Number: 1
  Drive's postion: DiskGroup: 0, Span: 0, Arm: 1
  Enclosure position: 1
  Device Id: 1
  WWN: 5000C50062A98C78
  Sequence Number: 2
  Media Error Count: 0
  Other Error Count: 0
  Predictive Failure Count: 0
  Last Predictive Failure Event Seq Number: 0
  PD Type: SAS
  Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
  Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
  Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
  Firmware state: Online, Spun Up
  Device Firmware Level: GS0F
  Shield Counter: 0
  Successful diagnostics completion on: N/A
  SAS Address(0): 0x5000c50062a98c79
  SAS Address(1): 0x0
  Connected Port Number: 0(path0)
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABD4
  FDE Capable: Not Capable
  FDE Enable: Disable
  Secured: Unsecured
  Locked: Unlocked
  Needs EKM Attention: No
  Foreign State: None
  Device Speed: 6.0Gb/s
  Link Speed: 6.0Gb/s
  Media Type: Hard Disk Device
  Drive Temperature: 29C (84.20 F)
  PI Eligibility: No
  Drive is formatted for PI information: No
  PI: No PI
  Port-0:
  Port status: Active
  Port's Linkspeed: 6.0Gb/s
  Port-1:
  Port status: Active
  Port's Linkspeed: Unknown
  Drive has flagged a S.M.A.R.T alert: No
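Output like the above is easiest to consume by pairing each "Slot Number" line with the counters that follow it. A hedged sketch that flags drives with a nonzero Media Error Count; the embedded sample stands in for real `megacli -PDList -aALL` output:

```shell
# Report slots whose Media Error Count is above zero. awk remembers
# the last Slot Number seen and checks each error counter against it.
sample='Slot Number: 0
Media Error Count: 0
Slot Number: 1
Media Error Count: 3'
alerts=$(printf '%s\n' "$sample" | awk '
  /Slot Number:/       {slot=$NF}
  /Media Error Count:/ {if ($NF > 0) print "slot " slot ": " $NF " media errors"}')
echo "$alerts"
```

The same slot-then-field pattern works for Other Error Count and Predictive Failure Count.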


Other physical-disk details

[root@test_raid ~]# megacli -PDList -aALL 



Current RAID virtual drive information

  [root@test_raid ~]# megacli -LDInfo -LALL -aALL
  Adapter 0 -- Virtual Drive Information:
  Virtual Drive: 0 (Target Id: 0)
  Name: system_vd
  RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
  Size: 3.637 TB
  Mirror Data: 3.637 TB
  State: Optimal
  Strip Size: 64 KB
  Number Of Drives: 2
  Span Depth: 1
  Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Default Access Policy: Read/Write
  Current Access Policy: Read/Write
  Disk Cache Policy: Disk's Default
  Ongoing Progresses:
  Background Initialization: Completed 14%, Taken 64 min.
  Encryption Type: None
  Default Power Savings Policy: Controller Defined
  Current Power Savings Policy: None
  Can spin up in 1 minute: Yes
  LD has drives that support T10 power conditions: Yes
  LD's IO profile supports MAX power savings with cached writes: No
  Bad Blocks Exist: No
  Is VD Cached: Yes
  Cache Cade Type: Read Only




RAID controller count

  [root@test_raid ~]# megacli -adpCount
  Controller Count: 1.




RAID controller time

  [root@test_raid ~]# megacli -AdpGetTime -aALL
  Adapter 0:
  Date: 12/31/2014
  Time: 16:21:15




Cache and policies

RAID cache policy

  [root@test_raid ~]# megacli -cfgdsply -aALL | grep Polic
  Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
  Default Access Policy: Read/Write
  Current Access Policy: Read/Write
  Disk Cache Policy: Disk's Default
  Default Power Savings Policy: Controller Defined
  Current Power Savings Policy: None



View the cache policy of a virtual drive

  [root@test_raid ~]# megacli -LDGetProp -Cache -L0 -a0     <- first virtual drive
  [root@test_raid ~]# megacli -LDGetProp -Cache -L1 -a0     <- second virtual drive
  [root@test_raid ~]# megacli -LDGetProp -Cache -LALL -a0
  Adapter 0-VD 0(target id: 0): Cache Policy:WriteBack, ReadAdaptive, Direct, No Write Cache if bad BBU


Set the cache policy

Cache policy terms:

  WT (Write Through)
  WB (Write Back)
  NORA (No Read Ahead)
  RA (Read Ahead)
  ADRA (Adaptive Read Ahead)
  Cached
  Direct

  Option syntax:
  -RW|RO|Blocked|RemoveBlocked|WT|WB|ForcedWB [-Immediate]|RA|NORA|DsblPI|Cached|Direct|-EnDskCache|DisDskCache|CachedBadBBU|NoCachedBadBBU
  -Lx|-L0,1,2|-LALL  -aN|-a0,1,2|-aALL


Set write-through (WT)

  [root@test_raid ~]# megacli -LDSetProp WT -L0 -a0
  Set Write Policy to WriteThrough on Adapter 0, VD 0 (target id: 0) success



Set Direct I/O

  [root@test_raid ~]# megacli -LDSetProp -Direct -L0 -a0
  Set Cache Policy to Direct on Adapter 0, VD 0 (target id: 0) success



Disable the disk cache

  [root@test_raid ~]# megacli -LDSetProp -DisDskCache -L0 -a0
  Set Disk Cache Policy to Disabled on Adapter 0, VD 0 (target id: 0) success
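Because WriteBack without a healthy BBU risks losing cached writes on power failure, a common pattern is to pick the write policy from the BBU charger status. A dry-run sketch: the status value is hard-coded here instead of parsed from `megacli -AdpBbuCmd -GetBbuStatus`, and the command is only echoed, never executed:

```shell
# Choose WB when the BBU reports a complete charge, WT otherwise,
# and print the megacli command that would apply it (dry run).
charger="Complete"   # on a real host: parse "Charger Status" from megacli
if [ "$charger" = "Complete" ]; then
  policy=WB
else
  policy=WT
fi
echo "would run: megacli -LDSetProp $policy -L0 -a0"
```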



RAID management


Disk inspection

Query disk count and serial numbers

  [root@test_raid ~]# megacli -PDList -aALL | grep 'Inquiry Data:'
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABTC
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6ABD4
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z69SFJ
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4Z7
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4X5
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A5YG
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6AB8R
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6AALM
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4N0
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A51S
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z69ST5
  Inquiry Data: SEAGATE ST4000NM0023 GS0FZ1Z6A4V1

  [root@test_raid ~]# megacli -PDList -aALL | grep WWN
  WWN: 5000C50062A960D0
  WWN: 5000C50062A98C78
  WWN: 5000C50062A9AF54
  WWN: 5000C50062A98F30
  WWN: 5000C50062A993AC
  WWN: 5000C50062A93EA4
  WWN: 5000C50062A9998C
  WWN: 5000C50062A9CB4C
  WWN: 5000C50062A9B52C
  WWN: 5000C50062A98CB0
  WWN: 5000C50062A99CF0
  WWN: 5000C50062A99990
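The two listings above are easier to use when joined into a slot-to-WWN table. A sketch that parses PDList-style output, with a short embedded sample standing in for the real command:

```shell
# Emit "slot WWN" pairs by remembering the last Slot Number seen.
sample='Slot Number: 0
WWN: 5000C50062A960D0
Slot Number: 1
WWN: 5000C50062A98C78'
table=$(printf '%s\n' "$sample" | awk '
  /Slot Number:/ {slot=$NF}
  /WWN:/         {print slot, $NF}')
echo "$table"
```

Extending the same awk with an `/Inquiry Data:/` rule would also attach the drive serial to each row.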




Check the enclosure device ID (note: this ID is used to address disks in commands)

  [root@test_raid ~]# megacli -PDlist -aALL | grep "ID" | uniq
  Enclosure Device ID: 32



List the current RAID groups and the disks in each group

  [root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number"
  Number of DISK GROUPS: 1
  DISK GROUP: 0
  Slot Number: 0
  Slot Number: 1


Query disk slot numbers and detect failed disks (note: here the sixth disk, slot 5, has a problem)

  [root@test_raid ~]# megacli -PDList -aALL | grep -E "Drive\:\ \ Not\ Supported|Slo"
  Slot Number: 0
  Slot Number: 1
  Slot Number: 2
  Slot Number: 3
  Slot Number: 4
  Slot Number: 5
  Drive: Not Supported
  Slot Number: 6
  Slot Number: 7
  Slot Number: 8
  Slot Number: 9
  Slot Number: 10
  Slot Number: 11
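The grep above works because the "Drive: Not Supported" line appears right after the failing slot's "Slot Number" line. A sketch that turns this into an explicit failed-slot report (sample input stands in for the real command):

```shell
# Print the slot number that precedes each "Drive: Not Supported" line.
sample='Slot Number: 4
Slot Number: 5
Drive: Not Supported
Slot Number: 6'
bad=$(printf '%s\n' "$sample" | awk '
  /Slot Number:/  {slot=$NF}
  /Not Supported/ {print slot}')
echo "failed slot(s): $bad"
```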


Foreign configuration management


Before creating a RAID, check whether any foreign configuration exists and clear it if so (a "foreign" disk is a newly added disk that still carries RAID metadata from a previous configuration and needs to be initialized).

  [root@test_raid ~]# megacli -PDlist -aALL | grep "Foreign State"
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None (Foreign)  <- a disk with a foreign configuration is flagged here (this entry corresponds to the Slot Number: 5 disk from the previous command)
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None
  Foreign State: None


Mark the disk flagged as Foreign above as Unconfigured(good)

  [root@test_raid ~]# megacli -PDMakeGood -PhysDrv [32:5] -a0
  Adapter: 0: Failed to change PD state at EnclId-32 SlotId-5.  <- fails here because the disk is not actually in a foreign state
  Exit Code: 0x01


Clear the foreign configuration

  [root@test_raid ~]# megacli -CfgForeign -Scan -a0
  There is no foreign configuration on controller 0.
  Exit Code: 0x00



RAID 0 management

Create a RAID 0 (3 disks)

  [root@test_raid ~]# megacli -CfgLdAdd -r0 [32:2,32:3,32:4] WB Direct -a0
  Adapter 0: Created VD 1
  Adapter 0: Configured the Adapter!!
  Exit Code: 0x00




Check RAID groups and their member disks

  [root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number|RAID\ Level|Target"
  Number of DISK GROUPS: 2
  DISK GROUP: 0
  Virtual Drive: 0 (Target Id: 0)  <- virtual drive ID, used when deleting
  RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0  <- Primary-1 means RAID 1
  Slot Number: 0
  Slot Number: 1
  DISK GROUP: 1
  Virtual Drive: 1 (Target Id: 1)
  RAID Level: Primary-0, Secondary-0, RAID Level Qualifier-0  <- Primary-0 means RAID 0
  Slot Number: 2
  Slot Number: 3
  Slot Number: 4
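The Primary-N encoding above can be decoded mechanically (Primary-0 is RAID 0, Primary-1 is RAID 1, Primary-5 is RAID 5). A sketch that extracts the number from a RAID Level line, with the sample line standing in for real output:

```shell
# Map the "Primary-N" field of a RAID Level line to a plain name.
sample='RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0'
level=$(printf '%s\n' "$sample" | awk -F'Primary-' '{split($2, a, ","); print "RAID " a[1]}')
echo "$level"
```

Nested levels (e.g. RAID 10) additionally encode Secondary-0 with a Span Depth greater than 1, so a full decoder would also inspect those fields.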


Check whether the disks are still being built

  [root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:3,32:2,32:4] -aALL
  Device(Encl-32 Slot-3) is not in rebuild process
  Device(Encl-32 Slot-2) is not in rebuild process
  Device(Encl-32 Slot-4) is not in rebuild process



Delete a RAID

  [root@test_raid ~]# megacli -CfgLdDel -L1 -a0
  Virtual Disk is associate with Cache Cade. Please Use force option to delete  <- the force option is required
  [root@test_raid ~]# megacli -CfgLdDel -L1 -force -a0
  Adapter 0: Deleted Virtual Drive-1(target id-1)
  Exit Code: 0x00


RAID 1 management

Create a RAID 1 from two disks

  [root@test_raid ~]# megacli -CfgLdAdd -r1 [32:5,32:6] WB Direct -a0
  Adapter 0: Created VD 2
  Adapter 0: Configured the Adapter!!
  Exit Code: 0x00




Verification

  [root@test_raid ~]# megacli -cfgdsply -aALL | grep -E "DISK\ GROUP|Slot\ Number|RAID\ Level|Target"
  Number of DISK GROUPS: 3
  DISK GROUP: 0
  Virtual Drive: 0 (Target Id: 0)
  RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
  Slot Number: 0
  Slot Number: 1
  DISK GROUP: 1
  Virtual Drive: 1 (Target Id: 1)
  RAID Level: Primary-0, Secondary-0, RAID Level Qualifier-0
  Slot Number: 2
  Slot Number: 3
  Slot Number: 4
  DISK GROUP: 2
  Virtual Drive: 2 (Target Id: 2)
  RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
  Slot Number: 5
  Slot Number: 6



Deletion works the same as above:

[root@test_raid ~]# megacli -CfgLdDel -L2 -force -a0


RAID 5 management


Create a RAID 5 from three disks with one hot spare

  [root@test_raid ~]# megacli -CfgLdAdd -r5 [32:7,32:8,32:9] WB Direct -Hsp[32:10] -a0
  Adapter 0: Created VD 3
  Adapter: 0: Set Physical Drive at EnclId-32 SlotId-10 as Hot Spare Success.
  Adapter 0: Configured the Adapter!!
  Exit Code: 0x00



RAID creation completes immediately; right after creation the new device node is visible to the OS

  [root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:7,32:8,32:9] -aALL
  Device(Encl-32 Slot-7) is not in rebuild process
  Device(Encl-32 Slot-8) is not in rebuild process
  Device(Encl-32 Slot-9) is not in rebuild process


Check the size of the new RAID 5 volume

  [root@test_raid ~]# fdisk -l /dev/sdd
  Disk /dev/sdd: 8000.5 GB, 8000450330624 bytes
  255 heads, 63 sectors/track, 972666 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000



Query hot spares

  [root@test_raid ~]# megacli -PDList -aALL | grep -E "DISK\ GROUP|Slot\ Number|postion:|Firmware\ state:"
  Slot Number: 0  <- slot number
  Drive's postion: DiskGroup: 0, Span: 0, Arm: 0  <- DiskGroup shows which RAID group the disk belongs to
  Firmware state: Online, Spun Up  <- Online means the disk is active in an array
  Slot Number: 1
  Drive's postion: DiskGroup: 0, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 2
  Drive's postion: DiskGroup: 1, Span: 0, Arm: 0
  Firmware state: Online, Spun Up
  Slot Number: 3
  Drive's postion: DiskGroup: 1, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 4
  Drive's postion: DiskGroup: 1, Span: 0, Arm: 2
  Firmware state: Online, Spun Up
  Slot Number: 5
  Drive's postion: DiskGroup: 2, Span: 0, Arm: 0
  Firmware state: Online, Spun Up
  Slot Number: 6
  Drive's postion: DiskGroup: 2, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 7
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 0
  Firmware state: Online, Spun Up
  Slot Number: 8
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 9
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 2
  Firmware state: Online, Spun Up
  Slot Number: 10
  Firmware state: Hotspare, Spun Up  <- Hotspare marks the hot spare
  Slot Number: 11
  Firmware state: Unconfigured(good), Spun Up
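The firmware states in a listing like this (Online, Hotspare, Unconfigured(good), Rebuild) summarize the whole array's health at a glance. A sketch that tallies them; the sample lines stand in for real PDList output:

```shell
# Tally firmware states from PDList-style output. The state name is
# the part of the field before the first comma (e.g. "Online").
sample='Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
Firmware state: Hotspare, Spun Up
Firmware state: Unconfigured(good), Spun Up'
summary=$(printf '%s\n' "$sample" | awk -F': ' '
  /Firmware state/ {split($2, a, ","); count[a[1]]++}
  END {for (s in count) print s, count[s]}' | sort)
echo "$summary"
```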


RAID 5 expansion [fails here]

  [root@test_raid ~]# megacli -LDRecon -Start -r5 -Add -PhysDrv[32:11] -L3 -a0
  Failed to Start Reconstruction of Virtual Drive.
  FW error description:
  The requested virtual drive operation cannot be performed because consistency check is in progress.
  Exit Code: 0x17


Failure simulation


Simulate a failed disk; when a failure occurs, the disk should be taken OFFLINE

  [root@test_raid ~]# megacli -PDOffline -PhysDrv [32:9] -a0
  Adapter: 0: EnclId-32 SlotId-9 state changed to OffLine.
  Exit Code: 0x00



After disk 9 goes OFFLINE, the hot spare starts rebuilding automatically

  [root@test_raid ~]# megacli -PDList -aALL | grep -E "DISK\ GROUP|Slot\ Number|postion:|Firmware\ state:"
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 0
  Firmware state: Online, Spun Up
  Slot Number: 8
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 9
  Firmware state: Unconfigured(good), Spun Up  <- the state changes automatically after the offline operation
  Slot Number: 10
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 2
  Firmware state: Rebuild  <- the rebuild starts automatically


Query rebuild progress

  [root@test_raid ~]# megacli -PDRbld -ProgDsply -PhysDrv [32:10] -aALL
  Rebuild progress of physical drives...
  Enclosure:Slot    Percent Complete    Time Elps
  032:10    ***********************00%***********************    00:03:44




Put disk 9 back into service as a hot spare

  [root@test_raid ~]# megacli -PDHSP -Set -Dedicated -Array3 -physdrv[32:9] -a0
  Adapter: 0: Set Physical Drive at EnclId-32 SlotId-9 as Hot Spare Success.
  Exit Code: 0x00

Check the state

  Firmware state: Online, Spun Up
  Slot Number: 7
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 0
  Firmware state: Online, Spun Up
  Slot Number: 8
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 1
  Firmware state: Online, Spun Up
  Slot Number: 9
  Firmware state: Hotspare, Spun Up
  Slot Number: 10
  Drive's postion: DiskGroup: 3, Span: 0, Arm: 2
  Firmware state: Rebuild



Blink the locator LED on a failed disk

  [root@test_raid ~]# megacli -PdLocate -start -physdrv[32:11] -a0
  Adapter: 0: Device at EnclId-32 SlotId-11 -- PD Locate Start Command was successfully sent to Firmware
  Exit Code: 0x00



Stop blinking

  [root@test_raid ~]# megacli -PdLocate -stop -physdrv[32:11] -a0
  Adapter: 0: Device at EnclId-32 SlotId-11 -- PD Locate Stop Command was successfully sent to Firmware
  Exit Code: 0x00


Boot management


Set the boot virtual drive

megacli -AdpBootDrive -set -L0 -a0 

