Every panic the kernel prints includes a "Code:" section (the "Code:" line at the end of the log below).
The following is an example of a protection fault in a loadable module
processed by klogd:
---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU: 0
Aug 29 09:51:01 blizard kernel: EIP: 0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc ebx: 003a6f80 ecx: 001be77b edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000 edi: bffffdb3 ebp: 00589f90 esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018 es: 0018 fs: 002b gs: 002b ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel: 00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel: bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------
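The "Oops: 0002" line is the x86 page-fault error code. A small sketch of reading its low bits (bit 0: page present, bit 1: write access, bit 2: user mode), which for 0002 means a write to a not-present page from kernel mode:

```shell
# Decode the x86 page-fault error code from the "Oops: 0002" line.
# bit 0: 0 = page not present, 1 = protection violation
# bit 1: 0 = read access,      1 = write access
# bit 2: 0 = kernel mode,      1 = user mode
err=0x0002
[ $((err & 1)) -ne 0 ] && echo "protection violation" || echo "page not present"
[ $((err & 2)) -ne 0 ] && echo "write access"         || echo "read access"
[ $((err & 4)) -ne 0 ] && echo "user mode"            || echo "kernel mode"
```

This matches the log above: the faulting eax (315e97cc) is an unmapped address being written to by the kernel.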
To disassemble those machine-code bytes back into assembly:
1. Save the bytes to a text file, e.g. /home/decode.txt in the example below.
2. Run the decodecode script from the kernel source (path: linux/kernel/scripts) on that file, as shown below. Note the <8b> in the Code line: the byte wrapped in <> marks the instruction on which the panic occurred.
fred@fred:~/linux/kernel/scripts$ ./decodecode </home/decode.txt
Code: f9 0f 8d f9 00 00 00 8d 42 0c e8 dd 26 11 c7 a1 60 ea 2b f9 8b 50 08 a1 64 ea 2b f9 8d 34 82 8b 1e 85 db 74 6d 8b 15 60 ea 2b f9 <8b> 43 04 39 42 54 7e 04 40 89 42 54 8b 43 04 3b 05 00 f6 52 c0
All code
========
0: f9 stc
1: 0f 8d f9 00 00 00 jge 0x100
7: 8d 42 0c lea 0xc(%rdx),%eax
a: e8 dd 26 11 c7 callq 0xffffffffc71126ec
f: a1 60 ea 2b f9 8b 50 movabs 0xa108508bf92bea60,%eax
16: 08 a1
18: 64 fs
19: ea (bad)
1a: 2b f9 sub %ecx,%edi
1c: 8d 34 82 lea (%rdx,%rax,4),%esi
1f: 8b 1e mov (%rsi),%ebx
21: 85 db test %ebx,%ebx
23: 74 6d je 0x92
25: 8b 15 60 ea 2b f9 mov -0x6d415a0(%rip),%edx # 0xfffffffff92bea8b
2b:* 8b 43 04 mov 0x4(%rbx),%eax <-- trapping instruction
2e: 39 42 54 cmp %eax,0x54(%rdx)
31: 7e 04 jle 0x37
33: 40 89 42 54 rex mov %eax,0x54(%rdx)
37: 8b 43 04 mov 0x4(%rbx),%eax
3a: 3b 05 00 f6 52 c0 cmp -0x3fad0a00(%rip),%eax # 0xffffffffc052f640
Code starting with the faulting instruction
===========================================
0: 8b 43 04 mov 0x4(%rbx),%eax
3: 39 42 54 cmp %eax,0x54(%rdx)
6: 7e 04 jle 0xc
8: 40 89 42 54 rex mov %eax,0x54(%rdx)
c: 8b 43 04 mov 0x4(%rbx),%eax
f: 3b 05 00 f6 52 c0 cmp -0x3fad0a00(%rip),%eax # 0xffffffffc052f615
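Under the hood, decodecode does little more than write the "Code:" bytes into a raw binary and disassemble it with objdump; the real script also handles the <> marker and per-architecture flags. A minimal sketch, using the six bytes starting at the trapping instruction (binutils assumed to be installed):

```shell
# Write the bytes from the trapping instruction onward into a raw binary...
printf '\x8b\x43\x04\x39\x42\x54' > /tmp/oops.bin
# ...and disassemble the file as flat 32-bit code, roughly what
# scripts/decodecode does for an i386 oops.
objdump -D -b binary -m i386 /tmp/oops.bin
```

In 32-bit mode the first instruction decodes as a mov of 0x4(%ebx) into %eax, the 32-bit reading of the mov 0x4(%rbx),%eax shown in the listing above.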
The text below is from linux/kernel/Documentation/oops-tracing.txt:
---------------------------------
How to track down an Oops.. [originally a mail to linux-kernel]
The main trick is having 5 years of experience with those pesky oops
messages ;-)
Actually, there are things you can do that make this easier. I have two
separate approaches:
gdb /usr/src/linux/vmlinux
gdb> disassemble <offending_function>
That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function and the offset in the function that it
happened in).
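For the gdb route, the function name and offset come straight out of the ksymoops-style EIP line. A hedged sed sketch, assuming the module:function+offset/length bracket format shown in the example Oops above:

```shell
# Pull the function name and decimal offset out of the Oops EIP line,
# e.g. "EIP: 0010:[oops:_oops+16/3868]" -> function _oops, offset 16.
eip='EIP: 0010:[oops:_oops+16/3868]'
func=$(echo "$eip" | sed 's/.*\[\(.*\):\(.*\)+.*/\2/')
off=$(echo "$eip" | sed 's/.*+\([0-9]*\)\/.*/\1/')
echo "disassemble $func"      # the command to feed to gdb vmlinux
echo "offset into $func: $off bytes"
```

The printed offset is where in the disassembly of the function to start reading.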
Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.
The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:
char str[] = "\xXX\xXX\xXX...";
main(){}
and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
to write a program to automate this all).
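The space-to-"\x" replace Linus describes can itself be a one-liner; a sketch with sed:

```shell
# Convert space-separated "Code:" bytes into the \xNN string literal
# needed for the char str[] trick above.
echo 'c7 00 05 00 00 00 eb 08' | sed -e 's/^/\\x/' -e 's/ /\\x/g'
# → \xc7\x00\x05\x00\x00\x00\xeb\x08
```

Paste the result into the str[] initializer, compile with gcc -g, and "disassemble str" in gdb.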
Alternatively, you can use the shell script in scripts/decodecode.
Its usage is: decodecode < oops.txt
The hex bytes that follow "Code:" may (in some architectures) have a series
of bytes that precede the current instruction pointer as well as bytes at and
following the current instruction pointer. In some cases, one instruction
byte or word is surrounded by <> or (), as in "<86>" or "(f00d)". These
<> or () markings indicate the current instruction pointer. Example from
i386, split into multiple lines for readability:
Code: f9 0f 8d f9 00 00 00 8d 42 0c e8 dd 26 11 c7 a1 60 ea 2b f9 8b 50 08 a1
64 ea 2b f9 8d 34 82 8b 1e 85 db 74 6d 8b 15 60 ea 2b f9 <8b> 43 04 39 42 54
7e 04 40 89 42 54 8b 43 04 3b 05 00 f6 52 c0
Finally, if you want to see where the code comes from, you can do
cd /usr/src/linux
make fs/buffer.s # or whatever file the bug happened in
and then you get a better idea of what happens than with the gdb
disassembly.
Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).
Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match. Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..
Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right. Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)
_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from. That's when
I get worried that I've been doing this for too long ;-)
Linus