Exception tables

Platform: Linux 3.10.44

Translation:

  Kernel level exception handling in Linux

  Original author: Joerg Pommnitz <joerg@raleigh.ibm.com>

When a process traps into the kernel, the kernel often needs to access memory in that process's user space. Because such an address comes from an untrusted user-mode program, the kernel verifies it before use in order to protect itself.

In older versions of Linux this was done with the

int verify_area(int type, const void * addr, unsigned long size)

function (which has since been replaced by access_ok()). This function verified that the memory area of 'size' bytes starting at 'addr' was accessible for the operation given in 'type' (read or write).

To do this, verify_read had to look up the virtual memory area containing the address and check whether the operation was allowed. In the normal case (a correctly written application) the test passes; it only fails for buggy programs. For correctly working systems with high performance requirements, however, this test wastes a considerable amount of time.

To overcome this, Linus decided to let the hardware's virtual memory management perform the test instead.

How does this work?

When the kernel tries to access an inaccessible area, the CPU generates a page fault exception and runs the page fault handler:

void do_page_fault(struct pt_regs *regs, unsigned long error_code)

in arch/x86/mm/fault.c. The parameters on the stack are set up by the low-level assembly glue in arch/x86/kernel/entry_32.S; regs points to the registers saved on the stack, and error_code contains a reason code for the exception.

do_page_fault first obtains the faulting address from the CPU control register CR2. If the address lies within a valid virtual memory area, the fault is an ordinary one: the page may not be swapped in yet, may be write protected, or similar.

However, we are interested in the other case: the address is invalid. In that case, the kernel jumps to the bad_area label and continues from there.

It then uses the address of the instruction that caused the exception (i.e. regs->eip) to find an address where execution can continue. If the search succeeds, the fault handler modifies the return address (again regs->eip) and returns; execution continues at fixup.

Where does fixup point to?

Since we jump to the contents of fixup, fixup obviously points to executable code. This code is generated by the user access macros; take the get_user macro defined in arch/x86/include/asm/uaccess.h as an example. Because the macro is hard to follow directly, let's look at the preprocessed and compiled code instead. I picked the get_user call in drivers/char/sysrq.c for a detailed examination.

The original code in sysrq.c, line 587:

        get_user(c, buf);

The preprocessor output (reformatted to be somewhat readable):

(

  {

    long __gu_err = - 14 , __gu_val = 0;

    const __typeof__(*( (  buf ) )) *__gu_addr = ((buf));

    if (((((0 + current_set[0])->tss.segment) == 0x18 )  ||

       (((sizeof(*(buf))) <= 0xC0000000UL) &&

       ((unsigned long)(__gu_addr ) <= 0xC0000000UL - (sizeof(*(buf)))))))

      do {

        __gu_err  = 0;

        switch ((sizeof(*(buf)))) {

          case 1:

            __asm__ __volatile__(

              "1:      mov" "b" " %2,%" "b" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "b" " %" "b" "1,%" "b" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"

              "        .long 1b,3b\n"

              ".text"        : "=r"(__gu_err), "=q" (__gu_val): "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(  __gu_err  )) ;

              break;

          case 2:

            __asm__ __volatile__(

              "1:      mov" "w" " %2,%" "w" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "w" " %" "w" "1,%" "w" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"

              "        .long 1b,3b\n"

".text" : "=r"(__gu_err), "=r" (__gu_val) : "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(  __gu_err  ));

              break;

          case 4:

            __asm__ __volatile__(

              "1:      mov" "l" " %2,%" "" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "l" " %" "" "1,%" "" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"        "        .long 1b,3b\n"

              ".text"        : "=r"(__gu_err), "=r" (__gu_val) : "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(__gu_err));

              break;

          default:

            (__gu_val) = __get_user_bad();

        }

      } while (0) ;

    ((c)) = (__typeof__(*((buf))))__gu_val;

    __gu_err;

  }

);

WOW! Black GCC/assembly magic. This is hard to follow, so let's see what code gcc actually generates:

>         xorl %edx,%edx

>         movl current_set,%eax

>         cmpl $24,788(%eax)

>         je .L1424

>         cmpl $-1073741825,64(%esp)

>         ja .L1423

> .L1424:

>         movl %edx,%eax

>         movl 64(%esp),%ebx

> #APP

> 1:      movb (%ebx),%dl                /* this is the actual user access */

> 2:

> .section .fixup,"ax"

> 3:      movl $-14,%eax

>         xorb %dl,%dl

>         jmp 2b

> .section __ex_table,"a"

>         .align 4

>         .long 1b,3b

> .text

> #NO_APP

> .L1423:

>         movzbl %dl,%esi

The optimizer does a good job and gives us something we can actually understand. Can we? The actual user access is quite obvious: thanks to the unified address space, the kernel can access user memory directly. But what is all that .section stuff for?

To understand this we have to look at the final kernel:

> objdump --section-headers vmlinux

>

> vmlinux:     file format elf32-i386

>

> Sections:

> Idx Name          Size      VMA       LMA       File off  Algn

>   0 .text         00098f40  c0100000  c0100000  00001000  2**4

>                   CONTENTS, ALLOC, LOAD, READONLY, CODE

>   1 .fixup        000016bc  c0198f40  c0198f40  00099f40  2**0

>                   CONTENTS, ALLOC, LOAD, READONLY, CODE

>   2 .rodata       0000f127  c019a5fc  c019a5fc  0009b5fc  2**2

>                   CONTENTS, ALLOC, LOAD, READONLY, DATA

>   3 __ex_table    000015c0  c01a9724  c01a9724  000aa724  2**2

>                   CONTENTS, ALLOC, LOAD, READONLY, DATA

>   4 .data         0000ea58  c01abcf0  c01abcf0  000abcf0  2**4

>                   CONTENTS, ALLOC, LOAD, DATA

>   5 .bss          00018e21  c01ba748  c01ba748  000ba748  2**2

>                   ALLOC

>   6 .comment      00000ec4  00000000  00000000  000ba748  2**0

>                   CONTENTS, READONLY

>   7 .note         00001068  00000ec4  00000ec4  000bb60c  2**0

>                   CONTENTS, READONLY

There are obviously two non-standard ELF sections here. But first, let's find out what happened to the .section-bracketed code in the final kernel executable:

> objdump --disassemble --section=.text vmlinux

>

> c017e785 <do_con_write+c1> xorl   %edx,%edx

> c017e787 <do_con_write+c3> movl   0xc01c7bec,%eax

> c017e78c <do_con_write+c8> cmpl   $0x18,0x314(%eax)

> c017e793 <do_con_write+cf> je     c017e79f <do_con_write+db>

> c017e795 <do_con_write+d1> cmpl   $0xbfffffff,0x40(%esp,1)

> c017e79d <do_con_write+d9> ja     c017e7a7 <do_con_write+e3>

> c017e79f <do_con_write+db> movl   %edx,%eax

> c017e7a1 <do_con_write+dd> movl   0x40(%esp,1),%ebx

> c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

> c017e7a7 <do_con_write+e3> movzbl %dl,%esi

The whole user memory access is reduced to 10 x86 machine instructions. The instructions bracketed by the .section directives are no longer in the normal execution path; they are located in a different section of the executable:

> objdump --disassemble --section=.fixup vmlinux

>

> c0199ff5 <.fixup+10b5> movl   $0xfffffff2,%eax

> c0199ffa <.fixup+10ba> xorb   %dl,%dl

> c0199ffc <.fixup+10bc> jmp    c017e7a7 <do_con_write+e3>

And finally, the __ex_table section:

> objdump --full-contents --section=__ex_table vmlinux

>

>  c01aa7c4 93c017c0 e09f19c0 97c017c0 99c017c0  ................

>  c01aa7d4 f6c217c0 e99f19c0 a5e717c0 f59f19c0  ................

>  c01aa7e4 080a18c0 01a019c0 0a0a18c0 04a019c0  ................

or in human-readable byte order:

>  c01aa7c4 c017c093 c0199fe0 c017c097 c017c099  ................

>  c01aa7d4 c017c2f6 c0199fe9 c017e7a5 c0199ff5  ................

                               ^^^^^^^^^^^^^^^^^

                               this is the interesting part!

>  c01aa7e4 c0180a08 c019a001 c0180a0a c019a004  ................

What happened? The assembly directives

.section .fixup,"ax"

.section __ex_table,"a"

tell the assembler to place the code that follows into the named ELF sections, so the instructions

3:      movl $-14,%eax

        xorb %dl,%dl

        jmp 2b

ended up in the .fixup section, and the addresses

        .long 1b,3b

ended up in the __ex_table section. 1b and 3b are local labels; 1b ("next label 1 backward") is the address of the instruction that might fault. In our example, label 1 is at c017e7a5:

the original assembly code: > 1:      movb (%ebx),%dl

and linked in vmlinux     : > c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

Local label 3 (backward again) is the address of the code that handles the fault; in this example it is c0199ff5:

the original assembly code: > 3:      movl $-14,%eax

and linked in vmlinux     : > c0199ff5 <.fixup+10b5> movl   $0xfffffff2,%eax

The assembly code

> .section __ex_table,"a"

>         .align 4

>         .long 1b,3b

becomes the value pair

>  c01aa7d4 c017c2f6 c0199fe9 c017e7a5 c0199ff5  ................

                               ^this is ^this is

                               1b       3b

c017e7a5,c0199ff5 in the kernel's exception table.

So what actually happens when a fault on an invalid address occurs in kernel mode?

1.) access to an invalid address:

> c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

2.) the MMU generates an exception

3.) the CPU calls do_page_fault

4.) do_page_fault calls search_exception_table (regs->eip == c017e7a5);

5.) search_exception_table finds c017e7a5 in the exception table

    (i.e. in the contents of the ELF section __ex_table)

    and returns the address of the associated fixup code, c0199ff5.

6.) do_page_fault modifies its own return address to point to the fixup code and returns.

7.) execution continues in the fixup code.

8.) 8a) EAX becomes -EFAULT (== -14)

    8b) DL  becomes zero (the value we "read" from user space; nothing was actually read)

    8c) execution continues at local label 2 (the address of the instruction immediately after the faulting access).

Steps 8a to 8c in a certain way emulate the faulting instruction.

That's mostly it. Looking at our example, you might ask why we set EAX to -EFAULT. The get_user macro returns 0 on success and -EFAULT on failure; our original code never checks this return value, but GCC uses the EAX register to carry it.

NOTE:

Because the exception table must be kept sorted, only use exceptions for code in the .text section. Placing them in any other section would leave the exception table incorrectly sorted, and exception handling would fail.

Original:

     Kernel level exception handling in Linux

  Commentary by Joerg Pommnitz <joerg@raleigh.ibm.com>

When a process runs in kernel mode, it often has to access user

mode memory whose address has been passed by an untrusted program.

To protect itself the kernel has to verify this address.

In older versions of Linux this was done with the

int verify_area(int type, const void * addr, unsigned long size)

function (which has since been replaced by access_ok()).

This function verified that the memory area starting at address

'addr' and of size 'size' was accessible for the operation specified

in type (read or write). To do this, verify_read had to look up the

virtual memory area (vma) that contained the address addr. In the

normal case (correctly working program), this test was successful.

It only failed for a few buggy programs. In some kernel profiling

tests, this normally unneeded verification used up a considerable

amount of time.

To overcome this situation, Linus decided to let the virtual memory

hardware present in every Linux-capable CPU handle this test.

How does this work?

Whenever the kernel tries to access an address that is currently not

accessible, the CPU generates a page fault exception and calls the

page fault handler

void do_page_fault(struct pt_regs *regs, unsigned long error_code)

in arch/x86/mm/fault.c. The parameters on the stack are set up by

the low level assembly glue in arch/x86/kernel/entry_32.S. The parameter

regs is a pointer to the saved registers on the stack, error_code

contains a reason code for the exception.

do_page_fault first obtains the inaccessible address from the CPU

control register CR2. If the address is within the virtual address

space of the process, the fault probably occurred because the page

was not swapped in, was write protected, or something similar. However,

we are interested in the other case: the address is not valid, there

is no vma that contains this address. In this case, the kernel jumps

to the bad_area label.

There it uses the address of the instruction that caused the exception

(i.e. regs->eip) to find an address where the execution can continue

(fixup). If this search is successful, the fault handler modifies the

return address (again regs->eip) and returns. The execution will

continue at the address in fixup.

Where does fixup point to?

Since we jump to the contents of fixup, fixup obviously points

to executable code. This code is hidden inside the user access macros.

I have picked the get_user macro defined in arch/x86/include/asm/uaccess.h

as an example. The definition is somewhat hard to follow, so let's peek at

the code generated by the preprocessor and the compiler. I selected

the get_user call in drivers/char/sysrq.c for a detailed examination.

The original code in sysrq.c line 587:

        get_user(c, buf);

The preprocessor output (edited to become somewhat readable):

(

  {

    long __gu_err = - 14 , __gu_val = 0;

    const __typeof__(*( (  buf ) )) *__gu_addr = ((buf));

    if (((((0 + current_set[0])->tss.segment) == 0x18 )  ||

       (((sizeof(*(buf))) <= 0xC0000000UL) &&

       ((unsigned long)(__gu_addr ) <= 0xC0000000UL - (sizeof(*(buf)))))))

      do {

        __gu_err  = 0;

        switch ((sizeof(*(buf)))) {

          case 1:

            __asm__ __volatile__(

              "1:      mov" "b" " %2,%" "b" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "b" " %" "b" "1,%" "b" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"

              "        .long 1b,3b\n"

              ".text"        : "=r"(__gu_err), "=q" (__gu_val): "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(  __gu_err  )) ;

              break;

          case 2:

            __asm__ __volatile__(

              "1:      mov" "w" " %2,%" "w" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "w" " %" "w" "1,%" "w" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"

              "        .long 1b,3b\n"

              ".text"        : "=r"(__gu_err), "=r" (__gu_val) : "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(  __gu_err  ));

              break;

          case 4:

            __asm__ __volatile__(

              "1:      mov" "l" " %2,%" "" "1\n"

              "2:\n"

              ".section .fixup,\"ax\"\n"

              "3:      movl %3,%0\n"

              "        xor" "l" " %" "" "1,%" "" "1\n"

              "        jmp 2b\n"

              ".section __ex_table,\"a\"\n"

              "        .align 4\n"        "        .long 1b,3b\n"

              ".text"        : "=r"(__gu_err), "=r" (__gu_val) : "m"((*(struct __large_struct *)

                            (   __gu_addr   )) ), "i"(- 14 ), "0"(__gu_err));

              break;

          default:

            (__gu_val) = __get_user_bad();

        }

      } while (0) ;

    ((c)) = (__typeof__(*((buf))))__gu_val;

    __gu_err;

  }

);

WOW! Black GCC/assembly magic. This is impossible to follow, so let's

see what code gcc generates:

>         xorl %edx,%edx

>         movl current_set,%eax

>         cmpl $24,788(%eax)

>         je .L1424

>         cmpl $-1073741825,64(%esp)

>         ja .L1423

> .L1424:

>         movl %edx,%eax

>         movl 64(%esp),%ebx

> #APP

> 1:      movb (%ebx),%dl                /* this is the actual user access */

> 2:

> .section .fixup,"ax"

> 3:      movl $-14,%eax

>         xorb %dl,%dl

>         jmp 2b

> .section __ex_table,"a"

>         .align 4

>         .long 1b,3b

> .text

> #NO_APP

> .L1423:

>         movzbl %dl,%esi

The optimizer does a good job and gives us something we can actually

understand. Can we? The actual user access is quite obvious. Thanks

to the unified address space we can just access the address in user

memory. But what does the .section stuff do?????

To understand this we have to look at the final kernel:

> objdump --section-headers vmlinux

>

> vmlinux:     file format elf32-i386

>

> Sections:

> Idx Name          Size      VMA       LMA       File off  Algn

>   0 .text         00098f40  c0100000  c0100000  00001000  2**4

>                   CONTENTS, ALLOC, LOAD, READONLY, CODE

>   1 .fixup        000016bc  c0198f40  c0198f40  00099f40  2**0

>                   CONTENTS, ALLOC, LOAD, READONLY, CODE

>   2 .rodata       0000f127  c019a5fc  c019a5fc  0009b5fc  2**2

>                   CONTENTS, ALLOC, LOAD, READONLY, DATA

>   3 __ex_table    000015c0  c01a9724  c01a9724  000aa724  2**2

>                   CONTENTS, ALLOC, LOAD, READONLY, DATA

>   4 .data         0000ea58  c01abcf0  c01abcf0  000abcf0  2**4

>                   CONTENTS, ALLOC, LOAD, DATA

>   5 .bss          00018e21  c01ba748  c01ba748  000ba748  2**2

>                   ALLOC

>   6 .comment      00000ec4  00000000  00000000  000ba748  2**0

>                   CONTENTS, READONLY

>   7 .note         00001068  00000ec4  00000ec4  000bb60c  2**0

>                   CONTENTS, READONLY

There are obviously 2 non standard ELF sections in the generated object

file. But first we want to find out what happened to our code in the

final kernel executable:

> objdump --disassemble --section=.text vmlinux

>

> c017e785 <do_con_write+c1> xorl   %edx,%edx

> c017e787 <do_con_write+c3> movl   0xc01c7bec,%eax

> c017e78c <do_con_write+c8> cmpl   $0x18,0x314(%eax)

> c017e793 <do_con_write+cf> je     c017e79f <do_con_write+db>

> c017e795 <do_con_write+d1> cmpl   $0xbfffffff,0x40(%esp,1)

> c017e79d <do_con_write+d9> ja     c017e7a7 <do_con_write+e3>

> c017e79f <do_con_write+db> movl   %edx,%eax

> c017e7a1 <do_con_write+dd> movl   0x40(%esp,1),%ebx

> c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

> c017e7a7 <do_con_write+e3> movzbl %dl,%esi

The whole user memory access is reduced to 10 x86 machine instructions.

The instructions bracketed in the .section directives are no longer

in the normal execution path. They are located in a different section

of the executable file:

> objdump --disassemble --section=.fixup vmlinux

>

> c0199ff5 <.fixup+10b5> movl   $0xfffffff2,%eax

> c0199ffa <.fixup+10ba> xorb   %dl,%dl

> c0199ffc <.fixup+10bc> jmp    c017e7a7 <do_con_write+e3>

And finally:

> objdump --full-contents --section=__ex_table vmlinux

>

>  c01aa7c4 93c017c0 e09f19c0 97c017c0 99c017c0  ................

>  c01aa7d4 f6c217c0 e99f19c0 a5e717c0 f59f19c0  ................

>  c01aa7e4 080a18c0 01a019c0 0a0a18c0 04a019c0  ................

or in human readable byte order:

>  c01aa7c4 c017c093 c0199fe0 c017c097 c017c099  ................

>  c01aa7d4 c017c2f6 c0199fe9 c017e7a5 c0199ff5  ................

                               ^^^^^^^^^^^^^^^^^

                               this is the interesting part!

>  c01aa7e4 c0180a08 c019a001 c0180a0a c019a004  ................

What happened? The assembly directives

.section .fixup,"ax"

.section __ex_table,"a"

told the assembler to move the following code to the specified

sections in the ELF object file. So the instructions

3:      movl $-14,%eax

        xorb %dl,%dl

        jmp 2b

ended up in the .fixup section of the object file and the addresses

        .long 1b,3b

ended up in the __ex_table section of the object file. 1b and 3b

are local labels. The local label 1b (1b stands for next label 1

backward) is the address of the instruction that might fault, i.e.

in our case the address of the label 1 is c017e7a5:

the original assembly code: > 1:      movb (%ebx),%dl

and linked in vmlinux     : > c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

The local label 3 (backwards again) is the address of the code to handle

the fault, in our case the actual value is c0199ff5:

the original assembly code: > 3:      movl $-14,%eax

and linked in vmlinux     : > c0199ff5 <.fixup+10b5> movl   $0xfffffff2,%eax

The assembly code

> .section __ex_table,"a"

>         .align 4

>         .long 1b,3b

becomes the value pair

>  c01aa7d4 c017c2f6 c0199fe9 c017e7a5 c0199ff5  ................

                               ^this is ^this is

                               1b       3b

c017e7a5,c0199ff5 in the exception table of the kernel.

So, what actually happens if a fault from kernel mode with no suitable

vma occurs?

1.) access to invalid address:

> c017e7a5 <do_con_write+e1> movb   (%ebx),%dl

2.) MMU generates exception

3.) CPU calls do_page_fault

4.) do page fault calls search_exception_table (regs->eip == c017e7a5);

5.) search_exception_table looks up the address c017e7a5 in the

    exception table (i.e. the contents of the ELF section __ex_table)

    and returns the address of the associated fault handler code c0199ff5.

6.) do_page_fault modifies its own return address to point to the fault

    handler code and returns.

7.) execution continues in the fault handling code.

8.) 8a) EAX becomes -EFAULT (== -14)

    8b) DL  becomes zero (the value we "read" from user space)

    8c) execution continues at local label 2 (address of the

        instruction immediately after the faulting user access).

The steps 8a to 8c in a certain way emulate the faulting instruction.

That's it, mostly. If you look at our example, you might ask why

we set EAX to -EFAULT in the exception handler code. Well, the

get_user macro actually returns a value: 0, if the user access was

successful, -EFAULT on failure. Our original code did not test this

return value, however the inline assembly code in get_user tries to

return -EFAULT. GCC selected EAX to return this value.

NOTE:

Due to the way that the exception table is built and needs to be ordered,

only use exceptions for code in the .text section.  Any other section

will cause the exception table to not be sorted correctly, and the

exceptions will fail.
