Options That Control Optimization
These options control various sorts of optimizations.
Without any optimization option, the compiler’s goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if
you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the func-
tion and get exactly the results you would expect from the source code.
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the
program.
The compiler performs optimization based on the knowledge it has of the program. Using the -funit-at-a-time flag will allow the compiler to consider information gained from
later functions in the file when compiling a function. Compiling multiple files at once to a single output file (and using -funit-at-a-time) will allow the compiler to use
information gained from all of the files when compiling each of them.
Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed.
-O
-O1 Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.
With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time.
-O turns on the following optimization flags: -fdefer-pop -fmerge-constants -fthread-jumps -floop-optimize -fif-conversion -fif-conversion2 -fdelayed-branch
-fguess-branch-probability -fcprop-registers
-O also turns on -fomit-frame-pointer on machines where doing so does not interfere with debugging.
-O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or func-
tion inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.
-O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -fforce-mem -foptimize-sibling-calls -fstrength-reduce -fcse-follow-jumps -fcse-skip-blocks -frerun-cse-after-loop -frerun-loop-opt -fgcse -fgcse-lm -fgcse-sm -fgcse-las -fdelete-null-pointer-checks -fexpensive-optimizations -fregmove -fschedule-insns -fschedule-insns2 -fsched-interblock -fsched-spec -fcaller-saves -fpeephole2 -freorder-blocks -freorder-functions -fstrict-aliasing -funit-at-a-time -falign-functions -falign-jumps -falign-loops -falign-labels -fcrossjumping
Please note the warning under -fgcse about invoking -O2 on programs that use computed gotos.
-O3 Optimize yet more. -O3 turns on all optimizations specified by -O2 and also turns on the -finline-functions, -fweb, -frename-registers and -funswitch-loops options.
-O0 Do not optimize. This is the default.
-Os Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also performs further optimizations designed to reduce code size.
-Os disables the following optimization flags: -falign-functions -falign-jumps -falign-loops -falign-labels -freorder-blocks -fprefetch-loop-arrays
If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.
Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo would be -fno-foo. In the table
below, only one of the forms is listed---the one you typically will use. You can figure out the other form by either removing no- or adding it.
The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare
cases when ‘‘fine-tuning’’ of optimizations to be performed is desired.
-fno-default-inline
Do not make member functions inline by default merely because they are defined inside the class scope (C++ only). Otherwise, when you specify -O, member functions
defined inside class scope are compiled inline by default; i.e., you don’t need to add inline in front of the member function name.
-fno-defer-pop
Always pop the arguments to each function call as soon as that function returns. For machines which must pop arguments after a function call, the compiler normally lets
arguments accumulate on the stack for several function calls and pops them all at once.
Disabled at levels -O, -O2, -O3, -Os.
-fforce-mem
Force memory operands to be copied into registers before doing arithmetic on them. This produces better code by making all memory references potential common subexpres-
sions. When they are not common subexpressions, instruction combination should eliminate the separate register-load.
Enabled at levels -O2, -O3, -Os.
-fforce-addr
Force memory address constants to be copied into registers before doing arithmetic on them. This may produce better code just as -fforce-mem may.
-fomit-frame-pointer
Don’t keep the frame pointer in a register for functions that don’t need one. This avoids the instructions to save, set up and restore frame pointers; it also makes an
extra register available in many functions. It also makes debugging impossible on some machines.
On some machines, such as the VAX, this flag has no effect, because the standard calling sequence automatically handles the frame pointer and nothing is saved by pre-
tending it doesn’t exist. The machine-description macro "FRAME_POINTER_REQUIRED" controls whether a target machine supports this flag.
Enabled at levels -O, -O2, -O3, -Os.
-foptimize-sibling-calls
Optimize sibling and tail recursive calls.
Enabled at levels -O2, -O3, -Os.
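For example, in a tail-recursive function such as the following sketch (the name and types are only illustrative), the recursive call is in tail position and can typically be turned into a jump rather than a new stack frame, subject to the target's calling conventions:
static unsigned long
gcd (unsigned long a, unsigned long b)
{
  if (b == 0)
    return a;
  return gcd (b, a % b);  /* tail call: may become a jump back to the top */
}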
-fno-inline
Don’t pay attention to the "inline" keyword. Normally this option is used to keep the compiler from expanding any functions inline. Note that if you are not optimiz-
ing, no functions can be expanded inline.
-finline-functions
Integrate all simple functions into their callers. The compiler heuristically decides which functions are simple enough to be worth integrating in this way.
If all calls to a given function are integrated, and the function is declared "static", then the function is normally not output as assembler code in its own right.
Enabled at level -O3.
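As a rough illustration, a small "static" helper like the one below is a typical candidate; if every call is integrated, no out-of-line copy of it need be emitted (the names are purely illustrative):
static int
square (int x)
{
  return x * x;  /* small enough for the heuristics to integrate */
}
int
sum_of_squares (int a, int b)
{
  return square (a) + square (b);  /* both calls may be expanded inline */
}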
-finline-limit=n
By default, GCC limits the size of functions that can be inlined. This flag allows control of this limit for functions that are explicitly marked as inline (i.e., marked with the inline keyword or defined within the class definition in C++). n is the size of functions that can be inlined, in number of pseudo instructions (not counting parameter handling). The default value of n is 600. Increasing this value can result in more inlined code at the cost of compilation time and memory consumption. Decreasing it usually makes compilation faster, and less code will be inlined (which presumably means slower programs). This option is particularly useful for programs that use inlining heavily, such as those based on recursive templates in C++.
Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of
these parameters as follows:
max-inline-insns-single is set to n/2.
max-inline-insns-auto is set to n/2.
min-inline-insns is set to 130 or n/4, whichever is smaller.
max-inline-insns-rtl is set to n.
See below for documentation of the individual parameters controlling inlining.
Note: in this particular context, a pseudo instruction is an abstract measurement of a function's size; it in no way represents a count of assembly instructions, and as such its exact meaning might change from one release to another.
-fkeep-inline-functions
Even if all calls to a given function are integrated, and the function is declared "static", nevertheless output a separate run-time callable version of the function.
This switch does not affect "extern inline" functions.
-fkeep-static-consts
Emit variables declared "static const" when optimization isn’t turned on, even if the variables aren’t referenced.
GCC enables this option by default. If you want to force the compiler to check if the variable was referenced, regardless of whether or not optimization is turned on,
use the -fno-keep-static-consts option.
-fmerge-constants
Attempt to merge identical constants (string constants and floating point constants) across compilation units.
This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior.
Enabled at levels -O, -O2, -O3, -Os.
-fmerge-all-constants
Attempt to merge identical constants and identical variables.
This option implies -fmerge-constants. In addition, it considers, e.g., even constant-initialized arrays or initialized constant variables with integral or floating point types. Languages like C or C++ require each non-automatic variable to have a distinct location, so using this option will result in non-conforming behavior.
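As an illustrative sketch of that non-conformance, the two identical constant arrays below may be merged into a single object, so the address comparison can yield a result a strictly conforming program would never see (the names are only examples):
static const int a[3] = { 1, 2, 3 };
static const int b[3] = { 1, 2, 3 };  /* identical contents */
int
merged (void)
{
  /* May return 1 when the constants are merged, although C and C++
     require distinct variables to have distinct addresses.  */
  return a == b;
}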
-fnew-ra
Use a graph coloring register allocator. Currently this option is meant only for testing. Users should not specify this option, since it is not yet ready for produc-
tion use.
-fno-branch-count-reg
Do not use ‘‘decrement and branch’’ instructions on a count register, but instead generate a sequence of instructions that decrement a register, compare it against zero,
then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390.
The default is -fbranch-count-reg, enabled when -fstrength-reduce is enabled.
-fno-function-cse
Do not put function addresses in registers; make each instruction that calls a constant function contain the function’s address explicitly.
This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not
used.
The default is -ffunction-cse.
-fno-zero-initialized-in-bss
If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code.
This option turns off this behavior because some programs explicitly rely on variables going to the data section. E.g., so that the resulting executable can find the
beginning of that section and/or make assumptions based on that.
The default is -fzero-initialized-in-bss.
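For example, under the default behavior a definition like the first one below normally goes into BSS, while explicitly non-zero initialized data goes into the data section; with -fno-zero-initialized-in-bss both go into the data section (a sketch; exact placement is target-dependent):
static int table[4096];          /* zero-initialized: normally placed in BSS */
static int flags[4096] = { 1 };  /* non-zero initializer: placed in the data section */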
-fstrength-reduce
Perform the optimizations of loop strength reduction and elimination of iteration variables.
Enabled at levels -O2, -O3, -Os.
-fthread-jumps
Perform optimizations where we check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redi-
rected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false.
Enabled at levels -O, -O2, -O3, -Os.
-fcse-follow-jumps
In common subexpression elimination, scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an
"if" statement with an "else" clause, CSE will follow the jump when the condition tested is false.
Enabled at levels -O2, -O3, -Os.
-fcse-skip-blocks
This is similar to -fcse-follow-jumps, but causes CSE to follow jumps which conditionally skip over blocks. When CSE encounters a simple "if" statement with no else
clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the "if".
Enabled at levels -O2, -O3, -Os.
-frerun-cse-after-loop
Re-run common subexpression elimination after loop optimizations have been performed.
Enabled at levels -O2, -O3, -Os.
-frerun-loop-opt
Run the loop optimizer twice.
Enabled at levels -O2, -O3, -Os.
-fgcse
Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation.
Note: When compiling a program using computed gotos, a GCC extension, you may get better runtime performance if you disable the global common subexpression elimination
pass by adding -fno-gcse to the command line.
Enabled at levels -O2, -O3, -Os.
-fgcse-lm
When -fgcse-lm is enabled, global common subexpression elimination will attempt to move loads which are only killed by stores into themselves. This allows a loop con-
taining a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop.
Enabled by default when gcse is enabled.
-fgcse-sm
When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass will attempt to move stores out of loops. When used in
conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop.
Enabled by default when gcse is enabled.
-fgcse-las
When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial
and full redundancies).
Enabled by default when gcse is enabled.
-floop-optimize
Perform loop optimizations: move constant expressions out of loops, simplify exit test conditions and optionally do strength-reduction and loop unrolling as well.
Enabled at levels -O, -O2, -O3, -Os.
-fcrossjumping
Perform cross-jumping transformation. This transformation unifies equivalent code and saves code size. The resulting code may or may not perform better than without cross-jumping.
Enabled at levels -O, -O2, -O3, -Os.
-fif-conversion
Attempt to transform conditional jumps into branch-less equivalents. This includes use of conditional moves, min, max, set-flags and abs instructions, and some tricks doable by standard arithmetic. The use of conditional execution on chips where it is available is controlled by -fif-conversion2.
Enabled at levels -O, -O2, -O3, -Os.
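A typical candidate is a simple conditional assignment like the sketch below, which may become a conditional move, or a min/max instruction where the target has one (the function is only illustrative):
int
clamp_low (int x, int limit)
{
  if (x < limit)   /* branch may be replaced by a conditional move or "max" */
    x = limit;
  return x;
}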
-fif-conversion2
Use conditional execution (where available) to transform conditional jumps into branch-less equivalents.
Enabled at levels -O, -O2, -O3, -Os.
-fdelete-null-pointer-checks
Use global dataflow analysis to identify and eliminate useless checks for null pointers. The compiler assumes that dereferencing a null pointer would have halted the
program. If a pointer is checked after it has already been dereferenced, it cannot be null.
In some environments, this assumption is not true, and programs can safely dereference null pointers. Use -fno-delete-null-pointer-checks to disable this optimization
for programs which depend on that behavior.
Enabled at levels -O2, -O3, -Os.
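For example, in the following sketch the explicit null check may be deleted, because the pointer has already been dereferenced on every path that reaches it (illustrative code only):
int
first_element (int *p)
{
  int v = *p;   /* after this dereference, p is assumed to be non-null */
  if (p == 0)   /* useless check: may be removed by this optimization  */
    return -1;
  return v;
}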
-fexpensive-optimizations
Perform a number of minor optimizations that are relatively expensive.
Enabled at levels -O2, -O3, -Os.
-foptimize-register-move
-fregmove
Attempt to reassign register numbers in move instructions and as operands of other simple instructions in order to maximize the amount of register tying. This is espe-
cially helpful on machines with two-operand instructions.
Note -fregmove and -foptimize-register-move are the same optimization.
Enabled at levels -O2, -O3, -Os.
-fdelayed-branch
If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions.
Enabled at levels -O, -O2, -O3, -Os.
-fschedule-insns
If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have
slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating point instruction is required.
Enabled at levels -O2, -O3, -Os.
-fschedule-insns2
Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines
with a relatively small number of registers and where memory load instructions take more than one cycle.
Enabled at levels -O2, -O3, -Os.
-fno-sched-interblock
Don’t schedule instructions across basic blocks. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or at -O2
or higher.
-fno-sched-spec
Don’t allow speculative motion of non-load instructions. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or
at -O2 or higher.
-fsched-spec-load
Allow speculative motion of some load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-spec-load-dangerous
Allow speculative motion of more load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-stalled-insns=n
Define how many insns (if any) can be moved prematurely from the queue of stalled insns into the ready list, during the second scheduling pass.
-fsched-stalled-insns-dep=n
Define how many insn groups (cycles) will be examined for a dependency on a stalled insn that is a candidate for premature removal from the queue of stalled insns. This has an effect only during the second scheduling pass, and only if -fsched-stalled-insns is used and its value is not zero.
-fsched2-use-superblocks
When scheduling after register allocation, use the superblock scheduling algorithm. Superblock scheduling allows motion across basic block boundaries, resulting in faster schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm.
This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.
-fsched2-use-traces
Use the -fsched2-use-superblocks algorithm when scheduling after register allocation, and additionally perform code duplication in order to increase the size of superblocks using the tracer pass. See -ftracer for details on trace formation.
This mode should produce faster but significantly longer programs. Also, without -fbranch-probabilities the constructed traces may not match reality and may hurt performance. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.
-fcaller-saves
Enable values to be allocated in registers that will be clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls.
Such allocation is done only when it seems to result in better code than would otherwise be produced.
This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead.
Enabled at levels -O2, -O3, -Os.
-fmove-all-movables
Forces all invariant computations in loops to be moved outside the loop.
-freduce-all-givs
Forces all general-induction variables in loops to be strength-reduced.
Note: When compiling programs written in Fortran, -fmove-all-movables and -freduce-all-givs are enabled by default when you use the optimizer.
These options may generate better or worse code; results are highly dependent on the structure of loops within the source code.
These two options are intended to be removed someday, once they have helped determine the efficacy of various approaches to improving loop optimizations.
Please contact <gcc@gcc.gnu.org>, and describe how use of these options affects the performance of your production code. Examples of code that runs slower when these
options are enabled are very valuable.
-fno-peephole
-fno-peephole2
Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some tar-
gets use one, some use the other, a few use both.
-fpeephole is enabled by default. -fpeephole2 is enabled at levels -O2, -O3, -Os.
-fno-guess-branch-probability
Do not guess branch probabilities using a randomized model.
Sometimes GCC will opt to use a randomized model to guess branch probabilities, when none are available from either profiling feedback (-fprofile-arcs) or
__builtin_expect. This means that different runs of the compiler on the same program may produce different object code.
In a hard real-time system, people don’t want different runs of the compiler to produce code that has different behavior; minimizing non-determinism is of paramount
import. This switch allows users to reduce non-determinism, possibly at the expense of inferior optimization.
The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.
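When reproducible branch layout matters, explicit hints can be given with GCC's "__builtin_expect" instead of relying on guessed probabilities; a minimal sketch:
int
process (int err)
{
  /* Mark the error path as unlikely so the compiler does not need
     to guess a probability for this branch.  */
  if (__builtin_expect (err != 0, 0))
    return -1;
  return 0;
}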
-freorder-blocks
Reorder basic blocks in the compiled function in order to reduce number of taken branches and improve code locality.
Enabled at levels -O2, -O3.
-freorder-functions
Reorder functions in the object file in order to improve code locality. This is implemented by using special subsections ".text.hot" for the most frequently executed functions and ".text.unlikely" for unlikely executed functions. Reordering is done by the linker, so the object file format must support named sections and the linker must place them in a reasonable way.
Also, profile feedback must be available to make this option effective. See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
-fstrict-aliasing
Allows the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of
expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same.
For example, an "unsigned int" can alias an "int", but not a "void*" or a "double". A character type may alias any other type.
Pay special attention to code like this:
union a_union {
  int i;
  double d;
};

int f() {
  a_union t;
  t.d = 3.0;
  return t.i;
}
The practice of reading from a different union member than the one most recently written to (called ‘‘type-punning’’) is common. Even with -fstrict-aliasing, type-pun-
ning is allowed, provided the memory is accessed through the union type. So, the code above will work as expected. However, this code might not:
int f() {
  a_union t;
  int* ip;
  t.d = 3.0;
  ip = &t.i;
  return *ip;
}
Every language that wishes to perform language-specific alias analysis should define a function that computes, given a "tree" node, an alias set for the node. Nodes in
different alias sets are not allowed to alias. For an example, see the C front-end function "c_get_alias_set".
Enabled at levels -O2, -O3, -Os.
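One common way to keep such code well-defined under -fstrict-aliasing is to copy the bytes with "memcpy" instead of reading through a pointer of a different type; a sketch (which bytes you obtain still depends on the representation and byte order of the target):
#include <string.h>
int
low_bits_of_double (double d)
{
  int i;
  /* A byte-wise copy does not access the object through an
     incompatible lvalue type, so it does not violate the
     aliasing rules.  */
  memcpy (&i, &d, sizeof (int));
  return i;
}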
-falign-functions
-falign-functions=n
Align the start of functions to the next power-of-two greater than n, skipping up to n bytes. For instance, -falign-functions=32 aligns functions to the next 32-byte
boundary, but -falign-functions=24 would align to the next 32-byte boundary only if this can be done by skipping 23 bytes or less.
-fno-align-functions and -falign-functions=1 are equivalent and mean that functions will not be aligned.
Some assemblers only support this flag when n is a power of two; in that case, it is rounded up.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-labels
-falign-labels=n
Align all branch targets to a power-of-two boundary, skipping up to n bytes like -falign-functions. This option can easily make code slower, because it must insert
dummy operations for when the branch target is reached in the usual flow of the code.
-fno-align-labels and -falign-labels=1 are equivalent and mean that labels will not be aligned.
If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead.
If n is not specified or is zero, use a machine-dependent default which is very likely to be 1, meaning no alignment.
Enabled at levels -O2, -O3.
-falign-loops
-falign-loops=n
Align loops to a power-of-two boundary, skipping up to n bytes like -falign-functions. The hope is that the loop will be executed many times, which will make up for any
execution of the dummy operations.
-fno-align-loops and -falign-loops=1 are equivalent and mean that loops will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-jumps
-falign-jumps=n
Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping, skipping up to n bytes like -falign-functions. In
this case, no dummy operations need be executed.
-fno-align-jumps and -falign-jumps=1 are equivalent and mean that branch targets will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-frename-registers
Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization will most benefit processors
with lots of registers. It can, however, make debugging impossible, since variables will no longer stay in a ‘‘home register’’.
-fweb
Constructs webs as commonly used for register allocation purposes and assigns each web an individual pseudo register. This allows the register allocation pass to operate on
pseudos directly, but also strengthens several other optimization passes, such as CSE, loop optimizer and trivial dead code remover. It can, however, make debugging
impossible, since variables will no longer stay in a ‘‘home register’’.
Enabled at level -O3.
-fno-cprop-registers
After register allocation and post-register allocation instruction splitting, we perform a copy-propagation pass to try to reduce scheduling dependencies and occasion-
ally eliminate the copy.
Disabled at levels -O, -O2, -O3, -Os.
-fprofile-generate
Enable options usually used for instrumenting the application to produce a profile useful for later recompilation with profile-feedback-based optimization. You must use
"-fprofile-generate" both when compiling and when linking your program.
The following options are enabled: "-fprofile-arcs", "-fprofile-values", "-fvpt".
-fprofile-use
Enable profile feedback directed optimizations, and optimizations generally profitable only with profile feedback available.
The following options are enabled: "-fbranch-probabilities", "-fvpt", "-funroll-loops", "-fpeel-loops", "-ftracer".
The following options control compiler behavior regarding floating point arithmetic. These options trade off between speed and correctness. All must be specifically
enabled.
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a "double" is sup-
posed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE
floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
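As a rough sketch of the kind of code this matters for (the outcome depends on the target and on which values happen to be spilled), consider comparing two copies of the same computation on an x87-style machine:
int
quotients_agree (double a, double b)
{
  double q1 = a / b;
  double q2 = a / b;
  /* Without -ffloat-store one copy may live only in an extended-
     precision register while the other is rounded to "double" in
     memory, so the comparison can fail; with -ffloat-store both
     are stored and rounded before comparing.  */
  return q1 == q2;
}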
-ffast-math
Sets -fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, -ffinite-math-only, -fno-rounding-math and -fno-signaling-nans.
This option causes the preprocessor macro "__FAST_MATH__" to be defined.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
-fno-math-errno
Do not set ERRNO after calling math functions that are executed with a single instruction, e.g., sqrt. A program that relies on IEEE exceptions for math error handling
may want to use this flag for speed while maintaining IEEE arithmetic compatibility.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -fmath-errno.
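For example, with -fno-math-errno a call like the one in this sketch may be expanded to a single hardware square-root instruction instead of a library call that also has to set "errno" (whether this happens depends on the target):
#include <math.h>
double
distance (double x, double y)
{
  /* Freed from preserving errno, the compiler may use the target's
     square-root instruction directly for this call.  */
  return sqrt (x * x + y * y);
}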
-funsafe-math-optimizations
Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at
link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -fno-unsafe-math-optimizations.
-ffinite-math-only
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications.
The default is -fno-finite-math-only.
-fno-trapping-math
Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and
invalid operation. This option implies -fno-signaling-nans. Setting this option may allow faster code if one relies on ‘‘non-stop’’ IEEE arithmetic, for example.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -ftrapping-math.
-frounding-math
Disable transformations and optimizations that assume default floating point rounding behavior. This is round-to-zero for all floating point to integer conversions, and
round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be exe-
cuted with a non-default rounding mode. This option disables constant folding of floating point expressions at compile-time (which may be affected by rounding mode) and
arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.
The default is -fno-rounding-math.
This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide
finer control of this setting using C99’s "FENV_ACCESS" pragma. This command line option will be used to specify the default state for "FENV_ACCESS".
-fsignaling-nans
Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may
change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math.
This option causes the preprocessor macro "__SUPPORT_SNAN__" to be defined.
The default is -fno-signaling-nans.
This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior.
-fsingle-precision-constant
Treat floating point constants as single precision constants instead of implicitly converting them to double precision.
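For instance, in the sketch below the unsuffixed constant normally has type "double", which forces the multiplication to be carried out in double precision; with -fsingle-precision-constant it is treated as "float" (writing 3.14159f in the source has a similar effect for this one constant):
float
circumference (float r)
{
  /* Without the option, the constant is a double and r is promoted,
     so the arithmetic is performed in double precision.  */
  return 2.0f * 3.14159 * r;
}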
The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce
broken code.
-fbranch-probabilities
After running a program compiled with -fprofile-arcs, you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of
times each branch was taken. When the program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file.
The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for
both compilations.
With -fbranch-probabilities, GCC puts a REG_BR_PROB note on each JUMP_INSN and CALL_INSN. These can be used to improve optimization. Currently, they are only used in
one place: in reorg.c, instead of guessing which path a branch is mostly to take, the REG_BR_PROB values are used to exactly determine which path is taken more often.
-fprofile-values
If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered.
With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions and adds REG_VALUE_PROFILE notes to instructions for their later usage
in optimizations.
-fvpt
If combined with -fprofile-arcs, it instructs the compiler to add code to gather information about the values of expressions.
With -fbranch-probabilities, it reads back the gathered data and actually performs the optimizations based on it. Currently the optimizations include specialization of division operations using knowledge about the value of the denominator.
-fnew-ra
Use a graph coloring register allocator. Currently this option is meant for testing, so we are interested in hearing about miscompilations with -fnew-ra.
-ftracer
Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function, allowing other optimizations to do a better job.
-funit-at-a-time
Parse the whole compilation unit before starting to produce code. This allows some extra optimizations to take place but consumes more memory.
-funroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop. It also turns on
complete loop peeling (i.e. complete removal of loops with small constant number of iterations). This option makes code larger, and may or may not make it run faster.
-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the
same options as -funroll-loops.
-fpeel-loops
Peels loops for which there is enough information that they do not roll much (from profile feedback). It also turns on complete loop peeling (i.e. complete removal
of loops with small constant number of iterations).
-funswitch-loops
Move branches with loop invariant conditions out of the loop, with duplicates of the loop on both branches (modified according to result of the condition).
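A sketch of the kind of loop this targets: the test on "scale" is loop-invariant, so the branch may be hoisted out of the loop, leaving one copy of the loop for each outcome (the code is only illustrative):
void
apply (float *a, int n, int scale, float factor)
{
  int i;
  for (i = 0; i < n; i++)
    {
      if (scale)          /* invariant condition: may be moved outside, */
        a[i] *= factor;   /* duplicating the loop for each branch       */
      else
        a[i] += factor;
    }
}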
-fold-unroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop, using the old loop unroller whose loop recognition is based on notes from the frontend. -fold-unroll-loops implies both -fstrength-reduce and -frerun-cse-after-loop. This option makes code larger, and may or may not make it run faster.
-fold-unroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This is done using the old loop unroller whose loop recognition is based on notes from the frontend. This usually makes programs run more slowly. -fold-unroll-all-loops implies the same options as -fold-unroll-loops.
-fprefetch-loop-arrays
If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.
Disabled at level -Os.
-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data
item determines the section’s name in the output file.
Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object
format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker will create larger object and exe-
cutable files and will also be slower. You will not be able to use "gprof" on all systems if you specify this option and you may have problems with debugging if you
specify both this option and -g.
-fbranch-target-load-optimize
Perform branch target register load optimization before prologue / epilogue threading. The use of target registers can typically be exposed only during reload, thus
hoisting loads out of loops and doing inter-block scheduling needs a separate optimization pass.
-fbranch-target-load-optimize2
Perform branch target register load optimization after prologue / epilogue threading.
These options control various sorts of optimizations.
Without any optimization option, the compiler’s goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if
you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the func-
tion and get exactly the results you would expect from the source code.
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the
program.
The compiler performs optimization based on the knowledge it has of the program. Using the -funit-at-a-time flag will allow the compiler to consider information gained from
later functions in the file when compiling a function. Compiling multiple files at once to a single output file (and using -funit-at-a-time) will allow the compiler to use
information gained from all of the files when compiling each of them.
Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed.
-O
-O1 Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.
With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time.
-O turns on the following optimization flags: -fdefer-pop -fmerge-constants -fthread-jumps -floop-optimize -fif-conversion -fif-conversion2 -fdelayed-branch
-fguess-branch-probability -fcprop-registers
-O also turns on -fomit-frame-pointer on machines where doing so does not interfere with debugging.
-O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or func-
tion inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.
-O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -fforce-mem -foptimize-sibling-calls -fstrength-reduce
-fcse-follow-jumps -fcse-skip-blocks -frerun-cse-after-loop -frerun-loop-opt -fgcse -fgcse-lm -fgcse-sm -fgcse-las -fdelete-null-pointer-checks -fexpensive-opti-
mizations -fregmove -fschedule-insns -fschedule-insns2 -fsched-interblock -fsched-spec -fcaller-saves -fpeephole2 -freorder-blocks -freorder-functions -fstrict-alias-
ing -funit-at-a-time -falign-functions -falign-jumps -falign-loops -falign-labels -fcrossjumping
Please note the warning under -fgcse about invoking -O2 on programs that use computed gotos.
-O3 Optimize yet more. -O3 turns on all optimizations specified by -O2 and also turns on the -finline-functions, -fweb, -frename-registers and -funswitch-loops options.
-O0 Do not optimize. This is the default.
-Os Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also performs further optimizations designed to reduce code size.
-Os disables the following optimization flags: -falign-functions -falign-jumps -falign-loops -falign-labels -freorder-blocks -fprefetch-loop-arrays
If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.
Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo would be -fno-foo. In the table
below, only one of the forms is listed---the one you typically will use. You can figure out the other form by either removing no- or adding it.
The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare
cases when ‘‘fine-tuning’’ of optimizations to be performed is desired.
-fno-default-inline
Do not make member functions inline by default merely because they are defined inside the class scope (C++ only). Otherwise, when you specify -O, member functions
defined inside class scope are compiled inline by default; i.e., you don’t need to add inline in front of the member function name.
-fno-defer-pop
Always pop the arguments to each function call as soon as that function returns. For machines which must pop arguments after a function call, the compiler normally lets
arguments accumulate on the stack for several function calls and pops them all at once.
Disabled at levels -O, -O2, -O3, -Os.
-fforce-mem
Force memory operands to be copied into registers before doing arithmetic on them. This produces better code by making all memory references potential common subexpres-
sions. When they are not common subexpressions, instruction combination should eliminate the separate register-load.
Enabled at levels -O2, -O3, -Os.
-fforce-addr
Force memory address constants to be copied into registers before doing arithmetic on them. This may produce better code just as -fforce-mem may.
-fomit-frame-pointer
Don’t keep the frame pointer in a register for functions that don’t need one. This avoids the instructions to save, set up and restore frame pointers; it also makes an
extra register available in many functions. It also makes debugging impossible on some machines.
On some machines, such as the VAX, this flag has no effect, because the standard calling sequence automatically handles the frame pointer and nothing is saved by pre-
tending it doesn’t exist. The machine-description macro "FRAME_POINTER_REQUIRED" controls whether a target machine supports this flag.
Enabled at levels -O, -O2, -O3, -Os.
-foptimize-sibling-calls
Optimize sibling and tail recursive calls.
Enabled at levels -O2, -O3, -Os.
-fno-inline
Don’t pay attention to the "inline" keyword. Normally this option is used to keep the compiler from expanding any functions inline. Note that if you are not optimiz-
ing, no functions can be expanded inline.
-finline-functions
Integrate all simple functions into their callers. The compiler heuristically decides which functions are simple enough to be worth integrating in this way.
If all calls to a given function are integrated, and the function is declared "static", then the function is normally not output as assembler code in its own right.
Enabled at level -O3.
-finline-limit=n
By default, GCC limits the size of functions that can be inlined. This flag allows the control of this limit for functions that are explicitly marked as inline (i.e.,
marked with the inline keyword or defined within the class definition in c++). n is the size of functions that can be inlined in number of pseudo instructions (not
counting parameter handling). The default value of n is 600. Increasing this value can result in more inlined code at the cost of compilation time and memory consump-
tion. Decreasing usually makes the compilation faster and less code will be inlined (which presumably means slower programs). This option is particularly useful for
programs that use inlining heavily such as those based on recursive templates with C++.
Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of
these parameters as follows:
@item max-inline-insns-single
is set to I<n>/2.
@item max-inline-insns-auto
is set to I<n>/2.
@item min-inline-insns
is set to 130 or I<n>/4, whichever is smaller.
@item max-inline-insns-rtl
is set to I<n>.
See below for a documentation of the individual parameters controlling inlining.
Note: pseudo instruction represents, in this particular context, an abstract measurement of function’s size. In no way, it represents a count of assembly instructions
and as such its exact meaning might change from one release to an another.
-fkeep-inline-functions
Even if all calls to a given function are integrated, and the function is declared "static", nevertheless output a separate run-time callable version of the function.
This switch does not affect "extern inline" functions.
-fkeep-static-consts
Emit variables declared "static const" when optimization isn’t turned on, even if the variables aren’t referenced.
GCC enables this option by default. If you want to force the compiler to check if the variable was referenced, regardless of whether or not optimization is turned on,
use the -fno-keep-static-consts option.
-fmerge-constants
Attempt to merge identical constants (string constants and floating point constants) across compilation units.
This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior.
Enabled at levels -O, -O2, -O3, -Os.
-fmerge-all-constants
Attempt to merge identical constants and identical variables.
This option implies -fmerge-constants. In addition to -fmerge-constants this considers e.g. even constant initialized arrays or initialized constant variables with
integral or floating point types. Languages like C or C++ require each non-automatic variable to have distinct location, so using this option will result in non-con-
forming behavior.
-fnew-ra
Use a graph coloring register allocator. Currently this option is meant only for testing. Users should not specify this option, since it is not yet ready for produc-
tion use.
-fno-branch-count-reg
Do not use ‘‘decrement and branch’’ instructions on a count register, but instead generate a sequence of instructions that decrement a register, compare it against zero,
then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390.
The default is -fbranch-count-reg, enabled when -fstrength-reduce is enabled.
-fno-function-cse
Do not put function addresses in registers; make each instruction that calls a constant function contain the function’s address explicitly.
This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not
used.
The default is -ffunction-cse
-fno-zero-initialized-in-bss
If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code.
This option turns off this behavior because some programs explicitly rely on variables going to the data section. E.g., so that the resulting executable can find the
beginning of that section and/or make assumptions based on that.
The default is -fzero-initialized-in-bss.
-fstrength-reduce
Perform the optimizations of loop strength reduction and elimination of iteration variables.
Enabled at levels -O2, -O3, -Os.
-fthread-jumps
Perform optimizations where we check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redi-
rected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false.
Enabled at levels -O, -O2, -O3, -Os.
-fcse-follow-jumps
In common subexpression elimination, scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an
"if" statement with an "else" clause, CSE will follow the jump when the condition tested is false.
Enabled at levels -O2, -O3, -Os.
-fcse-skip-blocks
This is similar to -fcse-follow-jumps, but causes CSE to follow jumps which conditionally skip over blocks. When CSE encounters a simple "if" statement with no else
clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the "if".
Enabled at levels -O2, -O3, -Os.
-frerun-cse-after-loop
Re-run common subexpression elimination after loop optimizations has been performed.
Enabled at levels -O2, -O3, -Os.
-frerun-loop-opt
Run the loop optimizer twice.
Enabled at levels -O2, -O3, -Os.
-fgcse
Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation.
Note: When compiling a program using computed gotos, a GCC extension, you may get better runtime performance if you disable the global common subexpression elimination
pass by adding -fno-gcse to the command line.
Enabled at levels -O2, -O3, -Os.
-fgcse-lm
When -fgcse-lm is enabled, global common subexpression elimination will attempt to move loads which are only killed by stores into themselves. This allows a loop con-
taining a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop.
Enabled by default when gcse is enabled.
-fgcse-sm
When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass will attempt to move stores out of loops. When used in
conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop.
Enabled by default when gcse is enabled.
-fgcse-las
When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial
and full redundancies).
Enabled by default when gcse is enabled.
-floop-optimize
Perform loop optimizations: move constant expressions out of loops, simplify exit test conditions and optionally do strength-reduction and loop unrolling as well.
Enabled at levels -O, -O2, -O3, -Os.
-fcrossjumping
Perform cross-jumping transformation. This transformation unifies equivalent code and save code size. The resulting code may or may not perform better than without
cross-jumping.
Enabled at levels -O, -O2, -O3, -Os.
-fif-conversion
Attempt to transform conditional jumps into branch-less equivalents. This include use of conditional moves, min, max, set flags and abs instructions, and some tricks
doable by standard arithmetics. The use of conditional execution on chips where it is available is controlled by "if-conversion2".
Enabled at levels -O, -O2, -O3, -Os.
-fif-conversion2
Use conditional execution (where available) to transform conditional jumps into branch-less equivalents.
Enabled at levels -O, -O2, -O3, -Os.
-fdelete-null-pointer-checks
Use global dataflow analysis to identify and eliminate useless checks for null pointers. The compiler assumes that dereferencing a null pointer would have halted the
program. If a pointer is checked after it has already been dereferenced, it cannot be null.
In some environments, this assumption is not true, and programs can safely dereference null pointers. Use -fno-delete-null-pointer-checks to disable this optimization
for programs which depend on that behavior.
Enabled at levels -O2, -O3, -Os.
-fexpensive-optimizations
Perform a number of minor optimizations that are relatively expensive.
Enabled at levels -O2, -O3, -Os.
-foptimize-register-move
-fregmove
Attempt to reassign register numbers in move instructions and as operands of other simple instructions in order to maximize the amount of register tying. This is espe-
cially helpful on machines with two-operand instructions.
Note -fregmove and -foptimize-register-move are the same optimization.
Enabled at levels -O2, -O3, -Os.
-fdelayed-branch
If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions.
Enabled at levels -O, -O2, -O3, -Os.
-fschedule-insns
If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have
slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating point instruction is required.
Enabled at levels -O2, -O3, -Os.
-fschedule-insns2
Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines
with a relatively small number of registers and where memory load instructions take more than one cycle.
Enabled at levels -O2, -O3, -Os.
-fno-sched-interblock
Don’t schedule instructions across basic blocks. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or at -O2
or higher.
-fno-sched-spec
Don’t allow speculative motion of non-load instructions. This is normally enabled by default when scheduling before register allocation, i.e. with -fschedule-insns or
at -O2 or higher.
-fsched-spec-load
Allow speculative motion of some load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-spec-load-dangerous
Allow speculative motion of more load instructions. This only makes sense when scheduling before register allocation, i.e. with -fschedule-insns or at -O2 or higher.
-fsched-stalled-insns=n
Define how many insns (if any) can be moved prematurely from the queue of stalled insns into the ready list, during the second scheduling pass.
-fsched-stalled-insns-dep=n
Define how many insn groups (cycles) will be examined for a dependency on a stalled insn that is candidate for premature removal from the queue of stalled insns. Has an
effect only during the second scheduling pass, and only if -fsched-stalled-insns is used and its value is not zero.
-fsched2-use-superblocks
When scheduling after register allocation, do use superblock scheduling algorithm. Superblock scheduling allows motion across basic block boundaries resulting on faster
schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm.
This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.
-fsched2-use-traces
Use -fsched2-use-superblocks algorithm when scheduling after register allocation and additionally perform code duplication in order to increase the size of superblocks
using tracer pass. See -ftracer for details on trace formation.
This mode should produce faster but significantly longer programs. Also without "-fbranch-probabilities" the traces constructed may not match the reality and hurt the
performance. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.
-fcaller-saves
Enable values to be allocated in registers that will be clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls.
Such allocation is done only when it seems to result in better code than would otherwise be produced.
This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead.
Enabled at levels -O2, -O3, -Os.
-fmove-all-movables
Forces all invariant computations in loops to be moved outside the loop.
-freduce-all-givs
Forces all general-induction variables in loops to be strength-reduced.
Note: When compiling programs written in Fortran, -fmove-all-movables and -freduce-all-givs are enabled by default when you use the optimizer.
These options may generate better or worse code; results are highly dependent on the structure of loops within the source code.
These two options are intended to be removed someday, once they have helped determine the efficacy of various approaches to improving loop optimizations.
Please contact <gcc@gcc.gnu.org>, and describe how use of these options affects the performance of your production code. Examples of code that runs slower when these
options are enabled are very valuable.
-fno-peephole
-fno-peephole2
Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some tar-
gets use one, some use the other, a few use both.
-fpeephole is enabled by default. -fpeephole2 is enabled at levels -O2, -O3, -Os.
-fno-guess-branch-probability
Do not guess branch probabilities using a randomized model.
Sometimes GCC will opt to use a randomized model to guess branch probabilities, when none are available from either profiling feedback (-fprofile-arcs) or
__builtin_expect. This means that different runs of the compiler on the same program may produce different object code.
In a hard real-time system, people don’t want different runs of the compiler to produce code that has different behavior; minimizing non-determinism is of paramount
import. This switch allows users to reduce non-determinism, possibly at the expense of inferior optimization.
The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os.
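Where a particular branch matters, an explicit hint can be supplied instead of relying on guessed probabilities; a minimal sketch using __builtin_expect (hypothetical function, not taken from this manual):
    /* Mark the error path as unlikely so no probability needs to be
       guessed for this branch.  */
    int checked_load(int *p) {
      if (__builtin_expect(p == 0, 0))
        return -1;          /* unlikely error path */
      return *p;            /* expected fast path */
    }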
-freorder-blocks
Reorder basic blocks in the compiled function in order to reduce the number of taken branches and improve code locality.
Enabled at levels -O2, -O3.
-freorder-functions
Reorder functions in the object file in order to improve code locality. This is implemented by using the special subsections ".text.hot" for the most frequently executed
functions and ".text.unlikely" for unlikely executed functions. Reordering is done by the linker, so the object file format must support named sections and the linker must
place them in a reasonable way.
Profile feedback must also be available to make this option effective. See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
-fstrict-aliasing
Allows the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of
expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same.
For example, an "unsigned int" can alias an "int", but not a "void*" or a "double". A character type may alias any other type.
Pay special attention to code like this:
    union a_union {
      int i;
      double d;
    };

    int f() {
      union a_union t;
      t.d = 3.0;
      return t.i;
    }
The practice of reading from a different union member than the one most recently written to (called ‘‘type-punning’’) is common. Even with -fstrict-aliasing, type-pun-
ning is allowed, provided the memory is accessed through the union type. So, the code above will work as expected. However, this code might not:
    int f() {
      union a_union t;
      int *ip;
      t.d = 3.0;
      ip = &t.i;
      return *ip;
    }
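By contrast, a variant that accesses the memory through the union type stays within the rule described above; a minimal sketch:
    int f() {
      union a_union t;
      union a_union *up;
      t.d = 3.0;
      up = &t;
      return up->i;         /* the access is through the union type */
    }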
Every language that wishes to perform language-specific alias analysis should define a function that computes, given a "tree" node, an alias set for the node. Nodes in
different alias sets are not allowed to alias. For an example, see the C front-end function "c_get_alias_set".
Enabled at levels -O2, -O3, -Os.
-falign-functions
-falign-functions=n
Align the start of functions to the next power-of-two greater than n, skipping up to n bytes. For instance, -falign-functions=32 aligns functions to the next 32-byte
boundary, but -falign-functions=24 would align to the next 32-byte boundary only if this can be done by skipping 23 bytes or less.
-fno-align-functions and -falign-functions=1 are equivalent and mean that functions will not be aligned.
Some assemblers only support this flag when n is a power of two; in that case, it is rounded up.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-labels
-falign-labels=n
Align all branch targets to a power-of-two boundary, skipping up to n bytes like -falign-functions. This option can easily make code slower, because it must insert
dummy operations for when the branch target is reached in the usual flow of the code.
-fno-align-labels and -falign-labels=1 are equivalent and mean that labels will not be aligned.
If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead.
If n is not specified or is zero, use a machine-dependent default which is very likely to be 1, meaning no alignment.
Enabled at levels -O2, -O3.
-falign-loops
-falign-loops=n
Align loops to a power-of-two boundary, skipping up to n bytes like -falign-functions. The hope is that the loop will be executed many times, which will make up for any
execution of the dummy operations.
-fno-align-loops and -falign-loops=1 are equivalent and mean that loops will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-jumps
-falign-jumps=n
Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping, skipping up to n bytes like -falign-functions. In
this case, no dummy operations need be executed.
-fno-align-jumps and -falign-jumps=1 are equivalent and mean that such branch targets will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-frename-registers
Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization will most benefit processors
with lots of registers. It can, however, make debugging impossible, since variables will no longer stay in a ‘‘home register’’.
-fweb
Constructs webs as commonly used for register allocation purposes and assigns each web an individual pseudo register. This allows the register allocation pass to operate on
pseudos directly, but also strengthens several other optimization passes, such as CSE, loop optimizer and trivial dead code remover. It can, however, make debugging
impossible, since variables will no longer stay in a ‘‘home register’’.
Enabled at level -O3.
-fno-cprop-registers
After register allocation and post-register allocation instruction splitting, we perform a copy-propagation pass to try to reduce scheduling dependencies and occasion-
ally eliminate the copy.
-fcprop-registers is enabled at levels -O, -O2, -O3, -Os.
-fprofile-generate
Enable the options usually used for instrumenting an application to produce a profile useful for later recompilation with profile-feedback-based optimization. You must use
"-fprofile-generate" both when compiling and when linking your program.
The following options are enabled: "-fprofile-arcs", "-fprofile-values", "-fvpt".
-fprofile-use
Enable profile feedback directed optimizations, and optimizations generally profitable only with profile feedback available.
The following options are enabled: "-fbranch-probabilities", "-fvpt", "-funroll-loops", "-fpeel-loops", "-ftracer".
The following options control compiler behavior regarding floating point arithmetic. These options trade off between speed and correctness. All must be specifically
enabled.
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a "double" is sup-
posed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE
floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
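A hedged sketch of the kind of code this is aimed at (hypothetical function, not taken from this manual):
    /* On machines whose FP registers are wider than "double" (68881, x86),
       the intermediate a * b may otherwise be kept at extended precision.
       Storing it in a variable and compiling with -ffloat-store forces it
       to be rounded to the precision of "double" before the comparison.  */
    int exceeds(double a, double b, double limit) {
      double prod = a * b;  /* pertinent intermediate stored in a variable */
      return prod > limit;
    }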
-ffast-math
Sets -fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, -ffinite-math-only, -fno-rounding-math and -fno-signaling-nans.
This option causes the preprocessor macro "__FAST_MATH__" to be defined.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
-fno-math-errno
Do not set ERRNO after calling math functions that are executed with a single instruction, e.g., sqrt. A program that relies on IEEE exceptions for math error handling
may want to use this flag for speed while maintaining IEEE arithmetic compatibility.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -fmath-errno.
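A hedged sketch of code that does rely on errno from math functions, and therefore needs the default -fmath-errno (hypothetical function, not taken from this manual):
    #include <errno.h>
    #include <math.h>

    /* This check depends on sqrt setting errno for a negative argument;
       with -fno-math-errno the call may be emitted as a single hardware
       instruction that never touches errno.  */
    int safe_sqrt(double x, double *out) {
      errno = 0;
      *out = sqrt(x);
      return errno == 0 ? 0 : -1;
    }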
-funsafe-math-optimizations
Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at
link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -fno-unsafe-math-optimizations.
-ffinite-math-only
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications.
The default is -fno-finite-math-only.
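A hedged sketch of a test that this option can defeat, since arguments and results are assumed finite (hypothetical function, not taken from this manual):
    #include <math.h>

    /* With -ffinite-math-only the compiler may assume x is never a NaN,
       so this test (or the equivalent x != x) can be optimized away and
       the fallback never taken.  */
    double replace_nan(double x) {
      if (isnan(x))
        return 0.0;
      return x;
    }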
-fno-trapping-math
Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and
invalid operation. This option implies -fno-signaling-nans. Setting this option may allow faster code if one relies on ‘‘non-stop’’ IEEE arithmetic, for example.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO
rules/specifications for math functions.
The default is -ftrapping-math.
-frounding-math
Disable transformations and optimizations that assume default floating point rounding behavior. This is round-to-zero for all floating point to integer conversions, and
round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be exe-
cuted with a non-default rounding mode. This option disables constant folding of floating point expressions at compile-time (which may be affected by rounding mode) and
arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.
The default is -fno-rounding-math.
This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide
finer control of this setting using C99’s "FENV_ACCESS" pragma. This command line option will be used to specify the default state for "FENV_ACCESS".
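A hedged sketch of a program that changes the rounding mode at run time, the situation this option is intended for (hypothetical function; availability of FE_UPWARD is target-dependent):
    #include <fenv.h>

    /* 1.0 / 3.0 would normally be folded at compile time under
       round-to-nearest; -frounding-math disables such folding so the
       result respects the rounding mode selected here at run time.  */
    double upper_third(void) {
      fesetround(FE_UPWARD);
      return 1.0 / 3.0;
    }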
-fsignaling-nans
Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may
change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math.
This option causes the preprocessor macro "__SUPPORT_SNAN__" to be defined.
The default is -fno-signaling-nans.
This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior.
-fsingle-precision-constant
Treat floating point constants as single precision constants instead of implicitly converting them to double precision.
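A hedged sketch of the difference (hypothetical function, not taken from this manual):
    /* Without the option, the unsuffixed constant 2.5 has type "double",
       so x is widened, multiplied, and narrowed again; with
       -fsingle-precision-constant the constant is treated as a "float",
       equivalent to writing 2.5f.  */
    float scale_by_2_5(float x) {
      return x * 2.5;
    }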
The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce
broken code.
-fbranch-probabilities
After running a program compiled with -fprofile-arcs, you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of
times each branch was taken. When the program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file.
The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for
both compilations.
With -fbranch-probabilities, GCC puts a REG_BR_PROB note on each JUMP_INSN and CALL_INSN. These can be used to improve optimization. Currently, they are only used in
one place: in reorg.c, instead of guessing which path a branch is most likely to take, the REG_BR_PROB values are used to exactly determine which path is taken more often.
-fprofile-values
If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered.
With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions and adds REG_VALUE_PROFILE notes to instructions for their later usage
in optimizations.
-fvpt
If combined with -fprofile-arcs, it instructs the compiler to add code to gather information about the values of expressions.
With -fbranch-probabilities, it reads back the data gathered and actually performs the optimizations based on it. Currently the optimizations include specialization of
division operations using knowledge about the value of the denominator.
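A hedged, hypothetical example of the kind of division that value profiling can specialize:
    /* If value profiling shows that d is almost always, say, 16, the
       division may be specialized into a fast path for that value
       (a shift) with the general division kept as a fallback.  */
    unsigned bucket(unsigned x, unsigned d) {
      return x / d;
    }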
-fnew-ra
Use a graph coloring register allocator. Currently this option is meant for testing, so we are interested in hearing about miscompilations with -fnew-ra.
-ftracer
Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function, allowing other optimizations to do a better job.
-funit-at-a-time
Parse the whole compilation unit before starting to produce code. This allows some extra optimizations to take place but consumes more memory.
-funroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop. It also turns on
complete loop peeling (i.e. complete removal of loops with a small constant number of iterations). This option makes code larger, and may or may not make it run faster.
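A hedged example of the kind of loop this targets (hypothetical function, not taken from this manual):
    /* The trip count (8) is known at compile time, so the body may be
       unrolled, or the loop removed entirely by complete peeling.  */
    void add8(float *a, const float *b) {
      int i;
      for (i = 0; i < 8; i++)
        a[i] += b[i];
    }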
-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This usually makes programs run more slowly. -funroll-all-loops implies the
same options as -funroll-loops.
-fpeel-loops
Peels loops for which there is enough information (from profile feedback) that they do not roll many times. It also turns on complete loop peeling (i.e. complete removal of
loops with a small constant number of iterations).
-funswitch-loops
Move branches with loop invariant conditions out of the loop, with duplicates of the loop on both branches (modified according to result of the condition).
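A hedged, hypothetical candidate for unswitching: the condition below is loop-invariant, so the loop may be duplicated into one copy per outcome, each without the test inside:
    void scale(double *a, int n, int twice) {
      int i;
      for (i = 0; i < n; i++) {
        if (twice)            /* loop-invariant condition */
          a[i] *= 2.0;
        else
          a[i] *= 0.5;
      }
    }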
-fold-unroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop, using the old loop unroller, whose loop recognition is based on notes
from the frontend. -fold-unroll-loops implies both -fstrength-reduce and -frerun-cse-after-loop. This option makes code larger, and may or may not make it run faster.
-fold-unroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when the loop is entered. This is done using the old loop unroller, whose loop recognition is based on
notes from the frontend. This usually makes programs run more slowly. -fold-unroll-all-loops implies the same options as -fold-unroll-loops.
-fprefetch-loop-arrays
If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.
Disabled at level -Os.
-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data
item determines the section’s name in the output file.
Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object
format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker will create larger object and exe-
cutable files and will also be slower. You will not be able to use "gprof" on all systems if you specify this option and you may have problems with debugging if you
specify both this option and -g.
-fbranch-target-load-optimize
Perform branch target register load optimization before prologue / epilogue threading. The use of target registers can typically be exposed only during reload, thus
hoisting loads out of loops and doing inter-block scheduling needs a separate optimization pass.
-fbranch-target-load-optimize2
Perform branch target register load optimization after prologue / epilogue threading.