IEEE Standard
The rule of thumb is "round to nearest, ties to even": it is the rounding mode IEEE 754 recommends as the default for floating-point representation. It is not strictly followed everywhere, though; case in point, TensorFlow seems not to, truncating the mantissa instead when converting fp32 to bf16 (see "floating point - Convert FP32 to Bfloat16 in C++" on Stack Overflow; a sketch of both behaviors follows below).
- Round to nearest, ties to even – rounds to the nearest value; if the number falls midway, it is rounded to the nearest value with an even least significant digit.
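A minimal C++ sketch of both conversions, assuming the usual bit-twiddling approach discussed in that Stack Overflow thread; the function names are mine, and NaN handling is omitted for brevity:

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Truncation: drop the low 16 mantissa bits (what a naive
// fp32 -> bf16 converter does instead of IEEE rounding).
static uint16_t fp32_to_bf16_truncate(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    return static_cast<uint16_t>(bits >> 16);
}

// Round to nearest, ties to even: add a bias of 0x7FFF plus the
// LSB of the surviving mantissa, so an exact halfway case carries
// over only when that makes the result's LSB even.
// (A real implementation must special-case NaN first.)
static uint16_t fp32_to_bf16_rne(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    const uint32_t lsb = (bits >> 16) & 1u;  // LSB of the would-be bf16
    bits += 0x7FFFu + lsb;                   // rounding bias
    return static_cast<uint16_t>(bits >> 16);
}

int main() {
    // 1.01171875f sits exactly halfway between two adjacent bf16
    // values (0x3F81 = 1.0078125 and 0x3F82 = 1.015625).
    float x = 1.01171875f;
    std::printf("truncate: 0x%04X  rne: 0x%04X\n",
                fp32_to_bf16_truncate(x),  // 0x3F81 (just drops bits)
                fp32_to_bf16_rne(x));      // 0x3F82 (tie goes to even LSB)
}
```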
Key Update
There was a major misunderstanding on my part: I took "even LSB" to mean always forcing the LSB to 0. The proper IEEE rule is:
Rounding half to even
One may also round half to even, a tie-breaking rule without positive/negative bias and without bias toward/away from zero. By this convention, if the fractional part of x is 0.5, then y is the even integer nearest to x. Thus, for example, 23.5 becomes 24, as does 24.5; while −23.5 becomes −24, as does −24.5.
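A quick way to see this rule in action: std::nearbyint rounds according to the current floating-point environment, which defaults to round-to-nearest, ties-to-even:

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
    std::fesetround(FE_TONEAREST);  // the IEEE default mode, set explicitly
    const double xs[] = {0.5, 1.5, 2.5, 3.5, -0.5, -1.5};
    for (double x : xs)
        std::printf("%5.1f -> %5.1f\n", x, std::nearbyint(x));
    // 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4, -0.5 -> -0, -1.5 -> -2:
    // each tie lands on the nearest even integer.
}
```

Only exact halves invoke the tie-break; e.g. 1.4 still rounds to 1 even though 1 is odd. The "even LSB" is a property of which neighbor wins a tie, not something applied to every result.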