Take a decision tree as an example: start from the start state (the root node) and move down to the possible next states (the leaf nodes); each branch is taken with some probability.
The Bellman equation simply averages over all these possibilities, weighting each by its probability of occurring.
"Bellman equation states that, the value of the start state must equal the (discounted) value of the expected next state, plus the reward expected along the way".
The value function v_π is the unique solution to its Bellman equation.
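Because the value function is the unique fixed point of this equation, it can be recovered by repeatedly applying the Bellman backup until the values stop changing. Below is a minimal iterative-policy-evaluation sketch; the two states, transition probabilities, and rewards are made-up toy values for illustration, not taken from the book.

```python
# Iterative policy evaluation: repeatedly apply the Bellman backup until
# the values converge to the unique fixed point of the Bellman equation.
# The two-state MDP below ("A", "B", rewards, probabilities) is a toy example.

GAMMA = 0.9   # discount factor
THETA = 1e-8  # convergence threshold

# transitions[state] -> list of (probability, reward, next_state);
# the policy and environment dynamics are folded together for this sketch.
transitions = {
    "A": [(0.7, 1.0, "A"), (0.3, 0.0, "B")],
    "B": [(0.4, 2.0, "A"), (0.6, 0.0, "B")],
}

values = {s: 0.0 for s in transitions}

while True:
    delta = 0.0
    for s, outcomes in transitions.items():
        # Bellman backup: expected reward plus discounted value of the next
        # state, averaged over all outcomes weighted by their probability.
        new_v = sum(p * (r + GAMMA * values[s2]) for p, r, s2 in outcomes)
        delta = max(delta, abs(new_v - values[s]))
        values[s] = new_v
    if delta < THETA:
        break

print(values)  # the unique solution of the Bellman equation for this toy MDP
```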
Reference:
Sutton, R. S., and Barto, A. G., Reinforcement Learning: An Introduction.