Tendermint, Mempool
The mempool collects incoming txs and pushes them onto the proxy app for validation.
Initialization
In NewNode(), the mempool and mempoolReactor are created as well. Later, the reactor is added to the p2p.Switch to take part in the P2P network.
The P2P switch will start all Reactors it holds.
func (sw *Switch) OnStart() error {
    // Start reactors
    for _, reactor := range sw.reactors {
        err := reactor.Start()
        if err != nil {
            return errors.Wrapf(err, "failed to start %v", reactor)
        }
    }

    // Start accepting Peers.
    go sw.acceptRoutine()

    return nil
}
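For context, a minimal sketch of how the reactor might be wired into the switch during node setup (simplified; the real NewNode() does considerably more, and transport is assumed to exist):

// Sketch only: registered reactors are the ones started by OnStart() above.
sw := p2p.NewSwitch(config.P2P, transport)
sw.AddReactor("MEMPOOL", mempoolReactor)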
func createMempoolAndMempoolReactor(config *cfg.Config, proxyApp proxy.AppConns,
    state sm.State, memplMetrics *mempl.Metrics, logger log.Logger) (*mempl.Reactor, *mempl.CListMempool) {

    mempool := mempl.NewCListMempool(
        config.Mempool,
        proxyApp.Mempool(),
        state.LastBlockHeight,
        mempl.WithMetrics(memplMetrics),
        mempl.WithPreCheck(sm.TxPreCheck(state)),
        mempl.WithPostCheck(sm.TxPostCheck(state)),
    )
    mempoolLogger := logger.With("module", "mempool")
    mempoolReactor := mempl.NewReactor(config.Mempool, mempool)
    mempoolReactor.SetLogger(mempoolLogger)
    if config.Consensus.WaitForTxs() {
        mempool.EnableTxsAvailable()
    }
    return mempoolReactor, mempool
}
Mempool Config
Size -> how many txs the mempool can hold
CacheSize -> replay-protection mechanism; can be disabled if the app has its own
MaxTxBytes -> maximum size in bytes of a single tx
type MempoolConfig struct {
    RootDir     string `mapstructure:"home"`
    Recheck     bool   `mapstructure:"recheck"`
    Broadcast   bool   `mapstructure:"broadcast"`
    WalPath     string `mapstructure:"wal_dir"`
    Size        int    `mapstructure:"size"`
    MaxTxsBytes int64  `mapstructure:"max_txs_bytes"`
    CacheSize   int    `mapstructure:"cache_size"`
    MaxTxBytes  int    `mapstructure:"max_tx_bytes"`
}
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool
func DefaultMempoolConfig() *MempoolConfig {
    return &MempoolConfig{
        Recheck:   true,
        Broadcast: true,
        WalPath:   "",
        // Each signature verification takes .5ms, Size reduced until we implement
        // ABCI Recheck
        Size:        5000,
        MaxTxsBytes: 1024 * 1024 * 1024, // 1GB
        CacheSize:   10000,
        MaxTxBytes:  1024 * 1024, // 1MB
    }
}
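As an illustration (the values are arbitrary examples, not recommendations), an operator can start from the defaults and override individual fields:

memCfg := DefaultMempoolConfig()
memCfg.Size = 2000             // hold at most 2000 txs
memCfg.MaxTxBytes = 256 * 1024 // reject any single tx over 256KB
// Assumption: a CacheSize of 0 disables the cache, for apps that do their
// own replay protection (see the field notes above).
memCfg.CacheSize = 0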
Mempool interface
// Mempool defines the mempool interface.
//
// Updates to the mempool need to be synchronized with committing a block so
// apps can reset their transient state on Commit.
type Mempool interface {
    // CheckTx executes a new transaction against the application to determine
    // its validity and whether it should be added to the mempool.
    CheckTx(tx types.Tx, callback func(*abci.Response), txInfo TxInfo) error

    // ReapMaxBytesMaxGas reaps transactions from the mempool up to maxBytes
    // bytes total with the condition that the total gasWanted must be less than
    // maxGas.
    // If both maxes are negative, there is no cap on the size of all returned
    // transactions (~ all available transactions).
    ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs

    // ReapMaxTxs reaps up to max transactions from the mempool.
    // If max is negative, there is no cap on the size of all returned
    // transactions (~ all available transactions).
    ReapMaxTxs(max int) types.Txs

    // Lock locks the mempool. The consensus must be able to hold lock to safely update.
    Lock()

    // Unlock unlocks the mempool.
    Unlock()

    // Update informs the mempool that the given txs were committed and can be discarded.
    // NOTE: this should be called *after* block is committed by consensus.
    // NOTE: unsafe; Lock/Unlock must be managed by caller
    Update(
        blockHeight int64,
        blockTxs types.Txs,
        deliverTxResponses []*abci.ResponseDeliverTx,
        newPreFn PreCheckFunc,
        newPostFn PostCheckFunc,
    ) error

    // FlushAppConn flushes the mempool connection to ensure async reqResCb calls are
    // done. E.g. from CheckTx.
    FlushAppConn() error

    // Flush removes all transactions from the mempool and cache
    Flush()

    // TxsAvailable returns a channel which fires once for every height,
    // and only when transactions are available in the mempool.
    // NOTE: the returned channel may be nil if EnableTxsAvailable was not called.
    TxsAvailable() <-chan struct{}

    // EnableTxsAvailable initializes the TxsAvailable channel, ensuring it will
    // trigger once every height when transactions are available.
    EnableTxsAvailable()

    // Size returns the number of transactions in the mempool.
    Size() int

    // TxsBytes returns the total size of all txs in the mempool.
    TxsBytes() int64

    // InitWAL creates a directory for the WAL file and opens a file itself.
    InitWAL()

    // CloseWAL closes and discards the underlying WAL file.
    // Any further writes will not be relayed to disk.
    CloseWAL()
}
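To make the contract concrete, here is a hedged sketch (not from the source) of how a caller outside the package might submit a tx and inspect the application's verdict in the callback; mp and tx are assumed to be in scope:

err := mp.CheckTx(tx, func(res *abci.Response) {
    // Runs on another goroutine once the app's CheckTx response arrives.
    if cr := res.GetCheckTx(); cr != nil && cr.Code != abci.CodeTypeOK {
        // The app rejected the tx; it will not stay in the mempool.
    }
}, mempl.TxInfo{})
if err != nil {
    // e.g. ErrMempoolIsFull, ErrTxTooLarge, ErrTxInCache, ErrPreCheck
}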
mempool.CheckTx
CheckTx is invoked when the node receives a new tx. It checks the tx size, the cache, and so on for validity, and then invokes the app to check the tx further; the supplied callback is invoked once the app responds. The tx is added to the mempool only if it passes all of the validation.
CheckTx is guarded by the mempool lock (proxyMtx).
// It blocks if we're waiting on Update() or Reap().
// cb: A callback from the CheckTx command.
//     It gets called from another goroutine.
// CONTRACT: Either cb will get called, or err returned.
func (mem *CListMempool) CheckTx(tx types.Tx, cb func(*abci.Response), txInfo TxInfo) (err error) {
    mem.proxyMtx.Lock()
    // use defer to unlock mutex because application (*local client*) might panic
    defer mem.proxyMtx.Unlock()

    var (
        memSize  = mem.Size()
        txsBytes = mem.TxsBytes()
        txSize   = len(tx)
    )
    if memSize >= mem.config.Size ||
        int64(txSize)+txsBytes > mem.config.MaxTxsBytes {
        return ErrMempoolIsFull{
            memSize, mem.config.Size,
            txsBytes, mem.config.MaxTxsBytes}
    }

    // The size of the corresponding amino-encoded TxMessage
    // can't be larger than the maxMsgSize, otherwise we can't
    // relay it to peers.
    if txSize > mem.config.MaxTxBytes {
        return ErrTxTooLarge{mem.config.MaxTxBytes, txSize}
    }

    if mem.preCheck != nil {
        if err := mem.preCheck(tx); err != nil {
            return ErrPreCheck{err}
        }
    }

    // CACHE
    if !mem.cache.Push(tx) {
        // Record a new sender for a tx we've already seen.
        // Note it's possible a tx is still in the cache but no longer in the mempool
        // (eg. after committing a block, txs are removed from mempool but not cache),
        // so we only record the sender for txs still in the mempool.
        if e, ok := mem.txsMap.Load(txKey(tx)); ok {
            memTx := e.(*clist.CElement).Value.(*mempoolTx)
            memTx.senders.LoadOrStore(txInfo.SenderID, true)
            // TODO: consider punishing peer for dups,
            // its non-trivial since invalid txs can become valid,
            // but they can spam the same tx with little cost to them atm.
        }
        return ErrTxInCache
    }
    // END CACHE

    // WAL
    if mem.wal != nil {
        // TODO: Notify administrators when WAL fails
        _, err := mem.wal.Write([]byte(tx))
        if err != nil {
            mem.logger.Error("Error writing to WAL", "err", err)
        }
        _, err = mem.wal.Write([]byte("\n"))
        if err != nil {
            mem.logger.Error("Error writing to WAL", "err", err)
        }
    }
    // END WAL

    // NOTE: proxyAppConn may error if tx buffer is full
    if err = mem.proxyAppConn.Error(); err != nil {
        return err
    }

    reqRes := mem.proxyAppConn.CheckTxAsync(abci.RequestCheckTx{Tx: tx})
    reqRes.SetCallback(mem.reqResCb(tx, txInfo.SenderID, txInfo.SenderP2PID, cb))

    return nil
}
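For reference, the txKey helper used in the cache branch above derives a fixed-length key from the raw tx bytes; this sketch is consistent with the source tree of this era but is quoted from memory:

// txKey is the key used by the cache and txsMap (requires crypto/sha256).
func txKey(tx types.Tx) [sha256.Size]byte {
    return sha256.Sum256(tx)
}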
Call stack of addTx:
github.com/tendermint/tendermint/mempool.(*CListMempool).addTx at clist_mempool.go:342
github.com/tendermint/tendermint/mempool.(*CListMempool).resCbFirstTime at clist_mempool.go:385
github.com/tendermint/tendermint/mempool.(*CListMempool).reqResCb.func1 at clist_mempool.go:327
github.com/tendermint/tendermint/abci/client.(*ReqRes).SetCallback at client.go:104
github.com/tendermint/tendermint/mempool.(*CListMempool).CheckTx at clist_mempool.go:282
github.com/tendermint/tendermint/mempool.Mempool.CheckTx-fm at mempool.go:18
github.com/hyperledger/burrow/execution.(*Transactor).CheckTxAsyncRaw at transactor.go:237
github.com/hyperledger/burrow/execution.(*Transactor).CheckTxSyncRaw at transactor.go:206
github.com/hyperledger/burrow/execution.(*Transactor).CheckTxSync at transactor.go:134
github.com/hyperledger/burrow/execution.(*Transactor).BroadcastTxSync at transactor.go:83
github.com/hyperledger/burrow/rpc/rpctransact.(*transactServer).BroadcastTxSync at transact_server.go:55
github.com/hyperledger/burrow/rpc/rpctransact._Transact_BroadcastTxSync_Handler.func1 at rpctransact.pb.go:505
github.com/hyperledger/burrow/rpc.unaryInterceptor.func1 at grpc.go:104
github.com/hyperledger/burrow/rpc/rpctransact._Transact_BroadcastTxSync_Handler at rpctransact.pb.go:507
google.golang.org/grpc.(*Server).processUnaryRPC at server.go:1024
google.golang.org/grpc.(*Server).handleStream at server.go:1313
google.golang.org/grpc.(*Server).serveStreams.func1.1 at server.go:722
runtime.goexit at asm_amd64.s:1357
- Async stack trace
google.golang.org/grpc.(*Server).serveStreams.func1 at server.go:720
mempool.ReapMaxBytesMaxGas
It is called by consensus to propose a new block. In principle, it reaps as many txs from the mempool as possible, subject to the byte and gas limits.
Note that the mempool is locked during the invocation.
func (mem *CListMempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
    mem.proxyMtx.Lock()
    defer mem.proxyMtx.Unlock()

    for atomic.LoadInt32(&mem.rechecking) > 0 {
        // TODO: Something better?
        time.Sleep(time.Millisecond * 10)
    }

    var totalBytes int64
    var totalGas int64
    // TODO: we will get a performance boost if we have a good estimate of avg
    // size per tx, and set the initial capacity based off of that.
    // txs := make([]types.Tx, 0, tmmath.MinInt(mem.txs.Len(), max/mem.avgTxSize))
    txs := make([]types.Tx, 0, mem.txs.Len())
    for e := mem.txs.Front(); e != nil; e = e.Next() {
        memTx := e.Value.(*mempoolTx)
        // Check total size requirement
        aminoOverhead := types.ComputeAminoOverhead(memTx.tx, 1)
        if maxBytes > -1 && totalBytes+int64(len(memTx.tx))+aminoOverhead > maxBytes {
            return txs
        }
        totalBytes += int64(len(memTx.tx)) + aminoOverhead
        // Check total gas requirement.
        // If maxGas is negative, skip this check.
        // Since newTotalGas < maxGas, which
        // must be non-negative, it follows that this won't overflow.
        newTotalGas := totalGas + memTx.gasWanted
        if maxGas > -1 && newTotalGas > maxGas {
            return txs
        }
        totalGas = newTotalGas
        txs = append(txs, memTx.tx)
    }
    return txs
}
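The -1 sentinels described in the interface comments give the caller an escape hatch; illustrative calls (the limits here are arbitrary):

all := mem.ReapMaxBytesMaxGas(-1, -1)            // no caps: take everything available
some := mem.ReapMaxBytesMaxGas(1024*1024, 10000) // <=1MB incl. amino overhead, <=10k gas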
mempool.Update
Invoked by state.Commit when a new block is committed. Txs that have been committed must be removed from mempool.txs and txsMap.
Commit invokes mempool.Lock() to lock the mempool around the update.
Note: if the config enables recheck, the remaining txs are rechecked at this point.
github.com/tendermint/tendermint/mempool.(*CListMempool).Update at clist_mempool.go:547
github.com/tendermint/tendermint/state.(*BlockExecutor).Commit at execution.go:231
github.com/tendermint/tendermint/state.(*BlockExecutor).ApplyBlock at execution.go:167
github.com/tendermint/tendermint/consensus.(*State).finalizeCommit at state.go:1431
github.com/tendermint/tendermint/consensus.(*State).tryFinalizeCommit at state.go:1350
github.com/tendermint/tendermint/consensus.(*State).enterCommit.func1 at state.go:1285
github.com/tendermint/tendermint/consensus.(*State).enterCommit at state.go:1322
github.com/tendermint/tendermint/consensus.(*State).addVote at state.go:1819
github.com/tendermint/tendermint/consensus.(*State).tryAddVote at state.go:1642
github.com/tendermint/tendermint/consensus.(*State).handleMsg at state.go:709
github.com/tendermint/tendermint/consensus.(*State).receiveRoutine at state.go:660
runtime.goexit at asm_amd64.s:1357
- Async stack trace
github.com/tendermint/tendermint/consensus.(*State).OnStart at state.go:335
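Since Update itself is unsafe, the caller owns the lock. A hedged sketch of the contract the interface implies, simplified from what BlockExecutor.Commit does (height, block, deliverTxResponses, preCheck and postCheck are assumed to be in scope):

mem.Lock()
defer mem.Unlock()

// Flush outstanding CheckTx responses first (see FlushAppConn's doc comment).
if err := mem.FlushAppConn(); err != nil {
    return err
}

// ... commit the block against the app on the consensus connection ...

// Tell the mempool which txs were committed so it can discard them.
return mem.Update(height, block.Txs, deliverTxResponses, preCheck, postCheck)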
mempool.Recheck and known issues
When recheck is enabled, the mempool rechecks the remaining txs once a new block is committed. This is a chance to remove txs that were valid before the block was committed but are no longer valid afterwards; in effect, it replays all remaining txs on the proxyApp. This feature is also important for Burrow, since it lets the Burrow checker catch up with the remaining txs: when a new block is committed, the state of the Burrow checker is reset to the latest block, but there may still be txs in the mempool, so they all have to be replayed.
Even so, there is a race condition: a new tx might invoke SignMempool against the checker state that was just reset, and therefore pick up a wrong sequence number.
state.ApplyBlock
-> state.Commit
-> mempool.Update
-> mempool.recheckTxs
-> proxyApp.CheckTxAsync
-> mempool.resCbRecheck
-> mempool.notifyTxsAvailable()
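Conceptually, the recheck loop replays every remaining tx against the app. A simplified sketch follows; the real recheckTxs also maintains a recheck cursor, and the Recheck type flag assumes an ABCI version that distinguishes new txs from rechecks:

for e := mem.txs.Front(); e != nil; e = e.Next() {
    memTx := e.Value.(*mempoolTx)
    // Responses arrive asynchronously and are routed to resCbRecheck.
    mem.proxyAppConn.CheckTxAsync(abci.RequestCheckTx{
        Tx:   memTx.tx,
        Type: abci.CheckTxType_Recheck,
    })
}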
mempool Cache
A mechanism to protect against replay attacks.
A tx is pushed into the cache when CheckTx is invoked, and removed from the cache if it turns out to be invalid, so that it can be resubmitted later. Note that committing a block removes txs from the mempool but not from the cache (see the comment inside CheckTx above); old entries are only evicted once the bounded cache fills up. See mempool.Update() for more details.
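The cache behind mem.cache.Push/Remove follows a small contract; a hedged sketch matching the calls seen in CheckTx (the concrete type in the source tree is a bounded, LRU-style map cache):

// Push returns false if the tx is already cached; Remove evicts a tx so an
// invalid tx can be resubmitted later; Reset clears the cache entirely.
type txCache interface {
    Reset()
    Push(tx types.Tx) bool
    Remove(tx types.Tx)
}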
mempool txs & txsMap
Core data structures recording the txs; txsMap provides fast lookup of txs.
The lifecycle of txs here is similar to that of the cache.
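Inferred from the CheckTx excerpt above (a hedged reconstruction, other fields omitted): txs keeps arrival/gossip order while txsMap indexes list elements by tx key:

type CListMempool struct {
    txs    *clist.CList // concurrent linked list of *mempoolTx, in arrival order
    txsMap sync.Map     // txKey(tx) -> *clist.CElement, O(1) lookup used by CheckTx
    // ... config, mutexes, cache, WAL, callbacks ...
}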
mempool Reactor
It handles mempool tx broadcasting amongst peers.
type Reactor struct {
    p2p.BaseReactor
    config  *cfg.MempoolConfig
    mempool *CListMempool
    ids     *mempoolIDs
}

type mempoolIDs struct {
    mtx       sync.RWMutex
    peerMap   map[p2p.ID]uint16
    nextID    uint16              // assumes that a node will never have over 65536 active peers
    activeIDs map[uint16]struct{} // used to check if a given peerID key is used, the value doesn't matter
}
The mempoolReactor is managed by the Switch and registers a single channel:
// GetChannels implements Reactor.
// It returns the list of channels for this reactor.
func (memR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
    return []*p2p.ChannelDescriptor{
        {
            ID:       MempoolChannel,
            Priority: 5,
        },
    }
}
- It receives txs from the Switch and adds them to the mempool by invoking mempool.CheckTx()
- It starts a goroutine to broadcast txs to a peer whenever a new peer is added
One thing to note: the mempool reactor has to track the sender of each tx so that it does not gossip the tx back to its original sender, as sketched below.
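A hedged sketch of that sender check inside the reactor's per-peer broadcast loop (simplified from broadcastTxRoutine; the senders set is the one populated in CheckTx above, and the amino codec call mirrors the reactor's wire format):

peerID := memR.ids.GetForPeer(peer)
// Skip peers recorded as senders of this tx; gossiping it back is wasted work.
if _, isSender := memTx.senders.Load(peerID); !isSender {
    msgBytes := cdc.MustMarshalBinaryBare(&TxMessage{Tx: memTx.tx})
    peer.Send(MempoolChannel, msgBytes)
}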