Fabric 2.0, Gossip Service

Peers leverage gossip to broadcast ledger and channel data in a scalable fashion. Gossip messaging is continuous, and each peer on a channel is constantly receiving current and consistent ledger data from multiple peers.

Gossiped messages are digitally signed.

Main functions

  • Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline.
  • Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.
  • Brings newly connected peers up to speed by allowing peer-to-peer state transfer of ledger data.

Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding these messages to a number of randomly selected peers on the channel, where this number is a configurable constant. Peers can also exercise a pull mechanism rather than waiting for delivery of a message. This cycle repeats, with the result of channel membership, ledger and state information continually being kept current and in sync. For dissemination of new blocks, the leader peer on the channel pulls the data from the ordering service and initiates gossip dissemination to peers in its own organization.
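
A rough sketch of that push phase, assuming hypothetical Peer, selectRandomPeers, and forwardMessage helpers (not Fabric's implementation): each received message is re-sent to a configurable number of randomly chosen peers.

package main

import (
	"fmt"
	"math/rand"
)

// Peer is a stand-in for a remote channel member (hypothetical type).
type Peer struct{ Endpoint string }

// selectRandomPeers picks up to n distinct peers at random, mirroring the
// configurable fan-out described above.
func selectRandomPeers(peers []Peer, n int) []Peer {
	if n > len(peers) {
		n = len(peers)
	}
	out := make([]Peer, 0, n)
	for _, i := range rand.Perm(len(peers))[:n] {
		out = append(out, peers[i])
	}
	return out
}

// forwardMessage pushes a received message to a random subset of peers;
// in Fabric this would be a comm.Send of a signed gossip message.
func forwardMessage(msg string, peers []Peer, fanOut int) {
	for _, p := range selectRandomPeers(peers, fanOut) {
		fmt.Printf("forwarding %q to %s\n", msg, p.Endpoint)
	}
}

func main() {
	peers := []Peer{{"peer0:7051"}, {"peer1:7051"}, {"peer2:7051"}, {"peer3:7051"}}
	forwardMessage("data block #42", peers, 2) // fan-out of 2: the configurable constant
}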

Basic

gossipSvc is the interface, implemented by the gossip Node.

type GossipService struct {
	gossipSvc
	privateHandlers map[string]privateHandler
	chains          map[string]state.GossipStateProvider
	leaderElection  map[string]election.LeaderElectionService
	deliveryService map[string]deliverservice.DeliverService
	deliveryFactory DeliveryServiceFactory
	lock            sync.RWMutex
	mcs             api.MessageCryptoService
	peerIdentity    []byte
	secAdv          api.SecurityAdvisor
	metrics         *gossipmetrics.GossipMetrics
	serviceConfig   *ServiceConfig
	privdataConfig  *gossipprivdata.PrivdataConfig
}


// Node is a member of a gossip network
type Node struct {
	selfIdentity          api.PeerIdentityType
	includeIdentityPeriod time.Time
	certStore             *certStore
	idMapper              identity.Mapper
	presumedDead          chan common.PKIidType
	disc                  discovery.Discovery
	comm                  comm.Comm
	selfOrg               api.OrgIdentityType
	*comm.ChannelDeMultiplexer
	logger            util.Logger
	stopSignal        *sync.WaitGroup
	conf              *Config
	toDieChan         chan struct{}
	stopFlag          int32
	emitter           batchingEmitter
	discAdapter       *discoveryAdapter
	secAdvisor        api.SecurityAdvisor
	chanState         *channelState
	disSecAdap        *discoverySecurityAdapter
	mcs               api.MessageCryptoService
	stateInfoMsgStore msgstore.MessageStore
	certPuller        pull.Mediator
	gossipMetrics     *metrics.GossipMetrics
}


// New creates the gossip service.
func New(
	peerIdentity identity.SignerSerializer,
	gossipMetrics *gossipmetrics.GossipMetrics,
	endpoint string,
	s *grpc.Server,
	mcs api.MessageCryptoService,
	secAdv api.SecurityAdvisor,
	secureDialOpts api.PeerSecureDialOpts,
	credSupport *corecomm.CredentialSupport,
	deliverGRPCClient *corecomm.GRPCClient,
	gossipConfig *gossip.Config,
	serviceConfig *ServiceConfig,
	privdataConfig *gossipprivdata.PrivdataConfig,
	deliverServiceConfig *deliverservice.DeliverServiceConfig,
) (*GossipService, error) {
	serializedIdentity, err := peerIdentity.Serialize()
	if err != nil {
		return nil, err
	}

	logger.Infof("Initialize gossip with endpoint %s", endpoint)

	gossipComponent := gossip.New(
		gossipConfig,
		s,
		secAdv,
		mcs,
		serializedIdentity,
		secureDialOpts,
		gossipMetrics,
	)

	return &GossipService{
		gossipSvc:       gossipComponent,
		mcs:             mcs,
		privateHandlers: make(map[string]privateHandler),
		chains:          make(map[string]state.GossipStateProvider),
		leaderElection:  make(map[string]election.LeaderElectionService),
		deliveryService: make(map[string]deliverservice.DeliverService),
		deliveryFactory: &deliveryFactoryImpl{
			signer:               peerIdentity,
			credentialSupport:    credSupport,
			deliverGRPCClient:    deliverGRPCClient,
			deliverServiceConfig: deliverServiceConfig,
		},
		peerIdentity:   serializedIdentity,
		secAdv:         secAdv,
		metrics:        gossipMetrics,
		serviceConfig:  serviceConfig,
		privdataConfig: privdataConfig,
	}, nil
}

Gossip Message

Message Type

	//	*GossipMessage_AliveMsg
	//	*GossipMessage_MemReq		Membership Request, handled by Discovery
	//	*GossipMessage_MemRes		Membership Response, handled by Discovery
	//	*GossipMessage_DataMsg		Data Block
	//	*GossipMessage_Hello		BlockPuller Hello
	//	*GossipMessage_DataDig		BlockPuller Digest
	//	*GossipMessage_DataReq		BlockPuller DataReq
	//	*GossipMessage_DataUpdate	BlockPuller DataUpdate
	//	*GossipMessage_Empty
	//	*GossipMessage_Conn
	//	*GossipMessage_StateInfo			Published by Channel periodically
	//	*GossipMessage_StateSnapshot		Response to StateInfoPullReq, handled by Channel
	//	*GossipMessage_StateInfoPullReq		Sent out by Channel, periodically
	//	*GossipMessage_StateRequest			StateProvider antiEntropy Request
	//	*GossipMessage_StateResponse		StateProvider antiEntropy Response
	//	*GossipMessage_LeadershipMsg		Handled by Election
	//	*GossipMessage_PeerIdentity
	//	*GossipMessage_Ack					ACK, only by StateProvider
	//	*GossipMessage_PrivateReq
	//	*GossipMessage_PrivateRes
	//	*GossipMessage_PrivateData

Message Tag

Message Tag is a way to group messages. It indicates who should be interested in a message.

const (
	GossipMessage_UNDEFINED    GossipMessage_Tag = 0
	GossipMessage_EMPTY        GossipMessage_Tag = 1
	GossipMessage_ORG_ONLY     GossipMessage_Tag = 2	// Org message
	GossipMessage_CHAN_ONLY    GossipMessage_Tag = 3	// Channel message
	GossipMessage_CHAN_AND_ORG GossipMessage_Tag = 4
	GossipMessage_CHAN_OR_ORG  GossipMessage_Tag = 5
)
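
A rough illustration of how a tag restricts a message's audience; the Tag type and eligible function below are simplified stand-ins, not the generated protobuf code or Fabric's actual routing checks.

package main

import "fmt"

// Tag mirrors the GossipMessage_Tag values above (simplified re-declaration).
type Tag int

const (
	TagEmpty Tag = iota + 1
	TagOrgOnly
	TagChanOnly
	TagChanAndOrg
	TagChanOrOrg
)

// eligible decides whether a peer should see a message with the given tag,
// based on whether it shares our organization and/or the message's channel.
func eligible(tag Tag, sameOrg, sameChannel bool) bool {
	switch tag {
	case TagEmpty:
		return true
	case TagOrgOnly:
		return sameOrg
	case TagChanOnly:
		return sameChannel
	case TagChanAndOrg:
		return sameChannel && sameOrg
	case TagChanOrOrg:
		return sameChannel || sameOrg
	default:
		return false
	}
}

func main() {
	fmt.Println(eligible(TagChanAndOrg, true, false)) // false: same org, but not in the channel
	fmt.Println(eligible(TagChanOrOrg, true, false))  // true: org membership alone is enough
}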

idMapper

The idMapper holds mappings between the pkiIDs and certificates (identities) of peers.


// Mapper holds mappings between pkiID
// to certificates(identities) of peers
type Mapper interface {
	// Put associates an identity to its given pkiID, and returns an error
	// in case the given pkiID doesn't match the identity
	Put(pkiID common.PKIidType, identity api.PeerIdentityType) error

	// Get returns the identity of a given pkiID, or error if such an identity
	// isn't found
	Get(pkiID common.PKIidType) (api.PeerIdentityType, error)

	// Sign signs a message, returns a signed message on success
	// or an error on failure
	Sign(msg []byte) ([]byte, error)

	// Verify verifies a signed message
	Verify(vkID, signature, message []byte) error

	// GetPKIidOfCert returns the PKI-ID of a certificate
	GetPKIidOfCert(api.PeerIdentityType) common.PKIidType

	// SuspectPeers re-validates all peers that match the given predicate
	SuspectPeers(isSuspected api.PeerSuspector)

	// IdentityInfo returns information known peer identities
	IdentityInfo() api.PeerIdentitySet

	// Stop stops all background computations of the Mapper
	Stop()
}

MessageStoreExpirable

It is a MessageStore constructed with a message-replacing policy and an invalidation trigger. It supports expiration of old messages after msgTTL: during expiration, the external lock is taken first, the expiration callback is invoked, and then the external lock is released. Both the callback and the external lock can be nil.

Basically, it is a tool that provides message validation and a transient store.

This mechanism is used by:

  • Gossip : add StateInfo msg to store
  • Gossip Discovery : add Alive msg to store
  • Gossip Channel : add Block data msg
  • Gossip Channel : add Leadership msg
func NewMessageStoreExpirable(pol common.MessageReplacingPolicy, trigger invalidationTrigger, msgTTL time.Duration, externalLock func(), externalUnlock func(), externalExpire func(interface{})) MessageStore {
	store := newMsgStore(pol, trigger)
	store.msgTTL = msgTTL

	if externalLock != nil {
		store.externalLock = externalLock
	}

	if externalUnlock != nil {
		store.externalUnlock = externalUnlock
	}

	if externalExpire != nil {
		store.expireMsgCallback = externalExpire
	}

	go store.expirationRoutine()
	return store
}
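
A standalone sketch of the expiration behaviour described above — a TTL plus an expiration callback invoked while the external lock is held. expirableStore is a simplified stand-in, not Fabric's msgStore.

package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	msg     interface{}
	created time.Time
}

// expirableStore is a simplified stand-in for the message store described
// above: messages older than ttl are removed, and the expiration callback is
// invoked while the external lock is held.
type expirableStore struct {
	mu       sync.Mutex
	ttl      time.Duration
	messages []entry

	externalLock   func()
	externalUnlock func()
	onExpire       func(interface{})
}

func (s *expirableStore) Add(msg interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.messages = append(s.messages, entry{msg: msg, created: time.Now()})
}

// expireOnce removes all expired messages: the external lock is taken first,
// the callback is invoked for each expired message, and the lock is released.
// In Fabric this work runs periodically in a background expiration routine.
func (s *expirableStore) expireOnce() {
	s.externalLock()
	defer s.externalUnlock()
	s.mu.Lock()
	defer s.mu.Unlock()
	kept := s.messages[:0]
	for _, e := range s.messages {
		if time.Since(e.created) > s.ttl {
			s.onExpire(e.msg)
			continue
		}
		kept = append(kept, e)
	}
	s.messages = kept
}

func main() {
	s := &expirableStore{
		ttl:            50 * time.Millisecond,
		externalLock:   func() {},
		externalUnlock: func() {},
		onExpire:       func(m interface{}) { fmt.Println("expired:", m) },
	}
	s.Add("stateInfo from peer0")
	time.Sleep(100 * time.Millisecond)
	s.expireOnce() // prints "expired: stateInfo from peer0"
}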

PubSub

Internally, Gossip uses a simple PubSub mechanism for coordinating peer connections asynchronously. This PubSub is used internally by several sub-modules of the Gossip Service.

// Publish publishes an item to all subscribers on the topic
func (ps *PubSub) Publish(topic string, item interface{}) error {
	ps.RLock()
	defer ps.RUnlock()
	s, subscribed := ps.subscriptions[topic]
	if !subscribed {
		return errors.New("no subscribers")
	}
	for _, sub := range s.ToArray() {
		c := sub.(*subscription).c
		select {
		case c <- item:
		default: // Not enough room in buffer, continue in order to not block publisher
		}
	}
	return nil
}

// Subscribe returns a subscription to a topic that expires when given TTL passes
func (ps *PubSub) Subscribe(topic string, ttl time.Duration) Subscription {
	sub := &subscription{
		top: topic,
		ttl: ttl,
		c:   make(chan interface{}, subscriptionBuffSize),
	}

	ps.Lock()
	// Add subscription to subscriptions map
	s, exists := ps.subscriptions[topic]
	// If no subscription set for the topic exists, create one
	if !exists {
		s = NewSet()
		ps.subscriptions[topic] = s
	}
	ps.Unlock()

	// Add the subscription
	s.Add(sub)

	// When the timeout expires, remove the subscription
	time.AfterFunc(ttl, func() {
		ps.unSubscribe(sub)
	})
	return sub
}

Puller Mediator

The Puller performs pull-based synchronization with other endpoints. It can be initialized with a configuration that specifies what kind of information to pull.

Pull protocol between peers A and B: A → B: Hello; B → A: Digest ("this is what I have"); A → B: Req ("I need something"); B → A: Res ("here you are").

// Mediator is a component wrap a PullEngine and provides the methods
// it needs to perform pull synchronization.
// The specialization of a pull mediator to a certain type of message is
// done by the configuration, a IdentifierExtractor, IdentifierExtractor
// given at construction, and also hooks that can be registered for each
// type of pullMsgType (hello, digest, req, res).
type Mediator interface {
	// Stop stop the Mediator
	Stop()

	// RegisterMsgHook registers a message hook to a specific type of pull message
	RegisterMsgHook(MsgType, MessageHook)

	// Add adds a GossipMessage to the Mediator
	Add(*protoext.SignedGossipMessage)

	// Remove removes a GossipMessage from the Mediator with a matching digest,
	// if such a message exits
	Remove(digest string)

	// HandleMessage handles a message from some remote peer
	HandleMessage(msg protoext.ReceivedMessage)
}
	go func() {
		for !engine.toDie() {
			time.Sleep(sleepTime)
			if engine.toDie() {
				return
			}
			engine.initiatePull()
		}
	}()

func (engine *PullEngine) initiatePull() {
	engine.lock.Lock()
	defer engine.lock.Unlock()

	engine.acceptDigests()
	for _, peer := range engine.SelectPeers() {
		nonce := engine.newNONCE()
		engine.outgoingNONCES.Add(nonce)
		engine.nonces2peers[nonce] = peer
		engine.peers2nonces[peer] = nonce
		engine.Hello(peer, nonce)
	}

	time.AfterFunc(engine.digestWaitTime, func() {
		engine.processIncomingDigests()
	})
}
  • When GossipService receives a pull message, it performs some verification and passes the message to the Puller for further processing.
  • The Puller sends a Hello message to initiate the protocol; it carries a NONCE that is expected to be echoed back in the Digest message. This is effectively asking "what do you have?"
  • From the Digest messages, the Puller sends a Request to randomly selected destinations. This requests the missing data.
  • The Puller handles the Response and adds the information to its local store: "OK, now I have the data." (A simplified sketch of this exchange follows.)
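
A self-contained sketch of that hello → digest → request → response exchange, using plain string maps in place of Fabric's PullEngine and SignedGossipMessage types.

package main

import "fmt"

// store maps a digest (here just a string key) to the item it identifies.
type store map[string]string

// digests answers a Hello: "this is what I have".
func (s store) digests() []string {
	out := make([]string, 0, len(s))
	for d := range s {
		out = append(out, d)
	}
	return out
}

// missing compares remote digests against the local store: "I need these".
func (s store) missing(remote []string) []string {
	var req []string
	for _, d := range remote {
		if _, ok := s[d]; !ok {
			req = append(req, d)
		}
	}
	return req
}

// respond serves a Request: "here you are".
func (s store) respond(req []string) map[string]string {
	resp := map[string]string{}
	for _, d := range req {
		if item, ok := s[d]; ok {
			resp[d] = item
		}
	}
	return resp
}

func main() {
	a := store{"block1": "payload1"}                       // the puller (A)
	b := store{"block1": "payload1", "block2": "payload2"} // the remote peer (B)

	digest := b.digests()    // A → B: Hello; B → A: Digest
	req := a.missing(digest) // A → B: Req, only for what A is missing
	for d, item := range b.respond(req) { // B → A: Res; A stores the items
		a[d] = item
	}
	fmt.Println(a["block2"]) // A now also holds block2
}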

CertStore and Puller

Puller for certStore is configured as below:

PullMsgType_IDENTITY_MSG
GossipMessage_EMPTY
	conf := pull.Config{
		MsgType:           pg.PullMsgType_IDENTITY_MSG,
		Channel:           []byte(""),
		ID:                g.conf.InternalEndpoint,
		PeerCountToSelect: g.conf.PullPeerNum,
		PullInterval:      g.conf.PullInterval,
		Tag:               pg.GossipMessage_EMPTY,
		PullEngineConfig: algo.PullEngineConfig{
			DigestWaitTime:   g.conf.DigestWaitTime,
			RequestWaitTime:  g.conf.RequestWaitTime,
			ResponseWaitTime: g.conf.ResponseWaitTime,
		},
	}
	...
	selfIDMsg, err := certStore.createIdentityMessage()
	if err != nil {
		certStore.logger.Panicf("Failed creating self identity message: %+v", errors.WithStack(err))
	}
	puller.Add(selfIDMsg)
	puller.RegisterMsgHook(pull.RequestMsgType, func(_ []string, msgs []*protoext.SignedGossipMessage, _ protoext.ReceivedMessage)

CertStore adds selfIDMsg to the local store for other peers to pull from. It also calls RegisterMsgHook to intercept Request messages and update the idMapper of the Gossip Node.

Gossip Node Start

Gossip will start a few background goroutines.

  • syncDiscovery(): triggers Discovery to sync up membership
  • handlePresumedDead(): processes peers that Comm presumes dead
  • Builds a Gossip message selector and runs acceptMessages(). Selector = !(Conn || Empty || PrivateData)

func (g *Node) start() {
	go g.syncDiscovery()
	go g.handlePresumedDead()

	msgSelector := func(msg interface{}) bool {
		gMsg, isGossipMsg := msg.(protoext.ReceivedMessage)
		if !isGossipMsg {
			return false
		}

		isConn := gMsg.GetGossipMessage().GetConn() != nil
		isEmpty := gMsg.GetGossipMessage().GetEmpty() != nil
		isPrivateData := protoext.IsPrivateDataMsg(gMsg.GetGossipMessage().GossipMessage)

		return !(isConn || isEmpty || isPrivateData)
	}

	incMsgs := g.comm.Accept(msgSelector)

	go g.acceptMessages(incMsgs)

	g.logger.Info("Gossip instance", g.conf.ID, "started")
}

Gossip HandleMessage

Channel Message -> Channel.HandleMessage
DiscoveryMessage -> forwardDiscoveryMsg(m)
PullMsg -> g.certStore.handleMessage(m)


func (g *Node) handleMessage(m protoext.ReceivedMessage) {
	...
	if protoext.IsChannelRestricted(msg.GossipMessage) {
		if gc := g.chanState.lookupChannelForMsg(m); gc == nil {
			// If we're not in the channel, we should still forward to peers of our org
			// in case it's a StateInfo message
			if g.IsInMyOrg(discovery.NetworkMember{PKIid: m.GetConnectionInfo().ID}) && protoext.IsStateInfoMsg(msg.GossipMessage) {
				if g.stateInfoMsgStore.Add(msg) {
					g.emitter.Add(&emittedGossipMessage{
						SignedGossipMessage: msg,
						filter:              m.GetConnectionInfo().ID.IsNotSameFilter,
					})
				}
			}
			if !g.toDie() {
				g.logger.Debug("No such channel", msg.Channel, "discarding message", msg)
			}
		} else {
			if protoext.IsLeadershipMsg(m.GetGossipMessage().GossipMessage) {
				if err := g.validateLeadershipMessage(m.GetGossipMessage()); err != nil {
					g.logger.Warningf("Failed validating LeaderElection message: %+v", errors.WithStack(err))
					return
				}
			}
			gc.HandleMessage(m)
		}
		return
	}

	if selectOnlyDiscoveryMessages(m) {
		// It's a membership request, check its self information
		// matches the sender
		if m.GetGossipMessage().GetMemReq() != nil {
			sMsg, err := protoext.EnvelopeToGossipMessage(m.GetGossipMessage().GetMemReq().SelfInformation)
			if err != nil {
				g.logger.Warningf("Got membership request with invalid selfInfo: %+v", errors.WithStack(err))
				return
			}
			if !protoext.IsAliveMsg(sMsg.GossipMessage) {
				g.logger.Warning("Got membership request with selfInfo that isn't an AliveMessage")
				return
			}
			if !bytes.Equal(sMsg.GetAliveMsg().Membership.PkiId, m.GetConnectionInfo().ID) {
				g.logger.Warning("Got membership request with selfInfo that doesn't match the handshake")
				return
			}
		}
		g.forwardDiscoveryMsg(m)
	}

	if protoext.IsPullMsg(msg.GossipMessage) && protoext.GetPullMsgType(msg.GossipMessage) == pg.PullMsgType_IDENTITY_MSG {
		g.certStore.handleMessage(m)
	}
}

Discovery

Discovery starts some background goroutines to probe the network for peers.

// Discovery is the interface that represents a discovery module
type Discovery interface {
	// Lookup returns a network member, or nil if not found
	Lookup(PKIID common.PKIidType) *NetworkMember

	// Self returns this instance's membership information
	Self() NetworkMember

	// UpdateMetadata updates this instance's metadata
	UpdateMetadata([]byte)

	// UpdateEndpoint updates this instance's endpoint
	UpdateEndpoint(string)

	// Stops this instance
	Stop()

	// GetMembership returns the alive members in the view
	GetMembership() []NetworkMember

	// InitiateSync makes the instance ask a given number of peers
	// for their membership information
	InitiateSync(peerNum int)

	// Connect makes this instance to connect to a remote instance
	// The identifier param is a function that can be used to identify
	// the peer, and to assert its PKI-ID, whether its in the peer's org or not,
	// and whether the action was successful or not
	Connect(member NetworkMember, id identifier)
}

...
// NewDiscoveryService returns a new discovery service with the comm module passed and the crypto service passed
func NewDiscoveryService(self NetworkMember, comm CommService, crypt CryptoService, disPol DisclosurePolicy,
	config DiscoveryConfig) Discovery {
	d := &gossipDiscoveryImpl{
		self:             self,
		incTime:          uint64(time.Now().UnixNano()),
		seqNum:           uint64(0),
		deadLastTS:       make(map[string]*timestamp),
		aliveLastTS:      make(map[string]*timestamp),
		id2Member:        make(map[string]*NetworkMember),
		aliveMembership:  util.NewMembershipStore(),
		deadMembership:   util.NewMembershipStore(),
		crypt:            crypt,
		comm:             comm,
		lock:             &sync.RWMutex{},
		toDieChan:        make(chan struct{}),
		logger:           util.GetLogger(util.DiscoveryLogger, self.InternalEndpoint),
		disclosurePolicy: disPol,
		pubsub:           util.NewPubSub(),

		aliveTimeInterval:            config.AliveTimeInterval,
		aliveExpirationTimeout:       config.AliveExpirationTimeout,
		aliveExpirationCheckInterval: config.AliveExpirationCheckInterval,
		reconnectInterval:            config.ReconnectInterval,

		bootstrapPeers: config.BootstrapPeers,
	}

...
	d.msgStore = newAliveMsgStore(d)

	go d.periodicalSendAlive()
	go d.periodicalCheckAlive()
	go d.handleMessages()
	go d.periodicalReconnectToDead()
	go d.handleEvents()

	return d
}

Discovery receives Gossip messages from the Gossip service.

  • periodicalSendAlive: gossips an Alive message which contains this peer's external endpoint address
  • periodicalCheckAlive: checks membership and moves dead members out of the alive member list (see the sketch after this list)
  • handleMessages: processes gossiped alive/membership messages from other peers
  • periodicalReconnectToDead: tries to reconnect to dead peers and send membership requests
  • handleEvents: handles peer disconnects and marks the peers as dead; handles PKI-ID changes and removes the affected peers
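
A rough sketch of the alive-expiration step performed by periodicalCheckAlive, with simplified stand-in types rather than Fabric's discovery structures.

package main

import (
	"fmt"
	"time"
)

// member tracks when a peer was last heard from (simplified stand-in for the
// per-PKI-ID timestamps kept by discovery).
type member struct {
	endpoint  string
	lastAlive time.Time
}

// expireDead moves members whose last Alive message is older than the
// expiration timeout from the alive view to the dead view.
func expireDead(alive, dead map[string]member, timeout time.Duration) {
	for id, m := range alive {
		if time.Since(m.lastAlive) > timeout {
			delete(alive, id)
			dead[id] = m
			fmt.Println("presumed dead:", m.endpoint)
		}
	}
}

func main() {
	alive := map[string]member{
		"pkiid-0": {"peer0:7051", time.Now()},
		"pkiid-1": {"peer1:7051", time.Now().Add(-time.Minute)},
	}
	dead := map[string]member{}
	expireDead(alive, dead, 25*time.Second) // peer1 is moved to the dead view
}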

batchingEmitter

It is the message batching mechanism, with a periodic flush. The batchingEmitter is used for the gossip push/forwarding phase. Messages are added to the batchingEmitter, forwarded periodically T times in batches, and then discarded. If the batchingEmitter's stored message count reaches a certain capacity, that also triggers a message dispatch.

type batchingEmitter interface {
	// Add adds a message to be batched
	Add(interface{})

	// Stop stops the component
	Stop()

	// Size returns the amount of pending messages to be emitted
	Size() int
}
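
A standalone sketch of this batching behaviour — a periodic flush, a capacity-triggered flush, and a per-message forwarding count — with simplified types, not Fabric's batchingEmitter implementation.

package main

import (
	"fmt"
	"sync"
	"time"
)

// batchedMsg wraps a message with the number of forwarding rounds it has left.
type batchedMsg struct {
	msg            interface{}
	iterationsLeft int
}

// emitter is a simplified stand-in for batchingEmitter: messages are emitted
// in batches, either on a periodic tick or when the buffer reaches capacity,
// and each message is forwarded `iterations` times before being discarded.
type emitter struct {
	mu         sync.Mutex
	buf        []*batchedMsg
	capacity   int
	iterations int
	emit       func(batch []interface{})
	stop       chan struct{}
}

func newEmitter(capacity, iterations int, period time.Duration, emit func([]interface{})) *emitter {
	e := &emitter{capacity: capacity, iterations: iterations, emit: emit, stop: make(chan struct{})}
	go func() {
		ticker := time.NewTicker(period)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				e.flush() // periodic flush
			case <-e.stop:
				return
			}
		}
	}()
	return e
}

func (e *emitter) Add(msg interface{}) {
	e.mu.Lock()
	e.buf = append(e.buf, &batchedMsg{msg: msg, iterationsLeft: e.iterations})
	full := len(e.buf) >= e.capacity
	e.mu.Unlock()
	if full {
		e.flush() // capacity reached: dispatch immediately
	}
}

func (e *emitter) flush() {
	e.mu.Lock()
	defer e.mu.Unlock()
	if len(e.buf) == 0 {
		return
	}
	batch := make([]interface{}, 0, len(e.buf))
	kept := e.buf[:0]
	for _, m := range e.buf {
		batch = append(batch, m.msg)
		m.iterationsLeft--
		if m.iterationsLeft > 0 {
			kept = append(kept, m) // keep for the next forwarding round
		}
	}
	e.buf = kept
	e.emit(batch)
}

func main() {
	e := newEmitter(10, 2, 100*time.Millisecond, func(batch []interface{}) {
		fmt.Println("forwarding batch:", batch)
	})
	e.Add("alive msg from peer0")
	time.Sleep(250 * time.Millisecond) // the message is forwarded twice, then discarded
	close(e.stop)
}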

comm.Comm

It is the communication manager for underlying gRPC, managing RPC connections with other peers.

RPC method and stream:

var _Gossip_serviceDesc = grpc.ServiceDesc{
	ServiceName: "gossip.Gossip",
	HandlerType: (*GossipServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "Ping",
			Handler:    _Gossip_Ping_Handler,
		},
	},
	Streams: []grpc.StreamDesc{
		{
			StreamName:    "GossipStream",
			Handler:       _Gossip_GossipStream_Handler,
			ServerStreams: true,
			ClientStreams: true,
		},
	},
	Metadata: "gossip/message.proto",
}

Interface:


// Comm is an object that enables to communicate with other peers
// that also embed a CommModule.
type Comm interface {

	// GetPKIid returns this instance's PKI id
	GetPKIid() common.PKIidType

	// Send sends a message to remote peers asynchronously
	Send(msg *protoext.SignedGossipMessage, peers ...*RemotePeer)

	// SendWithAck sends a message to remote peers, waiting for acknowledgement from minAck of them, or until a certain timeout expires
	SendWithAck(msg *protoext.SignedGossipMessage, timeout time.Duration, minAck int, peers ...*RemotePeer) AggregatedSendResult

	// Probe probes a remote node and returns nil if its responsive,
	// and an error if it's not.
	Probe(peer *RemotePeer) error

	// Handshake authenticates a remote peer and returns
	// (its identity, nil) on success and (nil, error)
	Handshake(peer *RemotePeer) (api.PeerIdentityType, error)

	// Accept returns a dedicated read-only channel for messages sent by other nodes that match a certain predicate.
	// Each message from the channel can be used to send a reply back to the sender
	Accept(common.MessageAcceptor) <-chan protoext.ReceivedMessage

	// PresumedDead returns a read-only channel for node endpoints that are suspected to be offline
	PresumedDead() <-chan common.PKIidType

	// IdentitySwitch returns a read-only channel about identity change events
	IdentitySwitch() <-chan common.PKIidType

	// CloseConn closes a connection to a certain endpoint
	CloseConn(peer *RemotePeer)

	// Stop stops the module
	Stop()
}

channelState

Maintains the Join/Leave state of a channel.

A GossipChannel is created, initialized, and put into the channelState map when a channel is created.

Call stack

github.com/hyperledger/fabric/gossip/gossip/channel.NewGossipChannel at channel.go:297
github.com/hyperledger/fabric/gossip/gossip.(*channelState).joinChannel at chanstate.go:108
github.com/hyperledger/fabric/gossip/gossip.(*Node).JoinChan at gossip_impl.go:188
github.com/hyperledger/fabric/gossip/service.(*GossipService).updateAnchors at gossip_service.go:425
github.com/hyperledger/fabric/gossip/service.(*configEventer).ProcessConfigUpdate at eventer.go:81
github.com/hyperledger/fabric/core/peer.(*Peer).createChannel.func1 at peer.go:267
github.com/hyperledger/fabric/common/channelconfig.(*BundleSource).Update at bundlesource.go:46
github.com/hyperledger/fabric/common/channelconfig.NewBundleSource at bundlesource.go:38
github.com/hyperledger/fabric/core/peer.(*Peer).createChannel at peer.go:323
type channelState struct {
	stopping int32
	sync.RWMutex
	channels map[string]channel.GossipChannel
	g        *Node
}

// GossipChannel defines an object that deals with all channel-related messages
type GossipChannel interface {
	// Self returns a StateInfoMessage about the peer
	Self() *protoext.SignedGossipMessage

	// GetPeers returns a list of peers with metadata as published by them
	GetPeers() []discovery.NetworkMember

	// PeerFilter receives a SubChannelSelectionCriteria and returns a RoutingFilter that selects
	// only peer identities that match the given criteria
	PeerFilter(api.SubChannelSelectionCriteria) filter.RoutingFilter

	// IsMemberInChan checks whether the given member is eligible to be in the channel
	IsMemberInChan(member discovery.NetworkMember) bool

	// UpdateLedgerHeight updates the ledger height the peer
	// publishes to other peers in the channel
	UpdateLedgerHeight(height uint64)

	// UpdateChaincodes updates the chaincodes the peer publishes
	// to other peers in the channel
	UpdateChaincodes(chaincode []*proto.Chaincode)

	// IsOrgInChannel returns whether the given organization is in the channel
	IsOrgInChannel(membersOrg api.OrgIdentityType) bool

	// EligibleForChannel returns whether the given member should get blocks
	// for this channel
	EligibleForChannel(member discovery.NetworkMember) bool

	// HandleMessage processes a message sent by a remote peer
	HandleMessage(protoext.ReceivedMessage)

	// AddToMsgStore adds a given GossipMessage to the message store
	AddToMsgStore(msg *protoext.SignedGossipMessage)

	// ConfigureChannel (re)configures the list of organizations
	// that are eligible to be in the channel
	ConfigureChannel(joinMsg api.JoinChannelMessage)

	// LeaveChannel makes the peer leave the channel
	LeaveChannel()

	// Stop stops the channel's activity
	Stop()
}

GossipChannel

One GossipChannel instance is created for every joined channel.

Background Goroutines
	// Periodically publish state info
	go gc.periodicalInvocation(gc.publishStateInfo, gc.stateInfoPublishScheduler.C)
	// Periodically request state info
	go gc.periodicalInvocation(gc.requestStateInfo, gc.stateInfoRequestScheduler.C)

	ticker := time.NewTicker(gc.GetConf().TimeForMembershipTracker)
	gc.membershipTracker = &membershipTracker{
		getPeersToTrack: gc.GetPeers,
		report:          gc.reportMembershipChanges,
		stopChan:        make(chan struct{}, 1),
		tickerChannel:   ticker.C,
		metrics:         metrics,
		chainID:         channelID,
	}

	go gc.membershipTracker.trackMembershipChanges()
  • publishStateInfo: gossips my state info periodically
  • requestStateInfo: requests state info from some random peers periodically
  • trackMembershipChanges: periodically checks which peers in the channel are offline and which are online, and reports any changes (a simplified sketch of the periodic-invocation pattern follows)
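
A minimal sketch of the periodic-invocation pattern these goroutines rely on (simplified; not the actual gossipChannel code).

package main

import (
	"fmt"
	"time"
)

// periodicalInvocation calls fn every time the scheduler channel fires, until
// stop is closed — the same pattern used for publishStateInfo and
// requestStateInfo above.
func periodicalInvocation(fn func(), c <-chan time.Time, stop <-chan struct{}) {
	for {
		select {
		case <-c:
			fn()
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	go periodicalInvocation(func() { fmt.Println("publish state info") }, ticker.C, stop)
	time.Sleep(200 * time.Millisecond)
	close(stop)
}
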
Stop Channel
// Stop stop the channel operations
func (gc *gossipChannel) Stop() {
	close(gc.stopChan)
	close(gc.membershipTracker.stopChan)
	gc.blocksPuller.Stop()
	gc.stateInfoPublishScheduler.Stop()
	gc.stateInfoRequestScheduler.Stop()
	gc.leaderMsgStore.Stop()
	gc.stateInfoMsgStore.Stop()
	gc.blockMsgStore.Stop()
}
BlockPuller

It is not entirely obvious at first what the Block Puller adds here.
GossipChannel creates and starts the block puller. The block puller tries to pull Gossip DataMsg from other peers.

A Gossip DataMsg is added to the puller's local store when a new data block is sent to or received from a peer. This local store is managed by the expiry mechanism of the GossipChannel.

GossipChannel.HandleMessage will call gc.blocksPuller.HandleMessage(msg) for the Pull messages.

Call stack of creation:

github.com/hyperledger/fabric/gossip/gossip/algo.NewPullEngineWithFilter at pull.go:120
github.com/hyperledger/fabric/gossip/gossip/pull.NewPullMediator at pullstore.go:153
github.com/hyperledger/fabric/gossip/gossip/channel.(*gossipChannel).createBlockPuller at channel.go:464
github.com/hyperledger/fabric/gossip/gossip/channel.NewGossipChannel at channel.go:214
github.com/hyperledger/fabric/gossip/gossip.(*channelState).joinChannel at chanstate.go:108
github.com/hyperledger/fabric/gossip/gossip.(*Node).JoinChan at gossip_impl.go:188
github.com/hyperledger/fabric/gossip/service.(*GossipService).updateAnchors at gossip_service.go:425
github.com/hyperledger/fabric/gossip/service.(*configEventer).ProcessConfigUpdate at eventer.go:81
github.com/hyperledger/fabric/core/peer.(*Peer).createChannel.func1 at peer.go:267
github.com/hyperledger/fabric/common/channelconfig.(*BundleSource).Update at bundlesource.go:46
github.com/hyperledger/fabric/common/channelconfig.NewBundleSource at bundlesource.go:38
github.com/hyperledger/fabric/core/peer.(*Peer).createChannel at peer.go:323
github.com/hyperledger/fabric/core/peer.(*Peer).Initialize at peer.go:513

Puller is initialized with:

PullMsgType_BLOCK_MSG
GossipMessage_CHAN_AND_ORG
	conf := pull.Config{
		MsgType:           proto.PullMsgType_BLOCK_MSG,
		Channel:           []byte(gc.chainID),
		ID:                gc.GetConf().ID,
		PeerCountToSelect: gc.GetConf().PullPeerNum,
		PullInterval:      gc.GetConf().PullInterval,
		Tag:               proto.GossipMessage_CHAN_AND_ORG,
		PullEngineConfig: algo.PullEngineConfig{
			DigestWaitTime:   gc.GetConf().DigestWaitTime,
			RequestWaitTime:  gc.GetConf().RequestWaitTime,
			ResponseWaitTime: gc.GetConf().ResponseWaitTime,
		},
	}

Per Channel Service

Every joined channel needs to be initialized with the Gossip service.

privateHandlers map[string]privateHandler
chains          map[string]state.GossipStateProvider
leaderElection  map[string]election.LeaderElectionService
deliveryService map[string]deliverservice.DeliverService
// InitializeChannel allocates the state provider and should be invoked once per channel per execution
func (g *GossipService) InitializeChannel(channelID string, ordererSource *orderers.ConnectionSource, store *transientstore.Store, support Support) {
	g.lock.Lock()
	defer g.lock.Unlock()
	// Initialize new state provider for given committer
	logger.Debug("Creating state provider for channelID", channelID)
	servicesAdapter := &state.ServicesMediator{GossipAdapter: g, MCSAdapter: g.mcs}

	// Initialize private data fetcher
	dataRetriever := gossipprivdata.NewDataRetriever(store, support.Committer)
	collectionAccessFactory := gossipprivdata.NewCollectionAccessFactory(support.IdDeserializeFactory)
	fetcher := gossipprivdata.NewPuller(g.metrics.PrivdataMetrics, support.CollectionStore, g.gossipSvc, dataRetriever,
		collectionAccessFactory, channelID, g.serviceConfig.BtlPullMargin)

	coordinatorConfig := gossipprivdata.CoordinatorConfig{
		TransientBlockRetention:        g.serviceConfig.TransientstoreMaxBlockRetention,
		PullRetryThreshold:             g.serviceConfig.PvtDataPullRetryThreshold,
		SkipPullingInvalidTransactions: g.serviceConfig.SkipPullingInvalidTransactionsDuringCommit,
	}
	selfSignedData := g.createSelfSignedData()
	mspID := string(g.secAdv.OrgByPeerIdentity(selfSignedData.Identity))
	coordinator := gossipprivdata.NewCoordinator(mspID, gossipprivdata.Support{
		ChainID:            channelID,
		CollectionStore:    support.CollectionStore,
		Validator:          support.Validator,
		Committer:          support.Committer,
		Fetcher:            fetcher,
		CapabilityProvider: support.CapabilityProvider,
	}, store, selfSignedData, g.metrics.PrivdataMetrics, coordinatorConfig,
		support.IdDeserializeFactory)

	var reconciler gossipprivdata.PvtDataReconciler

	if g.privdataConfig.ReconciliationEnabled {
		reconciler = gossipprivdata.NewReconciler(channelID, g.metrics.PrivdataMetrics,
			support.Committer, fetcher, g.privdataConfig)
	} else {
		reconciler = &gossipprivdata.NoOpReconciler{}
	}

	pushAckTimeout := g.serviceConfig.PvtDataPushAckTimeout
	g.privateHandlers[channelID] = privateHandler{
		support:     support,
		coordinator: coordinator,
		distributor: gossipprivdata.NewDistributor(channelID, g, collectionAccessFactory, g.metrics.PrivdataMetrics, pushAckTimeout),
		reconciler:  reconciler,
	}
	g.privateHandlers[channelID].reconciler.Start()

	blockingMode := !g.serviceConfig.NonBlockingCommitMode
	stateConfig := state.GlobalConfig()
	g.chains[channelID] = state.NewGossipStateProvider(
		channelID,
		servicesAdapter,
		coordinator,
		g.metrics.StateMetrics,
		blockingMode,
		stateConfig)
	if g.deliveryService[channelID] == nil {
		g.deliveryService[channelID] = g.deliveryFactory.Service(g, ordererSource, g.mcs, g.serviceConfig.OrgLeader)
	}

	// Delivery service might be nil only if it was not able to get connected
	// to the ordering service
	if g.deliveryService[channelID] != nil {
		// Parameters:
		//              - peer.gossip.useLeaderElection
		//              - peer.gossip.orgLeader
		//
		// are mutual exclusive, setting both to true is not defined, hence
		// peer will panic and terminate
		leaderElection := g.serviceConfig.UseLeaderElection
		isStaticOrgLeader := g.serviceConfig.OrgLeader

		if leaderElection && isStaticOrgLeader {
			logger.Panic("Setting both orgLeader and useLeaderElection to true isn't supported, aborting execution")
		}

		if leaderElection {
			logger.Debug("Delivery uses dynamic leader election mechanism, channel", channelID)
			g.leaderElection[channelID] = g.newLeaderElectionComponent(channelID, g.onStatusChangeFactory(channelID,
				support.Committer), g.metrics.ElectionMetrics)
		} else if isStaticOrgLeader {
			logger.Debug("This peer is configured to connect to ordering service for blocks delivery, channel", channelID)
			g.deliveryService[channelID].StartDeliverForChannel(channelID, support.Committer, func() {})
		} else {
			logger.Debug("This peer is not configured to connect to ordering service for blocks delivery, channel", channelID)
		}
	} else {
		logger.Warning("Delivery client is down won't be able to pull blocks for chain", channelID)
	}

}
(Flowchart) InitializeChannel flow: Start → make privateHandler and start Reconciler → make Gossip State Provider → make deliverClient → leader election or static leader? → static: StartDeliverClient → End; election: start LeaderElection → beLeader → StartDeliverClient → End.

Private Handler

Private Data allows a defined subset of organizations on a channel the ability to endorse, commit, or query private data without having to create a separate channel.

(Figure: Private Data in Fabric)

Initialization

A Reconciler and a Coordinator are created to support the private data handler (a NoOpReconciler is used when reconciliation is disabled).

	var reconciler gossipprivdata.PvtDataReconciler

	if g.privdataConfig.ReconciliationEnabled {
		reconciler = gossipprivdata.NewReconciler(channelID, g.metrics.PrivdataMetrics,
			support.Committer, fetcher, g.privdataConfig)
	} else {
		reconciler = &gossipprivdata.NoOpReconciler{}
	}

	pushAckTimeout := g.serviceConfig.PvtDataPushAckTimeout
	g.privateHandlers[channelID] = privateHandler{
		support:     support,
		coordinator: coordinator,
		distributor: gossipprivdata.NewDistributor(channelID, g, collectionAccessFactory, g.metrics.PrivdataMetrics, pushAckTimeout),
		reconciler:  reconciler,
	}
	g.privateHandlers[channelID].reconciler.Start()

Coordinator

It helps with ledger operations. It is used to persist private data into the transient store via StorePvtData() when the endorser starts to simulate a proposal.

// Coordinator orchestrates the flow of the new
// blocks arrival and in flight transient data, responsible
// to complete missing parts of transient data for given block.
type Coordinator interface {
	// StoreBlock deliver new block with underlined private data
	// returns missing transaction ids
	StoreBlock(block *common.Block, data util.PvtDataCollections) error

	// StorePvtData used to persist private data into transient store
	StorePvtData(txid string, privData *protostransientstore.TxPvtReadWriteSetWithConfigInfo, blckHeight uint64) error

	// GetPvtDataAndBlockByNum gets block by number and also returns all related private data
	// that requesting peer is eligible for.
	// The order of private data in slice of PvtDataCollections doesn't imply the order of
	// transactions in the block related to these private data, to get the correct placement
	// need to read TxPvtData.SeqInBlock field
	GetPvtDataAndBlockByNum(seqNum uint64, peerAuth protoutil.SignedData) (*common.Block, util.PvtDataCollections, error)

	// Get recent block sequence number
	LedgerHeight() (uint64, error)

	// Close coordinator, shuts down coordinator service
	Close()
}

Private Data Puller

A puller is created to listen for PrivateDataMsg from the Gossip network.

  • handleRequest: builds a response message and sends it back
  • handleResponse: publishes the hashes of the elements in the received message

PvtDataReconciler

The Reconciler periodically pulls the missing private data. It sends requests to the Gossip network, subscribes to the hashes of the requests via the PubSub service, and then waits for responses from the Puller. In the end, the private data is committed via CommitPvtDataOfOldBlocks().

type Reconciler struct {
	channel                string
	metrics                *metrics.PrivdataMetrics
	ReconcileSleepInterval time.Duration
	ReconcileBatchSize     int
	stopChan               chan struct{}
	startOnce              sync.Once
	stopOnce               sync.Once
	ReconciliationFetcher
	committer.Committer
}

func (r *Reconciler) run() {
	for {
		select {
		case <-r.stopChan:
			return
		case <-time.After(r.ReconcileSleepInterval):
			logger.Debug("Start reconcile missing private info")
			if err := r.reconcile(); err != nil {
				logger.Error("Failed to reconcile missing private info, error: ", err.Error())
				break
			}
		}
	}
}

PvtDataDistributor

It is used to distribute private data when the endorser starts to simulate a proposal.

// PvtDataDistributor interface to defines API of distributing private data
type PvtDataDistributor interface {
	// Distribute broadcast reliably private data read write set based on policies
	Distribute(txID string, privData *transientstore.TxPvtReadWriteSetWithConfigInfo, blkHt uint64) error
}

There is an ACK mechanism in PvtDataDistributor: any distributed private data message requires an ACK response from the StateProvider when it processes a Gossip message carrying private data.
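
A rough sketch of such an ack-collection step, in the spirit of the SendWithAck contract shown earlier; waitForAcks is a hypothetical helper, not Fabric's implementation.

package main

import (
	"fmt"
	"time"
)

// waitForAcks blocks until at least minAck acknowledgements have been received
// on acks, or the timeout expires; it reports whether the distribution can be
// considered successful.
func waitForAcks(acks <-chan string, minAck int, timeout time.Duration) bool {
	deadline := time.After(timeout)
	got := 0
	for {
		select {
		case peer := <-acks:
			got++
			fmt.Println("ACK from", peer)
			if got >= minAck {
				return true
			}
		case <-deadline:
			return false
		}
	}
}

func main() {
	acks := make(chan string, 3)
	// Simulated ACKs from peers that processed the private data message.
	acks <- "peer0:7051"
	acks <- "peer1:7051"
	fmt.Println("distributed:", waitForAcks(acks, 2, time.Second))
}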

State Provider

GossipStateProvider is the interface for acquiring sequences of ledger blocks. It is capable of filling in missing blocks by running state replication and sending requests for the missing blocks to other nodes.

StateProvider sets up filters on Gossip and Comm so that it only accepts messages that belong to the specific channel ID:

  • Data GossipMessage
  • State Change
  • PrivateData GossipMessage

StateProvider starts some goroutines in the end:

// Listen for incoming communication
go s.receiveAndQueueGossipMessages(gossipChan)
go s.receiveAndDispatchDirectMessages(commChan)
// Deliver in order messages into the incoming channel
go s.deliverPayloads()
if s.config.StateEnabled {
	// Execute anti entropy to fill missing gaps
	go s.antiEntropy()
}
// Taking care of state request messages
go s.processStateRequests()
  • receiveAndQueueGossipMessages: receives payload messages and pushes them to a local buffer
  • receiveAndDispatchDirectMessages: receives state request/response messages and processes them further; receives private data and stores it to the transient store
  • deliverPayloads: receives blocks and private data and commits them to the ledger; updates the other peers with the new block height via the Gossip network
  • antiEntropy: checks whether our height is lower than the maximum height known across the peers. If so, it starts to fix the gap by sending a State Request message via the Gossip network. The State Response is handled and dispatched by receiveAndDispatchDirectMessages, and in the end the blocks in the Response are added. This allows a lagging peer to catch up with the network quickly.
  • processStateRequests: handles State Requests and sends back Responses with the requested blocks

In summary, StateProvider tries to sync the ledger up with the network, either by processing Gossip messages with block payloads or by requesting the missing block gap with State messages. Internally, StateProvider uses a buffer mechanism to handle the block payloads.
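
A rough sketch of the anti-entropy height check described above; maxPeerHeight and antiEntropy below are hypothetical helpers, not Fabric's implementation.

package main

import "fmt"

// maxPeerHeight returns the highest ledger height published by the channel
// peers (as carried in their StateInfo messages).
func maxPeerHeight(heights []uint64) uint64 {
	var max uint64
	for _, h := range heights {
		if h > max {
			max = h
		}
	}
	return max
}

// antiEntropy decides which block range to request: if our height is below
// the maximum known height, ask for blocks [ourHeight, max-1] via a
// StateRequest message.
func antiEntropy(ourHeight uint64, peerHeights []uint64) (from, to uint64, needed bool) {
	max := maxPeerHeight(peerHeights)
	if ourHeight >= max {
		return 0, 0, false // already caught up
	}
	return ourHeight, max - 1, true
}

func main() {
	from, to, needed := antiEntropy(5, []uint64{5, 9, 7})
	if needed {
		fmt.Printf("requesting blocks %d..%d from peers\n", from, to)
	}
}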

Deliver Client

The delivery service instance is used to establish a connection to the ordering service specified in the configuration. It sets up a gRPC connection to the Orderer's Deliver service.


// StartDeliverForChannel starts blocks delivery for channel
// initializes the grpc stream for given chainID, creates blocks provider instance
// that spawns in go routine to read new blocks starting from the position provided by ledger
// info instance.
func (d *deliverServiceImpl) StartDeliverForChannel(chainID string, ledgerInfo blocksprovider.LedgerInfo, finalizer func()) error {
	d.lock.Lock()
	defer d.lock.Unlock()
	if d.stopping {
		errMsg := fmt.Sprintf("Delivery service is stopping cannot join a new channel %s", chainID)
		logger.Errorf(errMsg)
		return errors.New(errMsg)
	}
	if _, exist := d.blockProviders[chainID]; exist {
		errMsg := fmt.Sprintf("Delivery service - block provider already exists for %s found, can't start delivery", chainID)
		logger.Errorf(errMsg)
		return errors.New(errMsg)
	}
	logger.Info("This peer will retrieve blocks from ordering service and disseminate to other peers in the organization for channel", chainID)

	dc := &blocksprovider.Deliverer{
		ChannelID:     chainID,
		Gossip:        d.conf.Gossip,
		Ledger:        ledgerInfo,
		BlockVerifier: d.conf.CryptoSvc,
		Dialer: DialerAdapter{
			Client: d.conf.DeliverGRPCClient,
		},
		Orderers:          d.conf.OrdererSource,
		DoneC:             make(chan struct{}),
		Signer:            d.conf.Signer,
		DeliverStreamer:   DeliverAdapter{},
		Logger:            flogging.MustGetLogger("peer.blocksprovider").With("channel", chainID),
		MaxRetryDelay:     time.Duration(d.conf.DeliverServiceConfig.ReConnectBackoffThreshold),
		MaxRetryDuration:  d.conf.DeliverServiceConfig.ReconnectTotalTimeThreshold,
		InitialRetryDelay: 100 * time.Millisecond,
		YieldLeadership:   !d.conf.IsStaticLeader,
	}

	if d.conf.DeliverGRPCClient.MutualTLSRequired() {
		dc.TLSCertHash = util.ComputeSHA256(d.conf.DeliverGRPCClient.Certificate().Certificate[0])
	}

	d.blockProviders[chainID] = dc
	go func() {
		dc.DeliverBlocks()
		finalizer()
	}()
	return nil
}
func (d *Deliverer) DeliverBlocks() {
	failureCounter := 0
	totalDuration := time.Duration(0)

	// InitialRetryDelay * backoffExponentBase^n > MaxRetryDelay
	// backoffExponentBase^n > MaxRetryDelay / InitialRetryDelay
	// n * log(backoffExponentBase) > log(MaxRetryDelay / InitialRetryDelay)
	// n > log(MaxRetryDelay / InitialRetryDelay) / log(backoffExponentBase)
	maxFailures := int(math.Log(float64(d.MaxRetryDelay)/float64(d.InitialRetryDelay)) / math.Log(backoffExponentBase))
	for {
		select {
		case <-d.DoneC:
			return
		default:
		}

		if failureCounter > 0 {
			var sleepDuration time.Duration
			if failureCounter-1 > maxFailures {
				sleepDuration = d.MaxRetryDelay
			} else {
				sleepDuration = time.Duration(math.Pow(1.2, float64(failureCounter-1))*100) * time.Millisecond
			}
			totalDuration += sleepDuration
			if totalDuration > d.MaxRetryDuration {
				if d.YieldLeadership {
					d.Logger.Warningf("attempted to retry block delivery for more than %v, giving up", d.MaxRetryDuration)
					return
				}
				d.Logger.Warningf("peer is a static leader, ignoring peer.deliveryclient.reconnectTotalTimeThreshold")
			}
			d.sleeper.Sleep(sleepDuration, d.DoneC)
		}

		ledgerHeight, err := d.Ledger.LedgerHeight()
		if err != nil {
			d.Logger.Error("Did not return ledger height, something is critically wrong", err)
			return
		}

		seekInfoEnv, err := d.createSeekInfo(ledgerHeight)
		if err != nil {
			d.Logger.Error("Could not create a signed Deliver SeekInfo message, something is critically wrong", err)
			return
		}

		deliverClient, endpoint, cancel, err := d.connect(seekInfoEnv)
		if err != nil {
			d.Logger.Warningf("Could not connect to ordering service: %s", err)
			failureCounter++
			continue
		}

		connLogger := d.Logger.With("orderer-address", endpoint.Address)

		recv := make(chan *orderer.DeliverResponse)
		go func() {
			for {
				resp, err := deliverClient.Recv()
				if err != nil {
					connLogger.Warningf("Encountered an error reading from deliver stream: %s", err)
					close(recv)
					return
				}
				select {
				case recv <- resp:
				case <-d.DoneC:
					close(recv)
					return
				}
			}
		}()

	RecvLoop: // Loop until the endpoint is refreshed, or there is an error on the connection
		for {
			select {
			case <-endpoint.Refreshed:
				connLogger.Infof("Ordering endpoints have been refreshed, disconnecting from deliver to reconnect using updated endpoints")
				break RecvLoop
			case response, ok := <-recv:
				if !ok {
					connLogger.Warningf("Orderer hung up without sending status")
					failureCounter++
					break RecvLoop
				}
				err = d.processMsg(response)
				if err != nil {
					connLogger.Warningf("Got error while attempting to receive blocks: %v", err)
					failureCounter++
					break RecvLoop
				}
				failureCounter = 0
			case <-d.DoneC:
				break RecvLoop
			}
		}
		// cancel and wait for our spawned go routine to exit
		cancel()
		<-recv
	}
}

DeliverBlocks() pulls blocks from the ordering service in order to distribute them across peers. It sets up the gRPC client, dials the Orderer's Deliver service, and seeks blocks starting from the current ledger height. Any valid block response is added to the local payload buffer via AddPayload(), and then gossiped to the network.

		...
		gossipMsg := &gossip.GossipMessage{
			Nonce:   0,
			Tag:     gossip.GossipMessage_CHAN_AND_ORG,
			Channel: []byte(d.ChannelID),
			Content: &gossip.GossipMessage_DataMsg{
				DataMsg: &gossip.DataMessage{
					Payload: payload,
				},
			},
		}		
		// Add payload to local state payloads buffer
		if err := d.Gossip.AddPayload(d.ChannelID, payload); err != nil {
			d.Logger.Warningf("Block [%d] received from ordering service wasn't added to payload buffer: %v", blockNum, err)
			return errors.WithMessage(err, "could not add block as payload")
		}

		// Gossip messages with other nodes
		d.Logger.Debugf("Gossiping block [%d]", blockNum)
		d.Gossip.Gossip(gossipMsg)

Leader Election

It is used to elect one peer per organization which will maintain connection with the ordering service and initiate distribution of newly arrived blocks across the peers of its own organization.

  • Static — a system administrator manually configures a peer in an organization to be the leader.
  • Dynamic — peers execute a leader election procedure to select one peer in an organization to become leader.

When a channel is initialized, newLeaderElectionComponent() is invoked if dynamic election is configured. Otherwise, if static leadership is configured, Gossip starts the Deliver Client to the Orderer.

ElectionAdapter

Details of how election works:

// Gossip leader election module
// Algorithm properties:
// - Peers break symmetry by comparing IDs
// - Each peer is either a leader or a follower,
// and the aim is to have exactly 1 leader if the membership view
// is the same for all peers
// - If the network is partitioned into 2 or more sets, the number of leaders
// is the number of network partitions, but when the partition heals,
// only 1 leader should be left eventually
// - Peers communicate by gossiping leadership proposal or declaration messages

// The Algorithm, in pseudo code:
//
// variables:
//     leaderKnown = false
//
// Invariant:
//     Peer listens for messages from remote peers
//     and whenever it receives a leadership declaration,
//     leaderKnown is set to true
//
// Startup():
//     wait for membership view to stabilize, or for a leadership declaration is received
//     or the startup timeout expires.
//     goto SteadyState()
//
// SteadyState():
//     while true:
//         If leaderKnown is false:
//             LeaderElection()
//         If you are the leader:
//             Broadcast leadership declaration
//             If a leadership declaration was received from
//             a peer with a lower ID,
//                 become a follower
//         Else, you're a follower:
//             If haven't received a leadership declaration within
//             a time threshold:
//                 set leaderKnown to false
//
// LeaderElection():
//     Gossip leadership proposal message
//     Collect messages from other peers sent within a time period
//     If received a leadership declaration:
//         return
//     Iterate over all proposal messages collected.
//     If a proposal message from a peer with an ID lower
//     than yourself was received, return.
//     Else, declare yourself a leader
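
A tiny sketch of the symmetry-breaking rule from the pseudo code above; shouldDeclareLeadership is a hypothetical helper, not the Fabric election code.

package main

import "fmt"

// shouldDeclareLeadership applies the symmetry-breaking rule: declare
// leadership only if no proposal from a peer with a lower ID was collected
// during the election window.
func shouldDeclareLeadership(selfID string, proposals []string) bool {
	for _, id := range proposals {
		if id < selfID {
			return false // defer to the peer with the lower ID
		}
	}
	return true
}

func main() {
	proposals := []string{"peer2.org1", "peer1.org1"}
	fmt.Println(shouldDeclareLeadership("peer0.org1", proposals)) // true: lowest ID
	fmt.Println(shouldDeclareLeadership("peer3.org1", proposals)) // false
}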

// LeaderElectionAdapter is used by the leader election module
// to send and receive messages and to get membership information
type LeaderElectionAdapter interface {
	// Gossip gossips a message to other peers
	Gossip(Msg)

	// Accept returns a channel that emits messages
	Accept() <-chan Msg

	// CreateProposalMessage
	CreateMessage(isDeclaration bool) Msg

	// Peers returns a list of peers considered alive
	Peers() []Peer

	// ReportMetrics sends a report to the metrics server about a leadership status
	ReportMetrics(isLeader bool)
}

func (ai *adapterImpl) Accept() <-chan Msg {
	adapterCh, _ := ai.gossip.Accept(func(message interface{}) bool {
		// Get only leadership org and channel messages
		return message.(*proto.GossipMessage).Tag == proto.GossipMessage_CHAN_AND_ORG &&
			protoext.IsLeadershipMsg(message.(*proto.GossipMessage)) &&
			bytes.Equal(message.(*proto.GossipMessage).Channel, ai.channel)
	}, false)

	msgCh := make(chan Msg)
	...

Election uses the Gossip Service to achieve its purpose. The adapter itself is created with the Gossip service, but only handles leadership messages for its own org and channel.

adapter := election.NewAdapter(g, PKIid, gossipcommon.ChannelID(channelID), electionMetrics)

When the adapter wants to propose or declare leadership, it asks the Gossip Service to send the message to other gossip nodes.

// NewLeaderElectionService returns a new LeaderElectionService
func NewLeaderElectionService(adapter LeaderElectionAdapter, id string, callback leadershipCallback, config ElectionConfig) LeaderElectionService {
	if len(id) == 0 {
		panic("Empty id")
	}
	le := &leaderElectionSvcImpl{
		id:            peerID(id),
		proposals:     util.NewSet(),
		adapter:       adapter,
		stopChan:      make(chan struct{}),
		interruptChan: make(chan struct{}, 1),
		logger:        util.GetLogger(util.ElectionLogger, ""),
		callback:      noopCallback,
		config:        config,
	}

	if callback != nil {
		le.callback = callback
	}

	go le.start()
	return le
}

func (le *leaderElectionSvcImpl) start() {
	le.stopWG.Add(2)
	go le.handleMessages()
	le.waitForMembershipStabilization(le.config.StartupGracePeriod)
	go le.run()
}

beLeader

Once a peer becomes the leader, the callback created by onStatusChangeFactory() is invoked to start the deliver client.


func (g *GossipService) onStatusChangeFactory(channelID string, committer blocksprovider.LedgerInfo) func(bool) {
	return func(isLeader bool) {
		if isLeader {
			yield := func() {
				g.lock.RLock()
				le := g.leaderElection[channelID]
				g.lock.RUnlock()
				le.Yield()
			}
			logger.Info("Elected as a leader, starting delivery service for channel", channelID)
			if err := g.deliveryService[channelID].StartDeliverForChannel(channelID, committer, yield); err != nil {
				logger.Errorf("Delivery service is not able to start blocks delivery for chain, due to %+v", err)
			}
		} else {
			logger.Info("Renounced leadership, stopping delivery service for channel", channelID)
			if err := g.deliveryService[channelID].StopDeliverForChannel(channelID); err != nil {
				logger.Errorf("Delivery service is not able to stop blocks delivery for chain, due to %+v", err)
			}
		}
	}
}

Be Leader call stack

github.com/hyperledger/fabric/core/deliverservice.(*deliverServiceImpl).StartDeliverForChannel at deliveryclient.go:109
github.com/hyperledger/fabric/gossip/service.(*GossipService).onStatusChangeFactory.func1 at gossip_service.go:488
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).beLeader at election.go:400
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).leaderElection at election.go:319
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).run at election.go:268
runtime.goexit at asm_amd64.s:1357
 - Async stack trace
github.com/hyperledger/fabric/gossip/election.(*leaderElectionSvcImpl).start at election.go:193
