OSCache Cluster Configuration Parameters Explained


URL: http://www.cs.cornell.edu/Info/Projects/JavaGroupsNew/userguide/html/user/index.html

The default event listener is used here: cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener


The default OSCache cluster configuration is as follows:
UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;\
mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\
PING(timeout=2000;num_initial_members=3):\
MERGE2(min_interval=5000;max_interval=10000):\
FD_SOCK:VERIFY_SUSPECT(timeout=1500):\
pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\
UNICAST(timeout=300,600,1200,2400):\
pbcast.STABLE(desired_avg_gossip=20000):\
FRAG(frag_size=8096;down_thread=false;up_thread=false):\
pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)
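
OSCache reads these settings from oscache.properties, so the defaults can be overridden there. A minimal sketch, using the property names from OSCache's clustering documentation (the address value below is only illustrative):

cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
# override only the multicast address, keeping the rest of the default stack
cache.cluster.multicast.ip=231.12.21.133
# alternatively, cache.cluster.properties can hold a complete JavaGroups
# stack string like the one shown above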


FD
The failure detection layer (FD) periodically tries to reach its nearest neighbor to the right.

[Figure 7.1: Failure detection layer]

The nearest neighbor is always computed based on the local view. Since all views in all stacks have the same member ordering, every member can always determine its next neighbor to the right. When a new view is received, the neighbor is recomputed.
The FD layer periodically pings its neighbor. When no response has been received after max_tries attempts (each with a timeout), a SUSPECT message is multicast to all members of the group. The GMS layer of the member that is currently the coordinator processes the message; all others ignore it. In the example, the coordinator would be A. It pings B, which in turn pings C. The last member (E) 'wraps around' and pings the first (A).
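
Note that the default stack above uses FD_SOCK rather than the heartbeat-based FD layer described here. If FD is used instead, its behavior is controlled by the two parameters mentioned in this section; a minimal sketch with illustrative values:

FD(timeout=2500;max_tries=5)
# wait up to 2500 ms for each ping response; after 5 unanswered pings,
# multicast a SUSPECT message for the unresponsive neighbor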


FRAG (the fragmentation protocol layer)

It essentially breaks up bigger messages into smaller ones and reassembles them at the receiver's side.
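
For example, with the frag_size=8096 setting from the default stack above, a 20000-byte message would travel as three fragments and be reassembled by the receiver's FRAG layer:

FRAG(frag_size=8096)
# a 20000-byte message -> fragments of 8096 + 8096 + 3808 bytes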


GMS

The group membership service is probably the most important protocol layer, and also the most complex to implement. The description is based on the current (March 99) GMS layer, but work is underway to replace this layer with a new one (RpcGMS).
When a CONNECT event is received by the GMS layer, it tries to join the group. To do so, it first tries to retrieve the initial membership. If no other members can be found, it assumes it is the first member and sends a VIEW_CHANGE event up/down the stack. Otherwise, it determines the coordinator and sends it a unicast request to join the group. The coordinator adds the new member to its local view and multicasts the new view to all GMS layers, which in turn generate VIEW_CHANGE events up/down the stack.
When a member wants to leave a group, it disconnects from the channel. This causes a DISCONNECT event to be sent down the stack, where it is caught by the GMS layer. The latter sends a unicast LEAVE request to the coordinator, which in turn removes the member from its local view and multicasts the new view.
When a SUSPECT event is received by the GMS layer and the layer is the current coordinator, the suspected member is removed from the local view and a new view is multicast to all members. Otherwise, the event is simply dropped. In case the suspected member is the coordinator itself, the next member of the group in order takes over and multicasts a new view (excluding the old coordinator).
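
The GMS entry in the default stack controls this join protocol. A sketch of the commonly documented meanings of its parameters (exact semantics are version-dependent):

pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)
# join_timeout:       how long to wait for the coordinator's response to a JOIN
# join_retry_timeout: how long to pause before retrying a failed JOIN
# shun:               whether members expelled from the group are shunned
# print_local_addr:   print the member's local address when the channel starts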


PING

The PING layer is responsible for finding the initial membership of a group. It does so upon reception of a FIND_INITIAL_MBRS event (sent by the GMS layer). When done, a FIND_INITIAL_MBRS_OK event is sent up the stack, carrying the members found as its argument. GMS waits until it receives the initial membership and, based on it, determines the current coordinator, to which it then sends a join request. If the initial membership is not received within a certain time frame, a timeout occurs and GMS creates a singleton group (with itself as the only member).
Currently the initial membership is found either by multicasting to an IP multicast address to which all members respond, or, if IP multicast is not enabled, by using the Router daemon (see 3.8.1).
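
The corresponding entry in the default stack, with the commonly documented meanings of its parameters (a sketch; parameter names vary between versions):

PING(timeout=2000;num_initial_members=3)
# timeout:             wait up to 2000 ms for discovery responses
# num_initial_members: stop waiting as soon as 3 members have replied
# when IP multicast is disabled, PING can instead be pointed at the Router
# daemon (older releases use gossip_host/gossip_port parameters for this)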


UDP

UDP is currently the bottommost layer available to use in a protocol stack.
When it receives a START event (upon channel connection), it creates a unicast socket and an IP multicast socket. The IP address plus the port number of the unicast socket form the channel's address: every message sent to either a single destination or the whole group is marked with this address (in the source field). As soon as the local address is known, a SET_LOCAL_ADDRESS event is sent up the stack, followed by a START_OK.
When a STOP event is received, the sockets are closed and a STOP_OK event is passed up the stack as acknowledgment. When a channel is closed, the local address becomes meaningless (since it is derived from a socket that is no longer open).
Messages sent down the stack have a UDP header added, containing the group name, and are then put on the network as datagram packets. Datagrams received from the network are converted into messages and their UDP header is removed. If the header's group name differs from the channel's group name, the message is dropped; otherwise, it is passed up the stack.
UDP can use either IP multicasting or unicasting to disseminate messages to the group. If the option ip_mcast is false, unicast is used: when a message is to be sent to all group members, it is sent n times, once to each member. To do this, UDP has to cache the membership when receiving VIEW_CHANGE events.
The IP multicast address and port can be configured using the options mcast_addr and mcast_port. Although it is not a problem when different groups use the same IP multicast address and port (since messages from members of a different group are discarded by UDP), it is often preferable to choose different parameters. However, all group members must have their UDP layers configured to the same IP multicast address and port (when using IP mcast).
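
For example, two independent clusters on the same LAN would each get their own address/port pair, while every member within one cluster must use identical values (illustrative):

# cluster 1 (the defaults):
UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32)
# cluster 2, kept separate:
UDP(mcast_addr=231.12.21.133;mcast_port=45567;ip_ttl=32)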