Cluster interconnect
If a block of data is cached on one node and a user requests it on another node, Oracle uses Cache Fusion to ship the block across the interconnect to the requesting node. Parallel processing relies on passing messages among multiple processors. Processors running parallel programs call for data and instructions, and then perform calculations. Each processor checks back periodically with the other nodes, or with a master node, to plan its next move or to synchronize the delivery of results. These activities rely on message-passing software, such as the industry-standard Message Passing Interface (MPI).
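The worker/master pattern described above can be sketched in plain Python. This is not MPI itself; it is a minimal illustration of the same message-passing idea using `multiprocessing` queues, with illustrative function names (`worker`, `run`) that are my own, not from any library discussed here.

```python
# Minimal sketch of the message-passing pattern: workers pull tasks,
# compute, and report back to a master that synchronizes the results.
# Queues stand in for a real interconnect and MPI library.
from multiprocessing import Process, Queue

def worker(rank, task_q, result_q):
    """Each 'processor' receives messages, computes, and replies."""
    while True:
        msg = task_q.get()
        if msg is None:                    # sentinel: no more work
            break
        result_q.put((rank, msg * msg))    # perform a calculation

def run(num_workers=3, tasks=range(6)):
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(r, task_q, result_q))
             for r in range(num_workers)]
    for p in procs:
        p.start()
    for t in tasks:                        # master distributes work
        task_q.put(t)
    for _ in procs:                        # one stop sentinel per worker
        task_q.put(None)
    results = [result_q.get() for _ in tasks]  # synchronize the results
    for p in procs:
        p.join()
    return sorted(r for _, r in results)

if __name__ == "__main__":
    print(run())
```

In a real MPI program the queues would be replaced by calls such as send/receive and collective operations, and the processes would run on separate cluster nodes connected by the interconnect.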
In parallel databases there is a great deal of message passing, as well as transfers of data blocks, or pages, to the local cache of another node. Much of the cluster's functionality and performance depends on the efficiency of the transport medium, which is critical to the overall performance of the cluster and of the parallel application. Because parallel databases impose no constraints on which nodes users can connect to, users may connect to any node in the cluster. Regardless of the nature of the application, OLTP or data warehousing, data blocks routinely move from one node to another over the interconnect. One of the most significant design features of the cluster is the role of the interconnect in providing a kind of extended cache encompassing the caches of all the nodes. In general, the cluster interconnect is used for the following high-level functions:
Health, status, and synchronization of messages
Distributed lock manager messages
Accessing remote file systems
Application-specific traffic
Cluster alias routing
High performance, achieved by distributing computations across an array of nodes in the cluster, requires the interconnect to provide a high data transfer rate and low-latency communication between nodes. The interconnect also needs to be capable of detecting and isolating faults, and of using alternative paths. Some of the essential requirements for the interconnect are:
Low latency for short messages
High speed and sustained data rates for large messages
Low host-CPU utilization per message
Flow control, error control, and heartbeat continuity monitoring
Host interfaces that execute control programs to interact directly with host processes
Switch networks that scale well
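The first two requirements above can be captured in a simple back-of-the-envelope model: total transfer time is a fixed per-message latency plus payload size divided by sustained bandwidth. The figures below are illustrative assumptions, not measurements of any product in this chapter.

```python
# Simple interconnect cost model: time = latency + size / bandwidth.
# All numbers here are assumed for illustration only.
def transfer_time_us(size_bytes, latency_us, bandwidth_mb_s):
    """Time to move one message across the interconnect, in microseconds."""
    return latency_us + (size_bytes / (bandwidth_mb_s * 1e6)) * 1e6

# Short messages are dominated by the fixed latency term ...
small = transfer_time_us(256, latency_us=5, bandwidth_mb_s=200)
# ... while large block transfers are dominated by sustained bandwidth.
large = transfer_time_us(8 * 1024 * 1024, latency_us=5, bandwidth_mb_s=200)
print(round(small, 2), round(large, 2))
```

This is why the list distinguishes low latency for short messages from high sustained data rates for large ones: the two workloads stress different terms of the same equation.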
Many cluster vendors have designed very competitive technology. Many of the interconnect products described next come close to the latency levels of an SMP (symmetric multiprocessing) bus. Table 11-1 summarizes the various interconnect capabilities (they will be faster yet by the time you read this).
HP Memory Channel: Memory Channel interconnect is a high-speed network interconnect that provides applications with a cluster-wide address space. Applications map portions of this address space into their own virtual address space as 8 KB pages and then read from or write to this address space just like normal memory.
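The idea of mapping an address region and treating it like ordinary memory can be demonstrated locally with `mmap`. Note the limits of the analogy: Memory Channel maps a cluster-wide address space across nodes, whereas the anonymous mapping below is node-local only, and the 8 KB page size is taken from the text.

```python
# Node-local illustration of memory-mapped I/O: create a mapping and
# read/write it like normal memory. (Memory Channel extends this idea
# to a cluster-wide address space; plain mmap cannot do that.)
import mmap

PAGE = 8 * 1024                   # 8 KB pages, as in the text
buf = mmap.mmap(-1, 2 * PAGE)     # anonymous two-page mapping
buf[0:5] = b"hello"               # ordinary slice assignment writes memory
assert buf[0:5] == b"hello"       # ordinary slice read
buf.close()
```

With a cluster-wide address space, a write like the one above on one node would become visible to processes on other nodes that mapped the same region.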
Myrinet: Myrinet is a cost-effective, high-performance packet communication and switching technology that is widely used in Linux clusters. Myrinet software supports most common hosts and operating systems, and the software is supplied as open source.
Scalable Coherent Interface (SCI): SCI is Sun's best-performing cluster interconnect because of its high data rate and low latency. Applications that stress the interconnect scale better using SCI than with lower-performing alternatives. Sun SCI implements Remote Shared Memory (RSM), a feature that bypasses the TCP/IP communication overhead of Solaris and thereby improves cluster performance.
Veritas: Database Edition/Advanced Cluster (DBE/AC) communications consist of LLT (Low-Latency Transport) and GAB (Group Membership and Atomic Broadcast) services. LLT provides kernel-to-kernel communications and functions as a performance booster for the IP stack; using LLT rather than IP reduces the latency and overhead associated with the IP stack. This product line is now known as Storage Foundation.
HP HyperFabric with Hyper Messaging Protocol (HMP): HP HyperFabric supports both standard TCP/UDP over IP and HP's proprietary Hyper Messaging Protocol. HyperFabric extends the scalability and reliability of TCP/UDP by providing transparent load balancing of connection traffic across multiple network interface cards. HMP, coupled with OS-bypass capability and hardware support for protocol offload, provides low latency and extremely low CPU utilization.
For building a high-performance Oracle RAC system, selecting the right interconnect is important. Take care to choose the technology appropriate for your environment, and check with your vendor for the most up-to-date hardware available.
Table 11-1 ….
The key here is that going to disk is in the millisecond range, whereas going through the interconnect is in the microsecond or single-digit millisecond range.
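The gap is easy to quantify with assumed round-number figures (these are not measurements, just values in the ranges the text gives):

```python
# Illustrative comparison: disk I/O in the millisecond range vs. a
# cache-fusion block transfer over the interconnect in the microsecond
# range. Both figures below are assumptions for the arithmetic only.
disk_read_ms = 8.0        # assumed average disk read latency
interconnect_us = 200.0   # assumed block transfer time over interconnect

speedup = disk_read_ms * 1000 / interconnect_us
print(f"interconnect transfer is ~{speedup:.0f}x faster than a disk read")
```

Even with conservative assumptions the interconnect path wins by one to two orders of magnitude, which is why shipping a block from another node's cache beats re-reading it from disk.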
From the ITPUB blog: http://blog.itpub.net/104152/viewspace-162822/