
Mule High Availability (HA) Cluster 

ramya
December 21, 2015

Transcript

1. OVERVIEW
 Introduction
 About Clustering
 About Queues
 About High-Reliability Applications
 Cluster Support for Transports
 Clustering and Reliable Applications
 Clustering and Networking
 Clustering and Load Balancing
 Clustering for High Performance
 Best Practices
 Conclusion
2. INTRODUCTION
A cluster is a set of Mule instances that acts as a unit. In other words, a cluster is a virtual server composed of multiple nodes. The servers in a cluster communicate and share information through a distributed shared memory grid. This means that the data is replicated across memory in different physical machines.
3. ABOUT CLUSTERING
A Mule ESB cluster consists of two to eight Mule ESB server instances, or nodes, grouped together and treated as a single unit. Thus, you can deploy, monitor, or stop all the nodes in a cluster as if they were a single Mule server. Mule uses an active-active model to cluster servers, rather than an active-passive model. In an active-passive model, one server in the cluster acts as the primary, or active, node while the others are secondary, or passive, nodes. The application runs on the primary server and only runs on a secondary server if the primary fails; the processing power of the secondary node(s) is mostly wasted waiting passively for the primary node to fail. In an active-active model, no single server acts as the primary: all servers in the cluster support the application, which runs on all of them, and message processing can even be split across nodes to expedite it.
4. ABOUT QUEUES
You can set up a VM queue explicitly to load balance across nodes. Thus, if your entire application flow is a sequence of child flows, Mule can assign each successive child flow to whichever node happens to be available at the time. Potentially, Mule can process a single message on multiple nodes as it passes through the entire application flow.
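As a minimal sketch of this idea (Mule 3.x XML; the flow and queue names here are hypothetical, not from the original deck), two flows chained through a one-way VM queue let whichever node is free pick up the second step:

    <!-- Sketch only: Mule 3.x transports; all names are hypothetical. -->
    <flow name="acceptOrder">
        <http:inbound-endpoint host="0.0.0.0" port="8081" path="orders"/>
        <!-- Hand off through a VM queue; in a cluster the queue is
             distributed, so any node may run the next step. -->
        <vm:outbound-endpoint path="processOrder" exchange-pattern="one-way"/>
    </flow>

    <flow name="processOrder">
        <vm:inbound-endpoint path="processOrder" exchange-pattern="one-way"/>
        <logger message="Processing #[message.id] on this node" level="INFO"/>
    </flow>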
5. ABOUT HIGH-RELIABILITY APPLICATIONS
A high-reliability application must feature the following:
1. zero tolerance for message loss
2. a reliable underlying enterprise service bus (ESB)
3. highly reliable individual connections
6. CLUSTER SUPPORT FOR TRANSPORTS
Mule supports three basic types of transports:
1. Socket-based transports read input sent to network sockets that Mule owns. Examples include TCP, UDP, and HTTP[S].
2. Listener-based transports read data using a protocol that fully supports multiple concurrent accessors. Examples include JMS and VM.
3. Resource-based transports read data from a resource that allows multiple concurrent accessors but does not natively coordinate their use of the resource. For instance, suppose multiple programs process files in the same shared directory by reading, processing, and then deleting them. These programs must use an explicit, application-level locking strategy to prevent the same file from being processed more than once. Examples include File, FTP, SFTP, E-mail, and JDBC.
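To make the three families concrete, here are illustrative inbound endpoints in Mule 3.x syntax (a sketch only; the ports, paths, and queue names are hypothetical):

    <!-- Socket-based: Mule owns the network socket. -->
    <http:inbound-endpoint host="0.0.0.0" port="8081" path="api"/>

    <!-- Listener-based: the broker coordinates concurrent consumers. -->
    <jms:inbound-endpoint queue="orders"/>

    <!-- Resource-based: concurrent access is possible but uncoordinated;
         locking is the application's responsibility. -->
    <file:inbound-endpoint path="/data/in" pollingFrequency="5000" moveToDirectory="/data/done"/>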
7. CLUSTERING AND RELIABLE APPLICATIONS
High-reliability applications (ones that have zero tolerance for message loss) require not only that the underlying ESB be reliable, but that this reliability extend to individual connections. Reliability patterns give you the tools to build fully reliable applications in your clusters.
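A common reliability pattern in Mule 3.x is the reliable-acquisition flow pair: one small flow does nothing but move each message from a non-transactional transport into a transactional store (a VM queue here), and a second flow consumes it transactionally. A hedged sketch, with hypothetical names:

    <!-- Sketch of a reliable-acquisition pattern; all names are hypothetical. -->
    <flow name="reliableAcquisitionFlow">
        <file:inbound-endpoint path="/data/in"/>
        <!-- Do nothing but move the message into a transactional store. -->
        <vm:outbound-endpoint path="toProcess" exchange-pattern="one-way"/>
    </flow>

    <flow name="applicationLogicFlow">
        <!-- Consume transactionally: a failure triggers redelivery, not loss. -->
        <vm:inbound-endpoint path="toProcess" exchange-pattern="one-way">
            <vm:transaction action="ALWAYS_BEGIN"/>
        </vm:inbound-endpoint>
        <logger message="Safely processing #[message.id]" level="INFO"/>
    </flow>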
8. CLUSTERING AND NETWORKING
To ensure reliable connectivity between cluster nodes, all nodes of a cluster should be located on the same LAN. Implementing a cluster with nodes across geographically separated locations, such as different datacenters connected through a VPN, is possible but neither recommended nor supported.
9. CLUSTERING AND LOAD BALANCING
When Mule clusters are used to serve TCP requests (where "TCP" here covers the socket-based transports generally: SSL/TLS, UDP, Multicast, HTTP, and HTTPS), a load balancer is needed to distribute the requests among the clustered instances. There are various software load balancers available.
10. CLUSTERING FOR HIGH PERFORMANCE
If high performance is your primary goal (rather than reliability), you can configure a Mule cluster or an individual application for maximum performance using a performance profile. Setting the performance profile has two effects:
1. It disables distributed queues, using local queues instead, to prevent data serialization/deserialization and distribution in the shared data grid.
2. It implements the object store without backups, to avoid replication.
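As a sketch of how the cluster-wide profile is set in Mule 3.x (the key name below is my recollection of the 3.x HA documentation; verify it against your release):

    # $MULE_HOME/.mule/mule-cluster.properties -- sketch; confirm the key for your version.
    # Switches the cluster to local queues and an object store without backups.
    mule.cluster.storeprofile=performance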
11. BEST PRACTICES
There are a number of recommended practices related to clustering. These include:
• As much as possible, organize your application into a series of steps where each step moves the message from one transactional store to another.
• If your application processes messages from a non-transactional transport, use a reliability pattern to move them to a transactional store such as a VM or JMS store.
• Use transactions to process messages from a transactional transport. This ensures that if an error is encountered, the message will be reprocessed (see the sketch after this list).
• Use distributed stores such as those used with the VM or JMS transport; these stores are available to the entire cluster. This is preferable to the non-distributed stores used with transports such as File, FTP, and JDBC, which are read by a single node at a time.
• Use the VM transport for optimal performance. Use the JMS transport for applications where data needs to be saved even after the entire cluster exits.
• Create the number of nodes within a cluster that best meets your needs.
• Implement reliability patterns to create high-reliability applications.
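For the transactions bullet, a hedged Mule 3.x sketch (queue names hypothetical): receiving and dispatching inside a single JMS transaction means a failure rolls the message back for redelivery instead of losing it.

    <!-- Sketch only; queue names are hypothetical. -->
    <flow name="transactionalFlow">
        <jms:inbound-endpoint queue="orders">
            <jms:transaction action="ALWAYS_BEGIN"/>
        </jms:inbound-endpoint>
        <!-- ... processing steps run inside the transaction ... -->
        <jms:outbound-endpoint queue="processed">
            <jms:transaction action="ALWAYS_JOIN"/>
        </jms:outbound-endpoint>
    </flow>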
12. CONCLUSION
• Currently you can create a cluster consisting of at least two servers and up to a maximum of eight; however, each server must run on a different physical (or virtual) machine.
• To maintain synchronization between the nodes in the cluster, Mule HA requires a reliable network connection between servers.
• You must keep the following ports open in order to set up a Mule cluster: port 5701 and port 54327.
• Because new cluster members are discovered using multicast, you need to enable the multicast IP: 224.2.2.3.
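As a closing sketch, a per-node mule-cluster.properties illustrating these requirements (the key names are my recollection of the Mule 3.x HA docs and should be confirmed; the values are hypothetical):

    # $MULE_HOME/.mule/mule-cluster.properties on each node -- sketch; confirm key names.
    mule.clusterId=myCluster          # same value on every node in the cluster
    mule.clusterNodeId=1              # unique per node (2 to 8 nodes total)
    # Discovery defaults to multicast (group 224.2.2.3, UDP port 54327);
    # TCP port 5701 must also be open between the nodes.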