Friday, 7 May 2010

HA of an MQLink between SIBus and WMQ

Just answered a question from a colleague in Switzerland & I thought that it would be worth posting the answer here for posterity.

Suppose you have an SIBus with a Foreign Bus Connection or MQLink from a queue manager to the SIBus. And further suppose that you've created a Cluster Bus member that contains one messaging engine, which will host the SIBus end of the MQLink. The SIBus end of the MQLink knows the endpoint of the queue manager, and the queue manager end of the link knows the endpoint of the messaging engine, which will be configured in the CONNAME property of the WMQ sender channel.

You can configure the messaging engine to fail over between the servers in the WAS cluster, for high availability. As the messaging engine moves (i.e. fails over) from one server to another, it will be listening on a different host/port. This raises the question of what you should configure as the host/port in the WMQ sender channel's CONNAME.

There are 3 solutions to this problem, depending on which version of WMQ you are using:

1. If you are using WMQ prior to v7.0.1 then:

a) you could use a shared disk style HA cluster to manage the ME and its endpoint (note: this is not a nice solution; I mention it only for completeness)

b) you could install WMQ supportpac MR01 at the queue manager, which adds a channel exit to the Sender channel. You can then configure a list of the endpoints of the WAS servers and this is used to select an endpoint to use in the CONNAME when starting the channel. This has the advantage that the channel exit "remembers" the last known good endpoint, which minimises reconnect time.

2. If you are using WMQ 7.0.1 then you can configure a comma-separated list of endpoints in the CONNAME of the Sender channel. A disadvantage of this is that when you start the sender channel it always searches from the beginning of the list, trying each endpoint in turn, so there can be a slight delay before a successful connection is made. This is not significant provided you don't disconnect an idle channel too eagerly - i.e. set the DISCINT (disconnect interval) to be relatively long. A sketch of such a channel definition follows.
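
For illustration, here is a minimal sketch of what such a sender channel definition might look like in MQSC. The channel, transmission queue and host names are invented, and the ports assume the default SIB MQ endpoint of 5558 on each WAS server, so substitute your own values:

    DEFINE CHANNEL('TO.MYBUS') CHLTYPE(SDR) TRPTYPE(TCP) +
           CONNAME('washost1(5558),washost2(5558)') +
           XMITQ('MYBUS.XMITQ') +
           DISCINT(3600)

If washost1 cannot be reached the channel tries washost2. Because the search always starts from the top of the list, a relatively long DISCINT (an hour here) stops an established channel from being disconnected and having to repeat that search.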

[19/05/2010 - added paragraph describing overall topology]

Saturday, 29 August 2009

New WebSphere Messaging Redbook

On Monday this week I got a call from goods inwards at work to tell me there was a package for me from Poughkeepsie. This was a surprise: I wasn't expecting anything, and although I do know people who work in Poughkeepsie, if they were going to send me something they would have mentioned it. So down I trotted to pick up my package and, as it turned out, seven other packages for people I work with.

The package contained a copy of the new WebSphere Application Server V7 Messaging Administration Guide. Well, I say new; it was published in July, but I hadn't realized. The book was reviewed by several people who work on developing WebSphere Messaging, hence me walking back with seven copies of the book.

The best thing about the Redbook is that although you can purchase it, you can also download it for free as a PDF, or view it online.

Another good resource for WebSphere Messaging users everywhere.
Alasdair

Tuesday, 2 June 2009

Listener Ports and EJB 3

About a year ago we released the EJB 3 Feature Pack for WebSphere Application Server v6.1. This introduced the EJB 3 programming model for the first time, and many people rushed to download it for their application development. Some of the more eagle-eyed among them noticed a restriction: Message Driven Beans had to use activation specifications - no listener ports here. This seems a simple enough restriction, until you remember that the only supported way of getting WebSphere MQ working with an MDB in v6.1 is a listener port. The non-alignment of the release dates for the feature pack and the availability of activation specs in the Application Server left an unforeseen gap. Oops, you might say.

Without going into all the gory details, we have been busy working on a solution, and until now we have been able to say very little about what has been going on behind the scenes. Now things are different: we have published APAR PK86005, "EJB 3.0 MESSAGE-DRIVEN BEANS CANNOT BE USED WITH LISTENER PORTS". This APAR is targeted for inclusion in 6.1.0.25 (and 7.0.0.5), at which point you can bind an EJB 3 MDB against a listener port, just like you could an EJB 2.1 or 2.0 MDB.

No code changes are needed to make this work; all you need to do is set the message listener bindings to name a listener port, along the lines of the sketch below.
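
Purely as an illustration, the binding might look something like this in the EJB 3 binding file ibm-ejb-jar-bnd.xml. The bean and listener port names are invented, so check the element names against the infocenter for your fix pack level:

    <?xml version="1.0" encoding="UTF-8"?>
    <ejb-jar-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee
                     http://websphere.ibm.com/xml/ns/javaee/ibm-ejb-jar-bnd_1_0.xsd"
                 version="1.0">
        <!-- Bind a hypothetical EJB 3 MDB called OrderMDB to a listener port -->
        <message-driven name="OrderMDB">
            <listener-port name="OrderListenerPort"/>
        </message-driven>
    </ejb-jar-bnd>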

Alasdair

Wednesday, 27 May 2009

Listener Ports and WebSphere Application Server v7

When we released WebSphere Application Server v7 one of the things people were most shocked about was the following statement in the list of deprecated features:

Support for configuring and using message-driven beans (MDBs) through JMS listener ports is deprecated.

I would like to attempt to put people's minds at rest here by explaining the motivation behind the deprecation.

When J2EE 1.3 was released it added support for message driven beans, but without specifying how an MDB should be connected to the messaging provider. So we invented listener ports (WAS v5). Then J2EE 1.4 came along and told us how it should work: it introduced a new mechanism called "activation specifications", which are part of a resource adapter. So we went ahead and implemented activation specifications (WAS v6), but we still maintained listener ports because they were still how we integrated WebSphere MQ as a JMS provider with message driven beans.

More recently we released a resource adapter for WebSphere MQ, and in WAS v7 we built it into the product as the way to connect WebSphere MQ to MDBs. As a result we now have two ways to do exactly the same thing. Having two ways of doing things often causes confusion, because people want to know which they should use, and in general we want our answer to be activation specifications. There are lots of good things about activation specifications, like being able to define them once for a cluster rather than once per cluster member, so in an effort to avoid confusion we deprecated listener ports.

While this is good for new application developers, as they have a clear direction on which to use, it is not so good for existing users of listener ports, who might worry that the rug is being pulled out from under them. So for those people who are using listener ports: don't worry, we have no plans to remove them from the product. They are there for the foreseeable future.

For those who are happy to migrate we have endeavoured to make this as simple as possible. Applications do not need to be re-written to take advantage of activation specifications; they do not even need to be redeployed. A simple change to the message listener bindings, followed by a restart of the application, is all that is needed (a sketch of such a binding follows). We even have a nice whizzy button to convert a listener port to an activation specification.
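
To make that concrete, here is roughly what the migrated binding might look like in ibm-ejb-jar-bnd.xml, this time naming an activation specification rather than a listener port. The bean name and JNDI names are invented for the example, so treat it as a sketch rather than a recipe:

    <!-- The same hypothetical OrderMDB, now bound to an activation
         specification and a destination looked up from JNDI -->
    <message-driven name="OrderMDB">
        <jca-adapter activation-spec-binding-name="jms/OrderAS"
                     destination-binding-name="jms/OrderQueue"/>
    </message-driven>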

So in summary, if you are writing new applications then we recommend that you use activation specifications, for existing applications then don't worry - listener ports are still there, and we won't be getting rid of them for some time to come. I wouldn't want anyone to worry that a crack team of IBM engineers will blast a hole in their data centres to surgically remove the listener ports from their product.

Alasdair

Wednesday, 18 February 2009

Updated: How MDBs work in a cluster

It has been a long time since I blogged, and then an email question dropped in my inbox, so I thought I would respond with a blog post. The question was:

I have an application that uses MDB's. I have set up 2 messaging engines in a cluster and both are running concurrently (active/active) on 2 different app servers within a WAS cluster. My application is deployed in the same WAS cluster as the SIBus. My activation specs reference the SI Bus. But I find that only MDB's in 1 app server are active.

Is it possible to make use of both app servers so that MDB's on both app servers can process messages from the queue? My understanding was that destination gets partitioned across the messaging engines. I am confused about why only one of the messaging engine is being used.

When you deploy an application containing a Message Driven Bean (MDB) to a cluster and it is hooked up to a bus, you need to be aware of how the bus topology affects delivery to the Message Driven Beans. If the cluster the application is deployed to is also a member of the bus, then an MDB will only receive messages if the server it is running in also has a messaging engine running in it. This means that if you have a cluster of five servers, and only three messaging engines, only MDBs on three of the servers will get driven.

In some scenarios this is undesirable because you are not sharing the work out across all the application servers, which reduces the scalability of message receipt. In v6.x the only solution to this problem is to have two distinct clusters: one for the messaging traffic, and one for the applications. This also gives the benefit that the messaging traffic cannot starve the application of resources, and vice versa. The downside is that you need a TCP/IP connection to get messages, rather than using Java method calls.

The good news is that in v7 we added a new option on an activation spec, "Always activate MDBs in all servers", which when selected will cause MDBs in all servers in the cluster to receive messages (there is a configuration sketch below). The v7 infocenter has a really good article on how all this works, which is well worth reading; it is also relevant to the old v6.1 behaviour - just ignore the section showing cross connections. To save you searching just click here.
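
For anyone scripting this rather than clicking through the console, the wsadmin (Jython) sketch below shows one plausible way to create such an activation specification. The cluster, bus, destination and JNDI names are invented, and the -alwaysActivateAllMDBs parameter name is my assumption for the "Always activate MDBs in all servers" setting, so verify it against the infocenter before relying on it:

    # Sketch only: create a SIB JMS activation spec scoped to the application
    # cluster and ask for MDBs to be driven in every server of that cluster.
    scope = AdminConfig.getid('/ServerCluster:appCluster/')
    AdminTask.createSIBJMSActivationSpec(scope,
        '[-name OrderAS -jndiName jms/OrderAS -busName MyBus '
        '-destinationJndiName jms/OrderQueue -alwaysActivateAllMDBs true]')
    AdminConfig.save()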

Updated (Thursday 19th February 2009): It has been pointed out that I have failed to mention one caution with the remote cluster solution. You need to carefully configure your activation specifications to ensure that all the MDBs do not connect to the same messaging engine in the remote cluster. If all the MDBs connect to a single engine, then messages that have been sent to the partitions on the other messaging engines will be marooned and not received. An easy solution to this problem is to have a single messaging engine in the remote cluster, which gives you HA but not scalability.

Alasdair

Tuesday, 18 November 2008

A brief introduction to the service integration bus

When Graham started this blog in September there was a definite idea that we would be talking about what is new in v7. It was launched at the same time v7 shipped, so to deny a link would be ludicrous, but it has recently occurred to me that it might be useful just to cover some of the basics. I am basing this on a large number of conversations I have had over the last month or so where I have had to explain the basics to people who had not quite got one of the concepts. So here I go:

What is a bus?

A bus is a number of things, which makes coming up with a single definition hard, but when we talk about a bus we mean all of the following:
  1. A name space for destinations
  2. A cloud in which destinations are defined and to which client applications connect.
  3. A set of interconnected application servers and/or clusters that co-operate to provide messaging function
  4. A set of interconnected messaging engines that co-operate to provide messaging function
While some of these might seem similar, they are in fact subtly different - no two of the statements are quite equivalent - and the differences should become clear later on.

What is a destination?

A destination is a point of addressability within the bus. Messages are sent to and received from destinations. There are a number of different types of destination. These are:
  1. A queue. This provides point-to-point messaging capabilities. A message is delivered to exactly one connected client, and messages are broadly processed in first-in, first-out order (message priority can affect this order).
  2. A topic space. This provides publish/subscribe messaging capabilities. A message is delivered to all matching connected clients.
There are other types of destinations, but they are less common, so I have skimmed over those.

What is a messaging engine?

A bus is a logical entity and as such provides no value on its own. The "runtime" of the bus is provided by a set of messaging engines which co-operate to provide this runtime. Messaging engines provide two important functions. The first is that clients connect to messaging engines, and the second is that messages are managed by the messaging engine.

What is a bus member?

While messaging engines provide the runtime, they are not directly configurable (except for one key exception I will cover later). Instead, servers and/or clusters are added to the bus. When a server or cluster is added to the bus it causes a single messaging engine to be created. A server bus member can host at most one messaging engine per bus; a cluster bus member can host several, and a cluster is the only place you can create additional messaging engines.

Destinations are then "assigned" to a bus member, at which point the messaging engines running on that bus member get something called a message point, which is where the messages are stored.

Multiple servers and clusters can be added to a single bus. This is an important point; some discussions I have had recently suggest it is a source of confusion. Two different servers or clusters can be added to the same bus. A bus can be as large as the cell in which it is defined - it can be larger than a single cluster.
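
As a rough illustration of how these pieces fit together, the wsadmin (Jython) sketch below creates a bus, adds a single server as a bus member and assigns a queue destination to it. All of the names are invented and only the minimum parameters are shown, so treat it as an outline rather than a tested script:

    # Sketch only: a bus with one server bus member and one queue destination.
    AdminTask.createSIBus('[-bus MyBus -busSecurity false]')
    AdminTask.addSIBusMember('[-bus MyBus -node myNode -server server1]')
    AdminTask.createSIBDestination(
        '[-bus MyBus -name OrderQueue -type Queue -node myNode -server server1]')
    AdminConfig.save()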

How does High Availability work?

A certain level of availability is provided just by adding multiple application servers as bus members: a client can connect to any running messaging engine. The problem is that if a messaging engine is not running, the message points it manages are not available. This does not provide an ideal HA story.

If you want HA you use a cluster as a bus member instead. When you add a cluster as a bus member you get one messaging engine which can run on any application server in that cluster. If the server hosting the messaging engine fails, the messaging engine will be started in another server in the cluster. This behaviour can be configured using a policy. Adding a cluster bus member is sketched below.
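
Again purely as an illustration (the bus and cluster names are invented), adding a cluster as a bus member looks much like adding a server:

    # Sketch only: one messaging engine is created for the cluster bus member
    # and can fail over between the servers of myCluster under the HA policy.
    AdminTask.addSIBusMember('[-bus MyBus -cluster myCluster]')
    AdminConfig.save()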

How does Scalability work?

Scalability also uses application server clusters. If you configure multiple messaging engines in a cluster, each messaging engine in the cluster will have a message point for the destinations the cluster manages. We call this a partitioned destination, because each messaging engine only knows about a subset of the messages on the destination. A sketch of adding a second messaging engine follows.
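
For completeness, a second messaging engine can be added to an existing cluster bus member along these lines (names invented, parameters kept to a minimum, so check the createSIBEngine documentation before use):

    # Sketch only: add a second messaging engine to the cluster bus member,
    # partitioning the destinations assigned to that cluster.
    AdminTask.createSIBEngine('[-bus MyBus -cluster myCluster]')
    AdminConfig.save()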

The upshot of all this is that the work load is shared by multiple servers.

And finally


So there we have it. The infocenter covers a lot of this in more detail, and I have linked the section titles above to the appropriate parts of the infocenter if you want to learn more about any of the topics.

If you have any questions feel free to ask in the comments.
Alasdair

Friday, 17 October 2008

Setting provider endpoints

When a client running in an application server wishes to connect to the bus to do some messaging work it looks up a connection factory in JNDI and uses it to create a connection. In order to create a connection the connection factory queries the Work Load Manager (WLM) service to find out where a messaging engine is running and connects to it. Configuring the connection factory simply requires the bus name to be specified.

When a client running remotely from the cell, for instance in the client container or a thin client, connects, it does something similar. It obtains a connection factory (usually via a JNDI lookup to an application server) and creates a connection. In this situation, though, the connection factory has no WLM service it can use to locate a suitable messaging engine. Instead the connection factory connects to an application server, and that application server performs the WLM query. This application server is termed a bootstrap server.

In v6.x a bootstrap server is designated simply by configuring the SIB service to be enabled, so all servers that are members of any bus are automatically bootstrap servers. In v7 you can additionally choose to designate other servers as bootstrap members.

The servers the client connects to in order to run the query are configured in the connection factory using a property called provider endpoints. Working out which servers and ports can be used to bootstrap can be a bit of a chore in v6.x: several panels need to be navigated to find the hosts and ports that are available. In v7 we have introduced a new feature to make this much simpler. This is the bootstrap members collection.

This collection can be found on the main bus panel and is titled "Bootstrap Members" (below the Bus Members link). Clicking on it shows the following collection:


This lists each server that can be used as a provider endpoint, including the ports that are available on that server. There is one proviso though: it lists all the ports associated with messaging, including ones that may not be enabled. In the example shown above the ports 7278, 7279 and 7276 are shown, but the bus has been configured to allow only SSL protected transports, so the unsecured ports will not be open. A sketch of using such an endpoint in a connection factory follows.
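
To show where such an endpoint ends up, here is a rough wsadmin (Jython) sketch of creating a JMS connection factory whose provider endpoints name only the secure bootstrap transport. The host names and JNDI names are invented and the port assumes the default secure SIB endpoint of 7286; the provider endpoint format is host:port:transport-chain:

    # Sketch only: a connection factory for remote clients, bootstrapping over
    # the secure transport chain on either of two candidate servers.
    scope = AdminConfig.getid('/Cell:myCell/')
    AdminTask.createSIBJMSConnectionFactory(scope,
        '[-name MyCF -jndiName jms/MyCF -busName MyBus -providerEndPoints '
        'washost1:7286:BootstrapSecureMessaging,washost2:7286:BootstrapSecureMessaging]')
    AdminConfig.save()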

Alasdair