Tuesday, 18 November 2008

A brief introduction to the service integration bus

When Graham started this blog in September there was a definite idea that we would be talking about what is new in v7. It was launched at the same time v7 shipped, so to deny a link would be ludicrous, but it has recently occurred to me that it might be useful just to cover some of the basics. I am basing this on a large number of conversations I have had over the last month or so where I have had to explain the basics to people who had not quite got one of the concepts. So here I go:

What is a bus?

A bus is a number of things, which makes coming up with a single definition hard, but when we talk about a bus we mean all of the following:
  1. A name space for destinations
  2. A cloud in which destinations are defined and to which client applications connect.
  3. A set of interconnected application servers and/or clusters that co-operate to provide messaging function
  4. A set of interconnected messaging engines that co-operate to provide messaging function
While some of these might seem similar they are in fact subtly different, and the differences should become clearer later on (the four statements are not quite equivalent).
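
To make the configuration side of this concrete, here is a minimal wsadmin (Jython) sketch that creates a bus. The bus name is invented and the parameter names are from memory, so check the infocenter before relying on them:

    # Create a new bus; security is disabled here purely for brevity
    AdminTask.createSIBus('[-bus myBus -busSecurity false]')
    AdminConfig.save()

At this point the bus is just a named piece of configuration; it provides no value until bus members (and hence messaging engines) are added.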

What is a destination?

A destination is a point of addressability within the bus. Messages are sent to and received from destinations. There are a number of different types of destination. These are:
  1. A queue. This provides point-to-point messaging capabilities. A message is delivered to exactly one connected client, and messages are broadly processed in first-in, first-out order (message priority can affect this order).
  2. A topic space. This provides publish/subscribe messaging capabilities. A message is delivered to all matching connected clients.
There are other types of destinations, but they are less common, so I have skimmed over those.
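
Both common types are created the same way in wsadmin (Jython). A hedged sketch, with all names invented for the example:

    # A queue, localized to a server bus member, for point-to-point messaging
    AdminTask.createSIBDestination('[-bus myBus -name myQueue -type Queue -node myNode -server server1]')
    # A topic space for publish/subscribe messaging
    AdminTask.createSIBDestination('[-bus myBus -name myTopicSpace -type TopicSpace]')
    AdminConfig.save()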

What is a messaging engine?

A bus is a logical entity and as such provides no value on its own. The "runtime" of the bus is provided by a set of co-operating messaging engines. Messaging engines provide two important functions: clients connect to messaging engines, and messaging engines manage the messages.

What is a bus member?

While messaging engines provide the runtime they are not directly configurable (with one key exception I will cover later). Instead, servers and/or clusters are added to the bus. When a server or cluster is added to the bus, a single messaging engine is created. A server bus member can host at most one messaging engine per bus; a cluster bus member can host several, and that is the only situation in which you can create additional messaging engines yourself.

Destinations are then "assigned" to a bus member, at which point each messaging engine running on that bus member gets something called a message point, which is where the messages are stored.

Multiple servers and clusters can be added to a single bus. This is an important point; several discussions I have had recently suggest it is a common source of confusion. Two different servers or clusters can be added to the same bus. A bus can be as large as the cell in which it is defined; it can be larger than a single cluster.
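
In wsadmin (Jython) terms, adding bus members looks something like this sketch (server, node and cluster names are invented):

    # Add a standalone server as a bus member; one messaging engine is created
    AdminTask.addSIBusMember('[-bus myBus -node myNode -server server1]')
    # Add a cluster as a bus member; again, a single messaging engine is created initially
    AdminTask.addSIBusMember('[-bus myBus -cluster cluster1]')
    AdminConfig.save()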

How does High Availability work?

A certain level of availability is provided just by adding multiple application servers as bus members: a client can connect to any running messaging engine. The problem is that if a messaging engine is not running, the message points it manages are not available. This does not provide an ideal HA story.

If you want HA you use a cluster as a bus member instead. When you add a cluster as a bus member you get one messaging engine which can run on any application server in that cluster. If the server hosting the messaging engine fails, the messaging engine is started in another server in the cluster. This behaviour can be configured using a core group policy.

How does Scalability work?

Scalability also utilizes application server clusters. If you configure multiple messaging engines in a cluster, each messaging engine in the cluster gets a message point for the destinations the cluster manages. We call this a partitioned destination, because each messaging engine only knows about a subset of the messages on the destination.

The upshot of all this is that the workload is shared across multiple servers.
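
If you want to script this, a sketch of adding a second engine to a cluster bus member might look like the following in wsadmin (Jython); names are invented and the command details should be checked against the infocenter:

    # Create an additional messaging engine in an existing cluster bus member,
    # partitioning the destinations assigned to that cluster
    AdminTask.createSIBEngine('[-bus myBus -cluster cluster1]')
    # Confirm which engines the bus now has
    print AdminTask.listSIBEngines('[-bus myBus]')
    AdminConfig.save()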

And finally


So there we have it. The infocenter covers a lot of this in more detail; I have linked the section titles to the appropriate parts of the infocenter for learning more about each topic.

If you have any questions feel free to ask in the comments.
Alasdair

Friday, 17 October 2008

Setting provider endpoints

When a client running in an application server wishes to connect to the bus to do some messaging work, it looks up a connection factory in JNDI and uses it to create a connection. To create a connection, the connection factory queries the workload management (WLM) service to find out where a messaging engine is running and connects to it. Configuring the connection factory simply requires the bus name to be specified.

A client running remotely from the cell, for instance in the client container or as a thin client, does something similar. It obtains a connection factory (usually via a JNDI lookup to an application server) and creates a connection. In this situation, though, the connection factory has no WLM service it can use to locate a suitable messaging engine. Instead, the connection factory connects to an application server and that application server performs the WLM query on its behalf. Such an application server is termed a bootstrap server.

In v6.x a bootstrap server is designated simply by having the SIB service enabled, so all servers that are members of any bus are automatically bootstrap servers. Additionally, in v7 you can also choose to designate servers as bootstrap members.

The servers the client connects to in order to run the query are configured in the connection factory using a property called the provider endpoints. Working out which servers and ports can be used to bootstrap can be a bit of a chore in v6.x: several panels need to be navigated to work out the hosts and ports that are available. In v7 we have introduced a new feature to make this much simpler: the bootstrap members collection.
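
For reference, each provider endpoint is a host:port:transportChain triple, and multiple endpoints are comma-separated. Here is a hedged wsadmin (Jython) sketch of creating a connection factory with endpoints; all names are invented, and the -providerEndPoints parameter name is from memory, so verify it in the infocenter:

    # Two bootstrap servers: one basic and one SSL-protected transport chain
    endpoints = 'host1.example.com:7276:BootstrapBasicMessaging,host2.example.com:7286:BootstrapSecureMessaging'
    scope = AdminConfig.getid('/Cell:myCell/')
    AdminTask.createSIBJMSConnectionFactory(scope, ['-name', 'myCF',
        '-jndiName', 'jms/myCF', '-busName', 'myBus',
        '-providerEndPoints', endpoints])
    AdminConfig.save()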

This collection can be found on the main bus panel and is titled "Bootstrap Members" (below the Bus Members link). Clicking on it shows the following collection:


This lists each server that can be used as a provider endpoint, including the ports that are available on that server. There is one proviso, though: it lists all the ports associated with messaging, including ones that may not be enabled. In the example shown above the ports 7278, 7279 and 7276 are shown, but the bus has been configured to allow only SSL-protected transports, so the unsecured ports will not be open.

Alasdair

Friday, 3 October 2008

Connecting Buses

Suppose you wanted to send a greetings card to someone living in a foreign country. You could address the envelope to "King Alfred, The Broadway, Winchester, England" and post it anywhere in the world. Suppose you did this in the US. The local postman only needs to see the country part of the address to get the envelope heading in the right direction. It'll be conveyed to an airport, from where an aircraft carries it across the Atlantic. On landing in England, it'd reach a sorting office and be routed within the English postal system. The English postie will know where Winchester is. Well, you'd hope so anyway. Each of the US and UK postal systems is its own domain, with knowledge of how to deliver mail to addresses in that domain, and how to route mail addressed to another domain.

This is pretty much how WAS messaging works too. You can link SIBuses together so that messages can be routed from one bus to the other. Why would you want to do this? Why not just create one great big bus? Well, to a limited extent you could try and do this - but eventually you'd run up against one of a number of limits. An SIBus is defined within a cell and can't extend beyond the edge of the cell. There's a limit to how large you want to make your cell - in terms of numbers of nodes and servers and in terms of geographical span or organisational spread. So your bus will only reach as far as the cell that contains it, and if you have an application in that cell that needs to send a message to a remote application - one that is in a different cell - then you're going to need to be able to reach outside of the local cell.

You could use a client connection, but you'd need to know how to route the client connection to an appropriate endpoint, or liaise with the other cell administrator to bridge the core groups between the cells. And you'd need to do this for every application that needs to send such messages, which would be a lot of work, and fragile. It's much more natural to connect your application to the local bus and let the SIBus do the leg-work. Hence it's better to link the buses.

You may have created multiple buses within the same cell, e.g. for traffic separation, and then decide you need to link them together. That's done just the same way as for a pair of buses in different cells.

So how do you do it? You define a foreign bus and you designate a pair of messaging engines, one in each bus, that will link to one another. The foreign bus is a lot like the US postal system knowing that there is a country called England and knowing which air service to route mail to. The link is like the air service, and the messaging engines are like the airports used by the air service. When the link and the foreign bus are defined, an application can send a message addressed to a destination hosted in the foreign bus, and the messaging engines in each bus will take care of routing the message, via the link, to the other bus and ultimately to the destination within that bus.

It's been possible to do this since WAS v6.0. You'd create a definition of a "foreign bus" and a "link" to the other bus. You could create a direct route to a neighbouring bus or an indirect route - meaning that you can reach another bus by routing through an intermediate bus. So you could construct a spanning tree of interconnected buses. The foreign bus support in WAS v6.0 allowed for the fact that the foreign bus could be either another SIBus or a WMQ queue manager. In the latter case protocol conversion is needed at runtime, so you have to create an appropriate type of link - either a SIB-SIB link or a SIB-WMQ link. You could then create the foreign bus definition, including the name of the link.

There were a few difficulties with the support in WAS v6. One difficulty was knowing what objects you needed to create; another was that the creation of these objects required separate wizards or console panels; yet another was that you had to create the link before the foreign bus or there'd be strange fizzing noises and something in the distance would go bang. The separation of the objects was bad for usability. There was also a certain amount of finesse required to know that certain parameters should ideally match the values of certain other parameters.

All in all it was a bit hard, which made it a great opportunity to impress your friends; the bad news is that WAS v7.0 has a simple wizard that makes the task a whole lot easier... even your manager might stand a chance of getting it right. The wizard provides a single starting point in the console for creating a foreign bus connection, whether SIB-SIB or SIB-WMQ. The wizard guides you through the process, prompting for the minimum amount of information and creating all the necessary objects in a consistent manner.

It's not a panacea - some of our more experienced users who got to grips with the early drivers found the wizard rather limiting, because it does hide a lot of the complexity. For them it hid too much; they're used to bashing all the configuration properties into one huge panel. Instead they now have to step through the wizard to create what is effectively a simple object and then go into the detail view of that object to set up some of the more advanced properties, such as SSL. I guess it shows that the mantra of "making simple things easy..." can backfire occasionally, or maybe that you "can't please all the people all the time"!

There are of course wsadmin commands that'll let you do everything in a single action. But we're aware that the wizard is not to everyone's liking and will try and improve upon it next time around.
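
For the script-minded, here is a rough wsadmin (Jython) sketch of the underlying objects for a SIB-SIB connection. The command and parameter names are abridged from memory and all bus, engine and host names are invented, so treat the infocenter as the authoritative reference:

    # Tell the local bus that the foreign bus exists, with a direct route
    AdminTask.createSIBForeignBus('[-bus localBus -name remoteBus -routingType Direct -type SIBus]')
    # Create the link on a messaging engine in the local bus, pointing at the remote bus
    AdminTask.createSIBLink('[-bus localBus -messagingEngine node01.server1-localBus -name linkToRemote -foreignBusName remoteBus -bootstrapEndpoints remotehost:7276:BootstrapBasicMessaging -remoteMessagingEngineName node02.server1-remoteBus]')
    AdminConfig.save()

A matching foreign bus definition and link are needed in the other bus as well, just as both postal systems need to know about the air service.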

Graham

Friday, 26 September 2008

Ensuring XA Recovery works with a secure bus

I have been responsible for security for the service integration bus for a number of years now, and most of the problems I have dealt with over that time have come down to configuration problems. While helping the tenth person to hit a particular problem can be a little frustrating, at least I am not having to explain security flaws. There are a number of very common problems, ranging from typing in the wrong userid and password to not understanding how foreign bus security works. One of the more worrying problems that occurs relates to XA recovery.

The problem goes like this: during normal operation of the environment connections are made to a secure bus, authentication occurs, and messages are sent and received; all is good in the world. Then disaster strikes and for some reason the application server goes down, leaving uncommitted work in the bus. The node agent restarts the application server, which then connects to the bus and performs recovery. At least, that is what should happen. Instead, the connection fails with a JMS SecurityException. The original connection was established, but recovery does not work.

So what went wrong here? During normal operation, when a connection is made to a transactional resource and a transaction is in effect, the connection factory is written into something called the partner log. This contains details of how to connect to all the transactional partners that may be needed during recovery. In this case the connection factory does not contain any information about which security credentials should be used, so no credentials are used, causing this problem.

So if you see this, how do you get the transactions resolved? There are two options, and the first one is preferable:
  1. Grant the special Everyone group access to connect to the bus (see the sketch after this list). Assuming dynamic configuration is enabled, this will allow recovery to work quickly.
  2. Turn security off for the bus until recovery is complete. We generally advise restarting the whole bus, but restarting a single application server may work on a temporary basis.
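
For the first option, a small wsadmin (Jython) sketch; the bus name is invented, and the command comes from the bus security command group, so check the infocenter for exact usage:

    # Grant the special Everyone group the bus connector role
    AdminTask.addGroupToBusConnectorRole('[-bus myBus -group Everyone]')
    AdminConfig.save()
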
So now you have solved the immediate problem and your business is up and running with no in-doubt work, how do you solve it more generally? Every JMS connection factory can have an XA recovery alias specified. This should be configured with a user that has bus connector authority (only bus connector authority is required). Once this has been configured, save, sync and restart; any new JMS work with the bus will then recover with no security problem. At least until the password expires.
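
If you prefer scripting to the console, the alias can be set on the connection factory with something like the following wsadmin (Jython) sketch; the factory name and alias are invented, and the xaRecoveryAuthAlias attribute name is from memory, so verify it in the infocenter:

    # Point the connection factory at an authentication alias to use during XA recovery
    cf = AdminConfig.getid('/J2CConnectionFactory:myCF/')
    AdminConfig.modify(cf, [['xaRecoveryAuthAlias', 'myNode/recoveryAlias']])
    AdminConfig.save()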

In fact the XA recovery alias is not JMS specific; it exists for all JCA resources, so it can help when using WebSphere MQ and DB2 too.

Did I hear someone say "yuck"? You do not like this? Well, to be honest, neither did we. One of the themes of the WAS v7 release was to make it more usable (we use the term "consumable", but who'd want to eat WAS?), so we have tried to simplify here. In WAS v7 the XA recovery alias is no longer required; during recovery the application server will use the WAS server identity to perform recovery. There are a few limitations. The first is that the special Server group needs to have bus connector authority, and the second is that the recovery server must be in the same cell as the bus. Other than that you are good to go. Oh, and do not worry: if you already have an XA recovery alias we will continue to use it unless a problem occurs.

Alasdair

Tuesday, 23 September 2008

WAS V7.0 Cluster Bus Member Wizard




There's a really neat consumability improvement in WAS V7.0 in the shape of the new console wizard that guides you through configuration of a cluster bus member. The new wizard is invoked when you click "Add" to create the bus member for a cluster.

If you've ever added an appserver cluster to a bus, then you'll probably agree that anything other than a simple "HA" setup was pretty hard to configure - if you wanted to create multiple messaging engines to operate in parallel within the cluster then it's likely that you needed to create your own core group policies, and configure their match criteria and properties. If you made a mistake along the way you probably only found out later because the engine wasn't running on the server you expected, or wasn't failing back to a preferred server after recovery. The reason was often that the match criteria contained a typo and the engine was not bound to the policy you intended.

The new cluster bus member wizard in V7.0 spares you from having to worry about any of that, by providing a pattern-based approach to creating the cluster bus member. On launching the wizard, you can select one of a small number of patterns and the wizard then sets up the messaging engines and core group policies with settings that correspond to the pattern. The patterns cover the popular uses; you can select "high availability" if you want a single engine that can fail over, or you can select "scalability" if you want multiple engines. There's also a pattern that provides both of the above.

The new console wizard makes life really easy by saving you from having to enter the core group policies by hand. It also provides feedback on your current configuration and hints about how to improve it. For example, it'll spot that you have asked for a highly available bus member but your whole cluster is running on one node. You can do that if you want, but the node is a single point of failure, and the wizard will detect it and politely suggest that you might want to remove the SPOF.

If your browser supports SVG you even get a visual representation of the cluster showing the nodes and servers and the messaging engines.

The visual aspects of the wizard and the amount of work it saves you make it a pretty useful addition to the console.

Welcome!

This blog is for discussing WebSphere Application Server (WAS) and messaging.

In particular it will cover the service integration technology included in WAS, which is known as the "SIBus" or just "SIB". SIBus provides the default messaging provider for JMS applications in WAS. It also provides an asynchronous transport used by SCA and the WPS/WESB stack products.

Instead of using the default messaging provider, you can configure JMS resources for an alternative JMS provider, such as WebSphere MQ (WMQ). WAS V7.0 includes a JCA 1.5 resource adapter for WMQ and new panels for configuring JMS resources.

We hope you find this blog useful - the authors are all folks who work on or with WAS Messaging and have direct practical experience.