A team of enterprise architects was designing an SOA infrastructure for a bank I know. The system they were building would be based on interfaces, so that it would be possible to deploy parts of the system as separate instances later on. This was their notion of SOA…
The good news is that their design is based on interfaces, so it is likely to be loosely coupled. The bad news is that this is not SOA, at least not in my view: one of the biggest advantages of SOA - reuse in place - is never realized this way. So whereas this approach to 'SOA' may be loosely coupled in design, it is not loosely coupled in deployment (which is at least as important).
The consequence? Whenever a 'service' is upgraded, they will need to upgrade all the dependent services and redeploy them, because each 'service' is really an embedded module inside other parts of the system.
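To make that distinction concrete, here is a small sketch (the service names are invented for illustration, not taken from the bank's system): the interface gives you loose coupling in design, but if the implementation is compiled and deployed inside every caller, upgrading it still means redeploying them all.

```java
// Loosely coupled in design: callers depend only on this interface.
interface CustomerLookup {
    String findName(long customerId);
}

// But if the implementation ships inside each caller's deployment
// unit, every upgrade of it forces a redeploy of all callers:
// tight coupling in deployment, which is exactly the problem above.
class EmbeddedCustomerLookup implements CustomerLookup {
    public String findName(long customerId) {
        return "customer-" + customerId;
    }
}

public class BillingModule {
    // Compile-time wiring: the "service" is really an embedded module,
    // not something deployed (and upgradable) as a separate instance.
    private final CustomerLookup lookup = new EmbeddedCustomerLookup();

    public String invoiceHeader(long customerId) {
        return "Invoice for " + lookup.findName(customerId);
    }
}
```

Reuse in place would mean the lookup runs as its own deployed instance that callers reach at runtime, so it can be upgraded without touching them.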
I guess this also holds for the debate on cloud vs grid computing: in my view, a cloud is more loosely coupled than a grid in its deployment.
So BPEL is ruled out for me, at least as far as compensation goes. What about WS-BA? It is a step in the right direction, but unfortunately it is a bloated protocol: inefficient and loaded with application-level messages that pollute the compensation part. Even worse, it largely lacks timeout support and depends on BPEL to even trigger compensation.
Also, WS-BA doesn't allow for application logic on close. I won't bother you with the entire spec, but it is like a try..catch…finally where the exception is raised by the client (ugly!) and where the finally block can only be empty! Again, Atomikos TCC is far superior: more efficient and more elegant. It is also a more natural fit for compensation than any BPEL engine will ever be.
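To illustrate the contrast, here is a hypothetical try-confirm-cancel participant and coordinator (the names are illustrative only, not the actual Atomikos TCC API): the coordinator either confirms everyone, or cancels exactly the participants that already did tentative work, with no error paths leaking into the business process model.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical TCC-style participant; method names are invented for
// this sketch, not taken from the Atomikos product.
interface TccParticipant {
    void tryWork();  // tentatively reserve the business effect
    void confirm();  // make the tentative work permanent
    void cancel();   // compensate: undo the tentative work
}

public class TccSketch {
    // Confirm all participants, or cancel the ones that already
    // performed tentative work if anything fails along the way.
    static void coordinate(List<TccParticipant> participants) {
        List<TccParticipant> tried = new ArrayList<>();
        try {
            for (TccParticipant p : participants) {
                p.tryWork();
                tried.add(p);
            }
            for (TccParticipant p : tried) p.confirm();
        } catch (RuntimeException e) {
            // Compensation happens here, once, in the coordinator -
            // not modeled over and again in every process definition.
            for (TccParticipant p : tried) p.cancel();
            throw e;
        }
    }
}
```

Note how the caller never raises the "exception" itself and how the close path can carry real application logic, the two things the WS-BA shape above gets wrong.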
One last note on BPEL and this supposed "modeling the compensation in the business process": I was talking to an IBM architect the other day. He said they were doing a large telco project with BPEL to co-ordinate things. One of the things he complained about was exactly this: they had to model the compensation and error logic as explicit workflow paths, and it was literally overloading everything with complexity. As he correctly put it, they were implementing a transaction manager at the business logic (BPEL) level, over and again in every process model. Moreover, this complexity was hard to test, and it was virtually killing the project - especially once change requests came in. I believe him:-) I gave him the URL to our TCC article above. Atomikos and TCC allow you to focus on the happy path of your workflow models; we take care of the rest. Now imagine what a reduction in complexity that is, and how much more reliable things get! So no, compensation should NOT be modeled at the business level - except perhaps on rare occasions.

With its simplicity, REST also leverages the ubiquitous HTTP protocol as the underlying mechanism. More and more people seem to like this, including me.
However, the big question for me is: how do you make this reliable? Imagine that you integrate 4 systems in a REST style. You would be using HTTP and a synchronous invocation mechanism for each service. Now comes the question: how reliable is this? The answer: less than the least reliable system that you are using! More precisely, availability goes down quickly because your aggregated service fails as soon as one of the services fails…
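A quick back-of-the-envelope calculation shows how fast availability degrades (the 99% figure is just an assumed per-service availability for the sake of the example):

```java
public class AggregateAvailability {
    // In a synchronous composition, every service must be up at the
    // same time, so the individual availabilities multiply.
    static double aggregate(double perService, int services) {
        return Math.pow(perService, services);
    }

    public static void main(String[] args) {
        // Four services at 99% each -> roughly 96% for the aggregate:
        // the composition is less available than any single part.
        System.out.printf("%.4f%n", aggregate(0.99, 4));
    }
}
```

So even with quite respectable individual services, the aggregate loses several "nines" worth of uptime simply by being synchronous.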
With transports like JMS you can improve reliability, but how do you do REST over JMS, given REST's close relationship with HTTP and URLs? That is the problem with REST for me.
Let me explain what I mean. The idea in SOA is that you define more or less independent services that correspond (hopefully) to clearly defined and business-related activities. For instance, you could have a customer management service and a payment/invoicing service. The customer management service belongs to CRM, the invoicing to the billing department. However, both of these services might need the same customer data. Now what do you do? Basically, you have the following options:
So what do you do? My preference tends to go to the second option. However, it means that realistic SOA architectures are likely to have an event-driven nature.
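As a rough illustration of that event-driven style (an in-memory stand-in for what a real system would do over a transport like JMS; the event and class names are invented), each department keeps its own copy of the customer data and updates it when a change event is published:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory event bus standing in for a real messaging
// transport such as JMS.
public class CustomerEvents {
    private final List<Consumer<Map.Entry<Long, String>>> subscribers =
            new ArrayList<>();

    public void subscribe(Consumer<Map.Entry<Long, String>> s) {
        subscribers.add(s);
    }

    // CRM publishes a "customer changed" event rather than letting
    // other departments reach into its data store directly.
    public void publishCustomerChanged(long id, String name) {
        for (Consumer<Map.Entry<Long, String>> s : subscribers) {
            s.accept(Map.entry(id, name));
        }
    }
}

// Billing maintains its own local copy of the customer data,
// kept current by subscribing to the events.
class BillingCache {
    final Map<Long, String> customers = new HashMap<>();

    BillingCache(CustomerEvents bus) {
        bus.subscribe(e -> customers.put(e.getKey(), e.getValue()));
    }
}
```

The services stay independent - billing never calls into CRM at invoicing time - which is exactly what gives the architecture its event-driven nature.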
Why our technology is so different from what you know…