We now require customers to install subscription files for production use, similar to what we do in 6.0.
We've had some bad experiences with subscriptions purchased via resellers or intermediaries that were first used by a development team and then moved on to other teams in production without us being able to keep track. This has led to unpleasant experiences for both our customers and ourselves.
For more details, see this documentation page. You need to install 2 files, as explained on the documentation page mentioned above. Note: development or testing use does not need subscription files.
We now help you find out about known vulnerable maven dependencies that your application is using.
With current hacker efforts moving more and more into the application stack via vulnerable open source dependencies, we notice customers are getting worried (or facing audits of their applications). Log4j, anyone?
Our approach has been two-fold:
Starting in the near future, expect more and more "security updates" in that respect.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
Any spring.jta properties in Spring Boot should now be taken into account.
Previously, spring.jta properties in Spring Boot were not taken into account, which proved confusing to users, so we now honour them.
Any spring.jta properties in Spring Boot should now be taken into account.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
You can now count on the property values from jta.properties to be taken into account even with Spring Boot.
The Atomikos JTA properties implementation in our Spring Boot starter would define default values for many properties, meaning that their values specified in jta.properties were not taken into account. This has now been fixed.
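For example, overrides like the following in jta.properties (the values shown are illustrative; pick values that suit your deployment) should now take effect even with Spring Boot:

```properties
# illustrative overrides; these are standard Atomikos startup properties
com.atomikos.icatch.max_timeout=300000
com.atomikos.icatch.log_base_dir=./transaction-logs
```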
Your properties specified in jta.properties should now work.
| Severity: | 1/2/3/4 |
|---|---|
| Affected version(s): | 5.0.x |
The AtomikosRestPort URL is now set (for transactions across remoting calls).
When the AtomikosRestPort URL was not set, the client template would report a misleading message saying that there was no transaction for the thread. The actual root cause was the missing URL, so we fixed that for you.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
You can now use transitive readOnly remoting transactions in all cases.
This is known as a "diamond case" because the invocation diagram looks like a diamond.
This issue has been fixed in the following way: our product now avoids the readOnly optimisation in this specific scenario. The result is still correct, at a minor performance overhead in the exotic cases where this scenario occurs.
None.
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x |
You can now use @RequiresNew with imported transactions.
Our suspend/resume code had a side effect that changed the local sibling count.
Sibling counts help detect orphaned invocations (and their transactional updates to persistent storage) that arise out of lost replies. For instance, consider this scenario with services A and B:
The risk here is the differing views of A and B regarding the scope of the transaction: A thinks it commits one update at B, whereas B commits two different updates. This can be a problem for data consistency, so we avoid it by keeping sibling counts at both A and B. A constructs its sibling count picture from each result it actually receives in the replies from B. Before commit, A passes on the "count" it has for invocations at B, and if B finds that there is no match then it refuses to commit.
This avoids the problem outlined above, because in step 4 service A will miss a count, so in step 6 service A will pass a count of 1 for service B, whereas B will see 2 and refuse to commit.
In short, sibling counts have their purpose. However, due to a bug, the count was affected by a suspend/resume at service B (when it has @RequiresNew logic inside).
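As a toy illustration of the check described above (not the actual Atomikos implementation), the commit-time decision at B boils down to comparing A's reported count with B's own count:

```java
// Toy sketch of the sibling-count check described above -- not Atomikos code.
// B refuses to commit when the count reported by A differs from what B saw.
class SiblingCountCheck {

    // countReportedByA: invocations A believes it made at B (from received replies)
    // countSeenByB: invocations B actually processed for this transaction
    static boolean canCommit(int countReportedByA, int countSeenByB) {
        return countReportedByA == countSeenByB;
    }
}
```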
You should now be able to configure @RequiresNew on any service that needs it.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
You should now be able to perform any number of JNDI lookups in Tomcat without getting warnings about the resource already existing with the same name.
During JNDI lookups, Tomcat applications would sometimes get warnings like the one below, due to a race condition. This has been fixed.
WARNING: Cannot initialize AtomikosConnectionFactoryBean
java.lang.IllegalStateException: Another resource already exists with name XAConnectionFactory - pick a different name
at com.atomikos.icatch.config.Configuration.addResource(Configuration.java:241)
at com.atomikos.jms.AtomikosConnectionFactoryBean.doInit(AtomikosConnectionFactoryBean.java:440)
at com.atomikos.jms.AtomikosConnectionFactoryBean.init(AtomikosConnectionFactoryBean.java:354)
at com.atomikos.jms.AtomikosConnectionFactoryBean.createConnection(AtomikosConnectionFactoryBean.java:620)
None.
You can now configure a networkTimeout parameter for the pool.
Network issues are a recurring problem for connection pools: a pool attempts to keep connections open, whereas intermediaries on the network tend to close them (silently). In addition, backend servers going down can also invalidate the pool's connections.
These conditions can easily lead to long block times on the pool and its connections and the application thus becomes unresponsive. By setting the new networkTimeout property on our datasource classes you can limit the time that applications can block on the network.
This new feature only works if the underlying driver supports it (leave the property unset if not). Also, any timeout value you configure must be higher than the typical duration of your SQL operations, and it must also be higher than the transaction timeout.
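The ordering constraint above can be expressed as a small sanity check. The helper below (NetworkTimeoutCheck, isSane) is purely illustrative and not part of our API:

```java
// Illustrative helper for the constraint described above -- not part of the Atomikos API.
// The networkTimeout must exceed both the typical SQL duration and the transaction timeout.
class NetworkTimeoutCheck {

    static boolean isSane(long networkTimeoutMs, long typicalSqlDurationMs, long transactionTimeoutMs) {
        return networkTimeoutMs > typicalSqlDurationMs
            && networkTimeoutMs > transactionTimeoutMs;
    }
}
```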
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.107 |
We now log warnings for errors during the prepare phase.
When an error happened during prepare, we used to log it at debug level only. Consequently, some useful information was hard to find, in particular failures due to deferred constraint violations. We now log these as warnings instead.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.107 |
You can now (again) access the javadoc in your IDE.
We encountered a release problem in the 5.0.107 release for which we had to disable the javadoc plugin. This meant that most of that release went undocumented. We have now fixed this.
None, except that the documentation is now included.
We are happy to announce our new Spring Boot starter module, contributed by the Spring (Boot) team! You can now use our 5.0 releases with the latest releases of Spring Boot.
Our 5.0 releases were not compatible with the Spring Boot starter code as it was implemented by the Spring Boot team (and included in the "native" starters for Spring Boot).
So based on a generous contribution from the Spring Boot team and Pivotal, we now have our own starter module for you to use: just add transactions-spring-boot-starter to your pom and off you go. See https://www.atomikos.com/Documentation/SpringBootIntegration for additional details. No other changes: we have preserved the Spring Boot configuration options.
This module has been renamed and part of the code inside has been moved.
| Severity: | 4 |
|---|---|
| Affected version(s): | 4.0.x, 5.0.x |
You can now run our example programs with JDK 11.
Our examples did not build well with JDK 11 due to the new Java module system and the fact that some packages are no longer visible (by default) in the JDK. This has now been fixed.
None: there is a separate maven build profile that activates itself when a recent JDK is found, and tunes the modules accordingly.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.107 |
This release does not include the full javadoc.
The javadoc generation failed during the upload of the release due to incompatibility issues with the Spring Boot starter's integration tests. To avoid additional delays, we have for now uploaded this release without most of the javadoc.
You can now use Hazelcast 4 with our JTA/XA transactions, via an additional module "transactions-hazelcast4". This was required due to breaking API changes that were introduced in Hazelcast 4.
When using the prior Hazelcast integration (made for Hazelcast 3, not 4) you would get the following exception when trying to configure a JTA/XA enabled HazelcastInstance:
java.lang.UnsupportedOperationException: Client config object only supports adding new data structure configurations
at com.hazelcast.client.impl.clientside.ClientDynamicClusterConfig.getLicenseKey(ClientDynamicClusterConfig.java:897)
at com.hazelcast.config.ConfigXmlGenerator.generate(ConfigXmlGenerator.java:129)
at com.atomikos.hazelcast.HazelcastTransactionalResource.<init>(HazelcastTransactionalResource.java:23)
at com.atomikos.hazelcast.AtomikosHazelcastInstance.<init>(AtomikosHazelcastInstance.java:31)
at com.atomikos.hazelcast.AtomikosHazelcastInstanceFactory.createAtomikosInstance(AtomikosHazelcastInstanceFactory.java:17)
at ...
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.104 |
When setting recycleActiveConnectionsInTransaction=true, you can now reuse connections more flexibly.
Consider the following use case with recycleActiveConnectionsInTransaction enabled:
With recycleActiveConnectionsInTransaction=true, c1 will be the same connection instance as c2.
So after method bar() closes c2, c1 will also be closed, which caused errors like these in step 4 of method foo():
The underlying XA session is closed
at com.atomikos.jdbc.internal.AtomikosSQLException.throwAtomikosSQLException(AtomikosSQLException.java:29)
at com.atomikos.jdbc.internal.AtomikosJdbcConnectionProxy.enlist(AtomikosJdbcConnectionProxy.java:108)
at com.atomikos.jdbc.internal.AtomikosJdbcConnectionProxy.updateTransactionContext(AtomikosJdbcConnectionProxy.java:61)
at com.atomikos.jdbc.internal.AbstractJdbcConnectionProxy.prepareStatement(AbstractJdbcConnectionProxy.java:64)
at sun.reflect.GeneratedMethodAccessor228.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.atomikos.util.DynamicProxySupport.callProxiedMethod(DynamicProxySupport.java:162)
at com.atomikos.util.DynamicProxySupport.invoke(DynamicProxySupport.java:116)
at com.sun.proxy.$Proxy801.prepareStatement(Unknown Source)
None.
You can now allow recycling of active JDBC/XA pooled connections within the same transaction, before they are "closed" by the application. This means that certain deadlock scenarios can be avoided.
Imagine the following use case:
Before this feature, step 4 would return a different physical connection from the pool. This would trigger a new XA branch, with unspecified isolation (locking) behaviour with respect to any updates performed via the connection in step 2. This could even cause deadlocks.
Therefore, people have asked us to allow step 4 to reuse the same connection, c1. You can now enable this new behaviour by calling setRecycleActiveConnectionsInTransaction(true) on the AtomikosDataSourceBean.
A new, optional setter on our datasource. The default is false for backward compatibility.
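A toy model of the new behaviour (not the real pool implementation): within the same transaction, a second getConnection() returns the same instance when recycling is enabled:

```java
// Toy illustration of recycleActiveConnectionsInTransaction -- not the real pool.
class ToyPool {

    private final boolean recycleActiveConnectionsInTransaction;
    private Object activeInTransaction; // connection already handed out in this tx

    ToyPool(boolean recycle) {
        this.recycleActiveConnectionsInTransaction = recycle;
    }

    Object getConnection() {
        if (recycleActiveConnectionsInTransaction && activeInTransaction != null) {
            return activeInTransaction; // reuse c1: same XA branch, no deadlock risk
        }
        Object fresh = new Object(); // stands in for a new physical connection
        activeInTransaction = fresh;
        return fresh;
    }
}
```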
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x, 4.0.x |
You now no longer get "Log corrupted - restart JVM" exceptions after you interrupt a thread that is writing to the transaction log file, or after any other exception that makes a log checkpoint fail.
Such failures were not handled properly by the com.atomikos.recovery.fs.CachedRepository class, leaving the instance in an invalid state:
2021-03-01 16:15:56.662 ERROR 41669 --- [pool-1-thread-1] c.a.recovery.fs.FileSystemRepository : Failed to write checkpoint
java.nio.channels.ClosedByInterruptException: null
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) ~[na:1.8.0_192]
at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:392) ~[na:1.8.0_192]
at com.atomikos.recovery.fs.FileSystemRepository.writeCheckpoint(FileSystemRepository.java:196) ~[transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.recovery.fs.CachedRepository.performCheckpoint(CachedRepository.java:84) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.recovery.fs.CachedRepository.put(CachedRepository.java:77) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.recovery.fs.OltpLogImp.write(OltpLogImp.java:46) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.persistence.imp.StateRecoveryManagerImp.preEnter(StateRecoveryManagerImp.java:51) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.finitestates.FSMImp.notifyListeners(FSMImp.java:164) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.finitestates.FSMImp.setState(FSMImp.java:251) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CoordinatorImp.setState(CoordinatorImp.java:284) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CoordinatorStateHandler.commitFromWithinCallback(CoordinatorStateHandler.java:346) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.ActiveStateHandler$6.doCommit(ActiveStateHandler.java:273) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CoordinatorStateHandler.commitWithAfterCompletionNotification(CoordinatorStateHandler.java:587) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.ActiveStateHandler.commit(ActiveStateHandler.java:268) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CoordinatorImp.commit(CoordinatorImp.java:550) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CoordinatorImp.terminate(CoordinatorImp.java:682) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.CompositeTransactionImp.commit(CompositeTransactionImp.java:279) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:168) [transactions-jta-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.jta.TransactionManagerImp.commit(TransactionManagerImp.java:428) [transactions-jta-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.jta.UserTransactionManager.commit(UserTransactionManager.java:160) [transactions-jta-5.0.9-SNAPSHOT.jar:na]
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1035) [spring-tx-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743) [spring-tx-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711) [spring-tx-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152) [spring-tx-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at com.example.atomikos.AtomikosApplicationTests.lambda$4(AtomikosApplicationTests.java:78) [test-classes/:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_192]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_192]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_192]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_192]
Later requests trying to read from the transaction logs would get systematic corruption errors like this:
com.atomikos.recovery.LogReadException: Log corrupted - restart JVM
at com.atomikos.recovery.fs.CachedRepository.assertNotCorrupted(CachedRepository.java:137) ~[transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.recovery.fs.CachedRepository.findAllCommittingCoordinatorLogEntries(CachedRepository.java:145) ~[transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.recovery.fs.RecoveryLogImp.getExpiredPendingCommittingTransactionRecordsAt(RecoveryLogImp.java:52) ~[transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.RecoveryDomainService.performRecovery(RecoveryDomainService.java:76) ~[transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.icatch.imp.RecoveryDomainService$1.alarm(RecoveryDomainService.java:55) [transactions-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.timing.PooledAlarmTimer.notifyListeners(PooledAlarmTimer.java:101) [atomikos-util-5.0.9-SNAPSHOT.jar:na]
at com.atomikos.timing.PooledAlarmTimer.run(PooledAlarmTimer.java:88) [atomikos-util-5.0.9-SNAPSHOT.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_192]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_192]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_192]
This has now been fixed.
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.x |
getActive() in the DataSourceBeanMetadata classes of module transactions-springboot2 now no longer returns the total number of open connections, but rather the number of connections that are currently being used by the application.
Due to a misunderstanding of Spring Boot's semantics, this method returned the wrong result: the total number of open connections in the pool, rather than the number of connections being used. This has now been fixed.
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.x |
You now get correct DataSourcePoolMetadata in Spring Boot, even if one of our datasources is used in wrapped or proxied mode in your Spring Boot runtime.
We used to return metadata in the following style:
if (dataSource instanceof AtomikosDataSourceBean) {
return new AtomikosDataSourceBeanMetadata((AtomikosDataSourceBean) dataSource);
}
(and similar for our AtomikosNonXADataSourceBean class)
This would not work if the supplied dataSource is wrapped or proxied. So we now use the built-in Spring Boot DataSourceUnwrapper.unwrap to handle those cases.
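In sketch form, the lookup now first tries a direct instanceof-style check and then looks through wrappers. The helper below is illustrative only (Spring Boot's DataSourceUnwrapper does the real work, building on the standard java.sql.Wrapper contract):

```java
import java.sql.SQLException;
import java.sql.Wrapper;

// Illustrative sketch of looking through a wrapped/proxied object, in the spirit
// of Spring Boot's DataSourceUnwrapper -- not our actual code.
class UnwrapSketch {

    static <T> T tryUnwrap(Wrapper candidate, Class<T> target) {
        if (target.isInstance(candidate)) {
            return target.cast(candidate); // direct hit, as in the old instanceof check
        }
        try {
            if (candidate.isWrapperFor(target)) {
                return candidate.unwrap(target); // look through the wrapper/proxy
            }
        } catch (SQLException notUnwrappable) {
            // fall through: treat as "not one of ours"
        }
        return null;
    }

    // Minimal demo wrapper used only to exercise the sketch.
    static class DemoWrapper implements Wrapper {
        private final Object delegate;
        DemoWrapper(Object delegate) { this.delegate = delegate; }
        public boolean isWrapperFor(Class<?> iface) { return iface.isInstance(delegate); }
        @SuppressWarnings("unchecked")
        public <T> T unwrap(Class<T> iface) throws SQLException {
            if (isWrapperFor(iface)) return (T) delegate;
            throw new SQLException("not a wrapper for " + iface);
        }
    }
}
```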
None.
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x, 4.0.x, 3.9.x |
The XA implementation of PostgreSQL ignores the transaction timeout, which means that you may have long-lived orphaned SQL sessions in your database server.
The following workarounds are available:
Set the queryTimeout on your JDBC Statement objects, or try setting a server-level timeout like this:
SET SESSION idle_in_transaction_session_timeout = '5min';
If you have any other solution then please let us know - thanks!
You can now choose to disable retrying commit or rollback for heuristic hazard transactions.
Heuristic hazard transactions can arise out of network connectivity issues during the commit phase: if a resource gets a prepare request and subsequently becomes unreachable during commit or rollback then the transaction will go into "heuristic hazard" mode. This essentially means that commit will be retried a number of times, even if com.atomikos.icatch.oltp_max_retries is set to zero. The rationale being: it is better to terminate pending in-doubt transactions sooner rather than later because of the pending locks they may be holding on to.
If you don't want this behaviour then you can now disable this, and rely on the recovery process in the background to take care of it (which also works, but will happen only periodically). To disable, just set this new property to false:
com.atomikos.icatch.retry_on_heuristic_hazard=false
A new startup property that can optionally be set. If not present, it will default to true to preserve compatibility with existing behaviour.
You can now explicitly trigger recovery in your application, via our API.
import com.atomikos.icatch.RecoveryService;
import com.atomikos.icatch.config.Configuration;

boolean lax = true; //false to force recovery, true to allow intelligent mode
RecoveryService rs = Configuration.getRecoveryService();
rs.performRecovery(lax);
In order for this to work, make sure to set (in jta.properties):
# set to Long.MAX_VALUE so background recovery is disabled
com.atomikos.icatch.recovery_delay=9223372036854775807
We have added methods on an existing API interface, which does not break existing clients.
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x |
We no longer call XAResource.recover() when failures happen during the regular commit or rollback, so the overhead for the backend is reduced.
For historical reasons, we used to call the XA recovery routine on the backend whenever commit or rollback failed. The most common cause is network glitches, meaning that big clusters with a short network problem would suddenly hit the backends with recovery for all active transactions. Since recovery can be an expensive operation, this resulted in needless load on the backends.
The rationale behind this was to avoid needless commit retries (based on the value of com.atomikos.icatch.oltp_max_retries), but the overhead does not justify the possible benefit.
From now on we no longer do this, since it is either the recovery process (in the background) or the application (via our API) that controls when recovery happens.
Worst case, this can lead to needless commit retries, in which case the backend should respond with error code XAER_NOTA, which our code handles gracefully. However, we have historical records showing that some older versions of ActiveMQ did not behave like this, resulting in errors in the ActiveMQ log files and, in turn, alerts for the operations team.
If you experience issues with this, then it suffices to set com.atomikos.icatch.oltp_max_retries to zero. That will disable regular commit retries and delegate to the recovery background process.
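In jta.properties form, the workaround above looks like this:

```properties
# disable regular commit retries; the background recovery process takes over
com.atomikos.icatch.oltp_max_retries=0
```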
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x |
For releases 5.0 or higher, the maximum timeout should not be set to 0 or recovery will interfere with regular application-level commits.
The 5.0 release has a new recovery workflow that is incompatible with com.atomikos.icatch.max_timeout being zero. That is because recovery depends on the maximum timeout to perform rollback of pending (orphaned) prepared transactions in the backends. If the maximum timeout is zero then recovery (in the background) will rollback prepared transactions that are concurrently being committed in your application. This will result in heuristic exceptions and inconsistent transaction outcomes.
Keep in mind that the maximum timeout is also indicative of the maximum lock duration in your databases, so choose it wisely! If you are (or were) depending on an unlimited maximum timeout, then you are also allowing unlimited lock times.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
You can now more easily determine when connections are reaped because of another connection timing out on network I/O or DB locks.
We already used to collect the stack trace of the thread that acquired a reaped connection. However, we now also collect the thread name to correlate reap situations with timeouts, for instance like this:
Before this fix, you would see a timeout plus the application's thread name and stack trace for step 2, and a stack trace for step 3. The stack trace would show where in your application the connection was acquired in step 1, but not by which thread. Indeed, step 3 would log the stack trace within the context of the pool maintenance thread, not the original application thread of step 1.
With this fix you will now also see the application's thread name (i.e., the thread of step 2) in step 3 so you can easily correlate 1-2-3 and determine the timeout in 2 as the root cause for the reap.
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 4.0.x, 5.0.x |
You can now easily see whether prepare fails due to either a timeout, or due to a resource-internal issue.
The warning message in the log files now distinguishes between a transaction timeout (where you can increase your timeout settings) and other reasons that don't require changes in timeout settings.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
Concurrent connection pool requests now have less waiting on synchronised code for connections that are already in use.
AtomikosConnectionProxy.isAvailable() has a synchronised block of code that used to be entered every time, for every concurrent request and for every connection. Now, when the connection is already in use, we avoid entering the locked section of code.
This leads to lower contention in high load environments.
None.
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x |
When problematic transaction commits are given up and delegate to recovery, we now also stop the transaction's background timer thread.
Before this change, problematic commits kept retrying in the transaction's background thread. This is generally fine, but at some point the transaction manager gives up and delegates to the background recovery process. However, the transaction's background timer thread would stay active in that case. This has now been fixed.
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.x |
Exceptions when testing or destroying a connection in the pool now clarify that the connection will be replaced with a new one, so you don't have to do anything special.
Before this change, exceptions during testing and/or destroying a pooled connection would have a vague message. With this change, the message now clarifies that the connection will be replaced with a new one. This means your operations team does not have to wonder what to do.
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.x |
com.atomikos.jdbc.AtomikosNonXADataSourceBean now notifies waiting getConnection() requests when a connection in use without a JTA transaction is closed (and becomes eligible for reuse in the pool). This means you will see less frequent waits for borrowConnectionTimeout, so you should see shorter request processing delays.
The com.atomikos.jdbc.internal.JtaUnawareThreadLocalConnection proxy was not firing the event that notifies waiting threads when a connection is returned to the pool, unlike com.atomikos.jdbc.internal.JtaAwareThreadLocalConnection, which did fire it. As a result, with JtaUnawareThreadLocalConnection the waiting threads would exhaust the maximum borrow timeout instead of leasing a connection when notified: clients would notice that waiting threads always reached the configured borrow timeout, even when another thread had returned a usable connection to the pool.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
com.atomikos.spring.BatchingMessageListenerContainer requires an instance of com.atomikos.jms.AtomikosConnectionFactoryBean. If you make a mistake in your wiring, you can now see which (other) connection factory class you tried to set.
Calling com.atomikos.spring.BatchingMessageListenerContainer.setConnectionFactory(connectionFactory) with a wrong argument would merely throw a bare IllegalArgumentException. We have now added the actual class name of the argument to the exception message, so debugging your configuration becomes easier.
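Conceptually, the improved message is built like this hedged sketch (names here are illustrative; the actual Atomikos code may differ):

```java
// Illustrative sketch of the improved exception message -- actual Atomikos code may differ.
class ConnectionFactoryCheck {

    // Builds the message for the IllegalArgumentException, now including the
    // actual class name of the wrongly wired connection factory.
    static String wrongTypeMessage(Object connectionFactory) {
        return "connectionFactory must be a com.atomikos.jms.AtomikosConnectionFactoryBean but was: "
                + connectionFactory.getClass().getName();
    }
}
```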
None.
| Severity: | 3 |
|---|---|
| Affected version(s): | 5.0.x |
Heuristic transaction states in your application no longer lead to infinite retries in the background.
With com.atomikos.icatch.oltp_max_retries=0, the heuristic state handler classes (such as HeurHazardStateHandler) would lead to endless retries in the background. This was because the dispose method in the state handler logic did not delegate to the underlying CoordinatorImp instance, so the timer thread stayed active even after things were left to the background recovery process. The timer thread would trigger the retry mechanism.
None.
| Severity: | 2 |
|---|---|
| Affected version(s): | 5.0.x, 4.0.x |
We now log a warning, rather than repeated errors, when XaResourceTransaction.commit detects that there is no XAResource to use.
Without this fix, you could get into situations like this (which then got logged a LOT due to issue 189273 above):
19/11/2020 15:20:43.467 [Atomikos:3321] ERROR com.atomikos.datasource.xa.XAResourceTransaction - XAResourceTransaction: 31302E3235342E3134362E3131302E746D313539353531353037383735393139373038:31302E3235342E3134362E3131302E746D373030333533: no XAResource to commit?
19/11/2020 15:20:43.467 [Atomikos:3321] ERROR com.atomikos.icatch.imp.CommitMessage - Unexpected error in commit
com.atomikos.icatch.HeurHazardException: XAResourceTransaction: 31302E3235342E3134362E3131302E746D313539353531353037383735393139373038:31302E3235342E3134362E3131302E746D373030333533: no XAResource to commit?
at com.atomikos.datasource.xa.XAResourceTransaction.commit(XAResourceTransaction.java:529)
at com.atomikos.icatch.imp.CommitMessage.send(CommitMessage.java:52)
at com.atomikos.icatch.imp.CommitMessage.send(CommitMessage.java:23)
at com.atomikos.icatch.imp.PropagationMessage.submit(PropagationMessage.java:67)
at com.atomikos.icatch.imp.Propagator$PropagatorThread.run(Propagator.java:63)
at com.atomikos.icatch.imp.Propagator.submitPropagationMessage(Propagator.java:42)
at com.atomikos.icatch.imp.HeurHazardStateHandler.onTimeout(HeurHazardStateHandler.java:71)
at com.atomikos.icatch.imp.CoordinatorImp.alarm(CoordinatorImp.java:650)
at com.atomikos.timing.PooledAlarmTimer.notifyListeners(PooledAlarmTimer.java:95)
at com.atomikos.timing.PooledAlarmTimer.run(PooledAlarmTimer.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19/11/2020 15:20:53.468 [Atomikos:3321] ERROR com.atomikos.datasource.xa.XAResourceTransaction - XAResourceTransaction: 31302E3235342E3134362E3131302E746D313539353531353037383735393139373038:31302E3235342E3134362E3131302E746D373030333533: no XAResource to commit?
We have no report of how the system got into this state, but this can happen on rare occasions when the original connection breaks and a refresh cannot be done.
This is now a warning (since recovery will deal with it in the background) and thanks to the fix for 189273 it will no longer repeat endlessly.
None.
| Severity: | 4 |
|---|---|
| Affected version(s): | 5.0.x |
We no longer log a warning if a heuristic event happens and no event listeners were registered.
Previously, when a heuristic transaction happened, the event would get logged as a warning in a place where it was out of context. This was very confusing, even for our team. We consider this logging obsolete, since the absence of any event listener signals that the application does not care in the first place.
With this in mind, the warning has been removed. Our monitoring extensions should be used for awareness of such events.
None.
You could already make transactions span http remoting calls. With this change, you can now also make transactions span gRPC calls.
You can now ship transaction propagation headers along with gRPC calls, so your transaction commit / rollback scope can span gRPC distributed applications.
com.atomikos.remoting.grpc.TransactionAwareClientInterceptor and com.atomikos.remoting.grpc.TransactionAwareServerInterceptor
Fixed a bug that occurred in certain class loading environments and prevented CallableStatements from being created. This led to errors like this:
java.lang.ClassCastException: com.sun.proxy.$Proxy364 cannot be cast to java.sql.CallableStatement
at com.atomikos.jdbc.internal.AbstractJdbcConnectionProxy.prepareCall(AbstractJdbcConnectionProxy.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.atomikos.util.DynamicProxySupport.callProxiedMethod(DynamicProxySupport.java:162)
at com.atomikos.util.DynamicProxySupport.invoke(DynamicProxySupport.java:116)
at com.sun.proxy.$Proxy64.prepareCall(Unknown Source)
Added an example showing how to configure Micrometer for transaction metrics.
Added an example showing how to configure OpenTracing.
Based on your feedback, we've added / improved the metadata (tags) for JDBC.
We now also show the URI when logging remote participants.
Fixed a bug in the class loading, which showed up in a particular test case (no other occurrences were reported so far).
We now also call stop() on the JMS connection when a container is stopped. This should avoid pending receiver threads in the JMS driver.
We improved shutdown to wait for active timer threads, so there are fewer warnings in Tomcat installations (concerning pending threads and possible memory leaks).
We've reverted a decision we made when releasing 5.0: localTransactionMode should not be false (by default) but rather true. This avoids backward compatibility issues for existing installations that want to upgrade.
We fixed an incompatibility between the way our pooled connections are initialised and a limitation of the Shadow JDBC driver.
We improved distributed recovery for com.atomikos.remoting.taas.RestTransactionServiceImp.
We have improved remoting recovery by automatically adding the URL of the recovery coordinator. Previously this required setting a separate manual startup property.
Abandoned or heuristic transactions should not delete the remote participant information - so remoting recovery can still do its job.
We previously lowered the log level to DEBUG. For customers who don't want a warning, please:
We added debug logs to make it easier to see if batching is working.
When a shared DB is used, all information is present in the DB itself, so remote HTTP calls are not needed. This has been implemented for the recovery of imported transactions.
When batching JMS receipt in one transaction, we already reset the batch on MessageListener exceptions. We now also do this on any rollback.
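The batch-reset behaviour can be sketched with plain Java collections. This is a hypothetical illustration of the idea (accumulate up to a batch size, commit the whole batch as one unit, discard the whole batch on failure) - none of these names are part of the Atomikos API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BatchingSketch {
    // Accumulate up to batchSize messages, then "commit" them as one unit.
    // On any failure the whole batch is discarded (reset), mirroring a rollback.
    static List<List<String>> drainInBatches(Queue<String> source, int batchSize) {
        List<List<String>> committed = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        while (!source.isEmpty()) {
            batch.add(source.poll());
            if (batch.size() == batchSize || source.isEmpty()) {
                try {
                    committed.add(new ArrayList<>(batch)); // commit the whole batch
                } catch (RuntimeException e) {
                    // rollback: reset (drop) the entire batch, not just one message
                }
                batch.clear();
            }
        }
        return committed;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>(List.of("m1", "m2", "m3", "m4", "m5"));
        System.out.println(drainInBatches(q, 2)); // three batches: 2 + 2 + 1
    }
}
```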
We now log (in DEBUG) the current batch size so it's easier to verify that batching is being done for some receivers. The rest will be taken care of in 5.0.94.
Due to some Camel internal default configuration, this example did not yet use batching for receiving JMS messages. That has been fixed.
IMPORTANT: this is still in beta so we welcome your feedback if you want to try it.
We've merged-in a recent contribution that enables transactions across Apache Dubbo RPC calls.
We now check whether DEBUG is enabled before constructing log messages, avoiding logging overhead when DEBUG is off.
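This is the classic guarded-logging pattern; a minimal sketch with java.util.logging (the actual Atomikos logging API differs):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger LOGGER = Logger.getLogger(GuardedLogging.class.getName());

    // Pretend this message is expensive to build (string concatenation, batch inspection...).
    static String expensiveMessage(int batchSize) {
        return "current batch size: " + batchSize;
    }

    public static void main(String[] args) {
        // Guard: only build the message when DEBUG-level (FINE) logging is on,
        // so production runs at INFO pay no formatting cost.
        if (LOGGER.isLoggable(Level.FINE)) {
            LOGGER.fine(expensiveMessage(42));
        }
    }
}
```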
We fixed a bug where the new pools (with concurrent validation) could grow beyond maxPoolSize.
Fixed a bug where remoting would generate URLs with a double slash. This worked fine in our tests, but some HTTP stacks have issues with it.
For customers on a Professional subscription: we've added monitoring for your Splunk / Kibana / Graphite (or other) monitoring tools so you can define SLAs and set alerts, as well as get insights into the health of your distributed transaction application. Contact your support representative if you want to learn more…
For customers on a Professional subscription: we've added commit tracing so you can see all commits for each individual transaction, for each participating resource. Contact your support representative if you want to learn more…
For remoting: we've improved the way that expired in-doubt transactions are handled.
The 5.0 release seemed to have an issue with IBM DB2's currentSchema. While the underlying cause is really some strange behaviour in the IBM connection, we've still fixed this via a workaround in our code.
Added a new example project to show how to configure Camel with XA/Atomikos and our new batching message listener container - all in Spring Boot.
Some people use the big zip file to install our product, and it seems the sources and javadoc were missing from it. This has been fixed.
It is now even simpler to configure batching JMS processing because you no longer have to worry about init / destroy ordering.
We've improved our proxies so we at least log exceptions that can be hidden by 3rd party frameworks.
We've clarified the exception message when you attempt to start a transaction manager with a transaction log that is still in use by a concurrently running instance.
Spring Boot requires a factory for configuring message-driven containers, so we added one.
Disk-full situations can be problematic, so we now check regularly and log warnings when the transaction logs are stored on a disk that could run out of free space soon.
Failing to write to the transaction log is typical for out-of-disk scenarios, so we also log this as fatal.
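A check like this can be done with plain java.io - a simplified sketch of the idea, with a made-up threshold (Atomikos' actual check and threshold may differ):

```java
import java.io.File;

public class DiskSpaceCheck {
    // Hypothetical threshold: warn when less than 512 MB is left on the
    // partition holding the transaction logs.
    static final long THRESHOLD_BYTES = 512L * 1024 * 1024;

    static boolean isLowOnSpace(File logDir) {
        return logDir.getUsableSpace() < THRESHOLD_BYTES;
    }

    public static void main(String[] args) {
        File logDir = new File(System.getProperty("java.io.tmpdir"));
        if (isLowOnSpace(logDir)) {
            System.out.println("WARNING: transaction log disk may run out of space soon");
        } else {
            System.out.println("transaction log disk space OK");
        }
    }
}
```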
Since performance features are exclusive to customers on a Professional subscription or higher, this is not for everyone.
The client did not handle rollback exceptions well, which would give confusing error messages.
You can now share the same DBMS for different microservices - so you don't need to instantiate multiple DB instances.
To upgrade, it is best to do a clean shutdown (in no-force mode) so you are sure that there are no pending transactions.
The following schema changes are required on your end:
The default value should be whatever you have for com.atomikos.icatch.tm_unique_name in your microservice.
The value of com.atomikos.icatch.tm_unique_name is now implicitly part of the primary key columns, so you no longer need to insert it as a separate row.
Made the log files and stack traces less confusing when resolution of a pending in-doubt transaction fails due to connectivity issues.
Fixed a NullPointerException with remoting recovery towards a remote participant that cannot find the transaction in question.
When a transaction timed out, we used to hint at various possible causes in the warning. The warning message is now accurate.
Fixed a bug that could cause problems in recovery when multiple nodes compete for the recovery master role.
Fixed a bug where some pending HTTP remote participants were not recovered correctly.
Fixed a bug that would occur only if classpath issues caused some interfaces not to be found.
Example stack trace:
java.lang.IllegalArgumentException: ... is not an interface
at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:590)
at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:557)
at java.lang.reflect.WeakCache$Factory.get(WeakCache.java:230)
at java.lang.reflect.WeakCache.get(WeakCache.java:127)
at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:419)
at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:719)
at com.atomikos.util.ClassLoadingHelper.newProxyInstance(ClassLoadingHelper.java:75)
We included several tweaks to make logging both more expressive and less verbose. This should make the DEBUG log easier to read and configuration problems somewhat easier to diagnose.
We tuned connection reuse in the pool for repeated REST invocations within the same remote transaction. Users reported a 30% increase in performance thanks to this.
Fixed a bug in the new ignoreJtaTransactions mode: calling rollback is not allowed if the connection is used in autoCommit mode.
We added logging of all event listeners found at startup - so it is easier to diagnose loading issues.
We fixed a bug that prevented this important monitoring event from being published.
A wrong interpretation of seconds made the monitoring overly verbose. We fixed this.
Added an extra safety setting for the LogCloud, so we are absolutely certain to use regular JDBC transactions for logging.
When using and closing the non-XA connections in JTA mode, reusing the same connection in the same transaction was impossible due to the connection being marked as closed. This has been fixed.
Some of the inconsistent logic for non-XA (fixed in 5.0.4) was still around for localTransactionMode, causing connection recycling (i.e., reusing the same connection in the same thread and transaction) to fail. This resulted in 2 connections being used, and 2 participants for the commit (which in turn meant 2-phase commit). The net effect: failing prepare and no commit allowed. We've fixed this.
There was an exception when trying to do the following:
This was due to remaining state from step 1, causing the last step to fail with a ClassCastException. This has been fixed.
In our enthusiasm to improve the non-XA JDBC we accidentally made it inconsistent with respect to the XA JDBC. This has now been fixed: if you set localTransactionMode then JTA transactions are still taken into account, if there is a transaction found for the calling thread.
We've updated the Spring example to be more modern with Java config (instead of XML) and @Transactional annotations.
Upgrading should be easy, and in case you have issues, do the following to restore backward compatibility:
IMPORTANT: since the log file format has changed, make sure to do a clean shutdown first so you can safely remove the existing transaction logs.
Ever heard of ACID transactions being an anti-pattern? Mostly this is because people don't know how to do it. This has now changed!
Check out examples-jta-rest-jaxrs to see how easy it is to commit or rollback across multiple services. Use com.atomikos.icatch.jta.template.TransactionTemplate to demarcate transactions with the known semantics (REQUIRED, REQUIRES_NEW, MANDATORY, NEVER, SUPPORTS, NOT_SUPPORTED and NESTED). Lambdas are supported, too.
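The lambda-friendly demarcation style can be sketched as follows. This is a plain-Java illustration of the template pattern with REQUIRED-like semantics - the actual TransactionTemplate API (method names, exception handling) may differ:

```java
import java.util.function.Supplier;

public class TemplateSketch {
    // REQUIRED-style demarcation: begin, run the work, commit on success,
    // roll back on failure. All names here are illustrative.
    static <T> T required(Supplier<T> work) {
        begin();
        try {
            T result = work.get();
            commit();
            return result;
        } catch (RuntimeException e) {
            rollback();
            throw e;
        }
    }

    static void begin() { /* start or join a transaction */ }
    static void commit() { /* commit it */ }
    static void rollback() { /* roll it back */ }

    public static void main(String[] args) {
        // The transactional work is just a lambda:
        String outcome = required(() -> "order-saved");
        System.out.println(outcome);
    }
}
```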
We no longer throw on heuristic exceptions - because heuristics are an operations issue, not a developer issue. However, if your code depends on heuristics then you can switch on the old behaviour in jta.properties like this:
com.atomikos.icatch.throw_on_heuristic=true
We've made recovery even simpler, so it works better and faster - even in the cloud: your application's transaction processing now merely inserts log records, and all other processing and cleanup is done in the background (by the recovery subsystem). This achieves the ultimate and complete separation of concerns between OLTP logic and recovery logic. The logs are also more compact because we no longer store individual XIDs.
Heuristic problem cases are now mostly archived automatically so they will no longer "pollute" the log files. Your application can capture details with our event API - so you can log whatever you need to deal with such problems.
The LogAdministrator interface has been removed from our API because it is obsolete now that we handle heuristics automatically.
With the 4.0 recovery, multiple connection factories or datasources pointing to the same backend would lead to many recovery calls. Now we filter out duplicates - so each backend is only recovered once during each recovery scan.
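The duplicate filtering amounts to keeping one entry per backend. A stdlib sketch, using a JDBC URL as an illustrative backend key (the real implementation identifies backends differently):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class RecoveryDedupe {
    // Several datasources / connection factories may point at the same backend;
    // keep only the first occurrence so each backend is recovered once per scan.
    static List<String> backendsToRecover(List<String> resourceUrls) {
        return new ArrayList<>(new LinkedHashSet<>(resourceUrls));
    }

    public static void main(String[] args) {
        List<String> urls = List.of("jdbc:db1", "jdbc:db2", "jdbc:db1");
        System.out.println(backendsToRecover(urls)); // each backend appears once
    }
}
```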
We've added support for easy and elegant dynamic proxy creation. This means that we can have better proxies (typically needed for JTA enlist/delist). Our JDBC and JMS modules have been refactored to leverage this new design.
We've simplified our (internal) API based on insights we've collected over the years. Most notably, our API no longer requires Serializable support and the transaction import/export mechanism has also been cleaned up to allow for elegant HTTP / microservice transactions.
The classes for the non-XA datasource have been improved for more safety and flexibility:
Here are some relevant GitHub issues:
Also, the class AtomikosNonXADataSourceBean has now been moved into package com.atomikos.jdbc (alongside our AtomikosDataSourceBean).
We used to allow a mix of local transactions and JTA transactions (depending on whether a JTA transaction existed or not). As of this release, only JTA transactions are allowed for JDBC - unless explicitly overridden by setting "localTransactionMode=true" on any of our datasources.
We no longer keep JDBC statements after they are closed, which means less memory consumption.
For the lambda lovers: you can now use lambdas with the JmsSenderTemplateCallback.
We've included a working sample with Hibernate 5.
We've upgraded the Spring example to use a recent version of Spring.
Some people thought we had a JUL logger dependency in the examples. This has now been removed.
Using Spring Boot (2)? No problem, we support that!
We've added lots of interesting core events to be monitored, and a monitoring module to log these events - so you can use your favourite log analysis tools to monitor your distributed transactions. The transaction ID serves as the correlation ID.
The AllegroGraph graph/NoSQL database system supports XA but not easy JTA enlistment - leaving it to your application to enlist low-level XAResource instances with the JTA transaction. We've added support for this, so you don't have to bother any more: you can now use AllegroGraph with the same level of comfort and ease as you can do regular JDBC or JMS with transactions.
See Configuring AllegroGraph for JTA/XA with Atomikos for details.
Like AllegroGraph, Hazelcast does not support JTA/XA at a high level (so you end up doing XA operations yourself). This has been improved: we now do this for you, behind the scenes.
See Configuring Hazelcast for JTA/XA with Atomikos for details.
We've made the LogCloud better, based on feedback from users of our previous 4.0 release. Logging is now done directly in the DBMS, which simplifies a lot of things for you.
For the full details, see LogCloud Documentation.
Nobody uses RMI any more, so we removed our RMI transactions module. For transactions that span microservices: try REST/HTTP instead.
Ask your support representative for details on how to configure event logging / monitoring.
These modules have now been merged into one module / jar.
The JMX controls for our datasource / connection factory now also show the last reap time.