Overview
When it comes to data integrity, the classic example people usually mention is the bank transfer scenario: Alice sends money to Bob, and Bob sends money back to Alice. I have also read that @Transactional is the solution for such cases.
However, is the @Transactional annotation really enough to make Alice and Bob happy?
Let’s find out.
Scenario
Let’s say we operate a bank and there are three clients with the following balances:
| Alice | Bob | Jack |
| ----- | --- | ---- |
| 100 | 200 | 300 |
To represent the bank account, we have this class:
@Entity
public class BankAccount {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private Double balance;

    private String owner;

    // getters and setters omitted (e.g., via Lombok @Data)
}
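The code below also uses a bankAccountRepository, which the article doesn't show. A plain Spring Data interface should be all it needs, since findById, save, and findAll all come from JpaRepository:

public interface BankAccountRepository extends JpaRepository<BankAccount, Long> {
}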
On app start-up, let’s insert the records into the database:
public static void main(String[] args) {
    SpringApplication.run(SpringDataApplication.class, args);
}

@Override
public void run(String... args) throws Exception {
    // create three bank accounts
    var alice = new BankAccount();
    alice.setOwner("Alice");
    alice.setBalance(100.0);

    var bob = new BankAccount();
    bob.setOwner("Bob");
    bob.setBalance(200.0);

    var jack = new BankAccount();
    jack.setOwner("Jack");
    jack.setBalance(300.0);

    bankAccountRepository.save(alice);
    bankAccountRepository.save(bob);
    bankAccountRepository.save(jack);

    log.info("Bank accounts: {}", bankAccountRepository.findAll());
}
Implement the transfer method
The transfer method is a method inside the bank transfer service. There is nothing fancy here:
@Service
@RequiredArgsConstructor
@Slf4j
public class BankTransferService {

    private final BankAccountRepository balanceRepository;

    @Transactional
    public void transfer(long from, long to, double amount) {
        var fromAccount = balanceRepository.findById(from).orElseThrow();
        var toAccount = balanceRepository.findById(to).orElseThrow();

        // validate balance
        if (fromAccount.getBalance() < amount) {
            throw new IllegalArgumentException("Insufficient balance");
        }

        // update balances
        fromAccount.setBalance(fromAccount.getBalance() - amount);
        toAccount.setBalance(toAccount.getBalance() + amount);

        balanceRepository.save(fromAccount);
        balanceRepository.save(toAccount);

        log.info("Transfer from {} to {} for amount {}", from, to, amount);
    }
}
You can see that the method is annotated with @Transactional. That means all operations in this method either succeed together or are rolled back together: if the balance check throws, neither account is updated.
Lastly, let’s create some code to stress test the transfer to check for data integrity:
@PostMapping("/heavy")
public String heavyTransfer() throws InterruptedException {
    var executors = Executors.newFixedThreadPool(5);
    for (int i = 0; i < 500; i++) {
        var from = getRandomFrom();
        var to = getRandomTo(from);
        var amount = new Random().nextInt(1, 20);
        log.info("Transfer from {} to {} for amount {}", from, to, amount);
        executors.submit(() -> transferService.transfer(from, to, amount));
    }
    executors.shutdown();
    executors.awaitTermination(2, java.util.concurrent.TimeUnit.MINUTES);
    return "Done";
}

private long getRandomFrom() {
    return new Random().nextLong(1, 4);
}

private long getRandomTo(long from) {
    return from == 2L ? (new Random().nextBoolean() ? 1L : 3L) : 2L;
}
Here, we have a thread pool of size 5 and submit 500 transfer requests to it. The helper methods getRandomFrom and getRandomTo randomly generate the from and to account ids, making sure the two are never the same.
The amount for each transfer is somewhere between 1 and 19 (nextInt's upper bound is exclusive).
Let’s observe the code in action.
The panel on the left is DBeaver, which I used to view the data, with the refresh rate set to 1 second. As you can see, the sum changed even though we only transferred money between bank accounts; transfers alone should keep the total constant at 600.
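If you prefer to verify this in code rather than watching DBeaver, one option is to sum the balances before and after the stress test. The sumBalances query method and the /sum endpoint below are hypothetical additions of mine, not part of the original app:

// added to the repository interface shown earlier
public interface BankAccountRepository extends JpaRepository<BankAccount, Long> {

    @Query("select sum(b.balance) from BankAccount b")
    Double sumBalances();
}

// a quick endpoint to check the invariant: the total must stay at 600.0
@GetMapping("/sum")
public String sum() {
    return "Total balance: " + bankAccountRepository.sumBalances();
}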
This is a huge issue for a bank.
How to fix this?
Fixing data integrity issues
You can see that slapping @Transactional on your methods or classes doesn’t guarantee your app’s correctness. There are a few ways to fix the previous issue. I will provide two here.
Use the right isolation level
Remember ACID? Isolation is the I in that acronym. Quoting Wikipedia:
It determines how transaction integrity is visible to other users and systems. A lower isolation level increases the ability of many users to access the same data at the same time, but also increases the number of concurrency effects (such as dirty reads or lost updates) users might encounter. Conversely, a higher isolation level reduces the types of concurrency effects that users may encounter, but requires more system resources and increases the chances that one transaction will block another.
With a low isolation level, you are more likely to experience concurrency issues. With a high isolation level, you trade performance for the integrity of your data.
So, what are the isolation levels? There are four:
- read uncommitted
- read committed
- repeatable read
- serializable
For a deeper explanation of what these isolation levels are and why they exist, you can read here.
When using @Transactional, if you don't specify the isolation level, Spring uses Isolation.DEFAULT, which falls back to the default level of the underlying database. The default level for MySQL (InnoDB) is REPEATABLE READ, while PostgreSQL's default is READ COMMITTED.
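If you want to double-check the default on your own database, both can report it (these are standard commands, shown here for reference):

-- PostgreSQL
SHOW default_transaction_isolation;

-- MySQL 8+
SELECT @@transaction_isolation;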
READ COMMITTED is not enough to protect your data when several transactions modify the same rows concurrently, but REPEATABLE READ is.
So, if I specify the isolation level as REPEATABLE READ, the issue is fixed.
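In code, that is a one-attribute change on the transfer method (Isolation comes from org.springframework.transaction.annotation; the body stays exactly as before):

@Transactional(isolation = Isolation.REPEATABLE_READ)
public void transfer(long from, long to, double amount) {
    // ... same body as before
}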
Why does setting the isolation level to REPEATABLE READ prevent the issue? Let's read the definition from Spring's documentation:
A constant indicating that dirty reads and non-repeatable reads are prevented; phantom reads can occur.
This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction re-reads the row, getting different values the second time (a “non-repeatable read”).
The cause of our issue is exactly the non-repeatable read described above, and its consequence is a lost update: for example, transaction T1 reads Alice's balance of 100 and subtracts 20, while transaction T2 concurrently reads the same 100 and adds 10; whichever write commits last silently overwrites the other, leaving 80 or 110 instead of the correct 90. Under REPEATABLE READ (on databases such as PostgreSQL), a transaction that tries to update a row already changed by a concurrent transaction fails with a serialization error instead of silently losing the update.
You can set the isolation level to a higher value (SERIALIZABLE). However, why pay a higher performance penalty without gaining anything more?
Use optimistic locking
Another way you can address this issue is to use optimistic locking by setting a version field in the bank account entity.
@Entity
public class BankAccount {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private Double balance;

    private String owner;

    @Version // optimistic locking version field
    private Long version;

    // getters and setters omitted (e.g., via Lombok @Data)
}
By adding a field annotated with @Version, the entity gets a numeric column that is incremented on every update. At the beginning of the transaction, the entity is read with some version, say x1. When the entity is written back to the database, if the version stored there is no longer x1, the transaction fails and all changes are discarded.
Essentially, the update query becomes:

update bank_account set balance=?, owner=?, version=x1+1 where id=? and version=x1

If another transaction has already changed the version to a value other than x1, this query matches no rows, so the optimistic lock check fails and the update is rejected.
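One practical consequence: with optimistic locking, conflicting transfers now fail instead of corrupting data, so under heavy contention the caller usually retries. A minimal sketch of such a retry, catching Spring's translated OptimisticLockingFailureException (the wrapper method and retry count are my own illustration, not part of the original code):

// Hypothetical wrapper around the existing transfer method: retries when a
// concurrent transaction already bumped the version and our update matched no rows.
public void transferWithRetry(long from, long to, double amount) {
    int maxAttempts = 3; // illustrative value
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            transferService.transfer(from, to, amount);
            return; // success
        } catch (org.springframework.dao.OptimisticLockingFailureException e) {
            log.warn("Optimistic lock conflict, attempt {}/{}", attempt, maxAttempts);
            if (attempt == maxAttempts) {
                throw e; // give up after the last attempt
            }
        }
    }
}

Note that the retry has to live outside the @Transactional method: once a transaction fails on a version conflict, it must be restarted from scratch so the balances are re-read.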
Conclusion
As you can see, using @Transactional alone doesn't protect you from lost updates. Choosing the right isolation level or locking strategy is the key.
