Java gives you more than one way to coordinate access to shared state. Most developers start with synchronized and never look further — but understanding the full lock landscape, especially ReentrantLock, unlocks (pun intended) a set of capabilities that can be critical for production-grade concurrent code.
This post covers the full picture: from the JVM memory model guarantees that make locks work, to practical patterns with ReentrantLock, ReadWriteLock, and StampedLock.
Why Locks Exist
Threads share heap memory. Without coordination, two threads can interleave reads and writes in ways that produce results neither thread intended — a race condition. The JVM’s memory model (JMM) doesn’t guarantee that a write by thread A is visible to thread B unless a happens-before relationship exists between them.
Locks establish happens-before. When thread A releases a lock and thread B subsequently acquires it, everything A did before the release is visible to B after the acquire. This is the foundational guarantee all Java synchronization primitives build on.
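This guarantee can be made concrete with a minimal sketch (class and field names are mine, chosen for illustration): thread A writes under the lock and releases it; because main acquires the same lock afterwards, the write is guaranteed visible.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal happens-before sketch: A's write to `data` before unlock()
// is visible to any thread that subsequently acquires the same lock.
public class VisibilityDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int data = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            lock.lock();
            try {
                data = 42;        // write happens before the release
            } finally {
                lock.unlock();    // release publishes the write
            }
        });
        a.start();
        a.join();                 // ensure A released the lock first

        lock.lock();              // acquire establishes happens-before with A's release
        try {
            System.out.println("B sees data = " + data);
        } finally {
            lock.unlock();
        }
    }
}
```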
The synchronized Baseline
Before ReentrantLock, you used synchronized:
public class Counter {
private int count = 0;
public synchronized void increment() {
count++;
}
public synchronized int get() {
return count;
}
}
synchronized is simple, but it has limitations:
- No try-lock: You either block until you get the lock, or you don’t try at all.
- No timeout: A thread waiting on a lock can wait forever.
- No interruptibility: You can’t interrupt a thread blocked on synchronized.
- No fairness control: The JVM picks which waiting thread gets the lock — typically not FIFO.
- Single condition: You get one implicit condition queue per lock. Need multiple? You’re out of luck.
These gaps are exactly what ReentrantLock was designed to fill.
Introducing ReentrantLock
ReentrantLock is part of java.util.concurrent.locks, introduced in Java 5. It implements the Lock interface and provides all the functionality synchronized lacks.
Basic Usage
import java.util.concurrent.locks.ReentrantLock;
public class Counter {
private int count = 0;
private final ReentrantLock lock = new ReentrantLock();
public void increment() {
lock.lock();
try {
count++;
} finally {
lock.unlock();
}
}
public int get() {
lock.lock();
try {
return count;
} finally {
lock.unlock();
}
}
}
The try/finally block is not optional. If you call lock.lock() and your code throws before unlock(), the lock is never released — a deadlock waiting to happen.
Why “Reentrant”?
A lock is reentrant when the thread that holds it can acquire it again without blocking itself. Both synchronized and ReentrantLock are reentrant.
public void outer() {
lock.lock();
try {
inner(); // safe — same thread acquiring again
} finally {
lock.unlock();
}
}
public void inner() {
lock.lock();
try {
// do work
} finally {
lock.unlock();
}
}
ReentrantLock tracks the hold count. Each lock() increments it; each unlock() decrements it. The lock is released to other threads only when the count reaches zero. This means mismatched lock/unlock calls are a bug — guard against it with try/finally.
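The hold count can be observed directly with getHoldCount(). A minimal sketch (class and method names are mine):

```java
import java.util.concurrent.locks.ReentrantLock;

// Demonstrates the hold count rising to 2 during a reentrant acquisition
// and the lock being fully released once the count returns to zero.
public class HoldCountDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();
        try {
            System.out.println("outer hold count = " + lock.getHoldCount());
            inner();
        } finally {
            lock.unlock();
        }
    }

    static void inner() {
        lock.lock();              // same thread: does not block
        try {
            System.out.println("inner hold count = " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        outer();
        System.out.println("held after return? " + lock.isHeldByCurrentThread());
    }
}
```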
ReentrantLock Key Features
1. Try-Lock (Non-Blocking Acquisition)
if (lock.tryLock()) {
try {
// we got the lock
} finally {
lock.unlock();
}
} else {
// lock is held by someone else — do something else
}
Useful for avoiding deadlocks in situations where you need two locks and want to back off if you can’t get both.
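One way that back-off can look in practice (a sketch; the helper is hypothetical): acquire the first lock, attempt the second, and release everything if the second attempt fails so another thread can make progress.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: acquire both locks or neither. On failure the caller can
// retry, ideally after a small randomized pause to avoid livelock.
public class TryBothDemo {
    static boolean tryBoth(ReentrantLock a, ReentrantLock b) {
        if (a.tryLock()) {
            try {
                if (b.tryLock()) {
                    try {
                        return true;  // critical section holding both would go here
                    } finally {
                        b.unlock();
                    }
                }
            } finally {
                a.unlock();           // back off: release the first lock too
            }
        }
        return false;
    }

    public static void main(String[] args) {
        ReentrantLock a = new ReentrantLock(), b = new ReentrantLock();
        System.out.println("acquired both? " + tryBoth(a, b));
    }
}
```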
2. Try-Lock with Timeout
// Note: the timed tryLock can throw InterruptedException; declare or handle it.
if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
try {
// acquired within 500ms
} finally {
lock.unlock();
}
} else {
// timeout expired
}
This is invaluable in systems where lock contention has SLA implications. A timeout lets you fail fast or retry rather than blocking indefinitely.
3. Interruptible Lock Acquisition
try {
lock.lockInterruptibly();
try {
// critical section
} finally {
lock.unlock();
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
// handle cancellation
}
lockInterruptibly() allows a thread waiting to acquire the lock to be interrupted via Thread.interrupt(). This is essential for responsive cancellation in long-lived services.
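Here is a small end-to-end sketch of that cancellation path (class name is mine): the main thread holds the lock, a worker blocks in lockInterruptibly(), and interrupt() unblocks it.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a worker blocked waiting for the lock is cancelled via
// Thread.interrupt() instead of waiting forever.
public class InterruptibleDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main holds the lock for the whole demo

        Thread worker = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // blocks: main holds the lock
                try {
                    System.out.println("worker got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("worker cancelled while waiting");
            }
        });

        worker.start();
        TimeUnit.MILLISECONDS.sleep(100); // let the worker block on the lock
        worker.interrupt();               // cancel the wait
        worker.join();
        lock.unlock();
    }
}
```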
4. Fairness
// Fair lock — threads acquire in roughly FIFO order
ReentrantLock fairLock = new ReentrantLock(true);
// Unfair lock (default) — better throughput, possible starvation
ReentrantLock unfairLock = new ReentrantLock(false);
A fair lock prevents starvation: no thread waits indefinitely while others keep jumping the queue. The tradeoff is throughput — the JVM can’t do barging (allowing a newly-arrived thread to steal the lock from the head of the wait queue), so overall throughput drops.
Unless you have a measurable starvation problem, use the default unfair mode.
5. Lock Introspection
ReentrantLock exposes diagnostic methods useful during testing or debugging:
lock.isLocked();              // is anyone holding the lock?
lock.isHeldByCurrentThread(); // does this thread hold it?
lock.getHoldCount();          // how many times has this thread locked it?
lock.getQueueLength();        // estimate of threads waiting
lock.hasQueuedThreads();      // is anyone waiting?
These have no equivalent with synchronized.
Condition Variables
ReentrantLock supports multiple Condition objects, each with its own wait queue. This replaces the single wait()/notify()/notifyAll() mechanism on Object.
Bounded Buffer (Producer-Consumer)
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
public class BoundedBuffer<T> {
private final Queue<T> buffer = new LinkedList<>();
private final int capacity;
private final ReentrantLock lock = new ReentrantLock();
private final Condition notFull = lock.newCondition();
private final Condition notEmpty = lock.newCondition();
public BoundedBuffer(int capacity) {
this.capacity = capacity;
}
public void put(T item) throws InterruptedException {
lock.lock();
try {
while (buffer.size() == capacity) {
notFull.await();
}
buffer.add(item);
notEmpty.signal();
} finally {
lock.unlock();
}
}
public T take() throws InterruptedException {
lock.lock();
try {
while (buffer.isEmpty()) {
notEmpty.await();
}
T item = buffer.poll();
notFull.signal();
return item;
} finally {
lock.unlock();
}
}
}
Key observations:
- await() atomically releases the lock and suspends the thread.
- signal() wakes one waiter; signalAll() wakes all. Prefer signal() when only one waiter can make progress (as above).
- Always re-check the condition in a while loop, not if — spurious wakeups are possible.
- notFull and notEmpty are separate queues. Signaling one doesn’t wake the other — a huge advantage over notifyAll() on a single object.
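To see the two conditions cooperating, here is a compact, runnable sketch of the same pattern inlined with a capacity of 1 (so the producer must wait for the consumer after every put); the class name is mine:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Capacity-1 handoff: the producer blocks on notFull between items,
// and the consumer blocks on notEmpty until an item arrives.
public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        Queue<Integer> buffer = new LinkedList<>();
        int capacity = 1;
        ReentrantLock lock = new ReentrantLock();
        Condition notFull = lock.newCondition();
        Condition notEmpty = lock.newCondition();

        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                lock.lock();
                try {
                    while (buffer.size() == capacity) notFull.await();
                    buffer.add(i);
                    notEmpty.signal();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                } finally {
                    lock.unlock();
                }
            }
        });
        producer.start();

        for (int i = 0; i < 3; i++) {        // consume on the main thread
            lock.lock();
            try {
                while (buffer.isEmpty()) notEmpty.await();
                System.out.println("took " + buffer.poll());
                notFull.signal();
            } finally {
                lock.unlock();
            }
        }
        producer.join();
    }
}
```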
ReadWriteLock and ReentrantReadWriteLock
When reads heavily outnumber writes, exclusive locking wastes concurrency — two simultaneous readers can’t interfere with each other, so there’s no reason to serialize them.
ReentrantReadWriteLock gives you a read lock (shared) and a write lock (exclusive):
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class Cache<K, V> {
private final Map<K, V> map = new HashMap<>();
private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
private final ReentrantReadWriteLock.ReadLock readLock = rwLock.readLock();
private final ReentrantReadWriteLock.WriteLock writeLock = rwLock.writeLock();
public V get(K key) {
readLock.lock();
try {
return map.get(key);
} finally {
readLock.unlock();
}
}
public void put(K key, V value) {
writeLock.lock();
try {
map.put(key, value);
} finally {
writeLock.unlock();
}
}
}
Rules:
- Multiple threads can hold the read lock simultaneously.
- The write lock is exclusive — no readers, no other writers.
- A thread holding the write lock can downgrade to a read lock (acquire read, then release write). The reverse (read → write upgrade) is not supported and will deadlock.
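The downgrade sequence looks like this in code (a sketch; class and key names are mine). Acquiring the read lock before releasing the write lock closes the window in which another writer could sneak in:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Lock downgrading: mutate under the write lock, acquire the read lock,
// release the write lock, then continue reading under the read lock.
public class DowngradeDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.writeLock().lock();
        try {
            map.put("answer", 42);   // mutate under the write lock
            rw.readLock().lock();    // downgrade: take read while still writing
        } finally {
            rw.writeLock().unlock(); // release write; read lock still held
        }
        try {
            System.out.println("read under read lock: " + map.get("answer"));
        } finally {
            rw.readLock().unlock();
        }
    }
}
```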
When to use it: High read/write ratio with relatively coarse-grained operations. For fine-grained concurrent map access, prefer ConcurrentHashMap.
StampedLock (Java 8+)
StampedLock takes the read/write idea further with an optimistic read mode — a way to read without acquiring any lock, then validate after:
import java.util.concurrent.locks.StampedLock;
public class Point {
private double x, y;
private final StampedLock sl = new StampedLock();
public void move(double deltaX, double deltaY) {
long stamp = sl.writeLock();
try {
x += deltaX;
y += deltaY;
} finally {
sl.unlockWrite(stamp);
}
}
public double distanceFromOrigin() {
long stamp = sl.tryOptimisticRead();
double curX = x, curY = y;
if (!sl.validate(stamp)) {
// a write happened — fall back to real read lock
stamp = sl.readLock();
try {
curX = x;
curY = y;
} finally {
sl.unlockRead(stamp);
}
}
return Math.sqrt(curX * curX + curY * curY);
}
}
The optimistic read is essentially free if no write is in progress. The validate() call checks whether any write occurred between the read and the validation point. If it did, you retry with a proper read lock.
Critical caveats:
- StampedLock is not reentrant. Don’t call write from write, or you’ll deadlock.
- It has no Condition support.
- It’s harder to use correctly — optimistic reads must be carefully scoped.
Use StampedLock when profiling shows ReentrantReadWriteLock is a bottleneck, read sections are short, and you can afford the extra code complexity.
Lock Selection Guide
| Scenario | Recommended |
|---|---|
| Simple mutual exclusion | synchronized |
| Need timeout or interruptibility | ReentrantLock |
| Need multiple condition queues | ReentrantLock + Condition |
| High read/write ratio | ReentrantReadWriteLock |
| Extreme read throughput, short critical sections | StampedLock |
| Concurrent map/list operations | ConcurrentHashMap / CopyOnWriteArrayList |
Common Pitfalls
Forgetting finally
// BUG: if doWork() throws, lock is never released
lock.lock();
doWork();
lock.unlock();
// Correct
lock.lock();
try {
doWork();
} finally {
lock.unlock();
}
Mismatched Hold Count
lock.lock();
lock.lock(); // hold count = 2
try {
// ...
} finally {
lock.unlock(); // hold count = 1 — still locked!
}
If you lock n times, you must unlock n times. When using reentrancy, track this carefully.
Deadlock from Lock Ordering
// Thread 1: locks A then B
// Thread 2: locks B then A
// → deadlock
Always acquire locks in a consistent global order. ReentrantLock.tryLock() with backoff is the other classic remedy.
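One way to impose a global order is to sort the locks by an arbitrary but stable key before acquiring them; this is a sketch (class and helper names are mine) using System.identityHashCode, which can in theory collide, so production code would add a tie-breaker:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: every thread acquires the two locks in the same global order,
// determined by identity hash, so circular waits cannot form.
public class OrderedLocking {
    static void withBoth(ReentrantLock a, ReentrantLock b, Runnable work) {
        ReentrantLock first  = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        ReentrantLock second = (first == a) ? b : a;
        first.lock();
        try {
            second.lock();
            try {
                work.run();           // critical section holding both locks
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) {
        ReentrantLock x = new ReentrantLock(), y = new ReentrantLock();
        withBoth(x, y, () -> System.out.println("both locks held, in global order"));
    }
}
```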
Holding Locks Too Long
A thread blocked on I/O while holding a lock will block all contenders for the duration. Keep critical sections as short as possible — do I/O outside the lock, capture state inside, process outside.
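The capture-then-process pattern can be sketched like this (class and field names are mine): take a snapshot of the shared state while holding the lock, release it, and do the slow work on the snapshot.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: keep the critical section down to a defensive copy;
// slow work (formatting, I/O) happens after unlock().
public class SnapshotDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final List<String> events = new ArrayList<>(List.of("a", "b"));

    public static void main(String[] args) {
        List<String> snapshot;
        lock.lock();
        try {
            snapshot = new ArrayList<>(events); // fast: capture state only
        } finally {
            lock.unlock();
        }
        // Slow processing outside the critical section.
        System.out.println("processing " + snapshot.size() + " events: " + snapshot);
    }
}
```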
Performance Considerations
In low-contention scenarios, synchronized is often as fast or faster than ReentrantLock thanks to JVM intrinsics and adaptive lock optimizations (historically including biased locking, which was removed in modern JDKs). ReentrantLock‘s advantages show at high contention, especially with fairness, timeout, or multiple conditions.
Profile before replacing synchronized with ReentrantLock purely for performance. Use ReentrantLock when you need its features — not because you assume it’s faster.
Summary
- synchronized is the right default for simple mutual exclusion.
- ReentrantLock is your tool when you need timeouts, interruptible waits, fairness control, or multiple condition queues.
- ReentrantReadWriteLock shines in read-heavy workloads where write contention is low.
- StampedLock offers maximum read throughput with optimistic reads at the cost of reentrancy and higher complexity.
- Always release locks in finally blocks. Always.
Locking is not a feature you add after the fact — it’s a design decision baked into your data model. Think clearly about what invariants a lock is protecting and what operations need to be atomic, and the right primitive usually becomes obvious.

I build software that solves problems. I also love writing/documenting things I learn/want to learn.