ACID & Database Transactions 101: Keeping Data Consistent in Concurrent Systems

Arslan Ahmad
Learn how modern databases juggle hundreds of simultaneous transactions without losing integrity. A simple breakdown of ACID properties and how locking and optimistic strategies work.
On this page

What is a Database Transaction?

ACID Properties: The Foundation of Reliable Transactions

The Challenge of Concurrency in Transactions

Concurrency Control: Pessimistic vs. Optimistic

Pessimistic Concurrency Control (Locks)

Optimistic Concurrency Control (Versioning)

Further Learning

FAQs

Q1. What does ACID mean in databases?

Q2. Why are transactions important in databases?

Q3. What’s the difference between pessimistic and optimistic concurrency control?

Q4. What are isolation levels in a database?

Q5. How do databases ensure durability?

This blog demystifies database transactions, covering how the ACID properties keep data safe and consistent. It explains what transactions are and how databases handle concurrent operations through pessimistic and optimistic concurrency control techniques.

Ever wondered how your banking app guarantees your money doesn’t vanish mid-transfer, or how two people can’t buy the last concert ticket at the same time?

These everyday actions rely on database transactions working behind the scenes.

In modern applications – whether booking a ride, purchasing an item, or updating a balance – the backend performs multiple steps (reads, writes, verifications) often across different tables or services.

All these steps must succeed together or fail as a unit to avoid half-finished orders or inconsistent data.

That’s where transactions step in as the all-or-nothing workhorses of a database.

But ensuring data correctness isn’t trivial, especially when many transactions run simultaneously.

Real-world systems have dozens, hundreds, or thousands of users making updates at once. The database must balance isolation, performance, and accuracy without turning into a bottleneck.

In this guide, we’ll break down the fundamentals of transactions and ACID, then discuss how databases tame the chaos of concurrency (via pessimistic vs. optimistic control).

What is a Database Transaction?

A database transaction is a sequence of one or more operations (like SQL queries) treated as a single unit of work.

Either all of the operations in the sequence complete successfully (the transaction commits) or none do (the transaction rolls back).

In other words, transactions ensure that the database can never be left in a half-updated state.

For example, imagine an app that transfers $200 from Alice’s savings account to her checking account. The steps might include deducting $200 from savings, adding $200 to checking, and recording the transfer.

If any step fails (say, the add-to-checking step errors out), the entire transaction is aborted and rolled back – Alice’s accounts remain unchanged, rather than losing money from one account without crediting the other.

This all-or-nothing behavior is crucial for maintaining data integrity.

Transactions give us a logical guarantee: even if the system crashes or multiple operations conflict, the database will either reflect all of a transaction’s changes or none.
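Here’s what that all-or-nothing behavior looks like in code – a minimal sketch using Python’s built-in sqlite3 module (the table and account names are just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("savings", 500), ("checking", 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    # `with conn` opens a transaction: it commits when the block finishes,
    # and rolls back automatically if any statement raises an error, so the
    # two updates either both happen or neither does.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))

transfer(conn, "savings", "checking", 200)
print(conn.execute("SELECT name, balance FROM accounts").fetchall())
# [('savings', 300), ('checking', 300)]
```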

To enforce this guarantee, databases rely on the ACID properties.

ACID Properties: The Foundation of Reliable Transactions

“ACID” is an acronym for the four key properties that ensure database transactions are processed reliably:

  • Atomicity: The transaction is “indivisible” – all its steps execute, or nothing does. There’s no in-between. This means the database won’t save partial changes from a transaction. If one part fails, the whole transaction is aborted and any interim changes are undone.

  • Consistency: A transaction must take the database from one valid state to another, preserving all predefined rules, constraints, and triggers. The data should never violate integrity constraints (like uniqueness, foreign keys, balance can’t go negative, etc.) as a result of the transaction. Any data modifications must satisfy the database’s consistency rules both before and after the transaction.

  • Isolation: Concurrent transactions should not interfere with each other. Each transaction “thinks” it’s running alone, even if others are happening at the same time. The intermediate results of a transaction are invisible to other transactions until it commits. This prevents anomalies like one transaction seeing half-updated data from another. In practical terms, proper isolation ensures that if two people are updating the same data simultaneously, the outcome is as if the transactions ran one after the other (this is the essence of the strictest level, serializable isolation).

  • Durability: Once a transaction is committed, its changes persist permanently. Even if the database or system crashes immediately after, the committed data will survive. Durability guarantees that if you get a success confirmation, your data won’t magically revert or disappear – the changes are safely stored on disk or non-volatile memory.

These ACID properties work together to guarantee that each transaction keeps the database correct and trustworthy despite errors or crashes.

Most relational databases (MySQL, PostgreSQL, Oracle, etc.) are designed to be ACID-compliant.

They use features like integrity constraints (for consistency) and isolation levels (for managing concurrency) to uphold these principles.
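For example, a rule like “balance can’t go negative” can be pushed into the schema as a CHECK constraint, so the database itself rejects any transaction that would break it. A minimal sqlite3 sketch (the schema is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        name    TEXT PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- consistency rule
    )
""")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")
conn.commit()

try:
    with conn:  # transaction: commit on success, rollback on error
        # This would drive the balance to -50, violating the CHECK
        # constraint, so the statement fails and the transaction rolls back.
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

print(conn.execute("SELECT balance FROM accounts").fetchone())  # (100,)
```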

The Challenge of Concurrency in Transactions

Ensuring a single transaction’s correctness is one thing; handling many at once is another adventure altogether.

In multi-user systems, transactions run concurrently – meaning multiple transactions may try to read or write the same data at the same time.

Without special care, this can lead to all sorts of consistency problems.

Some classic issues that arise from poor concurrency control include:

  • Dirty Reads: Transaction A reads data that Transaction B has modified but not yet committed. If B rolls back, A read a “dirty” (invalid) value.

  • Lost Updates: Two transactions both update the same item. The later commit overwrites the first change, losing data.

  • Inconsistent Reads (Non-repeatable reads): Transaction A reads the same row twice and gets different data because another transaction modified it in between.

  • Phantom Reads: Transaction A re-reads a query result set and finds rows added or removed by another transaction.

For instance, imagine two users trying to reserve the last seat on a flight at the same moment.

If both transactions see that one seat is available, both might proceed to book it – resulting in an overbooking conflict.

Or consider one transaction reading a record while another is in the middle of updating it – the first might get outdated or partial data.

Without concurrency control, such scenarios lead to race conditions or even deadlocks that can freeze the system.
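To make the lost-update anomaly concrete, here’s a minimal sketch that simulates two sessions with two sqlite3 connections to a shared in-memory database (the table and names are hypothetical):

```python
import sqlite3

# Two connections to the same shared-cache in-memory database, in
# autocommit mode (isolation_level=None), stand in for two user sessions.
db = "file:lostupdate?mode=memory&cache=shared"
a = sqlite3.connect(db, uri=True, isolation_level=None)
b = sqlite3.connect(db, uri=True, isolation_level=None)

a.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
a.execute("INSERT INTO account VALUES (1, 100)")

# Both sessions read the same starting balance...
bal_a = a.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
bal_b = b.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]

# ...and each writes back a value computed from its (now stale) read.
a.execute("UPDATE account SET balance = ? WHERE id = 1", (bal_a + 10,))
b.execute("UPDATE account SET balance = ? WHERE id = 1", (bal_b + 20,))

print(a.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])
# 120 -- session A's +10 deposit was silently lost (should be 130)
```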

Concurrency control mechanisms are therefore essential. They ensure transactions can execute simultaneously without stepping on each other’s toes, maintaining that all-important isolation and consistency.

The goal is for the outcome of concurrent transactions to be equivalent to some serial order of execution (this property is called serializability).

To achieve this, databases typically use one of two approaches (or a blend of both): pessimistic locking or optimistic concurrency control.

Concurrency Control: Pessimistic vs. Optimistic

When multiple transactions might conflict, the database needs a strategy to handle it:

Pessimistic Concurrency Control (Locks)

The pessimistic approach assumes conflicts are likely, so it actively prevents them by locking data up front.

When a transaction wants to read or modify some data, the database will lock that data (like a specific row, set of rows, or even a whole table) so that other transactions cannot modify it until the lock is released (usually when the transaction commits or aborts).

It’s like putting a “do not disturb” sign on the data.

  • If Transaction A has a write lock on a row, Transaction B wanting to read or write that row must wait until A finishes (commits or rolls back). This guarantees no other transaction can change or even see intermediate changes to that row.

  • Read locks (also called shared locks) may allow multiple concurrent reads on the same data, but block any writes until the reading is done.

Pessimistic locking is very safe – it stops conflicts before they happen, effectively serializing access to contended data when needed.

Our flight seat example under pessimistic control: if User A’s transaction locks the seat record to book it, User B’s transaction attempting the same seat will be paused (blocked) until A commits.

Only one transaction succeeds; the other either waits its turn and then finds the seat taken, or fails outright – either way, no double-booking.

The downside is reduced concurrency: transactions can be forced to wait, and if not carefully managed, locks can lead to deadlock (two transactions waiting on each other’s locks).
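In SQL this is typically expressed with SELECT ... FOR UPDATE. A sketch of the seat booking under pessimistic locking, assuming a PostgreSQL database, the psycopg2 driver, and a hypothetical seats table:

```python
import psycopg2

conn = psycopg2.connect("dbname=bookings")  # hypothetical connection string
with conn:  # commit on success, rollback on error
    with conn.cursor() as cur:
        # Lock the seat row; any other transaction trying to lock or update
        # the same row blocks until we commit or roll back.
        cur.execute("SELECT status FROM seats WHERE seat_id = %s FOR UPDATE",
                    ("12A",))
        status = cur.fetchone()[0]
        if status != "available":
            raise RuntimeError("seat already taken")
        cur.execute("UPDATE seats SET status = 'booked' WHERE seat_id = %s",
                    ("12A",))
# Leaving the `with conn` block commits and releases the lock.
```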

Optimistic Concurrency Control (Versioning)

The optimistic approach assumes conflicts are rare, so it lets transactions proceed without locks and only checks for conflicts at the end (when trying to commit).

Each transaction works on the data as if alone, and before committing, the database verifies that no other transaction meddled with the same data in the meantime.

If a conflict is detected, the committing transaction will abort (and typically retry). This way, no locking overhead is incurred during the transaction, which can improve performance in low-contention scenarios.

Commonly, this is implemented with data versions or timestamps. For example, every row might have a version number or last-updated timestamp. Transaction A reads a row (with version N) and makes some changes. When A goes to commit, the database checks the current version of that row:

  • If it’s still N, no other transaction touched it, so A’s commit succeeds (the version is incremented to N+1).

  • If it’s not N (meaning some other transaction updated that row and it’s now version N+1 or more), then a conflict occurred. A will fail to commit and might have to restart using the new data.

Using our seat booking example optimistically: both User A and User B could “think” they booked the last seat because neither locked it.

When both try to finalize the booking, the system sees that one seat record was already changed by the first commit – thus it rejects the second user’s transaction (which would then retry or show an error “sorry, seat no longer available”).
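A minimal sketch of that version check using Python’s sqlite3 module (the schema and names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (seat_id TEXT PRIMARY KEY, "
             "holder TEXT, version INTEGER)")
conn.execute("INSERT INTO seats VALUES ('12A', NULL, 1)")
conn.commit()

def book(conn, seat_id, user, read_version):
    # Conditional update: it succeeds only if the version is still the one
    # we read, i.e. no other transaction committed a change in between.
    cur = conn.execute(
        "UPDATE seats SET holder = ?, version = version + 1 "
        "WHERE seat_id = ? AND version = ?",
        (user, seat_id, read_version))
    conn.commit()
    return cur.rowcount == 1  # 0 rows updated means a conflicting write won

# Both users read the row at version 1, then race to commit.
print(book(conn, "12A", "alice", 1))  # True  -- version is now 2
print(book(conn, "12A", "bob", 1))    # False -- stale version, must retry
```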

Optimistic control avoids upfront locking, which means higher throughput and no waiting when conflicts truly are infrequent.

However, if conflicts do happen often, transactions will keep aborting and retrying, which can become inefficient. It’s best in scenarios like many read-only operations or mostly disjoint writes.

In practice, databases often use Multiversion Concurrency Control (MVCC), a form of optimistic control where readers see a snapshot of the database and writers create new versions of data.

MVCC (used in systems like PostgreSQL, MySQL InnoDB, Oracle, etc.) allows readers and writers to not block each other: readers get the old version of data while a writer is working, and upon commit the new version is saved.

This yields snapshot isolation (no dirty reads, etc.) and often balances performance and consistency well.

Further Learning

To deepen your understanding of system design and databases (especially for technical interviews), check out these courses by DesignGurus:

  1. Grokking System Design Fundamentals – Foundational system design concepts for scalable systems.

  2. Grokking the System Design Interview – Prepare for system design interview questions with real-world examples.

  3. Grokking Database Fundamentals for Tech Interviews – Learn core database concepts (like ACID, indexing, etc.) in an interview-focused way.

  4. Grokking SQL for Tech Interviews – Practical SQL query and database problem-solving skills for coding interviews.

  5. Grokking Relational Database Design and Modeling for Software Engineers – Master database schema design, normalization, and modeling techniques.

FAQs

Q1. What does ACID mean in databases?

ACID stands for Atomicity, Consistency, Isolation, Durability. These properties ensure that database transactions are processed reliably. Atomicity means all-or-nothing execution of each transaction. Consistency means transactions preserve database rules/constraints. Isolation means transactions don’t interfere with each other (concurrent transactions act as if executed one by one). Durability means once a transaction commits, its changes persist permanently.

Q2. Why are transactions important in databases?

Transactions guarantee data integrity and consistency. They ensure that a series of operations either all succeed or all fail together, so the database isn’t left in a partial state. This is critical for scenarios like financial transfers, orders, or any multi-step process – you wouldn’t want money deducted without being deposited elsewhere, or an order placed without inventory being updated. Transactions also allow recovery from errors: if something goes wrong, the database can roll back to a safe state.

Q3. What’s the difference between pessimistic and optimistic concurrency control?

Pessimistic concurrency control uses locks to prevent conflicts before they happen – it assumes collisions are likely. For example, if one transaction is editing a row, others must wait (it’s like one person at a time). Optimistic concurrency control assumes conflicts are rare and doesn’t lock data during the transaction. Instead, it lets transactions execute concurrently and checks at commit time for conflicts. If a conflict is detected (another transaction changed the data first), the committing transaction will abort and retry. Pessimistic = safer but can slow things down with waiting; optimistic = faster when few conflicts but may require retries if conflicts occur.

Q4. What are isolation levels in a database?

Isolation levels are settings that control how much a transaction is isolated from others. Common levels (from lowest to highest isolation) are Read Uncommitted, Read Committed, Repeatable Read, and Serializable. Lower levels allow more concurrency but risk phenomena like dirty reads or non-repeatable reads. Higher levels (Serializable being the strictest) ensure transactions execute with full isolation (as if one after the other) to prevent anomalies, at the cost of throughput. For instance, Read Committed prevents reading uncommitted data (no dirty reads), and Serializable prevents all of the anomalies but may use more locking or abort more transactions to do so.
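As an illustration, here’s a sketch of choosing a stricter isolation level per session, assuming PostgreSQL and the psycopg2 driver (the database and table names are hypothetical):

```python
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=shop")  # hypothetical connection string
# All subsequent transactions on this connection run under SERIALIZABLE.
conn.set_session(
    isolation_level=psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)

with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders")
    print(cur.fetchone()[0])
```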

Q5. How do databases ensure durability?

Databases ensure durability by writing data to stable storage when a transaction commits. Techniques include WAL (Write-Ahead Logging) and transaction logs that record changes before they are applied, so if a crash happens, the database can recover committed transactions from the log. Many systems also use replication (writing data to multiple servers/disks) and periodic backups. Essentially, once you get a “commit succeeded” message, the database has made sure it can recreate that data even if hardware fails immediately after.
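As a small concrete example, SQLite exposes these durability knobs directly; a sketch using Python’s built-in sqlite3 module (the file name is arbitrary):

```python
import sqlite3

conn = sqlite3.connect("app.db")
# Write-ahead logging: changes are appended to a log before being applied.
print(conn.execute("PRAGMA journal_mode=WAL").fetchone())  # ('wal',)
# FULL forces an fsync to stable storage on every commit.
conn.execute("PRAGMA synchronous=FULL")

with conn:
    conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    conn.execute("INSERT INTO log VALUES ('committed and fsynced')")
# Once the commit returns, the row can be recovered even after a crash.
```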
