Changed in version 3.0.
MongoDB allows multiple clients to read and write the same data. In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. Together, these mechanisms guarantee that all writes to a single document occur either in full or not at all and that clients never see an inconsistent view of the data.
What type of locking does MongoDB use?
MongoDB uses multi-granularity locking that allows operations to lock at the global, database, or collection level, and allows individual storage engines to implement their own concurrency control below the collection level (e.g., at the document level in WiredTiger).
MongoDB uses reader-writer locks that allow concurrent readers shared access to a resource, such as a database or collection, but in MMAPv1, give exclusive access to a single write operation.
In addition to a shared (S) locking mode for reads and an exclusive (X) locking mode for write operations, intent shared (IS) and intent exclusive (IX) modes indicate an intent to read or write a resource using a finer granularity lock. When locking at a certain granularity, all higher levels are locked using an intent lock.
For example, when locking a collection for writing (using mode X), both the corresponding database lock and the global lock must be locked in intent exclusive (IX) mode. A single database can simultaneously be locked in IS and IX mode, but an exclusive (X) lock cannot coexist with any other modes, and a shared (S) lock can only coexist with intent shared (IS) locks.
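The rules above can be sketched as a small compatibility model. This is an illustrative sketch only, not MongoDB's actual lock manager; the mode names follow the text, and the helper functions are invented for this example.

```python
# Illustrative model of the intent-locking compatibility rules described
# above; not MongoDB's internal implementation.
COMPATIBLE = {
    "IS": {"IS", "IX", "S"},   # intent shared coexists with everything but X
    "IX": {"IS", "IX"},        # intent exclusive coexists with intent modes
    "S":  {"IS", "S"},         # shared coexists with IS and other readers
    "X":  set(),               # exclusive coexists with nothing
}

def can_grant(requested, held_modes):
    """A mode is grantable only if it is compatible with every mode
    already held on the resource."""
    return all(held in COMPATIBLE[requested] for held in held_modes)

def try_lock_collection_for_write(held):
    """Writing a collection takes X on the collection and IX on the parent
    database and the global resource, as the example in the text describes.
    `held` maps resource -> modes currently held by other operations."""
    plan = [("global", "IX"), ("database", "IX"), ("collection", "X")]
    return all(can_grant(mode, held.get(res, [])) for res, mode in plan)
```

With this model, a collection write succeeds while other operations hold intent locks on the database, but is blocked if another operation holds the database in S or X mode.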
Locks are fair, with reads and writes being queued in order. However, to optimize throughput, when one request is granted, all other compatible requests will be granted at the same time, potentially releasing them before a conflicting request. For example, consider a case in which an X lock was just released, and in which the conflict queue contains the following items:
IS → IS → X → X → S → IS
In strict first-in, first-out (FIFO) ordering, only the first two IS modes would be granted. Instead MongoDB will actually grant all IS and S modes, and once they all drain, it will grant X, even if new IS or S requests have been queued in the meantime. As a grant will always move all other requests ahead in the queue, no starvation of any request is possible.
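The batched grant policy can be sketched as follows. This is a simplified model of the behavior the text describes, not MongoDB's actual queueing code; the `conflicts` helper encodes the compatibility rules stated earlier.

```python
# Sketch of the batched grant policy: the head of the queue is granted
# together with every later request compatible with the whole batch.
def conflicts(a, b):
    """Two modes conflict if either is exclusive (X), or one is shared (S)
    and the other intent exclusive (IX); all other pairs are compatible."""
    return "X" in (a, b) or {a, b} == {"S", "IX"}

def grant_batch(queue):
    """Grant the first waiter, then every later waiter that does not
    conflict with anything granted in this batch (batched, not strict FIFO)."""
    granted, remaining = [], []
    for mode in queue:
        if all(not conflicts(mode, g) for g in granted):
            granted.append(mode)
        else:
            remaining.append(mode)
    return granted, remaining
```

Running this on the example queue, all four IS and S requests are granted in one batch and the two X requests remain queued, matching the behavior described above.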
See the Wikipedia page on Multiple granularity locking for more information.
How granular are locks in MongoDB?
Changed in version 3.0.
Beginning with version 3.0, MongoDB ships with the WiredTiger storage engine.
For most read and write operations, WiredTiger uses optimistic concurrency control. WiredTiger uses only intent locks at the global, database and collection levels. When the storage engine detects conflicts between two operations, one will incur a write conflict causing MongoDB to transparently retry that operation.
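The transparent retry behavior can be sketched with a version counter standing in for the storage engine's internal conflict detection. The class and function names here are invented for illustration; WiredTiger's actual mechanism is internal to the engine.

```python
# Sketch of optimistic concurrency control with retry-on-write-conflict,
# as described above. A version number stands in for WiredTiger's
# internal conflict detection; all names are invented for this example.
class WriteConflict(Exception):
    pass

class Document:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def commit(self, expected_version, new_value):
        """The commit succeeds only if no other write landed since we read."""
        if self.version != expected_version:
            raise WriteConflict()
        self.value = new_value
        self.version += 1

def update_with_retry(doc, transform, max_retries=10):
    """Read, compute, attempt to commit; on conflict, transparently retry,
    analogous to how MongoDB retries WiredTiger write conflicts."""
    for _ in range(max_retries):
        snapshot_version = doc.version
        new_value = transform(doc.value)
        try:
            doc.commit(snapshot_version, new_value)
            return doc.value
        except WriteConflict:
            continue  # another writer won; re-read and retry
    raise RuntimeError("too many write conflicts")
```

If a concurrent writer commits between the read and the commit attempt, the operation simply restarts from a fresh read rather than blocking on a lock.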
Some global operations, typically short-lived operations involving multiple databases, still require a global “instance-wide” lock. Some other operations, such as dropping a collection, still require an exclusive database lock.
The MMAPv1 storage engine uses collection-level locking as of the 3.0 release series, an improvement on earlier versions in which the database lock was the finest-grain lock. Third-party storage engines may either use collection-level locking or implement their own finer-grained concurrency control.
For example, if you have six collections in a database using the MMAPv1 storage engine and an operation takes a collection-level write lock, the other five collections are still available for read and write operations. An exclusive database lock makes all six collections unavailable for the duration of the operation holding the lock.
How do I see the status of locks on my mongod instances?
For reporting on lock utilization information, use any of the following methods:
- db.serverStatus(),
- db.currentOp(),
- mongotop,
- mongostat, and/or
- the MongoDB Cloud Manager or Ops Manager, an on-premise solution available in MongoDB Enterprise Advanced

Specifically, the locks document in the output of serverStatus, or the locks field in the current operation reporting, provides insight into the type of locks and amount of lock contention in your mongod instance.

To terminate an operation, use db.killOp().
Does a read or write operation ever yield the lock?
In some situations, read and write operations can yield their locks.
Long running read and write operations, such as queries, updates, and deletes, yield under many conditions. MongoDB operations can also yield locks between individual document modifications in write operations that affect multiple documents, such as update() with the multi parameter set to true.
MongoDB’s MMAPv1 storage engine uses heuristics based on its access pattern to predict whether data is likely in physical memory before performing a read. If MongoDB predicts that the data is not in physical memory, an operation will yield its lock while MongoDB loads the data into memory. Once data is available in memory, the operation will reacquire the lock to complete the operation.
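The yield-and-reacquire pattern can be sketched as follows. The store, page cache, and event log are stand-ins invented for this example; MMAPv1's real access-pattern heuristics are internal to the engine.

```python
# Sketch of MMAPv1's yield-on-predicted-page-fault behavior described
# above. The store, page cache, and event log are invented stand-ins.
def read_document(doc_id, store, page_cache, events):
    events.append("acquire")                 # take the read lock
    if doc_id not in page_cache:             # predicted not in physical memory
        events.append("yield")               # release the lock for the fault
        page_cache[doc_id] = store[doc_id]   # slow "disk" read without the lock
        events.append("reacquire")           # take the lock back to finish
    value = page_cache[doc_id]
    events.append("release")
    return value
```

A cold read yields around the disk access, while a warm read (data already in memory) holds the lock for its whole, short duration.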
For storage engines supporting document level concurrency control, such as WiredTiger, yielding is not necessary when accessing storage as the intent locks, held at the global, database and collection level, do not block other readers and writers.
Changed in version 2.6: MongoDB does not yield locks when scanning an index even if it predicts that the index is not in memory.
Which operations lock the database?
The following table lists common database operations and the types of locks they use.
| Operation | Lock Type |
| --- | --- |
| Issue a query | Read lock |
| Get more data from a cursor | Read lock |
| Insert data | Write lock |
| Remove data | Write lock |
| Update data | Write lock |
| Map-reduce | Read lock and write lock, unless operations are specified as non-atomic. Portions of map-reduce jobs can run concurrently. |
| Create an index | Building an index in the foreground, which is the default, locks the database for extended periods of time. |
| db.eval() (Deprecated since version 3.0.) | Write lock. The db.eval() method takes a global write lock while evaluating the JavaScript function. To avoid taking this global write lock, you can use the eval command with nolock set to true. |
| eval (Deprecated since version 3.0.) | Write lock. By default, eval takes a global write lock while evaluating the JavaScript function. If used with nolock set to true, the eval command does not take a global write lock while evaluating the JavaScript function. However, the logic within the JavaScript function may not write to the database. |
Which administrative commands lock the database?
Certain administrative commands can exclusively lock the database for extended periods of time. In some deployments, for large databases, you may consider taking the mongod instance offline so that clients are not affected. For example, if a mongod is part of a replica set, take the mongod offline and let other members of the set service load while maintenance is in progress.
The following administrative operations require an exclusive lock at the database level for extended periods:
- db.collection.createIndex(), when issued without setting background to true,
- db.createCollection(), when creating a very large (i.e. many gigabytes) capped collection, and
- db.copyDatabase(). This operation may lock all databases. See Does a MongoDB operation ever lock more than one database?.
The following administrative commands lock the database but only hold the lock for a very short time:
Does a MongoDB operation ever lock more than one database?
The following MongoDB operations lock multiple databases:
- db.copyDatabase() must lock the entire mongod instance at once.
- db.repairDatabase() obtains a global write lock and will block other operations until it finishes.
- Journaling, which is an internal operation, locks all databases for short intervals. All databases share a single journal.
- User authentication requires a read lock on the admin database for deployments using 2.6 user credentials. For deployments using the 2.4 schema for user credentials, authentication locks the admin database as well as the database the user is accessing.
- All writes to a replica set’s primary lock both the database receiving the writes and then the local database for a short time. The lock for the local database allows the mongod to write to the primary’s oplog and accounts for a small portion of the total time of the operation.
How does sharding affect concurrency?
Sharding improves concurrency by distributing collections over multiple mongod instances, allowing shard servers (i.e. mongos processes) to perform any number of operations concurrently to the various downstream mongod instances.

In a sharded cluster, locks apply to each individual shard, not to the whole cluster; i.e. each mongod instance is independent of the others in the sharded cluster and uses its own locks. The operations on one mongod instance do not block the operations on any others.
How does concurrency affect a replica set primary?
With replica sets, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary’s oplog, which is a special collection in the local database. Therefore, MongoDB must lock both the collection’s database and the local database. The mongod must lock both databases at the same time to keep the database consistent and ensure that write operations, even with replication, are “all-or-nothing” operations.
How does concurrency affect secondaries?
In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Secondaries do not allow reads while applying the write operations, and apply write operations in the order that they appear in the oplog.
Does MongoDB support transactions?
MongoDB does not support multi-document transactions.
However, MongoDB does provide atomic operations on a single document. Often these document-level atomic operations are sufficient to solve problems that would require ACID transactions in a relational database.
For example, in MongoDB, you can embed related data in nested arrays or nested documents within a single document and update the entire document in a single atomic operation. Relational databases might represent the same kind of data with multiple tables and rows, which would require transaction support to update the data atomically.
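As a sketch of that idea, consider a hypothetical blog post that embeds its comments rather than storing them in a separate table. Adding a comment then touches exactly one document, which MongoDB can apply atomically (e.g. a single update combining $push and $inc). Plain Python dicts stand in for BSON documents here; the schema is invented for illustration.

```python
# Hypothetical embedded schema: the post and its comments form one
# document, so adding a comment is a single-document change.
post = {
    "_id": 1,
    "title": "Concurrency in MongoDB",
    "comment_count": 0,
    "comments": [],            # embedded array instead of a comments table
}

def with_comment(doc, author, text):
    """Return the post as it would look after one atomic document update
    that appends a comment and bumps the counter together."""
    return {
        **doc,
        "comments": doc["comments"] + [{"author": author, "text": text}],
        "comment_count": doc["comment_count"] + 1,
    }
```

In a relational layout, the same change would span a posts row and a comments row, requiring a multi-statement transaction to keep the counter and the comment list consistent.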
What isolation guarantees does MongoDB provide?
MongoDB provides the following guarantees in the presence of concurrent read and write operations. These guarantees hold on systems configured with either the MMAPv1 or WiredTiger storage engines.
Write operations are atomic with respect to a single document; i.e. if a write is updating multiple fields in the document, a reader will never see the document with only some of the fields updated.
With a standalone mongod instance, a set of read and write operations to a single document is serializable. With a replica set, a set of read and write operations to a single document is serializable only in the absence of a rollback.

Although MongoDB provides these strong guarantees for single-document operations, read and write operations may access an arbitrary number of documents during execution. Multi-document operations do not occur transactionally and are not isolated from concurrent writes. This means that the following behaviors are expected under the normal operation of the system, for both the MMAPv1 and WiredTiger storage engines:
- Non-point-in-time read operations. Suppose a read operation begins at time t1 and starts reading documents. A write operation then commits an update to one of the documents at some later time t2. The reader may see the updated version of the document, and therefore does not see a point-in-time snapshot of the data.
- Non-serializable operations. Suppose a read operation reads a document d1 at time t1 and a write operation updates d1 at some later time t3. This introduces a read-write dependency such that, if the operations were to be serialized, the read operation must precede the write operation. But also suppose that the write operation updates document d2 at time t2 and the read operation subsequently reads d2 at some later time t4. This introduces a write-read dependency which would instead require the read operation to come after the write operation in a serializable schedule. There is a dependency cycle which makes serializability impossible.
- Reads may miss matching documents that are updated during the course of the read operation.
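The first behavior can be illustrated with a small, deterministic simulation: a write commits while a scan is in progress, and the scan observes the updated version of a document rather than a point-in-time snapshot. This is a sketch of the semantics described above, not MongoDB code.

```python
# Deterministic sketch of a non-point-in-time read: a write commits while
# a collection scan is in progress, and the scan sees the new version.
def scan_with_concurrent_write(docs, write_after, write):
    """Scan documents in key order; after `write_after` documents have
    been read, apply `write` to the collection, as a concurrent client
    committing an update mid-scan would."""
    seen = []
    for i, key in enumerate(sorted(docs)):
        if i == write_after:
            write(docs)          # the concurrent write commits here
        seen.append((key, docs[key]))
    return seen
```

The scan begins while every document holds its original value, yet it reports the updated value for a document the write touched before the scan reached it.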
Can reads see changes that have not been committed to disk?
Yes. Readers can see the results of writes before they are made durable, regardless of write concern level or journaling configuration. As a result, applications may observe the following behaviors:
- MongoDB will allow a concurrent reader to see the result of the write operation before the write is acknowledged to the client application. For details on when writes are acknowledged for different write concern levels, see Write Concern.
- Reads can see data which may subsequently be rolled back in cases such as replica set failover or power loss. It does not mean that read operations can see documents in a partially written or otherwise inconsistent state.
Other systems refer to these semantics as read uncommitted.
Changed in version 3.2.