InnoDB does not need to set locks to achieve consistent reads; it uses multi-versioning instead. For DML statements it uses row-level locking, never escalates locks, and detects deadlocks with a wait-for graph.
InnoDB has the following general locking properties:
- InnoDB does not need to set locks to achieve consistent reads because it uses multi-versioning to make them unnecessary. Transactions that modify rows see their own versions of those rows and the undo logs allow other transactions to see the original rows. To force SELECT statements to lock data, you can add locking modifiers to the statements.
- When locks are necessary, InnoDB uses row-level locking. In conjunction with multi-versioning, this results in good query concurrency because a given table can be read and modified by different clients at the same time.
- InnoDB may acquire row locks as it discovers them to be necessary. It never escalates a lock by converting it to a page lock or table lock. This keeps lock contention to a minimum and improves concurrency (although it does use table-level locking for DDL operations).
- Deadlock is possible because InnoDB does not acquire locks during a transaction until they are needed. InnoDB can detect a deadlock and roll back one transaction to resolve the deadlock.
- Transactions that wait too long for a lock eventually time out, and InnoDB rolls them back when they do.
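As the first property above notes, a plain SELECT sets no locks, but locking modifiers can be added when a transaction needs to lock the rows it reads. A minimal sketch (the accounts table and its columns are illustrative):

```sql
-- Plain SELECT: a consistent non-locking read; no locks are set.
SELECT balance FROM accounts WHERE id = 1;

-- Shared lock: other sessions can read the row but not modify it.
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Exclusive lock: reserves the row for a later UPDATE in this transaction.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
```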
InnoDB uses an algorithm called next-key locking with row-level locking. The locking is performed in such a way that when an index of a table is searched or scanned, it sets shared or exclusive locks on the index records encountered. Thus, the row-level locks are actually index record locks. The next-key locks that InnoDB sets on index records also affect the “gap” before that index record. If a user has a shared or exclusive lock on a record in an index, another user cannot insert a new index record immediately before the locked record (the gap) in the index order.
This next-key locking of gaps is done to prevent the so-called phantom problem. The phantom problem occurs within a transaction when the same query produces different sets of rows at different times. For example, if a SELECT is executed twice but returns a row the second time that was not returned the first time, the row is a “phantom” row. In the following example, the first statement locks the index range covering all values greater than 10, including values that do not yet exist, so the second statement, issued from another session, must wait:
-- Session 1: locks the index records with c > 10 and the gaps between them
SELECT c FROM t WHERE c > 10 FOR UPDATE;

-- Session 2: blocks until Session 1 commits, because 50 falls in a locked gap
INSERT INTO t(c) VALUES (50);
Consistent Non-Locking Reads
A consistent read means that InnoDB uses multi-versioning to present your query with a snapshot of the database at a point in time. Your query sees the changes made by transactions that committed before that point of time, and does not see changes made by later or uncommitted transactions.
Under the default REPEATABLE READ isolation level, one session sees a row inserted by another session only after the inserting session has committed and the first session has started a new transaction, advancing its snapshot past that commit. If you want to see the “freshest” state of the database, use either the READ COMMITTED isolation level or a locking read.
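The snapshot behavior can be sketched with two sessions, both at the default REPEATABLE READ level (table t is illustrative):

```sql
-- Session 1
START TRANSACTION;
SELECT * FROM t;     -- establishes the snapshot; suppose it returns no rows

-- Session 2
INSERT INTO t(c) VALUES (1);
COMMIT;

-- Session 1
SELECT * FROM t;     -- still returns no rows: the snapshot predates the insert
COMMIT;
SELECT * FROM t;     -- new transaction, new snapshot: the row is now visible
```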
To reduce the possibility of deadlocks, use transactions rather than LOCK TABLES statements, and follow these guidelines:
- Keep transactions that insert or update data small enough (a small number of rows) that they do not stay open for long periods of time. Commit your transactions often.
- When different transactions update multiple tables or large ranges of rows, always try to reference those tables or rows in the same sequence.
- Create indexes on the columns used in the WHERE clause.
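The second guideline above can be sketched as follows (table and column names are illustrative). If one transaction updated the rows in id order 2, 1 while another used 1, 2, each could end up holding a lock the other needs:

```sql
-- Access rows in a fixed order (here: ascending id) in every transaction.

-- Transaction A
START TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;

-- Transaction B follows the same id order, so at worst it waits for A
-- on id = 1; it never holds id = 2 while A holds id = 1.
START TRANSACTION;
UPDATE accounts SET balance = balance + 5 WHERE id = 1;
UPDATE accounts SET balance = balance - 5 WHERE id = 2;
COMMIT;
```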
The isolation level changes the behavior of read operations, while deadlocks occur because of write operations. If a deadlock does occur, InnoDB detects the condition and rolls back one of the transactions (the victim). Therefore, even if your application logic is perfectly correct, you must still handle the case where a transaction must be retried. To monitor how frequently deadlocks occur (as well as many other InnoDB statistics), use the SHOW ENGINE INNODB STATUS command:
mysql> SHOW ENGINE INNODB STATUS\G
=====================================
110222 16:54:12 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 5 seconds
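When a transaction is chosen as the deadlock victim, MySQL reports error 1213 (SQLSTATE '40001') and InnoDB has already rolled the transaction back, so retrying is safe. One way to sketch the retry logic inside the server is a stored procedure with a condition handler (procedure, table, and values are illustrative; retries are often handled in application code instead):

```sql
DELIMITER //
CREATE PROCEDURE transfer_with_retry()
BEGIN
  DECLARE attempts   INT DEFAULT 0;
  DECLARE deadlocked INT DEFAULT 1;
  WHILE deadlocked = 1 AND attempts < 3 DO
    BEGIN
      -- If a deadlock occurs, InnoDB has already rolled the transaction
      -- back; the handler flags a retry and exits this inner block.
      DECLARE EXIT HANDLER FOR SQLSTATE '40001' SET deadlocked = 1;
      SET deadlocked = 0;
      SET attempts = attempts + 1;
      START TRANSACTION;
      UPDATE accounts SET balance = balance - 10 WHERE id = 1;
      UPDATE accounts SET balance = balance + 10 WHERE id = 2;
      COMMIT;
    END;
  END WHILE;
END//
DELIMITER ;
```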
Foreign Key Locking
A constraint is a restriction placed on one or more column values of a table to enforce integrity rules. Constraints are implemented using indexes. If a foreign key constraint is defined on a table, a shared record-level lock is placed on any record that is used in an insert, update, or delete operation that references the foreign key constraint. InnoDB sets these locks even when the constraint check fails.
InnoDB supports foreign key CONSTRAINT settings, which:
- Require checking the constraint condition
- Are used for INSERT, UPDATE, and DELETE
- Set shared record-level locks on records to check
- Also set locks on cases where a constraint fails
InnoDB checks foreign key constraints row by row. When performing foreign key checks, InnoDB sets shared row-level locks on child or parent records that it has to look at. InnoDB checks foreign key constraints immediately; the check is not deferred to transaction commit.
Add foreign key constraints when creating a table:
CREATE TABLE City (
  ...
  CountryCode char(3) NOT NULL DEFAULT '',
  ...
  KEY CountryCode (CountryCode),
  CONSTRAINT city_ibfk_1 FOREIGN KEY (CountryCode) REFERENCES country (Code)
  ...
The constraint example above shows that the CountryCode column of the City table is related to the Code column of the Country table. Any changes to either are constrained by this relationship.
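With that constraint in place, the row-by-row check described earlier can be seen in action (column values are illustrative):

```sql
-- The parent row must exist: InnoDB sets a shared lock on the country
-- row with Code = 'FIN' while performing the check.
INSERT INTO City (Name, CountryCode) VALUES ('Helsinki', 'FIN');

-- Fails immediately (not at COMMIT) if no country row has Code = 'XXX';
-- a lock is set even though the constraint check fails.
INSERT INTO City (Name, CountryCode) VALUES ('Nowhere', 'XXX');
```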