Granularity of Locks and Degrees of Consistency in a Shared Data Base, by J.N. Gray, R.A. Lorie, G.R. Putzolu, and I.L. Traiger
This paper is divided into two sections: granularity of locks, and degrees of consistency. Each section examines how the choice of locks in a database affects throughput and consistency.
Granularity of Locks
In the granularity section, the choice of lockable units is discussed. A lockable unit represents a section of logical data that is atomically locked during a transaction. Locking smaller units such as individual records improves concurrency for “simple” transactions that access a small number of records. On the other hand, locking at the record level can reduce throughput for “complex” transactions that access many records — the overhead of acquiring and releasing locks overwhelms the computation. It follows that a single system needs lockable units of different sizes to handle these different use cases.
Given this requirement, the paper presents a locking system that provides varying levels of lock granularity for hierarchical data. In the example, a database is divided into areas, each area has one or more files, and each file has one or more records. Locks may be obtained on any one of these hierarchical levels to provide concurrent access for transactional operations.
Two types of locks are presented: X or exclusive locks, and S or shared locks. Exclusive locks cannot be shared between transactions and are used for write access; shared locks can be shared between transactions and are used for read access. Requesting a lock on a node in the hierarchy implicitly locks every descendant of that node as well.
The main contribution of this section is a method for coordinating shared and exclusive locks between ancestors and descendants in the hierarchy. To do so, the notion of intention locks is introduced. Before explicitly locking a node, a transaction first places intention locks on all of the node’s ancestors: an IS (intention shared) lock on each ancestor before taking an S lock, and likewise an IX (intention exclusive) lock before taking an X lock. An intention lock signals that locking is happening lower in the sub-tree without locking the whole sub-tree itself. Any lock request is checked against the existing S, X, IS, or IX locks on the node; the request is granted if the modes are compatible, and otherwise waits for the conflicting lock to be released. For example, shared (S) locks are compatible with other shared locks; concurrent read-only access to the same data is allowed. However, exclusive (X) locks are incompatible with shared locks; reading data that is concurrently being written is disallowed. Similar rules are defined for the remaining combinations of locks and intention locks.
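These compatibility rules can be sketched as a small matrix. A minimal illustration in Python: the lock-mode names follow the paper, but the encoding and the `compatible` helper are mine (the paper’s combined SIX mode is omitted for brevity):

```python
# Lock-mode compatibility for the granularity-of-locks scheme.
# True means two transactions may hold these modes on the same node
# at the same time.
COMPATIBLE = {
    ("IS", "IS"): True, ("IS", "IX"): True, ("IS", "S"): True, ("IS", "X"): False,
    ("IX", "IX"): True, ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "S"):  True, ("S",  "X"): False,
    ("X",  "X"):  False,
}

def compatible(held: str, requested: str) -> bool:
    """Look up compatibility; the matrix is symmetric, so try both orders."""
    return COMPATIBLE.get((held, requested),
                          COMPATIBLE.get((requested, held), False))

print(compatible("S", "S"))    # True: concurrent readers are fine
print(compatible("S", "X"))    # False: a writer must wait for readers
print(compatible("IS", "IX"))  # True: intentions at an ancestor can coexist
```

A request that finds only compatible modes on the node is granted immediately; otherwise it queues until the conflicting locks are released.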
The section goes on to provide a proof that the defined compatibility rules are equivalent to more coarse-grained locking methods.
Degrees of Consistency
The paper defines consistency as satisfying all assertions on the data. Depending on the operation being performed, the database may become temporarily inconsistent until a full sequence of operations has completed. As such, sequences of operations are combined into atomic transactions — it is the database’s responsibility to ensure consistency at transaction boundaries. If transactions are run serially, then each transaction has a consistent view of the data. Concurrent execution of transactions raises the possibility of reading inconsistent data.
The paper defines four consistency modes, labelled Degree 0 through Degree 3. In modern terms, Degree 1 is “Read Uncommitted”, Degree 2 is “Read Committed”, and Degree 3 is “Repeatable Read”. Transactions are also defined as the unit of recovery — for example, once a transaction has committed it cannot be rolled back, since other transactions may already have read the committed values.
A degree 0 transaction is the least restrictive type. It promises only not to overwrite data that other transactions are writing, which requires taking a short-lived lock on any data it writes.
Degree 1 (Read Uncommitted)
Degree 1 consistency keeps the promise of Degree 0 (not to overwrite other transactions’ data) and also agrees not to commit any writes until the end of the transaction. This requires holding write locks longer: until the transaction ends.
Degree 2 (Read Committed)
Degree 2 consistency further restricts a transaction to only read values that have been committed (to contrast, a degree 1 transaction may read dirty values). In addition to acquiring locks for all data written during the transaction, a degree 2 transaction acquires locks for all data read during the transaction.
A degree 2 transaction may still read two different values (each committed by a different transaction) if it reads the same entity twice — the non-repeatable read anomaly.
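The non-repeatable read permitted at Degree 2 can be shown with a simple interleaving. This is an illustrative sketch, not an actual lock manager: the transaction names and the in-memory `store` are mine, and the comments describe the lock behavior the paper implies:

```python
# At Degree 2 (Read Committed), read locks are short-lived: T1 releases
# its read lock on x immediately after each read, so T2 can write and
# commit a new value between T1's two reads of the same entity.
store = {"x": 1}

def t1_read():
    return store["x"]       # T1: take read lock, read, release lock

def t2_write(value):
    store["x"] = value      # T2: take write lock, write, commit, release

first = t1_read()           # T1 reads x -> 1
t2_write(2)                 # T2 commits x = 2 in between
second = t1_read()          # T1 reads x -> 2, a different committed value
print(first, second)        # 1 2
```

Both values T1 saw were committed, so Degree 2 is honored — yet the two reads disagree, which is exactly what Degree 3’s long read locks rule out.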
Degree 3 (Repeatable Read)
Degree 3 consistency completely isolates transactions from each other. A degree 3 transaction acquires long-lived locks on both the data it reads and the data it writes. Degree 3 provides the highest level of isolation described in the paper.
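The four degrees can be summarized by which locks a transaction takes and how long it holds them. A hypothetical Python encoding — the table follows the paper’s protocols, but the names (`PROTOCOLS`, `anomalies`) and the anomaly labels are mine:

```python
# Locking protocol per consistency degree. "short" locks are released
# as soon as the action completes; "long" locks are held until the
# transaction ends (commit or abort); None means no lock is taken.
PROTOCOLS = {
    0: {"write_lock": "short", "read_lock": None},
    1: {"write_lock": "long",  "read_lock": None},     # Read Uncommitted
    2: {"write_lock": "long",  "read_lock": "short"},  # Read Committed
    3: {"write_lock": "long",  "read_lock": "long"},   # Repeatable Read
}

def anomalies(degree: int) -> list[str]:
    """Read anomalies that each degree still permits."""
    permitted = []
    if PROTOCOLS[degree]["read_lock"] is None:
        permitted.append("dirty reads")
    if PROTOCOLS[degree]["read_lock"] != "long":
        permitted.append("non-repeatable reads")
    return permitted

print(anomalies(1))  # ['dirty reads', 'non-repeatable reads']
print(anomalies(2))  # ['non-repeatable reads']
print(anomalies(3))  # []
```

Reading down the table, each degree strengthens the previous one by holding one more class of lock for longer, which is why the degrees form a strict hierarchy of isolation.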