OLTP Through the Looking Glass, and What We Found There, by Harizopoulos et al.
Disk I/O has been the primary limiting factor in performance for most commercial databases. However, as main-memory prices have dropped, it has become feasible to keep the entire working set of a database in RAM. With this architectural change, it makes sense to revisit design decisions that were made to avoid disk I/O and ask which ones still hold promise in a main-memory world. This paper provides such a performance analysis.
To achieve the paper’s goals, the authors take an existing database system and progressively remove functionality to see what broad performance gains are possible. Each of these features is one that may not be necessary for a database residing in main memory, and together they provide a set of design goals for anyone wishing to implement a main-memory database.
The first optimization is to remove support for write-ahead logging. A write-ahead log provides recovery guarantees after a system failure. In a main-memory system, you may instead be able to restore a failed site from a replica rather than by replaying changes from a log file. The paper shows that logging contributes an overhead of approximately 15% to query processing.
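To make the cost concrete, here is a minimal sketch of the write-ahead-logging discipline: every update is appended to the log and forced to stable storage before the in-memory state changes, and recovery replays the log. The class and record format are hypothetical illustrations, not the paper's implementation; the forced flush before each commit is where the overhead comes from.

```python
import os

class WriteAheadLog:
    """Toy write-ahead log: records are appended and forced to
    stable storage before the corresponding update is applied."""

    def __init__(self, path):
        self.path = path
        self.log = open(path, "a")

    def append(self, record):
        self.log.write(record + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())  # the forced I/O is the overhead


class LoggedStore:
    """In-memory key-value store that logs before applying."""

    def __init__(self, wal):
        self.wal = wal
        self.data = {}

    def put(self, key, value):
        self.wal.append(f"PUT {key}={value}")  # log first...
        self.data[key] = value                 # ...then apply

    @staticmethod
    def recover(path):
        # Rebuild state after a crash by replaying the log.
        store = LoggedStore(WriteAheadLog(path))
        with open(path) as f:
            for line in f:
                _, kv = line.rstrip("\n").split(" ", 1)
                key, value = kv.split("=", 1)
                store.data[key] = value
        return store
```

In a replicated main-memory design, `recover` would be replaced by copying state from a live replica, and the `fsync` on every commit disappears entirely.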
Locking and latching
One of the primary assumptions behind building a performant database is that multi-threaded access is required so that some queries can progress while others wait for disk I/O to complete. This has led to foundational research into concurrency control using locking, traditionally implemented with a lock manager and lock table. In a main-memory database, disk I/O is no longer a bottleneck, and a single-threaded design may actually be preferable for optimal performance. The paper goes on to show that locking can account for up to 30% of query time; eliminating locks can therefore lead to significant gains.
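A toy sketch of why this machinery is expensive even without contention: every lock acquisition pays for latching the shared lock table and for bookkeeping, whereas a single-threaded executor reads the record directly. The class and function names here are illustrative assumptions, not the paper's code.

```python
import threading
from collections import defaultdict

class LockManager:
    """Toy lock manager: a global lock table maps each record id to a
    lock, and the table itself is protected by a latch. Every access
    pays for this bookkeeping even when no other transaction is
    competing for the record."""

    def __init__(self):
        self.latch = threading.Lock()            # protects the lock table
        self.table = defaultdict(threading.Lock)

    def acquire(self, txn_id, record_id):
        with self.latch:                         # latch the lock table
            lock = self.table[record_id]
        lock.acquire()                           # block until granted

    def release(self, txn_id, record_id):
        self.table[record_id].release()


# With all data in memory and a single-threaded executor,
# the same read needs none of this machinery:
def read_single_threaded(store, record_id):
    return store[record_id]  # no latch, no lock table, no blocking
```

The point of the comparison is that the lock-table path runs on every record access, which is how the overhead reaches a sizable fraction of total query time even in workloads with little actual conflict.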
In a traditional database, the buffer manager is responsible for holding pages from the file system in memory for efficient access to “hot” data from multiple threads. With a main-memory database, however, the buffer manager is no longer needed since all data is already in memory. Buffer management accounts for a significant portion of query cost (up to 30%).
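The cost being removed can be sketched as a minimal buffer pool with LRU eviction (a hypothetical simplification; real buffer managers also handle pinning, dirty pages, and latching). Every record access goes through a page lookup and possible eviction, all of which vanishes when the database is guaranteed to fit in RAM.

```python
from collections import OrderedDict

class BufferPool:
    """Toy buffer manager: pages are fetched from 'disk' into a
    fixed-size pool and evicted in LRU order. Each access pays for
    the lookup and eviction bookkeeping."""

    def __init__(self, disk, capacity):
        self.disk = disk           # page_id -> page contents
        self.capacity = capacity
        self.pool = OrderedDict()  # page_id -> page, in LRU order

    def get_page(self, page_id):
        if page_id in self.pool:
            self.pool.move_to_end(page_id)  # mark as recently used
            return self.pool[page_id]
        if len(self.pool) >= self.capacity:
            self.pool.popitem(last=False)   # evict least recently used
        page = self.disk[page_id]           # simulated disk read
        self.pool[page_id] = page
        return page
```

In a main-memory design, `get_page` collapses to a direct memory reference, which is the source of the savings the paper measures.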
The paper shows that a significant share of processing time goes to managing concurrent access: the lock manager, latching, and the buffer manager. This suggests there are large gains to be had by analyzing the conditions under which your database actually needs locking and removing it everywhere else. The second major contributing factor is logging and recovery: if your system can recover from existing replicas, you may be able to remove write-ahead logging and avoid its performance overhead.