TL;DR Summary
The dissertation examines concurrency control algorithms for real-time database systems, identifies performance weaknesses in existing protocols, and proposes deadline-cognizant algorithms that improve the timeliness and reliability of real-time data processing.
Abstract
The negative microfilm copy of this dissertation was prepared and inspected by the school granting the degree. We are using this film without further inspection or change. If there are any questions about the content, please write directly to the school. The quality of this reproduction is heavily dependent upon the quality of the original material. The following explanation of techniques is provided to help clarify notations which may appear on this reproduction.

1. Manuscripts may not always be complete. When it is not possible to obtain missing pages, a note appears to indicate this.
2. When copyrighted materials are removed from the manuscript, a note appears to indicate this.
3. Oversize materials (maps, drawings, and charts) are photographed by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps.
4. Most photographs reproduce acceptably on positive microfilm or microfiche but lack clarity on xerographic copies made from the microfilm. For any illustrations that cannot be reproduced satisfactorily by xerography, photographic prints can be purchased at additional cost and tipped into your xerographic copy. Requests can be made to the Dissertations Customer Services Department.
Mind Map
In-depth Reading
English Analysis
1. Bibliographic Information
1.1. Title
The central topic of this paper is Concurrency Control Algorithms for Real-Time Database Systems.
1.2. Authors
The author of this dissertation is Juhnyoung Lee. His affiliation is the University of Virginia, where this work was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science). The dissertation advisor and committee chairman was Professor Sang Son. Other committee members included Professor Jorg Liebeherr, Professor Bill Wulf, Professor Stephen Strickland, and Professor Jim French.
1.3. Journal/Conference
This work is a doctoral dissertation submitted to the Faculty of the School of Engineering and Applied Science, University of Virginia. As a dissertation, it is an academic publication but not a peer-reviewed journal or conference paper in the traditional sense, although it represents original research contributing to the field. Dissertations are typically foundational works that may lead to multiple journal or conference publications.
1.4. Publication Year
The dissertation was published in January 1994. The microfilm copy was prepared and inspected by the school. The record also carries a UTC timestamp of 1989-09-01T00:00:00.000Z, which likely refers to an internal processing or cataloging date for the microfilm rather than to the final defense and acceptance of the dissertation itself, which is explicitly stated as January 1994.
1.5. Abstract
This dissertation investigates and proposes concurrency control algorithms for real-time database systems (RTDBSs). The primary objective is to develop algorithms that not only maintain data consistency but also effectively meet transaction timing constraints, aiming to minimize the percentage and average lateness of deadline-missing transactions.
The research is conducted in three phases:

- Modeling and Performance Study: The author first develops a model for an RTDBS and evaluates the performance of various concurrency control protocol classes under different operating conditions. This phase aims to understand protocol characteristics and their performance impact, and to validate the RTDBS model by reconfirming previous studies.
- New Optimistic Concurrency Control (OCC) Algorithm: Focusing on optimistic techniques, the study investigates their behavior in firm-deadline environments where tardy transactions are discarded. A new optimistic concurrency control algorithm is presented and shown to outperform existing ones across a broad range of operating conditions, making it a strong candidate for a basic concurrency control mechanism in RTDBSs.
- Deadline-Cognizant Optimistic Protocols: This phase addresses how to integrate deadline information into optimistic protocols to enhance real-time performance. A new priority-cognizant conflict resolution scheme is introduced, which is shown to significantly improve performance over priority-insensitive algorithms and to surpass previously proposed priority-based conflict resolution schemes across various operating ranges.

Throughout these phases, performance evaluation results are reported using a detailed simulation model developed in the first phase. Additionally, the dissertation explores semantic-based concurrency control to boost transaction concurrency and meet timing constraints, proposing an object-oriented data model for RTDBSs and a semantic-based concurrency control mechanism that leverages the studied protocols alongside general-purpose methods.
1.6. Original Source Link
The original source link provided is /files/papers/694639bc7a7e7809d937f3dd/paper.pdf. This appears to be a local or internal file path rather than a public web link. It is likely the PDF of the dissertation.
2. Executive Summary
2.1. Background & Motivation
The core problem addressed by this dissertation is the effective management of data consistency and timing constraints in Real-Time Database Systems (RTDBSs).
- Problem Importance: RTDBSs are crucial for a wide range of applications, including program stock trading, telephone switching, network management, automated factory management, and command and control systems. These systems require timely access to data and timely responses, often dealing with temporal data whose validity decays over time. The correctness of RTDBS operations depends not only on logical computations but also on the time at which results are delivered. This distinguishes them fundamentally from conventional database systems, which primarily focus on throughput and average response time without explicit timing requirements.
- Challenges and Gaps:
  - Impedance Mismatch: Traditional database systems and real-time systems have different assumptions and objectives. Simply integrating concepts from one into the other is not feasible; new approaches are needed for data and transaction management.
  - Predictability vs. Speed: Real-time computing is about predictability (meeting timing constraints deterministically), not just speed. Fast computing that performs the wrong activity is not useful.
  - Hard Deadlines Intractability: While hard deadlines (where missing a deadline can lead to catastrophe) are critical, guaranteeing them in dynamic database environments is extremely difficult. It requires static planning, pre-knowledge of resource and data requirements, and worst-case execution times, which are often unavailable or impractical in typical database systems. This limits the functionality and applicability of transactions.
  - Soft/Firm Deadlines Focus: A majority of real-world real-time applications involve soft or firm deadlines (where value may decrease after a deadline but missing it does not cause catastrophe). Most existing research and practical need lie in this area, motivating a focus on optimizing performance for these types of deadlines.
  - Concurrency Control Adaptation: Traditional concurrency control methods (like Two-Phase Locking or Optimistic Concurrency Control) are designed for logical consistency (e.g., serializability) but are not inherently time-cognizant. They need to be tailored to explicitly incorporate timing constraints to maximize deadline adherence.
- Paper's Innovative Idea/Entry Point: The dissertation aims to bridge the gap between data consistency and timing constraints by investigating and proposing novel concurrency control algorithms specifically designed for soft- and firm-deadline RTDBSs. It focuses on adapting optimistic concurrency control techniques, which are generally well suited for high concurrency, to be deadline-cognizant and to minimize deadline misses and tardy times. Furthermore, it explores semantic-based concurrency control to increase concurrency beyond traditional strict serializability for RTDBSs.
2.2. Main Contributions / Findings
The primary contributions and key findings of this dissertation are structured around its three research phases and an additional investigation:
- Comprehensive Performance Study of Concurrency Control:
  - Developed a robust real-time database system model for simulation.
  - Conducted extensive performance studies of various concurrency control protocol classes (locking vs. optimistic) under different conditions (soft/firm deadlines, finite/infinite resources).
  - Validated the model by reproducing and reconfirming results from previous studies, providing a solid foundation for subsequent research.
- Novel Optimistic Concurrency Control (OCC) Algorithm:
  - Identified unnecessary restarts as a performance bottleneck in traditional OCC under deadline environments.
  - Proposed a new optimistic concurrency control algorithm designed to reduce these unnecessary restarts.
  - Demonstrated through simulation that this new algorithm significantly outperforms previous OCC approaches over a wide range of operating conditions, offering a promising baseline for RTDBS concurrency control.
- Deadline-Cognizant Conflict Resolution for OCC:
  - Addressed the critical problem of incorporating deadline information into optimistic protocols.
  - Developed a new priority-cognizant conflict resolution scheme (e.g., Feasible Sacrifice). This scheme makes intelligent decisions during transaction validation, potentially sacrificing lower-priority or less-likely-to-meet-deadline transactions to enable higher-priority ones to complete.
  - Showed that this priority-cognizant approach provides considerable performance improvement over priority-insensitive algorithms and outperforms existing priority-based conflict resolution schemes across a wide operating range, effectively minimizing miss percentage and average lateness.
- Exploration of Semantic-Based Concurrency Control:
  - Investigated semantic-based concurrency control as a means to further increase transaction concurrency and meet timing constraints, moving beyond strict serializability when appropriate.
  - Proposed an object-oriented data model specifically tailored for real-time database systems.
  - Presented a semantic-based concurrency control mechanism that leverages the previously studied real-time concurrency control protocols and general-purpose methods for compatibility relations. This opens avenues for higher concurrency by exploiting application-specific semantics.

In summary, the dissertation provides validated insights into real-time concurrency control, proposes enhanced optimistic concurrency control algorithms and conflict resolution schemes that are deadline-aware, and lays groundwork for semantic-based approaches to improve RTDBS performance.
3. Prerequisite Knowledge & Related Work
3.1. Foundational Concepts
To understand this dissertation, a reader should be familiar with fundamental concepts from both database systems and real-time systems.
- Real-Time Database Systems (RTDBSs):
  - Concept: RTDBSs are database systems designed to handle data with explicit timing constraints in addition to traditional data consistency requirements. They are critical for applications where the correctness of a computation depends not only on the logical result but also on the time it is delivered.
  - Distinction from Conventional DBs: Unlike conventional databases that aim for high throughput or low average response time, RTDBSs prioritize meeting individual transaction deadlines.
  - Embedded Systems: Often, RTDBSs are part of embedded computer systems that interact with a physical environment via sensors and actuators, requiring timely processing of environmental data.
- Timing Constraints and Deadlines:
  - Concept: A timing constraint specifies when a transaction or task must complete. A deadline is the specific point in time by which a transaction must finish.
  - Value Function Model: The dissertation refers to the value function model to categorize deadlines. This model assigns a "value" or "importance" to a transaction's completion as a function of its completion time.
    - Hard Deadline: Missing a hard deadline results in a catastrophic failure or a large negative value. All transactions with hard deadlines must meet them. The dissertation argues that supporting hard deadlines dynamically is often infeasible due to lack of pre-execution information (resource needs, worst-case execution time).
    - Soft Deadline: Missing a soft deadline reduces the value of the transaction, but it still has some positive value even if completed after its deadline. The value typically drops to zero at some point past the deadline.
    - Firm Deadline: A special case of a soft deadline where the value of the transaction drops immediately to zero if its deadline is missed. Tardy transactions (those that miss their firm deadline) are typically aborted or discarded because their results are no longer useful.
  - Temporal Consistency: Beyond logical consistency, RTDBSs must maintain temporal consistency, ensuring that the data in the database accurately reflects, within acceptable time limits, the current state of the real-world environment it models.
- Transactions:
  - Concept: A transaction in a database system is a logical unit of work that accesses and potentially modifies data. It must be atomic, consistent, isolated, and durable (the ACID properties).
  - ACID Properties (briefly):
    - Atomicity: Either all operations within a transaction complete successfully, or none do.
    - Consistency: A transaction transforms the database from one consistent state to another.
    - Isolation: The concurrent execution of multiple transactions appears as if they were executed sequentially.
    - Durability: Once a transaction commits, its changes are permanent.
- Concurrency Control:
  - Concept: Concurrency control mechanisms ensure that multiple transactions running simultaneously do not interfere with each other and maintain the logical consistency of the database.
  - Data Conflicts: Conflicts arise when transactions try to access the same data items in incompatible ways (e.g., one tries to write while another reads, or both try to write).
  - Serializability: The most widely accepted correctness criterion for concurrency control. It ensures that the final state of the database after concurrent execution is the same as if the transactions had executed in some serial (one-after-another) order. This prevents anomalies like lost updates or inconsistent retrievals (explained in Appendix A.1.1).
- Microfilm/Xerographic Copies: These terms refer to older reproduction technologies for documents like dissertations.
  - Microfilm: A roll or sheet of photographic film containing miniature images of document pages. It was a common method for archiving and distributing academic papers before digital formats.
  - Xerographic Copies: Photocopies made from microfilm. The abstract notes that photographs reproduce acceptably on positive microfilm or microfiche but lack clarity on xerographic copies made from the microfilm. This highlights a technical limitation of the reproduction method for the physical dissertation copy.
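The firm and soft deadline cases of the value function model can be sketched as simple functions of completion time. The linear decay and the grace period below are illustrative assumptions of ours, not the dissertation's exact value functions:

```python
def firm_value(completion, deadline, full_value=1.0):
    """Firm deadline: full value up to the deadline, zero after."""
    return full_value if completion <= deadline else 0.0

def soft_value(completion, deadline, full_value=1.0, grace=5.0):
    """Soft deadline: value decays linearly after the deadline,
    reaching zero once the transaction is `grace` time units late."""
    if completion <= deadline:
        return full_value
    lateness = completion - deadline
    return max(0.0, full_value * (1 - lateness / grace))

# A transaction finishing 2 units late retains some value under a
# soft deadline, but none under a firm one.
print(soft_value(12.0, 10.0))  # 0.6
print(firm_value(12.0, 10.0))  # 0.0
```

A hard deadline would instead assign a large negative value past the deadline, which is why missing one is treated as a system failure rather than a performance degradation.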
3.2. Previous Works
The dissertation extensively reviews concurrency control techniques, primarily categorizing them into locking, timestamp ordering, and optimistic approaches.
- Traditional Approaches to Concurrency Control (Appendix A.2):
  - Two-Phase Locking (2PL):
    - Concept: A widely used locking protocol. Transactions acquire locks on data items before accessing them. Locks are held in two phases:
      - Growing Phase: A transaction can acquire new locks but cannot release any.
      - Shrinking Phase: A transaction can release existing locks but cannot acquire new ones.
    - Types of Locks:
      - Shared Lock (S-lock): Allows multiple transactions to read a data item concurrently.
      - Exclusive Lock (X-lock): Grants exclusive access, allowing only one transaction to write (and read) a data item.
    - Conflict Resolution: If a transaction requests a lock that conflicts with an existing lock, it must wait. This can lead to deadlocks (a situation where two or more transactions wait indefinitely for each other to release locks).
    - Real-time Adaptation: In real-time systems, 2PL can suffer from priority inversion, where a high-priority transaction waits for a low-priority one to release a lock, potentially missing its deadline. Various priority-based 2PL variants (e.g., 2PL-HP, for High Priority) have been proposed to address this by allowing high-priority transactions to abort conflicting low-priority ones.
  - Timestamp Ordering (TO):
    - Concept: Each transaction is assigned a unique timestamp at its initiation. This timestamp defines its serialization order.
    - Conflict Resolution: Operations are executed based on timestamps. If a transaction attempts an operation that would violate the timestamp order, it is aborted and restarted with a new (later) timestamp.
    - Variants:
      - Basic TO: Strict adherence to timestamp order; can lead to many aborts.
      - Multiversion Timestamp Ordering (MVTO): To reduce aborts, the database maintains multiple versions of data items. A read operation can access an older, consistent version, avoiding conflicts with ongoing writes, as long as the version's timestamp is appropriate for the reading transaction. Writes still create new versions and can cause aborts.
  - Optimistic Concurrency Control (OCC):
    - Concept: Based on the assumption that conflicts are rare. Transactions execute optimistically without acquiring locks during their read phase and perform all modifications on private copies of data.
    - Phases:
      - Read Phase: The transaction reads data items and writes any updates to a private workspace.
      - Validation Phase: Before committing, the transaction checks for conflicts with other concurrently executing transactions. If no conflicts are detected, it proceeds.
      - Write Phase: If validation is successful, changes from the private workspace are made permanent in the database.
    - Conflict Resolution: If a conflict is detected during validation, the transaction (or a conflicting one) is aborted and restarted.
    - Advantages: High concurrency in low-contention environments; no deadlocks.
    - Disadvantages: High restart overhead in high-contention environments; potential for livelock (a transaction repeatedly gets aborted and restarted).
    - Real-time Adaptation: Optimistic protocols are often favored in RTDBSs due to their potential for high concurrency and the absence of blocking (which can exacerbate priority inversion). However, validation must be deadline-aware to ensure high-priority transactions are favored.
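The shared/exclusive lock modes and conflict behavior of 2PL described above can be illustrated with a minimal lock table. This is a sketch under our own naming; it shows plain 2PL blocking, without the priority-abort extension:

```python
# Minimal shared/exclusive lock table sketch for 2PL.
# 'S' = shared (read) lock, 'X' = exclusive (write) lock.

COMPATIBLE = {
    ('S', 'S'): True,   # many readers may share an item
    ('S', 'X'): False,  # a writer excludes readers
    ('X', 'S'): False,
    ('X', 'X'): False,  # writers exclude each other
}

class LockTable:
    def __init__(self):
        self.locks = {}  # item -> list of (txn, mode)

    def request(self, txn, item, mode):
        """Grant the lock if it is compatible with all current holders;
        otherwise the requester must wait (returns False)."""
        holders = self.locks.setdefault(item, [])
        for other_txn, held_mode in holders:
            if other_txn != txn and not COMPATIBLE[(held_mode, mode)]:
                return False  # conflict: under plain 2PL, txn waits
        holders.append((txn, mode))
        return True

lt = LockTable()
print(lt.request('T1', 'x', 'S'))  # True: first reader
print(lt.request('T2', 'x', 'S'))  # True: shared locks are compatible
print(lt.request('T3', 'x', 'X'))  # False: write blocked by readers
```

The priority-based variants discussed above change only the `False` branch: instead of always waiting, a higher-priority requester aborts the lower-priority holder.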
3.3. Technological Evolution
The evolution of database systems from conventional to real-time involves several key shifts:
- From Batch to Interactive to Real-Time: Early databases focused on batch processing, then evolved to support interactive transactions. The demand for systems responding to real-world events in a timely manner led to the emergence of real-time systems.
- From Logical Consistency to Timing Constraints: The primary concern of conventional databases (e.g., SQL databases) is logical consistency (ensuring data integrity and correctness, typically via ACID properties and serializability). Real-time systems added the crucial dimension of timing constraints (deadlines, periodicity).
- The "Impedance Mismatch": Initial attempts to simply combine database management systems with real-time schedulers failed. Traditional database concurrency control algorithms (like 2PL) can cause priority inversion (a high-priority task waiting for a low-priority task) or unpredictable delays due to data blocking or restarts. Traditional real-time scheduling (e.g., Earliest Deadline First, EDF) often assumes tasks are independent or that resource conflicts are simple to resolve. Integrating the two required new approaches that make concurrency control time-cognizant.
- Focus on Soft/Firm Deadlines: Given the difficulty of guaranteeing hard deadlines in dynamic database environments, the field largely shifted its focus to soft and firm deadlines, where the goal is to maximize deadline adherence rather than to guarantee all deadlines. This dissertation fits squarely into this evolution, aiming to improve performance for these practical real-time scenarios.
3.4. Differentiation Analysis
This dissertation differentiates itself from prior work in real-time concurrency control through several innovations:
- Systematic Performance Analysis: Unlike many studies that focus on a single protocol, this work systematically evaluates and compares different concurrency control protocol classes (locking vs. optimistic) under a variety of realistic RTDBS conditions (soft vs. firm deadlines, finite vs. infinite resources). This comprehensive benchmarking provides a validated foundation.
- Novel OCC Algorithm for Reduced Restarts: The paper specifically identifies and addresses the problem of unnecessary restarts in existing optimistic concurrency control (OCC) algorithms. It proposes a new OCC algorithm that modifies the validation phase and serialization order adjustment to minimize these restarts, which is particularly crucial in firm-deadline environments where tardy transactions are discarded, making restarts costly. This is a direct improvement over the core mechanism of OCC.
- Advanced Deadline-Cognizant Conflict Resolution: The key innovation lies in moving beyond simple priority inheritance or priority abort schemes to more sophisticated deadline-cognizant conflict resolution policies for optimistic protocols. The Feasible Sacrifice approach, for instance, is not just about aborting a lower-priority transaction, but about making an informed decision about which transaction to abort (or restart) based on its likelihood of meeting its deadline and its impact on other transactions. This adaptive and predictive element is a significant step beyond existing priority-based schemes (like No Sacrifice, Always Sacrifice, Conservative Sacrifice, and Unavoidable Sacrifice), which can be too rigid or too aggressive.
- Integration of Semantic Information: While not the primary focus of the core concurrency control chapters, the dissertation also explores semantic-based concurrency control within an object-oriented data model. This is a departure from purely syntactic (data-item-level) conflict detection and aims to exploit application-specific semantics to allow higher concurrency than strict serializability permits, without compromising logical correctness. This forward-looking aspect broadens the scope beyond purely algorithmic improvements.

In essence, the dissertation moves beyond merely applying traditional concurrency control with priorities to real-time systems. It proposes specific algorithmic enhancements to optimistic concurrency control and introduces more intelligent, deadline-aware conflict resolution strategies that consider the actual feasibility of transactions, rather than just their static priority.
4. Methodology
This dissertation conducts its research in three primary phases, followed by an investigation into semantic-based concurrency control. The methodology revolves around designing, implementing, and simulating various concurrency control algorithms within a real-time database system (RTDBS) model.
4.1. Principles
The core principles guiding this research are:
- Time-Cognizant Design: All proposed algorithms and modifications explicitly consider the timing constraints (deadlines) of transactions, rather than solely focusing on logical consistency or average performance metrics.
- Simulation-Based Evaluation: Due to the complexity of real-time systems and the difficulty of conducting controlled experiments on live systems, a detailed simulation model is used for performance evaluation. This allows for reproducible results under a wide range of operating conditions.
- Optimistic Approach Preference: The research leans towards optimistic concurrency control (OCC) as a promising candidate for RTDBSs due to its inherent non-blocking nature in the read phase, which can reduce the priority inversion issues common in locking protocols. The challenge then becomes optimizing its validation and conflict resolution phases to be deadline-aware.
- Minimizing Deadline Misses: The primary performance objective is to minimize the miss percentage and average lateness of transactions, especially for soft- and firm-deadline systems.
4.2. Core Methodology In-depth (Layer by Layer)
4.2.1. Phase 1: Performance of Concurrency Control Algorithms (Chapter 2)
This phase establishes the baseline understanding of different concurrency control (CC) algorithms in an RTDBS context.
4.2.1.1. Concurrency Control Algorithms Under Study
The dissertation primarily focuses on two major classes of CC algorithms:
- A Locking Algorithm (Section 2.2.1):
  - Algorithm: Two-Phase Locking with High Priority (2PL-HP). This is a strict 2PL protocol enhanced with a priority mechanism.
  - Mechanism: When a transaction T requests a lock on a data item that is already locked by another transaction T_c in a conflicting mode, 2PL-HP resolves the conflict by priority:
    - If T has a higher priority than T_c, then T_c is aborted (restarted) and its locks are released, allowing T to acquire the lock.
    - If T has a lower or equal priority to T_c, then T waits for T_c to release the lock.
  - Lock Management: Locks are requested before accessing data items. For writes, an exclusive lock is required; for reads, a shared lock. The strict aspect means all exclusive locks are held until the transaction commits.
- An Optimistic Concurrency Control Algorithm (Section 2.2.2):
  - Algorithm: Optimistic Concurrency Control with High Priority (OCC-HP). This is a basic OCC protocol adapted for real-time environments.
  - Mechanism: Transactions execute in three phases:
    - Read Phase: A transaction reads data items and stores all writes in a private workspace without acquiring any locks.
    - Validation Phase: When a transaction T_v is ready to commit, it enters the validation phase and checks for conflicts with other active transactions.
      - T_v is validated against all transactions T_a that have completed their validation phase and committed since T_v started its read phase.
      - T_v is also validated against all transactions T_r currently in their read phase.
      - Specifically, T_v's read set (the set of items read) and write set (the set of items written) are checked against other transactions' read/write sets. OCC-HP handles conflicts based on priority: if T_v conflicts with an active transaction T_c and T_v has a higher priority than T_c, then T_c is aborted; if T_v has lower or equal priority, then T_v is aborted.
    - Write Phase: If validation is successful, T_v's updates from its private workspace are permanently written to the database.
- Qualitative Analysis (Section 2.2.3):
  - 2PL-HP: Can suffer from priority inversion when a high-priority transaction waits for a lower-priority one. It can also lead to deadlocks if not properly handled (though 2PL-HP often uses priority abort to resolve this).
  - OCC-HP: Generally avoids priority inversion during the read phase because no locks are held. However, validation becomes a critical bottleneck. High contention can lead to many restarts, potentially causing livelock for low-priority transactions. The cost of a restart is significant because all work already done is discarded.
- Implementation Issues (Section 2.2.4):
  - Timestamp Assignment: Transactions are assigned timestamps upon creation, typically based on their deadlines (e.g., Earliest Deadline First, EDF). A transaction with an earlier deadline gets a higher priority (smaller timestamp).
  - Data Structures: A read set and a write set are maintained for each transaction. A Commit_history records information about committed transactions for OCC validation.
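The OCC-HP conflict rule described above (abort the lower-priority side of a validation conflict) can be sketched in Python. This is an illustrative reconstruction; the class names, the conflict test, and the priority encoding are our assumptions, not code from the dissertation:

```python
# Toy OCC-HP validation sketch. Each transaction carries a priority
# (smaller number = earlier deadline = higher priority) plus the
# read/write sets accumulated during its read phase.

class Txn:
    def __init__(self, name, priority, read_set=(), write_set=()):
        self.name = name
        self.priority = priority      # lower number = higher priority
        self.read_set = set(read_set)
        self.write_set = set(write_set)
        self.aborted = False

def conflicts(validating, other):
    """A conflict exists when the validating transaction's write set
    intersects the other's read or write set, or the other's write
    set intersects the validating transaction's read set."""
    return bool(validating.write_set & (other.read_set | other.write_set)
                or other.write_set & validating.read_set)

def validate(validating, active):
    """OCC-HP rule: on conflict, abort the lower-priority side.
    Returns True if `validating` may proceed to its write phase."""
    for other in active:
        if other is validating or other.aborted:
            continue
        if conflicts(validating, other):
            if validating.priority < other.priority:
                other.aborted = True       # restart lower-priority txn
            else:
                validating.aborted = True  # validating txn loses
                return False
    return True

high = Txn('T_high', priority=1, read_set={'x'}, write_set={'y'})
low = Txn('T_low', priority=5, read_set={'y'}, write_set={'z'})
print(validate(high, [low]))  # True: T_low is aborted instead
print(low.aborted)            # True
```

The dissertation's contribution is precisely in refining this abort decision: a deadline-cognizant scheme such as Feasible Sacrifice would also ask whether the sacrificed transaction could still have met its deadline.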
4.2.1.2. RTDBS Model (Section 2.3)
The simulation model represents a multi-processor, shared-memory RTDBS.
- Soft Deadline vs. Firm Deadline (Section 2.3.1):
  - Soft Deadline: Transactions that miss their deadline are still allowed to complete, and their results are accepted. The performance metric average tardy time (or average lateness) is relevant here.
  - Firm Deadline: Transactions that miss their deadline are immediately aborted and discarded because their results are no longer useful. For these systems, miss percentage is the primary performance metric.
- Resource Availability (Section 2.3.2):
  - Finite Resources: A limited number of CPUs and disks are available, leading to resource contention.
  - Infinite Resources: Assumes CPU and disk resources are unlimited, isolating data contention as the primary performance factor. This simplifies analysis to understand the pure impact of concurrency control.
4.2.1.3. Experimental Environment (Section 2.4)
The simulation model is built using CSIM, a process-oriented simulation language.
- Simulation Model (Section 2.4.1):
  - Components:
    - Transaction Generator: Creates transactions with random arrival times and assigns deadlines.
    - Transaction Manager (TM): Manages the lifecycle of transactions, including priority assignment (EDF, or Value-EDF for soft deadlines) and restart logic.
    - Concurrency Control Manager (CCM): Implements the chosen concurrency control protocol (2PL-HP or OCC-HP).
    - Resource Manager (RM): Manages CPU and disk resources.
    - Database: A collection of data items.
  - Transaction Flow: Transactions arrive, go through CPU processing (for logic) and I/O (for data access), and interact with the CCM for locking/validation.
- Parameter Setting (Section 2.4.2): The simulation uses various parameters for system resources and workload.

  The following are the settings from Table 2.1 of the original paper:

  | System Resource Parameters | Setting |
  |---|---|
  | Number of CPUs | 8 |
  | Number of Disks | 8 |
  | CPU Speed | 10 MIPS |
  | I/O Speed (Disk Access Time) | 20 ms |
  | DB Size (Number of Data Items) | 1000 |
  | Page Size | 4 KB |

  The following are the settings from Table 2.2 of the original paper:

  | Workload Parameters | Setting |
  |---|---|
  | Transaction Arrival Rate | 10 to 100 transactions/second |
  | Transaction Size (Number of Operations) | 10 |
  | Write Probability | 0.25, 0.5, 0.75 |
  | Deadline Factor ($D_F$) | 2.0, 4.0, 6.0, 8.0, 10.0 |
  | Restart Penalty Factor | 2.0 |
  | Data Access Skew | Uniform |
  | Restart Probability | 0 |

  - Transaction Arrival Rate: The rate at which new transactions enter the system.
  - Transaction Size: The number of operations (read/write) per transaction.
  - Write Probability: The likelihood that an operation is a write, influencing data contention.
  - Deadline Factor ($D_F$): Controls how tight deadlines are. A transaction's deadline is calculated as
    $ \text{Deadline} = \text{Arrival Time} + D_F \times \text{Execution Time} $
    where Execution Time is the estimated total time for CPU and I/O operations without any contention. A smaller $D_F$ means tighter deadlines.
  - Restart Penalty Factor: A multiplier for the cost of restarting a transaction. When a transaction is restarted, its deadline is adjusted to reflect the wasted time.
  - Data Access Skew: How evenly data items are accessed (uniform means all items are equally likely to be accessed).
-
Performance Metrics (Section 2.4.3):
- Miss Percentage: The percentage of transactions that fail to commit by their deadline.
  - Conceptual Definition: This metric quantifies the effectiveness of the system in meeting timing constraints. A lower miss percentage indicates better real-time performance. It is particularly crucial for firm-deadline systems, where tardy transactions are useless.
  - Mathematical Formula: $ \text{Miss Percentage} = \frac{\text{Number of transactions missing deadline}}{\text{Total number of transactions attempted}} \times 100\% $
  - Symbol Explanation:
    - Number of transactions missing deadline: Count of transactions whose completion time is later than their assigned deadline.
    - Total number of transactions attempted: Total count of transactions that entered the system, including those that committed, aborted, or are still running.
- Average Tardy Time: For soft-deadline systems, this measures the average amount of time by which transactions miss their deadline.
  - Conceptual Definition: This metric provides insight into the degree of lateness for transactions that fail to meet their timing constraints. A lower average tardy time indicates that even when deadlines are missed, they are not missed by a large margin. It is relevant when late results still have some value.
  - Mathematical Formula: $ \text{Average Tardy Time} = \frac{\sum_{i=1}^{N_{miss}} (\text{Completion Time}_i - \text{Deadline}_i)}{N_{miss}} $
  - Symbol Explanation:
    - $N_{miss}$: Total number of transactions that missed their deadline.
    - $\text{Completion Time}_i$: The time at which transaction $i$ finished execution.
    - $\text{Deadline}_i$: The assigned deadline for transaction $i$.
- Throughput: The number of transactions committed per second.
  - Conceptual Definition: This traditional database metric measures the efficiency of the system in processing and successfully completing transactions. Higher throughput generally means more work is being done. In real-time systems, it must be considered alongside deadline adherence.
  - Mathematical Formula: $ \text{Throughput} = \frac{\text{Number of committed transactions}}{\text{Total simulation time}} $
  - Symbol Explanation:
    - Number of committed transactions: The count of transactions that successfully completed and had their changes made permanent.
    - Total simulation time: The total duration for which the simulation was run.
- Average Response Time: The average time from transaction arrival to completion.
  - Conceptual Definition: This metric quantifies the average total time a transaction spends in the system from its arrival until its successful completion. It includes waiting for resources and data as well as execution time.
  - Mathematical Formula: $ \text{Average Response Time} = \frac{\sum_{i=1}^{N_{commit}} (\text{Completion Time}_i - \text{Arrival Time}_i)}{N_{commit}} $
  - Symbol Explanation:
    - $N_{commit}$: Total number of committed transactions.
    - $\text{Completion Time}_i$: The time at which transaction $i$ finished execution.
    - $\text{Arrival Time}_i$: The time at which transaction $i$ entered the system.
- Restart Count: The average number of times a transaction is restarted due to concurrency control conflicts.
  - Conceptual Definition: This metric directly reflects the overhead caused by conflict resolution, particularly in optimistic concurrency control, where aborts and restarts are the primary mechanism. A high restart count indicates significant wasted work.
  - Mathematical Formula: $ \text{Restart Count} = \frac{\text{Total number of transaction restarts}}{\text{Total number of committed transactions}} $
  - Symbol Explanation:
    - Total number of transaction restarts: The cumulative count of all times transactions were aborted and re-initiated.
    - Total number of committed transactions: The count of transactions that successfully completed.
- Data Blocking Time: The average time a transaction spends waiting for a lock due to data conflicts.
  - Conceptual Definition: This metric is specific to locking-based concurrency control and quantifies the amount of time transactions are stalled due to data contention. It directly impacts response time and can indicate priority inversion issues.
  - Mathematical Formula: (Not explicitly provided in the paper, but commonly defined as) $ \text{Data Blocking Time} = \frac{\sum_{i=1}^{N_{blocked}} \text{Wait Time}_i}{N_{blocked}} $
  - Symbol Explanation:
    - $N_{blocked}$: Number of times a transaction was blocked.
    - $\text{Wait Time}_i$: The duration for which a transaction was blocked during the $i$-th blocking instance. (Alternatively, the sum of wait times over all transactions divided by the number of transactions.)
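Taken together, these metrics can be computed from a per-transaction simulation log; the sketch below is a minimal version (the record layout and names are my own assumptions, not the dissertation's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TxnRecord:
    arrival: float
    deadline: float
    completion: Optional[float]   # None if the transaction never committed
    restarts: int

def compute_metrics(log, sim_time):
    """Miss percentage, throughput, average response time, restart count,
    and average tardy time over a finished simulation run."""
    committed = [t for t in log if t.completion is not None]
    missed = [t for t in log if t.completion is None or t.completion > t.deadline]
    tardy = [t.completion - t.deadline for t in committed if t.completion > t.deadline]
    return {
        "miss_pct": 100.0 * len(missed) / len(log),
        "throughput": len(committed) / sim_time,
        "avg_response": sum(t.completion - t.arrival for t in committed) / len(committed),
        "restart_count": sum(t.restarts for t in log) / len(committed),
        "avg_tardy": sum(tardy) / len(tardy) if tardy else 0.0,
    }
```

Note that, following the definitions above, a tardy-but-committed transaction counts toward both the miss percentage and the throughput, while a never-committed transaction counts only as a miss.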
4.2.2. Phase 2: Dynamic Adjustment of Serialization Order (Chapter 3)
This phase focuses on improving optimistic concurrency control by dynamically adjusting the serialization order to reduce unnecessary restarts.
4.2.2.1. Optimistic Concurrency Control Principles (Section 3.2.1)
Reiterates the three phases of OCC: Read Phase, Validation Phase, and Write Phase. The key is that validation determines if a transaction can commit without violating serializability.
4.2.2.2. Unnecessary Restarts (Section 3.2.2)
- Problem: Traditional OCC protocols (like OCC-HP) make decisions about restarts based on a fixed serialization order (e.g., determined by timestamp at arrival). If an active transaction T_a conflicts with a validating transaction T_v, and T_v has an earlier serialization order (higher priority), T_a is typically aborted. However, T_a might have already completed a significant amount of work, or might itself be a high-priority transaction that could still meet its deadline. Aborting it may be "unnecessary" if an alternative serialization order could be found, or if T_v is likely to miss its deadline anyway.
4.2.2.3. A New Optimistic Concurrency Control Algorithm (Section 3.3)
The proposed algorithm, called Dynamic Adjustment of Serialization Order (DASO-OCC), aims to avoid unnecessary restarts by allowing a validating transaction to yield to a younger, conflicting active transaction if that transaction has a higher priority or is more likely to meet its deadline. This effectively allows for a dynamic adjustment of the serialization order.
- Validation Phase (Section 3.3.1):
  - When a transaction T_v (the validating transaction) reaches its validation phase, it checks for conflicts with committed transactions and active transactions.
  - Against Committed Transactions (T_c): T_v validates against all T_c that committed while T_v was in its read phase. If T_v reads an item written by T_c, or T_v writes an item read by T_c, and T_c's commit time is before T_v's start time, there is no conflict. Otherwise, if T_v has an older serialization order (earlier timestamp) than T_c and there is an actual conflict (e.g., T_v's write set intersects T_c's read set), T_v must be aborted to maintain serializability.
  - Against Active Transactions (T_a): This is where DASO-OCC differentiates itself.
    - Conflict Detection: T_v checks its write set against T_a's read set, and its read set against T_a's write set.
    - Priority and Serialization Order Comparison:
      - If T_v has a higher priority (earlier deadline) than T_a and they conflict, T_a is aborted (standard OCC-HP behavior).
      - If T_v has a lower or equal priority than T_a, DASO-OCC introduces its dynamic adjustment. Instead of immediately aborting T_v (as OCC-HP would), DASO-OCC tries to preserve T_v by letting T_a commit first, effectively giving T_a an earlier serialization order than T_v, even though T_v arrived earlier.
    - Yielding Mechanism: If T_v's serialization order (defined by its timestamp) is earlier than T_a's, but T_a has a higher priority or is more critical, T_v can yield.
      - T_v sets a yielding flag and waits, which lets T_a commit first.
      - When T_a later validates, it sees T_v's yielding flag. If T_a validates successfully, T_v may need to restart from the beginning, but with a potentially improved serialization order (a later timestamp) that avoids repeated conflicts with T_a.
      - If T_a is aborted for other reasons, T_v can then proceed with its original validation.
    - The algorithm dynamically calculates a wait-for-commit time (WFC) for T_v: T_v waits for conflicting transactions that have higher priority (or meet certain criteria) to commit. If those transactions do not commit by WFC, T_v proceeds with its own validation, potentially aborting them. The objective is to maximize the chance of committing high-priority transactions.
- Read Phase (Section 3.3.2): Standard OCC read phase. The transaction reads data and stores updates in a private workspace.
- Write Phase (Section 3.3.3): Standard OCC write phase. If validation is successful, changes are made permanent.
- Correctness (Section 3.3.4): The algorithm maintains serializability by ensuring that for any two conflicting transactions T_i and T_j, if T_i's commit time is before T_j's commit time, then T_i's serialization order is also before T_j's. This is achieved through careful handling of timestamps and restarts during the validation phase. The yield mechanism effectively allows an earlier-timestamped transaction (T_v) to commit after a later-timestamped transaction (T_a) if T_v yields, thus dynamically adjusting their effective serialization order without violating serializability for the final committed state.
- Discussion (Section 3.3.5): The DASO-OCC algorithm introduces a new tradeoff: allowing validating transactions to wait (yield) instead of immediately aborting. Waiting can reduce restarts but may increase response time for the waiting transaction. The goal is to make the wait decision intelligent, based on priorities and expected deadlines.
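As I read Section 3.3, the validation-phase decision can be paraphrased in code roughly as follows. This is a sketch under my own simplifying assumptions: conflict detection is reduced to read/write-set intersection, and the wait-for-commit (WFC) computation, which the section does not spell out, is replaced by a placeholder that waits at most until T_v's own deadline:

```python
from dataclasses import dataclass, field

@dataclass
class Txn:
    priority: float                    # larger value = higher priority (earlier deadline)
    deadline: float
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)
    yielding: bool = False
    wait_until: float = 0.0
    aborted: bool = False

    def abort(self):
        self.aborted = True

def conflicts(tv: Txn, ta: Txn) -> bool:
    """Read/write-set intersection test between validating T_v and active T_a."""
    return bool(tv.write_set & ta.read_set) or bool(tv.read_set & ta.write_set)

def validate(tv: Txn, active: list) -> str:
    """Sketch of DASO-OCC validation of T_v against the active transactions."""
    for ta in active:
        if not conflicts(tv, ta):
            continue
        if tv.priority > ta.priority:
            ta.abort()                  # standard OCC-HP: lower-priority T_a restarts
        else:
            tv.yielding = True          # dynamic adjustment: T_a slips ahead of T_v
            tv.wait_until = tv.deadline # placeholder WFC: wait at most until T_v's deadline
            return "YIELD"
    return "COMMIT"
```

The key difference from plain OCC-HP is the `"YIELD"` branch: instead of one side being aborted immediately, T_v parks itself and lets the higher-priority T_a commit first, repositioning T_v later in the serialization order.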
4.2.3. Phase 3: Design of Real-Time Optimistic Concurrency Control (Chapter 4)
This phase focuses on deadline-cognizant conflict resolution within optimistic protocols, building on the OCC-HP framework but introducing more sophisticated sacrifice policies.
4.2.3.1. The Choice of Optimistic Protocol (Section 4.2)
The dissertation re-affirms the choice of optimistic protocols for RTDBS due to their non-blocking nature in the read phase, which minimizes priority inversion compared to locking. The challenge is that restarts are costly, so conflict resolution must be intelligent.
4.2.3.2. Deadline-Cognizant Conflict Resolution Policies (Section 4.3)
These policies are variations on how a validating transaction (T_v) resolves conflicts with an active transaction (T_a) in the read phase. The policies decide which transaction, if any, should be sacrificed (aborted).
- No Sacrifice (NS) (Section 4.3.1):
  - Mechanism: If T_v conflicts with T_a:
    - If T_v has higher priority than T_a, T_a is aborted.
    - If T_v has lower or equal priority than T_a, T_v is aborted.
  - Characteristic: This is the standard OCC-HP behavior. It is priority-sensitive but makes immediate decisions without further analysis of deadline feasibility.
- Always Sacrifice (AS) (Section 4.3.2):
  - Mechanism: If T_v conflicts with T_a:
    - If T_v has higher priority than T_a, T_a is aborted.
    - If T_v has lower or equal priority than T_a, T_v is still aborted.
  - Characteristic: This policy always sacrifices the lower-priority transaction, which is T_v when its priority is lower. It aggressively favors higher-priority transactions, potentially leading to livelock for low-priority transactions.
- Conservative Sacrifice (CS) (Section 4.3.3):
  - Mechanism: If T_v conflicts with T_a:
    - If T_v has higher priority than T_a, T_a is aborted.
    - If T_v has lower or equal priority than T_a, T_v is aborted.
  - Characteristic: As described, this appears identical to No Sacrifice, implying a default behavior for the lower-priority side. It may indicate a more cautious approach in which a transaction is sacrificed only when strictly necessary to preserve an active higher-priority transaction, but the paper's description does not elaborate on what distinguishes it from NS.
- Unavoidable Sacrifice (US) (Section 4.3.4):
  - Mechanism: If T_v conflicts with T_a:
    - If T_v has higher priority than T_a, T_a is aborted.
    - If T_v has lower or equal priority than T_a, T_v is aborted.
  - Characteristic: Again, the description matches No Sacrifice. The names NS, AS, CS, and US (along with Adaptive Sacrifice) are referenced in related work (Hari90b, Hari91b), so they are existing baseline policies being compared rather than distinct new policies introduced here; the actual distinctions are subtle and likely involve how "sacrifice" is defined relative to other system state or transaction properties beyond simple priority.
- Adaptive Sacrifice (ADS) (Section 4.3.5):
  - Mechanism: If T_v conflicts with T_a:
    - If T_v has higher priority than T_a, T_a is aborted.
    - If T_v has lower or equal priority than T_a:
      - T_v is aborted if it is unlikely to meet its deadline, or if T_a is likely to miss its deadline without T_v's sacrifice.
      - T_v is committed (and T_a is potentially aborted) if T_v is likely to meet its deadline and T_a is unlikely to meet its deadline even with T_v's sacrifice.
  - Characteristic: This policy attempts a more intelligent decision by considering the slack, or remaining execution time relative to the deadline, of both transactions. It tries to save the transactions that have a higher chance of meeting their deadlines.
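Taking the descriptions above at face value, the priority-only policies and the adaptive policy differ roughly as sketched below. This is my own paraphrase, not the dissertation's code; the deadline-likelihood test is reduced to a boolean that each transaction would supply:

```python
def resolve_priority_only(tv_priority: float, ta_priority: float) -> str:
    """NS-style resolution: the lower-priority side of the conflict is sacrificed."""
    return "abort_Ta" if tv_priority > ta_priority else "abort_Tv"

def resolve_adaptive(tv_priority: float, ta_priority: float,
                     tv_can_make_deadline: bool, ta_can_make_deadline: bool) -> str:
    """ADS-style resolution: consider whether each transaction can still
    meet its deadline before choosing a victim."""
    if tv_priority > ta_priority:
        return "abort_Ta"
    if tv_can_make_deadline and not ta_can_make_deadline:
        return "abort_Ta"   # T_a is likely to miss its deadline anyway; save T_v
    return "abort_Tv"
```

The adaptive variant differs from the priority-only one exactly in the case where a lower-priority validating transaction is still viable while the higher-priority active one is already doomed.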
4.2.3.3. Our Approach (Section 4.4)
The dissertation proposes a new approach called Feasible Sacrifice (FS) for deadline-cognizant conflict resolution.
- Feasible Sacrifice (FS) (Section 4.4.1):
  - Principle: When a validating transaction (T_v) conflicts with an active transaction (T_a) in its read phase, the decision to sacrifice (abort) one of them is based on whether either transaction (or both) can realistically meet its deadline if allowed to proceed.
  - Mechanism:
    - Direct Priority Check: If T_v has higher priority than T_a, T_a is aborted.
    - Feasibility Check for Lower-Priority T_v: If T_v has lower or equal priority than T_a, instead of immediately aborting T_v (as in NS/AS), FS performs a feasibility check:
      - It estimates the remaining execution time for both T_v and T_a.
      - It calculates the slack time for each: Deadline - Current Time - Remaining Execution Time.
      - If T_v is feasible (positive slack, indicating it can meet its deadline) and T_a is infeasible (negative slack, indicating it cannot meet its deadline even if T_v is aborted), then T_a is aborted and T_v is allowed to commit. This is the crucial difference: a lower-priority T_v can abort a higher-priority T_a when T_a is already doomed to miss its deadline, preventing wasted work.
      - Otherwise, if T_v is infeasible or T_a is feasible (or both), T_v is aborted. The logic is that if T_v cannot meet its deadline it should be aborted, and if T_a can meet its deadline it should be favored.
- Predictable Transaction Execution (Section 4.4.2):
  - Feasible Sacrifice relies on predicting transaction execution time and slack. The paper discusses how to estimate these.
  - Slack Calculation: $ \text{Slack}(T) = T.\text{deadline} - \text{current\_time} - \text{remaining\_exec\_time}(T) $
  - Symbol Explanation:
    - $\text{Slack}(T)$: The amount of time transaction $T$ has left before its deadline, beyond its estimated remaining execution time. A positive slack means it can likely meet its deadline.
    - $T.\text{deadline}$: The deadline assigned to transaction $T$.
    - $\text{current\_time}$: The current time in the simulation.
    - $\text{remaining\_exec\_time}(T)$: The estimated CPU and I/O time required for $T$ to complete its remaining operations.
  - Feasible Transaction: A transaction $T$ is considered feasible if its $\text{Slack}(T) \geq 0$.
  - The predictive nature of FS helps make more informed decisions about sacrificing transactions, reducing unnecessary restarts and maximizing the number of completed transactions that meet their deadlines.
4.2.4. Semantic-Based Concurrency Control (Chapter 5)
This chapter explores an alternative paradigm to traditional syntactic concurrency control by leveraging application semantics to achieve higher concurrency.
4.2.4.1. An Object-Oriented Data Model for RTDBS (Section 5.2)
- Basic Concepts: An object-oriented approach is chosen because it allows richer data modeling and encapsulation of methods, which can be leveraged for semantic-based concurrency control.
- Object Model (Section 5.2.2): Data is organized into objects, each with its own state (attributes) and behavior (methods). Classes define the structure and behavior of objects.
- Relationship Model (Section 5.2.3): Defines how objects are related to each other (e.g., aggregation, inheritance, association).
- Transaction Model (Section 5.2.4): Transactions in this model operate by invoking methods on objects. Each method invocation can be considered a sub-transaction.
- Example (Section 5.2.5): A real-time database schema example is provided (Figure 5.1), illustrating classes like Control-Objects, Sensor-Objects, and Actuator-Objects, and their relationships.
Figure 5.1 An Object-Oriented Real-Time Database Schema. This figure visually represents a hierarchical structure for an object-oriented real-time database, including Control-Objects, Sensor-Objects, and Actuator-Objects, with relationships indicating their interactions within the system.
4.2.4.2. Design Issues (Section 5.3)
- Compatibility Relation (Section 5.3.1): Instead of simple read/write conflicts, method invocations are classified as compatible or incompatible based on their semantics. For example, two increment operations on a counter might be compatible even though both modify the same data.
- Concurrency Control Algorithms (Section 5.3.2): Traditional CC algorithms need to be adapted to operate on method-level compatibility rather than data-item-level conflicts.
- Inter-Object Data Consistency (Section 5.3.3): Maintaining consistency across related objects becomes crucial when allowing finer-grained concurrency.
4.2.4.3. Our Approach (Section 5.5)
- Compatibility Relation by Affected-Set (Section 5.5.1):
  - Principle: The compatibility of two methods is determined by their affected-sets. The affected-set of a method is the set of data items that the method reads or writes.
  - Mechanism: Two methods $m_1$ and $m_2$ are compatible if, for any data item $d$ in their respective affected-sets:
    - If $d$ is read by $m_1$ and read by $m_2$, they are compatible.
    - If $d$ is written by $m_1$ and read by $m_2$, they are compatible if the semantic dependencies allow.
    - If $d$ is written by $m_1$ and written by $m_2$, they are compatible if the semantic dependencies allow.
  - This generalizes the read/write conflict matrix. It allows for non-serializable but semantically correct executions, e.g., allowing two deposit operations on an account to commute.
- Real-Time Concurrency Control Algorithms (Section 5.5.2): The semantic-based compatibility information can be integrated into the optimistic and locking protocols developed in previous chapters. For example, during validation in OCC, instead of checking for raw read/write-set intersections, the system checks for semantic incompatibility between method invocations. This can reduce the number of detected conflicts and thus restarts, leading to higher concurrency and better deadline adherence.

The overall methodology combines theoretical design of algorithms with extensive simulation to validate their practical performance in real-time database systems under various constraints, moving from general performance analysis to specific algorithmic enhancements and finally to more abstract semantic approaches.
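The affected-set compatibility check of Section 5.5.1 can be sketched as follows. The structure is my own: in the dissertation the write/write and write/read cases consult semantic dependencies, which I reduce here to a commutativity whitelist for illustration:

```python
# Each method declares its name and the data items it reads and writes.
# Semantically compatible write/write pairs (e.g., two commuting deposits):
COMMUTATIVE = {("deposit", "deposit")}

def compatible(m1: dict, m2: dict) -> bool:
    """Affected-set compatibility: read/read overlaps never conflict;
    overlapping writes fall back to semantic knowledge."""
    r1, w1 = m1["reads"], m1["writes"]
    r2, w2 = m2["reads"], m2["writes"]
    if not (w1 & (r2 | w2)) and not (w2 & r1):
        return True                      # no read/write or write/write overlap
    pair = (m1["name"], m2["name"])
    return pair in COMMUTATIVE or pair[::-1] in COMMUTATIVE
```

A validation step using `compatible` instead of raw set intersection would admit, for example, two concurrent deposits on the same account balance while still rejecting an audit that overlaps a withdrawal.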
5. Experimental Setup
The experimental setup described in the dissertation relies heavily on a simulation model to evaluate the performance of the proposed concurrency control algorithms for real-time database systems (RTDBSs).
5.1. Datasets
Instead of using traditional datasets, the paper utilizes a simulated database and transaction workload.
- Simulated Database:
  - Scale: The database consists of 1000 data items. Each data item represents a page of 4 KB.
  - Characteristics: Data items are accessed uniformly, meaning there is no data access skew towards specific popular items, which simplifies the analysis of concurrency control overhead.
  - Domain: The simulation is general-purpose and not tied to a specific application domain, allowing broader applicability of the findings.
- Simulated Transaction Workload:
  - Source: Transactions are generated by a Transaction Generator component of the simulation model.
  - Arrival Pattern: Transactions arrive at the system according to a Poisson distribution, with a configurable Transaction Arrival Rate ranging from 10 to 100 transactions/second.
  - Transaction Size: Each transaction executes 10 operations (read or write).
  - Write Probability: The probability that any given operation is a write is varied (0.25, 0.5, 0.75). A higher write probability generally leads to higher data contention.
  - Deadlines: Each transaction is assigned a deadline based on its arrival time, an estimated execution time (without contention), and a Deadline Factor ($D_F$): $ \text{Deadline} = \text{Arrival Time} + D_F \times \text{Execution Time} $. The Deadline Factor is varied (2.0, 4.0, 6.0, 8.0, 10.0) to simulate different deadline tightness, with smaller values indicating tighter deadlines.
  - Restart Penalty Factor: When a transaction is restarted, its deadline is adjusted to reflect the time lost, multiplied by this factor. This simulates the real cost of restarting transactions.
  - Data Access: Each operation accesses a random data item from the 1000 available items.
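The workload generator described above can be sketched with exponential inter-arrival times, which yield Poisson arrivals (a minimal version; the function and parameter names are mine):

```python
import random

def generate_transactions(rate, size, write_prob, horizon, rng):
    """Yield (arrival_time, operations) pairs with Poisson arrivals at
    `rate` txns/sec until `horizon` seconds; each transaction issues
    `size` read/write operations over a 1000-item database."""
    t = 0.0
    while True:
        t += rng.expovariate(rate)        # exponential gaps <=> Poisson arrivals
        if t > horizon:
            return
        ops = [("write" if rng.random() < write_prob else "read",
                rng.randrange(1000)) for _ in range(size)]
        yield t, ops
```
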
- Resource Configuration:
  - The System Resource Parameters for finite-resource experiments include 8 CPUs and 8 disks.
  - CPU Speed is 10 MIPS (millions of instructions per second).
  - I/O Speed (disk access time) is 20 ms.
  - For infinite-resource experiments, CPU and disk resources are assumed unlimited, to isolate the effects of data contention.
5.2. Evaluation Metrics
The dissertation uses several performance metrics to evaluate the concurrency control algorithms:
- Miss Percentage:
  - Conceptual Definition: This metric quantifies the effectiveness of the system in meeting timing constraints. A lower miss percentage indicates better real-time performance. It is particularly crucial for firm-deadline systems, where tardy transactions are useless.
  - Mathematical Formula: $ \text{Miss Percentage} = \frac{\text{Number of transactions missing deadline}}{\text{Total number of transactions attempted}} \times 100\% $
  - Symbol Explanation:
    - Number of transactions missing deadline: Count of transactions whose completion time is later than their assigned deadline.
    - Total number of transactions attempted: Total count of transactions that entered the system, including those that committed, aborted, or are still running.
- Average Tardy Time:
  - Conceptual Definition: For soft-deadline systems, this measures the average amount of time by which transactions miss their deadline. A lower average tardy time indicates that even when deadlines are missed, they are not missed by a large margin. It is relevant when late results still have some value.
  - Mathematical Formula: $ \text{Average Tardy Time} = \frac{\sum_{i=1}^{N_{miss}} (\text{Completion Time}_i - \text{Deadline}_i)}{N_{miss}} $
  - Symbol Explanation:
    - $N_{miss}$: Total number of transactions that missed their deadline.
    - $\text{Completion Time}_i$: The time at which transaction $i$ finished execution.
    - $\text{Deadline}_i$: The assigned deadline for transaction $i$.
- Throughput:
  - Conceptual Definition: This traditional database metric measures the efficiency of the system in processing and successfully completing transactions. Higher throughput generally means more work is being done. In real-time systems, it must be considered alongside deadline adherence.
  - Mathematical Formula: $ \text{Throughput} = \frac{\text{Number of committed transactions}}{\text{Total simulation time}} $
  - Symbol Explanation:
    - Number of committed transactions: The count of transactions that successfully completed and had their changes made permanent.
    - Total simulation time: The total duration for which the simulation was run.
- Average Response Time:
  - Conceptual Definition: This metric quantifies the average total time a transaction spends in the system from its arrival until its successful completion. It includes waiting for resources and data as well as execution time.
  - Mathematical Formula: $ \text{Average Response Time} = \frac{\sum_{i=1}^{N_{commit}} (\text{Completion Time}_i - \text{Arrival Time}_i)}{N_{commit}} $
  - Symbol Explanation:
    - $N_{commit}$: Total number of committed transactions.
    - $\text{Completion Time}_i$: The time at which transaction $i$ finished execution.
    - $\text{Arrival Time}_i$: The time at which transaction $i$ entered the system.
- Restart Count:
  - Conceptual Definition: This metric directly reflects the overhead caused by conflict resolution, particularly in optimistic concurrency control, where aborts and restarts are the primary mechanism. A high restart count indicates significant wasted work.
  - Mathematical Formula: $ \text{Restart Count} = \frac{\text{Total number of transaction restarts}}{\text{Total number of committed transactions}} $
  - Symbol Explanation:
    - Total number of transaction restarts: The cumulative count of all times transactions were aborted and re-initiated.
    - Total number of committed transactions: The count of transactions that successfully completed.
- Data Blocking Time:
  - Conceptual Definition: This metric is specific to locking-based concurrency control and quantifies the average amount of time a transaction spends stalled due to data contention (waiting for a lock). It directly impacts response time and can indicate priority inversion issues.
  - Mathematical Formula: (As per common definition; not explicitly given in the paper) $ \text{Data Blocking Time} = \frac{\sum_{i=1}^{N_{blocked}} \text{Wait Time}_i}{N_{blocked}} $
  - Symbol Explanation:
    - $N_{blocked}$: Number of times a transaction was blocked.
    - $\text{Wait Time}_i$: The duration for which a transaction was blocked during the $i$-th blocking instance.
5.3. Baselines
The paper compares its proposed algorithms against several baseline concurrency control (CC) algorithms:
- For general CC performance (Chapter 2):
  - Two-Phase Locking with High Priority (2PL-HP): A traditional locking-based protocol that incorporates priority for conflict resolution.
  - Optimistic Concurrency Control with High Priority (OCC-HP): A basic optimistic protocol that uses priority for validation conflicts.
- For novel optimistic CC (Chapter 3):
  - OCC-HP: The standard optimistic protocol with priority is used as a baseline to demonstrate the improvements of DASO-OCC (Dynamic Adjustment of Serialization Order Optimistic Concurrency Control).
- For deadline-cognizant conflict resolution (Chapter 4):
  - Existing priority-based conflict resolution schemes for optimistic concurrency control are used as baselines, including:
    - No Sacrifice (NS)
    - Always Sacrifice (AS)
    - Conservative Sacrifice (CS)
    - Unavoidable Sacrifice (US)
    - Adaptive Sacrifice (ADS)
  - These baselines represent different strategies for aborting transactions during validation based on their priorities and, in some cases, estimated slack. The proposed Feasible Sacrifice (FS) is compared against them.

These baselines are representative as they cover both major concurrency control paradigms (locking and optimistic) and include several established priority-cognizant variants for real-time systems.
6. Results & Analysis
The results are presented across different experimental setups, varying deadline types (soft vs. firm), resource availability (finite vs. infinite), and workload characteristics (arrival rate, write probability, deadline factor).
6.1. Core Results Analysis
6.1.1. Experiment 1: Soft Deadline and Finite Resources (Section 2.5.1)
- Figure 2.3 Miss Percentage, Finite Resource, Soft Deadline:

  

  This figure plots the Miss Percentage of the 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft-deadline and finite-resource conditions.

  - Analysis: At low arrival rates (low contention), both 2PL-HP and OCC-HP show similar, low miss percentages. As the arrival rate increases, OCC-HP generally shows a lower miss percentage than 2PL-HP. This indicates that OCC-HP is more effective at meeting deadlines under moderate to high load in soft-deadline environments with finite resources, likely because OCC's non-blocking read phase reduces priority inversion compared to 2PL's blocking.

- Figure 2.4 Average Tardy Time, Finite Resource, Soft Deadline:

  

  This figure plots the Average Tardy Time of the 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft-deadline and finite-resource conditions.

  - Analysis: As with miss percentage, OCC-HP generally exhibits lower average tardy time than 2PL-HP at higher arrival rates. This reinforces the finding that OCC-HP not only misses fewer deadlines but also misses them by a smaller margin when it does.

- Figure 2.5 Throughput, Finite Resource, Soft Deadline:

  

  This figure plots the Throughput of the 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft-deadline and finite-resource conditions.

  - Analysis: OCC-HP consistently achieves higher throughput than 2PL-HP, especially at higher arrival rates. This suggests that the non-blocking nature of OCC allows more transactions to progress, leading to a higher rate of completed transactions.

- Figure 2.6 Average Response Time, Finite Resource, Soft Deadline:

  

  This figure plots the Average Response Time of the 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft-deadline and finite-resource conditions.

  - Analysis: OCC-HP generally has lower average response time than 2PL-HP. The ability to process transactions without waiting for locks during the read phase in OCC contributes to faster completion times on average.
6.1.2. Experiment 2: Soft Deadline and Infinite Resources (Section 2.5.2)
-
Figure 2.7 Miss Percentage, Infinite Resource, Soft Deadline:

This figure plots the
Miss Percentagefor2PL-HPandOCC-HPalgorithms againstTransaction Arrival Rateundersoft deadlineandinfinite resourceconditions.- Analysis: With
infinite resources(eliminatingresource contention),OCC-HPshows a more pronounced advantage over2PL-HPin terms ofmiss percentage. This highlights thatdata contention(handled byCCmechanisms) is the primary factor differentiating their performance, andOCC-HPhandles it more effectively in this scenario.
- Analysis: With
-
Figure 2.8 Average Tardy Time, Infinite Resource, Soft Deadline:

This figure plots the
Average Tardy Timefor2PL-HPandOCC-HPalgorithms againstTransaction Arrival Rateundersoft deadlineandinfinite resourceconditions.- Analysis: Similar to
miss percentage,OCC-HPmaintains a loweraverage tardy timein theinfinite resourcescenario, further emphasizing its efficiency in handlingdata contention.
- Analysis: Similar to
- Figure 2.9 Throughput, Infinite Resource, Soft Deadline: This figure plots the Throughput for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft deadline and infinite resource conditions.
  - Analysis: OCC-HP consistently delivers higher throughput, especially at high arrival rates. This confirms that with resource contention removed, OCC-HP's ability to minimize data blocking and priority inversion allows for more transaction completions.
- Figure 2.10 Data Blocking Time, Infinite Resource, Soft Deadline: This figure plots the Data Blocking Time for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under soft deadline and infinite resource conditions.
  - Analysis: OCC-HP has virtually zero data blocking time (as expected, since it does not use locks in the read phase), whereas 2PL-HP shows significant and increasing data blocking time at higher arrival rates. This directly explains 2PL-HP's poorer performance.
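The blocking and priority-inversion behavior compared in these figures follows from how 2PL-HP resolves lock conflicts. Below is a minimal sketch of the High Priority (HP) rule as it is commonly described; the class and method names are illustrative, not taken from the dissertation, and lock modes (shared vs. exclusive) are omitted for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tid: int
    priority: int  # higher value = higher priority (e.g., an earlier deadline)

@dataclass
class LockTable:
    # item -> list of transactions currently holding a lock on it
    holders: dict = field(default_factory=dict)

    def request(self, item, requester):
        """High Priority rule: a lock request never waits behind
        lower-priority holders."""
        current = self.holders.get(item, [])
        if not current:
            self.holders[item] = [requester]
            return "granted"
        if all(requester.priority > h.priority for h in current):
            # Requester outranks every holder: abort the holders so the
            # high-priority transaction is never blocked (no priority inversion).
            to_abort = list(current)
            self.holders[item] = [requester]
            return ("granted-after-abort", to_abort)
        # Some holder has equal or higher priority: the requester blocks,
        # which is the "data blocking time" measured in this figure.
        return "blocked"
```

Because a higher-priority requester never waits behind lower-priority holders, priority inversion on data is avoided, at the cost of aborting and restarting the holders; lower-priority requesters still block, which OCC-HP avoids entirely during the read phase.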
- Figure 2.11 Miss Percentage, Soft Deadline: This figure compares Miss Percentage between the finite and infinite resource scenarios for both 2PL-HP and OCC-HP under soft deadline conditions.
  - Analysis: This figure aggregates the insights, showing that OCC-HP generally performs better than 2PL-HP in soft-deadline systems, both with and without resource contention. The performance difference is more pronounced when resource contention is eliminated, isolating the impact of data contention, on which OCC-HP excels.
6.1.3. Experiment 3: Firm Deadline and Finite Resources (Section 2.5.3)
- Figure 2.12 Miss Percentage, Finite Resource, Firm Deadline: This figure plots the Miss Percentage for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under firm deadline and finite resource conditions.
  - Analysis: For firm deadlines (where tardy transactions are aborted), OCC-HP still generally outperforms 2PL-HP in terms of miss percentage at higher arrival rates. However, the miss percentages are significantly higher overall compared to soft deadlines, as any missed deadline counts as an aborted transaction.
- Figure 2.13 Restart Count, Finite Resource, Firm Deadline: This figure plots the Restart Count for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under firm deadline and finite resource conditions.
  - Analysis: OCC-HP shows a higher restart count than 2PL-HP at higher arrival rates. This is a critical trade-off: OCC avoids data blocking but pays the price in restarts. While OCC-HP has a lower miss percentage, its higher restart count indicates wasted work, which is a target for improvement in subsequent chapters.
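This trade-off can be made concrete: a policy can post a lower miss percentage while still discarding more work through restarts. A small illustrative calculation, with helper names of my own choosing rather than the dissertation's:

```python
def miss_percentage(completed_by_deadline: int, total_arrived: int) -> float:
    """Fraction of arrived transactions that failed to commit by their
    deadline.  Under firm deadlines every miss is also an aborted
    (discarded) execution."""
    missed = total_arrived - completed_by_deadline
    return 100.0 * missed / total_arrived

def wasted_work(restart_count: int, avg_work_before_restart: float) -> float:
    """Rough proxy for the work thrown away by restarts: OCC-HP's higher
    restart count translates into more wasted work even when its miss
    percentage is lower."""
    return restart_count * avg_work_before_restart

# Example: 1000 arrivals, 850 met their deadline -> miss percentage is 15.0
```

Reducing this wasted work without giving up OCC's low miss percentage is exactly what DASO-OCC targets in the next set of figures.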
6.1.4. Experiment 4: Firm Deadline and Infinite Resources (Section 2.5.4)
- Figure 2.14 Miss Percentage, Infinite Resource, Firm Deadline: This figure plots the Miss Percentage for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under firm deadline and infinite resource conditions.
  - Analysis: With infinite resources, OCC-HP maintains its advantage in miss percentage over 2PL-HP for firm-deadline systems, similar to the soft-deadline case. This again points to data contention as the key differentiator.
- Figure 2.15 Restart Count, Infinite Resource, Firm Deadline: This figure plots the Restart Count for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under firm deadline and infinite resource conditions.
  - Analysis: OCC-HP still incurs a significantly higher restart count than 2PL-HP when resource contention is removed. This confirms that the high restart rate of OCC stems primarily from data contention and its validation strategy, not from resource contention.
- Figure 2.16 Data Blocking Time, Infinite Resource, Firm Deadline: This figure plots the Data Blocking Time for 2PL-HP and OCC-HP algorithms against Transaction Arrival Rate under firm deadline and infinite resource conditions.
  - Analysis: As expected, OCC-HP shows minimal data blocking time, while 2PL-HP exhibits substantial data blocking, reinforcing the inherent differences in their conflict resolution mechanisms.
- Figure 2.17 Miss Percentage, Firm Deadline: This figure compares Miss Percentage between the finite and infinite resource scenarios for both 2PL-HP and OCC-HP under firm deadline conditions.
  - Analysis: Similar to soft deadlines, OCC-HP generally performs better than 2PL-HP for firm deadlines, with a more pronounced difference under infinite resources. However, the firm deadline context highlights the restart cost of OCC, motivating the need for DASO-OCC and Feasible Sacrifice.
6.1.5. Dynamic Adjustment of Serialization Order (Chapter 3)
- Figure 3.1 Miss Percentage, Finite Resource, Write Probability = 0.25: This figure compares Miss Percentage for OCC-HP and DASO-OCC under finite resources and Write Probability = 0.25.
  - Analysis: DASO-OCC consistently achieves a lower miss percentage than OCC-HP, especially at higher arrival rates. This demonstrates the effectiveness of dynamically adjusting the serialization order to reduce unnecessary restarts and allow more transactions to meet their deadlines.
- Figure 3.2 Restart Count, Finite Resource: This figure compares Restart Count for OCC-HP and DASO-OCC under finite resources.
  - Analysis: DASO-OCC significantly reduces the restart count compared to OCC-HP. This is the direct mechanism by which DASO-OCC improves miss percentage: by making more intelligent decisions during validation and allowing transactions to yield or be reordered, it avoids aborting transactions that could still complete.
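The "more intelligent decisions during validation" work roughly as follows: at validation time, DASO-OCC classifies each conflicting active transaction by the direction in which it must be serialized relative to the validator, and only restarts those whose required positions contradict each other. A simplified sketch of the idea, with illustrative data structures; the dissertation's actual algorithm is more involved:

```python
def validate_daso(validator_readset: set, validator_writeset: set,
                  active: list) -> list:
    """Forward-validation sketch with dynamic adjustment of serialization order.

    For each concurrently active transaction U:
      - If U only READ items the validator wrote, U saw the old values and
        can be serialized BEFORE the validator: adjust the order, no restart.
      - If U only WROTE items the validator read or wrote, U's uncommitted
        writes were invisible to the validator, so U is serialized AFTER it.
      - If both directions are required, the orders contradict and U restarts.
    """
    to_restart = []
    for u in active:
        must_precede = bool(u["readset"] & validator_writeset)
        must_follow = bool((u["writeset"] & validator_readset) or
                           (u["writeset"] & validator_writeset))
        if must_precede and must_follow:
            to_restart.append(u["tid"])  # irreconcilable conflict
        # otherwise: adjust U's position in the serialization order, no restart
    return to_restart
```

A plain OCC-HP-style validator would treat every one of these conflicts as grounds for a restart; the first two cases are exactly the "unnecessary restarts" that DASO-OCC eliminates.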
- Figure 3.3 Miss Percentage, Finite Resource, Write Probability = 0.75: This figure compares Miss Percentage for OCC-HP and DASO-OCC under finite resources and Write Probability = 0.75 (high contention).
  - Analysis: With a very high write probability (meaning high data contention), DASO-OCC still maintains a lower miss percentage than OCC-HP, though the absolute miss percentages are higher for both algorithms due to increased contention. This shows the robustness of DASO-OCC even in demanding environments.
- Figure 3.4 Restart Count, Finite Resource, Write Probability = 0.75: This figure compares Restart Count for OCC-HP and DASO-OCC under finite resources and Write Probability = 0.75.
  - Analysis: The reduction in restart count by DASO-OCC is even more dramatic under a high write probability. This suggests that unnecessary restarts are a bigger problem in high-contention scenarios, and that DASO-OCC's mechanism is particularly effective there.
- Figure 3.5 Miss Percentage, Infinite Resource, Write Probability = 0.5: This figure compares Miss Percentage for OCC-HP and DASO-OCC under infinite resources and Write Probability = 0.5.
  - Analysis: The performance gains of DASO-OCC over OCC-HP are maintained under infinite resources, indicating that its improvements are primarily related to data contention and concurrency control logic, rather than resource contention.
- Figure 3.6 Restart Count, Infinite Resource, Write Probability = 0.5: This figure compares Restart Count for OCC-HP and DASO-OCC under infinite resources and Write Probability = 0.5.
  - Analysis: DASO-OCC significantly reduces restart counts even when resource contention is not a factor, further confirming its effectiveness in optimizing concurrency control decisions.
6.1.6. Design of Real-Time Optimistic Concurrency Control (Chapter 4)
This section evaluates the Feasible Sacrifice (FS) policy against other deadline-cognizant conflict resolution schemes.
- Figure 4.2 Miss Percentage, Finite Resource, Write Probability = 0.25: This figure compares Miss Percentage for various OCC conflict resolution policies (NS, AS, CS, US, ADS, FS) under finite resources and Write Probability = 0.25.
  - Analysis: The proposed Feasible Sacrifice (FS) policy consistently achieves the lowest miss percentage across all arrival rates, with Adaptive Sacrifice (ADS) second best. The more aggressive policy (AS) initially performs well but degrades quickly, while the simpler policies (NS, CS, US) perform worse. This demonstrates that intelligent, predictive sacrifice decisions based on transaction feasibility are superior.
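The "feasibility" behind these results can be sketched as a slack test at validation time: sacrifice the validating transaction only when a conflicting higher-priority transaction can actually still finish by its deadline, since otherwise the sacrifice is wasted work. The field names and exact decision rule below are my simplification, not the dissertation's full FS policy:

```python
def is_feasible(deadline: float, remaining_exec: float, now: float) -> bool:
    """A transaction is feasible if its remaining work fits in its slack."""
    return (deadline - now) >= remaining_exec

def resolve_conflict(validator: dict, conflicting: list, now: float) -> str:
    """Sacrifice the validating transaction only if some conflicting
    transaction both outranks it AND can still meet its deadline;
    otherwise commit the validator and restart the conflicting ones."""
    for u in conflicting:
        if (u["priority"] > validator["priority"]
                and is_feasible(u["deadline"], u["remaining_exec"], now)):
            return "sacrifice-validator"
    return "commit-validator"
```

A purely priority-based policy would sacrifice the validator whenever any conflicting transaction outranks it, even one doomed to miss its deadline anyway; the feasibility check is what lets FS avoid those pointless sacrifices.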
- Figure 4.3 Miss Percentage, Infinite Resource, Write Probability = 0.5: This figure compares Miss Percentage for various OCC conflict resolution policies under infinite resources and Write Probability = 0.5.
  - Analysis: Even with infinite resources, FS maintains its lead, indicating robust performance in handling data contention and sound conflict resolution decisions independent of resource contention. ADS remains second best.
- Figure 4.4 Miss Percentage, Finite Resource, Write Probability = 0.75: This figure compares Miss Percentage for various OCC conflict resolution policies under finite resources and Write Probability = 0.75 (high contention).
  - Analysis: Under a high write probability, FS's advantage becomes even more pronounced, especially at high arrival rates. This highlights its effectiveness in environments with significant data conflicts, where making correct sacrifice decisions is crucial to minimizing wasted work and meeting deadlines.
- Figure 4.5 Miss Percentage, Infinite Resource, Write Probability = 1.0: This figure compares Miss Percentage for various OCC conflict resolution policies under infinite resources and Write Probability = 1.0 (extremely high contention).
  - Analysis: With all operations being writes and resources infinite, FS continues to outperform the other policies, though the absolute miss percentages are very high due to extreme data contention. This further validates FS's robust decision-making under severe conflict conditions.
- Figure 4.6 Miss Percentage, Finite Resource, Arrival Rate = 20 tran/sec: This figure compares Miss Percentage for various OCC conflict resolution policies against Write Probability under finite resources and an arrival rate of 20 tran/sec.
  - Analysis: At a fixed, moderate arrival rate, FS consistently shows the lowest miss percentage across varying write probabilities. Its performance degrades less severely than the other policies as write probability increases.
- Figure 4.7 Miss Percentage, Infinite Resource, Arrival Rate = 40 tran/sec: This figure compares Miss Percentage for various OCC conflict resolution policies against Write Probability under infinite resources and an arrival rate of 40 tran/sec.
  - Analysis: Similar to Figure 4.6, FS maintains its superior performance across write probabilities even with infinite resources, confirming its effectiveness in optimizing conflict resolution decisions.
6.2. Data Presentation (Tables)
The paper provides its parameters in tables, which were transcribed in the Experimental Setup section. No other result tables are present in the provided content. All results are presented graphically.
6.3. Ablation Studies / Parameter Analysis
While the paper doesn't explicitly frame experiments as "ablation studies," it implicitly performs several types of parameter analysis:
- Resource Availability: Comparing finite vs. infinite resources (Experiments 1 vs. 2, and 3 vs. 4) helps isolate the impact of data contention versus resource contention on CC algorithm performance. For example, the performance advantage of OCC-HP over 2PL-HP is more pronounced with infinite resources, suggesting its strength lies in managing data conflicts.
- Deadline Type: Comparing soft vs. firm deadlines (Experiments 1 & 2 vs. 3 & 4) shows how the system behaves when tardy transactions are tolerated versus discarded. Firm deadline scenarios typically lead to higher miss percentages but highlight the criticality of restart count for OCC.
- Contention Level: Varying Transaction Arrival Rate and Write Probability across experiments allows algorithm robustness to be evaluated under different data contention levels. For instance, DASO-OCC and Feasible Sacrifice demonstrate greater improvements under higher contention, indicating their value in stress-testing scenarios.
- Deadline Tightness: The Deadline Factor is a parameter (Table 2.2), although the results figures primarily vary arrival rate and write probability. This parameter would show how algorithm performance changes when deadlines are very tight versus loose.

These variations allow the author to understand the conditions under which each algorithm performs best and to highlight the specific advantages of the proposed methods (DASO-OCC and Feasible Sacrifice) in optimizing for real-time constraints under varying resource and data contention scenarios. The consistent outperformance of the proposed algorithms across these diverse conditions strongly validates their effectiveness.
7. Conclusion & Reflections
7.1. Conclusion Summary
This dissertation provides a comprehensive investigation into concurrency control algorithms for real-time database systems (RTDBSs), focusing on soft and firm deadline environments. The key findings and contributions are:
- Systematic Performance Analysis: The research established a robust RTDBS simulation model and conducted thorough performance comparisons of locking (2PL-HP) and optimistic (OCC-HP) concurrency control protocols. It consistently found that OCC-HP generally outperforms 2PL-HP in terms of miss percentage, average tardy time, throughput, and average response time, especially under moderate to high contention and when resource contention is not the dominant factor. This validated the choice of optimistic concurrency control as a promising direction for RTDBSs.
- Novel Optimistic Concurrency Control Algorithm (DASO-OCC): The dissertation introduced Dynamic Adjustment of Serialization Order Optimistic Concurrency Control (DASO-OCC). This algorithm reduces unnecessary restarts in optimistic protocols by allowing dynamic adjustment of the serialization order during the validation phase. Simulation results demonstrated that DASO-OCC significantly lowers miss percentage and restart count compared to OCC-HP across various operating conditions, making it a more efficient base mechanism for RTDBSs.
- Advanced Deadline-Cognizant Conflict Resolution (Feasible Sacrifice): Building on the strengths of optimistic protocols, the research proposed Feasible Sacrifice (FS), a new priority-cognizant conflict resolution scheme. FS decides whether to sacrifice (abort) transactions based not just on static priorities but on the feasibility of transactions meeting their deadlines. FS was shown to consistently achieve the lowest miss percentage compared to existing priority-based conflict resolution schemes (e.g., NS, AS, CS, US, ADS), particularly under high data contention.
- Exploration of Semantic-Based Concurrency Control: The dissertation also explored semantic-based concurrency control within an object-oriented data model for RTDBSs. By defining compatibility relations based on method semantics and affected-sets, it laid the groundwork for achieving higher concurrency than traditional serializability-based methods, further enhancing the ability to meet timing constraints.

Overall, the dissertation identifies limitations in existing concurrency control approaches for RTDBSs, proposes innovative, deadline-aware optimistic concurrency control algorithms, and provides a strong foundation for future research in semantic-based approaches.
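The semantic-based concurrency control idea can be illustrated with a tiny compatibility relation: two method invocations on the same object may run concurrently when the application's semantics say their order does not matter, even though read/write-based CC would serialize them. The account methods and table below are a hypothetical illustration, not the actual relations from Chapter 5:

```python
# Hypothetical bank-account example: deposit and withdraw both read and
# write the balance, so lock- or timestamp-based CC would serialize them.
# Semantically, two deposits commute (the final balance is the same in
# either order), so they are declared compatible.
COMPATIBLE = {
    ("deposit", "deposit"): True,       # order does not affect the final state
    ("deposit", "withdraw"): False,     # a withdraw may fail depending on order
    ("withdraw", "withdraw"): False,
    ("get_balance", "get_balance"): True,
    ("get_balance", "deposit"): False,  # the observed balance depends on order
}

def compatible(m1: str, m2: str) -> bool:
    """Look up the unordered method pair in the compatibility relation;
    unknown pairs conservatively default to incompatible."""
    return COMPATIBLE.get((m1, m2), COMPATIBLE.get((m2, m1), False))
```

A scheduler consulting such a relation can admit interleavings (two concurrent deposits) that serializability-based CC would block or restart, which is the source of the extra concurrency the dissertation anticipates.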
7.2. Limitations & Future Work
The author explicitly identifies future research directions at the end of the dissertation:
- More Realistic Performance Evaluation Models: The current simulation model uses a generic RTDBS. Future work could incorporate more realistic aspects such as:
  - Modeling hot spots in the database (non-uniform data access patterns).
  - Considering varying transaction sizes and access patterns.
  - Integrating main-memory database architectures, which are becoming prevalent in real-time systems.
- Further Refinements to Optimistic Protocols:
  - Dynamic Adaptation: Investigating how the parameters and behavior of DASO-OCC and Feasible Sacrifice can be dynamically tuned at runtime based on current system load and contention levels.
  - Hybrid Approaches: Exploring hybrid concurrency control protocols that combine elements of locking and optimistic approaches to leverage the strengths of both, especially in mixed workload environments (e.g., some transactions requiring strict guarantees, others allowing more flexibility).
- Advanced Semantic-Based Concurrency Control:
  - Detailed Implementation: Developing a more detailed implementation and evaluation of the semantic-based concurrency control mechanism proposed in Chapter 5. This would involve defining a comprehensive set of compatibility relations for various object methods and rigorously testing their impact on performance and correctness.
  - Integration with Real-Time Scheduling: Examining how semantic-based concurrency control interacts with real-time scheduling policies at a deeper level (e.g., priority assignment, resource allocation).
- Handling Hard Real-Time Transactions: Despite the dissertation's focus on soft and firm deadlines, the author acknowledges that supporting hard deadlines remains a challenge. Future work could explore restricted environments or specialized approaches (e.g., static analysis, resource reservation) to provide guarantees for critical hard real-time transactions within a database context.
7.3. Personal Insights & Critique
This dissertation offers a rigorous and insightful deep dive into a critical area of real-time systems.
- Strengths and Inspirations:
  - Rigorous Foundation: The initial comprehensive performance study (Chapter 2) of 2PL-HP and OCC-HP is a strong foundation. Validating the simulation model by reconfirming previous results adds credibility to the subsequent novel contributions. This systematic approach is a valuable lesson for any research.
  - Practical Relevance of Soft/Firm Deadlines: The pragmatic focus on soft and firm deadlines is highly relevant to real-world applications, acknowledging the inherent difficulty of providing strictly hard real-time guarantees in dynamic database environments. This grounds the research in practical applicability.
  - Targeted Innovation in OCC: The identification of unnecessary restarts as a key problem in OCC, and the subsequent development of DASO-OCC and Feasible Sacrifice, demonstrate keen analytical ability. These solutions are not just incremental changes but address core inefficiencies in existing optimistic protocols by introducing more intelligent, deadline-aware decision-making. The Feasible Sacrifice policy, in particular, showcases a sophisticated understanding of real-time constraints by weighing not just static priority but also the actual likelihood of meeting a deadline.
  - Forward-Looking Semantic Approaches: The inclusion of semantic-based concurrency control indicates a broader vision beyond traditional approaches. This concept is highly valuable for domains where application-specific knowledge can relax strict serializability without sacrificing correctness, potentially unlocking significantly higher concurrency. The idea could be applied in various modern distributed systems, not just RTDBSs.
- Potential Issues, Unverified Assumptions, or Areas for Improvement:
  - Predicting Execution Time: The Feasible Sacrifice algorithm relies heavily on estimating remaining execution time and slack. In real-world, complex database systems, accurately predicting execution time can be extremely challenging due to variable I/O latencies, CPU caching, operating system interference, and dynamic data access patterns. The simulation assumes relatively predictable execution times; the robustness of FS in environments with highly unpredictable execution times would be an important area for further investigation.
  - Overhead of Feasibility Checks: While FS makes smarter decisions, the additional computation required for slack calculation and feasibility checks during the validation phase could introduce its own overhead. The dissertation shows performance benefits, implying this overhead is outweighed, but its magnitude in extremely high-contention or very tight deadline scenarios would be worth detailed analysis.
  - Homogeneous Workload Assumption: The simulation often assumes a uniform data access pattern and a fixed transaction size. Real-world workloads are often heterogeneous, with hot spots and varying transaction complexities; the performance of these algorithms might differ significantly under such conditions.
  - Interplay with OS Scheduling: The simulation abstracts away many operating system scheduling details. The interaction between the database's priority-driven concurrency control and the underlying operating system scheduler could introduce complexities (e.g., a high-priority database transaction being descheduled by the OS).
  - Generalizability of Semantic-Based CC: While promising, the semantic-based concurrency control remains at a conceptual level in this dissertation. Its practical implementation requires significant effort in defining affected-sets and compatibility relations for diverse application methods, which can be application-specific and complex to manage.

This dissertation represents a substantial contribution to the understanding and advancement of concurrency control in real-time database systems, particularly for the widely applicable soft and firm deadline paradigms. The proposed DASO-OCC and Feasible Sacrifice algorithms offer practical improvements, and the exploration of semantic-based approaches points towards exciting future directions for achieving even higher performance in complex real-time environments.