Quick Study Revision Points: Deadlocks and Concurrency Control

Deadlocks:

  1. Definition: A deadlock occurs in a concurrent system when two or more processes are unable to proceed because each is waiting for a resource held by another process.

  2. Conditions for Deadlock (all four must hold simultaneously; see the sketch after this list):

    • Mutual Exclusion: At least one resource is non-shareable, meaning only one process can use it at a time.

    • Hold and Wait: Processes hold resources while waiting for additional resources.

    • No Preemption: Resources cannot be forcibly taken away from a process.

    • Circular Wait: A circular chain of two or more processes exists, where each process is waiting for a resource held by the next process in the chain.
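
  A minimal Python sketch of all four conditions arising together (the thread and lock names are illustrative, not a prescribed pattern): two threads take the same pair of locks in opposite order, so each ends up holding one lock while waiting for the other.

      import threading, time

      lock_a = threading.Lock()   # non-shareable resources: mutual exclusion
      lock_b = threading.Lock()

      def worker(first, second, name):
          with first:                        # hold one resource ...
              time.sleep(0.1)
              # ... then wait for another (hold and wait); a held lock cannot be
              # preempted, and the two threads end up waiting on each other
              # (circular wait).
              if not second.acquire(timeout=1):
                  print(f"{name}: probable deadlock, giving up")
                  return
              second.release()
              print(f"{name}: finished")

      # The two threads request the locks in opposite order, which allows a cycle.
      t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
      t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
      t1.start(); t2.start(); t1.join(); t2.join()

  With a plain blocking acquire instead of the timeout, both threads would wait forever, which is exactly the situation the four conditions describe.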

  3. Detecting and Avoiding Deadlocks:

    • Resource Allocation Graph (RAG): A directed graph with processes and resources as nodes; a cycle in the graph signals a possible deadlock, and a certain one when every resource has only a single instance (see the cycle-detection sketch after this list).

    • Deadlock Avoidance: Granting a resource request only when the resulting allocation state is known to be safe, so the system never reaches a state from which deadlock is unavoidable.

    • Deadlock Detection and Recovery: Periodically checking for deadlocks and recovering by terminating or preempting processes involved in the deadlock.
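
  The RAG bullet above relies on finding cycles; here is a minimal sketch of cycle detection on a wait-for graph (the process names and the example graph are illustrative). An edge P1 -> P2 means "P1 is waiting for a resource held by P2".

      def has_cycle(wait_for):
          """Depth-first search for a cycle in a directed graph given as a dict."""
          visiting, done = set(), set()

          def dfs(node):
              if node in done:
                  return False
              if node in visiting:
                  return True          # back edge: a cycle, i.e. a circular wait
              visiting.add(node)
              for neighbour in wait_for.get(node, []):
                  if dfs(neighbour):
                      return True
              visiting.remove(node)
              done.add(node)
              return False

          return any(dfs(node) for node in wait_for)

      wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # P1 -> P2 -> P3 -> P1
      print(has_cycle(wait_for))   # True: the chain closes on itself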

  4. Deadlock Avoidance vs. Deadlock Detection:

    • Deadlock Avoidance prevents deadlocks from ever forming: the system knows each process's maximum resource needs in advance and grants a request only if the resulting state is still safe. The Banker's algorithm is the classic example (a minimal safety check is sketched after this list).

    • Deadlock Detection identifies deadlocks after they have occurred, typically by searching a wait-for graph or resource allocation graph for cycles, and then hands the problem to a recovery step.
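
  A compact sketch of the safety check at the heart of the Banker's algorithm (the example matrices are the classic textbook data, used purely for illustration): a request is granted only if the state it would produce still passes this check.

      def is_safe(available, allocation, need):
          """Return True if every process can still run to completion in some order."""
          work = available[:]
          finished = [False] * len(allocation)
          progress = True
          while progress:
              progress = False
              for i, done in enumerate(finished):
                  # A process can finish if its remaining need fits in `work`.
                  if not done and all(n <= w for n, w in zip(need[i], work)):
                      # When it finishes, it returns everything it holds.
                      work = [w + a for w, a in zip(work, allocation[i])]
                      finished[i] = True
                      progress = True
          return all(finished)

      available  = [3, 3, 2]                      # free instances of 3 resource types
      allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
      need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
      print(is_safe(available, allocation, need))  # True: P1, P3, P4, P0, P2 can finish in turn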

  5. Deadlock Handling:

    • Deadlock Ignorance: Ignoring the possibility of deadlocks entirely (the "ostrich" approach) and accepting the occasional hang in exchange for simplicity and performance.

    • Deadlock Prevention: Ensuring that at least one of the four necessary conditions can never hold, for example by imposing a global order on lock acquisition to rule out circular wait (see the lock-ordering sketch after this list).

    • Deadlock Detection and Recovery: Detecting deadlocks when they occur and resolving them by preempting resources or terminating one of the processes involved.

    • Deadlock Avoidance: Granting only those resource requests that keep the system in a safe state, as described in point 4.

    • Deadlock Tolerance: Accepting that deadlocks will occasionally occur and resolving them only after they happen.
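
  A sketch of prevention by breaking circular wait, as mentioned in the Deadlock Prevention bullet above (the ordering key and names are illustrative choices): every thread acquires its locks in one global order, so no cycle of waiters can ever form.

      import threading

      def acquire_in_order(*locks):
          """Acquire all locks in a fixed global order (here: by object id) and return them."""
          ordered = sorted(locks, key=id)
          for lock in ordered:
              lock.acquire()
          return ordered

      def release_all(locks):
          for lock in reversed(locks):
              lock.release()

      lock_a, lock_b = threading.Lock(), threading.Lock()

      def task(name):
          held = acquire_in_order(lock_a, lock_b)   # every thread uses the same order
          try:
              print(f"{name}: holds both locks; circular wait is impossible")
          finally:
              release_all(held)

      threads = [threading.Thread(target=task, args=(n,)) for n in ("T1", "T2")]
      for t in threads: t.start()
      for t in threads: t.join()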

Concurrency Control:

  1. Definition: Concurrency control refers to the techniques and mechanisms used to manage concurrent access to shared resources and maintain data consistency in a multi-user or multi-threaded environment.

  2. Concurrency Problems:

    • Lost Update: Two transactions read the same data item, each computes a new value from what it read, and the second write silently overwrites the first, losing that update (demonstrated in the sketch after this list).

    • Dirty Read: A transaction reads data written by another transaction that has not yet committed; if that transaction later rolls back, the value that was read never logically existed.

    • Inconsistent Analysis: A transaction computing a summary over several rows (for example, a running total) sees some rows before and some after another transaction's updates, so its result reflects no consistent state of the data.

    • Phantom Read: A transaction reads the set of rows matching a condition, another transaction inserts (or deletes) rows satisfying that condition, and a repeated read returns a different set of rows.
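
  A minimal demonstration of the lost update problem (the balance variable and amounts are illustrative): two threads perform an unsynchronized read-modify-write, and one update overwrites the other.

      import threading, time

      balance = 100

      def deposit(amount):
          global balance
          read = balance           # read ...
          time.sleep(0.01)         # ... the other thread updates in the meantime ...
          balance = read + amount  # ... and this write overwrites that update

      t1 = threading.Thread(target=deposit, args=(50,))
      t2 = threading.Thread(target=deposit, args=(30,))
      t1.start(); t2.start(); t1.join(); t2.join()
      print(balance)   # typically 130 or 150 instead of the expected 180: one update was lost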

  3. Concurrency Control Techniques:

    • Locking: Using locks to ensure exclusive access to resources and prevent conflicts between concurrent transactions.

    • Two-Phase Locking (2PL): Each transaction acquires locks in a growing phase and releases them in a shrinking phase, never acquiring a new lock after it has released one; this guarantees conflict-serializable schedules. Strict 2PL additionally holds all locks until commit.

    • Optimistic Concurrency Control: Letting transactions execute without locks and validating at commit time that no conflicting change has occurred, retrying the transaction on conflict (see the version-check sketch after this list).

    • Timestamp Ordering: Assigning unique timestamps to transactions and using them to order operations to maintain serializability.

    • Multiversion Concurrency Control (MVCC): Keeping multiple versions of each data item so that readers see a consistent snapshot without blocking writers (and vice versa); conflicting writes still have to be detected and resolved.

    • Isolation Levels (Read Uncommitted, Read Committed, Repeatable Read, Serializable): Standard levels that trade consistency guarantees against concurrency; each stronger level rules out more of the anomalies listed in point 2.
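
  A sketch of the optimistic approach using a version number (the in-memory "row", its fields, and the helper names are illustrative, not a real database API): a transaction reads the row and its version, works without holding locks, and at commit time applies its write only if the version has not changed.

      import threading

      row = {"value": 100, "version": 1}
      commit_lock = threading.Lock()   # protects only the short validate-and-write step

      def read_row():
          with commit_lock:
              return row["value"], row["version"]

      def commit(new_value, read_version):
          with commit_lock:
              if row["version"] != read_version:
                  return False                 # conflict detected: caller retries
              row["value"] = new_value
              row["version"] += 1
              return True

      # Usage: read, compute without locks, validate at commit, retry on conflict.
      value, version = read_row()
      while not commit(value + 10, version):
          value, version = read_row()
      print(row)   # {'value': 110, 'version': 2}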

  4. Deadlock in Concurrency Control:

    • Lock-based concurrency control can itself deadlock, for example when two transactions each hold a lock the other is waiting for.

    • Deadlock prevention, detection, and recovery mechanisms should therefore be in place; many database systems detect cycles in the lock wait-for graph or simply time out a waiting transaction and abort it as the victim (a timeout-based sketch follows).
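
  A sketch of one simple recovery strategy under lock-based concurrency control (the timeout and retry policy are arbitrary illustrative choices): give up after a timeout, release everything acquired so far, back off, and retry, so no transaction waits forever.

      import threading, random, time

      def run_with_locks(locks, work, timeout=1.0, retries=3):
          for _ in range(retries):
              acquired = []
              for lock in locks:
                  if lock.acquire(timeout=timeout):
                      acquired.append(lock)
                  else:
                      break                        # probable deadlock: stop acquiring
              if len(acquired) == len(locks):
                  try:
                      return work()                # got everything: do the work
                  finally:
                      for lock in reversed(acquired):
                          lock.release()
              for lock in reversed(acquired):      # release partial locks ...
                  lock.release()
              time.sleep(random.uniform(0, 0.1))   # ... back off, then retry
          raise TimeoutError("could not acquire all locks; giving up")

      lock_a, lock_b = threading.Lock(), threading.Lock()
      print(run_with_locks([lock_a, lock_b], lambda: "committed"))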