(Information Science Expert) Lecture on Databases
- Atomicity
- Atomicity ensures that operations are either fully completed or not performed at all.
- Without atomicity, a failure partway through an operation leaves partial results behind, which are difficult to detect and recover from.
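The all-or-nothing behavior above can be sketched with Python's built-in sqlite3 module, whose connection context manager commits on success and rolls back on an exception. The table and account names are illustrative, not from the lecture.

```python
import sqlite3

# In-memory database with a toy accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'"
        )
        # Simulate a crash before the matching deposit to 'bob' can happen.
        raise RuntimeError("simulated crash mid-transaction")
except RuntimeError:
    pass

# The withdrawal was rolled back: alice still has her full balance.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)
```

Because the transaction aborted, neither the withdrawal nor the deposit took effect: the database never exposes a half-finished state.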
- Consistency
- Consistency ensures that the data remains in a valid state (all integrity constraints hold) after a transaction completes.
- Isolation
- Isolation guarantees that concurrent execution produces the same result as serial execution.
- The core problem is simultaneous access to, or updates of, the same data by multiple transactions.
- How to achieve isolation:
- It would be ideal to know if we are currently operating on the latest data.
- "Pessimistic Concurrency Control":
- Limit the latest version of the data to one (locking).
- Parallel execution is limited, but transactions are less likely to fail (abort).
- Suitable for cases where conflicts are likely to occur.
- "Optimistic Concurrency Control":
- Record a timestamp for each data access/update, and re-check at the end of the transaction to verify that no one else has touched the data in the meantime.
- If someone else has accessed it, the transaction is aborted (fails).
- (To preserve atomicity, all of its changes are rolled back.)
- Suitable for cases where conflicts are unlikely to occur.
- "Multiversion Concurrency Control":
- Alternatively, some DBMSs allow multiple versions of the latest data to coexist, similar to git.
- MVCC (restricts the data to a single latest version only during updates)
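The two strategies above can be contrasted in a minimal sketch: a lock enforces a single latest version (pessimistic), while a version counter stands in for the access timestamp and is validated at commit time (optimistic). All class and method names here are illustrative.

```python
import threading

# --- Pessimistic: take a lock so only one writer touches the data ---
class PessimisticRecord:
    def __init__(self, value):
        self.value = value
        self.lock = threading.Lock()

    def update(self, new_value):
        with self.lock:  # writers block each other; conflicts cannot occur
            self.value = new_value

# --- Optimistic: read freely, validate at commit time ---
class Conflict(Exception):
    pass

class OptimisticRecord:
    def __init__(self, value):
        self.value = value
        self.version = 0  # stands in for the access/update timestamp

    def commit(self, new_value, read_version):
        # Validation phase: abort if someone else committed since we read.
        if self.version != read_version:
            raise Conflict("data changed since it was read; aborting")
        self.value = new_value
        self.version += 1

p = PessimisticRecord(1)
p.update(2)  # always succeeds, but serializes all writers

o = OptimisticRecord(10)
v = o.version        # read phase: remember the version we observed
o.commit(11, v)      # nobody intervened: validation passes, commit succeeds
try:
    o.commit(99, v)  # stale read version: validation fails, transaction aborts
except Conflict as e:
    print("aborted:", e)
```

The trade-off from the notes shows up directly: the pessimistic record never aborts but admits no write parallelism, while the optimistic record pays nothing when there is no conflict and aborts (rolling back) when there is one.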
- Comparison between git and a traditional DBMS
- DBMS: isolation in the sense of the ACID properties
- git
- Allows multiple latest versions
- Attempts automatic merging (manual merging if that fails)
- DBMS (manages only a single latest version)
- Manual merging after a failed automatic merge is not feasible for a DBMS.
- Instead, isolation is guaranteed through serializability.
- Durability
- Durability ensures that committed results persist even in the event of failures such as crashes or power loss.
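A common way to achieve this is write-ahead logging: append a log record and force it to stable storage before acknowledging the commit. The following is a minimal sketch of that idea using `os.fsync`; the file name and log entries are made up for illustration.

```python
import os
import tempfile

# A throwaway log file standing in for the DBMS's write-ahead log.
log_path = os.path.join(tempfile.mkdtemp(), "wal.log")

def durable_append(entry: str) -> None:
    """Append one log record and force it to disk before returning."""
    with open(log_path, "a") as f:
        f.write(entry + "\n")
        f.flush()               # push Python's buffer to the OS
        os.fsync(f.fileno())    # force the OS buffer to stable storage

# Only after the COMMIT record is durably on disk may the DBMS
# report success to the client; recovery replays the log after a crash.
durable_append("SET x = 1")
durable_append("COMMIT")

with open(log_path) as f:
    records = f.read().splitlines()
print(records)
```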