The correct answer is B. Checkpoints. CompTIA DataSys+ explains that checkpoints play a critical role in database recovery mechanisms by limiting the amount of transaction log data that must be processed during recovery. Log-based recovery relies on transaction logs to restore the database to a consistent state after a failure. If checkpoints are infrequent, the database engine must replay a large portion of the transaction log, which increases recovery time and system overhead.
A checkpoint is a process where the database writes all modified (dirty) pages from memory to disk and records a synchronization point in the transaction log. After a checkpoint completes, the database knows that all changes up to that point are safely stored on disk. During recovery, the system only needs to process log entries that occurred after the most recent checkpoint, significantly reducing recovery workload and time. DataSys+ highlights checkpoints as a key performance and availability feature, especially in systems with high transaction volumes.
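To make the mechanism concrete, here is a minimal sketch in Python. It is not any real database engine's implementation; the class and method names (`MiniLog`, `MiniEngine`, `checkpoint`, `recover`) are invented for illustration. It shows why, after a checkpoint flushes dirty pages to disk, recovery only needs to replay log records written after that checkpoint:

```python
# Minimal sketch (not a real engine) of checkpoint-limited log replay.

class MiniLog:
    """Append-only transaction log with checkpoint markers."""

    def __init__(self):
        self.records = []              # (lsn, kind, payload)
        self.last_checkpoint_lsn = -1  # LSN of the most recent checkpoint

    def append(self, kind, payload=None):
        lsn = len(self.records)        # log sequence number
        self.records.append((lsn, kind, payload))
        return lsn


class MiniEngine:
    def __init__(self):
        self.log = MiniLog()
        self.disk = {}    # durable pages
        self.buffer = {}  # dirty (modified, in-memory) pages

    def write(self, page, value):
        self.buffer[page] = value                 # modify the page in memory
        self.log.append("UPDATE", (page, value))  # log the change (write-ahead)

    def checkpoint(self):
        # Flush all dirty pages to disk, then record the sync point in the log.
        self.disk.update(self.buffer)
        self.buffer.clear()
        self.log.last_checkpoint_lsn = self.log.append("CHECKPOINT")

    def recover(self):
        # Only records *after* the last checkpoint need replay; everything
        # before it is already durable on disk.
        start = self.log.last_checkpoint_lsn + 1
        replayed = 0
        for lsn, kind, payload in self.log.records[start:]:
            if kind == "UPDATE":
                page, value = payload
                self.disk[page] = value
                replayed += 1
        return replayed


engine = MiniEngine()
engine.write("A", 1)
engine.write("B", 2)
engine.checkpoint()      # pages A and B are now durable
engine.write("C", 3)     # only this change is still volatile
engine.buffer.clear()    # simulate a crash wiping memory
print(engine.recover())  # replays 1 record, not 3
```

Without the checkpoint, recovery would have to replay all three updates; with it, only the single post-checkpoint update is reprocessed, which is exactly the workload reduction described above.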
Option A, deadlocks, is incorrect: a deadlock is a concurrency issue in which two or more transactions block each other indefinitely, each waiting on a resource the other holds. Deadlock handling matters for transaction throughput, but it does not reduce log-based recovery overhead. Option C, locks, is also incorrect: locks control concurrent access to data and help maintain consistency, but they do not affect how much log data must be replayed during recovery. Option D, indexes, improves query performance but can actually increase logging activity, because index changes are logged along with the data changes they track.
CompTIA DataSys+ emphasizes that effective recovery planning includes optimizing logging behavior and checkpoint frequency. Properly configured checkpoints strike a balance between runtime performance and recovery efficiency by reducing excessive log replay without causing unnecessary disk I/O.
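A rough back-of-the-envelope model can show this balance quantitatively. The cost functions and numbers below (`log_rate_rps`, `flush_cost_pages`, and the specific intervals) are illustrative assumptions, not DataSys+ figures or measurements from any real system:

```python
# Illustrative model of the checkpoint-frequency tradeoff.

def expected_replay_records(checkpoint_interval_s, log_rate_rps):
    # A crash lands, on average, halfway through an interval, so roughly
    # half of one interval's worth of log records must be replayed.
    return 0.5 * checkpoint_interval_s * log_rate_rps

def checkpoint_io_pages_per_hour(checkpoint_interval_s, flush_cost_pages):
    # More frequent checkpoints mean more dirty-page flushes per hour.
    return (3600 / checkpoint_interval_s) * flush_cost_pages

for interval in (60, 300, 1800):  # checkpoint every 1, 5, or 30 minutes
    print(interval,
          expected_replay_records(interval, log_rate_rps=500),
          checkpoint_io_pages_per_hour(interval, flush_cost_pages=10_000))
```

Shorter intervals shrink the expected replay work after a crash but multiply the flush I/O paid at runtime, which is the balance the exam objective describes.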
Therefore, to best mitigate the overhead caused by log-based recovery, the database manager should use checkpoints, making option B the correct and fully verified answer.