The velocity of big data refers to the speed at which data is generated, transmitted, processed, and analyzed, often in real-time or near real-time contexts. This makes Option B the correct answer.
Big data is commonly described using the “three Vs”: Volume, Velocity, and Variety. Volume refers to the sheer amount of data generated. Variety refers to the different types of data, such as structured, semi-structured, and unstructured data. Velocity, however, specifically focuses on how quickly data flows into systems and how fast it must be processed to remain valuable.
Option B accurately reflects this concept by emphasizing rapid processing and real-time handling, capabilities that are critical in use cases such as fraud detection, network monitoring, and real-time analytics. In these scenarios, delayed processing significantly reduces the value of the data.
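To make the velocity requirement concrete, here is a minimal Python sketch of a freshness check in a fraud-detection pipeline. The `Transaction` fields, the `FRESHNESS_WINDOW` threshold, and the `check_fraud` logic are all illustrative assumptions, not a reference to any particular product:

```python
import time
from dataclasses import dataclass

# Hypothetical transaction event; field names are illustrative only.
@dataclass
class Transaction:
    account_id: str
    amount: float
    created_at: float  # epoch seconds when the event was generated

FRESHNESS_WINDOW = 2.0  # seconds; an alert older than this loses its value

def check_fraud(txn: Transaction) -> None:
    age = time.time() - txn.created_at
    if age > FRESHNESS_WINDOW:
        # The event arrived too slowly to act on (e.g., to block the card),
        # illustrating why velocity, not just volume, matters.
        print(f"{txn.account_id}: event {age:.1f}s old, too stale to act on")
    elif txn.amount > 10_000:
        print(f"{txn.account_id}: flagged in near real time")

# Simulated stream: one fresh event and one delayed event.
check_fraud(Transaction("acct-1", 25_000.0, time.time()))
check_fraud(Transaction("acct-2", 25_000.0, time.time() - 5))
```

The same suspicious transaction is flagged when processed promptly but becomes useless once it is stale, which is the essence of the velocity dimension.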
Option A is incorrect because it describes volume, not velocity. Option C is incorrect because it describes variety. From an information security and audit perspective, velocity also introduces risk: there is less time for validation, logging, and manual review, so these functions must be addressed through automated controls.
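As one hedged illustration of such an automated control, the following Python sketch validates records in the ingest path instead of relying on manual review; the field names (`event_id`, `timestamp`, `payload`) and the `validate_record`/`ingest` helpers are hypothetical:

```python
from typing import Any

# Assumed schema for an inbound record; chosen only for this example.
REQUIRED_FIELDS = {"event_id", "timestamp", "payload"}

def validate_record(record: dict[str, Any]) -> bool:
    # Automated schema check stands in for manual review, which cannot
    # keep pace with a high-velocity stream.
    return REQUIRED_FIELDS.issubset(record) and isinstance(record["payload"], str)

def ingest(stream: list[dict[str, Any]]) -> None:
    for record in stream:
        if validate_record(record):
            print(f"accepted {record['event_id']}")
        else:
            # Rejected records are logged automatically for later audit.
            print(f"rejected (audit log): {record}")

ingest([
    {"event_id": 1, "timestamp": 1700000000, "payload": "ok"},
    {"event_id": 2},  # missing fields -> rejected without human intervention
])
```

The point is that validation and logging happen inline, at stream speed, rather than as an after-the-fact manual step.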
Therefore, the correct description of big data velocity is the speed at which data is processed, often in a real-time context.