The bottleneck in this architecture is typically the relational database (RDS MySQL): its concurrent-connection and write-throughput limits are fixed, while API traffic is bursty. Because transaction volume spikes during the day and drops to near zero at night, the system needs a way to absorb bursts without over-provisioning the database, which would be costly and sit underutilized at night.
A cost-effective way to add elasticity is to introduce Amazon SQS as a buffer between API ingestion and database writes. With SQS, devices (or the API/Lambda) enqueue transactions quickly, and Lambda consumers drain the queue at a controlled rate. The key is to prevent Lambda from overwhelming RDS with too many concurrent writes or connections. Setting reserved concurrency on the consumer Lambda function caps its maximum number of concurrent executions, which in turn caps database connection pressure and smooths the write load. This prevents the overload events that cause errors and timeouts, while still letting the system scale up to the reserved concurrency limit during peaks.
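As a concrete sketch of the consumer side, a Lambda handler triggered by the SQS queue might look like the following. The handler name, message shape, and `write_transaction` stub are illustrative assumptions, not part of the question; the partial-batch failure reporting assumes `ReportBatchItemFailures` is enabled on the event source mapping.

```python
import json

def write_transaction(txn):
    # Placeholder for the RDS MySQL INSERT. A real handler would use a
    # connection (e.g. pymysql) opened outside the handler so it is
    # reused across warm invocations instead of reconnecting each time.
    pass

def handler(event, context=None):
    """Process a batch of POS transactions delivered by the SQS trigger.

    Returning batchItemFailures lets SQS retry only the failed records,
    rather than redelivering the whole batch.
    """
    failures = []
    for record in event["Records"]:
        try:
            txn = json.loads(record["body"])
            write_transaction(txn)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Local smoke test with a fake SQS event
if __name__ == "__main__":
    event = {"Records": [{"messageId": "1",
                          "body": json.dumps({"amount": 9.99})}]}
    print(handler(event))  # -> {'batchItemFailures': []}
```

Because the queue trigger only ever runs as many handler copies as the reserved concurrency allows, the number of simultaneous database connections stays bounded no matter how deep the queue gets.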
Option D captures this correct control mechanism: SQS for buffering + Lambda reserved concurrency matched to the DB’s connection capacity.
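One way to arrive at a concurrency value "matched to the DB's connection capacity" is a back-of-the-envelope calculation like the one below. The function name, headroom factor, and example numbers are assumptions for illustration, not values from the question.

```python
def reserved_concurrency_cap(db_max_connections: int,
                             conns_per_invocation: int = 1,
                             headroom: float = 0.8) -> int:
    """Cap concurrent Lambda executions so their combined connections
    stay under a fraction (headroom) of the RDS max_connections,
    leaving the remainder for other clients and admin sessions."""
    return int(db_max_connections * headroom) // conns_per_invocation

# Example: an instance allowing 100 connections, one connection per
# invocation, keeping 20% headroom -> cap of 80 concurrent executions.
print(reserved_concurrency_cap(100))  # 80
```

The resulting value can then be applied with `aws lambda put-function-concurrency --function-name <consumer> --reserved-concurrent-executions 80`.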
Option C is incorrect because enhanced fan-out is a Kinesis Data Streams consumer feature, not an SQS feature.
Options A and B focus on scaling Aurora read replicas, which addresses read scaling, not write-heavy transaction ingestion. POS transaction processing is write-intensive, so adding read replicas does not solve the core elasticity problem. Migrating the database is also higher in effort and cost than adding a queue and tuning Lambda concurrency.
Therefore, adding SQS and setting Lambda reserved concurrency to protect RDS connections is the most cost-effective elasticity improvement.