The curious case of DynamoDB pricing

Each year, during the AWS re:Invent conference, I’m always eager to follow the keynote presentations and then selectively dig into some of the breakout sessions. Besides the Docker integration, Aurora interested me quite a bit. I’m really looking forward to seeing some further performance tests coming out, something I’m sure will happen as soon as people start getting preview access. Till then I’ll write this post based on the limited information AWS has already given us.

Now here are the performance numbers AWS claims for Aurora:

AWS Aurora performance

6 million inserts per minute and 30 million selects per minute is quite good. Of course, we don’t know how complex those selects are, or the batch size for the inserts. The slide mentions I2 instances, but it’s a little unclear whether the comparison pitted a MySQL server running on an I2 instance against Aurora running on that same I2 instance type. Given that Aurora appears to be available only on R3 instances, my guess is that MySQL was running on I2 while Aurora was, most likely, on the largest R3 instance. Let’s assume Aurora was running on the largest R3 instance type.

And here’s what I find particularly interesting about these stats when set against DynamoDB: 6 million inserts per minute works out to 100k per second, and 30 million selects per minute to 500k per second. Aurora’s largest instance, db.r3.8xlarge, costs $4.64 / hr.

Let’s compare that to DynamoDB:

In the same region (US East), 10 units of write capacity per second costs $0.0065 / hr, and 50 units of read capacity per second costs the same. Getting the same throughput as claimed for Aurora would then work out to:

Read: 500k/s ÷ 50 reads per $0.0065 block × $0.0065 = $65 / hr
Write: 100k/s ÷ 10 writes per $0.0065 block × $0.0065 = $65 / hr

Total cost for similar read/write performance on DynamoDB would therefore be $130 / hr, roughly 28 times Aurora’s $4.64 / hr. That is a massive premium for a NoSQL store where someone manages sharding on your behalf. For the simple data access patterns that NoSQL solutions like DynamoDB impose anyway, at that cost it’s tempting to just opt for a cluster of Auroras with self-managed sharding between them.
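The arithmetic above can be sketched in a few lines, assuming the 2014 US East prices quoted earlier ($0.0065 / hr buys either 10 write units or 50 read units per second) and Aurora’s claimed throughput:

```python
# Back-of-envelope DynamoDB cost for Aurora-level throughput.
READS_PER_SEC = 500_000    # 30M selects/min
WRITES_PER_SEC = 100_000   # 6M inserts/min

PRICE_PER_BLOCK = 0.0065   # $/hr per capacity block (US East, 2014)
READS_PER_BLOCK = 50       # read units bought per price block
WRITES_PER_BLOCK = 10      # write units bought per price block

read_cost = READS_PER_SEC / READS_PER_BLOCK * PRICE_PER_BLOCK
write_cost = WRITES_PER_SEC / WRITES_PER_BLOCK * PRICE_PER_BLOCK
total_cost = read_cost + write_cost

print(f"Read:  ${read_cost:.2f}/hr")
print(f"Write: ${write_cost:.2f}/hr")
print(f"Total: ${total_cost:.2f}/hr vs $4.64/hr for db.r3.8xlarge")
```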

In addition, storage on Aurora seems to be at least comparable in price, if not a lot cheaper. It’s also important to remember that as soon as DynamoDB decides to shard your table, your provisioned throughput gets divided equally across those shards. And trust me, it will shard long before you run into Aurora’s upper storage size limits.
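To see why that division hurts, here is an illustrative sketch (not an AWS API) of how evenly split provisioned throughput starves any single shard:

```python
# Hypothetical model: DynamoDB divides a table's provisioned
# throughput evenly across its shards, so one shard can only
# serve capacity / shard_count, no matter how hot it is.
def per_shard_throughput(provisioned_units: int, shards: int) -> float:
    """Capacity units available to any single shard after a split."""
    return provisioned_units / shards

# 10,000 provisioned write units split across 4 shards: a hot key
# can only consume a quarter of what you pay for.
hot_shard_capacity = per_shard_throughput(10_000, 4)
print(hot_shard_capacity)
```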

UPDATE: There is more info from re:Invent 2014 regarding Amazon Aurora on this YouTube video.