AWS recently announced that DynamoDB will now scale read and write capacity automatically. While DynamoDB already took care of a lot of database administration (backups, underlying infrastructure provisioning), setting the proper capacities initially, and updating them as your application changed, was a key task that fell to the user. No more.
I posted a link to the news to a discussion channel I participate in, and someone asked, “What’s left to manage?” Drawing from that discussion, here are a few items remaining:
- Choosing appropriate partition keys. Make sure key values are spread uniformly, so no single partition becomes a hot spot.
- Choosing the right primary key. Since you typically want to avoid table scans and can only query efficiently by key, picking the right one is important. (I would also call this “data model design”.)
- Enforcing data integrity, initially and over time. This is a challenge with every NoSQL solution.
- Creating the appropriate global secondary indexes for your application.
- Securing and controlling access to your data.
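To make the data-model items above concrete, here is a minimal sketch of the parameters you might pass to DynamoDB’s `create_table` API, covering the primary key choice and a global secondary index. The table and attribute names (`Orders`, `customer_id`, etc.) are hypothetical, and the dict is only constructed, not sent to AWS:

```python
# Hypothetical "Orders" table definition, expressed as create_table parameters.
# The partition key ("HASH") spreads items across partitions, the sort key
# ("RANGE") supports range queries within a partition, and the GSI allows
# queries by an attribute that is not part of the primary key.
table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},  # partition key
        {"AttributeName": "order_date", "KeyType": "RANGE"},  # sort key
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "status-index",
            "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            # Initial capacity; auto-scaling can adjust it from here.
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 1,
                "WriteCapacityUnits": 1,
            },
        }
    ],
    # Initial table capacity; auto-scaling can adjust it from here.
    "ProvisionedThroughput": {"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
}
```

With boto3 this would be submitted as `boto3.client("dynamodb").create_table(**table_params)`; getting this schema right up front is exactly the kind of design work that auto-scaling does not do for you.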
These are all still important tasks, but DynamoDB is getting easier and easier to use for high-performance applications for which NoSQL is a good fit. (And for which you don’t mind being tied to AWS.)