Changing table schema


Hey all,

I just want to confirm something... we use CloudFormation to deploy our stack to AWS, including defining our DynamoDB tables. Since we're still in early development I didn't bother researching beforehand, BUT I went ahead and changed the AttributeName of our hashKey for one of our tables. After I redeployed, it seemed as if the table was deleted and recreated, because all of the old entries were gone. Is that what happened? Unfortunately I don't have the ARN of the old table, so I can't compare the two.

If this is what happens, is the proper way to do this in a live environment to create a secondary table with the new key schema, somehow copy the old entries over, update them with the new keys (and remove the old ones if we want), point our writes at the new table, and, upon completion, delete the old table? If so, is there a DynamoDB API that does a migration like this?

Thanks!

AustinK
Asked 5 years ago · 1000 views
1 Answer
Accepted Answer

Hi,
For your first question: yes, CloudFormation deleted your old table and created a new one. Here is a nice article that explains why that happens and a few strategies to prevent it in the future, such as using stack policies and the UpdateReplacePolicy attribute.
https://www.alexdebrie.com/posts/understanding-cloudformation-updates/
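The short version: set UpdateReplacePolicy (and usually DeletionPolicy) to Retain on the table resource. That does not stop CloudFormation from creating a replacement table when the key schema changes, but it does stop it from deleting the old one, so the data is still there to copy from. A minimal template fragment, with placeholder resource and attribute names (not from your actual stack), would look something like this:

```yaml
Resources:
  MyTable:                        # placeholder logical ID
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain        # keep the table if the resource is deleted
    UpdateReplacePolicy: Retain   # keep the old table if an update forces a replacement
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk       # placeholder hash key name
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
```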

Also, here's a good article I found online by Abhaya Chauhan on how to change your key schema. Note: he also includes a link to his code to perform these operations.
https://www.abhayachauhan.com/2018/01/dynamodb-changing-table-schema/

1. Create a new table (let us call this NewTable), with the desired key structure, LSIs, GSIs.
2. Enable DynamoDB Streams on the original table
3. Associate a Lambda function with the stream that pushes each record into NewTable. (This Lambda should trim off the migration flag added in Step 5.)
4. [*Optional*] Create a GSI on the original table to speed up scanning items. Ensure this GSI projects only the primary key and the Migrated attribute (see Step 5).
5. Scan the GSI created in the previous step (or the entire table) with the following filter:
FilterExpression = "attribute_not_exists(Migrated)"
Update each item in the table with a migration flag (i.e. "Migrated": { "S": "0" }), which sends it through DynamoDB Streams (use the UpdateItem API to ensure no data loss occurs). See the boto3 sketch after this list.

**NOTE** You may want to increase write capacity units on the table during the updates.

6. The Lambda will pick up every item, trim off the Migrated flag, and push it into NewTable.
7. Once all items have been migrated, repoint your code to the new table.
8. Remove the original table and the Lambda function once you're happy everything is good.
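
To make steps 3, 5, and 6 concrete, here's a rough boto3 sketch (not a complete migration tool). The table names, the key attribute name "pk", and the "Migrated" flag are placeholders I'm assuming for illustration; adjust them to your real schema, and remember the Lambda is where you'd rename or reshape the key attributes for the new table:

```python
import boto3

dynamodb = boto3.client("dynamodb")

OLD_TABLE = "OldTable"  # placeholder: table with the old key schema
NEW_TABLE = "NewTable"  # placeholder: table with the new key schema


def backfill_old_table():
    """Step 5: touch every not-yet-migrated item so it flows through the stream.

    UpdateItem only adds a "Migrated" flag; the rest of the item is untouched,
    and the modification is what pushes a record into DynamoDB Streams.
    """
    paginator = dynamodb.get_paginator("scan")
    for page in paginator.paginate(
        TableName=OLD_TABLE,
        FilterExpression="attribute_not_exists(Migrated)",
    ):
        for item in page["Items"]:
            dynamodb.update_item(
                TableName=OLD_TABLE,
                Key={"pk": item["pk"]},  # key of the OLD schema ("pk" is a placeholder)
                UpdateExpression="SET Migrated = :m",
                ExpressionAttributeValues={":m": {"S": "0"}},
            )


def lambda_handler(event, context):
    """Steps 3 and 6: Lambda attached to the old table's stream.

    Copies each inserted/updated item into NewTable, dropping the Migrated flag.
    Assumes the stream view type includes the new image (NEW_IMAGE or
    NEW_AND_OLD_IMAGES).
    """
    for record in event["Records"]:
        if record["eventName"] == "REMOVE":
            continue  # decide separately whether deletes should propagate
        item = record["dynamodb"]["NewImage"]
        item.pop("Migrated", None)  # trim off the migration flag (step 6)
        # If the hash key attribute was renamed, remap it here before writing.
        dynamodb.put_item(TableName=NEW_TABLE, Item=item)
```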

Hope this helps!
-randy

Answered 5 years ago
  • FYI - the Abhaya Chauhan link is broken and takes you to spam/phishing sites.
