Changing table schema


Hey all,

I just want to confirm something... we use CloudFormation to deploy our stack to AWS, including defining our DynamoDB tables. Since we're still in early development I didn't bother researching beforehand, BUT I went ahead and changed the AttributeName of the hash key for one of our tables. After I redeployed, it seemed as if the table was deleted and recreated, because all of the old entries were gone. Is that what happened? Unfortunately I don't have the ARN of the old table, so I can't compare the two.

If this is what happens, is the proper way to do this in a live environment to create a secondary table with the new key schema, somehow copy the old entries, update them with the new keys (and remove the old ones if we want), then write to the new table, and upon completion, delete the old table? If so, is there a DynamoDB API that does a migration like this?

Thanks!

AustinK
asked 5 years ago · 1,000 views
1 Answer
Accepted Answer

Hi,
For your first question: yes, your CloudFormation update deleted your old table and created a new one, because changing a key attribute forces a replacement of the resource. Here is a nice article that explains what happens and a few strategies for preventing it in the future, such as stack policies and the UpdateReplacePolicy attribute.
https://www.alexdebrie.com/posts/understanding-cloudformation-updates/
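To make that concrete, here's a minimal sketch of a table that survives a key-schema change, assuming (purely for illustration) that the stack were defined with the AWS CDK (v2) in Python rather than a raw template; the stack, table, and attribute names are made up. As far as I understand, RemovalPolicy.RETAIN synthesizes to Retain for both DeletionPolicy and UpdateReplacePolicy, so a replacement leaves the old table and its data behind instead of deleting them:

```python
# Minimal sketch, assuming an AWS CDK (v2) Python app rather than a raw template;
# the stack, table, and attribute names here are made up for illustration.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_dynamodb as dynamodb
from constructs import Construct


class OrdersStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        dynamodb.Table(
            self, "OrdersTable",
            partition_key=dynamodb.Attribute(
                name="orderId", type=dynamodb.AttributeType.STRING
            ),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            # RETAIN should synthesize to DeletionPolicy/UpdateReplacePolicy: Retain,
            # so a key-schema change orphans the old table instead of deleting it.
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
OrdersStack(app, "OrdersStack")
app.synth()
```

Keep in mind the retained table is no longer managed by the stack after the replacement, so you still have to migrate its data and clean it up yourself.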

Also, here's a good article I found online by Abhaya Chauhan on how to change your key schema. Note: he also includes a link to his code to perform these operations.
https://www.abhayachauhan.com/2018/01/dynamodb-changing-table-schema/

1. Create a new table (let's call this NewTable) with the desired key structure, LSIs, and GSIs.
2. Enable DynamoDB Streams on the original table.
3. Associate a Lambda function with the stream that pushes each record into NewTable. (This Lambda should trim off the migration flag from Step 5; a sketch of it follows this list.)
4. [*Optional*] Create a GSI on the original table to speed up scanning items. Ensure this GSI only projects the primary key and the Migrated attribute (see Step 5).
5. Scan the GSI created in the previous step (or the entire table) using the following filter:
FilterExpression = "attribute_not_exists(Migrated)"
Update each item in the table with a migration flag (i.e. "Migrated": { "S": "0" }), which sends it to DynamoDB Streams. Use the UpdateItem API to ensure no data loss occurs. (A backfill sketch also follows this list.)

**NOTE** You may want to increase write capacity units on the table during the updates.

6. The Lambda will pick up all the items, trim off the Migrated flag, and push them into NewTable.
7. Once all items have been migrated, repoint your code to the new table.
8. Remove the original table and the Lambda function once you're happy that all is good.
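
For steps 3 and 6, a minimal sketch of the stream handler might look like this (assuming Python 3 with boto3, a destination table literally named NewTable, the Migrated flag from step 5, and a stream view type that includes new images; this is illustrative rather than the code from the linked article):

```python
# Sketch of the stream-handler Lambda for steps 3 and 6. Assumes boto3, a
# destination table named "NewTable", and a stream view type of NEW_IMAGE or
# NEW_AND_OLD_IMAGES so that NewImage is present on each record.
import boto3

dynamodb = boto3.client("dynamodb")
NEW_TABLE_NAME = "NewTable"  # assumed name of the destination table


def handler(event, context):
    """Copy each INSERT/MODIFY stream record into NewTable, minus the Migrated flag."""
    for record in event["Records"]:
        if record["eventName"] == "REMOVE":
            continue  # decide separately whether deletes should be mirrored

        item = record["dynamodb"]["NewImage"]  # already in DynamoDB attribute-value format
        item.pop("Migrated", None)             # trim off the migration flag (step 6)

        # If the new key schema renames the hash key, remap it here, e.g.
        # item["newHashKey"] = item.pop("oldHashKey")  (hypothetical attribute names).

        dynamodb.put_item(TableName=NEW_TABLE_NAME, Item=item)
```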
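
And for step 5, the backfill that touches every not-yet-migrated item could be sketched as below (again assuming boto3; the source table name OldTable and the key attribute list are placeholders for your real schema):

```python
# Sketch of the step-5 backfill. Assumes boto3, an original table named
# "OldTable" (placeholder), and that KEY_ATTRIBUTES matches its real key schema.
import boto3

dynamodb = boto3.client("dynamodb")
OLD_TABLE_NAME = "OldTable"   # placeholder name of the original table
KEY_ATTRIBUTES = ["id"]       # placeholder key schema; add the range key if there is one


def backfill():
    paginator = dynamodb.get_paginator("scan")
    pages = paginator.paginate(
        TableName=OLD_TABLE_NAME,  # add IndexName=... to scan the step-4 GSI instead
        FilterExpression="attribute_not_exists(Migrated)",
    )
    for page in pages:
        for item in page["Items"]:
            key = {name: item[name] for name in KEY_ATTRIBUTES}
            # UpdateItem (rather than PutItem) so concurrent writes aren't overwritten,
            # and the touched item flows through the stream to the Lambda above.
            dynamodb.update_item(
                TableName=OLD_TABLE_NAME,
                Key=key,
                UpdateExpression="SET Migrated = :m",
                ExpressionAttributeValues={":m": {"S": "0"}},
            )


if __name__ == "__main__":
    backfill()
```

Re-run the scan until it comes back empty; anything written after the Lambda is attached reaches NewTable through the stream on its own.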

Hope this helps!
-randy

answered 5 years ago
  • FYI - the Abhaya Chauhan link is broken and takes you to spam/phishing sites.
