E-commerce (load balanced) system on LightSail migrating to EC2


Hi There,

Hope you're doing fine. Here's what we are working with: we have a Bitnami WordPress instance (a WooCommerce multivendor solution) running on Lightsail, built on an 8 GB / 2 vCPU instance, with WooCommerce installed on the server. Images are in an S3 bucket, and the database is MySQL.

Both the instance and the bucket sit behind a distribution. That's the short version of the setup. We want it load balanced later on; however, we wanted to wait on that to see how we can move towards an EC2 solution first.

Here is what we actually want: an auto-scalable solution in two regions (we currently use one, Frankfurt), keeping our existing build as is. The key part: when the marketing team updates the frontend, for example, or the multivendor solution adds a new vendor, the change should propagate to all instances.

We found a template in the CloudFormation designer (under "Sample CloudFormation templates") called WordPress Multi AZ which could be useful; however, with the above in mind, we would like to use our base snapshot for it. If we autoscale, we could run each instance at 4 GB / 2 vCPU behind a load balancer, pull in a few more instances when the system demands it, and shut them down when traffic is low. Database and instance snapshots can be created easily; S3 is already active, and SES as well. As you can imagine, we do not know how many people will be on the site from the beginning, which makes calculating capacity a bit of a problem.

For us it's important that when our marketing team makes changes, or a customer uploads products or other information, every server presents an identical copy to the end user.

Sorry this is so long, but I hope it's now explained in a bit more detail, as we cannot find any docs on how to migrate this kind of setup from Lightsail into a scalable EC2 solution that propagates information, ideally in real time...

Oh yes, and an important fact: our monthly budget for the base configuration is around 120 US$/EUR per month, NOT counting when scaling kicks in. Any ideas how to make this happen?

Let us know what could be done here and how this could be approached, as we're a bit stuck in a loop and our know-how is not up to par for this part.

Cheers Dragan

asked a year ago · 257 views
1 Answer

First, for EC2 scaling, you can use Auto Scaling.
Auto Scaling lets you scale based on EC2 load and also configure scheduled scaling.
You can keep costs down by responding flexibly to demand, for example running a single EC2 instance during low-traffic periods and increasing to two instances when traffic rises.
Applications can be deployed to multiple regions by setting up CI/CD with GitHub and CodePipeline.
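The scale-on-load plus scheduled-scaling pattern above can be sketched in CloudFormation. This is a minimal sketch, not a complete stack: the launch template, target group, and subnet IDs referenced here (`WebLaunchTemplate`, `WebTargetGroup`, `subnet-...`) are placeholders assumed to be defined elsewhere in the template.

```yaml
# Sketch only -- resource names and IDs are placeholders, not a working stack.
Resources:
  WebAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"            # one instance during quiet hours keeps base cost low
      MaxSize: "4"
      DesiredCapacity: "1"
      VPCZoneIdentifier:      # two subnets in different AZs (placeholder IDs)
        - subnet-aaaa1111
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
      TargetGroupARNs:
        - !Ref WebTargetGroup

  # Target tracking: add/remove instances to hold ~60% average CPU
  CpuScalingPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WebAsg
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 60

  # Scheduled scaling: guarantee at least two instances on weekday mornings
  BusinessHoursUp:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref WebAsg
      MinSize: 2
      Recurrence: "0 8 * * MON-FRI"   # 08:00 UTC, weekdays
```

With a target-tracking policy like this, the group sits at one instance during quiet periods (keeping the base cost near the stated budget) and only adds capacity when load demands it.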

Another approach is to operate EFS in a replication configuration.
However, EFS is somewhat more expensive and may be cost-prohibitive.
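To make the EFS idea concrete, here is a hedged sketch of how a shared file system is typically wired into such a group: the launch template's user data mounts EFS over `wp-content`, so media a vendor uploads on one instance is visible on every instance. The file system ID and the Bitnami WordPress path are assumptions and will differ in a real stack (older Bitnami images use `/opt/bitnami/apps/wordpress/htdocs/wp-content`).

```yaml
# Sketch only -- the EFS file system, its mount targets, and the security
# groups are assumed to exist; the file system ID below is a placeholder.
WebLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      InstanceType: t3.medium          # ~4 GB / 2 vCPU, as in the question
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Mount the shared EFS volume over wp-content so uploads on one
          # instance appear on all instances.
          yum install -y amazon-efs-utils
          mount -t efs -o tls fs-0123456789abcdef0:/ /opt/bitnami/wordpress/wp-content
          # Persist the mount across reboots
          echo "fs-0123456789abcdef0:/ /opt/bitnami/wordpress/wp-content efs _netdev,tls 0 0" >> /etc/fstab
```

Note that only files are shared this way; posts and product rows live in the MySQL database, which you already keep separate from the WordPress instance.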

answered a year ago
  • Hi riku,

    thank you for your answer. We read the Auto Scaling doc and are starting to get a bit of a grasp on it.

    Now we can set this up manually or via CloudFormation; for now the easy route is CloudFormation, using the Multi AZ stack template. So that part is clear. The thing is, we already have a working setup in Lightsail, with the configuration done on the server etc. We would like to use this https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-cloudformation-stackssnapshot with the Multi AZ stack; we also have the database separated. I found docs saying this all needs to be set up manually and cannot be done in an automated way: https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-creating-ec2-instances-from-exported-snapshots#aws-cloud-formation-stack

    There should, of course, be a way to automate this process and to use a robust solution which AWS has already proposed/templated.

    We have never used CodePipeline, nor do we have experience with it. We would not use Git, but rather have a staging site where we test on the instance and, when everything is OK, push it onto the live build.

    Now here is the catch: the site will probably launch with about 7-10k products (we separate the database from the WP instance), and in the months to come this would grow to probably 25-30k products, with an end goal of around 75k. This means our customers/vendors will be connected to our site from their own dashboards and will update their own shops.

  • So this means they will spend a lot of time in the backend adding pictures and product entries, and these should be reflected on the site in "realtime" as well. I hope the data replication part is a bit clearer now: pushing this data onto all the instances. As the WP CMS is connected to the separated database, I "think" updating one instance is not the issue; the ONLY issue I can think of is replicating this data onto the running instances. For example, if there are always two running, the data should be identical on both of them. The staging site is probably only a solution for the frontend part; ideally this would be a running instance which we push onto the existing instances to update.
