Questions tagged with Architecture Strategy

AWS Real-Time Ad Tracker Architecture

Hello. I'm attempting to build an ad-tracking application that can attribute, store, and then query and analyze website visitor information in real or near-real time. Unfortunately, I'm having difficulty designing the application architecture, as I am new to AWS overall. So far, I expect my application to look like this:

1. API Gateway to serve as a secure endpoint for websites and ad servers to send website visitor information (think UTM parameters, device resolution, internal IDs, etc.).
2. Lambda (Node.js) to route and attribute session information.
3. DynamoDB for its ability to handle high-volume write rates in a cost-efficient way.
4. S3 to create frequent/on-demand backups of DynamoDB, which can then be analyzed by...
5. ? I'm considering passing all of the S3 data back for client-side processing in my dashboard.

**However:** I just found [this case study with Nasdaq](https://aws.amazon.com/solutions/case-studies/nasdaq-case-study/?pg=ln&sec=c) utilizing [Redshift and other services shown here](https://aws.amazon.com/redshift/?p=ft&c=aa&z=3). Judging from the 'Data' label featured in the first illustration of the latter link (clickstreams, transactions, etc.), it appears to be exactly what I need.

So, from a cost, simplicity, and efficiency standpoint, my question is: would it be easier to eliminate DynamoDB and S3 and instead configure my Lambda functions to send their data directly into Redshift? Any guidance would be greatly appreciated, thank you!
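To make step 2 concrete, here is a minimal sketch of what the ingest Lambda might look like, assuming a Node.js 18+ runtime, TypeScript, and the AWS SDK for JavaScript v3. The table name (`ad-visits`), key schema, and payload field names are placeholders, not anything from the question:

```typescript
import { randomUUID } from "node:crypto";
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

// One client per container, reused across invocations.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}), {
  marshallOptions: { removeUndefinedValues: true }, // skip fields the caller omits
});
const TABLE = process.env.VISITS_TABLE ?? "ad-visits"; // hypothetical table name

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  const body = JSON.parse(event.body ?? "{}");

  // Attribute the hit. Field names are illustrative -- adapt them to whatever
  // the websites/ad servers actually post.
  const item = {
    sessionId: body.sessionId ?? randomUUID(), // partition key
    ts: Date.now(),                            // sort key (epoch ms)
    utmSource: body.utm_source,
    utmMedium: body.utm_medium,
    utmCampaign: body.utm_campaign,
    resolution: body.resolution,
    internalId: body.internal_id,
  };

  await ddb.send(new PutCommand({ TableName: TABLE, Item: item }));
  return { statusCode: 202, body: JSON.stringify({ ok: true }) };
};
```

If the Lambda were to write straight into Redshift instead, the Redshift Data API (`ExecuteStatement`) avoids connection management, but per-event single-row inserts tend to be a poor fit for Redshift compared with batching through S3 or a Kinesis-style buffer; that trade-off is essentially what the question above is weighing.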
2 answers · 0 votes · 81 views · asked a month ago

Find out hidden costs when moving to AWS

Hello everyone 👋 I am opening this post to ask for some cost-related information, as I would like to get an estimate of how much I would be paying for the architecture of my service. The architecture is quite simple: I would like to ship some data from Google Cloud infrastructure to an AWS S3 bucket and then download it onto an EC2 machine to process it. This is a picture of that diagram:

![AWS arch diagram](/media/postImages/original/IMu5VnT59oTDuiZ7T2sYOHPg)

With regard to costs, and as far as I have found, I would **only** be paying for the data transfer from Google Cloud to AWS as **network egress costs**, plus the costs of hosting the information in S3. As stated in the ["Overview of data transfer costs for common architectures"](https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/) and the [Amazon S3 pricing guide, data transfer section](https://aws.amazon.com/s3/pricing/):

- I don't pay for data transferred from an Amazon S3 bucket to any AWS service(s) within the same AWS Region as the S3 bucket (including to a different account in the same AWS Region) if I use an Internet Gateway to access that data.
- I don't pay for data transferred in from the internet.

Am I right? Am I missing anything in this diagram? (Sorry for the networking abstraction I'm making in the diagram above; as stated, I'd be accessing the S3 bucket through an Internet Gateway, with both EC2 and S3 running in the same Region.) Thanks a lot in advance!
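For a rough order-of-magnitude check, here is a back-of-the-envelope sketch of that cost model. The rates are illustrative placeholders only, not current prices; the GCP egress tier and the S3 Region both change the numbers:

```typescript
// Back-of-the-envelope monthly cost model for: GCP -> S3 -> EC2 (same Region).
// All rates are illustrative placeholders -- check current GCP network egress
// pricing and the Amazon S3 pricing page for your Region and usage tier.
const GCP_EGRESS_USD_PER_GB = 0.12;            // paid to Google, not AWS (assumed tier)
const S3_STORAGE_USD_PER_GB_MONTH = 0.023;     // S3 Standard (assumed Region)
const S3_PUT_USD_PER_1000_REQUESTS = 0.005;

const AWS_INGRESS_USD_PER_GB = 0.0;            // data transfer *in* from the internet is free
const S3_TO_EC2_SAME_REGION_USD_PER_GB = 0.0;  // S3 -> EC2 within the same Region is free

function estimateMonthlyUsd(
  gbShipped: number,   // data shipped from GCP and later read by EC2
  gbStoredAvg: number, // average data held in S3 over the month
  putRequests: number  // number of S3 PUTs used to land the data
): number {
  const gcpEgress = gbShipped * GCP_EGRESS_USD_PER_GB;
  const awsIngress = gbShipped * AWS_INGRESS_USD_PER_GB;
  const s3Storage = gbStoredAvg * S3_STORAGE_USD_PER_GB_MONTH;
  const s3Requests = (putRequests / 1000) * S3_PUT_USD_PER_1000_REQUESTS;
  const s3ToEc2 = gbShipped * S3_TO_EC2_SAME_REGION_USD_PER_GB;
  return gcpEgress + awsIngress + s3Storage + s3Requests + s3ToEc2;
}

// Example: 500 GB shipped per month, ~500 GB held in S3, 10,000 PUT requests.
console.log(estimateMonthlyUsd(500, 500, 10_000).toFixed(2)); // ≈ 60 + 0 + 11.50 + 0.05 + 0
```

One line item the diagram itself hides: the EC2 instance hours and any attached EBS volume are separate costs that also belong in the estimate.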
1 answer · 0 votes · 40 views · asked 2 months ago

AWS IoT Greengrass (V2) and Video Streaming

Hello, the use case I have is this: there are two types of AWS IoT Greengrass V2 core devices, connected (in the same private LAN) in a hub-and-spoke architecture. None of them are connected to client devices (Greengrass is being used because of its IPC and orchestration benefits):

1. [Spoke] An AWS IoT Greengrass V2 core device directly attached to a camera. The video stream is sent to a Hub AWS IoT Greengrass V2 core device for ML processing (inference) that must be near-real time.
2. [Hub] An AWS IoT Greengrass V2 core device that processes and fans out video streams: **A)** to an ML inference interface (**a local component of the hub**); **B)** to Kinesis Firehose (S3; to re-train the model); **C)** to AWS Kinesis Video Streams (for a human to view the video online).

I have a couple of questions:

1. Is the architecture feasible? Does it make sense?
2. What is the best (performance- and security-wise) technology (open source, AWS component, protocol) to use on the Spoke and Hub devices to send the video stream from the spokes to the hub? (The video has to be high quality with minimal/no compression to keep the inference accuracy high.)
3. Can the Stream Manager component of AWS IoT Greengrass V2 send (Hub) streams in fan-out mode (**e.g., to two different destinations concurrently, such as AWS Kinesis Firehose and AWS Kinesis Video Streams**)?

Thank you, Yossi
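Setting the Stream Manager component aside, here is a minimal sketch of the fan-out idea itself from a hub-side custom component, assuming Node.js/TypeScript and the AWS SDK for JavaScript v3. The delivery-stream and stream names are hypothetical, and the second destination here is Kinesis Data Streams purely to illustrate concurrent delivery: Kinesis Video Streams ingestion normally goes through the KVS Producer SDK or the GStreamer `kvssink` plugin rather than a plain SDK call, and it is worth double-checking the Stream Manager documentation for its current list of supported export destinations.

```typescript
import { FirehoseClient, PutRecordCommand as FirehosePut } from "@aws-sdk/client-firehose";
import { KinesisClient, PutRecordCommand as KinesisPut } from "@aws-sdk/client-kinesis";

const firehose = new FirehoseClient({});
const kinesis = new KinesisClient({});

// Fan one payload (e.g. a frame's metadata or a small encoded chunk) out to
// two cloud destinations concurrently. Both names are hypothetical.
export async function fanOut(payload: Uint8Array, deviceId: string): Promise<void> {
  await Promise.all([
    firehose.send(
      new FirehosePut({
        DeliveryStreamName: "video-retrain-firehose", // lands in S3 for re-training
        Record: { Data: payload },
      })
    ),
    kinesis.send(
      new KinesisPut({
        StreamName: "video-telemetry-stream", // second, independent destination
        Data: payload,
        PartitionKey: deviceId,
      })
    ),
  ]);
}
```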
1 answer · 0 votes · 143 views · yossico · asked 6 months ago

Architecture for multi-region ECS application

Hi everyone, I just wanted to get feedback on my proposed solution for a multi-region, Dockerized ECS app. Currently we have the following resources in Region A:

```
Postgres DB (used for user accounts only)
Backend+Frontend NextJS App (Dockerized) - ECS
Backend Microservice App for conversion of files (Dockerized) - ECS
Backend 3rd-party API + Datastore (also deployed in other regions) - unknown architecture
```

I now need to deploy to Regions B and C. The backend 3rd-party API is already deployed in these regions. I am thinking of deploying the following resources to those regions:

```
Backend+Frontend NextJS App (Dockerized)
Backend Microservice App for conversion of files (Dockerized)
```

Our app logs the user in (authentication + authorization) using the 3rd-party API, and after login we can see which region their data is in. So after login I can bounce them and their token to the appropriate region. I cannot rely on Route 53 routing because the source of truth about their region is only available after login; for example, they may (rarely) be accessing from Region B while travelling even though their datastore is in Region C, in which case I need to bounce them to Region C.

I also don't need to replicate our database to other regions, because it only stores their account information for billing purposes, so the performance impact is minimal and it is only checked on login/logout. Currently we have low tens of users, so I can easily restructure and deploy a different architecture if/when we start scaling. Critique is welcome!
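As a concrete illustration of the post-login bounce, here is a minimal sketch, assuming a Next.js/TypeScript backend; the region-to-URL mapping and the `/session/resume` route are hypothetical, and how the token is actually handed off (cookie, header, or short-lived one-time code) is left to the app:

```typescript
// Hypothetical mapping from the user's data region to the regional deployment URL.
const REGION_ENDPOINTS: Record<string, string> = {
  "us-east-1": "https://us.app.example.com",
  "eu-west-1": "https://eu.app.example.com",
  "ap-southeast-2": "https://ap.app.example.com",
};

// After the 3rd-party API confirms login and reports the user's data region,
// decide whether the current deployment keeps the session or redirects.
export function resolveRegionRedirect(
  dataRegion: string,
  currentRegion: string
): string | null {
  if (dataRegion === currentRegion) return null; // already in the right place
  const target = REGION_ENDPOINTS[dataRegion];
  if (!target) throw new Error(`No deployment configured for region ${dataRegion}`);
  // Hand the session over via the target deployment rather than a query-string
  // token, which would leak into logs and browser history.
  return `${target}/session/resume`;
}
```

Since the mapping lives in per-deployment config, adding a Region D later would be a configuration change rather than a code change.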
1 answer · 0 votes · 495 views · ManavDa · asked 7 months ago