Questions tagged with Microservices


Hi, I am deploying a Lambda function that uses the NLTK package for preprocessing text. For the application to work I need to download the stopwords, punkt, and wordnet corpora. I have deployed using a Docker image and the SAM CLI. When the function runs on AWS, I get a series of errors when it tries to access the NLTK data. The first error I got was that '/home/sbx_user1051/' cannot be written to. After reading solutions on Stack Overflow, I was pointed in the direction of storing the NLTK data in the /tmp/ directory, because that is the only directory that can be modified. Now, after redeploying the image with the changes to the code, I have the files stored in /tmp/, but the Lambda function does not search that directory when trying to access the stopwords. It still searches only these directories:

- '/home/sbx_user1051/nltk_data'
- '/var/lang/nltk_data'
- '/var/lang/share/nltk_data'
- '/var/lang/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'

What should I do to make the NLTK data this function needs available when running on AWS Lambda?
0 answers · 0 votes · 11 views
Tyler · asked a day ago
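A minimal sketch of one common fix for the question above, assuming the handler downloads the corpora at cold start: NLTK only searches /tmp if it is added to nltk.data.path (or NLTK_DATA is set before nltk is imported).

```
import os

# /tmp is the only writable path on Lambda; point NLTK there before importing it.
os.environ["NLTK_DATA"] = "/tmp/nltk_data"

import nltk

# Adding the path explicitly also covers the case where nltk was imported earlier.
if "/tmp/nltk_data" not in nltk.data.path:
    nltk.data.path.append("/tmp/nltk_data")

# Download once per container at cold start; warm invocations reuse the files.
for pkg in ("stopwords", "punkt", "wordnet"):
    nltk.download(pkg, download_dir="/tmp/nltk_data")

def handler(event, context):
    from nltk.corpus import stopwords
    return {"sample": stopwords.words("english")[:5]}
```

Alternatively, since /usr/share/nltk_data is already on the default search path shown in the question, the corpora can be baked into the Docker image at build time (e.g. `RUN python -m nltk.downloader -d /usr/share/nltk_data stopwords punkt wordnet`), which also avoids the download on every cold start.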
Hi team, I followed this [blog](https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/) to use an IAM role for a workload outside AWS. In my case, I want a pipeline running in Azure DevOps to push an image to Amazon ECR. Following the blog, I was able to generate credentials from the IAM role and call Amazon S3, but I am not sure how this applies to a workload running in Azure. For example, what are the steps to make a pipeline in Azure assume an IAM role in AWS and push images to ECR? I don't know how to apply the IAM Roles Anywhere principle in Azure. Is there an AWS doc or blog explaining the steps? Thank you!
0 answers · 0 votes · 20 views
Jess · asked a day ago
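For the question above, a hedged sketch of the Azure-side plumbing once IAM Roles Anywhere credentials work. It assumes the pipeline host holds a certificate trusted by a Roles Anywhere trust anchor, and that ~/.aws/config defines a profile whose credential_process invokes the signing helper (aws_signing_helper credential-process ...); the profile name roles-anywhere is hypothetical.

```
import base64

import boto3

# Hypothetical profile backed by IAM Roles Anywhere via credential_process.
session = boto3.Session(profile_name="roles-anywhere")
ecr = session.client("ecr", region_name="us-east-1")

# Exchange the temporary role credentials for a Docker registry token.
auth = ecr.get_authorization_token()["authorizationData"][0]
user, token = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
registry = auth["proxyEndpoint"]

# The pipeline then authenticates Docker with the short-lived token:
#   echo "$TOKEN" | docker login --username AWS --password-stdin "$REGISTRY"
print(f"registry={registry} user={user} token_length={len(token)}")
```

From there the push itself is a plain docker tag / docker push against the returned registry endpoint.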
Hi team, my org relies on Azure DevOps pipelines. We want to deploy from Azure to our ECS Fargate cluster, but we have some constraints:

- we cannot create long-lived credentials in AWS
- we don't have outbound internet connectivity from within our VPC

How can we deploy the built artifact from Azure to ECS without using long-lived AWS credentials? I saw the solution of using [build agents](https://medium.com/hashmapinc/automate-code-deployment-with-aws-ec2-build-agents-for-your-azure-devops-pipelines-6636fe1c8e21), but can Azure assume a role in AWS without build agents? And if Azure assumes a role in AWS, doesn't it still need AWS credentials to do so?
1 answer · 0 votes · 27 views
Jess · asked 3 days ago
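One answer sketch, under stated assumptions: if an OpenID Connect identity provider for the Azure DevOps organization is registered in IAM and the deployment role trusts it, the pipeline can exchange its OIDC token for short-lived AWS credentials, with no stored AWS secret. The role ARN, token file, cluster, and service names below are hypothetical.

```
import boto3

# Hypothetical: the OIDC token issued to the pipeline run, written to a file.
with open("azure_oidc_token.jwt") as f:
    web_identity_token = f.read().strip()

# AssumeRoleWithWebIdentity is an unsigned call, so no AWS credentials are
# needed to make it; that resolves the chicken-and-egg concern in the question.
sts = boto3.client("sts", region_name="us-east-1")
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/AzureDeployRole",  # hypothetical
    RoleSessionName="ado-deploy",
    WebIdentityToken=web_identity_token,
)
creds = resp["Credentials"]

ecs = boto3.client(
    "ecs",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# Roll the service onto the image tag the pipeline just pushed.
ecs.update_service(cluster="my-cluster", service="my-service", forceNewDeployment=True)
```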
Can someone please share an example implementation of setting up CloudTrail to audit a microservice architecture in AWS?
0 answers · 0 votes · 15 views
asked 5 days ago
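A minimal sketch of what an example implementation might look like, assuming an existing S3 bucket whose policy already grants CloudTrail write access; the trail and bucket names are hypothetical. One multi-Region trail covers management events for every microservice in the account.

```
import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical names; the bucket policy must already let CloudTrail write to it.
cloudtrail.create_trail(
    Name="microservices-audit",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="microservices-audit")
```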
Hi All, we are using AWS ECS Fargate with an ALB and API Gateway to serve our API. Mostly it is healthy, but at times it returns status code 0 or 503; the error messages that accompany these statuses are below. We have 1 task always active and trigger another one at 80% CPU load, yet we always see 2 tasks active even though the service barely uses 0.25 vCPU and 512 MB of memory. We are not sure what the issue is or why we keep getting these errors, and whether it has anything to do with the size of the received payload. The timeout is set to 15 seconds at the API Gateway level. Not sure where we are going wrong; any help here is much appreciated.

Error status & message:

~~~
Status 0:
"responseBody":".execute-api.ap-south-1.amazonaws.com: Temporary failure in name resolution"

Status 503:
"responseBody":"<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
</body>
</html>
"
~~~
1 answer · 0 votes · 16 views
suchit · asked 5 days ago
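Not a diagnosis, but a quick check for the usual suspect behind an ALB 503 (no healthy targets registered at that moment). A hedged sketch that polls target health during an incident; the target group ARN is hypothetical.

```
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-south-1")

# Hypothetical ARN; list real ones with elbv2.describe_target_groups().
arn = ("arn:aws:elasticloadbalancing:ap-south-1:123456789012:"
       "targetgroup/api/0123456789abcdef")

# Each target prints a State ("healthy", "unhealthy", "draining", ...) and Reason.
resp = elbv2.describe_target_health(TargetGroupArn=arn)
for desc in resp["TargetHealthDescriptions"]:
    health = desc["TargetHealth"]
    print(desc["Target"]["Id"], health["State"], health.get("Reason", ""))
```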
Hi team, in my team we have our code and pipelines in AWS CodeCommit and CodePipeline. **Our AWS account doesn't allow creating IAM users or long-lived credentials, and outbound connections are blocked in our ASEA AWS account (no internet access).** We need to integrate with other teams using Azure DevOps (ADO). In this case, how can we allow deployments to AWS from ADO? Is there a specific AWS role that allows another cloud vendor to deploy to AWS (ADO --> AWS)? Thank you!!
1 answer · 0 votes · 26 views
Jess · asked 6 days ago
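There is no special cross-cloud role type; a sketch of the usual answer, with loudly hypothetical names: a plain IAM role whose trust policy federates the Azure DevOps OIDC issuer, so the pipeline assumes it with sts:AssumeRoleWithWebIdentity and never holds long-lived credentials. The provider ARN, issuer path, and audience below are assumptions to adapt.

```
import json

import boto3

iam = boto3.client("iam")

# Hypothetical OIDC provider ARN registered in IAM for the ADO organization's
# token issuer; the audience shown is a placeholder to match your federation setup.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": "arn:aws:iam::123456789012:oidc-provider/vstoken.dev.azure.com/my-org-id"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {"vstoken.dev.azure.com/my-org-id:aud": "api://AzureADTokenExchange"}
        },
    }],
}

iam.create_role(
    RoleName="AdoDeployRole",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Assumed by the ADO pipeline; no long-lived credentials involved",
)
```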
Hello all! I am investigating an issue with recent API Gateway deployments that produce warnings in the Jenkins console output resembling the following:

```
"warnings": [
    "More than one server provided. Ignoring all but the first for defining endpoint configuration",
    "More than one server provided. Ignoring all but the first for defining endpoint configuration",
    "Ignoring response model for 200 response on method 'GET /providers/{id}/identity/children' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring request model for 'PUT /providers/{id}/admin_settings' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring response model for 200 response on method 'GET /providers/{id}/profile/addresses/{address_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring response model for 200 response on method 'GET /providers/{id}/profile/anecdotes/{anecdote_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring request model for 'POST /providers/{id}/routes' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring response model for 200 response on method 'GET /providers/{id}/routes/{route_id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring response model for 200 response on method 'GET /service_type_groups/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method.",
    "Ignoring response model for 200 response on method 'GET /service_types/{id}' because a model with the same name already exists. Please reference the defined model/schema, rather than redefine it on this method."
]
```

Here is an example of the 200 response for an affected method in the OAS doc:

```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          description: Array of children provider identities
          type: array
          items:
            $ref: '#/components/schemas/providerIdentityExpansion'
  '404':
    $ref: '#/components/responses/not_found'
  '500':
    $ref: '#/components/responses/server_error'
```

Based on the language in the warnings, my understanding is that some kind of default request/200 response model is defined and is somehow being overwritten in the API methods themselves. But when comparing some other (seemingly) non-warning methods, they look identical in how they are implemented. I have tried a few potential fixes, removing and adding attributes, but none have worked so far. Would anyone be able to help me find what exactly is going wrong here in the OAS doc?
0 answers · 0 votes · 21 views
asked 6 days ago
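Not a confirmed root cause, but a pattern that matches the warning text: when API Gateway imports an inline schema it generates a model name from the schema, and two methods whose inline schemas resolve to the same generated name trip the "model with the same name already exists" warning. A hedged sketch of the usual workaround is to hoist the inline array schema into components/schemas (the name providerIdentityChildren is hypothetical):

```
components:
  schemas:
    providerIdentityChildren:
      description: Array of children provider identities
      type: array
      items:
        $ref: '#/components/schemas/providerIdentityExpansion'
```

and then reference it from each method instead of redefining it inline:

```
responses:
  '200':
    description: Array of Provider Identities that are children of this Provider
    content:
      'application/json':
        schema:
          $ref: '#/components/schemas/providerIdentityChildren'
```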
Hi Team, we currently have a customer running an application written in C, compiled as DLLs, and hosted on a Windows Server 2016 machine in their on-prem data center. Now we have to migrate the application to the AWS cloud. The customer prefers to deploy the application to a container solution without code changes. Is it possible to run the C DLLs on EKS with only small code changes? If not, what is the best possible approach we should offer the customer to deploy this application seamlessly to the AWS cloud? Thanks.
1 answer · 0 votes · 25 views
Elango · asked 9 days ago
On my Lightsail instance I have tried to use the bncert-tool to set up an SSL cert, but it fails on the final part, which is enabling auto-renewal. I got it working by manually renewing it following https://aws.amazon.com/premiumsupport/knowledge-center/lightsail-bitnami-renew-ssl-certificate/. (It kept renewing successfully but would not show on the website, except for the first time, and I have no idea why:

```
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Trying renewal with 2158 hours remaining
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Obtaining bundled SAN certificate
2023/03/16 22:59:39 [INFO] [MYDOMAIN] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: authorization already valid; skipping challenge
2023/03/16 22:59:39 [INFO] [MYDOMAIN] acme: Validations succeeded; requesting certificates
2023/03/16 22:59:40 [INFO] [MYDOMAIN] Server responded with a certificate.
```

And now I've reached the limit of 5 certs.) But then I tried to use bncert again, and now no method is working. Regardless, I would like to get the automatic method working if possible. The error is:

```
Domain MYDOMAIN did not pass HTTP challenge validation
```

https://docs.bitnami.com/google/how-to/understand-bncert/#certificates-not-renewed-automatically

This page lists a solution, but I still can't manage to get it working. I'm not sure if I have set the flags in the correct place:

```
RewriteCond %{REQUEST_URI} !^/\.well-known
```

```
ProxyPass /.well-known !
```

I placed them in my virtual host files.

myapp-https-vhost.conf:

```
<VirtualHost _default_:443>
  RewriteCond %{REQUEST_URI} !^/\.well-known
  ServerAlias *
  SSLEngine on
  SSLCertificateFile "/opt/bitnami/apache/conf/MYDOMAIN.crt"
  SSLCertificateKeyFile "/opt/bitnami/apache/conf/MYDOMAIN.key"
  DocumentRoot "/home/bitnami/htdocs/staging-api"
  <Directory "/home/bitnami/htdocs/staging-api">
    Require all granted
  </Directory>
  ProxyPass /.well-known !
  ProxyPass / http://localhost:3000/
  ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```

myapp-http-vhost.conf:

```
<VirtualHost _default_:80>
  RewriteCond %{REQUEST_URI} !^/\.well-known
  ServerAlias *
  DocumentRoot "/home/bitnami/htdocs/staging-api"
  <Directory "/home/bitnami/htdocs/staging-api">
    Require all granted
  </Directory>
  ProxyPass /.well-known !
  ProxyPass / http://localhost:3000/
  ProxyPassReverse / http://localhost:3000/
</VirtualHost>
```

I also placed it in the public/.htaccess file because someone suggested it should go there:

```
Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.html [QSA,L]
RewriteCond %{REQUEST_URI} !^/\.well-known
```

I'm not really sure where these flags are meant to go.
0 answers · 0 votes · 15 views
sb · asked 9 days ago
I'm looking to take the same metric graphs for CPU utilization, memory, disk, etc. for ECS and RDS from CloudWatch and display them in a similar format on an OpenSearch Dashboard for performance monitoring. How can I achieve this? I may also add in Container Insights. Thanks.
1 answer · 0 votes · 17 views
asked 9 days ago
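A hedged sketch of the pull-based option for the question above: a scheduled job queries CloudWatch and emits bulk-indexable documents for OpenSearch (CloudWatch Metric Streams would be the managed alternative). The cluster name, index name, and region are hypothetical.

```
import datetime
import json

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

# Hypothetical cluster name; swap Namespace/Dimensions to pull RDS metrics.
resp = cw.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

# Emit NDJSON pairs suitable for POSTing to the OpenSearch _bulk endpoint.
for dp in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
    print(json.dumps({"index": {"_index": "ecs-metrics"}}))
    print(json.dumps({
        "metric": "CPUUtilization",
        "cluster": "my-cluster",
        "timestamp": dp["Timestamp"].isoformat(),
        "average": dp["Average"],
    }))
```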
I am working on an Airbnb-like project that exposes public RESTful APIs. I want a solution to secure these public APIs with API Gateway and OAuth 2.0. Thanks.
1 answer · 1 vote · 31 views
zeeshan · asked 10 days ago
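One common pattern, sketched under assumptions: an OAuth 2.0 issuer such as a Cognito user pool signs the access tokens, and an HTTP API JWT authorizer validates them before any route is invoked. The API id, app client id, and issuer URL below are hypothetical placeholders.

```
import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

# Hypothetical identifiers; any OIDC-compliant OAuth 2.0 issuer works here.
apigw.create_authorizer(
    ApiId="a1b2c3d4e5",
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],
    Name="oauth2-jwt",
    JwtConfiguration={
        "Audience": ["my-app-client-id"],
        "Issuer": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",
    },
)
```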
My app's back end is on API Gateway and its front end is on an S3 bucket. That means they have different URLs, and the cookie ends up being SameSite=None. Because of that, the Safari browser doesn't store the login cookie I send from the back end, even with Secure set. My question is: is it possible to maintain this architecture and still manage to send a cookie that Safari will store? If not, what would the architecture need to look like to send first-party (SameSite=Lax or Strict) cookies? If you can point me in the right direction, I'd appreciate it.
0 answers · 0 votes · 4 views
Fred · asked 10 days ago
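A hedged sketch of the usual fix for the question above: serve the front end and the API under one registrable domain (for example, CloudFront distributions with custom domains) so the cookie becomes first-party; a Lambda proxy response can then set a SameSite=Lax cookie that Safari keeps. All domain names below are hypothetical.

```
def handler(event, context):
    # Works once the front end (e.g. app.example.com) and this API
    # (e.g. api.example.com) share the registrable domain example.com.
    return {
        "statusCode": 200,
        "headers": {
            "Set-Cookie": (
                "session=opaque-token; Domain=example.com; Path=/; "
                "HttpOnly; Secure; SameSite=Lax"
            ),
        },
        "body": '{"ok": true}',
    }
```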