Questions in Well-Architected Framework


Power Users can't invite external users?

In the WorkDocs documentation, in [this link](https://docs.aws.amazon.com/workdocs/latest/adminguide/manage-sites.html#ext-invite-settings), the section on "Security - external invitations" claims that Power Users can be set up to invite external users. However, that setting doesn't exist in the administration panel.

Our company has one administrator for WorkDocs, but could potentially have a few hundred power users. Those power users will have control over their allocated 1 TB of space (and be on their own site), and they need to be able to invite external users to view a folder. Each power user might have a hundred or so external users who need to view folders in their space.

What won't work at all is those power users having to contact the admin to send a link to every single external user who needs to view their folders, because that could potentially be 20,000+ external invitations piled onto the one admin. It also won't work to make each of those power users an admin, because they could inadvertently create and/or invite paid users, and the cost to our company would skyrocket unnecessarily.

Bottom line: we need power users to be able to invite external users and ONLY external users--they should have ZERO ability to create or invite paid users. Those external users need to be able to view the contents of folders that the power user sets up. Can this be done?

Thank you, -Brent
0 answers · 0 votes · 18 views · asked 12 days ago

[Urgent Action Required] - Upgrade your RDS for PostgreSQL minor versions

This announcement is for customers running one or more Amazon RDS DB instances with a version of PostgreSQL that has been deprecated by Amazon RDS and requires attention. The RDS PostgreSQL minor versions listed in the table below are supported; any DB instances running earlier versions will be automatically upgraded to the version marked as "preferred" by RDS, no earlier than July 15, 2022 starting 12 AM PDT:

| Major Versions Supported | Minor Versions Supported |
| --- | --- |
| 14 | 14.1 and later |
| 13 | 13.3 and later |
| 12 | 12.7 and later |
| 11 | 11.12 and later |
| 10 | 10.17 and later |
| 9 | none |

Amazon RDS supports DB instances running the PostgreSQL minor versions listed above. Minor versions not included above do not meet our high quality, performance, and security bar. In the PostgreSQL versioning policy [1], the PostgreSQL community recommends that you always run the latest available minor release for whatever major version is in use. Additionally, we recommend that you monitor the PostgreSQL security page for documented vulnerabilities [2].

If you have automatic minor version upgrade enabled as part of your configuration settings, you will be upgraded automatically. Alternatively, you can take action yourself by performing the upgrade earlier. You can initiate an upgrade by going to the Modify DB Instance page in the AWS Management Console and changing the database version setting to a newer minor/major version of PostgreSQL, or you can use the AWS CLI to perform the upgrade. To learn more about upgrading PostgreSQL minor versions in RDS, review the 'Upgrading Database Versions' page [3].

The upgrade process will shut down the database instance, perform the upgrade, and restart the database instance. The DB instance may restart multiple times during the process. If you choose the "Apply Immediately" option, the upgrade will be initiated immediately after clicking the "Modify DB Instance" button. If you choose not to apply the change immediately, the upgrade will be performed during your next maintenance window.

Starting no earlier than July 15, 2022 12 AM PDT, we will automatically upgrade DB instances running a deprecated minor version to the preferred minor version of the specific major version of your RDS PostgreSQL database. (For example, instances running RDS PostgreSQL 10.1 will be automatically upgraded to 10.17 starting no earlier than July 15, 2022 12 AM PDT.)

Should you need to create new instances using the deprecated version(s) of the database, we recommend that you restore from a recent DB snapshot [4]. You can continue to run and modify existing instances/clusters using these versions until July 14, 2022 11:59 PM PDT, after which your DB instance will automatically be upgraded to the preferred minor version of the specific major version of your RDS PostgreSQL database. Starting no earlier than July 15, 2022 12 AM PDT, restoring the snapshot of a deprecated RDS PostgreSQL database instance will result in an automatic version upgrade of the restored database instance, using the same upgrade process as described above.

Should you have any questions or concerns, please see the RDS FAQs [5] or contact the AWS Support Team on the community forums and via AWS Support [6].

Sincerely, Amazon RDS

[1] https://www.postgresql.org/support/versioning/
[2] https://www.postgresql.org/support/security/
[3] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html
[4] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
[5] https://aws.amazon.com/rds/faqs/ [search for "guidelines for deprecating database engine versions"]
[6] https://aws.amazon.com/support
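For reference, the AWS CLI path mentioned in the announcement looks roughly like the following. This is a minimal sketch, assuming a hypothetical instance identifier `my-postgres-db` and target version `10.17`:

```bash
# Upgrade an RDS for PostgreSQL instance to a newer minor version.
# --apply-immediately starts the upgrade right away instead of waiting
# for the next maintenance window (expect downtime while it runs).
aws rds modify-db-instance \
    --db-instance-identifier my-postgres-db \
    --engine-version 10.17 \
    --apply-immediately
```

You can confirm the running version afterwards with `aws rds describe-db-instances --db-instance-identifier my-postgres-db --query 'DBInstances[0].EngineVersion'`.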
0 answers · 1 vote · 15 views · asked 23 days ago

Adding MFA to WorkSpaces: "failed" problem

I have been attempting to add Multi-Factor Authentication to my WorkSpaces account for my user base. I configured the RADIUS server using FreeRADIUS from this post: https://aws.amazon.com/blogs/desktop-and-application-streaming/integrating-freeradius-mfa-with-amazon-workspaces/ and all went according to plan. I have the FreeRADIUS server running with LinOTP. The problem is in the very last step: when I go to enable MFA in WorkSpaces, I put in the information and it just says "failed".

Specifically, Step 6: Enable MFA on your AWS Directory. Communication between the AWS Managed Microsoft AD RADIUS client and your RADIUS server requires you to configure AWS security groups that enable communication over port 1812. Edit your Virtual Private Cloud (VPC) security groups to enable communications over port 1812 between your AWS Directory Service IP endpoints and your RADIUS MFA server.

- Navigate to your Directory Service console.
- Click the directory you want to enable MFA on.
- Select the Network & Security tab, scroll down to Multi-factor authentication, click Actions, then Enable.
- In "Enable multi-factor authentication (MFA)", configure the MFA settings:
  - Display label: Example
  - RADIUS server IP address(es): private IP of the Amazon Linux 2 instance
  - Port: 1812
  - Shared secret code: the one set in /etc/raddb/clients.conf
  - Confirm shared secret code: as preceding
  - Protocol: PAP
  - Server timeout (in seconds): 30
  - Max retries: 3

This operation can take 5-10 minutes to complete. Once the RADIUS status is "completed", you can test MFA authentication from the WorkSpaces client.

I really have two questions:

1. How do I do this part: "Edit your Virtual Private Cloud (VPC) security groups to enable communications over port 1812 between your AWS Directory Service IP endpoints and your RADIUS MFA server"? Maybe I'm not setting up the endpoints correctly? Do I go to the VPC and add endpoints there? Can you please be specific (see the sketch below for what such a rule might look like).
2. How do I get more information than just the "failed" in red? How do I access the creation logs?

Thanks in advance, Jon
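The quoted step in question 1 amounts to adding an inbound rule on the RADIUS server's security group that permits RADIUS traffic from the directory's network interfaces. A minimal sketch with the AWS CLI; the security group IDs `sg-0radius0example` (attached to the FreeRADIUS instance) and `sg-0directory0example` (attached to the AWS Managed Microsoft AD ENIs) are hypothetical placeholders:

```bash
# Allow RADIUS authentication traffic (UDP 1812) from the directory's
# security group into the security group of the FreeRADIUS instance.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0radius0example \
    --protocol udp \
    --port 1812 \
    --source-group sg-0directory0example
```

Note that RADIUS authentication uses UDP on port 1812, so a rule that only opens TCP 1812 will not let the directory reach the server.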
2 answers · 0 votes · 7 views · asked a month ago

Athena returns "FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. null"

Following the Well-Architected Labs "200: Cost and usage analysis", I get the following error when adding partitions in the Athena Query Editor:

```
MSCK REPAIR TABLE `cost_optimization_10XXXXXXXX321`;
```

returned the following error:

> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. null
> This query ran against the "costfubar" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 856e146a-8b13-4175-8cd8-692eef6d3fa5

The table was created correctly in Glue with:

```
Name                     cost_optimization_10XXXXXXXXX21
Description
Database                 costfubar
Classification           parquet
Location                 s3://cost-optimization-10XXXXXXX321//
Connection
Deprecated               No
Last updated             Wed Apr 20 16:46:28 GMT-500 2022
Input format             org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
Output format            org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
Serde serialization lib  org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
Serde parameters         serialization.format 1
Table properties:
  sizeKey                           4223322
  objectCount                       4
  UPDATED_BY_CRAWLER                costfubar
  CrawlerSchemaSerializerVersion    1.0
  recordCount                       335239
  averageRecordSize                 27
  exclusions                        ["s3://cost-optimization-107457606321/**.json","s3://cost-optimization-1XXXXXXXX21/**.csv","s3://cost-optimization-107457606321/**.sql","s3://cost-optimization-1XXXXXXXX321/**.gz","s3://cost-optimization-107457606321/**.zip","s3://cost-optimization-107457606321/**/cost_and_usage_data_status/*","s3://cost-optimization-107457606321/**.yml"]
  CrawlerSchemaDeserializerVersion  1.0
  compressionType                   none
  typeOfData                        file
```

and it has the following partitions shown in Glue:

```
partition_0                partition_1                year  month
detailed-cur-1XXXXXXXX57   detailed-cur-1XXXXXXXX57   2018  12
detailed-cur-1XXXXXXXXX57  detailed-cur-1XXXXXXXXX57  2022  4
detailed-cur-1XXXXXXXXX57  detailed-cur-1XXXXXXXXX57  2018  11
detailed-cur-1XXXXXXXX57   detailed-cur-1XXXXXXXX57   2018  10
```
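When the console surfaces only the generic `DDLTask. null` message, a fuller failure reason is sometimes available from the query-execution record. A minimal sketch with the AWS CLI; the query execution ID below is a hypothetical placeholder (the real one appears in the error above):

```bash
# Retrieve the detailed failure reason for an Athena query execution.
aws athena get-query-execution \
    --query-execution-id 11111111-2222-3333-4444-555555555555 \
    --query 'QueryExecution.Status.StateChangeReason' \
    --output text
```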
2 answers · 0 votes · 71 views · asked a month ago

App Runner actions work very slowly (2-10 minutes) and the deployer provides an incorrect error message

App Runner actions work very slowly for me. create/pause/resume may take 2-5 minutes for the simple demo image (`public.ecr.aws/aws-containers/hello-app-runner:latest`), and create-service when the image is not found takes ~10 minutes.

Example #1 - 5 min to deploy the hello-app image:

```
04-17-2022 05:59:55 PM [AppRunner] Service status is set to RUNNING.
04-17-2022 05:59:55 PM [AppRunner] Deployment completed successfully.
04-17-2022 05:59:44 PM [AppRunner] Successfully routed incoming traffic to application.
04-17-2022 05:58:33 PM [AppRunner] Health check is successful. Routing traffic to application.
04-17-2022 05:57:01 PM [AppRunner] Performing health check on port '8000'.
04-17-2022 05:56:51 PM [AppRunner] Provisioning instances and deploying image.
04-17-2022 05:56:42 PM [AppRunner] Successfully pulled image from ECR.
04-17-2022 05:54:56 PM [AppRunner] Service status is set to OPERATION_IN_PROGRESS.
04-17-2022 05:54:55 PM [AppRunner] Deployment started.
```

Example #2 - 10 min when the image is not found:

```
04-17-2022 05:35:41 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 05:25:47 PM [AppRunner] Starting to pull your application image.
```

Example #3 - 10 min when the image is not found:

```
04-17-2022 06:46:24 PM [AppRunner] Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository.
04-17-2022 06:36:31 PM [AppRunner] Starting to pull your application image.
```

A 404 should be detected immediately and fail much faster; there is no need to retry a 404 for 10 minutes, right? Additionally, the error message `Failed to pull your application image. Be sure you configure your service with a valid access role to your ECR repository` is very confusing: it doesn't show the image name and doesn't provide the actual cause. A 404 is not related to access errors like 401 or 403, correct? Can App Runner action performance and error messages be improved?
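For timing questions like these, the per-operation history (with start and end timestamps) can be pulled from the CLI, which makes it easier to show support exactly where the minutes go. A minimal sketch; the service ARN is a hypothetical placeholder:

```bash
# List recent App Runner operations for a service, including their
# type, status, and started/ended timestamps.
aws apprunner list-operations \
    --service-arn arn:aws:apprunner:us-east-1:111122223333:service/my-service/0123456789abcdef
```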
0 answers · 0 votes · 5 views · asked a month ago

Security group appears to block certain ports after google-authenticator mis-entries

I run a small server providing web and mail services with a public address. I was planning on upgrading from a t2.small to a t3.small instance, so I began testing the new environment using Ubuntu 20.04. The new instance is running nginx, postfix, and dovecot, and has ports 22, 25, 80, 443, 587, and 993 open through two assigned security groups.

I wanted to test a user which used only google-authenticator with pam/sshd to log in (no pubkey, no password). What I discovered was that after two sets of intentionally failed login attempts, my connection to the server would be blocked and I would receive a timed-out message. Checking the port status with nmap shows that ports 22, 80, and 443 were closed and the remaining still open. I can still reach all the ports normally from within my VPC, but from outside, the ports are blocked. Restarting the instance or reassigning the security groups will fix the problem. Also, after about 5 minutes, the problem resolves itself.

It appears that the AWS security group is the source of the block, but I can find no discussion of this type of occurrence. This isn't critical, but it is a bit troubling, because it opens a route for malicious actions that could block access to my instance. I have never experienced anything like this in about 7 years of running a similar server, though I never used google-authenticator with pam/sshd before. Do you have any ideas? I'd be happy to provide the instance ID and security groups if needed.
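One way to narrow this down is to check, from inside the VPC while the block is active, whether anything on the host itself is rejecting traffic. A minimal diagnostic sketch; fail2ban is only an example of the kind of log-driven banning service to look for, not something the post confirms is installed:

```bash
# Run these on the instance from a host inside the VPC (where
# connectivity still works) while the outside block is in effect.
sudo iptables -L -n --line-numbers          # any dynamically added REJECT/DROP rules?
sudo systemctl status fail2ban 2>/dev/null  # log-driven ban service, if installed
sudo journalctl -u ssh --since "15 min ago" # recent sshd/PAM failures (unit may be "sshd" on non-Ubuntu)
```

Security groups are stateful allow lists with no rate-limiting or banning behavior of their own, so a temporary block that clears after about 5 minutes usually points to something on the host or elsewhere in the path.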
1 answer · 0 votes · 5 views · asked a month ago

Unauthorized AWS account racked up charges on stolen credit card.

My mother was automatically signed up for an AWS account, or someone used her credentials to sign up. She did not know that she had been signed up, and it sat unused for 3 years. Last month, she got an email from AWS about "unusual activity" and asked me to help her look into it. Someone racked up $800+ in charges in 10 days for AWS services she has never heard of, let alone used (SageMaker and Lightsail were among them). The card on the AWS account is a credit card that was stolen years ago and has since been cancelled, so when AWS tried to charge the card, it didn't go through.

My experience with AWS customer service has been unhelpful so far. Mom changed her AWS password in time, so we could get into the account and contact support. I deleted the instances so that the services incurring charges are now stopped. But now AWS is telling me to put in a "valid payment method" or else they will not review the fraudulent bill. They also said that I have to set up additional AWS services (Cost Management, Amazon CloudWatch, CloudTrail, WAF, security services) before they'll review the bill. I have clearly explained to them that this entire account is unauthorized and we want to close it ASAP, so adding further services and a payment method doesn't make sense. Why am I being told to use more AWS services when my goal is to use zero? Why do I have to set up "preventative services" when the issue I'm trying to resolve is a PAST issue of fraud? They also asked me to write back and confirm that we have "read and understood the AWS Customer Agreement and shared responsibility model." Of course we haven't, because we didn't even know the account existed!

Any advice or input on this situation? It's extremely frustrating to be told that AWS won't even look into the issue unless I set up these additional AWS services and give them a payment method. This is a clear case of identity fraud. We want this account shut down.

Support Case # is xxxxxxxxxx. Edit - removed case ID.

-Ann D
1 answer · 0 votes · 16 views · asked 2 months ago

Unable to purchase prepaid HITs

Hi, I am new to MTurk and very confused about the process for purchasing prepaid HITs. I was following the process described in the FAQ on the Amazon MTurk page (https://www.mturk.com/help#enable_aws_billing):

=========================================
How do I purchase prepaid HITs on Amazon Mechanical Turk?
Follow these steps to purchase prepaid HITs:
1. From your Amazon Mechanical Turk account, go to My Account -> Purchase Prepaid HITs.
2. Enter the amount you would like to purchase.
3. Select the credit or debit card on file or enter new credit or debit card information.
4. Confirm your purchase.
Note: As a US Requester, you may be prompted to establish a verified Amazon Payments account if you plan to make a purchase above certain amounts. You can create a verified Amazon Payments account at any time here.
=========================================

First of all, I am NOT ABLE TO find "Purchase Prepaid HITs" on the "My Account" page. So I tried to establish "a verified Amazon Payments account" as it directs, and I am now at the stage where I see: "We're verifying your identity now, and we'll send you an email when the verification is complete. This can take up to 24 hours. You can't use your account until we've verified your identity." But it has been more than two weeks since I saw that message. What is wrong with my whole process? I really do want to purchase prepaid HITs, but I am not able to...
0 answers · 0 votes · 6 views · asked 2 months ago

Redshift Clear Text Passwords and Secret keys exposed?

Hi there, I received the following email about my Redshift cluster:

> We are reaching out to inform you your Amazon Redshift cluster(s) may have been affected by an issue caused by a change introduced on October 13, 2021, where your password and/or your Secret_Access_Key may have been inadvertently written in plain text to your cluster's audit logs (stl_user_activity_log). We do not have any indication that these credentials have been accessed. We applied a patch on January 19, 2022, to fix the issue for all clusters in all AWS regions.

> As a cautionary measure, we recommend that you: (1) Review any access to your cluster(s) in your audit log files from October 13, 2021 through January 19, 2022, such as those by authorized applications, to ensure your access credentials and passwords were not accessed; (2) Immediately change your cluster's password and/or generate a new Secret_Access_Key for use with COPY and UNLOAD commands for moving files between Amazon S3 and Amazon Redshift; and (3) Scan and sanitize your audit log files, that were created between October 13, 2021 through January 19, 2022, both dates inclusive, to remove any occurrences of clear text passwords and security keys in them.

However, looking at my cluster, I can't see an stl_user_activity_log:

> Select * from stl_user_activity_log;
> SQL Error [42P01]: ERROR: relation "pg_catalog.stl_user_activity_log" does not exist

Was this email pointing at the wrong audit logs, or should I not be looking for these audit logs in a table? We have S3 audit logging enabled, but browsing through those logs I don't see anything either.
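One sanity check here is to confirm exactly which audit logs the cluster is producing, since the user activity log is delivered to S3 alongside the connection and user logs rather than being queryable as a system table. A minimal sketch with the AWS CLI; the cluster identifier is a hypothetical placeholder:

```bash
# Show whether audit logging is enabled for the cluster and which
# S3 bucket and prefix receive the log files.
aws redshift describe-logging-status \
    --cluster-identifier my-redshift-cluster
```

Also worth noting: the user activity log in particular is only written when the `enable_user_activity_logging` parameter is true in the cluster's parameter group, so its absence from the S3 logs may simply mean that parameter is off.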
1 answer · 0 votes · 10 views · asked 2 months ago

EC2 instance can’t access the internet

Apparently, my EC2 instance can't access the internet properly. Here is what happens when I try to install a Python module:

```
[ec2-user@ip-172-31-90-31 ~]$ pip3 install flask
Defaulting to user installation because normal site-packages is not writeable
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fab198cbe10>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /simple/flask/
```

etc. Besides, inbound ping requests to the instance's Elastic IP fail (Request Timed Out). However, the website that is hosted on the same EC2 instance can be accessed using both HTTP and HTTPS.

The security group is configured as follows. The inbound rules are:

| Port range | Protocol | Source |
| --- | --- | --- |
| 80 | TCP | 0.0.0.0/0 |
| 22 | TCP | 0.0.0.0/0 |
| 80 | TCP | ::/0 |
| 22 | TCP | ::/0 |
| 443 | TCP | 0.0.0.0/0 |
| 443 | TCP | ::/0 |

The outbound rules are:

| IP Version | Type | Protocol | Port range | Destination |
| --- | --- | --- | --- | --- |
| IPv4 | All traffic | All | All | 0.0.0.0/0 |

The ACL inbound rules are:

| Type | Protocol | Port range | Source | Allow/Deny |
| --- | --- | --- | --- | --- |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

and the ACL outbound rules are:

| Type | Protocol | Port range | Destination | Allow/Deny |
| --- | --- | --- | --- | --- |
| Custom TCP | TCP (6) | 1024 - 65535 | 0.0.0.0/0 | Allow |
| HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 | Allow |
| SSH (22) | TCP (6) | 22 | 0.0.0.0/0 | Allow |
| HTTPS (443) | TCP (6) | 443 | 0.0.0.0/0 | Allow |
| All ICMP - IPv4 | ICMP (1) | All | 0.0.0.0/0 | Allow |
| All traffic | All | All | 0.0.0.0/0 | Deny |

This is what the route table associated with the subnet looks like (no explicit or edge associations):

| Destination | Target | Status | Propagated |
| --- | --- | --- | --- |
| 172.31.0.0/16 | local | Active | No |
| 0.0.0.0/0 | igw-09b554e4da387238c | Active | No |

As for the firewall, executing `sudo iptables -L` results in:

```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```

and `sudo iptables -L -t nat` gives:

```
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
```

What am I missing here? Any suggestions or ideas on this would be greatly appreciated. Thanks
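Two observations that may help localize this. First, network ACLs are stateless, so an outbound connection (such as pip talking to PyPI over 443) also needs an inbound ACL rule allowing the return traffic to ephemeral ports (1024-65535); the inbound ACL above only allows 22, 80, 443, and ICMP. Second, a quick on-instance test separates DNS failures from blocked TCP connections. A minimal sketch (pypi.org is used here just as a reachable test host):

```bash
# DNS resolution check: should print one or more IP addresses.
dig +short pypi.org

# Outbound HTTPS check: verbose connection attempt, response body discarded.
curl -sv -o /dev/null --max-time 10 https://pypi.org/simple/
```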
2 answers · 0 votes · 31 views · asked 2 months ago

What is Best Practice configuration for a SECURE single user WorkSpaces VPC?

I am a one-person business looking to set up a simple VPC for access to a virtual Windows desktop when I travel from the US to Europe. My trips are 1-3 months in duration, and I'd like to carry just my iPad or a Chromebook rather than a full laptop. This is easier and more secure if my desktop is in the AWS cloud. I am a bit of a network novice, and my prior experience with AWS has been only with S3 buckets. From reading the AWS docs, I have learned how to create a VPC with subnets and a Simple AD, and I can spin up a WorkSpace and access it. However, I am unsure about what additional steps, if any, I should take to *secure* my WorkSpaces environment.

I am using public subnets without a NAT Gateway, because I only need one WorkSpace image and would like to avoid paying $35+ per month for the NAT just to serve one image. I know that one of the side benefits of a NAT Gateway is a degree of isolation from the Internet, because any images behind it are not directly reachable from the Internet. However, in my case, my WorkSpace image has an assigned IP and is *not* behind a NAT Gateway.

My questions are:

1. Am I taking unreasonable risks by placing my WorkSpaces in a public subnet, i.e., by not using a NAT Gateway?
2. Should I restrict access using Security Group rules, and if so, how? (One option is sketched below.)
3. Are there other steps I should take to improve the security of my VPC?

I want to access my WorkSpace using an iPad, so I can't use certificate-based authentication. I don't know if I could easily use IP restriction, because I don't know in advance the IP range I would be in when I travel. PLUS, as you can probably tell, I'm confused about what I need to secure: the WorkSpace image, my Simple AD instance, or both? I'm having a hard time finding guidance in the AWS documentation, because much of it is oriented toward corporate use cases, which is understandable. The "getting started" documentation is excellent but doesn't seem to touch on my questions. Thanks in advance for any answers or documentation sources you provide!
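On question 2, besides security group rules, WorkSpaces has its own client-side restriction mechanism: IP access control groups, which limit the public IPs that WorkSpaces clients can connect from. A minimal sketch with the AWS CLI; the CIDR, directory ID, and group ID are hypothetical placeholders:

```bash
# Create an IP access control group allowing one client CIDR range.
aws workspaces create-ip-group \
    --group-name travel-allowlist \
    --group-desc "client IPs allowed to reach my WorkSpace" \
    --user-rules "ipRule=203.0.113.0/24,ruleDesc=current-ISP-range"

# Attach the group to the WorkSpaces directory (use the ID returned above).
aws workspaces associate-ip-groups \
    --directory-id d-0123456789 \
    --group-ids wsipg-0example1
```

As the post notes, this only works if you can predict (or periodically update) the ranges you will connect from while traveling.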
3 answers · 0 votes · 11 views · asked 3 months ago

Setting MKL_NUM_THREADS to be more than 16 for m5 instances

Hey, I have a 32-core EC2 Linux m5 instance. My Python is installed via Anaconda, and I notice that my numpy cannot use more than 16 cores. It looks like my numpy uses libmkl_rt.so:

```
In [2]: np.show_config()
blas_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
blas_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
lapack_mkl_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
lapack_opt_info:
    libraries = ['mkl_rt', 'pthread']
    library_dirs = ['/home/ec2-user/anaconda3/lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['/home/ec2-user/anaconda3/include']
```

When I set MKL_NUM_THREADS to 16 or below, it works:

```
(base) ec2-user@ip-172-31-18-3:~$ export MKL_NUM_THREADS=12 && python -c "import ctypes; mkl_rt = ctypes.CDLL('libmkl_rt.so'); print (mkl_rt.mkl_get_max_threads())"
12
```

When I try to set it to 24, it stops at 16:

```
(base) ec2-user@ip-172-31-18-3:~$ export MKL_NUM_THREADS=24 && python -c "import ctypes; mkl_rt = ctypes.CDLL('libmkl_rt.so'); print (mkl_rt.mkl_get_max_threads())"
16
```

But I do have 32 cores:

```
In [2]: os.cpu_count()
Out[2]: 32
```

Are there any other settings I need to check? Thanks, Bill
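One possibility worth checking (this is an assumption, not something the post confirms): a 32-vCPU m5 instance has 16 physical cores with two hyperthreads each, and MKL by default sizes its thread pool to the physical-core count. Disabling MKL's dynamic thread adjustment lets an explicit request above that count take effect:

```bash
# MKL_DYNAMIC=FALSE tells MKL to honor MKL_NUM_THREADS exactly rather
# than capping at the physical-core count (16 on a 32-vCPU m5).
export MKL_DYNAMIC=FALSE
export MKL_NUM_THREADS=24
python -c "import ctypes; mkl_rt = ctypes.CDLL('libmkl_rt.so'); print(mkl_rt.mkl_get_max_threads())"
```

Whether exceeding the physical-core count actually speeds anything up is workload-dependent; BLAS-heavy code often gains nothing from hyperthreads.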
3 answers · 0 votes · 8 views · asked 3 months ago