Top contributors

| Rank | Name | Total points |
| --- | --- | --- |
| 1 |  | 1,691 |
| 2 |  | 1,579 |
| 3 |  | 1,327 |
| 4 |  | 1,224 |
| 5 |  | 1,170 |

Recent questions

  • Below is a sample JavaScript SDK v3 Athena query that uses a prepared statement with parameters passed to the query:

```
const {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryExecutionCommand,
  GetQueryResultsCommand,
  QueryExecutionState,
} = require("@aws-sdk/client-athena");
const { setTimeout } = require("timers/promises");

const REGION = 'us-east-1';
const athenaClient = new AthenaClient({ region: REGION });
module.exports = { athenaClient };
```

```
const tableName = 'employees';
const sqlString =
  "SELECT firstname, lastname, state FROM " + tableName +
  " WHERE zipcode = ? AND companyname = ?";

const queryExecutionInput = {
  QueryString: sqlString,
  QueryExecutionContext: {
    Database: 'sample-employee',
    Catalog: 'awscatalogname'
  },
  ResultConfiguration: {
    OutputLocation: 's3://athena-query-bucket'
  },
  WorkGroup: 'primary',
  ExecutionParameters: ["12345", "Test 1"]
};

const queryExecutionId = await athenaClient.send(new StartQueryExecutionCommand(queryExecutionInput));
const command = new GetQueryExecutionCommand(queryExecutionId);
const response = await athenaClient.send(command);
const state = response.QueryExecution?.Status?.State;

if (state === QueryExecutionState.QUEUED || state === QueryExecutionState.RUNNING) {
  await setTimeout(this.config.pollInterval); // wait for pollInterval before calling again
  return this.waitForQueryExecution(queryExecutionId);
} else if (state === QueryExecutionState.SUCCEEDED) {
  const resultParams = {
    QueryExecutionId: response.QueryExecution.QueryExecutionId,
    MaxResults: this.config.maxResults
  };
  const getQueryResultsCommand = new GetQueryResultsCommand(resultParams);
  const resp = await athenaClient.send(getQueryResultsCommand);
  console.log("GetQueryResultsCommand : ", resp.ResultSet.ResultSetMetadata.ColumnInfo);
  console.log("GetQueryResultsCommand : ", resp.ResultSet.Rows);
} else if (state === QueryExecutionState.FAILED) {
  throw new Error(`Query failed: ${response.QueryExecution?.Status?.StateChangeReason}`);
} else if (state === QueryExecutionState.CANCELLED) {
  throw new Error("Query was cancelled");
}
```

This table has about 50 records that match this query. When the query is run, this is what is returned for all 50 records:

```
{
  "ResultSetMetadata": {
    "Rows": [
      {
        "Data": [
          { "VarCharValue": "firstname" },
          { "VarCharValue": "lastname" },
          { "VarCharValue": "state" }
        ]
      }
    ]
  }
}
```

Only the column names are listed, but no data from those columns. I see the exact same issue when I try it using the CLI as well:

```
aws athena start-query-execution --query-string "SELECT firstname, lastname, state FROM employees WHERE zipcode = CAST(? as varchar) AND companyname = CAST(? as varchar)" --query-execution-context "Database"="sample-employee" --result-configuration "OutputLocation"="s3://athena-query-bucket/" --execution-parameters "12345" "Test 1"
aws athena get-query-execution --query-execution-id "<query-execution-id>"
aws athena get-query-results --query-execution-id "<query-execution-id>"
```

FYI, ColumnInfo in the ResultSetMetadata object has been removed to keep the JSON simple:

```
{
  "ResultSetMetadata": {
    "Rows": [
      {
        "Data": [
          { "VarCharValue": "firstname" },
          { "VarCharValue": "lastname" },
          { "VarCharValue": "state" }
        ]
      }
    ]
  }
}
```

So I'm not exactly sure what I might be doing wrong. Any help/pointers on this would be great. We are currently running Athena engine version 2.
    0
    answers
    0
    votes
    5
    views
    asked 37 minutes ago
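For the Athena question above, a minimal polling-and-pagination sketch, assuming the same query, database, and output location from the question (the poll interval and page size are illustrative). One detail worth checking: for SELECT queries, GetQueryResults returns the column headers as the first row of the first page, so a very small MaxResults (for example 1) would return only that header row.

```
const { setTimeout } = require("timers/promises");
const {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryExecutionCommand,
  GetQueryResultsCommand,
} = require("@aws-sdk/client-athena");

const client = new AthenaClient({ region: "us-east-1" });

async function runQuery() {
  // Names below come from the question; adjust to your environment.
  const { QueryExecutionId } = await client.send(new StartQueryExecutionCommand({
    QueryString: "SELECT firstname, lastname, state FROM employees " +
                 "WHERE zipcode = ? AND companyname = ?",
    QueryExecutionContext: { Database: "sample-employee" },
    ResultConfiguration: { OutputLocation: "s3://athena-query-bucket" },
    WorkGroup: "primary",
    ExecutionParameters: ["12345", "Test 1"],
  }));

  // Poll until the query leaves QUEUED/RUNNING.
  for (;;) {
    const { QueryExecution } = await client.send(
      new GetQueryExecutionCommand({ QueryExecutionId }));
    const state = QueryExecution?.Status?.State;
    if (state === "SUCCEEDED") break;
    if (state === "FAILED" || state === "CANCELLED") {
      throw new Error(`Query ${state}: ${QueryExecution?.Status?.StateChangeReason}`);
    }
    await setTimeout(1000); // illustrative poll interval
  }

  // Page through all results; NextToken is set while more pages remain.
  const rows = [];
  let NextToken;
  do {
    const page = await client.send(new GetQueryResultsCommand({
      QueryExecutionId, NextToken, MaxResults: 100 }));
    rows.push(...page.ResultSet.Rows);
    NextToken = page.NextToken;
  } while (NextToken);

  return rows.slice(1); // the first row of the first page is the header
}
```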
  • I want to create an IAM role that has permission to UNLOAD only one schema in Redshift. Is this achievable?
    0
    answers
    0
    votes
    2
    views
    asked an hour ago
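One consideration for the Redshift UNLOAD question above: the IAM role attached to the cluster scopes the S3 destination that UNLOAD writes to, while which schemas a user can read (and therefore UNLOAD) is governed by grants inside Redshift. A hedged SQL sketch, with `reporting` and `unload_user` as hypothetical names:

```
-- Limit a database user to reading (and therefore unloading) one schema.
GRANT USAGE ON SCHEMA reporting TO unload_user;
GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO unload_user;
```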
  • Customer tried "Troubleshooting MFA" and entered their email address as the first step. When they click the link they receive, they get "Email is expired" (even before 15 minutes have passed).
    0
    answers
    0
    votes
    1
    views
    AWS
    asked 2 hours ago
  • Greetings, I have already managed to use boto3 with streaming channels. However, I really need to find an example of using boto3 to send MKV video to Kinesis signaling channels (WebRTC). Would anyone please help? Thanks.
    1
    answers
    0
    votes
    6
    views
    asked 3 hours ago
  • Some time ago, I was told that to use a Kinesis delivery stream to Redshift, you HAD to use a provisioned cluster, not serverless; something to do with Kinesis only being able to use public IP addresses on both sides, while Redshift Serverless was internal-only. Has this been fixed yet? I see I can now create a "Redshift endpoint" for Redshift Serverless, and I checked the "Enable public access" checkbox, but when I try to define it as the destination for a Kinesis delivery stream (in the GUI), my Redshift Serverless instance still doesn't show up as an option.
    0
    answers
    0
    votes
    5
    views
    asked 3 hours ago
  • I am trying to create a new connection for a new API destination for an EventBridge rule. The API destination is a service hosted in AWS. I am trying to set up OAuth client credentials, reusing credentials that are already in AWS Secrets Manager. I keep getting the following error: "Invalid target fields. Complete all required fields for the new connection correctly." I am not told which field is incorrect. Is there a way to know which connection field is wrong? And is reusing credentials from Secrets Manager possible?
    0
    answers
    0
    votes
    7
    views
    asked 3 hours ago
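For the EventBridge question above, a hedged CLI sketch of creating an OAuth client-credentials connection (the name, endpoint, and credential values are placeholders). As far as I know, a connection stores the client ID and secret you pass in, with EventBridge managing its own secret behind the scenes, rather than referencing an existing Secrets Manager secret directly:

```
aws events create-connection \
  --name my-api-connection \
  --authorization-type OAUTH_CLIENT_CREDENTIALS \
  --auth-parameters '{
    "OAuthParameters": {
      "ClientParameters": {
        "ClientID": "my-client-id",
        "ClientSecret": "my-client-secret"
      },
      "AuthorizationEndpoint": "https://auth.example.com/oauth2/token",
      "HttpMethod": "POST"
    }
  }'
```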
  • I have a t3.medium node type; how can I scale out nodes and shards?
    0
    answers
    0
    votes
    3
    views
    Kris
    asked 3 hours ago
  • Hi, I am following the tutorial here: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-s3.html but the deployment fails. When I check the status of the CodeDeploy agent service using `sudo service codedeploy-agent status`, I get this error:

```
<internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require': cannot load such file -- net/smtp (LoadError)
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from /opt/codedeploy-agent/vendor/gems/logging-1.8.2/lib/logging/appenders/email.rb:2:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from /opt/codedeploy-agent/vendor/gems/logging-1.8.2/lib/logging/appenders.rb:57:in `<module:Logging>'
	from /opt/codedeploy-agent/vendor/gems/logging-1.8.2/lib/logging/appenders.rb:2:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from /opt/codedeploy-agent/vendor/gems/logging-1.8.2/lib/logging.rb:537:in `<module:Logging>'
	from /opt/codedeploy-agent/vendor/gems/logging-1.8.2/lib/logging.rb:18:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `rescue in require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:152:in `require'
	from /opt/codedeploy-agent/vendor/gems/process_manager-0.0.13/lib/process_manager/log.rb:2:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:99:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:99:in `require'
	from /opt/codedeploy-agent/vendor/gems/process_manager-0.0.13/lib/process_manager.rb:9:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `rescue in require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:152:in `require'
	from /opt/codedeploy-agent/lib/instance_agent.rb:10:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from /opt/codedeploy-agent/bin/../lib/codedeploy-agent.rb:22:in `<main>'
<internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require': cannot load such file -- logging (LoadError)
Did you mean?  logger
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:151:in `require'
	from /opt/codedeploy-agent/vendor/gems/process_manager-0.0.13/lib/process_manager/log.rb:2:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:99:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:99:in `require'
	from /opt/codedeploy-agent/vendor/gems/process_manager-0.0.13/lib/process_manager.rb:9:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:162:in `rescue in require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:152:in `require'
	from /opt/codedeploy-agent/lib/instance_agent.rb:10:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from /opt/codedeploy-agent/bin/../lib/codedeploy-agent.rb:22:in `<main>'
<internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require': cannot load such file -- process_manager (LoadError)
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from /opt/codedeploy-agent/lib/instance_agent.rb:10:in `<top (required)>'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from <internal:/usr/share/ruby3.2-rubygems/rubygems/core_ext/kernel_require.rb>:88:in `require'
	from /opt/codedeploy-agent/bin/../lib/codedeploy-agent.rb:22:in `<main>'
```

Has anyone experienced this before?
    0
    answers
    0
    votes
    5
    views
    asked 4 hours ago
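The `cannot load such file -- net/smtp` error above is consistent with the agent running under Ruby 3.1+, where `net-smtp` is no longer shipped as a default gem (the agent's bundled `logging` gem still requires it). A hedged workaround sketch, assuming the agent runs under the system ruby3.2 shown in the trace; this is a commonly suggested workaround, not an official fix:

```
# Install the gem the bundled logging gem expects, then restart the agent.
sudo gem install net-smtp
sudo service codedeploy-agent restart
sudo service codedeploy-agent status
```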
  • Can we configure mTLS when using the [S3 REST API](https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html)? From looking at the documentation, I understand that the way to do this would be to put the call behind an API Gateway service and have it manage the [mTLS part](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html).
    0
    answers
    0
    votes
    4
    views
    asked 4 hours ago
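If the API Gateway route is taken, mTLS is enabled on a custom domain name rather than on the API itself. A hedged CLI sketch (the domain, certificate ARN, and truststore location are placeholders; the truststore is a PEM bundle of client CA certificates in S3):

```
aws apigateway create-domain-name \
  --domain-name api.example.com \
  --regional-certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/example \
  --endpoint-configuration types=REGIONAL \
  --security-policy TLS_1_2 \
  --mutual-tls-authentication truststoreUri=s3://example-truststore-bucket/truststore.pem
```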
  • We are, as of an hour ago, getting the following error in all of our environments (multiple AWS accounts). Everything was working fine 6 hours ago, and no changes to the configuration of CodeDeploy or Auto Scaling groups have been made today. "The deployment failed because a non-empty field was discovered on your Auto Scaling group that CodeDeploy does not currently support copying. Unsupported fields: [DescribeAutoScalingGroupsResponse.DescribeAutoScalingGroupsResult.AutoScalingGroups.member.TrafficSources.member.Type]" Are there any issues here that AWS is aware of? If not, how do I see what the value of that field is?
    0
    answers
    1
    votes
    8
    views
    asked 4 hours ago
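To inspect the field the error message names, a hedged CLI sketch (`my-asg` is a placeholder group name):

```
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[].TrafficSources'
```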
  • There is a SubscribeToShard SDK call. Is there an "unsubscribe from shard"? Thank you.
    0
    answers
    0
    votes
    2
    views
    asked 4 hours ago
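On the SubscribeToShard question above: there is no unsubscribe API; a subscription expires on its own after about five minutes, and closing the HTTP/2 event stream ends it sooner. A minimal SDK v3 sketch (the consumer ARN and shard ID are placeholders):

```
const { KinesisClient, SubscribeToShardCommand } = require("@aws-sdk/client-kinesis");

const client = new KinesisClient({ region: "us-east-1" });

async function consume() {
  const resp = await client.send(new SubscribeToShardCommand({
    ConsumerARN: "arn:aws:kinesis:us-east-1:111122223333:stream/example/consumer/example-consumer:1234567890",
    ShardId: "shardId-000000000000",
    StartingPosition: { Type: "LATEST" },
  }));

  for await (const event of resp.EventStream) {
    if (event.SubscribeToShardEvent) {
      console.log(event.SubscribeToShardEvent.Records);
    }
    break; // exiting the loop closes the event stream, ending the subscription
  }
}
```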
  • Hello, I am receiving a "no handler found for uri" error when attempting to replicate data from a MySQL RDS instance using DMS 3.4.7 to OpenSearch 2.5.

```
2023-03-29T18:35:07 [TARGET_LOAD ]E: Elasticsearch:FAILED SourceTable:accounts TargetIndex:accounts Operation:INSERT_ENTRY RecordPKKey:1010 RecordPKID:7A5DF5FFA0DEC2228D90B8D0A0F1B0767B748B0A41314C123075B8289E4E053FES HttpCode:400 ESErrorResponse:
{
  "error": "no handler found for uri [/accounts/doc/7A5DF5FFA0DEC2228D90B8D0A0F1B0767B748B0A41314C123075B8289E4E053F] and method [PUT]"
} [1026400] (elasticsearch_utils.c:657)
```

I ssh'd into the OpenSearch cluster and the index does exist, so it is creating the index, but no records are being written. What is strange to me is that, based on this error, DMS is attempting to write the record to `/accounts/doc/id`, when per the official OpenSearch documentation the operation should use `/accounts/_doc/id`, as noted here:

```
PUT sample-index/_doc/1
{
  "Description": "To be or not to be, that is the question."
}
```

https://opensearch.org/docs/2.5/api-reference/document-apis/index-document/

When I attempt to insert a record with the underscore (`PUT accounts/_doc/1`) it works. Am I missing something here? Here is my task config:

```
{
  "Logging": { "EnableLogging": true, "EnableLogContext": false, "LogComponents": [ { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TRANSFORMATION" }, { "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG", "Id": "SOURCE_UNLOAD" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "IO" }, { "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG", "Id": "TARGET_LOAD" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "PERFORMANCE" }, { "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG", "Id": "SOURCE_CAPTURE" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "SORTER" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "REST_SERVER" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "VALIDATOR_EXT" }, { "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG", "Id": "TARGET_APPLY" }, { "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG", "Id": "TASK_MANAGER" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "TABLES_MANAGER" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "METADATA_MANAGER" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "FILE_FACTORY" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "COMMON" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "ADDONS" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "DATA_STRUCTURE" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "COMMUNICATION" }, { "Severity": "LOGGER_SEVERITY_DEFAULT", "Id": "FILE_TRANSFER" } ], "CloudWatchLogGroup": "hidden", "CloudWatchLogStream": "hidden" },
  "StreamBufferSettings": { "StreamBufferCount": 3, "CtrlStreamBufferSizeInMB": 5, "StreamBufferSizeInMB": 8 },
  "ErrorBehavior": { "FailOnNoTablesCaptured": false, "ApplyErrorUpdatePolicy": "LOG_ERROR", "FailOnTransactionConsistencyBreached": false, "RecoverableErrorThrottlingMax": 1800, "DataErrorEscalationPolicy": "SUSPEND_TABLE", "ApplyErrorEscalationCount": 0, "RecoverableErrorStopRetryAfterThrottlingMax": false, "RecoverableErrorThrottling": true, "ApplyErrorFailOnTruncationDdl": false, "DataTruncationErrorPolicy": "LOG_ERROR", "ApplyErrorInsertPolicy": "LOG_ERROR", "EventErrorPolicy": "IGNORE", "ApplyErrorEscalationPolicy": "LOG_ERROR", "RecoverableErrorCount": -1, "DataErrorEscalationCount": 0, "TableErrorEscalationPolicy": "STOP_TASK", "RecoverableErrorInterval": 5, "ApplyErrorDeletePolicy": "IGNORE_RECORD", "TableErrorEscalationCount": 0, "FullLoadIgnoreConflicts": true, "DataErrorPolicy": "LOG_ERROR", "TableErrorPolicy": "SUSPEND_TABLE" },
  "TTSettings": { "TTS3Settings": null, "TTRecordSettings": null, "EnableTT": false },
  "FullLoadSettings": { "CommitRate": 10000, "StopTaskCachedChangesApplied": false, "StopTaskCachedChangesNotApplied": false, "MaxFullLoadSubTasks": 8, "TransactionConsistencyTimeout": 600, "CreatePkAfterFullLoad": false, "TargetTablePrepMode": "DROP_AND_CREATE" },
  "TargetMetadata": { "ParallelApplyBufferSize": 0, "ParallelApplyQueuesPerThread": 0, "ParallelApplyThreads": 0, "TargetSchema": "", "InlineLobMaxSize": 0, "ParallelLoadQueuesPerThread": 0, "SupportLobs": true, "LobChunkSize": 64, "TaskRecoveryTableEnabled": false, "ParallelLoadThreads": 0, "LobMaxSize": 0, "BatchApplyEnabled": false, "FullLobMode": true, "LimitedSizeLobMode": false, "LoadMaxFileSize": 0, "ParallelLoadBufferSize": 0 },
  "BeforeImageSettings": null,
  "ControlTablesSettings": { "historyTimeslotInMinutes": 5, "HistoryTimeslotInMinutes": 5, "StatusTableEnabled": false, "SuspendedTablesTableEnabled": false, "HistoryTableEnabled": false, "ControlSchema": "", "FullLoadExceptionTableEnabled": false },
  "LoopbackPreventionSettings": null,
  "CharacterSetSettings": null,
  "FailTaskWhenCleanTaskResourceFailed": false,
  "ChangeProcessingTuning": { "StatementCacheSize": 50, "CommitTimeout": 1, "BatchApplyPreserveTransaction": true, "BatchApplyTimeoutMin": 1, "BatchSplitSize": 0, "BatchApplyTimeoutMax": 30, "MinTransactionSize": 1000, "MemoryKeepTime": 60, "BatchApplyMemoryLimit": 500, "MemoryLimitTotal": 1024 },
  "ChangeProcessingDdlHandlingPolicy": { "HandleSourceTableDropped": true, "HandleSourceTableTruncated": true, "HandleSourceTableAltered": true },
  "PostProcessingRules": null
}
```
    0
    answers
    0
    votes
    2
    views
    b0t
    asked 4 hours ago
  • Hi: wondering if AWS technical support could look into this to determine why the request is coming back FORBIDDEN. Two request IDs below to compare.

**Request header (identical for both requests):**

```
OPTIONS https://api.flybreeze.com/production/nav/api/nsk/v1/token HTTP/1.1
Host: api.flybreeze.com
Connection: keep-alive
Accept: */*
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type
Origin: https://www.flybreeze.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36 Edg/111.0.1661.51
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-site
Sec-Fetch-Dest: empty
Referer: https://www.flybreeze.com/
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
```

**FORBIDDEN response header:**

```
HTTP/1.1 403 Forbidden
Content-Type: application/json
Content-Length: 23
Connection: keep-alive
Date: Thu, 30 Mar 2023 18:51:50 GMT
x-amzn-RequestId: 7bb21b87-6ecd-4dc1-8e07-bef8e7172d71
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,Platform
x-amzn-ErrorType: ForbiddenException
x-amz-apigw-id: Cm8LHG-koAMFlBA=
Access-Control-Allow-Methods: OPTIONS,POST
X-Cache: Error from cloudfront
Via: 1.1 9a63a58e298bfb2c58157beda1f6de12.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: DEN52-P1
X-Amz-Cf-Id: Wixm-reIOJukfeov0CcZmEfAy7e1ASejSVj6kmCbqe-BRZyqnUNoYQ==
```

Response message: `{"message":"Forbidden"}`

Below is a successful response header. The only difference is the ISP: the forbidden call was using fiber.net (host-145.arcadia-srv-216-83-134.fiber.net), while the successful call was made from the same web browser on the same machine, but tethered to a T-Mobile hotspot. **Why would AWS block one request but not the other based on the ISP?**

**SUCCESSFUL response header:**

```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
Date: Thu, 30 Mar 2023 16:54:08 GMT
x-amzn-RequestId: e1e7b624-dc5b-43d1-bfcd-434ee36bd580
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token
x-amz-apigw-id: Cmq7qH32IAMFodw=
Access-Control-Allow-Methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT
X-Cache: Miss from cloudfront
Via: 1.1 0c32860274691581031a51698ea82be8.cloudfront.net (CloudFront)
X-Amz-Cf-Pop: LAX53-P4
X-Amz-Cf-Id: UlBl6kMeG-q_hD9J_9u9tqeWJOywEwNrtYcPSuQSQKJs3RiuRXApPA==
```

Response message: {null}
    0
    answers
    0
    votes
    8
    views
    asked 5 hours ago
  • My domain is through Google Workspace and I am trying to activate Gmail. My name servers are with AWS, which is why I have to use the AWS console to propagate MX records for Gmail. My records are correct according to a Google Workspace technician, so something seems to be wrong on the AWS Route 53 side. Somebody please help me.
    0
    answers
    0
    votes
    8
    views
    AS
    asked 5 hours ago
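For the Route 53 question above, a hedged CLI sketch of upserting the current Google Workspace MX record (the hosted zone ID, domain, and TTL are placeholders; Google's current guidance is a single `smtp.google.com` record, while older setups use the five `aspmx.l.google.com`-style records):

```
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "MX",
        "TTL": 3600,
        "ResourceRecords": [{ "Value": "1 smtp.google.com." }]
      }
    }]
  }'
```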
  • Hello! I downloaded the AWS VPN Client for Mac and the installation failed (see the screenshot below). I did not change any default settings when going through the installation steps; I simply agreed to the terms and kept clicking the "next" button. I am running macOS Ventura 13.2.1. I also tried restarting my computer; same issue, the installation failed. Any advice would be much appreciated, thanks! ![AWS VPN Client installation failure](/media/postImages/original/IMEXq2LAKmSBOrxtvvaPnqvg)
    0
    answers
    0
    votes
    15
    views
    asked 5 hours ago
  • I'm new to AWS and am using S3 to host eLearning courses on the web. Within 24 hours of uploading my first content, I got an alert that I had reached my request limit for the Free Tier. I'm confused about what a request is and how I would have gotten so many so quickly without having shared links to the content with anyone yet. Any info would be helpful, thank you! The message I got: "Your AWS account xxxxxxxxxxxx has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of March." I also logged in to check, and it says that I have exceeded 2,000 requests.
    3
    answers
    0
    votes
    15
    views
    asked 5 hours ago
  • Running `` GRANT FILE ON `%`.* TO user@`%` WITH GRANT OPTION `` fails with: "Error Code: 1045. Access denied for user 'user'@'%' (using password: YES)".
    0
    answers
    0
    votes
    5
    views
    asked 5 hours ago
  • Hello. Required: enable S3 bucket access for a specific permission set.
    1. I have an SSO role in IAM for Billing. This is an AWS-managed SSO role and gives access to Billing actions in its policy: AWSReservedSSO_BillingReadOnly_tagnumber.
    2. I have an IAM Identity Center group, AWS-acctnum-BillingReaders-Prod, that has 4 SSO users.
    3. The above group has been assigned to the permission set below; the user is able to see the permission set on his login page, under the account.
    4. I also have a permission set (BillingReadOnly) that has the AWS-managed billing policy AWSBillingReadOnlyAccess, plus an inline policy that allows access to an S3 bucket (ListBucket, GetObject).
    The SSO user who is part of the group in item 2 sees this permission set on his login screen, but he does not see any buckets listed in S3. Note: anything that is AWS-managed cannot be altered, hence the addition of the custom inline policy on the permission set. Any idea what's wrong here? Thanks in advance.
    1
    answers
    0
    votes
    7
    views
    Swee
    asked 6 hours ago
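One thing that commonly causes the symptom in the question above: `s3:ListBucket` on a single bucket lets a user list that bucket's contents, but the S3 console's bucket list requires `s3:ListAllMyBuckets`. A hedged sketch of an inline policy for the permission set (the bucket name is a placeholder):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SeeBucketListInConsole",
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Resource": "*"
    },
    {
      "Sid": "ListAndReadOneBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-billing-bucket",
        "arn:aws:s3:::example-billing-bucket/*"
      ]
    }
  ]
}
```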
