Questions tagged with Amazon QuickSight
Getting null on QuickSight service configuration. Does anyone have any insight on this issue?

Hello,
I just started using AWS QuickSight and ran into a problem when trying to upload new datasets to SPICE.
According to the Manage QuickSight > SPICE capacity page, my SPICE usage of 297 GB is far over my capacity limit of 31 GB. The problem is that I have already deleted all datasets in QuickSight to free up SPICE capacity, but that only gained me 14 MB of free capacity. How do I free the remaining 297 GB?
I cannot see an option to release capacity in the QuickSight console.
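Deleting a SPICE dataset should release its capacity automatically, so 297 GB lingering after everything was deleted looks like an accounting issue worth raising with AWS support. In the meantime, one way to see which datasets are actually holding SPICE capacity is to sum `ConsumedSpiceCapacityInBytes` from the `DescribeDataSet` API responses. A minimal sketch (the sample responses below are made up; in practice each would come from `boto3.client("quicksight").describe_data_set(...)`):

```python
# Sketch: tally SPICE usage across datasets from DescribeDataSet responses.
# The field name ConsumedSpiceCapacityInBytes follows the QuickSight API;
# the sample data here is invented for illustration.

def spice_usage_gb(describe_responses):
    """Sum ConsumedSpiceCapacityInBytes across DescribeDataSet responses."""
    total_bytes = sum(
        r["DataSet"].get("ConsumedSpiceCapacityInBytes", 0)
        for r in describe_responses
    )
    return total_bytes / (1024 ** 3)

sample = [
    {"DataSet": {"DataSetId": "a", "ConsumedSpiceCapacityInBytes": 5 * 1024 ** 3}},
    {"DataSet": {"DataSetId": "b", "ConsumedSpiceCapacityInBytes": 2 * 1024 ** 3}},
]
print(spice_usage_gb(sample))  # 7.0
```

If the per-dataset totals don't add up to what the SPICE capacity page reports, that discrepancy itself is useful evidence for a support ticket.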

I have a QuickSight dataset that has been working fine for months, pulling data from S3 via a manifest file, but since yesterday every refresh has been failing with the errors below:
```
FAILURE_TO_PROCESS_JSON_FILE
Error details:
S3 Iterator: S3 problem reading data
```
I've double-checked the manifest file format and the S3 bucket permissions for QuickSight, and everything seems fine; nothing has changed on our end for this to suddenly stop working out of the blue.
Manifest file:
```
{
  "fileLocations": [
    {
      "URIPrefixes": [
        "https://s3.amazonaws.com/solar-dash-live/"
      ]
    }
  ],
  "globalUploadSettings": {
    "format": "JSON"
  }
}
```
The error in the email alert is different and says "Amazon QuickSight couldn't parse a manifest file as valid JSON." However, I verified that the above JSON is formatted correctly.
Also, if I create a new dataset with the same manifest file, the preview tool shows the data; it's only the refresh that fails. So the manifest must be formatted correctly if QuickSight can initially pull data from S3 and only fails later.
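Since the email alert specifically complains about invalid JSON, one quick sanity check is to fetch the manifest exactly as stored in S3 and run it through a strict JSON parser: a UTF-8 BOM, a trailing comma, or an invisible character can break parsing even when the file looks fine in an editor. A small sketch (the manifest text is inlined here; in practice read the object bytes from S3):

```python
import json

# Sketch: validate a QuickSight S3 manifest the way a strict JSON parser
# would. The manifest below mirrors the one from the question.
manifest_text = '''
{
  "fileLocations": [
    {"URIPrefixes": ["https://s3.amazonaws.com/solar-dash-live/"]}
  ],
  "globalUploadSettings": {"format": "JSON"}
}
'''

try:
    manifest = json.loads(manifest_text)
    print("valid JSON:", sorted(manifest))  # valid JSON: ['fileLocations', 'globalUploadSettings']
except json.JSONDecodeError as err:
    # A BOM or trailing comma shows up here even when the file "looks" fine.
    print("parse error:", err)
```

If the manifest parses cleanly, the "couldn't parse" message may actually be about one of the JSON *data* files under the prefix rather than the manifest itself, which would also explain why the initial preview works but a later refresh fails.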
Copied from https://community.amazonquicksight.com/t/new-custom-sql-spice-import-failures-recent-code-update/9480
Was there a recent update to how SPICE imports work, perhaps when used in conjunction with custom SQL? I'm seeing an odd bug involving column orders like so:
Given custom SQL that looks like this:
```
SELECT
SOME_VARCHAR_COLUMN_A,
SOME_DATETIME_COLUMN_B,
SOME_INTEGER_COLUMN_C
FROM MY_TABLE
```
that returns rows like this:
```
SOME_VARCHAR_COLUMN_A | SOME_DATETIME_COLUMN_B | SOME_INTEGER_COLUMN_C
======================+========================+======================
abcdefg | 1659355922000 | 1234
Some other string | 1659432273000 | 192384719
```
And given a data set where I previously had done something like rename SOME_VARCHAR_COLUMN_A to "Column A", I am now getting errors on SPICE refresh that say the following:
```
Error threshold exceeded for dataset (10000) (maxErrors = 10000)
SKIPPED ROWS
10001 rows where SOME_DATETIME_COLUMN_B field date values were not in a supported date format.
```
The error file that the refresh dialog links to looks something like this:
```
ERROR_TYPE | COLUMN_NAME | SOME_DATETIME_COLUMN_B | SOME_INTEGER_COLUMN_C | Column A
===============+========================+========================+=======================+=========
MALFORMED_DATE | SOME_DATETIME_COLUMN_B | abcdefg | 1659355922000 | 1234
MALFORMED_DATE | SOME_DATETIME_COLUMN_B | Some other string | 1659432273000 | 192384719
```
So, it appears that the fields being served to SPICE from the query result are still being served in the same order (A, B, C), but the column that was renamed is now placed after the other columns in parsing order (B, C, A).
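The error file makes the shift visible: the strings from SOME_VARCHAR_COLUMN_A are being fed to the date parser, while the genuine epoch-millisecond values (now one column over) would parse fine. A small sketch of that distinction (the helper function is mine, not a QuickSight API):

```python
from datetime import datetime, timezone

def parse_epoch_millis(value):
    """Return a UTC datetime if value is epoch milliseconds, else None."""
    try:
        return datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc)
    except (ValueError, TypeError):
        return None

# The real SOME_DATETIME_COLUMN_B values parse cleanly...
print(parse_epoch_millis("1659355922000"))  # 2022-08-01 12:12:02+00:00
# ...while the varchar values shifted into that column cannot,
# which is exactly what MALFORMED_DATE reports.
print(parse_epoch_millis("abcdefg"))        # None
```

That the "malformed dates" are recognizably the varchar column's values is strong evidence for the column-order bug described above, worth including in a report to AWS.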
As a workaround, I changed my renamed columns back to their original names, but I'm still getting SPICE ingest errors. (I've even confirmed via the CLI with `aws quicksight describe-data-set --aws-account-id my-acct-number --data-set-id my-dataset-id` that the RenameColumnOperation blocks are gone from the DataTransforms section in the LogicalTableMap).
I want to create a QuickSight dashboard that allows a user to upload a simple CSV file into an existing dataset.
Suppose a dataset has one column, customer_id, with 10 entries.
I want to enable users to upload a new CSV file containing a new set of 50 customer_ids and append it to that existing dataset. Could you please suggest how to achieve that in QuickSight? Thanks.
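QuickSight itself doesn't append an uploaded file to an existing SPICE dataset; a common pattern is to point the dataset at an S3 prefix via a manifest, merge each uploaded CSV into the data there, and then trigger a refresh. A sketch of just the append step (file contents are inlined for illustration; in practice you'd read the existing object from S3 and write the merged result back):

```python
import csv
import io

# Sketch: append newly uploaded customer_ids to an existing CSV before
# re-uploading it to the S3 location the dataset reads from.
existing = "customer_id\n1\n2\n3\n"
uploaded = "customer_id\n4\n5\n"

def append_rows(existing_csv, new_csv):
    """Merge two CSVs that share the customer_id header."""
    rows = list(csv.DictReader(io.StringIO(existing_csv)))
    rows += list(csv.DictReader(io.StringIO(new_csv)))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["customer_id"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

merged = append_rows(existing, uploaded)
print(merged.splitlines())  # ['customer_id', '1', '2', '3', '4', '5']
```

With the manifest approach you could even skip the merge and just drop each new CSV under the same prefix, since a manifest can cover every file beneath its URIPrefix.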
A nitpick about QuickSight in its current incarnation (currently on a first-month trial, pay-per-session plan):
Datasets added from the CLI don't show up in the browser console (even after adding the permissions shown below).
Also, datasets deleted from the CLI still show up in the console's list of datasets until you remove them there (of course they don't really exist anymore).
Dashboards (and I'd assume analyses) do, however, display correctly in the browser console when added or removed from the CLI.
If this is by design, I'd request changing the design, as it breaks the whole automated CLI workflow.
Suggested fix: QuickSight should refresh its cached list of datasets every time the Datasets option is clicked in the browser console.
Permissions used:
```
"quicksight:UpdateDataSourcePermissions",
"quicksight:DescribeDataSourcePermissions",
"quicksight:PassDataSource",
"quicksight:DescribeDataSource",
"quicksight:DeleteDataSource",
"quicksight:UpdateDataSource"
```
Hi,
I am migrating dashboards from one AWS account to another. In the target account, I want users to have access so that they can copy the dashboard to create a new one.
But I am not able to do this via the API. Is there an API that can grant "Save As" privileges?
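As far as I know there is no dedicated "Save As" flag in the API; "Save as" generally becomes available to a user who holds the full owner action set on the dashboard (and who has author privileges in QuickSight). A sketch of a `--grant-permissions` payload for `aws quicksight update-dashboard-permissions`, with a hypothetical user ARN:

```json
[
  {
    "Principal": "arn:aws:quicksight:us-east-1:111122223333:user/default/target-author",
    "Actions": [
      "quicksight:DescribeDashboard",
      "quicksight:ListDashboardVersions",
      "quicksight:UpdateDashboardPermissions",
      "quicksight:QueryDashboard",
      "quicksight:UpdateDashboard",
      "quicksight:DeleteDashboard",
      "quicksight:DescribeDashboardPermissions",
      "quicksight:UpdateDashboardPublishedVersion"
    ]
  }
]
```

The user will also need permissions on the underlying dataset(s), since a copied analysis reads from them directly.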
Hello,
I'm trying to create a CUDOS or CID dashboard that calculates data/costs for just a specific keyword in an item's name, e.g. "oracle". Each of our CFN stacks has the Name tag populated. Having shared the tag from the payer accounts to the account with the CUR data, it shows up in the CUR table as resource_tags_user_name. I've tried filtering on it, and I've tried adding a control based on "resource_name" and filtering on the parameter (which is the alias for it in the summary view). While it does correctly find all related items, the cost of those items is far lower than what I know to be the actual cost (e.g. $1.2k a month vs. more like $20k a month). Now, all of these volumes are provisioned as EC2 volumes, not strictly EBS volumes, but either way the cost is way off.
Is Name not a reliable tag? Is there a billing attribute that I'm missing? I've basically used various EC2 and EBS graphs as templates and simply added the filter, and the result is the same. Kinda stumped here.
Thanks very much in advance.
Who can I contact about QuickSight access issues? I have been unable to access QuickSight since Thursday, 1/19/2023.
Hello, we are receiving a limit-exceeded message in our application for QuickSight identities. Can this limit be increased?
```
Aws::QuickSight::Errors::LimitExceededException: You can only share with up to 100 identities. For more information, see https://docs.aws.amazon.com/console/quicksight/permissions-api .
aws-sdk-core (3.119.1) lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
aws-sdk-core (3.119.1) lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'
aws-sdk-core (3.119.1) lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
aws-sdk-core (3.119.1) lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
```
Hi everyone,
We are setting up an application for our clients that includes a QuickSight dashboard. External users of the dashboard should be invited through QuickSight, but all of the client's users who have administrative tasks in operating the application, and thus also have IAM users, should access QuickSight through their IAM user.
I tried to follow the steps in your [documentation on QuickSight and IAM](https://docs.aws.amazon.com/quicksight/latest/user/security_iam_service-with-iam.html), but I am not sure I understood everything.
What I want is that users in a certain IAM group for readers will only be able to create reader accounts when first signing in to QuickSight, while those in another group for QuickSight admins will create admin or author accounts.
But when I select "Manage QuickSight access to AWS services" and choose "IAM / Use existing role", I only see the option to select **one** role. So how would I best design this to get different treatment for different users? Variables in the policy? Or did I misunderstand, and the steps outlined in the documentation on passing an IAM role to QuickSight apply only to the administrator role, not to users who should only be readers?
Did I understand correctly that at least one "normal" login and user creation (with email registration) via the managed service role is needed first, since one can only switch to using IAM roles from there?
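For what it's worth, the "Manage QuickSight access to AWS services" role governs QuickSight's access to other AWS resources, not which role a signing-in user receives. Self-provisioning is instead controlled by the IAM policy attached to each group: `quicksight:CreateReader` lets a first sign-in create a reader, `quicksight:CreateUser` an author, and `quicksight:CreateAdmin` an admin. A sketch of a policy for the readers group (the Deny statement is an optional safeguard):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "quicksight:CreateReader",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": ["quicksight:CreateUser", "quicksight:CreateAdmin"],
      "Resource": "*"
    }
  ]
}
```

The admin group would get the same shape with `quicksight:CreateAdmin` (or `quicksight:CreateUser` for authors) allowed instead, so no single shared role is needed for sign-in provisioning.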
Many thanks for your help!
Best regards
As of this writing, applying a background color using a **gradient** through conditional formatting of a numeric (DECIMAL or INTEGER) column in a Table visual causes the table to render with the message:
> The data type of a field used in this visual has changed too much for QuickSight to automatically update your analysis. You can adjust field data types by editing or replacing the current dataset.
This is a new issue that has appeared in recent days and is affecting a QuickSight dashboard that was published months ago and used heavily just last month.
As a workaround, I've removed conditional formatting from all columns in the table, and the table is now successfully rendering.
Here is an example using a subset of the data in a new analysis:

And here is the same table once a gradient is applied to one of the columns. The issue appears regardless of the column type chosen.
