Questions tagged with Amazon QuickSight


Browse through the questions and answers listed below or filter and sort to narrow down your results.

Hello! How do you change the filters from your controls so that specific data outputs are reflected in your email? For example, I am trying to send out PDF snapshots of the current quarterly data from my controls via email. Thanks
0
answers
0
votes
3
views
asked 5 hours ago
Greetings, I have some analyses in QuickSight, and for personal reasons I plan to unsubscribe now and return 3 months later to work on the same assets I have now. My question is: when you delete a QuickSight account, do orphaned resources have the same recovery time as standard deleted assets from an active QuickSight subscription? Thanks
0
answers
0
votes
3
views
asked 7 hours ago
In the following blog post: https://aws.amazon.com/blogs/big-data/add-comparative-and-cumulative-date-time-calculations-in-amazon-quicksight/ under Advanced use case 3: Partial period comparisons, for partialQoQQTDSales comparisons: if this formula is used in conjunction with runningSum to measure partial year-to-date this year vs. last year-to-date, and if **one of the years** is a leap year, the calculation takes one day more or less into account (depending on whether the leap year is the prior year or this year). Using the blog post's example, the issue can be replicated with:

```
PartialYoYYTDSales = periodOverPeriodPercentDifference(
  runningSum(
    sumIf(Sales, {Order Date} <= addDateTime(dateDiff(truncDate("YYYY", now()), now(), "HH"), "HH", truncDate("YYYY", {Order Date}))),
    [truncDate("HH", {Order Date}) ASC],
    [truncDate("YYYY", {Order Date})]
  ),
  {Order Date}, YEAR, 1)
```

If now() is a date in a leap year, the prior year's running sum gains one extra day. For example, if now() is Mar 6 2020, the prior-year running sum includes Mar 7 2019, because the extra day (from Feb 29 2020) is carried into the periodOverPeriodPercentDifference calculation. This also affects the whole periodOverPeriod family of functions. Please advise if this could be fixed or looked into. Thank you.
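The leap-year drift described above can be reproduced outside QuickSight with a small sketch (plain Python, not QuickSight expression syntax): aligning the prior-year window by hours elapsed since Jan 1, as the `dateDiff(..., "HH")` trick does, lands one calendar day off once Feb 29 has passed.

```python
from datetime import datetime, timedelta

def ytd_cutoff(as_of: datetime, year: int) -> datetime:
    """Cut-off in `year` at the same hours-since-Jan-1 as `as_of`."""
    hours = (as_of - datetime(as_of.year, 1, 1)) // timedelta(hours=1)
    return datetime(year, 1, 1) + timedelta(hours=hours)

# Mar 6 2020 is day 66 of a leap year; the same number of elapsed hours in
# 2019 lands on Mar 7, so the prior-year running sum picks up an extra day.
print(ytd_cutoff(datetime(2020, 3, 6), 2019))  # 2019-03-07 00:00:00
```

The drift flips direction when the leap year is the current vs. the prior year, matching the "one more or less day" behaviour in the question.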
0
answers
0
votes
4
views
jw321
asked 8 hours ago
We set up a flow syncing the Google Analytics 4 metrics `Metric:active1DayUsers`, `Metric:active7DayUsers`, and `Metric:active28DayUsers` by dimension `Dimension:country`. We have no filters. This flow succeeds on demand, but fails on schedule with this message:

```
The request failed because the service Source Google Analytics 4 returned the following error: Details: The field that you specified as a filter, active1DayUsers, is not supported by the data object that you assigned to this flow. Specify a valid field and try again., ErrorCode: InvalidArgument.
```

Any idea how I can fix this?
0
answers
0
votes
6
views
asked 14 hours ago
Hi, we configured SSO for QuickSight following the instructions in this blog: https://aws.amazon.com/de/blogs/big-data/enable-federation-to-amazon-quicksight-with-automatic-provisioning-of-users-between-aws-iam-identity-center-and-microsoft-azure-ad/ However, with this setup every user becomes an admin, because https://aws.amazon.com/SAML/Attributes/Role is always mapped to `arn:aws:iam::<YourAWSAccountID>:role/QuickSight-Admin-Role` - the role does not depend on the user's group. ![Enter image description here](/media/postImages/original/IM8xZxakPnSvCrXgsmqDns_A) As described in the article, we created 3 IAM roles and Azure AD groups (Admin, Author, Reader). How can we assign the IAM roles to the AD groups? We already tried using claims in Azure AD, as described here: https://aws.amazon.com/de/blogs/big-data/enabling-amazon-quicksight-federation-with-azure-ad/
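For reference, with plain SAML federation the Role claim usually has to emit a different value per group, each value being a comma-separated role-ARN/provider-ARN pair. A sketch of what group-conditioned claim values could look like (the account ID, provider name, and group names are placeholders, not taken from the articles):

```text
Claim: https://aws.amazon.com/SAML/Attributes/Role   (one value per group condition)

QuickSight-Admins  -> arn:aws:iam::<YourAWSAccountID>:role/QuickSight-Admin-Role,arn:aws:iam::<YourAWSAccountID>:saml-provider/AzureAD
QuickSight-Authors -> arn:aws:iam::<YourAWSAccountID>:role/QuickSight-Author-Role,arn:aws:iam::<YourAWSAccountID>:saml-provider/AzureAD
QuickSight-Readers -> arn:aws:iam::<YourAWSAccountID>:role/QuickSight-Reader-Role,arn:aws:iam::<YourAWSAccountID>:saml-provider/AzureAD
```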
0
answers
0
votes
9
views
fabian
asked 18 hours ago
Hi, I'm creating a dashboard for operators to download Athena query results. The ID column values contain hyphens (`-`). For example, if the table contains the following data:

| id | name |
| --- | --- |
| `-xyz` | `First example` |
| `a-b-c` | `Second example` |

the generated CSV contains an extra single quote in the id column of the first row:

```csv
"id","name"
"'-xyz","First example"
"a-b-c","Second example"
```

Is there any way to avoid it?
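The apostrophe is likely a CSV-injection guard for cells that start with characters such as `-`, `=`, `+`, or `@` (this is an assumption about the export, not documented behaviour). If the export itself can't be changed, a minimal post-processing sketch:

```python
import csv
import io

def unquote_cell(value: str) -> str:
    """Strip a protective leading apostrophe from risky-looking cells."""
    if len(value) >= 2 and value[0] == "'" and value[1] in "-=+@":
        return value[1:]
    return value

# Demo on the CSV shown in the question.
raw = '"id","name"\r\n"\'-xyz","First example"\r\n"a-b-c","Second example"\r\n'
rows = [[unquote_cell(c) for c in row] for row in csv.reader(io.StringIO(raw))]
print(rows[1])  # ['-xyz', 'First example']
```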
0
answers
0
votes
16
views
hota
asked 3 days ago
I have a KPI visual that displays the count of records from a dataset. Is it possible to make that KPI show all the records that were included in this count?
0
answers
0
votes
3
views
asked 4 days ago
In Amazon QuickSight, I get a "system error, size limit exceeded" error when trying to duplicate or delete a visual. What might be the cause?
1
answers
0
votes
13
views
mizuno
asked 5 days ago
I am building an analysis where I would like to highlight entire rows based on values in different fields. I have successfully highlighted an entire row based on one field, but when I try to highlight a row in a different color based on another field, I don't seem to be able to. Is this possible?
0
answers
0
votes
6
views
asked 6 days ago
Hi, I am using AWS QuickSight Q to analyze a dataset. When a "why" question is asked with the average aggregation function (e.g., "Why did value drop in 2020?"), the highlight of the result is wrong (e.g., a %DIFF of -75% is shown when the values are 61650.19 vs. 53150.14; please see the attached image). If the same question is asked with the SUM aggregation function, the highlight of the result is correct! Could you please help me address the issue? If there is an open bug in AWS QuickSight Q, could you please send a link so I can track when it is resolved? Best, ![Enter image description here](/media/postImages/original/IM45wJjBxkSAWALb5BZRiVVA)
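For context, the percent difference one would expect from the two values quoted in the question, computed directly:

```python
# Quick check of the expected %DIFF for the values in the screenshot.
prev, curr = 61650.19, 53150.14
pct_diff = (curr - prev) / prev * 100
print(round(pct_diff, 1))  # -13.8, nowhere near the -75% highlighted
```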
0
answers
0
votes
16
views
asked 8 days ago
Do filters in direct query mode mean that the raw underlying SQL query gets a WHERE clause added or modified? Or does QuickSight scan everything and then filter after the underlying data has been scanned?

I have multiple QuickSight reports and analyses, all using datasets cached in SPICE. From what I understand, this means all the queries run off a cached, disconnected copy of the actual underlying data, and that copy is refreshed periodically. This works and it's awesome.

I also have a large Timestream table, in the region of terabytes of data. This dataset is obviously too large to import into SPICE, which means I must query it directly. I've imported the dataset and set everything up, but since viewing data, my Timestream costs over the last few days have shot up immensely. I normally sit around 2-3 USD a day, and on the day I released my report it jumped to $90. The cost breakdown attributes this to "Scanned bytes" in Timestream. All of my reports use filters to segment and break down the data, but this obviously isn't working the way it should.

If my data source is essentially "select * from table" and I then add a filter on the dataset, does that mean the query sent to the data source is "select * from table where column = filter"? Or does it load all the rows and then do some other filtering after that? Based on the speed of the reports I assume it's the first one, but if so, I need to figure out how to constrain the filters even more to load less data.

I have disabled the report for now, and my Timestream costs have once again dropped down to normal levels. At this point I'm too scared to re-enable it, but people are clamoring for their data :/
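One mitigation worth sketching, assuming the dataset can be defined with custom SQL (the database/table names and the 7-day window below are placeholders): bound the `time` column inside the dataset query itself, since Timestream only limits scanned bytes when the query constrains the time range, regardless of what QuickSight filters do afterwards.

```sql
-- Sketch: pre-limit the time range in the dataset's custom SQL instead of
-- relying only on analysis-level filters ("mydb"/"mytable" are placeholders).
SELECT *
FROM "mydb"."mytable"
WHERE time > ago(7d)   -- Timestream prunes scanned data by the time predicate
```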
1
answers
0
votes
32
views
asked 8 days ago
Hello, I have JSON files for jobs created and exported from Amazon Transcribe, and I need to add them to the CloudFormation stack of PCA (Post Call Analytics). I found that on my PCA dashboard there is no option to create a new data source and add the JSON files to it. How can I add a new data source (JSON files from Transcribe) to PCA? And how can I display my new jobs, which were created manually within Transcribe, on PCA as a data source?
0
answers
0
votes
6
views
asked 9 days ago