
Questions tagged with AWS AppSync


Storing/representing a hierarchical tree used for navigation within an Amplify web app and AppSync GraphQL API layer.

Hi, **TL;DR: Can anyone recommend an approach to storing customisable n-level hierarchy trees for grouping and navigating results via a frontend Amplify-powered web app (ideally using DynamoDB or any other database solution that can be mapped to AppSync)?**

**Some background**

I'm building a multi-tenant IoT analytics solution that takes data from sensors out in the field, uploads it to AWS, processes it, and stores it in a DynamoDB table (i.e. a very "standard" setup). I'm planning on adding a web frontend (built using Amplify and an AppSync GraphQL layer) that will allow users to navigate a **customisable, n-level** hierarchy tree of assets in order to view the sensor data we've collected. Examples of valid hierarchies include: Country -> Site -> Building -> Floor -> Room -> Sensor (6-level) or Site -> Building -> Room -> Sensor (4-level), etc.

The important thing here is that this hierarchy tree can differ per customer and needs to be customisable on a tenant-by-tenant basis, but we don't need to do any complex analysis or navigation of relationships between hierarchy levels (so, to me, something like Amazon Neptune or another graph database feels like overkill, but perhaps I'm wrong).

My first thought was to try and build a hierarchical relationship inside of a DynamoDB table, possibly making use of a GSI to provide this, but all of the examples I've seen online focus very much on quick retrieval rather than quick updating of hierarchy trees. Whilst it's unlikely that these tree structures would be updated on a regular basis, it is something we need to be able to support, so the idea of possibly updating thousands of rows in DynamoDB every time we want to make a change to the hierarchy tree for a given control area doesn't seem quite right to me. Hence, my question above.
I'm ideally looking for guidance on how to structure a DDB table to best support BOTH optimal retrieval of, and updates to, hierarchy trees in our application, but if DDB isn't the right answer here, then suggestions of alternatives would also be greatly appreciated. Many thanks in advance.
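Not an authoritative answer, but one way to picture the trade-off this question describes is the "materialized path" single-table layout, where each node's sort key encodes its full ancestry, so one query with a `begins_with` condition returns a whole subtree. The key names (`PK`/`SK`) and the tenant/site identifiers below are purely illustrative, and the DynamoDB query is emulated in memory:

```python
# Hypothetical sketch of a "materialized path" single-table layout for an
# n-level hierarchy: the sort key encodes the node's full ancestry, so a
# single Query with begins_with(SK, path) retrieves an entire subtree.
# PK/SK names and tenant/site ids are illustrative, not from a real schema.

def path_key(*segments):
    """Join hierarchy levels into a sort-key string, e.g. 'site1#bldg1#room7'."""
    return "#".join(segments)

def subtree(items, tenant_id, prefix):
    """Emulate: Query where PK = tenant_id AND begins_with(SK, prefix)."""
    return [it for it in items
            if it["PK"] == tenant_id and it["SK"].startswith(prefix)]

table = [
    {"PK": "tenantA", "SK": path_key("site1"), "type": "Site"},
    {"PK": "tenantA", "SK": path_key("site1", "bldg1"), "type": "Building"},
    {"PK": "tenantA", "SK": path_key("site1", "bldg1", "room7"), "type": "Room"},
    {"PK": "tenantA", "SK": path_key("site1", "bldg2"), "type": "Building"},
]

print([it["SK"] for it in subtree(table, "tenantA", "site1#bldg1")])
# -> ['site1#bldg1', 'site1#bldg1#room7']
```

The catch is exactly the update cost raised above: renaming or moving a mid-level node rewrites the SK of every descendant. An adjacency-list layout (SK = node id, a `parent` attribute, plus a GSI on `parent`) makes moves a one-item update instead, at the cost of recursive reads for subtrees.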
1 answer · 0 votes · 3 views · cgddrd · asked 14 days ago

AppSync request mapping template errors not logged in CloudWatch

I have a simple resolver that has a simple Lambda function as a data source. This function always throws an error (to test out logging). The resolver has the request mapping template enabled and it is configured as follows: ``` $util.error("request mapping error 1") ``` The API has logging configured to be as verbose as possible yet I cannot see this `request mapping error 1` from my CloudWatch logs in the `RequestMapping` log type: ``` { "logType": "RequestMapping", "path": [ "singlePost" ], "fieldName": "singlePost", "resolverArn": "xxx", "requestId": "bab942c6-9ae7-4771-ba45-7911afd262ac", "context": { "arguments": { "id": "123" }, "stash": {}, "outErrors": [] }, "fieldInError": false, "errors": [], "parentType": "Query", "graphQLAPIId": "xxx" } ``` The error is not completely lost because I can see this error in the query response: ``` { "data": { "singlePost": null }, "errors": [ { "path": [ "singlePost" ], "data": null, "errorType": null, "errorInfo": null, "locations": [ { "line": 2, "column": 3, "sourceName": null } ], "message": "request mapping error 1" } ] } ``` When I add `$util.appendError("append request mapping error 1")` to the request mapping template so it looks like this: ``` $util.appendError("append request mapping error 1") $util.error("request mapping error 1") ``` Then the appended error appears in the `RequestMapping` log type but the `errors` array is still empty: ``` { "logType": "RequestMapping", "path": [ "singlePost" ], "fieldName": "singlePost", "resolverArn": "xxx", "requestId": "f8eecff9-b211-44b7-8753-6cc6e269c938", "context": { "arguments": { "id": "123" }, "stash": {}, "outErrors": [ { "message": "append request mapping error 1" } ] }, "fieldInError": false, "errors": [], "parentType": "Query", "graphQLAPIId": "xxx" } ``` When I do the same thing with the response mapping template then everything works as expected (the `errors` array contains `$util.error(message)` messages and the `outErrors` array contains `$util.appendError(message)` messages).

1. Is this working as expected, so `$util.error(message)` will never show up in CloudWatch logs?
2. Under what conditions will the `errors` array in the `RequestMapping` log type be populated?
3. Bonus question: can the `errors` array contain more than one item for either the `RequestMapping` or `ResponseMapping` log types?
0 answers · 0 votes · 3 views · Henry · asked 18 days ago

Possible to override default GraphQL @model resolvers with Lambda function resolvers?

I'm hoping to leverage a GraphQL model managed by Amplify / AppSync and build on that, using the DynamoDB table for storage but then adding my own custom business logic. In the current example, I want a model that represents a session for an external API, and I want to override the `create` mutation with a Lambda function that will call the external API to get an access token, and then add that token to the newly-created `@model` instance. I'm trying to do that by disabling the default `create` resolver and then adding my own in my GraphQL schema: ``` type ExternalAPISession @model(mutations: { create: null }) @auth(rules: [{allow: public}]) { id: ID! username: String! @index(name: "byUsername", queryField: "getExternalAPISessionByUsername") access_token: String! refresh_token: String! } type Mutation { createExternalAPISession(username: String, password: String): ExternalAPISession @function(name: "CreateExternalAPISession-${env}") } ``` But, even though I tried to disable the default `create` resolver, I still get this error when I try to `amplify push` this schema: ``` ⠹ Updating resources in the cloud. This may take a few minutes... Following resources failed Resource Name: MutationcreatePaytronixSessionResolver (AWS::AppSync::Resolver) Event Type: create Reason: Only one resolver is allowed per field. (Service: AWSAppSync; Status Code: 400; Error Code: BadRequestException; Request ID: 08399b17-1e38-46f6-bf9f-06f68356c21a; Proxy: null) ``` Is it even possible to do what I'm trying to do? I can't seem to find any explicit confirmation in the documentation that you can override a default CRUD action with your own Lambda function resolver. I see that you can override the default CRUD VTL templates with your own VTL. But can you override them with Lambda functions?
2 answers · 0 votes · 4 views · TaoRyan · asked 25 days ago

enhanced subscription filtering connection error when using amplify-cli generated mutations

I am using amplify-cli with angular front-end. I have the following schema (schema.graphql): ``` type CardDeck @model @key(name: "byBoard", fields: ["boardId"], queryField: "cardDeckByBoardId") { id: ID! type: String! properties: [PropertyOrderTwo] boardId: ID! } type Subscription { onUpdateCardDeckByBoardId(boardId: ID!): CardDeck @aws_subscribe(mutations: "updateCardDeck") } ``` I added the following response mapping template to the subscription in the appSync console. ``` ## Response Mapping Template - onUpdateCardDeckByBoardId subscription $extensions.setSubscriptionFilter({ "filterGroup": [ { "filters" : [ { "fieldName" : "boardId", "operator" : "eq", "value" : "**** -> a valid board id" } ] } ] }) $util.toJson($context.result) ``` This results in the following connection error when subscribing to the listener in my app: ``` Connection failed: {"errors":[{"message":"Cannot return null for non-nullable type: 'ID' within parent 'CardDeck' (/onUpdateCardDeckByBoardId/id)"},{"message":"Cannot return null for non-nullable type: 'String' within parent 'CardDeck' (/onUpdateCardDeckByBoardId/type)"},{"message":"Cannot return null for non-nullable type: 'ID' within parent 'CardDeck' (/onUpdateCardDeckByBoardId/boardId)"},{"message":"Cannot return null for non-nullable type: 'AWSDateTime' within parent 'CardDeck' (/onUpdateCardDeckByBoardId/createdAt)"},{"message":"Cannot return null for non-nullable type: 'AWSDateTime' within parent 'CardDeck' (/onUpdateCardDeckByBoardId/updatedAt)"}]} ``` What am I doing wrong?
1 answer · 0 votes · 2 views · PatrickSteyaert · asked a month ago

How do I set up an AWS Amplify project to query an existing AWS AppSync API?

Hi, I am new to AWS Amplify and would like guidance on how to send a query to an ***existing*** GraphQL API on AWS AppSync. I am unsure how to start as a lot of Amplify coverage creates a *new* AppSync API using the Amplify CLI. ## Objectives * Set up a Node.js project to work with an existing AWS AppSync API, using AWS Amplify as the GraphQL client. * Send a single query to an existing AWS AppSync API. The query lists game results from a DynamoDB table and is called `listGames` in my GraphQL schema. * I need to repeat the query in order to fetch all available database records that satisfy the query. This would mean adding results to an array/object until the `nextToken` is `null` (i.e. no more records can be found for the query). ## Context * This application is deployed in an Amazon ECS container using AWS Fargate. * The ECS service is fronted by an Application Load Balancer (ALB). * A leader board web page fetches game results through a `POST` request to the ALB's DNS name / URL and adds them to an HTML table. ## Notes * For now, API key is my authentication method. I would soon like to switch to a task IAM role in ECS. * The ECS deployment described in 'Context' is working but it sends `POST` requests without AWS libraries. It is my understanding that I would need to use an AWS library in order to use an IAM role for AppSync authentication (used as a [task IAM role in ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html)). Please correct me if I am mistaken. I would greatly appreciate any help you can give me. Thank you for your time!
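The "repeat until `nextToken` is `null`" objective is client-agnostic, so it can be sketched independently of whichever GraphQL client ends up issuing the request. In this sketch `run_query` is a hypothetical stand-in for the actual `listGames` call, exercised here with fake pages:

```python
# Hedged sketch of the nextToken accumulation loop described above.
# `run_query` is a hypothetical stand-in for whatever issues the real
# listGames request (Amplify, or a signed HTTP POST).

def fetch_all_games(run_query, page_size=100):
    """Keep issuing the query, carrying nextToken forward, until the
    API returns a null nextToken (no more records)."""
    items, next_token = [], None
    while True:
        page = run_query(limit=page_size, nextToken=next_token)
        items.extend(page["items"])
        next_token = page.get("nextToken")
        if next_token is None:
            break
    return items

# Two fake pages standing in for AppSync responses.
fake_pages = {
    None: {"items": ["game1", "game2"], "nextToken": "abc"},
    "abc": {"items": ["game3"], "nextToken": None},
}
all_games = fetch_all_games(lambda limit, nextToken: fake_pages[nextToken])
print(all_games)  # -> ['game1', 'game2', 'game3']
```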
1 answer · 1 vote · 4 views · Toby · asked a month ago

Cannot find namespace 'ZenObservable' issue with aws-appsync-subscription-link in AWS AppSync JavaScript SDK

ISSUE WITH [aws-appsync-subscription-link package in AWS AppSync JavaScript SDK](https://github.com/awslabs/aws-mobile-appsync-sdk-js/tree/master/packages/aws-appsync-subscription-link). I am using the [Apollo client v3.x](https://github.com/apollographql/apollo-client) along with aws-appsync-auth-link and aws-appsync-subscription-link packages in an Angular v12+ project. I am trying to upgrade Apollo Client from v2.6 to v3.x. While doing so I had to upgrade aws-appsync-auth-link and aws-appsync-subscription-link packages too to their latest version. Below is my current package.json: ``` { "name": "test-apollo-client", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "test": "ng test", "lint": "ng lint", "e2e": "ng e2e" }, "private": true, "dependencies": { "@angular/animations": "12.2.0", "@angular/cdk": "12.2.0", "@angular/common": "12.2.0", "@angular/compiler": "12.2.0", "@angular/core": "12.2.0", "@angular/forms": "12.2.0", "@angular/localize": "12.2.0", "@angular/platform-browser": "12.2.0", "@angular/platform-browser-dynamic": "12.2.0", "@angular/router": "12.2.0", "rxjs": "~6.6.0", "tslib": "^2.0.0", "zone.js": "~0.10.2", "@apollo/client": "^3.5.10", "aws-appsync-auth-link": "^3.0.7", "aws-appsync-subscription-link": "^3.0.10", "graphql": "15.6.0", "zen-observable-ts": "^1.1.0", "@types/zen-observable": "^0.5.3" }, "devDependencies": { "@angular-devkit/build-angular": "12.2.6", "@angular/cli": "12.2.6", "@angular/compiler-cli": "12.2.0", "@types/jasmine": "~3.6.0", "@types/node": "^12.11.1", "codelyzer": "^6.0.0", "jasmine-core": "~3.6.0", "jasmine-spec-reporter": "~5.0.0", "karma": "~5.1.0", "karma-chrome-launcher": "~3.1.0", "karma-coverage": "~2.0.3", "karma-jasmine": "~4.0.0", "karma-jasmine-html-reporter": "^1.5.0", "protractor": "~7.0.0", "ts-node": "~8.3.0", "tslint": "~6.1.0", "typescript": "4.2.3" } } ``` When trying to do ng build I am getting below errors: ``` Error: 
node_modules/aws-appsync-subscription-link/lib/subscription-handshake-link.d.ts:18:71 - error TS2503: Cannot find namespace 'ZenObservable'. 18 connectNewClients(connectionInfo: MqttConnectionInfo[], observer: ZenObservable.Observer<FetchResult>, operation: Operation): Promise<any[]>; ~~~~~~~~~~~~~ Error: node_modules/aws-appsync-subscription-link/lib/subscription-handshake-link.d.ts:19:68 - error TS2503: Cannot find namespace 'ZenObservable'. 19 connectNewClient(connectionInfo: MqttConnectionInfo, observer: ZenObservable.Observer<FetchResult>, selectionNames: string[]): Promise<any>; ~~~~~~~~~~~~~ Error: node_modules/aws-appsync-subscription-link/lib/subscription-handshake-link.d.ts:20:67m - error TS2503: Cannot find namespace 'ZenObservable'. 20 subscribeToTopics<T>(client: any, topics: string[], observer: ZenObservable.Observer<T>): Promise<unknown[]>; ~~~~~~~~~~~~~ Error: node_modules/aws-appsync-subscription-link/lib/subscription-handshake-link.d.ts:21:63 - error TS2503: Cannot find namespace 'ZenObservable'. 21 subscribeToTopic<T>(client: any, topic: string, observer: ZenObservable.Observer<T>): Promise<unknown>; ~~~~~~~~~~~~~ Error: node_modules/aws-appsync-subscription-link/lib/types/index.d.ts:75:15 - error TS2503: Cannot find namespace 'ZenObservable'. 75 observer: ZenObservable.SubscriptionObserver<any>; ``` A sample Angular v12 app is created in https://github.com/wildbsiu/testing-aws-appsync-subscription-link-issue to reproduce the issue. Run ng build command to reproduce the error(s).
0 answers · 1 vote · 2 views · wildbisu · asked a month ago

How do I compose a BULK request to OpenSearch via AppSync resolver mapping templates?

I have a pipeline resolver for an AppSync Mutation. It contains two functions: the first one is a Lambda sending updates to RDS, the second one should take the result from `$ctx.prev.result` and index it via an OpenSearch datasource. In the request resolver mapping template of the second one, I am composing the bulk body in NDJSON in the following manner: ```vtl #set($bulk = $util.toJson({ "index": { "_id": "${$ctx.prev.result.id}" } })) #set($bulk = "${bulk} ${util.toJson($ctx.prev.result)}") { "version": "2017-02-28", "operation": "POST", "path": "/_bulk", "params": { "body": $util.toJson($bulk) } } ``` Lacking proper debugging tools, I have been using `$util.error` as a logging method to get my `$bulk` contents. And it looks like the following format, which seems correct. ```ndjson {"index":{"_id":"A8DEF210-C342-48CB-9A4A-DA7D1E4D6AF1"}} {"foo":123,"bar":999,"baz":1234567} ``` But when I actually run the mutation via AppSync, I get a `MappingTemplate` error `Unable to transform for the body: $[params][body].` and I have no idea why. EDIT: I took a look at [[re:Post] Appsync HTTP resolver supported content types](https://repost.aws/questions/QUFqBom4iFQ-6ePM1z11Sthw/appsync-http-resolver-supported-content-types), which inspired me to take another look at [Resolver Mapping Template for OpenSearch (params)](https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-elasticsearch.html#params-field). It seems the POST body only accepts a single JSON object; the NDJSON required by the bulk request is not supported yet. Am I correct? If so, is support for the bulk API planned? Also, what is the currently recommended way to index multiple "normalized" documents from the same resolver?
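For reference, the wire format the `_bulk` endpoint expects is one JSON action line plus one JSON document line per item, newline-terminated. A sketch of building that payload in Python; whether AppSync's OpenSearch resolver will carry such a string in `params.body` is exactly what the question is asking, so this only illustrates the payload itself:

```python
import json

def bulk_index_body(docs):
    """Build an NDJSON _bulk payload: an {"index": ...} action line
    followed by the document itself, one pair per doc, ending in a newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_id": doc["id"]}}, separators=(",", ":")))
        lines.append(json.dumps(doc, separators=(",", ":")))
    return "\n".join(lines) + "\n"

body = bulk_index_body([{"id": "A8DEF210", "foo": 123, "bar": 999}])
print(body)
```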
0 answers · 0 votes · 2 views · vicary · asked a month ago

AppSync fails to use Lambda Authorizer for Secondary Authorization

When utilizing the AWS Lambda Authorizer for AppSync as a secondary option, I am unable to get any request to come back as authorized. As part of testing, I set the Authorizer to return true in every circumstance, but it was still returning a ‘Not Authorized’ error in AppSync. It appears that this is an issue with AppSync and its Lambda Authorizer. I am able to confirm it calls the Lambda, and the response is hardcoded to be true, but it still fails in the AWS AppSync console saying it is unauthorized. We are able to perform our queries with the API Key in the Console but the same query fails and says ‘Unauthorized’ with the AWS Lambda Authorizer. We are deploying AppSync via CloudFormation, utilizing Serverless Framework and the AppSync plugin. The return from the Lambda was hardcoded (for testing) to this: ``` { "isAuthorized": true, "resolverContext": {} } ``` The error message in AppSync: ```{ "data": { "getEvent": null }, "errors": [ { "path": [ "getEvent" ], "data": null, "errorType": "Unauthorized", "errorInfo": null, "locations": [ { "line": 2, "column": 3, "sourceName": null } ], "message": "Not Authorized to access getEvent on type Query" } ] } ``` I made sure to include `resolverContext` due to this thread about Amplify issues with AppSync. GitHub thread about Amplify issue with AppSync Lambda Auth: https://github.com/aws-amplify/amplify-cli/issues/10047 Testing with an empty `resolverContext` and a non-empty `resolverContext` produced the same results.
Lambda Code, Typescript compiled to Node 14: ```"use strict"; Object.defineProperty(exports, "__esModule", { value: true }); exports.handler = void 0; async function handler(lambdaEvent) { console.log('Received event context: {}', JSON.stringify(lambdaEvent.requestContext)); return { isAuthorized: true, resolverContext: {} }; } exports.handler = handler; //# sourceMappingURL=authenticate.js.map ``` Example of Context coming from AppSync invocation: ```2022-03-30T16:51:49.315Z 39a757f2-1ae9-4f10-a1aa-7acbb3e0f2d3 INFO Received event context: {} { "apiId": "zpaawy2f7rbqdpupeik44az6wm", "accountId": "$$$$$$$$$$$", "requestId": "596b0f97-a6eb-47e0-bf98-f6659fc27df0", "queryString": "query MyQuery {\n getEvent(id: \"2193\") {\n location {\n name\n }\n name\n start_time\n end_time\n }\n}\n", "operationName": "MyQuery", "variables": {} } ```
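For comparison, here is the same always-allow handler as a Python sketch. The response fields follow the documented AppSync Lambda authorizer contract (`isAuthorized` plus optional `resolverContext` and `deniedFields`); the example event values are made up:

```python
import json

def handler(event, context=None):
    """Always-allow AppSync Lambda authorizer, mirroring the hardcoded
    TypeScript handler above. AppSync passes the caller's token and
    request metadata in `event` (testing only; never ship always-allow)."""
    print("Received event context:", json.dumps(event.get("requestContext", {})))
    return {
        "isAuthorized": True,    # allow every request
        "resolverContext": {},   # surfaced to resolvers as $ctx.identity.resolverContext
        "deniedFields": [],      # optional: fields to redact even when authorized
    }

# Hypothetical invocation payload, shaped like the logged event above.
decision = handler({"authorizationToken": "test", "requestContext": {"apiId": "xyz"}})
```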
0 answers · 1 vote · 3 views · dtornow-lc · asked 2 months ago

Amplify @auth rule on relation

I'm new to Amplify and having trouble configuring @auth rules on a model. The app has two user groups, Event Organisers and Club Managers. Event Organisers can log in and create `Events`. Club Managers can log in and create `Teams`, which they can register for `Events`. When a `Team` is registered for an `Event` an `EventRegistration` is created. The models (simplified) look like this: ``` type Event @model @auth(rules: [ # Event organisers create these and can perform CRUD operations. { allow: owner }, # Anyone logged into the system can view events, so they can register. { allow: private, operations: [read] }, ]) { id: ID! name: String! # Many teams can register for the same event. eventRegistrations: [EventRegistration!] @hasMany } ``` ``` type EventRegistration @model @auth(rules: [ # Club managers create these when they register their team for an event. Once # created, registrations are read-only from the club managers perspective. { allow: owner, operations: [create, read] } # Event organisers can read and update registrations for their events. { allow: owner, ownerField: "organiser", operations: [read, update] }, ]) { id: ID! organiser: String! event: Event! @belongsTo # I want to make this readable by event organisers, so they can see teams who have # registered for their event. Currently they can't because of the auth rule on Team. team: Team! @belongsTo } ``` ``` type Team @model @auth(rules: [ { allow: owner } ]) { id: ID! name: String! eventRegistrations: [EventRegistration!] @hasMany } ``` The problem is, when an Event Organiser queries a list of registrations for their event, the `team` property is not available, because Event Organisers don't have read access as specified by the `Team` auth rules. Note - Event Organisers shouldn't be able to read all teams, just those registered for their event. I've thought about a few solutions, but none of them have worked, or felt like the correct way to solve the problem.
I tried adding field level auth rules to `EventRegistration.team` hoping those would take precedence over the rules defined on `Team`, but that didn't seem to work. One idea is to add `organisers: [String]` to the `Team` model, then add Event Organisers to the list when a team registers for an event, and remove them when the event is finished or the team de-registers. But this seems quite error prone, remembering to add / remove access programmatically in different scenarios. Event Organisers are also not a concern of the `Team` model; they really belong on `EventRegistration`. I've also considered having a separate `RegisteredTeam` model which is essentially a copy of the `Team` model with different auth rules, but duplication seems like a bad idea. [Custom auth rules](https://docs.amplify.aws/cli/graphql/authorization-rules/#custom-authorization-rule) is something else I've seen but haven't dug into yet. I'm hoping someone with more Amplify experience than me can recommend a pattern :)
0 answers · 0 votes · 2 views · flashbackzoo · asked 3 months ago

AppSync query can not be authorized by IAM

I built an AppSync project with Amplify, and the schema is as below. ``` # This "input" configures a global authorization rule to enable public access to # all models in this schema. Learn more about authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules type Post @model @auth( rules: [ { allow: owner ownerField: "owner" provider: userPools operations: [read, create] } { allow: private, provider: userPools, operations: [read, update] } { allow: private, provider: iam, operations: [read, create, update] } ] ) { id: ID! content: String! owner: String nickname: String createdAt: AWSDateTime command: Command @default(value: "PRIVMSG") channel: String! @index( name: "byChannel" queryField: "postsByChannel" sortKeyFields: ["createdAt"] ) destination: Destination @default(value: "LOGGER") @index( name: "byDestination" queryField: "postsByDestination" sortKeyFields: ["createdAt"] ) } enum Command { PRIVMSG NOTICE } enum Destination { IRC LOGGER ALL } type Channel @model @auth( rules: [ { allow: private provider: userPools operations: [create, read, delete] } { allow: private, provider: iam, operations: [read, update, delete] } ] ) { id: ID! name: String! posts: [Post] @hasMany(indexName: "byChannel", fields: ["name"]) } ``` I was planning to run listPosts from a Python script using IAM authentication, but it shows an unauthenticated error. So I tried to do the same thing in AppSync. I used the query as below: ``` query listPosts { listPosts { items { id } } } ``` and I got `"Not Authorized to access listPosts on type ModelPostConnection"`, even though my user has the AdministratorAccess policy. Did I miss something else? I'd appreciate any suggestions.
P.S.: my query definition part in AppSync schema is as below: ``` type Query { getPost(id: ID!): Post @aws_iam @aws_cognito_user_pools listPosts(filter: ModelPostFilterInput, limit: Int, nextToken: String): ModelPostConnection @aws_iam @aws_cognito_user_pools postsByChannel( channel: String!, createdAt: ModelStringKeyConditionInput, sortDirection: ModelSortDirection, filter: ModelPostFilterInput, limit: Int, nextToken: String ): ModelPostConnection @aws_iam @aws_cognito_user_pools postsByDestination( destination: Destination!, createdAt: ModelStringKeyConditionInput, sortDirection: ModelSortDirection, filter: ModelPostFilterInput, limit: Int, nextToken: String ): ModelPostConnection @aws_iam @aws_cognito_user_pools getChannel(id: ID!): Channel @aws_iam @aws_cognito_user_pools listChannels(filter: ModelChannelFilterInput, limit: Int, nextToken: String): ModelChannelConnection @aws_iam @aws_cognito_user_pools } ```
0 answers · 1 vote · 5 views · Shishin Mo · asked 3 months ago

AWS Amplify - Field Level GraphQL Auth on Required Fields

I am trying to set up GraphQL via AWS Amplify so that all users can see part of a Member object (e.g. the name), but only members in certain groups can see other parts (e.g. the e-mail address). I have set up my `schema.graphql` as follows (note this is a truncated version): ``` type Member @model(subscriptions: { level: off }) @auth(rules: [{allow: groups, groups: ["MANAGER"]}, {allow: private, operations: [read]}]) { membershipNumber: Int! @primaryKey firstName: String! lastName: String! email: String! @auth(rules: [{allow: groups, groups: ["MANAGER"]}, {allow: groups, groups: ["COMMITTEE"], operations: [read]}]) dietaryRequirements: String @auth(rules: [{allow: groups, groups: ["MANAGER"]}, {allow: groups, groups: ["COMMITTEE"], operations: [read]}]) } ``` As I understand it, all logged in users should be able to read `membershipNumber`, `firstName` and `lastName`. Users in the COMMITTEE group should also be able to read `email` and `dietaryRequirements`, and users in the MANAGER group should be able to read/write all fields. When I try to run a query as a logged in user with no groups though, I get an unauthorized error on `dietaryRequirements` (which is good) but I am able to read `email` without an error (which is bad). The only difference I can see is that `email` is a required field, whereas `dietaryRequirements` isn't. What am I doing wrong? Do required fields override the authorization rules?
0 answers · 0 votes · 1 view · jbaker-qswp · asked 4 months ago

AppSync GSI not returning an existing DynamoDB record via GraphQL using Amplify

I've used Amplify to generate an AppSync schema and GraphQL resolvers automatically. All of a sudden, a major fetch stopped working. This is the query that I am running in the AppSync console: ``` query CreamByUuid { creamByUUID(streamUUID: "6e1a5555-9999-6666-84c1-54e777777777", id: {}) { items { id } nextToken } milkBySlug(slug: "cherry-rare-solo-appearance") { nextToken items { price id } } getCream(id: "77bababa-8888-3333-bb2b-857c470d5555") { id creamUUID } } ``` The first part is a GSI that fetches a stream by a field called “creamUUID”. The problem is, the result returns an empty array: ``` { "data": { "creamByUUID": { "items": [], "nextToken": null }, "milkBySlug": { "nextToken": null, "items": [ { "price": 7, "id": "17e50e71-ay7d-8382-1098-25c616444444" } ] }, "getCream": { "id": "77bababa-8888-3333-bb2b-857c470d5555", "creamUUID": "6e1a5555-9999-6666-84c1-54e777777777" }, } } ``` I’ve tested another GSI, milkBySlug, to make sure it’s not a broader GSI issue. As you see though, I am correctly getting an item back. More troubling, when I fetch the Cream by its ID, I actually get the Cream record back (“getCream” returns the record with a creamUUID of “6e1a5555-9999-6666-84c1-54e777777777”). So the Cream record exists in DynamoDB, and a direct query of it using an ID returns it. But a GSI creamByUUID with the creamUUID returns an empty array. The resolver is autogenerated by amplify push, and I can share that if it is helpful. The queries I’m making are directly within the AWS console, so it can’t be a coding error on my part… Can anyone help? I'm out of ideas about why the creamByUUID returns an empty array when it should return the Cream record with ID 77bababa-8888-3333-bb2b-857c470d5555
1 answer · 0 votes · 6 views · roundtheatre · asked 4 months ago

Getting @aws_subscribe to work with CDK aws-appsync

For the application I'm building, Amplify is overkill. I've been able to build GraphQL with AppSync and CDK only at this point. Now, I'm running into an issue with @aws_subscribe -- it just does not work for me. I'm wondering if directives like this require Amplify to be added and initialized? Is there any way to initialize and process GraphQL subscriptions with the CDK API directly? My schema.graphql contains these definitions (note the `type Mutation {` wrapper, which was missing from my original paste): ``` type Message{ chatUuid: String! uuid: String! messageUuid: String! text: String! createdAt: AWSDateTime! updatedAt: AWSDateTime! } type Mutation { sendMessage( chatUuid: String! uuid: String! messageUuid: String! text: String! ): Message } type Subscription { onSendMessage(chatUuid: String!): Message @aws_subscribe(mutations: ["sendMessage"]) } ``` And here is how I'm initializing my GraphQL with CDK off of that definition file: ``` const api = new appsync.GraphqlApi(this, `${deployEnv()}-WiSaw-appsyncApi-cdk`, { name: `${deployEnv()}-cdk-wisaw-appsync-api`, schema: appsync.Schema.fromAsset('graphql/schema.graphql'), authorizationConfig: { defaultAuthorization: { authorizationType: appsync.AuthorizationType.API_KEY, apiKeyConfig: { expires: cdk.Expiration.after(cdk.Duration.days(365)), }, }, }, }) ``` For debugging purposes, I tried to attach a resolver to the subscription: ``` export default async function main( event:any ) { console.log({event,}) return {} } ``` In the log I see the following: ``` { event: { arguments: {}, identity: null, source: null, request: { headers: [Object], domainName: null }, prev: null, info: { selectionSetList: [Array], selectionSetGraphQL: '{\n chatUuid\n createdAt\n messageUuid\n text\n updatedAt\n uuid\n}', fieldName: 'onSendMessage', parentTypeName: 'Subscription', variables: {} }, stash: {} } } ``` At this point, I'm running out of things to try. Any help or direction would be highly appreciated.
1 answer · 0 votes · 3 views · AWS-User-4937507 · asked 5 months ago

AppSync RDS HTTP 500 Errors

This morning we started getting intermittent "500 Internal Server Error" responses from AppSync with the message "RDSHttp:{}". We do not seem to get the error on any standard query, but with 1 sub resolver we get the errors 50% of the time, 2 sub resolvers 90%, and 3 sub resolvers 100% of the time. Queries that return basically no data have the same issue. This is not query specific; any query schema-wide that uses a sub resolver currently has this issue. Is some major issue going on currently? This has brought our stack to its knees. Issuing the exact same queries directly to the Data API, outside of AppSync, runs without issue. PLEASE HELP. Using CloudTrail we see the following: ``` { "eventVersion": "1.05", "userIdentity": { "type": "AssumedRole", "principalId": "xxxxx", "arn": "xxxxxx", "accountId": "xxxxx", "accessKeyId": "xxxxx", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "xxxxxx", "arn": "xxxxxx", "accountId": "xxxxxxx", "userName": "xxxxxx" }, "webIdFederationData": {}, "attributes": { "mfaAuthenticated": "false", "creationDate": "2020-11-18T14:02:44Z" } }, "invokedBy": "appsync.amazonaws.com" }, "eventTime": "2020-11-18T14:02:44Z", "eventSource": "rdsdata.amazonaws.com", "eventName": "ExecuteStatement", "awsRegion": "us-east-2", "sourceIPAddress": "appsync.amazonaws.com", "userAgent": "appsync.amazonaws.com", "errorCode": "InternalServerErrorException", "requestParameters": { "continueAfterTimeout": false, "database": "**********", "includeResultMetadata": true, "parameters": [], "resourceArn": "xxxxxx", "schema": "**********", "secretArn": "xxxxxx", "sql": "**********" }, "responseElements": null, "requestID": "9d4266dc-06b3-4eec-940c-540cdaf1cca4", "eventID": "fe8ad37a-db83-4e6d-8793-53f08d29e61f", "eventType": "AwsApiCall", "recipientAccountId": "xxxxxx" } ```
1 answer · 0 votes · 1 view · JasonCassadol · asked a year ago

How to Filter by Query on Nested Fields in AWS AppSync

**Problem and Expected Results** I'm using a proof of concept schema and DynamoDB Table setup to filter on nested field values. I've followed the ideas very generally here (https://medium.com/open-graphql/implementing-search-in-graphql-11d5f71f179) as well as the documentation for **$utils.transform.toDynamoDBFilterExpression** (https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#transformation-helpers-in-utils-transform). The basic idea is this: using the same sort of principles, I'd like to filter by any arbitrarily deep nested field (short of the 32 document path length limit in DynamoDB). The relevant setup looks like this: AppSync schema (apologies for the naming conventions; was supposed to be a quick and dirty PoC): ``` query { listActiveListingsBySubAndFilter( filter: TableTestMasterDataTable_ImportV1FilterInput!, limit: Int, nextToken: String ): TestMasterDataTable_ImportV1Connection } input TableBooleanFilterInput { ne: Boolean eq: Boolean } input TableDataObjectFilterInput { beds: TableFloatFilterInput baths: TableFloatFilterInput } input TableFloatFilterInput { ne: Float eq: Float le: Float lt: Float ge: Float gt: Float contains: Float notContains: Float between: [Float] } input TableIDFilterInput { ne: ID eq: ID le: ID lt: ID ge: ID gt: ID contains: ID notContains: ID between: [ID] beginsWith: ID } input TableIntFilterInput { ne: Int eq: Int le: Int lt: Int ge: Int gt: Int contains: Int notContains: Int between: [Int] } input TableStringFilterInput { ne: String eq: String le: String lt: String ge: String gt: String contains: String notContains: String between: [String] beginsWith: String } input TableTestMasterDataTable_ImportV1FilterInput { id: TableStringFilterInput status: TableStringFilterInput sub: TableStringFilterInput data: TableDataObjectFilterInput } type TestMasterDataTable_ImportV1 { id: String! status: String! sub: String! 
  data: AWSJSON
}

type TestMasterDataTable_ImportV1Connection {
  items: [TestMasterDataTable_ImportV1]
  nextToken: String
}

input UpdateTestMasterDataTable_ImportV1Input {
  id: String!
  status: String
  sub: String!
  data: AWSJSON
}
```

VTL request and response resolvers:

```
## Request resolver
#set( $filter = $ctx.args.filter )
#set( $path = $filter.data )
{
  "version" : "2017-02-28",
  "operation" : "Query",
  "index" : "listings-index", ## GSI on table with HASH: status, RANGE: sub
  "query" : {
    "expression": "#status = :status and #sub = :sub",
    "expressionNames" : {
      "#status" : "status",
      "#sub" : "sub"
    },
    "expressionValues" : {
      ":status" : $util.dynamodb.toDynamoDBJson("Active"),
      ":sub" : $util.dynamodb.toDynamoDBJson($filter.sub.eq)
    }
  },
  "filter" : $util.transform.toDynamoDBFilterExpression($path),
  "limit": $util.defaultIfNull($ctx.args.limit, 20),
  "nextToken": $util.toJson($util.defaultIfNullOrEmpty($ctx.args.nextToken, null))
}

## Response resolver
{
  "items": $util.toJson($ctx.result.items),
  "nextToken": $util.toJson($util.defaultIfNullOrBlank($context.result.nextToken, null))
}
```

Example DynamoDB table item:

```
{
  "_meta": {
    "exposure": 0.08,
    "lastActive": 1557800000,
    "lastUpdated": 1557878400,
    "lastView": 1557878500,
    "numViews": 63,
    "posted": 1557878400
  },
  "buildingID": "325-5th-Ave,-New-York,-NY-10016,-USA",
  "data": {
    "agent": [
      { "agentID": "daeo@gmail.com" },
      { "agentID": "ben@gmail.com" }
    ],
    "amenities": [ "hot tub", "time machine" ],
    "baths": 2,
    "beds": 2
  },
  "id": "325-5th-Ave,-New-York,-NY-10016,-USA#37C:1557878400",
  "status": "Active",
  "sub": "new-york/manhattan/listings",
  "unitNum": "37C",
  "unitRefID": "325-5th-Ave,-New-York,-NY-10016,-USA#37C"
}
```

Based on all of this, if I run the following query:

```
listActiveListingsBySubAndFilter(filter: {
  "sub" : { "eq" : "new-york/manhattan/listings" },
  "data": { "beds": { "eq": 2.0 } }
}) {
  items {
    id
    status
  }
  nextToken
}
```

I would expect to get something like this in return:

```
{
  "data": {
    "listActiveListingsBySubAndFilter": {
      "items": [
        {
          "id": "325-5th-Ave,-New-York,-NY-10016,-USA#37C:1557878400",
          "status": "Active"
        }
      ],
      "nextToken": null
    }
  }
}
```

Note: this is the only expected result, since there's only one item matching these requirements in the database at this time.

**Actual Results**

All of that said, the results I'm getting (or lack thereof) don't make much sense. No matter the query (**data.beds**, **data.baths**), if the field is nested in **data** the return is the same:

```
{
  "data": {
    "listActiveListingsBySubAndFilter": {
      "items": [],
      "nextToken": null
    }
  }
}
```

I've verified that the query works as expected and that the filter expression is formatted appropriately (it works on other, non-nested fields like **id**). What's perplexing is that the filter just doesn't seem to get applied (or maybe it's being applied in some non-intuitive way?). For reference, here's a snippet of a typical CloudWatch log for the above:

```
{
  "context": {
    "arguments": {
      "filter": {
        "sub": { "eq": "new-york/manhattan/listings" },
        "data": { "beds": { "eq": 2 } }
      },
      "limit": 200
    },
    "stash": {},
    "outErrors": []
  },
  "fieldInError": false,
  "errors": [],
  "parentType": "Query",
  "graphQLAPIId": "q7ueubhsorehbjpr5e6ymj7uua",
  "transformedTemplate": "\n\n{\n \"version\" : \"2017-02-28\",\n \"operation\" : \"Query\",\n \"index\" : \"listings-index\",\n \"query\" : {\n \"expression\": \"#status = :status and #sub = :sub\",\n \"expressionNames\" : {\n \t\"#status\" : \"status\",\n \"#sub\" : \"sub\"\n \t},\n \"expressionValues\" : {\n \":status\" : {\"S\":\"Active\"},\n \":sub\" : {\"S\":\"new-york/manhattan/listings\"}\n }\n },\n \"filter\" : {\"expression\":\"(#beds = :beds_eq)\",\"expressionNames\":{\"#beds\":\"beds\"},\"expressionValues\":{\":beds_eq\":{\"N\":2.0}}},\n \"limit\": 200,\n \"nextToken\": null\n}"
}
```

Notice the filter **expressionValues** value in **transformedTemplate**: **{ "N" : 2.0 }** (sans **$util.toDynamoDBJson** formatting) and compare it to the value
in the object in DynamoDB on that field. I've tried everything, including changing the fields themselves to strings and trying various filter operations like **eq** and **contains**, to see if this was some odd type inconsistency, but no luck.

As of now, I have two backup solutions: either "pulling up" all the relevant fields I might want to filter on (cluttering my records with attributes I'd rather keep nested), or creating a new nested type containing only the high-level fields to filter on -- i.e., effectively splitting each record into a record reference and a record filter reference. In that scenario, we'd get some "**Listing**" record whose **data** field value is something like **ListingFilterData** -- e.g.:

```
type Listing {
  id: String!
  sub: String!
  status: String!
  data: ListingFilterData!
}

type ListingFilterData {
  beds: Float!
  baths: Float!
}
```

Both are doable, but I'd rather solve the current issue instead of adding a bunch of extra data to my table. Any thoughts?

Edited by: april-labs on Sep 16, 2019 4:31 PM
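For readers comparing the CloudWatch log above with DynamoDB's wire format: in DynamoDB's low-level JSON, number attribute values are always carried as *strings* inside an `"N"` wrapper. A minimal sketch of that encoding (the helper name `to_dynamodb_json` is hypothetical, not an AppSync util) shows the shape a well-formed value takes:

```python
def to_dynamodb_json(value):
    """Hypothetical sketch of DynamoDB low-level attribute-value encoding.

    Note that numbers are serialized as strings inside an "N" wrapper,
    e.g. {"N": "2.0"}, never as bare JSON numbers like {"N": 2.0}.
    """
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return {"BOOL": value}
    if isinstance(value, (int, float)):
        return {"N": str(value)}  # numbers travel as strings on the wire
    if isinstance(value, str):
        return {"S": value}
    raise TypeError(f"unsupported type: {type(value).__name__}")

print(to_dynamodb_json(2.0))       # {'N': '2.0'}
print(to_dynamodb_json("Active"))  # {'S': 'Active'}
```

Against that shape, the `{"N": 2.0}` visible in the transformed template is malformed DynamoDB JSON, which may explain why the filter silently matches nothing.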
4
answers
0
votes
2
views
april-labs
asked 3 years ago

Appsync Subscription through browser (ReactJS and MQTT.js)

I'm trying to connect to an AppSync subscription through a React app in a web browser. I seem to be connected, but I'm not receiving any messages/packets when I post new messages through the postMessage mutation. The AppSync API uses a Cognito User Pool as its authorization mode.

**_Schema_**

```
schema {
  query: Query
  mutation: Mutation
  subscription: Subscription
}

type Query {
  getUser: String
}

type Mutation {
  postMessage(groupId: ID!, message: String!): Message!
}

type Subscription {
  postedMessage(groupId: ID!): Message
    @aws_subscribe(mutations: ["postMessage"])
}

type Message {
  groupId: ID!
  id: ID!
  CreateTime: Int!
  message: String!
  author: String!
}
```

**_ReactJS and MQTT.js_**

```
import mqtt from 'mqtt'
...
subToNewMessages() {
  var groupId = this.props.group.id
  var query = `subscription PostedMessage($groupId: ID!) {
    postedMessage(groupId: $groupId){id, CreateTime, message, author}
  }`
  httpReq(
    {
      method: 'POST',
      body: JSON.stringify({ query: query, variables: { groupId } }),
      // the graphql endpoint sits behind an API Gateway which goes through an authorizer
      url: 'https://mygraphqlendpoint.com',
      headers: [
        ['Content-Type', 'application/json'],
        // SessionId is traded for a JWT token through an API Gateway Authorizer
        ['SessionId', localStorage.getItem('sessionId')]
      ]
    },
    xhr => {
      console.log(xhr)
      var parsedResponse = JSON.parse(xhr.responseText).extensions
        .subscription.mqttConnections
      console.log(parsedResponse)
      var client = mqtt.connect(parsedResponse[0].url, {
        clientId: parsedResponse[0].client
      })
      client.on('connect', function() {
        console.log('connected')
        client.subscribe(parsedResponse[0].topics[0], function(err, granted) {
          if (!err) {
            console.log(granted)
          } else {
            console.log(err)
          }
        })
      })
      client.on('error', function(err) { console.log(err) })
      client.on('reconnect', function() { console.log('reconnecting') })
      client.on('message', function(topic, message, packet) {
        console.log(topic)
        console.log(message.toString())
        console.log(packet)
      })
      client.on('packetreceive',
function(packet) { console.log(packet) }) } ) } ``` **__parsedResponse_ from graphql endpoint POST_** ``` 0: client: "lidtp2k5hvfchaafxmbajcoqfi" topics: ["534527562705/wog3wai35vf3tgxehzrho3kdga/postedMess…02cadae2890d3c30e272230322b361d38d21268c5c99b2b80"] url: "wss://a307bjgfbycsj5-ats.iot.us-east-1.amazonaws.com/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAS3UEXNWQVC7VBUYW%2F20190705%2Fus-east-1%2Fiotdevicegateway%2Faws4_request&X-Amz-Date=20190705T215536Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=383fed2e3e9df8d53f545bae8991ac8cc497918ae90d101290d69cc75a8a973c&X-Amz-Security-Token=AgoJb3JpZ2luX2VjEMb%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCICcf69zsU2uq4wdLCXtunExOvRUPy1PZEcuhbarOHQS8AiEAtG4z413Odhrv5Vr8vE78Cmx2G0tJAiA3EXQdtD8WHf0q%2FAMIHxABGgwxOTY3NzMxMTMyNDkiDHwJIpkOnfX6SbWf9irZAwP7ApoQf7KqHyEiIJsvBdoLZDEcYbGBg1h2N2Hk%2BnnMXwzg4vA6mptzNrwLHUOYcWCcjmcG6f0lHUIxMdcPIIdaLd6%2BvqcEl3B%2B%2BDUqIJWWK1MoLzOcEO3rmThuAhU1akg16xS6gACahSfvrCL6Xre1DBeglvQX69%2FYesAdBtHCDg08hYg3bjxuWmw2MK2jzgDFnlY2fov5zIc0G%2FiQrhFgOFCCyHwh3u%2FIXz1Oi1qZx%2F1yGjZE5UKG1Ksoar4jg2lHHQ5M8oxu74VWLoSs1tB9o1Ex50eLqn%2BBe821D7kCbg86hPKdQ109Lq%2BvosOq0fFdg4atUD4f%2F%2FAW2scnjV8s%2BNs2HlR1jCuUbZll3HwNKOeQwK8XJWsBjAI7eKSFivyJ%2B29QZHyqT%2F8F4G1qttbLpnKjoLLLNQRZ6y31kIE%2FsYa107VhMoxkRRpBl55kvU4KF0aoyNbYj5q4sYTc0ldMgs4fvGvXHc5hWv0ERcQ260lIF2O90rqwZcNgIjCtX1M1B%2FQdBcq7POyc23CH9WzAogxNSUWOCmV0Wd%2BWE4%2BN9H40FWMCjZO4QPWXejXnlzwrMrsAjdn1Zh4tHqDItcCdmNhOvgpPUNmczsIxmBRsgnGLnDZsQqX9MNiO%2F%2BgFOrQBAyptNPG6V65CDeboZaueAzoahpVhupdFEGxUTQq86Tu1l3Y%2BxyenEgY93qWobbevcp%2B5j63ciPNEAepn%2FUBrUtBblbEiyzF84CK40aQnGrWapmG5A1zR5qLcBPUh8fFNI3W2nXifoi%2F%2BOr%2BXnp69xB1F60L6yVrd7meVOD8P55hHggbylxgIA8YlpdUnqoFky1HfHHYAJKJUzYKAIE8WPOiKopYrO6TSP3X0xREPLS3nWWD0" __proto__: Object length: 1 __proto__: Array(0) ``` **_Responses received in browser after connecting to subscription, _connack_, and subscribing to topic, _suback__** ``` e.exports {cmd: "connack", retain: false, qos: 
0, dup: false, length: 2, …} cmd: "connack" dup: false length: 2 payload: null qos: 0 retain: false returnCode: 0 sessionPresent: false topic: null __proto__: Object e.exports {cmd: "suback", retain: false, qos: 0, dup: false, length: 3, …} cmd: "suback" dup: false granted: [0] length: 3 messageId: 37880 payload: null qos: 0 retain: false topic: null __proto__: Object ```
2
answers
0
votes
0
views
NabeelTheDev
asked 3 years ago

Appsync resolver conditional update of AWSJSON attribute

I have a mutation for an object with multiple non-required fields that may be updated in a single call. One of the attributes, **data**, is defined as AWSJSON in the schema. To build the update, I check the supplied values with statements like **#if( !$util.isNull(${context.arguments.input.data}) )**. If there is data for the attribute, the necessary values are added to maps to build the final SET expression: the attribute-name map (**#data** => data, needed for attributes whose names are reserved words) and the value map (**:data** => value). The final update uses **$utils.toJson** to apply the constructed maps. The problem is, when I supply data for the AWSJSON attribute, I get this error:

```
"Expected JSON object for attribute value '$[update][expressionValues][:data]' but got 'STRING' instead."
```

All other attributes work as long as the JSON attribute is not supplied. However, if I instead supply the expressionValues inline -- not using **$utils.toJson**, but then also not able to conditionally add attributes to the update -- it works as expected. Am I using the wrong method to apply the collected expressionValues map to the final update statement; maybe a different **$utils** method? Is there any other workaround for handling a collection of non-required attributes in the mutation? At worst I can make a separate mutation just for the JSON attribute, but making two calls instead of one is clearly not ideal.
I can make a call from the AppSync Console like this:

```
mutation UpdateMyItem {
  updateMyItem(input: { id: "ITEM-ID-HERE", data: "[{\"xyz\": 101}]" }) {
    id
    data
  }
}
```

Resolver:

```
{
  "version": "2017-02-28",
  "operation" : "UpdateItem",
  "key" : {
    "id" : $util.dynamodb.toDynamoDBJson($context.arguments.input.id)
  },

  ## Set up some space to keep track of things we're updating
  #set( $expSet = {} )
  #set( $expNames = {} )
  #set( $expValues = {} )

  ## updatedAt
  #set( $now = $util.time.nowISO8601() )
  $!{expSet.put("updatedAt", ":updatedAt")}
  $!{expValues.put(":updatedAt", { "S" : "$now" })}

  ## data
  #if( !$util.isNull(${context.arguments.input.data}) )
    $!{expSet.put("#data", ":data")}
    $!{expNames.put("#data", "data")}
    $!{expValues.put(":data", $util.dynamodb.toDynamoDBJson($context.arguments.input.data))}
  #end

  ## other redacted optional input arguments of various types would be here

  ## build the expression
  #set( $expression = "SET" )
  #foreach( $entry in $expSet.entrySet() )
    #set( $expression = "${expression} ${entry.key} = ${entry.value}" )
    #if( $foreach.hasNext )
      #set( $expression = "${expression}," )
    #end
  #end

  "update" : {
    "expression": "${expression}",
    "expressionNames": $utils.toJson($expNames),
    ## this fails and results in an error:
    ## "Expected JSON object for attribute value '$[update][expressionValues][:data]' but got 'STRING' instead."
    "expressionValues": $util.toJson( $expValues )
    ## this works and all attributes are updated
    ## "expressionValues": {
    ##   ":updatedAt" : $util.dynamodb.toDynamoDBJson($now),
    ##   ":data" : $util.dynamodb.toDynamoDBJson($context.arguments.input.data)
    ## }
  }
}
```
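A plain-Python analogy of what the error message suggests is happening (an assumption about the cause, not confirmed by the source): if the `:data` entry in the map holds an already-serialized JSON *string*, then serializing the whole map again produces a quoted string where the service expects a nested object:

```python
import json

# Hypothetical illustration: the attribute value stored as an
# already-serialized JSON string (double encoding)...
exp_values_string = {":data": json.dumps({"S": "[{\"xyz\": 101}]"})}

# ...versus the same attribute value stored as a plain map.
exp_values_map = {":data": {"S": "[{\"xyz\": 101}]"}}

# Serializing the whole expressionValues map in one go:
print(json.dumps(exp_values_string))  # :data comes out as a STRING
print(json.dumps(exp_values_map))     # :data comes out as an object
```

The first form round-trips `:data` as a string, mirroring the "got 'STRING' instead" complaint; the second round-trips it as an object, mirroring the inline variant that works.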
2
answers
0
votes
0
views
snewton
asked 3 years ago

request resolver with es...trying to limit filter results

I have a query like this:

```
{
  "version": "2017-02-28",
  "operation": "GET",
  "path": "/articlesnew1/json/_search",
  "params": {
    "body": {
      "size": 50,
      "query": {
        "bool": {
          "should": [
            { "exists": { "field": "field_categories.__target_path" } },
            { "exists": { "field": "field_plays.__target_path" } },
            { "terms": { "field_categories.__target_path": ["he/taxonomy_term/categories/837","he/taxonomy_term/categories/840","he/taxonomy_term/categories/841","he/taxonomy_term/categories/842"] } },
            { "terms": { "field_plays.__target_path": ["he/taxonomy_term/plays/757","he/taxonomy_term/plays/758"] } }
          ],
          "must_not": [
            { "term": { "field_exclude_from_syndication.value": 1 } }
          ],
          "filter": [
            { "terms": { "entity.bundle": ["he_activity_highlight","he_company","he_event","he_exclusive","he_industry_voice","he_new_financing","he_news","he_opinion","he_person","he_publication","he_transaction","he_under_40","he_whos_who"] } },
            { "range": { "entity.changed.timestamp": { "gte": "${context.arguments.from}", "lte": "${context.arguments.to}" } } }
          ]
        }
      }
    }
  }
}
```

I do not want results where neither "field_categories.__target_path" nor "field_plays.__target_path" is present and matching one of the listed values. But I am still getting results where both are empty, like this:

```
"field_categories": [],
"field_plays": [],
```

The expected results might look like this:

```
"field_categories": [{ "__target_path": "he/taxonomy_term/categories/850" }],
```

...or this:

```
"field_plays": [{ "__target_path": "he/taxonomy_term/plays/772" }],
```

Any ideas?

Edited by: cjokinen on Jun 10, 2019 2:30 PM Edited by: cjokinen on Jun 10, 2019 2:31 PM Edited by: cjokinen on Jun 10, 2019 2:32 PM
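One detail worth checking (an assumption about the cause, based on Elasticsearch's bool-query defaults rather than anything in the question): when a `bool` query also contains `filter` or `must` clauses, the `should` clauses become purely optional scoring hints unless `minimum_should_match` is set. A trimmed sketch of the same body with that parameter added:

```python
import json

# Sketch: the bool query from above (abbreviated), with minimum_should_match
# set so that at least one "should" clause must match. Without it, a bool
# query that has a "filter" treats "should" as optional.
body = {
    "size": 50,
    "query": {
        "bool": {
            "minimum_should_match": 1,
            "should": [
                {"exists": {"field": "field_categories.__target_path"}},
                {"exists": {"field": "field_plays.__target_path"}},
            ],
            "filter": [
                {"terms": {"entity.bundle": ["he_news"]}},  # abbreviated list
            ],
        }
    },
}
print(json.dumps(body, indent=2))
```

With this shape, documents where both target-path fields are empty fail every `should` clause and are excluded.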
2
answers
0
votes
0
views
cjokinen
asked 3 years ago

response resolver nesting foreach does not appear to work

I have a response resolver that looks like this (snippet; note that this code runs inside a foreach):

```
#elseif( $entry.get("_source").entity.bundle == "he_transaction" )
  #if( $velocityCount > 1 ) , #end
  #set( $counties = [] )
  #if( !$entry.get("_source").field_locations.isEmpty() )
    #if( !$entry.get("_source").field_locations.counties.isEmpty() )
      #foreach( $county in $entry.get("_source").field_locations.counties )
        $util.qr( $counties.add( { "name": $county } ) )
      #end
    #end
  #end
  $util.toJson({
    "__typename": "Transaction",
    "id": $entry.get("_source").entity.nid,
    "title": $entry.get("_source").title[0].value,
    "status": $entry.get("_source").entity.status,
    "uri": { "uri": $entry.get("_source").entity.uri },
    "changed": $entry.get("_source").entity.changed.value,
    "closeDate": $entry.get("_source").field_close_date[0].value,
    "companies": [],
    "buyers": [],
    "marketers": [],
    "locationBasin": $entry.get("_source").field_location_basin[0].value,
    "locationField": $entry.get("_source").field_location_field[0].value,
    "locations": [{
      "countryCode" : $entry.get("_source").field_locations[0].country_code,
      "administrativeArea" : $entry.get("_source").field_locations[0].administrative_area,
      "locality": $entry.get("_source").field_locations[0].locality,
      "counties": $counties
    }],
    "price": { "currency_code": "USD", "number": 0.0 },
    "types": [],
    "roomOpening": $entry.get("_source").field_room_opening_date.value,
    "relatedContents": []
  })
```

But when I run the query, **counties** is empty, even though I can see the result from ES has values in it, like this:

```
"field_locations": [
  {
    "country_code": "US",
    "administrative_area": "OK",
    "counties": [
      "Garvin",
      "Grady",
      "Logan",
      "Mayes",
      "McClain",
      "Oklahoma",
      "Texas & Woodward Cos."
    ]
  }
],
```

Edited by: cjokinen on Jun 7, 2019 6:54 AM
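A plain-Python analogy of the data shape involved (an assumption about the cause: `field_locations` in the ES result is a *list*, so `counties` lives on each element, e.g. `field_locations[0]`, rather than on the list itself):

```python
# Hypothetical miniature of the ES "_source" document from the question.
source = {
    "field_locations": [
        {
            "country_code": "US",
            "administrative_area": "OK",
            "counties": ["Garvin", "Grady", "Logan"],
        }
    ]
}

# "counties" is not an attribute of the list itself...
locations = source["field_locations"]
print(hasattr(locations, "counties"))  # False

# ...it lives on each element, so index or iterate the list first:
counties = [{"name": c} for loc in locations for c in loc.get("counties", [])]
print(counties)  # [{'name': 'Garvin'}, {'name': 'Grady'}, {'name': 'Logan'}]
```

The resolver's `$entry.get("_source").field_locations.counties` reads the attribute off the list, which is why the inner foreach sees nothing even though the per-element data is populated.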
2
answers
0
votes
0
views
cjokinen
asked 3 years ago

Clarification on AppSync pricing when using pipelines

In my case I'm using DynamoDB to back an AppSync endpoint and Cognito for auth. The AppSync pricing page states: "You are billed separately for query and data modification operations, and for performing real-time updates on your data". I'd like to confirm that "query and data modification operations" applies only to DynamoDB operations, and not to other data manipulations that might be done within functions along a pipeline. So, in the following example, Function 1 is not billable?

```
- Pipeline Before : some authentication checks
- Function 1 : data validation, stashes result (datasource type = NONE)
- Function 2 : DynamoDB PutItem (datasource type = AMAZON_DYNAMODB)
- Pipeline After : returns $ctx.result
```

And, in the following completely ridiculous, manufactured-to-make-a-point example, Functions 1-9 would not be billable?

```
- Pipeline Before : some authentication checks
- Function 1 : data validation, stashes result (datasource type = NONE)
- Function 2 : something else, stashes result (datasource type = NONE)
- Function 3 : something else, stashes result (datasource type = NONE)
- Function 4 : something else, stashes result (datasource type = NONE)
- Function 5 : something else, stashes result (datasource type = NONE)
- Function 6 : something else, stashes result (datasource type = NONE)
- Function 7 : something else, stashes result (datasource type = NONE)
- Function 8 : something else, stashes result (datasource type = NONE)
- Function 9 : something else, stashes result (datasource type = NONE)
- Function 10 : DynamoDB PutItem (datasource type = AMAZON_DYNAMODB)
- Pipeline After : returns $ctx.result
```
2
answers
0
votes
0
views
cameroncfm
asked 3 years ago

Mutation through API fails, but works from web portal

Hey! I'm trying to get some help figuring out why a mutation works in the AWS AppSync web portal, but fails when I try to use the API from one of my Lambda tests.

Schema:

```
type OrganizationLink
  @model
  @searchable
  @auth(rules: [
    {allow: groups, groups: ["Admin"]},
    {allow: groups, groupsField: "readGroups", queries: [get, list], mutations: null},
    {allow: groups, groupsField: "writeGroups", mutations: [update, delete], queries: null}
  ]) {
  id: ID!
  type: OrganizationType
  school: School @connection(name: "SchoolOrganization")
  district: District @connection(name: "DistrictOrganization")
  organization: Organization @connection(name: "OrganizationConnection")
  readGroups: [String]
  writeGroups: [String]
}
```

Auto-generated schema from backend/.../schema.graphql:

```
...
createOrganizationLink(input: CreateOrganizationLinkInput!): OrganizationLink
...
onCreateOrganizationLink: OrganizationLink @aws_subscribe(mutations: ["createOrganizationLink"])
...
input CreateOrganizationLinkInput {
  id: ID
  type: OrganizationType
  readGroups: [String]
  writeGroups: [String]
  organizationLinkSchoolId: ID
  organizationLinkDistrictId: ID
  organizationLinkOrganizationId: ID
}
```

Mutation:

```
mutation createLink($organizationID: ID, $schoolID: ID){
  createOrganizationLink(input:{
    type: School
    organizationLinkOrganizationId: $organizationID
    organizationLinkSchoolId: $schoolID
  }) {
    id
  }
}
```

API call method:

```
const resp = await fetch(graphqlApi, {
  body,
  credentials: 'include',
  headers: {
    accept: '*/*',
    authorization: AccessToken,
    'content-type': 'application/json',
  },
  method: 'POST',
  mode: 'cors',
});
```

API request body:

```
{
  "operationName": "createOrganizationLink",
  "query": "\n\t\tmutation createLink($organizationID: ID, $schoolID: ID){\n\t\t createOrganizationLink(input:{\n\t\t \ttype: School\n\t\t\torganizationLinkOrganizationId: $organizationID\n\t\t\torganizationLinkSchoolId: $schoolID\n\t\t }) { id }\n\t}",
  "variables": {
    "organizationId": "5432eca5-1018-4bf1-ba60-679557ca1e3c",
    "schoolId": "ed1fbead-1906-4cc8-b5b3-64b3e9995501"
  }
}
```

Response error:

```
url: 'https://XXXXXXXXXX.appsync-api.us-west-2.amazonaws.com/graphql',
status: 400,
statusText: 'Bad Request',
errorType: 'BadRequestException',
message: 'No operation matches the provided operation name createOrganizationLink.' } ] }
```

I don't understand how the operation isn't being found when I'm using the exact same operation in the web portal. The operation to create an organization link is clearly in the schema as well. I use this same function for other mutations prior to this one and they work flawlessly. Any hints would be greatly appreciated; please let me know if any additional information is needed.

Edited by: markta on Apr 19, 2019 2:03 PM
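For context on the error text: a GraphQL server matches the request's `operationName` against the operation names *defined inside the query document* (here, `mutation createLink ...`), not against field names like `createOrganizationLink`. A small sketch of that lookup (the regex-based extractor is a simplification, not a real GraphQL parser):

```python
import re

def defined_operations(document: str):
    """Simplified extractor for named operations in a GraphQL document.

    A real server parses the document; this regex just pulls names that
    follow the query/mutation/subscription keywords.
    """
    return re.findall(r"\b(?:query|mutation|subscription)\s+(\w+)", document)

doc = ("mutation createLink($organizationID: ID, $schoolID: ID) "
       "{ createOrganizationLink(input: {}) { id } }")
names = defined_operations(doc)
print(names)  # ['createLink']

# An operationName of "createOrganizationLink" matches no definition here,
# which is the kind of mismatch the 400 response reports.
print("createOrganizationLink" in names)  # False
```

The web portal typically sends the selected operation's own name, which is one reason the same document can succeed there but fail from a hand-built request body.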
1
answers
0
votes
0
views
markta
asked 3 years ago