Getting GameLift Client setup with Unreal for Flexmatch


Hi All,

I have been trying to get the basics of GameLift up and running with UE4.21 so I can start implementing the code needed for FlexMatch. Upon reviewing the FlexMatch Integration Roadmap and setting up a FlexMatch rule set, it seems the only way to call the functionality needed to create a queue (e.g. CreateGameSessionQueueRequest() or GameSessionQueueDestination()) is either to build a GameLift client ourselves from the aws-cpp-sdk, or to build on the work (and headaches) of others who have tried to do just that: the YetiTech Studios GameLift Client SDK.

I decided to start with the YetiTech Studios GameLift Client SDK, and by following these wonderful tutorials I got it set up well enough to connect two clients to a hosted fleet successfully, although I did run into some of the issues mentioned here, such as the player timeout. Then I tried to set up a queue for FlexMatch in my GameInstance.cpp:

void UT_GameInstance::StartRequest()
{
	auto QueueRequest = Aws::GameLift::Model::CreateGameSessionQueueRequest();

	auto Destination = Aws::GameLift::Model::GameSessionQueueDestination();

	Aws::Vector<Aws::GameLift::Model::GameSessionQueueDestination> Destinations = { Destination };
}

When I built it I got unresolved external symbol linker errors for Aws::Malloc and Aws::Free, which occur when using Aws::Vector, and that is what I am trying to resolve in this post.

What I have tried to fix this:

If anyone has any ideas about what might be causing Aws::Malloc and Aws::Free to not link, I would appreciate any help.
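For what it's worth, Aws::Malloc and Aws::Free are defined in aws-cpp-sdk-core, not in the GameLift library, so unresolved symbols for exactly those two names usually mean the core library isn't being linked alongside aws-cpp-sdk-gamelift. A minimal standalone sketch that exercises the same code path (assuming both libraries are on the linker path; this is an illustration of the dependency, not a fix for any particular build setup):

```cpp
// Sketch: Aws::Vector allocates through Aws::Malloc/Aws::Free, which live
// in aws-cpp-sdk-core. If only aws-cpp-sdk-gamelift is linked, this file
// produces the same unresolved-external-symbol errors described above.
#include <aws/core/Aws.h>
#include <aws/core/utils/memory/stl/AWSVector.h>
#include <aws/gamelift/model/GameSessionQueueDestination.h>

int main()
{
    Aws::SDKOptions Options;
    Aws::InitAPI(Options); // must run before any SDK objects are created

    {
        // This vector uses the SDK's custom allocator internally.
        Aws::Vector<Aws::GameLift::Model::GameSessionQueueDestination> Destinations;
        Destinations.emplace_back();
    }

    Aws::ShutdownAPI(Options);
    return 0;
}
```

If this links and runs on its own but the Unreal module does not, the missing piece is likely the core library (and its import settings) in the module's build rules rather than anything in the game code itself.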

Also, please let me know if my approach to this is flawed. After reading some posts like this, it seems like maybe I wouldn't want to create/add a GameLift client that creates a queue for FlexMatch in the first place, as it may open one up to DDoS attacks. But in that case, why are the queue-related functions only in the GameLift Client SDK, or at least why are the required aws-cpp-sdk libraries not included by default in the GameLift Server SDK? Furthermore, why does the documentation for setting up queues say "In a game client, new game sessions are started with queues by using placement requests.", which specifies the client, yet only show a brief implementation for the console or the AWS CLI? Again, any help is appreciated. Cheers! :upside_down_face:

asked 3 years ago · 35 views
8 Answers

Some hopefully useful background:

The GameLift Client SDK (shipped as part of the AWS C++ SDK) is not for 'game clients' per se. Client in this sense is the general AWS term: a client for calling some AWS service. There are many things in the GameLift Client SDK that you would not want to call from an actual game client, because you really shouldn't be creating critical resources from game clients.

The GameLift Server SDK is a special SDK to enable your game server to interact with GameLift. It's not a general client for the GameLift service.

You really want to separate things into 3 piles:

  1. Actions required to deploy your resources, i.e. queue creation, fleet creation, etc. These ideally should be modeled in some deployment script/service that stands up your GameLift resources as required.
  • The nice thing about queues is you can create one and then add/remove fleets from the queue as required without ever having to change your queue id
  2. Actions that you want your game clients to do. Ideally these are things like making a matchmaking request or connecting to a server, and even some of these you may want to hide behind a server/serverless boundary. Your clients should have the minimal set of permissions required to do their job, e.g. they can only call StartMatchmaking in us-east-1. For this you would need things from the AWS C++ SDK (which includes a client for the AWS GameLift service).

  3. Actions you want your game server to do. For this you would need the Server SDK.

You may need to include the AWS C++ SDK on your server as well, because you want to talk to other AWS resources or call general GameLift functions like describe-game-session-queues.
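To make pile 1 concrete, here is a hedged sketch of a deployment-time step that creates a queue with the GameLift client from the AWS C++ SDK. The region, queue name, and fleet ARN are placeholders; this kind of code belongs in a deployment script or service with admin credentials, never in a shipped game client:

```cpp
// Sketch of a "pile 1" deployment action: create a game session queue.
// All identifiers below (region, queue name, fleet ARN) are placeholders.
#include <aws/core/Aws.h>
#include <aws/gamelift/GameLiftClient.h>
#include <aws/gamelift/model/CreateGameSessionQueueRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions Options;
    Aws::InitAPI(Options);
    {
        Aws::Client::ClientConfiguration Config;
        Config.region = "us-east-1"; // placeholder region
        Aws::GameLift::GameLiftClient Client(Config);

        Aws::GameLift::Model::GameSessionQueueDestination Destination;
        Destination.SetDestinationArn(
            "arn:aws:gamelift:us-east-1::fleet/fleet-EXAMPLE"); // placeholder ARN

        Aws::GameLift::Model::CreateGameSessionQueueRequest Request;
        Request.SetName("MyGameQueue");  // placeholder queue name
        Request.SetTimeoutInSeconds(60); // give up on a placement after 60s
        Request.AddDestinations(Destination);

        auto Outcome = Client.CreateGameSessionQueue(Request);
        if (!Outcome.IsSuccess())
        {
            std::cerr << Outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(Options);
    return 0;
}
```

Because the queue keeps its identity while fleets are added and removed, this typically runs once per environment rather than on any hot path.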

This may be useful:

answered 3 years ago


Thanks for the reply and background.

This link really helped visualize the architecture:

Some follow up questions:

  1. For examples of high-level architectures of pile 2 actions, does this sound reasonable and relatively secure? The game client (say an iOS device):
  • would authenticate by sending POST requests to an AWS API Gateway that is integrated with AWS Cognito to handle the actual authentication?
  • would create a StartMatchmakingRequest when the user hits the appropriate button, which would in turn be seen as a POST to an AWS API Gateway and could then be handled by, say, an AWS Lambda function (likely C++ using the AWS GameLift client included in the aws-cpp-sdk) that sends the response back to the game client?
  2. Also in reference to pile 2, this diagram shows the interaction between the client app, the game server, and the GameLift service. Would it be reasonable to think that practically all requests coming from the client app on the left should go through a "client service", for example an API Gateway with Lambda functions, which then calls the GameLift service?

  3. Where are the actions you're describing in pile 1 actually being called from and/or executed? On the game server (but not in the game session), somewhere near InitSDK()? Or in a completely separate service that is not in this graph, like another API Gateway and Lambda integration, or maybe a separate EC2 instance that spins up new queues and fleets whenever it is called from, say, the client services when they notice a spike in legitimate traffic? But aren't the auto-scaling features meant to deal with the increase in queues and fleets (or at least EC2 instances) automatically anyway, without us having to worry about how to scale it in a sustainable way?

Sorry for the barrage of questions, just trying to get my head around a possible architecture and how AWS's services slot in. Appreciate any wisdom you have on that matter. Thanks!

answered 3 years ago

Any news on this @REDACTEDUSER? Thanks!

answered 3 years ago


answered 3 years ago

Sorry, I somehow missed your reply

  1. Basically sounds reasonable. The iOS client would talk to Cognito to get AWS creds to talk to your service API (i.e. API Gateway); use Cognito to handle the auth exchange and get short-term creds for clients, i.e. Player Authentication (of which there are lots of examples out there).

  2. Any requests from the client that create/delete resources should probably be mediated through your service API.

  3. For the deployment pile, I meant the things that probably aren't changing dynamically, i.e. queues, builds, roles, policies, matchmaking configs, etc., and even fleet creation. This can be managed by you / a deployment service / CloudFormation (pending an update to GameLift's resources).

Your game will need to know the queue or queues it needs to talk to, along with matchmaking configs etc.

For management of processes/game sessions, i.e. the capacity of your infrastructure, use GameLift's target tracking. Apologies for any confusion.
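As a sketch of point 2, here is roughly what the backend side of a mediated matchmaking request could look like: the game client posts to your service API, and code like this (for example inside a Lambda handler) makes the actual StartMatchmaking call with the backend's credentials. The configuration name and player id are placeholders supplied by your own config and auth layer:

```cpp
// Hedged sketch of a backend-mediated StartMatchmaking call ("pile 2").
// Configuration name and player id are placeholders; credentials come
// from the hosting environment (e.g. a Lambda execution role).
#include <aws/core/Aws.h>
#include <aws/gamelift/GameLiftClient.h>
#include <aws/gamelift/model/StartMatchmakingRequest.h>
#include <aws/gamelift/model/Player.h>
#include <iostream>

int main()
{
    Aws::SDKOptions Options;
    Aws::InitAPI(Options);
    {
        Aws::GameLift::GameLiftClient Client; // uses the environment's credentials

        Aws::GameLift::Model::Player Player;
        Player.SetPlayerId("player-123"); // placeholder, from your auth layer

        Aws::GameLift::Model::StartMatchmakingRequest Request;
        Request.SetConfigurationName("MyMatchmakingConfig"); // placeholder
        Request.AddPlayers(Player);

        auto Outcome = Client.StartMatchmaking(Request);
        if (Outcome.IsSuccess())
        {
            // Hand the ticket id back to the client so it can poll for status.
            std::cout << Outcome.GetResult().GetMatchmakingTicket().GetTicketId()
                      << std::endl;
        }
    }
    Aws::ShutdownAPI(Options);
    return 0;
}
```

The client itself never holds permissions for StartMatchmaking; it only ever talks to your service API.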

answered 3 years ago


I appreciate the perspective.

You mentioned that queues or fleet creation most likely are not changing dynamically. This sparked a few more questions that are not so clear from the documentation:

  1. If fleets can scale/contract their number of instances, and therefore game sessions, what would be the purpose of having multiple fleets? Just to have at least one fleet in each region to improve player ping times?

  2. Similarly, what would be the point of having multiple queues? Especially if it's a multi-region queue.


answered 3 years ago
  1. There are many reasons for multiple fleets; a lot depends on customer use cases etc.:
  • Latency (as you pointed out)
  • Redundancy (just in case PDX goes down you can easily shift to IAD etc.)
  • Build updates
    • Fleets are tied to immutable builds (unless they are Realtime fleets using scripts), so you may want to deploy to a new pre-prod/test fleet first that takes a small fraction of your traffic. Just add it to your queue destinations and see what happens.
  • Spot fleets
    • You may want to have a Spot and an On-Demand fleet in a region and take advantage of FleetIQ to prefer Spot over On-Demand, while keeping some available capacity for when Spot is unavailable.

Multiple queues tend to have more to do with what deployment strategy you need. But it's worth remembering that queues support much higher game session creation rates, have a notion of player latency, and will use the whole FleetIQ system to find the most stable/cost-effective server placement for you.

Some common scenarios I can think of:

  • Test / Prod queues
    • Have queues for testing and for prod to keep traffic separate and enable feature release.
  • Regionalization / separation of customer bases / build features
    • e.g. for a 'global game' there may be features that need to be kept separate/run separately, so you may decide to have a NA queue because you have loot boxes in your build but don't offer that feature in the EU. Or your build for China is very different from the build for the rest of the world. You may want to enforce this separation at the queue level.
  • Breaking features in your game
    • If you have non-backwards-compatible features, you may need new queues/fleets, and only your updated clients can talk to your new queue. Traffic slowly moves to your new servers and drains from the old servers in your legacy fleet etc.
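The "add the new fleet to your queue destinations" step from the build-update scenario above can be sketched with UpdateGameSessionQueue. Note that the update replaces the whole destination list, so the existing production fleet has to be included too; queue name and fleet ARNs are placeholders:

```cpp
// Hedged sketch: append a new test fleet to a queue's destinations.
// UpdateGameSessionQueue replaces the destination list wholesale, so the
// existing production fleet is listed first and the test fleet second.
// All names and ARNs below are placeholders.
#include <aws/core/Aws.h>
#include <aws/gamelift/GameLiftClient.h>
#include <aws/gamelift/model/UpdateGameSessionQueueRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions Options;
    Aws::InitAPI(Options);
    {
        Aws::GameLift::GameLiftClient Client;

        Aws::GameLift::Model::GameSessionQueueDestination ProdFleet;
        ProdFleet.SetDestinationArn(
            "arn:aws:gamelift:us-east-1::fleet/fleet-PROD-EXAMPLE"); // placeholder

        Aws::GameLift::Model::GameSessionQueueDestination TestFleet;
        TestFleet.SetDestinationArn(
            "arn:aws:gamelift:us-east-1::fleet/fleet-TEST-EXAMPLE"); // placeholder

        Aws::GameLift::Model::UpdateGameSessionQueueRequest Request;
        Request.SetName("MyGameQueue"); // placeholder queue name
        Request.AddDestinations(ProdFleet);
        Request.AddDestinations(TestFleet);

        auto Outcome = Client.UpdateGameSessionQueue(Request);
        if (!Outcome.IsSuccess())
        {
            std::cerr << Outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(Options);
    return 0;
}
```

Once the test fleet has proven itself, the same call can promote it or drop the legacy fleet from the list.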
answered 3 years ago

Thank you! This was very helpful.

answered 3 years ago
