How can I deploy a JumpStart foundation model like Llama 2 as an endpoint through CloudFormation?


I have created an LLM inference endpoint using SageMaker Studio. Creating the endpoint in Studio involves choosing a model from the various JumpStart models, configuring permissions, and selecting configurations for the EC2 instance it runs on. The whole setup takes just a couple of minutes.

How can I repeat this with CloudFormation? I see that one way to do this is with the Python SDK, but I'd like to make this into a stack I can stand up or tear down.

2 Answers

Hi Tim,

Here is an example of a CDK deployment of a GenAI model through SageMaker JumpStart -

You may want to go through this blog to get a fair idea of how you can make use of CDK (which uses CloudFormation behind the scenes) to deploy the infra and code.

Thanks, Arjun

answered 2 months ago

Hello Tim,

You can refer to the following AWS Machine Learning Blog, which discusses the deployment of different models provided by Amazon SageMaker JumpStart (such as the Stable Diffusion model for image generation and the FLAN-T5-XL model from Hugging Face for natural language understanding (NLU) and text generation) via the AWS CDK.

[+] Deploy generative AI models from Amazon SageMaker JumpStart using the AWS CDK

Here’s the CDK sample referred to in the above blog -

Hope this helps!

answered 2 months ago
