AWS re:Post Live | Accelerating Foundation Model Evaluation with Amazon SageMaker Clarify and fmeval - Live on October 7th!

2 minute read
Content level: Foundational

Join us live on Twitch.tv on Monday, October 7th to hear us discuss Accelerating Foundation Model Evaluation with Amazon SageMaker Clarify and fmeval


Welcome to our Community Article for the upcoming AWS re:Post Live show scheduled for Monday, October 7th at 11 am PST / 2 pm EST on twitch.tv/aws! On this episode, join Sr. Technical Account Manager Jay Busch and Principal Technical Account Manager Rajakumar Sampathkumar as they discuss Accelerating Foundation Model Evaluation with Amazon SageMaker Clarify and fmeval! During the show, we will dive deep into this article written by show guest Rajakumar and detail how you can get the most out of your machine learning practices. If you have any questions, please add them in the comments section at the bottom of this article and we will answer them as part of our live show on Monday, October 7th over on Twitch. If your question is selected, you will be awarded 5 re:Post points!

Amazon SageMaker offers features that improve your machine learning (ML) models by detecting potential bias and helping to explain the predictions your models make from tabular, computer vision, natural language processing, or time series datasets. It helps you identify various types of bias in pre-training data, as well as post-training bias that can emerge during model training or once the model is in production. You can also evaluate a language model for model quality and responsibility metrics using foundation model evaluations (fmeval). Using Amazon SageMaker Clarify, you can evaluate large language models (LLMs) by creating model evaluation jobs. A model evaluation job allows you to evaluate and compare model quality and responsibility metrics for text-based foundation models from JumpStart, and it also supports JumpStart models that have already been deployed to an endpoint.
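To give a feel for what this looks like in practice, here is a minimal sketch using the open-source fmeval library, which backs SageMaker Clarify foundation model evaluations. It first scores a single prompt/response pair for factual knowledge, then shows how an already-deployed JumpStart endpoint could be evaluated over a dataset. The endpoint name, model ID, and generation parameters are placeholders, and the exact module paths and argument names may differ slightly between fmeval versions, so treat this as an illustration rather than a drop-in script.

```python
# Sketch of a foundation model evaluation with the open-source fmeval library
# (pip install fmeval). Endpoint name, model ID, and templates below are
# illustrative placeholders, not values from this article.

from fmeval.eval_algorithms.factual_knowledge import (
    FactualKnowledge,
    FactualKnowledgeConfig,
)

# 1) Score a single model response without deploying anything.
eval_algo = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))
sample_scores = eval_algo.evaluate_sample(
    target_output="Paris",
    model_output="The capital of France is Paris.",
)
print(sample_scores)  # list of EvalScore objects for the factual_knowledge metric

# 2) Evaluate a JumpStart model that is already deployed to an endpoint by
#    wrapping the endpoint in a model runner and running a full evaluation.
from fmeval.model_runners.sm_jumpstart_model_runner import JumpStartModelRunner

model_runner = JumpStartModelRunner(
    endpoint_name="my-jumpstart-endpoint",                   # assumed endpoint name
    model_id="huggingface-llm-falcon-7b-instruct-bf16",      # assumed JumpStart model ID
    model_version="*",
    output="[0].generated_text",                             # JMESPath to the generated text
    content_template='{"inputs": $prompt, "parameters": {"max_new_tokens": 256}}',
)

# With no dataset_config supplied, fmeval falls back to the algorithm's
# built-in benchmark dataset; pass a DataConfig to use your own data.
eval_output = eval_algo.evaluate(model=model_runner, save=True)
print(eval_output)
```

The same pattern applies to the other fmeval algorithms (toxicity, summarization accuracy, prompt stereotyping, and so on): construct the algorithm with its config, then call either evaluate_sample for a quick spot check or evaluate with a model runner for a full job.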