Questions tagged with AWS Lambda
In my DynamoDB stream record, I have a field that is an array of strings, i.e. attribute type SS.
```
"FOO": {"SS": ["hello"]},
```
I want to filter out the event if any string in that array matches one of "x", "y", or "z" (placeholder values). I can't figure out the correct filter pattern syntax here, but it does seem possible based on the answer in https://repost.aws/questions/QUgqGseyltTceWNYpMF_2tXw/how-to-create-dynamo-db-stream-event-filter-for-a-field-from-array-of-objects. Here's what I've tried:
```
"FOO": {
"SS": {
"anything-but": ["x","y","z"]
}
}
```
Can anyone advise on what the filter pattern should look like?
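In case it matters, here is how I am attaching the filter, sketched with boto3 (ARNs and names are placeholders). Per the linked answer, the attribute has to be nested under dynamodb.NewImage; the "anything-but" part is my best guess, not a confirmed-working pattern:
```
import json
import boto3

lambda_client = boto3.client("lambda")

# Filter pattern nested under dynamodb.NewImage, per the linked answer;
# the "anything-but" clause is the part I am unsure about.
pattern = {
    "dynamodb": {
        "NewImage": {
            "FOO": {"SS": {"anything-but": ["x", "y", "z"]}}
        }
    }
}

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/2023-01-01T00:00:00.000",  # placeholder
    FunctionName="my-function",  # placeholder
    StartingPosition="LATEST",
    FilterCriteria={"Filters": [{"Pattern": json.dumps(pattern)}]},
)
```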
Assume a user connects via a WebSocket connection to a server, which serves a personalized TypeScript function based on a personalized JSON file.
So when a user connects:
- the personalized JSON file is loaded from an S3 bucket (around 60-100 MB per user),
- whenever the user types, TypeScript/JavaScript/Python code is executed that returns a string reply, and the JSON-like data structure gets updated,
- when the user disconnects, the JSON gets persisted back to the S3-like bucket.
In total, think of about 10,000 users, so roughly 600 GB overall.
It should:
- spin up fast for a user,
- scale well with the number of users (so that we don't waste money), and
- have a global latency of a few tens of milliseconds.
Is that possible? If so, what architecture seems to be the most fitting?
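To make the lifecycle concrete, here is a minimal sketch of the connect/disconnect handlers I have in mind, assuming API Gateway WebSocket routes backed by Lambda (bucket name and key scheme are placeholders):
```
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "user-state-bucket"  # placeholder name

def on_connect(event, context):
    # $connect route: load the user's personalized JSON (60-100 MB) from S3.
    user_id = event["requestContext"]["connectionId"]  # placeholder user lookup
    obj = s3.get_object(Bucket=BUCKET, Key=f"{user_id}.json")
    state = json.loads(obj["Body"].read())
    # The open question: where does `state` live between messages? Lambda is
    # stateless across invocations, so re-fetching 60-100 MB per keystroke
    # would blow the latency budget.
    return {"statusCode": 200}

def on_disconnect(event, context):
    # $disconnect route: persist the updated JSON back to S3.
    user_id = event["requestContext"]["connectionId"]  # placeholder user lookup
    s3.put_object(Bucket=BUCKET, Key=f"{user_id}.json", Body=json.dumps({}).encode())  # placeholder body
    return {"statusCode": 200}
```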
When [retrieving secrets using the AWS Parameters and Secrets Lambda Extension](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html), does the cache get invalidated when a secret is rotated?
I can't find a concrete answer in the AWS documentation.
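For reference, this is roughly how I retrieve the secret through the extension's local HTTP endpoint (port 2773 is the documented default; the header token comes from AWS_SESSION_TOKEN):
```
import json
import os
import urllib.request

def get_secret(secret_id):
    # The extension listens on localhost:2773 and authenticates requests
    # via the X-Aws-Parameters-Secrets-Token header (set to AWS_SESSION_TOKEN).
    url = f"http://localhost:2773/secretsmanager/get?secretId={secret_id}"
    req = urllib.request.Request(url)
    req.add_header("X-Aws-Parameters-Secrets-Token", os.environ["AWS_SESSION_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["SecretString"]
```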
Hi,
I followed the Amazon GameLift-UE4 tutorial series you have on YouTube: https://www.youtube.com/playlist?list=PLuGWzrvNze7LEn4db8h3Jl325-asqqgP2
It is working as expected. The one thing I want to change, and haven't found a way to do, is to set the fleet ID from within Unreal. Right now, the fleet ID is hardcoded in the GameLift-StartGameSession Lambda function.
My problem is that I want to have several build/fleet sets to run different projects. As it is right now, I have to go and edit that function, which is a hassle, and more importantly, I can't run both projects at the same time.
Is there a way I can set the fleet ID in Unreal Engine instead?
Any help appreciated.
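What I am hoping for is something like the following in the Lambda, instead of the hardcoded value. This is only a sketch, assuming the fleet ID can be passed from Unreal in the request payload (the fleetId key and DEFAULT_FLEET_ID variable are hypothetical):
```
import os
import boto3

gamelift = boto3.client("gamelift")

def handler(event, context):
    # Instead of a hardcoded fleet ID, read it from the client payload,
    # falling back to a per-deployment environment variable.
    fleet_id = event.get("fleetId") or os.environ["DEFAULT_FLEET_ID"]  # hypothetical keys
    session = gamelift.create_game_session(
        FleetId=fleet_id,
        MaximumPlayerSessionCount=4,  # placeholder
    )["GameSession"]
    return {"GameSessionId": session["GameSessionId"]}
```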
We have a Django app running in Lambda. It connects to the RDS database through RDS Proxy using IAM auth. When we do load testing, after a certain load we start getting an error saying there are too many requests to the IAM auth service. When we tried creating a new RDS Proxy without IAM auth configured, our load tests performed much better. But we wanted to check what the best and most scalable architecture is: should we remove IAM auth and keep the communication between the application and the RDS Proxy direct, or is there a better way to do this?
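For context, each connection currently generates a fresh IAM auth token. One idea we are considering is reusing tokens, since they are valid for 15 minutes; a sketch follows (hostname, port, and user are placeholders), though we are not sure this alone avoids the proxy-side IAM auth throttling:
```
import time
import boto3

rds = boto3.client("rds")
_cache = {"token": None, "expires": 0.0}

def get_auth_token():
    # IAM auth tokens are valid for 15 minutes. Generating one is a local
    # signing operation, but reusing it across Django's connection churn
    # should reduce how often the proxy has to re-authenticate us.
    now = time.time()
    if _cache["token"] is None or now >= _cache["expires"]:
        _cache["token"] = rds.generate_db_auth_token(
            DBHostname="my-proxy.proxy-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
            Port=5432,  # placeholder
            DBUsername="app_user",  # placeholder
        )
        _cache["expires"] = now + 14 * 60  # refresh a minute before expiry
    return _cache["token"]
```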
I published my API project to AWS Lambda. After publishing, when I test the API, I get this error:
```
{
  "errorType": "NullReferenceException",
  "errorMessage": "Object reference not set to an instance of an object.",
  "stackTrace": [
    "at Amazon.Lambda.AspNetCoreServer.APIGatewayHttpApiV2ProxyFunction.MarshallRequest(InvokeFeatures features, APIGatewayHttpApiV2ProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)",
    "at Amazon.Lambda.AspNetCoreServer.AbstractAspNetCoreFunction`2.FunctionHandlerAsync(TREQUEST request, ILambdaContext lambdaContext)",
    "at Amazon.Lambda.RuntimeSupport.HandlerWrapper.<>c__DisplayClass26_0`2.<<GetHandlerWrapper>b__0>d.MoveNext()",
    "--- End of stack trace from previous location ---",
    "at Amazon.Lambda.RuntimeSupport.LambdaBootstrap.InvokeOnceAsync(CancellationToken cancellationToken)"
  ]
}
```
And when calling any endpoint, the response is:
```
can't parse JSON. Raw result:
Internal Server Error
```
This is the Program.cs file:
```
using BT.API.Extensions;
using BT.API.Hubs;
using BT.Repository.Domains.Requests;
using Core.Constants;
using Core.Filters;
using Core.Infrastructure.Options;
using Core.Infrastructure.Security;
using Core.Interfaces.Services;
using FluentValidation.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.OpenApi.Models;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseSerilog((context, configuration) =>
{
    configuration.ReadFrom.Configuration(context.Configuration);
});

//builder.Services.AddCorsServices(builder.Configuration);
builder.Services.Configure<CorsOptions>(builder.Configuration.GetSection(nameof(CorsOptions)));
var corsOptions = builder.Configuration.GetSection(nameof(CorsOptions)).Get<CorsOptions>();
builder.Services.AddCors(options =>
{
    options.AddPolicy(corsOptions.PolicyName, policy =>
    {
        policy.AllowAnyHeader().AllowAnyMethod();
        if (corsOptions != null)
        {
            policy.WithOrigins(corsOptions.Origins);
        }
        else
        {
            policy.AllowAnyOrigin();
        }
        policy.AllowCredentials().SetIsOriginAllowed((host) => true);
    });
});

builder.Services.AddControllers(options =>
{
    options.Filters.Add(typeof(InputValidationFilter));
    // options.Filters.Add(typeof(ExceptionFilter));
})
.AddFluentValidation(fv =>
{
    fv.RegisterValidatorsFromAssemblyContaining<SignInRequest>();
})
.AddNewtonsoftJson(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "BT.API", Version = "v1" });
    c.AddSignalRSwaggerGen();
});
builder.Services.AddDependencies(builder.Configuration);
builder.Services.AddScoped<IAuthenticatedUser, AuthenticatedUser>();
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "BT.API v1");
    });
}

app.UseCors(corsOptions.PolicyName);
app.UseDependencies(builder.Configuration, app.Services.GetRequiredService<ILoggerFactory>());
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Map("/api/hello", app =>
{
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello, world!");
    });
});

app.MapHub<SocketHub>(Constants.SOCKET_HUB);
app.Run();
```
I implemented an EventBridge Scheduler schedule to target a Lambda function in a VPC. The function's VPC configuration places it in three Availability Zones. How does EventBridge determine which Lambda to call?
Hi all,
I have a Lambda function that I need to run every two minutes. I am just using the AWS console, not Serverless.
Via EventBridge, I have defined the following trigger with cron:
*/2 * * * ? *
This does not work as expected; the function runs only sporadically, every hour or so at odd times.
The EventBridge console shows a correct schedule:
- Thu, 30 Mar 2023 09:32:00 UTC
- Thu, 30 Mar 2023 09:34:00 UTC
- Thu, 30 Mar 2023 09:36:00 UTC
- Thu, 30 Mar 2023 09:38:00 UTC
- Thu, 30 Mar 2023 09:40:00 UTC
- Thu, 30 Mar 2023 09:42:00 UTC
- Thu, 30 Mar 2023 09:44:00 UTC
but in the CloudWatch monitoring I see the function is not running as expected:
- 2023-03-30 12:06:15 (UTC+03:00)
- 2023-03-30 11:54:15 (UTC+03:00)
- 2023-03-30 09:38:40 (UTC+03:00)
- 2023-03-30 09:38:14 (UTC+03:00)
- 2023-03-30 07:38:15 (UTC+03:00)
- 2023-03-30 05:12:15 (UTC+03:00)
- 2023-03-30 03:11:17 (UTC+03:00)
Any help would be appreciated, thank you
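For reference, the boto3 equivalent of what I set up in the console looks like this (rule name is a placeholder); a rate expression is the alternative I have not tried yet:
```
import boto3

events = boto3.client("events")

# boto3 equivalent of the console rule: a 6-field EventBridge cron expression.
events.put_rule(
    Name="run-every-two-minutes",  # placeholder name
    ScheduleExpression="cron(*/2 * * * ? *)",
    # ScheduleExpression="rate(2 minutes)",  # the simpler alternative
    State="ENABLED",
)
```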
We have a use case where we want to access a MongoDB database via IAM role-based authentication. We have attached an IAM role to the DB, and when making a connection to the DB from Lambda, IAM role-based auth requires temporary security credentials: an access key, secret, and session token. For that we are using the sts.assumeRole method, which returns temporary security credentials by assuming the role (the one attached to the DB). For sts.assumeRole to work, we are required to add the ARN of the user (Lambda) to the trust policy of the IAM role we want to assume. We instead want to make it work by adding the ARN of a role, or via a policy, and not by adding the ARN of the user (Lambda). We aren't able to do that. Is there a way to achieve this?
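For context, a sketch of what we do today and the trust-policy shape we would like to use instead (account IDs and role names are placeholders):
```
import json
import boto3

sts = boto3.client("sts")

# What we call today to get temporary credentials (key, secret, session token):
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/mongodb-access-role",  # placeholder
    RoleSessionName="lambda-mongo-session",
)["Credentials"]

# The trust policy shape we would prefer on the DB role: trust the Lambda's
# execution role as the principal, rather than a specific user ARN.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/lambda-execution-role"},  # placeholder
        "Action": "sts:AssumeRole",
    }],
}
print(json.dumps(trust_policy, indent=2))
```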
I deployed my API on Lambda recently. I used the new 'Function URLs' feature, and it worked.
But this morning, when I opened the function page, the 'Function URL' was gone, and there is no 'Function URL' entry in the Configuration section.
[The provided URL still works.]
Also, I tried to create a new Lambda function and didn't find the 'Function URL' option.
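In the meantime I can still create one programmatically (sketch with a placeholder function name), but I would like to understand why the console option disappeared:
```
import boto3

lambda_client = boto3.client("lambda")

# Programmatic fallback for creating a function URL:
resp = lambda_client.create_function_url_config(
    FunctionName="my-function",  # placeholder
    AuthType="AWS_IAM",  # or "NONE" for a public URL
)
print(resp["FunctionUrl"])
```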
I already trained a BERT model in Python 3.9.16 and saved the .pth files in the models directory (my model is about 417 MB). I also have my Dockerfile and requirements.txt as follows:
# Dockerfile
```
FROM public.ecr.aws/lambda/python:3.9-x86_64
ENV TRANSFORMERS_CACHE=/tmp/huggingface_cache/
COPY requirements.txt .
#RUN pip install torch==1.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install torch==1.9.0
RUN pip install transformers==4.9.2
RUN pip install numpy==1.21.2
RUN pip install pandas==1.3.2
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}/dependencies"
COPY app.py ${LAMBDA_TASK_ROOT}
COPY models ${LAMBDA_TASK_ROOT}/dependencies/models
CMD [ "app.handler" ]
```
# requirements.txt
```
torch==1.9.0
transformers==4.9.2
numpy==1.21.2
pandas==1.3.2
```
# app.py
```
import torch
from transformers import BertTokenizer, BertForSequenceClassification, BertConfig
#from keras.preprocessing.sequence import pad_sequences
#from keras_preprocessing.sequence import pad_sequences
#from tensorflow.keras.preprocessing.sequence import pad_sequences
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
import numpy as np
import pandas as pd
from typing import Dict
import json
# Path to the directory containing the pre-trained model files
#model_dir = "./models/"
model_dir= "./dependencies/models/"
dict_path = f"{model_dir}/model_BERT_DAVID_v2.pth"
state_dict = torch.load(dict_path,map_location=torch.device('cpu'))
vocab_path=f"{model_dir}/vocab_BERT_DAVID_v2.pth"
vocab = torch.load(vocab_path,map_location=torch.device('cpu'))
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=4, state_dict=state_dict)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, vocab=vocab)
def handler(event, context):
    # payload = json.loads(event)
    payload = event  # dict with the text
    text = payload['text']
    df = pd.DataFrame()
    df['TEXT'] = [text]
    sentences = df['TEXT'].values
    sentences = ["[CLS] " + sentence + " [SEP]" for sentence in sentences]
    tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
    MAX_LEN = 256
    # Use the BERT tokenizer to convert the tokens to their index numbers in the BERT vocabulary
    input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
    # Pad our input tokens
    # input_ids = pad_sequences(input_ids, maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
    input_ids = [torch.tensor(seq)[:MAX_LEN].clone().detach() for seq in input_ids]
    input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True, padding_value=0)
    input_ids = torch.nn.functional.pad(input_ids, (0, MAX_LEN - input_ids.shape[1]), value=0)[:, :MAX_LEN]
    input_ids = input_ids.type(torch.LongTensor)
    # Create attention masks: a mask of 1s for each token followed by 0s for padding
    attention_masks = []
    for seq in input_ids:
        seq_mask = [float(i > 0) for i in seq]
        attention_masks.append(seq_mask)
    prediction_inputs = input_ids.to('cpu')  # cuda
    prediction_masks = torch.tensor(attention_masks, device='cpu')  # cuda
    batch_size = 32
    prediction_data = TensorDataset(prediction_inputs, prediction_masks)
    prediction_sampler = SequentialSampler(prediction_data)
    prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=batch_size)
    # Prediction: put model in evaluation mode
    model.eval()
    # Tracking variables
    predictions = []
    # Predict
    for batch in prediction_dataloader:
        # Add batch to GPU
        # batch = tuple(t.to(device) for t in batch)
        batch = tuple(t for t in batch)
        # Unpack the inputs from our dataloader
        b_input_ids, b_input_mask = batch
        # Telling the model not to compute or store gradients, saving memory and speeding up prediction
        with torch.no_grad():
            # Forward pass, calculate logit predictions
            logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
        # Move logits to CPU
        logits = logits['logits'].detach().cpu().numpy()
        # label_ids = b_labels.to('cpu').numpy()
        # Store predictions
        predictions.append(logits)
        # true_labels.append(label_ids)
    key = {0: 'VERY_NEGATIVE', 1: 'SOMEWHAT_NEGATIVE', 2: 'NEUTRAL', 3: 'POSITIVE'}
    values = np.argmax(predictions[0], axis=1).flatten()  # maximum-likelihood prediction
    converted_values = [key.get(val) for val in values]  # dict value for the best likelihood
    # Obtain the score for the intensity (softmax over the logits to get probabilities)
    exponents = np.exp(predictions)
    softmax = exponents / np.sum(exponents)
    intensity = {'VERY_NEGATIVE': softmax[0][0][0], 'SOMEWHAT_NEGATIVE': softmax[0][0][1],
                 'NEUTRAL': softmax[0][0][2], 'POSITIVE': softmax[0][0][3]}
    score = max(intensity.values())
    return converted_values[0]
```
Everything seems correct locally, but when I create the AWS Lambda function on the Python 3.9 runtime I get this error:
```
{
"errorMessage": "invalid load key, 'v'.",
"errorType": "UnpicklingError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/var/task/app.py\", line 25, in <module>\n state_dict = torch.load(dict_path,map_location=torch.device('cpu'))\n",
" File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 608, in load\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\n",
" File \"/var/lang/lib/python3.9/site-packages/torch/serialization.py\", line 777, in _legacy_load\n magic_number = pickle_module.load(f, **pickle_load_args)\n"
]
}
```
I've tried multiple things but have found no solution so far. Can anyone help me?
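One sanity check I can share (not a fix): printing the first bytes of the .pth inside the image. A file written by torch.save() should start with pickle/zip magic bytes, while the "invalid load key, 'v'" error suggests the file begins with readable text (a Git LFS pointer, for example, starts with "version ..."):
```
# Quick sanity check (assumption: run inside the container, e.g. with a
# python entrypoint override): a file written by torch.save() starts with
# pickle or zip magic bytes, not readable text.
path = "./dependencies/models/model_BERT_DAVID_v2.pth"
with open(path, "rb") as f:
    head = f.read(64)
print(head)  # a Git LFS pointer would show b'version https://git-lfs...'
```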
Hi,
Is there a way to get the AWS Lambda Function URL string (or the bits to construct it) programmatically from the running instance of the Lambda itself?
I tried the following options, and neither of them had the necessary URL:
1. checked the input object in `handleRequest(Object input, Context context)`
2. checked the items in `System.getenv()`
Thanks
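If there is no built-in field, the fallback I am considering is calling the Lambda control-plane API from inside the function. A boto3 sketch of the equivalent call follows (the Java SDK exposes the same GetFunctionUrlConfig operation), assuming the execution role allows lambda:GetFunctionUrlConfig:
```
import os
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # AWS_LAMBDA_FUNCTION_NAME is a standard runtime environment variable.
    cfg = lambda_client.get_function_url_config(
        FunctionName=os.environ["AWS_LAMBDA_FUNCTION_NAME"]
    )
    return cfg["FunctionUrl"]
```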