Optimizing performance of AWS Lambda functions

Ram Thiruveedhi
May 9, 2022

I recently developed a data science application on AWS Lambda. It is a CPU-bound application that uses data science libraries.

When I tested it with the 128 MB setting, I found that only 72 MB was used across several trials, and I incorrectly assumed that a 256 MB or higher setting would not help. After watching a couple of talks on the subject, I realized that higher memory also comes with more CPU, so both cost and latency may actually improve. The documentation is clear, stating that “Lambda allocates CPU power linearly in proportion to the amount of memory configured.”
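Bumping the memory setting is a one-line configuration change. Below is a minimal sketch using boto3, assuming credentials are already configured and using a hypothetical function name as a placeholder.

```python
import boto3

FUNCTION_NAME = "my-data-science-fn"  # hypothetical name; replace with your own

lambda_client = boto3.client("lambda")

# Raise the memory setting; CPU is allocated in proportion to it.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=1024,  # in MB
)

# Wait until the configuration update has been applied.
lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
```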

Memory Setting

I manually tried several settings and recorded the performance over 10 runs. I found that the 3 GB setting gives me the best latency with a less than 5% increase in cost. There is also the AWS Lambda Power Tuning tool to automate this step. I would highly recommend using it, especially if the performance of your Lambda function varies due to factors like payload size and payload type. I would also recommend running the tuner on several payloads and choosing a setting that works for most of them while keeping cost low.
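If you want to reproduce this kind of manual sweep before reaching for the Power Tuning tool, a short driver script is enough. This is a rough sketch, assuming a placeholder function name and payload, and using client-side wall-clock time as a stand-in for billed duration (for exact numbers, read the REPORT line in CloudWatch Logs).

```python
import json
import time

import boto3

FUNCTION_NAME = "my-data-science-fn"   # hypothetical name
MEMORY_SIZES = [128, 512, 1024, 3008]  # MB settings to try
RUNS_PER_SETTING = 10
PAYLOAD = {"example": "payload"}       # use a representative payload

lambda_client = boto3.client("lambda")
waiter = lambda_client.get_waiter("function_updated")

for memory_mb in MEMORY_SIZES:
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory_mb
    )
    waiter.wait(FunctionName=FUNCTION_NAME)

    # One warm-up call so cold-start time does not skew the measurements.
    lambda_client.invoke(
        FunctionName=FUNCTION_NAME, Payload=json.dumps(PAYLOAD).encode()
    )

    latencies = []
    for _ in range(RUNS_PER_SETTING):
        start = time.perf_counter()
        lambda_client.invoke(
            FunctionName=FUNCTION_NAME, Payload=json.dumps(PAYLOAD).encode()
        )
        latencies.append(time.perf_counter() - start)

    avg_s = sum(latencies) / len(latencies)
    # Rough cost proxy: GB-seconds per invocation at this memory setting.
    gb_seconds = (memory_mb / 1024) * avg_s
    print(f"{memory_mb} MB: avg {avg_s:.2f} s, ~{gb_seconds:.3f} GB-s per run")
```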

There is also AWS Compute Optimizer, which will make recommendations. I have yet to try that out.

Tips:

  • Use Layers to include your Python libraries. I used the AWSLambda-Python38-SciPy1x layer, and it was sufficient to provide all the libraries I needed. If you need additional libraries, try building a custom layer before switching to container images (see the sketch after this list).
  • If you would like to quickly test out your Lambda function as a web application, start with the newly introduced function URL as an alternative to creating an API Gateway.
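Both tips can also be applied from a short script. The sketch below is an illustration, assuming a placeholder function name and a placeholder layer ARN (look up the real AWSLambda-Python38-SciPy1x ARN for your region and Python runtime in the Lambda console); the function URL is opened up publicly here purely for quick testing.

```python
import boto3

FUNCTION_NAME = "my-data-science-fn"  # hypothetical name
# Placeholder ARN: substitute the AWS-provided SciPy layer ARN for your region.
SCIPY_LAYER_ARN = "arn:aws:lambda:<region>:<account>:layer:AWSLambda-Python38-SciPy1x:<version>"

lambda_client = boto3.client("lambda")

# Attach the AWS-provided SciPy layer instead of bundling the libraries yourself.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    Layers=[SCIPY_LAYER_ARN],
)

# Expose the function over HTTPS with a function URL (no API Gateway needed).
url_config = lambda_client.create_function_url_config(
    FunctionName=FUNCTION_NAME,
    AuthType="NONE",  # open to the internet; use AWS_IAM for authenticated access
)

# With AuthType NONE, a resource policy is also needed to allow public invocation.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print("Function URL:", url_config["FunctionUrl"])
```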

