
Multiple keep warm #861

Open · mcrowson wants to merge 8 commits into Miserlou:master from mcrowson:multiple_keep_warm

Conversation

mcrowson
Collaborator

Description

The keep_warm setting now takes an integer for the number of concurrent containers to keep warm. This is done by the keep_warm_callback in handler.py, which uses a thread pool to call Zappa async tasks that each initialize the application.

Each initialized application then waits 30 seconds before returning, so that the other Lambdas cold-start fresh containers rather than reusing the newly warmed one.

Somewhat arbitrary values were chosen for the ThreadPool size and the sleep duration. If the user has specified a timeout_seconds < 30, the task exits sooner.

Running the keep_warm_callback with a value of 200 took 15 seconds. If the time.sleep duration is too low, it is possible for some percentage of the keep_warm_lambda_initializers to land on already-warm containers.
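The fan-out described above can be sketched roughly as follows. This is a simplified local stand-in, not the actual handler.py code: keep_warm_initializer, NUM_CONTAINERS, and HOLD_SECONDS are illustrative names, and the real implementation dispatches Zappa async tasks to separate Lambda containers rather than local threads.

```python
import time
from multiprocessing.pool import ThreadPool

NUM_CONTAINERS = 5   # stands in for the integer keep_warm value
HOLD_SECONDS = 0.1   # stands in for the ~30 s hold in the real callback

def keep_warm_initializer(i):
    # In the real PR this would be a Zappa async task that imports and
    # initializes the WSGI application, then sleeps so the container it
    # landed on stays busy, forcing the remaining calls to cold-start.
    start = time.monotonic()
    time.sleep(HOLD_SECONDS)
    return i, time.monotonic() - start

# Dispatch all warm-up calls concurrently from a thread pool.
pool = ThreadPool(processes=NUM_CONTAINERS)
results = pool.map(keep_warm_initializer, range(NUM_CONTAINERS))
pool.close()
pool.join()
```

Because every initializer holds its container for the full sleep, no two calls can be served by the same container, which is the whole point of the sleep.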

GitHub Issues

#851

@coveralls

coveralls commented May 17, 2017

Coverage Status

Coverage increased (+0.4%) to 72.886% when pulling 3f14397 on mcrowson:multiple_keep_warm into 07dbcb9 on Miserlou:master.

@coveralls

coveralls commented May 17, 2017

Coverage Status

Coverage decreased (-0.3%) to 72.222% when pulling 14719c9 on mcrowson:multiple_keep_warm into 07dbcb9 on Miserlou:master.

@mcrowson
Collaborator Author

Following a discussion with @nikbora on Slack, the sleep time is now set dynamically so that it is as unintrusive as possible.
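As a rough illustration of that dynamic sleep, the hold time can be derived from the remaining execution time that Lambda reports on the context object, capped at the original 30 seconds. warm_hold_seconds and its parameters are hypothetical names for this sketch, not the PR's actual code.

```python
def warm_hold_seconds(remaining_ms, safety_margin_ms=1000, cap_seconds=30.0):
    """Hold the container as long as possible without risking the
    function's own timeout: leave a safety margin, cap at 30 s."""
    usable = (remaining_ms - safety_margin_ms) / 1000.0
    return max(0.0, min(cap_seconds, usable))

# In a real handler, remaining_ms would come from
# context.get_remaining_time_in_millis().
warm_hold_seconds(31_000)  # plenty of time left: full 30 s hold
warm_hold_seconds(5_000)   # short timeout_seconds: exit sooner
```

This matches the behavior described in the PR description: a user with timeout_seconds < 30 gets a correspondingly shorter hold.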

@coveralls

coveralls commented May 18, 2017

Coverage Status

Coverage decreased (-0.3%) to 72.177% when pulling 3cc4a45 on mcrowson:multiple_keep_warm into b04cb00 on Miserlou:master.

@kevinluvian

Is there any update on this, @mcrowson?

@mcrowson
Collaborator Author

mcrowson commented Mar 11, 2018 via email

@jneves
Collaborator

jneves commented Mar 1, 2020

Does this still make sense now that we have an upstream solution with provisioned concurrency? https://aws.amazon.com/pt/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/

@wagmiwiz
Contributor

wagmiwiz commented Mar 2, 2020 via email

@TamasNo1

FWIW, I tried to set up provisioned concurrency manually for our Zappa application, but it did not deliver the expected results:

  • I had to manually create a new alias for the latest Lambda version
  • I had to update API Gateway to point to FUNCTION_NAME:ALIAS instead of just FUNCTION_NAME
  • I set provisioned concurrency to 20 for the alias and waited until it became ready
  • I ran ab -n200 -c10 https://MY_ENDPOINT (a concurrency of 10)
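The manual steps above correspond roughly to the following AWS CLI sequence. This is a sketch of the generic setup, not Zappa functionality; FUNCTION_NAME, the alias name "warm", and the version number are placeholders.

```shell
# 1. Publish a version of the latest code and point an alias at it.
aws lambda publish-version --function-name FUNCTION_NAME
aws lambda create-alias --function-name FUNCTION_NAME \
    --name warm --function-version 1

# 2. (Separately, update the API Gateway integration to invoke
#    FUNCTION_NAME:warm instead of FUNCTION_NAME.)

# 3. Set provisioned concurrency on the alias, then poll until READY.
aws lambda put-provisioned-concurrency-config \
    --function-name FUNCTION_NAME --qualifier warm \
    --provisioned-concurrent-executions 20
aws lambda get-provisioned-concurrency-config \
    --function-name FUNCTION_NAME --qualifier warm
```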

In the results, I still saw > 10 s at p95, so cold starts still occurred, even though provisioned concurrency was set to twice the concurrency I tested with. When I ran the test again right after getting those results, the cold starts were gone. This suggests that provisioned concurrency alone is not a good enough solution for keeping Zappa functions warm.

6 participants