Multiple keep warm #861
base: master
Conversation
Following a discussion with @nikbora on Slack, the sleep time is now set dynamically so that it is as unintrusive as possible.
Is there any update on this, @mcrowson?
Not on my end. Feel free to take a stab at a PR on this. The challenge will be making sure you're making the requests concurrently and hitting a cold Lambda each time.
Does this still make sense now that we have an upstream solution with provisioned concurrency? https://aws.amazon.com/pt/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/
Probably not? However, this does give you a cheaper way of keeping a few instances warm, since you are not paying for their "uptime" (provisioned concurrency is pretty pricey: https://aws.amazon.com/lambda/pricing/).
FWIW, I tried to set up provisioned concurrency manually for our Zappa application, but it did not deliver the expected results: I still saw > 10 s at p95, so cold starts still occurred, even though I had set the provisioned concurrency to twice the load I tested with. When I ran the test again, right after getting those results, the cold starts were gone. This suggests that provisioned concurrency is not a good enough solution to keep Zappa functions warm.
Description
The `keep_warm` setting now takes an integer for the number of concurrent containers to keep warm. This is done by the `keep_warm_callback` in `handler.py`, which uses a thread pool to call Zappa async tasks that each initialize the application. Those initialized applications then wait 30 seconds before returning, so that the other warm-up invocations cold-start new Lambdas rather than reuse the newly warmed container.
Somewhat arbitrary values were chosen for the `ThreadPool` size and the sleep duration. If the user has specified a `timeout_seconds` < 30, the task will exit sooner. Running the `keep_warm_callback` with a value of 200 took 15 seconds. If the `time.sleep` duration is too low, some percentage of the keep-warm initializers may end up reusing already-warm containers instead of cold-starting new ones.
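The fan-out-and-hold mechanism described above can be sketched roughly as follows. This is an illustrative approximation, not the PR's actual code: `warm_initializer`, `NUM_WARM`, `TIMEOUT_SECONDS`, and `POOL_SIZE` are made-up names standing in for the real async task and settings.

```python
import time
from multiprocessing.pool import ThreadPool

NUM_WARM = 5          # hypothetical value of the integer `keep_warm` setting
TIMEOUT_SECONDS = 30  # hypothetical `timeout_seconds` from the settings
POOL_SIZE = 32        # somewhat arbitrary, as noted in the description


def warm_initializer(hold_seconds):
    """Stand-in for the async task each warmed Lambda would run:
    initialize the app, then hold the container so that the other
    concurrent warm-up requests cold-start fresh containers instead
    of reusing this one."""
    # (application initialization would happen here in the real handler)
    time.sleep(hold_seconds)
    return True


def keep_warm_callback(num_containers=NUM_WARM, timeout_seconds=TIMEOUT_SECONDS):
    # Hold for up to 30 s, but exit sooner if the configured function
    # timeout is shorter than that.
    hold = min(30, max(timeout_seconds - 1, 0))
    # Fire the warm-up tasks concurrently so they land on separate containers.
    pool = ThreadPool(POOL_SIZE)
    results = pool.map(warm_initializer, [hold] * num_containers)
    pool.close()
    pool.join()
    return results
```

In the real implementation the held task would be dispatched as a Zappa async task (a separate Lambda invocation) rather than a local thread; the thread pool here only models the concurrent fan-out of those invocations.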
GitHub Issues
#851