The use-case aside, you can make it work, and the following can be a way ahead.

Strategic solution: declare an HPA for your Deployment with a bound of `--max-replicas=xx`, then write a job driven by request metrics, so that the Deployment is scaled up automatically whenever a request hits the Service, and scaled back down as soon as the request ends. You should use the `autoscaling/v2beta2` apiVersion of the HPA, as it supports that type of metric. You will also have to keep the in-flight request count per pod to one; if such a metric is not set, the Kubernetes Service may route a request to a pod that is already busy, and that request will fail with a 5XX.

That said, I would advise you to reconsider the architectural design of the application, break it into multiple threads (or services), and then revisit Kubernetes.

On the Node side: in Node there are two types of threads: one Event Loop (aka the main loop, the main thread, the event thread), and a pool of k Workers in a Worker Pool (aka the thread pool). And every Node process would be running isolated inside its own container.

The timers module exposes a global API for scheduling functions to be called at some future point in time. Because the timer functions are globals, there is no need to call require('node:timers') to use the API; the timer functions within Node.js implement an API similar to the browser timers. The setTimeout() function creates a time delay before a certain block of code is executed, so why not use it to implement a sleep() function? First, consider code that uses setTimeout to print a simple statement to the console after 2 seconds.
Kubernetes is an orchestrator for containers, and deploying a monolithic application on Kubernetes not only throws away most of the tremendous things Kubernetes can do for you, it also brings a lot of overhead in deployment and automation. Also, the nice thing when you break away from a monolith (= single thread) into a (micro)service-oriented architecture is that you get an isolated event loop for each service.

As I understand it, your application is single-threaded, and you want the pod count to increase as soon as a new request is fired, but only if the previous pod is busy or holds a lock; in the simplest terms, a new pod should come up for each new request whenever the previous pod is busy. That use-case is quite unsteady.
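For illustration, an HPA for this kind of per-request scaling might look roughly like the sketch below. The Deployment name and the metric name (`http_requests_in_flight`) are assumptions, and the custom metric has to be exposed through your own metrics pipeline (e.g. a Prometheus adapter); note that `--max-replicas` is a `kubectl autoscale` flag, while in a manifest the field is `maxReplicas`:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10               # the --max-replicas=xx bound
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_in_flight   # assumed custom metric
        target:
          type: AverageValue
          averageValue: "1"     # keep one in-flight request per pod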