*** goldyfruit has joined #openstack-qinling | 01:50 | |
*** mnaser has left #openstack-qinling | 13:38 | |
*** goldyfruit has quit IRC | 13:51 | |
*** goldyfruit has joined #openstack-qinling | 13:51 | |
*** goldyfruit_ has joined #openstack-qinling | 14:42 | |
*** goldyfruit has quit IRC | 14:42 | |
*** goldyfruit_ has quit IRC | 14:50 | |
*** goldyfruit has joined #openstack-qinling | 14:50 | |
*** openstackgerrit has joined #openstack-qinling | 17:29 | |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 17:29 |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 20:48 |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 21:09 |
goldyfruit | lxkong, what exactly does function scaleup do? | 21:15 |
goldyfruit | From the function I just see an insertion in the database | 21:16 |
*** goldyfruit has quit IRC | 21:17 | |
*** goldyfruit has joined #openstack-qinling | 21:18 | |
goldyfruit | Sorry, I just see a get | 21:20 |
goldyfruit | Ok I misread the code | 21:21 |
goldyfruit | I keep looking | 21:21 |
*** goldyfruit has quit IRC | 21:26 | |
*** goldyfruit has joined #openstack-qinling | 21:27 | |
*** goldyfruit_ has joined #openstack-qinling | 21:29 | |
*** goldyfruit has quit IRC | 21:29 | |
goldyfruit_ | lxkong, still not sure what it does :p | 21:34 |
goldyfruit_ | it does not touch the number of workers, right? | 21:34 |
lxkong | goldyfruit_: function scaleup manually increases the number of backing pods for the function execution | 21:35 |
lxkong | the number of workers is a runtime property, it has nothing to do with the function | 21:36 |
lxkong | goldyfruit_: sometimes you have your own monitoring system and want to do autoscaling together with the monitoring system | 21:37 |
goldyfruit_ | I'm confused, what is the difference between scaleup and autoscaling? | 21:37 |
lxkong | autoscaling = scale up + scale down + auto | 21:37 |
goldyfruit_ | so what is the advantage of scaleup ? | 21:39 |
goldyfruit_ | if autoscaling does it automatically ? | 21:39 |
lxkong | "sometimes you have your own monitoring system and want to do autoscaling together with the monitoring system" | 21:39 |
lxkong | which means you configure the scaleup or scaledown endpoint in the monitoring system | 21:39 |
lxkong | Qinling currently only supports scaling according to the execution concurrency | 21:40 |
lxkong | let's say, you want autoscaling according to the cpu or mem usage | 21:40 |
lxkong | or traffic or something | 21:40 |
lxkong | the system which monitors the cpu/mem/traffic metrics could trigger scaleup function | 21:41 |
lxkong | but the admin needs to configure that manually | 21:41 |
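The setup lxkong describes (a monitoring system configured to hit a scale-up endpoint when a CPU/memory/traffic metric crosses a threshold) might be sketched in Python roughly as follows. The endpoint path and `count` payload follow the Qinling v1 function API as discussed here, while the base URL, token, and function id are placeholder assumptions:

```python
import json
import urllib.request

# Placeholder; in a real deployment this comes from your cloud's service catalog.
QINLING_URL = "http://qinling.example.com:7070"

def scale_up_request(function_id, count=1):
    """Build the URL and JSON body for a Qinling function scale-up call."""
    url = f"{QINLING_URL}/v1/functions/{function_id}/scale_up"
    body = json.dumps({"count": count}).encode()
    return url, body

def scale_up(function_id, token, count=1):
    """POST the scale-up request; an alert handler in the monitoring
    system could call this when a watched metric crosses its threshold."""
    url, body = scale_up_request(function_id, count)
    req = urllib.request.Request(
        url,
        data=body,
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on HTTP error
```

A matching scale-down hook would be configured the same way for the "all clear" alert, which is the manual configuration step lxkong mentions.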
goldyfruit_ | So this is permanent until the scale down ? | 21:41 |
lxkong | you can config both scale up and scale down | 21:42 |
goldyfruit_ | you mentioned back pods | 21:42 |
goldyfruit_ | Could you please say a bit more about this ? | 21:42 |
lxkong | when your function is executed, you can see there are some pods that have the function_id in their labels | 21:43 |
goldyfruit_ | yeah | 21:44 |
lxkong | sorry, have to go stand up meeting, will continue after | 21:44 |
goldyfruit_ | no problem! | 21:44 |
*** goldyfruit_ has quit IRC | 21:52 | |
*** goldyfruit has joined #openstack-qinling | 21:58 | |
lxkong | From the beginning there is only 1 pod to run the function, but think about the situation where a function is triggered hundreds of times at the same time: 1 pod is going to be a problem, the response is going to be slow or time out. So you need scale up (either automatic or manual) to increase the pod number and raise the concurrency capability, then scale down to release the unnecessary pods | 22:00 |
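The capacity problem lxkong describes comes down to simple arithmetic: the number of pods needed grows with the number of simultaneous invocations divided by how many each pod can serve. The per-pod concurrency figure below is an illustrative assumption, not a Qinling default:

```python
import math

def pods_needed(concurrent_requests, per_pod_concurrency):
    """How many pods are required so that `concurrent_requests`
    simultaneous executions can all be served without queueing."""
    return max(1, math.ceil(concurrent_requests / per_pod_concurrency))

# One pod serving 3 concurrent executions copes with 3 requests,
# but hundreds of simultaneous invocations need many more pods:
print(pods_needed(3, 3))    # 1
print(pods_needed(100, 3))  # 34
```

Scale up raises the pod count toward that required number; scale down releases pods once the burst has passed.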
goldyfruit | lxkong, (I have laptop issue, motherboard I guess... keep crashing!) | 22:02 |
goldyfruit | So, let's say I spawn a runtime with 3 replicas | 22:03 |
goldyfruit | A function with the default concurrency which is 3 | 22:03 |
goldyfruit | runtime = worker ? | 22:04 |
goldyfruit | + sidecar | 22:05 |
goldyfruit | So basically I have 3 workers | 22:05 |
goldyfruit | so if I run the function 3 times at the same time I won't have any workers available | 22:06 |
goldyfruit | or is it 3 times per worker ? | 22:06 |
lxkong | > A function with the default concurrency which is 3 | 22:13 |
lxkong | you mean `CONF.engine.function_concurrency=3`? | 22:13 |
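For reference, the option lxkong names would live in the `[engine]` section of the Qinling configuration file. A minimal fragment (the value shown is just the one from this discussion; see the Qinling docs for its exact semantics):

```ini
[engine]
# Concurrency setting discussed above (CONF.engine.function_concurrency)
function_concurrency = 3
```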
*** goldyfruit has quit IRC | 22:19 | |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 22:37 |
*** goldyfruit has joined #openstack-qinling | 22:38 | |
goldyfruit | Damn X260... | 22:38 |
*** goldyfruit has quit IRC | 23:02 | |
*** goldyfruit has joined #openstack-qinling | 23:34 | |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 23:37 |
openstackgerrit | Gaëtan Trellu proposed openstack/qinling master: WIP: Add API documentation https://review.opendev.org/663730 | 23:38 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!