| gibi | #startmeeting nova | 16:00 |
| openstack | Meeting started Thu Aug 27 16:00:09 2020 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:00 |
| openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:00 |
| *** openstack changes topic to " (Meeting topic: nova)" | 16:00 | |
| openstack | The meeting name has been set to 'nova' | 16:00 |
| dansmith | o/ | 16:00 |
| *** elod has joined #openstack-meeting-3 | 16:00 | |
| gibi | o/ | 16:00 |
| gmann | o/ | 16:00 |
| gibi | lets get started | 16:00 |
| gibi | #topic Bugs (stuck/critical) | 16:00 |
| *** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: nova)" | 16:00 | |
| gibi | We have a high severity CVE #link https://bugs.launchpad.net/nova/+bug/1890501 that has been fixed on master and ussuri and patches are going in to older stable branches. | 16:00 |
| openstack | Launchpad bug 1890501 in OpenStack Compute (nova) stein "Soft reboot after live-migration reverts instance to original source domain XML (CVE-2020-17376)" [Critical,In progress] - Assigned to Lee Yarwood (lyarwood) | 16:00 |
| bauzas | \o | 16:01 |
| stephenfin | o/ | 16:01 |
| gibi | besides that I don't see any critical bugs | 16:01 |
| gibi | #link 31 new untriaged bugs (-5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New | 16:01 |
| gibi | #link 6 untagged untriaged bugs (-2 change since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW | 16:01 |
| gibi | please look at the untriaged bug list and try to push some bugs forward | 16:01 |
| gibi | also I pinged some of you about specific bugs | 16:02 |
| *** psahoo has quit IRC | 16:02 | |
| bauzas | thanks gibi for scrubbing the list | 16:02 |
| lyarwood | \o | 16:02 |
| gibi | do we need to talk about some of the open bugs here? | 16:02 |
| lyarwood | https://review.opendev.org/#/c/747358/ looks jammed btw | 16:02 |
| lyarwood | not sure what we need to do to move it into the gate | 16:03 |
| lyarwood | sorry ignore that | 16:03 |
| lyarwood | thought it was the CVE, too many bugs :) | 16:03 |
| gibi | :) | 16:03 |
| gibi | #topic Runways | 16:03 |
| *** openstack changes topic to "Runways (Meeting topic: nova)" | 16:03 | |
| gibi | etherpad #link https://etherpad.opendev.org/p/nova-runways-victoria | 16:03 |
| gibi | bp/provider-config-file has been merged \o/ so its slot is freed up in the runway | 16:04 |
| gibi | the first patch of bp/cyborg-rebuild-and-evacuate now has +2 from me | 16:04 |
| gibi | the spawn part of the bp/add-emulated-virtual-tpm has been merged, the migrate and resize part has a +2 from me | 16:04 |
| gibi | nothing is in the queue and we have a free runway slot, so if you have a feature ready then it is a good time to add it to the queue | 16:04 |
| gibi | do we need to talk about the items in the runway slots? | 16:04 |
| bauzas | gibi: I'll look at the vtpm series again tomorrow morning | 16:05 |
| gibi | cool | 16:05 |
| gibi | thanks | 16:05 |
| bauzas | this should potentially free yet another slot | 16:05 |
| bauzas | *potentially* :) | 16:05 |
| gibi | #topic Release Planning | 16:06 |
| *** openstack changes topic to "Release Planning (Meeting topic: nova)" | 16:06 | |
| gibi | We have 2 weeks until Milestone 3 which is Feature Freeze | 16:06 |
| gibi | I've set up the release tracking etherpad #link https://etherpad.opendev.org/p/nova-victoria-rc-potential | 16:06 |
| gibi | Next week is the last release of non-client libraries for Victoria. For os-vif we would like to get #link https://review.opendev.org/#/c/744816/ merged before the release | 16:06 |
| gibi | Sooner than later we have to talk about cycle highlights and reno prelude | 16:06 |
| *** dustinc has quit IRC | 16:07 | |
| *** baojg has quit IRC | 16:07 | |
| gibi | anything else about the coming release that we have to discuss? | 16:07 |
| harsha24 | https://blueprints.launchpad.net/nova/+spec/parallel-filter-scheduler | 16:07 |
| *** baojg has joined #openstack-meeting-3 | 16:07 | |
| *** sean-k-mooney has joined #openstack-meeting-3 | 16:08 | |
| gibi | harsha24: let's get back to that in the Open Discussion | 16:08 |
| harsha24 | okay | 16:08 |
| gibi | #topic Stable Branches | 16:08 |
| *** openstack changes topic to "Stable Branches (Meeting topic: nova)" | 16:08 | |
| gibi | lyarwood: do you have any news? | 16:09 |
| gibi | I guess we release from all the stable branches that are not in EM due to the CVE I mentioned above | 16:10 |
| lyarwood | elod: has been busy cutting releases | 16:10 |
| gibi | cool | 16:10 |
| lyarwood | elod: no other news aside from that really | 16:10 |
| elod | yes, ussuri release is done, | 16:10 |
| lyarwood | ops sorry | 16:10 |
| elod | train patch is open | 16:11 |
| elod | stein will come as soon as the patch lands :) | 16:11 |
| elod | np :) | 16:11 |
| bauzas | I guess we can release ussuri but we need to hold for train, nope ? | 16:11 |
| * bauzas is lost with all the train queue | 16:11 | |
| elod | the train patch is: https://review.opendev.org/#/c/748383/ | 16:12 |
| lyarwood | https://review.opendev.org/#/c/747358/ would be nice to land as we did in ussuri | 16:12 |
| lyarwood | that's the change jammed at the moment | 16:12 |
| elod | bauzas: there are some patches in the queue, but do we want to wait? | 16:12 |
| bauzas | I don't know | 16:12 |
| bauzas | ask the owner :p | 16:12 |
| elod | ok, I'll -W the Train release patch then | 16:12 |
| lyarwood | elod: ack, I'll reset the +W flags on that change, hopefully that should get it back into the gate | 16:13 |
| gibi | OK | 16:13 |
| elod | lyarwood: not jammed actually | 16:13 |
| elod | just the parent needs to get merged | 16:13 |
| elod | :] | 16:13 |
| lyarwood | oh my bad I was sure that had already landed | 16:14 |
| lyarwood | gerrit-- needs to make these dots bigger! | 16:14 |
| gibi | moving on | 16:14 |
| gibi | Libvirt (bauzas) | 16:14 |
| gibi | there is a thing on the agenda | 16:14 |
| gibi | (lyarwood) Looking at bumping MIN_{QEMU,LIBVIRT}_VERSION in V | 16:14 |
| bauzas | nothing to report honestly, I know aarents had changes but I didn't pay attention | 16:14 |
| gibi | #link https://review.opendev.org/#/q/topic:bump-libvirt-qemu-victoria+(status:open+OR+status:merged) | 16:14 |
| gibi | This would mean using UCA on bionic ahead of our move to focal, is anyone against that? | 16:14 |
| lyarwood | I forgot to update the agenda | 16:15 |
| gibi | looking at these patches, most of them are in merge conflict | 16:15 |
| lyarwood | this looks like it's blocked on the same focal detach device bug | 16:15 |
| gibi | lyarwood: ohh, so this was discussed last week | 16:15 |
| lyarwood | yeah sorry | 16:15 |
| gibi | np | 16:15 |
| sean-k-mooney | gibi: we used to use uca on 16.04 | 16:15 |
| lyarwood | #link https://bugs.launchpad.net/nova/+bug/1882521 is the bug reported against focal | 16:16 |
| openstack | Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,New] | 16:16 |
| lyarwood | I'm seeing the same issue with UCA on bionic but I can't reproduce locally | 16:16 |
| lyarwood | if we are supposed to be moving to focal anyway this needs to get resolved | 16:17 |
| gibi | yeah | 16:17 |
| sean-k-mooney | i think we were meant to move at m2 | 16:17 |
| sean-k-mooney | or try to, so we should swap sooner rather than later | 16:17 |
| lyarwood | yeah I know gmann was still trying to move | 16:17 |
| gibi | I'm hoping somebody will crack that as I did not have time to look into it | 16:17 |
| lyarwood | same, I'm out until Tuesday after this that doesn't help | 16:18 |
| gmann | lyarwood: yeah it is not clear why those fail | 16:18 |
| gmann | as of now those failing tests are skipped to keep doing the testing. | 16:18 |
| sean-k-mooney | lyarwood: could this be due to a persistent vs transient domain difference | 16:18 |
| gmann | but yes this is one of blocker to move to Focal | 16:19 |
| lyarwood | sean-k-mooney: maybe, I really can't tell tbh | 16:19 |
| gibi | #action we need a volunteer to look into the Focal bug #link https://bugs.launchpad.net/nova/+bug/1882521 | 16:19 |
| openstack | Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,New] | 16:19 |
| sean-k-mooney | for what it's worth, focal and centos8 both use the same version of libvirt | 16:20 |
| sean-k-mooney | so i would expect this to affect centos8 too | 16:20 |
| sean-k-mooney | and by extension rhel | 16:20 |
| *** e0ne has quit IRC | 16:20 | |
| gibi | let's move on now but please continue discussing this bug on #openstack-nova | 16:21 |
| gibi | #topic Stuck Reviews | 16:21 |
| *** openstack changes topic to "Stuck Reviews (Meeting topic: nova)" | 16:21 | |
| gibi | nothing on the agenda. Is there anything that is stuck? | 16:21 |
| *** bnemec has quit IRC | 16:22 | |
| tosky | I'd say https://review.opendev.org/#/c/711604/ | 16:22 |
| gmann | i think you missed API updates | 16:22 |
| tosky | (or I can mention it when talking about community goals) | 16:22 |
| gibi | gmann: ack, sorry, I will get back to that | 16:23 |
| gibi | tosky: as I see you need review | 16:23 |
| gibi | on that patch | 16:23 |
| gibi | tosky: I can look at it tomorrow | 16:23 |
| bauzas | we should rename this section | 16:23 |
| bauzas | this is confusing, we're not asking for code waiting to be reviewed | 16:24 |
| gibi | bauzas: sure | 16:24 |
| bauzas | but for patches that are stuck because of conflicting opinions | 16:24 |
| gibi | bauzas: I will rename it to reviews with conflicting opinions | 16:24 |
| bauzas | I suck at naming things | 16:24 |
| dansmith | when was the last time we had one? | 16:24 |
| bauzas | so I won't surely propose a thing | 16:25 |
| dansmith | maybe we could just remove it from the agenda altogether? | 16:25 |
| gibi | real one? on the meeting? pretty long time ago | 16:25 |
| bauzas | this could work too | 16:25 |
| bauzas | open discussions are there anyway, so people can argue there | 16:25 |
| gibi | #action gibi to remove stuck review from the agenda | 16:25 |
| gibi | moving on | 16:25 |
| gibi | #topic PTG and Forum planning | 16:25 |
| *** openstack changes topic to "PTG and Forum planning (Meeting topic: nova)" | 16:25 | |
| gibi | The next Forum and PTG is less than 2 months from now | 16:26 |
| gibi | summary mail #link http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016770.html | 16:26 |
| gibi | please indicate your acceptable PTG timeslots in #link https://doodle.com/poll/a5pgqh7bypq8piew | 16:26 |
| gibi | please collect topics in #link https://etherpad.opendev.org/p/nova-wallaby-ptg | 16:26 |
| gibi | anything to be discussed about the coming PTG and Forum? | 16:26 |
| gibi | #topic Open discussion | 16:27 |
| *** openstack changes topic to "Open discussion (Meeting topic: nova)" | 16:27 | |
| gibi | harsha24: what can we do for you? | 16:27 |
| bauzas | he's proposing a new design for processing filters that are using threads | 16:28 |
| bauzas | we had this discussion in the past and we always said filters aren't a performance bottleneck | 16:28 |
| harsha24 | i am wondering if there is a way to reduce the latency to spawn a vm using multithreading | 16:28 |
| bauzas | but, | 16:28 |
| bauzas | things changed slightly since we have placement | 16:28 |
| bauzas | harsha24: the point is, I wouldn't argue for more complexity unless you prove me there are huge benefits in doing such parallelism | 16:29 |
| sean-k-mooney | bauzas: well with placement you have less need for filters | 16:29 |
| bauzas | sean-k-mooney: correct | 16:29 |
| gibi | harsha24: do you have measurement about the latency specific to the filter executions? | 16:29 |
| dansmith | imagine how much worse NoValidHost will be to debug when it depends on the ordering of threads :) | 16:30 |
| bauzas | tbc, all filter processing is memory-based | 16:30 |
| bauzas | dansmith: right, that too | 16:30 |
| bauzas | filters ordering is crucial for most of our ops | 16:30 |
| *** bnemec has joined #openstack-meeting-3 | 16:30 | |
| dansmith | yep | 16:30 |
| sean-k-mooney | well all the filters for a given request could probably be put in their own thread without that being an issue | 16:30 |
| artom | dansmith, well, if it's done properly we'd join() all the threads *then* print out a summary | 16:30 |
| dansmith | sean-k-mooney: we can do that today with number of workers on the scheduler right? | 16:31 |
| bauzas | I remember johnthetubaguy making a proposal about some smarter filter processing in the past | 16:31 |
| artom | I think the question is - is the complexity and effort worth it? | 16:31 |
| sean-k-mooney | individual filters on the other hand would be a problem | 16:31 |
| sean-k-mooney | dansmith: ya more or less | 16:31 |
| bauzas | but that's... yeah, artom said it loudly | 16:31 |
| sean-k-mooney | harsha24: what was your suggestion | 16:31 |
| harsha24 | no, actually each filter would output its own list of hosts and we'd take an intersection of the filtered hosts at the end | 16:31 |
| dansmith | so each filter runs on the full set of hosts, which means expensive filters cost even more | 16:32 |
| bauzas | sean-k-mooney: a classic parallelism approach | 16:32 |
| harsha24 | yeah | 16:32 |
| bauzas | dansmith: yup | 16:32 |
| sean-k-mooney | so the current filter list has a defined order | 16:32 |
| bauzas | honestly, again, the CPU time is very short | 16:32 |
| artom | Thinking out loud, perhaps just doing the external REST API calls async with something like concurrent.futures might be a much bigger speed gain | 16:32 |
| bauzas | and we use generators | 16:32 |
| sean-k-mooney | if we run them in parallel they need to filter more hosts and we then need to calculate the intersection | 16:32 |
| bauzas | honestly, again | 16:32 |
| sean-k-mooney | so that will use more cpu and memory | 16:32 |
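The trade-off dansmith and sean-k-mooney describe above can be sketched in a few lines (hypothetical filter functions, not nova code): in the sequential chain each filter only sees the survivors of the previous one, while the parallel-and-intersect variant must run every filter over the full host list.

```python
# Hypothetical host filters (stand-ins, not nova's real filters).
def cheap_filter(host):        # e.g. an in-memory aggregate check
    return host % 2 == 0

def expensive_filter(host):    # e.g. a NUMA fit check
    return host % 10 == 0

hosts = list(range(1000))

# Sequential chain: the expensive filter runs only on the survivors.
survivors = [h for h in hosts if cheap_filter(h)]
sequential_expensive_calls = len(survivors)
survivors = [h for h in survivors if expensive_filter(h)]

# Parallel-and-intersect: every filter scans all 1000 hosts.
set_a = {h for h in hosts if cheap_filter(h)}
set_b = {h for h in hosts if expensive_filter(h)}
parallel_expensive_calls = len(hosts)
result = sorted(set_a & set_b)

assert result == survivors                                     # same answer
assert parallel_expensive_calls > sequential_expensive_calls   # more work
```

Same final host set, but the parallel variant pays the expensive filter's cost on the whole list, which is sean-k-mooney's cpu/memory point.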
| artom | IOW - do other stuff while we wait for Cinder/Neutron to answer. I know it's a completely different thing, just continuing the effort/results thought... | 16:33 |
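artom's side idea, roughly: overlap the slow external REST calls instead of serializing them. A minimal `concurrent.futures` sketch (the service-call functions below are stand-ins, not nova's real Neutron/Cinder code paths):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_neutron():           # stand-in for a ports lookup over REST
    time.sleep(0.1)
    return ["port-1"]

def query_cinder():            # stand-in for a volumes lookup over REST
    time.sleep(0.1)
    return ["vol-1"]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:
    ports_future = pool.submit(query_neutron)
    vols_future = pool.submit(query_cinder)
    ports, vols = ports_future.result(), vols_future.result()
elapsed = time.monotonic() - start

# Both calls wait on I/O concurrently, so wall time is ~0.1 s, not ~0.2 s.
assert elapsed < 0.2
```

Because these waits are network-bound, threads (or eventlet, which nova already uses) overlap them cheaply; no filter logic changes are needed.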
| bauzas | operators never saw the filters performance as a bottleneck | 16:33 |
| dansmith | sean-k-mooney: that was my point yeah | 16:33 |
| bauzas | ask CERN if you don't know | 16:33 |
| johnthetubaguy | do we know which filters are expensive? last time I looked it was the DB queries that dominated | 16:33 |
| dansmith | artom: what filters call out to cinder and neutron? | 16:33 |
| bauzas | johnthetubaguy: my whole point | 16:33 |
| dansmith | exactly | 16:33 |
| artom | dansmith, not the filters, in the compute manager for example | 16:33 |
| dansmith | artom: this is about filters | 16:33 |
| artom | dansmith, as I said, completely different thing :) | 16:33 |
| bauzas | we stopped querying the DB in the filters ages ago | 16:33 |
| dansmith | ack | 16:33 |
| aarents | sean-k-mooney: bauzas we need fewer filters with placement AND filters run only on the few hosts with room for allocation (the others are filtered out by placement beforehand) | 16:33 |
| sean-k-mooney | the aggregate* filters and numa are probably the most expensive | 16:33 |
| dansmith | numa I'm sure | 16:34 |
| artom | dansmith, I know - it was more the "where can we invest to improve speed" idea | 16:34 |
| sean-k-mooney | aarents: yes | 16:34 |
| harsha24 | what about weighers | 16:34 |
| bauzas | AGAIN, filters don't call DB or neutron | 16:34 |
| johnthetubaguy | numa and affinity were fairly expensive last time I looked | 16:34 |
| sean-k-mooney | weighers are a separate phase; they could be parallelised but i think they are cheap | 16:34 |
| dansmith | harsha24: even worse.. weigh all hosts even though we're going to exclude most? | 16:34 |
| bauzas | they just manipulate in-memory python objects to answer a simple question | 16:35 |
| bauzas | sean-k-mooney: again, the ordering is crucial | 16:35 |
| sean-k-mooney | dansmith: well we could weigh the filtered hosts in parallel | 16:35 |
| bauzas | for weighers | 16:35 |
| sean-k-mooney | bauzas: not for weighers | 16:35 |
| sean-k-mooney | that is not an ordered list | 16:35 |
| sean-k-mooney | or is it? i should check that | 16:35 |
| harsha24 | I mean after we have the filtered hosts we can use parallelism for the weighers | 16:35 |
| sean-k-mooney | if it's ordered then yes | 16:35 |
| bauzas | sean-k-mooney: trust me :) | 16:36 |
| dansmith | sean-k-mooney: that seems like a tiny potential for improvement | 16:36 |
| harsha24 | filter order is not required as we intersect the sets at the end | 16:36 |
| sean-k-mooney | dansmith: correct | 16:36 |
| bauzas | this seems putting more complexity for no gain | 16:36 |
| johnthetubaguy | bauzas: +1 | 16:36 |
| sean-k-mooney | harsha24: filter order is currently used to improve performance | 16:36 |
| dansmith | sean-k-mooney: there's still overhead in spinning up those threads, joining, and then the sorting is still linear, so I'd super doubt it's worth it :) | 16:36 |
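For reference, the weighing phase being discussed is essentially one scoring pass per host plus a single sort, which is why parallelising it buys little. A sketch with hypothetical weighers (not nova's real ones):

```python
# Hypothetical weighers: each returns a score for a host; scores are
# summed and one sort picks the best host.
def ram_weigher(host):
    return host["free_ram"]

def cpu_weigher(host):
    return -host["vcpus_used"]

weighers = [ram_weigher, cpu_weigher]
hosts = [
    {"name": "cn1", "free_ram": 8, "vcpus_used": 4},
    {"name": "cn2", "free_ram": 16, "vcpus_used": 2},
]

# One linear scoring pass over the already-filtered hosts, then one sort.
scored = sorted(hosts, key=lambda h: sum(w(h) for w in weighers),
                reverse=True)
assert scored[0]["name"] == "cn2"
```

With the host list already cut down by placement and the filters, this pass is cheap in-memory arithmetic; thread spin-up and join overhead would likely dominate any gain.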
| johnthetubaguy | sean-k-mooney: +1 | 16:36 |
| sean-k-mooney | you can change the order of the filters to eliminate more hosts in the initial filters | 16:36 |
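sean-k-mooney's ordering point, as a sketch (hypothetical filters with made-up selectivities, not nova code): running the most selective filter first minimises the total number of filter calls without changing the result, which is exactly what parallel-and-intersect gives up.

```python
# Count how many times each hypothetical filter runs under two orderings.
calls = {"selective": 0, "permissive": 0}

def selective(h):    # passes ~10% of hosts
    calls["selective"] += 1
    return h % 10 == 0

def permissive(h):   # passes ~50% of hosts
    calls["permissive"] += 1
    return h % 2 == 0

hosts = list(range(1000))

out = [h for h in hosts if selective(h)]    # 1000 calls, 100 survive
out = [h for h in out if permissive(h)]     # 100 calls
selective_first = sum(calls.values())       # 1100 total calls

calls = {"selective": 0, "permissive": 0}
out2 = [h for h in hosts if permissive(h)]  # 1000 calls, 500 survive
out2 = [h for h in out2 if selective(h)]    # 500 calls
permissive_first = sum(calls.values())      # 1500 total calls

assert out == out2 and selective_first < permissive_first
```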
| bauzas | I thought ops were maybe wanting to describe filters and weights another way, but they were okay with a sequential approach | 16:37 |
| gibi | OK so in summary, we don't see filters as a performance bottleneck of instance spawn. Therefore adding the complexity of parallel execution is not worth it for us. | 16:37 |
| harsha24 | Again, the number of filters should be less than the number of threads available | 16:37 |
| sean-k-mooney | dansmith: yes im not arguing for threads by the way | 16:37 |
| sean-k-mooney | harsha24: can you provide any data for this | 16:37 |
| dansmith | maybe this request is actually just someone's master's thesis and not really reality? :P | 16:37 |
| sean-k-mooney | harsha24: so we can look at a specific case | 16:37 |
| gibi | sean-k-mooney: ++ | 16:38 |
| gibi | I would like to see data that proves what we could gain | 16:38 |
| gibi | before moving forward with this | 16:38 |
| dansmith | the claim is also latency, | 16:38 |
| dansmith | not throughput | 16:38 |
| sean-k-mooney | harsha24: we have had inefficiently implemented filters in the past which have been optimised either at the db layer or in python | 16:38 |
| bauzas | we're discussing about a finite number of possibilities | 16:38 |
| artom | dansmith, oh be nice :) If harsha24 is looking to improve Nova performance, we should guide them to where we think the biggest problems are | 16:38 |
| bauzas | in general, we have less than 1000 nodes | 16:38 |
| bauzas | so parallelism doesn't really help | 16:39 |
| bauzas | even 100000 with say 10 filters is reasonable | 16:39 |
| sean-k-mooney | well even if we have more, placement will have narrowed it | 16:39 |
| bauzas | right | 16:39 |
| bauzas | and again, the whole thing was sucking because of the DB access times | 16:39 |
| bauzas | again, latency as dansmith said | 16:39 |
| dansmith | also, you can improve latency by reducing the number of results from placement, if you're claiming latency on an empty cloud where tons of hosts are considered by the filters | 16:40 |
| bauzas | at least that's what a couple of summit sessions from CERN taught me | 16:40 |
| sean-k-mooney | cern reduces it aggressively | 16:40 |
| bauzas | aarents: what are your findings btw ? | 16:40 |
| sean-k-mooney | but yes | 16:40 |
| dansmith | which is far less complex than threading any of the rest of the process | 16:40 |
| bauzas | ++ | 16:41 |
| johnthetubaguy | more request filters to remove filters? | 16:41 |
| dansmith | johnthetubaguy: that's covered under the placement topic I think, but always better yeah | 16:41 |
| sean-k-mooney | where possible | 16:41 |
| johnthetubaguy | dansmith: true | 16:41 |
| bauzas | johnthetubaguy: well, I wouldn't be that opinionated | 16:42 |
| dansmith | meaning, as we do more of that, fewer real filters are required because placement gives us more accurate results | 16:42 |
| artom | harsha24, are you following any of this, by the way? | 16:42 |
| bauzas | johnthetubaguy: I'd say some ops would prefer having a fine-grained placement query and others would prefer looping over the filters logic | 16:42 |
| artom | harsha24, was your idea to implement threading, or more generally to try and improve performance? | 16:42 |
| johnthetubaguy | dansmith: +1 | 16:42 |
| harsha24 | what if we use this as a plugin when there are few filters, like fewer than 3 or so | 16:42 |
| aarents | bauzas: we live with clusters of 1500 nodes on newton (placement free).. it is slow but it is "usable" | 16:42 |
| bauzas | aarents: ah, I forgot you lag :p | 16:43 |
| harsha24 | artom yeah | 16:43 |
| sean-k-mooney | harsha24: we had plugins in the past that tried to use the caching scheduler and other tricks to speed up scheduling | 16:43 |
| sean-k-mooney | in the long run that approach has not worked out | 16:44 |
| sean-k-mooney | you could do an experiment but im not sure the scheduling is the bottleneck in spawning an instance | 16:44 |
| johnthetubaguy | well, caching scheduler became pointless, now placement does it better | 16:44 |
| harsha24 | oh okay then its not feasible | 16:44 |
| gibi | OK lets wrap this up. | 16:44 |
| sean-k-mooney | johnthetubaguy: ya more or less | 16:45 |
| harsha24 | does this method improve performance in weighers | 16:45 |
| bauzas | caching scheduler was there because of the DB bottleneck, not because of the filters :) | 16:45 |
| gibi | so the direct idea of parallelizing the filter execution does not seem worth it. please provide data to prove the gain. | 16:45 |
| *** tosky has quit IRC | 16:45 | |
| bauzas | +1 | 16:45 |
| harsha24 | I will try a PoC based on this | 16:45 |
| gibi | harsha24: if you are here to improve performance in general, then I think artom had some ideas where to gain more | 16:45 |
| bauzas | with a devstack env ideally, please | 16:45 |
| johnthetubaguy | bauzas: agreed, and the filters, with loads of hosts, took basically zero time when I measured it in prod at a public cloud | 16:45 |
| gibi | Is there any other open disucssion topic for today? | 16:46 |
| artom | gibi, it was literally a random thought in my head, but I guess? | 16:46 |
| gibi | artom: :) | 16:46 |
| artom | End this! ;) | 16:46 |
| bauzas | artom: I'd dare say good luck with improving I/Os on filters | 16:47 |
| dansmith | we do async the neutron stuff already, IIRC, | 16:47 |
| bauzas | right | 16:47 |
| dansmith | and the cinder stuff has dependencies that make it hard, IIRC | 16:47 |
| bauzas | we don't call cinder on a filter either way | 16:47 |
| bauzas | and I'd -2 any proposal like it | 16:47 |
| dansmith | bauzas: artom was randomly picking non-filter things | 16:47 |
| bauzas | hah | 16:47 |
| artom | *performance related* random things! | 16:47 |
| artom | I wasn't talking about, like cabbages! | 16:47 |
| bauzas | but I thought we were discussing filter parallelism | 16:48 |
| dansmith | bauzas: so this is not filter-related, but it was confusing in a discussion about...filters :) | 16:48 |
| artom | Yes, to improve *performance* :) | 16:48 |
| bauzas | right | 16:48 |
| dansmith | bauzas: we were... | 16:48 |
| sean-k-mooney | lets go to #openstack-nova | 16:48 |
| gibi | if nothing else for today then lets close the meeting. | 16:48 |
| dansmith | please | 16:48 |
| artom | I said "End this!" like 13 lines ago | 16:48 |
| sean-k-mooney | and we can continue this there if we want | 16:48 |
| gibi | thanks for joining | 16:48 |
| bauzas | okay, my tongue is dry | 16:48 |
| *** yamamoto has joined #openstack-meeting-3 | 16:48 | |
| sean-k-mooney | o/ | 16:48 |
| gibi | #endmeeting | 16:48 |
| *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 16:48 | |
| openstack | Meeting ended Thu Aug 27 16:48:36 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:48 |
| openstack | Minutes: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.html | 16:48 |
| openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.txt | 16:48 |
| openstack | Log: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.log.html | 16:48 |
| *** sean-k-mooney has left #openstack-meeting-3 | 16:48 | |
| * bauzas runs as quickly as he can | 16:49 | |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!