Tuesday, 2021-06-01

*** jbadiapa has quit IRC00:21
*** jbadiapa has joined #openstack-meeting-300:34
*** yasufum has joined #openstack-meeting-300:45
*** manub has quit IRC02:53
*** manub has joined #openstack-meeting-304:31
*** ricolin_ has joined #openstack-meeting-304:32
*** manub has quit IRC04:33
*** ricolin has quit IRC04:36
*** ricolin_ is now known as ricolin04:36
*** ralonsoh has joined #openstack-meeting-305:41
*** slaweq has joined #openstack-meeting-306:17
*** jbadiapa has quit IRC07:02
*** jbadiapa has joined #openstack-meeting-307:02
*** tosky has joined #openstack-meeting-307:48
*** tosky has quit IRC07:48
*** tosky has joined #openstack-meeting-308:08
*** stephenfin has quit IRC08:49
*** stephenfin has joined #openstack-meeting-309:15
*** obondarev has joined #openstack-meeting-311:32
*** jbadiapa has quit IRC12:07
*** jbadiapa has joined #openstack-meeting-312:09
*** belmoreira has joined #openstack-meeting-313:51
*** jlibosva has joined #openstack-meeting-313:56
*** lajoskatona has joined #openstack-meeting-314:00
slaweq#startmeeting networking14:01
opendevmeetMeeting started Tue Jun  1 14:01:50 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.14:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:01
opendevmeetThe meeting name has been set to 'networking'14:01
rubasovhi14:02
lajoskatonaHi14:02
jlibosvahi14:02
ralonsohhi14:02
amotokio/14:02
obondarevhi14:03
bcafarelo/14:03
slaweqok, lets start14:04
*** manub has joined #openstack-meeting-314:04
slaweqwelcome to our new irc "home" :)14:04
slaweq#topic announcements14:05
slaweqXena cycle calendar https://releases.openstack.org/xena/schedule.html14:05
njohnstono/14:06
slaweqwe have about 5 more weeks before Xena-2 milestone14:06
slaweqand I want to ask everyone to try to review some of the open specs, so we can get them merged before milestone 2 and move on with the implementation later14:06
slaweqnext one14:07
slaweqMove to OFTC is done: http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022724.html14:07
slaweqYou probably all know that already as You are here14:07
bcafarel:)14:07
slaweqbut just for the sake of announcements14:07
slaweqregarding that migration I have a couple of related things14:07
slaweqif You can, please stay a few more days on the old channel, and redirect people to the new one if necessary,14:07
slaweqbut please remember not to explicitly say the name of the new server on the freenode channel14:08
slaweqas that may cause our channel there to be taken over14:08
slaweqneutron docs are updated: https://review.opendev.org/c/openstack/neutron/+/793846 - if there are any other places to change, please ping me or send patch to update that14:08
*** mlavalle has joined #openstack-meeting-314:09
slaweqand last thing related to that migration14:09
mlavalleo/14:09
slaweqhi mlavalle - You were fast connecting here :)14:09
mlavalleLOL, took me a few minutes to figure it out14:10
slaweq:)14:10
slaweqtc asked teams to consider moving from the meeting rooms to the project channels for meetings14:10
slaweqso I wanted to ask You what do You think about it?14:10
slaweqdo You think we should still have all meetings in the #openstack-meeting-X channels?14:10
slaweqor maybe we can move to the #openstack-neutron with all meetings?14:11
amotokithe only downside I see is that the openstack bot could post messages about reviews during the meeting. Otherwise sounds good.14:11
slaweqamotoki: do You know if that is an issue for other teams maybe?14:12
amotokislaweq: unfortunately no14:12
slaweqI know that some teams already have their meetings in the team channels14:12
obondarevsome ongoing discussions in team channel would have to be paused during meeting time14:13
lajoskatonaPerhaps that would make it easier for collecting people to the meetings14:13
slaweqobondarev: yes, but tbh, do we have a lot of them? :)14:13
bcafareloverall I think #openstack-neutron is generally "quiet" enough now to have meetings in it14:13
bcafarelbut no strong opinion, I will just update my tabs :) (or follow the PTL pings)14:14
obondarevjust try to figure out downsides, I don't think it should stop us:)14:14
amotokiobondarev's point and mine are potential downsides. generally speaking, the #-neutron channel does not have much traffic, so I am fine with having meetings in the #-neutron channel.14:14
rubasovI'm also okay with both options14:14
obondarev+114:15
slaweqok, I will try to get more info from the tc why they want teams to migrate14:15
njohnstonThe octavia team has met in their channel for a long time now without issues14:15
slaweqnjohnston: thx, that's good to know14:16
slaweqdo You maybe know why tc wants teams to do such migration? is there any issue with using -meeting channels?14:16
* slaweq is just curious14:16
njohnstonI'm not sure, I did not attend that discussion.14:17
slaweqok14:17
slaweqso I will try to get such info and as there are no strong opinions against it, I will update our meetings accordingly14:18
slaweqso please expect some email with info about it during this week :)14:19
slaweqok, last reminder14:19
slaweqNominations for the Y release name are still open http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html14:19
slaweqthat's all announcements/reminders from me14:19
slaweqdo You have anything else what You want to share now?14:20
amotokiI have one thing I would like to share around the OVN switch and horizon. I don't want to dig into it in detail here though.14:20
amotokihorizon sees consistent failures in the integration tests after switching the network backend to OVN. The failures happen in router related tests.14:20
amotokiWhen we configure the job to use OVS backend, the failure (at least on the router related failures) has gone.14:20
amotokiWe haven't understood what happens in detail yet, but if you notice some difference between the OVS/L3 and OVN cases please let vishal (PTL) or me know.14:20
amotokithat's all I would like to share.14:20
slaweqamotoki: there are for sure differences as in OVN/L3 there is no "distributed" attribute supported - routers are distributed by default14:21
slaweqand the other difference which I know of is that ovn/l3 doesn't have an l3 agent scheduler14:21
slaweqso all "agent" related api calls will probably fail14:22
amotokislaweq: yeah, I know. I think horizon needs to investigate it in more detail on what's happening.14:22
slaweqamotoki: do You have a link to the failed job? I can take a look after the meeting14:22
slaweqamotoki: we also had similar issue with functional job in osc14:23
amotokiI think https://zuul.openstack.org/builds?job_name=horizon-integration-tests&project=openstack%2Fhorizon&branch=master&pipeline=periodic is the easiest way.14:23
slaweqI sent patch https://review.opendev.org/c/openstack/python-openstackclient/+/793142 to address issues in that job14:23
amotokiI saw it too.14:23
slaweqand there are also some fixes on neutron side needed (see depends-on in the linked patch)14:23
amotokianyway I will continue to follow up the failure detail :)14:24
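
The agent-scheduler gap slaweq describes is visible through the API extension list, which is one way a client like horizon could guard router/agent related calls; a minimal sketch assuming an openstacksdk connection and a cloud entry named devstack-admin (the guard itself is illustrative, not horizon's actual code):

    import openstack

    conn = openstack.connect(cloud='devstack-admin')

    # Build the set of extension aliases the running Neutron backend advertises.
    aliases = {ext.alias for ext in conn.network.extensions()}

    if 'l3_agent_scheduler' in aliases:
        # ML2/OVS + L3 agent: router <-> L3 agent bindings can be listed.
        show_agent_bindings = True
    else:
        # ML2/OVN: routers are distributed by default and there is no L3 agent
        # to schedule, so agent-related calls/panels should be skipped.
        show_agent_bindings = False
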
slaweqok14:25
slaweq#topic Blueprints14:25
slaweqXena-2 BPs: https://bugs.launchpad.net/neutron/+milestone/xena-214:26
slaweqdo You have any updates about any of them?14:26
slaweqI have only one short update about https://blueprints.launchpad.net/neutron/+spec/secure-rbac-roles14:27
slaweqalmost all UT patches are now merged14:27
slaweqonly one left14:27
bcafarelnice work14:29
slaweqnow I need to check how to switch some devstack based job to use those new defaults14:30
slaweqand then to check what will be broken14:30
slaweqbut tbh I'm not sure if we should mark the BP as completed now or keep it open still?14:30
slaweqall new default rules are there already14:30
slaweqso IMO we can simply treat any issues which we find as bugs and report them on LP14:31
slaweqwdyt?14:31
obondarevmakes sense14:31
slaweqIMO it would be easier to track them14:31
bcafarelwell the feature is implemented, the code is in there etc, so marking the BP as completed makes sense14:31
ralonsohyeah, we can open new LP for new issues14:32
amotokimake sense to me too.14:32
slaweqk, thx14:32
slaweqand that's all update from my side regarding BPs14:32
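
For the "switch some devstack based job to use those new defaults" step mentioned above, the relevant knobs are oslo.policy's, set in neutron.conf; whether the job would feed them through devstack's local.conf or zuul job vars is an assumption here:

    [oslo_policy]
    enforce_new_defaults = True
    enforce_scope = True
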
slaweqif there are no other updates, I think we can move on14:33
slaweqto the next topic14:33
slaweq#topic Bugs14:33
slaweqI was bug deputy last week. Report http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022776.html14:33
slaweqthere are a couple of bugs which I wanted to highlight now14:33
slaweqfirst of all, l3-ha issue:14:33
slaweqhttps://bugs.launchpad.net/neutron/+bug/1930096 - this is related to the L3ha and seems pretty serious issue which needs to be checked.14:33
opendevmeetLaunchpad bug 1930096 in neutron "Missing static routes after neutron-l3-agent restart" [High,Confirmed]14:33
slaweqaccording to my tests, it seems that this may be related to the change which sets interfaces DOWN on the standby node14:34
slaweqand brings them UP when the router becomes active on the node14:34
slaweqI saw in the logs locally that keepalived was complaining that there is no route to host and extra routes were not configured in qrouter namespace14:35
slaweqif anyone has spare cycles, please try to check that14:35
slaweqI may try but not this week for sure14:36
ralonsohI can check it14:36
slaweqralonsoh: thx14:36
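
For anyone picking this up, a quick way to confirm the observation above on an affected node is to compare the routes neutron has for the router with what actually sits in its namespace after the l3-agent restart; the router UUID below is a placeholder:

    # extra routes Neutron thinks the router has
    openstack router show <router-uuid> -c routes

    # what is really present in the qrouter namespace after the restart
    ip netns exec qrouter-<router-uuid> ip route
    ip netns exec qrouter-<router-uuid> ip -o link    # which interfaces are DOWN on the standby
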
obondarevralonsoh has infinite cycles it seems :)14:36
slaweqobondarev: yes, that's true14:37
slaweq:)14:37
mlavalleThere are several ralonsoh s in parallel universes all working together14:37
slaweqmlavalle: that can be the only reasonable explanation ;)14:38
ralonsohhehehe14:38
amotoki:)14:38
slaweqok, next one14:38
slaweqhttps://bugs.launchpad.net/neutron/+bug/1929523 - this one is gate blocker14:38
opendevmeetLaunchpad bug 1929523 in neutron "Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time" [High,Confirmed]14:38
slaweqand as our gate isn't in good condition now, we need to take a look into that14:39
ralonsohthere was a related patch from Candido14:39
slaweqI may try to check that tomorrow or on Friday14:39
ralonsohI'll send you the link14:39
slaweqralonsoh: but I'm not sure if it's the same issue really14:39
slaweqplease send me link and I will take a look14:39
ralonsohit is, the DNS entries14:39
slaweqwould be great :)14:40
ralonsohhttps://review.opendev.org/c/openstack/tempest/+/77975614:40
ralonsohof course, maybe we need something more14:40
slaweqthx ralonsoh14:42
slaweqI will take a look at it14:42
slaweqand the last bug from me:14:42
slaweqhttps://bugs.launchpad.net/neutron/+bug/1929821 - that seems to be low-hanging-fruit bug, so maybe there is someone who wants to take it :)14:42
opendevmeetLaunchpad bug 1929821 in neutron "[dvr] misleading fip rule priority not found error message" [Low,New]14:42
slaweqand that are all bugs which I had for today14:43
slaweqany other issues You want to discuss?14:44
mlavallenot me14:44
slaweqbug deputy this week is hongbin14:45
slaweqhe is aware of it and should be ok14:45
slaweqnext week will be haleyb's turn14:45
haleybhi14:45
slaweqhi haleyb :)14:46
mlavallehi haleyb!14:46
slaweqare You ok being bug deputy next week?14:46
haleybyes, that's fine14:46
slaweqgreat, thx14:46
slaweqso, let's move on14:46
slaweq#topic CLI/SDK14:47
slaweqOSC patch https://review.opendev.org/c/openstack/python-openstackclient/+/768210 is merged now14:47
slaweqso with the next osc release we should have the possibility to send custom parameters to neutron14:47
bcafarel\o/14:47
slaweqthanks all for reviews of that patch :)14:47
amotokireally nice14:47
*** jlibosva has quit IRC14:47
slaweqI proposed also neutronclient patch https://review.opendev.org/c/openstack/python-neutronclient/+/79336614:48
slaweqplease check that when You have some time14:48
slaweqand I have 1 more question - is there anything else we should do to finish that effort?14:48
amotokigenerally no, but I'd like to check the --clear behavior consistency in detail. In some APIs, None needs to be sent to clear the list (instead of []).14:49
slaweqAFAIR that OSC thing was last missing piece but maybe I missed something14:49
amotokiI am not sure it should be handled via CLI or API itself.14:49
slaweqamotoki: You mean to check if it works ok for all our resources already?14:50
amotokislaweq: yeah, I would like to check some APIs which use None to clear the list, but I don't think it is a blocking issue for the final call.14:51
slaweqamotoki: great, if You will find any issues, please report bug for OSC and we can fix them14:51
amotokislaweq: sure14:51
slaweqamotoki++ thx14:51
slaweqso we are finally approaching the EOL for the neutronclient CLI :)14:51
slaweqplease be aware14:52
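
For reference, the idea behind the merged OSC patch is that attributes the client does not know about can be passed straight through to neutron from the CLI. As far as I recall the option is spelled --extra-property with a type/name/value triple, but treat the exact syntax below as an assumption and check "openstack network create --help" once the release that carries it is out:

    # hypothetical usage, flag spelling/format to be confirmed against the released client
    openstack network create demo-net \
        --extra-property type=str,name=custom_attr,value=something
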
slaweqand that are all topics from me for today14:52
obondarevraise hand who still use neutronclient o/ :D14:52
slaweqobondarev: o/14:53
slaweqbut it's time to move to the OSC now :)14:53
obondarevyeah, sad but true14:53
amotokiusing neutron in mitaka env I still need to take care :p14:53
slaweqamotoki: WOW, that's old one14:53
slaweqI hope You don't need to do backports from master to that mitaka branch :P14:53
amotokihehe. it's not so surprising in a telco env14:54
slaweq:)14:54
bcafareltrue14:54
slaweqok, thx for attending the meeting14:54
slaweqand have a great week14:54
mlavalleo/14:54
ralonsohbye14:54
slaweqo/14:54
amotokio/14:54
rubasovo/14:54
bcafarelo/14:54
slaweq#endmeeting14:54
opendevmeetMeeting ended Tue Jun  1 14:54:55 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:54
opendevmeetMinutes:        http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-06-01-14.01.html14:54
opendevmeetMinutes (text): http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-06-01-14.01.txt14:55
opendevmeetLog:            http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-06-01-14.01.log.html14:55
lajoskatonao/14:55
manubo/14:55
*** manub has quit IRC14:55
slaweq#startmeeting neutron_ci15:01
opendevmeetMeeting started Tue Jun  1 15:01:37 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
opendevmeetThe meeting name has been set to 'neutron_ci'15:01
ralonsohhi15:02
lajoskatonaHi15:03
obondarevhi15:03
slaweqbcafarel: ping15:03
slaweqci meeting15:03
bcafarelo/ sorry15:03
slaweqnp :)15:03
bcafarelI got used to the usual 15-20 min break between these meetings :p15:03
slaweqok, let's start15:03
slaweqGrafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate15:04
slaweqPlease open now :)15:04
slaweq#topic Actions from previous meetings15:04
slaweqobondarev to check neutron-tempest-dvr-ha-multinode-full and switch it to ML2/OVS15:04
obondarevhttps://review.opendev.org/c/openstack/neutron/+/79310415:05
obondarevready15:05
*** sean-k-mooney has joined #openstack-meeting-315:05
slaweqobondarev: thx15:05
slaweqtoday I found out that there is also neutron-tempest-ipv6 which is now running on ovn15:06
slaweqand now the question is - do we want to switch it back to ovs or keep it with default backend?15:06
ralonsohit is not failing15:06
lajoskatona+115:07
slaweqI would say - keep it with default backend (ovn now) but maybe You have other opinions about it15:07
ralonsohso, IMO, keep it in OVN15:07
lajoskatonaagree15:07
obondarevwill it mean that reference implementation will not be covered in gates?15:07
obondarevwith tempest ipv6 tests15:07
slaweqobondarev: what do You mean by reference implementation? ML2/OVS?15:07
bcafarelit may turn into close to a duplicate of neutron-ovn-tempest-ovs-release-ipv6-only too?15:07
obondarevif yes - that would be a problem15:07
obondarevslaweq, yes ML2-OVS15:08
slaweqbcafarel: good point, I missed that we have such job already15:08
ralonsohif I'm not wrong, ovs release uses master15:08
ralonsohright?15:08
ralonsohin any case, it won't affect and could be a duplicate15:08
slaweqso according to that and to what obondarev said, maybe we should switch neutron-tempest-ipv6 back to ml2/ovs15:09
obondarevso if reference ML2-OVS is not protected with CI then it's prone to regressions which is not good15:09
obondarevfor ipv6 again15:09
slaweqobondarev: that's good point, we don't want regression in ML2/OVS for sure15:10
ralonsohbtw, why don't we rename neutron-ovn-tempest-ovs-release-ipv6-only to be OVS?15:10
slaweqralonsoh: that'15:10
ralonsohand keep it neutron-tempest-ipv6 with the default backend15:10
slaweqthat's my another question - what we should do as next step with our jobs15:10
slaweqshould we switch "*-ovn" jobs to be "-ovs" and keep "default" jobs as ovn ones now?15:11
slaweqto reflect the devstack change in our ci jobs too?15:11
slaweqor should we for now just keep everything as it was, so "regular" jobs running ovs and "-ovn" jobs running ovn15:11
slaweqwdyt?15:12
ralonsohIMO, rename those with a different backend15:12
ralonsohin this case, -ovs15:12
obondarevthat makes sense15:13
bcafarel+1 at least for a while, it will be clearer15:13
slaweqok, so we need to change many of our jobs now :)15:13
lajoskatonayeah make the names help identifying what the backend is15:14
bcafarelbefore there were only select jobs with linuxbridge (and some new with ovn) so naming was clear, but with the default switch, it can get confusing (even for us :) )15:14
slaweqbut I agree that this is better long term, especially as some of our jobs inherit e.g. from tempest jobs and those tempest jobs with default settings are run in e.g. tempest or nova's gate too15:14
obondarevwe can set them use OVS explicitly as first step15:14
obondarevand go on with renaming as second15:14
slaweqobondarev: yes, I agree15:14
slaweqlet's merge patches which we have now to enforce ovs where it was before, to have working ci15:15
slaweqand then let's switch jobs completely15:15
slaweqas that will require more work for sure15:15
bcafarelsounds good to me!15:16
slaweqok, sounds like a plan :)15:16
slaweqI will try to prepare plan to switch jobs for next week15:17
slaweq#action slaweq to prepare plan of switch ovn <-> ovs jobs in neutron CI15:17
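
For the "set them to use OVS explicitly as a first step" part, the overrides usually live in the job definition as devstack variables; a hedged zuul sketch of what such an override could look like (the job and parent names are placeholders and each job may need a slightly different set of services):

    - job:
        name: neutron-tempest-ipv6-ovs        # placeholder name
        parent: neutron-tempest-ipv6
        vars:
          devstack_localrc:
            Q_AGENT: openvswitch
            Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
            Q_ML2_TENANT_NETWORK_TYPE: vxlan
          devstack_services:
            # OVS agents on, OVN services off
            q-agt: true
            q-l3: true
            q-dhcp: true
            q-meta: true
            ovn-controller: false
            ovn-northd: false
            q-ovn-metadata-agent: false
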
slaweqok15:19
slaweqnext one15:19
slaweq    ralonsoh to talk with ccamposr about issue https://bugs.launchpad.net/neutron/+bug/192952315:19
opendevmeetLaunchpad bug 1929523 in neutron "Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time" [High,Confirmed]15:19
slaweqralonsoh: it's related to the patch https://review.opendev.org/c/openstack/tempest/+/779756 right?15:19
ralonsohyes15:20
slaweqralonsoh: I'm still not convinced that this will solve that problem from https://bugs.launchpad.net/neutron/+bug/192952315:20
opendevmeetLaunchpad bug 1929523 in neutron "Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time" [High,Confirmed]15:20
slaweqas the issue is a bit different now15:21
slaweqit's not that we have additional server in the list15:21
ralonsohin this case we don't have any DNS register15:21
slaweqbut we got empty list15:21
slaweqand e.g. failure https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_567/785895/1/gate/neutron-tempest-slow-py3/567fc7f/testr_results.html happened 20.05, more than week after patch https://review.opendev.org/c/openstack/tempest/+/779756 was merged15:22
ralonsohare we using cirros or ubuntu?15:22
ralonsohif we use the advanced image, maybe we should use resolvectl15:22
slaweqin that failed test, Ubuntu15:22
ralonsohinstead of reading /etc/resolv.conf15:23
ralonsohI'll propose a patch in tempest to use resolvectl, if present in the VM15:23
slaweqk15:23
ralonsohthat should be more accurate15:23
slaweqmaybe indeed that will help15:23
slaweqthx ralonsoh15:23
slaweq#action ralonsoh to propose tempest patch to use resolvectl to address https://bugs.launchpad.net/neutron/+bug/192952315:24
opendevmeetLaunchpad bug 1929523 in neutron "Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time" [High,Confirmed]15:24
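
The reason resolvectl matters is that on systemd-resolved guests (the advanced Ubuntu image) /etc/resolv.conf only lists the 127.0.0.53 stub, so the subnet's nameservers never show up there, while cirros has no resolvectl at all. A rough sketch of the fallback such a tempest change could use (remote_client is tempest's SSH helper, the helper name is made up and the parsing is simplified):

    def get_guest_nameservers(remote_client):
        # Prefer resolvectl where it exists (systemd-resolved guests such as Ubuntu),
        # otherwise fall back to the classic resolv.conf (e.g. cirros).
        out = remote_client.exec_command(
            'command -v resolvectl >/dev/null && resolvectl dns || cat /etc/resolv.conf')
        servers = []
        for line in out.splitlines():
            if line.startswith('nameserver'):
                # resolv.conf style: "nameserver 10.0.0.2"
                servers.append(line.split()[1])
            elif ':' in line:
                # resolvectl style: "Link 2 (ens3): 10.0.0.2 10.0.0.3"
                servers.extend(line.split(':', 1)[1].split())
        return servers
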
slaweqok, I think we can move on15:24
slaweq#topic Stadium projects15:24
slaweqlajoskatona: any updates?15:24
lajoskatonanothing15:25
*** jpward has joined #openstack-meeting-315:25
lajoskatonaI think one backend change patch is open, letme check15:25
lajoskatonahttps://review.opendev.org/c/openstack/networking-bagpipe/+/79112615:25
lajoskatonayeah one more thing, the old patches from boden for using payload are now active again, but I hope there will be no problem with them15:26
slaweqyes, I saw some of them already15:27
lajoskatonaI have seen a payload patch in x/vmware-nsx that is abandoned, and I have no right to reactivate it15:27
lajoskatonanot sure if we have to warn them somehow15:27
slaweqgood idea, I will try to reach out to boden somehow15:28
slaweqmaybe he will redirect me to someone who is now working on it15:28
slaweq#action slaweq to reach out to boden about payload patches and x/vmware-nsx15:28
slaweqthx lajoskatona15:28
slaweqif that is all, lets move on15:31
slaweq#topic Stable branches15:31
slaweqbcafarel: anything new here?15:31
bcafarelmostly good all around :) one question I had there (coming from https://review.opendev.org/c/openstack/neutron/+/793417/ failing backport)15:31
bcafarelother branches do not have that irrelevant-files issue as in newer branches these jobs run in periodic15:32
bcafarelbut for victoria/ussuri I think it is better to fix the job dep instead of backporting the move to periodic15:33
slaweqI think we are still missing many patches like https://review.opendev.org/q/topic:%22improve-neutron-ci-stable%252Fussuri%22+(status:open%20OR%20status:merged)15:33
slaweqin stable branches15:33
slaweqand that's only example for ussuri15:34
slaweqbut similar patches are opened for other branches too15:34
bcafarelyes, getting these ones in will probably help ussuri in general too15:34
bcafarelI have https://review.opendev.org/c/openstack/neutron/+/793799 and https://review.opendev.org/c/openstack/neutron/+/793801 mostly for that provider job issue15:35
bcafarelralonsoh: looks like that whole chain is just waiting on https://review.opendev.org/c/openstack/neutron/+/778708 if you can check it15:36
ralonsohI'll do15:36
ralonsohah I know this patch, perfect15:37
bcafarelyes hopefully it should not take too much off of your infinite cycles :)15:37
slaweqLOL15:38
slaweqthx ralonsoh15:38
slaweqok, lets move on15:39
slaweq#topic Grafana15:39
slaweqwe have our gate broken now due to ovs->ovn migration and some other issue15:39
slaweqso we can only focus on the check queue graphs today15:39
slaweqand the biggest issues which I see for now are with neutron-ovn-tempest-slow job15:40
slaweqwhich is failing very often15:40
slaweqand ralonsoh already proposed to make it non-voting temporarily15:40
ralonsohyes15:40
slaweqI reported LP for that https://bugs.launchpad.net/neutron/+bug/193040215:41
opendevmeetLaunchpad bug 1930402 in neutron "SSH timeouts happens very often in the ovn based CI jobs" [Critical,Confirmed]15:41
slaweqand I know that jlibosva and lucasgomes are looking into it15:41
slaweqdo You have anything else regarding grafana?15:42
bcafarelI see openstack-tox-py36-with-neutron-lib-master started 100% failing in periodic few days ago15:42
ralonsohlink?15:43
slaweqbcafarel: yes, I had it for the last topic of the meeting :)15:43
slaweqbut as You started, we can discuss it now15:43
slaweqhttps://bugs.launchpad.net/neutron/+bug/193039715:43
opendevmeetLaunchpad bug 1930397 in neutron "neutron-lib from master branch is breaking our UT job" [Critical,Confirmed]15:43
slaweqthere is a bug reported15:43
slaweqand example https://zuul.openstack.org/build/9e852a424a52479695223ac2a7723e1a15:43
bcafarelah thanks I was looking for some job link15:43
ralonsohmaybe this is because of the change in the n-lib session15:44
ralonsohI'll check it15:44
ralonsohgood to have this n-lib master job15:44
slaweqralonsoh: yes, I suspect that15:44
slaweqso we should avoid releasing a new neutron-lib before we fix that issue15:44
slaweqotherwise we will probably break our gate (again) :)15:45
ralonsohright15:45
ralonsohpffff15:45
ralonsohno, not again15:45
bcafarelone broken gate at a time15:45
slaweqLOL15:45
obondarev:)15:45
bcafarelmaybe related to recent "Allow lazy load in model_query" neutron-lib commit?15:45
ralonsohno, not this15:45
obondarevI checked it but seems unrelated15:45
ralonsohthis is not used yet15:45
obondarevyes15:46
bcafarelok :)15:46
slaweqso, ralonsoh You will check it, right?15:47
ralonsohyes15:47
slaweqthx a lot15:47
slaweq#action ralonsoh to check failing neutron-lib-from-master periodic job15:47
slaweqok, let's move on then15:47
slaweq#topic fullstack/functional15:47
slaweqregarding the functional job, I didn't find any new issues for today15:47
slaweqbut for fullstack there is new one:15:48
slaweqhttps://bugs.launchpad.net/neutron/+bug/193040115:48
opendevmeetLaunchpad bug 1930401 in neutron "Fullstack l3 agent tests failing due to timeout waiting until port is active" [Critical,Confirmed]15:48
slaweqseems like it happens pretty often in various L3 related tests15:48
slaweqI can investigate it more in next days15:48
slawequnless someone else wants to take it :)15:48
ralonsohmaybe next week15:49
lajoskatonaI can check15:49
slaweqlajoskatona: thx a lot15:50
slaweq#action lajoskatona to check fullstack failures https://bugs.launchpad.net/neutron/+bug/193040115:50
opendevmeetLaunchpad bug 1930401 in neutron "Fullstack l3 agent tests failing due to timeout waiting until port is active" [Critical,Confirmed]15:50
slaweqlajoskatona: and also, there is another fullstack issue: https://bugs.launchpad.net/neutron/+bug/192876415:50
opendevmeetLaunchpad bug 1928764 in neutron "Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent" [Critical,Confirmed] - Assigned to Lajos Katona (lajos-katona)15:50
slaweqwhich is hitting us pretty often15:51
slaweqI know You were working on it some time ago15:51
slaweqdo You have any patch which should fix it?15:51
lajoskatonaYes we discussed it with Oleg in review15:51
slaweqor should we maybe mark those failing tests as unstable for now?15:51
lajoskatonahttps://review.opendev.org/c/openstack/neutron/+/79250715:51
lajoskatonabut obondarev is right, ping should not fail during restart of agent15:52
slaweqactually yes - that is even main goal of this test AFAIR15:52
slaweqto ensure that ping will work during the restart all the time15:53
lajoskatonayeah marking them unstable can be a way forward to decrease the pressure on CI15:53
slaweqlajoskatona: will You propose it?15:53
lajoskatonaYes15:53
slaweqthank You15:53
slaweq#action lajoskatona to mark failing TestUninterruptedConnectivityOnL2AgentRestart fullstack tests as unstable temporary15:54
slaweqlajoskatona: if You don't have too much time to work on https://bugs.launchpad.net/neutron/+bug/1930401 this week, maybe You can also mark those tests as unstable for now15:54
opendevmeetLaunchpad bug 1930401 in neutron "Fullstack l3 agent tests failing due to timeout waiting until port is active" [Critical,Confirmed]15:54
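
For reference, marking such tests unstable is normally just a decorator on the failing test methods so that known failures are skipped instead of sinking the whole job; a minimal sketch assuming the existing helper in neutron.tests.base (the class name is the one from the bug, the method name and body are illustrative):

    from neutron.tests import base


    class TestUninterruptedConnectivityOnL2AgentRestart(base.BaseTestCase):

        @base.unstable_test("bug 1928764")
        def test_l2_agent_restart_keeps_connectivity(self):
            # illustrative body; the real test lives in the fullstack suite
            self.assertTrue(True)
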
obondarevanother bug related to the PTG discussion on linuxbridge future15:55
lajoskatonaslaweq: I will check15:55
slaweqIMHO we need to make our CI a bit better as now it's a nightmare15:55
slaweqobondarev: yes, that's true15:55
*** elodilles has joined #openstack-meeting-315:55
slaweqprobably we will get back to that discussion in some time :)15:55
lajoskatonawe should ask NASA to help maintaining it :P15:56
slaweqlajoskatona: yeah :)15:56
slaweqgood idea15:56
slaweqcan I assign it as an action item to You? :P15:56
* slaweq is just kidding15:56
lajoskatona:-)15:57
slaweqok, that was all what I had for today15:57
slaweqif You don't have any last minute topics, I will give You few minutes back15:58
obondarevo/15:58
bcafarelnothing from me15:58
slaweqok, thx for attending the meeting today15:58
slaweq#endmeeting15:58
lajoskatonao/15:58
opendevmeetMeeting ended Tue Jun  1 15:58:46 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:58
slaweqo/15:58
opendevmeetMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-06-01-15.01.html15:58
opendevmeetMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-06-01-15.01.txt15:58
opendevmeetLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-06-01-15.01.log.html15:58
ralonsohbye15:58
bcafarelo/15:59
gibi#startmeeting nova16:00
opendevmeetMeeting started Tue Jun  1 16:00:02 2021 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
opendevmeetThe meeting name has been set to 'nova'16:00
gibio/16:00
bauzas\o16:00
elodilleso/16:00
gibihotseating with neutron :)16:00
bauzasyup, I guess the chair is hot16:00
gibior hotswaping16:00
gmanno/16:00
slaweq:)16:00
slaweqhi16:00
bauzasmaybe we should open the windows ?16:00
gibislaweq: o/16:00
gibibauzas: :D16:00
slaweqbauzas: :D16:01
stephenfino/16:01
* bauzas misses the physical meetings :cry:16:01
* gibi joins in16:01
elodilles:)16:01
sean-k-mooneyo/16:01
bauzasyou could see me sweating16:01
gibithis will be a bittersweet meeting16:01
gibilets get rolling16:02
gibi#topic Bugs (stuck/critical)16:02
gibino critical bugs16:02
gibi#link 9 new untriaged bugs (-0 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New16:02
gibiI like this stable number under 1016:02
gibi:D16:02
gibiany specific bug we need to discuss?16:02
gibigood16:04
gibi#topic Gate status16:04
gibiPlacement periodic job status #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly16:04
*** lajoskatona has left #openstack-meeting-316:04
gibisuper green16:04
gibialso nova master gate seems to be OK16:04
gibiwe merged patches today16:04
bauzas\o/16:04
bauzasthanks lyarwood I guess16:04
gibithanks everybody who keep this up :)16:05
gibiany gate issue we need to talk about?16:05
sean-k-mooneyim still investingating the one that might be related to os-vif16:06
sean-k-mooneybut not form me16:06
gibisean-k-mooney: thanks16:06
gibiif nothing else for the gate then16:07
gibi#topic Release Planning16:07
gibiWe had Milestone 1 last Thursday16:07
gibiM2 is 5 weeks from now16:07
gibiat M2 we will hit spec freeze16:07
gibihurry up with specs :)16:08
gibianything else about the release?16:08
gibi#topic Stable Branches16:09
gibicopying elodilles' notes16:09
gibinewer stable branch gates need investigation into why they fail16:09
gibiwallaby..ussuri seems to be failing, mainly due to nova-grenade-multinode (?)16:09
gibitrain..queens seems to be OK16:09
gibipike gate fix is on the way, should be OK whenever it lands ( https://review.opendev.org/c/openstack/devstack/+/792268 )16:09
gibiEOM16:09
gibielodilles: on the nova-grenade-multinode failure, is the ceph issue you pushed a DNM patch for?16:10
*** obondarev has quit IRC16:10
elodillesyes, that's it i think16:10
gibiin short we see a too new ceph version (pacific) installed on stable16:11
gibianything else about stable?16:12
elodillesyes, melwitt's comment pointed that out ( https://review.opendev.org/c/openstack/nova/+/785059/2#message-c31738db1240ddaa629a3aaa4e901c5a62206e85 )16:12
elodillesnothing else from me :X16:12
gibi#topic Sub/related team Highlights16:13
gibiLibvirt (bauzas)16:13
bauzaswell, nothing worth mentioning16:13
gibithanks16:13
gibi#topic Open discussion16:13
gibiI have couple of topics16:13
gibi(gibi) Follow up on the IRC move16:14
gibiso welcome on OFTC16:14
gmann+116:14
gibiso far the move seems to be going well16:14
gibiI grepped our docs and the nova related wiki pages and fixed them up16:14
artomThen again, if someone's trying to reach us over on Freenode, we'll never know, will we?16:14
gmannyeah thanks for that16:14
gibiartom: I will stay on freenode16:14
artomUnless someone's stayed behind to redirect folks?16:14
* sean-k-mooney was not really aware we mentioned irc in the docs before this16:14
artomAha!16:15
bauzaswe tho  still have less people in the OFTC chan vs. the freenode one16:15
gmannyeah I will also stay on Freenode for redirect16:15
bauzasgibi: me too, I'll keep the ZNC network open for a while16:15
gibiso if any discussion is starting on freenode I will redirect people16:15
gmannWe are going to discuss in TC Thursday meeting on topic change on freenode or so16:15
artomI believe you're not allowed to mention OFTC by name?16:15
artomThere are apparently bots that hijack channels if you do that?16:15
gibiartom: I will use private messages if needed16:15
bauzaswe currently have 102 attendees on freenode -nova compared to OFTC one16:15
gmannartom: we can do as we have OFTC ready and working now16:15
bauzasfreenode : 102, OFTC : 8316:16
sean-k-mooneyartom: that was libera and it was based on the topic i think16:16
bauzasso I guess not everyone moved already16:16
gmannwe can give this email ref to know details http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html16:16
gibigmann: good point16:16
bauzasartom: indeed, you can't spell the word16:16
sean-k-mooneybauzas: well some of those are likely bots16:16
bauzasartom: or the channels could be hijacked by some ops16:16
bauzassean-k-mooney: haven't really digged into details16:17
bauzasbut the numbers were appealing16:17
bauzasbut that's only a 48h change16:17
sean-k-mooneyyep most people are now here16:17
bauzaswe'll see next weeks16:17
gibiOK, any other feedback or question around the IRC move?16:17
gibiif not then a related topic...16:18
gibi(gibi) Do we want to move our meeting from #openstack-meeting-3 to #openstack-nova ?16:18
gibithis was a question originally from the TC16:18
gmannI think it make sense and also working for many projects like QA, TC afaik16:18
artom-1 from me. I've popped in to other channel, not aware that they were having a meeting, and "polluted" their meeting16:18
gmannartom: which one?16:19
sean-k-mooneyartom: i raised that in the tc call16:19
artomgmann, #openstack-qa, actually, I think :)16:19
gmannI see very less interruption in QA or TC since past 1 year16:19
sean-k-mooneybut that is really just a habbit thing16:19
artomFolks were very polite and everything16:19
gmannartom: it might happen very less.16:19
sean-k-mooneywaiting for the topic to load and see if the channel is active16:19
gibiI can handle interruption politely I guess16:19
artomBut I felt guilty for interrupting16:19
gmannand if anyone come in between we can tell them its meeting time16:19
artomBut... why?16:20
artomWhat's wrong with a dedicated meeting channel?16:20
sean-k-mooneyartom: i have had the same feeling yes16:20
sean-k-mooneyartom: nothing really, just more infrastructure around scheduling the meetings16:20
bauzasI am absolutely +0 to this16:20
sean-k-mooneye.g. "booking the room"16:20
gmannit is hard to know where the meeting is going on with all openstack-meeting-* channels.16:20
sean-k-mooneyand setting up the logging etc16:20
bauzasbut sometimes it's nice to have sideways discussions happening on -nova while we continue ranting here16:20
artomsean-k-mooney, it's already all set up :)16:21
gmannthere will be no difference in logging etc16:21
sean-k-mooneyartom: oh i know16:21
gibibauzas: +1 on the side discussions16:21
sean-k-mooneyi do like having the side  conversation option16:21
bauzasso I guess my only concern would be the ability to have dual conversations happening at the same time without polluting the meeting16:21
artomgmann, is it though? Maybe for someone completely new to the community16:21
sean-k-mooneythat said i try not to do that when i can16:21
gmannartom: its for me too when I need to attend many meeting :)16:22
bauzasbut I guess #openstack-dev could do the job16:22
artomgmann, ah, I see your point.16:22
bauzasmy other concern could be some random folks pinging us straight during the meeting16:22
artomWell, #openstack-<project-name>-meetings then?16:22
sean-k-mooneyi guess it depends, i usually wait for gibi to ping us :)16:22
bauzasbut that's not a big deal16:22
bauzas(as I usually diverge)16:22
artomI still want to keep the dual channel meeting/normal IRC option16:22
gmannartom: that will be too many channels16:23
artom*shrug* w/e :)16:23
bauzas#openstack-dev can fit the purpose of side discussions16:23
sean-k-mooneydansmith: you are pretty quiet on the topic16:23
* artom joins bauzas in the +0 camp16:23
dansmithsean-k-mooney: still have a conflict in this slot, will have to read later16:23
gmanndansmith: is in another call i think16:23
bauzasI just express open thoughts and I'm okay with workarounds if needed16:24
bauzashence the +016:24
bauzasnothing critical to me to hold16:24
sean-k-mooneydansmith: ah just asking if you preferred #openstack-nova vs #openstack-meeting-316:24
gibiOK, lets table this for next week then. So far I don't see too many people wanting to move16:24
sean-k-mooneydansmith: no worries16:24
gmannI will say let's try and if it does not work we can come back here16:24
gmann+1 on keeping it open for discussion16:24
gibiwe will come back to this next week16:25
gibinext16:25
gibi(gibi) Monthly extra meeting slot for the Asia + EU. Doodle #link https://doodle.com/poll/svrnmrtn6nnknzqp . It seems Wednesday 8:00 or Thursday 8:00 is winning.16:25
gibi8:00 UTC  I mean16:25
dansmithsean-k-mooney: very much -nova16:25
sean-k-mooneygibi: does that work for you to chair the meeting at that time16:27
bauzasdansmith: I just expressed some concern about the ability to have side discussions, they could happen "elsewhere" tho16:27
gibiIf no objection then I will schedule that to Thursday 8:00 UTC and I will do that on #openstack-nova (so we can try the feeling)16:27
gibisean-k-mooney: yes16:27
gibisean-k-mooney: I can chair16:27
sean-k-mooneycool16:27
bauzasthis works for me too16:27
bauzas10am isn't exactly early in the morning16:27
* sean-k-mooney wakes up at 10:30 most mornings16:28
gibithere was a lot of participation in the doodle16:28
sean-k-mooneyyes16:28
gibiso I hope for similar crowd on the meeting too16:28
bauzasindeed16:28
sean-k-mooneyfrom a number of names we dont see often in this meeting16:28
bauzasyup, even from cyborg team16:28
gibiI will schedule the next meeting for this Thursday16:29
gibiso we can have a rule of every first Thursday of a month16:29
bauzas(I mentioned this because they expressed their preference for an Asian-friendly timeslot)16:29
bauzasgibi: this works for me, and this would mean the first meeting being held in two days16:30
gibiyepp16:30
gibino more topic on the agenda for today. Is there anything else you would like to discuss today16:30
gibi?16:30
bauzasdo we need some kind of formal agenda for thursday meeting ?16:31
bauzasor would we stick with a free open hour16:31
bauzas?16:32
gibiI will ask the people about it on Thursday16:32
gibiI can do both16:32
gibior just summarizing anything from Tuesday16:32
artomOh, can I ask a specless blueprint vs spec question?16:32
gibiartom: sure, go ahead16:32
artomSo, we talked about https://review.opendev.org/c/openstack/nova-specs/+/791287 a few meetings ago16:33
artom WIP: Rabbit exchange name: normalize case16:33
artomsean-k-mooney came up with a better idea that solves the same problem, except without the messy upgrade impact:16:33
artomJust refuse to start nova-compute if we detect the hostname has changed16:33
artomSo I want to abandon https://review.opendev.org/c/openstack/nova-specs/+/791287 and replace it with sean-k-mooney's approach16:34
artomWhich I believe doesn't need a spec, maybe not even a blueprint16:34
sean-k-mooneyyou mean treat it as a bugfix if its not a blueprint16:34
gibiartom: the hostname reported by libvirt change compared to the hostname stored in DB?16:35
artomgibi, yes16:35
artomsean-k-mooney, basically16:35
bauzastbc, we don't mention the service name16:36
gibiartom: could there be deployments out there that are working today but will stop working after your cahnge?16:36
bauzasbut the hypervisor hostname which is reported by libvirt16:36
bauzasbecause you have a tuple16:36
artomgibi, we could wrap it in a config option, sorta like stephenfin did for NUMA live migration16:36
bauzas(host, hypervisor_hostame)16:36
sean-k-mooneygibi: tl;dr in the virt driver we would look up the compute service record using CONF.host and in the libvirt driver check that 1) the compute nodes associated with the compute service record is length 1 and 2) that its hypervisor_hostname is the same as the one we currently have16:36
gibisean-k-mooney: thanks16:37
bauzaseg. with ironic, you have a single nova-compute service (then, a single hostname) but multiple nodes, each of them being an ironic node UUID16:37
bauzassean-k-mooney: what puzzles me is that I thought service RPC names were absolutely unrelated to hypervisor names16:37
artombauzas, it would be for drivers that have a 1:1 host:node relationship16:37
artombauzas, but it's a good point, we'd have to make it driver-agnostic as much as possible16:37
bauzasand CONF.host is the RPC name, hence the service name16:37
bauzasartom: that's my concern, I guess16:38
bauzassome ops wanna define some RPC name that's not exactly what the driver reports and we said for a while "you'll be fine"16:38
artombauzas, valid concern, though I think it'd be pointless to talk about it in a spec, without the code to look at16:38
bauzasartom: tbh, I'm even not sure that other drivers but libvirt use the libvirt name as the service name16:39
bauzasso we need to be... cautious, I'd say16:39
gibiI thought we use CONF.host as service name16:39
sean-k-mooneybauzas: we cant do that without breaking people16:39
bauzasgibi: right16:39
sean-k-mooneygibi: we do16:39
gibiOK16:40
gibiso the service name is hypervisor agnostic16:40
bauzasgibi: but we use what the virt driver reports for the hypervisor_hostname field of the ComputeNode record16:40
gibithe node name is hypervisor specific16:40
bauzasgibi: this is correct again16:40
sean-k-mooneyand in theory we use conf.host for the name we put in instance.host16:40
gibiand RPC name is also the service name so that is also hypervisor agnostic16:41
bauzasyup16:41
gibiso if we need to fix the RPC name then we could do it hypervisor agnostically16:41
gibi(I guess)16:41
bauzasbut here, artom proposes to rely on the discrepancy to make it hardstoppable16:41
sean-k-mooneyi think there are two issues: 1 changing conf.host and 2 the hypervisor_hostname changing16:41
gibiwhat if we detect the discrepancy via comparing db host with nova-compute conf.host?16:41
sean-k-mooneyideally we would like both to not change and detect/block both16:42
bauzaswell, if you change the service name, then it will create a new service record16:42
gibihm, we cannot look up our old DB record if conf.host is changed :/16:42
sean-k-mooneygibi: we would have to do that backwards16:42
sean-k-mooneylookup the compute node rp by hypervisor_hostname16:42
bauzasthe old service will be seen as dead16:42
sean-k-mooneythen check the compute service record16:42
gibisean-k-mooney: so we detect if conf.host changes but hypervisor_hostname remains the same16:43
*** mlavalle has quit IRC16:43
sean-k-mooneygibi: yep we can check if either of the values changes16:43
sean-k-mooneybut not if both of the values change16:43
bauzaswhat problem are we trying to solve ?16:43
bauzasthe fact that messages go lost ?16:44
gibisean-k-mooney: OK thanks, now I see it16:44
sean-k-mooneyunless we just write this in a file on disk that we read16:44
artombauzas, people renaming their compute hosts and exploding stuff16:44
artomEither on purpose, or accidentally16:44
sean-k-mooneybauzas: the fact that it is possible to start the compute service when either conf.host has changed or hypervisor_hostname has changed and get into an inconsistent state16:45
sean-k-mooneybauzas: one of those being that we can have instances on the same host with different values of instance.host16:45
gibiI guess if both changed then we get a new working compute and the old will be orphaned16:45
sean-k-mooneygibi: yep16:45
gibiOK16:45
stephenfinassuming we can solve this, I don't see this as any different to gibi's change to disallow N and N-M (M > 1) in the same deployment16:45
bauzassean-k-mooney: can we just consider to NOT accept to rename the service hostname if instances are existing on it ?16:46
stephenfinin terms of being a hard break but only for people that already in a broken state16:46
sean-k-mooneybauzas: that might also be an option yes16:46
stephenfinditto for the NUMA live migration thing, as artom alluded to above16:46
sean-k-mooneybauzas: i think we have options and it might be good to POC some of them16:46
bauzasagain, I'm a bit conservative here16:47
gibiI think I'm convinced that this can be done for the libvirt driver. As for how to do it for the other drivers, that remains to be seen16:47
bauzasI get your point but I think we need to be extra cautious, especially with some fancy scenarios involving ironic16:47
bauzasgibi: that's a service issue, we need to stay driver agnostic16:48
sean-k-mooneybauzas: ack although what i was originally suggesting was driver specific16:48
artomsean-k-mooney, right, true16:48
bauzassean-k-mooney: hence my point about hypervisor_hostname16:48
sean-k-mooneyyes, it's not that i did not consider ironic16:48
artomIt would be entirely within the libvirt driver in init_host() or something, though I suspect we'd have to add new method arguments16:48
bauzasI thought you were mentioning it as this is the only field being virt specific16:48
sean-k-mooneyi intentionally was declaring it out of scope16:48
sean-k-mooneyand just fixing the issue for libvirt16:48
sean-k-mooneybauzas: i dont think this can happen with ironic for what its worth16:49
bauzasartom: init_host() is very late in the boot proicess16:49
artombauzas, well pre_init_host() then :P16:49
bauzasyou already have a service record16:49
bauzasartom: pre init_host, you are driver agnostic16:49
gibibauzas: we need a libvirt connection though16:49
bauzasgibi: I know16:49
gibibauzas: so I'm not sure we can do it earlier16:49
bauzasgibi: my point16:49
sean-k-mooneybauzas: we need to do it after we retrieve the compute service record but before we create a new one16:49
bauzasiirc, we create the service record *before* we initialize the libvirt connection16:50
bauzasthis wouldn't be idempotent16:50
sean-k-mooneyperhaps we should bring this to #openstack-nova16:50
bauzasI honestly feel this is tricky enough for drafting it somewhere... unfortunately like in a spec16:51
artomFWIW we call init_host() before we update the service ref...16:51
bauzasespecially if we need to be virt-agnostic16:51
artomAnd before we create the RPC server16:51
bauzasartom: ack, then I was wrong16:51
artom'ts what I'm saying, we need the code :)16:52
artomIn a spec it's all high up and abstract16:52
bauzasI thought we did it in pre_hook or something16:52
sean-k-mooneyno we can talk about this in a spec if we need too16:52
bauzaspoc, then16:52
sean-k-mooneyspecs dont have to be high level16:52
bauzaspoc, poc, poc16:52
sean-k-mooneyok16:52
gibiOK, lets put up some patches16:53
gibidiscuss it there16:53
artom🐔 poc poc poc it is then 🐔🐔16:53
gibiand see if this can fly16:53
artomChickens can't fly16:53
gibiI'm OK to keep this without a spec so far16:53
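
To make the "refuse to start if the hostname changed" idea concrete, here is a minimal sketch of the check discussed above; the function and argument names are made up, and where the stored hypervisor_hostnames would be read from (and whether that can happen before the service record is touched) is exactly what the PoC has to settle:

    def assert_hostname_unchanged(conf_host, reported_node, stored_nodes):
        """Refuse to start nova-compute when the node name has changed.

        conf_host:     CONF.host, the service/RPC name of this compute host.
        reported_node: hypervisor_hostname the virt driver reports right now.
        stored_nodes:  hypervisor_hostnames already recorded in the DB for the
                       compute service named conf_host.
        """
        if not stored_nodes:
            # Fresh host (or conf.host changed *and* no old record was found):
            # nothing to compare against, fall through to normal startup.
            return
        if len(stored_nodes) > 1:
            # 1:N topologies (ironic) are explicitly out of scope here.
            return
        if stored_nodes[0] != reported_node:
            raise RuntimeError(
                "Hypervisor hostname changed from %r to %r for service %r; "
                "refusing to start to avoid orphaned instances and compute nodes."
                % (stored_nodes[0], reported_node, conf_host))
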
sean-k-mooneyif we have a few minutes i have one other similar topic16:53
sean-k-mooneyhttps://bugzilla.redhat.com/show_bug.cgi?id=170039016:53
opendevmeetbugzilla.redhat.com bug 1700390 in openstack-nova "KVM-RT guest with 10 vCPUs hangs on reboot" [High,Closed: notabug] - Assigned to nova-maint16:53
gibisean-k-mooney: sure16:53
sean-k-mooneywe have ^ downstream16:54
sean-k-mooneybasically when using realtime you should always use hw:emulator_threads_policy=something16:54
sean-k-mooneybut we dont disallow it because while its a bad idea not to, it can work16:54
sean-k-mooneyim debating between filing a wishlist bug vs a specless blueprint for a small change in our default logic16:55
gibifeels like a bug to me we can fix in nova. even if it works some times we can disallow it16:55
sean-k-mooneyif i remember correctly we still require at least 1 core to not be realtime16:55
sean-k-mooneyso i was thinking we could limit the emulator thread to that core16:55
gibisounds like a plan16:56
gibiif we don't have such a limit then I think we should add that as well16:56
gibiit does not make much sense not to have an o&m cpu16:56
sean-k-mooneywe used too16:56
sean-k-mooneyi dont think we removed it16:56
sean-k-mooneywould people be ok with this as a bugfix16:56
sean-k-mooneyi was concerned it might be slightly featureish16:57
gibiyes. It removes the possibility of a known bad setup16:57
sean-k-mooneyok ill file and transcribe the relevant bit from the downstream bug so16:58
sean-k-mooneythe workaround is to just use hw:emulator_threads_policy=share|isolate16:58
sean-k-mooneybut it would be nice to not have the buggy config by default16:58
gibiI agree16:58
gibiand others seems to be silent :)16:59
gibiso it is sold16:59
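
For context, the combination being discussed looks roughly like the flavor below; the flavor name and the realtime mask are only examples, and the proposed change is about what nova does when hw:emulator_threads_policy is *not* set (today the emulator threads can end up on the realtime vCPUs, which is what hangs the guest in the downstream bug):

    # 4 dedicated vCPUs, vCPU 0 left non-realtime for housekeeping,
    # emulator threads kept off the realtime vCPUs
    openstack flavor set rt-flavor \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_realtime=yes \
        --property 'hw:cpu_realtime_mask=^0' \
        --property hw:emulator_threads_policy=share
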
gibiany last words before we hit the top of the hour?16:59
gibithen thanks for joining today17:00
gibio/17:00
gibi#endmeeting17:00
opendevmeetMeeting ended Tue Jun  1 17:00:12 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)17:00
opendevmeetMinutes:        http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-06-01-16.00.html17:00
opendevmeetMinutes (text): http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-06-01-16.00.txt17:00
opendevmeetLog:            http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-06-01-16.00.log.html17:00
*** sean-k-mooney has left #openstack-meeting-317:00
elodilleso/17:00
*** ralonsoh has quit IRC17:20
*** ralonsoh has joined #openstack-meeting-317:20
*** ralonsoh has quit IRC17:21
*** belmoreira has quit IRC18:35
*** tosky has quit IRC23:11
