16:00:15 <bauzas> #startmeeting nova
16:00:15 <opendevmeet> Meeting started Tue Oct  4 16:00:15 2022 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:15 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:15 <opendevmeet> The meeting name has been set to 'nova'
16:00:21 <bauzas> hey stackers
16:00:29 <gibi> o/
16:00:33 <bauzas> #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
16:01:04 <elodilles> o/
16:02:05 <bauzas> okay, let's start, hopefully people will join later
16:02:28 <bauzas> #topic Bugs (stuck/critical)
16:02:34 <bauzas> #info No Critical bug
16:02:39 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 4 new untriaged bugs (-1 since the last meeting)
16:02:43 <Uggla> o/
16:02:51 <auniyal__> O/
16:02:59 <bauzas> the etherpad I created for this week's triage https://etherpad.opendev.org/p/nova-bug-triage-20220927
16:03:38 <bauzas> and I have one security bug I'd like to discuss with the team, now that we've made it public
16:03:52 <bauzas> #link https://bugs.launchpad.net/nova/+bug/1989008 Security bug
16:04:16 <bauzas> I was considering closing it as Wontfix
16:04:32 <JayF> o/
16:05:19 <bauzas> tl;dr: depending on your sudoers rules, you can trick the nova user
16:05:37 <bauzas> we could change our privsep rules to be more restrictive
16:05:45 <sean-k-mooney[m]> i filed a downstream backlog item to address it properly
16:05:47 <bauzas> but we prefer deferring to the host config
16:05:58 <bauzas> about the permissions rights
16:06:01 <sean-k-mooney[m]> so long-term i think we should rewrite how we use privsep
16:06:15 <bauzas> I don't disagree
16:06:19 <sean-k-mooney[m]> but i dont think we will have time in A
16:06:24 <bauzas> but this is a long-term effort
16:06:32 <bauzas> yeah, and a very tedious effort
16:06:53 <sean-k-mooney[m]> i personally would not mind chipping away at this over time
16:07:04 <bauzas> for that reason, I think this is valid to close this bug as Wontfix
16:07:05 <sean-k-mooney[m]> but not sure i can do it in A
16:07:17 <bauzas> as this is actually more a request for enhancement than really a bug
16:07:38 <sean-k-mooney[m]> i have no objection to that as it's really a specless blueprint or spec in my view
16:07:57 <bauzas> of course, deployers and openstack distros need to properly care about this bug
16:08:08 <bauzas> and make sure the rights they give are correctly set
16:08:19 <sean-k-mooney[m]> it's not quite an architectural change but it is a design pattern change
16:08:31 <bauzas> but from an upstream perspective, given there's no simple further effort we can make, we need to close it
16:08:42 <bauzas> sean-k-mooney: yeah a refactoring change
16:08:45 <bauzas> but,
16:08:51 <sean-k-mooney[m]> so currently it cannot lead to privilege escalation if you don't already have the ability to spawn the privsep helper
16:08:57 <sean-k-mooney[m]> or have access to the unix socket of an existing one
16:08:58 <bauzas> sean-k-mooney: we correctly need to make it
16:09:09 <bauzas> sean-k-mooney: exactly my point
16:09:29 <bauzas> unless you fucked up with your sudo rights, you shouldn't hit this bug
16:09:39 <sean-k-mooney[m]> yep
16:10:02 <sean-k-mooney[m]> its kind of like exposing the docker socket to a container
16:10:10 <bauzas> so, agreed as Wontfix and leave a note saying we're not against modifying our privsep use, but this is deferred for now ?
16:10:29 <sean-k-mooney[m]> ok with me
16:10:40 <bauzas> no objections so far ?
16:11:03 <gibi> please explain in the bug (if not yet explained) that it cannot lead to escalation if you don't have the rights to spawn the privsep_helper
16:11:19 <bauzas> gibi: I explained it when I replied but I'll redo it
16:11:19 <gibi> or talk to the socket
16:11:33 <gibi> bauzas: if it is there already then it is OK
16:11:55 <bauzas> gibi: quote from myself "I agree with all the above. Unless the user is accepted by sudoers to  have root priviledges, it can't use privsep to get what they want from  the kernel, so this isn't an exploit."
16:12:07 <gibi> cool then
16:12:08 <gibi> thanks
16:12:10 <bauzas> (comment #9)
16:12:16 <gibi> sorry, I did not read through the bug
16:12:23 <bauzas> but I'll make it clear on my last reply
16:13:16 <bauzas> #agreed https://bugs.launchpad.net/nova/+bug/1989008 to be marked as Wontfix as this isn't a flaw if sudoers is correctly set and we don't know when we can modify our privsep usage in nova yet
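[editor's note: for readers outside the meeting, a minimal sketch of how an oslo.privsep context scopes what the privileged helper may do. This is not nova's actual privsep code; the context name, config section, capability choice and entrypoint below are placeholders for illustration. The helper is spawned through sudo (privsep-helper), which is why the sudoers configuration discussed above is the real gate.]

```python
# Illustrative sketch only -- not nova's actual privsep definitions.
# oslo.privsep lets a service declare a context with a bounded set of Linux
# capabilities; only functions decorated as entrypoints run in the privileged
# helper, which is spawned through sudo (privsep-helper). A user who cannot
# spawn that helper, or reach its unix socket, cannot escalate -- the point
# made in the bug discussion above.
from oslo_privsep import capabilities, priv_context

# Hypothetical context: the prefix, config section and capability list are
# placeholders, not values used by nova.
example_pctxt = priv_context.PrivContext(
    'example',
    cfg_section='example_privsep',
    pypath=__name__ + '.example_pctxt',
    capabilities=[capabilities.CAP_DAC_READ_SEARCH],
)


@example_pctxt.entrypoint
def read_root_only_file(path):
    # Runs inside the privileged helper process, not in the calling service,
    # and only with the capabilities declared on the context above.
    with open(path, 'rb') as f:
        return f.read()
```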
16:13:40 <bauzas> voila, that's it for me unless other pings
16:14:05 <bauzas> #link https://storyboard.openstack.org/#!/project/openstack/placement 26 open stories (+0 since the last meeting) in Storyboard for Placement
16:14:12 <bauzas> #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster
16:14:24 <bauzas> elodilles: still fighting with the release, man ?
16:14:35 <elodilles> hopefully till tomorrow ;)
16:14:44 <elodilles> so yes, i can take the baton
16:14:46 <bauzas> then you have no excuse.
16:14:52 <bauzas> #info bug baton is being passed to elodilles
16:14:57 <bauzas> elodilles: thanks
16:14:57 <elodilles> ~o~
16:15:08 <elodilles> np
16:15:09 <bauzas> elodilles: you won't feel overloaded
16:15:23 <elodilles> famous last words? :)
16:15:24 <bauzas> our untriaged backlog is very low today
16:15:48 <elodilles> cool :)
16:15:56 <bauzas> unless something big happens, like a tornado or an earthquake, you'll be fine (tm)
16:16:07 <elodilles> :]
16:16:17 <bauzas> moving on
16:16:20 <bauzas> #topic Gate status
16:16:30 <bauzas> #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs
16:16:38 <bauzas> #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly Placement periodic job status
16:16:46 <bauzas> link https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-9-stream&project=openstack%2Fnova&pipeline=periodic-weekly Centos 9 Stream periodic job status
16:16:49 <bauzas> damn
16:16:52 <bauzas> #link https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-9-stream&project=openstack%2Fnova&pipeline=periodic-weekly Centos 9 Stream periodic job status
16:17:00 <bauzas> #link https://zuul.opendev.org/t/openstack/builds?job_name=nova-emulation&pipeline=periodic-weekly&skip=0 Emulation periodic job runs
16:17:08 <bauzas> all green above ^
16:17:38 <bauzas> well, actually https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly is still fetching info
16:17:52 <bauzas> anyone having the same trouble as me when opening the url?
16:17:59 <sean-k-mooney[m]> you could just have one link to the periodic pipeline
16:18:08 <bauzas> right
16:18:29 <sean-k-mooney[m]> https://zuul.opendev.org/t/openstack/builds?pipeline=periodic-weekly&skip=0
16:18:35 <sean-k-mooney[m]> they work ok for me
16:18:49 <bauzas> sean-k-mooney: so you basically let me provide those 3 links for the whole Yoga and Zed releases?
16:18:57 <bauzas> meh :p
16:19:26 <sean-k-mooney[m]> https://zuul.opendev.org/t/openstack/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly&skip=0
16:19:30 <sean-k-mooney[m]> hehe
16:19:42 <sean-k-mooney[m]> that one is nova and placement weekly jobs in one link
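[editor's note: the combined link above can also be queried programmatically; a minimal sketch assuming Zuul's REST API at zuul.opendev.org (the builds page is backed by /api/tenant/<tenant>/builds) and the same project/pipeline parameters as the URLs pasted in the meeting.]

```python
# Sketch of fetching the weekly periodic results for nova and placement in one
# call, mirroring the query string of the combined link above. The endpoint,
# parameters and response keys follow Zuul's builds API as exposed on
# zuul.opendev.org; adjust if that API differs.
import requests

ZUUL_BUILDS_API = 'https://zuul.opendev.org/api/tenant/openstack/builds'


def weekly_periodic_results(projects=('openstack/nova', 'openstack/placement')):
    resp = requests.get(
        ZUUL_BUILDS_API,
        params={
            'project': list(projects),     # repeated ?project=... parameters
            'pipeline': 'periodic-weekly',
            'limit': 50,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [(b['project'], b['job_name'], b['result']) for b in resp.json()]


if __name__ == '__main__':
    for project, job, result in weekly_periodic_results():
        print(f'{result:10} {project:22} {job}')
```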
16:20:03 <bauzas> yeah, I never reconsidered making it one call once we added the two other checks :)
16:20:04 <sean-k-mooney[m]> anyway they are looking good so we can move on
16:20:08 <bauzas> yeah, moving on
16:20:11 <bauzas> but this was fun
16:20:21 <bauzas> stupid me.
16:20:38 <bauzas> #info Please look at the gate failures and file a bug report with the gate-failure tag.
16:20:44 <bauzas> #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
16:21:17 <bauzas> fyi, I think I'll start providing the weekly metrics for the recheck command in the nova project if our stats don't improve :)
16:22:07 <gmann> there is one TC monitor https://etherpad.opendev.org/p/recheck-weekly-summary
16:22:12 <bauzas> https://etherpad.opendev.org/p/recheck-weekly-summary tells me we're at 50%
16:22:21 <bauzas> damn, burned by gmann
16:22:46 <gmann> :) thanks to slaweq
16:22:48 <bauzas> gmann: yeah I know and I feel 50% is too big a number
16:23:16 <gmann> but it's 1 out of 2
16:23:18 <sean-k-mooney[m]> so at this point im not sure reminding people each week is helping
16:23:36 <gmann> 1 bare recheck is still not so bad
16:23:38 <bauzas> if my daughter was graded 10/20 (that's a French mark out of 20 points), I'd ask her to change a few things
16:24:32 <bauzas> actually, nevermind me
16:24:40 <bauzas> I just saw the raw stats
16:24:41 <sean-k-mooney[m]> can i suggest we drop this from the meeting going forward or talk about it at the ptg
16:24:46 <bauzas> yeah
16:24:56 <bauzas> but fwiw, our 90-day stat is at 25%
16:25:10 <bauzas> surprisingly, we only had 2 rechecks this week
16:25:15 <gibi> I think it will only improve if we can look at the actual bare rechecks
16:25:21 <bauzas> and 1 being a bare one
16:25:23 <gmann> talking about it in the meeting just as a heads up is a good reminder to all of us
16:25:34 <bauzas> I'll add a point to the PTG agenda
16:25:39 <bauzas> and we'll discuss it
16:25:45 <bauzas> moving on
16:25:47 <gibi> and explain in which situations it is OK and NOT OK to use it
16:25:53 <sean-k-mooney[m]> but i don't think it's the people that are in the meeting that need to be reminded
16:25:54 <gibi> but yes, move on and return to it at the PTG
16:26:13 <bauzas> #topic Release Planning
16:26:21 <bauzas> #link https://releases.openstack.org/zed/schedule.html
16:26:44 <bauzas> #link https://releases.openstack.org/antelope/schedule.html
16:26:53 <bauzas> #info Zed GA planned tomorrow
16:26:59 <bauzas> so RC2 was our final RC
16:27:06 <bauzas> kudos to the team for the hard work
16:27:36 <gibi> \o/
16:27:42 <elodilles> ~o~
16:28:01 <bauzas> about this, I'll talk at the next OpenInfra Live session this Thursday
16:28:05 <bauzas> 1400UTC
16:28:34 <bauzas> like I did last cycle, I'll present our cycle highlights and I'll talk about what we plan for Antelope
16:28:45 <gibi> cool
16:28:50 <bauzas> you should see some metrics, they are interesting to read
16:29:23 <sean-k-mooney[m]> ack
16:29:33 <bauzas> which makes me move to the next topic
16:30:19 <bauzas> #topic PTG planning
16:30:32 <bauzas> #link https://etherpad.opendev.org/p/nova-antelope-ptg Antelope PTG etherpad
16:30:38 <bauzas> #link https://ptg.opendev.org/ptg.html PTG schedule
16:30:45 <bauzas> #info As a reminder, please provide your PTG topics before Oct-6 (Thursday) !
16:31:16 <bauzas> which actually suits me, as I need to present this Thursday our plans for Antelope :)
16:31:27 <bauzas> so, typey typey please
16:31:41 <sean-k-mooney[m]> i have one topic that i'm going to add but it's more of an fyi: i would like to reopen the idea of moving placement to launchpad (without copying any existing stories or bugs)
16:31:57 <bauzas> sean-k-mooney: already in the etherpad :)
16:32:05 <sean-k-mooney[m]> oh ok :)
16:32:16 <bauzas> yeah, that's boring to me
16:32:29 <bauzas> so, about PTG topics
16:32:41 <bauzas> we're collecting inputs from other teams too
16:32:58 <bauzas> there should be a neutron x-p session
16:33:08 <sean-k-mooney[m]> there are topics in their etherpad
16:33:36 <bauzas> I know
16:34:02 <bauzas> I need to sync up with ralonsoh to find a proper timeslot that suits both teams
16:34:07 <bauzas> also,
16:34:33 <bauzas> as you've seen in the lists, we should have a nova-ironic x-p session too
16:34:54 <bauzas> we have one hot topic that needs to be properly addressed at the PTG
16:35:13 <bauzas> even if there is already very valuable feedback from sean-k-mooney, gibi and dansmith
16:35:32 <ralonsoh> bauzas, we can sync tomorrow
16:35:34 <bauzas> so, JayF here wanted to see whether Monday 1400UTC suits you
16:35:45 <bauzas> this is off the Nova hours
16:36:00 <bauzas> anyone having concerns with this timeslot?
16:36:32 <sean-k-mooney[m]> no issues with me
16:36:34 <JayF> I appreciate everyone trying to get together to solve this for our users :)
16:36:40 <bauzas> we don't need the whole gang to attend, but I'd like to see attendance from some people at least
16:36:43 <sean-k-mooney[m]> are we going to do the cross-project topics all on Monday
16:36:53 <bauzas> sean-k-mooney: I don't know
16:37:03 <sean-k-mooney[m]> ok
16:37:09 <dansmith> not sure I can make that due to TC stuff, I'll have to check
16:37:10 <bauzas> the current agenda is pretty small as we speak
16:37:20 <bauzas> dansmith: I have a hard stop at 1500UTC
16:37:41 <bauzas> so the session would be one hour max
16:38:05 <bauzas> probably not sufficient to cover any possible solutions, but I guess the conversation will focus more on the problem first
16:38:25 <gmann> Monday 15 UTC is also TC + Leaders sessions
16:38:32 <bauzas> gmann: that's my hard stop
16:38:35 <gmann> +1
16:38:37 <dansmith> yeah, so I should be good then
16:38:57 <bauzas> fwiw, I was hoping to chime in on the spec
16:39:14 <bauzas> but as I said, most of the points were already covered and explained
16:39:30 <dansmith> there's really not much to chime in on in its current form I think yeah
16:39:45 <bauzas> this is frankly not an easy problem to solve so I guess we need to focus on the use case first
16:40:10 <bauzas> dansmith: agreed and the limitations are given
16:40:18 <bauzas> anyway
16:40:27 <bauzas> sold for Monday 1400-1500UTC
16:40:31 <bauzas> JayF ^
16:40:31 <sean-k-mooney[m]> fundamentally the way the ironic driver is written today does not align with how we expect a driver to work
16:40:42 <sean-k-mooney[m]> it will not be simple to reconcile the two
16:41:01 <bauzas> sean-k-mooney: the first step is to admit we can't easily reconcile
16:41:46 <bauzas> JayF: I'll put this in our etherpad, are you going to reserve the timeslot and book the room ?
16:42:08 <bauzas> he's maybe afk, moving on
16:42:15 <JayF> bauzas: I can; but if you know how I'd prefer you do it. The PTG planning for this cycle was done by outgoing PTL and I don't know the steps right off
16:42:33 <bauzas> JayF: no worries, I'll book the room
16:42:37 <JayF> thank you o/
16:42:52 <bauzas> just hoping the bexar room is free at that time :)
16:43:15 <bauzas> moving on
16:43:37 <bauzas> ralonsoh: no worries, let's catch up tomorrow morning EU-time
16:44:14 <bauzas> #agreed Nova-Ironic cross-project PTG session happening on Monday 1400-1500UTC
16:44:20 <bauzas> #topic Review priorities
16:44:27 <bauzas> #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2)
16:44:54 <bauzas> nothing from me to say
16:45:15 <bauzas> (we'll discuss this flag at the PTG, btw.)
16:45:47 <bauzas> #topic Stable Branches
16:45:53 <bauzas> elodilles: this is your show time
16:46:01 <elodilles> #info from stable/zed back till stable/train branches' gates should be OK
16:46:22 <sean-k-mooney[m]> awesome
16:46:24 <elodilles> (i had only a quick look so hope it's true o:))
16:46:31 <elodilles> #info stable/stein (and older) are blocked: grenade and other devstack based jobs fail
16:46:39 <elodilles> with the same timeout issue as stable/train was previously
16:46:46 <elodilles> #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci
16:47:03 <elodilles> so actually nothing special, just the usual things
16:47:19 <elodilles> that was it
16:47:24 <gibi> can we delete stable/train ? :)
16:47:43 <gibi> and older :)
16:48:06 <elodilles> well, stein and older looks like abandoned
16:48:13 <elodilles> so that's for sure
16:48:21 <gibi> propose a PTG topic :)
16:48:29 <gibi> lets decide there
16:48:32 <bauzas> yeah
16:48:40 <elodilles> ++
16:48:42 <elodilles> ack
16:49:16 <bauzas> technically, we won't delete the branches, right ? :)
16:49:22 <bauzas> just EOL them :)
16:49:37 <elodilles> bauzas: actually it means *deletion*
16:49:57 <elodilles> only the $series-eol tag can be retrieved
16:50:05 <bauzas> cool
16:50:09 <elodilles> after we EOL'd + deleted a branch
16:50:16 <bauzas> yeah OK then
16:50:50 <bauzas> (even if my payslip asks me to do things with newton-based environments, this is not about upstream so... :D )
16:50:50 <elodilles> (so the history can be retrieved via that *-eol tag)
16:51:17 <sean-k-mooney[m]> yep we can always restore
16:51:27 <bauzas> anything else to mention ?
16:51:29 <sean-k-mooney[m]> i would be in favor of EOLing stein and older, but ya, PTG
16:51:47 <bauzas> perfect place for this kind of discussion
16:52:28 <bauzas> in particular given not a lot of people care about stein and older (in terms of branch maintenance)
16:52:38 <elodilles> :)
16:52:39 <bauzas> anyway, moving on
16:52:52 <elodilles> (meanwhile i've added the topic to the PTG etherpad)
16:52:52 <bauzas> #topic Open discussion
16:53:00 <bauzas> elodilles: great, thanks
16:53:05 <elodilles> ++
16:53:18 <bauzas> there is nothing to discuss in the open discussion section
16:53:27 <bauzas> anyone having anything they'd like to say ?
16:53:48 <bauzas> as a reminder, specless blueprints can be requested here :)
16:54:11 <bauzas> looks not,
16:54:20 <bauzas> I give you back 6 mins of your time
16:54:24 <bauzas> thanks all
16:54:30 <bauzas> #endmeeting