14:00:49 #startmeeting nova
14:00:50 Meeting started Thu Feb 20 14:00:49 2020 UTC and is due to finish in 60 minutes. The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:53 The meeting name has been set to 'nova'
14:01:24 o/
14:01:25 o/
14:01:29 o/
14:01:34 o/
14:01:35 o/
14:02:02 o/
14:02:03 o/
14:02:10 o/
14:02:59 Hello all!
14:03:04 #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:03:09 Let's roll
14:03:27 #topic Last meeting
14:03:27 #link Minutes from last meeting: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.html
14:03:27 * efried lyarwood to curate rocky EM list (from two weeks ago)
14:03:28 o/
14:04:06 efried: yup, apologies, but I've not found the time to get to this as yet
14:04:17 efried: I'll try to find time before the end of the week to send this out
14:04:22 Okay, no worries. Have they officially EM'd the thing yet?
14:04:45 not officially AFAIK
14:04:48 lyarwood: ping elod, he might be able to help
14:04:50 but it's pending
14:04:57 gibi: ack, will do
14:05:10 cool, I guess we have until... whenever they do that :)
14:05:22 I'll keep this on the agenda for next time.
14:05:28 any other old business?
14:06:06 #topic Bugs (stuck/critical)
14:06:06 No Critical bugs
14:06:06 However, our untriaged bug counts are still climbing.
14:06:06 101 'new untriaged' as of yesterday
14:06:14 #help need help with bug triage
14:06:31 #link 101 new untriaged bugs (+5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:06:31 #link 27 untagged untriaged bugs (+6 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:06:45 any comments on bugs?
14:06:51 >100, so we are doomed.
14:06:58 let's move on :)
14:06:59 ikr
14:07:20 #topic Release Planning
14:07:20 #link ussuri planning etherpad https://etherpad.openstack.org/p/nova-ussuri-planning
14:07:20 Spec freeze has passed.
14:07:21 I will try to do some triage at some point in the future :)
14:07:26 thanks gibi
14:07:38 so, there are a couple of exceptions on the agenda
14:07:40 first:
14:07:47 #link support-volume-local-cache http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012615.html
14:08:06 the cinder spec has been approved today
14:08:16 I'm supportive of approving the nova spec
14:08:31 +1
14:08:32 gibi: agree, approve this spec
14:09:20 dansmith is not supportive, but he is not -1 either
14:09:21 hope the team can approve it, so customers can try this feature and give feedback to improve it in the next release. thanks
14:09:24 Okay, sounds like gibi and alex_xu are willing to approve the spec. Is it ready now?
14:09:41 looks like it is, I see those +2s.
14:09:52 Okay, let's grant the SFE here. Any objections?
14:09:54 yeah, it looks good to me
14:10:28 #action efried unblock and +W the support-volume-local-cache spec after the meeting https://review.opendev.org/#/c/689070/
14:10:33 next:
14:10:41 thanks team
14:10:46 #link destroy-instance-with-datavolume http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012616.html
14:11:16 Looks like the spec needs a bit of work, but is it pretty close?
14:11:33 gmann has a request to use PATCH instead of PUT. I can accept that
14:11:50 do we have a second core on board here?
14:11:57 I think that is the biggest change
14:12:12 IMO, that is the best way to go, and it keeps the already complex swap API as it is.
14:12:19 the rest are mainly wording nits that can be addressed in a FUP if needed
14:13:01 could somebody proxy gmann's and sean-k-mooney's +1 as a +2? :)
14:13:37 what is the body for the PATCH action?
14:13:39 *if* we switch to PATCH, right?
14:13:40 I... don't feel _great_ about that
14:14:15 brinzhang__: --^
14:14:25 I don't know much about PATCH, please forgive me for not saying too much.
14:14:30 delete-termination flag and volume id
14:14:33 alex_xu: the body would just be {delete_on_terminate: True|false}
14:15:15 gmann: we could take the volume_id from the url
14:15:18 and volume id in url. or something; we can decide the best possible design
14:15:19 I'm not sure that needs to be there
14:15:20 yeah
14:15:26 using the swap volume API, and keeping that condition inline, is it ok?
14:15:43 I'm not sure about the PATCH; I'm ok with the existing PUT. the only concern is the policy. there should be another policy for delete_on_termination
14:16:32 it should be admin or owner, right
14:16:47 yea
14:16:52 sean-k-mooney: yeah
14:17:15 alex_xu: but the existing PUT is for updating the "attachments of server", not updating the attachment's property.
14:17:17 I would have assumed that was the same policy as for swap volume
14:17:17 the existing swap API is admin-only; I don't think the use case is admin-only
14:17:25 oh I see
14:17:32 ya, ok, makes sense
14:18:05 another reason for going with PUT over UPDATE tbh
14:18:13 overloading the swap volume API is just wrong
14:18:29 lyarwood: did you mean PATCH
14:18:33 PATCH even, sorry
14:18:35 yeah
14:18:41 yeah. from the user point of view it would be too much mixing of things in a single API
14:19:11 especially since we call PUT the swap volume API in our docs.
14:19:15 So look, I want to be permissive here, but it's tough to justify this exception if we're still discussing nontrivial design details after spec freeze.
14:19:16 so with PATCH we could make it admin_or_owner by default and restrict it to just the delete_on_termination property of the attachment
14:19:36 efried: agreed
14:19:37 efried: agree
14:19:44 this seems to be still open
14:20:12 sean-k-mooney: +1. owner should be able to update it.
14:20:27 There seems to be genuine desire among the team to make something work here, which is nice to see.
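[Editor's note: the PATCH variant discussed above would carry only the flag in the body, with the volume id in the URL. A minimal sketch of how such a request might be shaped, purely for illustration; the endpoint path, field name, and helper function are assumptions, since the design was still open at this point in the meeting:]

```python
# Sketch of the proposed attachment update discussed above.
# ASSUMPTIONS: the URL layout mirrors the existing
# os-volume_attachments resource, and the body carries only the
# delete_on_termination flag. Neither is a finalized nova API.

def build_attachment_patch(base_url, server_id, volume_id, delete_on_termination):
    """Return (method, url, json_body) for the proposed PATCH request.

    The volume id travels in the URL (as suggested in the discussion),
    so the body is reduced to the single flag being updated.
    """
    url = (f"{base_url}/servers/{server_id}"
           f"/os-volume_attachments/{volume_id}")
    body = {"delete_on_termination": delete_on_termination}
    return "PATCH", url, body

method, url, body = build_attachment_patch(
    "https://nova.example.com/v2.1", "server-uuid", "volume-uuid", True)
print(method, url, body)
```

[With a body this narrow, the policy question raised above (admin_or_owner by default, restricted to just this property) also becomes easier to scope than overloading the admin-only swap volume PUT.]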
14:21:05 efried: if it was not for the legacy of swap volume, this seems like a thing we would have just trivially approved
14:21:08 What about this: If y'all can figure out a way to get the spec updated and approved by EOB tomorrow, we'll allow the exception?
14:21:24 I would be fine with ^
14:21:29 sounds good
14:21:47 gibi, alex_xu: those +2s would be on you. Are you on board with that?
14:22:21 ok for me
14:22:22 efried: sure
14:22:35 my EOB is in 3 hours
14:22:36 brinzhang__: seem fair?
14:22:53 yes, I figure it is more likely to happen tomorrow "morning".
14:23:08 efried: agree, I will ask gibi and alex_xu to do something
14:23:40 alex_xu, gibi: if you leave your +2s on it by your EOB, I'll swap the CR-2 for W+1 in my daytime.
14:24:11 got it
14:24:17 Thanks all. Anything further on SFEs before we move on?
14:24:31 got it
14:24:47 thanks all
14:25:01 #agreed to grant sfe for support-volume-local-cache if two +2s by EOB Friday 20200221
14:25:20 delete on terminate?
14:25:25 whoops
14:25:26 #undo
14:25:26 Removing item from minutes: #agreed to grant sfe for support-volume-local-cache if two +2s by EOB Friday 20200221
14:25:46 #agreed to grant sfe for destroy-instance-with-datavolume if two +2s by EOB Friday 20200221
14:25:50 thanks gibi
14:25:53 :)
14:26:08 okay, moving on...
14:26:11 gibi, alex_xu: thanks
14:26:21 #link Proposal to scrub five Definition:Approved blueprints http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012612.html
14:26:55 bauzas expressed his opinion on the thread. TL;DR: why do this? It ain't broke, don't fix it.
14:27:18 I've gotten the impression many of you agree, or are at best neutral
14:27:23 thoughts from those present?
14:28:02 is there any bp that is easy to cut out?
14:28:08 if not, then let's go with 30
14:28:12 I get where you're coming from but I have to agree with bauzas - I think this stuff will work itself out naturally
14:29:30 there are no clear -1s on the etherpad, so I'm on the side of continuing with 30
14:29:37 Yes, it always does work itself out naturally.
14:29:51 Okay then, I stand down.
14:30:06 #agreed to Direction:Approve all Definition:Approved blueprints
14:30:24 moving on
14:30:28 we already talked about rocky EM
14:30:38 #action efried to fup with lyarwood next week
14:30:44 #undo
14:30:45 Removing item from minutes: #action efried to fup with lyarwood next week
14:30:53 #action efried to fup with lyarwood next week about rocky EM
14:31:04 #topic PTG/Summit planning
14:31:04 Please mark attendance and topics on
14:31:04 #link PTG etherpad https://etherpad.openstack.org/p/nova-victoria-ptg
14:31:20 I submitted the attendance survey
14:31:32 stating we would have about 20 people
14:32:10 and asking for a 20-person "room", with the note that one of nova/cinder/ironic/neutron ought to have a 40-person room for xproj stuff.
14:32:44 and saying we would need a minimum of 1 day
14:32:57 makes sense
14:33:23 but under the "who's gonna run the room" question, a big shrug. I guess it will be clearer after we have a Victoria PTL.
14:33:36 I'm sure diablo_rojo_phon understands that.
14:33:44 any comments, questions, concerns?
14:33:44 in Shanghai we made that dynamic
14:34:00 ++
14:34:09 there were always 2-3 cores in the room, and they handled the agenda together
14:34:38 ya, I think that makes sense
14:35:09 cool
14:35:10 #topic Sub/related team Highlights
14:35:10 Placement (tetsuro)
14:35:33 It seems melwitt is driving the consumer types work along. Otherwise nothing going on here that I'm aware of.
14:36:33 API (gmann)
14:36:42 There's an update from last week: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012563.html
14:36:50 not sure if there's anything new we haven't already talked about above
14:37:04 that's all for this week. something we need to start working on and add in the report is API bug triage. I have not looked at the numbers yet
14:37:26 bug-in-general triage. Perhaps we should reinstate the 'bug czar'
14:37:36 gmann: care to volunteer? :P
14:37:52 efried: I can, but after the policy and py2 drop work
14:38:04 that would be really great, thank you.
14:38:13 IIUC the bug czar is responsible for bugging people about bugs
14:38:32 not solely responsible for triage etc., but coordinates the effort
14:38:57 ok.
14:39:13 moving on before gmann changes his mind...
14:39:14 #topic Stuck Reviews
14:39:18 any?
14:39:56 #topic Open discussion
14:39:56 [efried] Exiting OpenStack
14:39:56 #help PTL pro tem needed
14:39:56 #link call for volunteers http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html
14:40:26 A deafening silence followed last week's call for volunteers. Ditto email responses on that topic.
14:40:58 There's very little official/documented process for replacing a PTL mid-cycle: https://docs.openstack.org/project-team-guide/ptl.html#handing-over-ptl-duties
14:41:15 I'll just point out the ominous "figure it out or the TC gets involved" bit.
14:41:54 I totally agree that we have to solve the situation somehow
14:41:59 wrt knowledge transfer, I really don't feel like anybody who would be volunteering here would need a huge amount of handoff; you all pretty much know how to run this thing.
14:42:40 and the nova PTL guide covers most things, which is a really nice doc that only a few projects have.
14:42:54 efried: if you see something to update in ^^ then that would be appreciated
14:43:25 Yes, good call, I've had that TODO for a while to take a swipe at that doc
14:43:31 anybody have that link handy?
14:43:38 I have it *somewhere*...
14:43:44 I hope gmann has it :)
14:43:53 this one? https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
14:43:56 gmann: I think it's a cross-project goal to add it to other projects
14:44:19 efried: that is the doc
14:44:37 yeah. efried is fast
14:44:42 #action efried to look at the nova PTL guide and update if/as appropriate
14:44:51 #link nova ptl guide https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
14:45:19 Okay, that's everything on the agenda. Anything else to discuss before we close?
14:45:25 yes
14:45:32 kevinz: your floor
14:45:37 I have one about bringing up CI on arm64
14:45:59 we have donated some nodes to nodepool already, and want to define some jobs
14:46:09 a draft here: https://etherpad.openstack.org/p/arm64-nova-ci
14:47:43 but actually not sure which jobs are related to multi-arch as the first ones to enable
14:48:47 The draft is picked up from one submit, removing some jobs that don't look related to architecture
14:48:55 At a glance, there are a couple of things that look odd to me, like creating a whole pipeline for this. It's unclear to me whether this should just be a job in the experimental queue (for now) or a 3pCI or...
14:48:55 sean-k-mooney, gmann: would one of you be willing to liaise with kevinz to work out the kinks here?
14:49:33 I'm sure I can try and help
14:49:44 sure.
14:49:51 kevinz: I noticed there is a separate check-arm64 pipeline, right
14:49:52 thanks a lot!
14:49:57 Thanks.
14:50:22 kevinz: are you able to hang out in #openstack-nova and/or #openstack-qa to chat with gmann and sean-k-mooney?
14:50:26 sean-k-mooney: yes, due to a lack of essential nodes, so we defined a separate pipeline
14:50:35 (we don't want to discuss it here)
14:50:42 sure, no
14:50:45 sure, np
14:50:47 :D
14:50:58 Great.
14:50:58 Anything else to discuss?
14:51:04 ya, we can discuss it in either, just ping me
14:51:22 OK, thx
14:51:36 Okay, thanks for a productive meeting.
14:51:36 o/
14:51:36 #endmeeting