14:01:07 #startmeeting nova
14:01:08 Meeting started Thu Jul 11 14:01:07 2019 UTC and is due to finish in 60 minutes. The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:12 The meeting name has been set to 'nova'
14:01:15 o/
14:01:18 o/
14:01:19 o/
14:01:19 ~o~
14:01:23 o/
14:01:31 \o
14:01:48 Sorry folks, I gave myself a cool hour to prep for this, and then got distracted.
14:04:22 #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:04:32 #topic Last meeting
14:04:32 #link Minutes from last meeting: http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-06-27-14.00.html
14:04:42 #topic Release News
14:04:43 #link spec review day Tuesday, July 2nd http://lists.openstack.org/pipermail/openstack-discuss/2019-June/007381.html
14:04:43 Much was accomplished here, but more to do.
14:05:08 I'm not sure if having a focus day is the most effective thing
14:05:24 or if I should just continue maintaining the etherpad and pestering people
14:07:04 i think we have enough to do for train
14:07:54 meaning what, we should just abandon all the specs that haven't merged yet?
14:08:08 no,
14:08:13 but people can review as they want
14:08:15 or not, whatever
14:09:14 spec freeze is in two weeks, right?
14:09:31 so anything that is not merged by then will be procedurally -2'd
14:09:32 and we've already had 3 spec review sprints, which is probably more than any other release
14:09:46 there are likely few specs that will still merge before then
14:09:54 doesn't have to be -2'd
14:10:02 just gets deferred to U if the author still cares about it
14:10:21 sure
14:10:22 If there were more overlap among (a) people who can approve specs, (b) people who contribute specs and/or their code, and (c) people who want the proposed features, the "meh, let it happen" approach might work
14:10:53 i'm not sure what that means
14:11:03 efried: well, any nova core can approve, but not all cores want to review specs
14:11:05 nova-core can approve specs now
14:11:28 Maybe I'm being starry-eyed, but we as cores have a responsibility to do *something* with specs that are proposed for the current release.
14:11:44 I keep thinking there needs to be some sort of more formal bandwidth limiter for spec approvals, to make sure that there are then code reviewers available
14:11:50 But that's neither here nor there
14:12:28 we used to do that
14:12:38 we had a spec proposal freeze at m1
14:12:44 me saying "i don't care about this, so i'm not going to vote on it" is me doing "something"
14:12:55 but how many specs did not get a single round of review?
14:13:02 If we think we've reached the limit of what the reviewers can handle, despite there being enough contributor bandwidth, then we (cores) need to be able to say "no" on those grounds.
14:13:07 i would say that's low
14:13:12 no reviews
14:13:41 but when it happens, it generally means the proposer did not reach out to people to review, or there was a general lack of interest in the feature
14:13:47 If we think a thing is good and are willing to review it, we can approve the spec and put the onus on the contributors to make the code happen in a timely enough fashion to facilitate reviews.
14:14:07 But it's pretty discouraging for a contributor to propose a thing and get radio silence.
14:14:18 which specs have had no review in train?
14:15:15 I don't know that we have any with zero reviews. I'm saying we're not "on track" to *close* (approve/abandon/defer) on all the specs currently proposed for train without doing some kind of push.
14:15:45 well, we will defer by default in two weeks
14:16:06 it's up to the author to decide to abandon or rework for U
14:16:11 ...but yes, actually, looking at the
14:16:11 #link train specs etherpad https://etherpad.openstack.org/p/nova-spec-review-day
14:16:11 it appears as though most of the actions on open specs are currently on the authors.
14:16:16 if there is no update we can abandon
14:16:34 we're also probably not on track to close on all already-approved blueprints for train
14:16:36 or bug fixes
14:16:39 or non-triaged bugs,
14:16:40 etc
14:17:10 yeah, I get it's not a perfect world.
14:17:17 for what it's worth, there are 32 specs approved already
14:17:44 we had 18 last cycle
14:17:47 but if something is going to languish, I don't want it to be the "fault" of the core team, for which I as PTL am to some degree responsible.
14:18:18 oh, i should add implemented
14:18:36 so we had 37 last cycle, 18 of which were not implemented
14:18:43 efried: i get that, and you're cat herder in chief,
14:19:06 so if there is a particular set of specs you are worried about, you can ping some cores to review something (again if necessary)
14:19:25 but those cores should also be able to say, "this isn't my area, or i don't care about this"
14:20:13 * bauzas waves super late
14:20:32 if blueprints don't get implemented because the contributor didn't get code proposed on time, or their code was crap, or whatever, that's the way it is.
14:20:32 But if contributors did everything right and blueprints didn't get implemented because we ran out of reviewer bandwidth, that seems like it's on us (the core team) during the spec part of the cycle to manage our commitment better.
14:21:04 I don't like that we simply accept that there will always be a large percentage of unimplemented blueprints every cycle.
14:21:10 with half the core team doing <1 review per day, either way it isn't going to go very well
14:22:25 okay; I'll take mriedem's suggestion: monitor specs individually, ping cores as appropriate.
14:22:27 but
14:23:12 I also think it would be nice if we could find a way to better understand our ability to get things completed in a cycle and narrow the "unimplemented" gap.
14:23:33 you wouldn't be the first
14:23:36 even (especially) if it's by approving significantly fewer blueprints
14:23:47 we tried that back in newton
14:23:48 having a core take a set of specs as owner - getting the code reviewed, asking the author if there's no code - can be good practice and work sharing
14:24:02 "spec champion"
14:24:17 yeah, kind of
14:24:24 efried: isn't that in theory one of the things the blueprint approver is meant to do
14:24:32 and the spec approvers for that matter
14:24:40 right, that's what I'm saying; I think we do a poor job of that.
14:25:01 maybe
14:25:23 sorry, sean-k-mooney, you were talking about the "spec champion" thing; I mean we do a poor job of denying blueprints because of overall load in a release.
14:25:41 yes i was
14:25:55 but yeah, it wouldn't be unreasonable to hold spec approvers accountable for reviewing the code that comes out of those blueprints.
14:26:06 ...if I didn't think that would further discourage people from committing to spec reviews :(
14:26:14 okay, let's move on.
14:26:33 but this conversation isn't over. ("Alexa, play ominous music.")
14:26:55 #topic Bugs (stuck/critical)
14:26:55 No Critical bugs
14:26:55 #link 67 new untriaged bugs (+6 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:26:55 #link 5 untagged untriaged bugs! (+5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:27:49 #topic Gate status
14:27:49 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:27:49 3rd party CI
14:27:49 #link 3rd party CI status http://ciwatch.mmedvede.net/project?project=nova&time=7+days
14:27:49 This ^ is dead, but stay tuned for a replacement
14:28:34 If you watch
14:28:34 #link grafana http://grafana.openstack.org/d/rZtIH5Imz/nodepool?orgId=1
14:28:34 you may notice the In Use count climbing above 600 now
14:29:29 a contribution of CI hardware is in process
14:29:34 more on that once the dust settles.
14:29:58 anything else on bugs or CI?
14:30:38 #topic Reminders
14:30:41 any?
14:30:57 CI related: i'm trying to convert the nova-next job to zuul v3
14:31:09 https://review.opendev.org/#/c/670196/
14:31:12 not working yet
14:31:31 ack
14:31:33 Oh, new CI hardware? Do we know if there's going to be a multi-node NUMA flavor available?
14:31:45 (Sorry for the late jump in, on an internal call at the same time)
14:31:51 donnyd: can you answer this ^ ?
14:32:00 (I think donnyd is on vacation currently)
14:32:11 I'm back :)
14:32:16 o/
14:32:31 I can make anything available
14:32:39 artom: if not, we really need to revive the fedora-based nfv ci job i was trying to create
14:33:03 All of my nodes have 4 sockets, so something like that is totally possible
14:33:05 donnyd, whoa. OK, we can take this offline, but you're making me very happy right now.
14:33:46 #topic Stable branch status
14:33:46 #link stable/stein: https://review.openstack.org/#/q/status:open+(project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/nova)+branch:stable/stein
14:33:46 #link stable/rocky: https://review.openstack.org/#/q/status:open+(project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/nova)+branch:stable/rocky
14:33:46 #link stable/queens: https://review.openstack.org/#/q/status:open+(project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/nova)+branch:stable/queens
14:33:53 Sure thing
14:33:55 donnyd: if you have nested virt available, it will help us test a lot if you can also provide multi-NUMA flavors for us to use
14:34:17 but yes, we can follow up offline
14:34:18 I can turn on nested virt
14:34:46 I was told that we currently disable it, but if it's needed I don't have any issues with it
14:34:48 queens was just released
14:34:55 #link queens 17.0.11 https://review.opendev.org/669014
14:34:58 for nova anyway
14:35:07 and also my network stack is on vlans, so doing vxlan over the top is also not an issue
14:35:44 Okay, I guess since donnyd has been officially outed at this point, I will give a hearty
14:35:44 #thanks donnyd for boosting CI hardware resources!
14:35:48 will likely start the whole release train for stable again in 2 weeks
14:36:03 donnyd, what company do you represent?
14:36:16 I assume they're not boxes you personally bought in your basement ;)
14:36:21 intel of course
14:36:46 I work at Intel, but this effort is all my own.
I personally own all the gear
14:37:02 so artom, yes, personal, in basement :P
14:37:05 I'm genuinely not sure if you're serious
14:37:25 http://project.fortnebula.com/services/
14:38:17 anyway, back to the stable topic. mriedem, did you do an os-vif release, or should i look at doing that?
14:38:21 "I am also an Amateur Radio Operator, my call sign is K0QBU."
14:38:28 donnyd, meet dansmith
14:38:33 This has been around for longer than the page indicates... about 5 or so years. Just went through a HW refresh, and has been rebuilt for CI workloads.
14:38:39 i'm not sure if it's needed or not, but i can see what the delta is since the last release
14:38:50 sean-k-mooney: i didn't do os-vif releases, no
14:39:22 ok, i don't think we have merged much lately on stable, but i'll take a look
14:41:53 did we lose efried?
14:41:59 I'm here. Moving on?
14:42:07 #topic Sub/related team Highlights
14:42:07 Placement (cdent)
14:42:19 I'll stand in, I guess.
14:42:29 #link latest pupdate http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007543.html
14:43:39 We're pretty much "done" for Train, having merged microversions 1.32-1.36
14:44:03 oh, I guess we still plan to do
14:44:03 #link Spec for Consumer Types is merged https://docs.openstack.org/placement/latest/specs/train/approved/2005473-support-consumer-types.html
14:44:51 but we've published
14:44:51 #link Microversion 1.36 (same_subtree+resourceless) is merged https://docs.openstack.org/placement/latest/placement-api-microversion-history.html#support-same-subtree-queryparam-on-get-allocation-candidates
14:44:51 which was the last major piece necessary to start modeling NUMA affinity.
14:45:12 I would like to see us at least get started on this in Train
14:45:17 efried: I'll turn on the gears for ^ once nova-manage audit is done
14:45:20 which I suppose means someone should propose a late spec.
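[Editor's note: to illustrate the placement 1.36 `same_subtree`+resourceless features mentioned above, here is a hedged sketch of how a client might build such a `GET /allocation_candidates` query. The group suffixes, resource amounts, and the `CUSTOM_NUMA_NODE` trait are made up for illustration; they are not from the meeting.]

```python
from urllib.parse import urlencode

# Sketch of a placement 1.36-style allocation candidates query:
# - a suffixed ("granular") request group carrying resources,
# - a "resourceless" group that only asks for a trait (required_* with
#   no matching resources_*),
# - same_subtree listing the group suffixes that must be satisfied by
#   providers sharing a common ancestor (e.g. the same NUMA node).
params = {
    "resources_COMPUTE": "VCPU:2,MEMORY_MB:2048",  # group with resources
    "required_NUMA": "CUSTOM_NUMA_NODE",           # resourceless group (hypothetical trait)
    "same_subtree": "_COMPUTE,_NUMA",              # tie the two groups together
    "group_policy": "none",
}
query = "/allocation_candidates?" + urlencode(params)
print(query)
```

This is only the query-string construction; a real client would send it to the placement endpoint with an `OpenStack-API-Version: placement 1.36` header.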
14:45:24 thanks bauzas
14:45:54 we already have a spec, but it needs updates
14:46:00 anyway, moving on
14:46:07 for NUMA topo & affinity in placement?
14:46:10 correct
14:46:13 okay, cool
14:46:29 Any questions on the placement side?
14:46:50 API (gmann)
14:46:51 Updates on the ML: #link http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007667.html
14:47:25 nothing extra beyond the ML. a few BPs' code is ready to review, and a few need another +2
14:47:36 Thanks gmann
14:47:43 i'd like to call out https://review.opendev.org/#/c/645520/35
14:47:47 it's in a runway
14:47:48 has a +2
14:47:51 needs another core
14:48:03 pretty straightforward
14:48:15 yeah, that is a good one to get in fast
14:48:19 it's big b/c all api microversions that have samples are big
14:48:36 I can take a look at that now. Been meaning to for a while
14:48:43 That one's been on my list for a while too.
14:48:54 Dunno if bauzas wants to weigh in first though
14:48:56 AZ goodness
14:49:05 it's not az
14:49:10 (or avoidance of same)
14:49:12 wait, what?
14:49:30 hah, ack
14:49:33 will review it
14:49:36 wait, melissaml has +1'd it, can't we just proxy that as a +2?
14:49:50 oh snap
14:50:03 anyway, i've gone through the whole universe of changes on that one: https://review.opendev.org/#/q/topic:bp/add-host-and-hypervisor-hostname-flag-to-create-server+(status:open+OR+status:merged)
14:50:09 so i'd like to see it move before i'm out next week
14:50:27 if someone has questions, just ask me in -nova
14:50:42 cool
14:50:45 stephenfin: it allows us to not use the az hack to land a server on a host while running it fully through the scheduler
14:51:02 sean-k-mooney: Yup, I've reviewed the docs for same a few times
14:51:24 #topic Stuck Reviews
14:51:35 any?
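[Editor's note: as context for the bp/add-host-and-hypervisor-hostname-flag-to-create-server change discussed in the API updates above, here is a hedged sketch of the request body it enables. It assumes the feature landed as compute API microversion 2.74, which adds `host` and `hypervisor_hostname` to the server-create body; the host names and image UUID are placeholders. Per sean-k-mooney's comment, this differs from the old availability-zone `az:host` hack in that the request still runs fully through the scheduler.]

```python
import json

# Sketch of a POST /servers body targeting a specific compute host
# (microversion 2.74 style). "compute-01" and the image UUID are
# made-up example values.
body = {
    "server": {
        "name": "pinned-server",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",  # example image UUID
        "flavorRef": "1",
        "networks": "auto",
        "host": "compute-01",                  # target compute service host
        "hypervisor_hostname": "compute-01",   # target hypervisor node
    }
}
payload = json.dumps(body)
```

Unlike forcing a host via the AZ hack, the scheduler filters still validate the requested host, so the boot fails cleanly if the host cannot take the instance.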
14:51:48 #topic Review status page
14:51:48 #link http://status.openstack.org/reviews/#nova
14:51:48 Count: 461 (-3); Top score: 1412 (-113)
14:51:48 #help Pick a patch near the top, shepherd it to closure
14:52:10 #topic Open discussion
14:52:13 Just one for mriedem specifically
14:52:14 go
14:52:42 the -2 on this needs to be dropped: https://review.opendev.org/#/c/662501/
14:53:22 It's not valid any, it's been around for a while and I've asked on IRC and in the (following) review to drop it
14:53:32 s/valid any/valid/
14:53:43 That is all :)
14:53:45 i will if i see the ec2 tempest runs passing on the series
14:53:50 ec2api
14:54:16 Kerblam https://review.opendev.org/#/c/663386/
14:54:21 or however that's spelled
14:55:02 that didn't run tempest
14:55:22 it ran functional tests
14:55:30 right?
14:55:46 the functional job runs tempest
14:55:49 probably via a hook
14:55:55 ah ok
14:55:56 http://logs.openstack.org/86/663386/5/check/ec2-api-functional-neutron/4c4aff8/logs/testr_results.html.gz
14:56:02 b/c it's the ec2api tempest plugin
14:56:48 i can look into it after the meeting
14:57:00 ta
14:57:18 anything else before we close?
14:57:43 Thanks all.
14:57:43 o/
14:57:43 #endmeeting