21:00:13 <melwitt> #startmeeting nova
21:00:14 <openstack> Meeting started Thu Mar 15 21:00:13 2018 UTC and is due to finish in 60 minutes.  The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:17 <openstack> The meeting name has been set to 'nova'
21:00:20 <tssurya> o/
21:00:24 <melwitt> hello peeps
21:00:26 <dansmith> o/
21:00:27 <edleafe> \o
21:00:40 <efried> @/
21:00:43 <mriedem> o/
21:01:02 <melwitt> #topic Release News
21:01:19 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
21:01:26 <melwitt> next milestone is r-1/spec freeze on Apr 19
21:01:53 <melwitt> I was thinking we'll do a spec review day, will mention that in reminders
21:02:06 <melwitt> (it's on the agenda, I mean)
21:02:17 <melwitt> anyone else have anything for release news?
21:02:39 <melwitt> okay
21:02:44 <melwitt> #topic Bugs (stuck/critical)
21:02:55 <melwitt> no critical bugs in the query
21:03:04 <melwitt> #link 47 new untriaged bugs https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:03:33 <melwitt> untriaged bug count is up, which I think is not too surprising because of the PTG and everyone being busy with that
21:03:56 <melwitt> need to get back to triaging those down
21:04:01 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
21:04:26 <melwitt> ^ instructions on how to do bug triage. first tag with a category, then set importance/validity
21:04:35 <melwitt> #link untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
21:04:54 <melwitt> we have 12 untagged bugs, so people have been good about applying categories to bugs
21:05:17 <melwitt> but we need more people to dive into an area and set validity and importance on bugs in the area they're familiar with
21:05:25 <melwitt> #help tag untagged untriaged bugs with appropriate tags to categorize them
21:05:31 <melwitt> #help consider signing up as a bug tag owner and help determine the validity and severity of bugs with your tag
21:05:52 <melwitt> if you're a bug tag owner, please do take a look and see if there's anything new to triage for your bug tag
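(For anyone who'd rather script those triage queries than click through Launchpad, a minimal launchpadlib sketch along these lines should pull the same list of NEW bugs; the consumer name is arbitrary and the untagged filtering is done client-side here.)

```python
# Sketch: list untriaged (NEW) nova bugs, flagging the untagged ones.
# Uses launchpadlib's anonymous login; no credentials needed for reads.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('nova-bug-triage', 'production',
                                 version='devel')
nova = lp.projects['nova']

# Bugs still in NEW state, i.e. awaiting triage.
for task in nova.searchTasks(status='New'):
    bug = task.bug
    # Untagged bugs are the ones that still need a category applied.
    if not bug.tags:
        print(bug.id, bug.title)
```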
21:06:12 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
21:06:49 <melwitt> today there were problems with the latest zuul security fixes, so zuul was having POST_FAILUREs
21:06:58 <melwitt> that has recently been resolved, so it's safe to recheck your patches now
21:07:06 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
21:07:26 <melwitt> that link isn't opening for me
21:08:03 <melwitt> but from anecdotal experience, it seems like 3rd party CI has been okay for the most part
21:08:09 <melwitt> anything else on bugs or gate status?
21:08:23 <dansmith> so,
21:08:29 <dansmith> I haven't seen the xen driver vote in a while,
21:08:41 <dansmith> including on those xen patches for live migration (I don't think they have a live test, but still)
21:09:11 <melwitt> hm, good point
21:09:22 <melwitt> being that we have xenapi driver work slated for rocky
21:09:30 <dansmith> I mean, there are some green checks there,
21:09:32 <dansmith> so I dunno
21:09:46 <dansmith> but it might be worth figuring out if there's something going on there
21:09:56 <dansmith> those guys aren't around during these hours though
21:10:03 <dansmith> anyway, just another gut data point
21:10:18 <mriedem> i tend to only look for virt driver specific CI on virt driver specific patches
21:10:32 <mriedem> but i will hold up a +1/+2 until i see it
21:10:42 <melwitt> yeah, let me look into it and reach out to the xenapi peeps to make sure we have CI going for it
21:10:51 <dansmith> right, I noticed it on the xen patches
21:10:56 <dansmith> or rather, didn't notice it
21:11:18 <melwitt> #action melwitt to look into xen driver CI and reach out to the xen people if needed
21:11:31 <melwitt> cool, thanks for bringing that to attention
21:11:45 <melwitt> anyone have anything else on bugs or gate status or 3rd party CI?
21:12:06 <melwitt> #topic Reminders
21:12:13 <melwitt> #info Rocky PTG nova session summaries are being sent out to the dev ML this week and will continue the following week if needed
21:12:42 <melwitt> I'm writing up the non-placement PTG session summaries and sending them to the dev ML. will continue next week if I don't finish them this week
21:12:52 <tssurya> thanks for doing this
21:13:15 <melwitt> big thank you to jaypipes, mriedem, and cdent and anyone else who contributed to the placement summary
21:13:41 <melwitt> #help Need one or two volunteers to help with the 40 minute Nova on-boarding session at the summit
21:13:55 <dansmith> I did it the last two times,
21:14:05 <dansmith> the most recent being not as useful, it seemed
21:14:09 <dansmith> so maybe it's me :)
21:14:34 <melwitt> it was probably me since that was my first time helping with an onboarding session :|
21:14:42 <mriedem> no no no,
21:14:46 <mriedem> you were all equally bad
21:14:51 <edleafe> nah, it had to be me
21:14:53 <melwitt> aw thanks mriedem
21:15:06 <mriedem> edleafe: i was implicitly including you
21:15:32 <dansmith> so maybe we should have some newer people do it
21:15:45 <dansmith> I'll be glad to be in the room
21:15:51 <melwitt> I'll be one of the speakers, so if you'd like to help out with onboarding, please let me know and I'll add you to the list so I can tell kendall
21:16:10 <edleafe> I'll be busy prepping for the placement talk with efried
21:16:25 <efried> Was this the session where sdague showed slides with log extracts and talked through them?
21:16:45 <mriedem> yes
21:16:45 <efried> and mriedem made fun of the PowerVM driver?
21:16:46 <mriedem> in boston
21:16:51 <mriedem> i did?
21:16:57 <dansmith> I did
21:17:04 <dansmith> I don't think mriedem was in there, IIRC
21:17:11 <mriedem> heh, i was for part of it
21:17:18 <mriedem> our collective memories suck
21:17:20 <dansmith> right but not up front?
21:17:21 <melwitt> I was only in the room for the last 10 or 20 minutes so I missed that
21:17:22 <dansmith> anyway
21:17:26 <dansmith> I definitely made fun of powervm
21:17:30 <dansmith> once I knew efried was in there
21:17:31 <efried> Anyways... If the timing works out, I don't mind being in the room to field questions.
21:17:45 <efried> Not sure how much use I'll be, but maybe able to give a noob's perspective.
21:17:50 <mriedem> i volunteer efried to be up front
21:17:52 <dansmith> +1 for efried being up front
21:17:53 <efried> vay
21:18:04 <mriedem> +gibi
21:18:07 <mriedem> there
21:18:10 <mriedem> it's a party
21:18:12 <dansmith> that would be a good combo
21:18:17 <dansmith> if gibi is in
21:18:20 <efried> Hah, poor gibi_vacation
21:18:23 <melwitt> cool. well, fwiw I think it would help to have newer perspectives when giving an onboarding session
21:18:51 <melwitt> alright
21:18:52 <efried> Based on the probably-faulty assumption that I actually understand anything at this point.
21:19:04 <dansmith> efried: I'll be there to point out anything you get slightly wrong
21:19:05 <dansmith> don't worry
21:19:06 <melwitt> pff, you understand stuff
21:19:17 <efried> oh, THANK you dansmith.  Then I'm in.
21:19:21 <dansmith> "actually efried, that's a CAPITAL F"
21:19:38 <melwitt> cool, I think it will go well :)
21:19:39 <efried> I'll give you a capital F...
21:19:42 <edleafe> Heh, the people in the audience won't know if you screw up
21:19:53 <efried> 'cept when dansmith points it out
21:20:00 <efried> Okay, okay.
21:20:19 * efried has thick ears and can take it.
21:20:40 <melwitt> I'm gonna try to prepare some stuff this time. apparently there's a format that the foundation and upstream institute would like us to use, so I'm gonna learn about that and see if/how we can incorporate some of it to have on hand, depending on what a poll of the room decides they'd like to hear
21:21:08 <melwitt> anything else on onboarding?
21:21:31 <melwitt> #info Rocky spec freeze is about a month out, Apr 19. Need to be thinking about when to have a Rocky spec review day. melwitt will post to the dev ML to gather input about what date we should schedule it
21:22:12 <melwitt> we've had spec review days in the past, so I was thinking we should do one for rocky. the only thing I'm not sure about is the date we should pick, so I'll send email to the dev ML to gather input on when we should do it
21:22:31 <melwitt> #link Rocky Review Priorities https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
21:23:24 <melwitt> ^ looks like people have been using the priorities etherpad, which is great. please keep using it as your place to find things to review and add your items where applicable
21:23:50 <melwitt> there are sections for non-priority approved blueprints, trivial bug patches, the various virt drivers, subteams, osc-placement, and so on
21:24:01 <melwitt> which leads into ...
21:24:06 <melwitt> #info We're going to be experimenting with a new non-priority blueprint tracking and reviewing process this cycle: "runways". melwitt is writing up a rough draft of how it will work to gather input, then will document it and communicate it to the dev ML
21:24:40 <dansmith> I feel like if we're going to do this, we need to be doing it like yesterday
21:24:40 <melwitt> the "runways" idea has been talked about in the past, many cycles ago, and this cycle we're going to try it out
21:24:45 <dansmith> I dunno how other people feel,
21:24:48 <efried> dansmith: ++
21:24:51 <dansmith> but the longer we wait the harder it'll be
21:25:01 <dansmith> I kinda hoped to have more closure on the idea at ptg
21:25:08 <dansmith> like, a plan
21:25:19 <melwitt> I agree. I'm sorry it didn't get started earlier but I was on PTO last week and this week I'm writing a bunch of session summaries
21:25:41 <efried> If there are open questions, perhaps we can talk through during Open Discussion here?
21:25:52 <melwitt> oh, I see. well, I hope it will be pretty straightforward. I'm basically going to write up my understanding of what's on the PTG etherpad about it
21:26:20 <melwitt> I don't want it to get stuck in committee but just wanted to keep it open to some amount of feedback before just going ahead, if that makes sense
21:26:26 <dansmith> right, so,
21:26:35 <dansmith> I think we just need to pick some details and run with them
21:26:46 <dansmith> we can discuss some more, but we either need to do it or punt to next cycle I think
21:26:58 <melwitt> agreed, it's not going to be perfect and I'm 100% expecting this will be a "just do something and adjust it as we go" situation
21:27:30 <melwitt> I didn't mean for it to sound like I want to spend a lot of time designing it. I was thinking a week at most
21:28:00 <efried> I just heard dansmith volunteer to have it written up by EOD.
21:28:09 <dansmith> no, melwitt already said she was doing it
21:28:12 <dansmith> which is cool,
21:28:32 <melwitt> if dansmith is willing to do it, that would be great. I just figured no one else wanted to do it
21:28:44 * efried puts away pokeystick and stfu
21:29:26 <melwitt> so if someone else wants to write up an etherpad, please feel free :) else, I'll do it and send it out and get quick feedback and then we roll with the plan and adjust along the way
21:30:13 <melwitt> anything else on runways?
21:30:30 <claudiub|2> don't run-away from reviewing them? :D
21:30:40 <melwitt> heh
21:30:41 <dansmith> claudiub|2: you had one good one today, don't push it
21:31:02 <melwitt> #topic Stable branch status
21:31:10 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
21:31:49 <melwitt> several backports there but looks like they're getting reviews
21:31:58 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
21:32:11 <melwitt> same with pike ^
21:32:17 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
21:32:34 <mriedem> #link EOL is EOL https://review.openstack.org/#/c/548916/
21:32:49 <melwitt> I was about to ask about that
21:33:01 <dansmith> more like EOL is dumpsterfire
21:33:37 <melwitt> okay, so stable branches are to remain open for as long as reasonably possible
21:33:57 <mriedem> yar
21:34:13 <mriedem> people will come help as a result,
21:34:16 <mriedem> and everyone will be happy
21:34:29 <melwitt> well, there ya go
21:34:30 <dansmith> and you get a pony
21:34:30 <dansmith> and you get a pony
21:34:31 <dansmith> and you get a pony
21:35:21 <melwitt> okay, well, that seems okay. sounds like it means a branch goes EOL when it's broken and the cost of keeping it is too high
21:35:21 <mriedem> we'll cut another release on stable,
21:35:25 <mriedem> once mnaser's reverts are merged
21:35:55 <dansmith> except we can't delete branches that are broken and in between branches that work, right?
21:35:56 <melwitt> cool
21:35:59 <mnaser> Ill keep following them through
21:36:22 <melwitt> dansmith: I would think not but AFAIK that hasn't happened before. (has it?)
21:36:34 <dansmith> it can't happen with the current policy
21:37:15 <dansmith> anyway, I'm just throwing stones, move along :)
21:37:20 <mriedem> if Pike stops working and we delete it, we'd have to delete ocata too
21:37:23 <melwitt> well, what I mean is when we've had queens/pike/ocata I have only seen a job break on ocata, not anywhere in the middle. but I see your point that the window for it being observable has been really small
21:37:52 <dansmith> mm, I think it's broken in the middle before, but it certainly *can* happen
21:37:54 <dansmith> more releases, more potential
21:38:03 <melwitt> fair
21:38:32 <melwitt> okay, so something for everyone to be aware of
21:38:38 <melwitt> #topic Subteam Highlights
21:38:47 <melwitt> dansmith: cells v2
21:39:01 <dansmith> we had a meeting, discussed a bunch of in-progress stuff,
21:39:21 <dansmith> including bugs and features, lots of stuff up from tssurya, pretty good consensus on a new bug that got filed
21:39:28 <dansmith> everything's good I think
21:39:38 <melwitt> cool, thanks
21:39:42 <melwitt> edleafe: scheduler
21:39:48 <edleafe> Some brief discussion about the spec for supporting traits in Glance to give operators more control over where their images are run.
21:39:51 <edleafe> The 'member_of' API change was holding up the Request Filter series; that patch is now complete and awaiting Tempest recheck.
21:39:54 <edleafe> We got somewhat risqué and talked about Forbidden Traits.
21:39:57 <edleafe> cdent updated us on the patch moving ResourceProvider objects to placement.
21:40:00 <edleafe> We ended the meeting deciding what should be authoritative about traits. The issue was that the virt driver could set traits, and then the operator could unset them. The driver would then re-set them, and this could go on forever.
21:40:04 <edleafe> The discussion spilled over into -nova, where we decided that there would be a set of traits defined that a compute node would have domain over. Operators can then set any others without worrying about them getting overwritten.
21:40:08 <edleafe> That's it
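(The trait-ownership rule edleafe describes comes down to a small reconciliation step. A toy sketch of the idea, with illustrative names rather than the actual Nova code:)

```python
# Illustrative sketch (not the actual Nova patch) of the trait-ownership
# decision from the scheduler discussion: the virt driver only ever adds
# or removes traits it has domain over, leaving operator-set traits alone.

def reconcile_traits(current_traits, driver_owned, driver_reported):
    """Return the trait set to write back to the resource provider.

    current_traits:  traits currently set on the resource provider
    driver_owned:    the full set of traits this driver has domain over
    driver_reported: the subset of owned traits the driver sees right now
    """
    # Anything the driver doesn't own must have come from an operator.
    operator_traits = set(current_traits) - set(driver_owned)
    # Operator traits survive untouched; driver-owned traits are
    # replaced wholesale by what the driver currently reports.
    return operator_traits | set(driver_reported)
```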
21:40:23 <melwitt> cool, thanks
21:40:33 <melwitt> there aren't any notes for the notifications subteam from gibi
21:40:42 <melwitt> anything else on subteams?
21:40:55 <melwitt> #topic Stuck Reviews
21:41:07 <melwitt> nothing on the agenda. does anyone have anything for stuck reviews?
21:41:29 <melwitt> #topic Open discussion
21:41:40 <melwitt> there are several specless blueprints on the agenda
21:41:48 <melwitt> first up, dansmith for CPU weigher
21:41:58 <dansmith> yeah, so stephen wants this approved,
21:42:09 <dansmith> a new weigher for spreading due to cpu allotment
21:42:18 <dansmith> weighers are good, I don't imagine this is controversial
21:42:21 <dansmith> so I imagine we should approve it
21:42:51 <melwitt> yeah, seems like the addition of a new weigher should be fine
21:43:39 <melwitt> anyone else have opinions about a new CPUWeigher?
21:44:12 <melwitt> okay, I'll ask about it in -nova later when more people are around
21:44:25 <melwitt> next is from mriedem, adding granular policy code to placement
21:44:27 <mriedem> -ops list would be more useful, but
21:44:44 <mriedem> we agreed at the ptg to do granular placement policy
21:44:47 <mriedem> just don't know if i need to write a spec
21:44:48 <dansmith> mriedem: we have ops that want it, in case it's not obvious :D
21:44:53 <mriedem> psh,
21:44:57 <mriedem> i don't trust red hat customer ops
21:45:15 <mriedem> i thought stephen just wanted it for his own pet scheduler
21:45:38 <melwitt> the bp is really old, from 2014 and it was from someone else, Charlotte Han
21:45:39 <mriedem> this placement granular policy thing, all rules would default to admin-only as today
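(In oslo.policy terms, "granular rules defaulting to admin-only" looks roughly like the sketch below; the rule name and path are illustrative, not necessarily what the placement patches define.)

```python
# Rough shape of one granular placement policy rule with oslo.policy
# (illustrative names; defaults stay admin-only, as today).
from oslo_policy import policy

rules = [
    policy.DocumentedRuleDefault(
        name='placement:resource_providers:list',
        check_str='role:admin',
        description='List resource providers.',
        operations=[{'path': '/resource_providers', 'method': 'GET'}],
    ),
]


def list_rules():
    return rules
```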
21:46:23 <mriedem> i personally don't care about the cpu weigher
21:46:25 <mriedem> but,
21:46:25 <dansmith> I think there was wide agreement about this,
21:46:33 <mriedem> i could see a bunch of people wanting to add more weighers
21:46:35 <dansmith> I don't think we need a spec for it.. it's mechanical
21:46:35 <mriedem> just cuz
21:47:01 <dansmith> mriedem: weighers that work on our fundamental data structures are good right?
21:47:03 <melwitt> I guess I don't see what's wrong with adding more weighers
21:47:15 <dansmith> weighers to do crazy things, totally agree
21:47:22 <mriedem> they are all on by default, aren't they
21:47:31 <mriedem> but i guess the default weight factor isn't?
21:47:32 <dansmith> wanting to land on the least-overcommitted node is pretty common
21:47:49 <mriedem> so by default, the weigher doesn't do anything
21:47:51 <mriedem> right?
21:47:53 <dansmith> are they all on by default?
21:47:58 <mriedem> pretty sure
21:48:28 <mriedem> weight_classes = nova.scheduler.weights.all_weighers
21:48:48 <dansmith> but you tweak them on or off
21:48:55 <dansmith> whether you want to pack or spread with this for example
21:49:00 <mriedem> by the weight option i think
21:49:12 <mriedem> e.g. ram_weight_multiplier = 1.0
21:49:18 <dansmith> yeah
21:49:30 <mriedem> and we already have weighers for ram and disk
21:49:33 <dansmith> this exact point by the way.. we have them for things like ram
21:49:33 <mriedem> so sure, add cpu
21:49:34 <dansmith> right
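(A weigher of the kind being discussed is only a few lines of code. A minimal CPU version might look like this, modeled on the existing RAM weigher; the cpu_weight_multiplier option name is an assumption, not necessarily what the blueprint uses.)

```python
# Sketch of a CPU weigher in the style of nova's RAM/disk weighers
# (illustrative; the real blueprint implementation may differ).
import nova.conf
from nova.scheduler import weights

CONF = nova.conf.CONF


class CPUWeigher(weights.BaseHostWeigher):

    def weight_multiplier(self):
        # Positive multiplier spreads across hosts, negative packs
        # (assumed option name, mirroring ram_weight_multiplier).
        return CONF.filter_scheduler.cpu_weight_multiplier

    def _weigh_object(self, host_state, weight_properties):
        # More free (overcommit-adjusted) vCPUs -> higher weight.
        return (host_state.vcpus_total *
                host_state.cpu_allocation_ratio -
                host_state.vcpus_used)
```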
21:50:33 <melwitt> okay, so sounds like we agree the CPUWeigher is cool to approve, and so is the granular policy: we agreed about it at the PTG, the question was whether it needs a spec, and it doesn't
21:50:35 <mriedem> left a comment in the bp, i'm good with it
21:51:13 <melwitt> okay, next on the agenda is from gibi, specless bp for including traceback in versioned error notifications
21:51:20 <dansmith> I think we agreed on this at ptg too,
21:51:22 <melwitt> https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
21:51:32 <dansmith> or at least I don't think I heard anyone say anything about why we shouldn't have this right?
21:51:58 <melwitt> yeah, I don't remember any disagreement
21:52:23 <mriedem> gibi was going to find out why it wasn't in the versioned notifications to start,
21:52:32 <mriedem> which was discussed in the original versioned notification spec,
21:52:46 <mriedem> and people were just worried about tracebacks and unstructured blobs,
21:52:50 <mriedem> but we have that in the rest api today so meh
21:52:54 <mriedem> and it's in the legacy notifications
21:52:57 <mriedem> seems like a non-issue
21:53:05 <dansmith> it's a for-humans text field
21:53:07 <dansmith> it seems fine to me
21:53:22 <melwitt> yeah, I think he explains why it wasn't originally in the notifications in the bp
21:53:22 <mriedem> typos are not intentional, just typing fast
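(The change under discussion presumably amounts to one new field plus a version bump on the error payload. A rough sketch of that shape, not the merged patch; the pre-existing field names reflect the 1.0 payload as best understood.)

```python
# Rough shape of adding a traceback field to nova's versioned error
# notification payload (illustrative, not the merged change).
from nova.notifications.objects import base
from nova.objects import base as nova_base
from nova.objects import fields


@nova_base.NovaObjectRegistry.register_notification
class ExceptionPayload(base.NotificationPayloadBase):
    # Version 1.0: Initial version
    # Version 1.1: Add traceback field (this blueprint)
    VERSION = '1.1'

    fields = {
        'module_name': fields.StringField(),
        'function_name': fields.StringField(),
        'exception': fields.StringField(),
        'exception_message': fields.StringField(),
        # For-humans text field carrying the full traceback.
        'traceback': fields.StringField(),
    }
```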
21:53:51 <melwitt> okay, so we're cool to approve that one too then
21:54:15 <melwitt> next is specless blueprint for xenapi aggregate removal (and related alternate pool implementation) from dansmith
21:54:22 <melwitt> https://blueprints.launchpad.net/nova/+spec/live-migration-in-xapi-pool
21:54:23 <dansmith> this also was discussed at ptg,
21:54:30 <dansmith> and I think we should obviously approve it
21:54:39 <dansmith> it will end up removing the aggregate upcall from the xen driver,
21:54:46 <dansmith> and moves to an actually-usable pool model for them
21:54:54 <dansmith> so, fixes the brokenness and removes an upcall,
21:54:59 <dansmith> all with just driver maintenance
21:55:03 <melwitt> yeah. the host aggregate upcall thing is broken today IIUC, so they need to do this work to fix it
21:55:10 <mriedem> they don't have the up-call removal patch up yet right?
21:55:16 <dansmith> so is the actual aggregate pool as far as I understand
21:55:18 <dansmith> mriedem: they don't
21:55:27 <dansmith> mriedem: they're implementing the new scheme and will follow that with removing the other stuff
21:55:35 <dansmith> makes for less churn anyway
21:55:39 <dansmith> so I think it's appropriate
21:55:41 <mriedem> yeah multiple patches is good
21:55:46 <mriedem> i'm happy with this
21:55:50 <mriedem> jianghuaw_ and co do nice work
21:56:02 <melwitt> okay, so we will approve that one
21:56:14 <melwitt> next is specless blueprint for versioned notification when removing members from a server group from takashin
21:56:22 <melwitt> https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
21:56:34 <melwitt> mriedem and I have commented on the bp already
21:57:30 <melwitt> we agreed it's not useful to notify for the quota recheck removing a server group member
21:57:54 <mriedem> i want to see the PoC before we approve the bp
21:58:05 <mriedem> "As for the other scenario (deleting a server) that is more  straight-forward but I'd like to see the proposed code for that, since  it would likely involve needing to lookup the group that the server is  in *before* we delete it, since there is no specific method to remove a  server group member when an instance is deleted, it's all implicit."
21:58:31 <melwitt> yes, so we'll hold off on that one and let takashin know to propose the PoC
21:58:39 <dansmith> 1.5 minutes
21:58:41 <melwitt> last is SpecLess blueprint (not sure if it's required, but I filed one anyway): libvirt-cpu-model-extra-flags from kashyap
21:58:45 <melwitt> yeah
21:59:09 <dansmith> yeah this is important for several reasons, but meltdown specifically
21:59:21 <dansmith> there's agreement on the code already
21:59:26 <dansmith> tests notwithstanding
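(For context, the blueprint adds a config knob under [libvirt]; the Meltdown-related use would look roughly like this nova.conf snippet, with illustrative values.)

```ini
# nova.conf on the compute node (illustrative values)
[libvirt]
cpu_mode = custom
cpu_model = IvyBridge
# Expose extra CPU flags to guests, e.g. PCID, which restores
# performance lost to Meltdown mitigations in the guest kernel:
cpu_model_extra_flags = pcid
```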
21:59:38 <melwitt> okay, cool. so that one should be approved
21:59:55 <melwitt> we have like 30 seconds left so have to wrap up now. thanks everyone
21:59:56 <mriedem> if it's a bp,
21:59:59 <melwitt> #endmeeting