17:00:28 <dtantsur> #startmeeting ironic
17:00:29 <openstack> Meeting started Mon Mar  6 17:00:28 2017 UTC and is due to finish in 60 minutes.  The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:29 <ricardoas> o/
17:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:31 <yolanda> o/
17:00:33 <yuriyz> o/
17:00:33 <lucasagomes> o/
17:00:34 <openstack> The meeting name has been set to 'ironic'
17:00:35 <rpioso> o/
17:00:35 <hshiina> o/
17:00:41 <joanna> o/
17:00:50 <soliosg> o/
17:00:51 <mat128> o/
17:00:53 <mgoddard> o/
17:00:59 <TheJulia> o/
17:01:06 <rloo> o/
17:01:07 <jroll> \o
17:01:20 * jroll in another meeting at same time so won't be paying much attention here
17:01:30 <dtantsur> hey everyone, thanks for joining!
17:01:31 <aslezil_> o/
17:01:35 <jlvillal> o/
17:01:59 <mjturek> o/
17:02:01 * dtantsur gives folks one more minute to join
17:02:08 <krtaylor> o/
17:02:26 <vdrok> o/
17:02:36 <dtantsur> #topic Announcements / Reminders
17:02:48 <dtantsur> I don't have much to announce, but we have announcements from other team members
17:02:56 <dtantsur> #info UI sub-team meeting - Tuesdays at 1800 UTC in #openstack-meeting-3
17:03:08 <dtantsur> ironic-ui needs love, please join :)
17:03:21 <jroll> ++
17:03:30 <TheJulia> \o/
17:03:40 <galyna> o/
17:03:53 <dtantsur> #info Potential Nova deadlines: http://lists.openstack.org/pipermail/openstack-dev/2017-March/113275.html
17:03:54 <vsaienk0> \o
17:03:55 <jlvillal> No promises but I may have a co-worker helping out. Hasn't been decided yet though...
17:04:01 <dtantsur> jlvillal, great!
17:04:29 <dtantsur> as to nova, we have to do our specs (if any) by mid-April, apparently
17:04:38 <dtantsur> they don't expect to have a non-priority FF (which is great for us)
17:04:47 <rama__> o/
17:04:47 <dtantsur> TheJulia, do we need a spec for BFV in Nova?
17:04:52 <dtantsur> (maybe jroll knows ^^^)
17:04:59 <jlvillal> FYI: Doing a recheck on the stable/newton branch. I think it broke back when dstools was broken. Hopefully it will work now.
17:05:22 <TheJulia> I am not on top of the nova portion at the moment
17:05:34 <lucasagomes> dtantsur, apparently there's one already (merged): https://review.openstack.org/#/c/200496/
17:05:36 <dtantsur> ack
17:05:48 <lucasagomes> (link from: https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume)
17:05:53 <dtantsur> lucasagomes, this is ironic's
17:06:14 <dtantsur> but https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume seems to be approved
17:06:20 <lucasagomes> oh er, sorry, copied and pasted from the bp too quickly, thought it was nova :D
17:06:29 <jroll> dtantsur: I'm fairly certain we don't, will need to check
17:06:48 <dtantsur> cool
17:06:59 <dtantsur> #action jroll to check if we need a spec for Nova's BFV side (probably not)
17:07:02 <jroll> dtantsur: right, we do not, BP is approved https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
17:07:15 <dtantsur> good :)
17:07:17 <jroll> :)
17:07:21 <dtantsur> #undo
17:07:22 <openstack> Removing item from minutes: #action jroll to check if we need a spec for Nova's BFV side (probably not)
17:07:31 <dtantsur> now, the last thing
17:07:35 <dtantsur> #info Boot from volume meeting scheduling poll at http://doodle.com/poll/qwhnpqazmf7fn5ik
17:07:51 <dtantsur> we're getting moar dedicated subteam meetings
17:08:39 <mjturek> dtantsur: is the ironic/neutron meeting kicking off again?
17:08:57 <dtantsur> mjturek, nothing on this side yet, at least not from me
17:09:09 <mjturek> ack
17:09:20 <rloo> mjturek: is that meeting something that you'd like?
17:09:56 <mjturek> rloo: I think the networking stuff can be hard to follow without it but people were saying it was turning into a status meeting?
17:10:25 <jroll> it was, yeah, but if the people working on it want to spin it up again, I'm not opposed
17:10:26 <rloo> mjturek: would documentation be better? i believe it was a status meeting but i didn't attend the old ones
17:10:29 <dtantsur> I wonder if we can alternate between BFV and networking meetings
17:10:41 <dtantsur> as I'm not sure, to be honest, how valuable a subteam meeting is every week
17:10:45 <dtantsur> but maybe it's only me
17:10:52 <rloo> mjturek: or do you mean 'networking stuff that we're currently working on'?
17:10:59 <mjturek> rloo: yes, sorry :)
17:11:17 <rloo> mjturek: i was hoping the subteam reports would address that :-(
17:12:03 <dtantsur> maybe we can bring this to the ML?
17:12:04 <mjturek> fair enough! I was just curious. It's something I would attend if it was happening is all
17:12:17 <dtantsur> there may be more folks interested in such a meeting who are not here (e.g. folks from the vendor side)
17:12:38 <mjturek> dtantsur: I can start an ML thread if that'd be helpful
17:12:42 <dtantsur> thanks!
17:12:45 <mjturek> np
17:12:54 <dtantsur> #action mjturek to start a ML thread about reviving the networking subteam meeting
17:12:58 <dtantsur> now, finally
17:13:14 <dtantsur> #info we are defining the Pike priorities, please participate: https://review.openstack.org/439710
17:13:36 <dtantsur> anything else to announce?
17:14:26 <dtantsur> #topic Review subteam status reports
17:14:35 <dtantsur> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:14:39 <dtantsur> starting with line 97
17:15:39 <rloo> dtantsur: wrt the high bugs; anything to review? do we need folks to look into them?
17:15:45 <dtantsur> TheJulia, we haven't landed API for BFV, right? so we can't really land https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231 ?
17:15:55 <dtantsur> rloo, I haven't looked myself yet. vdrok?
17:16:09 <TheJulia> dtantsur: that is correct
17:16:23 <rloo> jlvillal & vsaienk0: wow, great work on multinode grenade. this week maybe...?
17:16:38 * jlvillal is hopeful!
17:16:49 <vsaienk0> rloo: only one patch to devstack left, and we are ready to add job in -nv mode
17:17:14 <vdrok> rloo: the two I've set to high were: do not use tempest.scenario.manager directly, and a bunch of deprecations that are seen during unit test runs
17:17:24 <vdrok> so that we are not broken at some point
17:18:16 <rloo> vdrok: thx. solio is dealing with the first. why are deprecations high? things still work?
17:18:37 <vdrok> rloo: I've seen them for a while, and some of them relate to api
17:18:47 <vdrok> so, if pecan gets updated, we are broken
17:18:56 <rloo> vdrok: ah. :-(
17:19:24 <dtantsur> ugh
17:19:37 <dtantsur> link to the bug?
17:20:08 <vdrok> lemme find it
17:21:03 <vdrok> ah, it's medium, I don't remember which one is high :( https://bugs.launchpad.net/ironic/+bug/1668240
17:21:03 <openstack> Launchpad bug 1668240 in Ironic "Deprecation warnings about pecan's _route and CORS" [Medium,In progress] - Assigned to Vladyslav Drok (vdrok)
17:21:41 <dtantsur> #link https://bugs.launchpad.net/ironic/+bug/1668240 needs fixing, so that we're not broken by new pecan versions
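
(For context on the breakage risk above: one way to surface such deprecations early is to escalate DeprecationWarning to an error during unit test runs. The snippet below is a minimal sketch of that idea only, not the fix proposed for bug 1668240; where the filter would be installed is an assumption.)

    # Minimal sketch: fail the unit tests as soon as a DeprecationWarning
    # is raised, so a new pecan release cannot break the API silently.
    # Installing this in a test-runner hook (e.g. a base test class) is
    # an assumption, not what the ironic patch actually does.
    import warnings

    warnings.simplefilter("error", DeprecationWarning)
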
17:22:20 <rloo> this is the latest/most recent bug with HIGH: https://bugs.la
17:22:22 <rloo> oops
17:22:28 <rloo> https://bugs.launchpad.net/ironic/+bug/1668974
17:22:29 <openstack> Launchpad bug 1668974 in Ironic "Do not enforce the use of Python 2 when running the unit-with-driver-libs tox target" [High,Triaged]
17:22:45 <dtantsur> ah, that one
17:22:48 <rloo> vdrok: was it that?
17:23:03 * dtantsur is not sure it's high, but python 3 is the next big thing, sooo...
17:23:33 <rloo> ok, i think we can move on :)
17:23:43 <vdrok> rloo: I guess, I did set only one bug to high then :) this one was dtantsur :)
17:23:43 <dtantsur> everyone ready to move on?
17:23:51 <lucasagomes> ++
17:24:10 <dtantsur> #topic Deciding on priorities for the coming week
17:24:36 <dtantsur> I added the multi-node grenade to the list from the previous week. otherwise I'm not sure what we can do
17:25:03 <rloo> boot from volume needs reviews? (is that right?)
17:25:21 <rloo> maybe we should look at etags spec
17:25:35 <dtantsur> seems like BFV has -1 on everything now :)
17:25:36 <vsaienk0> dtantsur: only one patch left to devstack https://review.openstack.org/#/c/440783/ nothing to be done on ironic side
17:25:49 <rloo> put etags spec on agenda for next week's meeting? will that speed it up?
17:25:49 <lucasagomes> at the PTG we identified some work that is basically ready and just needs reviews (etags as rloo said, node tags and the OSC thingy)
17:25:53 <lucasagomes> maybe we should target that
17:25:54 <dtantsur> vsaienk0, let's keep it on our radar though. we may need something as soon as it runs ;)
17:25:55 <lucasagomes> and just get it done
17:26:05 <dtantsur> rloo, lucasagomes, +1 to that
17:26:11 <mjturek> rescue/unrescue too right?
17:26:20 <lucasagomes> mjturek, true, that too AFAIR
17:26:36 <rloo> mjturek: yeah, but that is a bigger thing and hasn't been around/ready as long as the other two
17:26:50 <mjturek> rloo: ahhh understood
17:26:52 <TheJulia> +1 to review/land etags/node tags/rescue
17:26:53 <rloo> mjturek: i'd be happy if we even get one of the other two done this week. (pessimist that i am)
17:27:00 <mjturek> lolol
17:27:21 <vsaienk0> I'm kindly asking for reviews on the ironic standalone tests https://review.openstack.org/#/c/423556/ - the sooner we add the -nv jobs the better, so we can start fixing bugs if any exist
17:27:35 <dtantsur> oh, standalone tests, I'd happily get them into priorities
17:27:39 <yolanda> i'd like to get reviews on ironic deployment steps https://review.openstack.org/412523
17:27:44 <rloo> oh, faults support spec needs reviews too. if we get etags done this week, i'd vote for faults support next week
17:28:11 <dtantsur> yep, this and deploy steps for next week is probably a good plan
17:28:25 <dtantsur> yolanda, we're trying to finish small things that are already close to landing this week
17:28:36 <dtantsur> unfortunately, deploy steps is neither small nor easy :)
17:28:40 <yolanda> indeed
17:28:55 <dtantsur> any other easy wins we could consider?
17:29:23 <rloo> dtantsur: i think that's plenty. how many did we get done from last week?
17:29:26 <dtantsur> hmm, node tags does not look in good shape
17:29:31 <dtantsur> s/does/do
17:29:40 <dtantsur> https://review.openstack.org/#/q/topic:bug/1526266 - merge conflicts, -1's, etc
17:29:44 <mgould> dtantsur: no, you were right the first time :-)
17:29:58 <mgould> wait, no
17:30:02 * mgould needs coffee
17:30:17 <dtantsur> zhenguo, hey! are you planning on updating the node tags patches early this week?
17:30:20 <rloo> dtantsur: we can ping zhenguo to deal with merge conflicts; that shouldn't be a big deal.
17:30:34 <galyna> https://review.openstack.org/#/c/381991/ - as for etags, just so we don't forget since you've touched on this subject
17:30:43 <dtantsur> ok, let's keep it in, assuming somebody rebases them
17:31:08 <rloo> galyna: dtantsur already noted it, see line 92 in etherpad
17:31:25 <dtantsur> ok, does this list look good?
17:31:25 <galyna> Ok, I see :)
17:32:08 <rloo> i'm good with this list, thx.
17:32:13 <dtantsur> cool
17:32:15 <dtantsur> #topic Appointing a bug liaison for the next week
17:32:15 <lucasagomes> dtantsur, yeah I reviewed it after the PTG, the -1's are not that big... but they need fixing imo
17:32:31 <dtantsur> any volunteers to take a look at the bug list?
17:32:38 <mjturek> dtantsur: I'd like to give it a shot!
17:32:49 <vdrok> phew :)
17:32:53 <dtantsur> awesome, thanks mjturek :)
17:33:01 <mjturek> np!
17:33:23 <dtantsur> #action mjturek to help with bug triaging this week
17:33:46 <dtantsur> skipping empty topics......
17:33:56 <dtantsur> #topic Open discussion
17:34:03 <dtantsur> the floor is open, go ahead :)
17:34:22 <soliosg> this patch is ready for review/merge, it unblocks QA Tempest team (do not use tempest.scenario.manager directly): https://review.openstack.org/#/c/439252/
17:34:30 <rloo> dtantsur: if there is nothing, maybe we can just go through the priorities pike patch
17:35:16 <dtantsur> #link https://review.openstack.org/#/c/439252/ should be reviewed asap to unblock QA team
17:35:19 <dtantsur> thanks soliosg
17:35:35 <dtantsur> rloo, I agree, just giving the folks some time for suggestions
17:36:20 <soliosg> And also, about moving ironic/ironic_tempest_plugin to its new repository openstack/ironic-tempest-plugin, maybe next week we can have it as a priority; it shouldn't be difficult to make the transition
17:36:26 <lucasagomes> if there's nothing else, I have something related to the redfish spec (the current comment on it), but the pike priorities are more important at this point I think
17:36:43 <lucasagomes> so the only thing is, if you have time please take a look at the last comment at: https://review.openstack.org/#/c/184653/27/specs/approved/ironic-redfish.rst
17:36:44 <lucasagomes> thanks
17:36:49 <vsaienk0> soliosg: pas-ha replied on the mail thread and copying manager.py from tempest among projects is not the right approach
17:38:07 <vsaienk0> soliosg: there are alternative options for how to do it, please check the mail thread
17:38:30 <soliosg> vsaienk0: I think I read the discussion between pas-ha and Andrea
17:38:35 <dtantsur> #undo
17:38:36 <openstack> Removing item from minutes: #link https://review.openstack.org/#/c/439252/
17:38:53 <lucasagomes> yeah, copying whole files from one project to another is usually bad form, they tend to spread like a plague in openstack
17:39:07 <dtantsur> qa-incubator anyone? :)
17:39:11 <soliosg> vsaienk0: However, other projects have already merged the change (keeping a local copy of manager.py)
17:39:49 <rloo> i think we want to wait until/if there is a conclusion of that email thread?
17:39:58 <soliosg> https://review.openstack.org/#/q/topic:tempest-manager
17:40:25 <soliosg> vsaienk0, rloo: agree, let me read the mail thread and see what's being suggested
17:40:26 <dtantsur> soliosg, fwiw I don't see any of the "core" projects on the list..
17:40:49 <jroll> dtantsur: because they're in tree in tempest :)
17:41:17 <dtantsur> ah, obviously
17:41:23 <soliosg> dtantsur: not all projects have a dependency on the tempest.scenario.manager interface, I believe
17:41:27 * dtantsur reserves his opinion on that
17:41:33 <jroll> heh
17:41:47 <dtantsur> vsaienk0, I don't see any responses to the message soliosg refers to Oo
17:42:18 <dtantsur> nv, ML usability rocks..
17:42:24 <soliosg> I think we can take this to irc, in case we'd like to elaborate
17:42:25 <rloo> i have a concern with https://review.openstack.org/#/c/440865/, don't know if we want to discuss it here.
17:42:42 <dtantsur> #link http://lists.openstack.org/pipermail/openstack-dev/2017-March/113196.html discussion re tempest-manager refactoring consequences
17:43:06 <rloo> wrt addressing db deadlocking. we've added a new config, it'll retry 5 times. but that patch changes it to 20 times.
17:43:14 <vsaienk0> dtantsur: https://openstack.nimeyo.com/108176/openstack-ceilometer-networking-networking-networking-scenario
17:43:39 <dtantsur> fwiw I like pas-ha's suggestion
17:43:44 <rloo> nova retries 5 times. i'm concerned 20 is too many.
17:43:56 <rloo> i can also ping/discuss in irc outside this meeting.
17:44:23 <vsaienk0> rloo we can ask oslo-core team why they put 20 as default
17:44:40 <rloo> vsaienk0: yes, someone can ask, but i disagree with changing it w/o any good reason.
17:44:42 <vsaienk0> I think there should be a valid reason for doing that
17:44:52 <dtantsur> rloo, yep, let's start oslo.db discussion on that.. what we do is kinda hacky
17:45:56 <joanna> vsaienk0, rloo, dtantsur: I'll ask oslo.db team why they chose 20 as a default
17:46:05 <rloo> thx joanna
17:46:05 <dtantsur> thanks joanna
17:46:32 <joanna> I'll place a comment in the patch with the answer when I get it :)
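
(Background for the retry discussion above: ironic wraps its DB write methods with oslo.db's wrap_db_retry decorator, and the number under debate is its max_retries value; oslo.db's own default is 20, while the nova-style value is 5. The sketch below only illustrates the pattern - the function name and body are placeholders, not the code from the patch under review.)

    # Illustrative use of the oslo.db deadlock-retry decorator discussed
    # above; update_node() and its body are made up for this sketch.
    from oslo_db import api as oslo_db_api


    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def update_node(node_id, values):
        """Apply a node update; retried automatically on a DB deadlock."""
        # Real code would open a SQLAlchemy session and apply `values`.
        pass
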
17:46:41 <dtantsur> lucasagomes, redfish.. well, if we can get more vendors on board, it's probably not too hacky to split the option.
17:47:14 <lucasagomes> dtantsur, indeed, I think that would make things more flexible
17:47:21 <lucasagomes> and will be simpler to implement too
17:47:40 <dtantsur> lucasagomes, I'd prefer Hans to confirm, though, that with such a split his hardware is going to work ;)
17:48:06 <lucasagomes> yeah, he commented on it already, waiting for him to re-comment :D
17:48:51 <jroll> lucasagomes: quickly skimmed that, seems fine to me, will read in more detail later
17:49:38 <lucasagomes> jroll, cool thanks. Yeah, the proposed driver is just standard, it doesn't have anything fancy in this first iteration at least
17:49:53 <vdrok> a question related to attach/detach - do we consider it as just a function to map a neutron interface to an ironic one, or is it ok to call e.g. neutron from inside? https://review.openstack.org/#/c/424723/6/ironic/drivers/modules/network/common.py@395
17:50:38 <dtantsur> vdrok, how well does it work without neutron? :)
17:50:43 <vdrok> I don't really like the if ACTIVE there :(
17:51:14 <vdrok> dtantsur: we could trigger the attach to put the key in internal info, and then trigger the port_bind from conductor manager
17:51:41 <vdrok> it's more about defining where to do that, inside the attach itself, or on a higher level, in manager
17:51:55 <dtantsur> I can't answer without taking a deep look, I guess..
17:51:58 <vdrok> for tenant networks, we do it separately
17:52:13 <jroll> seems like a network driver thing to me
17:53:53 <vdrok> yeah, the attach was defined in the spec as "a way that different network interfaces can override the virtual network interface (VIF) to physical network interface (PIF) mapping logic currently contained in nova", my question is, should we redefine it, saying that it can actually do the binding
17:54:37 <vdrok> anyway, this can be discussed in the patch :)
17:54:54 <vdrok> just filling the quiet here :)
17:55:09 <vdrok> s/quiet/silence
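
(To make the attach question above concrete, here is a rough sketch of the "only record the VIF-to-port mapping" interpretation, assuming the tenant_vif_port_id internal_info convention; the port-selection logic and error handling are simplified illustrations, not the code under review.)

    # Simplified sketch: vif_attach() only records which VIF maps to which
    # ironic port; the alternative debated above would also call neutron
    # (port binding) from inside this method instead of leaving it to the
    # conductor / tenant-network flow.
    def vif_attach(task, vif_info):
        vif_id = vif_info['id']
        for port in task.ports:
            if 'tenant_vif_port_id' not in port.internal_info:
                internal_info = dict(port.internal_info)
                internal_info['tenant_vif_port_id'] = vif_id
                port.internal_info = internal_info
                port.save()
                return
        raise RuntimeError('no free port for VIF %s' % vif_id)
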
17:55:18 <dtantsur> there is a question on the priorities patch
17:55:34 <dtantsur> who to put as contacts there? I tried putting everyone and it ended up HUGE very quickly
17:55:49 <jroll> dtantsur: for what, attach/detach?
17:55:56 <dtantsur> Pike priorities
17:56:04 <dtantsur> so now I ended up putting only core "sponsors" of each item, which also seems contentious to folks, apparently
17:56:07 <jroll> oh, as in cores or everyone
17:56:19 * jroll already put his thoughts on that in a comment
17:56:22 <dtantsur> cores or everyone or somehow selected group
17:56:44 <jroll> we've had this debate in the past, it led to only having cores in this document
17:56:46 <JayF> I mean, I worry about using core reviewer status to distinguish anything but the +2/A ACL in gerrit.
17:57:00 <jroll> to quote
17:57:02 <jroll> The primary core contact(s) listed
17:57:04 <jroll> is/are responsible for tracking the status of that work and herding cats
17:57:06 <jroll> to help get that work done. They are not the only contributor(s) to this work!
17:57:27 <rloo> dtantsur: i'm good with just cores. i just think it is too bad, but i understand.
17:57:28 <jroll> I don't think that leaves room to think cores are being "rewarded" somehow here
17:57:35 <JayF> I do :(
17:57:47 <jroll> sigh
17:57:52 <rloo> oh, so JayF disagrees.
17:57:54 <JayF> I think it's super unfriendly to new contributors who want to spend a lot of time.
17:57:56 <jroll> I'm happy to hound anyone daily for a status
17:58:15 <jroll> I don't think everyone takes that responsibility as heavily as a core does, though
17:58:21 <dtantsur> JayF, my thinking was: 1. if we don't have cores on board, the feature is unlikely to land; 2. cores tend to show up at meetings and be online, which is not always true for random contributors
17:58:24 <JayF> I mean, I'm happy to be outvoted -- I just don't believe it's as innocuous as some of you do to rely on "cores" for stuff beyond just reviewing ACLs.
17:58:31 <jroll> I don't want to have to wait for someone that spends 25% of their time on the project, etc
17:58:48 <jroll> cores are cores not because they're good at pushing buttons in gerrit
17:58:50 <JayF> dtantsur: I think for #2 there's a little chicken/egg issue: if we expected more of contributors, maybe they'd provide more
17:59:01 <jroll> but rather because they are tuned in with the project, know our priorities and values etc
17:59:08 <dtantsur> nothing prevents them from doing it, right? :)
17:59:09 <JayF> but I'm clearly outvoted so let's not waste time arguing about it? I just didn't want my silence to be read as agreement
17:59:30 <jroll> let's all comment on the patch, right now it looks like a -1, +0, and +1 to me
17:59:33 <dtantsur> JayF, my biggest concern is actually quite trivial: with adding everyone who wants to be included, some features already have >5 members
17:59:49 <dtantsur> and I have no way to figure out whom to actually put
17:59:56 <rloo> JayF: yes, that's my issue too. If i make a one-line change, should I want to add my name to a feature?
18:00:10 <jroll> omg
18:00:12 <JayF> dtantsur: That's a better reason for limiting it to a lead/co-lead -- but limiting the "lead" to core reviewers only is not something I'm in love with.
18:00:14 <jroll> this is what commit history is for
18:00:21 <jroll> the list here is "who do I bother for status"
18:00:34 <JayF> rloo: ++ I think that's unhelpful. I don't mean we should list everyone who contributes. I just dislike that it's implied only a core reviewer can lead the implementation of a feature.
18:00:52 <dtantsur> "can lead the implementation of a feature" is not the correct way to put it IMO
18:00:56 <dtantsur> anyway, we're out of time
18:01:00 <rloo> JayF: i was exaggerating, but the list grows. you can see folks asking to add their names.
18:01:03 <NobodyCam> :)
18:01:04 <wanyen> the iLO driver team is interested in the redfish driver and plans to contribute, so please count us in
18:01:04 <dtantsur> let's move it to the channel please
18:01:09 <dtantsur> #endmeeting