15:00:03 <dtantsur> #startmeeting ironic
15:00:04 <openstack> Meeting started Mon Mar  4 15:00:03 2019 UTC and is due to finish in 60 minutes.  The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:07 <openstack> The meeting name has been set to 'ironic'
15:00:13 <dtantsur> Who's here for our exciting weekly meeting?
15:00:18 <mgoddard> \o
15:00:18 <etingof> o/
15:00:19 <rpioso> o/
15:00:23 <rloo> o/
15:00:28 <arne_wiebalck_> o/
15:00:32 <rpittau> o/
15:00:39 <iurygregory> o/
15:00:43 <jroll> \o
15:00:45 <dtantsur> Julia is out today, so you have an opportunity to see me in this chair once again :)
15:00:57 <stendulker> o/
15:00:58 <bdodd> o/
15:01:00 <kaifeng> o/
15:01:23 <dtantsur> #link https://wiki.openstack.org/wiki/Meetings/Ironic our agenda for today
* rpioso is happy to see dtantsur, but hopes TheJulia feels better soon
15:02:02 <dtantsur> #topic Announcements / Reminder
15:02:13 <dtantsur> #info This week is R-5. Client library release deadline and feature freeze.
15:02:36 <dtantsur> This is about the right time to stop writing new features and start finishing whatever is proposed already.
15:02:48 <dnuka> o/
15:03:01 <dtantsur> After Thursday (pending Julia's decision) new features will need an exception.
15:03:49 <rajinir> o/
15:04:12 <dtantsur> #link https://etherpad.openstack.org/p/DEN-train-ironic-brainstorming Please keep suggesting your ideas for the Forum/PTG
15:04:28 <dtantsur> #info dtantsur is on the Ops Meetup Wednesday-Thursday
15:04:43 <dtantsur> #link https://etherpad.openstack.org/p/BER19-OPS-BAREMETAL baremetal section of the ops meetup, ideas welcome
15:04:59 <dtantsur> anything else to announce? any questions?
15:05:10 <dnuka> o/
15:05:13 <rloo> TC elections - vote. Ends tomorrow.
15:05:17 <dtantsur> oh right
15:05:23 <dnuka> Hi folks, this is my last meeting as the OpenStack Outreachy intern. During my internship I have acquired a lot of knowledge thanks to my mentors etingof, dtantsur and my team. Before the internship I didn't know anyone from the IT industry. Thanks to my mentors & Outreachy I got the chance to meet and work with super talented and nice people from around the world, I never would have been able to
15:05:29 <dnuka> before.
15:05:31 <rloo> looks like Train PTL self-nomination starts this week (according to the stein schedule)
15:05:38 <dnuka> I just want to say, thank you Ironicers! It's been a privilege working with you all :)
15:05:51 <dtantsur> dnuka: thank YOU for working with us :)
15:05:52 <rloo> AND... the stein schedule has 'Stein Community Goals Completed' this week
15:05:57 <iurygregory> tks for the reminder rloo
15:06:04 <iurygregory> tks for the work dnuka
15:06:11 <jroll> dnuka: thanks for spending time with us, glad to hear you enjoyed :) have fun out there!
15:06:14 <rloo> thanks dnuka, and good luck! :)
15:06:17 * dtantsur tries to remember community goals for this cycle
15:06:22 <dnuka> :)
15:06:27 <rpittau> dnuka, thanks for your help and good luck :)
15:06:30 <iurygregory> python3  XD
15:06:58 <kaifeng> python3 and upgrade checkers i guess
15:07:07 * etingof hopes that dnuka won't get far from us
15:07:09 <rpioso> dnuka: Well done! Thank you for your contributions.
15:07:40 <dtantsur> okay, both look pretty good
15:07:44 <dtantsur> anything else?
15:08:02 <dnuka> thanks rpioso, dtantsur, rpittau, jroll, iurygregory :)
15:08:10 <etingof> btw, we got two new ironic projects approved for the next Outreachy round
15:08:52 <dtantsur> \o/
15:09:05 <dnuka> thank you rloo :)
15:09:06 <kaifeng> https://governance.openstack.org/tc/goals/stein/index.html
15:09:07 <iurygregory> gtz
15:09:17 <kaifeng> i think it's those two
15:09:24 <dtantsur> thanks kaifeng
15:09:30 <dtantsur> We have no action items from the previous meeting. Moving on to the statuses?
15:09:36 <rloo> ++ movin'
15:09:41 <iurygregory> ++
15:09:45 <dtantsur> #topic Review subteam status reports
15:09:45 <rpittau> let's move :)
15:10:03 <dtantsur> #link https://etherpad.openstack.org/p/IronicWhiteBoard starting around line 272
15:10:47 <dtantsur> those who read bug stats may notice that I triaged quite a few stories this morning :)
15:12:28 <kaifeng> yeah, got quite a lot of email notifications
15:12:45 <dtantsur> heh
15:13:47 <dtantsur> not many status updates so close to the release, I'm done.
15:14:54 <dtantsur> how's everyone?
15:14:57 <rloo> so python3 is one of the community goals.
15:15:18 <rloo> there are a bunch of maybe's there... L295+
15:15:18 <rpittau> rloo, correct :)
15:15:28 <rloo> what is 'minimum'?
15:15:44 <rpittau> I think minimum is py36
15:15:57 <dtantsur> rloo: minimum is voting jobs on py36, both integration and unit tests
15:16:12 <jroll> so the overall openstack plan is py2 and py3 fully supported in Train, dropping py2 in U
15:16:13 <dtantsur> I think everything else is a stretch goal (py37, zuul v3, etc)
15:16:16 <jroll> (for context)
15:16:16 <rloo> what i meant is, of all that stuff there, what should we get done to satisfy the goal.
15:16:44 <dtantsur> lines 300-305
15:16:49 <jroll> at a minimum we *need* py3 support *fully* completed by Train
15:17:12 <dtantsur> I guess grenade cannot be switched until Train..
15:17:50 <rloo> Ah, the Stein goal page has a list of 'Completion Criteria': https://governance.openstack.org/tc/goals/stein/python3-first.html
15:18:43 <kaifeng> it seems ironic client is the only one left?
15:19:28 <rloo> we should prioritize whatever is left... is derekh here?
15:20:06 <rloo> anyway, we can take that offline. would be good to know what exactly needs to be done and if we need volunteers to work on them, etc.
15:20:10 <derekh> here, reading back
15:20:47 <rpittau> rloo, I'm actually working on that :)
15:20:59 <rloo> jroll: quickly, for nova conductor group awareness -- the nova patch merged, so done?
15:21:09 <jroll> rloo: ah yes, sorry
15:21:16 * jroll forgets that is on there
15:21:26 <rloo> jroll: no worries, i just updated
15:21:31 <rloo> jroll: and yay!
15:21:38 <jroll> thanks!
15:22:29 <derekh> iirc we could switch some of the non-grenade ironic jobs with a flag, not sure about the grenade ones though
15:22:40 <rloo> the neutron event processing stuff is out of date too.
15:23:13 <rloo> other than that, i'm good
15:23:47 <openstackgerrit> Nikolay Fedotov proposed openstack/ironic-inspector master: Use getaddrinfo instead of gethostbyname while resolving BMC address  https://review.openstack.org/626552
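(The patch above swaps socket.gethostbyname(), which only handles IPv4, for socket.getaddrinfo(), which also resolves IPv6 addresses. A minimal sketch of the difference, not the actual ironic-inspector code:)

    import socket

    def resolve_bmc_address(hostname):
        # getaddrinfo resolves both IPv4 and IPv6 names;
        # gethostbyname fails for IPv6-only hosts
        addrinfo = socket.getaddrinfo(hostname, None,
                                      proto=socket.IPPROTO_TCP)
        family, _type, _proto, _canon, sockaddr = addrinfo[0]
        return sockaddr[0]  # the resolved IP address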
15:23:50 <dtantsur> #topic Deciding on priorities for the coming week
15:24:17 <dtantsur> I guess we need it on the priorities list, so hjensas could you update the events patches on the whiteboard and throw them on the priority list?
15:25:23 <dtantsur> What ironicclient changes do we need still? Only deploy templates?
15:25:43 <rloo> dtantsur: i think so. that's what i wanted to know too :)
15:26:12 <dtantsur> the configdrive change I pushed does not (for now?) have a client part for it
15:26:29 <rloo> dtantsur: oh. neutron events will need client (python API)
15:26:36 <dtantsur> rloo: I *think* it's done
15:26:58 <rloo> dtantsur: oh, then we're good there. (need to catch up on neutron events)
15:27:02 <dtantsur> https://github.com/openstack/python-ironicclient/commit/e8a6d447f803c115ed57064e6fada3e9d6f30794
15:27:13 <dtantsur> we did API+client before implementing actual processing
15:27:18 <rloo> dtantsur: awesome.
15:28:09 <dtantsur> so, seems only deploy templates, and it seems very close. good!
15:28:09 <rloo> dtantsur: i think the only thing left besides deploy templates, is to go over any bugs/prs against the client to see if we want to land any of those. and do zuulv3 migration? https://review.openstack.org/#/c/633010/
15:28:10 <patchbot> patch 633010 - python-ironicclient - Move to zuulv3 - 16 patch sets
15:28:22 <dtantsur> rloo: yep
15:30:03 <dtantsur> it feels like the events work will have to request an FFE
15:30:06 <rloo> who is doing L170: check missing py36 unit test jobs on all projects?
15:30:13 <dtantsur> volunteers ^^?
15:30:30 <rloo> dtantsur: wrt events, let's see where it is at on Thursday?
15:30:40 <dtantsur> yeah. for now two patches are WIP.
15:31:19 <rloo> dtantsur: so either FFE or punt to Train. Guess Julia can make that decision on Thurs since you'll be at that Ops thing.
15:31:27 <dtantsur> yep
15:32:07 <mgoddard> is hjensas around?
15:32:58 <mgoddard> I guess not. Would just be nice to get a feel for how far off he thinks it is
15:33:02 <dtantsur> iurygregory, rpittau, any of you want to check all our projects for the presence of a py36 unit test job?
15:33:09 <dtantsur> yeah..
15:33:12 <iurygregory> dtantsur, i can do
15:33:17 <iurygregory> =)
15:33:32 <dtantsur> thanks!
15:33:32 <rpittau> dtantsur, I'm already doing that :)
15:33:41 <dtantsur> hah, okay, so I'll leave it up to you too :)
15:33:56 <mgoddard> I had a slightly awkward thought about doing the neutron event processing via deploy steps
15:34:04 <dtantsur> hmmmmmm
15:34:13 <mgoddard> main downside is that it would complicate the steps seen by the user
15:34:17 <kaifeng> just when i was suspecting some project might not have a py36 job, i found that ngs requires one..
15:34:29 * mgoddard stops derailing
15:34:38 <rloo> mgoddard: is it only at deployment? don't we get events (or won't we) when deleting?
15:34:45 <dtantsur> and probably cleaning
15:34:46 <rloo> mgoddard: and yeah, we can discuss later
15:34:53 <dtantsur> and potentially inspection \o/
15:34:54 <rpittau> kaifeng, most of the projects are good already
15:34:59 * dtantsur stops derailing as well
15:35:08 <dtantsur> okay folks, how's the list looking?
15:35:13 <dtantsur> anything that must be there?
15:35:41 <kaifeng> i wonder what's the status of molteniron, it seems it hasn't been maintained
15:35:45 <iurygregory> dtantsur, can we add one patch for zuulv3?
15:35:46 <mgoddard> so there are as yet unwritten patches to make deploy templates more useful
15:35:57 <dtantsur> iurygregory: add after line 166
15:36:05 <dtantsur> mgoddard: for example?
15:36:14 <mgoddard> i.e. to add deploy_step decorators for RAID
15:36:22 <mgoddard> and BIOS
15:36:27 <rloo> that fast track support is a feature. anyone know the status of that? (L196ish).
15:37:12 <mgoddard> I'm out until Thursday, so it would have to be an FFE if we wanted to look at that
15:37:13 <dtantsur> rloo: I think the status is "Julia is close to getting it working"
15:37:28 <iurygregory> nvm both are there
15:37:33 <dtantsur> I doubt it's making Stein, but we'll see..
15:37:37 <mgoddard> it's not critical to the deploy templates feature, but we did talk about doing it at the PTG
15:37:49 <rloo> mgoddard: so do you think we shouldn't merge existing deploy templates, w/o the other stuff?
15:38:04 <dtantsur> rloo: I think it's fine to have the foundational bits/API in place
15:38:16 <rloo> dtantsur: yeah, me too. just wanted mgoddard's opinion
15:38:22 <dtantsur> and I'll personally be open for an FFE for RAID/BIOS steps
15:38:31 <mgoddard> rloo: I think we should merge - it could be used with a 3rd party driver that has some steps for example
15:38:44 <mgoddard> that's probably what I'd demo if we don't get that part into Stein
15:39:12 <rpioso> mgoddard: Is it the decorator infrastructure or adding them to the individual driver methods?
15:39:13 <rloo> mgoddard, dtantsur: ok, let's proceed then. and yes to FFE for raid/bios. depends on the work involved there but should be minimal. i am wondering though if we need to split up existing big deploy step for anything to work.
15:39:41 <dtantsur> rloo: I think splitting the core step requires careful discussion. I'd postpone it until the PTG.
15:39:43 <mgoddard> I could probably put together a PoC for BIOS/RAID this evening, then we can use that to evaluate FFE
15:39:47 <dtantsur> mgoddard: wanna add the topic for ^^^?
15:40:31 <mgoddard> rpioso: individual driver methods, possibly even base driver interface classes if we can do it generically
15:40:35 <rloo> dtantsur: definitely. i was thinking that if we did have to split up deploy step, then it won't get done until Train.
15:40:54 <dtantsur> I think we can do it later without breaking the API
15:41:08 <mgoddard> dtantsur: you mean add the deploy templates topic to my PoC?
15:41:09 * rpioso would like to chat with mgoddard after the meeting
15:41:10 <rloo> we already have a deploy step decorator, it might need tweaking if anything.
15:41:18 <dtantsur> mgoddard: I mean, splitting the core step
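(For reference, a rough sketch of what decorating a BIOS method with the existing deploy_step decorator could look like; the priority and argsinfo values are illustrative assumptions, not a merged design:)

    from ironic.drivers import base

    class ExampleBIOS(base.BIOSInterface):

        @base.deploy_step(priority=0, argsinfo={
            'settings': {
                'description': 'BIOS settings to apply',
                'required': True,
            },
        })
        def apply_configuration(self, task, settings):
            # priority=0: the step only runs when requested explicitly,
            # e.g. via a deploy template matched to the node
            ...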
15:41:41 <dtantsur> also, ready to move on? I have a few RFEs to review (for Train, I guess).
15:41:47 <mgoddard> sure
15:41:49 <rloo> ++ move on
15:42:12 <dtantsur> #topic RFE review
15:42:19 <dtantsur> I'll start with the one I posted
15:42:26 <dtantsur> #link https://storyboard.openstack.org/#!/story/2005126 Updating name and extra for allocations
15:42:49 <dtantsur> This is mostly an oversight on my side, the description should be pretty clear
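(Concretely, the RFE would allow a JSON PATCH on the allocation resource for name and extra, roughly as below; the endpoint, token and microversion are placeholders, not the final API:)

    import json
    import requests

    patch = [
        {'op': 'replace', 'path': '/name', 'value': 'allocation-1'},
        {'op': 'add', 'path': '/extra/owner', 'value': 'compute-team'},
    ]
    requests.patch(
        'http://ironic.example.com:6385/v1/allocations/<uuid>',
        headers={'X-Auth-Token': '<token>',
                 'Content-Type': 'application/json',
                 'X-OpenStack-Ironic-API-Version': '<new version>'},
        data=json.dumps(patch))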
15:43:20 <mgoddard> seems reasonable
15:44:04 <dtantsur> any comments? objections?
15:44:23 <kaifeng> works for me
15:44:32 <mgoddard> added rfe-approved
15:44:53 <dtantsur> thanks!
15:45:13 <dtantsur> #link https://storyboard.openstack.org/#!/story/2005119 iDRAC OOB inspection to set boot_mode capability
15:45:48 <cdearborn> IB introspection sets that flag, which allows UEFI boot mode to "just work"
15:46:05 <cdearborn> the idea is to bring OOB introspection to parity
15:46:20 <dtantsur> this seems consistent with what ironic-inspector already does
15:46:23 <cdearborn> so that UEFI boot mode will just work with the iDRAC driver
15:46:26 <cdearborn> yes
15:46:37 <rpioso> The story's headline is mislabeled. Remove [RFE].
15:46:47 <dtantsur> rpioso: no, it's an RFE
15:46:59 <rpioso> Seems like a bug fix to me.
15:47:18 <rpioso> If inspection should set boot_mode.
15:47:30 <dtantsur> it's a bit of a stretch to call it a bug fix, given that it never worked..
15:48:19 <dtantsur> how many drivers are actually setting boot_mode during inspection?
15:48:33 <rpioso> idrac
15:48:54 <dtantsur> well, currently none
15:49:04 <dtantsur> (only ironic-inspector)
15:49:20 <dtantsur> so it does sound like a new feature to me, even though I'm open to approving it right away
15:50:10 <rloo> no one has objections to it
15:50:11 <stendulker_> it should be ok i suppose, as drac does not have the ability to perform the set_boot_mode() management operation
15:50:46 <stendulker_> this would provide a way to update the ironic node with the boot mode available on the node
15:50:51 * dtantsur marks as approved
15:51:17 <dtantsur> let's debate rfe vs bug a bit later, I have a few items more to go over
15:51:19 * rpioso meant idrac does not set it
15:51:33 <dtantsur> #link https://storyboard.openstack.org/#!/story/2005060 Add option to control if 'available' nodes can be removed
15:51:34 <rpioso> dtantsur: ty
15:51:39 <dtantsur> arne_wiebalck_: if you're around ^^^
15:52:05 <arne_wiebalck> yes
15:52:13 <openstackgerrit> Iury Gregory Melo Ferreira proposed openstack/python-ironicclient master: Move to zuulv3  https://review.openstack.org/633010
15:52:13 <dtantsur> I can imagine why people may want to prevent deletion of available nodes
15:52:22 <arne_wiebalck> This is to protect nodes in available from being deleted.
15:52:42 <arne_wiebalck> Happened to us, so we thought some protection would be nice.
15:52:55 <dtantsur> I'm +1, jroll apparently as well. Objections?
15:53:22 <rpittau> should we allow a force mode to bypass the restriction?
15:53:35 <dtantsur> rpittau: in maintenance probably?
15:53:36 * jroll has totally been there
15:53:48 <arne_wiebalck> bypass is the default and current behavior
15:53:52 <dtantsur> arne_wiebalck: wdyt about still allowing deletion in maintenance?
15:54:02 <dtantsur> even with this option?
15:54:09 <rpittau> dtantsur, sounds good
15:54:34 <arne_wiebalck> sounds good
15:54:44 <rloo> so to be clear. what does 'removing' mean?
15:54:47 <arne_wiebalck> ‘available’ means ready for use
15:54:53 <dtantsur> rloo: 'baremetal node delete'
15:54:56 <rloo> cuz we use 'delete', 'destroy'.
15:54:56 <arne_wiebalck> remove == delete from ironic
15:55:08 <rloo> ah, so destroy
15:55:19 <arne_wiebalck> rloo: :-D
15:55:35 <rloo> maybe call it allow_destroy_available_nodes?
15:55:36 <dtantsur> annihilate
15:55:52 <rloo> (no comment on the terms we already used, delete & destroy)
15:56:03 <dtantsur> rloo: the downside is that we never use "destroy" in user-visible parts, only internally
15:56:18 <rloo> dtantsur: oh, we use 'delete'.
15:56:24 <dtantsur> the API is `DELETE /v1/nodes/<node>`, the CLI is 'baremetal node delete'. not sure about Python API though..
15:56:37 <arne_wiebalck> dtantsur: that’s why I used delete (but then realised it’s destroy internally)
15:56:44 <rloo> ok, allow_delete_available_nodes. and of course, we will have words to describe it...
15:57:24 <dtantsur> yeah, let's settle on "delete". the Python API also uses it.
15:57:36 <dtantsur> arne_wiebalck: please update the RFE, then we can approve it.
15:57:45 <arne_wiebalck> dtantsur: ok
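(A minimal sketch of the semantics agreed above: an oslo.config boolean named as proposed, defaulting to the current behaviour, with maintenance nodes still deletable. The check itself is an assumption, not the merged implementation:)

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('allow_delete_available_nodes',
                    default=True,
                    help='Whether nodes in the "available" state may be '
                         'deleted from ironic. True keeps the current '
                         'behaviour.'),
    ], group='conductor')

    def check_delete_allowed(node):
        # nodes in maintenance can always be deleted, per the discussion
        if (node.provision_state == 'available'
                and not node.maintenance
                and not CONF.conductor.allow_delete_available_nodes):
            raise RuntimeError('deletion of available nodes is disabled')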
15:57:54 <dtantsur> next?
15:58:18 <dtantsur> #link https://storyboard.openstack.org/#!/story/2005113 Redfish firmware update
15:58:32 <dtantsur> This one may need more details, and I've increased its scope a bit from the initial version by etingof
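(For background, Redfish exposes firmware updates through its UpdateService. With sushy that looks roughly like the following, assuming sushy's UpdateService support; the URIs are illustrative:)

    import sushy

    root = sushy.Sushy('https://bmc.example.com/redfish/v1',
                       username='admin', password='password')
    # SimpleUpdate asks the BMC to fetch and apply a firmware image
    root.get_update_service().simple_update(
        image_uri='http://images.example.com/bmc-nic-firmware.bin')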
16:00:09 * dtantsur hears crickets
16:00:25 <dtantsur> I guess we can leave it till next time. Thanks all!
16:00:42 <jroll> oh wow, time already
16:00:44 <dtantsur> yeah
16:00:46 <rpioso> Will there be a spec?
16:00:49 <rpittau> thanks! o/
16:00:50 <dtantsur> #endmeeting