17:00:08 <jroll> #startmeeting ironic
17:00:09 <openstack> Meeting started Mon Jun 13 17:00:08 2016 UTC and is due to finish in 60 minutes. The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:12 <JayF> o/
17:00:12 <jroll> ohai
17:00:13 <openstack> The meeting name has been set to 'ironic'
17:00:15 <jlvillal> o/
17:00:16 <rpioso> \o
17:00:21 <NobodyCam> o/
17:00:22 <pas-ha> o/
17:00:23 <vdrok> o/
17:00:25 <rloo> o/
17:00:31 <krtaylor> o/
17:00:37 <jroll> as always, the agenda:
17:00:37 <TheJulia> o/
17:00:38 <dtantsur> o/
17:00:39 <jroll> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:00:41 <rama_y> o/
17:00:42 <devananda> o/
17:00:48 <sambetts> o/
17:00:51 <jroll> #topic announcements and reminders
17:00:55 <milan> o/
17:01:46 <jroll> reminder that our midcycle is next week!
17:01:49 <vsaienko> o/
17:01:53 <jroll> the etherpad is here:
17:01:55 <jroll> #link https://etherpad.openstack.org/p/ironic-newton-midcycle
17:01:59 <jroll> please rsvp if possible
17:02:08 <jroll> (at the bottom)
17:02:15 <jroll> also, if you have topics to discuss, please add them
17:03:02 <rloo> so wrt 0000 timeslot. if no one rsvp's for only that one, we should cancel.
17:03:18 <jroll> ++
17:03:29 <JayF> Good call
17:03:35 * milan is the only one with beer preference :-/
17:03:38 <rloo> wrt what will be discussed and when they will be discussed, when will that be decided?
17:03:53 <jroll> rloo: first 30 minutes?
17:04:08 <jroll> I'd like to go over the priorities first, other than that I'm not opinionated when we discuss things
17:04:14 <rloo> jroll: fine with me.
17:04:36 * rloo hopes 30 min is enough time. we will *focus* :)
17:04:51 <devananda> I'm sad that I will miss a bunch of the midcycle
17:05:13 <NobodyCam> I sm only able to make the first two days
17:05:27 * rloo said that deva sad
17:05:27 <NobodyCam> s/sm/am
17:05:34 <rloo> s/said/sad/
17:06:14 <jroll> another announcement: grenade is now in our check pipeline, non-voting \o/
17:06:18 <dtantsur> \o/
17:06:20 <JayF> \o/
17:06:26 <jroll> huge thanks to everyone that contributed to that, especially vdrok and vsaienko :)
17:06:26 <jlvillal> :)
17:06:26 <NobodyCam> Great work everyone :)
17:06:29 <rloo> for those that cannot make it the entire time, it might be useful for you to indicate what you are interested in participating in (in case you can't be there the first 30 min)
17:06:49 <gabriel-bezerra> \o/
17:06:57 <xavierr> congratulations guys :)
17:07:12 <wajdi_> congrats on the grenade work guys. That's great!
17:07:17 <devananda> so many thanks to the folks who worked on that. great stuff!
17:07:25 <rloo> ++++ thanks everyone :D
17:07:40 <sambetts> \o/
17:08:54 <jroll> any other announcements/reminders?
17:09:25 <vdrok> #pixiesay -m rnr
17:09:35 <vdrok> :( no pixie
17:09:37 <JayF> Might be worth mentioning CFP is open or opening soon for the next OpenStack summit if anyone wants to submit anything about Ironic :)
17:09:49 <jroll> JayF++ it's open through mid-july
17:10:20 <dtantsur> announcement: we've dropped support for the old ramdisk
17:10:28 <jroll> oh YES
17:10:30 <jroll> ++++
17:10:33 <NobodyCam> ++
17:10:39 <sambetts> \o/ much celebration today
17:11:22 <jroll> alright, let's move on now
17:11:27 <jroll> #topic subteam status reports
17:11:29 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:11:32 <jroll> starts at line 80
17:11:33 <gabriel-bezerra> dtantsur: do you mean iSCSI deployment interface?
17:11:45 <dtantsur> gabriel-bezerra, no, I only mean the old ramdisk
17:11:55 <dtantsur> both deployment methods are supported with IPA
17:11:59 <vdrok> gabriel-bezerra: heh, that would be bad :)
17:12:00 <devananda> gabriel-bezerra: IPA ramdisk also supports iscsi-based deployment
17:12:30 <gabriel-bezerra> Oh, I see. Thank you.
17:12:40 <rloo> so we have 182 RFEs? wow.
17:12:49 <dtantsur> and growing..
17:13:02 <jroll> O_o
17:13:03 <devananda> rloo: yea... lots of development interest ...
17:13:07 <jroll> I see 108 tagged 'rfe'
17:13:11 <JayF> Has anyone looked at any metrics yet about the grenade job?
17:13:16 <JayF> Are they passing regularly?
17:13:41 <rloo> do we have people setting up the grenade partial job?
17:13:49 <dtantsur> jroll, it counts Wishlist stuff
17:13:54 <jlvillal> rloo: On my TODO list
17:13:55 <jroll> ah
17:14:01 <dtantsur> it's a bit tricky to count stuff with tags in lp API
17:14:03 <rloo> thx jlvillal
17:14:08 <jroll> fair enough dtantsur
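The counting discrepancy above (182 on the whiteboard vs. 108 tagged 'rfe') comes down to what dtantsur notes: Launchpad reports can fold Wishlist-importance bugs in with tagged ones. A minimal sketch of counting both via launchpadlib, assuming it is installed (the consumer name 'rfe-count' is arbitrary):

    from launchpadlib.launchpad import Launchpad

    # Anonymous, read-only access is enough for counting.
    lp = Launchpad.login_anonymously('rfe-count', 'production')
    ironic = lp.projects['ironic']

    # Open bug tasks explicitly tagged 'rfe'.
    tagged = ironic.searchTasks(tags=['rfe'])
    # Open Wishlist-importance tasks, which some counts fold in as RFEs.
    wishlist = ironic.searchTasks(importance=['Wishlist'])

    print('%d tagged rfe, %d wishlist' % (len(tagged), len(wishlist)))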
17:14:10 <devananda> jroll: shall I add a section for policy work to the whiteboard?
17:14:51 <rloo> i didn't add any of the 'secondary' priorities to the subteam section.
17:14:54 <rloo> should i?
17:15:07 <jroll> I'm not sure, I guess it doesn't hurt
17:15:15 <jroll> but that becomes large :)
17:15:21 <rloo> ok, i'll add policy etc then.
17:15:48 <jroll> ok cool, thanks rloo
17:15:52 <rloo> it is large in the whiteboard, but the actual report isn't usually large (although i am fine if people prove me wrong!)
17:16:00 <jroll> fair enough!
17:16:24 <jroll> okay, I don't have questions on this right now
17:16:48 <jroll> I'd encourage folks to spend review time on the networking work, driver comp spec, boot-from-volume specs
17:17:03 <jroll> gotta get these specs landed if we want to get code done this cycle
17:17:22 <rloo> wrt the networking work. since we have grenade working, we should be able to see if it works for networking, right?
17:17:29 <devananda> rloo: that's the plan
17:17:30 <jroll> rloo, yep :)
17:17:45 <rloo> i mean, the grenade tests should show up with the patches?
17:18:03 <jroll> yes, but I don't think tests have run since grenade moved to the check queue
17:18:09 <jroll> once it's rebased we should have those results
17:18:24 <devananda> I'm looking forward to seeing grenade tests on my policy changes, too :)
17:18:30 <rloo> ok, great. will wait for rebase then. and then will start reviewing :)
17:18:43 <dtantsur> first we probably need the -2 lifted from the 1st patch
17:19:11 <jroll> there's also some stuff with green labels in the "code patches" column on trello that I'd like to land by n-2 if possible: https://trello.com/b/ROTxmGIc/ironic-newton-priorities
17:19:45 <rloo> oh. active node creation. i think it has one +2. devananda maybe, it'd be quick for you. or yuriy I think.
17:19:46 <jroll> this reminds me, I want to release 6.0.0 this week, now that the bash ramdisk is gone. any objections?
17:20:06 <jroll> if not I'll do that today, I think we're at a good point for that
17:20:06 <JayF> ++ release away
17:20:07 <devananda> rloo: thanks. I'll take a look - I reviewed that a few times before
17:20:17 <JayF> question: should we release IPA when we do intermediate Ironic releases?
17:20:23 <rloo> devananda: exactly, that's why i think it'll be quick for you :)
17:20:41 <rloo> JayF: we shouldn't have to, but that doesn't answer your question.
17:20:51 <NobodyCam> JayF: would make sense even if just for consistency's sake
17:20:53 <rloo> JayF: do you think we need to?
17:21:04 <jroll> I think releasing in lockstep is an anti-pattern, but I'm not opposed to it
17:21:17 <rloo> we don't release ironic-lib or the client at the same time?
17:21:18 <JayF> I don't have a strong opinion in any direction
17:21:29 <JayF> but I think we have to assume that people would use new ironic with mitaka ipa if we don't release it
17:21:36 <dtantsur> well, we have to release it from time to time...
17:21:43 <JayF> which means any new things that needed ipa support wouldn't work yet
17:21:44 <rloo> but maybe we should until we clarify how ipa and ironic work together.
17:21:51 <jroll> JayF: yep
17:21:52 <NobodyCam> I feel it just makes human things a bit easier as I can check on dates then..
17:22:00 <JayF> I don't mean it like, "I don't expect it to work unless we do"
17:22:06 <jroll> JayF: we landed the mitaka compatibility thing, right?
17:22:08 <JayF> I mean it more like "it's a good excuse to do it"
17:22:14 <JayF> jroll: I believe so, unsure about on stable
17:22:22 <jroll> there's no stable work for that
17:22:27 <jroll> I guess I'll release it anyway to get it out
17:22:39 <NobodyCam> :)
17:22:41 <jroll> but I don't think we should force ourselves to tie them together in the future
17:22:53 <JayF> Yeah I only made that comment b/c it's a good opportunity to
17:22:57 <jroll> ++
17:23:03 <jroll> I'll check ironic-lib and the client too
17:23:09 <JayF> I will wanna re-release it as well when/if I get the new stable coreos working :/
17:23:15 <jroll> ++
17:23:22 <JayF> (that's an uphill climb, btw)
17:23:35 <jroll> yeah
17:23:43 <jroll> we're veering off topic, anything else on subteam things?
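For reference, the intermediate release jroll describes is requested by patching the project's deliverable file in openstack/releases; a rough sketch of what such a patch adds (the hash is a placeholder for the commit to tag, and the surrounding field set follows whatever that repo's deliverable files actually contain):

    # deliverables/newton/ironic.yaml (sketch; hash is a placeholder)
    launchpad: ironic
    releases:
      - version: 6.0.0
        projects:
          - repo: openstack/ironic
            hash: 0000000000000000000000000000000000000000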
17:24:07 <jroll> no other agenda topics, so
17:24:12 <jroll> #topic open discussion
17:24:17 <jroll> anyone got anything?
17:24:32 <NobodyCam> :)
17:24:56 <thiagop> I do. Would love it if someone can spend some review time on the dynamic allocation for OneView patch
17:25:12 <thiagop> https://review.openstack.org/#/c/286192/
17:25:27 <thiagop> Now it has periodics to handle some situations, as requested.
17:25:28 <NobodyCam> other than ask for reviews on https://review.openstack.org/#/c/272658
17:25:34 <JayF> I published some example hardware managers, with some pretty extensive docs/comments/gotchas in there
17:25:39 <JayF> please see the mailing list post
17:25:50 <JayF> my question would be: should this/can this live in openstack/ ?
17:26:05 <JayF> I'm thinking maybe as a "cookiecutter" type repo for IPA Hardware Managers: clone and build your own HWM plugin
17:26:44 <devananda> my work on adding keystone policy support is almost done
17:27:13 <jroll> JayF: it certainly can, I don't have a strong opinion on whether it should, other than lottery factor
17:27:23 <devananda> need to fix one or two things, and get a spec and docs up, but should be good to go by EOW, I hope
17:27:44 <thiagop> devananda: do it have a patch already?
17:27:50 <jroll> JayF: I'd ack it if proposed, though
17:27:50 <JayF> jroll: lottery factor?
17:27:53 * jlvillal likes the idea in principle of a "cookiecutter" type repo. If feasible.
17:27:55 <thiagop> does it*
17:28:03 <jroll> JayF: like bus factor but winning the lottery, you taught me this :)
17:28:09 <JayF> oh yeah, lol
17:28:18 <devananda> thiagop: yes
17:28:34 <devananda> thiagop: https://review.openstack.org/#/c/325599/
17:28:54 <thiagop> devananda: thanks, I'll take a look
17:29:00 <devananda> thanks!
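JayF's mailing list post has the real examples; for readers skimming the log, an out-of-tree hardware manager of the kind the proposed cookiecutter repo would template is a subclass of IPA's generic interface, registered through the ironic_python_agent.hardware_managers entry point. A minimal sketch (ExampleHardwareManager and example_clean are illustrative names, not JayF's code):

    from ironic_python_agent import hardware


    class ExampleHardwareManager(hardware.HardwareManager):
        HARDWARE_MANAGER_NAME = 'ExampleHardwareManager'
        HARDWARE_MANAGER_VERSION = '1'

        def evaluate_hardware_support(self):
            # Outrank the generic manager so these steps win on
            # hardware this manager knows how to handle.
            return hardware.HardwareSupport.SERVICE_PROVIDER

        def get_clean_steps(self, node, ports):
            # priority 0: the step runs only when explicitly requested.
            return [{'step': 'example_clean',
                     'priority': 0,
                     'interface': 'deploy',
                     'reboot_requested': False}]

        def example_clean(self, node, ports):
            pass  # vendor-specific work goes here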
17:29:15 <NobodyCam> oh... we should bring up the cleaning bug we found last week: https://bugs.launchpad.net/ironic/+bug/1590146
17:29:15 <openstack> Launchpad bug 1590146 in Ironic "A timed out cleaning cannot be retried successfully" [High,In progress] - Assigned to Julia Kreger (juliaashleykreger)
17:29:52 <TheJulia> https://review.openstack.org/#/c/327403/
17:30:11 <devananda> NobodyCam: ouch :(
17:30:16 <NobodyCam> yea
17:30:17 <JayF> that's just waiting on core reviews, it seems: 1x+2, 2x+1 (including mine)
17:30:22 <jroll> that's fun
17:30:25 <JayF> jroll: ^ we should not release until that's merged
17:30:28 <jroll> yep
17:30:33 * jroll adds to list
17:31:02 <jlvillal> Has anyone made any progress on the intermittent gate issue?
17:31:18 <rloo> jlvillal: which issue is that?
17:31:31 <JayF> https://bugs.launchpad.net/ironic/+bug/1590139
17:31:32 <openstack> Launchpad bug 1590139 in Ironic "gate job fails when unable to start apache service during horizon setup" [Undecided,New]
17:31:34 <jlvillal> Apache restart fails intermittently in Horizon
17:31:39 <JayF> I don't think anyone's worked on it, but I haven't seen it happen much more either
17:31:42 <rloo> is it just ironic?
17:31:49 <JayF> which makes me wonder if our bug is a dupe and it was fixed in horizon/devstack
17:31:49 <jroll> I saw one or two this weekend
17:31:50 <jlvillal> That I don't know.
17:31:58 <JayF> ah, so still ongoing, but very intermittent
17:32:08 <jroll> rather likely friday, noticed over the weekend
17:32:25 <thiagop> Are we using horizon on our gate's devstack?
17:32:30 <thiagop> if so, why?
17:32:58 <rloo> i see that jay proposed an elastic-recheck query for it but it didn't get in. yet.
17:32:59 <jroll> great question :)
17:33:28 <jroll> I suspect folks need to be poked about e-r reviews given it's a low-churn project
17:33:30 <JayF> rloo: elastic-recheck itself has been broken apparently for a while
17:33:30 <dtantsur> I think some projects are installed by default: nova, keystone, horizon...
17:33:32 <devananda> I believe it's there by default
17:33:46 <JayF> rloo: tests are broken, and the patch that fixes them has been collecting dust for weeks
17:34:01 <JayF> rloo: so I moved focus to other things; hard enough to get reviews when there isn't a dep chain to get tests working at all :/
17:34:14 <rloo> JayF: ok, just too bad.
17:34:33 <jroll> I mean... if you ask sdague to take a look I'm sure he'd be happy to help
17:35:37 * jroll pokes -qa channel
17:35:44 <thiagop> Can we disable horizon in our jobs too? It's good to have a bug, but it seems useless to have horizon in the Ironic gate..
17:36:07 * thiagop thinks "good" wasn't a good choice of words
17:36:51 <JayF> I think that's a pretty good idea, thiagop, I'd review a patch to project-config to change that behavior if you wanted to push it
17:36:53 <devananda> jlvillal: running that e-r query manually, I do not see any hits in the last 7d
17:37:09 <jlvillal> devananda: I couldn't get that search to work for me.
17:37:18 <devananda> jlvillal: ah
17:37:20 <jlvillal> devananda: Even though I saw failures
17:37:23 <thiagop> JayF: I can do that </endwarcraftquote>
17:37:34 <devananda> jlvillal: I wonder if e-r is indexing the output of the tests in which those failures occurred
17:37:39 <jlvillal> devananda: I keep getting upset at logstash, as I fail to get it to work :(
17:38:17 <jroll> anything else for the meeting at this point? we can work on this bug in channel :)
17:38:24 <rloo> there was a bug about apache not starting, but in 2014: https://bugs.launchpad.net/devstack/+bug/1340660.
17:38:24 <openstack> Launchpad bug 1340660 in devstack "Apache failed to start in the gate" [Undecided,Fix committed]
17:38:24 <jlvillal> devananda: Maybe, do you know how to find out? Who to ask?
17:38:27 <JayF> devananda: yeah, the other half of that is my query is not so good
17:38:39 <devananda> jlvillal: Last Elastic Search Index Update: Thu Jun 09 2016 10:29:56 GMT-0700 (PDT) .....
17:38:39 <JayF> devananda: I had it working, copied it out, and then it stopped working
17:38:44 <JayF> kibana :(
17:38:58 <devananda> jroll: ++ to moving on
17:40:57 <jroll> anything else?
17:41:06 <NobodyCam> say thank you :)
17:41:09 <NobodyCam> hehehe
17:41:18 <NobodyCam> :p
17:41:19 <jroll> thanks :D
17:41:21 <jroll> #endmeeting
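A footnote on the horizon thread: the devstack side of the change thiagop volunteered for is a single service toggle, roughly this in a job's generated local.conf (the project-config plumbing that injects it into the jobs is separate):

    [[local|localrc]]
    # Ironic jobs never exercise the dashboard; skipping horizon also
    # sidesteps the flaky apache restart during its setup.
    disable_service horizon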