17:00:00 #startmeeting ironic
17:00:02 Meeting started Mon Mar 27 17:00:00 2017 UTC and is due to finish in 60 minutes. The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:05 The meeting name has been set to 'ironic'
17:00:08 o/
17:00:08 o/
17:00:09 o/
17:00:12 o/
17:00:12 o/
17:00:14 hi o/
17:00:14 o/
17:00:18 o/
17:00:19 mmm, this chair is comfy, it's been a while
17:00:22 o/
17:00:29 o/
17:00:30 o/
17:00:31 o/
17:00:32 :p
17:00:34 o/
17:00:36 hi everyone!
17:00:42 * rloo hopes jroll doesn't fall asleep in comfy chair...
17:00:48 heh
17:00:59 pft, I'm wide awake :D
17:01:03 o/
17:01:04 #topic announcements and reminders
17:01:08 couple things
17:01:17 * NobodyCam now has monty python skit running through his head
17:01:17 o/
17:01:22 \o
17:01:30 dmitry is out this week
17:01:49 I'm also planning to do a nice small release of ironic before we start slamming features home
17:01:52 and ironicclient
17:01:53 \o
17:01:54 and ironic-lib
17:01:58 still need to check the rest
17:02:07 will do those this afternoon
17:02:13 jroll: I think bifrost as well, will double check in a little bit
17:02:23 nice, I plan to release sushy this week as well
17:02:23 TheJulia: nice, thanks
17:02:23 jroll: hmm, can we wait til we land the api for dynamic drivers?
17:02:48 jroll: ah, maybe not. i'm not sure it'll be ready in time.
17:02:51 rloo: maybe, let's talk later
17:02:52 One other announcement. I sent an email about a half hour ago to the cores that I would appreciate feedback on in the next day or two. Thanks in advance!
17:02:55 o/
17:03:23 thanks TheJulia
17:03:29 any other announcements?
17:04:18 #topic review subteam status updates
17:04:24 as always, those are here
17:04:28 #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:04:35 line 79
17:04:58 * jroll gives a friendly reminder to do these before the meeting, not during :)
17:05:38 vsaienk0: nice work on standalone stuff :)
17:05:47 * rloo just updated Bugs stats
17:06:49 ++ vsaienk0
17:06:51 o/
17:06:53 * TheJulia glares at etherpad for eating her update
17:07:04 ++ on stand-alone :)
17:07:43 Also looking forward to vsaienk0's increased concurrency patch, if it works. Big time savings :)
17:08:04 vdrok: are you taking over the node tags stuff?
17:08:22 thanks folks!
17:08:31 rloo: i can't say taking over, but I'll be updating them if zhenguo is not around
17:08:34 great work vsaienk0
17:08:37 vdrok: thx!
17:08:51 vsaienk0, sambetts: any idea what the status is wrt routed networks support? (L188)
17:10:14 rloo: I'm still working on initial commits for networking-baremetal project
17:10:42 vsaienk0: ok, i'll add that to the status, thx.
17:10:54 routed networks support is mostly about scheduling, right? I can't remember the difference between that and "physical network awareness"
17:11:29 we have to be aware of the networks to be able to schedule on them correctly
17:11:31 oh and more
17:11:37 * jroll reads http://specs.openstack.org/openstack/ironic-specs/priorities/pike-priorities.html#routed-networks-support
17:11:38 jroll: I think there is some other weird things there that require research but the physical awareness is a big part of it
17:11:47 btw routed networks spec has merged today
17:11:56 sambetts: right, they're two separate priorities/subteams, that's why I ask
17:12:18 jroll: no not only about scheduling, we need to create a mechanism which will populate nova hypervisor (ironic node) to neutron segments mapping
17:12:24 vdrok: is the link to that spec, in the etherpad (subteam report par)?
17:12:35 vsaienk0: right
17:12:36 rloo: it is
17:12:41 \o/ for merging that
17:12:43 yup
17:13:05 jroll: it is? oh, do you mean the physical network spec?
17:13:29 rloo: yes, I know you weren't directing the question at me but I answered it
17:13:37 yeah, physnet spec was landed - i didn't see the routed nets one though ?
17:13:57 jroll: thx. and mariojv ++, that's what i thought we were talking about but guess not.
17:14:04 anything else on subteam updates? they seem legit to me
17:14:09 mariojv: yeah, I haven't seen a spec on that yet
17:14:35 * jroll waits a sec
17:14:39 cool - RFE is here, i'll put a link to that for tracking on the status page for now: https://bugs.launchpad.net/ironic/+bug/1658964
17:14:39 Launchpad bug 1658964 in Ironic "[RFE] Implement neutron routed networks support in Ironic" [Wishlist,Confirmed]
17:15:05 thx mariojv
17:15:21 #topic Deciding on priorities for the coming week
17:15:30 looks like we were 1/5 last week
17:15:36 I think we need to prioritize the rolling upgrades
17:15:41 so I suggest just leaving the same 4 there
17:16:09 otherwise any patch bumping object version or rpc version breaks the grenade multinode job
17:16:12 eg https://review.openstack.org/233357
17:16:35 vdrok: i'm fine with adding rolling upgrades to that list, too - maybe at the bottom so folks review stuff that's lagging behind first?
17:16:52 though... did folks find blockers with those? or is it just slow review/code cycle? are the existing things there close to ready?
17:16:53 maybe top is better to prevent that breakage, though
17:16:56 ++ for rolling upgrades
17:17:02 vdrok: yikes
17:17:18 that seems... wrong
17:17:29 wrt rolling upgrades. i rebased them last week, and fixed some stuff in one patch. there's *one* patch I'm not sure about. although others are welcome to review. i'd like to test this week.
17:17:42 I think this includes the first patch in the chain for rolling upgrades + adding some pins to our devstack plugin
17:17:43 That kind of does, but I'm +1 on rolling upgrades
17:18:09 * jroll would love a doc on how the multinode grenade works
17:18:12 vdrok: will talk to you later on irc about the rolling upgrades stuff
17:18:20 ++ jroll. that is on my list this week to understand.
17:18:21 rloo: ok sure
17:18:38 basically, we need to be able to pin things
17:19:04 yeah, it just seems like a compatible rpc bump should work anyway
17:19:08 we can discuss that later
17:19:10 so rolling upgrades priority 1 or "last"?
17:19:19 * TheJulia thinks priority 1
17:19:34 * NobodyCam would like to see 1
17:19:41 +1
17:19:52 okay
17:20:39 if someone could throw the gerrit link there on the whiteboard, that would be awesome
17:20:49 any other priority change requests or are we good?
17:21:05 jroll: done
17:21:17 thanks
17:21:20 ty rloo
17:21:22 :)
17:21:35 no stuck specs, discussion time
17:21:43 #topic CI failure rates
17:21:45 this is TheJulia
17:21:58 #link http://paste.openstack.org/show/603960/
17:22:51 So dmitry put together a report at http://paste.openstack.org/show/603960/ that shows CI failure rates, and a major point of concern is the third party CI jobs.
17:23:14 I guess the biggest question is if anyone has any insight as to why the failure rates are so much higher, and how we can make it better?
17:23:29 We've been actively working on the issue -- http://lists.openstack.org/pipermail/openstack-infra/2017-March/005263.html
17:23:55 I think we can remove the parallel job now that we have the standalone tests?
17:24:05 (just a note, the UEFI job is now voting in gate)
17:24:16 TheJulia: are the top 4 jobs there (starting L151) expected to have 100% failure rates?
17:24:17 gate-tempest-dsvm-ironic-parallel-ubuntu-xenial-nv
17:24:27 or is it about the third party ci only?
:)
17:24:33 TheJulia: I thought Dmitry said he'd get in touch with the 4rd party CI folks about their tests failing
17:24:41 s/4/3/ :)
17:24:50 mariojv: they're experimental, so probably a WIP or abandoned WIP
17:25:02 A lot of those top jobs consist of our experimental jobs.
17:25:18 Some could probably be pruned
17:25:55 rloo: He indicated he was going to reach out, but I'm wondering if any of us in the larger community have any insight into the third party ci jobs failures, since the rates do seem rather high across the board. If it is something we're doing, we should likely fix it :)
17:25:55 But they only get run if someone does "check experimental"
17:26:07 ok cool, just wanted to be sure there's not some super serious breakage there
17:26:18 I think we can probably kill parallel like vdrok said. the py3 jobs need to get working. the full, I would like to keep around, but I don't have much time lately to work on it
17:26:26 the third party stuff, those parties will need to speak for :)
17:26:47 also, I don't see the ibm ci here in this list
17:26:54 I think JayF and hurricanerix are working or will be working on the Python3 jobs. Based on owners for Pike priorities.
17:26:58 I think we should/could either 1. wait for dmitry to get back to find out where/what he's done and/or 2. send email to the dev list.
17:27:07 A bunch of devstack changes broke our CI. Not sure about others
17:27:19 I'd wait for dmitry on third party stuff
17:27:32 rloo: That is reasonable, I was just kind of hoping that people might have gained some insight by looking at failed third party CI logs when doing reviews
17:27:35 rajinir: ++
17:27:50 TheJulia: honestly, they seem to fail so often that I don't look at them :-(
17:28:00 rloo: +1 on that.
17:28:04 same rloo
17:28:15 vdrok, maybe it means it's passing all the time (passing as in def test(self): pass) ;)
17:28:16 rloo: right, that's one of dmitry's goals to fix this cycle, it seems
17:28:27 i basically have 0 confidence voting on a lot of driver patches because of the CI flakiness there
17:28:36 milan: hopefully not :)
17:28:48 i think it was discussed before so i don't want to go over it now, but we need some definitive place to see the status of the 3rd party tests. that would at least give us an indication if we should look or not, to see if our patch is causing a failure.
17:28:52 Okay, well it sounds like we have work to do. Lets see what dmitry gets back from the 3rd party CI operators and go from there.
17:29:30 Thank you everyone
17:29:39 thanks for bringing it up, TheJulia :)
17:29:45 TheJulia: maybe add agenda item (again) for next week :)
17:29:51 ++
17:29:59 TheJulia: although dmitry might not be ready to give any update
17:29:59 rloo: excellent idea!
17:30:12 that'll teach him to go on PTO
17:30:24 rough
17:30:28 heh
17:30:29 lol
17:30:30 ha ha
17:30:36 :p
17:30:39 #topic open discussion
17:30:41 anything else?
17:30:58 * TheJulia hears crickets
17:31:01 jroll: oh, just remembered. do we have a bug triager?
17:31:14 rloo: not on the agenda, not my job!
17:31:17 :D
17:31:20 just a thank you to lucasagomes: for the red fish work
17:31:27 jroll: ha ha. it got deleted from the agenda :)
17:31:28 who wants to triage the bugs?
17:31:31 NobodyCam, :-) cheers man
17:31:32 I have brain cells, I can go through the new bugs
17:31:33 thanks again to mjturek for doing it again
17:31:34 I can help with bug triage this week
17:31:42 TheJulia: vdrok: battle
17:31:49 :D
17:31:51 lol
17:31:56 I'll just mark both of you, thanks!
17:32:00 yup
17:32:18 * TheJulia suddenly thinks of "The Princess Bride"
17:32:22 oh, wrt the brainstorming/forum thing. i'm guessing nothing happened there?
17:32:25 muahaha
17:32:29 for the summit?
17:32:42 rloo: nope, I believe jroll submitted the tc inspired session
17:32:50 i forgot when the deadline was. apr 1? or already past?
17:32:52 yeah, the vm/baremetal session is proposed
17:32:53 or someone did
17:33:06 and stig telfer proposed a "Baremetal BIOS/RAID reconfiguration according to instance" topic
17:33:07 #link https://etherpad.openstack.org/p/BOS-ironic-brainstorming
17:33:11 http://forumtopics.openstack.org/ for the record
17:33:17 ^ there's the brainstorming etherpad, not a ton of stuff there
17:33:45 rloo: april 2
17:33:45 looks like a couple ideas about ops feedback
17:33:52 will you ironicers attend boston summit?
17:33:55 * jroll makes a todo to add these things
17:33:59 Yeah, I had no braincells last week.
17:34:01 thx jroll et al!
17:34:06 xavierr: I know at least two people are going, maybe 3, maybe more
17:35:12 * jroll counts down from 10 before closing the meeting, chirp up if you have something
17:35:12 crickets?
17:35:24 #endmeeting