17:00:06 <dtantsur> #startmeeting ironic
17:00:07 <openstack> Meeting started Mon Nov 20 17:00:06 2017 UTC and is due to finish in 60 minutes. The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:10 <openstack> The meeting name has been set to 'ironic'
17:00:16 <dtantsur> o/
17:00:25 <rpioso> o/
17:00:30 <TheJulia> o/
17:00:34 <dtantsur> #link https://wiki.openstack.org/wiki/Meetings/Ironic
17:00:46 <NobodyCam> :) o/
17:00:59 <rloo> o/
17:01:11 <dtantsur> Hi all! Welcome to the regular meeting of bears doing drums!</kidding>
17:01:21 <etingof> o/
17:01:22 <NobodyCam> w00t :)
17:01:41 * NobodyCam offers up a rimshot
17:01:57 <mjturek> o/
17:02:02 <TheJulia> dtantsur: I thought we were upgrading to cute koalas ;)
17:02:15 <dtantsur> cute koalas doing heavy metal
17:02:20 <TheJulia> \o/
17:02:24 <dtantsur> #topic Announcements / Reminder
17:02:30 <rloo> TheJulia: koalas are passe. isn't the future 'grizzly bears' (vancouver)?
17:02:40 <dtantsur> a few useful ML discussions here
17:02:44 <dtantsur> #info Summary of Ironic sessions from Sydney: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124520.html
17:02:49 <milan_> aaah here it is! the mtg :) o/
17:02:51 <dtantsur> many thanks to TheJulia for writing this up
17:02:55 <rloo> +++
17:03:11 <rpioso> Thank you, TheJulia!
17:03:24 <dtantsur> #info Midcycle information: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124724.html
17:03:46 <dtantsur> #info Midcycle is on Nov 27th, see ML and https://etherpad.openstack.org/p/ironic-queens-midcycle for details
17:04:07 <dtantsur> #info Moving this meeting to our channel, please comment: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124580.html
17:04:17 <dtantsur> and the last but certainly not the least
17:04:21 <dtantsur> #info RFC: automatic upgrade to hardware types: http://lists.openstack.org/pipermail/openstack-dev/2017-November/124509.html
17:04:30 <dtantsur> actually, this may be a midcycle topic, I'll add it
17:04:52 <TheJulia> ++
17:05:05 <rloo> just do it dtantsur! :)
17:05:08 <NobodyCam> Great write up TheJulia :)
17:05:31 <dtantsur> rloo: e.g. aschultz asks for an API-level changes to silently replace classic drivers with hardware types
17:05:36 <dtantsur> s/an//
17:06:11 <rloo> dtantsur: oh, i missed that
17:06:22 <dtantsur> #info python-ironicclient 2.0.0 and ironic 9.2.0 released
17:06:29 <dtantsur> anything else? :)
17:06:33 <rloo> yay, we finally released!
17:07:08 <dtantsur> FINALLY
17:07:09 <rloo> dtantsur: wrt midcycle, do we decide on the agenda, etc, at the midcycle itself?
17:07:27 <dtantsur> there are topics on https://etherpad.openstack.org/p/ironic-queens-midcycle
17:07:42 <rloo> dtantsur: so do we accept topics still, and if so, til when?
17:08:08 <dtantsur> given that I was the only person adding topics, I don't mind people adding them till the date of the midcycle :D
17:08:08 <TheJulia> I kind of feel like we should just roll with it
17:08:43 <rloo> ok. all i am going to say to that is 'if the topic requires preparing' and it gets added late, i'm going to nix it, because i don't want to waste everyone's time
17:09:11 <dtantsur> fair enough
17:09:12 <TheJulia> Reasonable
17:09:41 <dtantsur> we can say that topics added after the 4 already proposed will be covered only if we have enough time
17:10:00 <rloo> dtantsur: ok with me
17:11:41 <dtantsur> any other announcements?
17:12:18 <dtantsur> #topic Review action items from previous meeting
17:12:25 <rloo> are we going to vote in this meeting, about where to hold the meeting?
17:12:43 <rloo> i mean the weekly meetings, not the midcycle :)
17:12:48 <dtantsur> rloo: let's postpone it till open discussion please :)
17:12:51 <rloo> dtantsur: ok
17:12:52 <dtantsur> #link http://eavesdrop.openstack.org/meetings/ironic/2017/ironic.2017-11-13-17.10.html
17:13:07 <dtantsur> so, I did some bug triaging indeed, results are at
17:13:13 <dtantsur> #link https://etherpad.openstack.org/p/ironic-bug-triage
17:13:29 <rloo> thx dtantsur!
17:13:31 <dtantsur> pas-ha promised a discussion on taking n-g-s under our wing, and he did as well
17:13:36 * dtantsur looks for a link
17:13:56 <dtantsur> #link http://lists.openstack.org/pipermail/openstack-dev/2017-November/124519.html
17:14:16 <rloo> dtantsur: let's go with 1 (add to ironic governance)
17:14:29 <dtantsur> I'm fine with that, I'll propose it soon, if nobody objects
17:14:50 <dtantsur> #action dtantsur to officially propose n-g-s under ironic program, if nobody objects really soon
17:15:04 * rloo likes these action thingies
17:15:09 <dtantsur> yeah, me too :)
17:15:12 <dtantsur> any comments?
17:15:24 <dtantsur> 3..
17:15:26 <dtantsur> 2..
17:15:28 <dtantsur> 1..
17:15:29 <rloo> what happened with the idea of holding weekly meetings for vendors?
17:15:36 <dtantsur> mmm
17:15:38 <rloo> that was an action item from a long time ago
17:15:43 <dtantsur> there was some discussion, but it ended in nothing?
17:16:06 <rloo> cuz we didn't do this #action thing then! proof that it is worth doing :)
17:16:27 <dtantsur> rajinir was asking people around
17:16:38 <dtantsur> rajinir: hi, here?
17:16:41 <rloo> yeah, i remember the ML, not much discussion there
17:17:16 <rpioso> rajinir is out this week for the Thanksgiving holiday.
17:17:29 <rpioso> And I just returned to the office this morning :)
17:17:30 <dtantsur> given that we don't have concerned vendor people asking me about this meeting, nobody really cares...
17:17:35 <dtantsur> ack, thanks rpioso
17:18:15 <dtantsur> #topic Review subteam status reports
17:18:25 <dtantsur> let's follow the agenda, we can get back to it later
17:18:36 <dtantsur> #link https://etherpad.openstack.org/p/IronicWhiteBoard line 150
17:20:17 <rloo> dtantsur: this reminds me, wrt deprecating ironic CLI. given the questions about migrating classic to hw types, i wonder if we should make more noise about deprecating ironic CLI...
17:20:59 <dtantsur> rloo: CLI deprecation is more visible to folks interacting with it
17:21:38 <rloo> dtantsur: am wondering if folks will push back/say it isn't enough time for deprecation. i guess we can wait til later to mention it more, dunno.
17:21:49 <TheJulia> w/r/t refarch guide, do we have any other patches up for that beyond the initial context setting
17:21:50 <TheJulia> ?
17:21:53 <rloo> dtantsur: alex's mention of tooling reminded me of it
17:22:12 <dtantsur> TheJulia: nope, ENOTENOUGHTIME
17:22:23 <TheJulia> dtantsur: Ack, know the feeling :(
17:22:43 <rloo> dtantsur: i added another item to classic drivers deprecation -- the migration stuff
17:22:51 <dtantsur> rloo: some folks certainly will be sad
17:26:30 <dtantsur> as I wrote there, I'm worried about graphical console, or rather its lack of progress
17:26:48 <dtantsur> vdrok, rpioso, your names are on ^^^, what's the status?
17:27:46 <rloo> dtantsur: would be good to get status for next week's midcycle
17:28:03 <dtantsur> can someone please take an action item?
17:28:19 <vdrok> dtantsur: re graphical console?
17:28:22 <dtantsur> yes
17:28:41 <rloo> i thought pas-ha said that he'd update the spec?
17:28:51 <rloo> at PTG
17:28:52 <vdrok> dtantsur: I don't think there are any updates, maybe pas-ha knows more
17:29:24 <rloo> vdrok: since you're here, what about cleaning up deploy interfaces?
17:29:41 <vdrok> don't have any progress on that either :(
17:29:54 <rloo> vdrok: any idea when you might have time to work on it?
17:29:58 <rpioso> dtantsur: re: graphical console, I don't have an update.
17:30:47 <vdrok> rloo: I think next week
17:30:57 <rloo> vdrok: great!
17:31:44 <dtantsur> #action vdrok to help moving graphical console forward
17:31:54 <dtantsur> not necessarily even coding, just showing some progress...
17:31:59 <dtantsur> otherwise we're not making it for sure
17:32:07 <vdrok> will do
17:32:21 <rloo> oh wait, is vdrok offering to do graphical console, or deploy interface, or both?
17:32:52 <dtantsur> oh, good question
17:33:06 <dtantsur> vdrok: which one/two are you volunteering for? :)
17:33:11 <rloo> vdrok is so nice, maybe he'll do both...
17:33:11 <vdrok> dtantsur rloo: can try both :)
17:33:15 <rloo> yay!
17:33:16 <dtantsur> thanks vdrok
17:33:41 <dtantsur> is everyone done with the statuses?
17:34:00 * rloo is done
17:34:16 <dtantsur> #topic Deciding on priorities for the coming week
17:34:40 <dtantsur> so, can we move zuul out of the priorities now?
17:35:06 <rloo> dtantsur: there are some ongoing zuul-related patches, but given that it is mostly cores reviewing those, yup, i'll ping you.
17:35:47 <dtantsur> so, we have midcycle planning, the BIOS spec and some rescue stuff
17:35:50 <TheJulia> I think so, bifrost just has one outstanding patch at the moment
17:36:07 <dtantsur> do we have something high-priority to throw there?
17:36:15 <rloo> https://review.openstack.org/#/c/476171/
17:36:21 <dtantsur> I can offer my documentation changes, I know rloo loves reviewing them ;)
17:36:35 <TheJulia> https://review.openstack.org/#/q/topic:bug/1699547
17:36:44 * rloo has new rule; one doc patch from dmitry per week
17:37:00 <rloo> TheJulia: yup, that's the next patch ^^
17:37:25 <dtantsur> rloo: wow, I haven't heard about this rule :)
17:37:37 <rloo> dtantsur: just came up with it, fresh off the press!
17:37:41 <TheJulia> rloo: I suspect we could land the cinder one today if we wanted to
17:37:57 <dtantsur> that's why I also added the ironicclient one ;)
17:37:58 <rloo> TheJulia: that would be great. may need a rebase cuz of reno
17:38:06 <TheJulia> yeah, already -1'ed it
17:38:58 <dhellmann> if we need to rebase something because of reno, that may be a bug in reno. please let me know what the details are if you have some time after your meeting
17:39:12 <TheJulia> dhellmann: we changed the files used
17:39:14 <dtantsur> wow, dhellmann has an alert on the word "reno" :)
17:39:29 <TheJulia> so less a rebase, more of an edit to the patch.
17:39:41 <dhellmann> TheJulia : ah, ok, cool
17:39:44 <rloo> dhellmann: thx for offering though! you're like a genie :)
17:40:03 * dhellmann dissolves back into his bottle
17:40:04 <dtantsur> yeah, thanks dhellmann
17:40:10 <milan_> lol
17:40:12 * dtantsur hopes it's a bottle of good whiskey
17:40:23 * dhellmann sees no point in bad whiskey
17:40:28 <dtantsur> +++
17:40:46 <dtantsur> anyway, I risked adding my docs changes too, if somebody has less restrictions on reading my English than rloo :)
17:40:54 <rloo> priorities look good to me
17:41:07 <rloo> dtantsur: i'm fine with the docs patch, i think i already reviewed :)
17:41:23 <dtantsur> yeah, it needs an update, I'll try to do it tomorrow morning
17:41:27 * milan_ would <3 https://review.openstack.org/#/c/466448/ if inspector counts ;)
17:41:35 <rloo> dtantsur: take your time :)
17:41:43 <milan_> * priorities I mean
17:42:00 <dtantsur> milan_: it's on the subteam prio list
17:42:05 <rloo> milan_: you can put that down at L135 ish
17:42:14 <rloo> oh yeah, already there
17:42:16 <milan_> it's been there for a while
17:42:24 <milan_> ;) so raising the awareness :P
17:42:30 <dtantsur> yeah, I'd like folks to do more subteam reviews, to be honest
17:42:35 <dtantsur> esp. with n-g-s joining soon
17:42:52 * rloo can barely do the top list of reviews...
17:42:59 <dtantsur> okay, how is the list looking?
17:43:13 <rloo> looks like a lot :D
17:43:15 * milan_ thinks there should be just single prio list but off-topic I guess
17:43:17 <rloo> so good
17:44:01 <dtantsur> milan_: it won't help, if people don't/can't/don't want to review, say, inspector
17:44:24 <milan_> dtantsur, even less if it's on a separate list ;) but yeah
17:44:27 <dtantsur> #topic Appointing a bug triaging lead for the coming week
17:44:41 <dtantsur> I cannot promise to do it this time, thanks to IKEA :) anyone?
17:45:07 <dtantsur> milan_: wanna do some bug triaging? :)
17:45:08 <milan_> I could for inspector?
17:45:16 <milan_> sure :)
17:45:17 <dtantsur> see, I read your mind :D
17:45:21 <milan_> lol
17:45:26 * etingof could probably learn from milan_
17:45:30 <dtantsur> milan_: you can try other projects too, it does not need deep knowledge
17:45:32 <dtantsur> aha, nice
17:45:41 <milan_> ack :)
17:45:45 <dtantsur> #action milan_ and etingof to lead bug triaging this week
17:45:46 <milan_> put me on the list
17:45:52 <etingof> pair programming!
17:46:09 <milan_> \0/ old school hard core agile stuff! :D
17:46:13 <dtantsur> heh
17:46:16 <dtantsur> #topic RFE review
17:46:22 <dtantsur> rloo, the mic is yours
17:46:36 <rloo> thx dtantsur
17:46:39 <rloo> welcome one and all
17:46:40 <rloo> ha ha
17:46:43 <dtantsur> #link https://bugs.launchpad.net/ironic/+bug/1680160 deprecate "hash_distribution_replicas" config option
17:46:43 <openstack> Launchpad bug 1680160 in Ironic "[RFE] deprecate "hash_distribution_replicas" config option" [Wishlist,Triaged]
17:46:48 <dtantsur> I can work as a #link-er
17:47:01 <rloo> ok, let's get down to business. i'd like to know what folks think about these RFEs. can we approve, need more info, need spec?
17:47:20 <rloo> i'm good with deprecating this. Others?
17:47:42 <dtantsur> ++ looks dangerous
17:47:56 <rloo> dtantsur: dangerous to keep, right? not to remove?
17:47:56 <dtantsur> and we're not doing a good job gate-testing even the simplest HA scenario (sigh!)
17:47:59 <TheJulia> lets do it
17:48:09 <dtantsur> dangerous for operators to use
17:48:10 <rloo> good. thx!
17:48:19 <rloo> i'll update them later
17:48:20 <rloo> next...
17:48:29 <dtantsur> #link https://bugs.launchpad.net/ironic/+bug/1710850 SNMP driver does not implement security features
17:48:29 <openstack> Launchpad bug 1710850 in Ironic "[RFE] SNMP driver does not implement security features" [Wishlist,Confirmed]
17:49:06 * etingof can work on this
17:49:15 <rloo> thx etingof :)
17:49:15 <TheJulia> I could have sworn it was proposed already... I'm good with it
17:49:30 <dtantsur> etingof: do we need to directly depend on cryptosomething?
17:49:32 <rloo> i don't know enough about the pycrypto etc concerns. if we think it'll be fine?
17:49:45 <dtantsur> I don't understand if it's just a dependency on pysnmp or we have to use it directly from ironic
17:49:55 <etingof> no, just pysnmp
17:50:11 <dtantsur> okay, then it's not our problem (well, unless we really try to bring driver dependencies to g-r)
17:50:55 <dtantsur> approving then?
17:50:58 <rloo> do folks need to do anything to migrate from snmpv2 to 3? or is it all 'under the hood'?
17:50:59 <etingof> alternatively, we could introduce the global dependency on pycryptodomex
17:51:23 <etingof> rloo, they need to reconfigure both client and server sides of SNMP
17:51:47 <etingof> rloo, SNMPv3 has different security/authentication model
17:51:58 <dtantsur> etingof: does it mean a new hardware type essentially?
17:52:12 <etingof> dtantsur, I hope not
17:52:13 <rloo> so if we do this, it isn't a choice for them, even if they don't want the security stuff, they'll need to reconfigure?
17:52:23 <dtantsur> do we have to update virtualpdu to support it?
17:52:50 <etingof> rloo, if they do not need security they should better stay with SNMPv2c
17:53:13 <rloo> etingof: so our code will support both?
17:53:15 <etingof> rloo, if they want security, then they have to provide SNMP username and symmetric keys
17:53:17 * dtantsur starts feeling like having a spec...
17:53:19 <TheJulia> it should just be a matter of settings
17:53:29 <etingof> rloo, absolutely, we support both
17:53:31 <TheJulia> although, virtualpdu support may be more complicated
17:53:40 <dtantsur> yeah, virtualpdu is one problem
17:53:43 <rloo> etingof: thx for clarifying
17:53:48 <etingof> np
17:53:53 <rloo> then virtualpdu...
17:53:53 <dtantsur> the second problem is whether we can/should call it the same hardware type
17:54:02 <dtantsur> I'm not sure how compatible the two protocols are
17:54:15 <dtantsur> we do support both IPMI 1.5 and 2.0 via our ipmi hw type
17:54:18 <rloo> so based on etingof, virtualpdu should still work. as long as we don't want to test the new stuff.
17:54:28 * dtantsur does want to test new stuff
17:54:48 <TheJulia> dtantsur: every place I've seen snmp support implemented, it is done as settings, not different drivers or overall high level options.
17:55:04 * etingof nods
17:55:22 <dtantsur> okay, at which point do we switch from v2 to v3? on seeing any of these options?
17:55:35 <etingof> dtantsur, I think so
17:55:36 <rloo> i think i'm good with this, but i'd like to put down some caveats, like 'both versions have to be supported', virtualpdu needs to be used for testing both scenarios
17:55:50 <TheJulia> dtantsur: we don't
17:55:53 <TheJulia> dtantsur: we can't
17:56:02 <dtantsur> mmmmm, then which protocol will the driver use?
17:56:10 <TheJulia> the driver should be configurable
17:56:21 <dtantsur> I thought, v2 if none of these options is present, v3 if any is.
17:56:29 <dtantsur> which is a bit too implicit to my taste
17:56:45 <dtantsur> do we need an explicit snmp_version field in driver_info?
17:56:55 <etingof> dtantsur, yes, we could also introduce snmp_version setting
17:56:55 <dtantsur> even if it's redundant, it will be clear what the operator's intention is
17:57:00 <TheJulia> dtantsur: that is possible, but several options are required to make it work with v3
17:57:04 <TheJulia> ++ snmp_version
17:57:21 <dtantsur> and we can fail if snmp_version=3, but not all options are present OR if snmp_version=2 but v3 options are present
17:57:28 * etingof commented on the required options in the bug
17:57:35 <dtantsur> with this ^^ +vpdu in place, I'm good with the RFE
17:58:15 <dtantsur> next? 2 mins left
17:58:41 <rloo> yup, next...
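[Editor's note: the version-selection rule discussed above — explicit snmp_version wins, otherwise fall back to v3 if any SNMPv3 option is present, and fail on inconsistent combinations — could be sketched roughly as below. This is a sketch only, not the actual ironic driver code; the option names snmp_user, snmp_auth_key, and snmp_priv_key and the helper name are assumptions for illustration.]

```python
# Sketch of the snmp_version validation discussed in the meeting.
# Assumed (hypothetical) SNMPv3-only driver_info option names:
V3_OPTIONS = ("snmp_user", "snmp_auth_key", "snmp_priv_key")


def resolve_snmp_version(driver_info):
    """Return the SNMP version (2 or 3) to use for a node.

    Raises ValueError when snmp_version contradicts the other options,
    per the "we can fail if..." rule from the discussion.
    """
    present = [opt for opt in V3_OPTIONS if opt in driver_info]
    version = driver_info.get("snmp_version")
    if version is None:
        # Implicit fallback: v3 if any v3 option is set, else v2c.
        return 3 if present else 2
    version = int(version)
    if version == 3 and len(present) != len(V3_OPTIONS):
        missing = sorted(set(V3_OPTIONS) - set(present))
        raise ValueError("snmp_version=3 requires: %s" % ", ".join(missing))
    if version == 2 and present:
        raise ValueError(
            "snmp_version=2 set, but SNMPv3 options given: %s"
            % ", ".join(present))
    return version
```

Making snmp_version explicit (even when redundant) keeps the operator's intent visible, which was dtantsur's point at 17:56:55.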
17:58:44 <dtantsur> #link https://bugs.launchpad.net/ironic/+bug/1708549 for rolling upgrades, pin the API version
17:58:44 <openstack> Launchpad bug 1708549 in Ironic "[RFE] for rolling upgrades, pin the API version" [Wishlist,In progress] - Assigned to Ruby Loo (rloo)
17:58:48 <dtantsur> I think YES
17:59:04 <dtantsur> just one question: will it use the same conf option?
17:59:10 * rloo abstains to avoid conflict of interest :)
17:59:15 <rloo> dtantsur: yup, same option
17:59:22 <TheJulia> definitely yes
17:59:29 <dtantsur> fine with me.. just let's not forget to document it
17:59:39 <rloo> thx, done. and doc patch is done too :)
17:59:44 <dtantsur> other opinions?
18:00:04 <dtantsur> thanks all, let's move to our channel
18:00:07 <dtantsur> #endmeeting
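[Editor's note: for context on the "same conf option" exchange above — ironic's rolling-upgrade support exposes a pin_release_version option, which this RFE extends to also pin the API version. A sketch of what an operator's configuration might look like mid-upgrade; the release value is illustrative only:]

```ini
# Hypothetical ironic.conf fragment during a rolling upgrade:
# pin internal RPC/object versions (and, per this RFE, the API
# version) to the old release until all services are upgraded,
# then remove the pin and restart the services.
[DEFAULT]
pin_release_version = 9.1
```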