17:00:00 #startmeeting ironic
17:00:01 Meeting started Mon Apr 3 17:00:00 2017 UTC and is due to finish in 60 minutes. The chair is dtantsur. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:03 o/
17:00:05 The meeting name has been set to 'ironic'
17:00:06 o/
17:00:07 o/
17:00:10 o/
17:00:11 o/
17:00:18 o/
17:00:19 o/
17:00:28 \o
17:00:31 \o
17:00:36 o/
17:00:41 o/
17:00:55 hi everyone, thanks for joining :)
17:01:07 as usual, our agenda (relatively light today) can be found at:
17:01:09 o/
17:01:10 o/
17:01:12 o/
17:01:14 o/
17:01:15 #link https://wiki.openstack.org/wiki/Meetings/Ironic
17:01:21 o/
17:01:39 #topic Announcements / Reminders
17:01:52 #info dtantsur had some good time in Barcelona during his PTO :)
17:01:59 nothing else from me, really :)
17:02:17 A couple of quick announcements from me.
17:02:20 1. We've just open sourced a project based on kolla for deployment of OpenStack with a focus on baremetal for the scientific computing use case. https://github.com/stackhpc/kayobe. Please get in touch if you're interested in finding out more.
17:02:36 2. In the above project we're doing some interesting things with ironic inspector with the aim of reaching 'zero touch' commissioning of a bare metal cloud. Blog post about it here: https://www.stackhpc.com/ironic-idrac-ztp.html.
17:02:43 #link https://github.com/stackhpc/kayobe a project based on kolla for deployment of OpenStack with a focus on baremetal for the scientific computing use case
17:02:49 o/
17:03:04 o/
17:03:11 we did a few releases last week
17:03:18 #link https://www.stackhpc.com/ironic-idrac-ztp.html Zero-Touch Provisioning using Ironic Inspector and Dell iDRAC by mgoddard
17:03:23 rloo, indeed!
17:03:25 One announcement from me: I'll be at the leadership training event next week, and on vacation the following week, so I'll be somewhat unavailable.
I'll likely need someone to volunteer to run the UI and BFV meetings, or otherwise cancel them.
17:03:31 #info first round of Pike releases done last week
17:04:26 TheJulia, the UI one is 8pm for me, but I can run the BFV. it does conflict with the api-wg meeting though..
17:04:59 I can run BFV, I'm there anyway
17:05:08 joanna, thanks!
17:05:21 :)
17:05:22 #action joanna to run the next BFV subteam meeting
17:05:22 * jroll notes that he has a bunch of downstream things going on this week, and also at leadership training next week, so mostly out for two weeks but I'll be around irc if people need a thing
17:05:40 do we have someone to run the UI meeting?
17:05:59 I can just cancel the UI meeting, I think it will be fine for a week.
17:06:00 additional info on releases: everything else is done, but the ironic release is waiting on a couple of patches: https://review.openstack.org/#/c/452806/ https://review.openstack.org/#/c/452787/
17:06:04 I don't have much experience, but I can try because I am going to be there anyways
17:06:14 o/
17:06:25 crushil: it is easy, I can give you a run down this week :)
17:06:33 TheJulia, Thanks
17:06:37 thanks crushil!
17:06:41 np
17:06:44 crushil: thank you!
17:06:46 #action crushil to run the next UI subteam meeting
17:07:00 oh, I guess it's not "the next"..
17:07:18 * dtantsur wonders if he should #undo those actions, or it's clear enough..
17:07:19 Week after, I'm sure I'll be on some next week.
17:07:45 * TheJulia thinks undo and then note it again
17:07:52 #undo
17:07:53 Removing item from minutes: #action crushil to run the next UI subteam meeting
17:07:56 #undo
17:07:57 Removing item from minutes: #action joanna to run the next BFV subteam meeting
17:08:13 #action joanna to run the BFV meeting next week
17:08:15 better?
17:08:27 ++
17:08:37 #action crushil to run the UI subteam meeting next week
17:08:39 Yeah I guess
17:08:52 cool :) anything else from folks?
17:09:25 #topic Review subteam status reports
17:09:29 o/
17:09:38 #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:09:42 starting with line 87
17:11:20 dtantsur: as a side note, did you decide wrt trello?
17:11:34 dtantsur: thinking we should delete the trello links
17:11:53 rloo, nope, we can have a voting later :) /me knows folks like voting
17:12:09 * rloo likes votes, reminder of how democracy could work
17:12:22 so the physical network stuff, that spec merged last week, didn't it?
17:12:27 L196
17:13:07 JayF, TheJulia: wrt documentation reorg, it is April. L207.
17:13:08 #link http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/physical-network-awareness.html
17:13:11 rloo: it did
17:13:37 rloo: Good point, I've been waiting on a spec, but I've not seen it yet.
17:13:47 folks, please remember to update the statuses :)
17:14:34 rloo: my bad
17:14:34 * rloo updates status' with guesses :)
17:14:50 heh
17:14:54 * TheJulia liked the ORLY
17:15:03 :D
17:15:38 rpioso, hey! do you plan on creating a new-style hardware type for Drac or should one of us (one of me?) write it?
17:16:47 wrt the oslo note (L276). anyone want to volunteer to look into any impact on ironic?
17:17:14 * rloo wonders how many dtantsurs there are
17:17:35 dtantsur: Welcome back!
17:17:38 rloo, I am qualitative, not quantitative! :)
17:17:41 rpioso, thanks!
17:17:45 the oslo thing should be easy, just stare at logs for warnings :)
17:18:01 jroll: we need a starer!
17:18:13 or we can proactively switch that flag to true
17:18:14 I'd rather not take that task but can if we get no volunteers
17:18:15 Why not automate that?
17:18:16 * TheJulia ducsk
17:18:18 ducks
17:18:44 curl $logfile | grep 'the warning'
17:18:44 It actually should be easy, pull down the library, merge, manually install with pip, restart
17:18:46 jroll: would be nice if a noncore takes that on.
17:18:52 rloo: indeed
17:18:52 dtantsur: Please elaborate.
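[editorial note] The "stare at logs for warnings" check discussed above (pull down the new oslo library, install, restart, then scan logs) boils down to filtering service logs for deprecation messages. A toy sketch of that last step; the helper name and the sample warning text are illustrative, not from any ironic tooling:

```python
import re

# Hypothetical helper: after upgrading an oslo library and restarting the
# ironic services, scan the captured log text for deprecation warnings.
DEPRECATION_RE = re.compile(r'deprecat', re.IGNORECASE)

def find_deprecation_warnings(log_text):
    """Return the log lines that look like deprecation warnings."""
    return [line for line in log_text.splitlines()
            if DEPRECATION_RE.search(line)]
```

In practice this is the scripted equivalent of the `curl $logfile | grep 'the warning'` one-liner mentioned in the discussion.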
17:19:07 jroll: we could file a bug, 'low hanging fruit'?
17:19:13 +1 to bug
17:19:30 ok, /me files a bug then :)
17:19:32 TheJulia: even easier, we only use it in unit tests
17:19:33 dtantsur: We could discuss outside of this meeting.
17:19:34 * jroll may jfdi
17:19:44 rpioso, yep, let's do it. not really urgent.
17:20:02 dtantsur: Cool! ty
17:22:04 we're past the 10 minute cap for the review. are folks still reviewing?
17:22:08 seems like time to move on :)
17:22:32 #topic Deciding on priorities for the coming week
17:22:44 i'm done with subteam reports. but leading to next stuff. there are some deadlines coming up, next week: glance, manila, nova spec freeze
17:22:56 does that impact us, are there things we ought to look at soon?
17:23:11 I can check on our nova work this week
17:23:12 do we need nova specs for any priorities? BFV?
17:23:25 #action jroll to check on our nova work this week
17:23:30 last I looked we only needed a blueprint for BFV with nova
17:23:32 afaik we have approved BPs for everything we want to do, but I'll double check
17:23:36 thanks!
17:23:43 np
17:23:51 as to priorities, I applied only minimal changes to the list from last week
17:24:21 well, I'd prefer to bump redfish priority for reasons already known to the core team :-/
17:24:40 (sorry for secrecy here)
17:25:01 dtantsur: true, but I don't think we should.
17:25:28 rloo, why?
17:25:34 dtantsur: can discuss later
17:25:42 * mariojv is lost here
17:25:50 * TheJulia thinks she groks it
17:25:53 it isn't a high priority
17:26:12 i think if the community wants it, they'll review, regardless if it is on our list of priorities or not
17:26:35 well, it's going to be on my personal high priority list anyway
17:26:42 but ok
17:26:45 dtantsur: which is perfectly fine!
17:26:48 rloo: I think you and dtantsur agree, don't have it on this week's priorities
17:26:52 right?
17:26:58 * TheJulia agrees
17:27:00 * jroll not sure if bump means bump up or bump off here
17:27:16 i agree with bumping for this week
17:27:19 * TheJulia wonders if we should be using more specific words
17:27:32 it can stay on the spec priority list imo
17:27:38 yeah, my English lets me down sometimes. I meant to raise its priority.
17:27:38 spec is merged :)
17:27:42 ah
17:27:44 and we can update and have gerrit votes on that, if that has to change, im
17:27:45 *imo
17:27:46 redfish driver isn't on this week's list of priorities
17:27:56 see L75+
17:28:23 I think organically through community reviews would be best, if the community wants to see it sooner rather than later, it will get reviews
17:28:23 sorry for confusion, everyone. what I suggested (and what rloo correctly understood, I think) is adding redfish to the prio list
17:28:31 it seems like we're in agreement that we should not.
17:29:01 seems like it
17:29:11 yeah, I'm fine leaving it off
17:29:22 actually I'm fine either way :P
17:29:30 so, does the list look ok now?
17:29:32 would be nice to just get it done, but it isn't a high priority
17:29:45 yep, fine with me
17:29:53 * dtantsur blames daylight saving change for his condition >_<
17:29:59 the osc default API version change. is there an urgency with that, or we just have to do it anytime in pike?
17:30:20 i think it's fine as long as it gets in 3 months before queens
17:30:27 what mariojv just said
17:30:33 ++
17:30:34 mariojv: ok, remind us if we get close and it ain't
17:30:38 will do
17:30:44 if it doesn't, we just need to drop it later in queens, is that a problem? :)
17:31:01 I don't see it as a problem personally, just revise the spec
17:31:01 i still need to get ironic CLI patch up and have it log when using client library
17:31:03 jroll: i'd like it to be done, sooner rather than later
17:31:08 but spec and openstack CLI patches are up
17:31:13 +1 on just getting it done.
17:31:30 jroll: not a problem, but no reason to not do it soon :)
17:31:31 i think i'm good with priorities for this week. will maybe ask about network ones next week :)
17:31:39 yep
17:31:40 +1 priorities look fine to me
17:31:41 they LGTM
17:31:45 same
17:31:50 +1
17:31:53 awesome
17:32:18 #topic RFE review
17:32:26 we have some this time!
17:32:32 \o/
17:32:33 * jroll gets ready to yell
17:32:38 this time I added a couple
17:32:40 #link https://bugs.launchpad.net/ironic/+bug/1669243 - Support for zmq in ironic
17:32:40 Launchpad bug 1669243 in Ironic "[RFE] Ironic doesn't support zmq with oslo.messaging" [Wishlist,In progress] - Assigned to Galyna Zholtkevych (gzholtkevych)
17:33:00 first one seems to be pretty easy, do we want to support zeromq?
17:33:17 this one shouldn't be an RFE, IMO - we claim to support any oslo.messaging-supported backend, feels like a bug if zmg is broken
17:33:24 s/zmg/zmq/
17:33:33 +1 to what jroll said
17:33:39 yup, I'm fine with that too
17:33:43 +1
17:33:45 if there's not a huge technical reason not to support it, we should, imo
17:33:56 +1, seems like a bug
17:34:06 any objections to treating it as a bug?
17:34:14 none
17:34:25 #agreed treat broken zmq support as a bug, not RFE
17:34:26 but pleeeeeze, if it needs an rpc bump, can it wait til after rolling upgrades?
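[editorial note] For context on the zmq RFE above: selecting a different oslo.messaging backend is a deployment-configuration change rather than an ironic code change, which is part of why the team treats a broken zmq driver as a bug. A hedged sketch of what that configuration might look like; exact option names vary by oslo.messaging release and the hostnames are placeholders:

```ini
# ironic.conf -- illustrative sketch of pointing ironic at a ZeroMQ
# transport. The zmq:// scheme selects oslo.messaging's zmq driver.
[DEFAULT]
transport_url = zmq://controller:9501/

[oslo_messaging_zmq]
# The zmq driver needs a matchmaker service (e.g. Redis) for name
# resolution; option names here are from memory of that era's docs.
rpc_zmq_matchmaker = redis
```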
:) me adds comment
17:34:34 ++
17:34:36 hah, ++
17:35:06 next
17:35:26 #link https://bugs.launchpad.net/ironic-inspector/+bug/1678134 - plugin for setting local link connection switch info from LLDP system name
17:35:26 Launchpad bug 1678134 in Ironic Inspector "[RFE] plugin for setting local link connection switch info from LLDP system name" [Wishlist,In progress] - Assigned to Mark Goddard (mgoddard)
17:35:33 the second one is a bit harder, it seems like there is a confusion among the ml2 drivers maintainers
17:35:57 it seems like cisco and oneview ml2 drivers implemented the switch_info as dictionaries
17:36:27 I thought switch_info was intended to be a dict
17:36:43 and in case of oneview, they want to make switch_info required i suppose, at least from this change https://review.openstack.org/#/c/377106/9
17:36:46 jroll: according to the spec it was designed as a string
17:36:55 the ironic-neutron ml2 spec didn't specify what format it should take, but gave the suggestion that it could be switch system anem, which is a string
17:37:09 s/anem/name
17:37:19 mgoddard: it is specified in the spec both port_id switch_id and switch_info are string fields
17:37:27 oh right, it does say it's a string
17:37:33 * jroll scratches his head
17:37:41 well, it explicitly stated strings :) but, we have to deal with it somehow at this point
17:37:53 right you are vsaienko
17:38:01 can't we fix the plugins to (also?) allow strings there?
17:38:15 * jroll bets it's a string that looks like a dict
17:38:22 https://review.openstack.org/#/c/188528/7/specs/liberty/ironic-ml2-integration.rst@94
17:38:50 regardless, the switch_info field is meant to be optional, and vendor-specific. so a plugin layer for inspector seems sane enough there
17:39:20 I'm also fine with putting a name there, and allowing plugins to overwrite, but dunno..
17:39:41 kinda sucks that we have to do it, but it's explicitly vendor specific, so what can you do
17:39:52 thats right, jroll...
we've been using switch_info as a string that looks like a dict
17:40:01 sambetts made the point that the plugin may need to be aware of the switch that the node is connected to in order to determine the correct thing to put there
17:40:14 I'm pretty sure I don't like the base class approach though
17:40:16 jroll in fact, more like a json object
17:40:44 ricardoas: which I don't quite like, but alas
17:40:47 we need separate plugins layered on top of the generic one, not inheriting from it
17:40:56 it is better to use a new key switch_capabilities to define a switch capabilities as LLDP field is capabilities not info
17:41:21 the confusion around this makes it sound spec-able
17:41:30 lots of ideas here
17:41:36 should be a quick spec to write
17:41:36 dtantsur: it is layered in my proposal, as the generic plugin now inherits from the base
17:41:37 spec-able, and also it feels like all of the information needs to be presented
17:41:42 TheJulia: ++
17:41:49 because it seems like there are several vectors that can and should be addressed
17:42:04 mgoddard, well, I'm probably using the wrong word, but I'd like inheriting to disappear from the picture
17:42:17 including possibly changing the ml2 drivers to be more consistent or possibly small architectural changes to improve the overall experience.
17:42:32 mgoddard, in favor of just having another small plugin running after the generic one and adding the name. dunno how easily doable it is
17:42:58 mgoddard, I suspect we need to plug bfournie's LLDP work in
17:43:12 also ++ for a spec
17:43:49 objections?
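[editorial note] The compatibility problem debated above (the spec defines `switch_info` as a free-form string, while some ml2 drivers store "a string that looks like a dict", i.e. a JSON object) could be handled by tolerant parsing on the consumer side. A hypothetical sketch, not part of ironic or inspector; the `switch_name` key is an assumption for illustration:

```python
import json

def parse_switch_info(switch_info):
    """Return (name, extras) from a local_link_connection switch_info value.

    switch_info is a free-form string per the ironic-neutron ml2 spec, but
    some ml2 drivers (e.g. Cisco, OneView) were reported to store a JSON
    object in it. This hypothetical helper accepts both forms.
    """
    try:
        value = json.loads(switch_info)
    except (TypeError, ValueError):
        # Not JSON: treat the whole string as the switch system name
        return switch_info, {}
    if isinstance(value, dict):
        # JSON object: pull out a human-readable name, keep the rest
        name = value.pop('switch_name', None)
        return name, value
    # JSON scalar (e.g. a quoted string or number): keep the raw string
    return switch_info, {}
```

This is only one way to paper over the inconsistency; the #agreed outcome above is that the real resolution belongs in a spec.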
17:43:58 dtantsur: it's doable, would just require a refactor
17:43:59 nope, makes sense to me
17:44:05 dtantsur: +1
17:44:09 seems reasonable
17:44:29 #agreed https://bugs.launchpad.net/ironic-inspector/+bug/1678134 will need a spec to clarify all the details
17:44:29 Launchpad bug 1678134 in Ironic Inspector "[RFE] plugin for setting local link connection switch info from LLDP system name" [Wishlist,In progress] - Assigned to Mark Goddard (mgoddard)
17:44:36 anything else here before we move on?
17:45:01 #topic What about having the first mid(not really mid)cycle soon?
17:45:14 +1
17:45:15 I remember we talked about having more virtual meet-ups
17:45:26 why do we want it?
17:45:42 rloo, high throughput, especially around RFE/spec reviews and agreeing on contentious points
17:45:50 I think it would be really good to get on the same page, even if it is just for a couple hours talking on a call prior to the summit.
17:46:11 I'd almost prefer having a discrete list of things to work through, but syncing up is always helpful too
17:46:25 I agree with jroll
17:46:27 I remember we had 8 4-hour slots on the last virtual midcycle, right? we can have only 2 slots now, one in each time
17:46:42 s/8/6/ maybe, it was 3 days
17:46:42 oh. i don't mind a meeting to sync up. guess i'm not sure i want it to be called a midcycle meetup.
17:46:45 I think it was 6 slots, but yeah
17:46:51 Do we plan to meet at the Summit Forum?
17:46:52 It was 6
17:46:55 i'd be fine with that ^ i think later in the cycle might be better for a real midcycle
17:47:02 rloo, I used the word "midcycle" to give folks a quick idea what I mean
17:47:02 do we have a curated list of items to discuss dtantsur ?
17:47:03 by "that" i mean the 2 slots
17:47:05 i mean, several short syncs during a cycle are great. we should have them when we 'need' them.
17:47:27 rpioso: afaik we only have 2-3 devs going, so while they will probably meet up it won't be a team meetup
17:47:30 TheJulia, not now, but I do have a few things in mind. e.g. ongoing nova work and what to do about capabilities..
17:47:33 eg, if we have say 2 features that are ready, then lets just meet to *do* them.
17:48:00 rloo, e.g. rolling upgrades, I guess, could use more eyes and discussion
17:48:05 this is really just a guess though
17:48:09 jroll: ty
17:48:54 anyway, what I wanted today is to suggest it and let you think if you have something for a high-throughput discussion
17:48:56 dtantsur: It might also be prudent to spend a little time discussing stand-alone usage related items, but that is just a thought
17:49:13 dtantsur: so TheJulia and jroll are away next week, and TheJulia is away the week after. Shall we tentatively schedule something the week TheJulia is back?
17:49:14 TheJulia, yep, especially if you'll have a list of rough edges by then
17:49:28 rloo, ++
17:49:36 dtantsur: already mostly typed out :)
17:49:40 dtantsur, will virtual meetup logistics be published so non-core can participate?
17:49:47 wanyen, definitely!
17:49:48 wanyen: sure
17:49:51 and then the/a big problem is time...
17:49:54 thanks
17:49:54 this is one of the big goals
17:50:14 wanyen, it will probably be something like SIP, but other FOSS-friendly options will be considered too
17:50:34 TheJulia, which dates are you free? I'm confused with "next" here..
17:51:07 dtantsur: Starting back the week of the 24th
17:51:16 ack
17:51:32 that's the week just before summit
17:51:45 which is good, I guess
17:51:48 oh no. there is another week after that.
17:52:03 that would be a good week then.
17:52:07 #agreed Let's think if we find value in a virtual meetup (similar to previous virtual midcycle) e.g. on the week of Apr 24th
17:52:09 right?
17:52:20 Yup
17:52:30 yup. i guess we should have an etherpad for people to suggest things.
17:52:42 and possible days/times.
17:52:55 #action dtantsur to announce this idea on the mailing list and create an etherpad for potential topics
17:52:58 i hope it is just one day, one block of time. but i guess it depends on what we talk about.
17:53:23 rloo, probably 2 blocks on one day because of timezones...
17:53:40 but ok, let's think about it for some more time
17:53:53 dtantsur: i really would like to see more focused things, to get concrete stuff done, but yeah, let's see.
17:53:59 rloo++
17:54:01 any more comments before I open the floor?
17:54:15 #topic Open discussion
17:54:19 i have a small thing about rescue mode i'd like to bring up, proxying for JayF
17:54:23 basically, in nova, they allow a user to specify which image will be used when rescuing an instance - so the image will specify info about which user is created for SSH access when rescue finishes
17:54:29 with ironic, we're not letting the user specify the rescue image, since that doesn't really make sense in our case
17:54:33 so this brings up the question of what to call our rescue user
17:54:37 current behavior is to create a user called "rescue" that has passwordless sudo: https://review.openstack.org/#/c/423521/15/imagebuild/coreos/oem/finalize_rescue.sh (L6)
17:54:42 i prefer this over having the user just login as root immediately with the password given from nova
17:54:47 but it's still a decision we didn't really spell out explicitly in the spec, so i wanted to get community opinion on this
17:55:20 so, these are two questions, right? whether to allow user images and what user to use?
17:55:32 maybe make the user configurable?
17:55:44 i mainly wanted to focus on which user, for this discussion
17:55:48 tbh, I'm fine with having root, if 'sudo -i' is going to be the first thing for people to do..
17:55:49 mariojv: dumb question cuz i am not familiar with rescue. shouldn't it work with nova's rescue?
17:56:06 The person maintaining the volume connection API patch would like to get some reviews in order to head off nitpicking when it finally comes time to land the patch. https://review.openstack.org/#/c/214586/ It has a few pep8 errors which I'll try to fix today.
17:56:19 rloo: yes - but in the nova case, they have user images when rescuing, so that's where they specify it
17:56:34 i think i'd be fine with having it root by default, in that case, dtantsur
17:56:37 in other words: do we have a use case for non-root access to the rescue image?
17:56:59 #link https://review.openstack.org/#/c/214586/ - volume connection API patch that could use early reviews
17:57:05 and maybe just have some docs for how to change it when building IPA if they want to (make it configurable in that way like jroll suggested)
17:57:17 I'm okay with just root
17:57:29 dtantsur: i guess i was just a bit worried about giving a user something that they can do damage with, by default
17:57:31 it doesn't really increase security to use a non-root user, if they have sudo
17:57:34 but maybe that's ok, since it's a ramdisk
17:57:49 3 minutes
17:57:49 yeah, not security, more prevent you from accidentally doing bad things
17:57:54 mariojv, without root they won't even be able to mount disks (nor access them)
17:58:07 ah that's a good point
17:58:16 ok, let's keep it as root then
17:58:20 so why doesn't it make sense for the user to pick the rescue image?
17:58:31 that seems to be the default in a lot of nova docs for rescue mode anyway
17:58:34 2 minute warning
17:58:39 wanyen: that's a longer discussion than we have time for here
17:58:54 tl;dr because pxe boot
17:59:06 oh that Pixie :D
17:59:07 wanyen: but basically it opens up a huge amount of security risk, allowing someone to boot an arbitrary ramdisk
17:59:08 I'm not sure if anyone has seen this spec from yuriyz -- https://review.openstack.org/452182, while it was intended for 1st April, it seems like this will work for ipa authentication, if someone has a bit of time to read through
17:59:19 and it'd be a big pain implementation wise
18:00:01 we're running out of time. thanks everyone!
18:00:04 thanks!
18:00:07 thanks
18:00:07 Thanks!
18:00:12 :)
18:00:14 #endmeeting ironic