17:00:44 <devananda> #startmeeting ironic
17:00:44 <openstack> Meeting started Mon Jan 12 17:00:44 2015 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:47 <openstack> The meeting name has been set to 'ironic'
17:00:50 <jroll> \o
17:00:52 <devananda> #chair NobodyCam
17:00:53 <openstack> Current chairs: NobodyCam devananda
17:01:06 <devananda> g'morning / afternoon / evening, all
17:01:20 <NobodyCam> morning here :)
17:01:21 <JoshNang> o/
17:01:24 <rameshg87> o/
17:01:31 <Nisha> o/
17:01:33 <devananda> as usual, the agenda can be found here -
17:01:35 <devananda> #link https://wiki.openstack.org/wiki/Meetings/Ironic
17:01:36 <clif_h> o/
17:01:39 <rloo> hi
17:01:42 <Shrews> morning
17:01:43 <NobodyCam> oh, rloo has a v6 IP, neat
17:01:59 <dtantsur> :)
17:02:11 <rloo> I do? (oh yeah, I'm hip...)
17:02:29 <devananda> I left several of the items from last week's meeting on there - it was our first post-holiday meeting, and I'd like to at least touch on everything again since many of the usual folks weren't there
17:02:56 <devananda> #topic announcements
17:03:30 <devananda> probably the only announcement from me is the midcycle(s)
17:03:56 <NobodyCam> devananda: is there a list of who is going to which meetup?
17:04:21 * NobodyCam assumes both are official now?
17:04:33 <devananda> NobodyCam: not yet - but there are meetup sites that folks should be signing up at
17:04:45 <devananda> (apologies for being a bit slow - my links are missing ...)
17:04:52 <Shrews> is the SF one actually going to happen then?
17:05:03 <devananda> yep
17:05:10 <jroll> yes!
17:05:17 <devananda> Rackspace is hosting another sprint in the SF bay area the week after the Grenoble sprint
17:05:25 <jroll> all y'all should come visit us
17:05:28 <devananda> since there are a lot of folks who apparently can't make it
17:05:41 <NobodyCam> Shrews: the j's are holding it whether it's official or not, as I understand it
17:05:45 <NobodyCam> lol
17:06:14 <Shrews> awesome
17:06:20 <devananda> jroll: didn't y'all post something on the ML?
17:06:32 <jroll> devananda: I think just the wiki
17:06:35 <jroll> I could email
17:06:41 <devananda> oh - right
17:06:48 <devananda> that's why I can't find the link
17:06:51 <jroll> I don't have an eventbrite or anything, unclear if I should
17:06:56 <NobodyCam> jroll: ++ That would be good I think
17:07:11 <devananda> #link https://wiki.openstack.org/wiki/Sprints/IronicKiloSprint
17:07:12 <devananda> details are there
17:07:27 <devananda> grenoble meetup: https://www.eventbrite.com/e/openstack-ironic-kilo-midcycle-sprint-in-grenoble-tickets-15082886319
17:07:36 <devananda> SF meetup: https://www.eventbrite.com/e/openstack-ironic-kilo-midcycle-sprint-in-san-francisco-tickets-15184923515
17:07:39 <jroll> oh, there is an eventbrite!
17:07:40 <jroll> awesome
17:07:43 <JayF> I can try and add more information to that wiki if desired, or if anyone has questions about the area feel free to ask in channel and we'll help.
17:07:46 <devananda> phew :) I knew I'd seen it
17:07:56 <Shrews> jroll: suggested accommodations would be great
17:08:08 <NobodyCam> Shrews: ++
17:08:11 <jroll> JayF: ^
17:08:14 <devananda> I'll be going to both, but that's not expected of anyone else
17:08:30 <JayF> Shrews: I'll make a note to add those after the meeting. We have a couple of suggested hotels but anything near transit or in SoMA/Financial District will be close :)
17:09:21 <devananda> #info both Grenoble and SF Bay area meetups are on. Details on the above link.
17:09:27 <devananda> that's all for my announcements
17:09:48 <devananda> #topic subteam status reports
17:09:49 <NobodyCam> nothing from /me this time
17:10:31 <devananda> dtantsur: thanks for updating the bug stats
17:10:33 <NobodyCam> looks like a couple of the updates didn't make it to the whiteboard this time
17:10:45 <devananda> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:10:51 <dtantsur> np, sorry for being inactive with bugs for some time
17:11:09 <dtantsur> also, there are some bugs that require deeper thought; I left them as NEW for now
17:11:16 <NobodyCam> dtantsur: you are allowed to take some PTO .... now and then
17:11:28 <dtantsur> :)
17:11:33 <jroll> NobodyCam: I cleared the whiteboard this time... some sections were feeling stale
17:11:45 <devananda> wanyen: congrats on getting the nova spec landed. is there code up for that on both nova and ironic sides?
17:11:46 <NobodyCam> jroll: ++
17:11:51 <devananda> jroll: ++
17:12:19 <wanyen> deva, yes. code changes to the nova ironic virt driver are up for review.
17:12:30 <NobodyCam> wanyen: link?
17:12:41 <wanyen> let me find the link
17:13:21 <Nisha> NobodyCam,  https://review.openstack.org/#/c/141012/
17:13:36 <NobodyCam> #link https://review.openstack.org/#/c/141012
17:13:36 <devananda> jroll, jayf: I see the IPA's API breaking change mentioned here. has that landed now?
17:13:39 <NobodyCam> Ty Nisha
17:13:45 <Nisha> NobodyCam, https://review.openstack.org/141010
17:13:49 <jroll> devananda: yes, it's landed
17:14:02 <NobodyCam> #link https://review.openstack.org/141010
17:14:16 <JayF> devananda: yes, after some appreciated input and review from a few interested parties but the design stayed mostly the same
17:14:19 <JayF> thanks to those who reviewed that
17:15:00 <devananda> #info non-backwards-compatible change in ironic-python-agent API landed, as per discussion from prior week's meeting. thanks to all who provided feedback.
17:15:06 <wanyen> link for nova code change for passing flavor to ironic https://review.openstack.org/#/c/141012/
17:15:41 <devananda> #info need more eyes on bugs - the backlog is growing, and many bug reports are getting stale // need to be re-triaged
17:15:56 <dtantsur> ^^^ +100
17:16:26 <devananda> last meeting, there was also general agreement that we should have a bug day
17:16:33 <devananda> dtantsur: would you like to organize that?
17:16:53 <NobodyCam> dtantsur: do you have a list of the bugs that require deeper thought?
17:17:00 <dtantsur> no problem. e.g. Tue and Wed work for me
17:17:10 <dtantsur> NobodyCam, http://ironic-bugs.divius.net/ shows them as NEW
17:17:24 <dtantsur> (maybe not all of them, but most)
17:17:30 <NobodyCam> dtantsur:  Ack :)
17:18:00 <devananda> dtantsur: ok, thanks much :)
17:18:04 <dtantsur> WDYT about a bug day on Wed at some common time?
17:18:23 <NobodyCam> dtantsur: ++ works for me
17:18:34 <devananda> I'll be offline most of wednesday. any other day this week wfm
17:19:03 <dtantsur> then only Tue works for me. what about tomorrow?
17:19:17 <NobodyCam> I could also make that
17:19:20 <dtantsur> e.g. the same time as this meeting?
17:19:27 <dtantsur> is it ok for most folks?
17:19:36 <jroll> I could make that
17:19:40 <devananda> wfm
17:20:00 <rloo> wfm
17:20:07 <NobodyCam> :) sweet
17:20:17 <Shrews> ++
17:20:23 <dtantsur> #info bug day Tue, 13 Jan 17:00 UTC
17:20:28 <dtantsur> correct?
17:20:33 * dtantsur has problems with time zones
17:20:37 <rloo> it isn't the whole day, right?
17:20:39 <devananda> think so
17:20:51 <devananda> rloo: hopefully it wont take all day :)
17:20:59 <NobodyCam> dtantsur: looks correct
17:21:01 <dtantsur> due to time zones, it can't be the whole day
17:21:15 <devananda> let's move on - we're over 10 minutes for this section
17:21:24 <dtantsur> ++
17:21:34 <devananda> #topic current state machine
17:21:52 <devananda> NobodyCam: I left this on the agenda since there are still open reviews - anything you'd like to discuss?
17:22:26 <NobodyCam> nope, we are getting eyes on them
17:22:41 <devananda> great
17:22:51 <NobodyCam> ofc the more the merrier
17:22:52 <devananda> #link https://review.openstack.org/#/q/status:open+project:openstack/ironic+topic:bp/new-ironic-state-machine,n,z
17:23:13 <devananda> anyone have anything to discuss on this right now?
17:23:26 <devananda> I'll give it another minute, then move on otherwise
17:24:02 <NobodyCam> :) we landed two last week. would love to land the rest this week
17:24:46 <devananda> ok, moving on
17:24:55 <devananda> #topic IPA Hardware Manager
17:25:10 <devananda> oops, skipped one
17:25:12 <devananda> #undo
17:25:13 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x3b54f10>
17:25:16 <jroll> this was discussed last week, it's been done
17:25:18 <jroll> oh
17:25:19 <devananda> #topic stable branch maintenance
17:25:33 <devananda> this was discussed last week
17:25:57 <devananda> folks present agreed that we aren't actively supporting Icehouse at this point, but that we should be
17:26:07 <devananda> publishing our team's stable branch policy officially
17:26:16 <devananda> and I have that on my TODO list (but not done yet)
17:26:41 <devananda> just sharing the info - nothing to discuss here on my end
17:26:42 <NobodyCam> devananda: we basically came up with support for two cycles
17:26:43 <rloo> we *should* be supporting icehouse? ugh.
17:26:50 <devananda> er
17:26:57 <devananda> I mean we should be publishing our policy
17:27:05 <devananda> rloo: not that we should be supporting icehouse
17:27:22 <rloo> devananda: ok, cuz i think it would be hard to support icehouse
17:27:35 <devananda> rloo: indeed. and I dont think we should be.
17:28:20 <devananda> #info devananda to draft a formal policy of support for two cycles, starting with Juno, and publish to wiki
17:28:27 <devananda> #topic IPA's hardware manager
17:29:01 <JayF> This must be an agenda item that never got removed last week.
17:29:15 <JayF> There shouldn't be a need for further discussion.
17:29:34 <devananda> ack
17:29:43 <devananda> skipping it then
17:29:52 <devananda> #topic driver-specific periodic tasks
17:29:53 <devananda> dtantsur: hi!
17:29:59 <dtantsur> :)
17:30:20 <dtantsur> just wanted to ask if folks are ok with landing periodic tasks temporarily using greenthread.spawn_n
17:30:40 <dtantsur> and then once we get oslo.service work done for parallel periodic tasks, switch to it
17:30:48 <dtantsur> (that may be pretty late in K cycle)
17:30:51 <dtantsur> wdyt?
17:31:11 <devananda> dtantsur: I agree with jroll's comment -- as long as that doesn't require changes in the drivers themselves, it should be OK
17:31:30 <NobodyCam> devananda: jroll: ++
17:31:54 <dtantsur> devananda, you mean it does not break existing drivers? well, it does not
17:31:58 <devananda> oh - not just "no change in driver API" -- I would prefer to see no change in drivers code, actually
17:32:05 <jroll> right
17:32:20 <jroll> so there should be some base function to register a periodic task for a driver
17:32:23 <jroll> drivers use that
17:32:37 <jroll> the implementation can be changed without drivers knowing
17:32:41 <devananda> if we add an interface for drivers to register their periodic task, and then later we change how ConductorManager is creating or tracking those tasks
17:32:43 <jroll> is how I see this working
17:32:45 <jroll> yeah
17:32:48 <devananda> that shouldn't require a driver author to change their code a second time
17:32:58 <devananda> jroll: ++
17:33:21 <dtantsur> not even adding 'parallel=True' to e.g. the decorator signature? hmm...
17:34:11 <jroll> dtantsur: that's optional, though, that should be fine
17:34:33 <jroll> right?
17:34:34 <NobodyCam> yea I think a change of that nature would be ok
17:34:42 <NobodyCam> thats just a bit flip
17:34:50 <dtantsur> the issue is that we'll probably default to non-parallel periodic tasks, so the worst thing to happen is that they become non-parallel after this switch.
17:35:04 <dtantsur> and will require adding 'parallel=True' (or something) to signature
17:35:12 <dtantsur> is it an issue?
17:35:34 <jroll> why default to non-parallel?
17:35:36 <NobodyCam> dtantsur: you'll first add it with parallel=False, then just flip it?
17:36:06 <dtantsur> jroll, it's how it is done right now..
17:36:17 <devananda> to preserve current behavior?
17:36:26 <jroll> ok
17:36:28 <jroll> just asking :)
17:36:29 <devananda> I think that's fine
17:36:36 <dtantsur> well, ok, we can start with having a no-op parallel=True in these tasks, and then it'll become a proper implementation.
17:36:39 <jroll> I agree
17:36:42 <dtantsur> I think I understand now
17:36:48 <devananda> if the first pass of this functionality doesn't support parallel tasks
17:37:09 <devananda> then we shouldn't be publishing an interface which makes it appear that it does
17:37:23 <devananda> either don't expose a "parallel" parameter, or error if =True is passed
17:37:26 <devananda> for now
17:37:43 <devananda> I'm fine with *adding* features to it when we switch to oslo
17:37:52 <devananda> but don't *require* a driver author to change their code at that point
17:37:53 <lucasagomes> sorry just joined... +1 for preserving current behavior. How is the work in oslo going? was it accepted to have parallel periodic tasks there?
17:38:16 <dtantsur> I was putting it wrong. we can for now simulate parallel=True with spawn_n, that's what I wanted to say
17:38:22 <dtantsur> so drivers won't be changed
17:38:26 <dtantsur> sorry for the confusion
17:38:26 <devananda> ah, ok
17:38:31 <NobodyCam> oh
17:38:44 <dtantsur> lucasagomes, it's blocked by oslo.service graduation
17:38:52 <dtantsur> and it's not moving fast
17:38:59 <jroll> dtantsur: can we simulate parallel=False with spawn_n?
17:39:02 <lucasagomes> hmm I see
17:39:11 <dtantsur> jroll, yes, by not using it :)
17:39:27 <dtantsur> just call the function directly - and you get parallel = False
17:39:34 <jroll> dtantsur: oh, right
17:39:39 <jroll> ok, so why not support both?
17:39:49 <jroll> and when we switch to oslo that will also support both?
17:39:50 <dtantsur> now I realize that we should, yes
17:39:52 <jroll> ok
17:39:53 <jroll> cool
17:39:55 <jroll> +1
17:39:59 <dtantsur> I was somewhat confused myself :)
17:40:04 <dtantsur> thanks all for the feedback!
17:40:09 <NobodyCam> :)
17:40:44 <devananda> dtantsur: sounds good to me
17:40:53 <devananda> anything else on this topic?
17:41:04 <dtantsur> nothing from me
17:41:05 <lucasagomes> +1 supporting both seems good
17:41:14 * lucasagomes catches up with the agenda
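
(A minimal sketch of the kind of decorator discussed above, assuming an eventlet-based conductor; the name driver_periodic_task and the marker attributes it sets are illustrative assumptions, not the interface that eventually landed.)

    # Illustrative only: the decorator name and marker attributes are assumptions.
    import functools

    from eventlet import greenthread


    def driver_periodic_task(parallel=True, **kwargs):
        """Mark a driver method as a periodic task.

        With parallel=True the body is launched via greenthread.spawn_n, so a
        slow task does not block the conductor's periodic loop; with
        parallel=False it runs inline, matching today's behavior.
        """
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kw):
                if parallel:
                    greenthread.spawn_n(func, *args, **kw)
                else:
                    func(*args, **kw)
            # markers the conductor could use to discover and schedule the task
            wrapper._is_driver_periodic_task = True
            wrapper._periodic_task_args = kwargs
            return wrapper
        return decorator


    class ExampleVendorInterface(object):
        @driver_periodic_task(parallel=True, spacing=60)
        def _poll_firmware_update(self, manager, context):
            # driver-specific work, run in its own green thread every 60 s
            pass

Switching the implementation to oslo.service later would then only change the decorator body, not any driver code.
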
17:41:23 <devananda> #topic open discussion
17:41:32 <devananda> wow - almost 20 minutes left for open discussion today! ;)
17:41:34 <NobodyCam> rameshg87: go
17:41:37 <rameshg87> NobodyCam, yes
17:41:43 <NobodyCam> :)
17:41:52 <jroll> this has been an exceptionally productive meeting :D
17:42:06 <rameshg87> all, i had mentioned a spec to share boot images for ilo drivers https://review.openstack.org/#/c/137291/
17:42:16 <devananda> NobodyCam: to your question on the grenoble meetup, I have not had as many signups on eventbrite as I would hope for
17:42:30 <devananda> NobodyCam: but there are several folks who have msg'd me to confirm but not yet signed up, too ...
17:42:33 <NobodyCam> I started a conversation in channel just before the meeting about the ref_counter on https://review.openstack.org/#/c/137291/8/specs/kilo/ilo-drivers-share-boot-images.rst
17:42:46 <rameshg87> instances using same images will have the same kernel/ramdisk and hence can share the same boot iso
17:42:59 <NobodyCam> devananda: ack :)
17:43:25 <rameshg87> to track how many instances are currently using the image, i am using a ref counter in Swift, and its modification in Ironic is protected by a lock
17:43:52 <NobodyCam> my concern was that ref_counter could get out of sync
17:44:32 <NobodyCam> and with the iso name being a hashed value, an operator would never know the image was no longer needed
17:44:37 <jroll> so my biggest issue here is this. say changing the ref counter fails or gets out of sync. 1) how does an operator detect this? 2) how does an operator fix this? an operator can't get the same lock the conductor uses, except maybe twiddling the db.
17:44:59 <jroll> 3) what happens if the ref counter goes to 0, ironic cleans up, but something is still using it?
17:45:27 <NobodyCam> oh I hadn't even gotten to think of #3
17:45:38 <jroll> yeah
17:45:43 <jroll> and it could happen pretty easily
17:46:06 <devananda> rameshg87: I'd like to understand the use-case here. it doesn't seem like the added complexity is worth it for saving a small amount of space in Swift
17:46:12 <lucasagomes> devananda mentioned enabling deduplication in swift, if space is the concern here
17:46:23 <jroll> decrement succeeds, http response is dropped, ironic thinks decrement fails, either fails the tear down or decrements again
17:46:26 <lucasagomes> I think it's worth investigating: does generating the ISO with the same content result in the same output?
17:46:40 <lucasagomes> if so, deduplication would be the right way to do it IMO too
17:46:43 <rameshg87> devananda, yeah space was the concern here .. if 100 instances were using the same image, you ideally need to store only one
17:47:09 <devananda> rameshg87: how large is the image? 50 MB ?
17:47:26 <rameshg87> lucasagomes, yeah we can safely assume
17:47:34 <rameshg87> sorry i meant to devananda :)
17:48:01 <lucasagomes> np :)
17:48:19 <rameshg87> lucasagomes, i am not sure if generating the same iso image twice will result in exactly the same copy, though it should :)
17:48:27 <devananda> saving a few GB of space in a petabyte-sized swift cluster doesn't seem like a reason to do anything
17:48:49 <NobodyCam> so 50 MB * 100 instances is not that much space in today's world of multi-TB drives
17:49:20 <devananda> rameshg87: disk storage is cheap. why the concern over saving a few GB ?
17:49:21 * jroll wants to share devananda's petabyte cluster with him
17:49:24 <lucasagomes> rameshg87, right, we could investigate that... And if that's the case we could include in our documentation about how to enable deduplication in swift (or a link to one)
17:50:01 <rameshg87> devananda, lucasagomes, yeah, maybe i could check on deduplication
17:50:02 <NobodyCam> lucasagomes: ++ for letting swift handle that
17:50:50 <rameshg87> devananda, probably i will check and get back then ..
17:51:25 <NobodyCam> rameshg87: awesome Thank you
17:51:31 <lucasagomes> sounds good, thank you rameshg87
17:51:43 <rameshg87> thanks everyone :)
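
(A minimal sketch of the deduplication idea above, assuming python-swiftclient and that identical kernel/ramdisk/image inputs yield a byte-identical ISO; the container argument and the hash-based object-naming scheme are illustrative assumptions, not the spec's design.)

    # Illustrative only: the object-naming scheme is an assumption.
    import hashlib

    from swiftclient import exceptions as swift_exc


    def ensure_boot_iso(conn, container, iso_path, kernel_id, ramdisk_id, image_id):
        """Upload a boot ISO only if an identical one is not already stored.

        conn is a swiftclient.client.Connection. Instances built from the same
        kernel/ramdisk/image map to the same object name, so the ISO is stored
        once rather than once per instance, with no ref counter to keep in sync.
        """
        key = '/'.join((kernel_id, ramdisk_id, image_id)).encode('utf-8')
        object_name = 'boot-iso-%s' % hashlib.sha1(key).hexdigest()
        try:
            conn.head_object(container, object_name)
            return object_name  # already present, nothing to upload
        except swift_exc.ClientException as e:
            if e.http_status != 404:
                raise
        with open(iso_path, 'rb') as f:
            conn.put_object(container, object_name, f)
        return object_name
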
17:52:27 <NobodyCam> any other OD topics?
17:52:31 <dtantsur> lucasagomes, wanna discuss MANAGE[D]?
17:52:39 <lucasagomes> oh, yeah that would be good
17:52:42 <devananda> ah yea
17:52:45 <lucasagomes> if nobody else has any other topic
17:52:48 * lucasagomes gets the link
17:53:02 <lucasagomes> #link https://review.openstack.org/#/c/146452/
17:53:18 <lucasagomes> so it's about making the state names in the state machine more consistent
17:53:49 <devananda> +1
17:53:59 <lucasagomes> in the spec we say that we use state names ending with -ED and -ING for active states (states that transition from one state to another without an api call)
17:54:01 <devananda> though there's no MANAGING or MANAGED state, right?
17:54:09 <lucasagomes> the only exception is the MANAGED state, which is a passive state
17:54:21 <lucasagomes> devananda, no
17:54:29 <lucasagomes> cause it goes from INSPECTING to MANAGED
17:54:35 <dtantsur> I suggest MANAGEABLE instead :)
17:54:36 <lucasagomes> or ENROLL -> MANAGED
17:55:38 <rloo> so to avoid confusion, we don't want passive states to end in 'ED', right?
17:55:38 <devananda> oh, on a related topic, I'd like to s/NOSTATE/AVAILABLE/ this week
17:55:46 <lucasagomes> so, what do you guys think? Should we rename it to MANAGE or MANAGEABLE or <something else>...
17:55:57 <lucasagomes> rloo, yeah
17:56:21 <NobodyCam> I like manage
17:56:26 <rameshg87> +1 for MANAGEABLE :)
17:56:26 <lucasagomes> devananda, +1 for the AVAILABLE, NOSTATE is complicated...
17:56:52 <dtantsur> NobodyCam, re MANAGE, I feel a bit strange using verbs for a static state
17:56:54 <lucasagomes> especially because its value is None in states.py :)
17:56:56 <NobodyCam> manageable seems out of place with the other states to me
17:56:56 <devananda> lucasagomes: yea. without breaking the API. I may want some help from you
17:57:06 <rloo> devananda: wrt NOSTATE->AVAILABLE, does that break backwards compatibility?
17:57:06 <wanyen> deva, +1 for available
17:57:09 <devananda> lucasagomes: the change needs to work with current Nova driver
17:57:12 <dtantsur> NobodyCam, AVAILABLE looks similar to me
17:57:17 <lucasagomes> devananda, +1 cool yeah no problem we can work on that
17:57:25 <devananda> rloo: it needs to NOT break compat. that's the challenge
17:57:31 <NobodyCam> dtantsur: ah good point
17:57:50 <rloo> devananda: good if it doesn't break compat!
17:57:53 <devananda> for MANAGE[D], what about UNAVAILABLE?
17:58:29 <dtantsur> devananda, sounds like an error...
17:58:32 <NobodyCam> ?
17:58:34 <devananda> right. never mind
17:58:45 <NobodyCam> two minutes
17:58:46 <lucasagomes> yeah sounds like it's off or something
17:59:02 <rloo> so we go from ENROLL to XYZ (which allows one to do ...) to AVAILABLE
17:59:31 <lucasagomes> I think MANAGEABLE is good, but we can iron out the name after the meeting too. If we agree here that we will change MANAGED its good already
17:59:38 <devananda> rloo: both MANAGE and RESCUE are verbs, which I think is where your objection stems from?
17:59:47 <rloo> managed means we can talk to the node?
18:00:00 <wanyen> manageable sounds fine
18:00:04 <rloo> devananda: just wondering if there is another word than MANAGE. But yeah, RESCUE is a bit odd too.
18:00:08 <lucasagomes> rloo, yeah that's how I understand it
18:00:15 <lucasagomes> credentials are set and all
18:00:20 <devananda> lucasagomes: I agree there's room for improvement in the words until we actually implement them, at which point changes will break compat
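
(A minimal sketch of one way the NOSTATE -> AVAILABLE rename could avoid breaking the current Nova driver; keeping NOSTATE as an accepted legacy value is an assumption here, not the approach that was agreed on.)

    # Illustrative only: the alias/compat shim is an assumption.
    # states.py
    NOSTATE = None             # legacy value older conductors still report
    AVAILABLE = 'available'    # new explicit name for "ready to deploy"

    # Nova Ironic virt driver side: accept both values until NOSTATE is retired.
    _DEPLOYABLE_STATES = (NOSTATE, AVAILABLE)


    def node_is_available(node):
        """True if the node can take a new instance under either naming."""
        return (node.provision_state in _DEPLOYABLE_STATES
                and not node.instance_uuid)
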
18:00:28 <devananda> ok - we're out of time
18:00:31 <devananda> thanks all!
18:00:36 <dtantsur> thanks!
18:00:38 <devananda> #endmeeting