19:01:36 <devananda> #startmeeting ironic
19:01:37 <openstack> Meeting started Mon Jun  9 19:01:36 2014 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:40 <openstack> The meeting name has been set to 'ironic'
19:01:46 <adam_g> o/
19:02:03 <devananda> #chair NobodyCam
19:02:03 <openstack> Current chairs: NobodyCam devananda
19:02:07 <NobodyCam> :)
19:02:09 <devananda> #topic announcements
19:02:12 <devananda> hi all!
19:02:17 <matty_dubs> o/
19:02:20 <devananda> as usual, the agenda can be found here: https://wiki.openstack.org/wiki/Meetings/Ironic
19:02:21 <NobodyCam> what a week!
19:02:30 <linggao> o/
19:02:41 <devananda> biggest item is probably the current status
19:02:42 <lucasagomes> NobodyCam, +1 heh
19:02:43 <devananda> We weren't able to land any patches last week, and in fact we haven't had any commits merge in 10 days!
19:03:03 <devananda> last week's issues were due to https://bugs.launchpad.net/nova/+bug/1326289
19:03:04 <uvirtbot> Launchpad bug 1326289 in nova "Failing to launch instances : Filter ComputeCapabilitiesFilter returned 0 hosts" [Critical,Fix committed]
19:03:07 <devananda> The fix for that (reverting the relevant nova change) landed this morning. We should be able to start approving patches again.
19:03:22 <NobodyCam> woo hoo!
19:03:41 <NobodyCam> great job babysitting the landing of that btw
19:03:43 <devananda> however, the openstack gate is still having some issues -- it seems better than last week, but not great yet
19:03:56 <Shrews> devananda: but *should* you be approving patches? haven't seen anything from sdague saying "go for it"
19:05:05 * Shrews 2 seconds too late
19:04:32 <JayF> I approved an IPA patch earlier today and it already made it through the gate. Looking at status.openstack.org/zuul it seems to be OK in a generic sense to approve patches. IDK about Ironic-proper though.
19:04:36 <adam_g> is there a plan for how we are going to deal with this once those nova changes merge again?
19:05:06 <devananda> JayF: because IPA isn't co-gated with anything
19:05:36 <devananda> JayF: which is because it's not being tested with tempest/devstack/etc
19:05:51 <JayF> yet :(
19:05:55 <devananda> adam_g: roughly - yes, but nothing's formal afaik
19:06:42 <devananda> adam_g: my rough plan is: land the nova change w/o changing the signature of HostState.__init__, in a backwards-compatible way
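(A minimal sketch of the backwards-compatible approach devananda describes — this is hypothetical illustration, not the actual nova code or the real `HostState` signature; the class and parameter names here are assumptions.)

```python
# Hypothetical sketch: re-landing a change to HostState.__init__ without
# breaking existing callers. The new parameter is added as an optional
# keyword with a default, so old positional call sites keep working.

class HostState:
    def __init__(self, host, node=None):
        # 'node' is the newly introduced parameter; defaulting it to None
        # keeps pre-change HostState(host) callers working unchanged.
        self.host = host
        self.node = node if node is not None else host

old_style = HostState("compute-1")                       # pre-change caller
new_style = HostState("compute-1", node="ironic-node-1") # post-change caller
```

The point is that callers compiled against either version of the signature remain valid, so the nova change and the Ironic-side consumers don't have to land in lockstep.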
19:06:51 <adam_g> i suppose when the gate is in better shape it will be easier to coordinate
19:07:04 <adam_g> cool
19:07:15 <devananda> #action devananda to draft plan to re-land commit ce3f9e5fa9cd05f3ee3bb0cc7d06521d05901cf4 in a non-breaking way
19:07:50 <devananda> we should be good to recheck things as needed now, and we should be able to trust Jenkins' votes again
19:08:18 <devananda> that's it for the big status thing
19:08:25 <devananda> #topic release cycle progress report
19:08:40 <devananda> J1 milestone is this week ...
19:08:58 <devananda> given that we have a lot of the refactoring and fixes we discussed at the summit *in flight* right now
19:09:25 <devananda> it'd be great if we get those in ... but given the status of the gate, i'm not so sure
19:09:52 <devananda> i have some ideas. anyone else want to jump in, feel free
19:10:14 <rloo> devananda: J1 milestone is June 12. Don't they cut it two days before? So June 10? (tomorrow)
19:10:48 <devananda> rloo: releases are being handled slightly differently this cycle. rather than cutting a branch, i'll just tell ttx what git SHA to use from master
19:10:58 <NobodyCam> are things stable enough to cut a release?
19:11:04 <rloo> devananda: great. that buys us some more time.
19:11:08 <devananda> rloo: and we have some leeway. Thursday would be best, but if it slips to next Monday, it's not the end of the world
19:12:42 <devananda> if we can, I think it would be great to do some quick reviewing of the backlog, rebase a bunch of really-nice-to-have cleanup and bug fixes, then push them through today and tomorrow (assuming the gate lets us land anything)
19:13:14 <NobodyCam> +1
19:13:23 <devananda> cause we have a lot of high-priority bug fixes with patches up right now
19:13:24 <lucasagomes> +1, if gate gets better tomorrow we may want to have a review jam or something? to land things quickly
19:13:40 <jroll> I'm in for a review jam
19:14:05 <NobodyCam> I too am up for jam
19:14:18 <devananda> great, thanks
19:14:49 <devananda> #info review jam tomorrow (Tuesday PST morning / GMT afternoon)
19:15:10 <jroll> devananda: can we try to make that post-9am? :)
19:15:10 <devananda> lucasagomes: anything you'd like to add on the overall code cleanup progress?
19:15:16 <jroll> (PST)
19:15:20 <devananda> jroll: nope
19:15:34 <lucasagomes> devananda, due to the gate problems, we're still waiting for the tempest and devstack patches to land
19:15:37 <lucasagomes> both have a +2
19:15:38 <devananda> jroll: show up when you can -- i'll try to start at 6am myself
19:15:48 <jroll> whoaaaaaa
19:15:56 <devananda> jroll: which isn't exactly normal, but neither is the current situation :)
19:15:56 <jroll> I can do that, but whoa.
19:15:58 <JoshNang> that's some dedication
19:15:58 <jroll> :)
19:16:05 <NobodyCam> i will try for 6 am
19:16:08 <lucasagomes> there's some discussion going on on the tempest patch right now, but I hope to get both of them in this week
19:16:15 <jroll> totally fair dude, I'll shoot for 6am, may be 6:30 or so
19:16:34 <linggao> devananda, is review jam using this IRC channel?
19:16:45 <devananda> linggao: no - we use the normal #openstack-ironic channel
19:17:00 <devananda> lucasagomes: anything we can do to help those tempest patches?
19:17:07 <linggao> ok, 6am your time?
19:17:21 <lucasagomes> devananda, yes, if you guys could read the discussion and give your opinion, that would be useful
19:17:23 <lucasagomes> #link https://review.openstack.org/95789
19:17:25 <lucasagomes> that's the patch
19:17:41 <devananda> lucasagomes: thanks, will do
19:17:59 <devananda> #topic subteam: integration testing
19:18:05 <Shrews> lucasagomes: good luck getting jenkins to approve them. I can't get my tempest changes past jenkins  :(
19:18:12 <devananda> nice segue into the next topic :)
19:18:18 <lucasagomes> Shrews, :/
19:18:30 <devananda> adam_g: any add'l updates on tempest from last week?
19:18:55 <adam_g> devananda, not too much
19:19:02 <adam_g> patches that were up last week still are, for obvious reasons
19:19:09 <devananda> right
19:19:25 <devananda> lifeless: hi! any updates on tripleo CI for Ironic?
19:21:01 <devananda> ok, we can come back to lifeless any time if he's around
19:21:08 <devananda> #topic subteam: bugs
19:21:19 <devananda> dtantsur is on PTO for a few more days IIRC
19:21:34 <devananda> I didn't get any summary from him before he left
19:21:45 <lucasagomes> I think he's on PTO this week and the next one
19:21:50 <NobodyCam> yes, from what I recall he is out the rest of this week
19:21:54 <devananda> but from what I can tell, he's been reviewing all the open bugs, adjusting status
19:22:17 <lifeless> devananda: hi
19:22:18 <devananda> lucasagomes: ah, thanks for the clarification - didn't realize it extended past this week
19:22:35 <lifeless> devananda: uhm, no, I'm not driving that, just whinging about it
19:22:40 <lucasagomes> devananda, not 100% sure, lemme try to confirm that
19:22:57 <lifeless> devananda: I wish I had time to drive it. check with adam_g and NobodyCam who AFAIK were caring for the failing tripleo-ironic jobs
19:23:24 <lifeless> We might be able to sic pcrews on it too, from the tripleo end
19:23:52 <devananda> lifeless: ack. your name ended up on that item in our agenda -- adam_g or NobodyCam, either of you want to give updates on tripleo ci of ironic?
19:23:56 <NobodyCam> I was unable to get a lot done with the CI job last week, I should have more time this week
19:24:38 <adam_g> the ironic dsvm jobs *should* be passing now. so any consistent failures should get a bug and we can take a look.
19:24:58 <devananda> lifeless, NobodyCam - fwiw, this is also a good time to raise awareness of any particularly large issues that tripleo is hitting with ironic
19:24:59 <adam_g> im aware of the UCA GPG key error, and will see what infra thinks about caching that stuff on the precise slave images
19:25:02 <lifeless> adam_g: not dvsm
19:25:09 <lifeless> adam_g: *tripleo*.*ironic.*
19:25:26 <NobodyCam> adam_g: even undercloud
19:25:42 <adam_g> oh my bad, i have not been involved there.
19:26:02 <adam_g> WRT major bugs that have been affecting our tripleO testing, https://bugs.launchpad.net/ironic/+bug/1320513 is probably the #1 for us right now
19:26:04 <uvirtbot> Launchpad bug 1320513 in ironic "IPMI commands are sent / queried too fast" [High,In progress]
19:26:29 <devananda> ack. i have a fix proposed for that
19:26:36 <lifeless> adam_g: check-tripleo-ironic-undercloud-precise
19:26:47 <lifeless> adam_g: thats the one I handed off my patch about race conditions to you :)
19:26:52 <Shrews> for tripleo, this needs approving (just rechecked it): https://review.openstack.org/96498
19:26:54 <lifeless> adam_g: [you can check your IRC logs :P]
19:26:57 <NobodyCam> 96558 & 96902 address that
19:27:47 <adam_g> lifeless, right, the fix of which is still blocked on a refactor.. last week didn't help.
19:28:09 <devananda> Shrews: ack. with that many +2's it shouldn't be a problem to land it today (if the gate lets us)
19:28:34 <devananda> ok, let's move on as there's a few more items to cover
19:28:42 <lifeless> adam_g: (note that that might not be the cause, or the whole cause, or $whatever -- what i needed to hand off was the 'sit on problem job and make damn thing work')
19:28:46 <devananda> one quick note on bug triage in general
19:29:00 <devananda> let's start using tags to categorize bugs we see
19:29:21 <devananda> dtantsur proposed the "driver" tag. I'd like to propose a few different ones
19:29:39 <devananda> anyone want to discuss those, let's talk after the meeting -- otherwise, I'll just put up a wiki page today
19:29:46 <lucasagomes> driver for the nova driver? or drivers in ironic itself (e.g pxe)?
19:29:54 <devananda> lucasagomes: exactly. i'd like to clarify that sort of thing
19:30:00 <lucasagomes> ack
19:30:09 <NobodyCam> +1 to more tags
19:30:46 <devananda> #action devananda to write a wiki on bug tags, link from our main wiki
19:31:04 <devananda> #topic subteam: IPA
19:31:16 <devananda> russell_h or jroll: either of you have any updates to share?
19:31:21 <jroll> hi!
19:31:28 <jroll> just a couple of things
19:31:43 <jroll> 1) I put up a spec for the agent driver on friday - please review :) https://review.openstack.org/#/c/98506/
19:32:03 <NobodyCam> #link https://review.openstack.org/#/c/98506/
19:32:06 <NobodyCam> :-p
19:32:10 <devananda> #info spec for IPA's driver now available -- please review! https://review.openstack.org/#/c/98506/
19:32:15 <jroll> 2) I've put up a high-level todo list, for the road to getting the agent stuff merged, and beyond: https://etherpad.openstack.org/p/ipa-todos
19:33:04 <devananda> jroll: good stuff
19:33:07 <jroll> our main goal right now is refactoring the agent driver as stated there
19:33:08 <lucasagomes> nice
19:33:39 <jroll> I'd like to land that sometime before the mid-cycle
19:33:57 <jroll> I think that's it from me
19:34:13 <devananda> jroll: sort of serious, sort of joking -- does it work now?
19:34:47 <devananda> jroll: it would be good to have a walkthrough in that etherpad showing other devs how to replicate/create an env to do a deploy using IPA
19:34:56 <jroll> devananda: for some definition of 'work'
19:35:05 <jroll> devananda: it's fairly reliable, honestly
19:35:08 <JayF> jroll: is underselling it
19:35:28 <jroll> we've been testing and things are typically happy
19:35:32 <devananda> great
19:35:33 <jroll> and yes, docs
19:35:35 * jroll adds to list
19:35:49 <NobodyCam> jroll: I have not looked, (and may have already asked) is there a DIB element for Ironic-IPA
19:35:53 <JayF> jroll: russell_h: is dwalleck still working on tempest for ipa?
19:35:55 <devananda> i'd like to also see a list of known missing things (after all, that's part of the plan)
19:36:10 <jroll> NobodyCam: not today
19:36:21 <devananda> jroll: by "missing" i mean compared to the PXE driver. I know ya'll have a lot more stuff planned that PXE wont do
19:36:21 <JayF> NobodyCam: The existing agent image is built using coreos and run in a container. I'll gladly review additions of DIB elements.
19:36:24 <jroll> devananda: missing for what? feature parity with pxe?
19:36:26 <jroll> ok
19:36:29 <devananda> right
19:37:00 <JayF> NobodyCam: should be relatively trivial to do, given all the dependencies are documented in the Dockerfile in the root of the repo.
19:37:08 <jroll> devananda: my goal is to make that an empty set before landing :)
19:37:19 <devananda> jroll: ++
19:37:30 <NobodyCam> ack... I can help with dib element, but will need to get myself up to speed on IPA
19:37:37 <devananda> jroll: would be good to have that gap outlined upfront, so that folks aren't surprised when reviewing it
19:37:53 <jroll> devananda: of course, it's now on my todo list :)
19:38:15 <devananda> anything else on IPA?
19:38:36 <jroll> I think that's it
19:38:50 <devananda> great, thanks for the updates!
19:38:54 <devananda> #topic subteam: oslo
19:39:00 <jroll> :)
19:39:02 <devananda> GheRivero: hi! any updates?
19:39:09 <GheRivero> nothing new. Like almost every other project, the gate issues blocked it.
19:39:46 <GheRivero> some blocker patches were approved for oslo.db, but there are others in the queue
19:39:51 <GheRivero> the same for oslo.i18n
19:40:20 <devananda> actually I have something to point out - our DB migration code, which is largely based on oslo-incubator's db migration code, is badly broken
19:40:24 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2014-June/036950.html
19:40:33 <devananda> * db migration test code, i mean
19:41:05 <devananda> tldr; we're not running db migration tests in the check/gate queues currently
19:41:10 <devananda> and we shouldn't enable them until this is fixed
19:41:12 <lucasagomes> :(
19:41:15 <lucasagomes> yeah
19:41:17 <devananda> because they'll introduce random failures
19:41:55 <devananda> if anyone is familiar with the oslo db migration code and wants to fix it locally -- that'd be best
19:42:24 <lucasagomes> #link http://lists.openstack.org/pipermail/openstack-dev/2014-June/036950.html
19:42:48 <lucasagomes> devananda's email to the mailing list ^ there's some info about the current situation of the migration scripts
19:42:58 <lifeless> so
19:43:02 <lifeless> devananda: let me check my understanding
19:43:11 <lifeless> devananda: there is a single DB server - mysql/whatever
19:43:25 <lifeless> devananda: and multiple tests that can affect the schema on it, and they run concurrently?
19:43:45 <devananda> lifeless: in one configuration, yes.
19:43:55 <lifeless> devananda: which configuration is that?
19:44:12 <devananda> lifeless: the one that is currently being used by oslo-incubator and ironic
19:44:25 <lifeless> devananda: so the situation as I described it is the situation today?
19:44:30 <lifeless> if those tests were enabled
19:44:42 <devananda> lifeless: i haven't looked at oslo.db, it may not be affected. nova is not affected, because it's using older code (pre-oslo-incubator DB migration code)
19:45:02 <lifeless> devananda: do our other tests depend on that DB and schema? Or is this isolated to a small set of our tests?
19:45:46 <devananda> lifeless: I believe it affects all our unit tests which use the DB (they share a single DB schema currently, AFAIUI)
19:46:10 <devananda> lifeless: but I haven't looked at /that/ aspect yet
19:46:11 <lifeless> devananda: let me rephrase; do many of our unit tests use the DB, or is it mocked out for most of them ?
19:46:41 <devananda> lifeless: not sure at the moment. that email was only referring to migration tests
19:46:46 <lifeless> yeah
19:46:53 <lifeless> ok so we can take this to the main channel
19:46:56 <lifeless> but I have a few suggestions
19:47:01 <devananda> right, i have some too
19:47:09 <lifeless> good :)
19:47:21 <lifeless> suggestions at dawn, on the commons
19:47:38 <devananda> i was mentioning it to draw attention to the fact that we're not running db migration tests right now -- and we shouldn't just start running them, cause it'll break
19:47:45 <devananda> cool. moving on :)
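(A minimal sketch of the isolation idea behind the concurrency problem discussed above — one database shared by concurrent schema-mutating tests breaks; giving each test its own uniquely named database avoids it. This is a hypothetical illustration, not the oslo-incubator or ironic test code; the function names are assumptions.)

```python
# Hypothetical sketch: per-test database isolation for migration tests.
# Concurrent tests that mutate a single shared schema interfere with each
# other; a unique database per test removes the shared state.

import os
import sqlite3
import tempfile
import uuid

def isolated_db_path():
    # A unique file per test avoids cross-test schema interference; with
    # MySQL/PostgreSQL the same idea applies via CREATE DATABASE <unique>.
    name = "test_%s.db" % uuid.uuid4().hex
    return os.path.join(tempfile.gettempdir(), name)

def run_migration_test():
    path = isolated_db_path()
    conn = sqlite3.connect(path)
    try:
        # Stand-in for applying migrations and asserting on the result;
        # this schema is private to the test, so concurrent runs can't clash.
        conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY)")
    finally:
        conn.close()
        os.remove(path)
```

Two test workers calling `run_migration_test()` at the same time each get their own database, so neither can see the other's half-migrated schema.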
19:47:52 <devananda> #topic open discussion
19:48:00 <devananda> 12 minutes left! we're doing great today :)
19:48:27 <lucasagomes> devananda, what we are expecting to be covered on the mid-cycle with nova?
19:48:40 <lucasagomes> devananda, mostly about the nova ironic driver and the path to get it in?
19:48:47 <devananda> oh! gah! of course I should have announced that
19:49:11 <devananda> #info mid-cycle announced: details here: https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
19:49:46 <devananda> lucasagomes: so we're co*located* with nova, but not having a joint summit
19:49:56 <devananda> we'll be mostly doing our own thing in a separate room
19:50:08 <devananda> with easy access back and forth, and probably daily meetings with them
19:50:16 <devananda> to sort out landing the nova driver
19:50:24 <lucasagomes> right
19:50:43 <devananda> lucasagomes: so we should have our own agenda
19:50:52 <devananda> I've put some ideas in this etherpad
19:50:53 <devananda> #link https://etherpad.openstack.org/p/juno-ironic-sprint
19:51:05 <lucasagomes> +1, will take a look and put some ideas as well
19:52:23 <devananda> aside from the nova driver and IPA, I think it's too early to know what code we'll all be hacking on at that point
19:52:23 <Shrews> If the TripleO sprint is the week before, does anyone plan to attend?
19:53:06 <devananda> Shrews: I'll probably be at TripleO as well
19:53:28 <Shrews> devananda: cool. i might have to attend part of that one instead  :(
19:53:36 <devananda> lifeless: any thoughts on when you expect to have confirmation of the tripleo dates?
19:53:39 <devananda> Shrews: noooo
19:55:28 <lifeless> devananda: today I hope
19:55:48 <devananda> lifeless: ack
19:55:59 <devananda> if there's nothing else, let's end a few minutes early
19:56:19 <jroll> \o/
19:56:30 <NobodyCam> wow
19:56:32 <NobodyCam> :)
19:56:36 <NobodyCam> Thank you all
19:57:01 <devananda> thanks!
19:57:05 <devananda> #endmeeting