19:01:26 <devananda> #startmeeting Ironic
19:01:27 <openstack> Meeting started Mon Jan 27 19:01:26 2014 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:31 <openstack> The meeting name has been set to 'ironic'
19:01:33 <devananda> #chair NobodyCam
19:01:34 <openstack> Current chairs: NobodyCam devananda
19:01:45 <devananda> As usual, our agenda's here -- https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:51 <NobodyCam> Welcome all
19:01:58 <GheRivero> o/
19:01:59 <NobodyCam> short agenda today :)
19:02:20 <devananda> could end up being a short meeting, but who knows if we'll side track :)
19:02:42 <devananda> small announcement: we cut a milestone as with all the other projects last week
19:02:53 <NobodyCam> Saw the email about that
19:02:54 <dkehn> o/
19:03:23 <max_lobur_cell> o/
19:03:51 <devananda> this was more about getting the process down than actually producing a stable build, but it was, afaict, stable for the things we've implemented so far
19:04:16 <devananda> things are looking good to me for our progress towards graduation -- keep up the good work!
19:04:35 <devananda> mostly, we need deploy to really work. we're so close but could really use folks focusing on that right now
19:04:49 <devananda> let's take a look at open BPs and high pri bugs
19:04:52 <devananda> #topic blue prints
19:05:20 <devananda> #link https://launchpad.net/ironic/+milestone/icehouse-3
19:05:27 * devananda waits on slow connection
19:05:50 <devananda> ok, we have 2 essential BPs for i3 not done yet
19:06:20 <devananda> romcheg: think you'll be able to start on the db migration from nova-baremetal soon? or have you already and I missed it?
19:06:40 <romcheg> devananda: I'll submit all I did this week
19:06:45 <devananda> dkehn: i've seen a lot of comments (and left some) on your neutron patch. anything more I can do to facilitate getting that done ASAP?
19:06:50 <devananda> romcheg: great, thanks!
19:07:00 <NobodyCam> romcheg: awesome
19:07:21 <dkehn> working on it at present, lifeless has left comments on it concerning how it will be used, which is good
19:07:40 <devananda> linggao: do i remember correctly that you're working on https://blueprints.launchpad.net/ironic/+spec/serial-console-access ?
19:07:48 <dkehn> it really depends upon the reviews, I'm taking care of them as I get them
19:07:52 <devananda> linggao: or working with sun on that, i mean
19:08:11 <linggao> devananda, Sun Jing is working on it.
19:08:29 <devananda> also, has anyone gotten the neutron patch to work in a deployment, in conjunction with the nova driver yet?
19:08:51 <lucasagomes> nope, I'm setting up tripleo to start working on that right now
19:09:07 <devananda> linggao: ah, sorry. I don't usually see sun at the meetings, just you
19:09:12 * NobodyCam notes that I should have a new rev of the driver up today. I tried (and failed) to have it up for the meeting
19:09:13 <lucasagomes> probably will have something to say about it by tomorrow (day's almost finishing here)
19:09:23 <devananda> lucasagomes: awesome, ty
19:09:39 <linggao> devananda, it's her night time. 3am in the morning. :-)
19:09:55 <linggao> She will finish the console with ipmitool.
19:09:58 <devananda> linggao: ah! that's a good reason not to be in the meetings :)
19:10:10 <devananda> linggao: ok, thanks
19:10:20 <linggao> She has transferred other blueprints to me but this one.
19:11:00 <devananda> fwiw, the other 3 BPs currently targeted to i3 may get bumped -- depending on review bandwidth, etc
19:11:45 <lucasagomes> the ephemeral partition one I think is already in good progress, there are two patches waiting for review that make ironic
19:11:58 <lucasagomes> create an ephemeral partition at deploy time
19:12:14 <devananda> lucasagomes: awesome work on that, btw. i know it's mostly just porting from nova, but still ... :)
19:12:14 <lucasagomes> and the next one teaches ironic how to preserve that partition if you redeploy the machine
19:12:27 <lucasagomes> yea mostly adapting the nova code to ironic
19:12:28 <NobodyCam> lucasagomes: do you recall the deploy timeout issue we chatted about a week or so ago. did we get a bug or bp for that?
19:12:42 <lucasagomes> and maybe we need to add a redeploy to the ironic driver as well
19:12:53 <lucasagomes> NobodyCam, there's a bug for that
19:12:56 <dkehn> lifeless: pong
19:12:58 <lucasagomes> devananda, opened
19:13:03 <dkehn> lifeless: pong
19:13:08 <devananda> lucasagomes: let's chat about redeploy towards the end of meeting
19:13:11 * lucasagomes tries to find the link
19:13:12 <NobodyCam> lucasagomes: ack TY
19:13:21 <devananda> lucasagomes: i would like to know what you mean, but not sidetrack yet :)
19:13:38 <dkehn> sorry wrong window
19:13:40 <devananda> any other questions/comments/blockers on blue prints, before we move on?
19:13:44 <lucasagomes> NobodyCam, https://bugs.launchpad.net/ironic/+bug/1270986
19:13:45 <lucasagomes> dendrobates, ack
19:13:53 <devananda> lol
19:13:58 <NobodyCam> #link https://bugs.launchpad.net/ironic/+bug/1270986
19:14:00 <NobodyCam> TY lucasagomes
19:14:08 <lucasagomes> devananda, ack*
19:14:24 <devananda> #topic bugs
19:14:56 <devananda> first thing is, i'd like to ask that any developers filing bugs, who have a sense of the project's needs (as y'all probably do by now)
19:14:59 <devananda> also triage them
19:15:04 <NobodyCam> devananda: I believe the above bug should be i-3
19:15:10 <devananda> #link https://wiki.openstack.org/wiki/BugTriage
19:16:04 <devananda> so, eg, if you spot a problem and file a bug, please set the status to triaged and the importance to the appropriate level
19:16:19 <devananda> see that link for a description of the different meanings of the fields
19:16:45 <devananda> and if you're working on fixing a bug, please set the milestone target if you think you'll have a fix by a certain time
19:16:49 <devananda> thanks :)
19:17:19 <devananda> any particular open bugs that folks want to bring up?
19:17:34 <devananda> either you're blocked fixing it, or you think the priority is wrong (too low? too high?)
19:17:56 <devananda> or you think it's essential to fix by i3 but it's not listed on https://launchpad.net/ironic/+milestone/icehouse-3 ...
19:18:06 <lucasagomes> the timeout one is tricky
19:18:13 <devananda> #link https://bugs.launchpad.net/ironic/+bug/1270986
19:18:14 <devananda> yea
19:18:18 <linggao> what is the target date for i3?
19:18:31 <devananda> march 6
19:18:41 <linggao> I see.
19:18:52 <devananda> there's a feature freeze deadline of feb 22, i believe
19:18:53 <rloo> which means mar 4 to get the changes in?
19:18:56 <devananda> see the ML for the discussions
19:19:05 <lucasagomes> cause in order to use a periodic task you will have to break the lock to be able to acquire the node object and set its provision_state to error
19:19:14 <lucasagomes> not only that, we will also have to stop the job
19:19:18 <lucasagomes> which is running in the background
19:19:26 <devananda> basically, that means any major new code needs to be proposed ~2 weeks before the FF so folks can have time to review it and iterate
19:19:46 <devananda> hm, i should have announced that at the beginning...
19:19:47 <linggao> So after March 6, no more code check in?
19:20:12 <devananda> linggao: after mar6, only bug fixes and docs, unless you get a feature-freeze-exception to continue working on an essential feature for the release
19:20:28 <linggao> got it.
19:20:45 <devananda> lucasagomes: exactly. and i dont think we've got a solution yet for "interrupt the background job"
19:20:53 <NobodyCam> feature-freeze-exception come from TC's only?
19:21:03 <devananda> NobodyCam: PTL
19:21:06 <lucasagomes> devananda, yup, I was thinking about somehow use max_lobur_cell patches
19:21:07 <NobodyCam> ack
19:21:14 <lucasagomes> to have an interface to stop the thread
19:21:30 <devananda> lucasagomes: that creates the thread, but does it let us SIG_INT the thread?
19:21:42 <max_lobur_cell> devananda, lucasagomes, its possible to cancel background job
19:21:49 <devananda> max_lobur_cell: awesome - how?
19:21:57 <lucasagomes> yea, if we had a thread pool or something
19:22:11 <lucasagomes> where I could get that future object and somehow call a stop()
19:22:15 <max_lobur_cell> I'd like to discuss after
19:22:20 <max_lobur_cell> exactly
19:22:25 <devananda> max_lobur_cell: sounds good
19:22:28 <lucasagomes> idk about sending a signal to the thread
19:22:34 <lucasagomes> unless we do it with multiprocessing
19:22:35 <max_lobur_cell> but there are other ways
19:22:44 <lucasagomes> that would allow us to send a signal to stop that process
19:23:08 <devananda> lucasagomes: right - i was using SIG_INT as an expression, not literally :)
19:23:17 <lucasagomes> gotcha
19:23:27 <lucasagomes> well we can use the signal python lib for that then
19:23:29 <lucasagomes> I think
19:23:37 <devananda> so that is the only _critical_ bug that we have open today, which is great
19:23:48 <lucasagomes> that would make things more difficult to port ironic to windows for e.g
19:24:01 <max_lobur_cell> I'll ping you guys after
19:24:04 <devananda> fwiw, critical bug == hold off the release until it's fixed
19:24:05 <lucasagomes> as signal is unix only... but we don't have to mind about it now as well
19:24:18 <devananda> and i think this bug is that bad that we must fix it before Icehouse release
19:24:20 <max_lobur_cell> once I get back to my laptop :-)
19:24:36 <NobodyCam> devananda: ++
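[editor's sketch of the cooperative-stop idea discussed above, using concurrent.futures. All names here (do_deploy, stop_event) are hypothetical, not from max_lobur's actual patches. The key constraint the discussion circles: Future.cancel() only works while a job is still queued, and a running thread can't be killed from outside, so the job has to check a flag and stop itself.]

```python
# Cancel a background "deploy" job: cancel() if still queued, otherwise
# set an event the cooperative worker checks between units of work.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

stop_event = threading.Event()

def do_deploy(stop_event):
    # cooperative worker: between units of work, check for an abort request
    for _ in range(1000):
        if stop_event.wait(0.01):
            return 'aborted'
        # ... one unit of deploy work would go here ...
    return 'done'

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(do_deploy, stop_event)

time.sleep(0.05)          # job is now running, so future.cancel() would fail
stop_event.set()          # the "deploy timeout" fires: ask the job to stop
result = future.result()  # 'aborted', returned almost immediately
executor.shutdown()
```

This sidesteps the signal-module portability concern lucasagomes raises below: events and thread pools work the same on Windows.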
19:24:44 <devananda> any other bugs folks are concerned about ?
19:24:58 <devananda> *are particularly concerned about
19:26:19 <devananda> ok, moving on ...
19:26:33 <devananda> #topic Tempest tests
19:26:51 <devananda> we're still waiting on -infra to merge the tempest API functional tests into the pipeline
19:27:17 <devananda> they were really busy last week... :)
19:27:33 <NobodyCam> ya,
19:27:49 <devananda> other than that, I'm hoping agordeev2 // romcheg // etc might have some updates on devstack/tempest functional testing with VMs?
19:28:10 <agordeev2> devananda: right! i'm fully concentrated on it
19:28:35 <devananda> agordeev2: awesome! how's it going? need anything from me?
19:29:37 <agordeev2> devananda: it's going fine. I think we'll have a working patchset before the middle of i3 :)
19:30:00 <agordeev2> devananda: no, no need of anything from you. thank you
19:30:17 <devananda> agordeev2: ok, thanks!
19:30:22 <devananda> #topic nova driver
19:30:25 <devananda> NobodyCam: that's you :)
19:30:29 <NobodyCam> yep
19:30:57 <NobodyCam> I have an update to push up as soon as I correctly rebase the driver
19:31:24 <NobodyCam> I am getting the node to the deploying state
19:31:40 <NobodyCam> I hope to be testing with dkehn's patch today
19:32:00 <devananda> awesome
19:32:16 <devananda> any idea when it'll be ready for review by nova team?
19:32:47 <NobodyCam> if things go well today. I hope to jump in to test writing
19:32:50 <dkehn> NobodyCam: look at https://review.openstack.org/#/c/66071/11, working on it a bit to deal with lifeless review
19:33:19 <NobodyCam> so maybe by next meeting? (i hope)
19:33:35 <NobodyCam> dkehn: ack will look after meeting
19:34:06 <devananda> NobodyCam: need anything from me / others?
19:34:24 <NobodyCam> oh question, ya
19:34:36 <NobodyCam> we use mock
19:34:47 <NobodyCam> nova appears to still be using mox
19:34:55 <NobodyCam> any pref there?
19:34:58 <devananda> yea
19:35:01 <devananda> mock :)
19:35:04 <NobodyCam> :)
19:35:12 <devananda> all projects should be moving towards py3 compat
19:35:20 <devananda> mox is one of the things holding nova back
19:35:28 <NobodyCam> that's what I thought
19:35:38 <dkehn> is mox in or out?
19:35:47 <max_lobur_cell> devananda +
19:35:52 <devananda> mox is not py3 compat, last i heard
19:36:19 <devananda> it definitely shouldn't be used in ironic, and if we're adding unit tests to other projects (eg, nova) it's nice of us to use mock instead
19:36:22 <dkehn> thx
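[editor's note: for reference, the mock style being recommended looks like the sketch below. mock was the external library that later became unittest.mock in the Python 3 standard library; the PowerManager class here is made up for illustration, not Ironic's real driver API.]

```python
# A minimal mock-style unit test: patch the method so the test asserts
# on the call, never running the real side effect.
from unittest import mock  # 2014-era code used the external "mock" package

class PowerManager(object):
    def power_on(self, node):
        # the real method would shell out to ipmitool; never run in tests
        raise NotImplementedError

def test_power_on_called():
    pm = PowerManager()
    with mock.patch.object(PowerManager, 'power_on') as mocked:
        pm.power_on('node-1')
        mocked.assert_called_once_with('node-1')

test_power_on_called()
```

Unlike mox's record/replay/verify cycle, mock patches are plain context managers, which is part of why it ported cleanly to py3.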
19:36:39 <NobodyCam> any question for me on the nova driver?
19:37:17 <devananda> not from me. anyone else?
19:37:55 <devananda> i'm going to skip the next agenda item (devstack) since agordeev2 already gave an update on it, unless anyone has questions on that topic
19:38:18 <devananda> #topic tripleo
19:38:42 <devananda> several of the patches to add ironic support to tripleo have merged, there's at least one more still in progress
19:38:58 <devananda> #link https://review.openstack.org/#/c/66461/1
19:39:10 <NobodyCam> #link https://review.openstack.org/#/c/69013/1
19:39:11 <devananda> also, lucasagomes raised a question in channel earlier
19:39:23 <devananda> whether we're adding ironic to the seed or undercloud VMs and why
19:39:40 <devananda> so i'd like to clarify that in case others are wondering
19:40:06 <devananda> seed has some extra bits (init-no-cloud-init, or something like that) so it can run without cloud-init available
19:40:13 <NobodyCam> I have an update ALMOST ready for 66461
19:40:26 <matty_dubs> If you pardon the stupidity, what would be the use case for baking Ironic into the seed/UC VMs?
19:40:26 <devananda> i believe the elements are otherwise the same as for undercloud
19:40:44 <devananda> so we're adding it to undercloud first and once that's working, will add it to seed
19:40:45 <NobodyCam> matty_dubs: a working ironic
19:40:55 <devananda> matty_dubs: it's replacing nova's baremetal driver
19:41:11 <devananda> matty_dubs: so instead of seed and undercloud using nova+baremetal, it will use nova+ironic
19:41:15 <devananda> s/it/they/
19:41:17 <matty_dubs> Oh, whoops, _under_cloud. I'm with you now. :)
19:41:22 <lucasagomes> :)
19:41:23 <devananda> :)
19:41:27 <NobodyCam> :)
19:41:55 <devananda> anyone else experimenting with tripleo + ironic?
19:42:06 <devananda> questions? comments?
19:42:33 <matty_dubs> I'm ostensibly at the intersection of the two, though not too heavily involved with TripleO at the moment.
19:42:48 <matty_dubs> Now that I don't have undercloud and overcloud confused (it's Monday!), it seems like it would be pretty useful.
19:43:04 <devananda> matty_dubs: are you working on tuskar?
19:43:24 <matty_dubs> devananda: I came from Tuskar, and my focus is starting to shift to Ironic. Largely in support of Tuskar.
19:43:38 <devananda> matty_dubs: great. welcome!
19:43:42 <matty_dubs> Thanks! :)
19:44:11 <devananda> matty_dubs: you'll probably want to poke at our API a bunch, and getting tuskar to talk directly to it for enrolling / status / etc
19:45:01 <matty_dubs> Yes, was looking at precisely that last week :)
19:45:45 <devananda> if there's nothing else specific to our progress to integrate with tripleo, let's just open the floor
19:45:48 <devananda> #topic open discussion
19:46:41 <NobodyCam> just wanted to say AWESOME job everyone
19:46:47 <lucasagomes> devananda, at the beginning of the meeting I said that we probably will have to add a way to "redeploy" in the ironic driver for nova
19:46:49 <lucasagomes> I meant rebuild
19:47:11 <NobodyCam> lucasagomes: how would you see that getting used
19:47:23 <NobodyCam> s/how/when/
19:47:49 <lucasagomes> NobodyCam, I think TripleO is already planning to use that on this cycle
19:48:06 <devananda> lucasagomes: as in, upgrade the image
19:48:09 <lucasagomes> lifeless,  sent an email to the openstack-dev list today I think
19:48:11 <NobodyCam> for upgrades
19:48:12 <lucasagomes> dendrobates, yes
19:48:13 <NobodyCam> :)
19:48:18 <lucasagomes> devananda, yes
19:48:18 <devananda> lucasagomes: or "put new image on same node without wiping the ephemeral"
19:48:21 <devananda> ok
19:48:27 <lucasagomes> yea update the deployed image
19:48:32 <lucasagomes> without removing all the user data
19:48:46 <devananda> there was a patch up to implement a "redeploy", which would ostensibly re-trigger a deploy when the previous attempt failed mid-way
19:49:21 <NobodyCam> devananda: have the link handy?
19:49:22 <devananda> i blocked that one because, if something fails, i think exposing the equivalent of F5 is terrible
19:49:36 <lucasagomes> I see... that's useful as well :) but yea I confused the words I meant rebuild instead of redeploy
19:49:46 <devananda> lucasagomes: actually i think it's anti-useful
19:50:24 <devananda> why not just "delete" + "deploy" ?
19:50:36 <devananda> but anyway, for tripleo's needs, re-image is very useful
19:50:57 <lucasagomes> devananda, well, if the redeploy is smart enough to tell the scheduler to pick another node (ignore the one that failed, not try to redeploy on it)
19:51:04 <lucasagomes> I think it's kinda the same as delete and deploy again
19:51:17 <lucasagomes> kinda like an alias for both action
19:51:27 <devananda> lucasagomes: at the nova level, that will happen with automatic scheduler retries
19:51:27 <lucasagomes> actions*
19:51:56 <devananda> lucasagomes: actually, i dont think the scheduler is issuing deletes for the failures, but i'll check
19:52:10 <devananda> #action devananda to check if nova-scheduler issues DELETE for failed attempts prior to RETRY
19:52:30 <lucasagomes> devananda, right, hmm... it would be important for it to call delete
19:52:39 <lucasagomes> so we can do a tear_down() and remove all the config files
19:52:45 <lucasagomes> generated for that node that failed to deploy
19:52:51 <devananda> lucasagomes: ironic doesn't need to expose retry in its API because the logic for that is external. we don't have an internal scheduler today
19:52:55 <lucasagomes> inlcuding the auth_token etc
19:52:58 <devananda> lucasagomes: exactly
19:53:10 <devananda> but we do need to expose "reimage"
19:53:28 <devananda> that can't be accomplished with any other combination of actions today
19:53:52 <lucasagomes> I see
19:53:57 <matty_dubs> 'reimage' is for the upgrade use case, where you want to leave guests intact, but upgrade the operating system / OpenStack install, correct?
19:54:21 <devananda> matty_dubs: yep
19:54:59 <NobodyCam> devananda: would that be needed for graduation?
19:55:04 <devananda> NobodyCam: nope
19:55:08 <lucasagomes> yea that logic about "redeploy" should live in the ironic driver for e.g not in our APi directly
19:55:16 <devananda> however, tripleo needs it for their story
19:55:24 <devananda> lucasagomes: ++
19:55:24 <lucasagomes> the driver can call delete+deploy, but understand both actions as "redeploy"
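[editor's sketch of what lucasagomes describes: the Nova driver composes the two existing Ironic actions into one "rebuild". Every name below (IronicDriver.rebuild, tear_down, deploy, preserve_ephemeral) is hypothetical, assumed for illustration rather than taken from the real driver code.]

```python
# Hypothetical composition of delete+deploy into a single rebuild action.
class IronicDriver(object):
    def __init__(self, ironic_client):
        self.ironic = ironic_client

    def rebuild(self, node, image, preserve_ephemeral=True):
        # tear down the old instance (removes config files, auth_token, etc.)
        self.ironic.tear_down(node)
        # redeploy on the SAME node, keeping the user-data partition
        self.ironic.deploy(node, image,
                           preserve_ephemeral=preserve_ephemeral)

# a fake client that just records calls, to show the composition order
class FakeClient(object):
    def __init__(self):
        self.calls = []
    def tear_down(self, node):
        self.calls.append(('tear_down', node))
    def deploy(self, node, image, preserve_ephemeral):
        self.calls.append(('deploy', node, image, preserve_ephemeral))

driver = IronicDriver(FakeClient())
driver.rebuild('node-1', 'image-2')
```

The design point from the discussion: the alias lives in the driver, not in Ironic's API, since Ironic itself has no scheduler to pick a replacement node.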
19:56:30 <devananda> lucasagomes: new bug that you'll be interested in; https://bugs.launchpad.net/ironic/+bug/1272599
19:56:51 <devananda> i had some time to chat with krow late on friday and he pointed that issue out
19:57:39 <lucasagomes> hmm will take a look
19:58:35 <lucasagomes> devananda, this only affects ironic!?
19:58:37 <NobodyCam> two minute bell
19:58:46 <lucasagomes> I bet it affects many other projects
19:58:48 <devananda> lucasagomes: heh. probably affects other projects too ...
19:59:03 <lucasagomes> our resource-factory pattern is used by many others
19:59:04 <lucasagomes> yea
19:59:04 <matty_dubs> Is caching a POST RFC-legal?
19:59:36 <lucasagomes> matty_dubs, idk, I will google it
19:59:40 <lucasagomes> but that looks hmm, odd
19:59:43 <devananda> yea
19:59:44 <matty_dubs> lucasagomes: Ah, I can. Was mostly just curious/surprised.
20:00:01 <NobodyCam> Thats time folks
20:00:02 <devananda> lucasagomes: we also talked about 200 vs 201 vs 202 return codes
20:00:16 <devananda> thanks everyone! see you next week :)
20:00:19 <devananda> #endmeeting