19:01:08 #startmeeting ironic
19:01:09 Meeting started Mon Jun 23 19:01:08 2014 UTC and is due to finish in 60 minutes. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:12 \o
19:01:13 The meeting name has been set to 'ironic'
19:01:16 #chair NobodyCam
19:01:17 Current chairs: NobodyCam devananda
19:01:23 #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:28 as usual, the agenda can be found ^
19:02:07 #topic greetings, announcement, etc
19:02:34 I think the "bug tag" announcement may have been left from last week, but to reiterate
19:02:49 as we are filing and triaging bugs, let's try to tag them using the official tags
19:03:05 also, lots of thanks to dtantsur for work on that :)
19:03:19 np :)
19:03:20 +1
19:03:41 as for the midcycle meetup(s), there's info here for ours
19:03:44 #link https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint
19:03:47 action for me or somebody: to add the same tags to python-ironicclient
19:03:59 should this be done by the bug submitter or the person doing triage?
19:04:06 NobodyCam: either one
19:04:08 NobodyCam, both
19:04:19 ack ack :)
19:04:32 if you are able to make it to the Ironic meetup, please RSVP on the links from that wiki page
19:04:45 so we can tell the venue how many to expect
19:04:59 I've estimated that we'll be between 20 and 30 ppl ...
19:05:18 How many plan to attend the TripleO one instead/as well?
19:05:48 sorry for interrupting you: is anyone attending the meet-up in Paris at the beginning of June?
19:05:56 Shrews: i was just about to ask that :)
19:06:18 I will be at the tripleo midcycle
19:06:29 * Shrews will attend tripleo instead
19:06:48 dtantsur, I am: https://wiki.openstack.org/wiki/Sprints/ParisJuno2014
19:06:51 * NobodyCam should be at Ironic's meetup
19:07:13 * matty_dubs is attending only the Ironic+Nova one
19:07:16 dtantsur, yeah me too
19:07:32 dtantsur: viktors will be there so you can discuss oslo.db :)
19:07:41 we may have just lost devananda
19:08:35 * Shrews mourns the loss
19:08:43 :)
19:08:47 lol yep lost his connection
19:08:56 all in favour of NobodyCam taking over...
19:09:18 :)
19:09:24 :)
19:09:41 * matty_dubs for one, welcomes our new Ironic overlords
19:09:47 oh, it's only his *connection* :)
19:09:50 king is dead long live the king
19:10:08 #topic Release cycle progress report
19:10:33 we'll skip over ironic-specs and blueprint status (devananda)
19:10:38 ok deva is off, so I think I can talk about the refactor
19:10:39 so lucasagomes you're up
19:10:54 the base patches and the dependencies in devstack and tempest are merged
19:11:03 there's 2 patches left for the instance_info work
19:11:31 nice
19:11:33 the migration script and another one changing the patcher.py on the driver to make it not delete the pxe_deploy_{kernel,ramdisk} when tearing down the node
19:11:39 * lucasagomes will grab the links
19:11:54 #link https://review.openstack.org/96136
19:12:02 #link https://review.openstack.org/101864
19:12:05 and that's it :)
19:12:18 awesome
19:13:28 one question if anyone knows: what's the status of the ceilometer integration?
19:13:30 we'll come back to get deva's report
19:14:03 ifarkas: I'd have to follow up and look into it
19:14:33 shall we move on to the Sub-team reports
19:14:51 +1
19:14:53 #topic Sub-team status reports
19:15:03 going to go a little out of order
19:15:13 NobodyCam, ok, I just found a WIP patch but wanted to check the progress
19:15:14 Jroll I know you have to leave early
19:15:21 oh yeah!
19:15:31 wanna give IPA's report?
19:15:36 yep
19:15:48 :)
19:16:03 so, first off, we announced onmetal last week, which is based on ironic. thank you all for your help!
19:16:21 jroll: \o/
19:16:27 woot
19:16:32 second, I wrote a blog post about how we run ironic with IPA for onmetal, for anyone interested: http://developer.rackspace.com/blog/from-bare-metal-to-rackspace-cloud.html
19:16:37 configs and all are included there
19:17:03 third, we've been working on powering through our todo list. as usual, that's located here: https://etherpad.openstack.org/p/ipa-todos
19:17:24 we're refactoring the agent patches to be easier to merge
19:17:35 this is the main agent patch now (still needs tests) https://review.openstack.org/#/c/101020/
19:17:44 cool!
19:17:45 and features that the PXE driver doesn't have will be applied on top of that
19:17:48 awesome stuff jroll
19:18:09 wb devananda_
19:18:13 feel free to review that agent patch, I should have the tests up later today, hopefully
19:18:40 jroll, is it just a new review request?
19:18:43 ugh. lost connection to irc proxy. reconnected directly... so no history here
19:18:55 I mean, it's not the same as I saw :)
19:19:02 one last thing, JoshNang worked Friday on an infra job to post an agent image tarball every time a patch is merged. unclear if that's working yet, but if not it will be soon
19:19:03 devananda: it will be in the logs :)
19:19:05 lol
19:19:14 dtantsur: yes, we'll be abandoning the 100-patchset review soon :)
19:19:37 jroll, nooooo... I wanted to see the 100th patchset so badly :(
19:19:38 we're doing the IPA subteam report as jroll needs to jump out a bit early
19:19:42 and, I believe that's all I have right now for IPA
19:19:51 dtantsur: lol, I can submit patches anyway :P
19:19:56 jroll, any updates on testing IPA?
19:20:22 dtantsur: that infra job I mentioned is the first step, the smaller agent patch is the second
19:20:36 dtantsur: planning on building tempest jobs to test against 101020
19:20:58 NobodyCam: ack, thanks
19:21:35 NobodyCam: have we covered other subteams yet?
19:21:52 devananda: though we do need to loop back and get an ironic-specs and blueprint status update too
19:22:02 not yet
19:22:18 ok, let's move on then
19:22:47 shrews, adam_g: any updates on our tempest test status?
19:22:49 Integration Testing
19:22:53 i think there was a bunch of discussion about that last week
19:23:09 our check and gate jobs look to be stabilized.
19:23:25 I'm currently working on weeding thru tempest test failures that are the result of unsupported hypervisor features, and adding new flags to tempest to avoid them in our job
19:23:39 with the goal of removing the redundant devstack jobs, and consolidating them into just a few
19:23:50 looks like our tempest job is not voting in our gate still. pending https://review.openstack.org/#/c/100623/
19:23:50 and getting rid of the big ugly regex we use in devstack-gate
19:24:02 adam_g: awesome
19:24:06 Shrews, any updates on how the rebuild testing is coming along?
19:24:31 nothing from my end except added a new compute flag for the rebuild feature.
and we should probably think about adding separate icehouse tests at some point
19:25:35 but rebuild test is ready to merge, IMO
19:25:38 cool
19:26:04 that's about it from the tempest front
19:26:21 NobodyCam: any news on the tripleo CI front?
19:26:39 any progress with the tripleo-undercloud test?
19:27:10 I poked infra this morning on the package
19:27:16 gah.. patch
19:27:40 ok
19:27:56 dtantsur: hi! any updates on bugs and fedora coverage?
19:28:21 devananda, Fedora: devstack works out-of-the-box once my patch got merged, nothing new here
19:28:35 \o/
19:28:37 dtantsur: nice
19:28:43 \o/
19:28:48 to the bugs
19:29:08 people like numbers, so here are numbers: 2 New bugs (0); 123 Open bugs (+5); 36 In-progress bugs (-5); 1 Critical bug (0); 20 High importance bugs (+1); 6 Incomplete bugs
19:29:13 #info devstack support for Ironic is working on fedora now
19:29:54 as for me, we're pretty much where we used to be last week
19:30:20 so, could be worse (and could be better)
19:30:26 I managed to do some unassigning and reassigning though
19:30:36 and I have one thing to try to clarify
19:30:52 we lack clarity in how we use the assignee field vs "In Progress"
19:31:03 dtantsur: the one critical bug still open?
19:31:03 to me "In Progress" == assigned and that's why:
19:31:36 NobodyCam, it counts Fix Committed too
19:31:48 that's default Launchpad stats, thank you for letting me know
19:31:53 ahh just not released yet ...
19:32:00 next time I'll probably count manually or write a script
19:32:21 dtantsur: I believe you can filter out bugs by status with URI parameters
19:32:36 1. In Progress should be assigned, because there should be someone responsible for this "progress"
19:32:58 would be good to show stats for bugs not counting fix-committed, otherwise the stats will only decrease when we release milestones
19:32:59 2. assigned should be "In Progress" just because it's otherwise hard to track assigned bugs
19:33:23 so, IMO, and based on https://wiki.openstack.org/wiki/BugTriage#In_progress_bugs_without_an_assignee
19:33:30 i.e. launchpad cannot "filter all assigned bugs"
19:33:33 bugs which are "in progress" _must_ have an assignee
19:33:45 agreed
19:33:51 what about the opposite direction?
19:34:01 while a bug may have an assignee without being in progress yet, e.g. the person has just started working and has not proposed a fix yet
19:34:31 devananda, so for you "In Progress" means "Has patch"?
19:34:43 that's how I understand it
19:34:55 in progress == a patch has been proposed for that bug
19:34:55 dtantsur: that's the emergent behavior which I have observed
19:35:05 that doesn't make it good, necessarily
19:35:33 I'm also fine if we add a policy that, if u assign urself to a bug, you just mark it as "in progress" straight away
19:35:33 devananda, the only downside is that you can't easily get all assigned bugs via launchpad ui
19:35:48 dtantsur: if we required that all assigned bugs be marked "in progress" as soon as they're assigned, we wouldn't gain anything
19:36:03 also very hard to enforce
19:36:06 *we wouldn't gain anything that I'm aware of
19:36:48 dtantsur: if you look at a milestone, you can see which bugs are assigned, eg https://launchpad.net/ironic/+milestone/juno-2
19:36:50 devananda, I'm ok with this approach, then next question: should we force people to not set "In Progress" while they don't have a patch?
19:37:16 because some people set "In Progress" right away
19:37:20 right
19:37:46 That's not terribly intuitive -- to me, as soon as I start working on a patch, it's 'in progress'
19:37:57 I'm not arguing that it _should_ be that way, but that it's what the term intuitively seems to mean
19:38:04 isn't there a 'fix proposed' status?
19:38:06 so I see the difference between assigned and in progress being whether someone is actually working on it. People assign themselves to "reserve" a bug, but haven't necessarily started working on it.
19:38:11 I think it should not be set to In Progress before anyone can see the progress
19:38:18 it's not good doing that, btw
19:38:24 mrda, I don't particularly like reserving bugs, yeah
19:38:25 mrda: exactly
19:38:54 dtantsur: so what is it that we need to solve -- launchpad is a tool we're using to organize work among a very distributed bunch of people
19:39:09 but worth noting that once a patch is pushed, gerrit moves bugs to in progress if they're not already
19:39:11 dtantsur: I don't want us to create more work for people (or for you) just to police bug status :)
19:39:27 how about adding a commit like "I have an idea for a fix.. but can not start until x/y/z"
19:39:44 devananda, my only concern is people reserving a bug and forgetting about it
19:39:46 NobodyCam: Do you mean a comment?
19:39:55 dtantsur: that said, we may be able to create tools (ttx has some we can use as templates to get started) to enforce some of these things for us -- if that's going to be beneficial to our development process
19:39:57 why yes I do
19:40:09 NobodyCam: at least then it opens the door for someone to "steal" it if they have free cycles beforehand...
19:40:23 I don't see the problem with jumping in IRC and asking "hey mrda, this is assigned to you, are you working on it or may I?"
19:40:35 jroll, +1
19:40:35 dtantsur: so for "reserved and forgotten" I think simply enforcing that an assigned bug with no patch revision in the last N days should get human attention and _possibly_ be unassigned
19:40:39 dtantsur: is totally reasonable
19:40:40 ya, and they would also know someone already has an idea for a fix, and can ping them
19:40:44 let people mark bugs as in progress
19:40:50 devananda, N = 7 IIRC?
19:40:52 but hold them responsible for that bug
19:40:54 dtantsur: and I bet we can create a script that polls launchpad and prints a list of those bugs
19:40:56 jroll: except tz. We are a global project (it's 5:10am right now here :)
19:41:07 mrda: so give them a day to respond
19:41:08 dtantsur: that's what we agreed on last time, ya. was just speaking in the generic sense :)
19:41:09 devananda, I'm thinking of this script already, yeah
19:41:13 mrda: send an email if necessary
19:41:20 * dtantsur likes dashboards :)
19:41:29 mrda: there's enough to do where nobody will be blocked because they can't work on bug xyz
19:41:30 Is this all documented somewhere? It seems like it should be.
19:41:52 so, I think we should leave the states as they are now -- folks are mostly doing the right thing, I think.
19:41:59 matty_dubs, mostly in hours heads :) I wanted to write something after this discussion
19:42:07 * hours = ours
19:42:08 and create a script which we can use to help catch forgotten assigned bugs
19:42:21 anyone disagree with ^ ?
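[A minimal sketch of the "forgotten assigned bugs" script proposed above, not part of the meeting log. It assumes launchpadlib and simplifies the "no patch revision in N days" check to "bug not touched in N days"; the script name and output format are made up, and the Launchpad attribute names may need adjusting against the current API.]

# stale_ironic_bugs.py -- list assigned bugs with no recent activity.
from datetime import datetime, timedelta

from launchpadlib.launchpad import Launchpad

STALE_AFTER = timedelta(days=7)  # the N = 7 agreed on previously

lp = Launchpad.login_anonymously('ironic-stale-bugs', 'production')
project = lp.projects['ironic']

now = datetime.utcnow()
for task in project.searchTasks(status=['Triaged', 'Confirmed', 'In Progress']):
    if task.assignee is None:
        continue  # unassigned bugs are handled by normal triage
    bug = task.bug
    # Approximation: use the bug's last update time rather than the last
    # patch revision on the linked review.
    last_touched = bug.date_last_updated.replace(tzinfo=None)
    if now - last_touched > STALE_AFTER:
        print('LP#%s assigned to %s, idle since %s: %s' % (
            bug.id, task.assignee.display_name,
            last_touched.date(), bug.web_link))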
19:42:29 dtantsur: Yeah, I think that'll be good, to make sure everyone's on the same page
19:42:35 devananda: +1
19:42:39 devananda: I'm fine with that, but s/script/launchpad URL/
19:42:58 +1
19:43:01 not everything is possible with launchpad URLs I'm afraid...
19:43:03 I'm good with status quo
19:43:13 jroll: i'm not sure it's as simple as a launchpad URL but perhaps someone can host the output of said script and refresh that every N hours or something
19:43:37 devananda: sure, I hope it's possible but if not, sure, a script
19:43:55 ack, thanks to everyone :)
19:44:05 I guess that's all for me
19:44:26 thanks dtantsur
19:44:34 #agreed continue using assignee + "in progress" in the current (flexible) way, document our expectation that an assigned bug must be actively worked on, and create a script (or URL with result of script) displaying forgotten bugs for all to see
19:44:39 dtantsur: cheers
19:44:52 GheRivero: hi! any updates on the oslo front?
19:45:26 the oslo.db patch is ready. Still some problems when running the db migration tests in parallel
19:45:36 but that's an issue we already have
19:45:58 GheRivero: sort of....
19:46:05 GheRivero, do you think we should wait for that to be fixed before merging it?
19:46:12 GheRivero: ironic is not actually running migration tests in our gate jobs AFAIK
19:46:30 lucasagomes: yes. I will wait for the fixes to get it merged
19:46:45 and activate the migration tests in the gate jobs
19:46:47 ack
19:46:51 GheRivero: great, thanks
19:47:07 romcheg: hi! any updates on the nova -> ironic db migration?
19:47:15 Hi all!
19:47:23 Yes, there are some updates
19:47:34 So I moved the code to Nova
19:47:42 #info oslo.db patch is ready, but still has a few problems with db migration tests. We should merge it once those are fixed, and enable db migration tests in Ironic's gate at that time.
19:47:57 #link https://review.openstack.org/#/c/101920/
19:48:26 * jroll has to run, enjoy the rest of the meeting :)
19:48:32 jroll: thanks, ciao!
19:48:37 jroll, see ya
19:48:41 I'm now testing a new patch set that does not copy Ironic models
19:49:01 romcheg: do we have any volunteers yet for the grenade testing of this upgrade?
19:49:12 I remember direct imports caused conflicts in the past. It seems now the problem is fixed
19:49:31 devananda: No one volunteered
19:49:50 anyone want to volunteer now? heh heh
19:50:01 I can take a look at that too, if no one else wants to
19:50:09 But more time is required
19:50:21 So I have also made a tool that updates nova flavors
19:50:31 The only question is the input format
19:50:37 romcheg: ack. please continue focusing on the nova migration itself
19:50:39 does the migration tool belong in nova/baremetal, or ironic? (just wondering)
19:50:54 rloo_ nova
19:51:08 romcheg: but it is going to modify the ironic db?
19:51:16 It is
19:51:29 romcheg: so why nova?
19:51:36 rloo_: that question was raised on the nova spec, but the nova team hasn't answered yet
19:51:54 romcheg: also wondering: if it is in nova, and we still modify the ironic db before juno, how hard/easy will it be to update it in nova?
19:52:03 There was a discussion in specs, there's still no answer but it's more likely it will be nova
19:52:16 romcheg, devananda: ok.
19:52:28 rloo_: it's nova's responsibility to accept a migration tool away from the driver they will be deprecating
19:52:31 So regarding the input format
19:52:50 rloo_: but that's a good point -- we may need to make changes to that migration tool before the end of juno
19:53:04 Every flavour might have its own deploy kernel and ramdisk
19:53:22 romcheg: just wondering. does the tool assume the ironic db is 'housed' by the same db server as the nova db?
19:53:49 #info seeking volunteers for creating grenade testing for upgrades from nova-baremetal to ironic. reference: https://review.openstack.org/#/c/95025/
19:54:05 I think it's better to make a csv file like flavor, new_kernel_uuid, new_ramdisk_uuid and give that to the migration tool
19:55:11 * five minute beep
19:55:12 What do you think?
19:55:40 romcheg: AFAIK, it needs to be able to run against a deployment of openstack
19:56:03 romcheg: so if one tool connects to nova and saves a .csv with that info, and a second tool parses that .csv -- that's probably fine, and may be more easily tested
19:56:10 devananda: It's the user who builds and uploads the stuff to glance
19:56:28 devananda: does it need to support live migrations?
19:56:28 ok, gotta move on
19:56:37 mv
19:56:38 #link https://review.openstack.org/#/c/95025/
19:56:46 NobodyCam: Oh, please not now, it's such a pain :)
19:56:49 for anyone curious about the upgrade path, it's all proposed there ^
19:57:12 thanks to everyone for the great subteam reports again this week!
19:57:16 #topic open discussion
19:57:21 3 minutes left - go :)
19:57:29 ifarkas, had a question about ceilometer IIRC
19:57:31 idk if there's enough time but
19:57:37 For the meetup -- are people leaving mid-day Weds., or should we plan 3 full days?
19:57:40 do we still want consistency on the partition size parameters (ephemeral, root, swap)?
19:57:42 lucasagomes: i think you wanted to bring up the GB/MB discussion
19:57:43 :)
19:57:48 devananda, yup
19:57:54 devananda, yeah, wanted to check the progress of the ceilometer integration
19:58:00 if so, how do people want to do it?
19:58:01 yea, might be worth taking to the mailing list ... this one has dragged on for too long IMO
19:58:08 1- Everything to be in MB?
19:58:08 2- Everything GB and float (so 0.5 == 512MB)?
19:58:08 3- Everything GB and rounding up the swap size (because swap is MB in nova), so if a user says 512 MB they will get 1 GB in Ironic?
19:58:16 if anyone happens to be in infra a poke for https://review.openstack.org/#/c/100063 would be awesome
19:58:17 ifarkas: afaik no work has been done this cycle
19:58:29 devananda, ack
19:58:29 ifarkas: we had a summit session with ceilometer folks
19:58:52 devananda, thanks, I will check it with Haomeng
19:58:55 ifarkas: my takeaway was that they feel ironic should expose $things in a pluggable way, but ceilometer probably won't consume them initially
19:58:57 lucasagomes: are there plans for nova to move to GB (EVER)
19:58:58 It didn't go swimmingly.
19:59:02 NobodyCam, no
19:59:17 ifarkas: https://etherpad.openstack.org/p/juno-ironic-and-ceilometer
19:59:40 i may vote to have our interface match theirs (i.e. MB)
19:59:44 ifarkas: hmm, looks like there weren't a lot of notes there :(
19:59:55 NobodyCam, only for swap?
19:59:57 lucasagomes, as before I vote for (1)
20:00:08 i.e. MiB for everything
20:00:09 dtantsur, yeah cheers
20:00:15 no
20:00:21 dtantsur: ++
20:00:30 time's up - let's continue in channel as needed
20:00:32 ok 2 votes for 1)
20:00:35 devananda, ?
20:00:37 lucasagomes: but if we do MB, then we're allowing folks to do root/ephemeral in sizes that aren't GB units.
20:00:38 oh time
20:00:43 lucasagomes: oh, do we need a vote real quick?
20:00:52 devananda, nop
20:00:59 let's go to the channel
20:01:01 thank you all
20:01:02 k k
20:01:03 Thanks everyone!
20:01:06 lucasagomes: and I thought that was one of lifeless' objections.
20:01:06 thanks everyone!
20:01:12 #endmeeting
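[Footnote to the GB/MB discussion above, not part of the meeting log: option (3) amounts to rounding nova's MB swap value up to whole GB on the Ironic side, so a 512 MB request becomes 1 GB. A tiny illustration; the helper name is made up and this is not Ironic code.]

import math

def swap_mb_to_gb(swap_mb):
    # Round a nova-style swap size in MB up to whole GB (option 3 above).
    return int(math.ceil(swap_mb / 1024.0))

assert swap_mb_to_gb(512) == 1   # the 512 MB -> 1 GB case mentioned above
assert swap_mb_to_gb(1024) == 1
assert swap_mb_to_gb(1536) == 2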