19:00:38 #startmeeting ironic
19:00:39 o/
19:00:40 Meeting started Mon Jul 14 19:00:38 2014 UTC and is due to finish in 60 minutes. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:41 o/
19:00:43 The meeting name has been set to 'ironic'
19:00:46 hi all!
19:00:58 hi
19:01:00 o/
19:01:02 as usual, our agenda is available up here:
19:01:03 #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:08 #chair NobodyCam
19:01:09 Current chairs: NobodyCam devananda
19:01:12 o/
19:01:19 hi
19:01:34 #topic announcements
19:01:39 o/
19:01:40 \o
19:02:03 quick call out again for our mid-cycle meetup coming in just two weeks!
19:02:35 I'd like to organize a small pre-meetup pizza outing on Sunday - sign up on the etherpad if you're interested
19:02:37 #link https://etherpad.openstack.org/p/juno-ironic-sprint
19:02:41 o/
19:02:55 hah nice
19:02:57 My flight gets in super-late so I won't be able to make it for pizza :'(
19:03:10 we should start thinking about what our goals for the midcycle are -- not in this meeting, but on the etherpad above :)
19:03:37 looks like there are some up there already, in fact
19:03:37 I will not be able to make the next meeting due to the TripleO mid-cycle
19:03:54 yep - next week is the tripleo mid-cycle in Raleigh
19:04:04 I know some of us (myself included) will be there
19:04:18 So who will be the chair?
19:04:20 * NobodyCam is there monday and tuesday only
19:04:47 * jroll votes lucasagomes for chair next week
19:04:54 +1 for lucasagomes!
19:04:57 +1
19:04:59 NobodyCam: coming to the tripleo meetup? excellent
19:05:03 +1
19:05:05 hah
19:05:05 lucasagomes: want to? I'm fine with that
19:05:06 alright
19:05:08 :)
19:05:08 +1
19:05:19 nice looking bus I have, eh lucasagomes ? :)
19:05:20 sure
19:05:32 #info lucasagomes will run the next weekly ironic meeting, since devananda and NobodyCam will be at the TripleO midcycle
19:05:57 also, the following week being our meetup, I would like to suggest that we don't have an IRC meeting that monday
19:06:10 +1
19:06:11 devananda: +1
19:06:19 yeah makes sense
19:06:21 yup
19:06:34 #info no weekly meeting on July 28, due to Ironic midcycle meetup
19:06:51 But will you post updates so the guys who don't go to the meetup can read that?
19:07:03 devananda: since adam_g and myself will be in raleigh too, i suggest talking testing strategy while there at some point
19:07:10 romcheg_ltp: yep
19:07:18 Shrews: good point.
19:07:55 other announcements -- I reviewed our review stats on Friday and sent a few nominations out
19:08:18 I'll make those official later today
19:08:30 o/
19:08:46 a huge thanks to everyone who's been helping with reviews (and spec reviews)!
19:09:07 our pace has gotten about 50% faster, in terms of # of patches landed, in June compared to May
19:09:16 and I think it'll keep improving with a few more cores :)
19:09:23 nice!
19:09:38 that's all for my announcements, I think
19:09:39 Deva: we uploaded the new ilo deploy spec, can you and other reviewers take a look?
19:10:14 wanyen: please wait until the open discussion to bring up questions which aren't on the agenda
19:10:21 #topic release cycle progress report
19:10:24 ok
19:10:40 I noticed this morning that three more specs were approved recently -- good job!
19:10:56 again, a huge thanks to those working on spec reviews, and to the proposers for having patience
19:11:10 we (and a lot of other projects, too) are still finding what the "right" balance is
19:11:25 between being too process heavy vs. not thinking things through clearly
19:11:29 /b 76
19:11:35 gah
19:11:47 BINGO
19:11:50 :-p
19:11:53 lol
19:11:57 Close some of those tabs! ;)
19:12:08 also a reminder, we're past the new-spec cut off -- but we can (and should) continue to iterate on existing specs until August 13
19:12:40 the nova driver is making progress. we landed several more clean up patches last week
19:12:53 mrda has one in flight still to add auth token caching to the client inside the nova driver
19:13:01 so no new specs for J at this point, K is open?
19:13:02 which should help with some of the performance
19:13:14 NobodyCam: correct. no NEW specs. but revisions to existing specs are fine
19:13:21 :)
19:13:29 devananda: hoping to get an updated patch uploaded today for further review
19:13:35 mrda: fantastic
19:13:45 devananda: if someone proposes a new spec, do you -2 it?
19:13:47 I'm going to try to squash all the ones that landed into my proposal to nova
19:13:52 rloo: yep
19:14:13 and take the feedback i've gotten from nova on that review and push it back to ironic
19:14:16 thx devananda. you = you ;)
19:14:28 rloo: with a nice note that we'll unblock it after K officially opens
19:14:35 FWIW, this is painful. I would really like us to stop making any more changes to the nova virt driver
19:14:52 we can talk about it on the open discussion, but what if the specs are submitted to the k/ directory instead of juno/?
19:15:01 mrda: so the sooner you can work on that, and we can land it, the better -- I think it's important enough to land it today if we can
19:15:06 devananda: if we don't make the changes to the nova virt driver now, then when? would it be less painful later?
19:15:09 deva: Did faizan work with you on the Nova driver changes for URFI boot?
19:15:30 s/URFI/UEFI
19:15:50 wanyen: the spec for those is not approved yet, in either nova or ironic. I suspect that work will have to wait until after we land the nova virt driver (assuming we are able to land it this cycle)
19:16:08 rloo, after it's merged in nova
19:16:15 devananda: perhaps nova driver changes should be submitted to nova with your patch as a dep
19:16:35 rloo, not less painful, but updating the patch all the time to add new features makes the review in nova more difficult
19:16:38 jroll: except we can't test them like that yet
19:16:51 until the driver merges into nova, we can't flip the tempest switch to test it there
19:17:07 so we can't test dependent patches right now if they're proposed on top of mine
19:17:11 lucasagomes: yeah, I hear you. Pain is in our future...
19:17:16 devananda: sure, but we can't land them like that either...
19:17:34 I'm just thinking it's a better place to put them, and they'll land eventually
19:17:37 jroll: we can test and land them in Ironic for now. it just means I need to do a lot of manual merging and fixing of things
19:17:52 "I would really like us to stop making any more changes to the nova virt driver"
19:18:07 bugfixes should be in the ironic tree, yes, but adding features probably not
19:18:09 which is why I'm asking that everyone hold off approving any changes to the nova virt driver in our tree
19:18:49 right... which doesn't jive with "test and land in ironic for now"
19:18:52 let's move on, we can talk about the details of that process for a while later :)
19:18:55 we can discuss later
19:18:57 :)
19:18:59 devananda: I have +2'd some changes but I am not +A'ing any in the nova tree
19:19:07 lucasagomes: hi! any updates on the general cleanup?
19:19:09 comstud: ^ ?
19:19:30 NobodyCam: thanks. that's perfect.
19:19:56 devananda: I haven't been doing any cleanup...
19:19:56 well not on the clean ups, pretty much all the instance_info stuff is now merged, but the migration script... I will update the patch of the migration to include some documentation about how to use it and what that migration script is about
19:20:21 but I have that one bug fix for build + delete races (when you happen to configure 2 nova-computes in a way nova does not like :)
19:20:32 I need to update it with an issue I found with my fix tho
19:20:43 comstud: thought you had some more object code cleanups in the pipe?
19:20:48 not a clean up... but a refactor-like patch. There's the management interface, I put a new spec up today, it's already +2
19:21:09 devananda: Gosh, well, I wish.. but they are not for the virt driver, anyway..
19:21:11 if I could get more eyes on that it would be great because there's other specs depending on that one
19:21:24 lucasagomes: got a link?
19:21:24 #link https://review.openstack.org/#/c/100218/
19:21:26 :)
19:21:29 lucasagomes: link
19:21:31 :--p
19:21:31 heh perfect timing
19:21:36 was grabbing it
19:21:40 heheh
19:21:47 #info more reviews requested for the change to ManagementInterface (see prior link)
19:22:00 lucasagomes: anything else before we move on?
19:22:05 not from me
19:22:11 #topic subteam status reports
19:22:36 adam_g: hi! care to summarize the tempest work from last week?
19:23:28 nothing too major from the tempest side of things. Shrews and I have been busy trying to push through some new feature flags, and the devstack slaves' RAM requirements should get bumped to 512 soon, paving the way for concurrent testing + parallel ironic instance spawns
19:23:28 devananda: Not sure adam_g is available, but I did make some progress with submitting some tempest reviews to fix some more API tests for us (some after lengthy discussions).
19:24:10 good stuff
19:25:00 for those not following the CI stuff closely, the short version is that the -qa team wants to enable all the compute/api tests with the ironic driver (instead of with libvirt)
19:25:38 Shrews, was consensus reached regarding what to do about cinder tests?
19:25:39 rather than us having just one or two scenario tests for ironic, that would mean running all the nova tests (on the order of 100 or some such)
19:26:00 adam_g: we're going with a feature flag
19:26:08 Shrews, ah, cool
19:26:20 adam_g: https://review.openstack.org/106472
19:26:32 #link https://review.openstack.org/106472
19:26:35 NobodyCam: any updates on tripleo testing of Ironic?
19:27:29 not from me today, maybe lifeless has one?
19:27:34 if he's around
19:27:57 NobodyCam: ok. it looks like the tripleo-undercloud-ironic job has been mostly stable lately, so perhaps no news is good news
19:28:00 dtantsur: hi! I know you just got back from PTO (welcome back!) -- anything to share regarding bug status today? If not, that's fine.
19:28:10 something :)
19:28:16 and hello everyone
19:28:22 dtantsur: or regarding fedora support, if anything's changed there
19:28:22 ya :)
19:28:29 people like numbers, here are the numbers: 11 New bugs (+9); 151 Open bugs (+28); 35 In-progress bugs (-1); 1 Critical bug (0); 26 High importance bugs (0); 5 Incomplete bugs (-1)
19:28:54 ouch, 151 open bugs... I think it was just barely over 100 a few weeks ago
19:28:55 As you see, we have seriously increased the number of open bugs (and I didn't have time to sort out the new ones)
19:29:07 123 2 weeks ago
19:29:22 (+- numbers are differences since last measurement)
19:29:32 ack
19:29:42 One of the reasons is that Nova folks started retargeting bm bugs to us
19:29:54 some without even seriously looking into them, some quite valid
19:29:55 #info Bug stats: 11 New bugs (+9); 151 Open bugs (+28); 35 In-progress bugs (-1); 1 Critical bug (0); 26 High importance bugs (0); 5 Incomplete bugs (-1)
19:30:05 so we can expect even more bugs from them
19:30:21 dtantsur: yep. jogo poked me after the fact on that, but I haven't had time to triage all of them.
19:30:23 bugs for nova-bm are being closed
19:30:34 dtantsur: can we close some of the bugs?
19:30:44 devananda, I'll ask you to triage some, because not everything is clear to me
19:30:47 the one critical has a fix committed
19:30:49 some of them may be valid, but they should all have the "baremetal" tag
19:30:53 so at least they are easy to spot
19:31:04 as I suspect many others do too
19:31:17 NobodyCam, I didn't have time to create a script measuring proper numbers, so this is still what LP reports
19:31:30 ahh :) Np
19:31:48 this is it for bugs, I'm in the middle of sorting them
19:31:53 what do we need to do to remove them from LP?
19:32:02 NobodyCam: nothing
19:32:02 and no updates on Fedora - didn't have time
19:32:17 NobodyCam: they will be hidden from the default status view on LP once they are released
19:32:26 ack :)
19:32:44 ah! https://bugs.launchpad.net/launchpad/+bug/1341643
19:32:45 I'd prefer our weekly stats to indicate those bugs that don't even have a fix committed yet -- but LP doesn't do that for us, so we will need a custom script at some point
19:32:50 Launchpad bug 1341643 in launchpad "No way of closing some bugs" [Undecided,New]
19:32:52 that is about non-closing bugs in LP :)
19:33:05 that's something else too ...
19:33:20 devananda, that's on my plans, didn't have time though
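
The custom stats script mentioned above doesn't exist yet; a minimal sketch of what it could look like with launchpadlib is below, counting only Ironic bugs that have no fix committed or released, grouped by importance. The consumer name and the exact set of statuses treated as "open" are illustrative choices, not anything the team has settled on:

    from collections import Counter

    from launchpadlib.launchpad import Launchpad

    # Statuses treated as "open, no fix yet" -- an illustrative choice.
    OPEN_STATUSES = ['New', 'Incomplete', 'Confirmed', 'Triaged', 'In Progress']

    # Anonymous, read-only access is enough for gathering counts.
    lp = Launchpad.login_anonymously('ironic-bug-stats', 'production')
    ironic = lp.projects['ironic']

    tasks = ironic.searchTasks(status=OPEN_STATUSES)
    by_importance = Counter(task.importance for task in tasks)

    print('Open bugs without a fix: %d' % sum(by_importance.values()))
    for importance, count in by_importance.most_common():
        print('  %s: %d' % (importance, count))
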
19:33:24 jroll: hi! any updates on IPA?
19:33:30 hi!
19:33:39 dtantsur: thanks for the updates :) again, welcome back!
19:33:43 as always, people can check the latest status here: https://etherpad.openstack.org/p/ipa-todos
19:33:43 thanks :)
19:33:52 #link https://etherpad.openstack.org/p/ipa-todos
19:34:06 I'm still figuring out some devstack things... agent is booting and getting the deploy command, but can't talk to glance
19:34:11 s/glance/swift/
19:34:24 I have some ideas... patches and testing etherpad soon
19:34:41 * NobodyCam has to step away from computer... I will catch up / read the log when I return.. Thank you all...
19:34:42 JoshNang has been working on / testing decom... that's very close as well
19:34:54 jroll, do you have some instructions on testing IPA locally?
19:34:59 maybe I missed them...
19:35:01 and don't forget we still need the spec to land: https://review.openstack.org/#/c/98506/
19:35:08 jroll: should we mention the builds of ipa-coreos image?
19:35:16 dtantsur: not yet, coming soon. this is what I meant by "testing etherpad"
19:35:23 very good
19:35:24 JayF: yes, thanks :)
19:35:28 I will talk about it then :)
19:35:43 We have a job running, in openstack infra, that's generating this: http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz every agent commit
19:36:02 this is a tarball that includes the ramdisk and kernel necessary to run IPA in the CoreOS ramdisk, just like we do at Rackspace
19:36:07 is IPA going to support OSes other than CoreOS?
19:36:23 We welcome patches to add support for other distribution methods, including DIB :)
19:36:41 #link http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz
19:36:45 I started some work https://review.openstack.org/#/c/103105/
19:36:47 IPA will run on any OS, fwiw, we just don't have builders for others
19:37:15 yuriyz: awesome! should ping us for reviews when that's no longer WIP :)
19:37:17 #info ironic-python-agent now generating tarballs of a CoreOS-based agent image on every commit, available at http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz
19:37:40 yuriyz, cool, I'd also be glad to review
19:37:44 yuriyz: nice!
19:38:17 yuriyz: I'd also suggest you add a builder for it to -infra once we get it merged as well. If it's going to be supported, it needs to be built and tested regularly to make sure it doesn't atrophy :)
19:38:38 and while we still need better testing, we can at least build and upload working images :)
19:38:42 jroll: keep up the good work. I'm looking forward to digging into IPA at the sprint in two weeks :)
19:39:06 devananda: thanks, we're excited to land it
19:39:13 but seriously need spec reviews
19:39:20 so that the spec lands before the sprint
19:39:55 #info IPA spec needs reviews -- https://review.openstack.org/#/c/98506/
19:40:01 * dtantsur will look
19:40:06 woot :)
19:40:07 +1
19:40:11 GheRivero: hi! any updates on Oslo for us?
19:41:38 ok, we'll come back to Ghe if he is around later.
19:41:42 romcheg_ltp: hi! any updates on the nova db -> ironic db migration script?
19:42:04 Morning again!
19:42:41 So I made a few minor updates to the data migration tool
19:43:05 And was sticking to the Grenade testing
19:43:42 it looks like there are reviews up for them, but they haven't gotten any nova attention yet
19:43:46 I don't know how that d**n thing works on gates but in a week I didn't manage to get that to run
19:44:04 Who are the best guys to talk to about grenade?
19:44:17 romcheg_ltp: so at the atlanta summit, sdague volunteered to give some guidance on how we can use grenade to test this upgrade path
19:44:36 So I will poke Sean then
19:44:42 thanks!
19:44:43 Also I have a question
19:45:02 I'll poke the nova team regarding the upgrade spec and your two reviews for a migration tool in the meantime, and at the midcycle in two weeks
19:45:08 Should I create console endpoints for migration tools in nova?
19:45:39 I don't have confidence that console support worked in nova-baremetal
19:45:45 I can also post that to the mailing list but there's no need for that if you know the quick answer
19:45:50 it certainly wasn't tested, and none of the deployments I know of used it
19:46:04 devananda: No, not that console :)
19:46:07 I think posting to the ML would be good, just in case someone is out there who needs it
19:46:10 oh?
19:46:20 which console then
19:46:21 I mean cli endpoints
19:46:41 what's a CLI endpoint?
19:47:13 console_scripts in setup.py
19:47:18 oh - you mean install them into $PATH somewhere
19:47:19 yes
19:47:21 That's what I mean
19:47:34 definitely :)
19:48:06 romcheg_ltp: thanks!
19:48:22 thanks to all the subteams - great reports, folks :)
19:48:22 I have no questions then :)
19:48:24 Thanks
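
For anyone unfamiliar with the term, a "CLI endpoint" here is a console_scripts entry point declared in a project's setup.py (or setup.cfg), which setuptools turns into an executable on $PATH at install time. A minimal sketch follows; the project, module, and command names are purely illustrative, not the actual migration tools being discussed:

    from setuptools import setup

    setup(
        name='example-migration-tools',   # illustrative name
        version='0.1.0',
        py_modules=['migrate_db'],
        entry_points={
            'console_scripts': [
                # "<command on $PATH> = <module>:<callable>"
                'example-migrate-db = migrate_db:main',
            ],
        },
    )

After a pip install, the example-migrate-db command shows up on $PATH and simply calls migrate_db.main().
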
19:48:43 #topic discussion: how to clearly indicate that Ironic is not a CMDB
19:48:49 so this came up (again) today
19:48:57 I feel strongly that Ironic is not suited to be a CMDB
19:49:03 +1
19:49:10 and we're dancing close to a slippery slope
19:49:19 and some of the spec proposals go too far, IMO, over that edge
19:49:33 What are a few examples?
19:49:39 so I pointed out the mission statement
19:49:41 #link http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n142
19:50:07 and am wondering if others feel that is good, or if that is somehow indicating that a CMDB is or might be in scope ...
19:50:24 devananda: it does invite some vagueness
19:50:39 as I already stated, "managing" is too vague
19:50:47 To be honest (and maybe this is just me), I'm not sure I have ever seen that mission statement before.
19:51:03 success story ^^^
19:51:10 matty_dubs: it's part of the governance docs which the TC produces
19:51:34 but that y'all haven't seen them indicates also that the impression that Ironic might be a CMDB isn't coming from there
19:51:42 so just changing that won't address it (but might help)
19:51:48 right, I don't think it's coming from there
19:51:52 devananda: I'm very curious to hear your impression as to what is over the line or not?
19:52:05 but if you change it, you can at least point at that when needed
19:52:09 It may be the canonically correct place to clarify this, but I'm not sure if it'll be widely read
19:52:14 JayF: storing a full inventory of discovered hardware information
19:52:40 devananda: btw, I think the mission statement should be here: https://wiki.openstack.org/wiki/Ironic
19:52:44 oh, we had this discussion today again
19:52:45 JayF: in and of itself isn't over the line. but as soon as that's there, someone will want to search through it, or audit it, or ask for a delta -- what changed when I re-inventoried this server?
19:52:50 (and perhaps a thing about 'not a cmdb')
19:52:51 jroll, +1
19:52:53 and *that* is all CMDB territory
19:52:55 devananda: perhaps adding an "Ironic Manifesto" to the ironic wiki page would help?
19:52:59 it's a wiki so we can update that
19:53:00 devananda, someone already wanted that in comments to my spec >_<
19:53:01 oh, yeah, what jroll said :)
19:53:14 devananda: that's sorta what I was thinking the line was too: storing what exists (ok), modifying or updating what exists (not ok)
19:53:25 yeah, if we want to avoid it, we may just be clear and say it
19:53:36 devananda: the path for #2 is 'old server gone, here's a new server that looks suspiciously similar except now has new ram'
19:54:10 JayF: I think updating is ok... but using it as a data source is what makes it a cmdb
19:54:10 and of course, if y'all think I'm wrong (and Ironic should be a CMDB) I expect everyone to tell me that
19:54:13 (but so far no one has)
19:54:22 hw inventory related to scheduling would be relevant to provisioning
19:54:24 devananda: they're all scared of you :)
19:54:31 jroll: :-/
19:54:37 (I'm joking)
19:54:52 wanyen: there are other methods of doing that
19:55:06 jroll: i swear, i'm not scary ..... :)
19:55:07 To me, the line is kind of blurry. I have no issue with where it's drawn, but I think it's very easy to come along and say, "As long as we're doing x, it'd be nice to do x++...."
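
To illustrate the line being discussed: the coarse data that assists scheduling is what Ironic already keeps in each node's properties field, while fine-grained inventory -- the kind of data you would want to search, audit, or diff over time -- is CMDB territory. The properties keys below follow Ironic's existing convention (cpus, memory_mb, local_gb, cpu_arch); the full_inventory structure is purely hypothetical and not an Ironic API:

    # Coarse properties that assist scheduling -- Ironic already keeps
    # these in each node's 'properties' field.
    scheduling_properties = {
        'cpus': 24,
        'memory_mb': 98304,
        'local_gb': 1024,
        'cpu_arch': 'x86_64',
    }

    # Fine-grained inventory -- hypothetical structure; searching,
    # auditing, or diffing data like this over time is CMDB territory.
    full_inventory = {
        'cpu': {'model': 'Xeon E5-2680', 'l2_cache_kb': 512},
        'gpus': [{'vendor': 'nvidia', 'model': 'm1234', 'vram_mb': 2048}],
        'dimms': [{'slot': 'A1', 'size_mb': 16384}],
    }
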
19:55:16 Jroll: for example?
19:55:28 matty_dubs: exactly the situation we're in.
19:55:36 wanyen: example: https://review.openstack.org/#/c/105802/
19:55:41 5 minutes left
19:55:53 Doesn't that just mean that spec reviewers need a good idea of where the line is and to enforce it there?
19:55:58 Not to store all hw inventory but only the hw properties that assist scheduling
19:56:06 i sort of had this discussion with folks offline about a year ago, but it was hypothetical. now, it's practical.
19:56:26 and I think I'm doing a disservice to all of you if I don't try to help clarify where the line is
19:56:31 wanyen: scheduler doesn't need to know that "this machine has 512kb L2 cache and an nvidia m1234 gpu with 2048MB video ram", that's far too fine-grained
19:56:36 devananda: maybe something good to discuss at the mid-cycle is things explicitly in and out of scope
19:56:40 devananda: to put into the wiki to clear it up
19:56:44 JayF: ++
19:56:48 +1
19:56:52 ++
19:56:56 * devananda adds to agenda
19:57:00 +1 sounds good
19:57:07 Can we have a quick chat on discovery specs?
19:57:15 sure
19:57:18 #topic open discussion
19:57:20 type fast :)
19:57:28 jroll: I agree. So only the properties that assist scheduling
19:57:32 Nisha: around?
19:57:39 wanyen: yep :)
19:57:40 we had a long discussion with Nisha and I guess we came to a sort of agreement
19:58:15 we don't see our specs as overlapping too much, Nisha's spec is one particular implementation that can be built on top of what I'm suggesting
19:58:27 we've discussed ways of doing it, I left a comment on the spec
19:58:50 so I don't see reasons not to consider both specs
19:58:57 Jroll: we need to allow scheduling with basic properties: # of CPUs, RAM size, etc. Some vendor might want to support scheduling based on server model; that should be allowed.
19:58:58 i think this isn't the first time we've seen a need
19:59:08 for a generic spec + a specific implementation spec on top of it
19:59:18 so I'm fine with that
19:59:19 sure :)
19:59:20 if it helps the discussion
19:59:45 jroll: yes
19:59:46 if both authors agree, and gerrit represents that dependency, i think that's great
19:59:48 And I would actually ask cores to consider my spec, because it's been around for some time, people come and ask for some new details, for proper hw inventory and so on
19:59:49 dtantsur: indeed, I think Nisha's should depend on yours :)
19:59:59 wanyen: indeed
20:00:03 Nisha, are you ok with what I suggest?
20:00:15 any objections to using a generic + specific spec approach in this case, or in general?
20:00:49 time is up, folks
20:00:50 (waiting for Nisha's answer before ending the meeting)
20:00:58 links: mine https://review.openstack.org/#/c/102565/, Nisha's https://review.openstack.org/#/c/100951
20:01:01 dtantsur: Yes it looks ok
20:01:07 #link https://review.openstack.org/#/c/102565/
20:01:10 #link https://review.openstack.org/#/c/100951
20:01:14 I will update
20:01:21 thanks a lot :)
20:01:27 so Nisha's spec is pretty generic in terms of management interface
20:02:09 #agreed using a generic spec + one or more implementation-specific specs, which are gerrit dependencies, seems to be an accepted approach when that helps facilitate a discussion about new driver features
20:02:16 thanks all! great meeting, as usual
20:02:21 thanks!
20:02:24 see some of you next week, and some the week after that!
20:02:27 cheers
20:02:30 #endmeeting