14:00:52 #startmeeting nova
14:00:53 Meeting started Thu Jan 23 14:00:52 2014 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:56 The meeting name has been set to 'nova'
14:00:58 Hello, everyone!
14:01:06 This is our second bi-weekly meeting at 1400 UTC
14:01:07 hi
14:01:09 hi
14:01:10 * n0ano is it morning yet
14:01:13 \o
14:01:20 hi
14:01:23 hi
14:01:35 seemed to be attended just as well as the other time 2 weeks ago
14:01:52 #topic general
14:02:02 so let's check in on the icehouse schedule
14:02:10 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:02:16 icehouse-2 is today
14:02:23 which means we're now in icehouse-3 development
14:02:32 some critical dates to keep in mind for icehouse-3 ...
14:02:41 feature freeze --> March 4
14:02:55 however, we need to pick a date for our feature proposal freeze
14:03:09 last cycle we had a deadline for code to be posted 2 weeks before feature freeze
14:03:24 i think it would be wise to do the same thing again, but hadn't really picked a specific date yet
14:03:28 feature proposal == bp approval?
14:03:58 i should say, blueprint code proposal freeze
14:04:03 not sure what to call it :-)
14:04:08 +1 sounds good
14:04:13 proposal deadline?
14:04:17 basically ... we want code to be up in advance of the merge deadline
14:04:18 yeah.
14:04:35 patch proposal
14:04:59 we're looking at mid-february sometime
14:05:07 i'll start a thread on openstack-dev to finalize the date
14:05:19 because i suspect other projects may want to coordinate on that
14:05:25 hi
14:05:38 but basically ... less than 4 weeks to have code up for review for icehouse (for blueprints)
14:05:46 any schedule questions?
14:06:02 sounds good, cross project meeting agenda item?
14:06:12 johnthetubaguy: that's a good idea
14:06:28 #action russellb to bring code proposal freeze date selection to cross project meeting
14:06:44 :)
14:07:01 #topic sub-teams
14:07:08 * hartsocks waves
14:07:10 * n0ano gantt
14:07:16 * johnthetubaguy raises hand
14:07:20 hartsocks: go for it
14:07:24 sure.
14:07:35 We're set for Minesweeper to do voting ...
14:07:46 … it's set for a "0" vote right now instead of a −1 vote.
14:08:05 Mostly we're bummed because none of our blueprints were reviewed.
14:08:17 That's all I'd bring to the group.
14:08:23 sorry about that
14:08:32 I'll hit the mailing list later with my detailed update.
14:08:37 a very low number of blueprints made it in
14:08:38 * hartsocks bows out
14:08:47 yeah I figured.
14:08:53 we can revisit that in a bit more detail in the blueprints section in a bit
14:09:00 This was a rocky milestone.
14:09:22 k, will look for your detailed update on list
14:09:27 n0ano: hi
14:09:37 couple of things...
14:10:02 decision was made to get the gantt tree working first and then re-sync (potentially recreate the tree) to the nova tree...
14:10:25 yes, i think that's a good plan of attack
14:10:40 to that, I've submitted a couple of patches for review that get almost all of the 254 unit tests working (17 related to affinity still fail)...
14:10:48 links?
14:11:20 https://review.openstack.org/#/q/project:openstack/gantt+branch:master,n,z
14:11:21 #link https://review.openstack.org/#/c/68521/
14:11:33 reviews would be nice :-)
14:11:37 https://review.openstack.org/#/q/project:openstack/gantt+branch:master+status:open,n,z
14:12:06 ok, will keep a tab open on them to look today
14:12:11 n0ano: I keep meaning to, will bump up the list a bit
14:12:30 on the add/remove files thing ... probably something to note for regenerating the repo
14:12:32 NP, they're there waiting for you :-)
14:12:34 if that gets done
14:13:15 I did it that way so that the gantt changes would be separate from adding/deleting files, makes the re-sync easier
14:13:23 so i think for icehouse we should aim to have the thing running so that we could do a nova-scheduler freeze early in juno
14:13:45 russellb, +1 (that would be the goal)
14:13:49 i think a full dev cycle is ideal to let things shake out with splitting
14:14:09 running by end of icehouse then?
14:14:13 yeah
14:14:20 sounds good
14:14:31 and if we meet that, we can then have the conversation with release management about the deprecation plan and release of gantt
14:14:37 we're close (getting most of the unit tests going is big) so I think that's doable
14:15:01 sounds good
14:15:05 thanks for your work on it!
14:15:10 anything else in scheduler land?
14:15:21 BTW, I haven't gotten anyone to look at my devstack changes for gantt
14:15:28 #link https://review.openstack.org/#/c/67666/
14:15:39 getting reviews for that would be nice also
14:15:39 ooh i didn't know that existed yet
14:15:53 will look at that too
14:15:58 yes, one more item, johnthetubaguy brought up his BP for caching scheduler
14:16:10 I am writing a new caching scheduler driver: https://review.openstack.org/#/c/67855/
14:16:12 #link https://blueprints.launchpad.net/nova/+spec/caching-scheduler
14:16:31 some discussion around that, looks interesting but need more info
14:16:42 it's going to be experimental at the moment
14:16:58 I don't have stats, only feelings
14:17:17 would like to encourage innovation there, over perfection
14:17:29 hmm, maybe a design wiki page to go with it?
14:17:32 run through some examples
14:17:38 it re-uses all filters and weights at the moment
14:17:43 russellb, that was suggested at the meeting
14:18:03 johnthetubaguy: cool, just trying to understand how the cache part works
14:18:08 which i guess is kinda the point :)
14:18:13 yeah, I should write that, proved to myself it works
14:18:19 ish
14:18:34 but yeah, i'm all for alternative approaches
14:18:43 that's pretty much all from me
14:19:02 I will write it up, now it works (with one host anyway)
14:19:10 johnthetubaguy: ok great
14:19:24 ping me when you have that so we can push the bp review along
14:19:27 * n0ano one host - my favorite cloud
14:19:44 we all do dev on one host clouds, right? heh
14:19:51 works on devstack, ship it
14:19:52 I didn't approve the BP since I didn't feel I wrote it up well enough yet :)
14:19:57 hehe
14:20:00 sometimes I go wild and bring up 2
14:20:14 johnthetubaguy: yeah, that's all i want to see (a bit of a write-up)
14:20:21 +1
14:20:28 alright, thanks guys!
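For readers skimming the log: the caching-scheduler idea above keeps the existing filter/weight pipeline but stops fetching host state from the database on every scheduling request. Below is a minimal sketch of that shape. It assumes a FilterScheduler-style base class with a _get_all_host_states() hook; every name in it is illustrative, and the actual driver is whatever lands from https://review.openstack.org/#/c/67855/.

```python
# Minimal sketch only; the real driver is under review at
# https://review.openstack.org/#/c/67855/ and may look different.
# Assumes a FilterScheduler-style base class whose _get_all_host_states()
# normally queries the database on every scheduling request.

import time

from nova.scheduler.filter_scheduler import FilterScheduler  # era-specific path


class CachingScheduler(FilterScheduler):
    """Reuse all existing filters and weights, but feed them cached host state.

    The trade-off: lower per-request latency, at the cost of scheduling
    against data that may be up to CACHE_TTL seconds stale (so retries on
    the compute node become more likely).
    """

    CACHE_TTL = 60  # seconds; illustrative value

    def __init__(self, *args, **kwargs):
        super(CachingScheduler, self).__init__(*args, **kwargs)
        self._host_cache = None
        self._cache_time = 0.0

    def _get_all_host_states(self, context):
        # Hit the database only when the cache is cold or expired; the
        # filtering and weighing passes then run against the cached list.
        if (self._host_cache is None
                or time.time() - self._cache_time > self.CACHE_TTL):
            self._host_cache = super(CachingScheduler,
                                     self)._get_all_host_states(context)
            self._cache_time = time.time()
        return self._host_cache
```

The staleness trade-off in the docstring is presumably what the write-up russellb asked for would need to spell out.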
14:20:32 johnthetubaguy: you!
14:20:36 cool
14:20:37 in another context i suspect
14:21:02 matel: is doing great work on the test stuff, maybe he can summarise that?
14:21:35 yes, we have pending changes in nodepool & infra, but tempest still fails.
14:21:59 So the plan is to get a stable tempest run first, and then trigger the infra team.
14:22:05 but on a physical node BobBall has full tempest passing right?
14:22:12 infra team is pretty slammed at the moment anyway
14:22:17 yes, physical works.
14:22:22 full tempest?
14:22:35 and is that scoped to only xenapi patches? or all?
14:22:36 what about the other env makes it fail?
14:22:53 russellb: Still looking into that
14:22:58 k just curious
14:23:02 appreciate your hard work on this
14:23:15 It's a bit slow, but we're getting closer.
14:23:41 how do you feel about the timeline right now
14:23:54 mriedem: full passed on the 17th last.
14:24:16 matel: how long is a full run taking on your physical node?
14:24:50 russellb: Not sure, related to our changes, it's around 2-4 weeks.
14:25:09 OK, was wondering how realistic having it running by icehouse-3 was
14:25:25 i guess getting tempest passing is in your control ... infra integration slightly out of your control
14:25:30 mriedem: 4118.578s on a rather old machine
14:25:55 not bad
14:26:00 that's how long current jobs are taking
14:26:19 yeah, there are some perf fixes done by Bob to help that, so we are getting close
14:26:33 so far...
14:26:38 current jobs were faster before, but we cut concurrency back to 2 from 4
14:26:42 because 4 overloaded the test nodes
14:26:49 matel: and is this on all nova patches, or a subset?
14:26:49 ah, makes sense
14:27:26 mriedem: The job I was looking at was nova master + under-review citrix patches.
14:27:55 yeah, final thing is the run in the Rax cloud using zuul
14:28:11 anyway, that was the main news
14:28:27 great, happy to hear about progress on that front
14:28:38 mriedem: last master pass was on 8-jan
14:29:09 regarding other drivers ... no real word from hyperv lately, and docker folks just starting to work on it
14:29:28 hyperv at the biggest risk, then docker
14:29:33 i was going to read the latest hyperv meeting minutes
14:29:34 i feel good about vmware and xenapi
14:29:34 LXC are thinking about stuff too I think
14:29:36 is there any news in there?
14:29:49 johnthetubaguy: LXC? as in libvirt/lxc or?
14:29:52 mriedem: good idea
14:29:58 mriedem: wish they would show up here ... :(
14:30:03 yeah, me too
14:30:05 i'll read their minutes
14:30:10 russellb: libvirt + LXC I think
14:30:11 mriedem: k let me know
14:30:16 johnthetubaguy: ah OK
14:30:43 we also do libvirt/xen
14:30:48 i doubt there will be CI for that
14:31:00 yeah, suse guys mentioned it, or someone did at the summit
14:31:03 but also not sure it's worth ripping out that piece of it
14:31:04 but not seen anything
14:31:13 it being the libvirt driver
14:31:21 well, I think we agreed to add a log saying "untested" or something
14:31:27 maybe put a big warning in the logs ... yeah
14:31:28 for xenapi?
14:31:38 xenapi is not in libvirt
14:31:50 libvirt xen is a different thingy
14:31:52 ok, xenapi is what we were just talking about before right?
14:31:54 ok, yeah
14:32:00 i joined late :)
14:32:11 yeah, sorry for the confusion, why have just one tool stack when you can have seven?
14:32:13 xenapi is in good shape
14:32:41 ok onward
14:32:46 it's promising at least, hoping to add cloudcafe after we have tempest working
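On the "log a big 'untested' warning" idea from a couple of minutes up: nothing concrete was agreed beyond the log message itself, so the following is only a hedged sketch of what that could look like at libvirt driver start-up. The function name and the tested set are made up for illustration, not Nova's actual code.

```python
# Hedged sketch of the "log a warning for untested configs" idea above,
# aimed at libvirt virt_type values (e.g. xen, lxc) with no CI behind
# them. The names and the tested set below are illustrative.

import logging

LOG = logging.getLogger(__name__)

# Hypothetical: combinations currently exercised by CI.
CI_TESTED_VIRT_TYPES = ('kvm', 'qemu')


def warn_if_untested(virt_type):
    """Meant to be called once at driver start-up (e.g. from init_host)."""
    if virt_type not in CI_TESTED_VIRT_TYPES:
        LOG.warning("The libvirt virt_type '%s' is not tested by the "
                    "OpenStack continuous integration system; you are "
                    "running a configuration that is unsupported in "
                    "practice.", virt_type)
```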
14:32:47 #topic bugs
14:33:04 #link https://bugs.launchpad.net/nova
14:33:09 226 new bugs
14:33:37 it's probably worth a bug triage day soon
14:33:50 to make sure we catch anything really important in there for icehouse
14:33:59 anyone interested in organizing that?
14:34:07 first friday of icehouse-3 and icehouse-4?
14:34:25 101 untagged, wow
14:34:26 what's icehouse-4 :)
14:34:28 I am tempted, but might need some guidance, I can reach out to mr still I guess
14:34:40 doh
14:34:45 johnthetubaguy: happy to provide guidance, and michael would be great too
14:34:59 cool, I can give it a try then
14:35:00 basically ... pick a day, write up an encouraging call-to-arms email
14:35:08 invite everyone to join #openstack-nova to work together
14:35:12 oh, yeah, and ready the stats
14:35:19 provide a set of links to the bug lists, and process notes
14:35:26 and yeah, metrics are good so that we can celebrate progress made
14:35:30 cool, I can dig out the old emails
14:35:52 when do we want it, next week or two?
14:36:11 i think any time in icehouse-3 would be OK
14:36:12 I can volunteer for the bug triage day to participate if it's before the 30th of this month.
14:36:19 before we start our big bug fix push in the RC cycle
14:36:27 kashyap: great thanks
14:36:36 russellb: right, and after the gate bug day, I guess
14:36:44 true
14:36:49 gate bug day is every day right?
14:36:51 which is this coming monday
14:36:52 heh
14:36:55 Although, my area's limited to KVM/QEMU/Libvirt aspects.
14:37:00 gate bug day has been every day for me lately yes :)
14:37:11 kashyap: plenty of those in there to look at i think
14:37:24 kashyap: and you can always expand your knowledge by studying some others :)
14:37:26 rushiagr, Yeah, I'm already on two of them
14:37:54 I'll do some bug triage wrt ec2 api bugs
14:37:56 russellb, Yeah, for now I'm focusing on KVM internals/nested virt in my spare time
14:38:00 OK, so I will try to sort something out, maybe aim for the Monday after the gate day?
14:38:15 if nothing else, for status we have http://webnumbr.com/untouched-nova-bugs
14:38:18 * rushiagr smiles at kashyap thanking him for waking /me up at the right time
14:38:34 johnthetubaguy: week after next sounds good
14:38:34 johnthetubaguy: sure ... or i think fridays are good for this kinda thing too ... any day is good with me though
14:38:39 b/c the week after that is the mid cycle meetup
14:38:45 (Oops, didn't mean to prompt, damn auto-completion)
14:38:52 kashyap: no worries
14:38:54 could be a mid-cycle meetup activity, too
14:39:01 yeah
14:39:04 for some of the time anyway
14:39:05 might work out that way
14:39:20 well, do we want it after the mid-cycle meetup
14:39:30 is that too late?
14:39:34 we also have http://status.openstack.org/bugday/
14:40:09 mriedem: yeah, I guess, so the week before it will be
14:40:10 (/me is trying to reproduce this nasty one on his setup, I wonder why more people aren't hitting it - https://bugs.launchpad.net/nova/+bug/1267191)
14:40:19 sounds good to me
14:40:23 thanks a bunch johnthetubaguy :)
14:40:34 johnthetubaguy: catch my two stats links?
14:40:42 yeah, looks good, thanks
14:40:44 cool
14:40:49 mikal may have another too
14:41:09 yeah, will drop him a mail, given he sleeps while I code
14:41:15 ok last bug note real quick ...
14:41:19 #link http://status.openstack.org/elastic-recheck/
14:41:26 i've been focusing on gate bugs the last couple weeks
14:41:30 could still use more help
14:42:01 Is there a URL for gate bugs?
14:42:15 the elastic-recheck page sorts the known ones by frequency
14:42:30 kashyap: there's a libvirt one on there
14:42:31 I'd like to keep an eye on it, as most of my testing is on Fedora + latest bits (mostly RDO; sometimes devstack)
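For whoever runs the triage day: the "226 new / 101 untagged" numbers above are easy to regenerate as before/after stats. A sketch using launchpadlib follows; the consumer name is arbitrary, and the client-side untagged filter is slow (one API round trip per bug) but fine for a once-a-day count.

```python
# Sketch for regenerating triage-day stats like the "226 new /
# 101 untagged" counts above, using launchpadlib (pip install
# launchpadlib). Fetching each task's bug record is a separate API
# round trip, so expect this to take a while on ~200 bugs.

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('nova-bug-stats', 'production')
nova = lp.projects['nova']

# All bug tasks currently in the New (i.e. untriaged) state.
new_tasks = list(nova.searchTasks(status='New'))

# There is no obvious server-side "untagged" filter, so filter here.
untagged = [task for task in new_tasks if not task.bug.tags]

print('New bugs:      %d' % len(new_tasks))
print('Untagged new:  %d' % len(untagged))
```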
14:42:53 #topic blueprints
14:43:04 so blueprints ...
14:43:14 our velocity is down a good bit this cycle, and i think other projects too
14:43:19 #link https://launchpad.net/nova/+milestone/icehouse-2
14:43:23 we completed 9 in icehouse-2
14:43:30 and 13 in icehouse-1
14:43:49 and now icehouse-3 is kind of insane looking
14:43:57 but on the velocity issue
14:44:04 i think there are a few major contributing factors worth noting
14:44:09 been thinking about the process: if there are more higher-priority blueprints, it's easier to review them above all the other low ones? Maybe we should just bump a few up and leave sponsors as extra input, not the only input?
14:44:11 1) summit in the middle of icehouse-1
14:44:20 2) holidays in the middle of icehouse-2
14:44:32 3) major gate troubles in the last couple weeks
14:44:55 johnthetubaguy: yes the process, thanks for bringing that up
14:45:07 so i sent out a message last week asking for people to please sponsor stuff
14:45:11 i'm not sure that really helped
14:45:33 so i agree, i think for icehouse at this point, we should just bump up the things we feel are more important
14:45:56 we should only bump up the ones we really want to get in, and that we think can make it (sponsors aside)
14:45:57 russellb, Yes, that Gate bug (1254872) - I looked at it a couple of days ago. danpb posted some investigation for the reporter. I'll try it tomorrow
14:46:00 yeah, we kinda need to, we can leave sponsoring as a handy way to get _other_ blueprints to medium
14:46:13 johnthetubaguy: sure, i'm good with that
14:46:26 and we'll have to re-discuss the process for juno
14:46:41 i'm loving the help, that part is good
14:46:46 the prioritization part is a flop
14:46:51 +1
14:47:11 the review seems to work better, spread around, but yeah, priorities suck right now
14:47:13 so johnthetubaguy (and other nova-drivers) feel free to start selectively increasing priority
14:47:38 russellb: worth a mail to nova-drivers again?
14:47:42 yes
14:47:47 sweet
14:48:06 so we should do that very soon
14:48:13 aim to pick out the ones we really want to land
14:48:13 I almost missed this meeting, and I'm guessing others aren't here. So email is good
14:48:26 alaski: all good
14:48:39 we have a list of 147 right now
14:48:46 which is far from realistic
14:49:05 vs 13 and 9…
14:49:07 right.
14:49:15 eek
14:49:18 so ... prioritize to pick the ones we want to focus on
14:49:26 set a blueprint approval deadline to be soon
14:49:31 a good number of these haven't been approved
14:49:52 and then i suspect the proposal deadline will kick out another big chunk
14:50:12 sounds like i need to write another email about the icehouse-3 blueprint timeline
14:50:13 yeah, would be good
14:50:25 the points in time where we'll be kicking stuff out
14:50:35 how much time should we reserve to get the last bit approved
14:50:43 not like we need more approved, but should do another pass
14:50:55 next Friday reasonable?
14:50:57 so, blueprint approved by, all patches ready and passing tests by?
14:51:06 yeah
14:51:15 and then of course, feature freeze (merged)
14:51:25 good point
14:51:35 a week seems a bit short, but longer seems too long
14:51:37 another week and a day seem reasonable to do our last blueprint approvals?
14:51:40 heh
14:51:48 I think I agree with a week
14:51:53 we can always grant exceptions :)
14:51:57 sounds good to me
14:51:59 true
14:52:27 so at least we'll hear the complaints sooner and have more time to respond to them, heh
14:52:41 true true
14:53:12 and if anyone has an icehouse-3 blueprint you know won't make it, save us some work and update it to defer :)
14:53:28 stuff that is up for review now, all patches ready, should we make them medium now? since they got them there first?
14:53:46 that sounds totally fair
14:53:52 i wouldn't update all of them to medium
14:53:55 should be more selective
14:54:11 well, if we make the other stuff we _really_ want high?
14:54:14 the ones we feel are most important out of them, yes
14:54:18 * mriedem gets out my defer pen
14:54:30 well ... release management treats medium and above as special ... as in, those are the things we expect to merge
14:54:36 low is all "nice to have"
14:54:48 true, we should stick with that
14:54:52 that's the only reason i'm picky about it
14:55:11 but we can certainly afford more medium/high than we have right now
14:55:27 but if the work's already done then we expect it to merge, unless we just don't like the blueprint idea, in which case it should have been rejected
14:55:27 just wondering about fairness to those with code already out there, and others who come late to the party, but no easy solution there I guess
14:56:16 johnthetubaguy: yeah i think it's worth picking the mediums and such out of the ones in that state
14:56:26 just not wanting to agree to a blanket upgrade of all of them
14:56:35 yeah, that's a good way of putting it
14:57:00 because there are some things that are just a ton of work with seemingly low backing
14:57:02 like gce for example
14:57:23 #topic open discussion
14:57:30 caching?
14:57:36 hartsocks: caching of what
14:57:43 method outputs
14:57:52 no idea what you're talking about, heh
14:57:54 erm...
14:57:57 I've seen a number of patches that implement their own caches….
14:58:06 don't we have an oslo thingy for that?
14:58:13 I thought so...
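On the "patches that implement their own caches" point: the pattern in question is memoizing a method's output for a while instead of recomputing it. Rather than guess at the oslo utility's API (it is not quoted in the meeting), the sketch below shows the generic shape of the hand-rolled version; the names and TTL are illustrative.

```python
# Generic sketch of the hand-rolled "cache method outputs" pattern being
# discussed; this is not the oslo utility alluded to above. Arguments
# used as cache keys must be hashable.

import functools
import time


def memoize(ttl=60):
    """Cache a function's return value per argument set for `ttl` seconds."""
    def decorator(func):
        cache = {}  # key -> (expiry_timestamp, value)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = cache.get(key)
            if hit is not None and hit[0] > time.time():
                return hit[1]
            value = func(*args, **kwargs)
            cache[key] = (time.time() + ttl, value)
            return value
        return wrapper
    return decorator


# Hypothetical usage: any expensive, rarely-changing lookup.
@memoize(ttl=300)
def flavor_details(flavor_id):
    # imagine a database or API call here
    return {'id': flavor_id}
```

A shared utility like this is exactly what the one-off caches in those patches would ideally be replaced with.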
14:58:14 russellb: nova v3 xml remove? ;)
14:58:21 sdague: AH YES
14:58:24 so xml in v3.
14:58:27 anyone want to keep it?
14:58:31 nope
14:58:31 nope
14:58:33 i still need to read that thread
14:58:39 mriedem: i'll take that as a no
14:58:44 :)
14:58:50 not me personally
14:58:56 FWIW, i'm happy to remove it
14:58:58 I'm getting a tempest patch ready so we could do the nova delete
14:59:05 i feel like we put some threads out there, no real support for keeping it
14:59:21 I think it would take a lot of cruft out of the code, and let us focus on real issues
14:59:25 …it's a great sub-team project if someone wants to fund it in the future, not something we have resources for now
14:59:29 i also feel like i should send (yet another) email on it making it clear that we're making that call
14:59:50 sdague: do we have a blueprint?
14:59:59 russellb: no, I'll create one
15:00:07 k
15:00:12 the xml stuff is also a PITA for handling datetimes in responses, i found that out the hard way with converting an api to objects
15:00:32 s/PITA.*/PITA/
15:00:33 heh
15:00:52 alright, time is up
15:00:55 thank you all for coming
15:00:59 i appreciate your time, and your work on nova!
15:01:10 I have submitted some code w.r.t. additions to the ec2 api related to block storage. I'd appreciate any help w.r.t. getting some eyes on my code.
15:01:22 russellb: we appreciate you
15:01:29 +1
15:01:30 let's jump to #openstack-nova rushiagr
15:01:33 alaski: johnthetubaguy <3
15:01:39 russellb: done
15:01:40 bye :)
15:01:42 #endmeeting