21:01:25 #startmeeting nova
21:01:26 Meeting started Thu Feb 27 21:01:25 2014 UTC and is due to finish in 60 minutes. The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:30 The meeting name has been set to 'nova'
21:01:30 o/
21:01:32 hi!
21:01:35 hi
21:01:37 again
21:01:37 o/
21:01:39 #link https://wiki.openstack.org/wiki/Meetings/Nova
21:01:40 o/
21:01:40 yo
21:01:41 \o
21:01:42 hi
21:01:51 hi
21:01:53 hi
21:02:11 #topic general
21:02:20 biggest thing today is icehouse-3 and feature freeze
21:02:22 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
21:02:29 feature freeze this coming tuesday
21:02:43 after that blueprints will require a feature freeze exception
21:03:13 we can discuss individual blueprints in a little bit
21:03:19 #topic sub-teams
21:03:29 any sub-team reports for today?
21:03:29 * n0ano gantt
21:03:37 xenapi testing is running!
21:03:41 n0ano: ok what's up
21:04:03 * melwitt novaclient
21:04:21 talked a lot about plan B for the gantt forklift (basically clean things up in the current nova tree so that a split becomes doable)
21:04:39 details are in the etherpad at https://etherpad.openstack.org/p/icehouse-external-scheduler
21:04:49 cool
21:04:58 obviously we won't be ready for icehouse but juno is in our sights
21:04:59 what should we do with the current gantt repo?
21:05:23 n0ano seemed interested in continuing to play with it
21:05:27 jog0, I'd like to keep it, it'll be re-created when we're ready to do the real split
21:05:36 but it's a good testing ground for now
21:05:43 so keeping it with nova core has a potential cost to it
21:05:50 re: nova cores should be reviewing it
21:06:05 since we know it's going to get re-done later, we could change the reviewers
21:06:08 thoughts?
21:06:28 i'm kind of keen to get it moved to stackforge first if we're going to open up the review team though
21:06:28 jog0, not sure what your concern is, should we just remove nova core from the reviewers and add those that are actively working on gantt for now
21:06:31 I think we should move it to stackforge and open up the team, but that has never been done before
21:06:42 n0ano: something like that
21:06:43 yes
21:06:54 n0ano: i'm happy to add whoever you'd like if you want to drive getting it moved to stackforge
21:07:06 jog0: well the issue on the move is it requires gerrit downtime
21:07:25 works for me, if we change the reviewers should we just leave the repo where it is?
21:07:26 and if it's going to come back later, it's just 2 saturdays you are making the infra team work
21:07:55 that would make for a total of three, right?
21:08:02 given the one they already did to get it in there? :)
21:08:07 yep
21:08:34 sdague: yeah, could wait until they're having to do it anyway for something else
21:08:43 russellb: ++
21:08:53 say, if barbican gets incubated
21:09:05 which we're reviewing this coming tuesday
21:09:11 you can change the review team without moving the repo
21:09:25 yes but i'm saying i want to move it before changing the review team
21:09:33 I don't think we want the repo under the nova umbrella for now
21:09:38 there's no harm in just doing it when the next opportunity arises right?
21:09:41 i'm being a control freak as long as it's still under the compute program :-)
21:10:03 russellb: right, but it can kick out of compute without moving out of openstack, but we can probably take that offline
21:10:41 maybe we should continue this via email, I'm not sure what all the implications are right now
21:10:47 i guess, but let's just slip this in the next time renames are being done anyway
21:11:19 melwitt: how's novaclient looking
21:11:38 the biweekly novaclient report 2014/2/27:
21:11:38 open bugs, 162 !fix released, 98 !fix released and !fix committed
21:11:38 30 new bugs, 0 high bugs
21:11:38 30 patches up, 6 are WIP, reviews have all been updated within the last 7 days
21:12:00 i pushed a novaclient release yesterday
21:12:16 so we should be on the lookout for any regressions
21:12:32 noted
21:12:49 any bugs worth noting specifically?
21:13:32 melwitt: also, tjones started a nova bug team / meeting. you could consider joining that meeting and getting novaclient bugs on the team's radar, too
21:14:04 no bugs worth noting specifically
21:14:11 ok
21:14:34 #topic bugs
21:14:51 speaking of tjones ... o/
21:14:55 #link https://wiki.openstack.org/wiki/Meetings/NovaBugScrub
21:15:02 russellb: hi
21:15:04 i believe this team had their first meeting yesterday?
21:15:25 yes we did (tues) - we focused on tagging the untagged bugs
21:15:54 tjones: i was just mentioning to melwitt, who has been looking after novaclient bugs, that she may want to join this meeting too. the group may be interested in the novaclient bugs
21:16:04 got a fair ways through. wendar was nice enough to clean up the tags and owner table. but we still need a bunch of owners, particularly the hypervisors
21:16:15 great - glad to have her
21:16:24 cool, sounds like a good first meeting then
21:16:41 yes i think so. once we have more owner coverage i think the triage will go much better
21:16:58 cool, definitely don't consider the owner list current
21:17:05 judge based on bugs actually triaged :-)
21:17:21 anything you wanted to discuss in the meeting today?
21:17:26 yes that is a big problem. i was thinking of sending out something on the ML to get this updated
21:17:27 https://wiki.openstack.org/wiki/Nova/BugTriage
21:17:39 unless you have a better idea
21:17:48 nope not really
21:18:01 * dansmith updates for objects
21:18:23 i would like to discuss https://blueprints.launchpad.net/nova/+spec/convert-image-meta-into-nova-object
21:18:33 dkliban: blueprints up next
21:18:39 russellb, k
21:18:46 tjones: anything else for today?
21:18:54 tjones: reach out to alexpilotti for hyperv
21:19:00 and ewindisch for docker i think
21:19:07 yep ^^
21:19:16 maybe zul for lxc
21:19:21 great thanks! no, just chasing owners and then hassling them to triage
21:19:23 pong
21:19:38 oh, unified-objects is on there, nevermind
21:19:40 scheduler sub-team folks / n0ano for scheduler
21:19:58 dansmith: heh, was wondering what update you were making
21:20:06 russellb: expected objects instead of unified-objects
21:20:11 #topic blueprints
21:20:17 alright, feature freeze in a few days
21:20:25 dkliban: k, you're up
21:20:34 I have a couple of BPs I've been asked to push
21:20:35 i've been working on https://blueprints.launchpad.net/nova/+spec/convert-image-meta-into-nova-object
21:21:00 i am getting close, but i am worried it won't be merged till after the 4th
21:21:37 so, I think this is important because it's a cleanup thing that needs to go in so it can soak for a while before we deprecate old tags,
21:21:39 sooo
21:21:42 that one isn't even approved
21:21:44 or at least, get people using the new ones sooner
21:22:10 russellb, can you approve it?
21:22:11 how close to freeze will it be ready
21:22:16 dkliban: maybe :)
21:22:26 oops... I thought it was already approved
21:22:28 if dansmith is going to help you drive it, i'm ok with it
21:22:54 dansmith has been helping me. danpb has been giving input.
21:23:10 so then that brings me to https://blueprints.launchpad.net/nova/+spec/add-ability-to-pass-driver-meta-when-starting-instance
21:23:42 i wrote a patch earlier that stored the new metadata in the database. it was decided that we don't want to store any extra data.
21:24:04 ah I see, the above one is just a dependency of this one
21:24:31 i would like to use the new VirtProperties object to store the new meta
21:24:32 * russellb adds blueprint dependency
21:24:47 russellb: so a BP proposed on 02/17 and started on 2-25 is still ok? sigh.
21:25:02 yjiang5: not all blueprints are of equal size/complexity
21:25:10 russellb: :)
21:25:11 this one isn't very big, and has a sponsor
21:25:16 and this was started in january
21:25:32 the dependent blueprint came out of review
21:26:26 k, well keep pushing as hard as you can
21:26:34 the further it goes, the less likely we will approve exceptions
21:26:55 no guarantees
21:26:56 i am trying to do it by the 4th. i just wanted to warn you that i might be a couple of days late.
21:27:00 ok
21:27:20 any more than a couple days will probably be too late IMO
21:27:29 dkliban: I can probably help write actual code for this tomorrow, so let's sync up and sprint for a bit and see how close we can get it
21:27:48 dansmith, thanks.
21:27:57 let's talk after this meeting
21:28:04 ok, other blueprints?
21:28:09 events
21:28:23 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/admin-event-callback-api,n,z
21:28:29 definitely need some review on that stuff if it's going to land
21:28:49 got some good api feedback from cyeoh and others, but the core bits definitely need some eyes
21:28:57 yeah ... risky stuff to be doing after freeze
21:29:05 we have 2 scheduler related BPs, targeted for icehouse-3, that need core reviews:
21:29:06 we have support on the neutron side, so it would really suck to not get this in,
21:29:11 and only ever race to boot guests for another cycle :(
21:29:15 * russellb nods
21:29:24 could argue that this is a bug fix, really
21:29:31 so probably ok ..
21:29:31 yeah, but it's big
21:29:34 right
21:29:37 and it's an API change
21:29:38 so earlier is better either way
21:29:43 and that
21:29:57 n0ano: which bps
21:29:59 Solver Scheduler: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
21:30:06 Instance Group API: https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension
21:30:21 i've been trying to look at the instance group one when i can
21:30:25 i'd love to finally get that in
21:30:33 ndipanov had a lot of feedback on the instance-group stuff last I heard
21:30:37 that's what we're looking for
21:30:51 yeah, I need to have another look at that one too.
21:30:57 sitting with a -1 and a couple of -2s
21:31:00 solver scheduler i don't know the status on
21:31:02 russellb: btw, list of patches for unapproved BPs: http://pastebin.com/hzB3QwYC
21:31:20 wow
21:31:35 n0ano: the v2 extension has an unanswered -1
21:31:48 jog0: you've -2'ed all of those, right?
21:31:48 it's not completely accurate so take it with a grain of salt
21:31:54 mikal: I try
21:31:57 but there are soo many
21:32:11 I'm just the messenger, I'll let Yathi know, that clearly needs to be resolved
21:32:13 nova-network-objects is unapproved?
21:32:14 feel free to -2 away on that list
21:32:20 cinder v2 support is a relatively small patch and it's close i think, but needs some changes - could use another core with me on it: https://review.openstack.org/#/c/43986/
21:32:25 russellb: nova-network-objects is on there
21:32:25 jog0: and you'll pay for it when Juno opens unless you have a script :-)
21:32:28 and some like admin-event-callback-api
21:32:31 nova-network-objects is already marked implemented
21:32:42 then maybe we should approve? :)
21:32:44 I didn't -2
21:32:44 and approved
21:32:51 hmm
21:32:57 make sure to set the direction as approved too
21:33:00 that's what the script uses
21:33:03 https://blueprints.launchpad.net/nova/+spec/nova-network-objects
21:33:04 it is
21:33:43 there's going to be a big list that doesn't make it due to review bandwidth, unfortunately
21:33:51 sorry :-/
21:34:03 please don't rage about it, that will make me sad
21:34:20 #topic open discussion
21:34:20 delegate the rage
21:34:28 happy to talk more blueprints, or whatever else
21:34:37 so novaclient reviews
21:34:51 it seems like it's just 3 or 4 cores who venture over there
21:34:52 melwitt: says they've all been reviewed within the last 7 days
21:35:04 by only a few people
21:35:07 it would be good to spread the love
21:35:16 yeah, but maybe that's not a huge deal, as long as we're keeping up
21:35:26 but by all means, the more the merrier ..
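[Editor's note: the tracking script jog0 mentions evidently keys off a blueprint's "direction" field, not just its definition approval — which is why nova-network-objects showed up on the list until its direction was set to approved. A minimal sketch of that filtering rule; the record shape and blueprint names here are hypothetical, and the real script pulls its data from Launchpad:]

```python
# Sketch of the reporting rule discussed above: a blueprint stays on
# the "unapproved" list until its *direction* is approved, even if its
# definition is approved and it is marked implemented.
# The dict shape is an assumption; the real tool reads Launchpad data.
def unapproved_blueprints(blueprints):
    return [bp["name"] for bp in blueprints
            if bp.get("direction") != "Approved"]

bps = [
    {"name": "example-bp-a", "definition": "Approved",
     "implementation": "Implemented", "direction": "Needs approval"},
    {"name": "example-bp-b", "definition": "Approved",
     "implementation": "Started", "direction": "Approved"},
]
print(unapproved_blueprints(bps))  # ['example-bp-a']
```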
21:35:39 https://review.openstack.org/#/q/is:open+project:openstack/python-novaclient,n,z
21:35:44 not all patches are reviewed
21:36:11 sorry, I didn't mean they were necessarily reviewed, but updated (presumably by the author) recently
21:36:12 reviewed and iterated
21:36:14 but they've been updated within the last few days
21:36:16 yeah
21:36:29 anyway no, it's not a big deal at all, this is more of a "don't forget that python-novaclient exists too"
21:36:34 :)
21:36:38 novaclient needs love too
21:37:09 will do one more novaclient release around icehouse time to catch whatever other bug fixes or features we land between now and then
21:37:47 we will need it for events before neutron can apply their patch I think
21:38:09 I think it has to go nova backend, api, client, neutron, libvirt
21:38:14 testing not against novaclient master?
21:38:34 anyway, it's super easy to cut a release
21:38:46 I don't think we test against client masters
21:38:48 they have to be confident that there will be a release before theirs I think
21:38:49 ah
21:38:52 k
21:38:54 well whenever
21:39:11 it's literally just .... $ git tag -s ; git push --tags gerrit
21:39:36 freaking magic
21:40:17 you also have to send an email
21:40:27 heh, yes, ideally
21:40:37 anything else for today?
21:40:55 so I don't know if it's worth talking about the v3 API or not?
21:41:10 but I'd like suggestions on how to resolve it one way or another
21:41:13 Do we have time
21:41:17 That might take a while
21:41:21 russellb: should I just sync with tjones on the docker ci work?
21:41:34 Although I agree it needs a decision
21:41:38 mikal: probably not, but whenever are we going to get the time
21:41:38 ewindisch: the mention in that section of the meeting was about talking to you about owning docker bug triage
21:41:45 ah
21:41:53 russellb: speaking of CI, what about the CI and drivers for Icehouse?
21:41:56 mikal: and I don't want to waste even more time/resources if it's never going to get anywhere
21:42:00 deadlines etc
21:42:08 jog0: I have some terrible reporting which isn't ready yet
21:42:17 jog0: I think we're good for everyone except docker, right?
21:42:20 for mikalVirtDriver?
21:42:21 jog0: it's harder than it looks
21:42:23 jog0: xenapi seems to be back online
21:42:31 dansmith: the not-vote rate is quite high for some CI implementations
21:42:47 mikal: not vote or not report?
21:42:55 dansmith: no comment made
21:43:00 ah, okay,
21:43:08 I was looking today and it seemed like we were doing pretty well
21:43:10 dansmith: my initial (possibly wrong) numbers say vmware only comments 67% of the time, for instance
21:43:18 over what interval?
21:43:23 Last 7 days
21:43:26 okay
21:43:31 are you going to send a report or something?
21:43:33 But like I said, it's actually fiddly
21:43:36 can anyone else view http://ca.downloads.xensource.com/OpenStack/xenserver-ci/refs/changes/35/75535/1/testr_results.html.gz
21:43:39 Yeah, I need to finish it first
21:43:44 okay, that would be good
21:43:48 dansmith: we're doing pretty well with docker, but it's just a matter of making the remaining 20-30 failing tests pass... I've got patches out for some, but a few are ... odd
21:43:51 also wonder how often a patch gets another revision before it had time to run
21:43:56 It's on my todo list, hopefully finished early next week
21:44:00 ewindisch: really? I haven't seen a docker one in a while
21:44:09 ewindisch: let's sync over in openstack-nova after this meeting on docker bugs
21:44:10 notably I have failures with stuff like uploading to Glance and creating tenants in Keystone that shouldn't have anything to do with Docker
21:44:16 jog0: my browser sees it and wants to download the gz file
21:44:16 tjones: okay
21:44:17 russellb: that's quite often... I exclude patches which existed for less than 3 hours from reporting
21:44:22 That's where the fiddliness is
21:44:22 cool
21:44:27 Working out what's a fair review to count
21:44:30 dripton: ahh, mine did too
21:44:32 just got it open
21:44:37 dansmith: I've disabled the reporting for the time being as it's just going to tell you that the tests are failing
21:44:42 Pass 1932 Skip 135
21:44:44 not bad
21:44:46 cyeoh: so, ACK on the importance of v3 discussion progress. honestly, we probably need a bit more time. i need to catch up on the list from my last couple of days of travel, too.
21:45:05 vs 2020 in gate
21:45:08 cyeoh: lastly, i'm feeling absolutely miserable right now and not sure i can do that topic justice at the moment personally
21:45:19 ewindisch: okay, well, not reporting or always failing, neither is really helpful
21:45:30 russellb: ok
21:45:32 Next week's meeting will be bad
21:45:38 Because it's at a horrible time for AU people
21:45:48 Is it worth having a special v3-only meeting?
21:45:55 This is kind of a big deal
21:45:56 we could do that, sure
21:46:12 i'd be fine with that. 1:30am is a bit hard and I'm not so coherent then
21:46:30 or i can turn it over to dansmith / cyeoh to discuss some now
21:46:33 Even if we did it in the team channel
21:46:58 I feel like we've beaten it to death, personally
21:47:07 yeah
21:47:14 i was hoping to get a wide net of feedback / opinions
21:47:21 haven't seen as much as i'd like yet
21:47:22 dansmith: well my current code reviews get us about 1/3rd of the way there. Then there is the matter of at least one non-implemented feature (suspend/resume), and most of the rest seem to be non-Docker issues on the surface.
21:47:39 I feel like we've got very little support for releasing it in icehouse, and I think that we can justify delaying a larger discussion until the summit, since the api shouldn't change much between freeze and then,
21:47:42 IMHO
21:47:57 so I'm still trying to work out what the principal objection is - because the maintenance overhead just doesn't seem as big a blocker as is suggested
21:48:39 (and I think we can reduce a lot of the overhead of internal changes on the api)
21:48:39 dansmith: I am not sure I agree with you fully on that. If one of our goals is to move to 1 API as soon as possible
21:48:42 1/3rd of the way there from a baseline of "where we are today", which is about 30 tempest failures, aka 30->20 failures
21:48:52 jog0: waiting until summit?
21:48:54 then holding it in development for another cycle has a real overhead
21:49:02 dansmith: waiting to release until Juno
21:49:11 that being said, I am not saying we should release it in icehouse either
21:49:19 I am saying both suck
21:49:21 jog0: I said having the larger discussion at the summit, not 'definitely releasing it in juno'
21:49:23 heh, i think the icehouse decision is done and behind us
21:49:26 jog0: agreed
21:49:45 Is it fair to make that entire group of devs sit and wait three months?
21:49:46 dansmith: so as I mentioned, if we are really stuck with v2 forever we should be freezing a lot of the api changes suggested for icehouse
21:49:59 jog0: I was just going to say we can wait to have the "what now?" discussion until icehouse, but I really don't want to have two for any longer than we have to, so if that means we have the discussion sooner, then that's fine too
21:50:13 mikal: hopefully those devs would be working on bug fixes during feature freeze, right?
21:50:20 dansmith: what you said
21:50:23 and it's not 3 months :-)
21:50:36 * russellb thinks
21:50:38 ok, 2.5, heh
21:50:40 russellb: so some do, but it's not really what happens in practice if you want to hit the ground running for juno
21:50:40 russellb: (February now, May then. That's three months-ish)
21:50:59 i think it's what *should* happen
21:51:02 .. and if they're not going to be working on API stuff in Juno, we will look for other stuff for them to work on
21:51:18 * jog0 is wondering if the ML thread will hit 100
21:51:25 jog0: I hope not :(
21:51:35 I think the ML thread isn't really helping any more
21:51:49 agreed, I keep telling myself I'm going to stop replying
21:52:14 so should we have a meeting on this early next week?
21:52:21 phone call? g+ hangout?
21:52:25 One question I have is, if we drop v3 in favour of improving v2... Do we have anyone actually signing up to do that v2 work?
21:52:30 russellb: perhaps right after feature freeze?
21:52:39 russellb: I think we should.
21:52:40 mikal: do we release a second api if not?
21:52:43 mikal: good question
21:52:45 mikal: that doesn't make sense to me
21:52:47 russellb: and that anything higher bandwidth than email is good
21:53:04 Sorry
21:53:13 irc is our default
21:53:14 What I am saying is, if we choose to improve v2 and not release v3
21:53:17 Who is going to do that work?
21:53:19 but wondering if voice would be better on this
21:53:32 would getting some feedback from the likes of jclouds and fog folks help?
21:53:53 voice is why I suggested we talk at the summit,
21:53:57 because it's very high bandwidth,
21:53:58 jog0: well, supporting v3 is a lot of work for people like Rackspace
21:54:02 and not much should change between now and then on the api
21:54:06 So, it's not likely to happen while there is uncertainty
21:54:08 everett toews is rax and jclouds
21:54:13 Thus a chicken and egg problem
21:54:38 and that way,
21:54:45 cyeoh can actually hit me if he likes
21:54:46 mikal: I was more thinking just ask them for general API feedback on which option they like best
21:54:49 Noting that v3 forces some interesting behaviour from deployers that I quite like
21:54:49 the uncertainty is a *big* thing
21:55:13 i.e. actually exposing a cinder and glance endpoint
21:55:49 mikal: I'm not sure that's really the case
21:55:57 mikal: or will they just continue to only expose v2 ...
21:56:00 dansmith: how so?
21:56:00 mikal: in fact it could fracture it more
21:56:03 well, of course they will
21:56:17 we'll have a mix of v2 only and v2+v3
21:56:31 If v2 is eventually deprecated, then people have to move on to supporting v3
21:56:45 and new clouds may just go v3 only
21:56:46 it's the same problem as having two apis in the first place, we're just forcing their hand or holding them hostage
21:56:49 if we're ok pissing them off by pushing removal before people have actually migrated
21:56:54 which I just don't think is what we should be doing
21:57:09 right, i'm not ok with forcing it
21:57:11 so I don't think we should just rule out compatibility layers either
21:57:19 rather than just trying to keep legacy code forever
21:57:21 if the migration doesn't happen naturally based on the new API being compelling, we've failed
21:57:26 totally
21:57:42 and yeah, i'm interested in some hybrid alternatives
21:57:47 I think we've demonstrated why a meeting next week is a good idea, by the way
21:57:59 let's just adopt a standard API like EC2 ;)
21:58:01 russellb: so part of that depends on whether we forever try to backport stuff to V2 too
21:58:05 * russellb smacks jog0
21:58:06 I know that the uncertainty sucks,
21:58:20 but I'm also concerned that I have lots of work to do between now and freeze/summit, etc
21:58:29 so we need to be really sure that a meeting is going to yield something useful, IMHO
21:58:37 i.e. and not "another meeting"
21:58:52 few things that i think would be useful here
21:59:09 1) concrete feedback from more people on what they expect in terms of a possible v2 deprecation timeline
21:59:51 2) a more concrete proposal for the v2-only option, including policy changes we would make (like around versioning), to make it a good enough API to live with long term
22:00:17 3) some more concrete ideas presented on whatever could make dual maintenance more palatable
22:00:25 we got some pretty good feedback from RAX this morning
22:00:26 so I think we really need to quantify how much this dual maintenance is too.
22:00:27 so, documenting those sorts of things on etherpads would be helpful
22:00:36 dansmith: oh ok, i haven't seen that yet, very happy to hear that
22:00:42 how much more work per change is it?
22:00:42 to me,
22:00:53 dansmith: what's the tl;dr
22:00:54 the dual maintenance is all pain if nobody really wants to use the api
22:01:13 russellb: apiv3 has no value for them, they are still supporting old-school cloud servers and it's a huge friggin pain
22:01:20 dansmith: we have a tonne of new users coming in too
22:01:25 russellb: customers don't complain about the things apiv3 fixes
22:01:36 interesting ..
22:01:37 that's what I gathered from it
22:01:41 so if we release another,
22:01:45 dansmith: I don't think we can ignore them - ~6000 expected for HK, ~9000 for atlanta
22:02:04 even if the dual maintenance is small for us (I don't think it is), it's still two APIs we're supporting, and I don't know why we're doing it
22:02:08 then the majority of users are new users
22:02:19 if v3 was organized differently, it'd be another story
22:02:32 dansmith: because v2 is fragile, and we want to move off of that
22:02:48 we actually really do want to drop the old v2 code
22:03:13 it's not just error-prone for users of the api, but for us as maintainers of it if we want to make significant changes
22:03:16 well, there's what we (devs) want to do
22:03:24 and the reality of when we can do it without hurting us as a project
22:03:34 and hence, we're stuck at this juncture
22:03:41 we're out of time for today
22:03:45 thanks everyone
22:03:47 #endmeeting