21:00:09 #startmeeting nova
21:00:10 Meeting started Thu Feb 12 21:00:09 2015 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:13 The meeting name has been set to 'nova'
21:00:17 Who is here for a nova meeting?
21:00:19 o/
21:00:20 o/
21:00:21 o/
21:00:23 o/
21:00:24 o/
21:00:26 o/
21:00:31 Yay!
21:00:36 #topic Release status
21:00:39 o/
21:00:46 Ok, so we're in feature freeze for non-priority stuff now
21:00:54 We've collected a bunch of FFE requests during this week
21:01:06 And the plan is nova-specs-core will meet early next week to discuss them
21:01:14 (as announced in John's email last week)
21:01:21 #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056208.html
21:01:28 o/
21:01:35 I don't have much more on that, it's just a "please hold"
21:01:47 Any other comments or questions on the FFE process?
21:01:59 I submitted an FFE request for passing capabilities to ironic
21:02:06 Yep
21:02:20 You'll hear back next week once nova-specs-core has had a chance to meet and discuss
21:02:23 I hope we hope sometime to discuss it today
21:02:38 do questions about new features concern the FFE process?
21:02:46 One sec
21:03:02 wanyen_: sorry, we weren't planning to work through them at this meeting, as half of specs-core is asleep at the moment
21:03:07 Hence the meeting next week
21:03:11 s/i hope we hope/I hope we can
21:03:28 mikal: at one point, the discussion sounded like it was to happen in this meeting at this time
21:03:35 so I think probably some people are surprised to not see it happening
21:03:36 Huh
21:03:43 Ahhh, I see
21:04:13 I'm working off John's email, which says a separate meeting (to my reading at least)
21:04:36 The deadline for requesting one has also only just closed, so presumably people need time to consider the requests
21:04:48 If we need to discuss specifics of some, we can do that in open discussion
21:05:17 mikal, discussing it in open discussion will be fine
21:05:19 alevine: now on to you
21:05:35 alevine: so yes, the feature freeze blocks all new non-priority features for the rest of kilo
21:05:51 alevine: this is intended to free up time and attention to actually finish the priority work for the release
21:06:13 sdague suggested bringing up the question of the extension needed for the standalone EC2 API project here. It's just about some more properties to show
21:06:30 alevine: yep, that's actually in the next meeting topic on the agenda
21:06:33 So... please hold!
21:06:40 Oh, ok.
21:07:04 So, what we said in the announcement email is we'd discuss exceptions next week, and announce results no later than the nova meeting next week
21:07:08 But let's move on...
21:07:14 #topic Kilo Priorities
21:07:26 So, looking at the tracking etherpad
21:07:33 #link https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
21:07:45 Is there anything that needs to be called out as needing more attention from that list of reviews?
21:08:07 so,
21:08:42 there is a review up that proposes to implement all (I think) of the nova-to-neutron proxy by hackishly redefining the Network object to make calls to neutron instead of nova's DB
21:08:52 #link https://review.openstack.org/#/c/150490/
21:09:11 I'm trying to codify why I hate it, but it's hard to put feelings I have in my stomach into words
21:09:12 Yeah, I saw that one during the week but you seemed to be on it
21:09:28 dansmith: what would you like to see as a way forward?
21:09:39 I feel like simple things will work with that, but more complicated cases will require behavioral changes in the API layer
21:09:47 anteaya: what I said in my review: do it at the API layer
21:09:57 and I mean the nova/network api layer
21:10:50 dansmith: is gus' reply incorporated into your solution direction?
21:11:05 anyway, I thought maybe some other people could look at it and see if they think that's a super idea and I'm just crazy
21:11:07 anteaya: yes
21:11:19 great, thanks
21:11:25 Reading gus' comment, I think he thinks Dan means the nova-api layer
21:11:32 okay, would be great to have others' feedback
21:11:32 Not the network-api layer
21:11:36 Or am I mis-reading it?
21:11:53 mikal: I think he's maybe confusing several things
21:11:55 if this is the wrong direction, I would like to get folks steered to the mutually agreed upon right direction
21:12:02 if we can agree what that is
21:12:17 mikal: because there are things that nova-api does with network stuff, and things that nova/network/manager does
21:12:42 dansmith: if I can get both you and gus in a venue of some sort, do you think you could clear some things up?
21:12:56 So, the specific request here is for other people to take a look at this review and see if they have concerns?
21:13:41 mikal: I think that'd be a good idea, but it's also hard to know whether this is a good step forward without a lot of consideration of the cases we'll cover
21:13:47 mikal: but yes.
21:13:49 * mikal tries to remember how to make a note in meeting minutes
21:13:59 try # info
21:14:06 #info Please review https://review.openstack.org/#/c/150490/
21:14:31 Ok, I can also chat to Angus and try to get him some help
21:14:34 doing it at the object layer,
21:14:52 is akin to doing it at the sqlalchemy layer and trying to just make it look like we're storing things the same way, even though we're not
21:15:05 which is, AFAIK, the job of nova/network/neutronv2/api.py
21:15:07 I'll have to spend some more time looking, but at first glance I agree with dansmith that this is the wrong api to hide a proxy behind
21:15:33 it seems like it will end up being limiting at some point
21:15:39 Ok, so let's have a look at it in the next day or so and then I can have a chat to Angus about it
21:15:47 Given his timezone is very convenient for me
21:16:01 thanks
21:16:02 Is there anything else on priority work we need to cover?
21:16:32 ...
21:16:36 ...
21:16:39 I take that as a no
21:16:44 So, the other thing is the ec2 work
21:17:01 sdague has asked if we should consider adding fixing ec2 to our list of priorities
21:17:08 I think this is partially a timing question
21:17:12 It's obviously important
21:17:18 here's what I think we should do:
21:17:20 Do we think those fixes will be ready for kilo?
21:17:28 They are really easy.
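The object-layer versus nova/network-API-layer distinction drawn above can be hard to picture from the log alone. The following is a minimal, self-contained sketch with invented class names and fake data, not code from review 150490, showing roughly where the nova-to-neutron translation would live in each approach.

```python
# Illustrative sketch only: names and data are invented to show the shape of
# the two approaches being debated, not code from review 150490.

# Stand-in for a neutron client; a real one would be python-neutronclient.
FAKE_NEUTRON = {"net-1": {"id": "net-1", "name": "private"}}


class Network:
    """Pretend nova.objects.Network: ideally just a data carrier."""

    def __init__(self, id, label):
        self.id = id
        self.label = label

    # Object-layer proxying (the approach questioned above): the object itself
    # knows about neutron, so every consumer of the object is silently coupled
    # to neutron's data model instead of nova's DB.
    @classmethod
    def get_by_uuid_via_neutron(cls, uuid):
        net = FAKE_NEUTRON[uuid]
        return cls(id=net["id"], label=net["name"])


class NeutronNetworkAPI:
    """Pretend nova/network/neutronv2/api.py: the designated translation seam."""

    # API-layer proxying (the approach advocated above): translate between
    # neutron's model and nova's own object in one place, at the network API
    # layer, and leave the object alone.
    def get(self, uuid):
        net = FAKE_NEUTRON[uuid]
        return Network(id=net["id"], label=net["name"])


print(NeutronNetworkAPI().get("net-1").label)  # -> private
```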
21:17:28 I think we should make a special case, and merge the spec for L, since L specs are open
21:17:42 to give them a clear indication that we're ready for them to get going and we bless the direction
21:17:45 Actually just expose some info which is already in novaDB in extended attributes for instances.
21:17:59 mikal: so actually, it's not fixing ec2 per se, it's one API change on the v2.1 tree
21:18:00 I don't think that adding stuff to our api in K to support this is the best idea
21:18:25 So, I'm a bit confused to be honest
21:18:38 The big problem we have is broken auth with modern botos
21:18:46 mikal: nope, that's fixed
21:18:51 mikal: That's fixed
21:18:54 Are we going to not try and fix that and instead transition people immediately to stackforge?
21:19:00 Oh, huh
21:19:09 we imported the fix from the stackforge project, right?
21:19:11 So we're on to the transitioning people part of the plan and I just didn't notice?
21:19:16 Nice
21:19:30 mikal: pretty much. We actually created a new one and used it for stackforge as well.
21:19:32 dansmith: I don't remember the origin
21:19:34 So, that reduces the urgency of the API change, yes?
21:19:51 dansmith: That was an answer to you. Sorry.
21:19:56 I think Dan and I are kind of saying the same thing...
21:20:01 mikal: the API changes would mean the ec2 api project can make real progress during Kilo
21:20:02 mikal: it means we don't have to worry about a completely non-functional ec2 in kilo
21:20:05 dansmith: no, it was not ported
21:20:06 I'm not sure if we're talking about a priority task for kilo or liberty
21:20:29 which means by Vancouver we could maybe have some working replacement code
21:20:46 So, I have no problem with a priority task for liberty, so let's just not talk about that bit any more for now
21:20:49 if we hold until L opens up, that sticks a couple months' delay into the ec2 external effort
21:20:54 sdague: we could do that without merging it into K right?
21:21:01 dansmith: The problem is, until nova exposes some of those APIs (actually a very few ones), the stackforge project either has to go directly to novaDB or have gaps in its functionality
21:21:05 Are we really talking about this one API change if we do this in Kilo?
21:21:10 Or is there more hidden away as well?
21:21:20 as far as I know it's 1 API change
21:21:28 mikal: Yes
21:21:29 which I think we should do
21:21:30 sdague: 1 change with a bunch of things in it right?
21:21:45 Is this change proposed for review already?
21:21:47 dansmith: it's ~8 new extended-status attributes
21:21:54 right, okay
21:22:02 my point is,
21:22:10 dansmith: High priority are just two - rootDeviceName and deleteOnTermination for volumes. The rest can wait.
21:22:24 if it's just that and it's not complicated, then there's no reason not to just hold off on that because it's not blocking development
21:22:41 dansmith: it does delay deployment though right?
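To make the gap described above concrete, the sketch below contrasts the two options the standalone EC2 project has today: reaching into nova's database, or reading the same facts from a v2.1 server response once they are exposed. The attribute keys and values are assumptions in the style of nova's existing extended-attribute extensions; the actual spec and patch did not exist yet at this point in the discussion.

```python
# Illustrative only: key names below are assumed, not the proposed API change.

# What the standalone EC2 project has to do today: reach into nova's database
# for details the compute API does not expose, e.g. block device metadata:
#   SELECT device_name, delete_on_termination
#   FROM block_device_mapping WHERE instance_uuid = ?;

# What the proposed v2.1 change would allow instead: read the same facts from
# the server's extended attributes in the API response.
server = {
    "id": "9fbb9d1f-0000-0000-0000-000000000000",
    # hypothetical keys, in the style of the OS-EXT-SRV-ATTR:* extension
    "OS-EXT-SRV-ATTR:root_device_name": "/dev/vda",
    "os-extended-volumes:volumes_attached": [
        {"id": "vol-1", "delete_on_termination": True},
    ],
}

root_device = server["OS-EXT-SRV-ATTR:root_device_name"]
delete_flags = {
    vol["id"]: vol["delete_on_termination"]
    for vol in server["os-extended-volumes:volumes_attached"]
}
print(root_device, delete_flags)  # -> /dev/vda {'vol-1': True}
```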
21:22:46 but I understand the desire to flip the argument upside down
21:22:48 mikal: yes
21:22:54 dansmith: if it means you can't transition until you've deployed liberty
21:22:56 mikal: but that's not what sdague said :)
21:22:59 yeh, it seems low risk to me
21:23:14 mikal: it means you can't without the ec2 project reaching into our db
21:23:16 So, yeah, I think I want someone to show me the code
21:23:17 so I'd rather give the ec2 folks more momentum and do this change
21:23:35 mikal: which it does now
21:23:43 Yeah, I hate the db thing
21:23:48 surely
21:23:54 So if this change is a couple of lines, I'm inclined to let it in, to be honest
21:24:02 Which is why I'd like to see the proposed patch
21:24:39 mikal: Only the spec is ready. There is no code yet, because it was not approved or decided upon. But we can create the patch shortly.
21:25:02 alevine: I think that might make this conversation more concrete
21:25:20 Especially if the patch is small, so the amount of work we're asking you to do is relatively small
21:25:24 mikal: Is it ok to create the patch and get back to the subject next week?
21:25:30 alevine: yes
21:25:33 alevine: I like that plan
21:25:41 let me say one more thing
21:25:43 mikal: Great. We'll do that.
21:25:49 dansmith: go for it
21:26:24 I don't like that we had a list of priorities, we drew the line about no non-priority stuff at a date, and we're going to take something new which doesn't even have code up and promote it to a priority and merge it after that line on "a whim"
21:26:38 I don't like the precedent it sets and I don't think it's that critical
21:26:39 I can see that
21:26:44 But that's not what we're doing
21:26:56 but, I want to set the tombstone for ec2 as much as the next guy, so whatever
21:27:02 mikal: it totally is
21:27:07 We also didn't make removing ec2 a priority, but are now pushing for it
21:27:10 I mean, aside from the whim part :P
21:27:11 I think if we can see the code and assess its impact and importance, then we can talk about it then
21:27:33 We haven't said we'll merge it in kilo yet, we're still talking
21:28:13 so what are we going to do about the deprecation patch?
21:28:21 Yeah, there's that too
21:28:23 because that's still vaguely in limbo
21:28:28 So ttx didn't love it IIRC
21:28:37 I'm +2 on the deprecation patch
21:28:43 ESPECIALLY if we merge this API thing
21:28:44 But it seemed mostly about implementation
21:28:51 ie. just log instead of marking deprecated
21:29:04 dansmith: yeh, so that's part of my reason to want to merge the API change
21:29:09 sdague: I know it is
21:29:16 because I think that makes merging the deprecation more reasonable
21:29:25 We did something very similar for the zookeeper thing this week with zero debate
21:29:31 sdague: agreed
21:29:35 no one uses zookeeper
21:29:46 this is how we do these things, IMHO
21:29:53 they actually can't, it's super broken
21:30:50 and I'd really like Kilo to have ec2 deprecated, as I think that will help shift effort to the external project
21:30:58 no argument there
21:30:58 So my point is: what do people think of changing the ec2 deprecation to being like https://review.openstack.org/#/c/152173/3/nova/servicegroup/drivers/zk.py,cm
21:31:01 ?
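For readers who have not opened the zk.py review linked above: the "just log instead of marking deprecated" approach amounts to emitting a warning when the service starts, rather than removing or blocking anything. A minimal, purely illustrative sketch follows; the class name and message are invented here and this is not the actual ec2 deprecation patch.

```python
# A rough sketch of the "log a deprecation at startup" style referenced above
# (as done for the zookeeper servicegroup driver); illustrative only.
import logging
import warnings

LOG = logging.getLogger(__name__)


class EC2APIService:  # hypothetical stand-in for nova's in-tree EC2 service
    def __init__(self):
        msg = ("The in-tree EC2 API is deprecated and may be removed in a "
               "future release; the external ec2-api project is the intended "
               "replacement.")
        # Warn loudly at startup rather than failing, so existing deployments
        # keep working while operators get a clear signal to migrate.
        LOG.warning(msg)
        warnings.warn(msg, DeprecationWarning)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    EC2APIService()
```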
21:31:17 I want the word deprecated
21:31:22 because I'm an ass
21:31:25 The objections to the current patch see to me mostly about the specific meaning of that word
21:31:32 and because I think it's the right word
21:31:33 sorry, seem to be
21:31:43 yeh, so honestly, I also think deprecated is the right word
21:32:01 deprecated means "don't use this thing, we're not going to support it in the future"
21:32:07 right
21:32:17 not "it's not tested, but if it works for you, have at it!"
21:32:20 which I do think is the state of the world
21:32:27 So I am having trouble arguing against you because I agree...
21:32:37 The counter argument was "but the replacement isn't ready"
21:32:44 And might (theoretically) never be
21:32:51 it doesn't matter
21:33:10 the state of the world is that even if there is no replacement, our ec2 stuff continues to degrade and go unmaintained
21:33:31 * dansmith notes he's being super-negative today
21:33:51 So, the review does have five +2s...
21:33:57 dansmith: what about the long argument that it's not architecturally correct to have the ec2 api proxy to the openstack api?
21:34:00 I also think that's because very few of the core team are knee deep in ec2 any more
21:34:17 dims__: I think that argument is already false
21:34:17 which is why past patches weren't getting reviewed
21:34:28 reading ec2/cloud.py is interesting
21:34:38 almost all the history comes from pre-git days
21:34:38 dims__: the ec2 layer does some really unnatural things to keep working, even though our internal APIs are oriented to be most convenient for our own api layer
21:34:42 dansmith: i agree with you, is there consensus?
21:34:47 by people that aren't working on openstack anymore
21:34:57 dansmith: For the reason of unmaintained - the sub team of EC2 API formed...
21:35:09 swamireddy: it has been formed eight times since I joined the project :)
21:35:21 complete with summit sessions and everything :)
21:35:37 dansmith: ok... can we try a last chance... as we agreed last meeting...
21:35:43 Ok, so
21:35:50 we didn't agree to that :)
21:35:52 Let me take this to ttx and ask his advice
21:35:52 sdague: I wrote the first ec2 id to openstack uuid converter. I wanted it to die back then
21:36:01 Noting that I think we should mark it deprecated as well
21:36:02 edleafe: :)
21:36:09 edleafe: nice :)
21:36:22 mikal: sure
21:36:23 We do need to move on now though
21:36:32 edleafe: that's what I mean about unnatural stuff, by the way, and it's gotten worse I'm sure
21:36:35 yep, hopefully we'll have an api patch early next week
21:36:38 #action: mikal to talk to ttx about deprecation of nova's internal ec2 implementation
21:36:47 dansmith: yep
21:36:48 so we can see how risky that would be
21:37:02 #topic python-novaclient release
21:37:10 dansmith: Now there are a good number of people in the team to support it... I think we should try it as a last chance and decide at the summit
21:37:22 I think the release is unblocked now, yes?
21:37:42 mikal: you're the only one that can do those releases right?
21:37:45 jogo has pinned requirements for stable branches, which I think was the only thing blocking this
21:37:55 mikal: I was talking to john and thought it might be good to have at least another tz with those privs
21:37:57 dansmith: I think other people can, but traditionally it's been a ptl task
21:38:09 I'm not sure what privs it needs
21:38:12 Certainly LP permissions
21:38:22 But I think drivers already have all they need?
21:38:23 mikal: well, might be something to think about
21:38:30 I documented the release process on the wiki last time I did it
21:38:32 drivers definitely don't have privs to do that
21:38:36 someone checked
21:38:43 but if that's the intent, that's cool
21:38:47 Oh, interesting
21:38:52 Yeah, I have no problem with other people doing them
21:38:57 Let me do this one because it's faster
21:39:02 I'll check the docs as I go
21:39:02 sure
21:39:12 And then we can open it up to drivers (or some other subset)
21:39:27 So, no one sees a reason to not do a release todayish?
21:39:34 the nova-release group can do a client release
21:39:37 #link http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/python-novaclient.config
21:39:45 currently
21:40:07 Ahhh, a group of one
21:40:11 Thanks for the pointer anteaya
21:40:14 sure
21:40:21 #action mikal to do a python-novaclient release
21:40:30 #action mikal to make sure release docs are right
21:40:43 #action mikal to give a couple more people release perms
21:40:50 I think I got all of the plan there
21:40:52 so... is the icehouse pin in place?
21:40:55 jogo: ?
21:41:11 My reading of the thread was we were good to go
21:41:16 But I haven't checked git
21:41:31 sdague: I'm checking
21:41:45 yeh, icehouse is pinned
21:41:46 if you want different people than the nova-release group to do a novaclient release, change the client acl file to create a new gerrit group
21:41:58 python-novaclient>=2.17.0,<2.21
21:42:26 so as long as it's 2.21 we're good
21:42:28 pinned on both icehouse and juno, though the pins are slightly different :P juno is <=2.20
21:42:38 yeh, juno is autogenerated
21:42:44 icehouse is someone doing a thing manually
21:42:52 oh, that explains that
21:42:59 So, sounds like we're good to go
21:43:03 Let's move on again
21:43:13 #topic Gate status
21:43:32 mriedem: tell me of your things
21:44:01 ...or anyone else
21:44:08 Or is it all puppies and kittens this week?
21:44:16 the big meltdowns were on requirements issues
21:44:16 I don't think there is anything nova-specific
21:44:32 Yeah, I'm not aware of anything, but I like to check anyway
21:44:36 I was really tempted to try to break the new dependency stuff, but I didn't
21:44:36 Gonna move on again
21:44:38 there was the wedged stable gate thing I thought
21:44:41 I was chasing a unit test race because libvirt driver spawn treads
21:44:51 also threads
21:44:55 sdague: yeah, the thread on that one seems mostly to have a plan?
21:44:56 also threads
21:45:11 mikal: mostly, it pokes up more often than I expected
21:45:38 sdague is applying fire
21:45:44 People seem to know what's happening here though
21:45:47 So let's move on
21:45:54 #topic Bugs
21:46:06 I have to say I'm pretty happy with the trivial patch monkey thing
21:46:12 extremely
21:46:12 48 bug fixes merged so far
21:46:20 yay!
21:46:28 Sure they're all trivial, but it's good that we're landing the fixes
21:46:33 I think I feel a trend of the quality slipping a little
21:46:41 which I think is because we're through the super trivial ones
21:46:52 which is mostly good news
21:46:53 Oh, interesting
21:47:05 As in it might work well for clearing backlogs, but less for ongoing work?
21:47:12 i took a stab at cleaning up after our abandon script this morning (bugs were left in "In Progress"), about 100 bugs
21:47:13 the new trivial ones are definitely not so trivial sometimes
21:47:20 mikal: we'll see
21:47:29 mikal: we need to stay vigilant on the self-policing of things
21:47:29 Let's wait and see how we feel about how the process is working at the summit
21:47:30 sdague: y it's getting harder to find easy ones
21:47:45 We don't _have_ to always have 20 there
21:47:50 If there are sometimes only 5, so be it
21:47:53 I think it's a good process though
21:47:54 mikal: tell that to dims__ ! :)
21:48:00 Heh
21:48:05 because it's also educational about "why this isn't trivial"
21:48:10 mikal: it varies between 5-15
21:48:13 sdague: yep
21:48:17 which I'm hoping folks are finding useful feedback
21:48:24 sdague: yep
21:48:25 So, golf clap for the trivial patch monkeys
21:48:27 dims__ called me at 2am last night to let me know he had added five new bugs to the list
21:48:34 LOL
21:48:35 heh
21:48:36 dansmith: really?
21:48:39 mikal: yes
21:48:41 dansmith: I didn't know that was an option...
21:48:42 mikal: REALLY
21:48:44 NO
21:48:47 heh
21:48:50 dansmith: I SHALL DO THIS THING TOO NOW
21:48:50 lol
21:48:58 :)
21:49:08 I wanna move on again because we're running out of time
21:49:15 #topic Stuck reviews
21:49:30 So, I think the deprecation thing above could have been classified as one of these
21:49:42 Are there any other reviews wedged on reviewers agreeing on a plan?
21:50:13 ...going
21:50:17 sounds like not
21:50:19 ...going
21:50:27 ...gone
21:50:33 This is a good thing
21:50:40 #topic Open Discussion
21:50:48 dansmith: is oslo.versionedobjects you?
21:50:49 can I discuss my ffe now?
21:51:01 mikal: yeah, so oslo.versionedobjects is a thing now
21:51:17 we've been rapidly iterating on it to get it ready, but mostly for other projects
21:51:20 dansmith: can we wait for liberty to move?
21:51:25 I *think* it's probably too risky to do in kilo
21:51:36 dansmith: ++
21:51:37 dansmith: and a distraction from priority work, right?
21:51:43 but there will be some synchronization patches to keep things in nova in line with oslo, and vice versa
21:51:43 +1 to kilo
21:51:53 I would like to discuss the FFE request for passing capabilities requested in flavors to Ironic instance_info: http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg45394.html. Several Ironic Kilo features, including secure boot, trusted boot, and local boot support with partition images, depend on this function.
21:51:57 dims__: you mean liberty?
21:52:09 yeh, I think the oslo move is liberty
21:52:11 mikal: well, objects == priority, but yes, more churn than I think we need
21:52:38 dansmith: yeah, but this stuff is a tangent from what we thought we were doing with objects in kilo
21:52:47 so I wanted to make sure we agree on that, but also just highlight that (a) we'll want to do that when lemming opens and then (b) that there will be syncs to keep it from diverging too much
21:52:54 I think it's fine if we have to do a few sync patches in kilo
21:53:03 But let's talk about moving over to it with the liberty ptl
21:53:03 It only affects the nova ironic virt driver and the changes are very small. The spec has been merged https://review.openstack.org/136104 and the code https://review.openstack.org/#/c/141012/ has been reviewed by ironic reviewers.
21:53:27 We have discussed this FFE request at this week's ironic irc meeting http://eavesdrop.openstack.org/meetings/ironic/2015/ironic.2015-02-09-17.00.txt . Devananda and the Ironic folks at the meeting are in support of it. It will be much appreciated if we can get nova approval for this FFE request.
21:53:29 Ok, ironic ffe now
21:53:42 mikal, sorry I jumped ahead
21:53:44 wanyen_: we've said that the review process happens next week
21:53:55 wanyen_: so while I agree that all the things you've said are true
21:54:11 wanyen_: I think it's premature for me to announce an edict without talking it through with the team
21:54:27 mikal, yes. I understand. Since I am here I just want to give some info about this ffe.
21:54:34 wanyen_: and we did say at the beginning that we were going to try and approve zero ffes to free up focus for priority work
21:54:40 wanyen_: ok cool
21:54:49 just a preview of monday: I'm going to -1 all ffes!
21:54:54 wanyen_: but yeah, let me chat to specs-core and then we can talk this through next week
21:55:33 Anything else for open discussion?
21:55:43 Or early mark?
21:55:49 sdague, mikal: was AFK, releasing a new novaclient should be safe
21:55:53 mikal, thanks! when will the ffe review meeting be? perhaps I can ask Deva to attend?
21:56:24 jogo: thanks
21:56:38 wanyen_: well, I don't think that's necessary
21:56:44 wanyen_: we understand the arguments
21:56:50 agreed
21:56:52 mikal, ty
21:56:56 wanyen_: we just need to work out how much distraction we can handle while still landing the priority stuff
21:57:11 Because we really have to land the priority stuff, by definition
21:57:44 That's time up
21:57:46 * jogo is surprised at the number of FFEs
21:57:49 So no early mark
21:57:57 jogo: well, ffe requests
21:57:59 mikal, I understand but still hoping you can consider approving this ffe
21:58:07 jogo: there are no granted ffes so far
21:58:14 wanyen_: we will certainly consider it
21:58:18 And that's time
21:58:21 Thanks everyone
21:58:24 #endmeeting