15:00:42 #startmeeting manila
15:00:44 Meeting started Thu May 26 15:00:42 2016 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:48 The meeting name has been set to 'manila'
15:00:53 Hi
15:00:53 hi
15:00:54 hello o/
15:00:55 hello
15:00:56 hi
15:00:56 hola
15:00:57 hi
15:00:57 hello all
15:00:58 hi
15:00:58 hello
15:00:59 hello
15:01:03 hi
15:01:05 hi
15:01:30 looks like the gang's all here
15:01:37 #agenda https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:01:43 \o
15:01:52 nobody added any new topics other than the ones I had
15:02:05 which is good because my vacation starts this afternoon :-)
15:02:24 #topic N-1 milestone
15:03:06 next week is the N-1 milestone, believe it or not
15:03:12 * bswartz finds link
15:03:20 #link http://releases.openstack.org/newton/schedule.html
15:03:34 I know many of you have been working on specs and prototypes
15:04:03 we don't have a specific deadline for specs, but I think most would agree that specs should be wrapping up around now, and we should be aiming to merge them and get on with implementation
15:05:02 anyone have any questions or concerns about N-1 or specific specs or features?
15:05:11 bswartz: I did update the revert-to-snapshot spec. https://review.openstack.org/#/c/315695/7
15:05:23 bswartz: In keeping with last week's discussion.
15:05:27 bswartz: I'm not planning on doing a spec for user messages since it's a port of cinder
15:05:42 ameade: that's fine
15:06:07 for anyone reviewing the user messages code, look at the cinder spec if you have questions, I guess
15:06:28 Cinder's implementation is smaller in scope than what's expected in Manila
15:06:32 at least in scope of the API
15:06:38 if we decided to intentionally diverge from the cinder design I suppose we might want a spec for that, but the whole point is not to diverge, I think
15:06:38 So long as there are no real differences between the two implementations I'm fine with that
15:06:45 I don't like it, but it's fine :)
15:06:53 if we diverge there has to be a good reason
15:06:58 bswartz: +1
15:07:00 ameade: +1
15:07:03 bswartz: is there going to be a spec for access-groups? the wiki in the related BP seems a bit overloaded.
15:07:12 ameade: addition of filtering to the API is not enough?
15:07:27 rraja bswartz: alerted nidhi yesterday, she should be submitting a spec soon..
15:07:36 vponomaryov: there is a patch that adds filtering in cinder, and then i was going to add that to manila
15:07:48 rraja: I spoke with nidhi 2 weeks back and she did plan to convert the wiki to a proper spec
15:07:48 ameade: it's worth a spec
15:07:53 and pagination as well
15:08:14 bswartz: gouthamr: cool! good to know.
15:08:33 bswartz: welcome to review this (snapshot instance) patch: https://review.openstack.org/234658
15:08:41 anyone else think I need a spec for filtering and pagination?
15:09:32 ameade: seems like all our GET APIs should do filtering and pagination consistently.. so would you be doing anything drastically different? :)
15:09:34 ameade: I think they should be in the cinder spec if you're adding them there
15:09:47 ameade: I wouldn't mind one, but if I'm the only one then feel free to ignore
15:09:52 and yes I hope we don't diverge from other established patterns
15:10:17 bswartz: and I think we need to keep track of changing back to per-rule access status. BP or bug-fix?
15:10:18 ameade: any reason it wasn't a CP spec to begin with?
15:10:33 rraja: that's the next topic
15:10:35 yeah the pattern is well defined and the devref will be updated, i thought specs were just for discussion
15:10:56 bswartz: it kind of was at some point before specs, but we switched to bottom-up to actually get it done
15:11:03 if it's not something new then I don't see the point
15:11:17 okay
15:11:38 btw this whole specs thing is an experiment in newton -- we're not requiring them and there are no deadlines
15:11:48 we may learn that that was a bad idea and want to change the rules for ocata
15:12:19 but I'm glad to see that we're already getting a bunch of value out of having the specs repo
15:12:58 okay moving on
15:13:06 #topic update access
15:13:27 so there are a few things left to do with this feature
15:13:57 as rraja mentioned, I think that the DB schema change which moved the access rule status from per-rule to per-share was a mistake, and I think we should change it back in newton
15:14:35 also it's still possible for DB updates in those code paths to race against each other, so we need to add locking based on tooz
15:14:42 there's no volunteer for this work yet...
15:15:46 we don't need anyone to step up right now but I wanted to make it known that we need help here
15:15:56 speaking of tooz, gouthamr: how is that going?
15:16:39 * bswartz realizes gouthamr may have stepped away for daily standup meeting
15:16:50 we can come back to that
15:17:05 Just yell over the cube walls :)
15:17:10 so we have 3 remaining bullets from the design summit
15:17:18 dustins: I'm at home -- getting ready for my vacation
15:17:30 #topic Nova-network removal
15:17:39 #link https://etherpad.openstack.org/p/newton-manila-contributor-meetup
15:18:02 vponomaryov: did you suggest this topic?
15:18:06 yes
15:18:23 Manila has support for nova-networking in its network plugins, but nova-network has been deprecated for a long time.
15:18:51 actually I think they only *formally* deprecated it recently
15:19:01 therefore the question arises: do we still need to support it?
15:19:07 but yes, everyone has known that it was going away for a long time
15:19:18 bswartz: so, do you know for sure that it is widely used in deployments?
15:19:24 I'm okay with dropping it in newton
15:19:32 vponomaryov: I hope not
15:19:44 I wonder what distros claim to support
15:19:54 redhat/suse guys: do you support n-net still?
15:20:12 for SUSE, no
15:20:26 so dropping it is good!
15:20:32 the policy from the TC has been very clear
15:20:44 move to neutron or face obsolescence
15:20:46 I may be mistaken, but I do believe all of our stuff revolves around Neutron, not Nova-Net
15:20:57 I'm about 99.99% sure
15:21:09 six or seven nines
15:21:20 Personally I find neutron rather pleasant -- albeit complex
15:21:21 dustins: sounds sure enough
15:21:32 99.99997%?
15:21:40 bswartz: Give or take :D
15:21:42 we have "support exceptions" but I dunno of any that would hit manila deployments
15:21:46 right
15:21:59 anyone opposed to dropping nova-net support?
15:22:43 If someone can point me to a good book/article/scroll on Neutron, I'm down with that
15:22:52 #agree remove nova-net asap
15:23:06 dustins: https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst
15:23:34 the devstack doc does not suck
15:23:52 Awesome, thanks, bswartz!
15:23:57 also the openstack-manuals docs related to neutron are great resources -- the main problem with them is that there are too many
15:24:46 Ahh, good ol' paralysis of choice, I'll have a look, thanks!
15:24:52 #topic APIv1 removal
15:25:03 this is also vponomaryov I think
15:25:09 yeah
15:25:25 this one is simple too: do we have a plan for its deprecation?
15:25:26 vponomaryov is cleaning house
15:25:38 this one is more difficult I think
15:25:44 what makes us keep it around?
15:25:56 +1
15:26:03 is there a customer base using it?
15:26:03 if we follow the dominant philosophy that we can't break anything ever, then I would say no
15:26:21 or is it just philosophical at this point?
15:26:31 we have precedent
15:26:38 Cinder dropped v1
15:26:47 for its API
15:26:53 let's look at this another way
15:26:57 what is the benefit we gain from dropping it?
15:27:03 vponomaryov: I believe Cinder's v2 has been around longer than our v2
15:27:11 I can't think of anything that's using the v1 API for Manila; there's a lot more value in v2's APIs
15:27:14 reduced amount of supported stuff
15:27:25 bswartz: code can be simplified if we drop v1
15:27:36 dustins: I am aware of one such thing - OpenStack rally
15:27:37 And we don't have to keep making sure that the old v1 APIs work as we add new features
15:27:45 how much ongoing work do we save, really, though?
15:27:47 dustins: but it is only for testing
15:27:53 is this like a few days of someone's time per release?
15:28:02 vponomaryov: and I imagine it'd be pretty easy to update to v2, yeah?
15:28:06 when was manila v2 introduced?
15:28:11 dustins: yes
15:28:19 tbarron: liberty
15:28:26 we introduced it w/ microversions
15:28:27 bswartz: ty
15:28:49 if we drop v1, then kilo-era clients will completely fail to talk to newton servers
15:29:19 it's safe to say there aren't many of those
15:29:29 liberty has just been released by distros
15:29:32 bswartz: why should the server support old clients? Clients should support old servers
15:29:37 however if there is just 1, are we okay forcing that guy to upgrade?
15:29:52 vponomaryov: both are needed
15:30:25 bswartz: is there value for that guy in keeping the old stuff?
15:30:27 you can only get wide compatibility when both sides can negotiate down
15:31:07 I'm not opposed to dropping v1 eventually
15:31:20 I think it's proper to question whether now is the right time, and why
15:31:40 there is some cost to keeping it around, I accept that, but can we quantify it?
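[editor's note] The "both sides can negotiate down" remark above is essentially what API microversion negotiation does: client and server each advertise a supported range and agree on the highest common version. A minimal, hypothetical sketch of that logic (function name and behavior are illustrative, not manilaclient's actual code):

```python
def negotiate_microversion(client_range, server_range):
    """Pick the highest API microversion supported by both sides.

    Each range is an (oldest, newest) pair of "X.Y" strings. Returns
    the agreed version string, or None when there is no overlap (e.g.
    a v1-only kilo-era client against a server that dropped v1).
    """
    def parse(version):
        major, minor = version.split('.')
        return int(major), int(minor)

    # The usable window is the intersection of the two ranges.
    low = max(parse(client_range[0]), parse(server_range[0]))
    high = min(parse(client_range[1]), parse(server_range[1]))
    if low > high:
        return None  # no common version: caller must fail gracefully
    return '%d.%d' % high
```

With this scheme a newer client keeps working against older servers by negotiating down, which is the compatibility direction the discussion cares about.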
15:31:43 bswartz: at least we can announce its deprecation
15:31:55 we already announced its deprecation in liberty
15:31:57 +1 at least
15:32:01 i think manila may have some leeway here since it was only in liberty that it was declared ready for production upstream
15:32:17 or maybe it was early in mitaka
15:32:32 in any case, v1 is clearly deprecated
15:33:05 will we save much by taking it out? just writing the patches to take it out might be more expensive than the work to leave it in one more release
15:33:38 bswartz: simpler code is always better
15:33:44 * bswartz thinks he might be the only one against this idea...
15:34:09 if it was clearly deprecated, then I think it is time to remove it
15:34:23 I personally have not seen us stumble across the need to keep v1 backwards compatibility that often with new features. The only one was update_access, and we are bringing the state field back...
15:34:24 markstur_: +1
15:34:24 ganso: you pointed out that some distros are just now releasing liberty
15:34:55 ganso: are you suggesting we remove it in newton or keep v1?
15:35:07 bswartz: I am suggesting we keep v1
15:35:11 bswartz: in newton at least
15:35:39 okay so that's 2 for keeping it, but probably everyone else for removal
15:35:47 and we have a winner of the lottery where we find a volunteer for maintaining the v1 API!
15:35:52 lol
15:35:54 hahaha
15:35:55 heh
15:36:05 well I don't feel strongly
15:36:09 vponomaryov, nice trick! :-)
15:36:11 may I take back what I just said?
15:36:16 :)
15:36:18 if we put this to a vote I think we would decide to remove it
15:36:27 ganso: do you feel strongly, or is it just your opinion?
15:36:28 lol
15:36:39 maybe give it a week before deciding
15:37:04 tbarron: Indeed, let everyone mull it over for a bit
15:37:04 procrastination
15:37:10 markstur_: Or that
15:37:12 :P
15:37:18 ganso, about the distro thing - suse is not providing a v1 endpoint for the liberty-based release, so we are not using/supporting it
15:37:21 bswartz: it is my opinion, but I also have a customer running Manila Kilo, and I am not sure how he is going to handle the transition or when he plans to upgrade
15:37:53 ganso: yeah, if there are customers using it, then that counts
15:38:03 bswartz: a compromise: deprecated in Liberty, announce in Newton that it *will* be removed in Ocata. That way we're committed.
15:38:10 ganso: upgrading one release at a time should be ok
15:38:13 it's not just a matter of what makes devs' lives easier
15:38:22 well it only counts if he's using the client, and he expects the client to work against both kilo and newton at the same time
15:38:49 even then, he could pick any client released after liberty and that would be compatible with kilo servers and newton servers
15:38:50 bswartz: agree, but that could be checked in a week
15:39:26 to be clear, i'd like to get rid of it
15:39:35 I'm not a fan of kicking the can
15:39:45 if we decide to keep it, then we should have a reason
15:39:53 but i don't like impromptu decisions on big issues
15:40:04 my main motivation for keeping it is my suspicion that keeping it might actually be less work than removing it
15:40:20 but if others disagree then I say we remove it, the sooner the better
15:41:04 again, I would like to ask: have we really stumbled across the NEED to remove it?
15:41:13 I'd rather not set a precedent for keeping everything forever. But that doesn't really settle the decision.
15:41:29 like, I am implementing a new feature, and my feature needs a workaround because we need to be v1 compatible, or something like this
15:41:31 markstur_: there are those in the community who do want to set that precedent
15:41:52 ganso: you should not be adding any more changes to v1 after microversions..
15:42:10 also, we don't have a v1 CI or anything like that, so how do we know it is not broken already?
15:42:11 ganso: at least that's what i thought.. (was going to be my question on your migration spec :) )
15:42:27 ganso: Good point
15:42:28 okay, in the interest of moving on, let's table this for 1 week
15:42:38 ganso: actually, now we have a mix of API testing in CI
15:42:40 gouthamr: migration is v2, and it is not changing anything v1-specific
15:42:58 next week I expect we will make a decision to remove it, unless someone comes up with a good reason not to
15:42:58 ganso: you have a v1 API in deprecation, no?
15:42:59 ganso: old code uses v1, new code v2
15:43:03 having new clients work with old releases (forever) is the more important one: client upgrade w/o breaking compatibility
15:43:21 vponomaryov: but that is just the API; underneath it is all v2
15:43:39 gouthamr: I do? o_O
15:43:40 #action bswartz add this item to the agenda and make decision next week
15:43:51 bswartz, +1 next week decision
15:44:05 #topic Model updates from the driver
15:44:11 Is bswartz on vacation next week?
15:44:15 ganso: lemme double check.. when i was documenting migration, i was confused about that..
15:44:19 markstur_: good point
15:44:21 no, I'm back tuesday
15:44:38 just a 5-day weekend for me
15:44:59 okay this is actually a large topic so let me get started
15:45:00 bswartz: are you not doing a cut for n1?
15:45:12 xyang1: we'll do N-1 next week
15:45:31 bswartz: ok
15:46:13 so the design pattern we inherited from nova/cinder is that drivers don't interact with the database directly -- they receive input from the driver method args, and they make changes by returning "model update" dicts
15:46:40 this is a good design because it limits access to the DB schema by drivers and prevents evil things from happening too easily
15:46:53 bswartz: and over time, we've been conservative about that too, not accepting entire model updates, and limiting them to certain fields.
15:47:04 there are a few downsides to this approach, however
15:47:48 in some cases, a driver method does more than 1 thing, and when it completes half of its work, then fails on the second half, you have a problem, because you want to both update the database and throw an exception
15:48:13 we have dealt with that case by....
15:48:15 * bswartz cringes
15:48:24 ...attaching a model update to the exception we throw
15:48:32 bswartz: :O
15:48:51 bswartz: do you have an example of an operation that does that?
15:48:51 * dustins barfs
15:49:00 ganso: create_share
15:49:13 ganso: create share and share server
15:49:24 so I think we all agree that's not a good architecture
15:49:29 but it solved that specific problem
15:49:38 however there are deeper issues
15:50:08 what if the driver does half its work and then, rather than failing and throwing an exception, the process simply dies? (think kill -9)
15:50:29 in that case the model update just never happens, and we leave garbage behind
15:51:28 now one way to solve this problem is to break down driver methods into smaller pieces -- but you'll never get an interface that works for every single backend implementation
15:51:52 in fact we explicitly acknowledged this in the share replication design and made an interface that was fairly generic
15:52:58 this resulted in drivers that do replication doing lots of things in those methods, and there is significant potential for failures in those methods, or processes dying, to have done lots of work that's not recorded in the database
15:53:29 now one way to address this problem would be to allow drivers to access the DB directly in special cases
15:53:43 I think that leads us down a slippery slope and is a bad idea
15:54:04 bswartz: we have discussed a driver_db_helper in the past
15:54:13 we could also turn some methods into generators that yield multiple model updates
15:54:32 that might be pythonic, but I find it complex and hard to understand
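[editor's note] The model-update pattern and the exception-carrying-an-update workaround described above can be sketched as follows. All class and function names here are hypothetical illustrations, not Manila's actual code:

```python
# Sketch of the inherited design: drivers never touch the DB; they
# return "model update" dicts, and the manager persists them. The
# exception subclass shows the workaround for partial failures.

class ShareBackendException(Exception):
    """Failure that still needs to persist partially completed work."""
    def __init__(self, message, model_update=None):
        super().__init__(message)
        self.model_update = model_update or {}


class FakeDriver:
    """Hypothetical driver: returns update dicts instead of writing DB."""

    def create_share(self, share):
        export = '10.0.0.1:/exports/%s' % share['id']
        if share.get('simulate_failure'):
            # Half the work is done (the export exists); report it via
            # the exception so the manager can still record it.
            raise ShareBackendException(
                'second step failed',
                model_update={'export_location': export,
                              'status': 'error'})
        return {'export_location': export}  # the "model update" dict


def manager_create_share(db, driver, share):
    """Manager applies model updates from returns and exceptions alike.

    `db` is a plain dict standing in for the real DB API.
    """
    try:
        update = driver.create_share(share)
        update.setdefault('status', 'available')
    except ShareBackendException as exc:
        update = exc.model_update
    db[share['id']] = dict(db.get(share['id'], {}), **update)
```

Note the kill -9 scenario raised in the discussion is exactly what this sketch cannot handle: if the process dies inside `create_share`, no update dict is ever returned or raised, so the export is created on the backend but never recorded.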
15:54:57 ganso: yes, what my proposal is, is to add a db helper object of some kind
15:55:42 if the manager creates an object that wraps the DB handle and passes that object into the driver methods, then drivers could make multiple updates in a single method, but we could control what the driver had access to by the design of that object
15:55:51 you gonna make a spec?
15:56:28 ameade: I wanted to throw the idea out there and see if people thought it was terrible or good first
15:56:56 ideally we would have covered this in austin, but we're a busy team
15:56:57 bswartz: we can prevent the evil stuff in the helper itself
15:57:07 ganso: that's the theory
15:57:15 fencing in the db shenanigans at least helps us spot it, limit it, and prevent direct access when it would otherwise be necessary
15:57:32 we could even tailor special helper objects for each method so that each method only had access to update the fields that it should update
15:57:54 do we need to be that specific?
15:58:03 maybe just a driver db module
15:58:04 ameade: possibly not
15:58:16 we also don't want to create confusion for new driver authors, though
15:58:27 * tbarron is thinking we'd need to version it/them
15:58:42 whenever we're not explicit -- driver authors tend to get confused and do weird things
15:58:43 Maybe not specific for reads, but more specific for things that we cringe at
15:58:49 bswartz: i would really like to see/work on the change in representation of these resources first... oslo versioned objects to begin with?
15:59:13 actually reads are more important to prevent than modifications
15:59:17 bswartz: because whatever you make, we'll have to re-make if we do OVO
15:59:38 bswartz: really? I thought reads were not harmful
15:59:49 typically drivers don't want to modify much, but for whatever reason people get crazy ideas and want to read all the fields they shouldn't be depending on
16:00:08 reading of fields creates dependencies that we have to avoid breaking
16:00:21 bswartz: oh I see
16:00:23 it's better to prevent them so there's no dependency at all
16:00:28 bswartz: time check
16:00:32 argh
16:00:34 it's also possible to just have a list of db methods each driver method can call
16:00:39 with a decorator
16:00:39 well, at least I got to present this idea
16:00:41 ameade: +1
16:00:53 ameade: +1
16:00:55 good brainstorm
16:00:57 we can discuss it more next week
16:01:03 thanks all
16:01:13 please prioritize spec reviews over code reviews
16:01:22 bswartz: +1
16:01:23 see you next week
16:01:27 #endmeeting
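[editor's note] The "db helper object" and decorator-whitelist ideas from the final topic could be sketched roughly as below. All names are hypothetical illustrations of the proposal, not Manila code:

```python
# The manager would construct a helper wrapping the real DB API and
# pass it into driver methods, so a driver can record intermediate
# progress while the helper enforces what it may touch.

class DriverDBHelper:
    """Exposes only whitelisted field updates to the driver."""

    def __init__(self, db_update_func, allowed_fields):
        self._update = db_update_func
        self._allowed = frozenset(allowed_fields)

    def update_share(self, share_id, fields):
        illegal = set(fields) - self._allowed
        if illegal:
            raise ValueError('driver may not update %s' % sorted(illegal))
        self._update(share_id, fields)


def allowed_db_methods(*names):
    """Decorator variant: declare which DB calls a driver method may use."""
    def decorator(func):
        func.allowed_db_methods = frozenset(names)
        return func
    return decorator


class ExampleDriver:
    @allowed_db_methods('update_share')
    def promote_replica(self, replica, db_helper):
        # Record progress directly, instead of returning one big model
        # update only at the very end of a long-running method.
        db_helper.update_share(replica['id'], {'replica_state': 'active'})
```

Tailoring one helper per driver method (as floated in the discussion) would just mean constructing `DriverDBHelper` with a different `allowed_fields` set for each call site; the read-prevention concern could be handled the same way by simply not exposing any getter methods on the helper.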