15:00:19 <bswartz> #startmeeting manila
15:00:20 <openstack> Meeting started Thu Aug  4 15:00:19 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 <openstack> The meeting name has been set to 'manila'
15:00:29 <bswartz> hello all!
15:00:30 <tovchinnikova> \\//
15:00:35 <zhongjun_> hello
15:00:35 <tpsilva> hello
15:00:36 <ganso> hello
15:00:37 <rraja> hi!
15:00:37 <aovchinnikov> hi
15:00:37 <vponomaryov> hello
15:00:37 <jseiler> hi
15:00:38 <tbarron> hi
15:00:49 <MikeG451> Hi
15:00:50 <zengyingzhe_> Hi
15:00:56 <xyang1> hi
15:00:57 <gouthamr> hey
15:01:10 <dustins> \o
15:01:39 <bswartz> alright!
15:01:43 <bswartz> #topic announcements
15:02:26 <bswartz> first of all, welcome tbarron to the manila core reviewer team
15:02:31 <zhongjun_> tbarron: congratulations!
15:02:33 <bswartz> I still haven't made it official
15:02:40 <xyang1> tbarron: congrats!
15:02:40 <bswartz> but I'll do that after this meeting
15:02:44 <rraja> tbarron: congrats!
15:02:53 <ganso> tbarron: congrats! welcome to core team!
15:02:59 <bswartz> secondly, this is week R-9
15:03:00 <dustins> wooooohoooooo!
15:03:03 <bswartz> 4 weeks to feature freeze
15:03:06 <tbarron> thanks all, i'm honored and will try to do a good job.
15:03:42 <bswartz> no special deadlines today but any major features that aren't getting reviews by now will probably not make it for newton
15:04:21 <bswartz> we have a lot of new stuff proposed, and honestly I'll be surprised if all of it merges
15:04:34 <bswartz> we'll most likely have to make some tough choices in the next few weeks
15:04:45 <bswartz> so be proactive about getting reviews
15:05:01 <markstur> tbarron, congrats!
15:05:03 <ameade> groups > user messages correct?
15:05:09 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:05:30 <bswartz> ameade: personally I'd rather see user messages
15:05:37 <bswartz> 2 topics for today
15:05:47 <vkmc> o/
15:05:48 <bswartz> #topic Migration APIs
15:05:57 <bswartz> #link http://lists.openstack.org/pipermail/openstack-dev/2016-August/100682.html
15:06:51 <ganso> my reply #link http://lists.openstack.org/pipermail/openstack-dev/2016-August/100809.html
15:06:56 <bswartz> so this discussion was started as a result of a few drivers that are actually trying to implement the driver-assisted migration share driver interface
15:07:41 <bswartz> the existing interface was (arguably) confusing and we're proposing to simplify the whole interface by basically not doing 1-phase migrations
15:08:10 <bswartz> because end users are easily able to write a script which duplicates what we had for the "1 phase" migration
15:08:50 <bswartz> for those who aren't familiar, every migration operation has a few phases
15:09:33 <bswartz> there is a long phase where the data gets copied, and the short phase where the share changes its export location to the new place
15:09:58 <bswartz> and we felt it was essential that users control when that phase transition occurs
15:10:20 <bswartz> the only thing the 1-phase migration did was automate moving to the second phase after the first phase completed
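
(For context: the client-side script bswartz refers to would reproduce 1-phase behavior on top of the 2-phase API roughly as in this minimal Python sketch. The client object, the migration_start/migration_complete-style method names, and the task_state values are assumptions for illustration, not a confirmed manilaclient API.)

    import time

    def one_phase_migration(client, share_id, dest_host, poll_interval=30):
        """Emulate 1-phase migration on top of the 2-phase API (sketch)."""
        # Long phase: kick off the data copy.
        client.shares.migration_start(share_id, dest_host)

        # Poll until the driver reports the copy is done and the share
        # is ready to switch its export location.
        while True:
            share = client.shares.get(share_id)
            if share.task_state == 'migration_driver_phase1_done':  # assumed state name
                break
            if share.task_state == 'migration_error':               # assumed state name
                raise RuntimeError('migration of %s failed' % share_id)
            time.sleep(poll_interval)

        # Short phase: the cut-over that changes the export location.
        client.shares.migration_complete(share_id)
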
15:11:25 <bswartz> for those that have read ganso's ML post, (D) is what I propose
15:11:42 * bswartz waits for people to read...
15:12:44 <ganso> I would like to emphasize that we are mostly discussing the user experience here; it is very simple to do the 1-phase thing by automating the call to migration-complete
15:13:11 <ganso> so our concern is making sure the API is not confusing to the end user and is what we want, so we don't have to come back and change this again afterwards
15:13:23 <bswartz> okay if there's no comments I'll assume everyone agrees or doesn't care and we can move on
15:13:43 <ganso> bswartz: still, there is the polling approach
15:13:50 <ganso> bswartz: which is between B and C
15:14:10 <ganso> bswartz: I implemented B in latest patch that is waiting to merge on gerrit
15:14:25 <ganso> bswartz: and C has polling in manager, where B doesn't
15:14:49 <bswartz> honestly I'm okay with B in the short term but it will never work long term
15:14:53 <zhongjun_> But if we implement 1-phase migration outside of manila, won't it cost a lot of resources? It would mean multiple interactions between the manila client and the manila service, since the client would have to poll the service to find out whether the 1st/2nd phase is complete.
15:15:00 <ganso> bswartz: I probably wasn't very clear but it is B+D or C+D what we choose
15:15:20 <bswartz> in the short term we want to ensure the REST API is what we want to set in stone, and we need to make sure the driver interface isn't too terrible
15:15:27 <bswartz> we can iterate on the driver interface in future releases
15:15:47 <ganso> zhongjun_: you are correct
15:16:16 <bswartz> the main problem with (B) is that the driver is allowed to block, which means that when the service is restarted your migration simply stops and you never find out
15:16:33 <ganso> bswartz: both B and C require recover mechanisms
15:16:41 <ganso> bswartz: recover mechanism for C is simpler
15:17:02 <ganso> bswartz: both the driver and manager will need to be aware of this recover mechanism
15:17:30 <bswartz> (D) would as well
15:17:57 <ganso> bswartz: for (D) it does not matter
15:18:19 <ganso> bswartz: since it is just a small detail that changes to correspond to the API
15:18:35 <bswartz> zhongjun_: yes the client would need to poll at some frequency -- things like horizon do that already for other purposes
15:19:29 <ganso> bswartz: in (B), I expect the recover mechanism to be: "if status is 'driver_in_progress', invoke the driver's migration-start again and the driver will figure out what to do in the backend to continue migration"
15:19:40 <bswartz> that's only if the client wants the migration to complete as early as possible however
15:20:02 <ganso> bswartz: in (C), recover can be "if status is 'driver_in_progress', invoke the driver's migration-continue as it normally would"
15:20:39 <ganso> bswartz: either way, the driver will need to figure out what state the storage is in, and how to continue
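
(For context: a rough sketch of the two recovery paths being compared here. The 'driver_in_progress' state, the db helpers, and the driver method names are assumptions for illustration; in (B) the manager must re-enter a blocking driver call after a restart, while in (C) it simply resumes its normal short polling calls.)

    # Assumed state name and helpers, for illustration only.
    DRIVER_IN_PROGRESS = 'driver_in_progress'

    def recover_migrations_option_b(db, driver):
        # (B): migration_start blocks until the migration finishes, so on
        # restart we re-invoke it and trust the driver to detect and
        # resume the half-done migration on the backend.
        for share in db.shares_with_task_state(DRIVER_IN_PROGRESS):
            driver.migration_start(share)  # blocks again, possibly for hours

    def recover_migrations_option_c(db, driver):
        # (C): the manager owns the polling loop, so recovery is just
        # "keep polling"; each migration_continue call returns quickly.
        for share in db.shares_with_task_state(DRIVER_IN_PROGRESS):
            if driver.migration_continue(share):
                db.update_task_state(share, 'migration_driver_phase1_done')
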
15:20:56 <bswartz> ganso: I see
15:21:10 <bswartz> I think (B) sets a bad precedent though
15:22:08 <ganso> yes, (C) is more ideal. Also, if the driver interface is async, restarting the service may have no effect at all on the migration if the backend is still busy processing the last command
15:22:23 <bswartz> yes
15:22:50 <bswartz> what we want is a way to cleanly shut down a m-shr service for maintenance
15:23:08 <ganso> ok, I guess we can agree to (C), and I guess also (D) to remove '1-phase' from API...
15:23:16 <bswartz> and that can only be done if driver methods return within finite time
15:23:32 <vponomaryov> bswartz: migration is an admin-only operation; the admin can perform maintenance at any other time
15:23:51 <bswartz> vponomaryov: that's not true
15:24:07 <ganso> I would just like to remind everyone that we could also keep 1-phase and default to 2-phase. The effort to keep 1-phase in the code is very minimal
15:24:14 <bswartz> vponomaryov: only the admin can perform a migration to a specific pool, but end users will be able to trigger migrations through other APIs such as retype
15:24:44 <ganso> bswartz: good point
15:24:56 <ganso> bswartz: for other operations, what would we expect?
15:24:59 <bswartz> ganso: the goal is to solidify the API and I'd rather start with something minimal
15:25:10 <ganso> bswartz: user wants to retype, so it is always 2-phase as well?
15:25:19 <vponomaryov> bswartz: right, forgot about retype
15:25:24 <bswartz> if we go with 2-phase only and we want to add a one-shot API in the future, that's an easy enhancement to add
15:25:53 <bswartz> if we start with the more complex API you can't fix it in the future if we change our minds
15:26:38 <ganso> alright
15:26:46 <bswartz> or we could do something silly like make part of the API non-experimental but leave other parts experimental
15:26:48 <bswartz> >_<
15:27:26 <bswartz> okay next topic
15:27:33 <bswartz> #topic Correct approach for avoiding races
15:28:22 <bswartz> as jcsp pointed out in Austin, we haven't really specified anywhere what our locking protocol is or should be
15:29:21 <bswartz> It was my mistake to presume that the solution was obvious -- because I've had the same arguments in other contexts (like cinder) and thought there was a general understanding of how we should use DB states and locks to avoid race conditions and other concurrency issues
15:29:32 <bswartz> #link http://swartzlander.org/20160804_103614.jpg
15:29:53 <bswartz> ^ This is my crappy whiteboard rendering (sorry I'm not an artist)
15:30:28 <bswartz> The example I'm illustrating is taking a snapshot of a share
15:30:32 <markstur> decent primitive art
15:30:47 <tbarron> what's the review number for that spec? :D
15:30:49 <bswartz> but this could apply to any workflow that's not idempotent
15:31:51 <bswartz> what I'm trying to illustrate is that each service should hold a lock while it's examining or manipulating the share state in the DB, but locks are NOT held across RPC or other network boundaries
15:32:21 <tbarron> +1
15:32:22 <bswartz> also locks shouldn't be held while in the driver method call because nobody knows what the driver is going to do
15:32:33 <tbarron> +1
15:32:52 <bswartz> the purpose of the locks is simply to protect the state change so 2 requests can't race on a state change
15:33:01 <ganso> so, update_access should not have a lock that includes the driver running and its status update
15:33:03 <bswartz> the state change is what prevents concurrent access
15:33:53 <bswartz> Ideally locks should be held for tens of microseconds at the most
15:34:14 <bswartz> we don't want situations where acquiring a lock takes a noticeable amount of time
15:34:32 <bswartz> because code should grab the lock and change the state and release the lock
15:34:41 <xyang1> bswartz: are you checking and updating db status in one step?
15:35:12 <bswartz> for things that take longer, transitional states are the right way to block other conflicting activities
15:35:23 <bswartz> xyang1: yes precisely
15:35:29 <gouthamr> test and set.
15:35:36 <tbarron> ganso: right, the db state change should provide the exclusion
15:36:00 <bswartz> once you have the lock you should read the state, and if it's not the state you expect, fail
15:36:10 <tbarron> gouthamr: mechanism is another question
15:36:38 <bswartz> gouthamr: yeah or read-modify-write -- as long as it's atomic
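
(For context: a minimal sketch of the whiteboard pattern, holding a lock only around the atomic read-check-write of DB state, never across RPC or driver calls. The db_get_status/db_set_status helpers and the transitional state name are hypothetical stand-ins for the real DB API.)

    from oslo_concurrency import lockutils

    def begin_snapshot(context, share_id):
        # Hold the lock only for the test-and-set on the DB state.
        with lockutils.lock('share-%s' % share_id):
            status = db_get_status(context, share_id)  # hypothetical helper
            if status != 'available':
                # Another request won the race; fail fast.
                raise Exception('share %s is busy (%s)' % (share_id, status))
            # The transitional state now blocks conflicting operations.
            db_set_status(context, share_id, 'snapshotting')  # hypothetical state
        # Lock released: the long-running work happens outside it,
        # typically via an RPC cast to the share manager.
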
15:37:27 <bswartz> so there was a specific bug
15:37:35 <bswartz> on the update access code path
15:37:49 <bswartz> where race conditions can easily occur
15:37:56 <bswartz> and that's where I wanted to employ this approach first
15:38:11 * bswartz looks for bug
15:38:17 <tpsilva> #link https://github.com/openstack/manila/blob/master/manila/share/access.py#L125
15:38:19 <tpsilva> #link https://github.com/openstack/manila/blob/master/manila/share/api.py#L1225
15:38:23 <tpsilva> these are the lines
15:38:39 <ganso> tpsilva: almost matching line numbers
15:38:47 <vponomaryov> #link https://bugs.launchpad.net/manila/+bug/1566815
15:38:47 <openstack> Launchpad bug 1566815 in Manila "share manager fails to remove access rules on replicated share" [Critical,In progress] - Assigned to Valeriy Ponomaryov (vponomaryov)
15:38:53 <bswartz> ty
15:38:53 <gouthamr> bswartz: wait, the problem with update_access is that we have two resources, and the state of those two resources was maintained on one resource
15:39:05 <gouthamr> bswartz: at least one of the problems
15:39:27 <gouthamr> bswartz: i mean access_rule and share_instance..
15:39:30 <bswartz> gouthamr: there could easily be more problems than the race
15:39:51 <bswartz> but if we fix the race I think the rest will get easier to fix
15:39:59 <vponomaryov> gouthamr: it was not a stably reproducible bug, it is exactly racy
15:40:10 <bswartz> the race in update access was known since mitaka, and we planned to fix it in newton
15:40:15 <ganso> gouthamr and I were discussing yesterday an alternative fix that could be more ideal and could even avoid the locks being proposed as an immediate fix: adding back the access_rule 'state' field
15:40:26 <tpsilva> I'm with gouthamr and ganso
15:40:41 <bswartz> ganso: we shouldn't be trying to avoid usage of locks
15:40:53 <bswartz> correct behavior is impossible without atomic state changes
15:41:09 <gouthamr> bswartz: this particular case is allowing access_rules to be added and deleted via the API.. and not locking out on state changes on the share instance
15:41:15 <bswartz> there has been an ongoing misconception that locks in the API service are a bad thing
15:41:29 <ganso> bswartz: removing the access_rule 'state' field has complicated what we could do. Rules had a very well defined workflow: new => active / error
15:42:14 <bswartz> as long as we can ensure that locks are never held for longer than the time it takes to read/write a database column then locks in the API service will be harmless
15:42:31 <ganso> bswartz: the access_rules_state field in share_instance model has its value changed by more than one entity at several times, so it is prone to this kind of problem
15:42:55 <bswartz> ganso: that other fix is needed too
15:43:17 <bswartz> we have lots of bugs, and plenty of work to do before release
15:43:39 <bswartz> I'm trying to get agreement about the general mechanism for avoiding races because they exist for ALL of our APIs
15:44:20 <tbarron> well, i would avoid the word mechanism in your prev sentence
15:44:26 <bswartz> for years now we've tried to address these problems with bandaids and workarounds
15:44:41 <tbarron> that question remains open even if we agree on your proposal
15:44:47 <bswartz> it's time to start actually using locks to protect our state changes from races
15:44:50 <tbarron> and they should be decoupled
15:44:55 <tbarron> +1
15:44:56 <gouthamr> bswartz: okay, are we okay with sketching this out in a spec and discuss the ramifications?
15:45:18 <ganso> I agree with the approach, such as in the snapshot example. For access_rules, I agree with it as an immediate fix, not an ideal one; we should revisit this in ocata
15:45:22 <bswartz> gouthamr: if people want a spec I can write a spec but I don't think that will help
15:45:23 <gouthamr> tbarron: +1 -> what mechanism is a different question.. i think we would like the abstraction for now.
15:45:27 <tbarron> gouthamr: you saw the spec, it just needs a review number
15:45:31 <tbarron> :D
15:45:51 <bswartz> tbarron: you raise a good point
15:46:07 <gouthamr> tbarron: sure.. bswartz and i doodle on whiteboards all day and keep running into different bugs :D
15:46:15 <tbarron> :D
15:46:17 <gouthamr> s/bugs/gotchas
15:46:23 <bswartz> other projects have tried to tackle this issue with a different solution
15:47:03 <tbarron> what you said allows for different ways to protect the DB state change in api
15:47:12 <bswartz> cinder for example designed a somewhat complex DB-mechanism for doing a test and set to avoid usage of proper locks
15:47:24 <tbarron> which it should, since which way to do it is an open question
15:47:32 <bswartz> I've heard 2 complaints about the cinder approach
15:47:38 <tbarron> bswartz: the test and set is just another way to do what you describe
15:47:43 <bswartz> 1) it limits what you can update to a single table
15:47:48 <tbarron> i'm not arguing for it or gainst it atm
15:47:59 <bswartz> 2) mortals don't understand how the code works
15:48:10 <tbarron> just saying that there's a followon discussion if we agree to what you said today
15:48:35 <bswartz> I'm hoping that locks are more idiot-proof than test-and-set
15:49:13 <tbarron> so you are arguing for dlm mechanism, not just for exclusion around DB change to -ing state
15:49:15 <bswartz> locks are definitely more flexible in terms of what you can do
15:50:16 <bswartz> tbarron: yes but the lock mechanism only has to be as distributed as the environment where manila runs, and for most of us that's a single node
15:50:22 <xyang1> bswartz: are we going to use tooz or not
15:50:33 <bswartz> xyang1: I'm in favor of tooz
15:50:37 <gouthamr> xyang1: pertinent question :)
15:50:38 <tbarron> why not decouple the two questions?
15:51:00 <tbarron> get agreement on your picture first
15:51:04 <bswartz> tbarron: using tooz allows us to avoid the question and make it a deployer choice
15:51:29 <bswartz> tbarron: do you not like my proposal?
15:51:56 <tbarron> i like your picture, your picture doesn't imply tooz or whatever rather than compare and swap though
15:52:03 <tpsilva> bswartz: just to clarify, the whiteboard proposal is for *any* locks or just locks that involve the API?
15:52:07 <tbarron> i'm just saying agree to one thing at a time
15:52:23 <bswartz> tbarron: I tried to be very explicit with the little "lock/unlock" pictures
15:52:37 <tbarron> all that means is exclusion, not mechanism
15:53:20 <bswartz> okay well I thought my proposal included the mechanism because I didn't see anything else in the proposal worth arguing about
15:53:30 <tbarron> on the mechanism question, i do like the idea that tooz allows for degrees of freedom
15:53:39 <bswartz> is there another aspect in the proposal we could debate?
15:53:50 <tbarron> with gouthamr i pushed for getting tooz in review here
15:54:12 <tbarron> but i'm disturbed by lack of progress on tooz backends in openstack in general
15:54:49 <bswartz> http://s2.quickmeme.com/img/c8/c82f116b14ec94338efcbdc160e63c16a599a826f78adafad34f8deec68fc04b.jpg
15:55:05 * bswartz apologizes for meme
15:55:09 <tbarron> nice
15:55:30 <gouthamr> nice
15:55:32 <bswartz> well there's a file lock backend which should be sufficient for everyone who's on a single node
15:55:47 <tbarron> now whenever anyone says this, they are (rightly IMO) invited to join the effort to work on tooz!
15:55:49 <bswartz> and zookeeper is known to deal with multinode configurations well
15:56:07 <bswartz> personally I'd rather use zookeeper for ALL configurations
15:56:35 <bswartz> but tooz creates an abstraction layer for other implementations to be swapped in, which can only be good
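
(For context: a minimal sketch of how tooz makes the lock backend a deployer choice expressed as a URL, a file lock for single-node deployments or zookeeper for multi-node ones. The backend URL and member/lock names shown are illustrative.)

    from tooz import coordination

    # Deployer choice: 'file:///var/lib/manila/locks' for a single node,
    # or e.g. 'zookeeper://203.0.113.10:2181' for multinode deployments.
    coordinator = coordination.get_coordinator(
        'file:///var/lib/manila/locks', b'manila-share-host-1')
    coordinator.start()

    lock = coordinator.get_lock(b'share-<share-id>')
    with lock:
        # Atomic DB state change goes here; release immediately after.
        pass

    coordinator.stop()
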
15:57:14 <bswartz> the main danger is that broken implementations are advertised as non-broken in which case users get data corruption in non-obvious ways
15:57:43 * bswartz mumbles about redis
15:58:03 <tbarron> yes, my concern is that customers may start using backends that are available in distros but are not optimal, not safe
15:58:25 <bswartz> well it can't be worse than the status quo
15:58:57 <bswartz> and there's a path forward to fix those issues or to help users migrate from broken to non-broken backends
15:59:17 <bswartz> okay well we're out of time
15:59:23 <bswartz> perhaps more discussion is needed
15:59:36 <tbarron> bswartz: good stuff
15:59:40 <gouthamr> bswartz: yes.. POC or spec would help us get there
15:59:41 <bswartz> I'd like to see prototypes of what I proposed used to fix actual bugs this release
15:59:56 <tbarron> +1
15:59:58 <gouthamr> bswartz: i volunteer to write up these ideas
16:00:04 <bswartz> we don't need to change everything with a big bang -- just start fixing bugs as we find them
16:00:30 <bswartz> sorry we ran out of time for other topics
16:00:43 <bswartz> anyone with anything else please take it to the channel
16:00:49 <bswartz> #endmeeting