15:01:34 <bswartz> #startmeeting manila
15:01:37 <openstack> Meeting started Thu Jun 12 15:01:34 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:42 <openstack> The meeting name has been set to 'manila'
15:02:05 <bswartz> hello
15:02:10 <xyang1> hi
15:02:11 <deepakcs> hi bswartz
15:02:11 <vponomaryov> hello
15:02:12 <ameade> o/
15:02:16 <scott_da> hi
15:02:16 <csaba> hi
15:02:18 <cknight1> hi
15:02:18 <tbarron> hi
15:02:26 <bswartz> yay I'm not in the wrong room
15:02:31 <dustins> hello!
15:02:34 <bswartz> sometimes it's quiet and I worry
15:02:39 <bswartz> haha
15:02:53 <deepakcs> :)
15:02:54 <bswartz> so I only have a few things
15:03:08 <bswartz> #link https://wiki.openstack.org/wiki/Manila/Meetings
15:03:15 <bswartz> let's do dev status first today
15:03:18 <bswartz> #topic dev status
15:03:39 <vponomaryov> dev status is next:
15:03:45 <vponomaryov> 1) Manila API docs
15:03:50 <vponomaryov> bp: #link https://blueprints.launchpad.net/manila/+spec/manila-documentation
15:03:50 <vponomaryov> gerrit: #link https://review.openstack.org/98462
15:03:50 <vponomaryov> Also added 'docs' CI job. It works with above gerrit change.
15:04:04 <vponomaryov> 2) Remove dependency for 'mox' module
15:04:09 <vponomaryov> bp: #link https://blueprints.launchpad.net/manila/+spec/replace-mox-with-mock
15:04:09 <vponomaryov> gerrit: #link https://review.openstack.org/99362
15:04:21 <vponomaryov> TODO:
15:04:21 <vponomaryov> 1) Update 'docs' for manilaclient
15:04:21 <vponomaryov> 2) finish update of 'docs' for manila
15:04:21 <vponomaryov> 3) finish port of unittests from mox to mock
15:04:52 <vponomaryov> That's main stuff
15:05:24 <bswartz> okay I saw some comments on the docs
15:05:36 <bswartz> some "cinder" needs to be changed to "manila" still
15:05:52 <bswartz> vponomaryov: you're fixing that?
15:05:59 <vponomaryov> bswartz: yes, it should be updated
15:06:02 <bswartz> k
15:06:02 <deepakcs> bswartz, ya and in some places it says block storage.. i was planning to send an itty bitty patch for that
15:06:28 <bswartz> deepakcs: if you just make some review comments vponomaryov can handle it
15:06:44 <deepakcs> bswartz, ya will start reviewing
15:07:01 <bswartz> all of the sudden interest in docs is because we're lining up our ducks to go before the TC about incubation soon
15:07:24 <deepakcs> source/index.rst -> says 'block storage as a service'
15:07:26 <deepakcs> but i was not sure
15:07:32 <bswartz> just about everything else is taken care of
15:07:39 <deepakcs> if index.rst is used anywhere ..since we had docs/* files
15:07:55 <vponomaryov> deepakcs: https://review.openstack.org/98462
15:08:01 <deepakcs> vponomaryov, ok :)
15:08:30 <xyang1> bswartz: does everything have to turn green for the incubation request to be accepted?
15:08:34 <bswartz> ty vponomaryov
15:08:54 <bswartz> xyang1: no that's just our own assessment of where we need to focus our efforts
15:08:57 <vponomaryov> xyang1, bswartz: one item left - a votable devstack job
15:09:14 <vponomaryov> aside from finishing the API docs update
15:09:40 <bswartz> xyang1: we've taken a look at various other projects the TC has accepted for incubation and we believe we are more mature than several of them in nearly every way
15:09:50 <bswartz> we just want to make it as easy as possible to say yes
15:09:53 <xyang1> bswartz: that's good
15:10:13 <deepakcs> autogen of docs is not yet done, right ?
15:10:37 <vponomaryov> deepakcs: what do you mean? autogen itself already exists
15:10:45 <bswartz> deepakcs: https://review.openstack.org/#/c/98465/
15:10:59 <deepakcs> vponomaryov, there was some discussion in last mtg.. i was trying to check on that.
15:11:11 <deepakcs> vponomaryov, bswartz cool
15:11:15 <bswartz> Merged Jun 10
15:11:23 <deepakcs> bswartz, ok
15:11:25 <bswartz> :)
15:11:31 <bswartz> okay next topic
15:11:39 <bswartz> #topic access groups
15:11:47 <bswartz> ameade: you have the floor
15:11:50 <ameade> #link https://blueprints.launchpad.net/manila/+spec/access-groups
15:11:58 <ameade> #link https://etherpad.openstack.org/p/manila-access-groups-api-proposal
15:12:14 <ameade> I sent these links out in an email just before the meeting
15:12:29 <ameade> the second link is what we are proposing for the API resources and DB schema changes
15:13:02 <ameade> I'm hoping it's relatively self explanatory but let me know where i need to elaborate
15:13:05 <bswartz> ameade: the BP looks slightly out of date :-(
15:13:20 <ameade> yeah
15:13:31 <bswartz> based on the recent design discussion we probably should mention the concept of user access and user access groups
15:13:46 <vponomaryov> ameade: what email groups did you use for sending this update?
15:13:54 <bswartz> openstack-dev@lists.openstack.org
15:13:58 <ameade> yeah
15:14:12 <bswartz> just about 1 hour ago it looks like
15:14:29 <vponomaryov> I don't see it, is there a manila tag?
15:14:35 <bswartz> yeah I got the mail
15:14:42 <bswartz> Subject:[openstack-dev] [manila] Access Groups API and DB changes
15:14:49 <ameade> yeah, perhaps it is being slow, there isn't any extra info in there anyhow
15:14:50 <xyang1> I see it
15:15:21 <bswartz> so a summary of the idea is that we plan to support groups of IP/subnets and groups of users
15:15:45 <bswartz> and these can be added to shares with the allow API like ordinary IP/subnets and users
15:16:18 <bswartz> the semantics should remain the same, except that when you modify a group's members, any shares that grant access to the group will be updated automagically
15:16:49 <vponomaryov> bswartz, ameade: how do we plan to handle each rule status for each share?
15:17:04 <vponomaryov> it can be different for different shares
15:17:10 <bswartz> we actually considered a bunch of different options, including some that would have made the whole share access API a lot more complicated, but we settled on this relatively simple extension
15:17:36 <bswartz> vponomaryov: not sure what you mean?
15:17:38 <ameade> the access mappings are separate
15:18:21 <ameade> vponomaryov: when you do allow access to a group, it will just traverse every entry in the group and create the share access mapping
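A minimal sketch of that expansion, with illustrative field names only (the actual resources and DB schema are in the etherpad proposal linked above, not reproduced here):

```python
# Hypothetical sketch (not the actual Manila code): applying an access group
# to a share expands its entries into individual access rules, so the applied
# rules stay 1:1 with the share and the group is only a container of entries.

def allow_access_group(share, access_group):
    """Return one access rule per entry in the group for this share."""
    rules = []
    for entry in access_group["entries"]:
        rules.append({
            "share_id": share["id"],
            "access_type": entry["access_type"],    # e.g. 'ip' or 'user'
            "access_to": entry["access_to"],        # e.g. '10.0.0.0/24'
            "access_group_id": access_group["id"],  # remember which group it came from
            "state": "new",                         # backend later sets active/error
        })
    return rules


if __name__ == "__main__":
    group = {"id": "g1", "entries": [
        {"access_type": "ip", "access_to": "10.0.0.0/24"},
        {"access_type": "user", "access_to": "alice"},
    ]}
    print(allow_access_group({"id": "share-1"}, group))
```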
15:19:01 <vponomaryov> bswartz: I mean this: one group will be assigned to lots of shares, BUT right now each access rule has a status - errored or not
15:19:32 <bswartz> vponomaryov: EACH rule or the whole share?
15:19:40 <vponomaryov> each rule
15:19:47 <xyang1> ameade: when you create an access group, do you expect the entries are already created or do you create the entries with the new group?
15:19:52 <bswartz> I'm not sure why we need that
15:20:14 <vponomaryov> bswartz: there could be an error at the rule-applying step
15:20:21 <ameade> xyang1: either, you can create the group and specify the entries, or add and remove entries after you create the group
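As a rough illustration of that workflow, a hypothetical REST sketch; the base URL, endpoint paths, and payload fields are assumptions for illustration, not the proposed API:

```python
# Hypothetical usage sketch: create an access group with initial entries,
# then add another entry afterwards. Endpoints and fields are placeholders.
import requests

BASE = "http://manila.example.com/v1/tenant-id"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "TOKEN"}              # placeholder auth token

# Create a group and specify the entries up front...
resp = requests.post(f"{BASE}/access-groups", headers=HEADERS, json={
    "access_group": {"name": "web-servers", "type": "ip",
                     "entries": ["10.0.0.0/24"]}})
group_id = resp.json()["access_group"]["id"]

# ...or add (and similarly remove) entries after the group exists.
requests.post(f"{BASE}/access-groups/{group_id}/entries",
              headers=HEADERS, json={"entry": "10.0.1.0/24"})
```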
15:21:12 <ameade> vponomaryov: that would show up when you do an access list; the group itself is not applied to the share, it's just a mechanism for applying multiple rules at once
15:21:13 <bswartz> vponomaryov: if manila failed to grant access on a share to an IP, I would expect that IP to get removed from the list
15:21:38 <ameade> bswartz: i dont like that
15:21:42 <bswartz> handling failures in this area is certainly a complicated problem
15:22:23 <ameade> i view the access group as sort of a template
15:22:28 <bswartz> the solution is clear: we have to write software that never fails!
15:22:29 <deepakcs> so if one IP grant fails but others succeed.. does this mean the access group as a whole succeeded or failed or something in between ?
15:22:44 <vponomaryov> we have a use case - having a unified access group, we should know what rules are applied without errors for each share
15:22:55 <ameade> and when you apply a group to a share it means apply every entry in the group individually
15:23:17 <bswartz> I still think we could have a per-share status which indicated whether ALL the access rules were applied or not, and if not, then some mechanism to see which ones failed
15:23:33 <ameade> to see what is actually applied to a share and their states, you do an access_list
15:23:38 <bswartz> because as ameade says, with groups the failures get even more subtle
15:23:57 <ameade> the access group is not 1 to 1 with a share
15:24:12 <deepakcs> bswartz, A higher level usecase/workflow would have some logical reason why it grouped a set of IPs .. if one of those failed to apply, won't that workflow and/or usecase fail ?
15:24:34 <vponomaryov> deepakcs: good point
15:25:04 <deepakcs> and if we return success for access_group and ask the user to check access_list for which IPs were applied and which were not.. how does that help the bigger usecase ?
15:25:30 <bswartz> I would expect ANY failure to bubble up to the caller of the API
15:25:39 <bswartz> the question is what state is the system left in after the failure
15:25:59 <ameade> how does it currently work when you allow access to just one ip and it fails?
15:26:06 <bswartz> do we roll back the failed change or just set error flags and require intervention?
15:26:33 <vponomaryov> bswartz: the caller of the API won't get a status other than 200 OK / 202 Accepted when the operation starts
15:26:49 <bswartz> in the longer term we want to make this feature even more dynamic such that manila detects changes in group memberships outside of manila and applies those to shares automatically
15:26:51 <vponomaryov> ameade: current rules are 1 to 1 with shares
15:26:54 <deepakcs> bswartz, you said "I would expect that IP to get removed from the list" - that won't be correct from the end user's perspective, as we would break their logical reason for grouping that IP in the first place
15:27:06 <bswartz> in that case there's no API caller to report the error to
15:27:14 <ameade> we are just taking what's there already and allowing it to be maintained in bulk
15:27:38 <bswartz> I think we will need a way to flag permission setting problems when manila isn't able to enforce the policy the user asked for
15:28:10 <bswartz> deepakcs: I think you're right I'm withdrawing my earlier statement
15:28:12 <ameade> vponomaryov: the access group is just a definition of rules, not what is actually applied to a share so the individual rules that get applied will still be 1 to 1 with a share
15:28:24 <deepakcs> bswartz, thanks :)
15:29:33 <vponomaryov> ameade: I see; as deepakcs said, should we apply some of the provided rules if we cannot apply them all?
15:29:56 <ameade> i think that's a good question
15:30:15 <deepakcs> ameade, but someone made that a group for some logical reason.. maybe all sub-depts of a bigger dept, and if one failed, they may or may not want all of it to be applied
15:31:06 <bswartz> since our model for permissions is GRANT only and there's no way to DENY, I see no danger in applying as many rules as possible
15:31:19 <ameade> yeah so maybe when you do allow_access and supply a group, if anything in the group fails then we dont do anything in the group
15:31:23 <bswartz> if some fail, we can continue granting others
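A sketch of that best-effort behaviour, assuming a per-rule state field and an illustrative backend_allow callable (names are placeholders, not the Manila driver interface):

```python
# Hypothetical best-effort sketch: grant as many rules as possible and flag
# the failures rather than rolling back, since the access model is grant-only.

def apply_group_best_effort(backend_allow, share, rules):
    """backend_allow(share, rule) raises on failure; returns (applied, errored)."""
    applied, errored = [], []
    for rule in rules:
        try:
            backend_allow(share, rule)
            rule["state"] = "active"
            applied.append(rule)
        except Exception as exc:      # flag the rule, keep going with the rest
            rule["state"] = "error"
            rule["error"] = str(exc)
            errored.append(rule)
    return applied, errored
```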
15:31:46 <ameade> bswartz: we could do that, but i think it's more intuitive to the user the other way?
15:32:05 <ameade> well maybe not
15:32:15 <deepakcs> ameade, but since we will grant 1 at a time.. if something failed at a later point in time.. rolling back means going to each backend for each IP and asking to deny_access ?
15:32:32 <ameade> if a group that is already applied to a bunch of shares changes, but the change failed, then what do you do?
15:32:43 <deepakcs> bswartz, didn't understand when u say "there is no way to DENY".. remove_access (or whatever it's called) is denying access, no ?
15:32:51 <bswartz> yeah I don't see how we can reasonably roll back
15:33:09 <xyang1> if some succeeded and some failed, what will be the status of the operation?
15:33:11 <bswartz> it's better to try to make reality match the policy as closely as possible and flag cases where it wasn't perfect
15:33:20 <ameade> deepakcs: yeah maybe, but that's an implementation detail
15:33:33 <deepakcs> So can we provide a tunable as part of this API .. where the user can specify what the behaviour should be if some grants failed .. ignore it, roll back, or something else ?
15:33:54 <bswartz> because we want to move to dynamic policies there will always be short windows of time where the actual policy won't match the intended policy
15:34:08 <cknight1> deepakcs: seems like a later enhancement
15:34:16 <ameade> bswartz:  +1
15:34:53 <ameade> xyang1: i'm not sure there should be a status for the operation, but rather which parts succeeded and which parts failed
15:35:04 <bswartz> we just need a way to flag to the user when something is out of sync
15:35:43 <ameade> we should have that regardless of access groups though
15:36:03 <bswartz> agreed
15:36:12 <ameade> iiuc, if I do an allow access to a single ip atm, i have to then access_list to see if it worked
15:36:27 <bswartz> ameade: do you want to own fixing the way access-grant errors are reported as part of the access groups work?
15:36:53 <ameade> bswartz: heh my gut says no
15:36:59 <ameade> do you have any ideas on how we can do that?
15:36:59 <bswartz> lol
15:37:09 <ameade> would moving forward hinder us from improving it?
15:37:21 <bswartz> it just seems like a dependency
15:37:35 <bswartz> maybe it isn't
15:37:55 <bswartz> okay so back to the original proposal
15:38:09 <bswartz> aside from how we deal with errors, any concerns about the design for groups?
15:38:29 <ameade> it sounds like we need lots of elaboration in the BP
15:38:42 <ameade> like a walk through of usecases
15:38:42 <bswartz> please read the spec/BP if you haven't and offer feedback on the changes when they show up
15:38:57 <bswartz> or use the ML to provide feedback if you see problems before the code shows up
15:39:29 <ameade> folks can ping me on irc if they want to have a conversation as well
15:39:54 <bswartz> #topic gate tests
15:40:29 <bswartz> hi
15:40:39 <bswartz> that was weird my client just vanished
15:40:56 <scott_da> mine keeps vanishing as well
15:41:07 <bswartz> I wanted to follow up on what vponomaryov said
15:41:14 <scott_da> I was cursing my corporate network, but it could be freenode
15:41:33 <bswartz> no this was definitely my client, not the network
15:41:41 <bswartz> our tempest gate tests are currently non-voting
15:41:47 <bswartz> any reason we can't make them voting?
15:42:05 <vponomaryov> bswartz: I plan to make a separate votable devstack job
15:42:15 <vponomaryov> bswartz: the reason is the way we use tempest
15:42:29 <bswartz> vponomaryov: is that a lot of work?
15:42:30 <vponomaryov> bswartz: it can become incompatible any moment
15:42:59 <bswartz> vponomaryov: you mean changes in tempest can break us?
15:43:13 <vponomaryov> bswartz: no, but the time for a merge to the infra/config project can vary
15:43:21 <bswartz> vponomaryov: wouldn't we just want to fix those?
15:43:26 <vponomaryov> bswartz: yes, changes in tempest itself
15:43:39 <bswartz> how will making the job separate and votable solve this problem?
15:44:01 <bswartz> if tempest is going to cause random breakage then I'd rather keep the jobs non-voting until we solve THAT issue
15:44:02 <vponomaryov> bswartz: of course, we can fix it, but it would block other changes from getting into master
15:44:32 <vponomaryov> bswartz: the votable job is going to be only devstack, without tempest
15:44:50 <bswartz> ok
15:45:08 <bswartz> so you mean we keep the current non-voting jobs and we add yet another one that's expected to be more stable
15:45:22 <vponomaryov> bswartz: yes
15:45:28 <bswartz> that sounds perfect
15:45:41 <vponomaryov> and, also, docs job should be votable too
15:45:41 <bswartz> okay
15:45:47 <bswartz> #topic open discussion
15:45:52 <bswartz> okay anything else?
15:45:54 <vponomaryov> it's not votable now, not before the related change is merged
15:46:11 <csaba> we updated the ganesha BP
15:46:14 <csaba> https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha
15:46:16 <bswartz> vponomaryov: yeah that sounds reasonable
15:46:48 <bswartz> csaba: cool
15:47:18 <bswartz> csaba: is there a WIP yet we can look at?
15:47:29 <csaba> more low-level tech spec will follow first
15:48:32 <csaba> I think it's better to write it up first
15:48:45 <bswartz> I'm glad you guys have spent so much time looking at this
15:48:55 <bswartz> do you have POC code or a prototype or something?
15:49:01 <csaba> not yet
15:49:44 <bswartz> it can be hard to see the flaws in a design if there isn't code to look at
15:50:08 <csaba> yes that's true
15:50:39 <bswartz> keep up the good work on this -- the direction seems good
15:50:50 <csaba> thanks, we are at it
15:51:09 <bswartz> as with so many other things though, when we actually try to make it work, that's when we might find out the ugly issues
15:51:41 <deepakcs> I spent quite a bit of time setting up devstack+Manila on F20.. Hit a libguestfs nested KVM bug which i figured out only yesterday! Bug already filed @ https://bugs.launchpad.net/nova/+bug/1286256/
15:51:42 <uvirtbot> Launchpad bug 1286256 in nova "If libguestfs hangs when provisioning an instance, nova will wait forever." [Medium,Confirmed]
15:52:13 <deepakcs> At present, I am able to spawn Nova instances but networking is still flaky.. if i remove neutron and switch to nova-net.. networking works
15:52:23 <bswartz> csaba: I would encourage you to make WIP submissions available as you go so people can offer early feedback
15:52:34 <bswartz> that's all
15:52:54 <csaba> bswartz: sounds to be a good idea
15:53:09 <bswartz> deepakcs: what does "nested KVM in an environment where nested KVM support is buggy" mean?
15:53:18 <deepakcs> I plan to write a document on the setup, once i am able to set up and get the Manila generic driver in devstack on F20 working reliably
15:53:24 <bswartz> nested KVM works perfectly for me everywhere I've tried it
15:53:37 <deepakcs> bswartz, I asked that Q in the lp bug.. did u just copy that from there ?
15:53:45 <bswartz> yes
15:54:04 <deepakcs> bswartz, I am waiting for the answer too.. but it looks like libguestfs + KVM (in the nested case) hangs
15:54:24 <bswartz> deepakcs: is this a software or hardware problem?
15:54:27 <deepakcs> bswartz, hence Nova hangs during image resize and the instance is stuck in the 'spawning' state forever
15:54:38 <bswartz> I need to read up on this libguestfs thing I suppose
15:54:45 <bswartz> I've never heard of it before now
15:54:48 <bswartz> is it related to VirtFS?
15:54:50 <deepakcs> bswartz, i think it's a software / KVM issue.. i am not a KVM expert so can't be sure.. better to get subscribed to that bug, i did :)
15:54:58 <deepakcs> bswartz, no, nothing to do with virtfs
15:55:17 <bswartz> #link http://libguestfs.org/
15:55:19 <bswartz> oh I see what it does
15:55:56 <deepakcs> bswartz, So things as they stand now.. I am able to get host <-> instance networking working, but accessing the internet from the instance isn't working on Nova instances
15:56:18 <deepakcs> bswartz, I plan to get this working and write a doc and upload on wiki
15:56:39 <bswartz> deepakcs: okay so why is libguestfs a dependency for anything we want to do?
15:56:48 <bswartz> does nova rely on it for anything critical?
15:57:07 <deepakcs> bswartz, as part of Nova creating an instance, it checks if the image can be resized and has partitions... for which it uses libguestfs and its libs to create a qemu process
15:57:19 <vponomaryov> bswartz: it means nova does not work with neutron enabled on F20
15:57:26 <deepakcs> bswartz, this qemu process hangs and never returns in the nested KVM case (which is typical for devstack)
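For reference, a minimal sketch of the kind of libguestfs appliance launch that hangs here, using the guestfs Python bindings with a placeholder image path and an illustrative timeout guard (per the bug report, nova itself has no such guard and waits forever):

```python
# Minimal reproduction sketch: the guestfs appliance launch is where the image
# inspection blocks under broken nested KVM. Image path is a placeholder.
import signal
import guestfs  # python-libguestfs bindings

def inspect_image(path, timeout=120):
    def _bail(signum, frame):
        raise TimeoutError("libguestfs appliance did not come up")
    signal.signal(signal.SIGALRM, _bail)
    signal.alarm(timeout)          # illustrative guard so we don't wait forever
    try:
        g = guestfs.GuestFS(python_return_dict=True)
        g.add_drive_opts(path, readonly=1)
        g.launch()                 # spawns a qemu appliance; hangs under buggy nested KVM
        return g.inspect_os()      # list of detected OS root devices
    finally:
        signal.alarm(0)

if __name__ == "__main__":
    print(inspect_image("/var/lib/libvirt/images/test.qcow2"))
```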
15:57:46 <bswartz> argh
15:58:04 <deepakcs> vponomaryov, it's flaky.. it was not working before.. i switched to nova-network (turned off neutron), it worked... but manila needs neutron..
15:58:21 <bswartz> okay well hopefully we can get these problems sorted out
15:58:39 <deepakcs> bswartz, I am trying and will document what's the best way to get a devstack + manila dev env for F20
15:58:43 <bswartz> thanks for the update deepakcs
15:58:48 <deepakcs> just hoping i don't get into any more issues :)
15:58:49 <deepakcs> bswartz, sure
15:58:52 <bswartz> anything else in our last minute?
15:59:01 <bswartz> going once...
15:59:08 <bswartz> twice..
15:59:17 <bswartz> okay thanks everyone
15:59:20 <vponomaryov> thanks
15:59:22 <xyang1> thanks
15:59:26 <deepakcs> thanks, bye
15:59:29 <ameade> bye
15:59:37 <bswartz> #endmeeting