15:01:34 #startmeeting manila
15:01:37 Meeting started Thu Jun 12 15:01:34 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:42 The meeting name has been set to 'manila'
15:02:05 hello
15:02:10 hi
15:02:11 hi bswartz
15:02:11 hello
15:02:12 o/
15:02:16 hi
15:02:16 hi
15:02:18 hi
15:02:18 hi
15:02:26 yay I'm not in the wrong room
15:02:31 hello!
15:02:34 sometimes it's quiet and I worry
15:02:39 haha
15:02:53 :)
15:02:54 so I only have a few things
15:03:08 #link https://wiki.openstack.org/wiki/Manila/Meetings
15:03:15 let's do dev status first today
15:03:18 #topic dev status
15:03:39 dev status is next:
15:03:45 1) Manila API docs
15:03:50 bp: #link https://blueprints.launchpad.net/manila/+spec/manila-documentation
15:03:50 gerrit: #link https://review.openstack.org/98462
15:03:50 Also added a 'docs' CI job. It works with the above gerrit change.
15:04:04 2) Remove dependency on 'mox' module
15:04:09 bp: #link https://blueprints.launchpad.net/manila/+spec/replace-mox-with-mock
15:04:09 gerrit: #link https://review.openstack.org/99362
15:04:21 TODO:
15:04:21 1) Update 'docs' for manilaclient
15:04:21 2) finish update of 'docs' for manila
15:04:21 3) finish port of unit tests from mox to mock
15:04:52 That's the main stuff
15:05:24 okay I saw some comments on the docs
15:05:36 some "cinder" needs to be changed to "manila" still
15:05:52 vponomaryov: you're fixing that?
15:05:59 bswartz: yes, it should be updated
15:06:02 k
15:06:02 bswartz, ya and some places it says block storage.. i was planning to send an itty bitty patch for that
15:06:28 deepakcs: if you just make some review comments vponomaryov can handle it
15:06:44 bswartz, ya will start reviewing
15:07:01 all the sudden interest in docs is because we're lining up our ducks to go before the TC about incubation soon
15:07:24 source/index.rst -> says 'block storage as a service'
15:07:26 but i was not sure
15:07:32 just about everything else is taken care of
15:07:39 if index.rst is used anywhere ..since we had docs/* files
15:07:55 deepakcs: https://review.openstack.org/98462
15:08:01 vponomaryov, ok :)
15:08:30 bswartz: does everything have to turn green for the incubation request to be accepted?
15:08:34 ty vponomaryov
15:08:54 xyang1: no that's just our own assessment of where we need to focus our efforts
15:08:57 xyang1, bswartz: one left - votable devstack job
15:09:14 besides finishing the API docs update
15:09:40 xyang1: we've taken a look at various other projects the TC has accepted for incubation and we believe we are more mature than several of them in nearly every way
15:09:50 we just want to make it as easy as possible to say yes
15:09:53 bswartz: that's good
15:10:13 autogen of docs is not yet done, right?
15:10:37 deepakcs: what do you mean? autogen itself already exists
15:10:45 deepakcs: https://review.openstack.org/#/c/98465/
15:10:59 vponomaryov, there was some discussion in last mtg.. i was trying to check on that.
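
[Editor's sketch: the mox-to-mock port listed in the TODO above usually amounts to replacing StubOutWithMock/ReplayAll sequences with mock.patch; the class and method names below are hypothetical, not taken from the actual Manila patches.]

    # Hypothetical before/after for one unit test, illustrating the port.
    import mock
    import unittest


    class FakeDriver(object):
        def create_share(self, context, share):
            raise NotImplementedError()


    class ShareManagerTestCase(unittest.TestCase):
        # Old mox style, for comparison:
        #   self.mox.StubOutWithMock(self.driver, 'create_share')
        #   self.driver.create_share(ctxt, share).AndReturn('/exports/share-1')
        #   self.mox.ReplayAll()

        def test_create_share(self):
            driver = FakeDriver()
            ctxt, share = object(), object()
            # mock stubs and verifies in one step, no replay/verify dance
            with mock.patch.object(driver, 'create_share',
                                   return_value='/exports/share-1') as create:
                self.assertEqual('/exports/share-1',
                                 driver.create_share(ctxt, share))
                create.assert_called_once_with(ctxt, share)
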
15:11:11 vponomaryov, bswartz cool
15:11:15 Merged Jun 10
15:11:23 bswartz, ok
15:11:25 :)
15:11:31 okay next topic
15:11:39 #topic access groups
15:11:47 ameade: you have the floor
15:11:50 #link https://blueprints.launchpad.net/manila/+spec/access-groups
15:11:58 #link https://etherpad.openstack.org/p/manila-access-groups-api-proposal
15:12:14 I sent these links out in an email just before the meeting
15:12:29 the second link is what we are proposing for the API resources and DB schema changes
15:13:02 I'm hoping it's relatively self-explanatory but let me know where i need to elaborate
15:13:05 ameade: the BP looks slightly out of date :-(
15:13:20 yeah
15:13:31 based on the recent design discussion we probably should mention the concept of user access and user access groups
15:13:46 ameade: what email groups did you use for sending this update?
15:13:54 openstack-dev@lists.openstack.org
15:13:58 yeah
15:14:12 just about 1 hour ago it looks like
15:14:29 I don't see it, is there a manila tag?
15:14:35 yeah I got the mail
15:14:42 Subject: [openstack-dev] [manila] Access Groups API and DB changes
15:14:49 yeah, perhaps it is being slow, there isn't any extra info in there anyhow
15:14:50 I see it
15:15:21 so a summary of the idea is that we plan to support groups of IP/subnets and groups of users
15:15:45 and these can be added to shares with the allow API like ordinary IP/subnets and users
15:16:18 the semantics should remain the same, except that when you modify a group's members, any shares that grant access to the group will be updated automagically
15:16:49 bswartz, ameade: how do we plan to handle each rule status for each share?
15:17:04 it can be different for different shares
15:17:10 we actually considered a bunch of different options, including some that would have made the whole share access API a lot more complicated, but we settled on this relatively simple extension
15:17:36 vponomaryov: not sure what you mean?
15:17:38 the access mappings are separate
15:18:21 vponomaryov: when you do allow access to a group, it will just traverse every entry in the group and create the share access mapping
15:19:01 bswartz: I mean this: one group will be assigned to lots of shares, BUT access rules now have a status, errored or not
15:19:32 vponomaryov: EACH rule or the whole share?
15:19:40 each rule
15:19:47 ameade: when you create an access group, do you expect the entries are already created or do you create the entries with the new group?
15:19:52 I'm not sure why we need that
15:20:14 bswartz: there could be an error at the rule-applying step
15:20:21 xyang1: either, you can create the group and specify the entries or add and remove entries after you create the group
15:21:12 vponomaryov: that would show up when you do an access list, but the group itself is not applied to the share, it's just a mechanism for applying multiple rules at once
15:21:13 vponomaryov: if manila failed to grant access on a share to an IP, I would expect that IP to get removed from the list
15:21:38 bswartz: i dont like that
15:21:42 handling failures in this area is certainly a complicated problem
15:22:23 i view the access group as sort of a template
15:22:28 the solution is clear: we have to write software that never fails!
15:22:29 so if one IP grant access fails but others succeed.. does this mean the access group as a whole succeeded or failed or something in between?
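
[Editor's sketch: one possible shape of the tables proposed for access groups, reflecting the point made below that a group is only a definition of rules while applied rules stay 1-to-1 with shares; every name here is an assumption, the real schema is in the linked etherpad.]

    # Hypothetical SQLAlchemy models; the actual proposal may differ.
    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class AccessGroup(Base):
        """A named definition of access rules, reusable across shares."""
        __tablename__ = 'access_groups'
        id = Column(String(36), primary_key=True)
        project_id = Column(String(255))
        name = Column(String(255))


    class AccessGroupEntry(Base):
        """One IP/subnet or user inside a group."""
        __tablename__ = 'access_group_entries'
        id = Column(String(36), primary_key=True)
        access_group_id = Column(String(36), ForeignKey('access_groups.id'))
        access_type = Column(String(255))   # 'ip' or 'user'
        access_to = Column(String(255))     # e.g. '10.0.0.0/24' or 'alice'


    class ShareAccessMapping(Base):
        """Applied rules stay 1-to-1 with a share, each with its own state."""
        __tablename__ = 'share_access_map'
        id = Column(String(36), primary_key=True)
        share_id = Column(String(36), ForeignKey('shares.id'))
        access_type = Column(String(255))
        access_to = Column(String(255))
        state = Column(String(255))         # 'new', 'active' or 'error'
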
15:22:44 we have a use case: having a unified access group, we should know which rules were applied without errors for each share
15:22:55 and when you apply a group to a share it means apply every entry in the group individually
15:23:17 I still think we could have a per-share status which indicated whether ALL the access rules were applied or not, and if not, then some mechanism to see which ones failed
15:23:33 to see what is actually applied to a share and their states you do an access_list
15:23:38 because as ameade says, with groups the failures get even more subtle
15:23:57 the access group is not 1 to 1 with a share
15:24:12 bswartz, A higher level usecase/workflow would have some logical reason why it grouped a set of IPs.. if one of those failed to apply, won't that workflow and/or usecase fail?
15:24:34 deepakcs: good point
15:25:04 and if we return success for the access_group and ask the user to check access_list for which IPs were applied and which not.. how does that help the bigger usecase?
15:25:30 I would expect ANY failure to bubble up to the caller of the API
15:25:39 the question is what state the system is left in after the failure
15:25:59 how does it currently work when you allow access to just one ip and it fails?
15:26:06 do we roll back the failed change or just set error flags and require intervention?
15:26:33 bswartz: the caller of the API won't get a status different from 200 OK, accepted for starting
15:26:49 in the longer term we want to make this feature even more dynamic such that manila detects changes in group memberships outside of manila and applies those to shares automatically
15:26:51 ameade: current rules are 1 to 1 with shares
15:26:54 bswartz, you said "I would expect that IP to get removed from the list" - that won't be correct from the end user's perspective as we would break their logical reason for grouping that IP in the first place
15:27:06 in that case there's no API caller to report the error to
15:27:14 we are just taking what's there already and allowing it to be maintained in bulk
15:27:38 I think we will need a way to flag permission setting problems when manila isn't able to enforce the policy the user asked for
15:28:10 deepakcs: I think you're right, I'm withdrawing my earlier statement
15:28:12 vponomaryov: the access group is just a definition of rules, not what is actually applied to a share, so the individual rules that get applied will still be 1 to 1 with a share
15:28:24 bswartz, thanks :)
15:29:33 ameade: I see, as deepakcs said, should we allow setting some of the provided rules if we cannot do it for all?
15:29:56 i think that's a good question
15:30:15 ameade, but someone made that a group for some logical reason.. maybe all sub-depts of a bigger dept, and if one failed, he may or may not want all of it to be applied
15:31:06 since our model for permissions is GRANT only and there's no way to DENY, I see no danger in applying as many rules as possible
15:31:19 yeah so maybe when you do allow_access and supply a group, if anything in the group fails then we don't do anything in the group
15:31:23 if some fail, we can continue granting others
15:31:46 bswartz: we could do that, but i think it's more intuitive to the user the other way?
15:32:05 well maybe not
15:32:15 ameade, but since we will grant 1 at a time.. if something failed at a later point in time.. rolling back is like going to each backend for each IP and asking to deny_access?
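
[Editor's sketch: the best-effort traversal discussed above ("if some fail, we can continue granting others") might look like the following; all db/driver call names are hypothetical, not actual Manila APIs.]

    # Hypothetical shape of "allow access to a group": each entry in the
    # group becomes an ordinary 1-to-1 share access mapping, and per-entry
    # failures are recorded rather than rolled back.
    def allow_access_to_group(db, driver, context, share, group):
        results = {}
        for entry in db.access_group_entry_get_all(context, group['id']):
            rule = db.share_access_create(context, {
                'share_id': share['id'],
                'access_type': entry['access_type'],   # 'ip' or 'user'
                'access_to': entry['access_to'],       # e.g. '10.0.0.0/24'
            })
            try:
                driver.allow_access(context, share, rule)
                db.share_access_update_state(context, rule['id'], 'active')
                results[entry['access_to']] = 'active'
            except Exception:
                # Leave the failed rule visible in access_list instead of
                # denying entries that already succeeded.
                db.share_access_update_state(context, rule['id'], 'error')
                results[entry['access_to']] = 'error'
        return results
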
15:32:32 if a group changes but is already applied to a bunch of shares, but the change failed, then what do you do?
15:32:43 bswartz, didn't understand when u say "there is no way to DENY".. remove_access (or whatever it's called) is denying access, no?
15:32:51 yeah I don't see how we can reasonably roll back
15:33:09 if some succeeded and some failed, what will be the status of the operation?
15:33:11 it's better to try to make reality match the policy as closely as possible and flag cases where it wasn't perfect
15:33:20 deepakcs: yeah maybe, but that's an implementation detail
15:33:33 So can we provide a tunable as part of this API.. where the user can specify what the behaviour should be if some grants failed.. ignore it or rollback or something else?
15:33:54 because we want to move to dynamic policies there will always be short windows of time where the actual policy won't match the intended policy
15:34:08 deepakcs: seems like a later enhancement
15:34:16 bswartz: +1
15:34:53 xyang1: i'm not sure there should be a status of the operation, but more so which parts succeeded and which parts failed
15:35:04 we just need a way to flag to the user when something is out of sync
15:35:43 we should have that regardless of access groups though
15:36:03 agreed
15:36:12 iiuc, if I do an allow access to a single ip atm, i have to then access_list to see if it worked
15:36:27 ameade: do you want to own fixing the way access-grant errors are reported as part of the access groups work?
15:36:53 bswartz: heh my gut says no
15:36:59 do you have any ideas on how we can do that?
15:36:59 lol
15:37:09 would moving forward hinder us from improving it?
15:37:21 it just seems like a dependency
15:37:35 maybe it isn't
15:37:55 okay so back to the original proposal
15:38:09 aside from how we deal with errors, any concerns about the design for groups?
15:38:29 it sounds like we need lots of elaboration in the BP
15:38:42 like a walkthrough of use cases
15:38:42 please read the spec/BP if you haven't and offer feedback on the changes when they show up
15:38:57 or use the ML to provide feedback if you see problems before the code shows up
15:39:29 folks can ping me on irc if they want to have a conversation as well
15:39:54 #topic gate tests
15:40:29 hi
15:40:39 that was weird, my client just vanished
15:40:56 mine keeps vanishing as well
15:41:07 I wanted to follow up on what vponomaryov said
15:41:14 I was cursing my corporate network, but it could be freenode
15:41:33 no this was definitely my client, not the network
15:41:41 our tempest gate tests are currently non-voting
15:41:47 any reason we can't make them voting?
15:42:05 bswartz: I plan to make a separate votable devstack job
15:42:15 bswartz: the reason is in the way we use tempest
15:42:29 vponomaryov: is that a lot of work?
15:42:30 bswartz: it can become incompatible any moment
15:42:59 vponomaryov: you mean changes in tempest can break us?
15:43:13 bswartz: no, but the time for a merge to the infra/config project can vary
15:43:21 vponomaryov: wouldn't we just want to fix those?
15:43:26 bswartz: yes, changes in tempest itself
15:43:39 how will making the job separate and votable solve this problem?
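
[Editor's sketch: background for the exchange around this point: in the openstack-infra config of that era, voting was a per-job flag in zuul's layout.yaml, so a devstack-only job can vote while the tempest-based job stays non-voting; the job names below are hypothetical.]

    # Hypothetical excerpt of a zuul layout.yaml; actual job names differ.
    jobs:
      - name: check-tempest-dsvm-manila
        voting: false      # reports results but cannot block a merge
      # the devstack-only job carries no "voting: false" entry, so it votes

    projects:
      - name: openstack/manila
        check:
          - check-tempest-dsvm-manila
          - check-devstack-dsvm-manila
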
15:44:01 if tempest is going to cause random breakage then I'd rather keep the jobs non-voting until we solve THAT issue
15:44:02 bswartz: of course, we can fix it, but it will block other changes from getting into master
15:44:32 bswartz: the votable job is going to be only devstack without tempest
15:44:50 ok
15:45:08 so you mean we keep the current non-voting jobs and we add yet another one that's expected to be more stable
15:45:22 bswartz: yes
15:45:28 that sounds perfect
15:45:41 and, also, the docs job should be votable too
15:45:41 okay
15:45:47 #topic open discussion
15:45:52 okay anything else?
15:45:54 if not now, then once the related change is merged
15:46:11 we updated the ganesha BP
15:46:14 https://blueprints.launchpad.net/manila/+spec/gateway-mediated-with-ganesha
15:46:16 vponomaryov: yeah that sounds reasonable
15:46:48 csaba: cool
15:47:18 csaba: is there a WIP yet we can look at?
15:47:29 a more low-level tech spec will follow first
15:48:32 I think it's better to write it up first
15:48:45 I'm glad you guys have spent so much time looking at this
15:48:55 do you have POC code or a prototype or something?
15:49:01 not yet
15:49:44 it can be hard to see the flaws in a design if there isn't code to look at
15:50:08 yes that's true
15:50:39 keep up the good work on this -- the direction seems good
15:50:50 thanks, we are at it
15:51:09 as with so many other things though, when we actually try to make it work, that's when we might find out the ugly issues
15:51:41 I spent quite a bit of time setting up devstack+Manila on F20.. Hit the libguestfs nested KVM bug which i figured out only yesterday! Bug already filed @ https://bugs.launchpad.net/nova/+bug/1286256/
15:51:42 Launchpad bug 1286256 in nova "If libguestfs hangs when provisioning an instance, nova will wait forever." [Medium,Confirmed]
15:52:13 At present, I am able to spawn Nova instances but networking is still flaky.. if i remove neutron and switch to nova-net.. networking works
15:52:23 csaba: I would encourage you to make WIP submissions available as you go so people can offer early feedback
15:52:34 that's all
15:52:54 bswartz: sounds like a good idea
15:53:09 deepakcs: what does "nested KVM in an environment where nested KVM support is buggy" mean?
15:53:18 I plan to write a document on the setup, once i am able to set up and get the Manila generic driver in devstack on F20 working reliably
15:53:24 nested KVM works perfectly for me everywhere I've tried it
15:53:37 bswartz, I asked that Q in the lp bug.. did u just copy that from there?
15:53:45 yes
15:54:04 bswartz, I am waiting for the answer too.. but it looks like libguestfs + KVM (in the nested case) hangs
15:54:24 deepakcs: is this a software or hardware problem?
15:54:27 bswartz, hence Nova hangs during image resize and the instance is stuck in 'spawning' state forever
15:54:38 I need to read up on this libguestfs thing I suppose
15:54:45 I've never heard of it before now
15:54:48 is it related to VirtFS?
15:54:50 bswartz, i think it's a software / KVM issue.. i am not a KVM expert so can't be sure.. better to get subscribed to that bug, i did :)
15:54:58 bswartz, no, nothing to do with virtfs
15:55:17 #link http://libguestfs.org/
15:55:19 oh I see what it does
15:55:56 bswartz, So things as they stand now..
I am able to get host <-> instance networking working, but accessing the internet from an instance isn't working on Nova instances
15:56:18 bswartz, I plan to get this working and write a doc and upload it to the wiki
15:56:39 deepakcs: okay so why is libguestfs a dependency for anything we want to do?
15:56:48 does nova rely on it for anything critical?
15:57:07 bswartz, as part of Nova creating an instance, it checks if the image can be resized and has partitions... for which it uses libguestfs and its libs to create a qemu process
15:57:19 bswartz: it means nova does not work with neutron enabled on F20
15:57:26 bswartz, this qemu process hangs and never returns in the nested KVM case (which is typical for devstack)
15:57:46 argh
15:58:04 vponomaryov, it's flaky.. it was not working before.. i switched to nova-network (turned off neutron), it worked... but manila needs neutron..
15:58:21 okay well hopefully we can get these problems sorted out
15:58:39 bswartz, I am trying and will document the best way to get a devstack + manila devpt env on F20
15:58:43 thanks for the update deepakcs
15:58:48 just hoping i don't get into any more issues :)
15:58:49 bswartz, sure
15:58:52 anything else in our last minute?
15:59:01 going once...
15:59:08 twice..
15:59:17 okay thanks everyone
15:59:20 thanks
15:59:22 thanks
15:59:26 thanks, bye
15:59:29 bye
15:59:37 #endmeeting
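
[Editor's sketch: a commonly suggested workaround for the libguestfs hang deepakcs describes is to disable nova's file injection entirely, which skips the libguestfs code path during instance creation; whether this is appropriate for the F20 setup above is an assumption.]

    # nova.conf: -2 disables file injection, so libguestfs is never invoked.
    # In Icehouse-era nova the option lives in the [libvirt] section; older
    # releases used libvirt_inject_partition under [DEFAULT].
    [libvirt]
    inject_partition = -2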