15:01:08 #startmeeting manila
15:01:09 Meeting started Thu Nov 1 15:01:08 2018 UTC and is due to finish in 60 minutes. The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 The meeting name has been set to 'manila'
15:01:18 .o/
15:01:21 hello
15:01:22 hello
15:01:26 hi
15:01:39 o/
15:02:03 let's wait another minute
15:02:15 ping xyang
15:02:17 \o
15:02:24 hi
15:02:39 ping toabctl
15:03:14 o/
15:03:43 Hi all!
15:03:51 #topic announcements
15:04:16 Spec deadline is a week from today.
15:04:36 We have specs on the agenda, so 'nuff said for now.
15:04:49 Summit is the following week.
15:05:12 I will make an etherpad for our forum session and send it to the dev list.
15:05:31 Who here will be at summit?
15:05:45 * tbarron will, vkmc will
15:05:47 most likely won't be there... but I'll know soon.
15:06:06 ok, ping me if you'll be attending this one please.
15:06:13 np
15:06:18 o/
15:06:27 I don't have any other announcements, anyone else?
15:06:34 I won't be there. I'll be at KubeCon Shanghai that same week
15:06:57 xyang: cool, but remember platforms need infra :)
15:07:01 erlon, tpsilva and I will
15:07:09 :)
15:07:17 ganso: cool, the boys from brazil :)
15:07:34 #topic technical vision
15:07:50 We have zaneb as a special guest today.
15:08:07 I've been following the proposed tech vision in review
15:08:21 #link https://review.openstack.org/#/c/592205/
15:08:31 I personally like the vision and
15:08:44 think manila fits unproblematically with it.
15:08:53 But we should all take a look.
15:09:01 zaneb: you're up :)
15:09:15 hi everyone :)
15:09:39 hi zaneb
15:09:39 I'm here to listen more than to talk :)
15:09:56 I'd like to hear what y'all think
15:10:23 because I'm not an expert on Manila by any means
15:10:28 me too, have folks had a chance to read over the review?
15:11:01 so do you think this direction is right?
is there anything important that we've missed? is there anything there that shouldn't be?
15:11:37 IMO manila provides part of basic data center mgmt via self-service shares
15:11:50 It virtualizes the resources underneath
15:12:05 It isolates tenants from one another
15:12:32 It provides an API so that programs can use it, not just humans
15:12:53 It is not particularly nova-centered since it offers
15:12:53 I haven't had the chance to look at it (and honestly didn't know there was an ongoing effort to define a vision)
15:13:07 up storage over a network rather than via a hypervisor
15:13:30 one question I had, actually: does Manila also make it easier to scale granularly (like e.g. Swift does) or do you hit limits in terms of having to pre-allocate the size of the file system (like e.g. Cinder)?
15:13:33 dustins: /me recommends following the dev list
15:14:10 I think it's still the latter, but I was unsure of how it works in practice
15:14:26 * gouthamr isn't aware of a cinder issue requiring pre-allocating the size of the file-system
15:14:40 hey
15:15:09 zaneb: yeah, not sure about the cinder limitation either but
15:15:13 zaneb: inherently shared-file-systems are "elastic" - so manila supports expanding and shrinking
15:15:21 we are elastic
15:15:33 gouthamr: I mean, the size of a volume in Cinder is fixed. In Swift if you need to store another object, you just do it. In Cinder if the FS is full, you have much more work to do.
Not that that's bad, but they're different things
15:15:39 cloud admins need to track usage and increase backing store
15:15:53 or users
15:16:09 zaneb: a "share" has a fixed size (with some back ends that's just a back end quota, nothing more)
15:16:18 zaneb: but it can be "extended"
15:16:18 zaneb: there is no way to create a share with "undefined", "unlimited" or "indeterminate" size, the user has to choose a size and then can increase or decrease it later
15:16:58 ok, so it's sorta in-between
15:17:00 we've considered "infinite" or "auto-sizing" shares but they've not been implemented yet
15:17:15 you have to define a size, unlike Swift, but it's easy to change, unlike Cinder
15:17:29 yep, block devices don't let you shrink a volume
15:17:37 On some backends it's easy to change...
15:18:09 also, increasing or decreasing its size depends on backend capabilities
15:18:41 * zaneb is learning a lot today :)
15:18:52 #TIL
15:19:40 there's nothing stopping a program from selecting a flexible back end (via a share type) for shares, tracking usage, and extending as appropriate
15:19:54 so it may be the case that Manila is also contributing to several of the other design goals as well, not just the basic data center infrastructure
15:20:17 but the abstraction sits over some back ends that may not allow this
15:20:25 +1 - the point about "arbitrary grouping" makes sense on multiple levels for us
15:20:27 zaneb: +2
15:20:33 for instance ^ :)
15:20:37 I meant +1 :)
15:20:57 * tbarron wasn't exercising any special powers
15:22:21 'ts a good read, will review today and provide feedback
15:22:49 zaneb: fwiw I agree with rosmaita that quotas are useful even with "central-planning" cloud economics where individual consumers aren't evaluating opportunity costs :)
15:22:57 but that's a nit
15:23:28 Anyone have any deep concerns about the proposed tech vision or manila's fit?
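[Editor's note] The "auto-sizing" idea discussed above (track usage, extend the share when it nears capacity) can be sketched as an external controller rather than a Manila feature. This is a hypothetical helper, not existing Manila code; the returned size would feed a real `manila extend <share> <new-size-gb>` call:

```python
def next_share_size(used_gb, size_gb, high_water=0.8, growth=2):
    """Return the new share size (GiB) an external auto-sizing
    controller should extend to, or the current size if no resize
    is needed.

    Manila shares have a fixed size but can be extended, so a
    program can poll usage and grow the share on the user's behalf.
    """
    if size_gb <= 0:
        raise ValueError("share size must be positive")
    if used_gb / size_gb >= high_water:
        # Crossed the high-water mark: double the share, e.g. by
        # calling `manila extend <share> <size_gb * growth>`.
        return size_gb * growth
    return size_gb
```

Note this only works when the chosen share type maps to a back end that supports extend, which, as noted above, not all back ends do.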
15:24:23 zaneb: I'm not hearing any worries so I'll add my +1 as PTL as you requested.
15:24:49 tbarron: thanks! new patch set with responses to comments coming today probably
15:25:00 If anyone comes up with issues take them to the review or catch zaneb on #openstack-tc
15:25:08 ++
15:25:13 zaneb: thanks for joining us today
15:25:24 thanks for your time everyone, looking forward to your comments :)
15:25:32 #topic spec reviews
15:26:32 #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Other_Work
15:26:53 ^^ That will take you to the right spot even though the topic isn't just right
15:27:18 As mentioned earlier, spec deadline is a week from today.
15:27:33 We have three outstanding spec reviews at the moment.
15:27:42 And *no* review comments :(
15:28:10 I'm going to block off some time for these today and tomorrow but
15:28:35 can we get more reviewers to sign up for these and give initial feedback before the weekend?
15:29:04 ganso: You may want to do some lobbying/arm-twisting since as
15:29:30 things stand now I'd say these are at risk for not making this cycle simply because we
15:29:40 are limited on review bandwidth.
15:29:54 sorry I don't understand the expression "lobbying/arm-twisting"
15:30:02 ganso: if you have to triage, life-boat style, what is your priority?
15:30:13 ganso: he's saying you need to go ask people for reviews
15:30:19 ganso: sorry for the colloquialism, I mean ^^
15:30:37 There is no magical review fairy
15:31:01 ganso: in an ideal world there would be sufficient reviewer bandwidth and interest in these features that you wouldn't need to do that, but ...
15:31:09 my priority would be 1) manage/unmanage DHSS=True, 2) Replication DHSS=True and 3) Create share from snapshot in other backends
15:31:54 ganso: so find other people with DHSS=True back ends who care about these features and get them to review and agree on the approach
15:32:31 I can try and review specs this weekend.
15:32:40 amito: thanks!
15:33:09 Anything else on this topic today?
15:34:16 #topic planning our work
15:34:27 #link https://wiki.openstack.org/wiki/Manila/SteinCycle
15:35:15 gouthamr: how is it going with running the lvm job on bionic instead of centos?
15:35:37 tbarron: pretty good so far, late breakthrough with the quagga issue i was seeing
15:35:48 As a reminder, this is a step toward the python3 goal since centos currently doesn't support python3
15:35:58 tbarron: i want to test those changes on centos and invoke the netapp-ci which is testing this on xenial
15:36:06 after which we should be good
15:36:34 gouthamr: sounds like good progress
15:36:56 I have work to do on the base patch wrt the Grenade job - that's on my plate today - it'd be nice to get these in, and be the first project to stop testing Stein on Xenial :D
15:37:02 vkmc: gouthamr: what will be the next step on the python3 goal after converting the lvm job?
15:37:17 tbarron, porting tempest tests
15:38:07 vkmc: will we port to zuulv3 as part of that?
15:39:00 we also need to replicate these job changes on manila-tempest-plugin (and at some point look at rebuilding the service images)
15:39:49 gouthamr: ack
15:39:52 imho zuulv3 can be a slower transition, unless infra wants us to do it right away
15:40:11 gouthamr: fine by me
15:40:22 i haven't heard that they do, we have so much other work to get to :)
15:40:59 vkmc: gouthamr: you've been keeping
15:41:07 tbarron, yes, but migrating to zuulv3 requires some planning since this will affect third party ci's
15:41:09 #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Python3
15:41:17 we need to take it slowly
15:41:31 pretty up to date, so please keep doing so :)
15:41:41 we discussed this briefly during the dublin ptg, we didn't talk about it during the denver ptg
15:41:44 and ack on decoupling this from zuulv3
15:41:46 but it certainly is the next step
15:42:42 yeah, we need to decouple the py3 migration from the zuulv3 migration
15:42:55 I know everybody has been busy with downstream work and concerns lately but I'll mention
15:43:25 on the Manila CSI subject that we hope to have some
15:43:44 good discussions with CERN and the OpenStack k8s sig on this subject
15:43:49 at the Summit
15:44:04 \o/
15:44:17 Meanwhile people are using the dynamic external storage provider for manila successfully but
15:44:44 some of them have raised an issue when using the CephFS back end where
15:45:06 k8s needs the shares to be mode 775 instead of 755
15:45:34 because the workload running in the container, which sees the share via bind mount from the k8s host,
15:45:57 runs with a high-numbered (non-root) uid but is in the root group
15:46:27 Oh yes I remember this discussion
15:46:28 I have an open PR in the ceph_volume_client (in ceph) for this and a dependent review in manila
15:46:43 We need a way for manila to set the mode/owner of the root of the share
15:46:44 I mention this in case any other back ends have similar issues.
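[Editor's note] The 775-vs-755 issue above comes down to the group-write permission bit: a workload running with a non-root uid in the root group can traverse and read a root-group-owned share root at mode 755, but cannot create files in it unless the group-write bit (mode 775) is set. A minimal illustration using Python's `stat` constants:

```python
import stat

def group_can_write(mode):
    """True if the group-write permission bit is set on `mode`.

    For the k8s/CephFS case discussed above: the container process
    is uid != 0 but gid 0 (root group), so its write access to the
    share root is governed entirely by the group bits.
    """
    return bool(mode & stat.S_IWGRP)

# 0o755 (rwxr-xr-x): group may read and traverse, not write.
# 0o775 (rwxrwxr-x): group-write bit set, the workload can write.
```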
15:47:13 bswartz: the manila patch right now is short-term and back end specific but
15:47:14 It's currently undefined, which means it's completely up to the driver
15:47:22 And that's terrible
15:47:25 we can look at generalizations too
15:47:50 Some drivers just set it to 777 to avoid any problems
15:48:20 I think we have to get a sense of back end capabilities and defaults in this area before we can do a general solution scoped over all back ends
15:48:42 So I may send an email on the dev list to inquire.
15:49:04 ok, moving along
15:49:12 Well I think it's a solvable problem
15:49:27 There's no reason a backend couldn't set the mode/owner -- it's just not in the manila API
15:49:35 Anyone have updates on any of the other planned work?
15:49:35 The hard part would be amending the Manila API
15:50:04 bswartz: agree, that's why we're doing back end specific for cephfs and cephfs/nfs as the first step
15:50:17 other back ends may want to do something similar
15:50:57 #topic Bugs
15:51:07 #link https://wiki.openstack.org/wiki/Manila/SteinCycle#Bug_Triage
15:51:16 dustins: got anything for us today?
15:51:34 tbarron: I do not, unfortunately/fortunately
15:52:15 dustins: yeah I know you've been pretty busy with downstream concerns ...
15:52:27 To put it lightly :P
15:52:28 there is a new bug though, and I have some comments from another bug
15:52:42 ganso: Do tell!
15:52:43 ganso: link?
15:52:57 looking for it, just a sec
15:53:14 #link https://bugs.launchpad.net/manila/+bug/1799742
15:53:14 Launchpad bug 1799742 in Manila "Wrong segmentation ID sent to drivers when using multi-segments" [Medium,New]
15:53:41 some Rodrigo guy raised this one :)
15:53:52 I found this while testing the network plugins with multiple neutron segments
15:54:06 neutron would allow each subnet to have different VLANs
15:54:32 but the manila plugin code just iterates over the list of segments from the network (not the subnet), then picks the last one, for any subnet
15:55:20 I checked whether it was an easy fix
15:55:24 ganso: so is the fix straightforward? i.e. do we have the info to know where in the loop to stop instead of just picking the last?
15:55:26 according to my approach, it is not
15:55:57 because the neutron segments API is not supported by our abstraction of the neutron client, nor by neutron client v2.0 itself
15:56:03 it requires openstacksdk
15:56:30 I glanced at it, and it seems the authentication system is slightly different (uses that yaml file which we wouldn't want to use)
15:56:37 ganso: https://github.com/openstack/manila/blob/2b40e5618f2c1039bbbbd1a3e31b72e104b5436b/manila/network/neutron/neutron_network_plugin.py#L258
15:57:09 gouthamr: L269 is the bug
15:57:15 the note suggests "the lowest segment" - it's unclear what that is, but mkoderer may have known what was going on
15:57:17 well, using the SDK rather than clients is arguably the right direction anyway ...
15:57:33 gouthamr: yeah, I don't know what it is supposed to mean
15:57:39 i know what you're referring to, i'm suggesting that you could have a misconfiguration?
15:58:14 gouthamr: I am not sure; it would be good to have this triaged by someone who understands a lot more about neutron multi-segments than I do
15:58:49 tbarron: may I proceed to the next bug?
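[Editor's note] The bug discussed above is that the plugin iterates over all of the network's segments and always ends up with the last one, regardless of which subnet the share network uses. The shape of a fix is to select the segment that actually matches the subnet. This is a hypothetical sketch over plain dicts, not the actual manila plugin code; `provider:physical_network` and `provider:segmentation_id` are the attribute names neutron uses for multi-segment networks:

```python
def pick_segment(segments, subnet_physnet):
    """Pick the network segment whose physical network matches the
    subnet's, instead of blindly taking the last segment in the list.

    `segments` is a list of dicts shaped like neutron's `segments`
    attribute on a multi-provider network, e.g.
    {'provider:physical_network': 'physnet1',
     'provider:network_type': 'vlan',
     'provider:segmentation_id': 1001}.
    """
    for segment in segments:
        if segment.get('provider:physical_network') == subnet_physnet:
            return segment
    raise LookupError("no segment matches physnet %r" % subnet_physnet)
```

The hard part noted in the discussion is not this selection logic but obtaining the subnet-to-segment association at all, since the neutron segments API is only exposed via openstacksdk.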
15:58:50 yes, reading through the code it looks like only one of the segments can have its 'provider:physical_network' attr set to the physical network
15:58:59 ganso: maybe "reach out" to mkoderer?
15:59:06 ganso: sure, ++
15:59:11 bug++
15:59:14 tbarron: yeah, we can try that
15:59:22 bug #2 #link https://bugs.launchpad.net/manila/+bug/1786059
15:59:22 Launchpad bug 1786059 in Manila "Cannot Connect to Share Server when Creating Share with Generic Driver (DHSS=True)" [Undecided,Invalid]
15:59:23 you have 45 secs
15:59:31 for some reason, a lot of people are hitting that bug
15:59:35 I have discussed it already
15:59:43 we cannot reproduce it in the gate
15:59:49 right
15:59:52 but we don't know why so many people are hitting that
15:59:57 I'm receiving emails asking for help
16:00:03 There is an environmental issue
16:00:04 i kinda have an inkling why
16:00:05 and I don't know how to help
16:00:05 :)
16:00:17 But I was never able to track down the root cause
16:00:19 let's go to #openstack-manila, I think a lot of us are getting these emails
16:00:20 cause even
16:00:28 I spent a lot of time this and last week helping people debug, and I am out of ideas
16:00:37 Thanks everyone!
16:00:40 #endmeeting