15:00:56 <gouthamr> #startmeeting manila
15:00:56 <openstack> Meeting started Thu May 28 15:00:56 2020 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:57 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:59 <openstack> The meeting name has been set to 'manila'
15:01:20 <carloss> o/
15:01:22 <lseki> o/
15:01:25 <andrebeltrami> hello
15:01:26 <maaritamm> o/
15:01:29 <carthaca> Hi
15:01:33 <vhari> hi
15:01:39 <gouthamr> courtesy ping: xyang toabctl ganso vkmc amito dviroel tbarron danielarthurt
15:02:00 <tbarron> hi
15:02:01 <gouthamr> hello o/
15:02:09 <gouthamr> here's our agenda for today:
15:02:12 <gouthamr> #link https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:02:31 <gouthamr> lets begin as usual with..
15:02:33 <vkmc> o/
15:02:34 <gouthamr> #topic Announcements
15:02:35 <vkmc> henlo
15:02:52 <danielarthurt> 'o/
15:03:19 <gouthamr> We're not going to meet at 1500 UTC in this channel next week, due to the PTG
15:03:24 <gouthamr> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014567.html (PTG schedule announcement, no IRC meeting on June 4th 2020)
15:04:48 <gouthamr> if you haven't looked at the ML, there's a post regarding the manila PTG:
15:04:52 <gouthamr> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-May/015090.html
15:05:16 <gouthamr> main items there are that there's now a draft schedule in our planning PTG etherpad
15:05:18 <gouthamr> #link https://etherpad.opendev.org/p/vancouver-ptg-manila-planning
15:06:08 <gouthamr> there's a helpful page on the PTG website that you should use to track the meetings:
15:06:14 <gouthamr> #link http://ptg.openstack.org/ptg.html (PTG Schedule)
15:06:44 <gouthamr> you can click on the "manila" meeting slots on this page and be taken to the Zoom room that we're going to be in
15:07:26 <gouthamr> these meetings will be recorded - i encourage you to turn on your cameras :) it's been forever since we saw each other...
15:08:11 <gouthamr> there's another email that went out to those that have registered, it contains some more useful information
15:08:14 <gouthamr> #link https://drive.google.com/file/d/1r9T9A8oGfG6zMW2Bq0qiIeevgDRbUjjG/view (PTG Checklist Email)
15:08:55 <gouthamr> It calls out some best practices for the meeting, worth taking a look there as well:
15:08:58 <gouthamr> #link https://etherpad.opendev.org/p/virtual-ptg-best-practices
15:09:39 <gouthamr> Tuesday, June 2 is PTG Costume Day!
15:10:09 <gouthamr> "Dress up as your favorite character, superhero, etc! Or simply find things around
15:10:09 <gouthamr> your house to create a fun costume. There will be up to 3 winners who will win a
15:10:09 <gouthamr> ticket to an OpenStack Foundation event in 2021. To enter the contest, take a
15:10:09 <gouthamr> picture of you in your costume while attending a PTG meeting and email the
15:10:09 <gouthamr> picture to ptg@openstack.org by 7:00 UTC on Wednesday, June 3. Winners will
15:10:10 <gouthamr> be contacted on Friday, June 5."
15:11:26 <gouthamr> so let your imagination run wild...
of course no one'll dare disagree with your design opinions if you come as Thor, just saying
15:12:04 <gouthamr> we'll begin our meeting on June 1st (14:00 UTC) with a Retrospective
15:12:08 <gouthamr> #link https://etherpad.opendev.org/p/manila-ussuri-retrospective (Ussuri cycle Retrospective etherpad)
15:12:31 <gouthamr> please take a look at that etherpad and add your views in before the meeting ^
15:13:44 <gouthamr> last week, in an internal discussion, vkmc and i started jotting down a bunch of things that are on our #todo list for the upcoming cycle/s
15:13:48 <gouthamr> #link https://etherpad.opendev.org/p/manila-todos (Manila TO DO items)
15:14:19 <gouthamr> we'll bring this up during the PTG, but, if you have items to add there, please do..
15:14:47 <gouthamr> cool, that's all of the announcements
15:15:12 <gouthamr> does anyone else have any?
15:16:13 <gouthamr> #topic Reviews needing attention
15:16:17 <gouthamr> #link https://etherpad.openstack.org/p/manila-victoria-review-focus
15:16:54 <gouthamr> i guess the top items on this list are still in review
15:17:01 <gouthamr> are there any concerns to bring up?
15:18:21 <gouthamr> i understand folks are busy with specs and stuff at this point in the cycle, but do spare some time to take a look at these reviews when you can
15:18:43 <gouthamr> we'll do a rollcall when we return from the PTG
15:18:55 <gouthamr> anything else regarding $topic?
15:19:56 <gouthamr> looks like we'll get done early today :)
15:19:59 <gouthamr> #topic Bugs (vhari)
15:20:08 <gouthamr> vhari: you're up
15:20:32 <vhari> hey gouthamr let's start with new bugs
15:20:34 <vhari> #link https://bugs.launchpad.net/manila/+bug/1881112
15:20:34 <openstack> Launchpad bug 1881112 in OpenStack Shared File Systems Service (Manila) "services table is not cleaned up after host rename" [Undecided,In progress] - Assigned to Maurice Escher (maurice-escher)
15:20:50 <vhari> there is a fix proposed -
15:21:03 * gouthamr loves it when bugs are posted with patches
15:21:09 <vhari> need to add importance and other minor details
15:21:25 <vhari> gouthamr, but wait there is more :)
15:22:16 <gouthamr> ack, this is an annoying issue, and good to fix, i vote medium
15:22:23 <gouthamr> we can target it to victoria-1
15:22:44 <gouthamr> are you expecting to backport this as well, carthaca?
15:22:45 <vhari> ack
15:23:43 <carthaca> gouthamr: whatever you want, I usually don't need backports :)
15:24:18 <dviroel> o/
15:24:31 <vkmc> nice one
15:24:41 <gouthamr> carthaca: ack, ty.. we'll hold off unless someone finds it useful - we'll check for any downstream requests
15:24:55 <gouthamr> s/useful/useful in the older releases
15:25:04 <gouthamr> ty for the bug and the fix, carthaca
15:25:07 <carthaca> technically it should be doable if we stay with the proposed solution
15:25:25 <gouthamr> +1
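The patch under review for bug 1881112 isn't quoted in the log. Purely as a sketch of the kind of cleanup being discussed (pruning service records that stopped checking in after a host rename), it could look something like the block below; the services/updated_at/deleted column names and the standalone SQLAlchemy approach are assumptions for illustration, not the contents of the actual fix.

    # Illustrative sketch only, not the patch proposed for bug 1881112.
    # Assumes a soft-deleting "services" table with updated_at/deleted/deleted_at
    # columns, in the style oslo.db models usually carry.
    import datetime
    import sqlalchemy as sa

    def prune_stale_services(engine, grace=datetime.timedelta(days=7)):
        """Soft-delete service rows that have not checked in within `grace`."""
        meta = sa.MetaData()
        services = sa.Table("services", meta, autoload_with=engine)
        cutoff = datetime.datetime.utcnow() - grace
        with engine.begin() as conn:
            result = conn.execute(
                services.update()
                .where(services.c.updated_at < cutoff)
                .where(services.c.deleted == 0)
                # oslo.db-style soft delete stores the row id in "deleted"
                .values(deleted=services.c.id,
                        deleted_at=datetime.datetime.utcnow())
            )
            count = result.rowcount
        return count

A real change would also have to tell a renamed host apart from one that is simply down for longer than the grace period.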
15:25:50 <vhari> next up
15:25:52 <vhari> #link https://bugs.launchpad.net/manila/+bug/1881098
15:25:52 <openstack> Launchpad bug 1881098 in OpenStack Shared File Systems Service (Manila) "missing means to adjust share servers after share_backend rename" [Undecided,In progress] - Assigned to Maurice Escher (maurice-escher)
15:26:14 <vhari> fix proposed -
15:26:24 <vhari> needs minor triage
15:26:44 <gouthamr> great, feels like we'll need this for other resources as well
15:27:14 <carthaca> share groups maybe
15:27:31 <gouthamr> +1
15:27:47 <carthaca> but we don't use them, so I didn't touch those at first
15:27:51 <gouthamr> cool, ty for this one as well - i think this is a medium
15:28:08 <gouthamr> victoria-1 as well
15:28:22 <gouthamr> any objections?
15:28:30 <vkmc> +1
15:28:32 <vhari> done :)
15:29:07 <vhari> #link https://bugs.launchpad.net/manila/+bug/1831983
15:29:07 <openstack> Launchpad bug 1831983 in OpenStack Shared File Systems Service (Manila) "Manila 8.0.0 stein GPFS Driver index error" [Undecided,New]
15:30:44 <gouthamr> it's a year old and has some back-and-forth
15:30:48 <vhari> floating this out - waiting for feedback on suggested changes
15:30:55 <tbarron> there is a proposed fix in the bug but we don't have a way to test it ?
15:31:15 <tbarron> pretty trivial fix though
15:31:32 <gouthamr> +1, python3 tests are passing, so either there's no unit test or the fake data isn't good enough
15:31:51 <gouthamr> it's been a while since we saw someone from GPFS
15:32:13 * gouthamr wonders which company owns that thing
15:32:23 <tbarron> big company i hear
15:32:29 <vkmc> D:
15:32:37 * tbarron isn't blue-red color blind
15:32:47 <gouthamr> ah! there's my excuse
15:32:57 <vhari> :D
15:33:30 <gouthamr> vhari: let's mark it wishlist, perhaps? it's possible that the reporters couldn't submit their fix for some reason
15:33:45 <ganso> o/
15:33:50 <gouthamr> vhari: we could also mark it "low-hanging-fruit" because there's a suggested fix
15:34:00 <gouthamr> \o ganso
15:34:11 <vkmc> hmm I'd refrain from doing that because there is no easy way to test it
15:34:20 <gouthamr> vhari: and someone who's looking to contribute can do the follow-up?
15:34:50 <gouthamr> vkmc: oh, i was thinking we'd still run unit tests... from what i read, the driver's broken with python3
15:34:53 <vkmc> adding the line is easy but how can we make sure it is the correct fix if we don't have the hardware
15:35:10 <vkmc> do we have third party ci to test that?
15:35:16 <gouthamr> vkmc: i agree, no way to know if it works for sure
15:35:47 <gouthamr> vkmc: GPFS CI was reporting for a while, i haven't seen it around since Ussuri
15:36:04 <gouthamr> i can reach out to the maintainers we have on file
15:36:14 <vkmc> gouthamr, yeah, and we can mention this bug to them
15:36:23 * gouthamr opens dviroel's driver maintainers list
15:36:48 <gouthamr> yep, i'll email and let them know of this...
15:36:52 <vkmc> ++
15:37:13 <vhari> gouthamr, "low-hanging-fruit" may get more attention but it needs a verification method
15:37:28 <vhari> gouthamr, so hold off until we know more?
15:37:36 <gouthamr> vhari: yes...
15:37:48 <vhari> gouthamr, you got it
15:37:51 <gouthamr> ty vhari
15:38:07 <vhari> yw
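gouthamr's point about the fake data is the actionable part: a unit test only catches this class of IndexError if the fabricated command output has the same shape as what the real backend prints. A hypothetical illustration (not the actual GPFS driver code from bug 1831983):

    # Hypothetical output parser in the style of the ssh-output parsing the
    # GPFS driver does; names and layout are made up for illustration.
    def parse_device(out: str) -> str:
        """Return the device name from list-filesystems style output."""
        lines = out.splitlines()
        # Raises IndexError if the command output has fewer lines than the
        # parser expects, e.g. when a header line is missing.
        return lines[1].split()[0]

    def test_parse_device():
        # The fake data has to mimic the real header + data layout, otherwise
        # the indexing above is never exercised the way it is in production.
        fake_out = "Name    Mountpoint\ngpfs0   /gpfs0\n"
        assert parse_device(fake_out) == "gpfs0"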
15:38:17 <vhari> #link https://bugs.launchpad.net/manila/+bug/1855391
15:38:17 <openstack> Launchpad bug 1855391 in OpenStack Shared File Systems Service (Manila) "if all backend down, the status of share will stuck in "extending" when we extend an share" [High,In progress] - Assigned to haixin (haixin77)
15:38:26 <vhari> we discussed this recently -
15:38:29 <gouthamr> ah! this is an ongoing bugfix
15:38:54 <vhari> floating this one out since it's a high with no activity
15:39:04 <vhari> :(
15:39:11 <gouthamr> haixin's trying to update the extend request to flow through the scheduler
15:39:40 <gouthamr> but there's something tripping up, and i haven't paid attention in a few weeks
15:39:56 <vhari> we can shelve it a bit longer - it's non-trivial
15:40:41 <gouthamr> vhari: agreed, i'll take a look right after this meeting and see if i can help
15:40:50 <vhari> gouthamr, cool
15:40:58 <vhari> #link https://bugs.launchpad.net/manila/+bug/1838936
15:40:58 <openstack> Launchpad bug 1838936 in kolla-ansible ussuri "manila-share not working with ceph mimic (13.2) nor ceph nautilus (14.2)" [Undecided,Triaged]
15:41:16 <vhari> I was looking at this ..
15:41:30 <gouthamr> hmmm, there's been a bit of back and forth with this bug
15:41:34 <vhari> do we agree this is a manila issue?
15:41:42 * vkmc checks
15:41:56 <vkmc> so... our ci says otherwise
15:42:01 <vhari> ack
15:42:11 <vkmc> we have been running cephfs native and cephfs nfs jobs with nautilus and things work properly
15:42:13 <vkmc> not so sure about mimic
15:42:30 <gouthamr> yep
15:42:41 <vkmc> oh wait
15:42:44 <vkmc> they are with rocky
15:42:50 <gouthamr> here's a related bug where we did hit a similar issue, but the environment was different
15:42:52 <gouthamr> #link https://bugzilla.redhat.com/show_bug.cgi?id=1820346
15:42:52 <openstack> bugzilla.redhat.com bug 1820346 in CephFS "Setting 'ceph.quota.max_bytes' fails" [Medium,Closed: notabug] - Assigned to khiremat
15:43:18 <vkmc> in rocky we test with luminous
15:43:38 <gouthamr> there we triaged the issue to not having the latest kernel client
15:44:38 <gouthamr> vkmc: ack, we could point them at the nautilus job and show them the results
15:44:47 <vkmc> gouthamr, sure
15:44:50 <vkmc> I can follow up with this one
15:45:20 <vhari> vkmc++
15:46:01 <vhari> moving on ..
15:46:05 <vhari> #link https://bugs.launchpad.net/manila/+bug/1700138
15:46:05 <openstack> Launchpad bug 1700138 in OpenStack Shared File Systems Service (Manila) "Manila NetApp driver don't delete share-server when cifs configured manually" [Low,Confirmed] - Assigned to Lucio Seki (lseki)
15:46:05 <tbarron> seem to be client/server incompat issues that can't be fixed in manila itself
15:46:13 <gouthamr> tbarron: +1
15:46:15 <tbarron> distro specific
15:46:39 <vhari> looking for feedback -
15:47:08 <gouthamr> tbarron: that too, yes - depends a lot on backports of certain things in ceph
15:47:27 <gouthamr> lseki: #1700138 needs your attention
15:47:37 <dviroel> ^ I want to move this one to invalid, it doesn't look right to try to delete something that wasn't configured by manila
15:47:57 <lseki> yes
15:48:15 <lseki> I'll add a comment there
15:48:37 <gouthamr> good stuff, ty
15:48:40 <vhari> lseki, ty
15:48:43 <ganso> dviroel: does it work if you unmanage?
15:49:12 <dviroel> ganso: I'm not sure
15:49:22 <gouthamr> unmanage's a noop, isn't it?
15:49:46 <ganso> IIRC yes, it should work
15:50:05 <gouthamr> https://opendev.org/openstack/manila/src/commit/00e548a60c9183366e4cc935436558c330a77d63/manila/share/drivers/netapp/dataontap/cluster_mode/lib_multi_svm.py#L516-L517
15:50:35 <ganso> gouthamr++
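The linked lib_multi_svm.py lines back up gouthamr's point that unmanaging a share server is essentially a no-op on the backend. A minimal sketch of what that looks like at the driver level; the method name and signature are an approximation for illustration, not copied from manila:

    # Rough illustration of a driver-side "unmanage" that touches nothing on
    # the storage backend; manila just forgets the share server, so any
    # manually configured CIFS server is left behind exactly as it was.
    class FakeDriver:
        def unmanage_server(self, server_details, security_services=None):
            """Stop managing the share server without deleting anything."""
            pass  # intentionally a no-op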
15:51:19 <gouthamr> the problem here though is that tenants may see some network allocations on their neutron network if the share server isn't deleted...
15:51:57 <gouthamr> there's no other direct impact on the tenants themselves..
15:52:01 <gouthamr> that i can see
15:52:03 <ganso> gouthamr: since unmanage is an admin op, the admin should clean up the allocations (if that makes sense)
15:52:42 <gouthamr> ganso: ah! that's true, so we have a similar case when an admin unmanages a tenant's share server
15:54:02 <ganso> gouthamr: isn't that the same case? I mean, aren't all share servers the "tenant"'s (although with only admin visibility)?
15:54:07 <gouthamr> dviroel lseki: perhaps a good thing to document - this sort of thing is possible with all DHSS=True drivers since the servers aren't entirely under manila's control; administrators have "backdoors" to manipulate these servers if they wish, and that might leave resources in an inconsistent state in manila
15:54:45 <ganso> s/only admin visibility/admin-only visibility
15:54:56 <gouthamr> ganso: yes, i meant that, without visibility into share servers, tenants may wonder why they can't delete their neutron networks
15:55:19 <ganso> gouthamr: oh ok, yes
15:55:50 <dviroel> gouthamr: yes, administrators can do many things on the backend that can lead manila to an inconsistent state, the cifs-server is just one of them
15:56:04 <lseki> with great power comes great responsibility
15:56:16 <gouthamr> true..
15:56:28 <gouthamr> perhaps a user message can help from our end?
15:56:36 <gouthamr> timecheck..
15:57:03 <gouthamr> vhari: lseki/dviroel will triage this bug
15:57:10 <vhari> gouthamr, ack - that's a wrap for bugs
15:57:10 <gouthamr> we'll take this interesting discussion to #openstack-manile
15:57:14 <gouthamr> manila*
15:57:18 <dviroel> hehe
15:57:21 <gouthamr> great, ty vhari
15:57:26 <gouthamr> #topic Open Discussion
15:57:58 <gouthamr> we've spared three whole minutes :)
15:58:04 <gouthamr> darn, two
15:58:18 <ganso> about bug https://bugs.launchpad.net/manila/+bug/1855391 - don't we have some history on using the scheduler for extend vs not using it? I remember having this discussion
15:58:18 <openstack> Launchpad bug 1855391 in OpenStack Shared File Systems Service (Manila) "if all backend down, the status of share will stuck in "extending" when we extend an share" [High,In progress] - Assigned to haixin (haixin77)
15:58:34 <ganso> and if we haven't changed to using the scheduler there may be a reason for that
15:58:36 <gouthamr> ganso: it sure feels like it
15:59:02 <ganso> gouthamr: not sure if cinder already tackled this
15:59:19 <ganso> maybe I remember this discussion/change from cinder
15:59:33 <gouthamr> ganso: nothing from the wiki or any old specs - the biggest downside i see is that share types are mutable, so a share extension can fail if the share type capabilities are altered, for instance
15:59:52 <ganso> gouthamr: hmmmm yes
16:00:18 <gouthamr> alright, we're at the hour
16:00:23 <gouthamr> thanks a lot for joining, everyone
16:00:35 <gouthamr> hope to see you all on Monday
16:00:47 <tbarron> cya in #openstack-manila and at the ptg next week!
16:00:47 <gouthamr> stay safe!
16:00:53 <gouthamr> #endmeeting
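To unpack the trade-off raised at the end (re-running the scheduler for an extension versus share types being mutable), here is a stripped-down sketch of the capacity check a scheduler pass would redo for an extend request; the HostState fields and the can_extend helper are simplified stand-ins, not manila's actual scheduler interfaces. That same pass through the capability filters is also where an operator-altered share type could now reject the request, which is the downside gouthamr called out.

    # Simplified stand-in for a scheduler-style capacity check on extend;
    # field names mirror commonly reported backend stats but are assumptions
    # here, not manila's real HostState.
    from dataclasses import dataclass

    @dataclass
    class HostState:
        host: str
        total_capacity_gb: float
        free_capacity_gb: float
        reserved_percentage: float

    def can_extend(host: HostState, current_size_gb: int, new_size_gb: int) -> bool:
        """Return True if the backend still has room for the size delta."""
        delta = new_size_gb - current_size_gb
        if delta <= 0:
            return False  # shrinking / no-op requests are handled elsewhere
        reserved = host.total_capacity_gb * host.reserved_percentage / 100.0
        return host.free_capacity_gb - reserved >= delta

    # Example: 100 GiB free, 10% of 500 GiB reserved, extending 10 -> 50 GiB:
    # 100 - 50 >= 40, so the request would be scheduled.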