15:01:01 #startmeeting manila
15:01:02 Meeting started Thu Jun 18 15:01:01 2020 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:05 The meeting name has been set to 'manila'
15:01:06 Hi
15:01:22 o/
15:01:23 o/
15:01:25 o/
15:01:32 hi
15:01:35 courtesy ping: ganso vkmc amito dviroel danielarthurt
15:01:43 o/
15:02:01 hi
15:02:02 hello everyone o/
15:02:25 thanks for joining, i hope you're all doing well! The agenda for this meeting is here: https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:02:25 o/
15:02:52 let's begin as usual with,
15:02:55 #topic Announcements
15:03:27 i hope you all had some light reading from this ML post
15:03:29 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015494.html (PTG Summary)
15:04:34 gouthamr: ty for doing this summary
15:04:43 gouthamr++
15:04:54 sorry for the novel, but, if you feel there's something missing there, let me know - we'll revisit this when we review/commit the code we spoke about
15:05:13 it's a great reference even if maybe not a potboiler thriller
15:05:31 haha, it's like a memoir of a thriller
15:05:37 We're at milestone-1
15:05:59 that means we're a couple of weeks away from our specifications deadline
15:06:02 #link https://releases.openstack.org/victoria/schedule.html (victoria release schedule)
15:06:40 so this is a reminder: if you want to propose specs for the victoria cycle, please do so by Jul 10
15:07:15 s/a couple/three
15:07:33 we'll discuss milestone-1 deliverables in a minute
15:07:47 but, that's all i had in terms of announcements
15:07:55 anyone else got any?
15:08:08 * vkmc sneaks in
15:08:21 with an announcement? :)
15:08:36 cool, let's move on with an ad-hoc topic
15:08:42 #topic Milestone-1 rollcall
15:09:03 no bugs in python-manilaclient marked for m-1
15:09:03 #link https://launchpad.net/python-manilaclient/+milestone/victoria-1 (milestone-1 bugs in python-manilaclient)
15:10:06 there's a new release already posted, it'll show up here https://releases.openstack.org/victoria/ soon-ish
15:10:18 #link https://review.opendev.org/#/c/735710/
15:10:29 there's nothing from manila-ui either
15:10:29 #link https://launchpad.net/manila-ui/+milestone/victoria-1 (milestone-1 bugs in Manila UI)
15:10:39 let's review this list
15:10:39 #link https://launchpad.net/manila/+milestone/victoria-1 (milestone-1 bugs in Manila)
15:11:26 there are 12 bugs in progress, and 4 not in progress
15:11:36 i'll move all 4 of these to milestone-2
15:12:03 is there anything in the in-progress list that we should pay attention to?
15:12:17 if not, we'll retarget them as well
15:13:17 we're using these milestones to track work through the cycle, and not much else - we don't make a release for manila at milestones anymore - but trunk consumers can see when a bugfix lands based on the milestone we target
15:14:29 *crickets*
15:14:54 :) alright, please ping me/follow up on bugs that you own, or are reviewing
15:15:02 and let's update the statuses accordingly
15:15:17 any questions/concerns?
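For anyone who wants to script this kind of milestone rollcall rather than eyeball the Launchpad pages, the same data is available through the Launchpad API. Below is a minimal sketch using launchpadlib; it assumes anonymous read access and that getMilestone()/searchTasks() behave as documented - treat it as illustrative, not as tooling the team maintains.

```python
# Sketch: list the bugs targeted to manila's victoria-1 milestone.
# Requires: pip install launchpadlib
from launchpadlib.launchpad import Launchpad

# Anonymous, read-only login against production Launchpad.
lp = Launchpad.login_anonymously('manila-m1-rollcall', 'production',
                                 version='devel')

manila = lp.projects['manila']
victoria_1 = manila.getMilestone(name='victoria-1')

# Bug tasks targeted to the milestone, with their current statuses,
# e.g. to spot the "not in progress" ones that should be retargeted.
for task in manila.searchTasks(milestone=victoria_1):
    print(task.status, '-', task.title)
```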
15:15:31 #topic CI/Gate Status
15:16:14 late last week we hit an issue with uwsgi
15:16:15 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015432.html
15:16:21 #link https://bugs.launchpad.net/devstack/+bug/1883468 (stack.sh fails because uWSGI directory not found in lib/apache)
15:16:21 Launchpad bug 1883468 in devstack "stack.sh fails because uWSGI directory not found in lib/apache" [Critical,In progress]
15:16:23 #link https://bugs.launchpad.net/manila/+bug/1883715 (Manila API fails to initialize uwsgi)
15:16:23 Launchpad bug 1883715 in OpenStack Shared File Systems Service (Manila) "Manila API fails to initialize uwsgi" [Critical,Fix released] - Assigned to Douglas Viroel (dviroel)
15:16:47 there have been fixes in devstack, and in manila (thanks, dviroel)
15:16:54 gouthamr: np
15:17:05 on the main branch, as well as the stable/ussuri, train and stein branches
15:17:14 dviroel++
15:17:22 dviroel++
15:17:23 but everything older - i.e., the extended maintenance branches - is still broken
15:18:33 we don't use uwsgi on queens i think
15:19:20 by "we", i mean when deploying manila-api - but, because the rest of the projects do, devstack would be broken
15:19:34 https://review.opendev.org/#/c/631338/ --- iirc, we added this during stein
15:20:33 please keep a lookout for this before "rechecking" on these older branches
15:20:46 anything else to add here, dviroel?
15:21:19 i don't think so, but we'll need stable/rocky and stable/queens working again soon, to land an important fix
15:21:21 if you saw third party CI failing - this is likely the issue, but, i think dviroel/andrebeltrami hit some other issues in their CI
15:21:57 gouthamr: yeah, it's almost healthy again
15:22:18 gouthamr: we are updating the images at this moment
15:23:01 do we have any jobs running on ubuntu 16.04?
15:23:02 dviroel: what OS do you run the NetApp CI on?
15:23:12 dviroel: we do, in rocky and queens
15:23:50 dviroel: actually, most dsvm jobs in rocky/queens should be running on xenial
15:23:53 gouthamr: migrating to 18.04 now
15:25:08 dviroel: ack, devstack was building uwsgi from source for xenial - so i'm not sure how that can be resolved
15:25:37 gouthamr: yeah, we might need a workaround for that
15:25:57 dviroel: we'll follow that up - one alternative is to use mod-wsgi, but i'm not sure the rest of the projects support that - devstack itself had a deprecation warning when we were testing with it
15:26:47 gouthamr: ack, we can try that also
15:26:50 dviroel: if you're updating images, i suggest you update main branch testing with focal fossa
15:27:29 gouthamr: we will do that in sequence
15:27:33 ^ same goes for all third party CI systems - given that we'll likely run into more issues like this in the future
15:28:16 any other concerns / questions?
15:28:55 cool, let's move on
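For CI operators stuck on the Xenial-based extended-maintenance branches, the mod-wsgi fallback mentioned above could look something like the local.conf fragment below. This is only a sketch: whether devstack on stable/rocky and stable/queens fully honors WSGI_MODE for every service would need verifying, and devstack itself warns that mod_wsgi is deprecated.

```ini
[[local|localrc]]
# Hypothetical workaround for Xenial-based rocky/queens jobs where
# building uWSGI from source fails: fall back to mod_wsgi instead.
# Verify the branch's devstack actually supports this before relying
# on it; mod_wsgi is deprecated in devstack.
WSGI_MODE=mod_wsgi
```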
15:28:56 #link https://etherpad.openstack.org/p/manila-victoria-review-focus
15:29:06 #topic Reviews needing attention
15:29:10 #link https://etherpad.openstack.org/p/manila-victoria-review-focus
15:29:45 is there anything on that list that needs to be discussed?
15:30:53 i haven't gotten back to the zuulv3 goal since before the PTG - we should probably do a status check next week
15:31:27 but it generally looks like we're not paying attention to the reviews on this etherpad
15:32:08 :) kinda defeats the purpose - i'll start assigning reviews and reaching out this week
15:32:49 yeah, it doesn't seem to work for small changes/fixes, but it does work when we are close to feature freeze
15:33:18 possibly, since we have deadlines for those
15:33:39 and bugfix deadlines are more relaxed
15:34:26 let's see, i'll go down that list and chase people for status - if you own a change on the list, or have reviewed it, please leave a status message under the change
15:34:43 let's move on and discuss some bug backlog
15:34:46 #topic Bugs (vhari)
15:34:55 o/ vhari - floor is yours
15:35:08 gouthamr, ofc we have bugs to scrub :)
15:35:11 #link https://bugs.launchpad.net/manila/+bug/1639662
15:35:11 Launchpad bug 1639662 in OpenStack Shared File Systems Service (Manila) "Share service VM system to restart stuck" [Undecided,In progress] - Assigned to Xiaoyang Zhang (es-xiaoyang)
15:35:24 easy one.. fix merged ..
15:35:34 oh, yes
15:35:37 can this be closed now?
15:36:03 vhari: yes, we can move this to "Fix Released", a milestone isn't necessary
15:36:21 bot went on a holiday
15:36:33 k
15:36:35 next up
15:36:39 #link https://bugs.launchpad.net/manila/+bug/1754428
15:36:39 Launchpad bug 1754428 in OpenStack Shared File Systems Service (Manila) "Tempest failure: manila_tempest_tests.tests.api.test_rules.ShareIpRulesForNFSTest.test_create_delete_access_rule_with_cidr fails intermittently" [Medium,Triaged]
15:37:26 need to know if this is still an issue ..
15:37:53 i think it is, we'll need to look on the logserver for recent failures
15:38:10 if so, we need to add minor triage info
15:39:32 vhari: ack, probably isn't an easy fix
15:39:53 vhari: i'll check if http://logstash.openstack.org/#/dashboard/ shows any occurrences and we can update the bg
15:39:57 bug*
15:40:34 let's loop back on this one after the meeting
15:40:45 gouthamr, ack .. repro info will help
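As a starting point for that logstash check, a query along these lines could be pasted into the logstash.openstack.org search box (this is only illustrative - an elastic-recheck-quality signature should really key on a distinctive failure message from the logs rather than just the test name):

```
message:"test_create_delete_access_rule_with_cidr" AND build_status:"FAILURE" AND tags:"console"
```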
15:41:06 #link https://bugs.launchpad.net/manila/+bug/1772029
15:41:06 Launchpad bug 1772029 in OpenStack Shared File Systems Service (Manila) "ganesha library: update of export counter and rados object url index object racy" [Medium,In progress] - Assigned to Ramana Raja (rraja)
15:42:15 looking for some update on this one too
15:42:18 hmmm, this looks like rraja's tracker - i wonder what's left to do in manila
15:42:34 the issue of atomicity of updates seems to be fixed in ceph itself
15:42:35 I wonder if it's actually a supported configuration.
15:42:49 but he mentions, "The fix could will require changes to the ganesha library (in manila) and the ceph_volume_client library (in Ceph)."
15:42:57 multiple manila driver instances using the same Ganesha server
15:43:28 tbarron: hmmm, that's a good point - what is supported/recommended when using multibackend?
15:44:22 He might have seen that the code would be racy in such a circumstance, but I think CephFS folks including rraja will say that there may be other issues on the Ceph side that you'd hit first.
15:44:23 multifs hasn't been supported in ceph - until then, should we say you can only have one manila ceph-nfs backend per ceph cluster?
15:45:14 Also, the plan is over time to migrate off the python-specific ceph_volume_client library, right?
15:46:21 And it looks like rraja envisioned the fix to be in that library, not in the manila driver itself?
15:47:04 hmmm, we won't stop using this code path (ceph-volume-client to write exports into rados) until ceph-mgr replaces that operation (it doesn't today)
15:47:44 Agreed, but it seems like he's saying it's over on the ceph library/mgr side of the fence.
15:48:10 it == the work to fix
15:48:17 the bug report also says "The fix could will require changes to the ganesha library (in manila)..."
15:48:25 i see
15:48:48 because something changes on the other side and we need to adapt?
15:48:53 perhaps
15:50:50 vhari: we need some investigation
15:51:11 gouthamr, ack
15:51:17 i'll post on this bug after this meeting
15:51:20 gouthamr: I like your suggestion that we add a note in the ganesha doc indicating that currently running multiple ceph-nfs backends with the same ganesha server is not safe, with a link to this bug.
15:51:35 tbarron: +1
15:51:41 https://bugzilla.redhat.com/show_bug.cgi?id=1600068
15:51:41 bugzilla.redhat.com bug 1600068 in CephFS "ceph_volume_client: allow atomic updates of object" [Low,Closed: errata] - Assigned to ridave
15:51:45 ^ related bugzilla
15:51:46 I doubt anyone is actually doing this though.
15:52:05 ack
15:52:17 gouthamr, so that's a wrap for bugs
15:52:21 awesome, ty vhari
15:52:26 yw gouthamr
15:52:29 #topic Open Discussion
15:53:14 so we had some concerning news from the neutron folks this week on the ML
15:53:20 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015480.html ([neutron][neutron-dynamic-routing] Call for maintainers)
15:53:42 it appears that they're looking for helping hands; otherwise, they'll be deprecating the project
15:54:30 oh
15:54:50 Where do we rely on it? Only for IPv6 environments?
15:55:09 we've used this service, in combination with quagga/zebra, to test the ipv6 data path
15:56:07 specifically there, we need a way to advertise more specific cidr networks so that they are reachable on return paths
15:56:24 at least that is my understanding, someone correct me if there's more to it
15:56:33 yes that's mostly it
15:56:38 mostly?
15:57:17 our initial solution was to have something set up static return routes - but it got complicated real quick
15:57:20 Cause I think we should figure out what the gap will be and ask how to fill it; it can't be a manila-only need
15:58:15 it can't be, but i only see one response there so far
15:58:42 so if anyone here is interested in continuing to use that project, let's look at sharing the maintenance responsibilities
15:59:06 I disagree, we shouldn't work on this. It's a diversion of resources.
15:59:26 And there must be another good way to solve this problem if the project is languishing.
16:00:06 There are major openstack distros that support IPv6 but don't support dynamic routing in OpenStack.
16:00:19 agreed; we're at the hour - let's take this to #openstack-manila
16:00:31 To be clear, if someone is really enthusiastic and interested
16:00:32 kk
16:00:34 thank you all for attending, stay safe!
16:00:39 #endmeeting