15:01:39 #startmeeting manila
15:01:40 Meeting started Thu Jul 9 15:01:39 2020 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:43 The meeting name has been set to 'manila'
15:01:45 hi o/
15:01:51 o/
15:01:51 hi!
15:02:08 courtesy ping: ganso vkmc amito lseki tbarron andrebeltrami
15:02:08 Hi!
15:02:09 o/
15:02:11 o/
15:02:12 Hi
15:02:20 o/
15:02:31 hey everyone! thanks for joining
15:02:44 here's the agenda for today: https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:03:49 irc has been a bit flaky, so let me save this meeting just in case :)
15:03:54 #chair tbarron
15:03:54 Current chairs: gouthamr tbarron
15:04:10 ack
15:04:24 ^ i know he doesn't use irccloud or rdocloud or any other thing that's down at the moment :)
15:04:33 #topic Announcements
15:05:14 so we're at R-14 - about the midway point in the victoria release cycle
15:05:20 #link https://releases.openstack.org/victoria/schedule.html
15:05:31 this week is our specifications deadline
15:05:41 we'll discuss these in a bit...
15:05:43 my bouncer is in DigitalOcean -- hope I didn't just jinx them
15:06:16 the week of Jul 27 - Jul 31 is the new driver deadline
15:06:39 i've seen some queries about new driver inclusions, and there's one blueprint filed
15:07:14 let's hope they can have their code submitted and tested by that deadline
15:08:30 if you have any questions or concerns about these deadlines, do let us know - they are helpful safeguards, and save reviewer bandwidth by letting us plan the release better...
15:09:17 but we've had issues come up that take your focus away... and we'll be accommodating of that in a reasonable manner
15:09:49 next up is some good news
15:10:03 we've had a new tc tag approved in the last week
15:10:07 #link https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
15:10:50 gouthamr: thanks for your great work on this! and thanks also to gmann and others who supported the initiative.
15:11:26 absolutely, we had a great discussion about this at the PTG and we have much to follow up on..
15:11:55 +1.
15:12:49 while we're on this topic, i'd encourage you to take a look at another proposal if you haven't:
15:12:57 #link https://review.opendev.org/#/c/736369/ (Create starter-kit:kubernetes-in-virt tag)
15:13:18 it needs a trivial rebase, but please share your thoughts when you get a chance
15:13:31 any other announcements?
15:14:33 #topic CI status
15:15:18 late last week, we had a breakage - and we haven't root-caused it yet
15:16:19 our voting LVM jobs - the legacy one that we run on manila's main branch, as well as the native zuulv3-style job - fail when run on some infra
15:16:44 #link https://zuul.opendev.org/t/openstack/builds?job_name=manila-tempest-plugin-lvm
15:17:12 or a better link:
15:17:18 #link https://zuul.opendev.org/t/openstack/builds?job_name=manila-tempest-plugin-lvm&job_name=manila-tempest-minimal-lvm-ipv6-only (LVM jobs)
15:18:07 the cause seems to be a reboot that occurs on the test node in the middle of running tests
15:18:29 +1
15:18:41 and it only happens with LVM?
15:19:13 yes, so far as i can tell
15:19:25 before that reboot occurs, a bunch of tests are run, and do pass - however, after the reboot, i don't know how, but testr_results.html reports all tests having failed
15:20:16 gouthamr: do we still think the issue tracks with rax nodes?
15:20:23 we've been playing with reducing the test concurrency, or modifying the backing file size, given that we do thick provisioning and may be running out of space
15:20:27 i do, but am checking
15:20:45 tbarron: yes, i have seen the job run on other clouds and pass, or fail one or more scenario tests
15:21:05 i don't think it's a resource issue (and that was my main guess b/c rax nodes have less disk space on root than e.g. vexxhost)
15:21:07 tbarron: apart from rax, i haven't seen this reboot behavior occurring elsewhere
15:21:20 https://zuul.opendev.org/t/openstack/build/51980ae05c024e35b73614a9db4925bd/log/logs/screen-m-shr.txt#8814-8829
15:21:31 that's on rax, right before a reboot
15:21:45 instrumented for free memory and disk
15:23:15 the issue is complicated by unreliable syslog and journal. Both can lose info before the reboot, and syslog usually has limited info anyway.
15:23:26 Though I found one syslog with perhaps a clue:
15:23:41 https://zuul.opendev.org/t/openstack/build/1bc61a82ace14217807f28c5b8c9debe/log/controller/logs/syslog.txt#8463
15:24:34 but that's with the same kernel that is running on vexxhost
15:24:44 where we don't see an issue
15:25:13 this particular GPF seems to pertain to packet filtering, and the packets in question are ipv6
15:27:05 good find, tbarron - we do run ipv6 tests even with cephfs-nfs
15:27:34 gouthamr: true, and i haven't seen this issue there
15:28:22 okay, let's make a patch disabling ipv6 data path tests, and see if that changes anything
15:28:52 it would help isolate the issue if anything, so we can follow up
15:28:58 +1
15:29:38 the lvm job is updating kernel nfs exports whereas the cephfs-nfs job is using ganesha, but i'm just speculating again
15:30:34 cool, we'll continue debugging then, and take this to #openstack-manila
15:31:09 but in general, folks - do not recheck on failures in this job..
15:31:43 tbarron gouthamr great job investigating that, hope to find some time to join in on debugging
15:31:54 thank you dviroel
15:32:02 anything else regarding $topic?
15:32:03 tbarron gouthamr ++
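[Editor's note: the "patch disabling ipv6 data path tests" agreed on above would plausibly flip the manila-tempest-plugin option that gates those tests. A minimal sketch follows, assuming the plugin's run_ipv6_tests option in tempest's [share] config group and devstack's iniset helper; both names should be verified against the actual repos.]

    # Hedged sketch: disable ipv6 data-path tests in the tempest config that
    # the LVM jobs consume. The [share] run_ipv6_tests option is an assumption
    # based on manila-tempest-plugin's config; adjust if the name differs.
    iniset /opt/stack/tempest/etc/tempest.conf share run_ipv6_tests False

    # Then re-run just the scenario tests to check whether the mid-run
    # reboot on rax nodes still occurs:
    cd /opt/stack/tempest
    tempest run --regex manila_tempest_tests.tests.scenario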
15:33:07 #topic Stable branch status
15:33:28 we meant to talk about this one during the PTG
15:33:54 i wanted to run this by the group here, before opening the discussion on the mailing list
15:34:10 we have ETOOMANY branches that are in Extended Maintenance
15:34:23 ocata through rocky
15:35:15 the last time anyone proposed a patch to ocata was a year ago
15:35:20 #link https://review.opendev.org/#/q/project:openstack/manila+branch:stable/ocata
15:36:05 pike's seen some activity this year, but the last patch that merged was three months ago:
15:36:07 #link https://review.opendev.org/#/q/project:openstack/manila+branch:stable/pike
15:36:58 i'm not sure any tempest jobs work on these branches.. and it's not clear if anyone benefits from us keeping these alive
15:37:54 so should we just pull the trigger on EOL-ing these branches?
15:38:22 that means we'll be creating a tag off their HEAD, and deleting the branches
15:38:50 it'll save gate resources, because bitrot jobs are still running against these branches
15:39:06 +1
15:39:48 +1
15:40:00 +1
15:40:47 cool, does the silence mean others disagree, or don't care either way? :)
15:41:07 I don't care ;)
15:41:14 +1
15:41:27 +1
15:41:43 +1
15:41:53 haha, that was my thought too, i don't feel like anyone is relying on us keeping these branches alive
15:42:13 #action gouthamr will send an email asking to EOL the stable/pike and stable/ocata branches
15:43:20 we still have stable/queens and stable/rocky - for the moment, wearing my red fedora, i want to help keep stable/queens alive a bit longer, to allow backports that can make their way into rdo and rhosp
15:44:18 +1
15:44:29 but it's not a priority to get the gate up and running against these two branches - devstack's failing atm, i'll take a look when we're done fixing all the things on the main branch
15:45:28 however, we should actively keep EOL'ing older branches so we don't give the impression that they are meant to work for these extended periods while all development is focused ahead
15:46:06 anyway, that's all i had to say about that..
15:46:10 any other thoughts?
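[Editor's note: mechanically, the tag-and-delete flow described above amounts to something like the sketch below. In practice, OpenStack EOL tags are proposed through the openstack/releases repository and branch deletion is handled by the release/infra teams, so treat this as an illustration only.]

    # Illustration of EOL-ing a branch: tag its HEAD, then retire the branch.
    git fetch origin
    git tag ocata-eol origin/stable/ocata     # create the tag off the branch HEAD
    git push origin ocata-eol                 # publish the EOL tag
    git push origin --delete stable/ocata     # delete the branch afterwards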
15:46:49 #topic Specifications
15:47:01 \o/
15:47:15 i see four proposed for this release, but two actively being worked on..
15:47:31 #link https://review.opendev.org/#/c/735970/ (Share server migration)
15:47:43 #link https://review.opendev.org/#/c/739136/ (Add lite spec for share server limits)
15:48:05 #link https://review.opendev.org/#/c/729292/ (Improve security service update --- deferred to Wallaby?)
15:48:27 #link https://review.opendev.org/#/c/710166/ (Add share group replica)
15:49:51 do you think we can actively review these this week
15:49:59 and bring up any concerns at the next meeting?
15:50:05 dviroel: carloss thanks for #1
15:50:18 keeping focus on the current release, and
15:50:37 #2 discussing issues publicly, upstream
15:51:05 gouthamr: that sounds like a good plan to me
15:51:22 :)
15:51:33 Can we think of people who should review besides the usual suspects?
15:51:35 I agree. Will take a look at the others I haven't reviewed yet
15:51:55 we can take longer on this one:
15:51:56 would be good to have some feedback on #3 too, but I will ask for more eyes on #1 and #2 at this moment
15:51:56 #link https://review.opendev.org/#/c/729292/ (Improve security service update --- deferred to Wallaby?)
15:52:07 yeah..
15:52:19 anybody particularly impacted by share server migration or share server limits?
15:52:39 I will have a look, too
15:52:49 umm, him ^^
15:52:56 haha
15:52:58 :) ty carthaca
15:53:19 Naturally, because I requested those features
15:53:42 carloss: i'd reach out to Jon Vondra for the server limits spec, because he had some thoughts
15:54:03 oh, good
15:54:17 will tag him in the patch as well
15:54:22 thanks..
15:54:45 okay, anything else about specs?
15:55:15 we've ~6 minutes, maybe we can talk about one or two bugs, vhari?
15:55:24 #topic Bugs (vhari)
15:55:27 sure ..
15:55:39 thanks, and sorry - it's been that sort of week :)
15:55:40 gouthamr, may have time to consider closing this
15:55:44 #link https://bugs.launchpad.net/manila/+bug/1838936
15:55:44 Launchpad bug 1838936 in OpenStack Shared File Systems Service (Manila) "manila-share not working with ceph mimic (13.2) nor ceph nautilus (14.2)" [Undecided,Confirmed]
15:55:54 ah yes!
15:56:34 ty vhari, i'll do that
15:56:46 gouthamr, gr8 ty
15:56:47 vkmc is out today, but we know what might have happened in their environments
15:56:56 i'll comment on and close that bug
15:56:58 that's a wrap for bugs, out of time ..
15:57:07 thank you!
15:57:12 #topic Open Discussion
15:58:24 looks like we have none, and can save a whole minute :)
15:58:49 thank you all for joining - let's get to reviewing and debugging these gate issues on #openstack-manila
15:59:01 see you here next week, stay safe!
15:59:03 thanks!!!
15:59:07 thanks!
15:59:09 #endmeeting