15:00:32 <bswartz> #startmeeting manila
15:00:33 <openstack> Meeting started Thu Jan 25 15:00:32 2018 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:37 <openstack> The meeting name has been set to 'manila'
15:00:42 <amito> o/ hello
15:00:45 <bswartz> hello all
15:00:48 <dustins> \o
15:00:55 <gouthamr> hello o/
15:00:57 <ganso> hello
15:01:06 <zhongjun> Hi
15:01:06 <xyang> hi
15:01:23 <tbarron> hi
15:02:37 <bswartz> hey guys, sorry I'm dropping a -2 on some patches
15:03:25 <bswartz> Okay!
15:03:26 <tbarron> thud
15:03:34 <bswartz> #topic announcements
15:03:47 <bswartz> Today is the last day for features to merge
15:03:52 <jungleboyj> @!
15:03:53 <bswartz> I need to push tags today
15:04:12 <bswartz> jungleboyj: your bot is awol
15:04:22 * jungleboyj is sad
15:04:53 <bswartz> I don't think we have any other announcements
15:05:11 <bswartz> Feature freeze is usually the biggest deadline
15:05:24 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:05:31 <bswartz> #topic Feature Freeze / Queens milestone-3
15:05:46 <bswartz> So we've been trying to get the last feature patches merged this week
15:06:01 <bswartz> There's been good progress, but this morning I still see some unmerged patches, so let's go through them
15:06:47 <bswartz> The filtering for the share type API won't make it, so I've already blocked those 2 patches
15:07:29 <bswartz> We have this one waiting for a workflow: https://review.openstack.org/#/c/527134/
15:08:02 * tbarron is timing out connecting to gerrit atm
15:08:09 <gouthamr> was testing that one, need xyang/tbarron to weigh in
15:08:16 <gouthamr> or zhongjun
15:08:35 <bswartz> gouthamr: found any issues worth discussing?
15:08:47 <gouthamr> nope
15:09:30 <bswartz> https://review.openstack.org/#/c/528667/
15:09:52 <bswartz> ^ This one looks like it's suffering infra problems
15:09:53 <gouthamr> ouch, rechecking
15:10:19 <bswartz> https://review.openstack.org/#/c/535701/
15:10:29 <bswartz> ^ This is a 5 line change
15:10:36 <efried> No rechecking
15:10:44 <bswartz> Both of its dependencies are merged
15:10:48 <efried> Wait for next infra broadcast
15:10:52 <bswartz> efried: ?
15:11:13 <tbarron> bswartz: 535701 is ok, I was just waiting for the dependencies
15:11:14 <efried> NOTICE: We're currently experiencing issues with the logs.openstack.org server which will result in POST_FAILURE for jobs, please stand by and don't needlessly recheck jobs while we troubleshoot the problem.
15:11:15 <bswartz> efried: is the recheck machinery broken?
15:11:47 * bswartz smh
15:11:52 <gouthamr> https://review.openstack.org/#/c/535701/ is failing a bunch of third party CIs
15:12:32 <tbarron> third party CIs are failing for another reason
15:12:37 <bswartz> ganso: can you speak to that?
15:12:44 <tbarron> including NetApp
15:13:13 <ganso> bswartz: erlon and I are diagnosing a CI failure due to library update
15:13:14 <gouthamr> looks like a broken package, at least for NTAP: python-pcre
15:13:19 <tbarron> python-pcre package issue
15:13:25 <ganso> bswartz: erlon says all CIs are failing
15:13:25 <tbarron> http://52.42.67.99/67/537867/1/check/dsvm-veritas-access-manila-driver/95ec234/logs/devstack-early.txt.gz#_2018-01-25_12_22_34_363
15:13:36 <tbarron> ganso: all 3rd party
15:13:39 * bswartz headdesk
15:13:41 <tbarron> ganso: not infra
15:14:12 <tbarron> infra jobs are working, ubuntu and centos, but ubuntu 3rd party has the issue
15:14:17 <erlon_> tbarron, yep, infra is not failing: https://review.openstack.org/#/c/534854/
15:14:28 <erlon_> but 90% of the third party CIs are
15:14:31 <bswartz> Okay, this is a perfect illustration of why it's better to get stuff merged on Monday or Tuesday, not Thursday
15:14:35 <erlon_> not sure why
15:14:44 <bswartz> Library problems seem to happen every release
15:14:58 <bswartz> Is it something we can get sorted out this morning?
15:15:24 <bswartz> Or will we have to accept that third party CI is broken until after FF?
15:15:26 <tbarron> https://review.openstack.org/#/c/537867/ is a DNM job that illustrates
15:15:46 <erlon_> bswartz, not sure, couldn't get help from #infra as they are focusing on the logserver problem
15:15:59 <tbarron> infra jobs are all passing, third party are all failing
15:16:18 <ganso> tbarron: not really passing, they have a POST_FAILURE issue now
15:16:22 <bswartz> #link https://pypi.python.org/pypi/python-pcre/0.6
15:16:23 <bswartz> this?
15:16:31 <gouthamr> infra has mirrors, they'll be affected later :)
15:16:41 <erlon_> ganso, they are going further at least :p
15:17:22 <tbarron> ganso: they passed in 537867 two hours ago, iow they don't have the package issue
15:17:42 <tbarron> gouthamr: ah, maybe just delayed
15:18:04 <bswartz> I don't understand what's changed
15:18:15 <bswartz> That package hasn't been modified since 2015 according to pypi
15:19:11 <erlon_> bswartz, I'm not getting the problem either; the bindep has been there for a long time: https://github.com/openstack/requirements/search?utf8=%E2%9C%93&q=pcre&type=
15:19:47 <erlon_> so, devstack or something else should know that it should be installed before trying to install the python lib
15:20:12 <bswartz> I'm confused
15:20:16 <tbarron> did we always have to compile it?
15:20:50 <bswartz> tbarron -- if you install from pip, I think the native parts get compiled as part of the install
15:20:51 <tbarron> looks like the dev pkg might be missing, the compile fails for lack of pcre.h
15:21:12 <erlon_> tbarron, not sure, maybe that's what is different now; now it needs to be compiled
15:21:14 <bswartz> If you install from apt/yum you get a distro-compiled binary
15:21:23 <tbarron> bswartz: ack, understood
15:22:00 <bswartz> It's possible the pcre-dev package moved some files and the python package is now broken
15:22:15 <bswartz> So there may be a python-pcre 0.7.1 very soon
15:22:19 <erlon_> bswartz, that will download the pcre lib only, the pip package still needs to compile
15:22:58 <bswartz> I'm okay with ignoring the 3rd party CIs until that issue is sorted
15:23:23 <bswartz> It sounds like the upstream gate might be about to get less friendly for the same reasons, so we should hurry up and merge what we can
15:24:16 <bswartz> Okay are there any other patches I missed?
15:24:24 <bswartz> For Q-3?
15:24:45 <bswartz> If not, I'm going to watch for the last few things to get through the gate and push tags as soon as they make it
15:25:18 <tbarron> looking at devstack-early log in a successful third party CI run from a couple weeks ago I see no mention of python-pcre
15:27:17 <bswartz> I know the netapp guys will dig in to try to fix the netapp CI at least
15:27:27 <bswartz> Please share any fixes you find on the channel
15:27:45 <bswartz> And lmk if you want my help on that CI issue
15:28:05 <bswartz> Okay I think that's it for feature freeze
15:28:18 <tpsilva> a quick workaround we found is to install libpcre3-dev on the nodes before stacking
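(As a rough sketch of tpsilva's workaround, assuming an Ubuntu-based CI node and a pre-stack hook that can run arbitrary Python, the PCRE development headers can be installed before devstack runs so the python-pcre sdist can find pcre.h and compile its C extension. The package name and the hook itself are assumptions, not part of any official CI setup.)

    import subprocess

    # Hypothetical pre-stack hook: install the PCRE development headers.
    # libpcre3-dev is the Ubuntu package name; a CentOS node would need
    # pcre-devel instead.
    subprocess.check_call(['sudo', 'apt-get', 'update'])
    subprocess.check_call(['sudo', 'apt-get', 'install', '-y', 'libpcre3-dev'])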
15:28:31 <bswartz> tpsilva: ty
15:29:01 <bswartz> #topic Let's Go Over New Bugs
15:29:40 <bswartz> Before we get into specific bugs, let me remind y'all that the focus between feature freeze and RC1 should be on finding and fixing bugs
15:30:00 <bswartz> There are some bugfix patches already waiting for reviews
15:30:23 <bswartz> I will be targeting bugs at the RC1 milestone in LP
15:30:57 <bswartz> Arguably, finding bugs is as important as fixing the known bugs, so I encourage people to test out some of the new features and try to break them
15:31:52 <bswartz> The RC1 target date for us is 2 weeks from now
15:32:26 <bswartz> As always, we'll cut the RC1 tag when the number of bugs drops to zero, whether that's early or late
15:32:53 <bswartz> But the closer we get to the target date, the more aggressively we'll untarget bugs -- our release managers don't like late RCs
15:33:00 <bswartz> :-)
15:33:08 <bswartz> dustins: do you have specific bugs for today?
15:33:16 <dustins> I do indeed
15:33:19 <dustins> #link https://bugs.launchpad.net/manila/+bug/1550258
15:33:20 <openstack> Launchpad bug 1550258 in Manila "Manila UI - Manage rule for CIFS share updates status from "error" to "active" for invalid users" [Low,New]
15:33:27 <bswartz> #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:34:04 <bswartz> Wow this is an old one
15:34:41 <bswartz> Why did I target this to manila-ui?
15:34:55 <gouthamr> it's marked invalid on manila-ui
15:34:59 <bswartz> It must have been a click error
15:35:19 <bswartz> In any case, is it HP3par specific?
15:35:33 <tbarron> there was a ui screenshot in the bug report but then it was realized that it's not a ui bug
15:35:42 <dustins> It looks like it manifested first in the UI but is caused by a Manila "core" thing
15:36:02 <dustins> But yeah, I think this may be specific to the 3PAR driver?
15:36:30 <tbarron> would it make sense to ask if the issue is still reproducible?
15:36:42 <bswartz> I'm not convinced there's any problem outside the 3par driver
15:36:48 <tbarron> or close saying re-open if it's still an issue
15:37:29 <tbarron> we don't have any active HP 3par folks here
15:37:31 <bswartz> The maintainer is Ravichandran Nudurumati <ravichandrann@hpe.com>
15:37:56 <bswartz> No known IRC nick for him
15:38:45 <bswartz> It's up to HPE whether they want to fix this bug or not -- I don't see what we can do without access to their hardware
15:39:17 <dustins> Yeah, I was kinda hoping that we had an HPE representative to provide some feedback
15:39:48 <bswartz> next?
15:39:59 <tbarron> what was the disposition?
15:40:06 <tbarron> assign or close?
15:40:23 <tbarron> we need to clear the backlog, not just talk about the bugs.
15:40:29 <bswartz> I think we leave it unassigned, low importance
15:40:34 <bswartz> We can mark it triaged
15:40:35 <tbarron> -1
15:41:04 <tbarron> ok, I'm just looking for ways to make some progress
15:41:31 <bswartz> I don't see how we can close it -- it's likely still an issue that users of the HPE driver should be aware of
15:41:34 * tbarron adds tags
15:42:07 <bswartz> Hopefully if someone uses the driver and finds the bug, they can go bother HPE about getting it fixed
15:42:29 <tbarron> tags: drivers, hpe-3par
15:42:34 <xyang> +1
15:42:58 <xyang> we do tags in cinder
15:43:51 <dustins> #link https://bugs.launchpad.net/manila/+bug/1700501
15:43:52 <openstack> Launchpad bug 1700501 in OpenStack Compute (nova) "Insecure rootwrap usage" [Undecided,Incomplete]
15:43:59 <dustins> This is the next one for us
15:44:35 <tbarron> note sdague: "this is too vague to be actionable"
15:44:58 <tbarron> there's a proposed cross-project initiative for privsep
15:45:00 <bswartz> Wow
15:45:37 <tbarron> we should talk at PTG about whether we should be migrating manila to privsep
15:45:41 <bswartz> Yeah rootwrap was always a pretty hacky approach to fixing security issues
15:45:58 <bswartz> tbarron: Can you add that to the ptg etherpad if its not there already?
15:46:06 <tbarron> bswartz: yes
15:46:08 <bswartz> I don't know the details of privsep
15:46:36 <tbarron> we should talk about what it is and whether / when we want to do it
15:46:37 <bswartz> It's important to note that the security issues addressed by rootwrap are for the most part theoretical issues
15:47:14 <tbarron> yeah, I'm not saying that we shouldn't fix specific rootwrap issues (like too broad commands specified, etc.) just
15:47:19 <tbarron> that this bug is open ended
15:47:29 <bswartz> Rootwrap exists to reduce the chances of an exploit in the API services from turning into a more general exploit of the machine the services run on
15:48:23 <bswartz> Our first line of defense is the API services themselves, and while python is bad at many things, it's pretty good at preventing over-the-network exploits
15:50:20 <bswartz> I'm reading about privsep -- is anyone already familiar with it?
15:50:45 <tbarron> kinda sorta, need to refresh
15:51:07 <tbarron> jungleboyj: cinder has some experience attempting privsep migration, not all positive, right?
15:51:40 <bswartz> From what I see, it's unclear that privsep helps at all
15:51:52 * tbarron will try to see where other projects are on this
15:52:10 <jungleboyj> tbarron:  It was complicated but we got there.
15:53:00 <jungleboyj> tbarron:  Some of it was learning curve for people on how to use it.
15:53:27 <dustins> jungleboyj: Are there good docs on how to transition to privsep?
15:54:04 <jungleboyj> dustins:  Not sure.  I think hemna did a lot of the work there.  Could check with him.
15:54:20 <tbarron> one thing privsep does is encourage use of libraries rather than shell commands to achieve what's needed
15:54:29 <bswartz> I'd like someone to explain how privsep is any better than a properly locked-down rootwrap config
15:54:35 <dustins> Not a bad idea, if it's something that other projects are doing we should leverage their experience and expertise
15:54:53 <bswartz> tbarron: I can see that working for some things
15:55:10 <bswartz> In particular stuff like the "chmod 777 /etc/shadow" exploit listed in the bug
15:55:21 <tbarron> right
15:55:29 <bswartz> But so much of what manila does is invoking binaries that will never have python-library equivalents
15:56:17 <bswartz> Then again, much of that is driver specific...
15:56:42 <tbarron> I've got an AI to see where other projects are on quota replacement before PTG, let me see what I can find out about privsep as well.
15:56:43 <bswartz> Perhaps we could modularize the rootwrap and only enable parts of it related to the specific drivers in use
15:56:48 <tbarron> Did it help?
15:56:52 <tbarron> etc.
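(For context on the discussion above, here is a minimal sketch of what an oslo.privsep entrypoint looks like; the context name, capability list, and the set_export_mode helper are illustrative only, not Manila code. The idea is that the privileged daemon exposes only narrow Python functions, instead of a rootwrap filter allowing a whole command like chmod with arbitrary arguments.)

    import os

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Hypothetical privileged context; Manila would define its own and
    # wire up the matching [manila_sys_admin] config section.
    sys_admin_pctxt = priv_context.PrivContext(
        'manila',
        cfg_section='manila_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE],
    )


    @sys_admin_pctxt.entrypoint
    def set_export_mode(path, mode):
        # Runs in the privileged daemon: callers can only pass a path and
        # a mode to this one function, rather than invoking an arbitrary
        # chmod command through a rootwrap CommandFilter.
        os.chmod(path, mode)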
15:56:56 <bswartz> okay
15:57:00 <bswartz> We're low on time
15:57:05 <bswartz> dustins: anything else?
15:57:27 <tbarron> mention the default share type one quick
15:57:28 <dustins> #link https://bugs.launchpad.net/manila/+bug/1743472
15:57:29 <openstack> Launchpad bug 1743472 in Manila "Create a default share type for tempest tests" [Undecided,New] - Assigned to Victoria Martinez de la Cruz (vkmc)
15:57:39 <vkmc> o/
15:57:47 <dustins> vkmc: Take it away!
15:57:47 <vkmc> about to submit a fix for that, I'm running test locally
15:57:55 <vkmc> s/test/tempest tests/g
15:57:57 <bswartz> We discussed this one already, didn't we?
15:58:12 <tbarron> bswartz: just noting that there's some action on that front ...
15:58:17 <bswartz> Or am I remembering a conversation in another venue?
15:58:23 <vkmc> plus, we need to add the dependency for https://review.openstack.org/#/c/532713/... it's almost a duplicate
15:58:31 <dustins> bswartz: A related one, https://bugs.launchpad.net/manila/+bug/1743472
15:58:32 <openstack> Launchpad bug 1743472 in Manila "Create a default share type for tempest tests" [Undecided,New] - Assigned to Victoria Martinez de la Cruz (vkmc)
15:58:36 <bswartz> Okay
15:58:48 <vkmc> bswartz, yeah, about two weeks ago
15:59:09 <bswartz> Okay let's make sure to get those patches reviewed before RC1
15:59:19 <bswartz> Thanks vkmc
15:59:22 <vkmc> np
15:59:37 <bswartz> Feel free to ping me for a review when it's ready
15:59:45 <bswartz> That's all we have time for today
15:59:57 * bswartz prays the last few patches merge soon
16:00:04 <bswartz> Thanks all
16:00:04 <vkmc> will do, thx
16:00:13 <bswartz> #endmeeting