15:01:01 <tbarron> #startmeeting manila
15:01:05 <openstack> Meeting started Thu Jan 10 15:01:01 2019 UTC and is due to finish in 60 minutes.  The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:08 <openstack> The meeting name has been set to 'manila'
15:01:50 <gouthamr> o/
15:02:13 <ganso> hello
15:02:18 <carlos_silva> hi
15:02:22 <jgrosso> hello
15:02:39 <tbarron> ping xyang
15:02:42 <xyang> hi
15:02:55 <tbarron> ping bswartz
15:03:03 <tbarron> ping erlon
15:03:04 <bswartz> .o/
15:03:09 <tbarron> ping tpsilva
15:03:18 <tbarron> ping amito
15:03:31 <bswartz> Sorry I drifted to other channel while waiting for the meeting to start
15:03:56 <bswartz> IRC is nearly as large and as dark as reddit
15:04:15 <tbarron> bswartz: i was dealing with a mutual customer and started a bit late
15:04:22 <tbarron> hi all
15:04:25 <tbarron> !!
15:04:26 <openstack> tbarron: Error: "!" is not a valid command.
15:04:47 <tbarron> Agenda: https://wiki.openstack.org/wiki/Manila/Meetings
15:05:07 <tbarron> #topic Announcements
15:05:18 <tbarron> It's milestone 2 already :)
15:05:52 <tbarron> We'll have some releases coming out for the libraries, just trying to fit in a couple more changes.
15:05:56 <tbarron> first.
15:06:22 <tbarron> It was new driver deadline but we didn't have any new driver submissions this cycle
15:06:51 <toabctl> hi
15:06:54 * tbarron thinks it's time for some quality improvements on the old drivers but won't digress now any further ...
15:07:21 * tbarron notices toabctl being a busy guy with his other projects :)
15:07:50 <tbarron> oh, looks like I didn't save my latest agenda changes, sec
15:07:53 <amito> (Hey)
15:08:19 <tbarron> good to see you amito
15:09:55 <amito> you too
15:10:30 <tbarron> Anyone else have any announcements?
15:11:38 <tbarron> #topic python3 jobs
15:12:09 <tbarron> it turns out that when you turn on python3 in functional jobs the canonical way
15:12:21 <tbarron> that's insufficient for manila's "legacy" jobs
15:12:38 <tbarron> so we were testing less functional stuff under py3 than I thought.
15:12:59 <tbarron> But we now have the dummy driver job running under python3
15:13:16 <tbarron> #link https://review.openstack.org/#/c/629143/
15:13:30 <tbarron> hopefully we'll soon add the lvm driver job
15:13:52 <tbarron> #link https://review.openstack.org/#/c/623061/
15:14:01 <gouthamr> that's great to get done by M2 :)
15:14:05 <tbarron> actually running under py3 turned up some bugs
15:14:16 <tbarron> that weren't hard to fix
15:14:32 <tbarron> so we'll want to get other functional jobs converted so that we get more coverage
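(The py3 bugs mentioned above are most often str/bytes mismatches that only surface when the code actually runs under python3. A minimal illustrative sketch of the pattern and its fix, assuming a typical case; this is not taken from the actual manila patches:)

```python
import subprocess


def run_cmd(cmd):
    """Run a command and return its stdout as text.

    Under py2, check_output() returned str; under py3 it returns
    bytes, so py2-era code that compares or concatenates the result
    with str breaks. The py3-safe fix is an explicit decode.
    """
    out = subprocess.check_output(cmd)
    return out.decode("utf-8").strip()
```

Unit tests written under py2 rarely catch this, which is why converting the functional jobs to py3 gives the extra coverage.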
15:15:32 <tbarron> manila UI patch to run the dsvm job under py3 is ready
15:15:40 <tbarron> #link https://review.openstack.org/#/c/629538/
15:15:49 <tbarron> ^^ Please review that one
15:16:11 <tbarron> I'd like to include it in M2 deliverable
15:16:24 <tbarron> all manila-ui functional tests will run under py3
15:16:37 <tbarron> python-manilaclient patch
15:16:49 <tbarron> #link https://review.openstack.org/#/c/629536/
15:17:03 <tbarron> is turning up some py3 errors that we need to fix
15:17:33 <tbarron> doesn't look hard, so maybe we'll have it ready today and can include it with M2 deliverable for that lib
15:17:33 <ganso> tbarron: IIRC manila-ui uses manilaclient
15:17:57 <gouthamr> hey i had a question when looking at that patch
15:18:01 <tbarron> ganso: but the tests must not exercise the code that fails in the client
15:18:16 <gouthamr> is the goal of the "functional" test job just to set up devstack/manila-ui?
15:18:30 * tbarron doesn't actually know what these tests do :)
15:18:39 <tbarron> gouthamr: that's what it seems
15:18:48 <ganso> tbarron: I see that the errors are in the python-manilaclient functional tests, not the client code itself
15:19:00 <tbarron> gouthamr: it doesn't run the dummy job even though it loads it up
15:19:15 <tbarron> if valeriy were here we could ask the goal of these tests
15:19:34 <ganso> gouthamr: that functional job's current definition is to set up devstack and invoke manila CLI against it
15:19:35 <tbarron> but some of that will remain a mystery at this juncture
15:19:53 <tbarron> ganso: where is that defined?  I see that that is what it does :)
15:20:16 * tbarron will resist further sniping
15:20:23 <ganso> tbarron: well that's what it is supposed to do AFAIK
15:20:32 <tbarron> ganso: kk
15:20:54 <tbarron> ganso: i've just been a bit puzzled  by what some of the tests aim to do
15:21:12 <gouthamr> o.O at the python-manilaclient failures
15:21:21 <tbarron> ganso: but atm just am trying to see if they can do the same thing when running under py3
15:21:56 <gouthamr> attributable to the lack of unit tests
15:21:58 <tbarron> So as we get these ready today or tomorrow expect to be pinged for reviews.
15:22:00 <gouthamr> #LINK https://bugs.launchpad.net/python-manilaclient/+bug/1364800
15:22:00 <openstack> Launchpad bug 1364800 in python-manilaclient "lack of unittests" [Low,In progress] - Assigned to Alexander Pugachev (a-pugachev-wd)
15:22:06 <tbarron> Thanks for the help thus far.
15:22:27 <tbarron> Anything else on py3 goals?
15:22:58 <tbarron> Anyone working on running their third-party CI under py3?
15:23:19 <tbarron> crickets
15:23:29 <tbarron> So driver folks, vendors, etc.
15:23:34 <bswartz> We run our kubernetes 3rd party CI under py3....
15:23:54 <bswartz> Actually wait, we don't yet
15:24:03 <tbarron> This isn't just a game; distros are going to be running manila with your code under py3 very soon now.
15:24:10 <bswartz> >_<
15:24:18 <bswartz> I'm pushing people inside netapp to move to py3
15:24:27 <tbarron> If it breaks in my distro I will route the BZs to you!!
15:24:28 <bswartz> py2 is literally deprecated and has been for years
15:24:49 <tbarron> bswartz: +++
15:25:04 * tbarron gets back off his stump for the moment
15:25:29 <tbarron> #topic gate issues
15:25:49 <tbarron> we talked last week about ssh issues  and stuff we've done to mitigate them
15:26:14 <tbarron> and mentioned that migration tests seem to run a long time and fail a lot even with those fixes
15:26:40 <tbarron> well as it turns out tempest runs its own ssh connections, e.g. in host assisted migration
15:27:05 <tbarron> so fixes for paramiko in manila-share have no effect on the tempest ssh stuff
15:27:18 <tbarron> ganso: does that fit with your understanding?  ^^
15:27:27 <ganso> tbarron: AFAIK tempest runs its own SSH in all scenario tests, not only migration
15:27:38 <tbarron> ganso: kk, that's probably right
15:28:02 <tbarron> anyways I see lots of failures and retries and dumping of console logs from this
15:28:11 <tbarron> so that's maybe the next area of inquiry
15:28:26 <ganso> tbarron: but we see a lot of API migration test failures, which don't do SSHs
15:28:54 <tbarron> ganso: this migration onion's layers make me cry
15:29:03 <ganso> tbarron: lol
15:29:24 <bswartz> where does ssh come in with migration tests?
15:29:44 <tbarron> ganso: ^^?
15:30:00 <ganso> bswartz: any scenario test SSHs to instances to create files. It is part of the test
15:30:20 <bswartz> Ah, the test client VMs
15:30:24 <ganso> bswartz: we have migration API and scenario tests. The API tests don't ssh
15:30:31 <bswartz> Right, we need some actual data to migrate
15:30:51 <bswartz> And tempest manages all that for us
15:31:25 <tbarron> ganso: tempest appears to use the admin net, right?
15:31:27 <ganso> tbarron: but, on my testing, only the share servers have SSH problems, instances don't
15:32:01 <tbarron> ganso: do the instances use the manila service image?
15:32:01 <ganso> tbarron: so still, scenario tests shouldn't show any problem aside from the known ones that we see when manila-share tries to ssh into generic share servers
15:32:12 <ganso> tbarron: they do
15:32:49 <ganso> tbarron: yes, tempest uses the admin net configured with the generic driver job
15:32:53 <tbarron> well we need to fix the manila-image-elements job for a start, it no longer publishes to tarballs.o.o. when a fix merges
15:33:08 <tbarron> we merged a fix to ssh by key w/o password
15:33:24 <tbarron> and inspection suggests that some of the tempest client usages want that
15:33:44 <tbarron> so maybe the elements publish fix is the first thing here
15:33:58 <tbarron> probably it was broken when we moved jobs in-tree
15:35:09 <tbarron> Anyone else have comments on gate issues?
15:35:20 <ganso> tbarron: during my testing, regular instances created using manila-service-image didn't have the ssh problem
15:35:40 <ganso> tbarron: I use manila-service-images everywhere here, because it can mount NFS, while cirros cannot
15:35:56 <ganso> tbarron: it is only when the generic driver's share server uses it that it has problems
15:36:28 <tbarron> ganso: interesting. tempest connects to both svms and regular instances over the admin net?
15:36:28 <ganso> tbarron: and it is only through the service port. When using "connect_to_tenant_network"=True, I can connect to the share server just fine
15:36:32 <bswartz> What image are the test VMs running?
15:36:50 <bswartz> Is it a tempest image or something we provide?
15:36:58 <ganso> tbarron: no, "in my testing" I meant manually. Tempest connects to regular instances through floating IP
15:37:28 <tbarron> ganso: and  to vms over admin net ....
15:37:34 <tbarron> to svms
15:37:38 <ganso> tbarron: so, I don't believe the manila-service-image has any problem. I believe it is a neutron or network problem related to the neutron port
15:37:46 <tbarron> so that's a clue
15:38:17 <tbarron> ganso: I thought I was seeing issues where it couldn't do key auth and there was no p/w
15:38:36 <tbarron> ganso: do you supply manila/manila when you manually connect?
15:38:54 <ganso> tbarron: there is likely to be random issues there as well, as we've seen the generic driver jobs failing because of that in the past, but not very often
15:39:00 <ganso> tbarron: I always use user+pass
15:39:12 <tbarron> ganso: and I think tempest client ssh is not
15:39:23 <ganso> tbarron: that I am not sure
15:39:36 <tbarron> ganso: but we can pursue this out of meeting after I check that assertion more
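(The key-vs-password distinction being debated above can be made concrete. A hedged sketch of a helper that assembles connection arguments in the style of paramiko's `SSHClient.connect()`, preferring key auth when a key is available; the helper name and shape are hypothetical for illustration, not manila's actual ssh utilities:)

```python
def build_ssh_kwargs(host, user, key_path=None, password=None):
    """Choose ssh auth arguments: key auth when a key is supplied
    (matching the 'ssh by key w/o password' fix discussed above),
    otherwise user+password (as used in the manual testing above).
    Hypothetical helper; manila's real ssh code differs.
    """
    kwargs = {"hostname": host, "username": user}
    if key_path:
        # Key-based auth: no password is sent at all.
        kwargs["key_filename"] = key_path
    elif password:
        # Password fallback, e.g. the image's built-in credentials.
        kwargs["password"] = password
    else:
        raise ValueError("need either a key or a password")
    return kwargs
```

If tempest's client attempts key auth while the service image only has a password set (or vice versa), that mismatch alone would produce the observed failures.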
15:40:00 <tbarron> #topic bugs
15:40:24 <tbarron> We have this etherpad that dustin maintained:
15:40:33 <tbarron> #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:40:34 <bswartz> If nobody else is having these kind of tempest issues I would be suspicious of the image we're using
15:40:55 <tbarron> that still has lots of low hanging fruit bugs for people to pick up :)
15:40:57 <ganso> bswartz: the issue in question is unrelated to tempest
15:41:14 <bswartz> Ok
15:41:22 <tbarron> and jgrosso has been working with gouthamr to get up to speed on bug czar role
15:41:35 <tbarron> ganso: i'm not sure I believe that, but later ...
15:42:04 <gouthamr> ack, jgrosso has begun looking at our bug backlog
15:42:09 <jgrosso> day 1 :)
15:42:17 <gouthamr> told him we need it to be in single digits by next week though :P
15:42:38 <jgrosso> :)
15:42:47 <tbarron> gouthamr: as you know we set the expectation with jgrosso that he'll be a true czar
15:42:58 <tbarron> that is, he will bring us peasants work to do
15:43:27 <tbarron> jgrosso has years of experience sorting bugs and corralling developers to fix them, so
15:43:48 <tbarron> I am very optimistic that he'll help us get a more systematic approach to
15:43:59 <tbarron> working down our bug backlog.
15:44:08 <jgrosso> I will make it so
15:44:10 <jgrosso> :)
15:44:27 <tbarron> he's a scrum master too so watch out
15:45:10 <tbarron> Any particular bugs that you guys want to talk about today?  Or should we table these for a future meeting?
15:46:30 <gouthamr> i got one
15:46:33 <gouthamr> https://bugs.launchpad.net/manila/+bug/1801763 - "public" flag should be controllable via policy
15:46:33 <openstack> Launchpad bug 1801763 in Manila "public flag should be controllable via policy" [Medium,Confirmed]
15:47:07 <gouthamr> i don't have all the context of public shares, but its given deployers some trouble
15:47:59 <gouthamr> the bug calls out two reasons why they would like to turn off the ability to use "public" shares
15:48:43 <gouthamr> so if i can get a history lesson, probably i can own this bug and fix it per our/end-users' expectations
15:49:13 <tbarron> gouthamr: so if a share is public everyone has to see it when they list shares, whether they want to or not, right?
15:49:26 <gouthamr> tbarron: yes
15:49:55 <tbarron> gouthamr: that seems quite undesirable
15:49:58 <ganso> gouthamr: so this is not a policy issue
15:50:04 <gouthamr> mind you they can't mount it, unless the user that owns it gives them access
15:50:37 <ganso> gouthamr: in my understanding, a policy would control whether you have access to an API that can create or display public shares
15:50:41 <tbarron> gouthamr: right, you can make it accessible to everyone (given the right networking) quite apart from it's advertisement
15:50:45 <gouthamr> in a DHSS=True world, other tenants won't be able to connect to it because tenant networks are isolated
15:50:51 <tbarron> its
15:51:03 <ganso> gouthamr: but our API is manila index, how could you control its behavior through policy?
15:51:25 <tbarron> gouthamr: well one can put tenants on provider networks, etc. if one really wants to do that kind of thing
15:51:57 <tbarron> gouthamr: maybe double nics on tenant vms, etc
15:52:03 <bswartz> gouthamr: you can tie together tenant networks with routers
15:52:11 <tbarron> bswartz: ++
15:52:16 <gouthamr> ganso: not my opinion, but Maurice Schrieber (OP) thinks disallowing 'creating a public share' through policy will solve the issue of listing them as well
15:52:30 <tbarron> but this shouldn't be a capability that just anyone can have
15:52:38 <gouthamr> bswartz: would the administrator have to do that?
15:52:45 <tbarron> I shouldn't be able to spam you with my share listings ...
15:52:48 <bswartz> The admin would need to create the shared router
15:53:00 <bswartz> But tenants could in theory plug into that router themselves
15:53:45 <tbarron> bswartz: so i think i agree with the premise of this bug, default policy should not allow regular users to set shares visibility to public
15:53:57 <ganso> I think the major concern here is the spam situation?
15:54:25 <tbarron> ganso: imagine on horizon seeing tons of another project's shares while trying to manage one's own
15:54:28 <ganso> like, Maurice doesn't want users to see shares that they don't care about and can't access anyway
15:54:36 <ganso> tbarron: exactly
15:55:49 <tbarron> gouthamr: so you're going to "fix" this one :)
15:56:33 <ganso> I see 2 possible solutions: 1) create a toggle for whether public shares can be created by admin only. 2) create a toggle to not see public shares by default when you list shares (something similar to "manila list --all-tenants", we could do "manila list --public")
15:56:42 <gouthamr> i have a similar downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=1326279
15:56:42 <openstack> bugzilla.redhat.com bug 1326279 in openstack-manila-ui "[RFE] Add Ability to Disable Manila Public Share Types" [Medium,Closed: errata] - Assigned to vimartin
15:56:59 <gouthamr> ignore topic, says "Share Types", but means Shares
15:57:58 <gouthamr> tbarron: yes, i can attempt to, but i don't have a solution myself
15:58:10 <gouthamr> ganso: i like both solutions
15:58:47 <tbarron> gouthamr: they aren't incompatible with one another
15:59:21 <gouthamr> they are though... we can turn off listing public shares by default on the clients (manilaclient, manila-ui)
15:59:43 <gouthamr> and we can create a policy that can be toggled to admin-only
16:00:08 <gouthamr> and actually switch it to admin-only in Train or something :)
16:00:12 <tbarron> one has to do with creating public (set by admin), the other with filtering out public.
16:00:25 <tbarron> don't see why both couldn't be done.
16:00:34 <gouthamr> ohhh
16:00:40 <gouthamr> you said "aren't incompatible"
16:00:45 <gouthamr> and i need coffee
16:00:46 * bswartz agrees
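(The two toggles proposed above are independent and can be sketched separately. A minimal pure-python illustration with hypothetical stand-in names; the real implementation would go through oslo.policy and the manila API, not functions like these:)

```python
def can_set_public(is_admin, public_shares_admin_only=True):
    """Toggle 1: restrict setting a share's 'public' flag to admin.

    Stand-in for a policy rule; defaulting to admin-only reflects
    the proposed eventual default.
    """
    return is_admin or not public_shares_admin_only


def list_shares(shares, project_id, include_public=False):
    """Toggle 2: hide other projects' public shares unless asked,
    akin to a hypothetical 'manila list --public' flag.
    """
    return [s for s in shares
            if s["project_id"] == project_id
            or (include_public and s["is_public"])]
```

Toggle 1 stops new spam at the source; toggle 2 addresses the listing/horizon clutter for public shares that already exist, which is why the two fixes complement rather than exclude each other.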
16:00:47 <tbarron> k, we're out of time ....
16:00:56 <tbarron> Thanks everyone!!!
16:00:58 * bswartz hands gouthamr some coffee and a large trout
16:01:02 <gouthamr> :D
16:01:07 <jgrosso> :)
16:01:08 <tbarron> very seattle
16:01:13 <tbarron> #endmeeting