15:01:38 <bswartz> #startmeeting manila
15:01:39 <openstack> Meeting started Thu Feb 15 15:01:38 2018 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:41 <xyang> hi
15:01:42 <openstack> The meeting name has been set to 'manila'
15:01:43 <bswartz> hello all
15:01:49 <vkmc> o/
15:01:49 <dustins> Hello again
15:01:52 <dustins> \o
15:01:56 <tbarron> Hi
15:02:00 <tpsilva> hey
15:02:03 <ganso> hello
15:02:20 <bswartz> courtesy ping: toabctl markstur vponomaryov cknight
15:02:34 <bswartz> I know gouthamr is still honeymooning, and zhongjun is out for Chinese New Year
15:03:04 <tbarron> vkmc is here even though it's Carnaval, or hangover from same
15:03:21 <vkmc> hard to get back from a long weekend
15:03:25 <vkmc> >.<
15:03:28 <tbarron> but it's not the middle of the night
15:03:32 <bswartz> Too bad we don't have carnaval in the states
15:03:48 <vkmc> you should have it... it's some sort of Mardi Gras I think
15:03:53 <toabctl> hi
15:04:01 <tbarron> nawlins is special
15:04:26 <xyang> vkmc: how often do you have Carnaval?  once a year?
15:04:28 <bswartz> #topic announcements
15:04:43 <bswartz> Okay so we still have 2 weeks until release
15:05:01 <vkmc> xyang, yes!
15:05:10 <bswartz> Right now we should be bug hunting, fixing any release-blocking bugs, and fixing up docs for Queens
15:05:26 <bswartz> PTL elections are over, and tbarron is now officially installed
15:05:31 <bswartz> Huzzah!
15:05:33 <vkmc> xyang, 40 days before easter... we go nuts... I think it's a latin american thing
15:05:43 <xyang> :)
15:05:58 <dustins> tbarron: congratulations!
15:06:05 <vkmc> tbarron, congrats!
15:06:14 <xyang> tbarron: congrats!
15:06:31 <ganso> tbarron: congratulations!
15:06:35 <bswartz> So the main things to cover this week are any critical bugs, and the PTG planning
15:06:47 <bswartz> Let's do PTG planning first
15:06:51 <bswartz> #topic PTG planning
15:06:54 <tbarron> Thanks all, I'm going to need help and support from all of you :)
15:07:03 <bswartz> #link https://etherpad.openstack.org/p/manila-rocky-ptg
15:07:25 <dustins> tbarron: gladly :)
15:08:22 <bswartz> We should probably take a look at the proposed topics, and start scheduling them for Tuesday or Friday, based on how much time they're likely to consume
15:08:36 <bswartz> tbarron: I think you and I should get together and do that
15:08:51 <tbarron> bswartz: +1
15:08:53 <bswartz> Based on how much stuff we have here, it doesn't feel like we'll use up 2 whole days
15:08:58 <bswartz> Friday might be a short day
15:09:14 <bswartz> Then again, maybe some people have topics not yet added to the etherpad
15:09:25 <bswartz> I probably do...
15:09:38 * bswartz thinks
15:09:50 * tbarron sees smoke in the room
15:10:47 <ganso> bswartz: has it been officially confirmed that we will have the Tuesday and Friday?
15:11:01 <bswartz> In any case, since we're going to come up with a schedule soon, consider this your last chance to propose a topic for any time other than friday afternoon
15:11:04 <bswartz> ganso: yes
15:11:12 <ganso> bswartz: great
15:11:38 <tbarron> ganso: http://ptg.openstack.org/ptg.html
15:12:07 <ganso> tbarron: nice! we already have the room numbers!
15:12:33 <bswartz> So my other AI related to PTG I still haven't done :-[
15:12:47 <tbarron> ?
15:12:50 <bswartz> I need to find out about evening events so we can schedule a team dinner
15:13:12 <bswartz> Does anyone know what the schedule of evening events looks like?
15:14:08 <xyang> Cinder plans Thursday night dinner:  2/28
15:14:48 <ganso> xyang: Thursday is 3/1
15:15:09 <bswartz> We already eliminated Thursday+Friday
15:15:13 <xyang> that's on Cinder ptg etherpad:)
15:15:14 <bswartz> It has to be Mon, Tue, or Wed
15:15:35 <ganso> xyang: lol, I'm going to fix it over there
15:16:06 <bswartz> ganso: might be worth confirming with the cinder PTL that it's thursday not wednesday
15:16:27 <ganso> jungleboyj: ^
15:17:03 <bswartz> tbarron: unfortunately there's an all-hands meeting here at netapp after this meeting, but are you available later this afternoon?
15:17:05 <xyang> jungleboyj: Can you confirm: Cinder Planning to have a dinner outing on Thursday night:  3/1?
15:17:26 <tbarron> bswartz: sure, afternoon works
15:17:34 <jungleboyj> Yes. That is the plan. Thursday night.
15:17:45 <bswartz> okay cool
15:17:47 <tbarron> jungleboyj: which month?
15:18:05 <jungleboyj> I believe that's when we start March.
15:18:06 <xyang> jungleboyj: you wrote down the wrong date initially:)
15:18:09 <bswartz> #topic Let's Go Over New Bugs
15:18:11 * tbarron is not doing well working on good relations with cinder
15:18:52 <bswartz> dustins: I assume you have some bugs from last week worth covering
15:18:52 <jungleboyj> Do I have February 28th in the etherpad accidentally?
15:18:52 <dustins> #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:19:03 <bswartz> but first, are there any new bugs found since the RC1 tag?
15:19:12 <xyang> jungleboyj: yes:)
15:19:17 <bswartz> jungleboyj: yes
15:19:20 <dustins> bswartz: You assume correctly, and I believe just one in the last few days
15:19:24 <dustins> We can start with that one
15:19:39 <dustins> #link https://bugs.launchpad.net/manila/+bug/1748696
15:19:40 <openstack> Launchpad bug 1748696 in Cinder "get-pools doesn't reflect capacity change made by consume_from_volume" [Undecided,In progress] - Assigned to TommyLike (hu-husheng)
15:19:54 <dustins> This one is as yet unconfirmed in Manila, but does affect Cinder
15:19:58 <jungleboyj> I will fix that.
15:20:07 <xyang> it's already fixed by ganso
15:20:28 <bswartz> So I would bet that tommylike is on vacation for chinese new year
15:20:48 <tbarron> dustins: the assumption is that we cloned oversubscription code from cinder so we need a similar fix
15:20:50 <bswartz> Is this a release blocker?
15:21:05 <tbarron> if so, we've lived with it for a long time
15:21:06 <bswartz> Oh you mean the fix for manila is already merged?
15:21:24 <tbarron> I don't mean that.
15:21:39 <bswartz> responding to xyang
15:21:48 <tbarron> And I think xyang was talking about the etherpad.
15:21:56 <tbarron> xyang: ?
15:21:59 <bswartz> oh
15:22:07 <xyang> yes, I was responding to jungleboyj
15:22:17 <ganso> IRC chaos lol
15:22:22 <bswartz> multiple conversations overlapping
15:22:29 <xyang> about oversubscription, I'm not sure we have the exact same issue.  we handled it a little differently here
15:22:38 <tbarron> I suspect we need a careful fix for the oversubscription issue and wouldn't want to rush.
15:22:40 <bswartz> okay so since tommylike isn't around, who understands this bug?
15:23:17 <bswartz> I see the proposed cinder fix: https://review.openstack.org/#/c/543205/
15:23:49 <ganso> bswartz: I understand that there was an improvement merged in Cinder in the queens release, and we would like to have this improvement ported over to manila in the future. I am unaware of *BUGS* related to that
15:23:58 <bswartz> Maybe we should look at whether our scheduler code is close enough to cinders that we have the same bug and the same fix would work?
15:24:45 <xyang> I remember this part is a little different, but I'll have to take a look
15:25:30 <bswartz> The main decisions we need to make today are: should we plan to address this in a RC2
15:25:39 <bswartz> and who wants to own the investigation?
15:25:41 <tbarron> note that tommylike says manila *may* have the same bug
15:25:55 <bswartz> Yeah I mean if it needs addressing at all
15:26:14 <dustins> Right, it's unconfirmed in Manila right now
15:26:24 <tbarron> xyang: do you have time to look and let us know if the issue exists in manila?
15:26:36 <bswartz> Still if we want to fix it in an RC2, then there's a rush to get the investigation done
15:26:39 <xyang> I can take a look
15:26:52 <bswartz> Otherwise, we can talk about it at PTG and fix it in Rocky
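The Cinder bug under discussion is that pool capacity consumed by `consume_from_volume` was not reflected in later get-pools output. A toy model of the suspected accounting problem, using illustrative names only (this is not manila's or cinder's actual HostState class):

```python
class PoolState:
    """Minimal sketch of a scheduler pool's capacity accounting.

    The bug being investigated is that consuming capacity for a new
    share/volume must update every counter that a later pool listing
    reads back, or get-pools reports stale numbers.
    """

    def __init__(self, total_gb, free_gb):
        self.total_capacity_gb = total_gb
        self.free_capacity_gb = free_gb
        self.provisioned_capacity_gb = total_gb - free_gb

    def consume(self, size_gb):
        # Move both counters together so a subsequent pools query
        # reflects the post-consumption state.
        self.free_capacity_gb -= size_gb
        self.provisioned_capacity_gb += size_gb
```

Whether manila's scheduler shares this defect is exactly what xyang agreed to investigate; the sketch only illustrates the invariant being checked.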
15:27:18 <bswartz> tbarron: Whatever happened to that glance bug? Did they make an RC2 that fixed it?
15:27:35 <tbarron> xyang: if we have it, let bswartz and me know and we'll see about the rc2 vs rocky thing
15:27:46 <tbarron> bswartz: it's fixed and postgresql tests now pass
15:27:47 <xyang> tbarron: sure
15:27:56 <bswartz> excellent
15:28:02 <tbarron> bswartz: dustins: https://bugs.launchpad.net/manila/+bug/1749184
15:28:03 <openstack> Launchpad bug 1749184 in Manila "db migration error with mariadb >= 10.2.8" [Undecided,In progress] - Assigned to Stefan Nica (stefan.nica)
15:28:19 <tbarron> that's another new bug and potential rc2 candidate
15:28:30 <tbarron> https://review.openstack.org/#/c/543927/ addresses it
15:28:51 <tbarron> gate (and our distro) are using a lower mariadb version but
15:28:57 <bswartz> Do we have a cap on the mariadb version we support?
15:29:04 <tbarron> toabctl: is SUSE affected by this one?
15:29:23 * tbarron notes the bug reporter works for SUSE
15:29:42 <tbarron> bug reporter == bug fixer
15:29:57 <bswartz> Yeah I'm looking at this fix -- I'm not sure why the fix is mysql specific
15:30:13 <bswartz> Does the fix not also work on postgres?
15:30:39 <bswartz> It seems like dropping a primary key before dropping the column itself should be completely harmless
15:31:11 <tbarron> I think it does not, see patch set 3
15:31:13 <bswartz> Oh wait, this drops the whole primary key and recreates it
15:31:55 <bswartz> I wonder if the limitation here comes from sqlalchemy or the underlying databases
15:32:19 <bswartz> It makes me nervous to have different upgrade code for different DBs
15:32:24 <tbarron> in patch set #3 he did the drop and readd unconditionally and the postgresql tests failed
15:33:26 <tbarron> bswartz: agree, but I wonder if anyone actually uses OpenStack with mariadb >= 10.2.8 yet
15:33:27 <bswartz> But if SUSE is shipping a new enough version of MariaDB with their Queens OpenStack, then this does feel worthy of an RC2
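The migration fix being reviewed drops and re-creates the primary key only on MySQL/MariaDB, since patch set 3 showed PostgreSQL rejects the unconditional drop/re-add. A minimal sketch of that dialect-conditional ordering as a pure function (table, column, and constraint names here are hypothetical, not the ones in review 543927):

```python
def plan_drop_column(dialect, table, column, pk_name, remaining_pk_cols):
    """Return the ordered schema operations needed to drop `column`.

    On MariaDB >= 10.2.8 a column referenced by the primary key cannot
    be dropped directly, so the PK ('PRIMARY' in MySQL dialects) is
    dropped first and re-created without the column. Other backends
    can drop the column in one step.
    """
    if dialect == 'mysql':
        return [
            ('drop_constraint', 'PRIMARY', table),
            ('drop_column', table, column),
            ('create_primary_key', pk_name, table, remaining_pk_cols),
        ]
    return [('drop_column', table, column)]
```

In a real Alembic migration these tuples would correspond to `op.drop_constraint`, `op.drop_column`, and `op.create_primary_key` calls; isolating the ordering in a pure function just makes the MySQL-specific branch, which made bswartz nervous, easy to see and test.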
15:33:52 <tbarron> I believe toabctl is on paternity leave (again).
15:34:04 <bswartz> toabctl: do you think you could find this out soon?
15:34:36 <bswartz> He was here 30 minutes ago
15:34:46 <tbarron> bswartz: probably he'll read the backlog but I'll ask in the review as well.
15:35:35 <bswartz> okay well this is important because if we need an RC2, then the sooner we cut it the better
15:36:07 <bswartz> As a reminder, we've branched already so it's okay to merge the fix into master as soon as the fix is ready, and we can take the decision to backport it separately
15:36:37 <tbarron> bswartz: remind us of the rules w.r.t. doc fixes, etc. now.
15:36:57 <bswartz> doc fixes are always okay to merge, and those go into master
15:37:10 <bswartz> I don't think we've ever backported docs changes
15:37:44 <bswartz> The main thing that shouldn't be going into master are large/risky changes that could complicate any backports to queens
15:38:30 <tbarron> so if we have missing api doc it won't be in o.o.docs for queens then, right?
15:38:49 <bswartz> Once the release is done, then we open the gates for anything into master
15:39:08 <bswartz> tbarron: IDK if the new docs process requires us to maintain docs in stable branches of manila
15:39:42 <bswartz> We can look at how the docs builds are being done
15:39:50 <tbarron> well if it *allows* us to, then we can backport those
15:40:02 <tbarron> we have several api update reviews pending
15:40:05 <bswartz> Yes but it's a huge hassle to maintain stable docs in addition to the master docs
15:40:35 <bswartz> The question would be, is anyone going to read anything other than the master version of the docs?
15:40:45 <tbarron> bswartz: I'm not saying we have to do it indefinitely, just that we have some outstanding and it may be worth doing it right after the release.
15:40:56 <ganso> bswartz: perhaps, if the person is using an older version of OpenStack, the master version will not make sense
15:41:11 <tbarron> well I'm not sure what oo./*latest*/ points to
15:41:27 <bswartz> ganso: one would hope we don't change things so much that the master docs are nonsensical w.r.t. stable version of the code
15:41:31 * tbarron needs to check, I think we're pointing to pike rather than master atm
15:41:43 * bswartz sighs
15:41:47 <ganso> IIRC we have /latest/ and previous ones, like /ocata/, /pike/
15:41:50 <bswartz> Okay perhaps we need to do some docs backports
15:42:08 <tbarron> atm I'm just arguing for having *latest* up to date
15:42:25 <bswartz> The master docs should be as correct as we can make them
15:42:43 <ganso> updating /latest means less things to worry about in the future =)
15:42:44 <bswartz> If we need to backport some docs changes to stable branches, then so be it
15:43:01 * dustins wishes he had time to be docs guy again
15:43:21 <bswartz> okay dustins, what other bugs do you have?
15:43:27 <ganso> I believe gouthamr is the current docs guy
15:43:48 <bswartz> gouthamr is the newlywed guy
15:43:55 <dustins> ganso: Ah, didn't know that!
15:44:12 <dustins> #link https://bugs.launchpad.net/manila/+bug/1747695
15:44:13 <openstack> Launchpad bug 1747695 in Manila "neutron network multi segment ip version" [Undecided,New]
15:44:25 <dustins> This one seems to have been found in Pike
15:44:52 <bswartz> Related to the IPv6 changes?
15:45:03 <bswartz> The multi-segment binding feature is not well covered by tests
15:45:42 <dustins> I'm not even sure what that is, honestly
15:45:43 <bswartz> ganso: do you know if any tests that NetApp runs could catch this regression (if that's what this is)?
15:46:10 <ganso> bswartz: I believe tests we run aren't able to catch this
15:46:17 <tbarron> There's a good chance that maurice will propose a fix, he has two in review right now
15:46:40 <bswartz> Since it's pike-targeted, we don't have to consider it for Queens until after the release
15:47:02 <bswartz> But the bigger concern here is that the regression was able to happen due to poor test coverage
15:47:21 <bswartz> I don't know how we could cover this better in the gate, but 3rd party CI can and should cover this kind of use case
15:47:44 <ganso> Maurice links to stable/pike code, so this is not something we changed in queens, we actually might have fixed it (unlikely)
15:47:54 <bswartz> I doubt it
15:48:09 <tbarron> does he claim (or do we know) that it's a *regression*?
15:48:31 <bswartz> Presumably he was running Ocata and it was working fine
15:48:33 <tbarron> do we know it's OK in ocata?
15:48:47 <bswartz> And on pike it stopped working -- that's when the ipv6 stuff merged
15:48:49 <tbarron> k, but I'll ask explicitly in the bug
15:49:01 <dustins> tbarron: Thanks!
15:50:06 <tbarron> bswartz: but I agree, it would be good to detect this ourselves rather than having large enterprises discover it for us.
15:50:27 <bswartz> tbarron: that could be a PTG topic then -- how to test multisegment binding in the gate
15:50:49 <tbarron> done
15:51:00 <ganso> bswartz: +1
15:51:10 <bswartz> I'm not optimistic we'll find a solution
15:51:15 <bswartz> But it's worth discussing
15:51:21 <bswartz> dustins: next?
15:51:40 <dustins> #link https://bugs.launchpad.net/manila/+bug/1746725
15:51:41 <openstack> Launchpad bug 1746725 in Manila "LVM driver is unable to remove addresses in different IP versions belonging to the same interface properly" [Undecided,New]
15:52:07 <bswartz> I would like to fix this bug with my new export helper
15:52:19 <bswartz> Already a PTG topic
15:53:18 <tbarron> awesome
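Bug 1746725 is about the LVM driver mishandling removal of export addresses when IPv4 and IPv6 addresses share one interface. A small illustrative helper, using Python's `ipaddress` module, showing the family-agnostic comparison such a fix needs (the function name and signature are invented for illustration, not manila's LVM helper API):

```python
import ipaddress


def addrs_to_remove(configured, current):
    """Return addresses present on the interface but no longer configured.

    Parsing with ipaddress.ip_address() handles IPv4 and IPv6 uniformly,
    so a mixed-family interface is pruned correctly instead of only one
    IP version being considered.
    """
    keep = {ipaddress.ip_address(a) for a in configured}
    return [a for a in current if ipaddress.ip_address(a) not in keep]
```

Whatever shape the new export helper takes, the point is that removal logic must compare normalized addresses across both IP versions on the same interface.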
15:53:58 <bswartz> dustins: anything else?
15:54:09 <dustins> #link https://bugs.launchpad.net/manila/+bug/1744084
15:54:12 <openstack> Launchpad bug 1744084 in Manila " DocImpact: Add MapR-FS native driver" [Undecided,New]
15:54:24 <dustins> This one's just a simple "are the docs there for this" bug
15:54:34 <bswartz> Yes, let's check if there are docs
15:54:54 <bswartz> #link https://github.com/openstack/manila/blob/master/doc/source/configuration/shared-file-systems/drivers/maprfs-native-driver.rst
15:55:24 <bswartz> Close bug?
15:55:33 <dustins> Looks good to me
15:55:48 <bswartz> next?
15:56:02 <dustins> #link https://bugs.launchpad.net/manila/+bug/1733286
15:56:02 <openstack> Launchpad bug 1733286 in Manila "snapshot share data Sync with the source share" [Undecided,New]
15:56:40 <dustins> An older one that I think we've looked at in the past
15:57:03 <bswartz> Oh
15:57:05 <ganso> dustins: indeed, looks like nobody volunteered to try to reproduce it
15:57:49 <bswartz> This is generic driver specific
15:58:01 <bswartz> I think there are caching/timing issues with how it does snapshots
15:58:24 <bswartz> We just take a cinder snapshot and assume that the right thing happens
15:58:47 <bswartz> If there's unwritten data in the service VM's cache though, it can get missed by the snapshot
15:59:21 <dustins> bswartz: I guess it depends on how driven we are to fix this thing with the Generic driver
15:59:28 <bswartz> The correct behavior would probably be to flush the write caches of the service VM before taking the cinder snapshot
16:00:06 <ganso> time check
16:00:16 <bswartz> Yeah given that it's probably not easy to reproduce this problem, it would be tough to test a fix
16:00:34 <bswartz> But the fix could be as easy as SSH to service VM and invoke "sync" before cinder snapshot
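The fix bswartz suggests, quiescing the service VM before snapshotting, can be sketched as a small function with the SSH runner and Cinder call injected as callables (both names are hypothetical stand-ins, not real manila or cinder APIs):

```python
def quiesce_and_snapshot(run_ssh, create_snapshot, volume_id):
    """Flush the service VM's write cache, then snapshot the volume.

    `run_ssh(cmd)` is assumed to execute a shell command on the service
    VM; `create_snapshot(volume_id)` stands in for the Cinder snapshot
    call. Running `sync` first forces dirty pages to the backing volume
    so the snapshot does not miss recently written data.
    """
    run_ssh('sync')
    return create_snapshot(volume_id)
```

The ordering is the whole fix: the snapshot request must not be issued until `sync` has returned on the service VM.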
16:00:40 <bswartz> yeah we're out of time
16:00:46 <bswartz> thanks everyone
16:00:53 <bswartz> I have to run to my next meeting
16:00:59 <bswartz> tbarron: i'll be back later this afternoon
16:01:04 <bswartz> #endmeeting