15:00:04 <bswartz> #startmeeting manila
15:00:05 <openstack> Meeting started Thu Oct  5 15:00:04 2017 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:08 <openstack> The meeting name has been set to 'manila'
15:00:09 <gouthamr> o/
15:00:17 <dustins> \o
15:00:24 <bswartz> hello all
15:00:29 <raissa_> o/
15:00:37 <ganso> hello
15:01:04 <bswartz> so there's nothing on the agenda today, and no announcements
15:01:11 <toabctl> hi
15:01:19 <tbarron> hi
15:01:19 <bswartz> we can talk about whatever we like
15:01:25 <bswartz> we should cover the gate issues
15:01:36 <ganso> drivers_private_storage hard-delete vs soft-delete
15:01:41 <bswartz> and try to clear up any other discussions
15:01:43 <bswartz> ^ yes like that
15:01:48 <dustins> bswartz: I've got an update on the bug czar thing for after the gate stuff is discussed
15:02:50 <bswartz> okay I cobbled together a quick agenda
15:02:55 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:04 <bswartz> #topic Gate issues
15:03:21 <bswartz> so it looks like the zuulv3 migration broke our gate late last week
15:03:32 <bswartz> last I checked, infra was working on the issue but that might not still be true
15:03:47 <bswartz> has anyone else looked into recent failures?
15:04:09 <tbarron> so zuulv3 is now non-gating
15:04:25 <bswartz> pep8 is failing with this error:
15:04:25 <tbarron> zuulv2 aka jenkins is running again on check and gate
15:04:41 <tbarron> zuulv3 works more or less like a third party job
15:04:44 <bswartz> http://paste.openstack.org/show/622774/
15:04:53 <bswartz> looks like we have a bad hacking check
15:05:19 <ganso> bswartz: other CIs are not failing for that reason though
15:05:43 <bswartz> CI failures are not relevant to the gate
15:06:08 <gouthamr> bswartz: which patch was that pep8 failure on?
15:06:20 <ganso> bswartz: first-party CIs I meant
15:06:28 <bswartz> oh it could just be that I looked at a broken patch
15:06:32 <bswartz> doh
15:06:54 <gouthamr> there are some recent patches from vkmc that are going to fix py3.x for us; guess it's a genuine failure from one of them :D
15:07:02 <bswartz> yeah it seems so
15:07:23 <bswartz> okay we should just try some more known-good patches and see what fails if anything
15:07:30 <bswartz> then address those issues
15:07:43 <bswartz> I'm deep in the process of rewriting generic driver
15:07:45 <gouthamr> jenkins upstream's passing now, but i haven't looked at why my third-party CI is broken... probably needs some tinkering, based on how upstream fixed stuff
15:07:51 <tbarron> well https://review.openstack.org/#/c/508680/ works with zuulv2
15:08:04 <tbarron> except maybe something screwy with the coverage job
15:08:27 <ganso> bswartz: this is a good one to watch: https://review.openstack.org/#/c/508680/
15:08:40 <bswartz> yes
15:08:41 <ganso> oops, too late
15:09:17 <tbarron> note that zuulv3 is supposed to run on recheck, but it's in a different queue with many fewer nodes
15:09:30 <tbarron> so it will eventually run .... theoretically
15:09:36 <bswartz> if 3rd party CIs are broken, talk to the infra and QA teams -- I'm not sure there's anything we can do about it on this team
15:09:37 <ganso> tbarron: the coverage job is not working locally either
15:09:47 <tbarron> but on this patch the last zuulv3 run is from 9/30 ...
15:10:07 <tbarron> ganso: yeah, i haven't had time to chase it
15:10:24 <bswartz> okay let's move on to the issue that's been causing so much strife
15:10:33 <bswartz> #topic drivers_private_storage hard-delete vs soft-delete
15:10:36 <ganso> lol
15:10:47 <tbarron> ganso is causing trouble again
15:10:47 <bswartz> ganso raised this issue in the channel 2 days ago
15:10:49 <gouthamr> strife, i miss vponomaryov
15:10:55 <ganso> #link https://bugs.launchpad.net/manila/+bug/1721123
15:10:56 <openstack> Launchpad bug 1721123 in Manila "cannot update previously deleted drivers_private_storage entries" [Medium,Confirmed] - Assigned to Rodrigo Barbieri (rodrigo-barbieri2010)
15:11:05 <tbarron> beelzebub
15:11:16 <gouthamr> he-who-must-not-be-named
15:11:17 * tbarron tries to raise him
15:11:31 <bswartz> we have a pattern of using soft deletes everywhere
15:11:36 <ganso> xD
15:11:52 <bswartz> it seems there's a bug with the sqlalchemy layer related to deleted driver private share data
15:12:03 <bswartz> I don't understand why we don't just fix this bug and move on
15:12:14 <gouthamr> expressed my opinion on https://bugs.launchpad.net/manila/+bug/1721123
15:12:15 <openstack> Launchpad bug 1721123 in Manila "cannot update previously deleted drivers_private_storage entries" [Medium,Confirmed] - Assigned to Rodrigo Barbieri (rodrigo-barbieri2010)
15:12:21 <bswartz> what's the reason for considering hard deletes?
15:12:50 <ganso> this bug stirred discussion on whether soft-deletes make sense for drivers_private_storage
15:12:54 <bswartz> soft deletes are faster and we do them everywhere else
15:12:56 <gouthamr> because it is a key-value store that no one knows how to use besides individual driver authors, so what's the point of soft deletes?
15:13:05 <bswartz> why would we want to be inconsistent here?
15:13:24 <bswartz> the point of soft deletion is that it's what we do everywhere else
15:13:31 <vkmc> O/
15:13:34 <bswartz> and it's arguably faster than hard deletion
15:13:49 <gouthamr> in a key-value store like driver-private-data, soft-deleting is unnecessary
15:13:56 <bswartz> you need a really good reason to add an inconsistency to our database model
15:14:07 <gouthamr> i.e., it will lead to us writing weird logic to reuse an existing "deleted" row
15:14:17 <ganso> it also does not add value to store stale private_storage information
15:14:22 <gouthamr> and defeats one use case of soft-deletes: keeping things around for auditing
15:14:33 <bswartz> it's not necessary anywhere -- we chose to do it for the advantages it offers
15:14:51 <bswartz> I agree nobody cares about auditing old deleted driver private share data
15:14:58 <bswartz> that's not a good argument for hard deletion though
15:15:10 <ganso> the bug-fixed soft-delete code would allow entries to be "undeleted", so the soft delete loses its purpose
15:15:36 <gouthamr> ganso +1
15:15:45 <bswartz> who cares?
15:15:56 <bswartz> it's up to the driver to do what it wants with that data
15:16:12 <gouthamr> absolutely, so if a driver deletes something, it expects it gone
15:16:17 <bswartz> as long as the driver can do what it wants, I see no issue
15:16:31 <gouthamr> if we're going to start seeing SQL exceptions, that's bad design
15:17:01 <bswartz> soft deletes should look like "gone" from the driver perspective
15:17:06 <gouthamr> handling updates on key-value stores isn't straightforward, as is evidenced by the insertion logic
15:17:10 <bswartz> if not, there's a bug, and we should fix it
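(The contract being argued above, as a minimal runnable model: after delete(), a key must look gone to the caller no matter how the DB layer implements it, and re-adding the key must just work. The get/update/delete method names mirror the calls drivers make on private_storage, but this stand-in is illustrative, not manila code.)

    class FakePrivateStorage(object):
        """Illustrative stand-in for the driver private_storage interface."""

        def __init__(self):
            # (entity_id, key) -> {'value': ..., 'deleted': bool}
            self._rows = {}

        def update(self, entity_id, details):
            for key, value in details.items():
                # re-adding a previously deleted key simply revives the row
                self._rows[(entity_id, key)] = {'value': value, 'deleted': False}

        def get(self, entity_id, key, default=None):
            row = self._rows.get((entity_id, key))
            if row is None or row['deleted']:
                return default      # soft-deleted rows look gone to callers
            return row['value']

        def delete(self, entity_id, key):
            row = self._rows.get((entity_id, key))
            if row is not None:
                row['deleted'] = True   # soft delete underneath

    storage = FakePrivateStorage()
    storage.update('share-1', {'volume_id': 'vol-1'})
    storage.delete('share-1', 'volume_id')
    assert storage.get('share-1', 'volume_id') is None      # looks gone
    storage.update('share-1', {'volume_id': 'vol-2'})       # the bug scenario
    assert storage.get('share-1', 'volume_id') == 'vol-2'   # revived and updated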
15:17:10 * gouthamr looks for link
15:17:23 <ganso> bswartz: they actually would not be
15:17:32 <bswartz> explain
15:17:59 <gouthamr> #LINK https://github.com/openstack/manila/blob/master/manila/db/sqlalchemy/api.py#L3497
15:18:16 <ganso> bswartz: the alternative that fixes the bug with soft-deletes needs to query the soft-deleted entries to undelete them... so they are not really gone
15:18:25 <ganso> bswartz: this happens in the DB layer though
15:18:30 <bswartz> okay
15:18:55 <bswartz> so if I add a key that was previously deleted, the DB layer just undeletes the old value and updates it
15:19:02 <bswartz> that seems like optimal behavior to me
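(A sketch of that DB-layer behavior, with sqlite3 standing in for the SQLAlchemy layer; the table and column names here are illustrative, not the actual manila schema.)

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE private_data '
                 '(entity_uuid TEXT, key TEXT, value TEXT, deleted INTEGER)')

    def upsert(entity_uuid, key, value):
        # match the row whether or not it is soft-deleted, un-deleting it
        cur = conn.execute(
            'UPDATE private_data SET value = ?, deleted = 0 '
            'WHERE entity_uuid = ? AND key = ?', (value, entity_uuid, key))
        if cur.rowcount == 0:  # nothing matched: genuinely new key
            conn.execute('INSERT INTO private_data VALUES (?, ?, ?, 0)',
                         (entity_uuid, key, value))

    upsert('share-1', 'volume_id', 'vol-1')
    conn.execute('UPDATE private_data SET deleted = 1 '
                 'WHERE entity_uuid = ?', ('share-1',))   # soft delete
    upsert('share-1', 'volume_id', 'vol-2')               # re-add the same key
    print(conn.execute('SELECT value, deleted FROM private_data').fetchall())
    # [('vol-2', 0)] -- the old row is reused, un-deleted and updated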
15:19:18 <ganso> bswartz: it has been designed that way, but the bug prevents that exact scenario you described
15:19:29 <bswartz> and if we fix the bug, then what's the problem?
15:19:53 <ganso> bswartz: we can fix it with soft-deletes or hard-deletes
15:20:14 <bswartz> let's assume hard deletes aren't the answer
15:20:21 <ganso> bswartz: it will remove part of the insertion logic gouthamr linked
15:20:22 <bswartz> what's the problem after we fix the soft delete bug?
15:20:26 <tbarron> if we fix the bug and maintain consistency with soft-deletes elsewhere, what's the problem?
15:20:47 <bswartz> or is fixing the soft delete bug extremely hard?
15:20:48 <ganso> there's no problem other than some code duplication from performing another query
15:20:54 <ganso> it is not
15:20:55 <tbarron> earlier I think you said it makes ugly code?
15:21:00 <ganso> tbarron: yes
15:21:04 <gouthamr> we effectively pay back the time saved by soft-deleting
15:21:17 <bswartz> the existing code isn't very pretty
15:21:29 <bswartz> so ugliness is a price we've already paid
15:21:54 <bswartz> don't assume that a soft-undelete is expensive
15:22:00 <ganso> bswartz: with hard-deletes you make the code less ugly, while soft-deletes make it uglier :D
15:22:03 <bswartz> UPDATES and SELECTS are usually very fast
15:22:09 <bswartz> INSERTS and DELETES are usually very slow
15:22:22 <gouthamr> depending on the size of the table, or so SQL documentation tells me
15:23:06 <ganso> gouthamr: hummm, so if we have a lot of old soft-deleted keys lying around, updates are going to be more expensive than a delete, am I right?
15:23:07 <bswartz> if the argument comes down to performance, then we have to benchmark the alternatives and see which is faster in practice
15:23:26 <bswartz> my argument is one of simple consistency with existing code
15:23:56 <bswartz> and I suspect the performance differences are not large in either case
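(If it does come down to benchmarking, a rough stdlib-only sketch of the comparison is below; an in-memory sqlite DB says nothing definitive about MySQL/InnoDB under real load, it only shows the shape of the measurement.)

    import sqlite3
    import time

    def run(style, rows=10000):
        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE private_data '
                     '(entity_uuid TEXT, key TEXT, value TEXT, deleted INTEGER)')
        conn.executemany(
            'INSERT INTO private_data VALUES (?, ?, ?, 0)',
            [('share-%d' % i, 'volume_id', 'vol-%d' % i) for i in range(rows)])
        start = time.perf_counter()
        for i in range(rows):
            if style == 'soft':
                conn.execute('UPDATE private_data SET deleted = 1 '
                             'WHERE entity_uuid = ?', ('share-%d' % i,))
            else:
                conn.execute('DELETE FROM private_data '
                             'WHERE entity_uuid = ?', ('share-%d' % i,))
        conn.commit()
        return time.perf_counter() - start

    for style in ('soft', 'hard'):
        print('%s deletes: %.3fs' % (style, run(style)))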
15:24:09 <tbarron> probably a small table
15:24:31 <tbarron> but maybe not i guess
15:25:19 <bswartz> in a previous project we implemented soft deletes purely for the performance gain -- but that was a different database in a different era
15:25:27 <ganso> in the patch I am working on, which uses private storage, if I attempt to migrate a share and fail over and over until I succeed, all those failed attempts will leave soft-deleted entries lying around in the drivers_private_storage table
15:25:48 <bswartz> It sounds like the only downside to just fixing the existing soft delete code is that there is some code ugliness or duplication
15:25:58 <bswartz> I suspect we can clean it up in code review
15:26:33 <gouthamr> okay, and if we're not able to do that, we can go down the path of hard deletes?
15:26:40 <ganso> we do have the manila-manage command to clean soft-deleted entries, but that has to be run manually
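(Assuming the cleanup ganso refers to is the db purge subcommand, mirroring other projects' manage tools, the manual step would look roughly like the line below; check the installed manila-manage for the exact syntax.)

    manila-manage db purge 30    # hard-delete rows soft-deleted more than 30 days ago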
15:27:01 <bswartz> if there is a good reason for hard deletes we'll do them
15:27:07 <bswartz> I'm still waiting for the good reason
15:28:02 <bswartz> there's a benefit to code readability from using soft deletes across the board
15:28:34 <bswartz> we shouldn't give that up unless we're gaining something else really big
15:29:02 <bswartz> okay let's move on
15:29:31 <bswartz> #topic Bug Czar
15:29:45 <bswartz> dustins: you had more to share on this topic?
15:29:52 <dustins> bswartz: Yeah!
15:30:11 <dustins> So I'm starting to go through the list of Manila bugs that we have open on Launchpad
15:30:17 * bswartz cringes
15:30:35 <dustins> And I'm going to go through some of the ones that haven't been updated in the last cycle or so
15:30:44 <dustins> (some haven't been touched in nearly three years)
15:30:57 <dustins> Just to have a baseline of what needs to be fixed where
15:31:27 <dustins> For things that are outstanding, I'll leave comments asking what the status is, and if it's urgent, I'll find you on IRC :)
15:31:46 <bswartz> dustins: you had said you'd like to use part of this meeting to review bugs
15:31:57 <dustins> bswartz: Is there a way that I can be notified of new bugs added to launchpad as they come in?
15:32:03 <bswartz> oh yeah
15:32:06 <ganso> dustins is the new bug police
15:32:12 <markstur> Czar!
15:32:15 * bswartz checks LP groups membership
15:32:17 <gouthamr> king
15:32:20 <dustins> ganso: I prefer Bug Sherpa
15:32:23 <ganso> emperor
15:32:33 <tbarron> dude
15:32:39 <dustins> I don't want to be a dictator, I just wanna help (and stay mostly benevolent)
15:32:41 <gouthamr> supreme .... okay i'm not going there
15:32:51 <markstur> pretty sure if you don't capitalize king and emperor he'll be offended
15:33:09 <gouthamr> dilly dilly
15:33:12 <dustins> I want to get help where it's needed and provide a gentle push when required
15:33:25 <dustins> This is to help everyone
15:33:35 <bswartz> https://launchpad.net/~manila-bug-supervisors/+members#active
15:33:40 <bswartz> dustins: just added you
15:33:44 <dustins> bswartz: Thanks!
15:33:59 <bswartz> we need to dramatically trim the membership of this group
15:34:05 <dustins> I'll go through the bugs over the next several days and comment as I go along if I have any questions about the status of a bug
15:34:41 <dustins> Chances are we can reduce our backlog by a decent margin in the span of just a week
15:34:48 <dustins> well, hopefully :D
15:35:17 <dustins> bswartz: When you get the chance, I'd like to have the list of driver maintainers as well
15:35:30 <bswartz> dustins: sure thing
15:35:33 <dustins> And I'll go ahead and codify that in the Wiki (if it's not there already)
15:36:38 <dustins> And I'll set aside some time next week in the meeting to mention any critical bugs
15:36:49 <bswartz> dustins: did you go to https://bugs.launchpad.net/manila and click "Subscribe to bug mail"?
15:37:08 <gouthamr> ^^ nice
15:37:37 <dustins> bswartz: Just did!
15:37:45 <bswartz> okay excellent
15:38:13 <dustins> bswartz: That's all I have for today, thanks, everyone!
15:38:34 <bswartz> dustins: while I have a list of driver maintainers based on historical conversations, we should be using something like driverlog for the official record
15:39:08 <dustins> bswartz: Driver log? As in "in the driver code"?
15:39:20 <bswartz> I haven't checked how many manila drivers are in there, but if any are missing, we should add entries with what we know and ask maintainers to make updates there
15:39:21 <gouthamr> dustins: https://github.com/openstack/driverlog
15:40:01 <dustins> sounds good
15:40:08 <bswartz> dustins: this is the 5500-line JSON horror-show we discussed last week
15:40:22 <dustins> bswartz: Oh, it's THAT
15:40:47 <bswartz> it's still an official record and better than some spreadsheet on bswartz's laptop
15:41:03 <dustins> That's a LOT of JSON
15:41:07 <bswartz> I'm not sure we can get driverlog changed from json to yaml but it would be nice
15:41:44 <dustins> So...uhh...I'll have a look at this and see if everything's up to date and ping folks that need to update the driver log
15:42:05 <bswartz> yeah that's the best path forward
15:42:25 <bswartz> and we can work together on filling gaps in the data
15:42:28 <dustins> JSON seems like a silly format for this, but that's a discussion for another time and place
15:42:33 <bswartz> indeed
15:42:35 <dustins> Indeed!
15:42:40 <bswartz> YAML > JSON
15:42:43 <bswartz> #topic open discussion
15:43:03 <bswartz> anyone have something else for today?
15:43:45 <bswartz> we didn't spend any time discussing specs today, but the deadline to review and merge those is coming up fast
15:44:13 <bswartz> thanks all
15:44:18 <gouthamr> it's an 8-day national holiday in China...
15:44:25 <bswartz> #endmeeting