15:00:04 #startmeeting manila
15:00:05 Meeting started Thu Oct 5 15:00:04 2017 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:08 The meeting name has been set to 'manila'
15:00:09 o/
15:00:17 \o
15:00:24 hello all
15:00:29 o/
15:00:37 hello
15:01:04 so there's nothing on the agenda today, and no announcements
15:01:11 hi
15:01:19 hi
15:01:19 we can talk about whatever we like
15:01:25 we should cover the gate issues
15:01:36 drivers_private_storage hard-delete vs soft-delete
15:01:41 and try to clear up any other discussions
15:01:43 ^ yes like that
15:01:48 bswartz: I've got an update on the bug czar thing when the gate stuff is discussed
15:02:50 okay I cobbled together a quick agenda
15:02:55 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:04 #topic Gate issues
15:03:21 so it looks like the zuulv3 migration broke our gate late last week
15:03:32 last I checked, infra was working on the issue but that might not still be true
15:03:47 has anyone else looked into recent failures?
15:04:09 so zuulv3 is now non-gating
15:04:25 pep8 is failing with this error:
15:04:25 zuulv2 aka jenkins is running again on check and gate
15:04:41 zuulv3 works more or less like a third party job
15:04:44 http://paste.openstack.org/show/622774/
15:04:53 looks like we have a bad hacking check
15:05:19 bswartz: other CIs are not failing for that reason though
15:05:43 CI failures are not relevant to the gate
15:06:08 bswartz: which patch was that pep8 failure on?
15:06:20 bswartz: first-party CIs I meant
15:06:28 oh it could just be I looked at a broken patch
15:06:32 doh
15:06:54 there're some recent patches from vkmc that are going to fix py3x for us, guess it's a genuine failure from one of them :D
15:07:02 yeah it seems so
15:07:23 okay we should just try some more known-good patches and see what fails if anything
15:07:30 then address those issues
15:07:43 I'm deep in the process of rewriting the generic driver
15:07:45 jenkins upstream's passing now, but i haven't looked at why my fancy CI is broken.. probably needs some tinkering, based off how upstream fixed stuff
15:07:51 well https://review.openstack.org/#/c/508680/ works with zuulv2
15:08:01 s/my fancy ci/third party CI
15:08:04 except maybe something screwy with the coverage job
15:08:27 bswartz: this is a good one to watch: https://review.openstack.org/#/c/508680/
15:08:40 yes
15:08:41 oops, too late
15:09:17 note that zuulv3 is supposed to run on recheck but it's in a different queue with many fewer nodes
15:09:30 so it will eventually run .... theoretically
15:09:36 if 3rd party CIs are broken, talk to the infra and QA teams -- I'm not sure there's anything we can do about it on this team
15:09:37 tbarron: the coverage job is not working locally as well
15:09:47 but on this patch the last zuulv3 is from 9/30 ...
15:10:07 ganso: yeah, i haven't had time to chase it
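For context on the "bad hacking check" suspicion above: manila's local pep8 rules live in manila/hacking/checks.py and are ordinary flake8 plugin functions that yield an (offset, message) tuple for each offending line. The sketch below is illustrative only, assuming the current hacking plugin interface; it is not the actual check behind the paste, and the check name and M3xx message are made up.

```python
# Illustrative sketch of a manila-style hacking check (hypothetical check
# name and M3xx code); real checks live in manila/hacking/checks.py.
from hacking import core


@core.flake8ext
def check_no_bare_print(logical_line, filename):
    """M3xx - example rule: disallow bare print() outside of test modules."""
    if 'manila/tests/' in filename:
        return
    if logical_line.startswith('print('):
        yield (0, "M3xx: use LOG.debug() instead of print()")
```

These checks run as part of the usual `tox -e pep8` target, which is the quickest way to reproduce the gate failure locally.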
15:10:24 okay let's move on to the issue that's been causing so much strife
15:10:33 #topic drivers_private_storage hard-delete vs soft-delete
15:10:36 lol
15:10:47 ganso is causing trouble again
15:10:47 ganso raised this issue in the channel 2 days ago
15:10:49 strife, i miss vponomaryov
15:10:55 #link https://bugs.launchpad.net/manila/+bug/1721123
15:10:56 Launchpad bug 1721123 in Manila "cannot update previously deleted drivers_private_storage entries" [Medium,Confirmed] - Assigned to Rodrigo Barbieri (rodrigo-barbieri2010)
15:11:05 beelzebub
15:11:16 he-who-must-not-be-named
15:11:17 * tbarron tries to raise him
15:11:31 we have a pattern of using soft deletes everywhere
15:11:36 xD
15:11:52 it seems there's a bug with the sqlalchemy layer related to deleted driver private share data
15:12:03 I don't understand why we don't just fix this bug and move on
15:12:14 expressed my opinion on https://bugs.launchpad.net/manila/+bug/1721123
15:12:15 Launchpad bug 1721123 in Manila "cannot update previously deleted drivers_private_storage entries" [Medium,Confirmed] - Assigned to Rodrigo Barbieri (rodrigo-barbieri2010)
15:12:21 what's the reason for considering hard deletes?
15:12:50 this bug stirred discussion on whether soft-deletes make sense for drivers_private_storage
15:12:54 soft deletes are faster and we do them everywhere else
15:12:56 because it is a key value store that no one knows how to use besides individual driver authors, so what's the point of soft deleted?
15:12:59 deleted*
15:13:05 why would we want to be inconsistent here?
15:13:24 the point of soft deletion is that it's what we do everywhere else
15:13:31 O/
15:13:34 and it's arguably faster than hard deletion
15:13:49 in a key-value store like driver-private-data, soft-deleting is unnecessary
15:13:56 you need a really good reason to add an inconsistency to our database model
15:14:07 i.e., it will lead to us writing weird logic to reuse an existing "deleted" row
15:14:17 it also does not add value to store stale private_storage information
15:14:22 and defeats one use case of soft-deletes: keeping things around for auditing
15:14:33 it's not necessary anywhere -- we chose to do it for the advantages it offers
15:14:51 I agree nobody cares about auditing old deleted driver private share data
15:14:58 that's not a good argument for hard deletion though
15:15:10 the bugfixed soft-delete code would allow entries to be "undeleted", so it loses its purpose
15:15:36 ganso +1
15:15:45 who cares?
15:15:56 it's up to the driver to do what it wants with that data
15:16:12 absolutely, so if a driver deletes something, it expects it gone
15:16:17 as long as the driver can do what it wants, I see no issue
15:16:31 if we're going to start seeing SQL exceptions, that's bad design
15:17:01 soft deletes should look like "gone" from the driver perspective
15:17:06 handling updates on key-value stores isn't straightforward, as is evidenced by the insertion logic
15:17:10 if not, there's a bug, and we should fix it
15:17:10 * gouthamr looks for link
15:17:23 bswartz: they actually would not be
15:17:32 explain
15:17:59 #LINK https://github.com/openstack/manila/blob/master/manila/db/sqlalchemy/api.py#L3497
15:18:16 bswartz: the alternative that fixes the bug with soft-deletes needs to query the soft-deleted entries to undelete them... so they are not really gone
15:18:25 bswartz: this happens in the DB layer though
15:18:30 okay
15:18:55 so if I add a key that was previously deleted, the DB layer just undeletes the old value and updates it
15:19:02 that seems like optimal behavior to me
15:19:18 bswartz: it has been designed that way, but the bug prevents that exact scenario you described
15:19:29 and if we fix the bug, then what's the problem?
15:19:53 bswartz: we can fix it with soft-deletes or hard-deletes
15:20:14 let's assume hard deletes aren't the answer
15:20:21 bswartz: it will remove part of the insertion logic gouthamr linked
15:20:22 what's the problem after we fix the soft delete bug?
15:20:26 if we fix the bug and maintain consistency with soft-deletes elsewhere, what's the problem?
15:20:47 or is fixing the soft delete bug extremely hard?
15:20:48 there's no problem other than some code duplication performing another query
15:20:54 it is not
15:20:55 earlier I think you said it makes ugly code?
15:21:00 tbarron: yes
15:21:04 we effectively pay the price of the time saved
15:21:12 by soft-deleting
15:21:17 the existing code isn't very pretty
15:21:29 so ugliness is a price we've already paid
15:21:54 don't assume that a soft-undelete is expensive
15:22:00 bswartz: with hard-deletes you make the code less ugly, while soft deletes make it uglier :D
15:22:03 UPDATES and SELECTS are usually very fast
15:22:09 INSERTS and DELETES are usually very slow
15:22:22 depending on the size of the table, or so SQL documentation tells me
15:23:06 gouthamr: hummm, so if we have a lot of old soft-deleted keys lying around, updates are going to be more expensive than a delete, am I right?
15:23:07 if the argument comes down to performance, then we have to benchmark the alternatives and see which is faster in practice
15:23:26 my argument is one of simple consistency with existing code
15:23:56 and I suspect the performance differences are not large in either case
15:24:09 probably a small table
15:24:31 but maybe not i guess
15:25:19 in a previous project we implemented soft deletes purely for the performance gain -- but that was a different database in a different era
15:25:27 in the patch I am working on which uses private storage, if I attempt to migrate a share and fail over and over and over until I succeed, all those attempts will be lying around in the drivers_private_storage database soft-deleted
15:25:48 It sounds like the only downside to just fixing the existing soft delete code is that there is some code ugliness or duplication
15:25:58 I suspect we can clean it up in code review
15:26:33 okay, and if we're not able to do that, we can go down the path of hard deletes?
15:26:40 we have the manila-manage command to clean soft-deleted entries though, but that has to be performed manually
15:27:01 if there is a good reason for hard deletes we'll do them
15:27:07 I'm still waiting for the good reason
15:28:02 there's a benefit to code readability from using soft deletes across the board
15:28:34 we shouldn't give that up unless we're gaining something else really big
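To make the trade-off concrete, here is a rough sketch of the soft-delete-aware upsert behavior being debated. This is not the actual manila code behind the link gouthamr posted; the column and function names (entity_uuid, key, value, deleted) are assumptions made for illustration. The point is that re-adding a key that was previously soft-deleted revives the old row in place rather than inserting a duplicate:

```python
# A minimal sketch, assuming a SQLAlchemy model with entity_uuid/key/value
# columns and an integer "deleted" flag; not manila's real implementation.
from sqlalchemy.orm import Session


def private_data_upsert(session: Session, model, entity_id, key, value):
    """Write a key/value pair, reviving a soft-deleted row if one exists."""
    query = session.query(model).filter_by(entity_uuid=entity_id, key=key)

    live = query.filter_by(deleted=0).first()
    if live:
        live.value = value              # ordinary UPDATE of a live row
        return live

    # The contested part: if a tombstone for this key exists, "undelete"
    # it and update it in place rather than inserting a duplicate row.
    tombstone = query.filter(model.deleted != 0).first()
    if tombstone:
        tombstone.deleted = 0
        tombstone.deleted_at = None
        tombstone.value = value
        return tombstone

    row = model(entity_uuid=entity_id, key=key, value=value)
    session.add(row)
    return row
```

If tombstones do pile up (ganso's repeated-migration scenario), the manual manila-manage cleanup mentioned above remains the escape hatch.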
15:29:02 okay let's move on
15:29:31 #topic Bug Czar
15:29:45 dustins: you had more to share on this topic?
15:29:52 bswartz: Yeah!
15:30:11 So I'm starting to go through the list of Manila bugs that we have open on Launchpad
15:30:17 * bswartz cringes
15:30:35 And I'm going to go through some of the ones that haven't been updated in the last cycle or so
15:30:44 (some haven't been touched in nearly three years)
15:30:57 Just to have a baseline of what needs to be fixed where
15:31:27 For things that are outstanding, I'll leave comments asking what the status is and if it's urgent, I'll find you on IRC :)
15:31:46 dustins: you had said you'd like to use part of this meeting to review bugs
15:31:57 bswartz: Is there a way that I can be notified of new bugs added to launchpad as they come in?
15:32:03 oh yeah
15:32:06 dustins is the new bug police
15:32:12 Czar!
15:32:15 * bswartz checks LP groups membership
15:32:17 king
15:32:20 ganso: I prefer Bug Sherpa
15:32:23 emperor
15:32:33 dude
15:32:39 I don't want to be a dictator, I just wanna help (and stay mostly benevolent)
15:32:41 supreme .... okay i'm not going there
15:32:51 pretty sure if you don't capitalize king and emperor he'll be offended
15:33:09 dilly dilly
15:33:12 I want to get help where it's needed and provide a gentle push when required
15:33:25 This is to help everyone
15:33:35 https://launchpad.net/~manila-bug-supervisors/+members#active
15:33:40 dustins: just added you
15:33:44 bswartz: Thanks!
15:33:59 we need to dramatically trim the membership of this group
15:34:05 I'll go through the bugs over the next several days and comment as I go along
15:34:19 If I have any questions about the status of a bug
15:34:41 Chances are we can reduce our backlog by a decent margin in the span of just a week
15:34:48 well, hopefully :D
15:35:17 bswartz: When you get the chance, I'd like to have the list of driver maintainers as well
15:35:30 dustins: sure thing
15:35:33 And I'll go ahead and codify that in the Wiki (if it's not there already)
15:36:38 And I'll set aside some time next week in the meeting to mention any critical bugs
15:36:49 dustins: did you go to https://bugs.launchpad.net/manila and click "Subscribe to bug mail"?
15:37:08 ^^ nice
15:37:37 bswartz: Just did!
15:37:45 okay excellent
15:38:13 bswartz: That's all I have for today, thanks, everyone!
15:38:34 dustins: while I have a list of driver maintainers based on historical conversations, we should be using something like driver log for the official record
15:39:08 bswartz: Driver log? As in "in the driver code"?
15:39:20 I haven't checked how many manila drivers are there, but if any are missing, we should add entries with what we know and ask maintainers to make updates there
15:39:21 dustins: https://github.com/openstack/driverlog
15:40:01 sounds good
15:40:08 dustins: this is the 5500 line json horror-show we discussed last week
15:40:22 bswartz: Oh, it's THAT
15:40:47 it's still an official record and better than some spreadsheet on bswartz's laptop
15:41:03 That's a LOT of JSON
15:41:07 I'm not sure we can get driverlog changed from json to yaml but it would be nice
15:41:44 So...uhh...I'll have a look at this and see if everything's up to date and ping folks that need to update the driver log
15:42:05 yeah that's the best path forward
15:42:25 and we can work together on filling gaps in the data
15:42:28 JSON seems like a silly format for this, but that's a discussion for another time and place
15:42:33 indeed
15:42:35 Indeed!
15:42:40 YAML > JSON
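On the triage plan itself: the "hasn't been touched in ages" list dustins describes can also be pulled programmatically rather than by paging through the web UI. The snippet below is one possible approach using launchpadlib, which is an assumption on my part (no tooling was agreed on in the meeting), with an arbitrary one-year cutoff and a made-up consumer name.

```python
# A hedged sketch: list open manila bugs whose last update is older than a
# cutoff, using launchpadlib (hypothetical choice; cutoff is arbitrary).
from datetime import datetime, timedelta, timezone

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('manila-bug-triage', 'production')
manila = lp.projects['manila']
cutoff = datetime.now(timezone.utc) - timedelta(days=365)

for task in manila.searchTasks(status=['New', 'Confirmed', 'Triaged']):
    bug = task.bug
    if bug.date_last_updated < cutoff:
        print(bug.id, task.status, bug.date_last_updated.date(), bug.title)
```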
15:42:43 #topic open discussion
15:43:03 anyone have something else for today?
15:43:45 we didn't spend any time discussing specs today but the deadline to review and merge those is coming up fast
15:44:13 thanks all
15:44:18 it's an 8-day national holiday in China...
15:44:25 #endmeeting