16:00:22 #startmeeting cinder
16:00:22 Meeting started Wed Jan 6 16:00:22 2016 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 The meeting name has been set to 'cinder'
16:00:32 o/
16:00:35 hi o/
16:00:35 hi
16:00:35 Hey everyone
16:00:36 hi
16:00:38 hi
16:00:41 o/
16:00:41 Hello
16:00:42 hey
16:00:52 hi
16:00:54 #link https://wiki.openstack.org/wiki/CinderMeetings#Weekly_Cinder_team_meeting
16:00:54 hi
16:00:57 Hello :)
16:00:59 hi
16:01:02 hello
16:01:13 hi
16:01:16 hi
16:01:17 hi
16:01:20 o/
16:01:21 mornin
16:01:23 #topic Announcements
16:01:28 Hey
16:01:35 hello
16:01:39 Happy Wednesday!
16:02:04 #info python-cinderclient-ext is an official repo now.
16:02:12 great!
16:02:14 nice
16:02:18 #link https://github.com/openstack/python-brick-cinderclient-ext
16:02:20 This is for the work e0ne has been driving for non-Nova management.
16:02:38 e0ne: Nice work there!
16:02:40 o/
16:02:52 I'll send an e-mail to openstack-dev after the meeting
16:03:00 nifty
16:03:02 Hello
16:03:05 e0ne: Great. That could be good to get awareness out there.
16:03:05 and will announce it once more at the ironic meeting
16:03:06 e0ne: good job
16:03:23 thanks everybody for the help getting it landed
16:03:31 it was good teamwork
16:03:31 e0ne: Cool!
16:03:32 #info Midcycle coming up Jan 26-29
16:03:40 #link https://etherpad.openstack.org/p/mitaka-cinder-midcycle
16:03:47 Please add your name if attending.
16:04:02 Also, if you scroll down, there is a section for proposed topics.
16:04:07 Please add anything there.
16:04:10 I should have my camera setup with me again
16:04:22 I'm sure there won't be a shortage of things to discuss, but it would be useful to have them listed.
16:04:23 and will try and do the youtube channel streaming of the meetup
16:04:32 hemna: Sweet. That will be nice.
16:04:34 hemna: it will be awesome!
16:04:37 hemna: Thanks for doing that.
16:04:41 np
16:04:45 hemna: Please, no pictures. ;-)
16:04:49 thanks!
16:05:08 jungleboyj: Hah, part of the contributor agreement is a media release.
16:05:10 jungleboyj: no pictures or no pictures with you? ;-)
16:05:12 jk
16:05:13 jungleboyj, just sit next to me and you'll be off camera :) (one of the advantages of being the camera man)
16:05:20 hemna: ;)
16:05:32 hemna: Dibs on a spot next to you.
16:05:35 hemna: ++
16:05:37 hemna: pretty elaborate way to get jungleboyj to sit next to you
16:05:43 lol
16:05:43 * jungleboyj will take the other side.
16:05:54 jungleboyj: Sit on his lap
16:05:55 ameade, lol
16:05:57 ameade: is on to me. Crap!
16:06:07 hemna: You just made yourself incredibly popular.
16:06:14 #link https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking
16:06:19 diablo_rojo, :P
16:06:21 ^^ some areas that need focus.
16:06:32 is there any recommended hotel for the last night?
16:06:37 I think we're making good progress, but sooner is better than later.
16:06:50 e0ne: I'm staying at the one listed.
16:07:17 It seems the spec-review-tracking is evolving to code-review-tracking
16:07:22 smcginnis: me too. but we have to book an additional night to get the special price
16:07:44 scottda: Yeah. I know it's a little different than what you were thinking with nova's way.
16:07:49 smcginnis: do we have a list of new drivers on review?
16:08:10 e0ne: No, I'll see if I can pull that together. Unless someone beats me to it.
16:08:27 smcginnis: No problem, just making sure reviewers know that this page points to priority code reviews, not just spec reviews
16:08:46 at least now that many specs are merged and code is up.
16:08:55 scottda: We don't really group reviews; all blueprints and specs go through normal reviews. Maybe we can talk at the midcycle and you can give any suggestions for how you see nova doing things.
16:09:16 scottda: I'm certainly willing to try something different.
16:09:43 sure, we can talk. Just nice to know there is a page that points to priority items, whether spec or code.
16:10:04 #info Bug stats: cinder - 466, cinderclient - 36, os-brick - 13
16:10:08 Just for info.
16:10:21 I checked the os-brick bugs yesterday
16:10:32 there are only 2 in 'new' status, the rest are basically in progress
16:10:33 The Cinder count really went up, but it appears half of them are now bugs for single-character comment typos.
16:10:36 :/
16:10:42 hemna: Awesome!
16:11:14 OK, that's all I've got to go over.
16:11:18 #topic Cinder Multiattach (hemna, ildikov)
16:11:25 hemna, ildikov: All yours.
16:11:37 coolio thanks
16:11:51 so ildikov has been doing a ton of work on the Nova side of things recently trying to get multiattach to land
16:12:12 she's finding a lack of help reviewing code over there, so I wanted to raise awareness and see if we can get some folks from Cinder to help do reviews
16:12:31 hemna: Do we have a list somewhere? Or a common topic name?
16:12:32 we are running up against a Nova deadline soon and I'd like to see her stuff land.
16:12:44 ildikov should have that list
16:13:10 I've been helping as much as I can, but it's a big task on the Nova side of things
16:13:30 +1
16:13:39 hemna: If we can get a list we can try to get mriedem to help out.
16:13:52 as far as Cinder goes, I have a patch up that fixes a multiattach issue for lvm, that also might benefit everyone else
16:13:55 https://review.openstack.org/#/c/255595/
16:14:17 jgriffith has given me some feedback on it, and I pushed up a new patch yesterday. If I could get others to take a look
16:14:24 especially driver maintainers
16:14:40 ildikov, ping
16:15:07 Pretty late for her I think.
16:15:07 hemna: for https://review.openstack.org/#/c/255595/ we need to handle terminate_connection as well. not all drivers use remove_export
16:15:26 hemna, I checked the code, we have some comments to handle the same for terminate_connection
16:15:31 she has been up pretty late every day hacking on this
16:15:43 ok, please provide feedback in the patch if you can
16:15:54 shyama: But should all drivers be using remove_export?
16:15:57 terminate_connection is a bit tricky
16:16:00 hemna: I added the comment, please check
16:16:08 some drivers create a new export for every attach, even for the same volume
16:16:16 so we'd need terminate_connection to get called
16:16:21 Lots of drivers don't use remove_export
16:16:27 xyang1, yup
16:16:34 I think though, we are supposed to
16:16:43 re: lvm being the reference driver.
16:16:46 sorry, here
16:16:53 the lvm driver removes the target at remove_export time
16:16:53 haha
16:16:58 Yes, if a driver doesn't use remove_export to remove the export, is that a driver bug?
16:17:17 For some there isn't a separate export and attach.
16:17:22 yah
16:17:30 and the 3PAR is one of those as well.
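
The lvm fix being discussed can be pictured as follows. This is a minimal sketch of the idea, not the actual patch at https://review.openstack.org/#/c/255595/, and the attribute names (volume.volume_attachment, attach_status) are illustrative: on detach of a multiattach volume, only tear the export down once nothing else is attached.

    def detach_volume(context, driver, volume, attachment):
        """Detach one attachment of a possibly multiattached volume."""
        driver.detach_volume(context, volume, attachment)

        # Only tear down the export when no other live attachments remain,
        # so the first detach can't sever sessions still in use.
        remaining = [a for a in volume.volume_attachment
                     if a.id != attachment.id and a.attach_status == 'attached']
        if not remaining:
            driver.remove_export(context, volume)
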
16:17:35 scottda: it should be a driver bug, IMO
16:17:37 smcginnis: review topic for the Nova patches: https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:44 hemna: how did you solve that in 3PAR?
16:17:44 we can't use remove_export, because we don't have the connection_info then
16:17:49 #link https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:51 #link https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:54 ildikov: Thanks!
16:17:54 umm... why would that be a bug?
16:17:55 hemna: yea we've got the same problem
16:17:58 scottda: from what I see most FC drivers are using initialize and terminate connection and not remove export
16:17:58 xyang1, 3PAR is one of those drivers that creates a new export for every attach.
16:18:07 so terminate_connection is called every time and it works
16:18:13 even for the same volume.
16:18:59 jgriffith, I think because we are 'supposed' to follow the reference driver and all be doing the same things in the same driver API entry points.
16:19:04 which, I think, most of us aren't.
16:19:05 no, not at all
16:19:06 smcginnis: I have johnthetubaguy, mriedem and ndipanov looking at the patches and basically two weeks to land some code
16:19:13 remove_export actually makes no sense for most
16:19:14 netapp doesn't exactly create new exports but does create rules, so it would need terminate_connection to be called as well
16:19:24 shyama, scottda: Dell driver doesn't use create_export or remove_export. Simply wasn't enough info being passed to the driver. (Now, tho, I think there is.)
16:19:28 remove_export "removes" the locally created iSCSI target on the cinder volume node
16:19:31 That's what its intent was
16:19:40 smcginnis: the plan is to block more things for the first round and then add support later just to have something that gives us a base scenario to play with
16:19:50 other backends don't have a created iSCSI target on the cinder node so it's a noop
16:19:52 Swanson, yah, we had the same problem. I 'fixed' the create_export problem by adding the connector to it.
16:19:59 ildikov: +1 Good plan
16:20:18 jgriffith: Agree - it just doesn't apply to some backends.
16:20:19 in other words it's exactly as the name would imply: "remove the target export"
16:20:35 smcginnis: I agree, we could work on it for the next round
16:20:38 sure, we do that inside of terminate_connection
16:20:41 scottda: storwize, emc drivers use terminate_connection
16:20:45 sigh
16:20:56 connection and export are NOT the same thing
16:21:18 for some of us, they are
16:21:20 smcginnis: also would be nice to talk about this with the Nova folks during the mid-cycle
16:21:22 jgriffith: +1
16:21:25 connection == an iscsi nexus/attachment/session, whatever you prefer to call it
16:21:51 export == a local device "exported" via an iscsi (or other) target mechanism
16:21:52 Right, some don't have anything to "export". They just need to attach (i.e. create that I-T nexus)
16:21:53 jgriffith, I took connection to mean connecting to a remote SAN and export to mean a locally created target.
16:22:09 Swanson: not really
16:22:22 Swanson: they're separate components
16:22:34 and this isn't a griffith/cinderism FWIW
16:22:36 :)
16:22:50 griffism
16:22:50 Sounds like we could spend some time at the midcycle getting everyone on the same page for attach vs export.
16:22:59 It's kind of a standard thing, and it's why the two parts are separated in Cinder
16:23:01 jgriffith: here by connection we need to avoid deleting the host entry if the attachment exists.
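
To make the connection-vs-export distinction above concrete, here is an illustrative skeleton (not the reference driver's actual code; the docstrings are a gloss on the discussion) of how the two concepts map onto separate driver entry points:

    class ExampleISCSIDriver(object):
        """Illustrative only; method names match the driver API discussed."""

        # "Export": a local device published via a target mechanism.
        def create_export(self, context, volume, connector):
            """Create the iSCSI target for this volume (on the cinder
            volume node, in the LVM reference driver's case)."""

        def remove_export(self, context, volume):
            """Tear that target down; effectively a no-op for backends
            that never create a local target."""

        # "Connection": an I-T nexus/session between one initiator
        # (described by the connector) and the target.
        def initialize_connection(self, volume, connector):
            """Grant this initiator access and return connection_info."""

        def terminate_connection(self, volume, connector, **kwargs):
            """Drop this initiator's access; other initiators' sessions,
            and hence other attachments, are untouched."""
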
16:23:16 smcginnis: ++
16:23:17 wasn't there an effort to document the driver APIs? was any of this covered by that?
16:23:19 smcginnis: should be added to the driver doc that eharney is working on
16:23:22 I feel like this has come up before.
16:23:26 smcginnis: +1
16:23:31 xyang1: +1
16:23:32 #link https://github.com/eharney/driver_doc/blob/master/driver.rst
16:23:34 sure, but I think in order for that to really work for some backends, we also need the connector at remove_export time.
16:23:41 jungleboyj: yeah, I feel like I write stuff and talk about this stuff a lot
16:23:43 xyang1: Yep! :)
16:23:43 for all the same reasons we needed it at create_export time.
16:23:55 * mriedem sneaks in extra late
16:23:57 we can't even find the export unless we have the connector.
16:24:07 hemna: my point was actually that you probably don't need remove_export, you're most likely using it wrong
16:24:10 * smcginnis marks mriedem tardy
16:24:21 :)
16:24:23 in our case, we don't use it at all.
16:24:26 hemna: unless on your device (and you may) you are actually saying "hey... delete this target"
16:24:33 hemna: me neither
16:24:41 hemna: until some bozo put it in an ABC class :)
16:24:42 This all came up around issues with live migration, didn't it?
16:24:48 we nuke the target at terminate_connection time.
16:24:49 heh
16:24:57 hemna, ditto. create_export is more useful.
16:25:01 jungleboyj, yep
16:25:10 jungleboyj, no, this came up due to a multiattach issue related to lvm
16:25:16 sigh
16:25:20 heh
16:25:24 I don't think this is constructive any more
16:25:34 the first detach nuked the lvm target, and other attachments on the same host vanished
16:25:37 basically folks will do what they want and interpret how they choose
16:25:38 hemna: Ok.
16:25:41 that's fair enough
16:25:44 Yeah, in the interest of time let's table this for now.
16:25:53 We're making jgriffith sigh a lot. :)
16:25:59 LOL
16:26:11 coolio
16:26:12 hey.. sighing is ok right?
16:26:14 we might run out of air :)
16:26:16 hemna, ildikov: Anything else regarding multiattach at this point?
16:26:18 Better than yelling :)
16:26:23 so provide any feedback on my cinder patch please
16:26:28 jgriffith: ;)
16:26:36 or we can take this up in the cinder channel after the meeting
16:26:40 guitarzan: LOL
16:26:44 and I'm sure I can confuse everyone even more there.
16:26:50 heeh
16:26:58 jgriffith: You don't let me sigh. ;-)
16:27:12 smcginnis: a favour from me: reviews and any hints if something looks fragile and I didn't realize :)
16:27:28 jungleboyj: it's different when you look at me and roll your eyes while I'm speaking :)
16:27:29 hemna, ildikov: Thanks. Will do ildikov.
16:27:30 ildikov, thanks again for taking up this huge Nova effort. It helps everyone in Cinder.
16:27:39 #topic os-brick local_dev (hemna)
16:27:40 thanks!
16:27:49 ildikov: Thank you!
16:27:53 hemna: Still you.
16:27:58 jgriffith: touche
16:28:02 ok, so before the break, I started taking a look at the nova and cinder lvm code
16:28:15 we have a lot of duplicate code between the 2
16:28:27 and the code that jgriffith wrote in cinder/brick/local_dev is really nice compared to the nova code
16:28:46 i put up a patch at one point that copies some of the cinder lvm code into nova
16:28:48 so I put together an os-brick patch that pulls that code in, as well as a cinder removal patch.
16:28:49 NOTE: we have the same issue between nova/cinder/glance and the RBD stuff
16:28:50 for retries and error handling
16:28:56 hemna: That duplication is the main reason I'm behind it.
16:29:22 so I just wanted to raise the issue and see if everyone thinks this is a good idea.
16:29:24 mriedem: Does it just use a small part of the local_dev code?
16:29:35 i'd have to find it
16:29:37 #link https://review.openstack.org/#/c/260739
16:29:48 https://review.openstack.org/#/c/240611/
16:29:53 #link https://review.openstack.org/#/c/260756/
16:29:57 mriedem: Or more importantly, would it be usable on the nova side if we pulled all of that into brick?
16:30:17 i'd be behind pulling the common code into os-brick and using it from there
16:30:20 mriedem, the idea was to get the cinder/brick/local_dev code moved to os-brick first
16:30:26 since cinder's was more recent and had nice things
16:30:32 then get nova to use it
16:30:39 nice things like what i ported into https://review.openstack.org/#/c/240611/
16:30:52 mriedem, there were a few missing functions in cinder's code
16:30:58 i even said "At some point the common code should live in a library like os-brick and then we can avoid the duplication."
16:30:58 mriedem: Which is a sign that we should probably have one place and not duplicate and be out of sync. :)
16:31:00 that should be easily ported into os-brick
16:31:06 mriedem: :)
16:31:17 so I put a note in hemna's review last night...
16:31:22 mriedem: To be clear, I'm not saying it was a bad thing to do when it was done. We didn't have os-brick then.
16:31:24 this was really for the udevadm settle call that cinder has
16:31:37 I think it's an ok idea; I'm not sure why "brick" vs an oslo.lvm lib
16:31:43 I'm also concerned about a few things...
16:32:02 os-brick is owned by cinder cores, oslo things are not
16:32:04 jgriffith: Well, that raises another question I guess. Should os-brick be oslo.brick?
16:32:06 1. When we discussed this a year or so ago I think there was a consensus that this was wayyy too heavy for Nova
16:32:08 mriedem, +1
16:32:13 too much object and abstraction madness
16:32:13 i don't think oslo cores should own brick
16:32:26 jgriffith: I don't like oslo.lvm because we'll need oslo.rbd later too
16:32:27 mriedem: I'm not saying anything about brick :)
16:32:35 e0ne: fair enough
16:32:38 I think the local_dev code you wrote is clean
16:32:41 mriedem: I don't either, but raising the question since it is a common library that does somewhat fall under oslo's mission.
16:32:52 e0ne, mriedem, hemna: and I'm not pushing one vs the other
16:32:55 smcginnis, the same could be said of os-brick though
16:32:58 so what's too heavy for nova?
16:32:59 honestly not a big deal to me either way
16:33:05 brick or cinder's local_dev/lvm?
16:33:09 hemna: That is what I'm saying. :)
16:33:26 I just don't know the motivations here
16:33:39 there is precedent for non-oslo shared libs, like os-win and os-vif
16:33:52 mriedem: and the point being that the lvm code is actually more along the lines of what oslo is intended for; stable, very low-freq change rate
16:33:56 anyway, I think it should live in os-brick, as it's owned by the Cinder/Storage folks that have the knowledge.
16:34:05 agree with hemna
16:34:12 hemna: +1
16:34:12 oslo libs change way more often than you'd think
16:34:13 hemna: yeah, I'm fine with that
16:34:19 hemna: +1
16:34:20 hemna: +1
16:34:32 but there's still the bigger point that I raised earlier
16:35:00 jgriffith, the 'heavy for nova' issue?
16:35:05 brick/lvm was an interesting experiment on my part with objects... I'm really not certain it was "good" or not
16:35:24 I'd love to make sure eharney is wrapped in on this
16:35:30 I think you did a great job of it actually.
16:35:32 it's clean
16:35:41 especially if you compare it to the nova lvm.py
16:35:43 eharney: I think he and mtanino are the other two that have committed the most blood, sweat, and tears with me on lvm
16:35:43 jgriffith: Yeah, what I've looked at looks good.
16:36:34 hemna, mriedem: cool, if Nova is ok with the changes and wants to move forward this time, I'm certainly good with it
16:36:42 * eharney needs to catch up on all of this
16:36:50 i guess i'm not familiar with the objects talk
16:36:53 cinder objects?
16:37:09 mriedem, https://review.openstack.org/#/c/260739/4/os_brick/local_dev/lvm.py
16:37:17 if the cinder lvm code is using objects, i'd think those would have to be ported to os-brick also
16:37:31 from what I can tell it's just that object
16:37:40 oh, so just a class rather than util methods
16:37:46 I think so, yah
16:37:59 mriedem: the code itself...
16:38:17 mriedem: at one point the lvm modules were just static helper methods (or embedded in the driver)
16:38:24 yeah, that's what nova still has
16:38:27 jgriffith, it's far cleaner than the utils code in nova imho.
16:38:29 mriedem: now it's based off of a parent VG object
16:38:50 So just some refactoring required when nova picks up the common code?
16:38:58 but the VG stuff doesn't need to come into os-brick, nor nova.
16:39:06 mriedem: so the result is you have a bit more *process* enforcement, and a bit heavier lift in terms of managing things
16:39:12 just the local_dev/lvm.py object.
16:39:13 not a huge deal
16:39:39 I'll take on the nova patch if this lands in os-brick
16:39:40 it looks like it maps (from quickly glancing)
16:39:44 hemna: there's no way not to, the way I wrote that code. Everything is off of the parent vg object, cuz that's how Cinder works :)
16:40:14 anyway... just pointing out that there were concerns from the Nova side in the past that I didn't disagree with
16:40:17 as long as nova can create the object w/o having to jump through 20 hoops, i think we'd be ok
16:40:31 mriedem: sounds good
16:40:33 cinder has that whole exec helper construct that nova doesn't
16:40:42 but hemna already crossed that bridge with the os-brick adoption in nova
16:40:43 Nova would just need to pass in the CONF entries at constructor time
16:40:44 mriedem: I'd honestly like that to go away
16:40:48 mriedem: it's trouble!
16:40:54 jgriffith: yeah i'm not crazy about it
16:40:55 which could be done with a factory or utils helper
16:41:02 makes it seem pluggable, which it probably shouldn't be
16:41:19 mriedem: indeed... and causes weird things with rootwrap and inheritance
16:41:34 obfuscation for obfuscation purposes :)
16:41:38 right, there could be other rootwrap filter/privsep implications here
16:41:40 So it sounds like there are some details to work out, but we are in general agreement?
16:41:44 mriedem: there are
16:41:47 smcginnis: i think so
16:41:49 privsep will eventually move rootwrap out
16:41:54 but it's not ready yet
16:41:54 privsep is supposed to be coming along
16:41:58 smcginnis: "some" details is an understatement IMO :)
16:42:18 angus has a WIP patch in os-brick already for a first stab at privsep
16:42:19 privsep will be ready no earlier than the N release
16:42:22 for a small piece of it.
16:42:23 jgriffith: That's cinder in general. We just have to work out some details. :)
16:42:32 smcginnis: True That!!
16:42:40 hemna: Good enough for now? Should we move on?
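
For a sense of what "pass in the CONF entries at constructor time" could look like from the Nova side once the class lives in os-brick, here is a rough sketch; the module path, constructor arguments, and create_volume signature are assumptions based on cinder/brick/local_dev/lvm.py as it stood, not a settled os-brick API:

    from os_brick.local_dev import lvm as brick_lvm

    # Values that Cinder pulls from its own CONF would be passed explicitly.
    vg = brick_lvm.LVM('nova-volumes',      # volume group to manage
                       root_helper='sudo',  # privilege escalation helper
                       create_vg=False)     # VG already exists on the host
    vg.create_volume('instance-0001-disk', '10G')
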
16:42:41 https://upload.wikimedia.org/wikipedia/en/d/dd/Gnomes_plan.png
16:42:45 i think we know the details
16:42:47 yah, I'm done.
16:42:54 so I'll take the os-brick patch out of WIP
16:43:00 I have a BP up that I posted yesterday for it.
16:43:02 mriedem: :)
16:43:12 hemna: Awesome, thanks!
16:43:16 lol
16:43:22 mriedem, heh nice.
16:43:28 #topic Cinder Code Cleanup Patches (e0ne)
16:43:34 e0ne: Your turn.
16:43:41 #link https://wiki.openstack.org/wiki/CinderCodeCleanupPatches
16:43:48 Just please don't miss phase-1 :)
16:43:56 jgriffith: LOL
16:44:29 should we follow the 'code_cleanup_batching' process now?
16:44:44 I see a lot of small cleanup/refactoring patches for cinder
16:45:21 Most I've seen didn't conflict with anything, so I've been letting those through.
16:45:27 e0ne: For any vaguely sizeable patch, I think we should follow it
16:45:33 I guess I'm not of a strong opinion.
16:46:26 DuncanT, smcginnis: what about patches like "Fix 'an' article usage"?
16:46:40 e0ne: They should die a horrible and painful death.
16:46:48 +2
16:46:52 e0ne: :)
16:46:54 :)
16:47:11 e0ne: Especially if a bug has been filed for each individual patch!
16:47:15 e0ne: for one-liners, I've just been firing them in, but maybe batching them will put people off posting them?
16:47:26 I heard that's the new trend in all openstack projects.
16:47:41 winston-d: Which?
16:47:44 ?
16:47:44 fixing 'an'
16:47:52 seriously
16:47:53 ?
16:47:59 winston-d: Ah. The great "an" scourge of 2016.
16:48:03 I think die die die is in order.
16:48:05 That's an bad idea IMO
16:48:06 easy way to boost stats
16:48:09 smcginnis: Even diablo_rojo isn't that picky. ;-)
16:48:10 personally, I don't like to review 2-3 line patches which don't fix real bugs
16:48:15 jungleboyj: Hah!
16:48:16 jungleboyj, I dunno about that.
16:48:29 e0ne, +2
16:48:35 hemna: Ouch.
16:48:42 it's a waste of time
16:48:50 e0ne: Agree.
16:48:53 e0ne: +1
16:49:01 e0ne: I don't care that much. If it is small and someone wants to try to fix it, I will hit the button. But whatever.
16:49:04 patrickeast: yup, that's most likely the motivation behind this.
16:49:18 But then on the other hand, going through this batch cleanup processing potentially adds more wasted time than just letting them through.
16:49:22 So I'm not sure what's better.
16:49:30 I don't think it would be so bad if it was all getting fixed in a single patch.
16:49:32 you can just ignore them
16:49:33 :P
16:49:39 jungleboyj: I'm kind of at that point too.
16:49:41 hemna: :)
16:49:45 we have a priority list anyway that we should be looking at.
16:49:58 if you want extra review stats padding, then go for it.
16:49:58 hemna: True.
16:49:58 hemna: ++++1
16:49:59 hemna: agree
16:50:09 hemna: Agreed.
16:50:56 let's summarize our discussion
16:51:00 I would suggest adding a BP for each such category: python3, pep8, etc.
16:51:02 So I think we're saying - if it's a significant enough cleanup patch, it's at the cores' discretion whether to batch it?
16:51:13 we won't enforce CinderCodeCleanupPatches with -2
16:51:21 I've been trying to discourage ppl from submitting such patches, but if that's what they want, whatever.
16:51:40 sheel: it will be 'in progress' forever
16:51:43 smcginnis: ++
16:51:48 e0ne: I think we still can if there's reason to hold it off.
16:52:09 e0ne: right
16:52:25 Or if we get 10 patches for the same thing in 10 different files we can push back and have them combine them.
16:52:48 smcginnis: fair enough
16:53:19 I don't know - does that make sense to everyone? Feel free to say otherwise.
16:54:00 e0ne, that's the other issue: the bugs associated with such patches will 'skew' our bug counts. :(
16:54:13 I think it makes sense that we look to have people collapse patches if they are ridiculous.
16:54:26 hemna: That I'm definitely against, and I think we should actively push back on non-bug bugs.
16:54:27 for that reason alone, I think we should -2 individual patches, and make them do a single bug/patch to fix em.
16:54:39 If we have some clean-up patches that are large and are going to cause rebase issues, put it into the code clean-up queue.
16:54:41 useless churn!
16:54:51 jungleboyj: +1
16:55:13 hemna: Close the bug with 'not a bug' or something
16:55:16 Definitely think we need to discourage bugs for simple things.
16:55:18 smcginnis: I prefer they're batched, or just flat out go away :)
16:55:28 jungleboyj: it's just an (# TODO(e0ne): fix article in the following patch) rebase hell
16:55:29 jgriffith, +1
16:55:29 e0ne: at least for pep8 issues we can raise one BP and fix all in one go to avoid multiple patches
16:55:41 e0ne, :)
16:55:42 I've watched some of those "cleanup" patches that just flip-flop things 3 times over the last 4 years
16:55:46 jgriffith: So keep the policy we should have been following?
16:55:58 jgriffith: +1
16:56:24 smcginnis: maybe... but honestly sometimes it's just easier to not fight it
16:56:31 jgriffith: Very true
16:56:41 smcginnis: but, that being said... if we clearly agree and document what we expect... well then that's easy
16:57:05 the problem is we've never done that before, so it's kind of an individual thing
16:57:07 3 mins reminder
16:57:48 OK, how about we try to get back to enforcing CinderCodeCleanupPatches. But it can still be at a core's discretion to push something through if it seems like it's going to be easier not to go through the hoops?
16:58:07 smcginnis: +1
16:58:11 smcginnis: More work on the cores, but that is probably best.
16:58:37 Isn't really worth fighting over.
16:58:37 OK, makes sense then I guess.
16:58:37 smcginnis: works for me
16:58:47 smcginnis: that's fine
16:58:51 sounds good
16:58:53 coo
16:58:54 smcginnis: LOL... "I guess"
16:58:59 the disclaimer there :)
16:59:03 yhugik
16:59:03 Alright, I'll try to remember. :)
16:59:09 sorry! cat on keyboard
16:59:21 OK, time's up.
16:59:28 Don't forget the midcycle planning.
16:59:32 Thanks everyone!
16:59:35 toodles
16:59:39 ntpttr: Oh, I miss those days.
16:59:43 #endmeeting