16:00:22 <smcginnis> #startmeeting cinder
16:00:22 <openstack> Meeting started Wed Jan  6 16:00:22 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 <openstack> The meeting name has been set to 'cinder'
16:00:32 <ameade> o/
16:00:35 <ntpttr> hi o/
16:00:35 <tbarron> hi
16:00:35 <smcginnis> Hey everyone
16:00:36 <xyang1> hi
16:00:38 <patrickeast> hi
16:00:41 <akerr> o/
16:00:41 <jgregor> Hello
16:00:42 <scottda> hey
16:00:52 <Yogi1> hi
16:00:54 <smcginnis> #link https://wiki.openstack.org/wiki/CinderMeetings#Weekly_Cinder_team_meeting
16:00:54 <jseiler> hi
16:00:57 <diablo_rojo> Hello :)
16:00:59 <Guest56749> hi
16:01:02 <yhayashi> hello
16:01:13 <merooney> hi
16:01:16 <adrianofr> hi
16:01:17 <e0ne> hi
16:01:20 <ildikov> o/
16:01:21 <hemna> mornin
16:01:23 <smcginnis> #topic Announcements
16:01:28 <DuncanT> Hey
16:01:35 <jixuepeng> hello
16:01:39 <jungleboyj> Happy Wednesday!
16:02:04 <smcginnis> #info python-brick-cinderclient-ext is an official repo now.
16:02:12 <e0ne> great!
16:02:14 <hemna> nice
16:02:18 <e0ne> #link  https://github.com/openstack/python-brick-cinderclient-ext
16:02:20 <smcginnis> This is for the work e0ne has been driving for non-Nova management.
16:02:38 <smcginnis> e0ne: Nice work there!
16:02:40 <thingee> o/
16:02:52 <e0ne> I'll send an e-mail to openstack-dev after the meeting
16:03:00 <jgriffith> nifty
16:03:02 <Swanson> Hello
16:03:05 <smcginnis> e0ne: Great. That could be good to get awareness out there.
16:03:05 <e0ne> and will announce it once more at the ironic meeting
16:03:06 <jgriffith> e0ne: good job
16:03:23 <e0ne> thanks everybody for the help in getting it landed
16:03:31 <e0ne> it was good teamwork
16:03:31 <jungleboyj> e0ne: Cool!
16:03:32 <smcginnis> #info Midcycle coming up Jan 26-29
16:03:40 <smcginnis> #link https://etherpad.openstack.org/p/mitaka-cinder-midcycle
16:03:47 <smcginnis> Please add your name if attending.
16:04:02 <smcginnis> Also, if you scroll down, there is a section for proposed topics.
16:04:07 <smcginnis> Please add anything there.
16:04:10 <hemna> I should have my camera setup with me again
16:04:22 <smcginnis> I'm sure there won't be a shortage of things to discuss, but it would be useful to have them listed.
16:04:23 <hemna> and will try to do the YouTube channel streaming of the meetup
16:04:32 <smcginnis> hemna: Sweet. That will be nice.
16:04:34 <e0ne> hemna: it will be awesome!
16:04:37 <smcginnis> hemna: Thanks for doing that.
16:04:41 <hemna> np
16:04:45 <jungleboyj> hemna: Please, no pictures.  ;-)
16:04:49 <ntpttr> thanks!
16:05:08 <smcginnis> jungleboyj: Hah, part of the contributor agreement is a media release.
16:05:10 <e0ne> jungleboyj: no pictures or no pictures with you? ;-)
16:05:12 <smcginnis> jk
16:05:13 <hemna> jungleboyj, just sit next to me and you'll be off camera :)  (one of the advantages of being the camera man)
16:05:20 <smcginnis> hemna: ;)
16:05:32 <diablo_rojo> hemna: Dibs on a spot next to you.
16:05:35 <jungleboyj> hemna: ++
16:05:37 <ameade> hemna: pretty elaborate way to get jungleboyj to sit next to you
16:05:43 <smcginnis> lol
16:05:43 * jungleboyj will take the other side.
16:05:54 <DuncanT> jungleboyj: Sit on his lap
16:05:55 <hemna> ameade, lol
16:05:57 <jungleboyj> ameade: is on to me.  Crap!
16:06:07 <diablo_rojo> hemna: You just made yourself incredibly popular.
16:06:14 <smcginnis> #link https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking
16:06:19 <hemna> diablo_rojo, :P
16:06:21 <smcginnis> ^^ some areas that need focus.
16:06:32 <e0ne> is there any recommended hotel for the last night?
16:06:37 <smcginnis> I think we're making good progress, but sooner is better than later.
16:06:50 <smcginnis> e0ne: I'm staying at the one listed.
16:07:17 <scottda> It seems the spec-review-tracking is evolving to code-review-tracking
16:07:22 <e0ne> smcginnis: me too, but we have to book an additional night to get the special price
16:07:44 <smcginnis> scottda: Yeah. I know it's a little different than what you were thinking with nova's way.
16:07:49 <e0ne> smcginnis: do we have a list of new drivers under review?
16:08:10 <smcginnis> e0ne: No, I'll see if I can pull that together. Unless someone beats me to it.
16:08:27 <scottda> smcginnis: No problem, just making sure reviewers know that this page points to priority code reviews, not just spec reviews
16:08:46 <scottda> at least now that many specs are merged and code is up.
16:08:55 <smcginnis> scottda: We don't really do group reviews; all blueprints and specs go through normal reviews. Maybe we can talk at the midcycle and you can give any suggestions for how you see nova doing things.
16:09:16 <smcginnis> scottda: I'm certainly willing to try something different.
16:09:43 <scottda> sure, we can talk. Just nice to know there is a page that points to priority items, whether spec or code.
16:10:04 <smcginnis> #info Bugstats: cinder - 466, cinderclient - 36, os-brick - 13
16:10:08 <smcginnis> Just for info.
16:10:21 <hemna> I checked the os-brick bugs yesterday
16:10:32 <hemna> there are only 2 in 'new' status, the rest are in progress basically
16:10:33 <smcginnis> Cinder count really went up, but it appears half of them are now bugs for single character comment typos.
16:10:36 <smcginnis> :/
16:10:42 <smcginnis> hemna: Awesome!
16:11:14 <smcginnis> OK, that's all I've got to go over.
16:11:18 <smcginnis> #topic Cinder Multiattach (hemna, ildikov)
16:11:25 <smcginnis> hemna, ildikov: All yours.
16:11:37 <hemna> coolio thanks
16:11:51 <hemna> so ildikov has been doing a ton of work on the Nova side of things recently trying to get multiattach to land
16:12:12 <hemna> she's finding a lack of help reviewing code over there, so I wanted to raise awareness and see if we can get some folks from Cinder to help with reviews
16:12:31 <smcginnis> hemna: Do we have a list somewhere? Or a common topic name?
16:12:32 <hemna> we are running up against a Nova deadline soon and I'd like to see her stuff land.
16:12:44 <hemna> ildikov should have that list
16:13:10 <hemna> I've been helping as much as I can, but it's a big task on the Nova side of things
16:13:30 <smcginnis> +1
16:13:39 <jungleboyj> hemna: If we can get a list we can try to get mriedem to help out.
16:13:52 <hemna> as far as Cinder goes, I have a patch up that fixes a multiattach issue for lvm, which might also benefit everyone else
16:13:55 <hemna> https://review.openstack.org/#/c/255595/
16:14:17 <hemna> jgriffith has given me some feedback on it, and I pushed up a new patch yesterday. If I could get others to take a look
16:14:24 <hemna> especially driver maintainers
16:14:40 <hemna> ildikov, ping
16:15:07 <smcginnis> Pretty late for her I think.
16:15:07 <shyama> hemna: for https://review.openstack.org/#/c/255595/ we need to handle terminate_connection as well. not all drivers use remove_export
16:15:26 <chhavi> hemna, I checked the code; we have some comments about handling the same for terminate_connection
16:15:31 <hemna> she has been up pretty late every day hacking on this
16:15:43 <hemna> ok, please provide feedback in the patch if you can
16:15:54 <scottda> shyama: But should all drivers be using remove_export?
16:15:57 <hemna> terminate connection is a bit tricky
16:16:00 <chhavi> hemna: I added the comment please check
16:16:08 <hemna> some drivers create a new export for every attach, even for the same volume
16:16:16 <hemna> so we'd need terminate_connection to get called
16:16:21 <xyang1> Lots of drivers don't use remove_export
16:16:27 <hemna> xyang1, yup
16:16:34 <hemna> I think though, we are supposed to
16:16:43 <hemna> re: lvm being the reference driver.
16:16:46 <ildikov> sorry, here
16:16:53 <hemna> the lvm driver removes the target at remove_export time
16:16:53 <jgriffith> haha
16:16:58 <scottda> Yes, if a driver doesn't use remove_export to remove the export, is that a driver bug?
16:17:17 <smcginnis> For some there isn't a separate export and attach.
16:17:22 <hemna> yah
16:17:30 <hemna> and the 3PAR is one of those as well.
16:17:35 <e0ne> scottda: it should be a driver bug, IMO
16:17:37 <ildikov> smcginnis: review topic for the Nova patches: https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:44 <xyang1> hemna: how did you solve that in 3PAR
16:17:44 <hemna> we can't use remove_export, because we don't have the connection_info then
16:17:49 <e0ne> #link https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:51 <smcginnis> #link https://review.openstack.org/#/q/topic:bp/volume-multi-attach
16:17:54 <smcginnis> ildikov: Thanks!
16:17:54 <jgriffith> umm... why would that be a bug?
16:17:55 <patrickeast> hemna: yea we've got the same problem
16:17:58 <shyama> scottda: from what I see most FC drivers are using initialize_connection and terminate_connection and not remove_export
16:17:58 <hemna> xyang1, 3PAR is one of those drivers that creates a new export for every attach.
16:18:07 <hemna> so terminate_connection is called every time and it works
16:18:13 <hemna> even for the same volume.
16:18:59 <hemna> jgriffith, I think because we are 'supposed' to follow the reference driver and all be doing the same things in the same driver API entry points.
16:19:04 <hemna> which, I think most of us aren't.
16:19:05 <jgriffith> no, not at all
16:19:06 <ildikov> smcginnis: I have johnthetubaguy, mriedem and ndipanov looking at the patches and basically two weeks to land some code
16:19:13 <jgriffith> remove export actually makes no sense for most
16:19:14 <bswartz> netapp doesn't exactly create new exports but does create rules so would need terminate_connection to be called as well
16:19:24 <Swanson> shyama, scottda: Dell driver doesn't use create_export or remove_export.  Simply wasn't enough info being passed to the driver.  (Now, tho, I think there is.)
16:19:28 <jgriffith> remove_export "removes" the locally created iscsi Target on the cinder volume node
16:19:31 <jgriffith> That's what it's intent was
16:19:40 <ildikov> smcginnis: the plan is to block more things for the first round and then add support later just to have something that gives us a base scenario to play with
16:19:50 <jgriffith> other backends don't have a created iscsi target on the cinder node so it's a noop
16:19:52 <hemna> Swanson, yah, we had the same problem.  I 'fixed' the create_export problem by adding the connector to it.
16:19:59 <smcginnis> ildikov: +1 Good plan
16:20:18 <smcginnis> jgriffith: Agree - it just doesn't apply to some backends.
16:20:19 <jgriffith> in other words it's exactly as the name would imply "remove the target export"
16:20:35 <shyama> smcginnis: i agree, we could work on it for the next round
16:20:38 <hemna> sure, we do that inside of terminate_connection
16:20:41 <chhavi> scottda: the storwize and emc drivers use terminate_connection
16:20:45 <jgriffith> sigh
16:20:56 <jgriffith> connection and export are NOT the same thing
16:21:18 <hemna> for some of us, they are
16:21:20 <ildikov> smcginnis: also would be nice to talk about this with the Nova folks during the mid-cycle
16:21:22 <patrickeast> jgriffith: +1
16:21:25 <jgriffith> connection == an iscsi nexus/attachment/session whatever you prefer to call it
16:21:51 <jgriffith> export == a local device "exported" via an iscsi (or other) target mechanism
16:21:52 <smcginnis> Right, some don't have anything to "export". They just need to attach (i.e. create that I-T nexus)
16:21:53 <Swanson> jgriffith, I took connection to mean connecting to a remote san and export to mean a locally created target.
16:22:09 <jgriffith> Swanson: not really
16:22:22 <jgriffith> Swanson: they're separate components
16:22:34 <jgriffith> and this isn't a griffith/cinderism FWIW
16:22:36 <jgriffith> :)
16:22:50 <Swanson> griffism
16:22:50 <smcginnis> Sounds like we could spend some time at the midcycle getting everyone on the same page for attach vs export.
16:22:59 <jgriffith> It's kind of a standard thing, and it's why the two parts are separated in Cinder
16:23:01 <chhavi> jgriffith: here, for the connection, we need to avoid deleting the host entry if an attachment still exists.
16:23:16 <jungleboyj> smcginnis: ++
16:23:17 <patrickeast> wasn't there an effort to document the driver APIs? was any of this covered by that?
16:23:19 <xyang1> smcginnis: should be added to the driver doc that eharney is working on
16:23:22 <jungleboyj> I feel like this has come up before.
16:23:26 <e0ne> smcginnis: +1
16:23:31 <jungleboyj> xyang1: +1
16:23:32 <smcginnis> #link https://github.com/eharney/driver_doc/blob/master/driver.rst
16:23:34 <hemna> sure, but I think in order for that to really work for some backends, we also need the connector at remove_export time.
16:23:41 <jgriffith> jungleboyj: yeah, I feel like I write stuff and talk about this stuff a lot
16:23:43 <smcginnis> xyang1: Yep! :)
16:23:43 <hemna> for all the same reasons we needed it at create_export time.
16:23:55 * mriedem sneaks in extra late
16:23:57 <hemna> we can't even find the export unless we have the connector.
16:24:07 <jgriffith> hemna: my point was actually that you probably don't need remove_export, you're most likely using it wrong
16:24:10 * smcginnis marks mriedem tardy
16:24:21 <smcginnis> :)
16:24:23 <hemna> in our case, we don't use it at all.
16:24:26 <jgriffith> hemna: unless on your device (and you may) you are actually saying "hey... delete this target"
16:24:33 <jgriffith> hemna: me neither
16:24:41 <jgriffith> hemna: until some bozo put it in an ABC class :)
16:24:42 <jungleboyj> This all came up around issues with live migration, didn't it?
16:24:48 <hemna> we nuke the target at terminate_connection time.
16:24:49 <hemna> heh
16:24:57 <Swanson> hemna, ditto.  create_export is more useful.
16:25:01 <Swanson> jungleboyj, yep
16:25:10 <hemna> jungleboyj, no, this came up due to a multiattach issue related to lvm
16:25:16 <jgriffith> sigh
16:25:20 <hemna> heh
16:25:24 <jgriffith> I don't think this is constructive any more
16:25:34 <hemna> the first detach nuked the lvm target, and other attachments on the same host vanished
16:25:37 <jgriffith> basically folks will do what they want and interpret how they choose
16:25:38 <jungleboyj> hemna: Ok.
16:25:41 <jgriffith> that's fair enough
16:25:44 <smcginnis> Yeah, in the interest of time let's table this for now.
16:25:53 <smcginnis> We're making jgriffith sigh a lot. :)
16:25:59 <jgriffith> LOL
16:26:11 <hemna> coolio
16:26:12 <jgriffith> hey.. sighing is ok right?
16:26:14 <guitarzan> we might run out of air :)
16:26:16 <smcginnis> hemna, ildikov: Anything else regarding multiattach at this point?
16:26:18 <jgriffith> Better than yelling :)
16:26:23 <hemna> so provide any feedback on my cinder patch please
16:26:28 <smcginnis> jgriffith: ;)
16:26:36 <hemna> or we can take this up in cinder channel after the meeting
16:26:40 <jgriffith> guitarzan: LOL
16:26:44 <hemna> and I'm sure I can confuse everyone even more there.
16:26:50 <smcginnis> heeh
16:26:58 <jungleboyj> jgriffith: You don't let me sigh.  ;-)
16:27:12 <ildikov> smcginnis: a favour from me: reviews, and any hints if something looks fragile that I didn't realize :)
16:27:28 <jgriffith> jungleboyj: it's different when you look at me and roll your eyes while I'm speaking :)
16:27:29 <smcginnis> hemna, ildikov: Thanks. Will do ildikov.
16:27:30 <hemna> ildikov, thanks again for taking up this huge Nova effort.  It helps everyone in Cinder.
16:27:39 <smcginnis> #topic os-brick local_dev (hemna)
16:27:40 <ildikov> thanks!
16:27:49 <jungleboyj> ildikov: Thank you!
16:27:53 <smcginnis> hemna: Still you.
16:27:58 <jungleboyj> jgriffith: touche
16:28:02 <hemna> ok, so before the break, I started taking a look at the nova and cinder lvm code
16:28:15 <hemna> we have a lot of duplicate code between the 2
16:28:27 <hemna> and the code that jgriffith wrote in cinder/brick/local_dev is really nice compared to the nova code
16:28:46 <mriedem> i put up a patch at one point that copies some of the cinder lvm code into nova
16:28:48 <hemna> so I put together an os-brick patch that pulls that code as well as a cinder removal patch.
16:28:49 <e0ne> NOTE: we have the same issue between nova/cinder/glance and RBD stuff
16:28:50 <mriedem> for retries and error handling
16:28:56 <smcginnis> hemna: That duplication is the main reason I'm behind it.
16:29:22 <hemna> so I just wanted to raise the issue and see if everyone thinks this is a good idea.
16:29:24 <smcginnis> mriedem: Does it just use a small part of the local_dev code?
16:29:35 <mriedem> i'd have to find it
16:29:37 <hemna> #link https://review.openstack.org/#/c/260739
16:29:48 <mriedem> https://review.openstack.org/#/c/240611/
16:29:53 <hemna> #link https://review.openstack.org/#/c/260756/
16:29:57 <smcginnis> mriedem: Or more importantly, would it be usable on the nova side if we pulled all of that into brick?
16:30:17 <mriedem> i'd be behind pulling the common code into os-brick and use from there
16:30:20 <hemna> mriedem, the idea was to get the cinder/brick/local_dev code moved to os-brick first
16:30:26 <mriedem> since cinder's was more recent and had nice things
16:30:32 <hemna> then get nova to use it
16:30:39 <mriedem> nice things like what i ported into https://review.openstack.org/#/c/240611/
16:30:52 <hemna> mriedem, there were a few missing functions in cinder's code
16:30:58 <mriedem> i even said "At some point the common code should live in a library like os-brick and then we can avoid the duplication."
16:30:58 <smcginnis> mriedem: Which is a sign that we should probably have one place and not duplicate and be out of sync. :)
16:31:00 <hemna> that should be easily ported into os-brick
16:31:06 <jgriffith> mriedem: :)
16:31:17 <jgriffith> so I put a note in hemna 's review last night...
16:31:22 <smcginnis> mriedem: To be clear, I'm not saying it was a bad thing to do when it was done. We didn't have os-brick then.
16:31:24 <mriedem> this was really for the udevadm settle call that cinder has
16:31:37 <jgriffith> I think it's an ok idea; I'm not sure why "brick" vs an oslo.lvm lib
16:31:43 <jgriffith> I'm also concerned about a few things...
16:32:02 <mriedem> os-brick is owned by cinder cores, oslo things are not
16:32:04 <smcginnis> jgriffith: Well, that raises another question I guess. Should os-brick be oslo.brick?
16:32:06 <jgriffith> 1. When we discussed this a year or so ago I think there was a consensus that this was wayyy too heavy for Nova
16:32:08 <hemna> mriedem, +1
16:32:13 <jgriffith> too much object and abstraction madness
16:32:13 <mriedem> i don't think oslo cores should own brick
16:32:26 <e0ne> jgriffith: I don't like oslo.lvm because we'll need oslo.rbd later too
16:32:27 <jgriffith> mriedem: I'm not saying anything about brick :)
16:32:35 <jgriffith> e0ne: fair enough
16:32:38 <hemna> I think the local_dev code you wrote is clean
16:32:41 <smcginnis> mriedem: I don't either, but raising the question since it is a common library that does somewhat fall under oslo's mission.
16:32:52 <jgriffith> e0ne: mriedem hemna and I'm not pushing one vs the other
16:32:55 <hemna> smcginnis, the same could be said of os-brick though
16:32:58 <mriedem> so what's too heavy for nova?
16:32:59 <jgriffith> honestly not a big deal to me either way
16:33:05 <mriedem> brick or cinder's local_dev/lvm?
16:33:09 <smcginnis> hemna: That is what I'm saying. :)
16:33:26 <jgriffith> I just don't know the motivations here
16:33:39 <mriedem> there is precedent for non-oslo shared libs, like os-win and os-vif
16:33:52 <jgriffith> mriedem: and the point being that the lvm code is actually more along the lines of what oslo is intended for: stable, very low-frequency change rate
16:33:56 <hemna> anyway, I think it should live in os-brick, as it's owned by the Cinder/Storage folks that have the knowledge.
16:34:05 <mriedem> agree with hemna
16:34:12 <mtanino> hemna: +1
16:34:12 <mriedem> oslo libs change way more often than you'd think
16:34:13 <jgriffith> hemna: yeah, I'm fine with that
16:34:19 <smcginnis> hemna: +1
16:34:20 <e0ne> hemna: +1
16:34:32 <jgriffith> but there's still the bigger point that I raised earlier
16:35:00 <hemna> jgriffith, the 'heavy for nova' issue ?
16:35:05 <jgriffith> brick/lvm was an interesting experiment on my part with objects... I'm really not certain whether it was "good" or not
16:35:24 <jgriffith> I'd love to make sure eharney is looped in on this
16:35:30 <hemna> I think you did a great job of it actually.
16:35:32 <hemna> it's clean
16:35:41 <hemna> especially if you compare it to the nova lvm.py
16:35:43 <jgriffith> eharney: I think he and mtanino are the other two that have committed the most blood, sweat, and tears with me on lvm
16:35:43 <smcginnis> jgriffith: Yeah, what I've looked at looks good.
16:36:34 <jgriffith> hemna: mriedem cool, if Nova is ok with the changes and want to move forward this time I'm certainly good with it
16:36:42 * eharney needs to catch up on all of this
16:36:50 <mriedem> i guess i'm not familiar with the objects talk
16:36:53 <mriedem> cinder objects?
16:37:09 <hemna> mriedem, https://review.openstack.org/#/c/260739/4/os_brick/local_dev/lvm.py
16:37:17 <mriedem> if the cinder lvm code is using objects, i'd think those would have to be ported to os-brick also
16:37:31 <hemna> from what I can tell it's just that object
16:37:40 <mriedem> oh so just a class rather than util methods
16:37:46 <hemna> I think so yah
16:37:59 <jgriffith> mriedem: the code itself...
16:38:17 <jgriffith> mriedem: at one point the lvm modules were just static helper methods (or embedded in the driver)
16:38:24 <mriedem> yeah, that's what nova still has
16:38:27 <hemna> jgriffith, it's far cleaner than the utils code in nova imho.
16:38:29 <jgriffith> mriedem: now it's based off of a parent VG object
16:38:50 <smcginnis> So just some refactoring required when nova picks up the common code?
16:38:58 <hemna> but the VG stuff doesn't need to come into os-brick, nor nova.
16:39:06 <jgriffith> mriedem: so the result is you have a bit more *process* enforcement, and a bit heavier lift in terms of managing things
16:39:12 <hemna> just the local_dev/lvm.py object.
16:39:13 <jgriffith> not a huge deal
16:39:39 <hemna> I'll take on the nova patch if this lands in os-brick
16:39:40 <mriedem> it looks like it maps (from quickly glancing)
16:39:44 <jgriffith> hemna: there's no way not to, the way I wrote that code. Everything is off of the parent vg object, cuz that's how Cinder works :)
16:40:14 <jgriffith> anyway... just pointing out that there were concerns from the Nova side in the past that I didn't disagree with
16:40:17 <mriedem> as long as nova can create the object w/o having to jump through 20 hoops, i think we'd be ok
16:40:31 <jgriffith> mriedem: sounds good
16:40:33 <mriedem> cinder has that whole exec helper construct that nova doesn't
16:40:42 <mriedem> but hemna already crossed that bridge with the os-brick adoption in nova
16:40:43 <hemna> Nova would just need to pass in the CONF entries at constructor time
16:40:44 <jgriffith> mriedem: I'd honestly like that to go away
16:40:48 <jgriffith> mriedem: it's trouble!
16:40:54 <mriedem> jgriffith: yeah i'm not crazy about it
16:40:55 <hemna> which could be done with a factory or utils helper
16:41:02 <mriedem> makes it seem pluggable which it probably shouldn't be
16:41:19 <jgriffith> mriedem: indeed... and causes weird things with rootwrap and inheritance
16:41:34 <jgriffith> obfuscation for obfuscation purposes :)
16:41:38 <mriedem> right, there could be other rootwrap filter/privsep implications here
16:41:40 <smcginnis> So it sounds like there are some details to work out, but we are in general agreement?
16:41:44 <jgriffith> mriedem: there are
16:41:47 <mriedem> smcginnis: i think so
16:41:49 <hemna> privsep will eventually move rootwrap out
16:41:54 <hemna> but it's not ready yet
16:41:54 <mriedem> privsep is supposed to be coming along
16:41:58 <jgriffith> smcginnis: "some" details is an understatement IMO :)
16:42:18 <hemna> angus has a WIP patch in os-brick already for a first stab at privsep
16:42:19 <e0ne> privsep will be ready no earlier than the N release
16:42:22 <hemna> for a small piece of it.
16:42:23 <smcginnis> jgriffith: That's cinder in general. We just have to work out some details. :)
16:42:32 <jgriffith> smcginnis: True That!!
16:42:40 <smcginnis> hemna: Good enough for now? Should we move on?
16:42:41 <mriedem> https://upload.wikimedia.org/wikipedia/en/d/dd/Gnomes_plan.png
16:42:45 <mriedem> i think we know the details
16:42:47 <hemna> yah I'm done.
16:42:54 <hemna> so I'll take the os-brick patch out of WIP
16:43:00 <hemna> I have a BP up that I posted yesterday for it.
16:43:02 <smcginnis> mriedem: :)
16:43:12 <smcginnis> hemna: Awesome, thanks!
16:43:16 <jgriffith> lol
16:43:22 <hemna> mriedem, heh nice.
16:43:28 <smcginnis> #topic Cinder Code Cleanup Patches (e0ne)
16:43:34 <smcginnis> e0ne: Your turn.
16:43:41 <e0ne> #link https://wiki.openstack.org/wiki/CinderCodeCleanupPatches
16:43:48 <jgriffith> Just please don't miss phase-1 :)
16:43:56 <smcginnis> jgriffith: LOL
16:44:29 <e0ne> should we follow the 'code_cleanup_batching' process now?
16:44:44 <e0ne> I see a lot of small cleanup/refactoring patches for cinder
16:45:21 <smcginnis> Most I've seen didn't conflict with anything, so I've been letting those through.
16:45:27 <DuncanT> e0ne: For any vaguely sizeable patch, I think we should follow it
16:45:33 <smcginnis> I guess I'm not of a strong opinion.
16:46:26 <e0ne> DuncanT, smcginnis: what about patches like "Fix 'an' article usage"?
16:46:40 <smcginnis> e0ne: They should die a horrible and painful death.
16:46:48 <e0ne> +2
16:46:52 <smcginnis> e0ne: :)
16:46:54 <sheel> :)
16:47:11 <smcginnis> e0ne: Especially if a bug has been filed for each individual patch!
16:47:15 <DuncanT> e0ne: for one liners, I've just been firing them in, but maybe batching them will put people off posting them?
16:47:26 <winston-d> I heard that's the new trend in all openstack projects.
16:47:41 <smcginnis> winston-d: Which?
16:47:44 <hemna> ?
16:47:44 <winston-d> fixing 'an'
16:47:52 <hemna> seriously
16:47:53 <hemna> ?
16:47:59 <smcginnis> winston-d: Ah. The great "an" scourge of 2016.
16:48:03 <hemna> I think die die die is in order.
16:48:05 <scottda> That's an bad idea IMO
16:48:06 <patrickeast> easy way to boost stats
16:48:09 <jungleboyj> smcginnis: Even diablo_rojo isn't that picky.  ;-)
16:48:10 <e0ne> personally, I don't like to review 2-3 line patches which don't fix real bugs
16:48:15 <smcginnis> jungleboyj: Hah!
16:48:16 <hemna> jungleboyj, I dunno about that.
16:48:29 <hemna> e0ne, +2
16:48:35 <diablo_rojo> hemna: Ouch.
16:48:42 <e0ne> it's a waste of time
16:48:50 <smcginnis> e0ne: Agree.
16:48:53 <patrickeast> e0ne: +1
16:49:01 <jungleboyj> e0ne: I don't care that much.  If it is small and someone wants to try to fix it, I will hit the button.  But whatever.
16:49:04 <winston-d> patrickeast: yup, that's most likely the motivation behind this.
16:49:18 <smcginnis> But then on the other hand, if we go through this batch cleanup process, that potentially adds more wasted time compared to just letting them through.
16:49:22 <smcginnis> So I'm not sure what's better.
16:49:30 <diablo_rojo> I don't think it would be so bad if it was all getting fixed in a single patch.
16:49:32 <hemna> you can just ignore them
16:49:33 <hemna> :P
16:49:39 <smcginnis> jungleboyj: I'm kind of at that point too.
16:49:41 <e0ne> hemna: :)
16:49:45 <hemna> we have a priority list anyway that we should be looking at.
16:49:58 <hemna> if you want extra review stats padding, then go for it.
16:49:58 <jungleboyj> hemna: True.
16:49:58 <smcginnis> hemna: ++++1
16:49:59 <e0ne> hemna: agree
16:50:09 <jungleboyj> hemna: Agreed.
16:50:56 <e0ne> let's summarize our discussion
16:51:00 <sheel> I would suggest adding a BP for each such category: python3, pep8, etc.
16:51:02 <smcginnis> So I think we're saying - if it's a significant enough cleanup patch, it's at the core's discretion whether to batch it?
16:51:13 <e0ne> we won't enforce CinderCodeCleanupPatches with -2
16:51:21 <winston-d> I've been trying to discourage ppl from submitting such patches, but if that's what they want, whatever.
16:51:40 <e0ne> sheel: it will be 'in progress' forever
16:51:43 <jungleboyj> smcginnis: ++
16:51:48 <smcginnis> e0ne: I think we still can if there's reason to hold it off.
16:52:09 <sheel> e0ne:right
16:52:25 <smcginnis> Or if we get 10 patches for the same thing in 10 different files we can push back and have them combine them.
16:52:48 <e0ne> smcginnis: fair enough
16:53:19 <smcginnis> I don't know - does that make sense to everyone? Feel free to say otherwise.
16:54:00 <hemna> e0ne, that's the other issue: the bugs associated with such patches will 'skew' our bug counts.  :(
16:54:13 <jungleboyj> I think it makes sense that we look to have people collapse patches if they are ridiculous.
16:54:26 <smcginnis> hemna: That I'm definitely against and think we should actively push back on non-bug bugs.
16:54:27 <hemna> for that reason alone, I think we should -2 individual patches, and make them do a single bug/patch to fix em.
16:54:39 <jungleboyj> If we have some clean-up patches that are large and are going to cause rebase issues, put them into the code clean-up queue.
16:54:41 <hemna> useless churn!
16:54:51 <smcginnis> jungleboyj: +1
16:55:13 <DuncanT> hemna: Close the bug with 'not a bug' or something
16:55:16 <jungleboyj> Definitely think we need to discourage bugs for simple things.
16:55:18 <jgriffith> smcginnis: I prefer they're batched, or just flat out go away :)
16:55:28 <e0ne> jungleboyj: it's just an(# TODO(e0ne): fix article in the following patch) rebase hell
16:55:29 <hemna> jgriffith, +1
16:55:29 <sheel> e0ne: at least for pep8 issues we can raise one BP and fix all in one go to avoid multiple patches
16:55:41 <hemna> e0ne, :)
16:55:42 <jgriffith> I've watched some of those "cleanup" patches that just flip-flop things 3 times over the last 4 years
16:55:46 <smcginnis> jgriffith: So keep the policy we should have been following?
16:55:58 <e0ne> jgriffith: +1
16:56:24 <jgriffith> smcginnis: maybe... but honestly sometimes it's just easier to not fight it
16:56:31 <smcginnis> jgriffith: Very true
16:56:41 <jgriffith> smcginnis: but, that being said... if we clearly agree and document what we expect... well then that's easy
16:57:05 <jgriffith> the problem is we've never done that before, so it's kind of an individual thing
16:57:07 <e0ne> 3 mins reminder
16:57:48 <smcginnis> OK, how about we try to get back to enforcing CinderCodeCleanupPatches. But it can still be at a core's discretion to push something through if it seems like it's going to be easier not to go through the hoops?
16:58:07 <winston-d> smcginnis: +1
16:58:11 <jungleboyj> smcginnis: More work on the cores, but that is probably best.
16:58:37 <jungleboyj> Isn't really worth fighting over.
16:58:37 <smcginnis> OK, makes sense then I guess.
16:58:37 <jgriffith> smcginnis: works for me
16:58:47 <xyang1> smcginnis: that's fine
16:58:51 <e0ne> sounds good
16:58:53 <hemna> coo
16:58:54 <jgriffith> smcginnis: LOL... "I guess"
16:58:59 <jgriffith> the disclaimer there :)
16:59:03 <ntpttr> yhugik
16:59:03 <smcginnis> Alright, I'll try to remember. :)
16:59:09 <ntpttr> sorry! cat on keyboard
16:59:21 <smcginnis> OK, times up.
16:59:28 <smcginnis> Don't forget about midcycle planning.
16:59:32 <smcginnis> Thanks everyone!
16:59:35 <Swanson> toodles
16:59:39 <jungleboyj> ntpttr: Oh, I miss those days.
16:59:43 <smcginnis> #endmeeting