16:00:26 <smcginnis> #startmeeting Cinder
16:00:27 <openstack> Meeting started Wed May 25 16:00:26 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:31 <openstack> The meeting name has been set to 'cinder'
16:00:33 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind jbernard _alastor_ vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir
16:00:37 <e0ne> hi
16:00:37 <geguileo> smcginnis: Thanks
16:00:39 <smcginnis> Hey everyone
16:00:40 <geguileo> Hi
16:00:42 <sheel> hi
16:00:42 <e0ne> #link https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:00:43 <jgregor> Hiya!
16:00:46 <DuncanT> Hi
16:00:47 <adrianofr> Hey
16:00:52 <smcginnis> e0ne: Beat me to it. ;)
16:00:52 <Swanson> Hello
16:00:57 <xyang1> hi
16:01:00 <scottda> hi
16:01:09 <smcginnis> #topic Announcements
16:01:09 <e0ne> smcginnis: :)
16:01:24 <hemna> hi
16:01:33 <jgriffith> o/
16:01:36 <jungleboyj> Hello.
16:01:49 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus tracking
16:02:01 <baumann> Hello!
16:02:04 <smcginnis> I finally spent a little time and updated that ^^
16:02:27 <aimeeu> .
16:02:27 <diablo_rojo> Hello :)
16:02:31 <smcginnis> Please take a look and try to spend some time on the things we've identified as priorities.
16:02:34 <bswartz> .o/
16:02:52 <smcginnis> I've definitely missed some things we've talked about, so please add any glaring omissions.
16:03:26 <smcginnis> Also, if any of those are yours, feel free to update with latest links or any other helpful information.
16:03:45 <smcginnis> #link https://bugs.launchpad.net/nova/+bugs?field.status:list=NEW&field.tag=volumes Nova volume bugs
16:04:03 <smcginnis> Just usual reminder on that one ^ that nova can always use our input on volume related issues.
16:04:13 <tbarron> hi
16:04:18 <DuncanT> Dynamic reconfig is listed as both a priority and a nice-to-have
16:04:43 <smcginnis> DuncanT: Oh?
16:04:45 <diablo_rojo> Yeah I am special like that
16:04:56 <smcginnis> diablo_rojo: You're special all right. :P
16:05:04 <diablo_rojo> smcginnis: :P
16:05:12 <diablo_rojo> I was working on addressing comments today
16:05:48 <diablo_rojo> I guess I am not sure what needs to be added based on the larger conversation between jungleboyj, hemna, patrickeast, and dulek
16:05:58 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-dev/2016-May/095691.html Gerrit Downtime ML Announcement
16:06:08 <jungleboyj> I added a question out there as I have gotten kind-of lost.
16:06:13 <smcginnis> #info Gerrit will have an outage Friday 2016-06-03 at 20:00 UTC
16:06:50 <smcginnis> Plan accordingly. :)
16:06:57 <hemna> ?
16:07:08 <smcginnis> hemna: For the gerrit downtime.
16:07:11 * jungleboyj will take vacation.
16:07:13 <sheel> *20:00 through 24:00 UTC
16:07:14 <jungleboyj> ;-)
16:07:19 <bswartz> They should do it this friday rather than next friday
16:07:26 <smcginnis> hemna: Cuz I know how bad you'll feel without access to gerrit. ;)
16:07:35 <hemna> heh :)
16:07:45 <e0ne> :)
16:07:49 <smcginnis> bswartz: I'm sure they have a reason for their timing.
16:07:59 <scottda> hemna: And that's 1:00 PM PDT, since we know how much you love UTC time
16:08:00 <jungleboyj> smcginnis: Are they going to make it suck less now?
16:08:14 <smcginnis> jungleboyj: One can only hope. :)
16:08:23 * jungleboyj start to pray
16:08:30 * diablo_rojo crosses fingers
16:08:33 <smcginnis> #info Women of OpenStack looking for mentors
16:08:36 <hemna> gerrit down at 1pm...man, that sounds like an excuse to hit the river with a fly rod....
16:08:39 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-dev/2016-May/095667.html ML announcement
16:08:40 <e0ne> 11pm in my timezone.. I'll be able to finish work before 11pm :)
16:08:42 <sheel> they want to rename some projects
16:08:53 <jgriffith> hemna: +100
16:09:10 <smcginnis> Looking for mentors to help. See ML post for more details.
16:09:13 <geguileo> e0ne: 1 hour earlier for me  ;-)
16:09:32 <diablo_rojo> smcginnis: Its pretty simple to get into the system. It's just a google form to fill out.
16:09:33 <e0ne> geguileo: you're a lucky man
16:09:54 <smcginnis> diablo_rojo: Great to see that taking off. :)
16:10:08 <sheel> diablo_rojo: lots of do's and don'ts in the form as well
16:10:10 <diablo_rojo> smcginnis: If you want to be a mentor and a mentee that's fine too
16:10:15 <geguileo> diablo_rojo: But it was asking about Austin stuff  ;-)
16:10:34 <diablo_rojo> geguileo: Yeah, it's been out since like January
16:10:59 <diablo_rojo> geguileo: Just skip the Austin stuff :)
16:11:23 <smcginnis> #topic Midcycle planning
16:11:30 <smcginnis> #link https://etherpad.openstack.org/p/newton-cinder-midcycle Planning etherpad
16:11:32 <geguileo> diablo_rojo: I was actually considering signing up, but I'm no good at some of the stuff a mentor should be doing according to the document
16:11:44 <smcginnis> I had this as an announcement item, but probably worth making its own topic.
16:11:53 <smcginnis> Please add your name to the etherpad if planning on attending.
16:11:59 <smcginnis> List is pretty small so far.
16:12:02 <diablo_rojo> geguileo: Maybe you sign up as a mentee to get good at those things ;) Then later sign up as a mentor
16:12:10 <smcginnis> And don't forget to reserve a hotel room.
16:12:13 <geguileo> diablo_rojo: rofl
16:12:25 <smcginnis> :)
16:12:38 <scottda> Yes and please book your Hotel room for the mid-cycle to get a discount. There is not a block of reserved rooms, it is first-come, first-served.
16:13:08 <smcginnis> We also can start capturing a list of topics to cover at the midcycle in the etherpad.
16:13:12 <jungleboyj> scottda: They still had rooms available yesterday.
16:13:21 <smcginnis> I just reserved mine yesterday.
16:13:26 <hemna> smcginnis, fwiw, I'm attempting to get travel approval.....
16:13:27 <smcginnis> Finally remembered to do it.
16:13:31 <scottda> jungleboyj: Cool. I just don't want anyone to have to pay more than the discount rate.
16:13:32 <smcginnis> hemna: Awesome!
16:13:37 <hemna> unlikely though that we'll be there.
16:13:38 <diablo_rojo> hemna: Me too.
16:13:50 <jungleboyj> hemna: Thou shalt get travel approval.
16:13:54 <smcginnis> hemna: I think you need to go to the HP office there to transition things. ;)
16:14:05 <jungleboyj> smcginnis: I should be approved.  Just waiting for the go-ahead.
16:14:32 <smcginnis> Some interesting challenges this time around. :|
16:15:19 <smcginnis> Any other agenda items? Pretty light this week.
16:15:34 <diablo_rojo> Can talk about the conversation going on for my spec
16:15:38 <smcginnis> #topic Open Discussion
16:15:39 <e0ne> I've got one item about unit tests
16:15:46 <smcginnis> diablo_rojo: Go ahead
16:15:51 <smcginnis> e0ne: You're next
16:16:01 <DuncanT> Avishay asked that we discuss the list-manageable volumes spec and patches
16:16:10 <geguileo> I would also like to talk about our broken rolling upgrades  :-)
16:16:15 <diablo_rojo> So, is the conversation just that I need to add to it saying things could be hosed if this is run in an HA environment
16:16:16 <diablo_rojo> ?
16:16:35 <diablo_rojo> Or is there another part of the implementation that needs to happen?
16:17:00 <diablo_rojo> #link https://review.openstack.org/#/c/286234/10 Dynamic Reconfig
16:17:23 <diablo_rojo> hemna: dulek jungleboyj patrickeast all had comments there so I am looking to you all for clarification
16:17:23 <e0ne> diablo_rojo: IMO, it should not depend on HA things
16:17:43 <jungleboyj> e0ne: I agree.
16:18:08 <diablo_rojo> So I just need to point out that there could be issues here that are independent of this approach and could happen anyway?
16:18:13 <jungleboyj> I just thought that we need to note in a disclaimer that sysadmins may enable settings using this that could cause issues.
16:18:27 <hemna> as I said in my last comment on the review "All I'm saying is that we need to document this as a potential for major problems.  "
16:18:31 <jungleboyj> This is nothing new.  The concerns for A/A c-vol would go with the HA documentation.
16:18:40 <e0ne> +1
16:18:41 <diablo_rojo> hemna: Document in the spec or somewhere else?
16:18:42 <hemna> folks seem to be worried that I'm asking for A/A solutions in this spec
16:18:47 <hemna> and that's NOT what I said.
16:18:47 <jungleboyj> hemna: The 'here is a loaded gun' disclaimer.
16:19:26 <hemna> the spec should mention the potential issues with A/A, and we should document in the devref what not to do.
16:19:28 <hemna> that's it.
16:19:44 <diablo_rojo> hemna: Got it. Will do :)
16:19:52 <hemna> the A/A issue can cause really bad problems
16:19:57 <hemna> we should be up front about it.
16:20:00 <e0ne> hemna: good point
16:20:16 <diablo_rojo> hemna: I agree. Make sure people know how badly they can mess it up
16:20:28 <smcginnis> diablo_rojo: good?
16:20:32 <diablo_rojo> smcginnis: Yup
16:20:35 <smcginnis> Thanks!
16:20:39 <smcginnis> e0ne: You're up
16:20:41 <diablo_rojo> Thank you :)
16:20:46 <e0ne> thanks, Sean
16:21:04 <e0ne> I've tried to organize our unit tests a bit
16:21:15 <e0ne> and found that backends' unit tests depend on execution order due to a lack of mocking, I guess
16:21:20 <e0ne> here is my patch: https://review.openstack.org/#/c/320148/
16:21:38 <e0ne> I need help from drivers maintainers with it
16:21:41 <smcginnis> e0ne: I think that's what I was seeing in the 5 minutes I spent on it. :)
16:21:53 <smcginnis> Then I gave up.
16:22:03 <smcginnis> It would be nice to reorganize that tree.
16:22:04 <e0ne> and I have no idea how to check for it with hacking rules
16:22:23 <e0ne> smcginnis: I'll propose other patches too
16:22:36 <e0ne> smcginnis: but drivers tests make me very sad :(
16:22:42 <DuncanT> I don't think you can check it with hacking rules, you need to do runtime data poisoning analysis to detect it
16:23:24 <DuncanT> It isn't generally detectable via static analysis in python... it's a stupidly hard language to statically analyse
16:23:41 <e0ne> StorwizeSVCCommonDriverTestCase takes more than 4 seconds :(
16:23:43 <jgriffith> e0ne: sorry... can ellaborate a bit about the problem?
16:23:59 <e0ne> jgriffith: sure
16:24:10 <jgriffith> e0ne: sorry...  /me is slow :)
16:24:16 <e0ne> jgriffith: there are some issues with mock
16:24:25 <e0ne> jgriffith: not everything is mocked right
16:24:50 <e0ne> jgriffith: so once I moved the tests to another directory, they got stuck :(
16:24:54 <jgriffith> e0ne: oh... you mean like relative vs absolute paths or something in the mocks?
16:25:14 <e0ne> jgriffith: I think it's mocks related, but not 100% sure
16:25:23 <jgriffith> e0ne: hmm... interesting
16:25:28 <hemna> I'm not following the issue here
16:25:42 <hemna> can you show some examples?  pastebin errors, etc. ?
16:25:45 <jgriffith> I think the issue is that mock blows up when he moves things
16:26:01 <jgriffith> e0ne: I'll download the patch and run it to see what you mean
16:26:23 <tbarron> e0ne: does it make sense to divide and conquer here, moving fewer files in each patch, raising bugs when a patch fails?
16:26:27 <smcginnis> jgriffith: That's probably the best way.
16:26:29 <e0ne> jgriffith, hemna: http://logs.openstack.org/48/320148/1/check/gate-cinder-python27-db/a33e7a1/console.html
16:26:35 <smcginnis> Things do fall apart when you try to run them.
16:26:59 <e0ne> jgriffith, hemna: TBH, that patch works on some envs but fails on others
16:27:08 <e0ne> jgriffith, hemna: I don't know why:(
16:27:14 <jgriffith> e0ne: hehe... we'll see if it's my lucky day or not
16:27:18 <jgriffith> running now
16:27:21 <e0ne> jgriffith: :)
16:27:23 <smcginnis> e0ne: With the timeout it's looking like a sleep isn't getting mocked right, but that's odd.
16:27:39 <hemna> huh
16:27:40 <jgriffith> oh no... not sleep mock problems again :(
16:27:47 <e0ne> smcginnis: sleep or some I/O process
16:27:59 <smcginnis> e0ne: True
16:28:05 <smcginnis> jgriffith: No kidding!
16:28:22 <e0ne> StorwizeSVCCommonDriverTestCase <- looks like sleep is not mocked there!
16:28:22 <jgriffith> ahh... test_terminate_connection_with_decorator ;(
16:28:38 <jungleboyj> e0ne: :-(
16:29:01 <smcginnis> e0ne: I wonder if it would be better to move these in smaller chunks.
16:29:02 <jgriffith> e0ne: as you said though, very strange that moving the tests exposes this
16:29:14 <smcginnis> At least for problem isolation.
16:29:16 <e0ne> jgriffith: agree
16:29:23 <e0ne> I'd appreciate any help with these things.
16:29:31 <smcginnis> And maybe a little easier than the last time some guys decided to move everything around.
16:29:34 <smcginnis> ;)
16:29:55 <e0ne> we can continue to discuss it in the cinder channel to free this one for more important topics
16:30:12 <smcginnis> e0ne: Thanks
16:30:33 <smcginnis> avishay couldn't attend, but he wanted his list manageable volumes/snapshots discussed.
16:30:50 * smcginnis is looking for link
16:30:56 <e0ne> smcginnis, jgriffith:  FYI: test_rbd always hangs on my envs if it is moved to unit/volume/drivers directory
16:31:09 <smcginnis> https://review.openstack.org/#/c/285296/
16:31:12 <jgriffith> e0ne: yeah, for me it's the zonemanager tests :(
16:31:29 <e0ne> jgriffith: it's something new:)
16:31:33 <DuncanT> The basic objection here seems to be that the paging stuff is hard... it's totally driver dependant though, so can only really be done in the driver
16:31:48 <smcginnis> DuncanT: Yeah, that's the problem/concern I have with it.
16:32:07 <smcginnis> I wanted to code it up and see, but with the way he did it for LVM it would definitely be bad for my backend.
16:32:08 <jgriffith> e0ne: You're right though, those usages of mock are all wrong
16:32:19 <jgriffith> e0ne: they're mocking the entire class :(
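The order-dependence e0ne and jgriffith are describing can be sketched roughly as follows (a hypothetical minimal example, not code from the Cinder tree): a stub assigned directly to a module attribute is never restored, so later tests see it or miss it depending on which test ran first, while `mock.patch` reverts the patch when its scope exits.

```python
# Hypothetical sketch of order-dependent mocking (not actual Cinder test
# code). A bare assignment like `time.sleep = lambda _: None` leaks into
# every test that runs afterwards; mock.patch is scoped and undone on exit.
import time
from unittest import mock

def slow_retry():
    # Would block a unit test for 10 minutes if sleep is not mocked.
    time.sleep(600)
    return "done"

# Bad (order-dependent): time.sleep = lambda _: None  # never restored

# Good: the patch is scoped and automatically reverted.
with mock.patch("time.sleep") as fake_sleep:
    result = slow_retry()

assert result == "done"
assert fake_sleep.call_count == 1
assert time.sleep is not fake_sleep  # original restored after the context
```

Moving test modules changes the import/execution order, which is exactly why a leaked patch that happened to mask a missing mock can suddenly stop masking it.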
16:32:39 <e0ne> DuncanT: maybe we can do it in the manager, not in the driver
16:32:49 <smcginnis> It would be something like O(n*4) for me, so if I end up throwing away 90% of the results at the end that would be bad.
16:32:50 <xyang1> smcginnis: I'll review again.  he said he addressed my comments
16:32:58 <smcginnis> xyang1: OK, good.
16:33:03 <e0ne> jgriffith: what test is it? not sure that I want to see it...
16:33:08 <e0ne> :)
16:33:16 <jgriffith> e0ne: LOL.. I'll ping you after meeting
16:33:20 <DuncanT> e0ne: If you do it in the manager, you force the driver to pull in potentially thousands of results, which might be expensive
16:33:32 <e0ne> jgriffith: thanks
16:33:38 <smcginnis> So driver maintainers, please take a look at that list patch and make sure it's going to work out for you.
16:34:00 <xyang1> e0ne: I don't think it can be dobe without driver call
16:34:04 <e0ne> DuncanT: but not every driver will be able to do it on the storage side
16:34:08 * hemna reruns the hp tests....
16:34:13 <DuncanT> smcginnis: Just look at the method signature and try to implement it from scratch.... looking at LVM isn't very helpful in this case I think
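The driver-side paging DuncanT is arguing for might look roughly like this (an illustrative sketch with hypothetical names and a simplified signature, not the exact interface from the spec under review): by filtering and paging inside the driver, a backend shared by several clouds never has to hand thousands of volumes up to the manager only to have 90% discarded.

```python
# Hypothetical sketch of driver-side paging for listing manageable
# volumes (names and signature are illustrative, not the spec's).
def get_manageable_volumes(backend_volumes, managed_ids, marker=None, limit=50):
    # Exclude volumes this Cinder deployment already manages.
    candidates = [v for v in backend_volumes if v["id"] not in managed_ids]
    # Stable ordering so the marker defines a consistent page boundary.
    candidates.sort(key=lambda v: v["id"])
    if marker is not None:
        candidates = [v for v in candidates if v["id"] > marker]
    return candidates[:limit]

backend = [{"id": "vol-%03d" % i, "size": 1} for i in range(10)]
page = get_manageable_volumes(backend, managed_ids={"vol-003"}, limit=4)
assert [v["id"] for v in page] == ["vol-000", "vol-001", "vol-002", "vol-004"]
```

Only the driver knows how cheap (or expensive) this query is on its backend, which is the core of the "it can only really be done in the driver" argument.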
16:34:16 <jgriffith> FTR I'm opposed to the idea altogether
16:34:17 <xyang1> s/dobe/done
16:34:28 <DuncanT> jgriffith: Why?
16:34:31 <smcginnis> e0ne: I won't either, but in my driver I need to get the bare minimum, then filter, then get the full details for what's left.
16:34:31 <jgriffith> some devices are used for more than one cloud or deployment
16:34:34 <smcginnis> I think.
16:34:37 <e0ne> I like the idea how we did with generic volume migrations
16:35:05 <jgriffith> things like importing into OpenStack IMO are admin things that frankly *should* require a bit of effort
16:35:06 <xyang1> e0ne: because each driver defines its own ref
16:35:19 <xyang1> e0ne: manager does not know
16:35:23 <hemna> jgriffith, +1
16:35:34 <Swanson> smcginnis, It involves getting tons of data (some of which might be used by other backends or clouds) repeatedly or adding a cache.
16:35:38 <smcginnis> jgriffith: So you think they should just go to their storage to find the volume, then do a manage in OpenStack once they know which one they want?
16:35:40 <DuncanT> jgriffith: admins are disagreeing
16:35:46 <jgriffith> smcginnis: yes
16:35:59 <hemna> this also exposes other volumes on the backend that shouldn't be viewable by an openstack admin
16:36:11 <e0ne> hemna: +1
16:36:14 <Swanson> smcginnis, sucks. Not against it but I'd rather return less data and have another option to drill down.
16:36:26 <DuncanT> jgriffith: Particularly as figuring out the ref for some backends turns out not to be trivial
16:36:28 <jgriffith> hemna: exactly, that's part of my point.
16:36:52 <hemna> I think this exposes too much
16:37:07 <jgriffith> I'm not convinced of the use case to begin with but that's ok
16:37:16 <smcginnis> OK, please comment on the patch and make sure concerns are captured there.
16:37:39 <smcginnis> Would be good to have some operator feedback for things like this. :/
16:37:51 <Swanson> Can't we just let it pass and then whine and counter patch later?
16:37:56 <xyang1> DuncanT: is operator?
16:38:18 <jgriffith> Swanson: that always ends well :)
16:38:32 <smcginnis> It's the Cinder way! :)
16:38:35 <Swanson> jgriffith, It's the process I'm most comfortable with.
16:38:35 <DuncanT> xyang1: Yes. The format is different between backends, and often not exactly what appears on the (proprietary) backend GUI
16:39:06 <Swanson> jgriffith, smcginnis: :)
16:39:26 <DuncanT> smcginnis: I've got a customer who banged their head against the wall trying to get manage working... it's easy once you know how but not easy to do from scratch
16:39:30 <Swanson> DuncanT, Isn't this a documentation issue?
16:39:56 <DuncanT> Swanson: I'm not convinced that documenting a complex procedure is better than making it easier
16:40:01 <hemna> DuncanT, because they didn't know the format for the particular backend ?
16:40:08 <DuncanT> hemna: Yes
16:40:23 <hemna> yah that part really sux0rs
16:40:42 <DuncanT> hemna: With avishay's patch, it is easy
16:40:43 <hemna> to me it's similar to not knowing what extra specs are supported by any particular backend
16:40:49 <hemna> the driver knows what it needs
16:40:58 <hemna> we just don't have a way of exposing it
16:40:59 <DuncanT> hemna: Yah, somebody should fix that too ;-)
16:41:19 <Swanson> DuncanT, So we make knowing the values easier but then they have to go back to the backend to figure out what those values mean. Same issue. Other side.
16:41:19 <hemna> well, avishay's patch also does something that I think is a security problem
16:41:26 <hemna> by listing all available volumes on an array
16:41:29 <hemna> that is bad IMHO
16:42:43 <DuncanT> hemna: It's admin only... how often does the admin not have login access to the box, and therefore full access to the creds to talk to the backend anyway?
16:42:56 <jgriffith> DuncanT: fairly often actually
16:43:13 <smcginnis> OK, everyone with concerns please comment. Maybe Avishay can attend next week's meeting to discuss it further.
16:43:15 <jgriffith> DuncanT: it's not uncommon to have an openstack-admin
16:43:24 <hemna> DuncanT, that depends entirely on the organization deploying.  an openstack admin != storage admin
16:43:30 <jgriffith> so I think all these concerns can be worked out if control is left to the driver
16:43:39 <smcginnis> Well, that admin has the credentials in cinder.conf, so it uses the same thing. They can log in and view the same data that would be shown.
16:44:06 <DuncanT> jgriffith: People are actually doing that in the wild now? Cool
16:44:28 <DuncanT> jgriffith: I wonder how e.g. not having access to the logs plays out?
16:44:40 <jgriffith> DuncanT: yeah, like hemna mentioned... openstack-admin, storage-admin, VSphere admin etc
16:45:11 <DuncanT> jgriffith: I'm aware of the concept, but everybody I know who tried it failed in practice to make it useful
16:45:22 <jgriffith> smcginnis: make a fair point too though about cinder.conf... which I get complaints about on a regular basis
16:45:36 <jgriffith> ie don't expose credentials in the conf file
16:45:42 <jgriffith> but I don't really know of a good way to fix that
16:45:56 <jgriffith> at least reasonably
16:45:57 <DuncanT> jgriffith: the cinder.conf point was the one I was trying to make, sorry if I wasn't clear
16:46:07 <jgriffith> DuncanT: yeah, valid
16:46:20 <smcginnis> We could possibly provide a tool to store credentials in an encrypted file that cinder would pull from.
16:46:26 <jungleboyj> jgriffith: I don't understand why the obfuscation we previously proposed didn't take off for that reason.
16:46:37 <smcginnis> jungleboyj: Oh, what was that?
16:46:40 <jgriffith> jungleboyj: which?
16:47:01 <DuncanT> If cinder can decrypt it, then it's tricky to stop anybody with any useful admin access to the box from doing so too
16:47:17 <jungleboyj> It was an obfuscation approach that encrypted the data in conf files for like the database connection, etc.
16:47:28 <jungleboyj> IBM carried the patch for it internally for quite some time.
16:47:29 <DuncanT> jungleboyj: Link?
16:47:30 <jgriffith> jungleboyj: don't remember exactly but wasn't it something about too easy to reverse decode?  And the fact that the driver would have the code to decrypt it anywa?
16:47:32 <smcginnis> DuncanT: Just slightly more secure than being able to "cat cinder.conf" and see it.
16:47:32 <jgriffith> I don't recall
16:47:44 <DuncanT> smcginnis: Security theatre.
16:47:45 <smcginnis> Or rather /secure/obfuscated/
16:47:51 <smcginnis> DuncanT: Very true
16:47:57 <xyang1> DuncanT: that's the reason why apple does not want to unlock the iphone:)
16:48:02 <hemna> that sounds like something that should be a cross project discussion
16:48:08 <DuncanT> smcginnis: That's how you end up with the TSA
16:48:11 <hemna> other projects might need to secure their conf files?
16:48:13 <hemna> I dunno
16:48:13 <jungleboyj> jgriffith: Oh, maybe that was it.  It was better than nothing.
16:48:21 <smcginnis> DuncanT: LOL
16:48:21 <jgriffith> smcginnis: "security through obfuscation", isn't that said to be a bad thing :)
16:48:31 <hemna> just seems like a useless exercise to me
16:48:31 <jgriffith> TSA... LOL
16:49:05 <jgriffith> so back to the manage thing....
16:49:17 <jgriffith> well... never mind I'll just put it in the patch comments
16:49:22 <smcginnis> :)
16:49:28 <DuncanT> jgriffith: You said something about maybe being able to fix it in the driver?
16:49:49 <DuncanT> jgriffith: Does your backend know which volumes are part of which cloud or something?
16:50:46 <hemna> DuncanT, ours doesn't.  we won't know which ones to show and which to not show.
16:51:06 <DuncanT> hemna: I know most don't.
16:51:29 <DuncanT> hemna: A security conscious deployment could always disable this call via policy....
16:51:39 <hemna> sure
16:51:48 <jungleboyj> DuncanT: ++
16:52:07 <hemna> then there is the issue with showing volumes already managed by another deployment, and then managing them....causing conflicts and pain.
16:52:29 <Swanson> I'm not implementing nuthin until Dell's phalanx of liability lawyers and crack security team takes a gander.
16:53:02 <DuncanT> hemna: Nothing stopping them doing that manually today
16:53:09 <hemna> sure
16:53:23 <hemna> that is true, but this makes it way easier and more likely to happen
16:53:42 <smcginnis> One more thing I wanted to bring up before we run out of time. There's been some grumbling that it's taking much longer to get things reviewed.
16:53:43 <hemna> today the admin has to manually find the volume, which is good and bad at the same time.
16:53:44 <DuncanT> hemna: Not so sure of that, personally
16:53:47 <smcginnis> #link https://wiki.openstack.org/wiki/Cinder#Review_Links Review Links
16:53:57 <smcginnis> Please please spend time reviewing patches if you can.
16:54:08 <hemna> smcginnis, yes dad!  :)
16:54:17 <smcginnis> hemna: Now go to your room!
16:54:19 <smcginnis> :)
16:54:23 <hemna> :)
16:54:33 * geguileo wonder if everybody missed his comment on wanting to talk about broken rolling upgrades...
16:54:37 <Swanson> My failback patch finally got a review! Thanks xyang1 !
16:54:38 <geguileo> s/wonder/wonders
16:54:40 <kfarr> Also before we run out of time and in a similar vein, to follow up from last week, I'd like to request reviews for the Castellan integration code. Spec: https://review.openstack.org/#/c/247577/, code: https://review.openstack.org/#/c/280492/
16:54:48 * DuncanT has had his upstream time cut by his employer :-( Still doing some reviews but not nearly as many as I did
16:54:50 <smcginnis> geguileo: Doh! Don't be so polite! :)
16:54:59 <geguileo> lol
16:55:13 <xyang1> Swanson: now you need to find more reviewers:)
16:55:13 <smcginnis> geguileo: That's more important. We still have 5 minutes.
16:55:22 <geguileo> Well, our rolling upgrades are currently a little broken
16:55:24 <DuncanT> kfarr: I need to get back to that. Looks like the patch is actually better than the spec suggests it is
16:55:34 <geguileo> I discovered it last month and posted a couple of patches
16:55:34 <smcginnis> geguileo: Can you summarize the issue?
16:55:38 <geguileo> Sure
16:56:02 <kfarr> Ok, thanks DuncanT, please leave a comment on the review links with your thoughts
16:56:09 <geguileo> Full description is here: http://gorka.eguileor.com/learning-something-new-about-oslo-versioned-objects/
16:56:20 <geguileo> First patch is this one: https://review.openstack.org/#/c/307074
16:56:27 <geguileo> And we have 2 issues
16:56:36 <DuncanT> kfarr: Will do. Sorry about the delay, I've been rather busy :-(
16:56:43 <geguileo> 1. We are not keeping our OVO lists in sync with the OVOs they are listing
16:56:55 <geguileo> 2. We don't have relationship maps for the backports
16:56:59 <kfarr> DuncanT, I understand :)
16:57:08 <smcginnis> geguileo: Nice write up. I'll have to spend some time reading through that.
16:57:17 <geguileo> So when OSLO tries to do a backport of a Volume
16:57:34 <geguileo> It doesn't know to which version of VolumeType it must backport the volume_type field
16:57:47 <smcginnis> geguileo: So you have some patches out to fix this? Or is there more to fix?
16:57:51 <geguileo> And that happens for all Versioned Objects that have other Versioned Object fields
16:58:10 <geguileo> smcginnis: The first 2 patches in that series fix this
16:58:20 <geguileo> First one links lists to their OVO contents
16:58:40 <geguileo> The second one creates a mechanism to auto generate the relationships based on our OVO history
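The relationship-map idea geguileo is describing can be modeled with a toy example (plain Python for illustration only; the real patches work through oslo.versionedobjects' `obj_make_compatible` machinery, and the version numbers and fields below are made up): when backporting a parent object, the serializer has to know which version of each child object the target release understands.

```python
# Toy model (not the oslo.versionedobjects API) of backporting a Volume
# whose volume_type field is itself a versioned object. The relationship
# map records, per parent version, the matching child version, so a
# backport can strip fields the older service doesn't know about.
VOLUME_TYPE_HISTORY = {"1.0": ["id", "name"],
                       "1.1": ["id", "name", "is_public"]}

# Parent version -> version of each versioned-object field.
VOLUME_RELATIONSHIPS = {"1.0": {"volume_type": "1.0"},
                        "1.1": {"volume_type": "1.1"}}

def backport_volume(volume, target_version):
    """Return a primitive an older service at target_version can load."""
    child_version = VOLUME_RELATIONSHIPS[target_version]["volume_type"]
    allowed = VOLUME_TYPE_HISTORY[child_version]
    backported = dict(volume)
    backported["volume_type"] = {k: v
                                 for k, v in volume["volume_type"].items()
                                 if k in allowed}
    return backported

vol = {"id": "v1", "size": 10,
       "volume_type": {"id": "t1", "name": "gold", "is_public": True}}
old = backport_volume(vol, "1.0")
assert "is_public" not in old["volume_type"]  # stripped for the old service
```

Without the map, the serializer has no way to pick `child_version`, which is the failure geguileo's write-up describes; generating the map from the existing OVO history avoids maintaining it by hand.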
16:58:53 <smcginnis> geguileo: Great!
16:59:16 <geguileo> I'll send a new update in 10 minutes with a commit message change on the second patch
16:59:24 <geguileo> But they should be ready to go after that
16:59:49 <geguileo> Well, that was all
16:59:49 <thingee> o/
16:59:50 <smcginnis> geguileo: Thanks! I'll watch for the update.
16:59:53 <DuncanT> geguileo: Can we log a warning (or info) every time we backport an object version?
17:00:00 <smcginnis> thingee: Hah! One minute to spare. ;)
17:00:07 <thingee> ;D
17:00:11 <geguileo> DuncanT: Mmmmm, we probably could
17:00:14 <DuncanT> geguileo: It should only happen during an upgrade, and stop happening at the end of it, right?
17:00:18 * thingee was passed out somewhere in vancouver
17:00:19 <smcginnis> And times up...
17:00:20 <geguileo> DuncanT: Yes
17:00:37 <smcginnis> See you back in channel. Thanks everyone.
17:00:37 <geguileo> DuncanT: So during upgrades your logs would grow a lot...
17:00:45 <smcginnis> #endmeeting