16:00:09 <smcginnis> #startmeeting Cinder
16:00:09 <openstack> Meeting started Wed Aug 17 16:00:09 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 <openstack> The meeting name has been set to 'cinder'
16:00:16 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell stevemar watanabe.isao,tommylike.hu
16:00:20 <yuriy_n17> hi
16:00:21 <e0ne> hi
16:00:22 <erlon> hey
16:00:23 <smcginnis> Hey everyone
16:00:23 <stevemar> o/
16:00:24 <geguileo> Hi! o/
16:00:24 <Swanson> hello
16:00:28 <scottda> hola
16:00:28 <jgregor> Hello :)
16:00:33 <hemna> morning
16:00:35 <DuncanT> Greetings
16:00:39 <xyang1> hi
16:00:41 <rajinir> o/
16:00:41 <ntpttr___> heyo
16:00:45 <bswartz> .o/
16:01:00 <smcginnis> #topic Announcements
16:01:13 <smcginnis> #link http://releases.openstack.org/newton/schedule.html Newton release schedule
16:01:22 <flip214> hi
16:01:26 <smcginnis> Next week is the non-client library release deadline.
16:01:31 <jungleboyj> o/
16:01:37 <smcginnis> AKA os-brick
16:01:38 <diablo_rojo> Hello
16:01:41 <hemna> I thought it was friday?
16:01:44 <smcginnis> #link https://review.openstack.org/#/q/project:openstack/os-brick+status:open Open os-brick reviews
16:01:53 <jseiler> hi
16:02:00 <_alastor_> o/
16:02:14 <smcginnis> hemna: Yeah, looks like we have through next week, so a few more days.
16:02:29 <scottda> Do we have a list of os-brick priorities?
16:02:41 <smcginnis> If anyone has anything critical outstanding there, let me know.
16:02:49 <hemna> ok phew
16:02:51 <smcginnis> Not really.
16:03:04 <diablo_rojo> hemna: Are you trying to get the patch you sent me yesterday in?
16:03:04 <hemna> I put up the patch to remove the locks in iSCSI and FC
16:03:09 <diablo_rojo> Lol
16:03:10 <smcginnis> The c-vol/n-cpu locking issue is kind of a high priority.
16:03:11 <hemna> seems like most of the CI's are passing
16:03:26 <jgriffith> hemna: go figure :)
16:03:29 <smcginnis> hemna: I don't think that's actually what we want, but we'll see.
16:03:30 <DuncanT> An awful lot of unanswered -1s on brick patches there
16:03:39 <hemna> that needs to be rally tested
16:03:45 <jgriffith> hemna: the only ones I think should be affected are Pure, Dell and 3Par
16:03:53 <hemna> jgriffith, yah
16:03:57 <jgriffith> Maybe others I don't know about but those are the main ones
16:04:02 <smcginnis> hemna: This is actually the approach I think they were looking for: https://review.openstack.org/#/c/356532/
16:04:09 * patrickeast wanders in late
16:04:11 <jgriffith> shared target devices
16:04:14 <smcginnis> But definitely need lots of rally testing.
16:04:21 <hemna> smcginnis, well we also discussed removing the locks entirely
16:04:23 <smcginnis> jgriffith: Yeah, I think so too.
16:04:26 <patrickeast> jgriffith: yep
16:04:48 <patrickeast> hemna: that approach scares me a bit
16:04:50 <smcginnis> hemna: Yeah, at the time we talked about removing the locks and just having retries, but I think the locks are valid for some scenarios.
16:05:04 <hemna> patrickeast, me too.  there are obvious race conditions in the code
16:05:16 <xyang1> jgriffith: I think we have that too
16:05:18 <hemna> and the locks were in there to help prevent that, but the locks aren't shared.....
16:05:27 <jgriffith> I'm still not sure I agree with the statement that we need them... but I don't want to derail meeting :)
16:05:30 <hemna> retries....may help, but also cause other problems
16:05:37 <smcginnis> Anyway, just be aware that deadline is coming up quick.
16:05:41 <hemna> so it's a crap shoot.   I'd like to get all of this rally tested IMHO
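[Editor's note] For context on the os-brick locking thread above: the connectors serialize connect/disconnect calls with oslo.concurrency locks, which only protect callers that share the same lock name and lock file location, so a lock taken in cinder-volume does not guard against a concurrent call from nova-compute. A minimal illustrative sketch of that pattern, not the actual os-brick code (the class name and decorator arguments are assumptions):

    from oslo_concurrency import lockutils

    # File locks only serialize callers that agree on the same lock name,
    # prefix and lock_path. cinder-volume and nova-compute run as
    # separate services (often on different hosts), so this lock does not
    # protect against a concurrent attach/detach from the other side --
    # which is the gap being discussed above.
    synchronized = lockutils.synchronized_with_prefix('os-brick-')


    class ExampleISCSIConnector(object):
        """Illustrative connector, not the real os-brick class."""

        @synchronized('connect_volume', external=True)
        def connect_volume(self, connection_properties):
            # Device scan/attach work would go here; the lock keeps two
            # callers in the same service from racing on shared devices.
            pass

        @synchronized('connect_volume', external=True)
        def disconnect_volume(self, connection_properties, device_info):
            # Removing a device while another attach is in flight is the
            # race the lock (or, alternatively, a retry loop) guards against.
            pass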
16:05:47 <smcginnis> #topic Moving the failing jenkins tests to check-experimental until they pass
16:05:50 <patrickeast> jgriffith: yea i don't think we need them for all platforms, situations, deployment configs, etc 100% agree there
16:05:55 <patrickeast> jgriffith: but we do for some of em
16:05:56 <jgriffith> boris-42: you around ^^ :)
16:05:56 <smcginnis> DuncanT: Was this new this week, or left over from last week?
16:06:09 <DuncanT> Left over from last week, we ran out of time
16:06:26 <smcginnis> Oh, OK. So we still need to discuss it then.
16:06:28 <jgriffith> hemna: I'll bet boris-42 would be happy to help with rally tests if you want/need
16:06:34 <e0ne> jgriffith: who said rally?
16:06:37 <DuncanT> It's a pretty straight-forward thing, I hope
16:07:01 <jgriffith> oh yeah... e0ne is another potential victim... err candidate :)
16:07:04 <e0ne> jgriffith, hemna: please, ping me if you need any kind of help with rally
16:07:06 <hemna> jgriffith, yah, I kinda want to create a rally gate test for brick as well
16:07:18 <DuncanT> We currently approve new jenkins jobs with no evidence of them passing, then they slowly get fixed up
16:07:21 <hemna> I'm just busy training my replacement this week.
16:07:24 <hemna> smh
16:07:26 <jgriffith> thanks e0ne
16:07:40 <DuncanT> It means there's no way of knowing which of the non-voting jobs you should pay any attention to
16:07:46 <smcginnis> DuncanT: Won't hiding them in experimental just make it worse (playing devil's advocate)
16:08:08 <jgriffith> hemna: OMG, not even going to try and respond
16:08:42 <e0ne> jgriffith, hemna: I'm ready to help with rally job if needed
16:08:49 <smcginnis> DuncanT: Still there?
16:08:50 <DuncanT> smcginnis: Until they pass at all? No, I don't think so. It isn't like most people work on fixing them, just a small group, who can run "check experimental" easily enough until the job passes at least once
16:09:08 <DuncanT> smcginnis: Sorry, injured finger, slow typing
16:09:12 <smcginnis> DuncanT: Do you have a list of jobs you'd like to move?
16:09:20 <smcginnis> DuncanT: Sorry. :)
16:09:39 <DuncanT> smcginnis: I think they're all passing right now, but there were a few a week and a half ago
16:09:47 <diablo_rojo> DuncanT: Swing dancing injury?
16:10:03 <hemna> e0ne, ok thanks man.
16:10:11 <smcginnis> DuncanT: So just looking for general consensus that we should do that if/when we get another batch of failing ones?
16:10:15 <DuncanT> gate-tempest-dsvm-neutron-identity-v3-only-full-nv I haven't seen passing, but I haven't checked in carefully
16:10:36 <smcginnis> DuncanT: I think that's fair. If it's being worked on, would be good to get working in experimental before polluting our normal results.
16:10:46 <DuncanT> smcginnis: I'm asking so that there's a consensus behind me if I go asking on the infra review as to whether the jobs actually work yet
16:10:54 <smcginnis> patrickeast: Didn't you have a patch for that identity v3 issue?
16:11:07 <scottda> If a test is failing for some great length of time, that sounds ok. But merging patches into infra repos can be a lengthy process.
16:11:12 <DuncanT> smcginnis: I'd like to be able to say we discussed it before I go off complaining
16:11:12 <e0ne> smcginnis, DuncanT: +1 on moving such jobs to experimental queue
16:11:16 <patrickeast> smcginnis: i dont think so
16:11:27 <smcginnis> DuncanT: +1 from me as well.
16:11:32 <DuncanT> Thanks
16:11:35 <smcginnis> patrickeast: OK, thought I saw something.
16:11:53 <patrickeast> smcginnis: haha, i'll have to check... i've got a bunch of random crap up for review
16:11:54 <jungleboyj> DuncanT: I think that makes sense.
16:12:01 <DuncanT> I don't promise to catch them all, but I'd like to change the culture of randomly failing tests if I can
16:12:11 <smcginnis> Gotta catch em all.
16:12:14 <smcginnis> :)
16:12:19 <hemna> DuncanT, I think there are many of those :(
16:12:20 <jungleboyj> smcginnis: +2
16:12:21 <diablo_rojo> smcginnis: :)
16:12:47 <smcginnis> DuncanT: Yeah, make sense. You've got backing on it if you do.
16:12:53 <DuncanT> I've also been gently poking 3rd party CI folks who've been failing a lot - netapp apparently have a plan in place to improve theirs for example
16:13:07 <DuncanT> smcginnis: Thanks. I'm done with this subject then
16:13:14 <smcginnis> DuncanT: Thanks!
16:13:17 <smcginnis> #topic Cinderclient issues to discuss
16:13:26 <smcginnis> scottda: Hey
16:13:32 <scottda> So, Deadline to release is R-5 Aug29-Sep02
16:13:33 <hemna> DuncanT, we got the wiki names in the drivers now
16:13:46 <smcginnis> Woot
16:13:51 <scottda> #link https://review.openstack.org/#/q/project:openstack/python-cinderclient
16:13:58 <bswartz> DuncanT: netapp has plans to invest a lot in making CI more reliable
16:14:04 <DuncanT> hemna: Yup, I've started playing with tooling around that :-)
16:14:29 <scottda> It'd be especially good to have client CLI support for features that are already in the server...
16:14:33 <DuncanT> bswartz: Great to hear. Anything that can be shared in terms of lessons learnt would be good to hear too
16:14:37 <scottda> so perhaps authors of the server code can help us get the client parts in.
16:14:39 <patrickeast> scottda: +1
16:14:53 <smcginnis> Related to that...
16:14:55 <smcginnis> #link https://review.openstack.org/#/q/project:openstack/python-brick-cinderclient-ext
16:15:14 <e0ne> smcginnis: thanks:)
16:15:20 <smcginnis> Huh, they're all merged now though. :)
16:15:20 <hemna> :)
16:15:21 <DuncanT> One thing that would be good for client reviews is some evidence of testing... since tempest doesn't cover any of these new calls
16:15:33 <scottda> Not much in brick-cinderclient that hasn't merged...
16:15:48 <e0ne> DuncanT: we could require functional tests
16:15:51 <DuncanT> I've been holding off on client reviews because I don't currently have time to test them
16:15:53 <scottda> Yeah, good to test as well as review client
16:16:08 <bswartz> DuncanT: right now we suspect that we're overloading our hardware with too many jobs executing in parallel
16:16:10 <DuncanT> e0ne: Not sure if that's practical, but would be nice
16:16:27 <DuncanT> bswartz: Servers or arrays?
16:16:40 <bswartz> 1 array, 40 jobs
16:16:56 <DuncanT> bswartz: Got you. Thanks.
16:17:19 <scottda> back to cinderclient?
16:17:23 <e0ne> DuncanT: I mean that we already have a solution to get cinderclient patches tested
* bswartz apologizes for being offtopic
16:17:37 <e0ne> DuncanT: but the functional test coverage for it is really low
16:17:50 <smcginnis> bswartz: :)
16:17:51 <DuncanT> e0ne: Do we have a good example functional test in there yet?
16:18:17 <smcginnis> Kind of related to that, would be good to get input on this: https://review.openstack.org/#/c/356177/
16:18:21 <diablo_rojo> DuncanT: Good question
16:18:21 <e0ne> DuncanT: everything we have is here https://github.com/openstack/python-cinderclient/tree/master/cinderclient/tests/functional
16:18:46 <e0ne> DuncanT: they cover only the CLI now :(
16:18:56 <DuncanT> e0ne: I'll take a look. It would be nice to be able to point people at a small number of good examples to emulate
16:19:24 <scottda> stevemar: Are you around? stevemar is working on OpenStack Client stuff and has volunteered to keep the cinder team informed on OSC stuff. And you can bug him as well .....
16:19:27 <smcginnis> Oh, here's the endpoint v3 I was thinking of: https://review.openstack.org/#/c/349602/
16:19:32 <stevemar> yo
16:19:35 <e0ne> DuncanT: my team is working on extending the functional tests. Ping me if you want something implemented faster
16:19:36 <smcginnis> stevemar: Thank you for that!
16:19:44 <stevemar> heyo!
16:19:48 <DuncanT> e0ne: It would be nice to have some sort of test that matched the client and server bits together
16:19:57 <tbarron> hi
16:20:09 <DuncanT> e0ne: Getting one example up ASAP would suit my purposes well
16:20:12 <stevemar> we had sheel rana working on a blueprint to fill the holes that were in OSC
16:20:16 <stevemar> but he's gone AWOL
16:20:25 <DuncanT> e0ne: I can review it, I'm not going to be able to code it ATM
16:20:38 <stevemar> luckily, huanxuan has decided to pick up the blueprint! :)
16:20:42 <e0ne> DuncanT: ok. I'll try to get patch proposed asap
16:21:03 <DuncanT> e0ne: Much appreciated
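[Editor's note] A rough idea of what a cinderclient CLI functional test could look like, in the spirit of the functional test tree e0ne links above. This is only a sketch: the helper methods used here (cinder(), _get_property_from_output()) and their signatures are assumptions, not a guaranteed API.

    # Hypothetical CLI functional test; helper method names are illustrative.
    from cinderclient.tests.functional import base


    class VolumeCreateDeleteTests(base.ClientTestBase):
        """Create a volume through the CLI, verify it, then delete it."""

        def test_volume_create_and_delete(self):
            # Assumed helper: runs `cinder <action> <params>` against a live
            # deployment and returns the command output for parsing.
            output = self.cinder('create', params='1 --name functest-vol')
            volume = self._get_property_from_output(output)
            self.addCleanup(self.cinder, 'delete', params=volume['id'])

            listing = self.cinder('list')
            self.assertIn(volume['id'], listing)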
16:21:10 <stevemar> you can see he's been pumping out OSC patches for volume support: https://review.openstack.org/#/q/owner:%22Huanxuan+Ao%22
16:21:36 <diablo_rojo> Our guys that were working on it for a while got pulled off too.
16:21:37 <stevemar> his latest patches could use some eyes from cinder folks: https://review.openstack.org/#/c/356384/1 and https://review.openstack.org/#/c/353931/ since they are not as straightforward
16:22:31 <stevemar> scottda: smcginnis i'll try to bring issues we're having to the meeting here, but otherwise i'm hoping i have time to create a compatibility matrix for cinderclient vs osc
16:22:32 <scottda> stevemar: Thanks for that. I find that OSC is not really on my radar, and that's probably true for most of us..
16:22:44 <stevemar> scottda: it's the bees knees!
16:22:46 <smcginnis> stevemar: That would be hugely useful.
16:22:52 <stevemar> smcginnis: right!
16:22:55 <jungleboyj> stevemar: That would be great!
16:23:12 <jungleboyj> So we have an idea where we are at and what needs to be done.
16:23:20 <stevemar> i'll mention the osc and the cinder CLI command
16:23:33 <stevemar> we've diverged in some ways, hopefully for the better
16:23:37 <diablo_rojo> smcginnis: Agreed
16:23:43 <hemna> fwiw, I had to stop using osc from git recently due to failures to configure keystone.  :(
16:24:00 <stevemar> hemna: that's cruddy
16:24:10 <stevemar> hemna: file a bug in osc, we can help out
16:24:22 <stevemar> we've heard good feedback from a UX perspective
16:24:27 <hemna> stevemar, ok will do.  it was easy to reproduce.  devstack wouldn't come up
16:24:34 <e0ne> can we just implement an OSC plugin inside cinderclient and not re-implement everything in OSC?
16:24:52 <e0ne> this plugin should be mandatory
16:24:52 <stevemar> e0ne: swift asked the same thing
16:25:08 <DuncanT> e0ne: I asked that too :-)
16:25:13 <stevemar> e0ne: and i think it's feasible, it's just code after all
16:25:16 <scottda> Yeah, we discussed at the summit..
16:25:35 <stevemar> just depends on where it lives
16:25:41 <e0ne> stevemar, DuncanT: we discussed it and I was not able to try implementing it :(
16:25:50 <DuncanT> One downside is that it risks missing the OS team's holistic view and standardisation of terminology
16:26:02 <e0ne> stevemar: I like how heatclient implemented OSC plugin
16:26:04 <smcginnis> DuncanT: Probably the biggest risk.
16:26:10 <DuncanT> AKA renaming metadata and breaking all the docs
16:26:11 <e0ne> all code is inside heatclient repo
16:26:12 <stevemar> DuncanT: righto!
16:26:54 <stevemar> e0ne: yes, all our plugins (there are many!) have code in-tree except for the big 6 services (identity, compute, volume, image, object, network) -- we have those in osc
16:26:56 <DuncanT> At which point OSC becomes a thin wrapper around a bunch of heavily disparate clients, and therefore entirely pointless
16:27:43 <stevemar> not sure i follow that one
16:28:02 <e0ne> DuncanT: OSC is only for CLI
16:28:11 <stevemar> e0ne: to answer your question -- i think its possible, but we risk losing consistency
16:28:37 <stevemar> e0ne: but you'd be inheriting a bunch of our stuff, so maybe y'all can just follow the established pattern
16:28:42 <DuncanT> e0ne: Yes, but unless terminology within the CLI is standardised, there's no point adding the extra layer of having a 'single' CLI
16:28:54 <e0ne> DuncanT:  +1
16:29:19 <e0ne> stevemar: we can try to do it if somebody has a time for this effort
16:29:42 <DuncanT> stevemar: How are you avoiding such terminology fragmentation now?
16:29:43 <e0ne> stevemar: plugin will help implement CLI faster
16:29:54 <stevemar> e0ne: yeah, it would involve some poking around, but it would look a lot like how heat client did it
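[Editor's note] The heatclient-style approach discussed here keeps the OSC plugin inside the client's own repo: cliff command classes plus entry points in setup.cfg. A hedged sketch of what an in-tree cinderclient plugin command might look like; the module path, entry-point names, and the client_manager attribute are assumptions:

    # Hypothetical module, e.g. cinderclient/osc/v2/volume.py
    from cliff import lister


    class ListVolume(lister.Lister):
        """List volumes ('openstack volume list')."""

        def take_action(self, parsed_args):
            # OSC hands plugins an authenticated client via client_manager;
            # the attribute name used here is an assumption.
            volume_client = self.app.client_manager.volume
            columns = ('ID', 'Name', 'Status', 'Size')
            data = ((v.id, v.name, v.status, v.size)
                    for v in volume_client.volumes.list())
            return columns, data

    # Registration would live in setup.cfg, roughly (shown as a comment):
    #
    #   [entry_points]
    #   openstack.cli.extension =
    #       volume = cinderclient.osc.plugin
    #   openstack.volume.v2 =
    #       volume_list = cinderclient.osc.v2.volume:ListVolume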
16:30:13 <stevemar> DuncanT: we publish dev guides in osc docs
16:30:22 <e0ne> stevemar: it's better than having two CLIs, IMO
16:30:41 <DuncanT> stevemar: But are the various plugins actually following it?
16:30:49 <smcginnis> scottda: Maybe we should move on to the last subtopic?
16:30:52 <erlon> stevemar: do you have an idea of how much is missing in OSC to cover all the functionality already in cinderclient?
16:31:03 <scottda> I'm ready.....
16:31:05 <DuncanT> stevemar: Is there any way to extract the command tree, including all the plugins?
16:31:07 <stevemar> DuncanT: i hope so, i'll admit we can't keep up with all the reviews, but i hope they are copying established patterns
16:31:21 <scottda> So, patch is up to deprecate support for 'cinder endpoints' #link https://review.openstack.org/#/c/349602/
16:31:26 <stevemar> erlon: i mentioned i'll get that info in a week or so
16:31:52 <stevemar> we can totally jump to another topic, i don't want to hold the bus up
16:32:08 <erlon> stevemar: mhm, I think we need to wait and see if it's worth working on the plugin then
16:32:08 <smcginnis> scottda: So the background on that is it was carried over from nova and it doesn't quite fit. But do we care enough to clean it up now?
16:32:20 <smcginnis> scottda: Is that the gist of it?
16:32:35 <scottda> yes.
16:32:45 <smcginnis> #link https://bugs.launchpad.net/cinder/+bug/1614104 Keystone v3 support bug
16:32:45 <scottda> and further complicated with this bug: https://bugs.launchpad.net/cinder/+bug/1614104
16:32:46 <openstack> Launchpad bug 1614104 in Cinder "Cinder endpoints throws error in Keystone V3" [Undecided,New] - Assigned to Jay Conroy (jayconroy)
16:33:05 <scottda> So, we'd want to fix that ^^ if we continue to support cinder endpoint.
16:33:23 <e0ne> we can fix it in any case
16:33:33 <smcginnis> scottda: We probably would need to fix it anyway, right. We'd have to deprecate the command so it will need to stick around a little while anyway.
16:33:34 <e0ne> deprecated!=not working
16:33:41 <scottda> yes, deprecation will take time, so we'll end up fixing that bug.
16:33:45 <DuncanT> e0ne: +1 - it's a bug, we should fix it if we can
16:34:05 <scottda> OK, forget about the bug. Do we want to deprecate that command?
16:34:09 <smcginnis> But good point that we'd be devoting time and resources to something that doesn't really need to be there.
16:34:26 <stevemar> smcginnis: hmm, i thought i fixed that bug... https://bugs.launchpad.net/python-cinderclient/+bug/1608166
16:34:26 <openstack> Launchpad bug 1608166 in python-cinderclient "cinder list endpoints results in parsing error" [Undecided,Fix released] - Assigned to Steve Martinelli (stevemar)
16:34:26 <smcginnis> I'm actually in favor of deprecation/removal.
16:34:36 <DuncanT> Has anybody seen a script that actually uses it? I've found one in our internal tools dump I've got, but it was a one-shot thing and can be trivially updated
16:34:56 <smcginnis> stevemar: Yeah, seemed like we already did something there. Something must have snuck in and broken it again or it's slightly different.
16:35:09 <stevemar> smcginnis: funky
16:35:14 <smcginnis> DuncanT: Yeah, should be trivial to change.
16:35:22 <DuncanT> Do we need functional tests for OSClient as well?
16:35:26 <smcginnis> Just switch everything over to osc, right stevemar? :)
16:35:38 <stevemar> DuncanT: those already exist in osc's tree
16:35:38 <jungleboyj> smcginnis: Not so fast!  :-)
16:35:49 <smcginnis> jungleboyj: :)
16:35:49 <stevemar> smcginnis: you wouldn't have to deal with all these bugs :P
16:36:22 <stevemar> nova had the same command and did deprecate it as well
16:36:25 <smcginnis> So I guess the question on the table is deprecate or keep maintaining the endpoint command.
16:36:28 <stevemar> for the same reasons
16:36:28 <DuncanT> stevemar: Do they run against the cinder side changes though?
16:36:32 <scottda> Probably not a big deal to deprecate. Just wanted to ask for feedback... We said we weren't ready to deprecate the cinderclient. But this is a bit different.
16:36:46 <stevemar> scottda: yerp
16:36:48 <e0ne> smcginnis: fix and deprecate it
16:36:49 <DuncanT> smcginnis: I've no objection to deprecating it
16:37:13 <jungleboyj> Sounds like the way to go ... especially if Nova has done that.
16:37:15 <smcginnis> scottda: Yeah, I see that as completely different. I don't want to deprecate the cinder client any time soon, but an individual command that doesn't make sense, I'm all for deprecation.
16:37:27 <jungleboyj> smcginnis: ++
16:37:32 <e0ne> smcginnis: +1
16:37:49 <scottda> ok. It was the cinderclient deprecation that caused me to put on the brakes. I'll remove my -1 and others can review, etc.
16:37:51 <smcginnis> Great, no major objections. Let's get it marked deprecated.
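[Editor's note] Deprecating the command rather than removing it outright means it keeps working for a cycle while warning loudly. A minimal sketch of what that could look like, assuming the command is implemented as a do_endpoints() shell function; the function name, message, and suggested replacement command are illustrative:

    import warnings


    def do_endpoints(cs, args):
        """Discover endpoints registered by the authentication service.

        DEPRECATED: consider `openstack catalog list` instead.
        """
        # Warn on every invocation, but keep the existing behaviour for at
        # least one full deprecation cycle before removal.
        warnings.warn("'cinder endpoints' is deprecated and will be removed "
                      "in a future release.", DeprecationWarning)
        # ... existing catalog-printing code stays unchanged here ...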
16:38:04 <scottda> I'm done. Thanks.
16:38:12 <smcginnis> Thanks scottda
16:38:27 <smcginnis> And now on to the fun stuff...
16:38:32 <smcginnis> #topic Driver deprecation policy
16:38:40 <jgriffith> haha
16:38:43 <e0ne> :)
16:38:48 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-dev/2016-August/101428.html ML Thread
16:38:51 * jungleboyj waits for fireworks
16:38:56 <e0ne> smcginnis: I like hemna's proposal
16:39:04 <stevemar> thanks all!
16:39:08 <smcginnis> My thoughts if anyone is interested: http://lists.openstack.org/pipermail/openstack-dev/2016-August/101568.html
16:39:46 <smcginnis> I do like the idea of marking unsupported.
16:40:01 <smcginnis> The question is whether that still breaks policy if we disable those by default.
16:40:11 <smcginnis> Since upgrades will break without making config changes.
16:40:18 <e0ne> smcginnis: let's change the policy
16:40:30 <jgriffith> e0ne: which one?
16:40:36 <DuncanT> Given that they're likely to break anyway, I think explicitly calling them out is good
16:40:53 <e0ne> jgriffith: does it matter?
16:40:57 <jgriffith> wait... let's back up a second :)
16:41:11 <DuncanT> I still support removal - backporting is easy enough for a savvy deployer, and a non-savvy deployer would be better off migrating before upgrade
16:41:17 <jgriffith> So there's a LOT of stuff in that thread and even a LOT of stuff in just Sean's posting
16:41:34 <jungleboyj> smcginnis: I just used the threat of removal to get exec attention.  :-)
16:41:35 <jgriffith> I think Sean touched on a version of each proposal that's feasible
16:41:43 <jgriffith> in his ML post
16:42:14 <jgriffith> After an awful lot of thought and consideration I felt that the third option, while not the one with the most teeth, was the most reasonable
16:42:29 <jgriffith> ie a new tag that marks the status of each driver
16:42:56 <jgriffith> I know a number of folks want the power of removal, and that makes sense... I totally get it and it's the largest stick you can have
16:43:17 <jgriffith> But it's also a pretty destructive stick the more I think about it... and not just for the driver being removed
16:43:30 <jungleboyj> jgriffith: Can we combine that with deprecation?
16:43:35 <smcginnis> jgriffith: And I guess to expand on that, tag it and also mark as deprecated so if they don't turn around we actually can remove them.
16:43:40 <smcginnis> jungleboyj: jinx
16:43:45 <jgriffith> jungleboyj: yes, and smcginnis yes :)
16:43:46 <jungleboyj> smcginnis: :-)
16:43:58 <jgriffith> would've been funny if I answered each of you differently :)
16:44:10 <smcginnis> jgriffith: Wouldn't have been the first time. :P
16:44:12 <smcginnis> JOKE!
16:44:13 <jungleboyj> I think it is very important that we don't make it appear that Cinder doesn't follow deprecation policy.
16:44:17 <DuncanT> jgriffith: Is leaving a deployer unable to access their volumes because the driver is broken actually better?
16:44:35 <jgriffith> DuncanT: you don't know that it's broken though
16:44:40 <jungleboyj> I also think we need to stick to our guns that unmaintained drivers can't stay in.
16:44:44 <jgriffith> DuncanT: and in some cases... yeah, it kinda is
16:44:49 <jgriffith> IMO anyway
16:44:57 <patrickeast> DuncanT: with the tag they would presumably get a warning that says "this driver is probably broken"
16:45:04 <patrickeast> so it would at least give a clue
16:45:09 <jgriffith> As a consumer/operator you chose poorly.. there are consequences
16:45:11 <jungleboyj> So, kind of like we are doing with XIV/DS8k right now: we have said they need to open-source their driver and have given them a grace period.
16:45:13 <patrickeast> s/would/should/
16:45:15 <DuncanT> patrickeast: Not until after they've upgraded, and it's a bit late by then
16:45:27 <jgriffith> but at the same time, we as an OpenStack community aren't going to rip the rug out from under your
16:45:28 <jgriffith> you
16:45:37 <jungleboyj> They are 'deprecated ' right now.  Will be removed in the next release if the problem isn't resolved.
16:45:40 <hemna> https://review.openstack.org/#/c/355608/
16:45:41 <patrickeast> DuncanT: true, maybe we document the list of drivers and their status for each release so it's not a surprise
16:45:45 <smcginnis> So mark as unsupported, check and log a huge warning when it's loaded, mark as deprecated, and if they don't start working on things remove it in the next release.
16:46:07 <jungleboyj> smcginnis: I think that is the best of both worlds.
16:46:11 <smcginnis> Easier on deployers who are already in the unfortunate situation of having a cloud running on deadbeat storage.
16:46:12 <diablo_rojo> smcginnis: That sounds reasonable
16:46:19 <hemna> check out that patch and try it out.
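[Editor's note] The approach summarized above (tag the driver as unsupported, log a loud warning at start-up, require an explicit opt-in to keep running it, and deprecate for removal) could look roughly like the sketch below. The attribute and option names here are assumptions; see the review hemna links for the actual proposal.

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)

    # Assumed option name: operators must opt in explicitly to keep running
    # a driver that has been flagged as unsupported.
    CONF.register_opts([
        cfg.BoolOpt('enable_unsupported_driver',
                    default=False,
                    help='Load a volume driver even though it has been '
                         'marked unsupported and deprecated for removal.'),
    ])


    class ExampleNonCompliantDriver(object):
        """Illustrative driver whose third-party CI fell out of compliance."""

        # Flipped to False (plus a deprecation release note) instead of
        # deleting the driver outright.
        SUPPORTED = False


    def check_driver_supported(driver):
        """Run at service start-up, before the driver is initialized."""
        if getattr(driver, 'SUPPORTED', True):
            return True
        LOG.warning('Volume driver %s is UNSUPPORTED and deprecated for '
                    'removal; its third-party CI is not meeting Cinder '
                    'requirements.', driver.__class__.__name__)
        if not CONF.enable_unsupported_driver:
            LOG.error('Refusing to initialize unsupported driver %s; set '
                      'enable_unsupported_driver=True to override.',
                      driver.__class__.__name__)
            return False
        return True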
16:46:22 <jgriffith> patrickeast: DuncanT the documentation of drivers and status was something we're supposed to be doing anyway :)
16:46:33 <patrickeast> jgriffith: heh yea, supposed to
16:46:40 <jgriffith> patrickeast: :)
16:46:42 <DuncanT> patrickeast: A big stonking list of 'possibly broken, really, upgrading your cinder version might eat all your data' warning in the release notes for sure
16:46:44 <smcginnis> hemna: So the only difference from what we discussed before I think is not needing to edit the config to say to run it anyway.
16:47:08 <jgriffith> DuncanT: I think that might be a little dramatic in reality
16:47:10 <hemna> well, I do like forcing admins to do that to re-enable
16:47:19 <DuncanT> jgriffith: Good.
16:47:29 <smcginnis> For non-compliant drivers, rather than a removal patch, we put up a patch to add the flag to their driver and mark it deprecated.
16:47:40 <DuncanT> jgriffith: Actually data-loss is really dramatic, every time
16:47:43 <smcginnis> hemna: Yeah, but then we have the "grenade" issues of broken upgrades.
16:47:48 <jgriffith> DuncanT: if you as an operator are aware of the tag, and note that it's not in the state it should be at release... then I would hope you'd do some testing
16:48:02 <jgriffith> DuncanT: but we're not going to *eat* volumes off the back end
16:48:10 <smcginnis> nom nom
16:48:13 <hemna> smcginnis, it won't affect the grenade tests unless we mark LVM as unsupported :)
16:48:13 <jgriffith> DuncanT: there's a difference between data LOSS and data UNAVAILABLE
16:48:16 <DuncanT> jgriffith: Making the tag obvious helps a lot, both the operation and the stick that it represents
16:48:18 <flip214> well, if they're tasty with a little cheese?
16:48:36 <DuncanT> jgriffith: Data unavailable is still quite a big drama
16:48:38 <smcginnis> hemna: Well, not the actual grenade tests we run in gate, but it would break the principle that grenade is trying to enforce.
16:49:05 <jgriffith> DuncanT: sure, but that quite frankly is between vendor and customer... or perhaps vendor, customer and distro
16:49:06 <hemna> it's a grey area IMO.  we used to completely delete drivers.....
16:49:16 * jgriffith avoids inappropriate joke about the number 3
16:49:23 <smcginnis> hemna: Right, it's completely better overall.
16:49:26 <jgriffith> jungleboyj: quiet!
16:49:41 <smcginnis> I think we'd still get called out on it though, and I'd rather just set this straight and be done with it.
16:49:42 <jungleboyj> jgriffith: You don't want to be around when number 3 happens.  :-)
16:49:52 <hemna> the TC folks seemed ok with it
16:49:54 <hemna> fwiw
16:50:08 <jgriffith> DuncanT: I'll raise my point again about the fact that distros are requiring a separate qual/test process for every driver on every release anyway
16:50:15 <hemna> I dunno, if we remove that, then the stick to whack vendors becomes very.....limp.
16:50:35 <jgriffith> hemna: stop that
16:50:39 <hemna> :P
16:50:40 <DuncanT> jgriffith: They've got no choice right now, the upstream testing is pretty weak
16:50:44 <scottda> hemna: nice visual
16:50:58 <jgriffith> DuncanT: BS... they have me run the EXACT same upstream tests
16:51:04 <jungleboyj> hemna: Ok with which?
16:51:05 <patrickeast> haha
16:51:06 <jgriffith> DuncanT: there's nothing special or different
16:51:09 <patrickeast> jgriffith: +1
16:51:14 <jgriffith> not a damn thing
16:51:18 <DuncanT> jgriffith: ok, the distro tests are pretty weak too then ;-)
16:51:23 <jgriffith> it's just a bull shit process that causes me extra work
16:51:28 <jgriffith> DuncanT: :)
16:51:42 <jgriffith> DuncanT: but that's a ridiculous argument anyway...
16:51:47 <smcginnis> OK, disabling by default aside, is everyone in agreement to set a flag in the driver and mark as deprecated?
16:51:49 <jungleboyj> hemna: You would be surprised the damage you can do with a limp stick.  ;-)
16:51:56 <smcginnis> Oy
16:52:06 <DuncanT> jgriffith: Honestly, I don't have strong answers there, distros have a history of being a pain and requiring work... you can always choose to not be certified
16:52:07 <jgriffith> DuncanT: You're saying "remove them cuz they don't run upstream tests" but then saying "upstream tests are weak and aren't good enough anyway"
16:52:35 <jgriffith> DuncanT: I don't have nice things to say on that topic so I"m going to do as I was taught as a child and say nothing about it ;)
16:52:43 <jungleboyj> smcginnis: Yes, I think so.
16:52:47 <DuncanT> jgriffith: In house, we found passing upstream tests was not a good indicator that it will actually work in a real system
16:52:58 <smcginnis> jungleboyj: Thank you :)
16:53:01 <DuncanT> smcginnis: I still prefer removal, and fixing the tag
16:53:03 <jungleboyj> smcginnis: At least I thought we were.
16:53:15 <jgriffith> DuncanT: understood, but not really germain to this discussion I don't think
16:53:15 <DuncanT> smcginnis: But I'm not going to fight it
16:53:19 <jungleboyj> DuncanT: I agree, but are we going to get widespread agreement on that?
16:53:28 <patrickeast> smcginnis: i'm on board
16:53:54 <smcginnis> DuncanT: After trying to change the stable policy (and the SS that triggered) I'm hesitant to push for an openstack wide tag change.
16:53:56 <jgriffith> DuncanT: don't get me wrong... I have no real problem with removal... but I do think that there are some consequences that we would have that I don't want to deal with
16:54:24 <DuncanT> smcginnis: SS can be really healthy though, things need reexamining from time to time
16:54:31 <hemna> removal is a good stick for vendors, but it's awful for deployments.
16:54:35 <jgriffith> DuncanT: like it or not the deprecation policy tag exists, and if we lose that it *could* have consequences for drivers in Cinder.
16:54:46 <jgriffith> DuncanT: at least for those that don't belong to a distro
16:55:01 <smcginnis> DuncanT: True. But other than a small number of supporters for our policy up to this point, I didn't get the feeling I was getting much buy in on the idea.
16:55:07 <jgriffith> DuncanT: things like "we're the only driver we can guarantee will be there next release"
16:55:20 <hemna> I think we should be punishing vendors for not supporting their drivers and give deployments a chance to get off that vendor's backend.
16:55:26 <hemna> hence the proposal I made
16:55:28 <DuncanT> jgriffith: Yeah, I guess the best bet is to follow the policy, put flashing lights in the release notes and review the tag definition in slow time
16:55:35 <DuncanT> hemna: +1
16:55:51 <flip214> <blinking>
16:55:53 <jgriffith> hemna: I think we should publish their contributions... including reviews, code submissions to core etc and show comparisons :)
16:55:55 <flip214> in the release notes
16:56:02 <hemna> jgriffith, :)
16:56:07 <hemna> jgriffith, +1
16:56:09 <jungleboyj> DuncanT: +1
16:56:27 <jungleboyj> jgriffith: Rough. ;-)
16:56:35 <smcginnis> OK, so if we have general agreement towards tagging the driver and deprecating it, the next question is the automatic disabling of the driver without an explicit config file setting saying to load it anyway.
16:57:02 <hemna> re: limp stick
16:57:20 <smcginnis> AKA, the limp stick issue. :D
16:57:34 <jungleboyj> smcginnis: I think that firms up the stick a bit.
16:57:53 <scottda> I'm glad HR doesn't pay attention to IRC....
16:58:09 <hemna> scottda, shhh!
16:58:12 <jungleboyj> scottda: Chicken.  ;-)
16:58:14 <patrickeast> scottda: no worries, these logs aren't logged or anything :p
16:58:25 <diablo_rojo> patrickeast: Lol
16:59:00 <diablo_rojo> Two min left btw
16:59:06 <diablo_rojo> Make that one
16:59:07 <patrickeast> smcginnis: so i'm not sure how much that config option really buys us other than as an annoyance for deployers
16:59:22 <patrickeast> smcginnis: at the point they've upgraded and still have a bad driver, they are going to use it anyway
16:59:39 <jungleboyj> smcginnis: I think we should have it.  It will get some pressure on the vendor from customers that will be made more aware of what is going on.
16:59:42 <hemna> patrickeast, that's kinda the point really
16:59:54 <smcginnis> #link https://review.openstack.org/#/c/355608/ Driver tag patch
17:00:04 <hemna> since we don't remove the drivers, we don't have much of anything to really annoy the vendors to force them to comply
17:00:10 <smcginnis> Since we're out of time, please comment in the review. ^^
17:00:12 <hemna> re: https://goo.gl/kHnWVN
17:00:27 <smcginnis> Time's up. Thanks everyone
17:00:32 <smcginnis> hemna: Seriously? :)
17:00:36 <smcginnis> #endmeeting