16:00:01 #startmeeting Cinder
16:00:02 Meeting started Wed Jun 29 16:00:01 2016 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'cinder'
16:00:06 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind jbernard _alastor_ vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao
16:00:11 hi
16:00:12 hi
16:00:13 Hello!
16:00:14 Hello
16:00:14 <_alastor_> ol
16:00:14 hi
16:00:15 #chair DuncanT
16:00:15 Current chairs: DuncanT smcginnis
16:00:16 hi
16:00:18 hi
16:00:36 hola
16:00:42 hi
16:00:43 I probably have to get going part way through the meeting, so DuncanT has been nice enough to offer to take over for me.
16:00:47 Hi
16:00:48 Oi!
16:00:54 o/
16:00:56 #topic Announcements
16:01:06 hi
16:01:07 hi
16:01:09 Just the usual stuff today...
16:01:19 o/
16:01:20 Hello :)
16:01:21 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:01:34 I'd love to see some more of those drivers out of the way.
16:01:39 We've gotten a couple through.
16:01:45 I think there are a few more that are close.
16:01:46 hi
16:01:51 hi
16:01:55 And probably a few others that aren't going to make it.
16:02:22 hi
16:02:34 Still waiting on CI for several of the new drivers
16:02:39 If you have a driver up for review - please make sure you are responding to review feedback and making sure CI is running.
16:02:43 DuncanT: +1
16:02:55 That seems to be the biggest challenge for most of them.
16:03:07 #link https://etherpad.openstack.org/p/newton-cinder-midcycle Midcycle planning
16:03:27 Please add any midcycle topics to the etherpad.
16:03:31 o/
16:03:43 And if you are going and haven't yet - please add your info to the etherpad for planning.
16:03:57 scottda: You need all non-US folks' info, right?
16:04:36 <_alastor_> Since the tintri site has been down for the past week, I've been building a tool to replace it. It's just a simple cmdline one to find the status of a Third party CI, but I'm taking feature requests
16:04:40 Yeah, security flagged it last time as a US state department rule...
16:04:46 But mainly for certain countries.
16:04:54 Sorry about that.
16:05:06 _alastor_: Awesome!
16:05:16 scottda: Have you been getting everything you need on that front?
16:05:26 smcginnis: Yes, seems ok so far.
16:05:31 scottda: OK, good.
16:05:42 hi
16:05:45 _alastor_: Would love to see that once you have it running. I think DuncanT would too.
16:06:00 * smcginnis marks tbarron tardy...
16:06:01 :)
16:06:10 :)
16:06:18 _alastor_: Can we chat after the meeting please? :-)
16:06:26 <_alastor_> DuncanT: sure
16:06:38 #topic Blueprint discussion
16:06:43 Just one on the list this week.
16:06:53 #link https://blueprints.launchpad.net/cinder/+spec/feature-classification Feature classification blueprint
16:07:21 Would like feedback from everyone if this is something desired.
16:07:34 It's being done for Nova, but I'm not so sure about it.
16:07:39 *click*
16:07:47 Or at least don't have a clear picture in my head of what this means for us.
16:08:22 smcginnis: we probably should wait and see how it works in Nova first
16:08:49 xyang: Yeah, that might be good to wait for it to be fully adapted there first and be able to see.
16:08:56 We only have 2 completely optional features (CGs and replication) but there are a few other bits (supported QoS options etc) that are probably interesting to somebody choosing storage
16:09:13 it would be good to have a list of a few examples for this blueprint
16:09:38 don't we have more variation within a feature across backends than nova would?
16:09:51 tbarron: Potentially.
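[Editor's note: _alastor_'s CI-status tool (16:04:36) isn't public in this log, so the following is only a rough illustration of the kind of checks such a cmdline tool might perform — "is-reporting within x days" and a failure ratio, as mentioned here and later at 16:42:23/16:42:46. All names and the input format are hypothetical.]

```python
from datetime import datetime, timedelta

def ci_stats(reports, now, window_days=14):
    """Summarize third-party CI health from scraped review comments.

    `reports` is a list of (ci_name, timestamp, passed) tuples -- a
    stand-in for whatever a real tool would pull out of Gerrit.
    """
    cutoff = now - timedelta(days=window_days)
    raw = {}
    for name, when, passed in reports:
        s = raw.setdefault(name, {"runs": 0, "failures": 0, "last_seen": when})
        if when > s["last_seen"]:
            s["last_seen"] = when
        if when >= cutoff:                 # only count runs inside the window
            s["runs"] += 1
            if not passed:
                s["failures"] += 1
    summary = {}
    for name, s in raw.items():
        ratio = s["failures"] / s["runs"] if s["runs"] else None
        summary[name] = {
            "reporting": s["last_seen"] >= cutoff,  # seen within the window?
            "failure_ratio": ratio,
        }
    return summary
```

A silent CI shows up with `reporting: False` and no failure ratio, which is exactly the "vanished entirely" case complained about later in the meeting.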
16:10:05 xyang: ++ Seems good to wait and see how this plays out for Nova.
16:10:15 I think our support matrix looks much simpler compared to Nova's
16:10:22 Not sure if a spec would be overkill, but I would like more specifics.
16:10:24 hi
16:10:27 just look at the current ones in wiki
16:10:31 and i worry about "eventual goal is to automate this list from some third party CI reporting mechanism" when the 3rd party CI isn't yet reliable
16:10:52 smcginnis: Doesn't this go along with other work that has been discussed in providing an automated way to show what drivers support what features?
16:10:52 Yeah, seems weird. These features actually work: xxxx, These features suck: xxxx
16:10:54 tbarron: And part of that automation I planned with the interface checks I've added.
16:11:00 jungleboyj: Yep
16:11:14 scottda: Hah, we'll make sure that's how it gets published. ;)
16:11:16 smcginnis: Ok, so I am surprised that you aren't sure about it. What am I missing?
16:11:18 smcginnis: did you already have some driver interface check?
16:11:29 smcginnis: yeah, get that going first
16:11:30 xyang: Just the base functionality so far.
16:11:49 But it should now be easy to extend that to generate the support matrix.
16:12:10 There's a new compliance job that verifies the minimum required interface for all drivers.
16:12:15 smcginnis: Thanks to your fancy driver interface work :)
16:12:31 Of course that doesn't verify they actually work right, but it at least makes sure all the right methods are there.
16:13:09 So this matrix will be backed by a CI run that tests all the drivers for the different features?
16:13:33 diablo_rojo: Not yet.
16:13:50 smcginnis: I mean like, is that the plan?
16:13:51 I'll ask for more details in the bp, but for now I'm just going to leave it as is.
16:13:55 Thanks for all the input.
16:13:55 It'd be kind of embarrassing to list features we've tested vs. features we've merged, but not really tested and don't have good automated tests
16:14:05 scottda: Very true!
16:14:23 Speaking of which...
16:14:26 #topic Create from volume/snapshot fixes
16:14:31 erlon: Hey
16:14:37 Hey
16:14:42 So, I'm trying to make a generic fix for all clone from volume/snapshot bugs.
16:14:53 https://review.openstack.org/#/c/333037/
16:15:05 erlon: what is the bug? that the size in the volume is not respected?
16:15:15 but why does this change *remove* all the size changing code?
16:15:16 Just a sketch of the idea
16:15:20 flip214: yes
16:15:22 erlon: I was going to let you finish before bombarding you with questions :)
16:15:41 jgriffith: thanks! :)
16:15:48 The idea of the fix is to extend the volume (if it is bigger than the source) in the manager, not in the driver like some drivers are fixing the problem now.
16:16:09 flip214: so, remove the fix in the drivers
16:16:18 erlon: the disadvantage is that it does an extra manager=>driver "context switch", which might affect performance
16:16:20 The problem is that there are some drivers that do not clone the volume using the smaller size, i.e., they already clone the volume using the right size (like hemna's, LVM, maybe others), so an extend in the manager would break those drivers.
16:16:22 understood.
16:16:33 erlon: Sorry, I haven't looked, but I remember last time we discussed this there were some drivers that could more efficiently create the new volume at the larger size.
16:16:39 erlon: Does this account for that?
16:17:01 flip214: why is there a context switch? it doesn't seem to me that there is a context switch
16:17:18 erlon: The hang up with a generic fix was it wasn't easy to tell if the new volume was the right size or not without changing the driver method.
16:17:22 erlon: "context switch" as in a REST or other API call to the underlying storage.
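[Editor's note: the compliance job described at 16:12:10 only verifies that the required methods exist, not that they work. A minimal sketch of that kind of introspection check follows — the method list and class names here are hypothetical; the real list lives in Cinder's driver interface code.]

```python
# Hypothetical minimum driver interface -- the authoritative list is
# maintained in Cinder's interface checks, not here.
REQUIRED_METHODS = ("create_volume", "delete_volume",
                    "create_snapshot", "delete_snapshot")

def missing_methods(driver_cls):
    """Return the required methods a driver class fails to provide."""
    return [name for name in REQUIRED_METHODS
            if not callable(getattr(driver_cls, name, None))]

class CompleteDriver:
    # Toy driver implementing the full minimum interface.
    def create_volume(self, volume): ...
    def delete_volume(self, volume): ...
    def create_snapshot(self, snapshot): ...
    def delete_snapshot(self, snapshot): ...

class PartialDriver:
    # Toy driver that would fail the compliance job.
    def create_volume(self, volume): ...
```

As noted in the meeting, passing such a check says nothing about whether the methods actually work — that's what third-party CI is for.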
16:17:24 smcginnis: yes
16:17:38 if the new size already gets sent along with the snapshot-restore it's more efficient
16:17:39 flip214: shouldn't be, it's still in the manager
16:18:02 flip214: there's no context switch, it happens in the context of the task_flow
16:18:12 flip214: it doesn't leave the task
16:18:14 erlon: can I make a suggestion? I don't want to interrupt you if you have more to say.
16:18:18 erlon: OK, sounds good from a high level. I'll try to take a look later today. More likely tomorrow though.
16:18:33 erlon: "context switch" as in an additional REST call to the storage driver that's being used to actually provide storage.
16:18:34 jgriffith: go ahead
16:18:40 the general idea here does fit how i would expect things to work
16:19:04 One idea is to make those drivers always create the volume at the size of the source, then a generic extend() would work for all. Another option could be to have a flag in the driver/volume, so the manager knows how the driver behaves in clone operations.
16:19:07 erlon: so I have some concerns about pushing more and more in to the manager. This has created some issues for us in the past
16:19:21 erlon: that being said, I definitely see the advantage here
16:19:34 +1 for the flag
16:19:47 erlon: I'd suggest we put the logic in the base driver
16:19:59 erlon: and NOT the manager
16:20:16 erlon: or in tflow manager for that matter
16:20:16 jgriffith: it makes sense,
16:20:21 jgriffith: mhm, I have to connect the dots in this approach
16:20:43 erlon: so the beauty of that is that a driver can choose to override the call if they have a better way of doing it
16:20:43 #link https://review.openstack.org/#/c/333037/ Proposed change
16:20:53 jgriffith: +1 , i think base driver is the right place for this
16:21:00 jgriffith: the implementation in flow_manager seems more straightforward to me right now
16:21:06 ah, so that would be a "restore-snapshot-with-new-size" API?
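[Editor's note: jgriffith's suggestion — default clone-then-extend logic in the base driver, overridable per backend — might look roughly like this. All class and method names are hypothetical, not the actual Cinder driver API.]

```python
class BaseDriver:
    """Sketch of the overridable base-driver approach (hypothetical API)."""

    def create_cloned_volume(self, volume, src):
        # Generic path: clone at the source's size, then extend if the
        # new volume was requested bigger.  Backends that can clone
        # directly to the target size simply override this method and
        # skip the extra extend call.
        self._clone_same_size(volume, src)
        if volume["size"] > src["size"]:
            self.extend_volume(volume, volume["size"])

    def _clone_same_size(self, volume, src):
        raise NotImplementedError

    def extend_volume(self, volume, new_size):
        raise NotImplementedError


class LoggingDriver(BaseDriver):
    """Toy backend that records which calls the base class makes."""

    def __init__(self):
        self.calls = []

    def _clone_same_size(self, volume, src):
        self.calls.append(("clone", src["size"]))

    def extend_volume(self, volume, new_size):
        self.calls.append(("extend", new_size))
```

This keeps the fix out of the manager/taskflow layer while still giving every driver correct sizing by default — the trade-off the meeting settles on.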
16:21:17 erlon: that's because you've never had to update or fix a bug in that layer :)
16:21:32 ;)
16:22:03 jgriffith: ok, :), I'll figure out how to do it, I'll ping you if I have any blocker
16:22:52 erlon: thanks for the explanation!
16:23:01 erlon: I'm happy to help and talk through it with you if you like
16:23:02 flip214: no problem!
16:23:31 jgriffith: thanks, I'll spend some time thinking and get back to you
16:23:56 erlon: Anything else at this point?
16:23:56 smcginnis: I think for me that is it
16:24:00 erlon: OK, thanks!
16:24:05 #topic Proposed cinder.conf changes for defining/configuring backends
16:24:12 patrickeast: You're up
16:24:16 hey
16:24:35 ok so, i put some proposals up for adjusting how we allow for defined backends
16:24:44 https://blueprints.launchpad.net/cinder/+spec/shared-backend-config
16:24:48 ^ has links to code and spec
16:24:49 #link https://review.openstack.org/#/c/330767/ Spec
16:25:06 #link https://blueprints.launchpad.net/cinder/+spec/shared-backend-config Blueprint
16:25:26 I've already looked over the spec and I think it sounds like it could help simplify the config file a lot
16:25:34 i bring it up at the meeting mainly to get some eyes on it and as a heads up that folks don't like using DEFAULT for volume backends
16:25:50 i've _really_ wanted to fix this for a while now
16:25:54 i know for myself and several others it has been a big pain for deployment tooling and just explaining how it works
16:26:02 We've talked about doing this before, good to see it moving ahead
16:26:04 but the approach i had in mind was the same one that Gorka suggested in the spec
16:26:12 eharney :+
16:26:12 patrickeast: I'd rather we used the [DEFAULT] section
16:26:16 ah yea, so i thought about that first
16:26:27 I thought there were issues with that.
16:26:28 It would be good to get a little separation of volume backend options from the more general options
16:26:34 and the main reason i didn't go down that path initially is that imo it makes things *more* confusing
16:26:37 Something with oslo_config or something?
16:26:43 diablo_rojo: But then [DEFAULT] does not mean default
16:26:45 patrickeast: Yeah, DEFAULT isn't good.
16:26:52 IMO, we should implement such a feature in oslo.config first
16:26:56 trying to explain that sometimes DEFAULT does what you want and sometimes it doesn't, depending on some other config option
16:26:58 I'm not so sure about gorka's approach - there are some things that are system wide and don't make sense to be inherited per backend
16:27:00 * geguileo wonders how the DEFAULT section would be confusing...
16:27:02 everyone expects DEFAULT to mean "default", pretty much anything else is confusing
16:27:06 geguileo: Well it does, it's just a more generic default.
16:27:06 geguileo: We did discuss this in Tokyo and decided to make it explicit in its own section.
16:27:10 e0ne: +1
16:27:12 is just as bad as explaining when config items should go into backend sections or DEFAULT
16:27:14 diablo_rojo: ++
16:27:36 Driver defaults and other defaults. Kinda.
16:27:46 e0ne: yea, i looked at that too a little bit, i think maybe as a longer term solution to simplify the code we use
16:27:56 patrickeast: The idea would be that DEFAULT worked for everything
16:28:00 patrickeast: Including backends
16:28:02 I'd call it 'backend_shared' rather than 'backend_defaults' to remove a little of the confusion, but that's bikeshedding
16:28:06 geguileo: right, i get that
16:28:14 So no explanation necessary
16:28:18 eharney: patrickeast hehe... I tried to fix it and FAILED... kudos patrickeast
16:28:19 We need to have consistent behavior. People have been confused about the driver sections and DEFAULT for a long time.
16:28:42 jungleboyj: i'd rather fix it to have predictable behavior, we just have to be consistent w/ handling upgrades
16:28:50 Changing the semantics of DEFAULT is a serious upgrade pain point
16:28:51 consistent but confusing is not a huge win
16:28:57 eharney: Good point.
16:29:06 geguileo: so the issue is that it's not always the case... i've spent literally hours explaining to doc folks, deployers, and guys working on deployment tooling when config options need to go where
16:29:20 geguileo: and it's like a cross release thing, backwards compatibility aside
16:29:33 I think one of the points for a new config section is if something was in DEFAULT and not being applied, then you upgrade and suddenly that behavior changes, that could be bad.
16:29:37 geguileo: having to learn which version of openstack means DEFAULT does something different is confusing
16:29:53 geguileo: so a new section that only exists in newer ones is, imo, less cofusing
16:29:59 confusing*
16:30:03 does that make sense?
16:30:07 smcginnis: That is going to take thought to deal with.
16:30:08 Can we deprecate the non-multi-backend style config file this release? So that we've options in the future, whatever we decide for this. It's a terrible idea to use it anyway
16:30:13 patrickeast: I think it's as confusing, but I don't have as much experience as others, so...
16:30:18 DuncanT: +1
16:30:21 DuncanT: please
16:30:26 there's a standard deprecation process for config options
16:30:32 DuncanT: yea there is a patch up for that already
16:30:38 I was going to ask if there was a way to start migrating people and deprecating the options that need to be changed.
16:30:42 we could also put some logic in something like a cinder-manage tool or something maybe
16:30:45 patrickeast: There is? Awesome
16:30:51 DuncanT: the shared config part is kind of a secondary patchset
16:31:02 DuncanT: https://review.openstack.org/#/c/335135/
16:31:04 DuncanT: +1
16:31:30 if we made the new config section [GENERIC] or something instead of default, and had it override [DEFAULT], we could just lean toward not having a [DEFAULT] section at all as we move forward
16:31:43 would that be less confusing than DEFAULT + backend_defaults?
16:31:44 patrickeast: I'll go review that straight after the meeting, since it really needs to be in this release (because I don't have a time machine to put it in the last one or the one before)
16:32:10 eharney: dunno, we can try it out
16:32:20 anyone going to the ops meetup?
16:32:25 we need a focus group XD
16:32:52 #link https://review.openstack.org/#/c/335135/ Non-multibackend deprecation
16:33:00 eharney: Would need to put thought into the name. GENERIC is confusing.
16:33:02 eharney: What about options that are in DEFAULT now and need to be common across backends? (like RPC driver options)
16:33:07 eharney: I like your idea
16:33:27 [SERVICE] [DRIVERS] [backenda]...
16:33:46 so one other option, not sure if i put it up as an alternative in the spec, is to just give up and do a versioned cinder conf
16:33:49 I think we'd be the only project NOT using default though.
16:33:50 eharney: the only problem is in the naming, since every project uses "DEFAULT" it could lead to bad user exp
16:33:52 we roll out a v2 cinder.conf format
16:34:00 and just do whatever we want to make it nice
16:34:01 smcginnis: Jinx!
16:34:04 :)
16:34:10 jgriffith: that's why i liked the other idea of adding an option to change the behavior of DEFAULT.
16:34:16 patrickeast: XML based
16:34:19 JK!
16:34:21 noooo
16:34:29 eharney: yeah, that might be mo-betta
16:34:46 Do other projects all use an inherited DEFAULT?
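[Editor's note: for a concrete picture of the lookup order being debated — the backend's own stanza first, then a shared stanza, with [DEFAULT] no longer leaking into backends — here is a toy model using Python's stdlib configparser. The [backend_defaults] name comes from the spec under review; the option values are made up.]

```python
import configparser

SAMPLE = """
[DEFAULT]
debug = true

[backend_defaults]
use_chap_auth = true
max_over_subscription_ratio = 20.0

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

[ceph-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
max_over_subscription_ratio = 1.0
"""

# Point configparser at a dummy default-section name so [DEFAULT] is
# parsed as an ordinary section and does NOT leak into every stanza
# (modeling the "DEFAULT stops meaning backend default" proposal).
cfg = configparser.ConfigParser(default_section="_unused_")
cfg.read_string(SAMPLE)

def backend_opt(backend, opt):
    """Proposed precedence: backend stanza > [backend_defaults].
    [DEFAULT] is reserved for non-backend (service-wide) options."""
    if opt in cfg[backend]:
        return cfg[backend][opt]
    return cfg["backend_defaults"].get(opt)
```

So `ceph-1` keeps its own override while `lvm-1` inherits the shared value, and a service-wide option like `debug` never bleeds into backend configuration.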
16:34:52 [DEFAULT] use_new_style_thang=True
16:35:09 alright, so i'll put up a patch that does that and we can play around with it
16:35:16 the problem with something like "backend_defaults" is that i think options in there that aren't really for backends/drivers will still affect things
16:35:17 Or at least nova and manila?
16:35:26 DuncanT: AFAIK, oslo.config doesn't support it, so there is no generic implementation
16:35:27 luckily this code is all pretty small... so we can try different things
16:35:33 DuncanT: Shoot, gotta go. Can you take over?
16:35:41 Yup
16:35:46 DuncanT: Thanks!
16:35:57 eharney: ah, so the implementation is only in volume/configuration.py; in theory only volume drivers are using it
16:36:02 All action items now assigned to me I guess. :)
16:36:07 eharney: or at least that is what is currently proposed
16:36:27 patrickeast: deployers won't know which options work there and which don't
16:36:27 geguileo: i think it's worth trying out the approach you are proposing
16:36:47 patrickeast: But do you think it will be better for users?
16:36:55 eharney: true, and i guess saying anything that worked in the multi-backend stanza doesn't really help much
16:37:10 patrickeast: Because what I think makes more sense may not be what users think
16:37:11 geguileo: ehh not sure, i'm not convinced either way is really great
16:37:15 but both are better than what we have now
16:37:16 lol
16:37:21 patrickeast: +1
16:37:40 I totally agree, either way is better than what we have now
16:38:05 Given we've got enthusiastic agreement for a change, shall we move on?
16:38:12 ok so i don't wanna take up too much more time, i'll code up a couple of alternatives (all this is a small amount of changes)
16:38:16 DuncanT: +1
16:38:29 i'll maybe put something on the agenda for the midcycle and we can pick a flavor
16:38:30 #topic Third party CI
16:39:34 So I put this one on the agenda. Sadly HPE has temporarily taken up most of my time with non-cinder work and that will be true for several more months, so I've not been able to do any chacing
16:39:37 chasing
16:40:04 Some CIs fail instantly on every patch, some fail on most patches, some seem to have vanished entirely
16:40:30 Several driver patches recently have 3 or more rechecks needed for a pass, every revision
16:40:45 :(
16:40:57 I'm hoping _alastor_'s work can help highlight these cases, but something clearly needs sorting
16:41:31 i wonder if it's a symptom of the infrastructure getting worse, or just not being maintained?
16:41:55 In my case it's just a matter of shit quit working :(
16:41:57 I'm not sure, I was kind of hoping some of the CI maintainers might chip in here
16:42:09 so for mine, i think there is a problem with iscsi... somewhere..
16:42:10 jgriffith: Lol
16:42:14 haven't figured out exactly why... but a rewrite/redesign is in progress on my side
16:42:16 not sure if it's my testbed or in cinder/brick
16:42:20 jgriffith, Shit quit working and network reshuffling here.
16:42:22 jgriffith: Cinder shit broke or random supporting shit broke?
16:42:23 <_alastor_> DuncanT: I haven't gotten rechecks yet. But I can certainly add it. Right now I've got features like "is-reporting within x days" and "number of reports within x days" and the like
16:42:28 ours seems to have broken on one test case.
16:42:29 jgriffith, plus smcginnis rebooted something.
16:42:38 DuncanT: nothing to do with Cinder in my case
16:42:46 <_alastor_> DuncanT: also failure ratio
16:42:48 jgriffith: Thanks, that's useful data
16:42:49 DuncanT: upgraded OpenStack, shit went flaky
16:42:51 IBM is converging all of our CIs to one lab and monitoring it more closely, so hopefully our results will be getting better.
16:42:58 * jgriffith needs to stop using the s word
16:43:15 jgriffith: Schoolboy error. Just run kilo and be happy ;-)
16:43:22 DuncanT: I'm seeing things like threads dying, and my nova instances puking
16:43:29 ie the instances running CI
16:43:59 puking as in shutting down and going into Error state...
16:44:13 jgriffith: Yuck.
16:44:22 ouch, what configuration are you using?
16:44:30 yeah, I have obviously screwed something up
16:44:48 patrickeast: I actually switched over to using "real" bits
16:44:50 RDO
16:44:56 should've stuck with devstack :)
16:45:02 oh interesting
16:45:13 i'm using mitaka RDO for mine right now and it's pretty good so far
16:45:14 honestly though, I'm sure it's something I have screwed up
16:45:58 DuncanT: so, what's the plan to help bring up our 3rd party ci stats?
16:46:09 DuncanT: just poke people to fix their stuff?
16:46:39 <_alastor_> Weakest-link style gameshow?
16:46:40 patrickeast: To start with. Some name & shame once we have the tooling. That has been surprisingly effective in the past
16:46:55 DuncanT: definitely effective
16:47:17 _alastor_: hah, we could vote one driver off every week at the meetings?
16:47:31 <_alastor_> Oooooh, I like that. Scary though
16:47:40 patrickeast: Pick the worst performing two and make them fight for it
16:47:44 lol
16:47:50 I am sorry, you are the weakest link .... Goodbye.
16:47:54 haha, I remember those times
16:48:07 patrickeast: The video revenue can pay for the cinder social at the summit
16:48:09 Naming and shaming has always been effective.
16:48:18 * DuncanT has no shame
16:48:22 Some people need shaming to get their execs' attention.
16:48:26 in the past we've had efforts to improve the 3rd party ci infrastructure
16:48:36 DuncanT: So we have heard. ;-)
16:48:42 with the openstackci converged stuff being our golden ticket...
16:48:46 are there still issues with that?
16:48:54 is there more work we can do there?
16:49:11 jungleboyj: Yup. Emailing execs directly with email saying 'I'll be shaming you very publicly in two weeks by pulling your driver and posting about why everywhere' was very, very effective
16:49:28 DuncanT: ++
16:49:32 patrickeast: I think maintaining a CI system is just more effort than many companies think
16:49:51 <_alastor_> DuncanT: +1
16:49:54 DuncanT: Yep, it is.
16:49:54 DuncanT: yea, but if we lower the bar more... maybe more won't require shame
16:50:02 like, if it wasn't such a huge pain
16:50:02 DuncanT: +1,
16:50:35 patrickeast: The previous improvements came when all the people struggling started talking to each other
16:50:42 <_alastor_> I'm surprised more drivers aren't just lying about their results
16:50:52 _alastor_: it probably would be easier XD
16:50:54 <_alastor_> I think akerr/amead brought that up a few weeks ago
16:51:01 patrickeast: I don't know how else to encourage more of that, other than lighting a fire under people.
16:51:28 _alastor_: I tried to do some scraping of logs, but there are too many different formats etc
16:51:35 DuncanT: yea, i guess maybe we should try and re-kindle some of the like 3rd party working group meeting advertising
16:51:48 to try and get folks with broken ci's to work together to unbreak them
16:51:54 (myself included)
16:51:59 <_alastor_> DuncanT: I think the easiest way is to push a change that's obviously going to break stuff, but will pass normal Jenkins
16:52:13 <_alastor_> DuncanT: And see who reports success :)
16:52:26 _alastor_: That is an interesting idea. We have found issues that way in the past.
16:52:40 _alastor_: challenging but probably interesting, yes
16:52:55 _alastor_: I like that idea.
16:53:20 DuncanT: Just delete a bunch of important things. Simple.
16:53:26 I'd really like to expand what we test in third party (require LVM as well as vendor backend & test migration, for example) but that's pointless if we don't have working CIs
16:53:30 _alastor_: pass jenkins but break stuff? Wouldn't that mean something horribly wrong with Jenkins... or you mean actually manually break every single driver individually?
16:53:54 DuncanT: ah yea, that's something up on my backlog
16:53:56 _alastor_: and break them in some way that their unit tests still work?
16:54:05 jgriffith: It should be easy enough to remove/patch a method from every driver except the LVM one
16:54:09 DuncanT: i'm planning to add instructions for multi-backend and multi-node 3rd party CI setups
16:54:11 <_alastor_> jgriffith: I can't remember specifics, but I remember ameade doing something that caused that perfect storm
16:54:17 DuncanT: famous last words :)
16:54:19 jgriffith: And monkey patch it to pass unit tests
16:54:48 i think you can just delete driver files... right?
16:54:50 just leave lvm?
16:55:29 patrickeast: That's what I was thinking
16:55:32 jgriffith: It's never going to get merged... we can make it a coding contest, see how cruel and unusual people can be. Maybe get somebody to sponsor a small prize?
16:55:33 patrickeast: *just* delete drivers, and unit tests?
16:55:45 DuncanT: +1
16:55:47 DuncanT: sounds fun :)
16:55:49 jgriffith: no need for unit tests to smoke check which ci's are actually running the code
16:55:59 patrickeast: no no... the problem is jenkins won't pass
16:56:13 jgriffith:
16:56:15 oh yea
16:56:17 patrickeast: I thought the lead in was Jenkins (or I guess Zuul now) passes
16:56:20 jgriffith: Add a call to 'cinder.utils.explode' in every create method, and monkey patching it out in the unit tests should do the job though
16:56:26 LOL
16:57:15 could add code into the volume manager to just raise an exception when it loads a driver other than lvm
16:57:31 or loads it from the config
16:57:56 Oh, the other thing I wanted to bring up is I intend to update the 'how to submit a driver' wiki page to ask you to put your CI name in the commit message for a new driver... I'm fed up of trying to figure it out by hand. Any complaints?
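[Editor's note: the "canary patch" idea floated above (16:51:59–16:57:31) reduces to something like the sketch below — the upstream Jenkins/LVM job stays green, every vendor backend fails deliberately at load time, and any third-party CI that posts a success clearly never ran the change. `check_driver_canary` and the exception are invented for illustration; they are not real Cinder code.]

```python
LVM_DRIVER = "cinder.volume.drivers.lvm.LVMVolumeDriver"

class CanaryError(Exception):
    """Deliberate failure planted in a throwaway review."""

def check_driver_canary(volume_driver):
    # Raise for every configured backend except the in-tree LVM
    # reference driver, so the upstream gate passes while any honest
    # third-party CI run fails loudly.
    if volume_driver != LVM_DRIVER:
        raise CanaryError("intentionally broken for %s" % volume_driver)
```

Hooking such a check into driver loading (rather than patching each driver) matches the 16:57:15 suggestion and avoids having to touch unit tests for every backend.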
16:57:56 I know what will work, but I'm not going to tell you all, so I can win the contest.
16:57:59 As an aside could someone put a bullet in the brain of the Cisco PNR CI? Insta fail every time.
16:58:05 DuncanT: +1
16:58:12 Two minute warning.
16:58:26 Swanson: I'll ask -infra to ban it
16:58:27 DuncanT: it should include a link to their wiki page on https://wiki.openstack.org/wiki/ThirdPartySystems
16:58:36 DuncanT: I think that is a great idea.
16:58:39 patrickeast: That too, yes
16:59:18 patrickeast: +1
16:59:29 Ok, that's time, thanks folks
16:59:36 patrickeast: +1
16:59:47 Thanks!
16:59:52 #endmeeting