16:00:09 #startmeeting cinder
16:00:09 Meeting started Wed Jun 3 16:00:09 2015 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 The meeting name has been set to 'cinder'
16:00:13 \0/
16:00:20 hi again
16:00:23 Hello to the meeting minutes.
16:00:28 hi
16:00:33 Hello again.
16:00:34 hey
16:00:34 Hi
16:00:36 hi
16:00:38 announcements...
16:00:38 hi
16:00:40 hi
16:00:43 hi
16:00:43 hi
16:00:45 Hi
16:00:45 hi
16:00:46 hi
16:00:50 hello
16:00:52 hi
16:00:53 hi
16:00:54 #topic announcements
16:01:05 Liberty-1 is approaching
16:01:26 I will start cutting blueprints on June 10 that do not have a patch up that is passing jenkins
16:01:29 #link https://launchpad.net/cinder/+milestone/liberty-1
16:01:45 Better luck in L-2
16:01:53 hi
16:02:12 #info Blueprints in L-1 will be cut if they do not have a passing patch posted by June 10th
16:02:13 By cutting you mean - moving to L-2?
16:02:29 not drivers right?
16:02:32 dulek: out of l-1. You can try for l-2
16:02:43 that's still 12th?
16:03:03 #todo thingee to send to ML about blueprint cut
16:03:38 #action thingee to send to ML about blueprint cut
16:03:48 asselin__: thanks
16:03:50 heh
16:04:05 also drivers, please read the ML post about deadlines for drivers.
16:04:05 Hi
16:04:32 #link http://lists.openstack.org/pipermail/openstack-dev/2015-May/064072.html
16:04:36 June 15th
16:04:52 I'm not going to repeat what's already in there, but that's a separate deadline from regular blueprints
16:05:04 http://www.openstack.org/blog/2015/05/deadline-for-new-cinder-volume-drivers-in-liberty/
16:05:11 \o/
16:05:13 kk ty
16:05:26 I will start an etherpad to track drivers
16:05:29 in review
16:05:52 specs I will be approving by the end of this week....
16:05:56 image caching https://review.openstack.org/182520
16:06:06 and replication v2 https://review.openstack.org/#/c/155644/
16:06:12 speak up now
16:06:30 and lastly don't forget about casual review friday, thanks to DuncanT
16:06:31 I have one
16:06:33 https://review.openstack.org/#/c/186327/
16:06:41 alright agenda for today
16:07:05 #link https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:07:12 vincent_hou: We are getting there.
16:07:23 Thx
16:07:30 #topic 3rd Party CI - FC Passthrough - upstream now available
16:07:32 asselin__: hi
16:07:38 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/065677.html
16:07:44 just posted to the mailing list...
16:07:57 just to let everyone know patrickeast and I have fc passthrough scripts available
16:08:10 asselin__: +1
16:08:11 https://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/provisioning_scripts/fibre_channel
16:08:34 asselin__: is this communicated on the infra third party page or Cinder third party page?
16:08:38 nikeshm is probably the first 'new' person to use them on his driver
16:08:49 thingee, cinder FAQ
16:08:55 asselin__: perfect :)
16:09:01 #link https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#FAQ
16:09:22 so just remember to go to the FAQ :)
16:09:33 that's it from me...just wanted to make it known
16:09:42 it's new and frequently asked already ;)
16:10:06 thanks asselin__
16:10:08 big thanks to patrickeast for refactoring it to be consumable
16:10:13 and patrickeast !
16:10:23 nice job guys
16:10:23 asselin__ and patrickeast Thanks guys!
16:10:44 #topic Volume status during migration
16:10:47 vincent_hou: jungleboyj hi
16:10:54 Hi
16:10:56 thingee: Howdy.
16:11:04 #link https://review.openstack.org/#/c/186327/
16:11:50 So, I think vincent_hou is making great progress proposing improvements to volume migration but avishay has raised a concern.
16:11:51 I am proposing that we change the status of a volume to 'migrating' when it is migrating for the migration improvement.
16:12:09 ^^ want to get feedback from others on that.
16:12:17 why are comments in ps6 being ignored
16:12:20 that's kind of annoying
16:12:31 I'd like to avoid things like attempting to attach the volume while migrating.
16:12:33 two people have asked for there comments to be addressed.
16:12:38 their*
16:12:50 So as a deployer, it would be really nice to be able to migrate transparently to the tenant
16:12:52 Avishay proposes not changing the volume status during migration and aborting a migration if a user attempts to attach/detach the volume.
16:13:13 i.e. Avishay's approach
16:13:16 so migration is an admin operation and has to be because users don't know about backends
16:13:43 if you expose the fact that a volume is being migrated, suddenly users become aware of backends and the abstraction breaks
16:13:43 avishay: We have a 'retyping' status. How is this different?
16:13:53 Though it would also be nice to be able to stop a user getting in the way of a migration if I really need to move the volume like now
16:13:54 jungleboyj: the user initiates retype
16:14:09 avishay, +1
16:14:11 DuncanT: +1
16:14:16 So setting the status to 'migrating' only during a --force would be ideal
16:14:26 DuncanT: agreed
16:14:29 For normal migrates, just abort
16:14:45 DuncanT: or even better than migrating, "maintenance"
16:14:48 Ahh... Ok. I am just concerned that allowing the user to cause an action initiated by the admin is dangerous.
16:15:08 DuncanT: +1 this is nothing new from what we already said at the summit about just being ok with aborting
16:15:09 DuncanT: the user doesn't need to know why, just that it's temporarily unavailable for attach/detach
16:15:11 If we don't use ING state then we would have to change the ING approach to locking
16:15:19 We are trying to make migration more robust and having something to abort seems to just be more fragile. Simple is better.
16:15:23 avishay: ++
16:15:42 avishay: that seems simple to me
16:15:44 avishay: I do like the idea of having a general 'unavailable' status.
16:15:58 That could be useful elsewhere.
16:16:02 geguileor, we should always be doing ING status checks in the API anyway
16:16:05 jungleboyj: Depends what you're trying to achieve... we often want to rebalance or drain a server without users knowing
16:16:12 but we currently can't because of Nova now.
16:16:18 hemna: the problem is we don't have an ING for migrate
16:16:25 hemna: maintenance is not ING state ;-)
16:16:26 hemna: that's the point of this proposal
16:16:37 well, we kinda do...migration_status no ?
16:16:44 geguileor: Maintaining. ;-)
16:17:08 put the volume in 'something' status :)
16:17:09 jungleboyj: Ok, just wanted to point that out before we agreed on the term :)
16:17:09 that's ING
16:17:31 hemna : is this with regards to the idea that we were talking about the other day
16:17:37 What about 'unavailable' ?
16:17:53 Though that brings up more questions, potentially, than 'maintenance'
16:17:57 hemna : just joined so wanted to know the context
16:18:02 the name of the state doesn't really matter
16:18:05 The point is, as an admin, I /really/ want to be able to do things like rebalance without affecting the tenant
16:18:19 jungleboyj, well as long as the 'ing' state check also checks for unavailable
16:18:22 avishay: +1
16:18:24 DuncanT: +1, not only without affecting, without him knowing
16:18:26 maintenance mode means SLA affected
16:18:27 to prevent actions as well
16:18:32 I'm ok with that
16:18:36 DuncanT: +1
16:18:43 DuncanT: +1
16:18:45 avishay: +1
16:18:47 avishay, DuncanT : +1
16:19:01 It really is a necessary function for a large cloud
16:19:04 I'm seeing a lot of agreement
16:19:07 hemna: unavailabling
16:19:08 yup
16:19:12 :P
16:19:17 thingee: scary right?
16:19:17 smcginnis: :-)
16:19:33 Which agreement are we coming to?
16:19:36 smcginnis: XD
16:19:39 so something like "unavailable" for the volume status?
16:19:48 during migration?
16:20:02 as jungleboyj raised, this can be used for other things
16:20:19 vincent_hou: noooo
16:20:27 vincent_hou: No
16:20:28 thingee: Ok, I am glad that people like that idea but I don't feel like we are agreeing on that for migration.
16:20:38 Yeah, that was what I was afraid of.
16:20:38 Unavailable is frightening to users
16:20:56 vincent_hou: Need to be able to do it without user knowing
16:20:57 vincent_hou: no state change normally, and abort if attach/detach. if the admin passes a special flag, then the volume goes to 'unavailable' or whatever state and then no attach/detach is allowed.
16:20:59 +1 for 'Maintaining'
16:21:06 OK.
16:21:29 vincent_hou: *maybe* a 'migrating' state for migrate --force, but that is less important than the silent migration
16:21:36 avishay: +1
16:21:53 DuncanT: It sounds OK to me.
16:21:54 DuncanT: +1
16:21:55 avishay DuncanT So, we are going to allow a user to override what an admin has requested?
16:22:15 jungleboyj: If the admin doesn't say --force, yes, absolutely
16:22:17 jungleboyj: If admin doesn't force it yes
16:22:20 +2 for 'Migrating'
16:22:28 I have the same concern as Jay has.
16:22:33 Ok, that makes me queasy
16:22:53 jungleboyj: It's nice to be able to kick off migrations without affecting SLAs or otherwise impacting the customer
16:23:00 think of it from the POV of the operator, not the developer :)
16:23:13 So, in the case a user requests an attachment we would stop the migration and delete the new volume that was being created. Throw some huge warning into the logs as to why the migration stopped.
16:23:32 avishay: :-)
16:23:33 jungleboyj: even info, not warning
16:23:42 jungleboyj: info
16:23:46 it's not a big deal, normal flow
16:23:58 sounds like an awful experience for ops if it's an emergency
16:24:09 thingee: --force then
16:24:10 thingee: that's why you have the --force flag
16:24:10 thingee: Then you would use --force
16:24:12 One more issue is that when we do "retype", we already put "retyping" in the volume status. Shall we change that approach as well?
16:24:15 avishay: DuncanT Ok, that seems odd to me, but you guys have more experience.
16:24:30 vincent_hou: no, retype is a user-initiated action
16:24:30 piranhas in this room
16:24:34 vincent_hou: No, sounds like that is different as it can be initiated by user.
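[Editor's note: a minimal sketch of the approach being converged on above -- silently abort a normal migration if the user attaches, and only refuse the operation when the admin used --force. This is illustrative only; the helper names and the 'maintenance' status value are assumptions, not actual Cinder code.]

    import logging

    LOG = logging.getLogger(__name__)


    class VolumeUnavailable(Exception):
        """Placeholder for whatever user-facing error Cinder would raise."""


    def check_attach_allowed(volume, abort_migration):
        """Hypothetical pre-attach check.

        'volume' is a dict with 'status' and 'migration_status' keys;
        'abort_migration' is a callable that cancels an in-flight migration.
        """
        if volume.get('migration_status') != 'migrating':
            return  # nothing special going on, attach proceeds as usual

        if volume['status'] == 'maintenance':
            # Admin ran the migration with --force: the volume is locked
            # down, so the user-facing attach is refused.
            raise VolumeUnavailable(
                'volume %s is temporarily unavailable' % volume['id'])

        # Normal (silent) migration: the user request wins. Abort the
        # migration and let the attach continue -- logged at INFO, not
        # WARNING, since this is treated as a normal flow.
        LOG.info('Aborting migration of volume %s to honour an attach '
                 'request', volume['id'])
        abort_migration(volume)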
16:24:38 thingee: :P
16:24:45 jungleboyj: It really is a common thing to want to move volumes off a 'hot' backend without affecting the user
16:24:46 thingee: XD
16:24:55 I think unavailable makes sense
16:25:02 OK.
16:25:13 unavailable .... in a cloud?
16:25:14 DuncanT: Ok.
16:25:19 yup
16:25:19 not sure what I think about that
16:25:35 'maintainence' might be better
16:25:41 yea
16:25:43 (give or take some spelling)
16:25:47 OK.
16:25:52 ok, vincent_hou sounds like you got some feedback to go with
16:25:54 anything else?
16:26:02 Thank you so much folks
16:26:19 #topic Status update on c-vol A/A efforts
16:26:19 I will resolve the comments for this spec ASAP.
16:26:23 Thanks guys!
16:26:24 dulek: geguileor hi
16:26:29 Thank you folks.
16:26:31 hi!
16:26:32 hi
16:26:32 (I'm on a mobile approaching some forest area, so may be disconnected any time)
16:26:37 #link https://etherpad.openstack.org/p/cinder-active-active-vol-service-issues
16:26:41 ha
16:26:54 * geguileor is the backup
16:26:56 So we just wanted to gather updates from folks engaged in c-vol A/A efforts
16:27:32 so i am working with a syzmon regarding https://review.openstack.org/#/c/183537/
16:27:38 Let me start - me and an engineer from my team are developing a Tooz locks implementation for Cinder.
16:28:15 Right now we're in the process of testing it and looking for corner cases.
16:28:32 review is in progress for this effort
16:29:17 So will it only change current locks or include new needed locks?
16:30:17 (new needed as in we see they are missing)
16:30:21 it's a proposal, not a way to go for
16:30:24 also do we need to open a bug and start working on fixing the nova-cinder interaction because i have been seeing it a lot lately that volumes get stuck in "ing" state
16:30:27 I/we are looking at fixing up Nova to start catching some failures rather than trying to figure everything out in advance. Requires some substantial moving things round in Nova though
16:30:47 DuncanT : do we have a bug for it
16:30:55 DuncanT, https://bugs.launchpad.net/nova/+bug/1458958
16:30:55 Launchpad bug 1458958 in OpenStack Compute (nova) "Exceptions from Cinder detach volume API not handled" [Undecided,New]
16:31:08 DuncanT, that's a start and needs to get handled
16:31:10 #link https://bugs.launchpad.net/nova/+bug/1458958
16:31:22 hemna: I started on attach actually, but yeah
16:31:29 aaaaand no one has signed up to fixy that one yet :(
16:31:37 i did assign myself now
16:31:43 will work on it
16:32:06 but just in general, Cinder can't do 'ING' state checks and return VolumeIsBusy exceptions now because of a bunch of those types of failures on the Nova side.
16:32:14 hemna: It is true of every nova call to cinder TBH
16:32:21 hemna : agree
16:32:28 personally, I think we need to shore up the cinder API side and do 'ING' checks
16:32:31 DuncanT : +1
16:33:13 we need additional validation at the cinder API layer before proceeding ahead and coming to know abt it at the manager layer …what does everyone think abt it
16:33:24 dulek, geguileor: so can we talk about what we would like to have for l-1?
16:33:40 hemna: Just patch the client in nova to raise an exception at random one time in 4, then make nova handle it... can do the cinder API work afterwards
16:33:46 thingee: Probably a good idea :)
16:33:48 there's a lot of rainbow and unicorn talk happening atm.
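[Editor's note: the Tooz locking work dulek describes above is still under review; the snippet below is only a generic sketch of how a Tooz distributed lock is taken, not the actual patch. The ZooKeeper URL, member id, and lock name are made up for the example.]

    from tooz import coordination

    # Backend URL and member id are illustrative only.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'cinder-volume-host-1')
    coordinator.start()

    # A distributed lock instead of today's node-local lock means two
    # active-active c-vol services working on the same volume serialize
    # against each other.
    lock = coordinator.get_lock(b'volume-<uuid>-delete')
    with lock:
        # critical section: only one c-vol service at a time gets here
        pass

    coordinator.stop()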
16:33:58 :)
16:34:19 We can't change the cinder API until nova can handle the response, or things break even more than they do now
16:34:27 DuncanT, +1
16:34:33 ok
16:35:06 DuncanT: ok lets start there.
16:35:11 I had leeantho file that defect as an example of what happens when the Nova side doesn't compensate for VolumeIsBusy being sent back from Cinder.
16:35:12 who is looking into that?
16:35:24 who cares to help look at the nova side for catching failures?
16:35:45 I'd do it if I wasn't slammed at the moment
16:35:46 once we know who wants to take on that, we can track the patch posted and help review.
16:36:20 We are just doing sprint planning, but hopefully us
16:36:23 *sigh*
16:36:26 ok no one so far
16:36:30 ...
16:36:36 everyone wants to work on the new shiny Cinder stuff and not fix these major issues :(
16:36:39 I can also help hemna, DuncanT
16:36:48 thingee: i can take a look
16:36:50 Sorry, got disconnected...
16:36:57 hey look at that, winston-d to the rescue
16:37:07 winston-d, thank you.
16:37:12 I can help if possible.
16:37:30 hemna: cool
16:37:37 I would say at most point winston-d to the particular known failure areas, which is apparently every cinder call from nova
16:37:47 hemna, DuncanT ^
16:38:04 we should ping jaypipes or jogo and get them up to speed with what we are trying to fix here from the Nova side.
16:38:15 winston-d: I'll be working on the tests to confirm all things that break
16:38:16 winston-d: if you come up with a patch next week or whenever, let us know so I can bring it up in the meeting and we can help review.
16:38:31 getting some nova cores to buy off on this will help us push it through the nova side as well
16:38:32 Attach is the messiest piece to fix by the looks of it
16:38:33 thingee: sure
16:38:36 Do we need any Cinder blueprints for that?
16:38:40 DuncanT, yup, for sure
16:38:42 Since it does some crazy inspection
16:38:45 And Nova's BP deadline is closing fast.
16:38:59 We'll need a nova BP for sure
16:39:02 hemna: I'm kinda swamped :( Perhaps ndipanov would be a good alternative?
16:39:29 Andrea in the nova team @HP is available I believe
16:39:36 jaypipes, thanks man. any help with just supporting with timely reviews would be great at this point.
16:39:37 big shoes to fill
16:39:41 jaypipes: we're talking about some try/catches in nova for cinder calls. pretty small stuff.
16:39:50 ndipanov: ^
16:39:59 * ndipanov scrolls back
16:40:01 if anyone cares about gate failures, seems like a good priority
16:40:08 fwiw, https://review.openstack.org/#/c/167815/ has been sitting there for 2 months
16:40:21 ndipanov, for reference https://bugs.launchpad.net/nova/+bug/1458958
16:40:21 Launchpad bug 1458958 in OpenStack Compute (nova) "Exceptions from Cinder detach volume API not handled" [Undecided,New] - Assigned to Vilobh Meshram (vilobhmm)
16:40:53 we also have this one: https://review.openstack.org/#/c/138664/, not sure how to proceed
16:40:54 So who will start with writing a BP for Nova?
16:41:17 vilobhmm: FYI, best not to assign yourself to a bug before pushing a patch for it. when you push a patch, it will auto-assign you to the bug if you reference the bug in your commit message.
16:41:22 do we need a Nova BP to fix a bug ?
16:41:29 how descriptive does this bp need to be? We're catching potential exceptions.
16:41:33 I can volunteer if DuncanT will help me to understand relations there.
16:41:34 jaypipes : sure
16:41:40 hemna: no, not unless the Nova API changes.
16:41:48 I'll start for attach, and get Andrea to input.
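[Editor's note: a rough illustration of the kind of exception handling being asked of Nova in this discussion. The helper name and rollback hook are hypothetical; the real work is tracked in bug 1458958 and the action items recorded later in the meeting.]

    import logging

    from cinderclient import exceptions as cinder_exception

    LOG = logging.getLogger(__name__)


    def detach_volume(client, volume_id, rollback=None):
        """Call Cinder's detach and bail gracefully instead of leaving the
        volume stuck in a transient state."""
        try:
            client.volumes.detach(volume_id)
        except cinder_exception.ClientException:
            # Today a failure here can propagate unhandled and leave the
            # volume in an 'ing' state on both sides; the idea is to catch
            # it, undo any Nova-side bookkeeping, and surface a sensible
            # error to the caller instead.
            LOG.exception('Cinder refused to detach volume %s', volume_id)
            if rollback is not None:
                rollback(volume_id)
            raise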
16:41:53 thingee : I can help with the blueprint and the spec
16:41:56 hemna, thingee ok so what's the question
16:42:14 ndipanov: review the patch when it's posted. We're doing better try/catch of cinder calls from nova.
16:42:18 getting a sponsor
16:42:32 ndipanov, so we are just going to do some exception catching in the calls to Cinder. and need reviews when patches go up, so we can fix this.
16:42:33 thingee, sounds like a good idea - pls ping me
16:42:40 ndipanov, thank you!!!
16:42:51 ndipanov: thank you, I appreciate your time on helping with failures here.
16:43:05 Cool, this seems like we can fix that without a BP and spec?
16:43:06 I'll do my best to get to reviews.
16:43:10 ndipanov: I'll get cinder folks here to help verify things as well
16:43:16 jaypipes: thank you
16:43:25 it is somewhere on my todo list to move all of the detach stuff into nova/virt/block_device.py
16:43:32 but sadly hadn't done that yet
16:43:36 vilobhmm: honestly I don't think this is a complicated bp, but DuncanT please help?
16:43:47 in that case we would have a single place where to catch those...
16:44:19 ndipanov: yeah should be that one wrapper file nova has for cinder
16:44:24 nova.volumes.cinder or something
16:44:34 winston-d: ^
16:44:45 whew
16:44:46 thingee: It isn't just try/catch, since nova tries to figure out stuff for itself that needs to be re-arranged so it can be skipped once cinder returns more
16:44:50 ndipanov, +1
16:44:58 ndipanov: what about failure during attach?
16:45:13 the trick is to get Nova to bail gracefully when a volume is busy
16:45:16 DuncanT: lets start with the try/catch issues for l-1
16:45:22 fast approaching
16:45:23 I think that's probably the part where we'll need help
16:45:30 thingee: +1
16:45:30 thingee: agree
16:45:36 thingee: But Nova has BP deadline for L-1.
16:45:37 xyang, attach is done using the classes so it's wrapped inside a block_device.attach()
16:45:49 thingee: I mean L-1 is BP deadline.
16:45:50 dulek: I understand and I got people to handle that ;)
16:45:59 dulek: vilobhmm and DuncanT will work together on the bp
16:46:07 how complicated is a bp on try/catch? honestly
16:46:09 thingee: Agreed! :)
16:46:16 ok thanks everyone
16:46:17 thingee: cool..will do that
16:46:28 detach is sadly inlined in the compute manager callbacks
16:46:32 #action DuncanT vilobhmm to work on bp for try/catch issue catching on calls to cinder from nova
16:46:38 but this doesn't affect you guys
16:46:39 dulek: these are fundamentally bugs, bp isn't really necessary
16:46:49 #action winston-d to work on patch(es) to do try/catch handling on calls to cinder from nova
16:46:59 #agreed this is a good starting point for l-1 being around the corner
16:47:17 #topic Third party CI requirements for re-branded drivers
16:47:19 DuncanT: hi
16:47:23 DuncanT: and thanks for bringing this up
16:47:33 You can't just catch the exception, since you're too caught up in nova to be able to /do/ anything... so you're pretty much where you are today (broken) but with a log message
16:47:38 #link https://review.openstack.org/#/c/187853/
16:47:45 #link https://review.openstack.org/#/c/187707/
16:48:16 so these are rebranded drivers... basically inheriting off other driver classes.
16:48:38 #idea should rebranded drivers also have CIs or do they just piggy-back off the real driver CIs?
16:49:21 asselin__ anteaya ^
16:49:23 I think they should
16:49:30 * asselin__ is thinking
16:49:32 where do we draw the line then?
16:49:46 honestly...not sure why we need those drivers...
16:49:50 patrickeast: that's what I was thinking. What if they make some modifications.
16:49:54 These are trivial inheritance, so I see little value. But I could see the same situation where more functionality is introduced that would need it.
16:50:02 they could override a method or two
16:50:06 asselin__: That is a very good point.
16:50:16 asselin__: branding
16:50:20 asselin__: good point
16:50:32 yes...but why do we need branding inside cinder?
16:50:35 asselin__: The internals are the same, but the marketing, pricing, etc is very different
16:50:40 not exactly the problem cinder is trying to solve IMO.
16:50:54 I can see branding as a strong marketing requirement. But completely new drivers seems excessive.
16:50:56 asselin__: they are effectively different products in the consumer's eyes
16:50:57 brand the hardware. use the common driver
16:51:03 Especially since config options don't get "rebranded".
16:51:06 But I see the need.
16:51:12 asselin__: so their customers won't notice what they have is actually something else if not looking close enough?
16:51:18 Though I don't really like it.
16:51:23 In this case, I see config options are rebranded
16:51:27 All these two patches do is rename some config options TBH
16:51:35 opencompute: Oh, right.
16:51:47 IMO it's not solving anything. Sure, you use a different dot path with your product's name, but that doesn't solve the problem of the product you're rebranding being in the config option name/description
16:51:51 and log messages
16:52:02 At least they aren't a complete duplication of the same driver code.
16:52:17 smcginnis: for now
16:52:22 I do appreciate that they tried to minimize the impact.
16:52:26 tbarron: Right.
16:52:28 They need a CI. If you bring a driver you bring a CI.
16:52:36 So isn't it a valid point for us - if you want branding, then introduce a brand new driver, without inheritance?
16:52:40 And the CI?
16:52:41 If that is too much pain then don't bring the driver.
16:52:53 Swanson: Fair point.
16:52:54 Swanson: +1
16:52:56 Swanson: +1
16:53:00 Asking for CI is not totally unreasonable
16:53:14 The inheritance model is definitely better than cut n paste
16:53:20 I'm also leaning on needing ci.
16:53:29 DuncanT: implement CI#2 via sed :-)
16:53:34 True, because we cannot control how many changes went into the new driver
16:53:46 o/
16:53:53 can they create a symbolic link to an inherited ci?
16:53:56 if we require the ci, they are treated as regular drivers and future reviewers don't have to decide how much of a modification warrants adding CI testing
16:54:01 * DuncanT leans for not needing CI for what is there, but is entirely accepting of others voting differently
16:54:12 patrickeast: Valid point
16:54:20 opencompute: we can control, we review the patches and approve them.
16:54:33 The point of us asking for a CI from some vendors besides verifying their stuff works was for them to actually be committed in some way to OpenStack and actually understand how things work. Some people had no idea how to deploy OpenStack before this requirement. IMO, this rebrand thing introduces another way for drive-by driver patches and not be involved with the community
16:54:39 opencompute: if we are unhappy about a driver change, -2 and ask for a real ci
16:54:40 I like the idea of inherited CI
16:55:02 so parent driver changes require both CI +1s
16:55:39 I'm still a -1 on these rebranded drivers. It's not solving anything.
16:55:53 opencompute: I don't see how that should work. Child driver can change things.
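[Editor's note: an invented example of the 'trivial inheritance' rebranding pattern under debate -- a child driver that subclasses an existing one and only renames the class and its config options. The vendor, module, and option names are hypothetical and do not refer to the drivers linked above; the open question in the meeting is whether something this thin still warrants its own CI, since nothing stops the child from overriding behaviour later.]

    from oslo_config import cfg

    # Assumed parent driver module; stands in for the real vendor driver.
    from cinder.volume.drivers import parent_vendor

    rebrand_opts = [
        cfg.StrOpt('rebrandco_san_ip',
                   help='Management IP of the RebrandCo array (maps onto '
                        "the parent driver's san_ip option)."),
    ]

    CONF = cfg.CONF
    CONF.register_opts(rebrand_opts)


    class RebrandCoISCSIDriver(parent_vendor.ParentVendorISCSIDriver):
        """OEM rebrand: inherits all behaviour, overrides nothing (yet)."""

        def __init__(self, *args, **kwargs):
            super(RebrandCoISCSIDriver, self).__init__(*args, **kwargs)
            # Map the rebranded option back onto the parent's option so the
            # inherited code keeps working unchanged.
            if CONF.rebrandco_san_ip:
                self.configuration.san_ip = CONF.rebrandco_san_ip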
16:56:05 I'm also -1 on rebranded drivers.
16:56:09 see my comments earlier about config options and log messages being confusing of the product you're attempting to rebrand
16:56:12 Child driver only needs its own CI to pass
16:56:29 thingee: The log messages get rebranded correctly
16:56:38 DuncanT: how so?
16:56:40 Parent driver needs parent and children CI to pass
16:57:09 thingee: The only 'brand' in there are the config option names and the class names, all of which are changed in the child
16:57:10 three minute warning
16:57:10 3 minutes warning
16:57:35 DuncanT: what about config options that mention the parent product?
16:57:39 It solves a problem for a vendor and a customer. I've dev X so I get dev X driver. Not dev Y driver. Log and options should properly reflect that it is dev X tho or there is no point.
16:57:39 name/description
16:57:56 So a complete rebranding or nothing.
16:58:17 Log should include vendor name
16:58:31 as asselin__ mentioned earlier, what if they make one method override? we would need to draw the line somewhere.
16:58:37 only 2 minutes
16:58:38 thingee: Those config options are redone in the child AFAICT. Maybe I need to read the patches again
16:58:39 it just seems complicated for little gain from the community
16:58:56 thingee: then we -2 to those changes and ask for ci
16:59:17 are any of these maintainers of the rebrand here?
16:59:20 in this meeting?
16:59:33 ....
16:59:35 I don't see nikesh around.
16:59:35 and there's my point
16:59:38 thingee: Assuming the rebranding works correctly, it is less complicated to see the name of the array in the config file, rather than some other competitor's array...
16:59:43 the smallest involvement possible
16:59:45 to get your product in
16:59:57 He is time-zone challenged today I believe
17:00:05 He's been in the meeting before
17:00:06 he was in San Jose
17:00:12 #endmeeting