16:00:42 <smcginnis> #startmeeting Cinder
16:00:42 <openstack> Log:            http://eavesdrop.openstack.org/meetings/massively_distributed_clouds/2017/massively_distributed_clouds.2017-07-19-15.00.log.html
16:00:48 <openstack> Meeting started Wed Jul 19 16:00:42 2017 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:49 <scottda> hi
16:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:51 <openstack> The meeting name has been set to 'cinder'
16:00:56 <Swanson> Hallo.
16:01:02 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino karlamrhein diablo_rojo jay.xu jgregor lhx_ baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylikehu mdovgal ildikov wxy
16:01:08 <smcginnis> viks ketonne abishop sivn
16:01:14 <tbarron> hi
16:01:28 <scottda> I thought that last bit was some German or something smcginnis
16:01:38 <scottda> "viks ketonne abishop sivn"
16:01:40 <smcginnis> Nein
16:01:47 <wiggin15> Hi
16:01:53 <rajinir> hi
16:01:55 <DuncanT> Hi
16:01:57 <smcginnis> Hah, does sound a little like it. :)
16:01:57 <lhx__> hi all
16:02:02 <guyr-infinidat> hi
16:02:06 <abishop> o/
16:02:09 <lpetrut_> hi
16:02:20 <tommylikehu> hey
16:02:24 <diablo_rojo> Hello
16:02:31 <zengyingzhe> Hi
16:02:42 <jungleboyj> @!
16:02:42 <_pewp_> jungleboyj (=゚ω゚)ノ
16:02:56 <smcginnis> #topic Announcements
16:03:01 <smcginnis> The usual...
16:03:12 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:03:21 <smcginnis> I need to go through and update the approved specs there.
16:03:28 <smcginnis> We've been able to get a few more through.
16:04:23 <smcginnis> Next week is Pike-3, so there are probably some that we can revert or move to the queens folder.
16:04:38 <smcginnis> Tomorrow is non-client library freeze.
16:04:58 <smcginnis> Barring anything major cropping up with os-brick, we should be all set there now with 1.15.1.
16:05:17 <smcginnis> #link https://etherpad.openstack.org/p/cinder-ptg-queens Planning etherpad for PTG
16:05:34 <smcginnis> If you have any topics for the PTG, please add them to the etherpad.
16:05:43 <smcginnis> Whether you are physically able to attend or not.
16:06:05 <smcginnis> We will hopefully stream the sessions for anyone unable to attend.
16:06:42 <smcginnis> And if you have a topic that requires your participation, we can try to get two way communication going with hangouts or something.
16:06:52 <smcginnis> Or Duo or whatever it's called now. :)
16:07:20 <smcginnis> If you are able to attend in person, please register so they can start getting a feel for numbers.
16:07:23 <smcginnis> #link https://www.eventbrite.com/e/project-teams-gathering-denver-2017-tickets-33219389087 PTG registration
16:07:58 <smcginnis> And if you need it, look in to the travel support program.
16:08:02 <jungleboyj> smcginnis:  Does registration close at some point?
16:08:23 <smcginnis> jungleboyj: I think they closed it just a day or two before last time. Not entirely sure.
16:08:43 <jungleboyj> smcginnis:  Ok, good.  I have people dragging their feet.  :-)
16:08:57 <smcginnis> Tell them the deadline is Friday then. :)
16:09:20 <jungleboyj> Aye aye captain.
16:09:28 <smcginnis> #topic Documentation migration update
16:09:42 <jungleboyj> Hey.
16:09:46 <smcginnis> jungleboyj: Captain Docs? Doctor Docs? We need another name for you.
16:09:53 <jungleboyj> So, just wanted to update people on progress.
16:10:01 <jungleboyj> smcginnis:  I will leave that up to you.  ;-)
16:10:09 <tommylikehu> :)
16:10:10 <smcginnis> DocMan
16:10:11 * jgriffith picks up his feet
16:10:33 <jungleboyj> So, thank you to those who are helping reviews and thanks to tommylikehu  for pitching in.
16:10:51 <jungleboyj> I have all the trivial changes going in under the doc-bld-fix topic.
16:10:56 <tommylikehu> jungleboyj:  np
16:11:05 <jungleboyj> The sooner we get that stuff merged the sooner I can be done.  :-)
16:11:44 <jungleboyj> I have one more large patch coming today that will move existing content to the right places based on the migration plan:  http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html#proposed-change
16:12:10 <jungleboyj> The patch adds a template for how everything should look and has a README.rst in each directory to describe appropriate content.
16:12:10 <Swanson> DocboyJ
16:12:19 <smcginnis> #link https://review.openstack.org/#/q/project:openstack/cinder+topic:doc-bld-fix+status:open
16:12:47 <jungleboyj> ericyoung:  And I had a good chat with Doug yesterday and he is working on some automation improvements for Cinder but they are still future work.
16:13:13 <jungleboyj> So, at this point please remember that any changes to configuration items or functionality will need an associated doc change.
16:13:26 <smcginnis> ++
16:13:38 <smcginnis> No more DocImpact tag. If you have a DocImpact, doc it.
16:13:48 <jungleboyj> We will talk about making the docs look decent at some point in the future.  They are an ugly mess, but at least we can get the content there.
16:13:54 <jungleboyj> smcginnis:  ++
16:14:23 <jungleboyj> That was all I had.  Cores, just please help me get the bld fixes in place.
16:14:32 <jungleboyj> They are easy reviews.
16:14:54 <smcginnis> jungleboyj: Thanks for working on that. You too tommylikehu
16:15:16 <smcginnis> #topic max_over_subscription_ratio: the ratio between total capacities or free capacities
16:15:23 <jungleboyj> smcginnis:  Welcome.  Something I am passionate about.  Just wish the doc build didn't take forever to do.
16:15:27 <smcginnis> wiggin15: You're up.
16:15:41 <wiggin15> Thanks
16:15:44 <smcginnis> #link https://bugs.launchpad.net/cinder/+bug/1704786
16:15:46 <openstack> Launchpad bug 1704786 in Cinder "CapacityFilter calculates free virtual capacity incorrectly" [Undecided,Opinion] - Assigned to Arnon Yaari (arnony)
16:15:47 <wiggin15> Regarding the configuration value max_over_subscription_ratio. I noticed there is a bit of inconsistency in the code.
16:15:56 <lhx__> jungleboyj, what's the meaning of BLD?
16:16:07 <wiggin15> Parts of the code treat this value as the ratio between the _total_ physical capacity and total virtual capacity (e.g. calculate_virtual_free_capacity)
16:16:08 <wiggin15> (where physical is data you can write and virtual is data you can allocate, when using thin provisioning with oversubscription)
16:16:19 <wiggin15> Other parts of the code treat it as the ratio between the _free_ physical capacity and free virtual capacity (e.g. CapacityFilter.backend_passes and some volume drivers).
16:16:20 <tommylikehu> build?
16:16:23 <jungleboyj> lhx__:  Build ... they are fixes to correct WARNINGS in the build.
16:16:44 <lhx__> thanks , very clear
16:16:45 <wiggin15> The issue is that the ratio between the "free" capacities is not constant because they are not directly related. So the way I see it, what the drivers and CapacityFilter calculate is not accurate.
16:17:07 <wiggin15> I opened a bug about this but there was disagreement about what the value means.
16:17:43 <wiggin15> I wanted to hear your input...
16:17:58 <smcginnis> wiggin15: My understanding was it was supposed to be the second one you described.
16:18:12 <smcginnis> xyang1: You introduced this in Kilo, right?
16:18:21 <xyang1> smcginnis: yes
16:18:28 <wiggin15> Most of the code uses calculate_virtual_free_capacity (the first one)
16:18:35 <wiggin15> and the second one can be inconsistent
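The two readings wiggin15 describes can be sketched side by side (an illustrative simplification, not Cinder's actual code; the function names are made up, and "everything provisioned is fully written" is assumed for clarity):

```python
# Two readings of max_over_subscription_ratio (MOSR), simplified by
# assuming everything provisioned is fully written.

def virtual_free_total_based(total, free, ratio):
    """Ratio applies to the TOTAL physical capacity:
    virtual total = total * ratio, so
    virtual free  = total * ratio - provisioned."""
    provisioned = total - free
    return total * ratio - provisioned

def virtual_free_free_based(total, free, ratio):
    """Ratio applies to the FREE physical capacity:
    virtual free = free * ratio."""
    return free * ratio

# With 1000 GB physical, 100 GB still free, and ratio 1.2, the two
# readings disagree: 300 GB vs. 120 GB of provisionable space.
```

The free-based reading drifts as writes land, which is the inconsistency being raised: the ratio between free physical and free virtual capacity is not a constant.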
16:18:38 <xyang1> smcginnis: we had lots of discussions on this over the years:)
16:18:57 <xyang1> smcginnis: the current code is after some discussion in the reviews
16:18:57 <geguileo> And the first one (that uses the method) is *wrong*
16:19:17 <wiggin15> umm, I'm not sure it is
16:19:17 <geguileo> method just doesn't calculate % correctly
16:19:23 <xyang1> smcginnis: I have to spend some time and find the patch that adds this change
16:19:38 <xyang1> smcginnis: not just the initial patch
16:19:43 <geguileo> well, you do the percentage of the physical total and then decrement it from the virtual
16:19:47 <smcginnis> #link http://specs.openstack.org/openstack/cinder-specs/specs/kilo/over-subscription-in-thin-provisioning.html Original spec
16:20:09 <patrickeast> ^ a bunch has shifted around since then iirc
16:20:16 <xyang1> In summary this is a safer way to evaluate after some review discussion
16:20:35 <xyang1> smcginnis: there is another patch after that
16:20:46 <xyang1> smcginnis: I don't have it now
16:20:52 <geguileo> xyang1: how is it correct to decrease the reserved based on the real total from the virtual total?
16:21:00 <xyang1> smcginnis: will find it and reference in the patch
16:21:19 <xyang1> geguileo: I need to find that patch
16:21:24 <geguileo> xyang1: ok
16:21:27 <geguileo> but in my mind it's simple
16:21:44 <geguileo> reserved must be calculated either from total and then that space is not used to calculate the virtual
16:21:48 <wiggin15> <geguileo> even if we need to decrease reserved from total.. that doesn't answer the question
16:21:48 <xyang1> geguileo: unfortunately this has always been controversial
16:22:02 <wiggin15> I'm not asking about reserved. I'm asking about the ratio
16:22:06 <geguileo> or is reserved from the virtual total directly
16:22:32 <geguileo> wiggin15: and I'm saying that the method in question you referenced is *wrong*
16:22:32 <wiggin15> so let's say for the sake of the discussion that reserved should be from the virtual total
16:22:46 <wiggin15> let's also drop the specific method that calculates it
16:23:01 <jgriffith> wiggin15 but what if that's infinite :)
16:23:03 <wiggin15> when we do calculate the virtual capacity - do we take the total physical and multiply it by the ratio?
16:23:29 <wiggin15> jgriffith there is a default of 20, even if it is infinite
16:24:09 <xyang1> wiggin15: if it is infinite it does not matter what the ratio is
16:24:11 <geguileo> wiggin15: that's the over subscription, not the total capacity, right?
16:24:12 <jgriffith> wiggin15 sorry, I'll refrain from my historical comments and why I think that sucks
16:24:15 <xyang1> It always passes
16:24:27 <wiggin15> A key question (from the bug report) is this: if we have 1 TB physical capacity and 1.2 TB virtual capacity, and we provision and write 1 TB - should we allow creation? We can't *write* any more but maybe we can *provision* because we have more virtual capacity... CapacityFilter currently doesn't pass in this case.
16:24:44 <jgriffith> we've made this way more complex and difficult than it needs to be.
16:24:44 <smcginnis> jgriffith: You're referring to the array capacity, not the ratio, right?
16:24:51 <jgriffith> smcginnis correct
16:25:26 <jgriffith> smcginnis I was going to go back to the "just report actual capacity" approach and deal with over-subscription, and quit screwing around with so many "virtual" numbers that are all dependent and variable
16:26:10 <wiggin15> jgriffith when you deal with totals then the numbers aren't variable, and when you deal with used or free then they are. That's the problem
16:26:21 <smcginnis> jgriffith: Well, yeah. But I know one array very well that really has no good number for "actual capacity", because there are too many variables that go in to determining that.
16:26:33 <jungleboyj> jgriffith:  It seems that making this less complex is better and less likely to cause issues.
16:26:58 <geguileo> if you don't have free space, why keep allowing the creation of volumes just because the over-subscription ratio allows it?
16:27:05 <geguileo> would be the counter argument
16:27:20 <jgriffith> geguileo I agree with that 100%
16:27:24 <patrickeast> geguileo: space can be added or recolaimed
16:27:25 <jungleboyj> geguileo:  :-)  That makes sense to me.
16:27:28 <patrickeast> reclaimed even
16:27:28 <wiggin15> geguileo because that's always the case in over-provisioning
16:27:45 <geguileo> patrickeast: then it would be reported as free again, right?
16:27:57 <wiggin15> if you have 1 TB physical and ratio of 20 then you can create a 20 TB volume even if you don't have space to write
16:27:57 <patrickeast> yea but like after it is provisioned
16:28:02 <patrickeast> but before data is written
16:28:18 <geguileo> patrickeast: or you mean that it's being reported as full, but if I create a volume it will reclaim space and say that it's no longer full?
16:28:32 <wiggin15> that sounds right
16:29:04 <geguileo> patrickeast: to me that doesn't make sense
16:29:17 <patrickeast> geguileo: so like, the array is full (and for whatever reason the scheduler has no better alternatives) if it can provision the volume and the volume doesn't go immediately over the edge the admin can add more space, or cleanup old stuff as needed
16:29:20 <jgriffith> geguileo I think the problem he's pointing out is that for those that do dedupe, compression etc you can easily store 2TB of data on a 1TB device
16:29:20 <geguileo> creating a volume when there's no free space...
16:29:29 <wiggin15> geguileo do you agree that in over-provisioning you can provision more data than you can write?
16:29:39 <patrickeast> its just a part of over-provisioning storage and dealing with it when the arrays get full
16:29:56 <patrickeast> jgriffith: +1
16:30:01 <patrickeast> thats the other part of it
16:30:26 <patrickeast> shit, if its a copy of a volume you might put the whole 2TB into like 1K of metadata
16:30:29 <geguileo> jgriffith: and that's ok with over provisioning now, it's not a problem
16:30:35 <jgriffith> and in turn, even though my 1TB device has 2TB's of "data" on it, I may very well still be able to store another 2TB's
16:30:59 <geguileo> jgriffith: as long as you have free space >= requested * max_over_provisioning
16:30:59 <wiggin15> If you have 1 TB physical and 1.2 ratio, you write 900 GB. Now you can create a 100GB * 1.2  = 120 GB volume (whatever that number means). That means you will still provision MORE than you can write
16:31:00 * jungleboyj 's head explodes
16:31:05 <wiggin15> because that's what the feature means!
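wiggin15's arithmetic above, spelled out (illustrative numbers only; "provisioned == written" is assumed for simplicity):

```python
# 1000 GB physical, ratio 1.2, 900 GB already written.
total_gb = 1000
written_gb = 900
ratio = 1.2

free_physical_gb = total_gb - written_gb      # 100 GB left to write
provisionable_gb = free_physical_gb * ratio   # 120 GB may still be provisioned

# Even under the free-based reading, you can provision 120 GB while only
# 100 GB is actually writable: over-provisioning by design.
```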
16:31:27 <jgriffith> wiggin15 don't have to yell :)
16:31:40 <wiggin15> geguileo wrong...
16:31:48 <wiggin15> sorry :)
16:32:03 <xyang1> jungleboyj: I'll send you review links with more calculations for you to enjoy:)
16:32:04 <jgriffith> wiggin15 I'm just kidding, I'm the resident "yeller" here ;)
16:32:19 <smcginnis> jgriffith: WHAT?
16:32:27 <jungleboyj> xyang1:  No thank you.
16:32:28 <jgriffith> smcginnis YOU HEARD ME!!!!
16:32:38 <wiggin15> So..
16:32:45 <jungleboyj> @!h
16:32:45 <_pewp_> jungleboyj (/ .□.) ︵╰(゜Д゜)╯︵ /(.□. )
16:32:46 <patrickeast> xyang1: maybe that's what we need: some documentation that shows how all the math works today and what the expected behaviors/use-cases are
16:33:01 <patrickeast> instead of knowledge buried into reviews
16:33:06 <jungleboyj> patrickeast:  Not an all bad idea.
16:33:06 <wiggin15> like I said, I looked at the code and current behavior is not consistent
16:33:12 <xyang1> patrickeast: sure, let me dig something out:)
16:33:14 <patrickeast> then we can take a look at it and decide if it is actually what we need
16:33:21 <smcginnis> Since there's inconsistency between drivers, I do think that's a sign we need to better document expectations here.
16:33:32 <jungleboyj> Would go into doc/source/user or doc/source/reference
16:33:33 <geguileo> this is pretty easy, let's vote if we want to change it from free to the normal overprovisioning mechanism
16:33:37 <smcginnis> wiggin15: +1 - it should at least be consistent.
16:33:49 <wiggin15> Do we agree that over-provisioning means you can provision more capacity than we can write?
16:33:54 <geguileo> I'm OK with changing it, but what about backward compatibility? Do we care?
16:34:02 <smcginnis> Consistently right would be best, but consistently wrong is maybe a little better than inconsistently wrong.
16:34:04 <geguileo> people may be expecting Cinder to work as it was
16:34:06 <xyang1> geguileo: I disagree with a vote
16:34:18 <xyang1> geguileo: this needs more thoughts
16:35:00 <smcginnis> Document expectations, fix inconsistent behavior, release note upgrade impacts for those that are "fixed" at least.
16:35:22 <geguileo> xyang1: ok, I have already thought about this when reading the bug and the patch
16:35:31 <wiggin15> Before we move maybe I can get thoughts on the question  from above. Let me repeat
16:35:40 <geguileo> xyang1: so I'm ok with people having time to do as well
16:35:43 <wiggin15> if we have 1 TB physical capacity and 1.2 TB virtual capacity, and we provision and write 1 TB - should we allow creation? We can't write any more but maybe we can provision because we have more virtual capacity... CapacityFilter currently doesn't pass in this case.
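The boundary case in the question above, made concrete (again an illustrative simplification assuming provisioned == written; not the actual CapacityFilter code):

```python
# 1 TB physical, ratio 1.2 => 1.2 TB virtual capacity; 1 TB provisioned
# and fully written.
total_gb = 1000
ratio = 1.2
provisioned_gb = 1000

free_physical_gb = total_gb - provisioned_gb   # 0 GB writable

# Total-based reading: 200 GB of virtual capacity remains, so creating
# another (thin) volume would be allowed.
virtual_free_total_gb = total_gb * ratio - provisioned_gb

# Free-based reading: 0 GB * 1.2 = 0 GB, so the scheduler rejects the
# request -- the behavior CapacityFilter shows today.
virtual_free_free_gb = free_physical_gb * ratio
```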
16:35:46 <patrickeast> smcginnis: +1, lets do that
16:35:55 <xyang1> geguileo: that's just you, not everyone who votes has already thought about it
16:36:23 <xyang1> geguileo: I just saw it a few minutes before the meeting
16:36:25 <smcginnis> wiggin15: Yes, if that's what they have configured, I believe we should allow creation of that 0.2 TB more.
16:36:36 <geguileo> xyang1: that's why I said I'm ok with giving time to other people
16:36:38 <xyang1> geguileo: even I need to dig out the old patches
16:36:39 <geguileo> ;-P
16:36:49 <xyang1> :)
16:37:42 <wiggin15> so.. should I bring this up next week again?
16:38:04 <geguileo> I think we should agree that we have to look at it
16:38:14 <geguileo> who is going to be looking at this besides wiggin15 and me?
16:38:22 <smcginnis> Who has an action to move this forward? xyang1 to find patches for background?
16:38:23 <wiggin15> xyang1 I would really appreciate if you leave your input in the bug report
16:38:32 <xyang1> I'll take a look
16:38:36 <wiggin15> thanks
16:38:44 <smcginnis> #action geguileo to add thoughts to bug report
16:38:48 <xyang1> smcginnis: yes
16:38:54 <geguileo> smcginnis: I already did that!
16:38:56 <smcginnis> #action xyang1 to find patches for historical reference.
16:39:07 <wiggin15> geguileo already commented, we are in disagreement :)
16:39:13 <smcginnis> geguileo: Then that's an easy action item to check off. :)
16:39:18 <geguileo> oh, I'm ok to change it
16:39:32 <xyang1> smcginnis: geguileo if we can simplify this, that will be great, but I could not get there in the past
16:39:37 <jgriffith> I hate to say this... but would it be worth a new spec to try and resolve all of this again?  Or are we resolute in what we have?
16:39:42 <geguileo> I was more explaining what we have and why it makes sense in a way
16:39:59 <geguileo> xyang1: probably wiggin15 approach will simplify it
16:39:59 <smcginnis> jgriffith: That may be the documentation we're looking for.
16:40:03 <xyang1> jgriffith: you really think so?:)
16:40:05 <jgriffith> and honestly perhaps an opportunity to discuss at PTG?
16:40:08 <smcginnis> Although I'd like to see something in the devref.
16:40:32 <smcginnis> PTG discussion sounds good. I have a feeling we won't resolve all questions by then.
16:40:35 <jgriffith> xyang1 I'm not a fan, but honestly we don't seem to be on the same page as a group here which is part of our problem
16:41:02 <jgriffith> anybody you ask about this seems to have a different interpretation, which explains the recurring bugs and the differences from device to device
16:41:14 <guyr-infinidat> Can we try just to summarize the disagreement? Is it about whether the capacity filter should look at the free physical capacity when over-provisioned or not?
16:41:28 <jgriffith> we may not have done a very good job of communicating/documenting this.  Maybe that's all we need is some authoritative documentation
16:41:29 <eharney> there is an existing spec on this, so it would be interesting to look through it and see if it matches what the code is doing at this point or not, and if the spec still agrees with how we think this is supposed to work
16:42:22 <smcginnis> [18 minutes left and two more topics on the agenda]
16:42:38 <xyang1> The spec was actually updated after the code was merged, but after that there are additional changes to the code
16:42:51 <wiggin15> I'm ok to move forward
16:43:00 <smcginnis> wiggin15: Can you add this to the PTG etherpad referenced earlier?
16:43:00 <wiggin15> We'll have to continue discussion though
16:43:09 <smcginnis> wiggin15: Yep, agree.
16:43:11 <wiggin15> I'm not sure I can attend. but ok
16:43:24 <smcginnis> wiggin15: We'll see how far we can get before then.
16:43:35 <wiggin15> Thank you
16:43:46 <smcginnis> And hopefully have a better common understanding by then.
16:43:58 <smcginnis> #topic Where should 3rd party drivers store their private data
16:44:08 <smcginnis> zengyingzhe: Hi, you're up.
16:44:14 <zengyingzhe> smcginnis, thanks.
16:44:36 <zengyingzhe> here's the thing.
16:44:39 <zengyingzhe> I committed a patch for the Huawei driver, which moved some driver private data from admin metadata to the volume DB field provider_location, because a non-admin user creating a volume from an image would fail due to not having admin metadata access permission.
16:44:40 <smcginnis> #link https://review.openstack.org/#/c/475774
16:44:51 <zengyingzhe> however, it looks like there's some controversy about this change; the main question is whether provider_location is a good place to store private data.
16:45:16 <zengyingzhe> So I started this topic to discuss the proper place to record driver private data, because we found that there are other drivers already using provider_location.
16:45:17 <smcginnis> And I think a key point is some of this is private data to the driver, not metadata for the admin.
16:45:26 <eharney> provider_location isn't the correct place for what is being stored here
16:45:48 <eharney> but there is also some confusion, because the original bug was that non-admins can't write to where it was previously stored -- and that is usually fixed by just elevating context and then you don't have to move the data
16:46:16 <DuncanT> I certainly thought this was what admin_metadata is for
16:47:12 <jgriffith> zengyingzhe can you clarify *what* is in that metadata?  I was under the impression that it was lun_id and lun_wwn?
16:47:16 <smcginnis> Image data is one thing, but I believe the Huawei driver has some other internal only data it needs to store.
16:47:47 <jgriffith> zengyingzhe or is it that entire list of lun_params?
16:47:51 <zengyingzhe> jgriffith, currently, we store LUN wwn in admin metadata.
16:48:09 <jgriffith> zengyingzhe right
16:48:22 <jgriffith> and that certainly fits in provider info IMO
16:48:28 <jgriffith> I'm asking if there are things I'm missing here?
16:49:05 <zengyingzhe> jgriffith, only LUN WWN in admin metadata, no other stuff
16:49:56 <tbarron> fwiw manila has "driver-private-data" - driver can return arbitrary key-values of significance only to it in this as a model update and it will be stored in DB
16:50:07 <zengyingzhe> but we also store some other data in normal metadata, which may cause an issue in the backup & restore case
16:50:09 * tbarron fills in for bswartz bingo role on that one
16:50:19 <smcginnis> tbarron: :)
16:50:30 <jgriffith> zengyingzhe maybe another way of asking; can you show me what the resultant provider-info dict will look like after your change?
16:50:34 <zengyingzhe> So we also move those data.
16:50:59 <eharney> is backup and restore not keeping track of some data that it should?
16:51:47 <zengyingzhe> jgriffith, something like this, {'lun_wwn':***, 'lun_id':***} and also other info about LUN itself.
16:53:20 <zengyingzhe> so after this change, all the private data about the LUN is stored in provider_location, with no use of admin metadata or normal metadata anymore.
16:53:45 <smcginnis> I believe it's the only usable field we have for private driver data at the moment.
16:54:38 <geguileo> smcginnis: is that good enough reason to use it?
16:55:02 <jgriffith> provider_id is another one
16:55:03 <eharney> does it fit into DriverInitiatorData at all?
16:55:05 <smcginnis> geguileo: If we don't want to expose certain data externally, I think so.
16:55:09 <zengyingzhe> eharney, while restoring to a 3rd-party volume, metadata is also restored to the newly created volume, but this will cause an overwrite.
16:56:24 <eharney> zengyingzhe: what overwrites it?
16:56:39 <smcginnis> 4 minutes.
16:57:24 <zengyingzhe> some private info about the original LUN was written to the newly created volume's metadata
16:57:29 <tommylikehu> eharney:  zengyingzhe 's point is that if we use metadata the lun information will be overwritten, I guess
16:57:41 <zengyingzhe> tommylikehu, yes
16:57:59 <eharney> ok
16:58:15 <zengyingzhe> so we must make this change I think.
16:58:38 <smcginnis> They want to store private data in the private data field. Not sure why the pushback really.
16:58:59 <eharney> the pushback was because the original explanation didn't make sense
16:59:06 <eharney> if they want to move it they can move it
16:59:33 <eharney> i was just trying to figure out if the change was being made with a correct understanding of how we manage this data
16:59:58 <smcginnis> eharney: OK, so maybe we just need a better commit message explaining some more of the background?
17:00:27 <smcginnis> Out of time...
17:00:38 <geguileo> lucky me!
17:00:46 <smcginnis> Sorry geguileo, I guess we'll have to save yours for next time.
17:00:59 <geguileo> smcginnis: I'll bug everyone in the channel, no problem
17:01:05 <smcginnis> geguileo: Thanks
17:01:17 <johnsom> o/
17:01:31 <smcginnis> zengyingzhe: We'll have to follow up. Thanks for the discussion.
17:01:34 <smcginnis> #endmeeting