16:00:01 <thingee> #startmeeting Cinder
16:00:01 <openstack> Meeting started Wed Nov 19 16:00:01 2014 UTC and is due to finish in 60 minutes.  The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:04 <openstack> The meeting name has been set to 'cinder'
16:00:09 <thingee> hi everyone
16:00:16 <bswartz> .o/
16:00:18 <e0ne> hi
16:00:25 <rhe00_> hi
16:00:32 <DuncanT_> hey
16:00:33 <scottda> hi
16:00:36 <bswartz> welcome back mike hope you had a good vacation
16:00:43 <thingee> bswartz: thanks!
16:00:44 <tbarron> hi
16:00:49 <mtanino> hi
16:01:02 <pwehrle> o/
16:01:18 <thingee> yes sorry for the slow movement on things. driver bps should've still been taken care of, but last week I was out on vacation and unfortunately my laptop was destroyed on the way to the summit
16:01:20 <eharney> hi
16:01:26 <thingee> so reviewing stuff was not great.
16:02:01 <thingee> I'm caught up on stuff with emails/work etc so reviews should be moving forward
16:02:14 <rushiagr> hi!
16:02:20 <thingee> just quick announcement, we're past accepting new drivers this release
16:02:21 <lpabon> o/
16:02:25 <rushil1> o/
16:02:30 <winston-d> o/
16:02:51 <thingee> so what you see in k1 is what we have
16:02:54 <jungleboyj> o/
16:03:09 <thingee> https://launchpad.net/cinder/+milestone/kilo-1
16:03:09 <xyang1> hi
16:03:22 <thingee> ok lets get started
16:03:26 <jungleboyj> thingee: So, those that are still in the review process are still valid, though?
16:03:44 <thingee> jungleboyj: yup, all I wanted was the bp.
16:03:51 <thingee> the intention that this is going to happen
16:03:55 <jungleboyj> Excellent +2
16:04:06 <thingee> #topic 3rd party CI
16:04:09 <thingee> DuncanT_: you're up
16:04:19 <DuncanT_> Right
16:04:23 <thingee> oh yes agenda today: https://wiki.openstack.org/wiki/CinderMeetings
16:04:58 <DuncanT_> My usual question: How are people doing with 3rd party CI? Are we ready to pencil a cutoff date for having it working or are we going to let it drag?
16:05:21 <DuncanT_> Several people have suggested that a grace period is needed for new drivers
16:05:25 <DuncanT_> That sounds fine to me
16:05:43 <bswartz> +1 for grace period -- it's hard enough to write a new driver
16:05:46 <flip214> DuncanT_: what cutoff date? isn't it good to _add_ CI, no matter what time?
16:05:57 <jungleboyj> DuncanT_ We are working on moving some of our CI that is currently in China to the US to avoid firewall problems.
16:06:04 <thingee> flip214: to create pressure. otherwise people put it off
16:06:11 <flip214> we're currently setting up one, and pressure won't help
16:06:16 <jbernard> what is the proposed length of the grace period?
16:06:17 <DuncanT_> flip214, I mean a cutoff where we start to talk about removing untested drivers
16:06:17 <thingee> flip214: +1
16:06:31 <thingee> flip214: not saying it's right, just that's the short answer.
16:06:33 <jungleboyj> DuncanT_ We hope that will help with consistent results.  I have set an internal cutoff date that I will keep to myself.  ;-)
16:06:36 <DuncanT_> jbernard, the end of the release
16:06:49 <DuncanT_> jbernard, Maybe?
16:07:08 <thingee> is anyone interested in collaborating on a simpler solution?
16:07:11 * DuncanT_ is flexible for when we pick, but as things are going, progress is glacial
16:07:12 <flip214> well, I'm planning to use the CI system for internal QA too, so there's a good reason to *make it work*.
16:07:22 <DuncanT_> flip214, +2
16:07:38 <xyang1> DuncanT_: do you mean for existing drivers, if CI is not up and running, the driver will be removed before Kilo GA?
16:07:41 <flip214> well, then I'm at +3, so I'm good to go ;)
16:07:47 <thingee> I've brought this up at the summit. and I think people are likely to build a ci if it's easier than the current solutions
16:08:00 <DuncanT_> xyang1, That is what I'm thinking, and it is what we've said before
16:08:11 <xyang1> DuncanT_: for new drivers, it is different?
16:08:45 <thingee> DuncanT_: what's the suggested deadline?
16:09:03 <jungleboyj> DuncanT_ K2?
16:09:03 <rhe00_> would it be possible for someone that has it running to clone the environment without releasing anything confidential?
16:09:06 <DuncanT_> K-2 seem reasonable for existing drivers?
16:09:25 * jungleboyj hopes so.
16:09:35 <thingee> DuncanT_: are you going to communicate to maintainers?
16:09:43 <DuncanT_> thingee, I can do so again, yes
16:09:57 <thingee> DuncanT_: are we deprecating drivers? or just flat out removing?
16:10:16 <thingee> even broken non responsive maintainers, we agreed to deprecate
16:10:30 <thingee> broken drivers/non responsive maintainers*
16:10:42 <thingee> I know that takes away what you're trying to do here, but bringing it up
16:10:44 <timcl> K2 for existing drivers only? What about the new drivers coming in? K2 is going to be a challenge especially with Fibre Channel
16:10:46 <DuncanT_> thingee, deprecation or removal... I'll probably put the patches up for removal then convert them to deprecation
16:10:48 <jungleboyj> DuncanT_: So the expectation is that maintainers are reliably reporting CI results by K-2 ?
16:11:04 <DuncanT_> jungleboyj, For existing drivers, yes
16:11:10 <jungleboyj> Ok.
16:11:38 <DuncanT_> timcl, New drivers maybe target the end of the release? With a hard cutoff of L-2
16:11:44 <thingee> Since I know not everyone attends this meeting unfortunately, I think DuncanT_ should also post this to the list.
16:12:09 <DuncanT_> thingee, Will do. I'll email maintainers directly where possible too
16:12:29 <thingee> anyone opposed to this, besides there being more work for you? :)
16:12:30 <timcl> DuncanT_: OK we'll digest that and see where we are in the FC side
16:12:53 <DuncanT_> timcl, Cool. Reach out to me if there are major issues, we can work on them.
16:13:14 <DuncanT_> Ok, I think that's me done for this topic. Thanks all
16:13:17 <timcl> DuncanT_: thx
16:13:22 <DuncanT_> Feel free to action me
16:13:35 <DuncanT_> #action or whatever
16:13:40 <thingee> ok I take this silence that people are fine with k-2. this will move on to the mailing list
16:13:48 <thingee> DuncanT_: end of k-2?
16:13:49 <jungleboyj> thingee: +2
16:14:01 <DuncanT_> End of k-2, yes
16:14:29 <thingee> #agreed end of k-2 deadline for existing drivers to have a ci
16:14:33 <thingee> thanks DuncanT_
16:14:52 <thingee> #action DuncanT_ to post to the openstack dev list about ci deadline
16:15:10 <thingee> #action DuncanT_ to email existing driver maintainers about ci deadline
16:15:30 <thingee> #topic Kilo mid-cycle meet-up:
16:15:34 <thingee> jungleboyj: you're up
16:16:01 <jungleboyj> Thank you.  So, I have started the process of planning the mid-cycle meet-up in Austin:  https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
16:16:09 <thingee> #link https://etherpad.openstack.org/p/cinder-kilo-midcycle-meetup
16:16:44 <jungleboyj> For room planning I need to get a high estimate of how many people are coming sooner rather than later.
16:17:19 <jungleboyj> So, if you think that you have a decent chance of getting travel approved, please put your name in the ether pad.
16:17:25 * DuncanT_ is waiting to hear back from management
16:17:40 <thingee> jungleboyj: so these are set?
16:17:45 <thingee> these dates*
16:17:52 <jungleboyj> I have space for ~20 lined up.  If that isn't enough space I will have to get creative.
16:18:29 <jungleboyj> thingee: I thought that worked best, based on discussion at the Summit.
16:18:37 <thingee> jungleboyj: yup just making sre
16:18:38 <thingee> sure
16:18:42 <thingee> anteaya: ^
16:19:00 <jungleboyj> January 27, 28 and 29 for those who weren't aware.
16:19:01 <thingee> #info dates for the meetup are Jan 27-29 2015
16:19:11 <anteaya> yup
16:19:18 <thingee> #info room for 20 people sign up now!
16:19:36 <jungleboyj> Unfortunately there is another event on site at that time which is why I need to get creative if we have more than 20 people.  :-)
16:19:46 <thingee> jungleboyj: anything else?
16:19:50 <asselin> \o
16:20:05 <thingee> jungleboyj: can you post this to the mailing list as well?
16:20:26 <jungleboyj> I think that is it right now.  Sure can.  Thanks to those who are already putting their names in!
16:20:30 <thingee> #action jungleboyj to post to the openstack lists about the meetup
16:20:34 <winston-d> Dell was also willing to sponsor the meetup?
16:20:38 <jungleboyj> We will be covering dinner one night as well.
16:20:43 <thingee> woot
16:20:52 <anteaya> I'll sign up, bump me if you need the space for someone else
16:20:54 <jungleboyj> Going to talk to Dell and NetApp to see if they want to do dinner on other nights.
16:21:01 <thingee> anteaya: no way, you're going
16:21:13 <thingee> jungleboyj: thanks
16:21:13 <jungleboyj> Can't have a party without anteaya
16:21:19 <jungleboyj> Thank you.
16:21:28 <tbarron> jungleboyj: esker said he wanted to sponsor dinner from NetApp one night
16:21:33 <jungleboyj> B there B square.
16:21:37 <thingee> #topic What to do about cinder.conf.sample
16:21:40 <jungleboyj> tbarron: Right.
16:21:41 <thingee> tbarron: awesome!
16:21:47 <thingee> jungleboyj: you're up again
16:21:57 <jungleboyj> :-)
16:22:04 <anteaya> thingee jungleboyj awwww, thanks
16:22:11 <jungleboyj> So, I don't know what to say here.  Just wanted to have the discussion.
16:22:11 <thingee> do you have the change in question from yesterday?
16:22:13 <hemna> so whats wrong with cinder.conf.sample right now?
16:22:34 <eharney> most recent problem is that dep changes broke it on the stable branch, right?
16:22:35 <jungleboyj> Do we want to do something different or just keep making 1sie 2sie fixes when libraries change?
16:22:52 <hemna> :(
16:23:04 <DuncanT_> We have the option of removing external libraries for the sample conf
16:23:08 <eharney> i liked the idea that someone proposed of only generating the sample based on our options and not those from other libraries, which i think helps most of this
16:23:17 <thingee> DuncanT_: that's what I was wondering too
16:23:20 <hemna> what external lib broke this ?
16:23:22 <DuncanT_> Not ideal, but it solves my (review) usecases
16:23:26 <jungleboyj> eharney: +1
16:23:28 <ameade_> eharney: +1 at least
16:23:29 <akerr> +1 to removing external libs
16:23:32 <thingee> eharney: +1
16:23:37 <jungleboyj> hemna:  oslo.db
16:23:41 <hemna> ugh
16:23:55 <ameade_> heck i almost think if generating it is easy why dont we just have it generated on install or something?
16:24:02 <DuncanT_> Packagers won't like it, but they already have to rebuild for other openstack projects now so it shouldn't be a biggie
16:24:02 <winston-d> Or, we can stop using check_update.sh in gate
16:24:06 <ameade_> and not maintain a generated file in the repo
16:24:13 <hemna> ameade_, except the purpose of generating it was to verify it in the gate as well
16:24:23 <hemna> in case drivers made changes and the sample didn't contain those changes
16:24:42 <DuncanT_> Being able to see the generated changes is *really* useful when looking for back compat issues
16:24:45 <hemna> I think it's best to remove those external lib conf entries if possible
16:24:48 <eharney> yeah, one of the reasons we wanted to keep the generation was to help reviewing
16:24:52 <ameade_> hemna: :but it wouldn't matter if it was generated
16:24:56 <hemna> maybe put those in another sample that isn't gated
16:25:00 <thingee> eharney: which change exists to stop generating for external libs?
16:25:14 <eharney> thingee: i don't know, this was just an idea from Duncan i think?
16:25:22 <thingee> #idea stop generating cinder.conf.sample based on external libs
16:25:33 <thingee> DuncanT_: what's the annoyance of this for packagers?
16:26:17 <DuncanT_> thingee, They were initially unhappy about having builddeps on the tools needed to rebuild the sample conf
16:26:26 <hemna> if we can put those external lib conf entries in their own sample, then at least those exist somewhere for an admin to lookup.  we just don't gate on that other file.
16:26:32 <e0ne> DuncanT_: +1
16:26:36 <DuncanT_> thingee, But that should be a non-issue now since some projects dropped the sample conf completely
16:26:57 <eharney> hemna: but then we have to figure out how to not have it out of date when the external libs change
16:27:28 <DuncanT_> hemna: An out-of-date file is probably as bad if not worse than no file... packagers can always generate an up-to-date one
16:27:29 <hemna> :( yah I suppose so.  they would have to get regenerated
16:27:36 <hemna> bleh
16:27:41 <thingee> jungleboyj: can we get a bug unless it already exists to monitor this?
16:27:42 <hemna> ok, -1 on my idea then :)
16:28:13 <thingee> this would be good to target for k-1
16:28:13 <jungleboyj> thingee: Yeah, I can create a bug.
16:28:23 <jungleboyj> hemna: -1 to your idea.
16:28:44 <thingee> who wants to take this on? :)
16:28:57 <jungleboyj> should I open a bug and see what it takes to just remove the external libraries from the generation?
16:29:05 <thingee> jungleboyj: yea
16:29:05 <ameade_> maybe external lib dependencies should be locked down to a version
16:29:12 <hemna> jungleboyj, +1
16:29:12 <ameade_> so they would have to explicitly change
16:29:26 <jungleboyj> thingee: Ok.  I will take a look.
16:29:29 <eharney> ameade_: seems like something to consider for the stable branch, i don't think we can in master
16:29:32 <hemna> ameade_, I think most are in the requirements.txt
16:29:40 <jungleboyj> eharney: +1
16:29:56 <jungleboyj> I am surprised that we don't lock things down in stable.
16:29:57 <eharney> hemna: requirements.txt doesn't restrict upgrades though
16:29:57 <ameade_> eharney: why not?
16:30:03 <thingee> #action jungleboyj to file a bug for removing external libs from cinder.conf.sample generation
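[editor's note: the "only generate from our own options" idea discussed above could be sketched with an oslo-config-generator config file like the following. This is an illustrative fragment, not the actual file that landed in cinder; the point is that listing only cinder's own option namespace keeps external libraries such as oslo.db out of the generated sample.]

```ini
# Hypothetical oslo-config-generator config (e.g. etc/oslo-config-generator/cinder.conf).
# Only cinder's own namespace is listed, so option changes in external
# libraries no longer churn the checked-in cinder.conf.sample.
[DEFAULT]
output_file = etc/cinder/cinder.conf.sample
wrap_width = 79
namespace = cinder
```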
16:30:14 <hemna> eharney, even if it has an upper version limit ?
16:30:26 <eharney> hemna: most don't
16:30:35 <hemna> unless someone does a manual pip install I suppose
16:30:44 <thingee> jungleboyj: anything else?
16:30:46 <eharney> hemna: it does if we add them, which i think was the proposal
16:30:50 <e0ne> hemna: how do you propose managa max version?
16:30:59 <e0ne> s/managa/manage
16:31:00 <hemna> eharney, I think that's always a good idea to have an upper version limit
16:31:10 <jungleboyj> thingee: Nope, we have a direction to try.
16:31:19 <jungleboyj> I will give it a shot and see who cries.
16:31:21 <hemna> as you don't want a 2.0 upgrading to 3.0 of a package where the api is completely broken
16:31:27 <eharney> ameade_: i think we'd have to have other projects do the same for gate testing to work
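[editor's note: the version-capping idea from ameade_ and hemna amounts to adding upper bounds in requirements.txt on the stable branch, roughly as below. The version numbers are purely illustrative, not the real Juno caps, and as eharney notes this only works if other projects gate with compatible caps.]

```ini
# Sketch of a capped requirement on a stable branch: a new oslo.db
# release can no longer be picked up silently and change the sample conf.
oslo.db>=1.0.0,<1.1.0
```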
16:31:38 <flip214> ----------- people: half the time == half the topics??
16:32:01 <thingee> flip214: thanks.
16:32:05 <thingee> #topic Volume metadata having semantic meaning to Cinder
16:32:07 <e0ne> hemna: minor version could also break api:(
16:32:08 <thingee> DuncanT_: you're up
16:32:16 <DuncanT_> Me again? Ok
16:32:18 <DuncanT_> So
16:32:44 <DuncanT_> Historically speaking, volume metadata had no semantic meaning
16:32:50 <jungleboyj> It is the DuncanT_ and jungleboyj show.
16:32:56 <DuncanT_> Cinder (or nova-volumes) never looked at the contents
16:33:14 <DuncanT_> Periodically, people try to put values in there that cinder acts on
16:33:33 <DuncanT_> Personally, I think this breaks workload portability, and it's a terrible interface
16:33:39 <thingee> DuncanT_: it might make sense to make a comparison in our dev sphinx doc?
16:33:42 <DuncanT_> I'm wondering what others think
16:33:43 <thingee> that we can point people to
16:34:05 <eharney> this concern doesn't apply to admin metadata, right?
16:34:05 <DuncanT_> If we want per-volume tuning, I'd rather define a good interface for that
16:34:12 <DuncanT_> eharney, Correct
16:34:23 <hemna> DuncanT_, +1
16:34:40 <eharney> i generally agree with the concern then
16:34:44 <tbarron> portability seems very important, esp. since we restore volume metadata from backup
16:34:47 <e0ne> thingee: +1 but will people read it?
16:34:49 <DuncanT_> Note that solidfire, among others, already optionally consumes the volume metadata
16:34:52 <hemna> we used to store array specific cruft in the volume metadata for us to use at a later date.  But we pulled it because it's visible to the user.
16:35:07 <thingee> e0ne: I just feel like poor DuncanT_ has to explain this over and over :)
16:35:14 <e0ne> :)
16:35:25 <DuncanT_> I can certainly document it if we come to a decision
16:36:12 <thingee> DuncanT_: I agree with a good interface for it. what suggestions do you have?
16:36:40 <DuncanT_> It needs to be discoverable (since every backend will potentially have at least some unique feature)
16:36:51 <DuncanT_> Other than that, I'm not yet sure
16:37:06 <DuncanT_> I can have a go at a blueprint, see what I can come up with
16:37:15 <thingee> sure
16:37:16 <DuncanT_> But anybody else with ideas is very, very welcome
16:37:22 <DuncanT_> I've plenty on my plate already
16:37:28 <hemna> isn't per volume tuning something that volume types are for ?
16:37:44 <DuncanT_> Hemna: Yeah, but you can't change the type per volume
16:37:46 <flip214> hemna: that would give an explosion of volume types, I believe
16:37:56 <AmitKDas> discovery seems good..what about updates
16:38:03 <hemna> every volume in that type gets the same tuning.
16:38:05 <DuncanT_> hemna, Some people want to fine tune QoS and stuff per-volume
16:38:10 <rushiagr> Personally, I feel as a user, I should get my volumes without any metadata, so that I can play around with it the way I want.
16:38:15 <DuncanT_> hemna: Within the limits of the type
16:38:19 <hemna> yah
16:38:22 <DuncanT_> rushiagr, Agreed
16:38:25 <winston-d> When QoS spec was implemented, it was created as standalone entity that can be associated with either types or single volume. we can do the same for general per-vol tuning.
16:38:44 <DuncanT_> winston-d, That might mean thousands of QoS types though
16:39:12 <hemna> DuncanT_, so who is going to create these per volume tuning metrics?   the drivers?   the admin ?
16:39:19 <DuncanT_> hemna: The tenant
16:39:29 <winston-d> DuncanT_: unfortunately, yes
16:39:35 <hemna> hrmm
16:39:38 <DuncanT_> hemna: Hence the need for a well designed interface
16:39:50 <DuncanT_> hemna: I'm in no rush to get it in, I'd rather get it right
16:40:08 <DuncanT_> I just want to stop new drivers copying the couple of old ones that abuse volume metadata
16:40:27 <thingee> DuncanT_: I think a write up of the problem might be fine. No solution needed yet.
16:40:27 <hemna> almost sounds like a temp qos type that's created and applied one time to a single volume.
16:40:40 <xyang1> DuncanT_: will the metadata still be stored as key-value pairs
16:40:49 <DuncanT_> xyang1, Don't know yet
16:41:04 <DuncanT_> xyang1, I want to stop calling it metadata right now though
16:41:13 <DuncanT_> 3 types of metadata is more than enough
16:41:16 <thingee> DuncanT_: +1
16:41:18 <DuncanT_> Tuning values?
16:41:32 <hemna> sure
16:41:51 <rushiagr> DuncanT_: +1 for stopping calling it metadata
16:41:59 <rushiagr> 'volume/backend properties' maybe :)
16:42:03 <thingee> DuncanT_: just write up the problem if you can and link to it on the ML?
16:42:10 <DuncanT_> Sure, will do
16:42:24 <DuncanT_> I'll also -1 the new driver that is doing the wrong thing
16:42:26 <thingee> DuncanT_: don't spend time on a solution. I'd rather leave that to the ML discussion
16:42:41 <DuncanT_> thingee, Ok
16:42:47 <thingee> #action DuncanT_ to write up the problem and mention it in the openstack dev ML for discussion
16:42:57 <flip214> perhaps a good example implementation would be nice to have?
16:43:03 <thingee> #topic Discuss how to cleanup stuck volumes (creating/deleting/attaching/detaching)
16:43:07 <flip214> eg. the LVM driver could switch some things in /sys
16:43:08 <thingee> scottda: you're up
16:43:09 <scottda> Blueprint is https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
16:43:09 <DuncanT_> flip214, I've got no example yet
16:43:14 <scottda> The bigger problem is that volumes can get stuck in various states:
16:43:20 <scottda> creating/deleting/attaching/detaching
16:43:20 <thingee> #link https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver
16:43:27 <scottda> and this needs fixing and/or syncing in Cinder DB, the backend storage, Nova DB,
16:43:27 <scottda> compute host, compute <instance_id>.xml
16:43:35 <scottda> The blueprint ^^^ is just to modify cinderclient reset-state
16:43:40 <thingee> #idea reset state should also involve drivers
16:43:44 <scottda> Currently, reset-state just changes the Cinder DB
16:43:49 <flip214> scottda: how will that interact with DuncanT_'s fine-grained state machine?
16:43:52 <scottda> which can cause things to break.
16:44:01 <scottda> Flip214: good question
16:44:11 <hemna> this one seems problematic because you don't know if the error was in the cinder code or the backend
16:44:12 <DuncanT_> flip214, It will integrate... might add some new states but that is fine
16:44:16 <scottda> a well-implemented state machine could help to prevent these issues...
16:44:26 <hemna> the error could simply be an artifact of a failure in cinder, in which case the driver has nothing to do.
16:44:34 <scottda> The idea is that reset-state will call the driver.
16:44:40 <jbernard> scottda: +1, i see this somewhat frequently in volume migration
16:44:41 <scottda> The driver would attempt to set the state, and then return, and then the DB would set state.
16:44:46 <DuncanT_> hemna: The driver will just be asked if the transition is ok
16:44:57 <DuncanT_> hemna: If it has nothing to do, it can just return
16:45:07 <hemna> DuncanT_, that's not what scottda just said though :(
16:45:25 <scottda> Well, the driver will attempt to "do the right thing"
16:45:32 <hemna> that's the problem
16:45:33 <scottda> whether that is nothing, or change the state
16:45:37 <DuncanT_> hemna: It will always call the driver
16:45:43 <hemna> I don't think the driver can really know what to do, if the failure was cinder's alone
16:45:44 <guitarzan> this is a really good idea that sounds hard :)
16:45:52 <hemna> the driver will not know what to clean up.
16:46:00 <flip214> scottda: DuncanT_: I meant that the fine-grained state machine might _solve_ the restart-problem?
16:46:03 <DuncanT_> hemna: The driver just gets asked 'is it ok to mark this volume as available'?
16:46:08 <thingee> taking a step back, the original goal of reset-state was just to be simple for updating the db. The admin is supposed to be responsible for verifying the state of a resource in a backend and then forcing cinder to have it in whatever state after.
16:46:12 <scottda> if the command is "reset-state available" , the driver should see if it can set state to available
16:46:15 <xyang1> I think this is an important problem to solve.  just not sure about the Nova side.  If you don't look at the Nova side, the state you reset could still be wrong
16:46:23 <hemna> thingee, +1
16:46:25 <DuncanT_> thingee, That turns out to be not much use in practice
16:46:33 <scottda> xyang1: agreed
16:46:45 <thingee> DuncanT_: not saying it's right, just saying that was the original intent
16:46:46 <guitarzan> DuncanT_: I disagree, it's pretty useful when nova/cinder interactions break down
16:46:46 <scottda> But the complete solution is much more complicated.
16:46:51 <DuncanT_> thingee, Agreed
16:47:03 <scottda> thingee: perhaps a new command or API then?
16:47:13 <DuncanT_> guitarzan, Only for a tiny subset of cases, or where you go kick the backend by hand
16:47:25 <scottda> I'm more interested in a solution to the problem than how it gets done
16:47:25 <DuncanT_> We can keep the old behaviour too
16:47:29 <thingee> scottda: I agree. I'd rather not change original behavior to be done by --force
16:47:29 <DuncanT_> --force or something
16:47:30 <hemna> I think this one needs to be thought out more.   It seems like a rathole that could lead to more trouble.
16:47:39 <thingee> scottda: I would've like it that way originally though :)
16:47:43 <guitarzan> DuncanT_: nah, whenever nova fails to attach, cinder is still in attaching
16:47:48 <guitarzan> we might actually see that problem more than most though
16:47:51 <xyang1> if attach volume times out, the volume will be back to 'available', but the array continues with attach and volume can be attached to the host.  so it is out of sync
16:47:58 <hemna> xyang1, +1
16:48:08 <DuncanT_> guitarzan, yes, but if the backend has opened targets and such, then it is wrong to just mark it available again
16:48:10 <TobiasE1> xyang, +1
16:48:10 <flip214> -------------- only 11 mins left
16:48:13 <hemna> if nova pukes on attaching, then it tells cinder to reset it back to available.
16:48:13 <scottda> I've written an environment-specific solution, and it is 3000 lines of code....
16:48:16 <guitarzan> DuncanT_: that's true
16:48:17 <DuncanT_> guitarzan, The targets should be torn down first
16:48:19 <thingee> flip214: the last topic will be quick :)
16:48:21 <scottda> so this is not an easy problem to solve.
16:48:30 <flip214> thingee: 2 more topics.
16:48:32 <DuncanT_> guitarzan, Just about every possible transition has similar issues
16:48:37 <hemna> if it doesn't, that's a nova problem, and asking a driver the question "can I move it to available" is the wrong thing to do.
16:48:38 <scottda> So where is the best place to carry on this discussion?
16:48:39 <guitarzan> with the caveat that some drivers do the export stuff on create anyway
16:48:46 <scottda> ML?
16:48:47 <thingee> flip214: doh thanks
16:48:53 <guitarzan> DuncanT_: sure, I'm just saying it's far from useless
16:48:54 <xyang1> so in this case, we need to do something from the Nova side, 'available' is definitely wrong
16:48:58 <DuncanT_> guitarzan, In which case, they can just return 'ok' for all transitions.
16:49:17 <DuncanT_> xyang1, If you can tear down the targets while they are in use, the available is fine
16:49:39 <guitarzan> yeah, there's several cases that seem like we could handle
16:49:50 <scottda> I can document more use cases, perhaps on the wiki, and then send out something on the ML
16:49:55 <thingee> scottda: can we involve the nova folks on this? perhaps discussion of at least [cinder][nova] in the ML?
16:49:56 <flip214> scottda: +1
16:49:58 <thingee> scottda: +1
16:50:00 <DuncanT_> Document them in the BP
16:50:12 <hemna> DuncanT_, +1
16:50:12 <scottda> thingee: yes
16:50:14 <guitarzan> nova is certainly involved
16:50:18 <thingee> DuncanT_: not a spec?
16:50:24 <hemna> and/or in the cinder-spec
16:50:24 <scottda> There is a spec
16:50:26 <DuncanT_> Spec, sorry, yes
16:50:28 <scottda> I'll put the use cases there
16:50:34 <thingee> scottda: thanks
16:50:36 <scottda> Ok, thans all
16:50:41 <scottda> s/thans/thanks
16:50:48 <DuncanT_> Nova might need a similar call too, but cinder generally can make some decisions
16:50:49 <thingee> #action scottda to document more use cases and post to the openstack dev ML to involve nova folks
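[editor's note: the driver-consultation model DuncanT_ and scottda describe ("the driver just gets asked 'is it ok to mark this volume as available?'") could look roughly like the sketch below. Every name here is hypothetical; the real interface would come out of the reset-state-with-driver spec.]

```python
# Hypothetical sketch of reset-state consulting the driver before the DB
# update, per the discussion above. None of these names are real cinder APIs.

class FakeDriver:
    """Stands in for a volume driver that can veto a state transition."""

    def validate_transition(self, volume, new_status):
        # A driver with open targets refuses 'available' (they must be
        # torn down first); a driver with nothing to clean up just agrees.
        if new_status == 'available' and volume.get('targets_open'):
            return False
        return True


def reset_state(driver, volume, new_status):
    """Ask the driver first; only then update the (fake) DB record."""
    if not driver.validate_transition(volume, new_status):
        raise ValueError("driver refused transition to %s" % new_status)
    volume['status'] = new_status  # stands in for the cinder DB update
    return volume
```

A driver whose failure was purely on the cinder side, as hemna raises, would simply return True for every transition and the call reduces to today's DB-only behaviour.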
16:51:02 <thingee> #topic Are changes to the Cinder V1 API still worth the trouble?
16:51:10 <thingee> pwehrle: you're up
16:51:18 <pwehrle> Hi all
16:51:23 <pwehrle> more of a confused noob's question, really
16:51:32 <thingee> #link https://review.openstack.org/#/c/132657/
16:51:32 <pwehrle> I was encouraged by my upstream training mentor to ask it here to make sure the response is a unanimous "no"
16:51:48 * jungleboyj gives a unanimous no
16:51:49 <pwehrle> came across the problem with https://review.openstack.org/#/c/133383/, felt kind of bad not fixing it for v1
16:51:58 <thingee> pwehrle: I have spoken to smcginnis, and v1 changes were removed
16:52:05 <thingee> pwehrle: v1 will be gone in K
16:52:14 <DuncanT_> No new features in V1
16:52:16 <pwehrle> thingee: I read that
16:52:30 <thingee> pwehrle: I'm fine with bug fixes, just no new features
16:52:37 <pwehrle> thingee: thanks for making it clear
16:52:37 <thingee> pwehrle: this is the same policy across other projects
16:52:59 <DuncanT_> If we're having to change the catalogue format, then I'm not sure about removing it in K, but that's a separate argument
16:53:00 <pwehrle> OK, that works for me
16:53:05 <guitarzan> do most apis allow both name/id? is that not just a client feature?
16:53:21 <guitarzan> names aren't unique, this one specifically seems like a bad idea
16:53:35 <thingee> guitarzan: that's a good question. I'm only aware of clients doing this
16:53:36 <winston-d> guitarzan: +1, uuid only
16:53:46 <DuncanT_> guitarzan, Without good server-side search, doing it on the client is hard - you might have thousands of images
16:53:47 <guitarzan> this should just be a non controversial cinderclient patch
16:53:48 <thingee> for example, you can with the nova client
16:53:56 <eharney> guitarzan: i think some APIs allow either and return a server error if the search is ambiguous
16:53:59 <guitarzan> DuncanT_: we'd have to do that on the api side anyway
16:54:06 <guitarzan> eharney: ah, ok, I think that's nuts, but ok :)
16:54:14 <eharney> but i could be thinking about a Nova API, i forget..
16:54:14 <hemna> if you look at the client help for cinder, the volume_type param seems ambiguous during volume create.  It's not clear if it's name or uuid or both.
16:54:21 <DuncanT_> guitarzan, If we add a good search API then client side is fine for me
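[editor's note: the name-or-ID behaviour discussed above (accept either, but error on an ambiguous name rather than guess, as eharney describes some APIs doing) can be sketched as below. This is a simplified client-side model, not actual python-cinderclient code.]

```python
def find_volume(volumes, name_or_id):
    """Resolve a volume by exact ID first, then by (non-unique) name.

    `volumes` is a list of dicts with 'id' and 'name' keys.  A unique
    name resolves; a duplicate name is an error, since silently picking
    one of several matches would be worse than failing.
    """
    # An exact ID match wins outright.
    for v in volumes:
        if v['id'] == name_or_id:
            return v
    # Fall back to name lookup; names are not unique, so ambiguity is fatal.
    matches = [v for v in volumes if v['name'] == name_or_id]
    if not matches:
        raise LookupError("no volume named %r" % name_or_id)
    if len(matches) > 1:
        raise LookupError("name %r is ambiguous (%d matches); use the UUID"
                          % (name_or_id, len(matches)))
    return matches[0]
```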
16:54:27 <thingee> #topic Backport NFS security bugfix to stable/juno and stable/icehouse
16:54:35 <thingee> bswartz: you're up
16:54:36 <bswartz> hey guys
16:54:41 <thingee> link?
16:54:54 <bswartz> I've been poking various people to find out how they feel about backporting this change https://review.openstack.org/#/c/107693/
16:55:02 <thingee> #link https://review.openstack.org/#/c/107693/
16:55:14 <bswartz> it's a bug, but it's a significant change
16:55:22 <hemna> it's a big patch :(
16:55:35 <eharney> the main question i have there is do we also end up backporting the similar fixes for other *FS drivers
16:55:40 <bswartz> whether the bug deserves to be called a "security bug" is arguable
16:55:54 <jungleboyj> hemna: There is a lot of test code in there too.
16:56:00 <bswartz> but if you feel it is a security issue, then a backport should be seriously considered
16:56:15 <hemna> sure, but tests don't always get the bugs that are harder to find with these large patches
16:56:29 <thingee> bswartz: my thought, it does little to cinder core, so I'm not as concerned. if the other nfs folks want to spend time on this, I'm fine with that.
16:56:37 <bswartz> in all my 1on1 conversations, people suggested that I bring the issue to the whole group
16:56:49 <jungleboyj> hemna: Just saying a good number of the lines are in test files.
16:56:56 <hemna> true
16:56:57 <thingee> but I'm really going to rely on others to tell me this is fine for their drivers
16:56:58 <bswartz> okay
16:57:00 <DuncanT_> I think the stable guys would want strong buy-in from core on a change this big
16:57:16 * jungleboyj was the one suggesting that.
16:57:18 <hemna> I'm afraid of these large changes getting backported.  but that's just me.
16:57:21 <bswartz> the stable guys include thingee, jgriffith, and jungleboyj
16:57:33 <bswartz> so they can speak on behalf of cinder
16:57:34 <eharney> it's not very obvious that it would be worthwhile to backport IMO
16:57:50 <jungleboyj> thingee: I talked to him about it for a while last week.  I am ok with it if it is a pretty clean backport.
16:58:04 <thingee> jungleboyj: did someone from your team try it?
16:58:15 <bswartz> okay I'm hearing enough positive remarks that we'll go ahead and do the backport, and do the rest of the argument in the code review for the backport
16:58:18 <jungleboyj> thingee: Not yet.  jgriffith looked at it.
16:58:33 <thingee> jungleboyj: jgriffith doesn't have a driver with nfs to try.
16:58:44 <jungleboyj> thingee: Touche.
16:58:54 <eharney> so when we make the same fix for the GlusterFS driver, will people be ok with backporting it as well?
16:59:07 <bswartz> eharney: when a backport patch lands I'll make sure you're a reviewer on it
16:59:18 <jungleboyj> thingee: I will need to have our SONAS guy take a look at this.
16:59:32 <thingee> eharney: I won't be opposed
16:59:41 <thingee> bswartz: thanks
16:59:42 <jungleboyj> thingee: +1
16:59:53 <bswartz> thanks for squishing all the agenda items into 1 hour thingee!
17:00:00 <thingee> :)
17:00:03 <thingee> thanks everyone
17:00:05 <thingee> #endmeeting