16:00:20 #startmeeting cinder
16:00:21 Meeting started Wed Dec 3 16:00:20 2014 UTC and is due to finish in 60 minutes. The chair is thingee. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 The meeting name has been set to 'cinder'
16:00:40 o?
16:00:43 hi
16:00:44 o/
16:00:44 Hi
16:00:45 \o
16:00:46 hi
16:00:47 hello all
16:00:50 hi
16:00:59 agenda today
16:01:01 mornin
16:01:03 #link https://wiki.openstack.org/wiki/CinderMeetings
16:01:05 hello
16:01:35 hi
16:01:48 I sent an email to new driver maintainers that have a bp for k-1.
16:01:50 hello
16:01:51 .o/
16:01:57 o/
16:02:00 \o
16:02:03 hi
16:02:03 If I don't see code by the end of this week, I'm afraid we won't have time to review it.
16:02:16 so just fyi, I will start to untarget bps
16:02:30 thingee: I thought the 15th was the deadline previously stated.
16:02:40 smcginnis: reread the email
16:02:54 And there it is in my junk folder....
16:03:00 the 15th is for november
16:03:08 thingee: even if e.g. snapshots are not fully implemented yet? didn't get an email.
16:03:10 o/
16:03:23 ok, let's start
16:03:35 #topic should openstack-cinder room be archived?
16:03:40 so, Cinder is not archived here: http://eavesdrop.openstack.org/irclogs/
16:03:41 scottda: you're up
16:03:46 but it could be.
16:03:52 I have no problems with this
16:03:52 o/
16:03:57 +1
16:04:01 +1
16:04:03 +1
16:04:04 scottda: +1
16:04:08 I was told in the Infra channel to get a resolution at this meeting and they'd do it.
16:04:15 +1
16:04:16 scottda: looks golden :)
16:04:21 anything else?
16:04:25 fine by me
16:04:28 nope.
16:04:31 +1
16:04:31 yah it's ok.
16:04:33 +1
16:04:38 no more pr0n links in the channel guys.
16:04:40 Is there some kind of magic you need to do, thingee?
16:04:41 fine with me
16:04:46 #topic Continue the discussion on over subscription
16:04:49 hemna_ :(
16:04:49 xyang1: you're up
16:04:52 I have no concerns. Have wished I had it in the past. +1
16:04:53 ok
16:04:59 so we talked about it last week
16:05:05 #link https://review.openstack.org/#/c/129342/
16:05:19 xyang and I have had a few conversations about this
16:05:20 I discussed with bswartz and sorted out some differences and updated the spec
16:05:49 I decided to drop most of my objections last week after we reached agreement on the thick/thin provisioning capabilities
16:06:08 I still think we need to tweak the capabilities and extra specs (I think there should be 2, not 1)
16:06:19 but on the whole I'm pretty happy with the form xyang's spec takes now
16:06:51 bswartz1: what does that mean, 2 not one? is there an explicit limit in the spec?
16:06:54 should we discuss the capabilities/extra specs here, or in the spec review on gerrit?
16:06:54 so the capability is provisioning_capabilities; right now it can be thin, thick, or both
16:07:10 the spec suggests 1 capability with 3 values
16:07:16 I think I'd prefer 2 boolean capabilities
16:07:17 xyang1: why is that?
16:07:40 What does 'both' mean?
16:07:46 thingee: I can go either way. I put it together quickly last night
16:07:46 something like "supports_thick_volumes" and "supports_thin_volumes"
16:07:50 DuncanT: some devices let you "pick"
16:07:56 Ah, got you
16:07:58 both means it supports both thin and thick
16:07:58 Thanks
16:08:13 I was thinking extra specs, not capabilities
16:08:15 My bad
16:08:24 there's extra specs too
16:08:29 xyang1: bswartz1: how about "supported_provision_types=x,y,z"
16:08:32 bswartz1: sounds like that can be accounted for
16:08:33 just the spec needs to be updated
16:08:43 where x=thick, y=thin, z=both
16:08:43 jgriffith: it's impossible for the scheduler to filter on that
16:08:51 jgriffith: sure
16:08:54 bswartz1: why?
16:08:54 for filtering to work, it's better to have a list of booleans
16:09:02 bswartz1: disagree :)
16:09:10 jgriffith: I think that is the current proposal
16:09:27 xyang1: yes, but discussion started around booleans, names etc
16:09:38 the scheduler can filter on lots of things, you just have to set the extra specs correctly
16:09:41 ok
16:09:43 xyang1: just trying to inject some sanity and keep things readable and understandable
16:09:56 well..... maybe I'm lacking a clue here
16:10:16 I'm fine with whatever the team has decided on this one
16:10:30 bswartz1: really, by what I just proposed you can treat it as 3 booleans to eval in a single stat
16:10:45 bswartz1: still does what you're proposing, just less verbose
16:10:50 jgriffith: makes sense. I'm not sure what the problem is here.
16:10:51 bswartz1: look at the oslo scheduler code, it has lots of keywords to filter by
16:10:55 bswartz1: and easier to deal with
16:10:56 jgriffith: +1
16:10:59 I just want to make sure that admins are able to specify that some volume_types MUST be thick, while others must be thin, and maybe some don't care
16:11:12 bswartz1: sure
16:11:13 bswartz1: noted :)
16:11:13 and backends need to be able to advertise what they support
16:11:38 bswartz1: provisioning_type=thin...
16:11:45 must have "thin" in the results
16:12:01 if you don't care, you omit the spec from the type altogether
16:12:17 okay jgriffith, I will try that out
16:12:19 * jgriffith is worried he's missing something
16:12:21 bswartz1: are there any other issues besides that? if not, I'm sure xyang1 is going to treat herself to a milkshake with that spec being merged.
16:12:25 perhaps I underestimated how smart the filtering is in the scheduler
16:12:38 mmmmmmm.... milkshakes
16:12:41 thingee :)
16:13:22 * jungleboyj wants a milkshake
16:13:23 we specify thin,thick in our volume types already. not sure how this affects that, if at all
16:13:24 >_< it's lunchtime here and you're talking about delicious food
16:13:52 I'm going to take that as a no then. So I will look for a +1 from bswartz1 on that spec.
16:13:57 anything else xyang1?
16:14:03 all set
16:14:12 I'll +1 the spec after I convince myself that the capabilities/extra_specs are good enough
16:14:21 I don't expect there will be any issues
16:14:39 #topic Discussion on the current status of this spec "Support Modifying Volume Image Metadata"
16:14:49 davechen: hi
16:14:52 hi
16:14:57 #link https://review.openstack.org/#/c/136253/
16:14:58 thingee: hi
16:15:00 we're doing this again :)
16:15:12 DuncanT: present?
16:15:14 * jgriffith quickly reads spec
16:15:28 yes, I just updated the spec some mins ago.
16:15:30 bswartz1: https://github.com/openstack/cinder/blob/master/cinder/openstack/common/scheduler/filters/extra_specs_ops.py
16:15:33 Yup
16:16:07 I think the delete confusion is now sorted?
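
A minimal sketch of the capability/extra-spec matching discussed in the over-subscription topic above, in the spirit of the linked extra_specs_ops filter code. The key names here (provisioning_capabilities, provisioning:type) and the single thin/thick/both value are illustrative assumptions, not necessarily what the spec settles on:

    # Hypothetical names for illustration; the real spec may instead use two
    # boolean capabilities, as bswartz1 prefers.
    def backend_supports_type(backend_capabilities, type_extra_specs):
        """Return True if a backend's reported capabilities satisfy a volume type."""
        requested = type_extra_specs.get('provisioning:type')
        if requested is None:
            # The admin omitted the spec from the type: any backend matches.
            return True
        supported = backend_capabilities.get('provisioning_capabilities', 'thick')
        return supported == 'both' or supported == requested

    # A backend advertising 'both' satisfies a type that insists on thin
    # provisioning; a thick-only backend does not.
    print(backend_supports_type({'provisioning_capabilities': 'both'},
                                {'provisioning:type': 'thin'}))   # True
    print(backend_supports_type({'provisioning_capabilities': 'thick'},
                                {'provisioning:type': 'thin'}))   # False
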
16:16:11 avishay: ty
16:16:32 davechen: can you describe a use case for me real quick?
16:16:41 sure
16:17:03 This blueprint is actually a partial task of the Graffiti project,
16:17:16 davechen: and you lost me :)
16:17:32 no
16:17:34 I'm also not entirely sure I get graffiti myself
16:17:44 that's another topic though
16:17:51 thingee: +1
16:17:59 spray paint, wall, big letters.
16:18:04 let's focus on a cinder use case as related to this spec
16:18:10 the intention of this BP is to support modifying image metadata in cinder
16:18:11 * jungleboyj is resisting the urge to make a Graffiti joke.
16:18:17 Thanks hemna_
16:18:22 jungleboyj, :P
16:18:29 davechen: yes, but the question is "why"
16:18:44 or at least "my question"
16:18:50 jgriffith: mine too
16:19:10 there is one question in the BP.
16:19:14 davechen: i think the question is, you created a volume from an image, and the volume's image metadata reflects the source. why would you need to change it?
16:19:31 davechen: your problem statement describes what you want to do, but I don't quite see the *problem* or the *why*
16:19:49 davechen: I've glanced through it myself. can you point me to which line?
16:19:50 avishay: correct... and more importantly, should you be allowed to change it
16:19:54 we need to change it as part of the graffiti task.
16:19:57 avishay: Because a volume is mutable. I install a new kernel, maybe it needs different metadata (e.g. the attach method for BfV)
16:20:21 davechen, what is the volume metadata missing at that point that prevents graffiti from working?
16:20:29 DuncanT: so you're making the decision to just cut glance out of the picture then?
16:20:36 DuncanT: not saying that's bad :)
16:20:37 davechen: I understand graffiti needs to change it, but *why*.
16:20:40 LINE81
16:20:54 DuncanT: I would expect the user to do volume->image with the updated metadata
16:21:00 jgriffith: Once glance has created the bootable volume, it *is* out of the picture for that volume
16:21:00 davechen: rest api impact?
16:21:04 avishay: +1
16:21:08 davechen: I'm not sure I'm following
16:21:27 yes, i am not sure whether we need a new API to handle image metadata
16:21:37 Delete API
16:21:39 avishay: Upload it to glance and suck it back down just to change one piece of metadata?
16:22:01 davechen: I'm not sure if use cases are thought out given how this discussion is going. I'm going to defer this topic until there are stated use cases listed on this spec.
16:22:13 and an api is not a use case.
16:22:27 DuncanT: I don't see the necessity
16:22:46 DuncanT: don't see why it needs to be in Glance necessarily either
16:22:51 the nova scheduler needs the metadata information for vm scheduling
16:22:55 jgriffith: Some of the glance metadata affects how nova sets up the volume attachment
16:22:55 DuncanT: and the only thing I see as a use case here is Graffiti
16:22:58 why the heck does nova care the volume came from an image? "Cinder volume_image_metadata is used by nova for things like scheduling and for setting device driver options"
16:23:01 DuncanT: i assume if you have your images in glance, and you make a change to an image that is noteworthy, you would store it in glance again. ideally uploading and sucking it back are just CoW operations :)
16:23:12 DuncanT: sure.... but not that much really
16:23:19 DuncanT: maybe you have a use case to share?
16:23:28 DuncanT: jgriffith: doesn't that seem like the real issue here?
16:23:52 ameade_: let's not side track :)
16:23:58 jgriffith: The only use case I have is a long running bootable volume I upgrade the kernel on, such that a different attach method is preferred
16:24:12 jgriffith: It's pretty niche but is a use case
16:24:21 imo, the spec is basically ready for approval :)
16:24:25 DuncanT: meh... I'm not getting it
16:24:44 To me it's a solution without a real problem
16:24:44 jgriffith: I'll get the exact key names and such and put a comment on the spec
16:25:06 i don't think it's that important, but if it is for some people, i don't think it's that big of a deal to implement and include
16:25:12 jgriffith: You can do it by uploading to glance, editing the metadata there and re-downloading, but that is a faff
16:25:18 I'll copy the problem desc here.
16:25:20 DuncanT: and IIRC KVM at least is not very picky about kernel version
16:25:28 sorrison, on the reverse, what is the downside to allowing updating the metadata?
16:25:30 When creating a bootable volume from an image, the image metadata (properties) is copied into a volume property named volume_image_metadata.
16:25:31 gah
16:25:31 but I'm wasting everybody's time I fear
16:25:31 so
16:25:43 Cinder volume_image_metadata is used by nova for things like scheduling and
16:25:51 for setting device driver options. This information may need to change after
16:25:55 hemna_: breaks things like cached copies of the image
16:26:09 the volume has been created from an image, besides, the additional properties
16:26:23 jgriffith: Why? I don't see how
16:26:35 jgriffith, but the image is untouched. I don't see how that breaks cached images?
16:26:35 jgriffith: The image in glance is still mutable
16:26:39 DuncanT: if I have a cached copy of image FOO on my backend device
16:26:45 DuncanT: you update the metadata
16:26:47 davechen: wouldn't it be better if that information lived in volume metadata?
16:26:54 DuncanT: glance images are immutable
16:26:58 it's a core tenet
16:27:04 Next person that comes along and wants a Volume with the FOO image... I say "oh, got it... here ya go"
16:27:19 jgriffith: The cached copy doesn't get touched at all
16:27:20 ameade_: nova just uses image metadata
16:27:30 DuncanT, yah I don't understand that
16:27:36 thingee gave up
16:27:54 jgriffith: Only the db record associated with one specific live, mutable copy of that image gets changed
16:28:14 DuncanT: so then I really don't get the point :(
16:28:30 DuncanT: anyway... maybe you and davechen can write up a real problem statement
16:28:35 and I'll see the light
16:28:36 :)
16:28:54 jgriffith: The metadata is associated with what is in the image. Once you change the contents of the volume, that data, for that specific volume, can be out of date
16:28:55 I'm just not getting it. As far as downside, maybe there's not one
16:29:07 I'll work on a concrete example
16:29:09 DuncanT: yeah, that's my point
16:29:20 the cached copy never has the kernel update, for example
16:29:26 i want to know if i can get any review comments and fix them asap before spec freeze?
16:29:27 but the image still has the same ID
16:29:34 DuncanT: seems problematic. volumes are going to change. metadata will always be out of date
16:29:35 right?
16:29:38 i'll post some thoughts on the spec
16:29:41 The cached copy isn't having its metadata changed, only the live volume is
16:29:43 davechen, maybe give some concrete examples of what nova needs to change in the metadata that prevents things from working as needed.
16:29:47 It's anti-cloudy if nothing else
16:30:01 DuncanT: yes, but you're missing what I'm saying
16:30:03 i'm not sure what "cached copy" means
16:30:12 User loads an image on the vol.... updates the kernel
16:30:16 updates the metadata
16:30:36 jgriffith: That metadata will get used next boot from that vol
16:30:43 Does that update go anywhere else? Does the metadata change go back up to glance?
16:30:46 hemna_: nova needn't do any changes
16:30:57 jgriffith: no, just on the volume, not in glance
16:30:58 * hemna_ is confused then.
16:31:00 DuncanT: so that metadata change is *isolated* to that particular volume?
16:31:06 DuncanT: ahhh... ok!!!
16:31:08 jgriffith: It will only go back to glance if the user uploads that volume as a new image
16:31:10 I don't get the point of it. if you change the kernel, make a new image, set the metadata... but don't worry about chasing the contents of the volume and keeping the metadata accurate.
16:31:11 Now we're talking!!
16:31:14 jgriffith: Yes
16:31:20 jgriffith: Totally isolated
16:31:31 DuncanT: in that case I don't care :)
16:31:35 jgriffith: Sorry, I thought that was clear
16:31:38 jgriffith: +1
16:31:41 DuncanT: thank you very much for explaining the details
16:31:57 DuncanT: no, based on 5 of us asking about it I don't think it was clear at all
16:32:05 thingee: yes, i guess the user needs to keep the volume contents and the metadata in sync for some cases, which seems odd
16:32:16 jgriffith: NP, glad we could sort out which bits were unclear :-)
16:32:28 davechen: PLEASE put a note in the spec explicitly pointing out that this ONLY affects the individual volume
16:32:42 avishay: It will get outdated. just doesn't seem worth it
16:32:44 jgriffith: sure.
16:32:55 avishay: yeah, I still don't see the point, but now I don't care because it won't break things, I don't think
16:33:01 thingee: The glance metadata stored by cinder gets actively used to make decisions at boot time
16:33:17 jgriffith: +1 - You shouldn't care for the most part
16:33:17 thingee: outdated is OK as long as it doesn't break things, but if nova is relying on it, that's a bit sketchy
16:33:19 jgriffith, DuncanT: any other gaps in the spec?
16:33:21 DuncanT: and it will do it wrong. humans are going to forget to update metadata.
16:33:32 DuncanT: images are more of a point in time, whereas volumes are ever changing
16:33:41 davechen, I still think having a use case in the spec is valuable.
16:33:44 davechen: If you could point out the "decisions" that Duncan is alluding to, that would be great
16:33:48 thingee: This is not for an image though, it is for a volume
16:34:03 davechen: I'm a fan of requiring a real use case proposed and described in specs
16:34:04 jgriffith: I'll find an example, no problem
16:34:20 thanks, sorry to take up so much time on that
16:34:40 thingee: humans forget.... whaaaat you say?
16:34:42 :)
16:34:52 :-)
16:34:53 jgriffith: I'll define some use cases in the spec.
16:35:01 DuncanT: I think you would have better luck having a new volume from an image if you want to rely on that metadata.
16:35:05 davechen, +1
16:35:31 thingee, this is metadata on a volume, not an image
16:35:43 thingee: So you have to create an image, edit the metadata in glance, then suck the image back down just to get new metadata associated with the volume... that sucks
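
A minimal sketch of the behaviour clarified above: image properties are copied onto the volume at create-from-image time, and the proposed update only touches that per-volume copy, never the glance image. The structures and property values here are illustrative assumptions, not the spec's actual API:

    # Hypothetical data for illustration only.
    glance_image = {
        'id': 'image-foo',
        'properties': {'kernel_id': 'aki-old', 'hw_disk_bus': 'virtio'},
    }

    def create_volume_from_image(image):
        # Each volume gets its own copy of the image properties.
        return {'volume_image_metadata': dict(image['properties'])}

    vol = create_volume_from_image(glance_image)

    # After upgrading the kernel inside the volume, update only this volume's copy...
    vol['volume_image_metadata']['kernel_id'] = 'aki-new'

    # ...the glance image (and any backend-cached copy of it) is untouched.
    assert glance_image['properties']['kernel_id'] == 'aki-old'
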
16:35:45 hemna_: create a volume from an image is my point
16:35:55 hemna_: is where you would have more luck with that metadata being accurate
16:36:07 the contents of the volume haven't changed as much, because it's a new volume from an image.
16:36:30 thingee: I think you're on the same concept I was stuck on
16:36:32 DuncanT: better than having false positives with a 6 month old volume saying it has kernel X
16:36:38 yah, this isn't metadata on the image
16:36:43 thingee: this is for the volume only
16:36:47 this is metadata about the image on a volume.
16:36:49 jgriffith: +1
16:36:52 thingee: NO change to the image at all
16:37:07 thingee: just lets you do things like update the boot info for that particular volume that you might modify
16:37:12 i see what thingee is saying, and i don't think he's confused :)
16:37:17 thingee: so you can start with a base and build off of it
16:37:26 so why do we call it volume IMAGE metadata
16:37:28 avishay: thingee: Oh, sorry
16:37:36 in that case I'll shut my mouth and read again
16:37:36 like i'm trying to say, just shove this info in volume metadata
16:37:42 ameade_, I think it's data about the image where the volume came from?
16:37:43 i posted on the spec
16:37:58 ameade_: you made a good point
16:38:00 hemna_: yeah, which doesn't make sense to change
16:38:09 ameade_: Historical reasons
16:38:12 I'll review the spec again once things are updated, but it sounds like this is error prone. I understand this is metadata with the volume. *that's* the problem
16:38:16 if you store "golden images" in glance, and you made a significant change in one (by way of a volume), i would expect the user to put it in glance for themselves/others to use
16:38:26 ameade_: to me, the only missing point is the ability for nova to consume volume metadata.
16:38:34 winston-d: +1
16:38:46 winston-d: nova does consume it
16:38:51 jgriffith: in a case like TRIM support
16:38:55 davechen: anything else?
16:39:08 winston-d: hmmm...
16:39:16 TRIM is different IMO
16:39:20 #topic Posix backup driver replaces NFS backup driver
16:39:28 thingee: yes, i really cannot catch your idea.
16:39:39 winston-d: but why would you do that instead of just letting file systems be file systems and do their thing?
16:39:40 nfs backup driver:
16:39:43 #link https://review.openstack.org/#/c/138234/1
16:39:56 posix backup driver:
16:39:58 #link https://review.openstack.org/#/c/82996/
16:40:06 davechen: likewise
16:40:23 is tbarron here?
16:40:35 kevin fox has proposed that the nfs backup driver be replaced with the posix backup driver
16:40:35 #link https://review.openstack.org/#/c/130858/
16:40:42 jgriffith: to let the kernel/file system do their thing, you have to present a storage controller that appears to the kernel to support TRIM.
16:41:05 is kevin fox here?
16:41:17 bswartz: Nope
16:41:22 I thought tbarron and kevin fox were talking between themselves about these 2 options
16:41:33 seems neither are here
16:41:37 jgriffith: and nova relies on image metadata (only) to decide, which is sad for BfV.
16:42:06 winston-d: Nova uses the cinder copy though, it doesn't pull it from glance for BfV
16:42:10 I agree with DuncanT that the posix backup driver should be using some code that's already in swift. it's just duplicating code.
16:42:12 we believe the NFS backup driver is better, and possibly could be generalized to support other (posix) use cases
16:42:32 winston-d: answered in #cinder so as not to interrupt here :)
16:42:35 winston-d: It used to, but we fixed it because you can delete an image from glance and that used to break BfV
16:42:36 bswartz1: it's better if you support dedup, right?
16:42:56 bswartz1: I think that's what kevin was getting at
16:43:15 there were a few things we didn't like about the posix driver -- several options would need to be added for it to be acceptable for us
16:43:21 I thought tbarron had worked all that out though
16:43:38 since neither are here to argue their case, I suggest we revisit it next week
16:43:53 sounds fine to me. I see this pushed to k-2 now anyways
16:43:58 I'd really like to see the de-dupe done before merging this driver
16:43:59 bswartz1: +1, and the conversation on the review isn't complete
16:44:07 is the posix driver supposed to be the choice for all filesystem drivers, like nfs, netapp, gluster, gpfs, etc?
16:44:18 avishay: yes
16:44:24 avishay: that was the theory -- but the implementation has limitations we don't like
16:44:33 not all filesystems are created equal
16:44:37 but let's shelve this
16:44:46 bswartz1: I'm interested, because it doesn't make sense to me right now. :)
16:44:47 won't you end up with one driver with 1000 options in that case?
16:44:58 bswartz1: you keep saying "we didn't like" "didn't work for us"
16:45:21 jgriffith: I'm summarizing the discussion -- the details should be in the spec review
16:45:49 bswartz1: I read the spec review, it didn't make sense
16:45:59 oy
16:46:13 akerr: i can't imagine 1000 options for "take this file, put it there"
16:46:33 one thing in particular that causes problems is that kevin's driver gzips everything
16:46:34 will the posix driver mount an nfs share for me?
16:46:39 that makes it very hard to do incrementals
16:46:43 each filesystem type can have its own optimizations
16:46:52 conf.gzip_everything = False
16:46:56 Ok, I agree we can defer this, but it isn't very productive to keep discussions elsewhere if we're all going to make a decision on these drivers.
16:46:57 avishay: +1
16:47:19 avishay: those are the kinds of changes we would want to see if kevin's driver is the one we use
16:47:21 #topic RemoteFS snapshots
16:47:22 kaisers: hello
16:47:30 thingee: thanks
16:47:40 #link https://blueprints.launchpad.net/cinder/+spec/remotefs-snaps
16:47:43 This is about refactoring the remotefs driver
16:47:54 it was started some time ago (see blueprint)
16:48:22 and due to our work on the quobyte driver it has become relevant now, as we have some code duplication with other remotefs-derived drivers
16:48:24 yea i've been asking for this since havana :)
16:48:47 i just wanted to announce that we want to start on this in the next few weeks
16:48:54 and see who is interested in this
16:48:55 kaisers: quobyte doesn't support snapshots i assume?
16:49:10 trying to bash tempest into accepting the code right now ;-)
16:49:19 start on what exactly?
16:49:44 moving code from quobyte.py, glusterfs.py, etc. to remotefs.py
16:49:55 e.g. online snapshots, lock wrappers, etc.
16:50:11 i would start by creating a blueprint / spec on what to change
16:50:19 locks are bad
16:50:38 we already have a list of proposed changes from the duplications we saw in our code
16:50:54 kaisers: is that list on an etherpad or something?
16:50:59 i just didn't want to start this without having everybody know
16:51:10 avishay: Now you're talking!!!!!
16:51:14 i'll be working on snapshots for the NFS driver (https://review.openstack.org/#/c/133074/) and presumably looking at similar issues... maybe we should come up with a more specific list
16:51:17 currently not, i intended to add this to the spec
16:51:31 eharney: Yep
16:51:49 kaisers: a spec, even better :)
16:51:51 kaisers: sounds good to me, and thanks for starting this
16:52:00 as soon as I complete the online snapshots with the quobyte driver (one test fails) i wanted to start this
16:52:33 kaisers: makes sense. can people just ping you on #openstack-cinder if they want to help out?
16:52:41 absolutely, yes
16:52:45 great
16:52:50 yes, i'd like to make sure we don't duplicate effort
16:52:51 kaisers: anything else?
16:52:54 sounds good though
16:52:58 nope, that's it
16:53:18 #topic Open discussion
16:53:32 7 mins left
16:54:22 There's the let-nova-put-volumes-in-error spec that I still don't understand
16:54:36 DuncanT: link?
16:54:41 DuncanT: is that ours?
16:54:43 If anybody can explain why you'd ever want nova to do that, I'd appreciate it
16:54:48 xyang1: I think so, yeah
16:55:01 currently nova puts the volume status back to available
16:55:17 but the backend already attached the volume
16:55:21 they are out of sync
16:55:25 so that is not right either
16:55:49 if the volume is 'error' instead of 'available', it at least tells the admin something is wrong
16:55:52 that is the purpose
16:56:04 xyang1: doesn't nova call terminate_connection and allow the driver to reset?
16:56:08 xyang1: error means the tenant can now do nothing with their volume
16:56:16 avishay: not in this case
16:56:28 xyang1: It should just be terminate_connection
16:56:31 when initialize_connection times out, it didn't
16:56:35 but we will be adding that
16:56:47 even so, it could still fail
16:56:49 xyang1: If there's a missing call to terminate then fixing that is sensible
16:57:04 if it still fails, error is more consistent with the status in the backend
16:57:21 But nova shouldn't get to make that decision
16:57:26 DuncanT: yes, we are adding a terminate_connection call
16:57:51 but if that also fails, the status will still be 'available' in cinder while in the backend it is attached
16:58:26 So when cinder transitions from attaching back to available, it should tell the driver to terminate
16:58:27 xyang1: why doesn't the backend clean up on terminate_connection?
16:58:38 DuncanT: currently it will be changed from 'attaching' to 'available'. that is not right either
16:59:03 avishay: that would seem better... clean up; if you can't, fail and put the volume in error
16:59:04 avishay: if terminate_connection still times out, then we are still out of sync
16:59:09 xyang1: It should be made available on timeout. If we need to call the driver terminate at that point then we can
16:59:20 jgriffith: +1
16:59:29 xyang1: ah... and round and round it goes
16:59:49 xyang1: get a faster API :)
16:59:58 quit timing out on everything
17:00:02 you have the cinder DB mirroring your backend's metadata, obviously things will go out of sync
17:00:02 basically if terminate_connection still can't fix it, set it to error
17:00:08 jgriffith: :)
17:00:11 thanks everyone
17:00:14 #endmeeting
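
For reference, a minimal sketch of the attach-failure cleanup flow debated in the open discussion above. The helper name and the error-versus-available choice are illustrative assumptions; the meeting did not settle which final status is right:

    def handle_attach_timeout(volume, driver):
        # initialize_connection timed out; try to undo it before resetting status.
        try:
            driver.terminate_connection(volume)
        except Exception:
            # The backend's state is unknown; flagging 'error' (xyang1's proposal)
            # makes the mismatch visible to the admin, at the cost of blocking the tenant.
            volume['status'] = 'error'
            return
        # Cleanup succeeded, so cinder and the backend agree again.
        volume['status'] = 'available'
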