18:04:16 #startmeeting
18:04:17 Meeting started Thu Mar 29 18:04:16 2012 UTC. The chair is jdg. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:04:18 Useful Commands: #action #agreed #help #info #idea #link #topic.
18:04:25 Anybody here?
18:04:46 Yup
18:05:09 Hey Duncan...
18:05:15 Likewise, here...
18:05:28 Ok, great... have a few folks
18:05:49 #topic Boot From Volume
18:06:18 So there was pretty limited feedback from the email JDurgin sent out last week.
18:06:35 That's good and bad I suppose; it means less controversy
18:07:03 Could you forward to esker@netapp.com... I don't seem to have the email in question.
18:07:09 I propose we polish up his use cases, get something a little more concrete, and send it out
18:07:12 esker: sure
18:07:15 thanks
18:07:27 I didn't see said email either
18:07:33 duncan.thomas@hp.com
18:08:04 Anyone else before I hit send?
18:08:19 Going once, twice....
18:08:20 Sent
18:09:05 It seems the list established for communication on this topic isn't used so much.
18:09:06 openstack-volume@lists.launchpad.net
18:09:37 esker: Good point, I think folks stopped using the sub-lists
18:09:57 We discussed at the last meeting doing the subset first before the onslaught from the overall community
18:10:20 I propose we clean it up over the next day or two and send it to the entire OpenStack list
18:11:14 Particularly input from DuncanT and esker would be good. I plan to add some detail to it later today or tomorrow morning
18:11:17 I certainly see no problems with John's use cases
18:11:56 I think there are more that could be added, but at least this will be a starting point
18:11:57 I've a couple more possibilities to add
18:12:01 :)
18:12:32 DuncanT: Good... do you want to go through them here, or add them and send via email?
18:13:18 Probably via email is easiest; I don't think they are either contentious or incompatible with John's, they are mostly just fleshed-out life-cycles
18:13:36 Ok, sounds good...
18:13:54 So is there anything we need/want to talk about on this topic right now?
18:14:31 Ok... I wanted to talk about the summit real quick
18:14:39 #topic Folsom Summit
18:14:48 Any folks here planning to attend?
18:15:04 I think there's a case to be made for Glance modification so that references to "persistent" bootable volumes are understood... and to avoid copying out from Glance when something might already be available
18:15:06 I'd like to see some proposals for the Nova-Volume tracks
18:15:20 TimR and I are planning to attend
18:15:44 So... go to Glance to get image RHEL whatever w/ appstack whatever... Glance says... well, there's something like that over here already.
18:16:03 I'll broach that via email.
18:16:07 esker: Sounds good, write some ideas up and send it
18:16:08 esker: We've been debating that internally quite a lot... interested in hearing other views before I detail my thoughts
18:16:23 Do we want to talk more here?
18:17:39 I'd like to hear the case for making Glance aware of bootable volumes
18:17:46 Don't mind if it is here or via email
18:17:53 esker: DuncanT: are you proposing using a "nova volume" as backend storage for Glance as well as primary storage for instances?
18:18:08 Essentially
18:18:18 That's a use case I've thought of and would like to make work as well
18:18:34 I have proposed that, but then realised it isn't actually necessary
18:18:45 DuncanT: why is that?
18:19:18 There are advantages to having Glance images and instances on the same device
18:19:39 We can (in our case) use the fact that a Glance image has a persistent and never-reused ID to cache the Glance image on first use, transparently to Glance and without needing to modify Glance
18:19:42 Doesn't have to be the same volume of course, but the same device
18:20:26 I'm not necessarily against making Glance volume-aware, just that it isn't necessarily necessary
18:20:30 So you're just caching the image on your device regardless?
18:20:38 That's great for you :)
18:21:11 jdg: Not caching yet, but realised we could
18:21:26 Hence being interested in other points of view
18:21:51 DuncanT: cool... from my perspective I'm looking at a similar use case
18:22:23 The difference is I need to be able to use the same device/appliance
18:22:31 Or at least that's what I want
18:22:40 Different volumes on the same device
18:23:22 DuncanT... is the scheme you described predicated on using something other than an object store to back Glance?
18:23:22 TBH I don't know much about Glance yet and what the possibilities are for configuring backend storage
18:24:09 esker: No, I don't believe so
18:24:43 DuncanT: If I understand correctly, you're saying you'll just cache the image when you create an instance....
18:24:53 Then you don't care where it came from, correct?
18:24:59 Correct
18:25:11 You'll have it in cache based on UUID, so any time it gets used again you just pull it from your cache
18:25:21 Spot on
18:25:43 We actually use COW layers so we don't even need to copy, but that is an optimisation
18:25:43 What does that require in terms of driver/extension work?
18:25:51 Ahhh
18:26:45 It requires your volume backend to have some sort of table of image ID to cached-copy mapping, and probably a way of aging out cached copies that haven't been referenced for some time
18:27:34 So you'd still need some sort of tie-in to override pulling the image from Glance and using your cached copy instead, correct?
18:27:59 That's what I'm thinking through
18:28:06 There's a bit of messing about needed to get nova-compute to let you know it is copying down an image, which I haven't fully figured out yet, but it looks like you can make that layer pluggable/overridable and default to the current behaviour
18:28:32 How do you prevent new instances querying Glance and getting back a Glance response... instead of an "I'm cached here" response?
18:28:42 ah, okay
18:29:30 DuncanT: I like the idea, I'm trying to figure out how to generalize it
18:29:47 So it might look something like this:
18:30:06 The volume driver implementation has something like a cache volume on it
18:30:27 There is already code in nova-compute that knows when an image is cached for ephemeral (local) volumes
18:30:37 Any time an instance is pulled from Glance and created on the device, it stores the Glance image in that cache volume
18:30:52 DuncanT: Oh...
18:31:42 I'm still torn though, I'd like to have the ability to specify block storage for Glance back-end storage
18:31:55 I think both cases are good/useful
18:32:18 hi guys, sorry I'm late
18:32:24 What does doing it explicitly gain you?
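
[Editor's note: to make the "table of image ID to cached-copy mapping" idea discussed above concrete, here is a minimal, purely illustrative Python sketch. None of these class or method names come from the code under discussion; they are hypothetical. It shows only the bookkeeping: key cached golden copies by the never-reused Glance image ID, and age out copies that haven't been referenced for a while.]

    import time


    class ImageCache(object):
        """Hypothetical image-ID -> cached-volume map kept by a volume backend.

        Relies on the property noted above: a Glance image ID is persistent
        and never reused, so a cached copy keyed by that ID stays valid
        until it is deliberately aged out.
        """

        def __init__(self, max_idle_seconds=7 * 24 * 3600):
            self.max_idle_seconds = max_idle_seconds
            # image_id -> (backend volume name, last time it was referenced)
            self._entries = {}

        def lookup(self, image_id):
            """Return the cached backend volume for image_id, or None on a miss."""
            entry = self._entries.get(image_id)
            if entry is None:
                return None
            volume_name, _ = entry
            # Refresh the last-used timestamp so active images are not aged out.
            self._entries[image_id] = (volume_name, time.time())
            return volume_name

        def add(self, image_id, volume_name):
            """Record a freshly downloaded golden copy of the image."""
            self._entries[image_id] = (volume_name, time.time())

        def age_out(self, delete_volume):
            """Drop cached copies that have not been referenced recently."""
            now = time.time()
            for image_id, (volume_name, last_used) in list(self._entries.items()):
                if now - last_used > self.max_idle_seconds:
                    delete_volume(volume_name)  # backend-specific cleanup hook
                    del self._entries[image_id]
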
18:32:45 DuncanT: One copy of the Glance image instead of two
18:33:02 Yup, ok, I can buy that
18:33:16 We start getting into vendor-specific behaviors so I don't want to push too hard
18:33:34 DuncanT: having the same backing store for Glance as for volumes allows you to do CoW too
18:33:34 But in my case then you take advantage of things like dedupe, internal copy capabilities etc.
18:34:03 I think as long as using block store as a Glance backend doesn't stop you using it as a normal Glance store (i.e. still supports pull over http and similar) it's a fine idea
18:34:34 DuncanT: Yes, I would not propose taking away capabilities, just adding
18:35:14 I need to talk with Glance folks to get an idea of what's possible and how much investment it would take
18:35:29 Who's PTL on Glance?
18:35:30 we wrote a Glance backend for rbd in September
18:35:32 Certainly I'd be interested in hearing the answers
18:35:36 JayPipes
18:35:37 it was pretty simple
18:35:54 PTL is actually Brian Waldon now
18:36:47 jdurgin: Thanks, forgot about the last election
18:37:44 jdurgin: Did you submit something in the core Glance code or is it a custom deal?
18:37:59 no, it was upstream as soon as I wrote it
18:38:19 you might be able to use Glance's existing filesystem backend though, not sure what level of customization you'd need
18:38:46 jdurgin: thanks, think I stumbled across it
18:40:48 jdurgin: So this uses some Ceph components to implement an object store on top of a block device?
18:41:26 Or am I missing something
18:41:49 yeah, rbd stripes objects across Ceph's object storage layer, RADOS
18:42:21 Did you notice any significant performance hits or anything?
18:43:13 I haven't benchmarked other Glance backends really
18:43:37 jdurgin: Ok, just curious... probably irrelevant
18:44:03 So that's good, it sounds like my use case is pretty much covered, so that leaves the one DuncanT proposed
18:44:47 esker: DuncanT: do you agree?
18:44:56 If you're using a block store backend, don't you still need to make some changes to stop nova-compute pulling the copy over http anyway? Or am I missing something?
18:45:34 DuncanT: yeah, that part still needs work
18:45:38 DuncanT: I would assume yes
18:45:55 but I have to run now, see you guys later
18:46:02 wouldn't the modification consist of: don't pull anything, boot from volume
18:46:13 As long as that is pluggable, I think my use case just falls out as a subset of yours
18:46:35 DuncanT: That's what I'm thinking
18:46:50 esker: No, you can't boot from the Glance side
18:46:59 esker: I assume there would be some call out so that the block storage system can make the volume ready to be used? You don't want to mess up your golden cached copy
18:47:13 yep... which would clone from golden
18:47:21 using CoW or other pointer tricks
18:48:22 esker: so I think we're all on the same page. As DuncanT said, if it's pluggable we just do the transfer internally via our driver
18:48:42 Whether that be using CoW or whatever... shouldn't necessarily matter
18:48:48 right
18:48:56 that's up to you...
18:49:35 So if we spin out block storage as a separate project/API we can tie all of this together rather neatly I believe
18:50:06 Of course we could do it either way, but I like clean separation :)
18:50:35 It would be nice if 'local' volumes in nova were provided by the block store service too, it would mean much of this code was common...
18:50:49 DuncanT: My point exactly
18:51:10 So is that topic in scope for today's meeting? The spinout?
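
[Editor's note: the exchange above sketches a control flow: a pluggable fetch layer in nova-compute checks the backend's cache, downloads from Glance only on a miss, and never boots the golden copy directly, instead handing the instance a CoW clone. A minimal illustrative Python sketch of that flow follows. Every object and method name here is hypothetical (cache, backend, glance_client, clone_volume, etc.); it reuses the ImageCache sketch from the earlier editor's note and is not the actual plugin interface, which had not been designed at the time of this meeting.]

    import uuid


    def create_bootable_volume(cache, backend, glance_client, image_id, size_gb):
        """Illustrative boot-from-volume path, assuming a pluggable fetch layer.

        Reuse a cached golden copy of the image and CoW-clone it rather than
        re-downloading the image over HTTP for every new instance.
        """
        golden = cache.lookup(image_id)
        if golden is None:
            # Cache miss: download once from Glance and keep a golden copy.
            golden = backend.create_volume('golden-%s' % image_id, size_gb)
            image_data = glance_client.get_image_data(image_id)  # assumed helper
            backend.write_image(golden, image_data)
            cache.add(image_id, golden)

        # The instance never boots from the golden copy itself; it gets a
        # cheap copy-on-write clone so the cached image stays pristine.
        clone_name = 'boot-%s' % uuid.uuid4()
        return backend.clone_volume(golden, clone_name, size_gb)
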
18:51:45 I've been avoiding that topic because I know Vish has some ideas already... but we can surely talk about it if folks are interested
18:51:56 We're going to run out of time I'm afraid
18:52:03 I think we're starting to get a feel for what we want out of the spinout
18:52:04 Oh... well no point in colliding w/ Vish on this. Perhaps we can invite him next week to discuss?
18:52:17 DuncanT: Yes, I agree
18:52:28 I was hoping to start tackling the spin-out idea during the summit
18:52:47 Which brought up the topic of "Folsom Summit"
18:53:13 I was hoping there'd be some poking at it in the meetings before the summit so people aren't so much thinking on their feet... I suspect it will be a long discussion at the summit
18:53:46 DuncanT: Yes, I agree... so how about this:
18:53:58 Next meeting we start the conversation?
18:54:12 Get some sort of foundation started so we can be effective at the Summit?
18:54:35 Seems sensible to me
18:54:44 I think there's plenty to talk about regarding use cases and high-level architecture without getting stuck if somebody has something in progress already
18:55:20 There'll also be a BFV blueprint kicking round next week from us, and that ends up tying into the volume-as-a-service in several ways
18:55:39 DuncanT: Sounds great, I'll keep my eyes open for it
18:56:08 I'll also clean up JDurgin's email and add what we talked about today as a separate section
18:56:47 As far as the Summit goes, I'd still like some input if folks have it. Otherwise I'll propose sessions for spin-out, boot from volume/snapshot etc.
18:57:14 We may be past the point of brainstorming, so they'd be presentation or workshop slots
18:57:37 I don't mind who proposes sessions, as long as they happen ;-)
18:57:48 :)
18:58:14 Ok, just want to make sure I'm not the only one sitting in the room and that I'm not missing topics people want to discuss :)
18:58:56 Alright, well, if nobody has anything else for now?
18:59:03 I'll check with the rest of my team here and make sure we have a list of what we'd like sessions on... there is a lot of overlap between things that need discussing
18:59:27 DuncanT: Excellent, you can either put it on the website or send it to me directly
18:59:43 Will do
19:00:10 Alright, well, we're out of time. Thanks for showing up guys!!
19:00:30 #end meeting
19:01:08 #endmeeting