20:10:20 #startmeeting glance
20:10:21 Meeting started Thu Mar 20 20:10:20 2014 UTC and is due to finish in 60 minutes. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:10:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:10:24 The meeting name has been set to 'glance'
20:10:25 lol
20:10:26 :-)
20:10:29 Sorry folks, I accidentally started the meeting in the wrong room
20:10:30 o/
20:10:31 man how did we all miss that?
20:10:44 hello :)
20:10:46 quick roll call for the eavesdrop logs
20:10:47 I usually monitor both
20:10:50 o/
20:10:55 \o
20:11:21 okay great
20:11:47 So, as I mentioned in the other room, let's quickly form an agenda for today
20:12:17 markwash: so i think i am going to start pushing for the 'brick' library to get out of cinder and then help zhiyan with the cinder store code at some point
20:12:20 *) Glance V2 state and future
20:12:48 Nova new blueprint process: applicable for glance?
20:13:11 other items before we get started?
20:13:37 (items to add to the agenda)
20:13:49 okay! I'll ask again when we finish
20:13:56 ok
20:14:09 if you don't mind, let's learn about the blueprint stuff first
20:14:19 #topic new nova blueprint process (arnaud__)
20:14:35 arnaud__: floor is yours, please inform us
20:14:39 so the idea is to use gerrit
20:14:52 Russell sent an email a few hours ago
20:15:02 https://wiki.openstack.org/wiki/Blueprints#Nova
20:15:23 http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst
20:16:19 interesting idea
20:16:31 wow, I like the last link
20:16:38 should we take a look at the ML and other resources and revisit it next meeting?
20:16:44 sounds good
20:17:01 I think we all know there is a lot left to be desired in the current blueprint process, even with my attempt at a polish-up
20:17:13 +1
20:17:20 my feeling is that I like the idea, but I think having one process for one OpenStack project and a different one for all the others is a bit weird
20:17:57 let's continue this next week then
20:18:06 arnaud__: thanks for bringing it to our attention!
20:18:17 #topic brick driver (ameade)
20:18:37 ameade: floor is yours if you can elaborate a bit on your update
20:18:56 so basically, this bp has been around for a while: https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
20:19:08 it seems the only blocker for its completion is the Brick library
20:19:19 which is the attach/detach code currently in cinder
20:19:46 i just want to make sure we are good with the plan before i push this as a use case for brick
20:20:01 ameade: there is a little issue that I'm worried about there, maybe we can help resolve it to clarify this blueprint
20:20:10 cinder will only store raw images, right?
20:20:50 markwash: actually it will not care about the bits and will act solely as a store
20:21:02 ah
20:21:06 for image upload it would create a volume, attach to the volume, upload the image bits
20:21:31 ameade: iirc, i think there are three steps to make cinder-store normal: 1. the brick lib, 2. multi-attaching, 3. store enhancements (add add(), delete(), etc.)
20:21:33 if I upload a qcow2 to glance, and we want to use the cinder driver, what does it do?
20:21:34 i do think however that if someone did upload a 'raw' image it could then lead to more enhancements such as no-copy boot from volume
20:22:12 markwash: it would just be a cinder volume with the qcow2 bits in it
20:22:17 ameade: do you know what is done/needs to be done for the brick library?
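[A minimal sketch of the upload flow ameade describes at 20:21:06: the store creates a volume, attaches to it, and writes the image bits. This is illustrative only, not real glance or brick code; 'cinder' is assumed to be a python-cinderclient handle, and brick_attach/brick_detach are hypothetical stand-ins for the attach/detach code that at this point still lived inside cinder.]

    # Sketch only: write an image's bits into a fresh cinder volume.
    def add(image_id, data_iter, size_bytes, cinder, brick_attach, brick_detach):
        # cinder sizes volumes in whole GB, so round the image size up
        size_gb = max(1, -(-size_bytes // 1024 ** 3))
        volume = cinder.volumes.create(size=size_gb, display_name=image_id)

        device = brick_attach(volume.id)      # hypothetical: returns e.g. '/dev/sdX'
        try:
            with open(device, 'wb') as dev:   # stream the bits onto the block device
                for chunk in data_iter:
                    dev.write(chunk)
        finally:
            brick_detach(volume.id)           # always detach, even on failure

        return 'cinder://%s' % volume.id      # the location glance would record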
20:22:27 zhiyan: +1, multi-attach for brick is also a blocker
20:23:33 arnaud__: I don't know all the details but i think it boils down to talking with jgriffith (cinder ptl), ripping the code out, and supporting multiple attaching to volumes
20:23:42 ok
20:23:51 ameade: and as we discussed, if we want to go the "second" way, I think there's no enhancement needed on the glance side.
20:24:21 ameade: I guess I don't see why we would want to store qcow2 bits in a cinder volume exactly. . since then I guess we wouldn't be able to attach or boot the cinder volume
20:24:23 zhiyan: correct, the 'second' way being nova using brick directly
20:24:46 markwash: it just wouldn't be a bootable volume at that point
20:25:00 I guess what I'm getting at is that cinder seems to depend on supporting multiformat images
20:25:00 but it would open up storing images on a ton of block storage backends
20:25:19 ameade: hmm, actually on the nova side, even without brick, we could leverage the existing volumes.py directly; that's the existing volume attachment code
20:26:23 markwash: why is that exactly?
20:26:44 it just seems uneven if we make a store that works specially with the raw image format
20:27:22 especially since we might like to store the same image in qcow2 on swift and in raw on cinder
20:27:39 i'm a little confused, i think there's no limitation on image format. a volume can hold any image bits
20:27:44 so I'm just wondering if it makes more sense to actually pull out the cinder driver and then figure out how to do it over with multiformat images
20:28:06 zhiyan: but your goal is zero copy boot, right?
20:28:55 markwash: that's another topic; for example, with image-handler support, we can do zero-copy provisioning with glance-vmware-store
20:29:17 markwash: but at the same time, glance-vmware-store can support the original provisioning approach: downloading
20:30:11 it feels a bit of a special case to have client-side logic that says "if location type = X and format = Y: do something amazing; else: do something normal"
20:30:17 a bit *too* special
20:30:17 markwash: IMO, iiuc, there's no such assumption at the store level
20:31:17 markwash: i think even if we ignore the super awesome special case there is still value in storing images in cinder
20:31:19 markwash: even if we put this logic on the client, i think it is just "if location type = X" (no format = Y)
20:31:29 ameade: +1
20:31:48 there's no direct relationship, iiuc
20:31:50 ameade: but in that case, why just images? shouldn't we work on making cinder drivers work as backends to swift somehow?
20:32:16 swift is object storage, not *block* storage
20:32:22 I actually have a question from our team about ceph
20:32:40 They were trying to add ceph support to nova to boot from ceph storage
20:32:51 zhiyan: right, but the store is an abstraction over object storage, not block storage
20:32:56 a qcow2 is an object, not a block
20:33:01 but now they have to rebase their changes together with nova on the Glance v2 api
20:33:04 i think the original idea of the cinder store is to make glance support different block storage backends through a single store
20:33:33 qcow2 is a block device image
20:33:51 right, so if I read a qcow2 file, I can produce a block device abstraction from it
20:33:54 but the file is a file is an object
20:34:29 interesting idea
20:34:32 I can't encode a block Foo as qcow2, upload it to cinder, and then use it as the block Foo. . I can only use it to read the file Foo.qcow2
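[To make the dispatch debated above concrete: markwash's "if location type = X and format = Y" objection at 20:30:11, versus zhiyan's "no format = Y" variant at 20:31:19. The sketch below is hypothetical client code, not real nova code; the dict keys mirror a v2 image record, everything else is made up.]

    # Hypothetical client-side special case. Note that disk_format hangs off the
    # image itself, not off a location, which is what makes the format test awkward.
    def choose_boot_path(image):
        location = image['locations'][0]['url']   # e.g. 'cinder://<volume-id>'
        if location.startswith('cinder://') and image['disk_format'] == 'raw':
            return 'zero-copy boot from volume'   # "do something amazing"
        return 'download bits and boot'           # "do something normal"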
20:36:15 to me it's fine to allow raw images in the glance cinder store
20:36:27 and that's consistent with what we were trying to achieve with the cinder store
20:36:36 but storing any other format is not appropriate
20:36:50 because it doesn't fit the expectations of being a block device
20:37:00 i can see where you are coming from
20:37:10 it's kind of a hack
20:37:38 if we made something like "cinder_location" that wasn't a store location, then we could accomplish this hack a bit more cleanly
20:38:06 we can store that qcow2 *file* in s3/swift object storage *or* a file-based store, like filesystem
20:38:07 all we really want is metadata to the client that says "this image is already stored as a block device"
20:39:09 some of the stores glance currently supports are object stores, and some are block stores (besides cinder), am i right?
20:39:37 markwash: iiuc, shouldn't this be covered by multi-location?
20:40:16 zhiyan: I don't think it should really be covered by multi-location either, because we need to distinguish between replica locations that are different just for network performance, and alternative formats that are available
20:40:32 and this is partly represented by the fact that currently the format is an attribute of the image, not the location
20:40:48 and there is not a backwards compatible migration path to move format from the image to the location
20:41:38 markwash: ok, let's say multi-location is for the same-format image case.
20:41:41 so for v2, i think we should add something like "volume location" as metadata and just put cinder or other use cases through that metadata
20:42:27 would that idea work and help unblock the core use cases of zero copy boot?
20:42:42 ameade: though I guess you have some other use cases like being able to use netapp as your backing store, right?
20:43:30 markwash: hmm.. i believe the original cinder-store idea is to use a single store to enable more block storage backends for glance; in short, it leverages cinder capabilities for glance and de-duplicates store driver implementations
20:43:43 markwash: yeah, i think maybe we should get back to this next week
20:44:20 ameade: yeah, I'd love to sit down with you sometime before then to make sure I understand and properly appreciate your goals, I don't want to just be negative like this :-)
20:44:59 it would be cool if someone could let me know the whole picture
20:44:59 markwash: yeah definitely, it's good to think through these concerns :)
20:45:26 zhiyan: that's interesting, maybe we can separate that use case out from the zero-copy use case and present things a bit differently
20:45:54 but perhaps we should move on
20:46:08 zhiyan: would you like a chance to meet about this next week as well?
20:46:21 yes!
20:46:59 and i'd like to know the whole use case before the meeting discussion
20:47:12 okay, let's follow up
20:47:15 next topic for today
20:47:23 #topic Glance V2 state and future (gokrokve)
20:47:31 Yes.
20:47:34 care to introduce it, gokrokve?
20:48:11 I just want to understand: will it be changed significantly in Juno?
20:48:26 I am aware of one change related to artifacts.
20:48:42 But as I see there are discussions about locations support.
20:49:01 gokrokve: plans are still a bit unclear I suppose
20:49:06 We have a team who tried to add ceph support to nova to boot directly from a ceph volume.
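[As background for the locations discussion (markwash's point at 20:40:16-20:40:48 above, and the "locations support" question at 20:48:42): a small illustration of the data-model tension. Both records below are hypothetical, not actual v2 schemas.]

    # Today: disk_format is an attribute of the image, and the locations list
    # is assumed to hold byte-identical replicas, differing only in proximity.
    image_today = {
        'id': 'abc',
        'disk_format': 'qcow2',                 # one format for the whole image
        'locations': [
            {'url': 'swift://container/abc'},
            {'url': 'http://mirror/abc'},
        ],
    }

    # What the cinder use case would want: the same image stored as qcow2 on
    # swift and as raw on cinder, which needs a per-location format and has no
    # backwards-compatible migration path from the schema above.
    image_wanted = {
        'id': 'abc',
        'locations': [
            {'url': 'swift://container/abc', 'metadata': {'format': 'qcow2'}},
            {'url': 'cinder://vol-1', 'metadata': {'format': 'raw'}},
        ],
    }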
20:49:31 taking a look at https://blueprints.launchpad.net/glance/future I think I see mostly tasks stuff in addition to the artifacts
20:49:45 They asked me whether they can rebase their patches on the current Glance v2 and do the nova migration to Glance v2, or whether they should wait for v2 to be stable.
20:50:17 gokrokve: do you plan to use the image handler?
20:50:20 gokrokve: I think glance v2 core functionality will not change, and we are committed to backwards compatibility
20:51:01 gokrokve: i'm not sure we are doing similar things for zero-copy provisioning for the ceph backend.
20:51:02 Heh, I got this question 30 minutes ago. I have no idea how it works. Probably with the image handler.
20:51:13 gokrokve: I think from the nova perspective it becomes high priority to support v1 and v2 (back to esheffie1d's bp)
20:51:30 They need to support nova locations and Glance v2.
20:52:17 gokrokve: actually there already is an rbd image handler, but yes arnaud__, i'm working on v2 stuff with esheffie1d (and the next step is tempest cases for that corner)
20:52:21 arnaud__: Yes. It was a comment on their patch to align with Glance v2 support in nova.
20:52:46 gokrokve: may i have the review link for that patch?
20:53:17 zhiyan: the rbd handler was there but it works somewhat indirectly. Our guys added direct copy from ceph, or boot from ceph without a copy.
20:53:52 I will send it to the glance channel. I don't have it right now.
20:54:03 gokrokve: tbh i don't think so, sorry; that handler could already support *zero* copy
20:54:16 ok
20:54:23 gokrokve: thanks for the link
20:54:44 Probably this is about something else. I don't know all the details.
20:55:28 yes, I think reviewing that patch would be the best move for us
20:55:41 maybe we can try to hash it out there, or better understand the questions
20:55:57 okay, quick open discussion in case there are any more announcements / other business
20:56:02 #topic open discussion
20:56:06 #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/030444.html
20:56:11 . . just sayin' ;-)
20:56:15 yeah :)
20:57:03 another quick ML link
20:57:05 #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/029402.html
20:57:13 I don't quite feel comfortable saying it on the ML
20:57:26 but I think the idea about constructive conversations is a really good one, personally
20:57:39 I know I have failed many of you many times by not first seeking to fully understand an idea
20:58:02 I think the thread there moved a bit in the direction of a whitewash
20:58:23 but I would encourage anyone who feels similarly to kgriffs to consider that and to bring up their concerns where they feel comfortable
20:58:46 anyway, enough touchy-feely
20:58:48 :-)
20:59:05 got to run! thanks everybody!
20:59:07 #endmeeting