20:10:20 <markwash> #startmeeting glance
20:10:21 <openstack> Meeting started Thu Mar 20 20:10:20 2014 UTC and is due to finish in 60 minutes.  The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:10:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:10:24 <openstack> The meeting name has been set to 'glance'
20:10:25 <ameade> lol
20:10:26 <gokrokve> :-)
20:10:29 <markwash> Sorry folks, I accidentally started the meeting in the wrong room
20:10:30 <nikhil__> o/
20:10:31 <ameade> man how did we all miss that?
20:10:44 <arnaud__> hello :)
20:10:46 <markwash> quick roll call for the eavesdrop logs
20:10:47 <gokrokve> I usually monitor both
20:10:50 <markwash> o/
20:10:55 <gokrokve> \o
20:11:21 <markwash> okay great
20:11:47 <markwash> So, as I mentioned in the other room, let's quickly form an agenda for today
20:12:17 <ameade> markwash: so i think i am going to start pushing for the 'brick' library to get out of cinder and then help zhiyan with the cinder store code at some point
20:12:20 <gokrokve> *) Glance V2 state and future
20:12:48 <arnaud__> Nova new blueprint process: applicable for glance?
20:13:11 <markwash> other items before we get started ?
20:13:37 <markwash> (items to add to the agenda)
20:13:49 <markwash> okay! I'll ask again when we finish
20:13:56 <gokrokve> ok
20:14:09 <markwash> if you don't mind, let's learn about the blueprint stuff first
20:14:19 <markwash> #topic new nova blueprint process (arnaud__)
20:14:35 <markwash> arnaud__: floor is yours, please inform us
20:14:39 <arnaud__> so the idea is to use gerrit
20:14:52 <arnaud__> Russell sent an email a few hours ago
20:15:02 <arnaud__> https://wiki.openstack.org/wiki/Blueprints#Nova
20:15:23 <arnaud__> http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst
20:16:19 <markwash> interesting idea
20:16:31 <nikhil__> wow, like the last link
20:16:38 <markwash> should we take a look at the ML and other resources and revisit it next meeting?
20:16:44 <arnaud__> sounds good
20:17:01 <markwash> I think we all know there is a lot left to be desired by the current blueprint process, even with my attempt at a polish-up
20:17:13 <ameade> +1
20:17:20 <arnaud__> my feeling is that I like the idea, but I think having 1 process for 1 O/S project and a different one for all the others is a bit weird
20:17:57 <arnaud__> let's continue this next week then
20:18:06 <markwash> arnaud__: thanks for bringing it to our attention!
20:18:17 <markwash> #topic brick driver (ameade)
20:18:37 <markwash> ameade: floor is yours if you can elaborate a bit on your update
20:18:56 <ameade> so basically, this bp has been around for awhile: https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver
20:19:08 <ameade> it seems the only blocker for its completion is the Brick library
20:19:19 <ameade> which is the attach/detach code currently in cinder
20:19:46 <ameade> i just want to make sure we are good with the plan before i push this as a use case for brick
20:20:01 <markwash> ameade: there is a little issue that I'm worried about there, maybe we can help resolve it to clarify this blueprint
20:20:10 <markwash> cinder will only store raw images, right?
20:20:50 <ameade> markwash: actually it will not care about the bits and will act solely as a store
20:21:02 <markwash> ah
20:21:06 <ameade> for image upload it would create a volume, attach to the volume, upload the image bits
20:21:31 <zhiyan> ameade: iirc, i think there are three steps to make cinder-store a normal store: 1. the brick lib, 2. multi-attach, 3. store enhancements (add add(), delete(), etc.)
20:21:33 <markwash> if I upload a qcow2 to glance, and we want to use the cinder driver, what does it do?
20:21:34 <ameade> i do think however that if someone did upload a 'raw' image it could then lead to more enhancements such as no copy boot from volume
20:22:12 <ameade> markwash: it would just be a cinder volume with the qcow2 bits in it
20:22:17 <arnaud__> ameade: do you know what is done/need to be done in for the brick library?
20:22:27 <ameade> zhiyan: +1, multi-attach for brick is also a blocker
20:23:33 <ameade> arnaud__: I don't know all the details but i think it boils down to talking with jgriffith (cinder ptl), ripping the code out, and supporting multiple attaching to volumes
20:23:42 <arnaud__> ok
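A minimal sketch of the upload flow ameade describes (create a volume, attach it, write the image bits): the cinder_client and brick_connector objects and their methods below are placeholders, not the real cinder or brick APIs.

```python
# Hypothetical sketch of a cinder-backed glance store add(); every API
# name here is an assumption for illustration, not actual library code.
import math
import os


def add_image_to_cinder(cinder_client, brick_connector, image_id, data, size_bytes):
    """Create a volume, attach it locally, and copy the image bits in."""
    # Cinder volumes are sized in whole GBs, so round the image size up.
    size_gb = max(1, int(math.ceil(size_bytes / float(1024 ** 3))))
    volume = cinder_client.volumes.create(size_gb, name='image-%s' % image_id)

    # Attach the volume to this host via a brick-style connector, which
    # is assumed to return a local block device path.
    device_path = brick_connector.connect_volume(volume.id)
    try:
        with open(device_path, 'wb') as dev:
            for chunk in data:
                dev.write(chunk)
            dev.flush()
            os.fsync(dev.fileno())
    finally:
        brick_connector.disconnect_volume(volume.id)

    # The store would then record a location such as 'cinder://<volume-id>'.
    return 'cinder://%s' % volume.id
```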
20:23:51 <zhiyan> ameade: and as we discussed, if we want to go the "second" way, I think no enhancement is needed on the glance side.
20:24:21 <markwash> ameade: I guess I don't see why we would want to store qcow2 bits in a cinder volume exactly. . since then I guess we wouldn't be able to attach or boot the cinder volume
20:24:23 <ameade> zhiyan: correct, the 'second' way being nova using brick directly
20:24:46 <ameade> markwash: it just wouldn't be a bootable volume at that point
20:25:00 <markwash> I guess what I'm getting at is that cinder seems to depend on supporting multiformat images
20:25:00 <ameade> but it would open up storing images on a ton of block storage backends
20:25:19 <zhiyan> ameade: hmm, actually on the nova side, even without brick, we could leverage the existing volumes.py directly; that's the existing volume attachment code
20:26:23 <ameade> markwash: why is that exactly?
20:26:44 <markwash> it just seems uneven if we make a store that works specially with raw image formats
20:27:22 <markwash> especially since we might like to store the same image in qcow2 on swift and in raw on cinder
20:27:39 <zhiyan> i'm a little confused, i think there's no limitation on image format. a volume can hold any image bits.
20:27:44 <markwash> so I'm just wondering if it makes more sense to actually pull out the cinder driver and then figure out how to do it over with multiformat images
20:28:06 <markwash> zhiyan: but your goal is zero copy boot, right?
20:28:55 <zhiyan> markwash: that's another topic; for example, with image-handler support, we can do zero-copy provisioning with glance-vmware-store
20:29:17 <zhiyan> markwash: but at the same time, glance-vmware-store can support the original provisioning approach: downloading
20:30:11 <markwash> it feels a bit of a special case to have a client side logic that says "if location type = X and format = Y: do something amazing; else: do something normal"
20:30:17 <markwash> a bit *too* special
20:30:17 <zhiyan> markwash: IMO, iiuc, there's no assumption at the store level
20:31:17 <ameade> markwash: i think even if we ignore the super awesome special case there is still value to storing images in cinder
20:31:19 <zhiyan> markwash: even if we go with this logic on the client, i think it is: "if location type = X" (no format = Y)
20:31:29 <zhiyan> ameade: +1
20:31:48 <zhiyan> there's no direct relationship, iiuc
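For illustration, the client-side branching markwash is wary of might look roughly like this; the field names and return values are made up, not actual nova or glance code.

```python
# Illustrative only: every key name here is an assumption for the sketch.
def provision_image(image, location):
    """Pick a provisioning path based on location type and image format."""
    if location['type'] == 'cinder' and image['disk_format'] == 'raw':
        # "do something amazing": boot straight from the volume, zero copy.
        return ('boot_from_volume', location['volume_id'])
    # "do something normal": download the bits from glance as usual.
    return ('download', image['id'])


# Example:
image = {'id': 'IMAGE_ID', 'disk_format': 'qcow2'}
location = {'type': 'cinder', 'volume_id': 'VOLUME_ID'}
print(provision_image(image, location))  # -> ('download', 'IMAGE_ID')
```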
20:31:50 <markwash> ameade: but in that case, why just images? shouldn't we work on making cinder drivers work as backends to swift somehow?
20:32:16 <zhiyan> swift is object storage, not *block*
20:32:22 <gokrokve> I actually have a question from our team about ceph
20:32:40 <gokrokve> They were trying to add ceph support to nova to boot from ceph storage
20:32:51 <markwash> zhiyan: right, but the store is an abstraction to object storage, not block storage
20:32:56 <markwash> a qcow2 is an object, not a block
20:33:01 <gokrokve> but now they have to rebase their changes together with nova on Glance v2 api
20:33:04 <zhiyan> i think the original idea of the cinder store is to make glance support different block storage backends through a single store
20:33:33 <zhiyan> qcow2 is a block device image
20:33:51 <markwash> right, so if I read a qcow2 file, I can produce a block device abstraction from it
20:33:54 <markwash> but the file is a file is an object
20:34:29 <zhiyan> interesting idea
20:34:32 <markwash> I can't encode a block Foo as qcow2, upload it to cinder, and then use it as the block Foo. . I can only use it to read the file Foo.qcow2
20:36:15 <markwash> to me its fine to allow raw images in the glance cinder store
20:36:27 <markwash> and that's consistent with what we were trying to achieve with the cinder store
20:36:36 <markwash> but putting any other format is not appropriate
20:36:50 <markwash> because it doesn't fit the expectations of being a block device
20:37:00 <ameade> i can see where you are coming from
20:37:10 <ameade> it's kind of a hack
20:37:38 <markwash> if we made something like "cinder_location" that wasn't a store location, then we could accomplish this hack a bit more cleanly
20:38:06 <zhiyan> we can store that qcow2 *file* in s3/swift object storage *or* another store, like filesystem
20:38:07 <markwash> all we really want is metadata to the client that says "this image is already stored as a block device <here>"
20:39:09 <zhiyan> some of the stores glance currently supports are object stores, and some are block stores (besides cinder), am i right?
20:39:37 <zhiyan> markwash: iiuc, this case should be covered by multi-location?
20:40:16 <markwash> zhiyan: I don't think it should really be covered by multi-location either, because we need to distinguish between replica locations that are different just for network performance, and alternative formats that are available
20:40:32 <markwash> and this is partly represented by the fact that currently the format is an attribute of the image, not the location
20:40:48 <markwash> and there is not a backwards compatible migration path to move format from the image to the location
20:41:38 <zhiyan> markwash: ok, let's say multi-location is for the same-format image case.
20:41:41 <markwash> so for v2, i think we should add something like "volume location" as metadata and just put cinder or other use cases through that metadata
20:42:27 <markwash> would that idea work and help unblock the core use cases of zero copy boot ?
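As a sketch of the "volume location" idea, the block-device copy could be advertised as plain image metadata rather than as a store location; every key name below is made up for illustration.

```python
# Hypothetical image record where the block-device copy is carried as
# metadata instead of a store location; all keys here are assumptions.
image = {
    'id': 'IMAGE_ID',
    'disk_format': 'qcow2',
    'locations': [
        {'url': 'swift://container/IMAGE_ID', 'metadata': {}},
    ],
    'properties': {
        # "this image is already stored as a block device <here>"
        'volume_location': 'cinder://VOLUME_ID',
        'volume_format': 'raw',
    },
}
```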
20:42:42 <markwash> ameade: though I guess you have some other use cases like being able to use netapp as your backing store, right?
20:43:30 <zhiyan> markwash: hmm.. i believe the original cinder-store idea is to use a single store to enable more block storage backends for glance; in short, it leverages cinder capabilities for glance and de-duplicates store driver implementations
20:43:43 <ameade> markwash: yeah, i think maybe we should get back to this next week
20:44:20 <markwash> ameade: yeah, I'd love to sit down with you sometime before then to make sure I understand and properly appreciate your goals, I don't want to just be negative like this :-)
20:44:59 <zhiyan> it will be cool if someone could let me know the whole picture
20:44:59 <ameade> markwash: yeah definitely, it's good to think through these concerns :)
20:45:26 <markwash> zhiyan: that's interesting, maybe we can separate that use case out from the zero-copy use case and present things a bit differently
20:45:54 <markwash> but perhaps we should move on
20:46:08 <markwash> zhiyan: would you like a chance to meet about this next week as well?
20:46:21 <zhiyan> yes!
20:46:59 <zhiyan> and i'd like to know the whole use case before the meeting discussion
20:47:12 <markwash> okay, lets follow up
20:47:15 <markwash> next topic for today
20:47:23 <markwash> #topic Glance V2 state and future (gokrokve)
20:47:31 <gokrokve> Yes.
20:47:34 <markwash> care to introduce it, gokrokve
20:48:11 <gokrokve> I just want to understand: will it change significantly in Juno?
20:48:26 <gokrokve> I am aware of one change related to artifacts.
20:48:42 <gokrokve> But as I see there are discussions about locations support.
20:49:01 <markwash> gokrokve: plans are still a bit unclear I suppose
20:49:06 <gokrokve> We have a team who tried to add ceph support to nova to boot directly from a ceph volume.
20:49:31 <markwash> taking a look at https://blueprints.launchpad.net/glance/future I think I see mostly tasks stuff in addition to the artifacts
20:49:45 <gokrokve> They asked me whether they can rebase their patches on the current Glance v2 and do the nova migration to Glance v2, or whether they should wait for v2 to be stable.
20:50:17 <arnaud__> gokrokve: do you plan to use the image handler?
20:50:20 <markwash> gokrokve: I think glance v2 core functionality will not change, and we are committed to backwards compatibility
20:51:01 <zhiyan> gokrokve: i'm not sure we are doing similar things for zero-copy provisioning for the ceph backend.
20:51:02 <gokrokve> Heh I got this question 30 minutes ago. I have no idea how it works. Probably with the image handler.
20:51:13 <arnaud__> gokrokve: I think from the nova perspective it becomes high priority to support v1 and v2 (back to esheffie1d bp)
20:51:30 <gokrokve> They need to support nova locations and Glance v2.
20:52:17 <zhiyan> gokrokve: actually there already is an rbd image handler, but yes arnaud__, i'm working on v2 stuff with esheffie1d (and the next step is tempest cases for that corner)
20:52:21 <gokrokve> arnaud__: Yes. It was a comment for their patch to align with Glance v2 support in nova.
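For context on the v1/v2 support question, selecting the API version in python-glanceclient looks roughly like this; the endpoint and token are placeholders, and the exact constructor signature should be checked against the client version in use.

```python
# Rough sketch; ENDPOINT/TOKEN are placeholders and the constructor
# arguments should be verified against the installed python-glanceclient.
import glanceclient

ENDPOINT = 'http://glance.example.com:9292'
TOKEN = 'AUTH_TOKEN'

# Nova-side code that wants to support both APIs would pick the version
# in one spot and keep the rest of the call sites the same where possible.
v1 = glanceclient.Client('1', ENDPOINT, token=TOKEN)
v2 = glanceclient.Client('2', ENDPOINT, token=TOKEN)

image_v1 = v1.images.get('IMAGE_ID')   # v1: image object with a properties dict
image_v2 = v2.images.get('IMAGE_ID')   # v2: flatter image with custom keys
```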
20:52:46 <zhiyan> gokrokve: may i know the review link for that patch?
20:53:17 <gokrokve> zhiyan: the rbd handler was there but it works somewhat indirectly. Our guys added direct copy from ceph, or boot from ceph without a copy.
20:53:52 <gokrokve> I will send it to the glance channel. I don't have it right now.
20:54:03 <zhiyan> gokrokve: tbh i don't think so, sorry; that handler can already support *zero* copy
20:54:16 <gokrokve> ok
20:54:23 <zhiyan> gokrokve: thanks for the link
20:54:44 <gokrokve> Probably this is about something else. I don't know all the details.
20:55:28 <markwash> yes I think reviewing that patch would be the best move for us
20:55:41 <markwash> maybe we can try to hash it out there, or better understand the questions
20:55:57 <markwash> okay, quick open discussion in case there are any more announcements / other business
20:56:02 <markwash> #topic open discussion
20:56:06 <markwash> #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/030444.html
20:56:11 <markwash> . . just sayin' ;-)
20:56:15 <arnaud__> yeah :)
20:57:03 <markwash> another quick ML link
20:57:05 <markwash> #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/029402.html
20:57:13 <markwash> I don't quite feel comfortable saying it on ML
20:57:26 <markwash> but I think the idea about constructive conversations is a really good one, personally
20:57:39 <markwash> I know I have failed many of you many times by not first seeking to fully understand an idea
20:58:02 <markwash> I think the thread there moved a bit in the direction of a whitewash
20:58:23 <markwash> but I would encourage anyone who feels similarly to kgriffs to consider that and to bring up their concerns where they feel comfortable
20:58:46 <markwash> anyway, enough touchy-feely
20:58:48 <markwash> :-)
20:59:05 <markwash> got to run! thanks everybody!
20:59:07 <markwash> #endmeeting