16:01:19 <jgriffith> #startmeeting cinder
16:01:20 <openstack> Meeting started Wed Jun  5 16:01:19 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 <openstack> The meeting name has been set to 'cinder'
16:01:26 <jgriffith> Hey everyone
16:01:28 <avishay> hello all!
16:01:30 <dachary> hi
16:01:31 <xyang_> hi
16:01:31 <rushiagr> heylo!
16:01:32 <DuncanT> Hey
16:01:34 <jgallard> hi
16:01:40 <guitarzan> hi
16:01:40 * bswartz waves hello
16:01:42 <thingee> o/
16:01:45 <kmartin> hi
16:01:46 <jgriffith> Yay... full house :)
16:01:46 <jsbryant> Happy Wednesday!
16:01:48 <jdurgin1> hi
16:01:56 <jgriffith> alrighty, let's get started
16:02:05 <jgriffith> #topic Shares service decision
16:02:24 <jgriffith> bswartz: I believe you have some info to share here?
16:02:36 <eharney> hi
16:02:40 <zhiyan> hi
16:02:42 <jgriffith> eharney: o/
16:02:47 <bswartz> yes we've decided to give up on having the share service in cinder -- we're going to start a new project
16:03:13 <bswartz> It's what everyone seems to prefer, and we agree it's the right long term approach
16:03:15 <jgriffith> bswartz: very exciting!
16:03:36 <bswartz> in the short term the biggest hurdle to doing that is getting stuff moved to oslo
16:03:41 <avishay> cool, sounds like the right approach
16:03:47 <eharney> sounds great
16:03:52 <kmartin> +1
16:03:56 <jgriffith> bswartz: so about that
16:04:07 <jgriffith> bswartz: can you tell me what Cinder code it is you want moved to OSLO?
16:04:43 <jgriffith> bswartz: Rob Esker mentioned that yesterday but couldn't tell me what code he was referring to
16:04:46 <bswartz> so the easiest thing for us to do is to fork cinder including our current share service implementation, then remove everything block-related, then start renaming stuff
16:04:58 <bswartz> however that will result in a lot of duplicated code at the end
16:04:59 <jgriffith> bswartz: sure...
16:05:12 <jgriffith> bswartz: but I'm wondering about the OSLO statements
16:05:23 <jgriffith> they've come up a couple times and I don't know what they mean :)
16:05:28 <bswartz> so if we can find a way to move some of the common stuff into oslo then the new project and cinder can simply share it
16:05:53 <bswartz> I'm thinking in particular about some of the scheduler and API server bits
16:06:05 <jgriffith> sure... that's the intent of OSLO anyway
16:06:05 <bswartz> I know there will be a lot of duplication there
16:06:18 <avishay> I guess a fair amount of that is in Nova too
16:06:18 <jgriffith> scheduler and API are efforts OpenStack wide
16:06:26 <jgriffith> avishay: +1
16:06:27 <rushiagr> I think many of the 'things' in Cinder are derived from nova, and most of that is already in Oslo? not sure
16:06:35 <jgriffith> avishay: bswartz and EVERY other OpenStack project :)
16:06:45 <jgriffith> rushiagr: it's getting there
16:06:46 <bswartz> honestly I don't have more detail than that today on requirements to move stuff to oslo
16:07:04 <bswartz> when we start performing the fork we may discover other stuff that's common
16:07:08 <jgriffith> rushiagr: bswartz you may have noticed massive efforts by winston-d and others on the scheduler moving into OSLO already
16:07:12 <jgriffith> as well as RPC
16:07:25 <jgriffith> bswartz: cool
16:07:37 <rushiagr> yes, I saw efforts on scheduler
16:07:59 <jgriffith> bswartz: keep in mind the API stuff has also been brought up by teams like ironic, triple-o, RedDwarf etc etc
16:08:05 <jgriffith> There's a lot of interest there
16:08:13 <rushiagr> hmmm..need to come up with a better understanding of what bits of Cinder is already in oslo, or is used openstack wide
16:08:41 <jgriffith> rushiagr: so the best thing to check is oslo-incubator code base, there's a lot there
16:08:47 <rushiagr> s/is/are/
16:08:59 <jgriffith> rushiagr: and link up with other "new" projects that are going through the same process right now
16:09:08 <jgriffith> There's no shortage of new projects ramping up :)
16:09:15 <rushiagr> jgriffith: okay, sounds cool
16:09:32 <bswartz> I don't have anything more on the share service for now
16:09:44 <jgriffith> bswartz: rushiagr cool.. thanks!
16:09:50 <bswartz> thanks everyone for your feedback on our share service plans
16:09:57 <jgriffith> bswartz: rushiagr let me and others know if we can help with getting started
16:10:09 <rushiagr> thanks all
16:10:10 <jgriffith> bswartz: rushiagr you should probably propose something to TC
16:10:27 <jgriffith> bswartz: rushiagr or at least send an email to openstack-dev about your plans
16:10:30 <avishay> thanks for your work and understanding, and best of luck with the new effort
16:10:33 <jgriffith> get the ball rolling so to speak
16:11:46 <rushiagr> jgriffith: ya, we're going to do that soon
16:11:53 <jgriffith> rushiagr: cool
16:12:17 <jgriffith> Ok, anything else?
16:12:36 <jgriffith> #topic direct/IO direct/Attach
16:12:40 <jgriffith> zhiyan: you around?
16:12:45 <zhiyan> yes
16:12:46 <zhiyan> do you remember 'host direct I/O / direct attach feature' discussion in last weekly meeting?
16:12:47 <zhiyan> (https://etherpad.openstack.org/linked-template-image)
16:13:13 <avishay> yes, go ahead please
16:13:20 * jgriffith didn't know you could use etherpad to write novels :)
16:13:21 <zhiyan> I'm not sure there is a discussable draft design currently, hema is working on a design
16:13:55 <zhiyan> i know this is a big change, so in order to prevent this cinder requirement from blocking the glance-cinder-driver implementation, i have a plan B..
16:14:05 <jgriffith> zhiyan: So my proposal is/was...
16:14:19 <zhiyan> the glance-cinder-driver needs cinder support, and now there are two choices for me: 1. change cinder to support attaching volumes to hosts / direct IO. 2. upload/download volume content via http. if #1 is not starting, i'd like to get #2 as the plan B.
16:14:19 <jgriffith> Start with the generic attach/detach lib
16:14:42 <zhiyan> yes, it's great if we can get 1#
16:15:20 <jgriffith> zhiyan: there is effort to get this by H2 (I think) :)
16:15:31 <jgriffith> hemna_: ^^
16:15:55 <zhiyan> does #2 make sense to you too?
16:16:13 <jgriffith> zhiyan: I'm not sure I like #2 but I may not fully understand what you have in mind
16:16:36 <jgriffith> zhiyan: it doesn't seem that much different from functionality we already provide
16:17:16 <bswartz> zhiyan -- which storage backend is this feature supporting?
16:17:22 <bswartz> or is it generic?
16:17:31 <zhiyan> use-case like this: glance api client requests to download an image, then if we have #1, we can attach the volume from the glance host and read the volume bits and send them to the glance client; if we use #2, glance will act as an http proxy
16:17:46 <zhiyan> bswartz: generic, i think.
16:18:00 <kmartin> I believe hemna is en route to the office
16:18:25 <DuncanT> bswartz: I'd really not like to see it done as anything other than generic (with backend specific optimisation paths where appropriate)
16:18:39 <jdurgin1> DuncanT: +1
16:18:42 <bswartz> oh this is part of the proposal to use cinder as an image store behind glance
16:18:53 <avishay> yes
16:18:55 <jgriffith> DuncanT: +1000
16:19:06 <bswartz> okay I get it
16:20:35 <jgriffith> zhiyan: can you talk a bit more about #2?
16:20:48 <jgriffith> Unless you guys covered all of this already
16:20:54 <jgriffith> (ie last weeks meeting)
16:21:36 <DuncanT> Not #2, no
16:21:58 <zhiyan> jgriffith: in #2, we will add new apis to cinder: upload/download volume by http, then glance-cinder-driver will use this api to read volume bits when the glance api client asks to download an image
16:22:10 <thingee> DuncanT: +1
16:22:18 <jgriffith> zhiyan: that's what I thought and I don't like that idea
16:22:33 <jgriffith> zhiyan: if you want to attach a cinder volume to glance and do something there that's fine
16:22:39 <avishay> no no no
16:22:47 <jgriffith> avishay: ??
16:22:58 <avishay> jgriffith: no to idea #2
16:23:03 <jgriffith> avishay: oh :)
16:23:16 <avishay> yes yes yes to what you said :)
16:23:28 <jgriffith> zhiyan: so adding an HTTP transfer layer to Cinder isn't something anybody seems overly interested in :)
16:23:33 <zhiyan> jgriffith: yes, i like it too, that's option #1. i believe it will happen, but maybe not in the timescale that suits me
16:23:44 <jgriffith> zhiyan: not sure why not?
16:24:02 <jgriffith> zhiyan: It's not that difficult to create/attach a Cinder volume
16:24:07 <jgriffith> not sure what I'm missing here...
16:24:15 <avishay> well if we add HTTP we can compete with Swift ;)
16:24:22 <jgriffith> avishay: hush!
16:24:40 <jgriffith> :)
16:24:43 <smulcahy> lol
16:24:48 <DuncanT> jgriffith: Attaching a ceph volume isn't easy
16:24:54 <avishay> zhiyan: why not start by copy-pasting the code and continue working on the main portion of your code, then replace it with the generic attach/detach code?
16:24:55 <DuncanT> jgriffith: Nor sheepdog
16:25:23 <jgriffith> DuncanT: fair, but that is what we're supposed to be in the business of doing isn't it?
16:26:04 <DuncanT> jgriffith: Indeed. I'm just not at all sure what the final solution is going to look like
16:26:15 <jgriffith> jdurgin1: thoughts ^^
16:26:22 <avishay> hemna: good timing :)
16:26:30 <zhiyan> avishay: do you mean, copy nova attach/detach code to glance directly temporarily, and make glance-cinder-driver work, then move to cinder generic attach/detach code?
16:26:45 <DuncanT> jdurgin had one suggestion that we never expose a volume device and just have a data transfer API
16:26:47 <jdurgin1> personally I'd like to see a generic lib for doing i/o to cinder block devices, like brick
16:26:49 <avishay> zhiyan: does that make sense?  just so that this isn't blocking you?
16:26:56 <jgriffith> zhiyan: that's an option but I'd rather get the brick code first
16:27:06 <jgriffith> jdurgin1: +1
16:27:18 <jgriffith> jdurgin1: I'm working on it, hopefully get back to it next week
16:27:22 <hemna> baby steps
16:27:24 <zhiyan> yes, jgriffith, but it seems the brick code is not ready for attach/detach...
16:27:37 <hemna> we are working on doing attach/detach in brick soon for iSCSI and FC to start
16:27:45 <jdurgin1> I'd just like to see the focus on I/O rather than attach/detach, since that seems to be what most things actually care about
16:27:50 <hemna> zhiyan, we are supposed to get it done for H2
16:27:58 <jgriffith> jdurgin1: fair
16:28:11 <jgriffith> jdurgin1: but you'll have to clarify what you mean specifically there for me :)
16:28:12 <zhiyan> hemna: thanks. so the time is not good for me...
16:28:29 <hemna> well, unfortunately I can't go back in time.
16:28:45 <jdurgin1> one way to do this would be to reference volume drivers in brick, and add file-like driver methods for opening, closing, reading and writing
16:29:04 <jdurgin1> where open/close could include attach/detach when that's necessary
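[Editor's note: the file-like driver interface jdurgin1 describes above might look roughly like this. All names are hypothetical (brick had no such API at the time); this is a minimal sketch of the idea, with open/close wrapping any needed attach/detach so one generic copy loop works for every backend.]

```python
import io


class VolumeDriver:
    """Hypothetical brick-style driver exposing file-like I/O methods."""

    def open_volume(self, volume_id):
        """Attach (if needed) and return a file-like handle."""
        raise NotImplementedError

    def close_volume(self, handle):
        """Flush and detach (if needed)."""
        raise NotImplementedError


class InMemoryDriver(VolumeDriver):
    """Toy backend standing in for iSCSI/ceph/etc. for illustration."""

    def __init__(self):
        self._store = {}

    def open_volume(self, volume_id):
        # A real driver would do its attach step here
        buf = self._store.setdefault(volume_id, io.BytesIO())
        buf.seek(0)
        return buf

    def close_volume(self, handle):
        # A real driver would do its detach step here
        handle.flush()


def copy_volume(src_drv, src_id, dst_drv, dst_id, chunk=4096):
    """Generic backup/migration path: works for any pair of drivers,
    no per-backend implementation required."""
    src = src_drv.open_volume(src_id)
    dst = dst_drv.open_volume(dst_id)
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)
    src_drv.close_volume(src)
    dst_drv.close_volume(dst)
```

A backend could still override `copy_volume`-style operations with an optimized path (e.g. ceph-to-ceph cloning), which is the "override it for optimization" point made below.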
16:29:14 <jgriffith> jdurgin1: interesting.... so add another abstraction of sorts
16:29:18 <zhiyan> hemna: fine. so maybe avishay is right...i need to copy nova attach/detach code to glance directly temporarily, and make glance-cinder-driver work, then move to brick...
16:29:48 <hemna> that's an option.
16:29:49 <zhiyan> jdurgin1: +1
16:30:00 <hemna> that's what Cinder did for Grizzly
16:30:16 <hemna> and if brick is stable after H2, then you could pull in brick for H3 and use that instead
16:30:22 <hemna> so you aren't blocked on us for the time being.
16:30:54 <bswartz> jdurgin1: are you proposing having python code in the data path for bulk data operations?
16:31:06 <bswartz> if so, that seems unwise
16:31:41 <avishay> Is this going to Nova as well?
16:32:06 <jgriffith> avishay: yes (depending on exactly which piece you're referring to)
16:32:22 <jgriffith> avishay: we want to commonize the existing attach/detach code
16:32:29 <hemna> avishay, the hope was to eliminate the dupe code between nova and cinder at least for attach/detach
16:32:50 <avishay> jgriffith: OK, that's what I thought.  In that case an open/close API doesn't seem too good.
16:32:58 <zhiyan> jgriffith: do you think the http volume download/upload is a common feature for cinder?
16:33:15 <zhiyan> or no value to implement it..
16:33:18 <avishay> I wouldn't really care if it was just Cinder and Glance, but for all Nova I/O to go through python...
16:33:19 <jdurgin1> bswartz: yes, I don't expect python to be a big bottleneck there
16:33:35 <jgriffith> zhiyan: I don't.. not really
16:34:00 <jdurgin1> avishay: this wouldn't affect how vms talk to disks
16:34:11 <DuncanT> HTTP volume upload was proposed before... I don't remember the outcome of those discussions
16:34:25 <winston-d> sorry i'm very late
16:34:32 <avishay> jdurgin1:  so what would that path look like?
16:34:34 <jgriffith> DuncanT: I think somebody said "use swift"
16:34:37 <jdurgin1> avishay: what nova I/O are you worried about?
16:34:38 <hemna> someone wants to upload a 30G volume over http ?
16:35:06 <avishay> jdurgin1: a VM reading/writing to a volume
16:35:29 <DuncanT> jgriffith: I was wondering if it could be used e.g. to move volumes from AWS into another public cloud [cough]
16:35:52 <jgriffith> DuncanT: hehe
16:35:59 <jdurgin1> avishay: that would still go through the hypervisor - brick need only be used for attach/detach in that case
16:36:00 <smulcahy> DuncanT: I think you could achieve something like that with swift and the cinder volume backup api
16:36:07 <jgriffith> DuncanT: so that's a volume migration use case
16:36:11 <smulcahy> possibly with some slight enhancements to the backup api
16:36:31 <jgriffith> DuncanT: that's going to suck no matter what IMO
16:36:46 <avishay> jdurgin1: so there will be attach/detach as well as open/close?
16:37:24 <jdurgin1> avishay: could be, that's the one use case where we care about attaching rather than doing i/o from nova itself
16:37:36 <DuncanT> avishay: There will be attach-to-vm, not necessarily attach-to-local-host I think
16:37:52 <jgriffith> jdurgin1: I guess I don't quite understand the point still (sorry)
16:38:00 <jgriffith> I mean abstraction is neat, but....
16:38:38 <avishay> OK, if I understand right, that works for me
16:38:39 <jdurgin1> the driver-level i/o is useful more generally - it lets us do things like implement generic backup or volume migration without modifying every backend
16:38:56 <zhiyan> jgriffith: do you think we can wrap brick with something like a cinder-agent, which takes care of attaching the volume to an instance or to the host
16:39:38 <jgriffith> zhiyan: I think that would be tricky (permissions, host access etc)
16:39:49 <jgriffith> jdurgin1: I guess I don't follow
16:40:00 <zhiyan> since i think the 'connection_info' which the cinder api provides is enough for the client (i mean cinder-agent) to attach it to an instance or host
16:40:11 <jgriffith> jdurgin1: not sure I see the advantage/difference for migration?
16:40:46 <jgriffith> zhiyan: I have always just thought it should be something like:
16:40:56 <winston-d> jdurgin1: having every backend support driver-level i/o requires them to modify their drivers
16:40:58 <jgriffith> glance has a volume/image database and its connection info
16:41:06 <jgriffith> nova or other asks for an image
16:41:14 <jgriffith> glance can give back a list of choices
16:41:24 <jgriffith> 1. Download it via HTTP (existing model)
16:41:37 <zhiyan> yes, but glance needs to support create (and upload image content) and download
16:41:38 <jgriffith> 2. Use a Cinder volume that has it (here's the info for it)
16:41:44 <zhiyan> yes
16:41:45 <zhiyan> yes
16:41:52 <jgriffith> 3. Create a Cinder volume with it
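[Editor's note: the three choices jgriffith lists could be modeled as a list of image "locations" that a client filters by preference. A toy sketch; the URL schemes, field names, and IDs are invented for illustration and are not Glance's actual API.]

```python
def pick_location(locations, prefer=("cinder", "http")):
    """Return the first image location whose URL scheme matches the
    caller's preference order (e.g. prefer a Cinder volume over HTTP)."""
    by_scheme = {}
    for loc in locations:
        scheme = loc["url"].split("://", 1)[0]
        by_scheme.setdefault(scheme, loc)
    for scheme in prefer:
        if scheme in by_scheme:
            return by_scheme[scheme]
    raise LookupError("no usable image location")


# Hypothetical response for one image: the existing HTTP download path
# plus a Cinder volume that already holds the image bits
locations = [
    {"url": "http://glance.example/v1/images/img-1", "metadata": {}},
    {"url": "cinder://vol-1", "metadata": {}},
]
```

A nova host with block access would prefer the `cinder` location; anything else falls back to plain HTTP, which preserves option 1 as the default.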
16:41:56 <jdurgin1> jgriffith: right now the backup or other new apis that require i/o have to be implemented in each driver, if drivers have an i/o interface a generic implementation can be done that works for these new apis without modifying the drivers, although they could override it for optimization
16:42:28 <jdurgin1> winston-d: yes, but it's one addition instead of adding a way of doing it for every new api that requires i/o
16:42:29 <jgriffith> jdurgin1: got ya, although for the majority of back-ends that's not really true
16:42:43 <jgriffith> jdurgin1: with things like iSCSI we have generic implementations
16:42:50 <jdurgin1> jgriffith: yes, but just for iscsi
16:43:02 <jdurgin1> we can bump that up a level to all drivers
16:43:03 <jgriffith> jdurgin1: indeed
16:43:11 <zhiyan> jgriffith: i think so, but at the implementation level i hit a blocker: how does the glance-cinder driver upload image content to the volume backend? it needs to attach the volume and then write the image bits
16:43:33 <jgriffith> zhiyan: that's what I've been saying though, Cinder already provides that
16:43:41 <jgriffith> "cinder create --image-id xxxx"
16:44:00 <avishay> jgriffith: did I miss something where some cinder-agent won't eventually be doing the attach/detach?  it will stay as a library?
16:44:44 <jgriffith> avishay: I'm not sure how I feel about the whole agent thing, but I think regardless there are some first steps that need to be taken before we tackle that
16:44:48 <zhiyan> jgriffith: do you mean the image will be saved in two different places, one in glance, one in cinder?
16:45:17 <hemna> jgriffith, +1
16:45:19 <jgriffith> zhiyan: yes that would be the model to start
16:45:30 <avishay> jgriffith: yes, agreed.  but i'd like to remove the requirement for cinder servers to have HBAs connected to the storage.
16:45:36 <jgriffith> zhiyan: then the next step would be create volume and download image all in one
16:45:51 <jgriffith> avishay: then use iSCSI :)
16:46:07 <jgriffith> so this has been why I don't like FC all along just FYI :)
16:46:41 <jgriffith> avishay: but again that seems like a discussion in and of itself to me
16:46:52 <avishay> jgriffith: iSCSI doesn't solve everything.  Some deployments would want separate management and data paths, and Cinder doesn't need to be on the data.
16:46:58 <jgriffith> I think we're straying from the issues zhiyan is presenting
16:47:07 <jgriffith> avishay: ?
16:47:19 <thingee> and we're running out of time :)
16:47:20 <avishay> separate networks
16:47:24 <jgriffith> avishay: you're just moving that requirement to another location
16:47:32 <jgriffith> thingee: +1
16:47:37 <jgriffith> we still have two topics
16:47:37 <avishay> OK, let's focus :)
16:47:50 <jgriffith> I'd like to close out the issue with zhiyan quickly and move on
16:48:13 <jgriffith> zhiyan: It seems option #2 isn't appealing to anybody
16:48:25 <jgriffith> zhiyan: Option #1 seems favorable
16:48:31 <jgriffith> zhiyan: but I think we need to break it down
16:48:35 <jgriffith> zhiyan: take smaller chunks
16:48:36 <zhiyan> yes
16:48:49 <jgriffith> zhiyan: solve the basic model we talked about earlier first
16:48:52 <jgriffith> then build on it
16:48:53 <zhiyan> jgriffith: i will think about your option
16:49:01 <jgriffith> as far as timing...
16:49:14 <jgriffith> I hate the idea of you duplicating code then replacing it when brick is ready
16:49:22 <jgriffith> I'd really like to see if you can either:
16:49:27 <jgriffith> 1. help with what's needed in brick
16:49:35 <jgriffith> 2. wait for brick to be ready
16:49:41 <zhiyan> jgriffith: but i'd like to use brick/cinder-agent in the future
16:49:43 <jgriffith> maybe you could stub some things out in the meantime?
16:50:03 <jgriffith> zhiyan: I don't know what the timing is for cinder-agent etc
16:50:13 <avishay> jgriffith: to be clear, i didn't mean to contribute that duplicate code, just use it locally in the meantime so he's not stuck
16:50:20 <DuncanT> My issue here is that I strongly suspect we'll end up shipping H final with a feature that just plain won't work for ceph/sheepdog, and I think that sets a dangerous precedent
16:50:21 <jgriffith> zhiyan: but that's going to be mostly handled by avishay hemna and FC folks I think :)
16:50:36 <jgriffith> avishay: I think we're on the same page
16:51:01 <jgriffith> DuncanT: so this is the dilemma, does that mean we just don't do new features in Cinder?
16:51:01 <hemna> DuncanT, what breaks ceph ?
16:51:42 <jgriffith> DuncanT: not saying you're wrong
16:51:52 <jdurgin1> jgriffith: it means don't depend on implementation details of the backend for generic new features
16:51:57 <DuncanT> jgriffith: I don't know how to square the circle...
16:51:58 <jgriffith> DuncanT: so maybe the answer is cinder is feature complete
16:52:07 <zhiyan> jgriffith: we are on the same page i think, i don't like using duplicate code either, will use brick
16:52:25 <jgriffith> jdurgin1: understood
16:52:32 <zhiyan> and i'd like to contribute something to brick if hemna and avishay need me
16:52:43 <jgriffith> jdurgin1: but I have to rely on you or other Ceph experts to propose alternatives that work for everybody
16:52:51 <jgriffith> jdurgin1: and also don't hinder/hamper anybody else
16:53:10 <zhiyan> seems the R/O volume support discussion needs to move to the next meeting?
16:53:16 <DuncanT> jgriffith: We work like iscsi in this regard, but I can understand the concerns of people who don't
16:53:34 <jgriffith> Ok.. I think we should talk about the Ceph and FC issues in another topic
16:53:47 <jgriffith> #topic R/O volumes
16:53:50 <hemna> k
16:54:05 <jgriffith> zhiyan: Just to be clear I intend to see this land in Cinder at some point
16:54:17 <jgriffith> zhiyan: I don't necessarily see all of the concerns
16:54:26 <jgriffith> zhiyan: and all the hypervisors seem to have some level of support
16:54:40 <jgriffith> zhiyan: if I'm completely wrong and it blows up, well then live and learn
16:54:55 <avishay> does Xen support it?  I thought at the summit they said no?
16:55:31 <DuncanT> Xen does, according to their docs
16:55:45 <kmartin> avishay: jgriffith found a way that Xen will support it as well as VMware
16:56:42 <kmartin> working with the VMware team in HP to switch to a different api to support this than what they had planned on using
16:57:15 <johnthetubaguy> For xenapi do ping me if you have issues, there is work on Ceph support and we already do some FC/HBA support
16:58:13 <johnthetubaguy> oh, misread, there are read-only volumes too
17:01:36 <kmartin> jgriffith: u there?
17:01:39 <hartsocks> @jgriffith hey guys we have the room in this time slot.
17:02:35 <hartsocks> Is the previous meeting over?
17:03:12 <kmartin> we lost jgriffith, so yes it's over
17:03:20 <hartsocks> #endmeeting
17:03:30 <hartsocks> (not sure that will work)
17:03:35 <kmartin> I believe he needs to end the meeting
17:03:37 <rushiagr> hartsocks: won't work without jgriffith :(
17:03:43 <hartsocks> ugh.
17:04:00 <hartsocks> Wonder what happens when I do this...
17:04:03 <openstack> hartsocks: Error: Can't start another meeting, one is in progress.
17:04:12 <hemna> #endmeeting
17:04:13 <hartsocks> nice.
17:04:33 <rushiagr> I think you can move folks to #openstack-meeting-alt
17:04:43 <rushiagr> the alternate meeting channel
17:05:09 <hartsocks> Let me check to see if it's clear.
17:05:52 <hartsocks> Okay.
17:05:52 <hartsocks> If folks can /join me over on openstack-meeting-alt
17:05:59 * rushiagr hopes this net split disconnects jgriffith :P
17:06:29 <jgriffith> #endmeeting