18:01:34 <jdg> #startmeeting
18:01:35 <openstack> Meeting started Thu Feb 23 18:01:34 2012 UTC.  The chair is jdg. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
18:01:42 <ogelbukh> hi
18:01:44 <clayg> I don't see vlad, or renuka, or vish :(
18:02:05 <jdg> We should give them a minute or two... I know Renuka was planning to attend
18:02:24 <jdg> Perhaps we can start talking about current work until they get here?
18:02:37 <jdg> I put an agenda up:
18:02:55 <jdg> #link http://wiki.openstack.org/NovaVolumeMeetings
18:03:15 <clayg> has anyone started using/testing the new volumes api endpoint?
18:03:27 <jdg> #topic new/current work for Essex
18:03:43 <bcwaldon> vishy and I were going to add support for using it through novaclient
18:03:54 <bcwaldon> but not yet :)
18:04:19 <DuncanT> We only started looking at it today
18:04:41 <bcwaldon> one major pain point I've seen is when the volumes compute extensions get out of sync with the volumes endpoint
18:04:56 <bcwaldon> I would love to see that code merged somehow
18:05:15 <YorikSar> bcwaldon: I still could not find time to look into it
18:05:30 <bcwaldon> ok, no worries
18:06:13 <jdg> Ok, anybody have anything specific they want to talk about regarding Essex, or do we need to spend some time discussing this issue?
18:06:15 <YorikSar> From what I saw, it shouldn't take a lot of time
18:06:38 <YorikSar> I think this issue should be solved before Essex
18:06:49 <jdg> Sounds good
18:07:06 <clayg> bcwaldon: regarding api/openstack/compute/contrib/volumes getting out of sync with "volumes endpoint" - don't they currently both use the same db?
18:07:21 <bcwaldon> the code gets out of sync, not the data
18:07:26 <clayg> oh
18:07:37 <bcwaldon> they're the same code, but copied to two separate places
18:07:40 <YorikSar> The extension is not covered with tests at all now
18:07:42 <bcwaldon> so it's easy to fix a bug in one place and miss the other
18:07:43 <clayg> right, yes
18:08:15 <clayg> right - i saw the volumes_type fix that you did fixed it in both - you're attentive like that - most of us aren't
18:08:46 <clayg> YorikSar: which extension? rly?
18:08:59 <bcwaldon> not necessarily attentive, I've just already felt the pain of not fixing it in both places ;)
18:09:42 <YorikSar> clayg: The os_volume extension... I bumped into a 500 error using novaclient, spent a lot of time wondering how it could pass all the tests.
18:09:46 <clayg> jdg: ok, you're running the show here - what's next! ;)
18:10:01 <jdg> :) alright....
18:10:03 <YorikSar> clayg: And then I realized that tests cover only endpoint
18:10:24 <jdg> Sounds like there's some more discussion needed on this, maybe at the end of the meeting or after
18:10:55 <jdg> I wanted to see if anybody has anything specific about work they've done for Essex that should be shared with a wider audience
18:11:09 <jdg> IE Blueprints
18:11:28 <clayg> Vlad had that bit on the multi type driver, don't think it ever got written
18:12:03 <clayg> but like when he talked about it - I never understood how he was planning on implementing it... so I think I was missing some understanding of his use case.
18:12:03 <YorikSar> Blueprints that were approved for FFE are all merged or postponed now
18:12:36 <jdg> Ok... I was more or less trying to sync folks up but maybe not applicable here.
18:12:56 <jdg> Moving on...
18:13:10 <jdg> #topic outstanding work that folks need help with
18:13:32 <jdg> Any specific bugs or issues that folks are working on and could use some help with?
18:13:55 <clayg> bug #897075
18:13:58 <clayg> volume int is id not uuid
18:15:40 <jdg> Ok, anybody looked at this one?
18:15:48 <YorikSar> clayg: Is it a real issue?
18:16:44 <bcwaldon> I have looked into it, and gave up when I realized how much work it was going to be
18:16:48 <YorikSar> I mean, isn't it just aesthetic?
18:17:20 <renuka> no, i think it matters even wrt security
18:17:41 <clayg> YorikSar: there's some risk with the ids being auto-incrementing - they can be a collision problem in staging...
18:17:54 <renuka> you should not be able to predict volume ids for a particular user
18:17:59 <bcwaldon> yes, so I think it does need to happen
18:18:00 <YorikSar> Got it.
18:18:33 <YorikSar> And maybe it should also happen before Essex
18:18:52 <jdg> bcwaldon: How far did you get into finding where the changes need to be made?
18:19:07 <jdg> Or anybody else that's familiar with the issue
18:19:13 <YorikSar> Since people have heard the word LTS in the same sentence as Essex
18:19:15 <bcwaldon> oh, it's basically a sweep of the entire volume-related codebase
18:19:21 <bcwaldon> I did some of the work for instance uuids
18:19:27 <jdg> Oh... that's all?  :)
18:19:28 <bcwaldon> so I know the depth of the changes
18:19:34 <bcwaldon> yes...that's the problem!
18:20:18 <jdg> Well, I'll volunteer to take a piece of it and see how it goes.  Anyone else?
18:21:00 <bcwaldon> the thing is you can't really break it up into pieces, since as soon as you make the change *all* the tests break
18:21:18 <renuka> why is it a sweep of the code base? i would assume that once a volume id is assigned, it just gets used without interpretation
18:21:28 <YorikSar> There can also be a problem with drivers there...
18:21:32 <jdg> bcwaldon:  Yeah, my hope was somebody else would step up and we could divide the effort
18:21:55 <bcwaldon> jdg: I'm saying I don't think that would work very well ;)
18:22:04 <jdg> bcwaldon:  Ah, ok.
18:22:32 <jdg> well my offer still stands to work on this if somebody wants to bring me up to speed on the issue later?
18:22:41 <renuka> jdg can help out with the creation bit.. after that point, everyone ensures their drivers keep working?
18:23:29 <jdg> renuka: I'm willing to go that route if others agree
18:23:44 <YorikSar> renuka: Well, the thing with drivers is that there can be some that are not supported too actively
18:23:52 <jdg> Remember I'm still relatively new so I may need a little guidance to start.
18:23:55 <clayg> jdg: yeah I mean if you get a branch up I'll definitely check it out and deploy/test/review
18:24:47 <jdg> Ok, I have time to devote so I can work on it if folks are in agreement.  bcwaldon, sound reasonable?  Or are we dreaming here?
18:24:50 <clayg> YorikSar: if Thierry can rip out Hyper-V we can rip out sheepdog?
18:25:02 <bcwaldon> jdg: no, I was just too lazy to do it
18:25:07 <renuka> By their drivers, i meant either those you wrote or are interested in... if you are not familiar, all you need to do is file a bug and bring it to people's notice.. :)
18:25:09 <bcwaldon> jdg: have fun with it
18:25:27 <jdg> LOL.. that's always ominous
18:25:42 <YorikSar> clayg: It can be too late for this
18:26:03 <renuka> yea, we ought to check with vish if they will have this for essex
18:26:24 <YorikSar> Actually we can be more optimistic and hope that drivers can accept a long ugly string as an id
18:26:54 <renuka> I don't expect drivers to manipulate the ids so I am optimistic, yes
18:27:00 <YorikSar> For example, the Nexenta one can handle this (I hope so)
18:27:41 <DuncanT> We make use of the ID, but it is easy enough to work with the change as long as the length is well defined
18:29:10 <clayg> I think we mostly see the 'vol-0000001' looking "id"
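A minimal sketch of what bug 897075 implies at the model layer, assuming a SQLAlchemy model similar to nova's; the class and column names here are illustrative, not the actual patch:

    import uuid

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    def generate_uuid():
        # Random UUIDs are not guessable, unlike auto-incrementing
        # integers, and will not collide when environments are merged.
        return str(uuid.uuid4())


    class Volume(Base):
        __tablename__ = 'volumes'

        # Today: an auto-incrementing integer primary key, which is both
        # predictable per-user and collision-prone across deployments.
        id = Column(Integer, primary_key=True, autoincrement=True)

        # The direction discussed here: a 36-character UUID string,
        # mirroring what was already done for instance uuids.
        uuid = Column(String(36), default=generate_uuid, unique=True)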
18:30:54 <clayg> jdg: DuncanT: YorikSar: renuka: are SolidFire, HP, Nexenta, or Citrix using "volume_types" ???
18:31:08 <jdg> Negative for SolidFire
18:31:20 <DuncanT> Not yet in production but we've plans around it
18:31:24 <renuka> DuncanT: Since compute has already converted to use uuids, there is already code in there doing what we need. I don't think we need to worry about how long the resulting string is
18:31:25 <DuncanT> (HP)
18:31:29 <YorikSar> clayg: I've been looking for it, only Zadara used them
18:32:05 <renuka> clayg: Citrix is not using it yet... I haven't gotten around to changing the SM driver to start
18:32:17 <DuncanT> renuka: We use the volume id as a key into our own databases, so we need a spec for it. As long as there is a spec, we don't care too much what it is
18:32:23 <renuka> but we do need it
18:32:26 <ogelbukh> I thought this was tied to the volume scheduler
18:32:33 <clayg> yeah I was hoping to get a "state of the union" update on Xen Storage Manager support :)
18:33:11 <renuka> clayg: heh I had switched to devstack work for the last couple of months :)
18:33:16 <clayg> ogelbukh: the type is just an attribute on the volume model - in theory it could be used by the scheduler - or in our case (maybe hp too) passed along unmodified
18:33:34 <DuncanT> clayg: Yeah, we want it unmodified too
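A tiny sketch of the pass-through treatment clayg and DuncanT describe; the driver and backend names are hypothetical, and the point is only that the driver forwards the type without interpreting it:

    class PassthroughDriver(object):
        """Hypothetical backend driver that treats volume_type as opaque."""

        def create_volume(self, volume):
            # volume_type is just an attribute on the volume model; this
            # driver forwards it unmodified and lets the backend decide
            # what (if anything) to do with it.
            vtype = volume.get('volume_type') or {}
            return self._backend_create(
                name=volume['name'],
                size_gb=volume['size'],
                type_hint=vtype.get('name'))  # forwarded, not interpreted

        def _backend_create(self, **opts):
            # Stand-in for a real backend call.
            return opts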
18:33:43 <renuka> so give me about a week.. there are some bugs that are fixed on our internal branch that need to be rebased
18:33:48 <ogelbukh> I see
18:33:54 <clayg> renuka: so you're just here being nosy - you don't really _care_ about volumes any more :D
18:34:10 <clayg> oh... wow nm, looking forward to it!
18:34:11 <renuka> clayg: haha, no!
18:34:26 <renuka> clayg: I am supposed to be working on everything :D
18:34:34 <clayg> lol
18:34:39 <ogelbukh> )
18:35:36 <clayg> jdg: what do you mean by BSaaS?
18:35:38 <jdg> Ok, so WRT bug #897075 it sounds like the initial thought is to move forward with putting in a "real" UUID as the ID, correct?
18:35:39 <uvirtbot`> Launchpad bug 897075 in nova/essex "volume int is id not uuid" [Medium,Triaged] https://launchpad.net/bugs/897075
18:35:39 <YorikSar> I looked through id usage in drivers. Looks like it will go well if iSCSI supports long IDs
18:36:01 <jdg> clayg:  Block Storage as a Service (sorry, should've added that)
18:36:21 <ogelbukh> this was Lunr once
18:36:29 <renuka> #agreed fix bug #897075 for essex
18:36:34 <jdg> I'll explain more next
18:36:38 <clayg> ... sorry, I sort of assumed that... I meant to say: tell me what you think "Block Storage as a Service" means?
18:36:53 <jdg> I'll get there next...
18:37:17 <jdg> Ok, so I'll create a branch and try to get started on this.  I may need a quick run down from folks more familiar.
18:37:17 * clayg bubbles with excitement
18:37:31 <renuka> lol
18:37:36 <jdg> If bcwaldon or somebody else wants to give me a quick overview later that would be great.
18:37:48 <jdg> Ok clayg  :)
18:37:49 <bcwaldon> jdg: I'm free later
18:37:57 <jdg> Great thanks!
18:38:03 <jdg> #topic BSaaS
18:38:24 <jdg> So I think this has come up before with mixed feelings from folks, but...
18:38:39 <YorikSar> We (with ogelbukh) did some drawing and writing on this one
18:38:48 <jdg> There's been some more thought about spinning Block Storage out into its own project separate from Nova
18:38:56 <YorikSar> http://goo.gl/xM0aD
18:39:05 <ogelbukh> probably reinvented the wheel here
18:39:27 <YorikSar> We believe it can be done the Quantum way
18:40:01 <jdg> YorikSar: thanks for the link
18:40:18 <YorikSar> So that it can become easier to add more backends and protocols etc
18:40:21 <DuncanT> So what would be left in nova-volume?
18:40:24 <ogelbukh> jdg: I think it's the fate of Lunr that has caused this confusion
18:40:28 <YorikSar> Nothing :)
18:40:37 <jdg> ogelbukh: exactly
18:40:45 <clayg> lol
18:40:52 <ogelbukh> DuncanT: we actually thought of making VolumeManager
18:41:07 <jdg> So one thing I've run into is there seems to be pockets of work/ideas around this
18:41:10 <ogelbukh> that can replace nova-volume
18:41:14 <renuka> i worry about complicating things... we don't have enough contributors, so I tend to feel the easier the code, the better...
18:41:22 <ogelbukh> like Quantum replaces nova-network
18:41:28 <YorikSar> I think, we can propose a way to switch between nova-volume and (let's say) Lunr
18:41:43 <YorikSar> And then cut nova-volume out entirely
18:41:46 <jdg> renuka:  ultimately wouldn't it make the code "easier" as you suggest to separate it?
18:41:56 * clayg has no idea how quantum "works"
18:42:04 <YorikSar> It will be easier
18:42:11 <YorikSar> I believe in this one :)
18:42:29 <jdg> The growing pains are tough, but the end result would be better I believe
18:42:40 <renuka> jdg: if someone can dedicate enough time to it to make it work well... people already familiar with nova volume, there is a knowledge base.
18:42:42 <YorikSar> We can do a lot of abstractions and reuse if we rearchitect things a bit
18:43:08 <renuka> ... for the record, I know this is shortsighted, but given the size of community contributing to volumes at the moment, it seems like a big task
18:43:31 <jdg> renuka: I agree, but I think there's a growing interest here
18:43:38 <renuka> YorikSar: what is the most painful thing about nova volume right now
18:43:45 <ogelbukh> we have at least 6 months in incubation
18:43:52 <ogelbukh> and probably more
18:43:52 <renuka> can it be fixed by Vlad's suggestion for volume scheduler
18:43:52 <jdg> I also believe that if it was a first class citizen of its own it would gain even more attention/interest
18:44:29 <YorikSar> renuka: For example, there is no way to add another protocol, there is only iSCSI
18:44:40 <ogelbukh> btw, I heard that Lunr is still in development
18:44:58 <jdg> ogelbukh: I think Lunr has morphed into something different
18:44:59 <renuka> you can add drivers for any type of backend
18:44:59 <DuncanT> Writing a driver that doesn't use iSCSI is easy enough?
18:45:09 <clayg> YorikSar: rbd?
18:45:15 <bvanzant> iSCSI is a limitation of the hypervisor, right?
18:45:42 <ogelbukh> couldn't find anything more specific on this morph, alas
18:45:48 <clayg> if we're still talking about the host connecting to storage and exposing it to the guest - then any BSaaS would be limited by what is supported in the virtdriver
18:45:56 <renuka> SM on xenserver can connect to a large number of backends, including netapp, nfs, iscsi, etc.
18:46:07 <YorikSar> clayg: I mean, you can add FibreChannel as an option to the standard driver, but it will be painful to use it in other drivers (e.g. Nexenta)
18:46:25 <jdg> ogelbukh: Sorry... I believe it's turned more into actually creating an iSCSI target from commodity hardware, i.e. Swift but for block
18:46:40 <ogelbukh> oh, I see
18:47:01 <ogelbukh> sounds like Ceph
18:47:03 <renuka> I think this is a long discussion which ought to happen on the mailing list with more visibility
18:47:32 <clayg> ogelbukh: not really like ceph, more like iscsidriver now - but with backups to cold storage (swift)
18:47:33 <jdg> renuka: Yes, but I wanted to try to get the ball rolling (even if folks throw it at me)
18:47:36 <YorikSar> Am I wrong thinking that if we can present a block device in the host system, we can attach it to a VM in any hypervisor?
18:47:38 <renuka> we should deal with current essex issues like adding tests, finding/fixing bugs
18:47:45 <ogelbukh> clayg: the idea is to get a client-agent that can connect devices to compute hosts
18:47:54 <ogelbukh> via any storage protocol
18:48:00 <clayg> whoa
18:48:09 * clayg 's mind is blown
18:48:14 <ogelbukh> and present it to the VM like local storage
18:48:19 <ogelbukh> )
18:48:29 <jdg> renuka: Agreed, but the summit is coming up and we should have some sort of plan/goal don't you think?
18:49:10 <jdg> clayg: sorry, you're right.
18:49:20 <YorikSar> jdg: I don't really see a difference between "BS for commodity hardware" and nova-volume... It gives block storage spread over some Linux hosts too...
18:49:35 <renuka> again, I think it's a topic that needs more visibility. And this has been tried before. Folks tend to say they are interested, but it doesn't really translate to contributions.. so perhaps it isn't pinching them enough yet
18:49:38 <jdg> YorikSar: probably a topic for another discussion
18:49:56 <vishy> I'm not sure the point of rewriting a new BSaaS as opposed to breaking out nova-volume
18:50:18 <DuncanT> If what is being proposed is to rename nova-volume, make it a first class citizen and then grow it organically, that seems quite reasonable
18:50:36 <jdg> vishy: So maybe that's what it ends up being, I'm not proposing a specific "plan" or "design"
18:50:43 <YorikSar> vishy: As I said, wiring protocols are a good example where just separating nova-volume is not enough
18:50:45 <jdg> Just the concept of separating it
18:50:46 <clayg> idk it sounds like the idea of a client-agent is dramatically different from what nova-volume is now
18:51:14 <vishy> YorikSar: wiring protocols?
18:51:30 <YorikSar> vishy: iSCSI, FibreChannel, etc
18:51:33 <ogelbukh> clayg: it's about removing volume code from virt driver actually
18:51:33 <jdg> Once you have an API into the "volume" code however doesn't it make life easier to do things like add protocols etc?
18:51:40 <ogelbukh> making it more lightweight
18:51:59 <ogelbukh> I believe I've seen couple of lines on it just today
18:52:06 <clayg> with an agent that has to run in a guest environment that I don't really need/want to log into?  It's one way to do it...
18:52:08 <vishy> YorikSar: we already support different protocols
18:52:19 <vishy> YorikSar: I don't see why we would need to rearchitect for that
18:52:31 <DuncanT> YorikSar: Anything that presents as a block device is supported now
18:52:59 <clayg> DuncanT: only as far as the virt layer supports that connection type
18:53:32 <DuncanT> Are there any that don't support raw devices? I wasn't aware of any
18:53:34 <YorikSar> I don't see a way to mix protocols in one driver...
18:54:32 <vishy> YorikSar: you can pass back whatever you want in initialize_connection
18:54:50 <clayg> DuncanT: "raw devices" oh erm... I'm not sure... like currently libvirt uses iscsiadm to connect to remote target, and xen uses xapi to make the calls to setup the iscsi SR...
18:54:57 <vishy> and as long as there is corresponding logic on the compute side it will work
18:55:25 <vishy> so one driver could pass back iscsi/rbd/sheepdog/...
18:55:28 <YorikSar> vishy: But what if we want volume to be accessible over any protocol? It can be helpful in any environment
18:55:32 <clayg> DuncanT: I hadn't really thought of the storage already existing as a "raw device" on the hypervisor?  What's the connection type for that?
18:55:52 <YorikSar> vishy: s/any/mixed/
18:55:58 <DuncanT> clayg: We set up complex dm devices and pass them back via discover (diablo not essex, some of the driver names have changed a bit)
18:56:13 <vishy> YorikSar: then we just have to extend initialize_connection to allow you to specify a type of connection
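What vishy describes could look roughly like this in a driver: one initialize_connection that hands back different connection types, following nova's convention of returning a 'driver_volume_type' plus protocol-specific 'data'. The selection logic and the addresses below are made up for illustration:

    class MultiProtocolDriver(object):
        def initialize_connection(self, volume, connector):
            # 'connector' describes what the compute host can do, e.g.
            # whether it has an iSCSI initiator.
            if connector.get('initiator'):
                return {
                    'driver_volume_type': 'iscsi',
                    'data': {
                        'target_iqn': 'iqn.2012-02.org.example:%s'
                                      % volume['id'],
                        'target_portal': '192.0.2.10:3260',
                        'target_lun': 1,
                    },
                }
            # Otherwise hand back another transport this backend exports;
            # the compute side needs matching logic for each type.
            return {
                'driver_volume_type': 'rbd',
                'data': {'name': 'volumes/%s' % volume['id']},
            }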
18:56:35 <clayg> "we set up complex dm devices" during attach, or is this preconfigured on the hypervisor?
18:56:44 <DuncanT> clayg: on attach
18:57:01 <clayg> DuncanT: so... doesn't nova-compute need to know how to do all that?
18:57:06 <YorikSar> vishy: Then we need to somehow control information about compute host (it can do one protocol, but not another)
18:57:14 <clayg> er... does nova-volume run on every compute node!?
18:57:41 <vishy> correct
18:57:45 <YorikSar> vishy: And I think that this should be controlled by separate agent, not nova-compute
18:58:04 <vishy> it should explicitly try to make one type of connection
18:58:15 <DuncanT> clayg: We only have one instance of nova-volume, it passes all the work via rpc to our backend
18:58:28 <vishy> running another agent on the compute host seems excessive
18:58:44 <clayg> DuncanT: sigh... so but... then... who exactly is setting up the "complex dm devices" on the compute node during attach :D
18:59:02 <clayg> DuncanT: sorry, I just find this very interesting...
18:59:14 <YorikSar> Then we end up with volume logic spread over nova-compute and nova-volume
18:59:27 <vishy> YorikSar: you can't get around that
18:59:32 <clayg> vishy: ++
18:59:33 <vishy> YorikSar: we tried initially
18:59:51 <vishy> YorikSar: you have too many potential backends on the compute side
18:59:52 <DuncanT> nova-compute calls the discover method in our driver. It sets up the device(s)
19:00:01 <vishy> and each backend needs its own logic to connect to volumes
19:00:20 <jdg> Ok, we're unfortunately running out of time
19:00:23 <clayg> DuncanT: yes got it!  I remember that in diablo nova-compute used to have an instance of the volume-driver
19:00:26 <clayg> brilliant!
19:00:54 <YorikSar> vishy: Well, then I have another card to draw. What about usability as a stand-alone service?
19:01:13 <clayg> YorikSar: this is acctually an interesting use case
19:01:15 <vishy> YorikSar: the point of separating nova-volume is to turn it into BSaaS
19:01:19 <DuncanT> clayg: I admit I haven't looked very carefully at essex recently
19:01:35 <vishy> own repo, own rest endpoint, own api, own extensions
19:01:47 <DuncanT> clayg: I'll get somebody here to look and check our approach still works :-)
19:02:04 <clayg> whoa... DuncanT has "people" for that sort of thing.
19:02:05 <YorikSar> vishy: if we keep logic in compute (not in an agent), we cannot reuse it for other services
19:02:25 <vishy> YorikSar: It could even be rearchitected in the way you suggest, I just don't think you need to start from scratch
19:02:48 <jdg> +1 Start by separating
19:02:51 <vishy> YorikSar: if there is some common code that could live in BS service and be imported by nova-compute I'm all for it
19:02:57 <ogelbukh> vinayp: we definitely were not thinking of it in this way
19:02:59 <vishy> I just don't think there is much reuse there
19:03:09 <vishy> look for example at the iscsi code in libvirt vs xen
19:03:13 <vishy> 0 shared code
19:03:38 <clayg> vishy: and not a lot of opportunity to share either, the hvs approach it differently
19:03:43 <vishy> you could in nova-compute have: from bsaas.hypervisor.drivers import libvirt
19:03:44 <ogelbukh> sorry, it was for you vishy
19:04:01 <vishy> libvirt.iscsi.connect
19:04:06 <YorikSar> Yes, but doesn't the iscsi driver ensure the connection anyway?
19:04:10 <vishy> libvirt.fibre.connect
19:04:18 <vishy> etc.
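Expanding vishy's fragment into a sketch, purely hypothetically - the bsaas package and its per-hypervisor helpers do not exist, they are the names from his example:

    def connect_volume(connection_info):
        # Hypothetical shared library of connection code, imported by
        # whichever hypervisor driver chooses to use it.
        from bsaas.hypervisor.drivers import libvirt

        vtype = connection_info['driver_volume_type']
        if vtype == 'iscsi':
            return libvirt.iscsi.connect(connection_info['data'])
        elif vtype == 'fibre_channel':
            return libvirt.fibre.connect(connection_info['data'])
        raise NotImplementedError('no connector for %s' % vtype)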
19:04:42 <clayg> vishy: maybe better to leave that to the guys that know the hypervisors (i.e. nova.virt)
19:04:49 <vishy> but then the authors of bsaas have to understand all potential hypervisors
19:05:05 <clayg> ahahahahhghghghgh no!
19:05:14 <vishy> clayg: that is my thinking, define a common interface for what will be requested and returned
19:05:26 <vishy> a la initialize_connection
19:05:38 <clayg> yup it's pretty good IMHO
19:05:47 <vishy> then the hypervisor maintainers in nova can figure out how to connect to the different potential exports
19:05:51 <clayg> YorikSar: what do you think?  Long term, is this a big limitation?
19:06:13 <clayg> seems like the most pragmatic approach to me
19:06:47 <YorikSar> vishy: I think we should be able to provide an external interface like "attach that volume to the current host", so that someone who does not know anything about iSCSI could get a volume.
19:06:58 <clayg> I'm scared of the agent-based attach, even just running a shared storage pool that has a direct connection to running guests is scary (much easier to just have connectivity to the hv)
19:07:03 <vishy> YorikSar: that is fine for libvirt
19:07:09 <vishy> YorikSar: but xen doesn't work that way
19:07:20 <vishy> YorikSar: everything has to be implemented as a xenapi plugin
19:07:40 <vishy> YorikSar: because all extra code runs in a vm (nova-compute nova-network, etc.)
19:08:10 <vishy> YorikSar: I initially tried to do it exactly that way, but you can't expect that every hypervisor is running the same code on the host
19:08:21 <YorikSar> vishy: Hm... I was talking about the world out of Nova
19:08:25 <vishy> so you really have to let the hypervisor control the BS connection
19:08:27 <YorikSar> vishy: But I get the point
19:08:58 <YorikSar> vishy: We can later add an option "attach to host" along with that agent
19:09:01 <vishy> YorikSar: providing general client code to connect to volumes seems excellent
19:09:17 <vishy> YorikSar: we could even put it in python-xxxxclient
19:09:33 <vishy> and the hypervisors could use it where it makes sense
19:10:05 <vishy> you could even have guests connecting on their own
19:10:05 <clayg> huh, that's interesting...
19:10:54 <jdg> It sounds like we have a consensus to move forward with this, yes?
19:11:03 <YorikSar> vishy: I still insist on an agent for this so that it could be used as a stand-alone service with persistent attachments etc. But make it optional, to let the user delegate the attachment burden to their own code (like a xenapi plugin)
19:11:03 <vishy> +1 to having generic connection code, I'm just not convinced that it will work for all hypervisors, so I think you need to let the hypervisors optionally use it.
19:11:04 <jdg> Renuka dropped off unfortunately
19:11:25 <vishy> YorikSar: sure seems very useful
19:11:29 <renuka_> no i am here
19:11:50 <vishy> YorikSar: I would say that is priority 2 vs getting all of the other stuff working
19:12:03 <DuncanT> Sounds like we all basically want the same thing, just different priorities on the layers
19:12:09 <vishy> solidifying api and extensions, getting the code separated, etc.
19:12:22 <jdg> So I have a proposal...
19:12:26 <YorikSar> Sounds like we found common ground to start with :)
19:12:43 <jdg> Can we agree as DuncanT pointed out to start laying a plan
19:13:00 <jdg> We can phase things over time, prioritize separation for Folsom
19:13:15 <vishy> jdg: yes, I was going to propose a discussion at the summit
19:13:31 <jdg> vishy: great
19:13:37 <YorikSar> Do we want to force separation in Folsom?
19:13:39 <jdg> I would also like to get a discussion going via email
19:13:55 <vishy> jdg: I think we can separate into a new repo in the first couple of weeks
19:13:56 <jdg> As Renuka suggested to get more buy in from everybody
19:14:00 <YorikSar> I mean, shouldn't we keep nova-volume deprecated for one release?
19:14:13 <vishy> YorikSar: we can leave existing nova-volume in
19:14:25 <vishy> YorikSar: but I really think we can complete the separation pretty quickly
19:14:38 <vishy> we already have the api separated, need a few extensions, etc.
19:15:04 <vishy> waldon and I are planning on improving the documentation of the api a bit so that people can start using it.
19:15:12 <DuncanT> Can we make nova-volume a shim that just calls our new stuff?
19:15:33 <vishy> DuncanT: My plan is to replace all of the volume_api calls in nova
19:16:00 <vishy> with a little wrapper that imports python-xxxxclient
19:16:05 <vishy> and makes the calls through the client
19:16:17 <DuncanT> vishy: Clearly you've thought more about this than me :-)
19:16:30 <vishy> I've been planning this out for about 6 months
19:16:34 <vishy> :)
19:16:48 <YorikSar> I think after the separation starts, users will just run bsaas-api alongside nova-api and a bsaas-storage-agent instead of nova-volume - and that's it
19:16:50 <clayg> vishy: attach still goes to compute endpoint yes?
19:16:59 <vishy> clayg: yes
19:17:04 <vishy> so now it goes
19:17:20 <vishy> attach -> volume_api -> initialize_connection
19:17:22 <vishy> it will go
19:17:47 <vishy> attach --> volume_shim -> python-xxxclient -> volume_api -> initialize_connection
19:18:14 <vishy> once that is done volume_api can be running from an external repo no problem
19:18:41 <vishy> so basically it is making sure all of the calls that are just going over the queue are going over the rest api instead
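A sketch of the shim vishy outlines, with the client name left as the same placeholder he uses; nova keeps calling the familiar volume API, but the calls go over REST instead of the queue:

    class VolumeAPIShim(object):
        """Hypothetical wrapper keeping nova's volume_api interface."""

        def __init__(self, client):
            # 'client' would be an instance of the future python-xxxclient.
            self.client = client

        def get(self, context, volume_id):
            return self.client.volumes.get(volume_id)

        def initialize_connection(self, context, volume_id, connector):
            # The same call nova-compute makes today, now routed through
            # the external service's REST endpoint.
            return self.client.volumes.initialize_connection(volume_id,
                                                             connector)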
19:19:04 <clayg> vishy: awesome
19:19:26 <vishy> then xxx-core can rearchitect components if they feel it is necessary
19:19:45 <clayg> so the canonical representation of "guest xyz is attached to volume xyz" is in the volumes service or the nova database?
19:19:46 <vishy> as long as they maintain consistent api and extensions it is totally decoupled
19:20:14 <clayg> like on a migration, when the guest is coming up on the new host, where does it look for the list of volumes to make initialize connection calls for?
19:20:30 <vishy> clayg: that is a good question.  I think based on the current implementation you can reserve a volume
19:20:42 <vishy> clayg: it is on both sides
19:20:42 <jdg> api call in to the volume service?
19:20:57 <vishy> clayg: compute has a list of block_device_mapping
19:21:23 <vishy> clayg: the volumes on the other end should know that they have an active connection from <something>
19:21:41 <vishy> clayg: and I think the reservation idea allows us to specify a uuid for what is connecting to it
19:21:43 <YorikSar> I think, there will be some client_migration call in volume code too...
19:21:49 <clayg> vishy: yeah right, but generally it's to the host, so you don't know which guest except for metadata
19:22:13 <vishy> clayg: it probably has to be metadata in the reserve
19:22:22 <clayg> yes, makes sense, thanks
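The reservation idea, sketched hypothetically - none of these call names are settled, this is just the shape of tagging a reserve with the uuid of the connecting guest so a migrated instance's new host can find its volumes:

    def reserve_for_attach(volume_api, context, volume_id, instance_uuid):
        # Record which *guest* is connecting, not just which host, so the
        # mapping survives a migration to another compute node.
        return volume_api.reserve_volume(
            context, volume_id,
            metadata={'attaching_instance': instance_uuid})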
19:22:43 <vishy> clayg: these are things that need to be hammered out, so hopefully we have a core volume team that owns all of this stuff
19:22:58 <vishy> in prep for the summit we will document what exists
19:23:09 <vishy> I will lay out my plan for getting volume into its own repo
19:23:24 <vishy> we will come up with a code name (< most important part)
19:23:31 <jdg> vishy: Do you want to do that before the summit or during the summit?
19:23:41 <vishy> jdg: which?
19:23:46 <clayg> what?  cinder already has momentum!
19:23:49 <jdg> vishy: Lay out the plan
19:23:49 <YorikSar> Shouldn't we consider the name Lunr vacant now?
19:23:54 <jdg> Your plan
19:24:38 <vishy> Lunr already is in use
19:25:00 <vishy> clayg: i love cinder, apparently there is another opensource project by that name so there is concern
19:25:08 <vishy> jdg: I can lay out the plan in advance of the summit
19:25:18 <jdg> vishy: Got it thanks
19:25:19 <clayg> awww man!
19:25:26 <YorikSar> Hm... I still don't get it, what will Lunr do that nova-volume (or xxx) does not
19:25:31 <vishy> jdg: I don't think much will get done on it in advance
19:25:36 <clayg> oh ummm... can someone core review:
19:25:37 <vishy> Lunr is just a backend
19:25:41 <clayg> https://review.openstack.org/#change,4293
19:25:52 <clayg> ^ for python-novaclient
19:25:53 <uvirtbot`> clayg: Error: "for" is not a valid command.
19:25:54 <vishy> volumes on commodity hardware
19:26:21 <YorikSar> But nova-volume installed on that hw gives us volumes on it too
19:26:42 <clayg> YorikSar: lunr is very much like what you currently get from nova-volume
19:26:48 <js42> vishy: are there any docs or information about lunr or is it proprietary?
19:27:13 <Mike656> hi
19:27:13 <vishy> YorikSar: you could say that it is just a better version of the existing iscsi backend for nova-volume
19:27:33 <clayg> js42: currently being developed internally at rackspace
19:27:35 <vishy> YorikSar: if lunr gets opensourced we could just tear out the existing one
19:27:39 <Mike656> Can nova work without keystone?
19:27:53 <clayg> Mike656: noauth works a treat!
19:27:55 <vishy> or, perhaps the lunr team will pull code in gradually
19:28:05 <YorikSar> So we'll get a good version and a better version?..
19:28:24 <Mike656> clayg: how do they interact?
19:28:29 <jdg> Unfortunately I have to drop off and end the meeting.
19:28:34 <js42> clayg: Rackspace proprietary? or are there public design docs?
19:28:39 <clayg> jdg: thanks for putting this together!
19:28:50 <clayg> js42: there are not
19:28:58 <ogelbukh> thank you gentlemen
19:29:02 <jdg> clayg: Thank you .. and everyone else for that matter.
19:29:18 <jdg> This was good.  So I'll plan on meeting next week as well.
19:29:25 <YorikSar> We should do it again
19:29:43 <YorikSar> jdg: Yes, very good idea
19:29:45 <clayg> jdg: well _yeah_ we can't wait to get a status update on the id -> uuid branch
19:29:45 <Mike656> How should I arrange nova and keystone to work together?
19:29:46 <jdg> It's scheduled as weekly so jump in
19:30:07 <DuncanT> jdg: I'd like to get boot from volume firmly on the agenda for next week if possible?
19:30:14 <jdg> clayg: :)  I'll keep you posted, maybe even before next week
19:30:17 <jdg> #endmeeting