16:00:42 <jgriffith> #startmeeting
16:00:43 <openstack> Meeting started Wed Aug 15 16:00:42 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:05 <thingee> o/
16:01:08 <jgriffith> bswartz: thingee rnirmal ...around?
16:01:21 <rnirmal> I'm here
16:01:34 <jgriffith> anybody else?
16:01:42 <vincent_hou> me
16:01:51 <jgriffith> Hi vincent_hou
16:02:01 <vincent_hou> hi
16:02:17 <thingee> don't physically see durgin =/
16:02:36 <jgriffith> We'll get started anyway, should be a short meeting
16:02:50 <jgriffith> DuncanT: ?
16:03:07 <jgriffith> #topic F3 status
16:03:13 <jgriffith> #link https://launchpad.net/cinder/+milestone/folsom-3
16:03:42 <jgriffith> So most everything here is under review or done
16:03:58 <bswartz> I'm here
16:04:03 <jgriffith> The two exceptions: https://blueprints.launchpad.net/cinder/+spec/cinder-notifications
16:04:24 <jgriffith> and https://bugs.launchpad.net/bugs/1023311
16:04:25 <uvirtbot> Launchpad bug 1023311 in cinder "Quotas management is broken" [High,Triaged]
16:04:47 <jgriffith> I think cinder-notifications is going to slip, unless I catch up with Craig and we get it in today
16:05:07 <jgriffith> It's actually probably pretty close, just bit rotted a bit with all of the changes over the past month or so
16:05:24 <rnirmal> jgriffith: I can update on notifications
16:05:31 <jgriffith> rnirmal: cool
16:05:38 <rnirmal> cp16net: has it mostly done...having issues with the tests
16:05:46 <jgriffith> #action rnirmal take a look at finishing cinder-notifications
16:05:52 <rnirmal> after his update... tox doesn't seem to run any tests
16:06:01 <jgriffith> rnirmal: yeah, I think it was mostly just moving to openstack.common
16:06:08 <rnirmal> yup
16:06:20 <jgriffith> probably rebase off of master and should be ok
16:06:22 <jgriffith> cool
16:06:37 <jgriffith> I'm moving Quota management to RC1
16:06:41 <rnirmal> jgriffith: will ask cp16net to ping offline for any help on that
16:06:42 <DuncanT> Sorry, just back from the dentist
16:06:48 <dricco> here, too
16:06:57 <jgriffith> rnirmal: Sounds good.. and if you need something from me shout
16:07:06 <jgriffith> DuncanT: fun fun
16:07:28 <jgriffith> So other than those two, we have code for everything else
16:07:43 <jgriffith> Just a matter of getting it reviewed, making any fixes and submitting before end of day
16:08:10 <DuncanT> I've a patch to make size optional when creating volumes from snapshots that is stubbornly refusing to work
16:08:13 <jgriffith> ttx will cut F3 late tonight and after that new features are pretty much shut down
16:08:27 <jgriffith> DuncanT: haven't seen it?
16:08:55 <jgriffith> DuncanT: Throw it out, maybe some of us can help figure out the issue?
16:08:56 <dricco> I have https://review.openstack.org/#/c/11141/
16:09:22 <jgriffith> dricco: That's nova, this is cinder ;)
16:09:30 <DuncanT> jgriffith: Will put it up in a moment
16:10:04 <dricco> sorry, thought some of ye guys had nova core status
16:10:21 <jgriffith> dricco: No problem, kinda just giving you a hard time
16:10:22 <dricco> I'll wait for Russell Bryant to get back on it
16:10:26 <dricco> lol
16:10:29 <dricco> :-)
16:10:34 <jgriffith> dricco: Yes, it's good to bring it to everybody's attention
16:10:36 <russellb> hm?
16:10:42 <jgriffith> russellb: ??
16:11:48 <jgriffith> Ok, so it looks like everybody has their drivers in
16:12:07 <jgriffith> We should all focus on clearing out the reviews today
16:12:26 <bswartz> I'm happy to review code if anyone needs it
16:13:02 <jgriffith> bswartz: (and all) https://review.openstack.org/#/q/status:open+cinder,n,z
16:13:20 <jgriffith> I just monitor this page throughout the day, easier than trying to catch email notifications etc
16:13:35 <jgriffith> of course if you have bandwidth help out on the Nova side too
16:13:52 <jgriffith> Just a reminder...
16:14:07 <jgriffith> After F3 is cut it's bug fixes only unless there's an FFE
16:14:28 <jgriffith> So if you have a feature it's going to get increasingly difficult to introduce it after today
16:14:46 <jgriffith> #topic RC
16:14:52 <jgriffith> Speaking of RC...
16:15:03 <jgriffith> The other thing that was decided at the PPB yesterday...
16:15:23 <jgriffith> Due to the screaming and yelling on the ML regarding nova-vol and Cinder
16:15:40 <jgriffith> After RC1 we'll backport all Cinder changes/additions to Nova-Volume
16:16:04 <jgriffith> The idea is having a feature-to-feature match between nova-vol and Cinder
16:16:14 <jgriffith> I'm not crazy about it, but I see the reasoning behind it
16:16:21 <bswartz> does that include drivers?
16:16:28 <jgriffith> Then hopefully we can truly deprecate nova-vol in Grizzly
16:16:31 <jgriffith> bswartz: yes
16:16:43 <jgriffith> bswartz: You should be covered already though no?
16:17:18 <bswartz> jgriffith: we've submitted 4 different drivers, and only one of them is in nova-vol
16:17:34 <bswartz> I will need to port the other 3 back
16:17:38 <jgriffith> bswartz: I thought all of them were there... sorry
16:17:57 <jgriffith> bswartz: Don't worry about it right now, just keep it in mind that you'll want to do it in the coming weeks
16:18:02 <bswartz> jgriffith: what is the deadline for backporting drivers from cinder to nova-vol?
16:18:35 <jgriffith> bswartz: So there's going to be a massive effort to dump/backport everything after RC1
16:19:20 <jgriffith> This was just decided yesterday so it's not going to be unrealistic in terms of timeline
16:19:21 <bswartz> jgriffith: do you have a link for the schedule for the rest of the release?
16:19:42 <jgriffith> bswartz: http://wiki.openstack.org/FolsomReleaseSchedule
16:20:06 <bswartz> thanks
16:20:25 <thingee> :q
16:20:52 <jgriffith> :O $#%$#%#
16:20:57 <jgriffith> That's me blowing chunks
16:21:03 <thingee> :)
16:21:21 <jgriffith> Ok, any questions on F3 or Folsom in general?
16:21:38 <jgriffith> I was hoping to catch up with winstond regarding scheduler, but no luck
16:21:51 <jgriffith> #topic open discussion
16:21:54 <rnirmal> when does trunk get branched for folsom
16:22:07 <jgriffith> Sorry rnirmal I just cut you off :)
16:22:15 <rnirmal> np... I was a tad late
16:22:23 <rnirmal> is it F3 or RC1
16:22:25 <jgriffith> That's a good question...
16:22:26 <jdurgin> DuncanT: making size optional when creating from an image would be good as well
16:22:29 <jgriffith> I had that date
16:22:42 <jgriffith> I believe ttx will do that when he cuts F3 but might be later
16:22:53 <DuncanT> I plan on fixing the regression around scheduler and volume nodes being down at some point... but I see that as a bug fix :-)
16:22:54 <jgriffith> I would expect no later than the end of this month
16:23:04 <jgriffith> DuncanT: exactly
16:23:10 <jgriffith> DuncanT: and a critical bug fix no less
16:23:28 <jgriffith> Just remember, soooner is better at this stage
16:23:40 <jgriffith> Each day past F3 things will get more difficult
16:24:02 <jgriffith> I also wanted to clarify some things about volume_type :)
16:24:25 <bswartz> jgriffith: oh yes, I was going to ask about that
16:24:31 <jgriffith> bswartz: :)
16:24:50 <jgriffith> The idea behind volume_type was to give a way to tell the scheduler to select different back-ends
16:25:15 <jgriffith> We've had discussions about other uses (such as QOS) but haven't implemented anything yet
16:25:28 <bswartz> jgriffith: but volume_type was added in diablo, long before we supported multiple backends
16:25:48 <rnirmal> jgriffith: isn't volume_types also a user facing feature ?
16:25:52 <jgriffith> bswartz: added, implemented and used are all different things
16:26:04 <jgriffith> rnirmal: yes it is (user facing)
16:26:06 <DuncanT> bswartz: You could always run different backends on different volume nodes
16:26:14 <bswartz> jgriffith: in the diablo timeframe, I thought the Zadara driver used it for qos
16:26:27 <jgriffith> bswartz: yes *but*
16:26:47 <jgriffith> bswartz: zadara's definition of qos is actually disk/backend type
16:27:08 <jgriffith> bswartz: sata, scsi, ssd etc
16:27:44 <rnirmal> jgriffith: back to my question... if volume_types is being thought of as sata, scsi, ssd etc
16:28:01 <rnirmal> then it differs slightly from the volume backend for the scheduler to choose
16:28:08 <rnirmal> supposing multiple backends support ssd
16:28:21 <jgriffith> rnirmal: Well...... it doesn't have to be limited to that either
16:28:30 <DuncanT> volume_types of "gold, silver, budget" were also suggested
16:28:33 <creiht> the volume_type was added specifically so you could support multiple backends
16:28:36 <bswartz> jgriffith: I would argue that it doesn't matter how the driver interprets the volume_type, only that it's processed inside the driver rather than outside the driver
16:28:42 <jgriffith> You can say something like: netapp = type-1, SF=type2, rbd=type3
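For illustration only, a hypothetical sketch of the mapping jgriffith describes, using python-cinderclient volume types; the type names and the exact client calls are assumptions, since the scheduler-side support discussed next did not yet exist:

    # Hypothetical sketch: one volume type per backend, so a user can ask for
    # "solidfire" and a type-aware scheduler could land the volume on a
    # matching back-end. Names are made up for illustration.
    from cinderclient.v1 import client

    cinder = client.Client('user', 'password', 'tenant', 'http://keystone:5000/v2.0')
    for name in ('netapp', 'solidfire', 'rbd'):
        cinder.volume_types.create(name)

    # The user only picks a type at create time; nothing backend-specific is exposed.
    cinder.volumes.create(size=10, volume_type='solidfire')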
16:28:42 <creiht> or multiple options within one backend
16:28:43 <rnirmal> true but what I'm getting at is
16:28:54 <rnirmal> volume_type doesn't necessarily translate to a single backend
16:29:03 <creiht> correct
16:29:14 <creiht> it is up to the interpretation of the scheduler
16:29:27 <jgriffith> rnirmal: Yes, I understand your point
16:29:37 <creiht> I'm not sure if the default scheduler ever got updated so that you could map volume_types to the backends
16:29:42 <rnirmal> what I'm getting at is the scheduler needs something more than just volume_type to schedule to the correct volume backend
16:29:50 <jgriffith> creiht: So there's the *PROBLEM*
16:29:53 <rnirmal> creiht: nope I don't think it has it
16:30:02 <jgriffith> creiht: The scheduler doesn't support it anyway
16:30:07 <DuncanT> There was a method added to driver so that it could provide key/value pairs to the scheduler
16:30:20 <DuncanT> I think only one driver implemented it and the scheduler was never written
16:30:25 <jgriffith> Ok, before we rat hole....
16:30:34 <bswartz> I think we need to clearly separate data meant to be consumed by the scheduler from data meant to be consumed by the drivers
16:30:38 <creiht> our driver uses it :)
16:30:43 <jgriffith> DuncanT: Yes, that's the problem, nothing is implemented in the scheduler yet anyway
16:30:58 <jgriffith> bswartz: I have no problem with that
16:31:09 <rnirmal> driver.get_volume_stats is what reports back to the scheduler
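As a rough illustration of the hook DuncanT and rnirmal are referring to, a minimal driver-side sketch; the keys reported are assumptions for illustration, not a fixed contract, and no shipping driver is implied:

    # Sketch of a driver reporting key/value capabilities to the scheduler
    # via get_volume_stats(); the dictionary contents are invented.
    class ExampleVolumeDriver(object):
        def get_volume_stats(self, refresh=False):
            return {
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 2048,
                'free_capacity_gb': 512,
                'QoS_support': False,
            }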
16:31:23 <jgriffith> bswartz: But in the case of drivers that do require/use extra information, where would you propose that comes from other than metadata?
16:31:50 <bswartz> jgriffith: I have to admit that I don't know how volume metadata works
16:31:58 <bswartz> I need to look into that
16:32:00 <creiht> bswartz: I don't think anyone does :)
16:32:08 <jgriffith> bswartz: It's just metadata you get to add to a volume when you create it
16:32:16 <jgriffith> In a nutshell
16:32:20 <bswartz> jgriffith: is it surfaced at the API/CLI?
16:32:32 <jgriffith> bswartz: yes
16:32:44 <jgriffith> In fact you'll notice creiht submitted a bug on this very topic
16:33:04 <jgriffith> we allow you to set it, but then don't return it in the response info
16:33:12 <bswartz> jgriffith: Then I support improving the documentation for volume metadata and encouraging everyone to use that instead
16:33:15 <jgriffith> This was a bug, because it should be surfaced via the API
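A hedged example of the per-volume metadata being discussed, set at create time through python-cinderclient; the key names are made up, and per the bug above the create response did not yet echo the metadata back at this point:

    # Illustrative only: free-form key/value metadata attached to a single volume.
    from cinderclient.v1 import client

    cinder = client.Client('user', 'password', 'tenant', 'http://keystone:5000/v2.0')
    vol = cinder.volumes.create(
        size=1,
        display_name='db-data',
        metadata={'affinity_group': 'db-cluster', 'tier': 'ssd'},
    )
    # Readable back through the API (once the response bug creiht filed is fixed).
    print(cinder.volumes.get(vol.id).metadata)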
16:33:37 <jgriffith> bswartz: HA HA... I support improving *ALL** documentation
16:33:51 <jgriffith> bswartz: our documentation isn't so good for a newcomer IMHO
16:34:02 <creiht> hah... yeah the volume documentation is a bit lacking
16:34:07 <jgriffith> bswartz: This is something we really need to try and improve
16:34:22 <DuncanT> I think volume_types can provide an easier to understand interface than metadata if it is fully implemented
16:34:37 <jgriffith> DuncanT: yes, but they have *different* uses!
16:34:40 <creiht> it depends on what you want to do with metadata
16:35:01 <rnirmal> think of metadata on a per-volume basis
16:35:12 <DuncanT> We also have a requirement (and have expressed it for a while) that volume_type gets all the way to the driver
16:35:13 <jgriffith> rnirmal: exactly!!!
16:35:13 <bswartz> I will see what NetApp can do about documentation -- we are hiring more people on my team. No promises though
16:35:13 <rnirmal> rather than on a per-backend-provider basis
16:35:30 <DuncanT> Since we do multiple types from one backend
16:35:43 <creiht> I need to see as well what we can do to cross-pollinate some of our doc efforts
16:35:57 <jgriffith> DuncanT: It does already
16:35:59 <creiht> DuncanT: indeed, same boat here
16:35:59 <annegentle> that would be awesome, both of you
16:36:08 <creiht> annegentle: :P
16:36:10 <creiht> :)
16:36:19 <annegentle> it'll pay back in spades :)
16:36:20 <creiht> annegentle: I was going to pass that buck to you :)
16:36:37 <jgriffith> DuncanT: It's in the volume db object that's passed into the driver on create, or do you mean something different?
16:36:41 <annegentle> lol I really am working on it behind the scenes, believe me.
16:37:03 <jgriffith> annegentle: The problem lies on our side IMO
16:37:05 <creiht> annegentle: yeah I know, just need to make sure we have things set up correctly so that what david does on our docs can also help you guys
16:37:12 <DuncanT> Shall we try to get some use cases of volume_types vs. metadata written up and see if we're on the same page? It's a bit fluffy at the moment
16:37:18 <jgriffith> annegentle: We all throw our code in but *never* document it :(
16:37:22 <annegentle> creiht: that's perfect, thanks
16:37:38 <jgriffith> DuncanT: So one use case for metadata is my patch submission :)
16:37:40 <rnirmal> DuncanT: yes that would be really helpful
16:37:54 <annegentle> jgriffith: we have sysadmins who wrote the volumes stuff that already exists who would LOVE more info to write more docs. So it's really a matter of matching up people
16:37:55 <creiht> gah... and I gotta run
16:38:05 <DuncanT> The only use I've got for metadata is affinity / anti-affinity
16:38:07 <creiht> jgriffith: if there are any areas that I can help with this stuff, please email me
16:38:14 <creiht> and I'll check back on the backlog later
16:38:19 <jgriffith> creiht: we'll do... thanks!
16:38:21 <DuncanT> I think I understand jgriffith's case as well
16:38:42 <jgriffith> So let's ignore *what* the metadata contains for a second...
16:39:11 <rnirmal> is metadata here == volume_type extra specs?
16:39:34 <jgriffith> rnirmal: gaaaa.... I didn't even want to talk about that one yet :)
16:39:43 <rnirmal> ok :)
16:39:46 <jgriffith> So this is exactly the problem IMO
16:40:04 <jgriffith> We have metadata, volume_type and the extra specs
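To make the three constructs concrete, a hedged sketch (all names invented): metadata hangs off an individual volume, a volume_type is a named class shared by many volumes, and extra specs are key/value pairs attached to a type rather than to a volume:

    # Illustrative only -- contrasting the three constructs being discussed.
    from cinderclient.v1 import client

    cinder = client.Client('user', 'password', 'tenant', 'http://keystone:5000/v2.0')

    # volume_type: a named class of service; extra specs are key/values on the type
    gold = cinder.volume_types.create('gold')
    gold.set_keys({'drive_type': 'ssd'})

    # metadata: per-volume key/values set by the user on one specific volume
    cinder.volumes.create(size=5, volume_type='gold',
                          metadata={'owner_app': 'billing'})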
16:40:12 <rnirmal> cos I think we are going in circles confused between the two... without a clear separation
16:40:24 <jgriffith> But we don't have a clear agreement/understanding of what their intended use is
16:40:43 <jgriffith> rnirmal: yes, I think you are precisely correct
16:41:39 <jgriffith> So volume_types as I understand it was intended to be used to make scheduler decisions
16:41:54 <jgriffith> does anybody disagree with that?
16:42:06 <rnirmal> can we clear it up a little more
16:42:07 <DuncanT> Ish
16:42:13 <bswartz> jgriffith: I can't speak to that, but if it's true, then NetApp is definitely doing the wrong thing
16:42:48 <jgriffith> rnirmal: I was intentionally avoiding specific use cases
16:42:53 <jgriffith> bswartz: I don't think that's true
16:43:12 <rnirmal> jgriffith: I don't want to avoid specific cases right now
16:43:17 <jgriffith> bswartz: I think it works extremely well for your cases where you've used it
16:43:19 <DuncanT> If you take a broad definition of 'scheduler'
16:43:19 <rnirmal> if we are to implement the rest of it correctly
16:43:22 <jgriffith> rnirmal: :) fair enough
16:43:52 <bswartz> The NetApp driver assumes that it's the only backend running
16:44:17 <bswartz> We need to do some testing of the multi-backend scenarios to see if anything evil happens
16:44:20 <jgriffith> bswartz: as it should
16:44:44 <jgriffith> bswartz: That shouldn't be your problem, it should be up to the scheduler and API's to sort that out
16:45:07 <jgriffith> bswartz: The whole point of the abstraction is it shouldn't matter to the driver
16:45:15 <rnirmal> yeah the driver need not understand anything beyond its own presence.
16:45:17 <jgriffith> rnirmal: Ok... I'm not ignoring you I promise
16:45:23 <bswartz> so if there are multiple backends, don't they all share the same cinder.volumes DB table?
16:45:51 <jgriffith> bswartz: yes
16:46:11 <bswartz> jgriffith: how does the scheduler know which backends created which volumes?
16:47:06 <jgriffith> bswartz: ?
16:47:23 <jgriffith> bswartz: You mean the host column?
16:47:30 <rnirmal> right now it's just the host column
16:47:38 <rnirmal> I don't think anything else is being used
16:47:39 <jgriffith> :)
16:47:39 <bswartz> well, once a volume is created, when an attach call comes in, it needs to get sent to the right backend
16:47:49 <bswartz> so the host column is it?
16:47:58 <bswartz> perhaps that's all that's needed
16:47:58 <jgriffith> bswartz: Ahhh... that's different, that's the driver
16:48:27 <jgriffith> So here's the thing...
16:49:10 <bswartz> okay maybe I'm not understanding this
16:49:16 <bswartz> can different backends have different drivers?
16:49:38 <rnirmal> yes if you run them on different hosts currently
16:49:57 <bswartz> if they can, then it matters which backend the attach call goes to, so the right driver can handle it
16:50:04 <DuncanT> I thought somebody did some work to allow different drivers on one host?
16:50:13 <rnirmal> DuncanT: I'm working on it
16:50:29 <jgriffith> DuncanT: rnirmal did that, yes but it doesn't look like it's going to make it for Folsom
16:51:11 <DuncanT> Ah, ok, got it
16:51:43 <jgriffith> bswartz: The volume/api will do an rpc cast to the appropriate volume node
16:51:57 <jgriffith> bswartz: That volume node will *only* support a single backend/driver
16:52:18 <bswartz> jgriffith: until rnirmal's change
16:52:26 <jgriffith> bswartz: It figures out what volume node to use via the scheduler
16:52:44 <jgriffith> bswartz: yes, but that's not in so let's leave it out of your question for now
16:52:49 <bswartz> okay
16:53:02 <jgriffith> bswartz: I'm just trying to explain why the backend doesn't need to *know* or care and how it works
16:53:35 <jgriffith> bswartz: So the current solution for multiple back-ends is multiple volume nodes
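As a sketch of the routing jgriffith is describing (not the actual Cinder code path): each volume row records the host whose driver created it, and later operations are cast to that host's message queue, which is why a driver can safely assume it is the only backend:

    # Sketch only: the "host" column on the volume is what routes follow-up
    # operations (attach, delete, snapshot) back to the node that created it.
    def topic_for_volume(volume, base_topic='cinder-volume'):
        """Build the per-host message topic, e.g. 'cinder-volume.node-a'."""
        return '%s.%s' % (base_topic, volume['host'])

    volume = {'id': 'vol-001', 'host': 'node-a'}   # as stored in the volumes table
    print(topic_for_volume(volume))                # -> cinder-volume.node-a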
16:53:41 <bswartz> I'm willing to believe that everything just somehow works, but I plan to do some testing to see exactly how it works
16:53:53 <jgriffith> bswartz: :) that's what I had to do
16:54:06 <jgriffith> bswartz: I used pdb and traced a bunch of crap to figure it out
16:54:07 <rnirmal> bswartz: :)
16:54:39 <bswartz> I think we still need to tackle the user interface for selecting qos stuff vs driver type stuff
16:54:40 <vincent_hou> is there anything about CHAP?
16:54:57 <bswartz> having one argument that's overloaded for both purposes seems like a recipe for trouble
16:55:13 <jgriffith> vincent_hou: don't think we'll get to it, do you have any updates?
16:55:20 <jgriffith> bswartz: ?
16:55:22 <DuncanT> Might be worth enhancing the dummy driver so that it publishes enough to allow the scheduler to tell it apart from a real backend?
16:55:40 <vincent_hou> I wrote some specs
16:55:42 <vincent_hou> http://wiki.openstack.org/IscsiChapSupport
16:56:03 <vincent_hou> I hope people can help look at it
16:56:09 <jgriffith> bswartz: The whole point I'm trying to make is in my case qos stuff *IS* driver stuff (as you put it)
16:56:20 <jgriffith> vincent_hou: Yes, definitely!
16:56:47 <jgriffith> vincent_hou: I meant to talk to you the other night... one-way CHAP seems fine to me
16:56:52 <rnirmal> bswartz: +1 for not overloading
16:57:06 <jgriffith> rnirmal: bswartz: overloading what???
16:57:13 <vincent_hou> ok
16:57:19 <jgriffith> rnirmal: bswartz: I still don't know what's being overloaded?
16:57:42 <rnirmal> jgriffith: n/m we can talk abt it later... overloading a single construct for deciding user specified type and which backend to choose
16:57:45 <bswartz> jgriffith: I don't like the idea of volume_types being consumed by both the scheduler and the drivers
16:57:59 <bswartz> I'm happy to table that discussion though
16:58:04 <DuncanT> I don't think volume_types is purely about backend selection... indeed backend selection should be invisible to the user
16:58:13 <jgriffith> bswartz: Ahhh.. I see what you're saying now
16:58:16 <jgriffith> bswartz: hmmm
16:58:26 <DuncanT> I think they are about classes of service
16:58:37 <jgriffith> bswartz: I don't know that I see a problem with that, but I'm open minded
16:58:43 <rnirmal> jgriffith: that goes back to why we didn't use volume_types and chose volume_backends instead
16:58:57 <DuncanT> I don't want the user to have to know or care about backends at all
16:59:18 <jgriffith> rnirmal: Yes!  That's correct
16:59:32 <bswartz> I think it's possible to make things work with the current design, but I also think that it can lead to trouble, and we'd be better off changing the design to avoid future problems
17:00:05 <jgriffith> bswartz: I don't necessarily see why allowing the backend to read volume_type is *dangerous*
17:00:06 <bswartz> If there was one argument for the scheduler, and then some other argument for the driver-specific stuff, that would be better IMO
17:00:29 <DuncanT> bswartz: From the user facing API? Yuck yuck yuck
17:00:45 <rnirmal> jgriffith: it's not dangerous, just a ton more confusing.
17:01:00 <jgriffith> Ok... so here's what I propose
17:01:09 <bswartz> maybe just add a QOS user parameter?
17:01:13 <rnirmal> and potentially relay to the user what the backends are maybe
17:01:18 <jgriffith> keep in mind that this is going to be Grizzly work and not Folsom
17:01:27 <jgriffith> bswartz: That won't work I don't think even though I'd like it :)
17:01:50 <jgriffith> bswartz: Or, I should say it definitely won't work for Folsom, but we can pitch it for Grizzly
17:02:35 <jgriffith> So I propose that we flesh out the meaning/purpose of volume_type including some use cases
17:02:41 <rnirmal> yeah all of this should just be grizzly... but getting it early on in grizzly is going to be tremendously helpful.. since it's a lot of moving parts
17:02:49 <jgriffith> In addition we do the same thing for metadata
17:03:14 <jgriffith> I'll also agree that a blueprint for exposing QOS is the best thing for Grizzly
17:03:16 <bswartz> jgriffith: use cases would be good, so we can have a concrete discussion
17:03:42 <jgriffith> bswartz: I'm glad you said that because I'm going to ask everybody involved in this conversation to present some :)
17:04:16 <jgriffith> #action bswartz DuncanT rnirmal jgriffith Work on use cases/definition for volume_type and metadata for next week
17:04:27 <rnirmal> jgriffith: I def have a few
17:04:31 <jgriffith> And on that note we're out of time :)
17:04:38 <jgriffith> Don't get too bogged down on this right now
17:04:43 <jgriffith> We need to focus on Folsom
17:05:08 <jgriffith> But it's good that this came up, we definitely want to get it ironed out for Grizzly
17:05:36 <jgriffith> Anything else real quick?
17:05:49 <bswartz> everyone do some reviews!
17:05:54 <jgriffith> Yes!!!
17:06:12 <jgriffith> And don't get too wrapped up in whether somebody uses metadata versus volume types :)
17:06:17 <jgriffith> Thanks everyone!
17:06:22 <jgriffith> #endmeeting