16:00:36 #startmeeting cinder
16:00:37 Meeting started Wed May 21 16:00:36 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:40 hello
16:00:41 The meeting name has been set to 'cinder'
16:00:46 Hi asselin
16:00:49 Hey everyone
16:01:00 hello!
16:01:02 hi
16:01:07 hi
16:01:20 hi
16:01:53 Still missing a few folks
16:02:01 give them a moment....
16:02:04 then we'll get started
16:02:18 hi jgriffith, guys
16:02:23 Hi guys.
16:02:29 asselin: Thanks for adding notes to the 3rd party testing etherpad. That looked helpful.
16:02:37 Well, we've got everybody that has an agenda item at least ;)
16:02:50 let's roll with what we've got
16:03:00 #topic Consistency Groups
16:03:02 jungleboyj, you're welcome
16:03:04 xyang1: you're up
16:03:17 sure, thanks
16:03:18 https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups
16:03:48 so some people have concerns about the restriction of having only one volume type per CG
16:03:53 Avishay's comments on etherpad: Why only one type in a CG? This limitation seems very restrictive. For example, I can have a database volume and a separate volume for the log, which are related but may have different QoS settings. I understand where this limitation is coming from (scheduling issues), but maybe we can get around it? For example, when creating the CG, the user specifies which volume types they want to allow
16:03:54 jungleboyj, hopefully we can get it working without the manual workaround...
16:04:53 when creating the CG, the user specifies which volume types they want to allow within the CG. Then the scheduler tries to find a backend that can handle those types. If such a backend exists, the CG creation succeeds, and all volumes placed in that CG are placed on that backend. If the requested volume types are incompatible, the CG creation fails.
16:05:12 I think this is a good suggestion
16:05:36 what do you think? we need to make some changes in the scheduler for this to happen
16:05:56 i think it's a terrible idea ;)
16:06:12 :)
16:06:14 Since we ruled out modifying CG membership in Juno, would that mean you can't modify the list of volume types allowed as well?
16:06:23 xyang1: it's a good idea but I think it would be simpler and more robust if the admin just set this up
16:06:49 eharney: what does that mean that you can't modify CG membership?
16:06:50 jgriffith: +1 I think we need to start simple as discussed in the session.
16:06:55 jgriffith: what do you mean? only allow one volume type per CG?
16:06:57 xyang1: In other words, the type encompasses the consistency group and you're done
16:07:08 avishay: Adding/subtracting volumes from a CG
16:07:18 xyang1: most likely
16:07:34 adding/removing volumes from CG is not in the first pass
16:07:38 xyang1: as long as we have volume-types representing backends this seems like it adds a lot of room for error
16:07:48 i don't understand how that's possible
16:07:58 avishay: which?
16:08:04 oh..."create a CG with volumes 1, 2, and 3?"
16:08:31 jgriffith: still one CG can be on one backend, just one backend can support more than 1 type
16:08:51 xyang1: that's right
16:08:53 I'm here (sorry I'm late)
16:08:55 xyang1: Did we talk about that?
16:08:58 hey all, got lost in the lvm stuff discussion
16:09:20 xyang1: sure, but how do you enforce that? What I'm saying is we need another construct for that to work efficiently
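[A minimal sketch of the scheduling check behind the proposal quoted above (declare the allowed volume types at CG-create time, then find a single backend that can serve them all). The function and helper names are assumptions for illustration only, not Cinder's actual API.]

    # Sketch only: pick_backend_for_cg and get_backends_for_type are
    # hypothetical names, not part of the Cinder code base.
    def pick_backend_for_cg(allowed_types, get_backends_for_type):
        """Return a backend that can serve every requested volume type, or None."""
        if not allowed_types:
            return None
        candidates = None
        for vtype in allowed_types:
            backends = set(get_backends_for_type(vtype))
            # keep only backends that can serve every type seen so far
            candidates = backends if candidates is None else candidates & backends
            if not candidates:
                return None  # incompatible types: CG creation fails
        return sorted(candidates)[0]  # any surviving backend can host the whole CG

    # Example: types served by the same backend succeed, mixed ones fail.
    type_map = {'db-gold': ['backend-1'], 'log-silver': ['backend-1'], 'cheap': ['backend-2']}
    pick_backend_for_cg(['db-gold', 'log-silver'], lambda t: type_map[t])  # -> 'backend-1'
    pick_backend_for_cg(['db-gold', 'cheap'], lambda t: type_map[t])       # -> None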
16:09:26 jungleboyj: the decision at the summit is to have only one type per CG
16:09:28 so that seems too restrictive
16:09:31 xyang1: If you have a backend that can do thick and thin provisioned volumes then you would have two types with two different CGs.
16:09:42 xyang1: yes one type per CG
16:09:45 xyang1: Right.
16:09:46 one type per CG doesn't help ensure that all volumes are on the same backend
16:10:05 avishay: +1
16:10:09 the restriction doesn't solve anything
16:10:16 but one backend may support more than 1 volume type
16:10:26 I think CGs need to be associated with a backend/driver, not with a type
16:10:34 avishay: Yes, but isn't it up to the administrator to set the system up to avoid that?
16:10:38 bswartz: -1, don't want to expose that
16:10:43 NetApp knows of many use cases for a CG spanning multiple types of storage
16:11:02 jungleboyj: no, quite the opposite - the whole point of cinder is "i want a volume like this, put it somewhere"
16:11:06 xyang1: but why in the context of consistency groups would you want "different" volume types?
16:11:39 so volumes in a CG can belong to one application, but the application needs volumes of different service levels
16:11:54 So this is kind of why I'm not a fan of things like replication and CG's
16:12:22 it just turns into a debate about "this is how I do it" and it creates a complex and brittle model
16:12:26 i think the restriction should be that you have to specify the types allowed in the CG ahead of time, and only allow new volumes to be added to the CG, so we can ensure they're on the same backend
16:12:51 avishay: I agree
16:12:59 avishay: I think that's fine and achieves the same end result I was looking for
16:13:10 avishay: I think we agree on the only new volumes part.
16:13:17 let the CG have more than 1 type, let the scheduler enforce that the CG is tied to a single backend/driver
16:13:26 exactly
16:13:27 How does having multiple types ensure that it goes to the same backend?
16:13:32 avishay: ^
16:13:46 bswartz: avishay that's fine, but it's kind of a train wreck to implement/enforce in what we have today
16:13:49 jungleboyj: we need to make some changes in scheduler to achieve that
16:13:50 jungleboyj: it doesn't - declaring the CG first and only allowing new volumes does that
16:14:00 bswartz: avishay I still think we need some higher level grouping construct
16:14:24 xyang1: avishay Ok.
16:14:34 avishay: we have agreed to create a CG and create volumes as part of it, but not update the CG in the first phase
16:14:42 jgriffith: what kind of grouping construct?
16:14:50 There needs to be a different grouping construct for the purposes of DR/replication -- but that has totally different needs than CGs, so I'd rather discuss that separately
16:15:02 avishay: not allowing add/remove volume from CG after it is created
16:15:06 avishay: something like a parent grouping which may just contain backend members
16:15:27 avishay: you can implement that laterally as well
16:15:59 avishay: but my thought was something like a parent-group that provides info on all types that have "some thing" in common
16:16:14 that might be backend, or even something else if there's a use case
16:16:14 jgriffith: that is putting cinder hosts in a group?
16:16:40 xyang1: yes, but it's another free-form type of thing so you can use it for things besides hosts
16:16:53 jgriffith: so make "pools", which in general are limited to one backend?
16:17:02 avishay: yes, it could be used for that
16:17:12 avishay: that is the most prevalent use case
16:17:25 avishay: I haven't really come up with another meaningful use
16:18:03 jgriffith: it breaks the placement model of "put this volume wherever you think", but we may not have a choice
16:18:20 so then instead of duplicating all sorts of extra-specs entries across multiple types you can just inherit from the parent grouping
16:18:31 jgriffith: the fact is that because cinder is only on the control path, it relies on the underlying storage, which forces some grouping
16:18:40 avishay: well, in this case we want to break that IMO
16:19:05 avishay: what do you mean "only" the control path?
16:19:14 jgriffith: i mean not data path
16:19:38 avishay: well, I figured but I'm not sure what you're getting at?
16:19:38 jgriffith: it was a bit broken before, like where a volume clone could only inherit the source's type (and maybe retype after without migrating, maybe)
16:19:54 avishay: ahh... ok, I kinda see where you're going
16:20:07 ooh, calling that part broken is a good sign
16:20:08 jgriffith: i'm getting at that maybe we do need to bend the rules a bit, or live with the restrictions
16:20:19 guitarzan: don't get excited :)
16:20:23 lol
16:20:35 avishay: I say bend the rules and go with my grouping proposal :)
16:20:40 i don't have a good suggestion, just saying what's going through my head at the moment :)
16:20:43 but I'm completely unbiased here ;)
16:20:47 jgriffith: I think this parent group thing is different from having one backend support multiple volume types
16:20:53 jgriffith: if it gets me snapshot to different type you have my vote :P
16:20:55 jgriffith: your proposal could be great, just don't quite fully get it :)
16:21:01 guitarzan: :)
16:21:01 xyang1: how so?
16:21:14 guitarzan: just run Havan and you can do that ;)
16:21:18 havana
16:21:19 haha
16:21:22 jgriffith: grouping cinder hosts is like grouping multiple backends together?
16:21:25 jgriffith: no, havana is what broke it
16:21:28 jgriffith: is that what you mean?
16:21:37 xyang1: no, doesn't have to be
16:21:52 xyang1: my use case was actually just grouping types to the same backend
16:22:00 xyang1: the opposite of what you just said
16:22:13 jgriffith: okay, that makes sense.
16:22:43 jgriffith: group multiple volume types together, but they have to be on the same backend
16:22:51 xyang1: yeah
16:22:56 i think a group/pool would have some properties (i.e., supported volume types), and have the added property that all operations must work within it (i.e., clone a volume between any two supported types, make a CG, etc)
16:23:24 xyang1: so when you say you want a CG using volume-types x,y,z it just makes sure it can do that and have them all in the same grouping
16:23:34 avishay: exactly
16:23:52 avishay: the point is to make it invisible to the end user
16:24:04 jgriffith: i'm ok with that
16:24:06 avishay: puts a bit more burden on the cloud admin, but not much
16:24:13 jgriffith: and then the CG would be tied to a pool
16:24:17 jgriffith: ok. but you are saying this grouping is not CG, but a different group construct?
16:24:19 avishay: yes
16:24:32 xyang1: yes, I think it has a number of uses besides CG
16:24:36 jgriffith: that means we need to introduce another table in db?
16:24:44 xyang1: but CG becomes *easier* to manage and setup this way
16:24:54 xyang1: yup
16:24:55 xyang1: the grouping puts a restriction on placement, the CG is a subset of the group
16:25:22 DuncanT: thingee: thoughts?
16:25:25 xyang1: actually this could probably be done outside of the db if it's a big deal
16:25:31 avishay: ok, I'll need to think about it
16:25:53 i think the idea must be flawed because i'm agreeing with jgriffith too quickly ;)
16:25:59 avishay: LOL
16:26:08 avishay: not really... I proposed this in HongKong :)
16:26:10 jgriffith, avishay: so the idea is this group construct can be used in other places other than CG?
16:26:12 avishay: I missed the part of how you figure two backends are compatible to be in the same CG
16:26:17 avishay: so 6 months+ is about right :)
16:26:18 it's only flawed if duncan agrees too
16:26:32 bswartz: if DuncanT agrees the world is likely to end
16:26:35 jgriffith: i must have spaced out
16:26:48 I would like to see the idea written down concisely though -- I'm not 100% sure I'm on the same page with yall
16:26:57 avishay: I think a heated debate about something started at the same time
16:27:11 jgriffith: sounds about right
16:27:15 bswartz: geesh... you are sure a picky sort
16:27:18 thingee: I don't think that is possible.
16:27:40 jungleboyj: it's possible
16:27:43 jungleboyj: ok, well I thought that it was suggested multiple types could be in same CG. I'm not sure I follow then.
16:27:46 jungleboyj: just probably not really smart
16:27:50 thingee, jungleboyj: we are still talking about volumes in one CG on the same backend
16:27:54 hey I just want to know what I'm agreeing/disagreeing with
16:28:01 thingee: i think the admin would need to figure it out? good point, probably tricky
16:28:04 xyang1: That was what I thought.
16:28:07 bswartz: :)
16:28:15 bswartz: just vote in favor of my proposals and everything will be alright
16:28:18 avishay, jgriffith: this is why we originally said one type per CG
16:28:20 thingee, jungleboyj: it is just multiple volume types per backend
16:28:26 jgriffith: okie dokie
16:28:31 it's specifying backends for volume types
16:28:38 or vice versa
16:28:38 I guess as avishay said, one type could be multiple backends
16:28:39 thingee: it needs to be one backend for CG though, not type
16:28:39 xyang1: +1
16:28:41 doesn't help anything
16:28:46 avishay: +1
16:28:48 bswartz: wow! I should've tried that years ago :)
16:28:57 bswartz: I'll write up a doc and share it with everyone
16:29:08 thingee: what do you mean?
16:29:14 avishay: +1
16:29:33 jgriffith: you going to share it thru the ML or LP or something else?
16:29:38 jgriffith: originally we said at the summit one type per CG. avishay brought up the point that a type could mean two backends
16:29:38 jgriffith: thingee's point is a good one - the admin would need to set up the pools correctly (with knowledge of who can cooperate with who)
16:29:42 thingee: it's scheduler hacks to enforce that
16:29:48 bswartz: LP
16:29:59 or specs repo if it ever merges
16:30:25 avishay: thingee that's what we've been talking about here
16:30:30 jgriffith: I'm just worried about relying on the admin to set that up correctly.
16:30:39 i think to be safe, we could have a hierarchy: backend > pool > volume, and only expose pools and volumes to users (admins see backends obviously)
16:30:39 avishay: thingee the whole point of this is to create a grouping of types that are on the same backend
16:30:42 that's the whole point
16:30:51 jgriffith: yeah sorry, I thought people were agreeing this was the approach and I missed the solution of how all this works :)
16:31:10 avishay: yeah, I think that seems like a good approach, although
16:31:25 not sure about the term "pool" and what it means in an end-user context etc
16:31:32 thingee: people get nervous if there's too much agreement -- that's why we're disagreeing
16:31:45 bswartz: I disagree
16:31:54 we're disagreeing on principle
16:32:01 seriously though I think there's a lot of agreement on the individual points and we just need to see a complete proposal
16:32:03 guitarzan: i disagree
16:32:11 Ok
16:32:21 I think we should at least move forward with it
16:32:24 guitarzan: :)
16:32:26 I think the "it's impossible for operators to configure" needs to either be addressed or ignored
16:32:30 I'll work on writing up the grouping proposal
16:32:49 xyang1: I'll get that to you to use for the CG work
16:32:52 sound like a plan?
16:33:00 jgriffith: sure
16:33:07 #action jgriffith propose a workable CG grouping construct in a launchpad BP
16:33:11 I agree move forward, but I'm still curious how we figure how all the types in the pool are compatible. Is this just doing a comparison on extra_specs?
16:33:22 thingee: nope
16:33:26 jgriffith: so when using CGs: backend > pool/group > CG > volume, yes?
16:33:27 the admin sets it up
16:33:30 thingee: the flow is like this:
16:33:37 admin creates a Group
16:33:52 admin creates a volume-type and assigns it to that group (or doesn't)
16:34:12 types assigned to a group inherit properties from the group
16:34:29 jgriffith: so my CG proposal will have a dependency on the grouping construct that you are going to propose?
16:34:33 what are the properties like?
16:34:36 so extra-specs keys like "backend-name" are the first candidates for inheritance
16:34:42 xyang1: yes
16:35:12 navneet: to start the big one is just backend-name, but you use meta like we do everywhere else to make it flexible
16:35:34 navneet: I suspect there are other use cases for this
16:35:50 jgriffith: I'm wondering if that will affect multiple pools per backend as well?
16:35:55 and it would be inherited as an extra-spec?
16:36:00 navneet: it may mean something completely different for guitarzan for example
16:36:06 jgriffith: ohh... i was thinking something a bit different
16:36:11 avishay: do tell
16:36:35 jgriffith: pools aren't very meaningful on mass quantities of small nodes I think
16:36:40 jgriffith: er, groups
16:36:52 guitarzan: probably true
16:36:57 xyang1: multiple pools not limited by types
16:37:16 actually i thought that probably backend == pool, and the scheduler could figure out what types each backend supported and report that
16:37:24 guitarzan: but at least I don't have to duplicate extra-specs entries for every type
16:37:33 guitarzan: in other words say I only have one backend
16:37:40 navneet: ok
16:37:42 guitarzan: but I have 10 different types serviced by that backend
16:37:45 jgriffith: yeah
16:37:49 avishay: pools will translate to separate schedulable entities
16:37:54 guitarzan: k... I'll stop :)
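[A tiny sketch of the inheritance step jgriffith describes in the flow above: types assigned to a group pick up the group's extra-specs, with backend-name as the first inherited key. The dict layout and helper name are assumptions for illustration, not an agreed design or existing Cinder code.]

    # Sketch only: how a volume type could inherit extra-specs from its parent group.
    def effective_specs(group_specs, type_specs):
        """Group-level extra-specs act as defaults; the type's own keys win."""
        merged = dict(group_specs)
        merged.update(type_specs)
        return merged

    group = {'volume_backend_name': 'backend-1'}
    db_type = effective_specs(group, {'qos': 'high'})
    log_type = effective_specs(group, {'qos': 'low'})
    # Both types resolve to the same backend without duplicating the backend-name
    # entry in every type, so a CG built from them can always land on one backend.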
16:37:57 or in the case of a driver like netapp which i believe manages multiple pools, the backend would have multiple pool constructs
16:38:01 avishay: we can have multiple pools on the same backend though
16:38:18 avishay: xyang1 I haven't gotten to the whole pools thing
16:38:18 navneet: yes i think i like that idea
16:38:31 avishay: xyang1 I'm still unsure of that whole thing and what it means to an end user
16:38:42 jgriffith: as am i :)
16:38:48 we're supposed to make this easy for the end-user :)
16:38:57 jgriffith: pffft
16:39:04 avishay: :)
16:39:24 usefulness trumps usability
16:39:26 Still confused people?
16:39:29 guitarzan: +1
16:39:32 the out of box experience should be easy but setting up complicated features can be hard
16:39:53 make easy things easy and hard things possible
16:39:59 someone setting up 14 backends and 47 types is going to have to suffer through some pain
16:40:03 bswartz: good statement
16:40:25 i think navneet's idea of having pool == schedulable entity sounds right, and bswartz's idea of having the scheduler aware of internal pools goes well with that
16:40:34 jgriffith: so one group construct can have multiple CGs in it?
16:40:37 I think Larry Wall gets credit for that quote
16:40:47 xyang1: sure
16:40:53 so navneet are you arguing against your pool implementation? :D
16:41:14 bswartz: he also coined "there's more than one way to do it" which isn't so hot :)
16:41:22 lol I know
16:41:46 guitarzan: no...I think it's agreement everywhere :)
16:42:02 Ok, so there's details, we can adjust but let's get started down this path
16:42:02 he also coined "int F=00,OO=00;main(){F_OO();printf("%1.3f\n",4.*-F/OO/OO);}F_OO()"
16:42:07 navneet: I just mean this "scheduleable pool" idea seems contrary to your pool per service implementation :)
16:42:09 else we discuss for another 6 months :)
16:42:21 needs more regex
16:42:28 guitarzan: haha
16:42:32 I'd like to have the pools discussion as a separate topic if we could
16:42:33 guitarzan: no it's not...each service is actually schedulable
16:42:42 jgriffith, +1
16:43:08 Shall we move on to the next topic?
16:43:12 guitarzan: that's the whole point of having pools as a service
16:43:43 arguments/objections?
16:43:45 i think we need to solve pools before CGs...once we have pools the restrictions on CGs change
16:43:46 navneet: yes, I follow
16:44:20 avishay: that means navneet needs to hurry up and finish the unit tests so we can turn the WIP into a mergable submission
16:44:21 avishay: sighh...
16:44:34 I don't see why we need a separate driver instance for each pool. Specify the pool in volume types and use 1 driver instance. Report the pool stats up to the scheduler.
16:44:38 bswartz: working on it :)
16:44:46 avishay: I think CG should be tied to volume types, not pools. Whether it is restricted by pools depends on driver implementation
16:45:03 hemna: navneet said it doesn't work while we were in ATL :)
16:45:12 xyang1: I agree
16:45:19 xyang1, +1
16:45:22 xyang1: particularly since many of us don't have that silly concept :)
16:45:34 jgriffith: I said it worked 2 days back...I need to check again
16:45:42 jgriffith: :)
16:45:43 I'm not sure how that doesn't work. It works for us perfectly today. :)
16:45:50 i think what i proposed for CGs will work... we can think about pools in parallel, i don't think i care much
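[For the "pools as schedulable entities" direction navneet and hemna mention, the driver would report each pool separately in its capability stats so the scheduler can treat each pool as its own placement target. The shape below is a rough guess at what such a report could look like; the 'pools' key and its field names are assumptions here, not the WIP that was under review.]

    # Sketch only: a driver reporting per-pool stats to the scheduler.
    class FakePooledDriver(object):
        """Illustration only; not an actual Cinder driver."""

        def get_volume_stats(self, refresh=False):
            return {
                'volume_backend_name': 'backend-1',
                'storage_protocol': 'NFS',
                # each entry below would become a separately schedulable entity
                'pools': [
                    {'pool_name': 'aggr1', 'total_capacity_gb': 1024, 'free_capacity_gb': 512},
                    {'pool_name': 'aggr2', 'total_capacity_gb': 2048, 'free_capacity_gb': 300},
                ],
            }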
16:45:53 navneet: no I mean the grouping of types you said didn't work for some reason
16:45:55 we just don't have support for reporting the pool stats into the scheduler
16:45:57 hemna: +1
16:46:01 navneet: which I'm still a bit unclear as to why
16:46:22 I don't think it was "doesn't work" I think it was "lots harder"
16:46:31 guitarzan: ahh... good point
16:46:33 or "lots more change"
16:46:41 mor intrusive
16:46:44 more even
16:46:44 jgriffith: latest WIP will work for all
16:47:13 honestly there will probably be some overlap then in what I proposed
16:47:20 meaning things may change
16:48:02 jgriffith: idea does not change...
16:48:24 Ok... 48 minutes in
16:48:25 wow we have 2 more topics and 10 minutes
16:48:29 we still have a couple topics
16:48:36 #topic external ci testing
16:48:59 I just wanted to get a quick pulse of where people are at in terms of bringing this online
16:49:09 anybody working on it? Supporting all of your drivers?
16:49:14 Done. ;-)
16:49:19 we are working on it...have been for about a month
16:49:20 problems/concerns etc etc
16:49:23 jungleboyj: really?
16:49:25 we'll start soon
16:49:29 jungleboyj: I don't see it reporting in to Gerrit
16:49:33 jgriffith: No, I wish.
16:49:36 akerr: ping
16:49:40 we had problems getting a 2nd NIC recognized by the vm to support our iSCSI network
16:49:41 working on it (Oracle ZFSSA)
16:49:54 hemna: ahhh yes, had the same problem
16:49:55 bah he's not here
16:49:59 and libvirt doesn't support FC vHBA passthrough
16:50:05 We met about it yesterday and have started working with the driver owners to get the systems set up.
16:50:05 hemna: so I cheated and added a bridge across my iscsi net
16:50:12 Is there a place where people can share tips? Does etherpad work?
16:50:45 I think asselin finally found a workaround for the 2nd iSCSI nic issue, we haven't tried PCI passthrough though
16:50:47 for FC
16:50:56 xyang1: there's a weekly meeting
16:51:16 jgriffith: time?
16:51:27 hemna: share the 2nd nic solution if you have it
16:51:36 ok, I'll have asselin post up something
16:51:39 hemna: I made it work with Neutron but that's another battle in itself
16:51:46 masochist!
16:52:21 hi, i posted my manual workaround for 2nd eth in the same etherpad for summit discussion
16:52:28 do we know anyone that actually works on libvirt itself?
16:52:40 asselin: thanks!
16:52:42 xyang1: I think asselin already put notes in the etherpad. Seems a good place to put stuff.
16:52:44 xyang1: xyang1 https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting
16:52:54 #link https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
16:53:04 jgriffith, bswartz: thanks!
16:53:05 also jaypipes posted the links there too to setup the ci system. I haven't tried it yet
16:53:11 hemna: there's a number of folks on the Nova team, but libvirt is open source
16:53:12 https://github.com/jaypipes/os-ext-testing
16:53:13 hemna: i can probably help find people that do
16:53:16 damn i missed the first two parties *waka waka*
16:53:23 https://github.com/jaypipes/os-ext-testing-data
16:53:43 thanks. I want to pick their brains and see if there is a way to get vHBA passthrough working. That's the best long term FC solution.
16:53:51 asselin: actually you should point folks to his blog
16:54:04 http://www.joinfu.com/
16:54:06 jgriffith, yes, his blog posts are there too
16:54:19 Oh.. he links them off the github readme... cool
16:54:49 ok, we all have a lot of work to do here
16:55:06 multi-attach ?
16:55:06 don't be surprised when somebody says "why aren't you testing"
16:55:10 I think everybody is clear
16:55:16 hemna: nope
16:55:18 k
16:55:28 #topic GlusterFS
16:55:34 mberlin: You're up
16:55:35 My turn ;)
16:55:39 My name is Michael Berlin and I work for the storage startup Quobyte which has its office in Berlin, Germany.
16:55:41 hemna: sorry, it was on the agenda
16:55:45 np
16:55:46 multi-attach was not
16:55:51 yah no worries.
16:55:55 Our storage interface for VMs is file based and therefore our Cinder driver is very similar to the NFS and GlusterFS one.
16:56:00 from previous topic: etherpad link with links referenced above: https://etherpad.openstack.org/p/juno-cinder-3rd-party-cert-and-verification
16:56:09 In fact, I reused the GlusterFS snapshot code for our Cinder driver. I've submitted our driver for review earlier this week.
16:56:19 Avishay (for good reasons) complained about the code duplication.
16:56:26 As a solution, I volunteer to move out the GlusterFS snapshot code into the general RemoteFS class in the NFS driver.
16:56:35 Eric planned to do this for Juno anyway and agrees with that.
16:56:41 The code duplication would be reduced and the NFS driver profits from the snapshot functionality.
16:56:41 Any concerns about this?
16:56:46 sounds fine to me
16:57:08 +1
16:57:12 i think this is a good plan and i don't foresee any issues with it -- goal is to just get all the *FS drivers that need snapshot functionality working the same way
16:57:13 jgriffith: stop agreeing with me
16:57:18 mberlin: we need to evaluate it on netapp nfs drivers
16:57:20 eharney: +1
16:57:24 I have one issue/requirement
16:57:24 avishay: NO
16:57:25 :)
16:57:28 :)
16:57:36 mberlin: hopefully should be fine but I'll let you know
16:57:40 navneet: i don't think this is relevant for NFS drivers...
16:57:43 bswartz: ?
16:57:45 navneet: Netapp NFS i mean
16:58:00 bswartz: issue/requirement?
16:58:02 whatever is pulled into the RemoteFS driver needs to be overridable in the child drivers (i.e. don't create any objects in the constructor)
16:58:06 eharney: ok...not sure
16:58:20 correct
16:58:28 RemoteFS is something netapp also inherits
16:58:29 bswartz: fo sho
16:58:30 just make sure that setup work is done in methods that can be overridden
16:58:35 bswartz: sure, that would kinda defeat the purpose
16:58:36 unless things have changed
16:58:59 navneet: the NetApp driver provides all its own create/delete snapshot methods though. so as long as we don't do something silly it'll be fine
16:59:05 mberlin: sounds good?
16:59:05 last time I checked the glusterFS driver did some stuff in the constructor which would make the driver very hard to subclass correctly
16:59:15 Sounds good to me.
16:59:31 bswartz: it might, i know i caused some issues there at one point. i think i cleaned it up a bit but will need to look around for issues like that again
16:59:31 I'll make sure that overriding is no problem.
16:59:41 mberlin: ty
16:59:43 so the verdict is: move the code to remotefs, but do it cleanly so that it can be overridden
16:59:57 yes
16:59:58 awesome
17:00:13 right on time :)
17:00:21 Yeah :-)
17:00:25 :-)
17:00:55 Okiedokie
17:00:58 thanks everyone
17:01:02 #endmeeting
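[The constructor requirement bswartz raises in the GlusterFS topic (do setup work in overridable methods rather than in __init__ so child drivers can subclass cleanly) roughly amounts to the pattern below. Class and method names are illustrative assumptions, not the actual cinder.volume.drivers code.]

    # Sketch only: keep __init__ lightweight so child drivers can swap in
    # their own snapshot/connection machinery by overriding setup hooks.
    class RemoteFSDriver(object):
        def __init__(self, *args, **kwargs):
            self._client = None  # no heavy objects created in the constructor

        def do_setup(self, context):
            # setup work lives in an overridable method, not in __init__
            self._client = self._create_client()

        def _create_client(self):
            return object()  # stand-in for a real filesystem client

    class VendorFSDriver(RemoteFSDriver):
        def _create_client(self):
            return object()  # vendor-specific client built here, not in __init__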