16:01:19 #startmeeting cinder
16:01:20 Meeting started Wed Jun 5 16:01:19 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 The meeting name has been set to 'cinder'
16:01:26 Hey everyone
16:01:28 hello all!
16:01:30 hi
16:01:31 hi
16:01:31 heylo!
16:01:32 Hey
16:01:34 hi
16:01:40 hi
16:01:40 * bswartz waves hello
16:01:42 o/
16:01:45 hi
16:01:46 Yay... full house :)
16:01:46 Happy Wednesday!
16:01:48 hi
16:01:56 alrighty, let's get started
16:02:05 #topic Shares service decision
16:02:24 bswartz: I believe you have some info to share here?
16:02:36 hi
16:02:40 hi
16:02:42 eharney: o/
16:02:47 yes, we've decided to give up on having the share service in cinder -- we're going to start a new project
16:03:13 It's what everyone seems to prefer, and we agree it's the right long-term approach
16:03:15 bswartz: very exciting!
16:03:36 in the short term the biggest hurdle to doing that is getting stuff moved to oslo
16:03:41 cool, sounds like the right approach
16:03:47 sounds great
16:03:52 +1
16:03:56 bswartz: so about that
16:04:07 bswartz: can you tell me what Cinder code it is you want moved to OSLO?
16:04:43 bswartz: Rob Esker mentioned that yesterday but couldn't tell me what code he was referring to
16:04:46 so the easiest thing for us to do is to fork cinder including our current share service implementation, then remove everything block-related, then start renaming stuff
16:04:58 however that will result in a lot of duplicated code at the end
16:04:59 bswartz: sure...
16:05:12 bswartz: but I'm wondering about the OSLO statements
16:05:23 they've come up a couple times and I don't know what they mean :)
16:05:28 so if we can find a way to move some of the common stuff into oslo then the new project and cinder can simply share it
16:05:53 I'm thinking in particular about some of the scheduler and API server bits
16:06:05 sure... that's the intent of OSLO anyway
16:06:05 I know there will be a lot of duplication there
16:06:18 I guess a fair amount of that is in Nova too
16:06:18 scheduler and API are efforts OpenStack-wide
16:06:26 avishay: +1
16:06:27 I think many of the 'things' in Cinder are derived from nova, and most of that is already in Oslo? not sure
16:06:35 avishay: bswartz and EVERY other OpenStack project :)
16:06:45 rushiagr: it's getting there
16:06:46 honestly I don't have more detail than that today on requirements to move stuff to oslo
16:07:04 when we start performing the fork we may discover other stuff that's common
16:07:08 rushiagr: bswartz you may have noticed massive efforts by winstond and others on the scheduler moving into OSLO already
16:07:12 as well as RPC
16:07:25 bswartz: cool
16:07:37 yes, I saw efforts on the scheduler
16:07:59 bswartz: keep in mind the API stuff has also been brought up by teams like ironic, triple-o, RedDwarf etc etc
16:08:05 There's a lot of interest there
16:08:13 hmmm.. need to come up with a better understanding of what bits of Cinder is already in oslo, or is used openstack-wide
16:08:41 rushiagr: so the best thing to check is the oslo-incubator code base, there's a lot there
16:08:47 s/is/are/
16:08:59 rushiagr: and link up with other "new" projects that are going through the same process right now
16:09:08 There's no shortage of new projects ramping up :)
16:09:15 jgriffith: okay, sounds cool
16:09:32 I don't have anything more on the share service for now
16:09:44 bswartz: rushiagr cool.. thanks!
16:09:50 thanks everyone for your feedback on our share service plans
16:09:57 bswartz: rushiagr let me and others know if we can help with getting started
16:10:09 thanks all
16:10:10 bswartz: rushiagr you should probably propose something to the TC
16:10:27 bswartz: rushiagr or at least send an email to openstack-dev about your plans
16:10:30 thanks for your work and understanding, and best of luck with the new effort
16:10:33 get the ball rolling so to speak
16:11:46 jgriffith: ya, we're going to do that soon
16:11:53 rushiagr: cool
16:12:17 Ok, anything else?
16:12:36 #topic direct/IO direct/Attach
16:12:40 zhiyan: you around?
16:12:45 yes
16:12:46 do you remember the 'host direct I/O / direct attach feature' discussion in last week's meeting?
16:12:47 (https://etherpad.openstack.org/linked-template-image)
16:13:13 yes, go ahead please
16:13:20 * jgriffith didn't know you could use etherpad to write novels :)
16:13:21 I'm not sure there is a discussable draft design currently, hemna is working on a design
16:13:55 i know this is a big change, so in order to keep this cinder requirement from blocking the glance-cinder-driver implementation, i have a plan B..
16:14:05 zhiyan: So my proposal is/was...
16:14:19 the glance-cinder-driver needs cinder support, and now there are two choices for me: 1. change cinder to support attaching a volume to a host / direct IO. 2. upload/download volume content via http. if #1 isn't starting, i'd like to go with #2 as the plan B.
16:14:19 Start with the generic attach/detach lib
16:14:42 yes, it's great if we can get #1
16:15:20 zhiyan: there is effort to get this by H2 (I think) :)
16:15:31 hemna_: ^^
16:15:55 does #2 make sense to you too?
16:16:13 zhiyan: I'm not sure I like #2 but I may not fully understand what you have in mind
16:16:36 zhiyan: it doesn't seem that much different from functionality we already provide
16:17:16 zhiyan -- which storage backend is this feature supporting?
16:17:22 or is it generic?
16:17:31 the use case is like this: a glance api client requests an image download; if we have #1, we can attach the volume on the glance host, read the volume bits and send them to the glance client; if we use #2, glance will act as an http proxy
16:17:46 bswartz: generic, i think.
16:18:00 I believe hemna is en route to the office
16:18:25 bswartz: I'd really not like to see it done as anything other than generic (with backend-specific optimisation paths where appropriate)
16:18:39 DuncanT: +1
16:18:42 oh, this is part of the proposal to use cinder as an image store behind glance
16:18:53 yes
16:18:55 DuncanT: +1000
16:19:06 okay I get it
16:20:35 zhiyan: can you talk a bit more about #2?
16:20:48 Unless you guys covered all of this already
16:20:54 (ie last week's meeting)
16:21:36 Not #2, no
16:21:58 jgriffith: in #2, we would add new apis to cinder to upload/download a volume over http; the glance-cinder-driver would then use this api to read the volume bits when a glance api client asks to download an image
16:22:10 DuncanT: +1
16:22:18 zhiyan: that's what I thought and I don't like that idea
16:22:33 zhiyan: if you want to attach a cinder volume to glance and do something there that's fine
16:22:39 no no no
16:22:47 avishay: ??
16:22:58 jgriffith: no to idea #2
16:23:03 avishay: oh :)
16:23:16 yes yes yes to what you said :)
16:23:28 zhiyan: so adding an HTTP transfer layer to Cinder isn't something anybody seems overly interested in :)
16:23:33 jgriffith: yes, i like it too, that's option #1. i believe it will happen, but maybe not in a timescale that suits me
16:23:44 zhiyan: not sure why not?
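For illustration, a minimal sketch of what zhiyan's option #1 flow could look like inside a glance-cinder driver, assuming a python-cinderclient-style `cinder` handle; connect_locally and disconnect_locally are hypothetical host-side attach/detach helpers, not real brick or cinderclient calls:

    import contextlib

    @contextlib.contextmanager
    def attached_volume(cinder, volume_id, connector):
        # Ask Cinder for connection info, attach the volume on this host
        # (the glance host), and yield the local device path.
        conn = cinder.volumes.initialize_connection(volume_id, connector)
        device_path = connect_locally(conn)       # hypothetical attach helper
        try:
            yield device_path
        finally:
            disconnect_locally(conn)               # hypothetical detach helper
            cinder.volumes.terminate_connection(volume_id, connector)

    def download_image(cinder, volume_id, connector, chunk_size=64 * 1024):
        # Stream the image bits straight off the attached volume to the
        # glance API client, rather than proxying them over a new HTTP API.
        with attached_volume(cinder, volume_id, connector) as device_path:
            with open(device_path, 'rb') as vol:
                for chunk in iter(lambda: vol.read(chunk_size), b''):
                    yield chunk

Option #2 would instead put these reads behind a new Cinder HTTP endpoint, which is the part nobody in the room was keen on.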
16:24:02 zhiyan: It's not that difficult to create/attach a Cinder volume
16:24:07 not sure what I'm missing here...
16:24:15 well if we add HTTP we can compete with Swift ;)
16:24:22 avishay: hush!
16:24:40 :)
16:24:43 lol
16:24:48 jgriffith: Attaching a ceph volume isn't easy
16:24:54 zhiyan: why not start by copy-pasting the code and continue working on the main portion of your code, then replace it with the generic attach/detach code?
16:24:55 jgriffith: Nor sheepdog
16:25:23 DuncanT: fair, but that is what we're supposed to be in the business of doing, isn't it?
16:26:04 jgriffith: Indeed. I'm just not at all sure what the final solution is going to look like
16:26:15 jdurgin1: thoughts ^^
16:26:22 hemna: good timing :)
16:26:30 avishay: do you mean, copy the nova attach/detach code to glance directly temporarily, make the glance-cinder-driver work, then move to the cinder generic attach/detach code?
16:26:45 jdurgin had one suggestion that we never expose a volume device and just have a data transfer API
16:26:47 personally I'd like to see a generic lib for doing i/o to cinder block devices, like brick
16:26:49 zhiyan: does that make sense? just so that this isn't blocking you?
16:26:56 zhiyan: that's an option but I'd rather get the brick code first
16:27:06 jdurgin1: +1
16:27:18 jdurgin1: I'm working on it, hopefully get back to it next week
16:27:22 baby steps
16:27:24 yes, jgriffith, but it seems the brick code isn't ready for attach/detach...
16:27:37 we are working on doing attach/detach in brick soon, for iSCSI and FC to start
16:27:45 I'd just like to see the focus on I/O rather than attach/detach, since that seems to be what most things actually care about
16:27:50 zhiyan, we are supposed to get it done for H2
16:27:58 jdurgin1: fair
16:28:11 jdurgin1: but you'll have to clarify what you mean specifically there for me :)
16:28:12 hemna: thanks. so the timing is not good for me...
16:28:29 well, unfortunately I can't go back in time.
16:28:45 one way to do this would be to reference volume drivers in brick, and add file-like driver methods for opening, closing, reading and writing
16:29:04 where open/close could include attach/detach when that's necessary
16:29:14 jdurgin1: interesting.... so add another abstraction of sorts
16:29:18 hemna: fine. so maybe avishay is right... i need to copy the nova attach/detach code to glance directly temporarily, make the glance-cinder-driver work, then move to brick...
16:29:48 that's an option.
16:29:49 jdurgin1: +1
16:30:00 that's what Cinder did for Grizzly
16:30:16 and if brick is stable after H2, then you could pull in brick for H3 and use that instead
16:30:22 so you aren't blocked on us for the time being.
16:30:54 jdurgin1: are you proposing having python code in the data path for bulk data operations?
16:31:06 if so, that seems unwise
16:31:41 Is this going to Nova as well?
16:32:06 avishay: yes (depending on exactly which piece you're referring to)
16:32:22 avishay: we want to commonize the existing attach/detach code
16:32:29 avishay, the hope was to eliminate the dupe code between nova and cinder, at least for attach/detach
16:32:50 jgriffith: OK, that's what I thought. In that case an open/close API doesn't seem too good.
16:32:58 jgriffith: do you think http volume download/upload is a common feature for cinder?
16:33:15 or no value in implementing it..
16:33:18 I wouldn't really care if it was just Cinder and Glance, but for all Nova I/O to go through python...
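A minimal sketch of the file-like driver methods jdurgin1 describes above (16:28:45), where open/close wrap attach/detach for backends that need a local device; the class and method names here are illustrative only, not actual brick code:

    import os

    class BlockDeviceIO(object):
        """Generic file-like i/o for backends that expose a block device."""

        def __init__(self, connector, connection_info):
            self.connector = connector
            self.connection_info = connection_info
            self._fd = None

        def open(self, flags=os.O_RDWR):
            # For iSCSI/FC-style backends, "open" first attaches the volume
            # to this host and then opens the resulting device node.
            device = self.connector.connect_volume(self.connection_info)
            self._fd = os.open(device['path'], flags)
            return self

        def read(self, length, offset=0):
            os.lseek(self._fd, offset, os.SEEK_SET)
            return os.read(self._fd, length)

        def write(self, data, offset=0):
            os.lseek(self._fd, offset, os.SEEK_SET)
            return os.write(self._fd, data)

        def close(self):
            if self._fd is not None:
                os.close(self._fd)
                self._fd = None
            # "close" also detaches, so callers never manage the device.
            self.connector.disconnect_volume(self.connection_info, None)

A ceph or sheepdog driver could override read/write to go through librbd or the sheep daemon directly, which is how this approach avoids exposing a volume device at all for those backends.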
16:33:19 bswartz: yes, I don't expect python to be a big bottleneck there
16:33:35 zhiyan: I don't.. not really
16:34:00 avishay: this wouldn't affect how vms talk to disks
16:34:11 HTTP volume upload was proposed before... I don't remember the outcome of those discussions
16:34:25 sorry i'm very late
16:34:32 jdurgin1: so what would that path look like?
16:34:34 DuncanT: I think somebody said "use swift"
16:34:37 avishay: what nova I/O are you worried about?
16:34:38 someone wants to upload a 30G volume over http?
16:35:06 jdurgin1: a VM reading/writing to a volume
16:35:29 jgriffith: I was wondering if it could be used e.g. to move a volume from AWS into another public cloud [cough]
16:35:52 DuncanT: hehe
16:35:59 avishay: that would still go through the hypervisor - brick need only be used for attach/detach in that case
16:36:00 DuncanT: I think you could achieve something like that with swift and the cinder volume backup api
16:36:07 DuncanT: so that's a volume migration use case
16:36:11 possibly with some slight enhancements to the backup api
16:36:31 DuncanT: that's going to suck no matter what IMO
16:36:46 jdurgin1: so there will be attach/detach as well as open/close?
16:37:24 avishay: could be, that's the one use case where we care about attaching rather than doing i/o from nova itself
16:37:36 avishay: There will be attach-to-vm, not necessarily attach-to-local-host I think
16:37:52 jdurgin1: I guess I don't quite understand the point still (sorry)
16:38:00 I mean abstraction is neat, but....
16:38:38 OK, if I understand right, that works for me
16:38:39 the driver-level i/o is useful more generally - it lets us do things like implement generic backup or volume migration without modifying every backend
16:38:56 jgriffith: do you think we can wrap brick with something like a cinder-agent? it would take care of attaching the volume to an instance or attaching the volume to the host
16:39:38 zhiyan: I think that would be tricky (permissions, host access etc)
16:39:49 jdurgin1: I guess I don't follow
16:40:00 since i think the 'connection_info' the cinder api provides is enough for a client (i mean cinder-agent) to attach it to an instance or host
16:40:11 jdurgin1: not sure I see the advantage/difference for migration?
16:40:46 zhiyan: I have always just thought it should be something like:
16:40:56 jdurgin1: having every backend support driver-level i/o requires them to modify their drivers
16:40:58 glance has a volume/image database and its connection info
16:41:06 nova or other asks for an image
16:41:14 glance can give back a list of choices
16:41:24 1. Download it via HTTP (existing model)
16:41:37 yes, but glance needs to support create (and upload image content) and download
16:41:38 2. Use a Cinder volume that has it (here's the info for it)
16:41:44 yes
16:41:45 yes
16:41:52 3. Create a Cinder volume with it
16:41:56 jgriffith: right now backup and other new apis that require i/o have to be implemented in each driver; if drivers have an i/o interface, a generic implementation can be done that works for these new apis without modifying the drivers, although they could override it for optimization
16:42:28 winston-d: yes, but it's one addition instead of adding a way of doing it for every new api that requires i/o
16:42:29 jdurgin1: got ya, although for the majority of back-ends that's not really true
16:42:43 jdurgin1: with things like iSCSI we have generic implementations
16:42:50 jgriffith: yes, but just for iscsi
16:43:02 we can bump that up a level to all drivers
16:43:03 jdurgin1: indeed
16:43:11 jgriffith: i think so, but at the implementation level i hit a blocker, such as: how does the glance-cinder driver upload image content to the volume backend? it needs to attach the volume and then write the image bits
16:43:33 zhiyan: that's what I've been saying though, Cinder already provides that
16:43:41 "cinder create --image-id xxxx"
16:44:00 jgriffith: did I miss something where some cinder-agent won't eventually be doing the attach/detach? it will stay as a library?
16:44:44 avishay: I'm not sure how I feel about the whole agent thing, but I think regardless there are some first steps that need to be taken before we tackle that
16:44:48 jgriffith: do you mean the image will be saved in two different places, one in glance, one in cinder?
16:45:17 jgriffith, +1
16:45:19 zhiyan: yes, that would be the model to start
16:45:30 jgriffith: yes, agreed. but i'd like to remove the requirement for cinder servers to have HBAs connected to the storage.
16:45:36 zhiyan: then the next step would be create-volume and download-image all in one
16:45:51 avishay: then use iSCSI :)
16:46:07 so this has been why I don't like FC all along, just FYI :)
16:46:41 avishay: but again that seems like a discussion in and of itself to me
16:46:52 jgriffith: iSCSI doesn't solve everything. Some deployments would want separate management and data paths, and Cinder doesn't need to be on the data.
16:46:58 I think we're straying from the issues zhiyan is presenting
16:47:07 avishay: ?
16:47:19 and we're running out of time :)
16:47:20 separate networks
16:47:24 avishay: you're just moving that requirement to another location
16:47:32 thingee: +1
16:47:37 we still have two topics
16:47:37 OK, let's focus :)
16:47:50 I'd like to close out the issue with zhiyan quickly and move on
16:48:13 zhiyan: It seems option #2 isn't appealing to anybody
16:48:25 zhiyan: Option #1 seems favorable
16:48:31 zhiyan: but I think we need to break it down
16:48:35 zhiyan: take smaller chunks
16:48:36 yes
16:48:49 zhiyan: solve the basic model we talked about earlier first
16:48:52 then build on it
16:48:53 jgriffith: i will think about your option
16:49:01 as far as timing...
16:49:14 I hate the idea of you duplicating code then replacing it when brick is ready
16:49:22 I'd really like to see if you can either:
16:49:27 1. help with what's needed in brick
16:49:35 2. wait for brick to be ready
16:49:41 jgriffith: but i'd like to use brick/cinder-agent in the future
16:49:43 maybe you could stub some things out in the meantime?
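Picking up jdurgin1's point from 16:41:56: with a driver-level i/o object like the BlockDeviceIO sketch earlier, a backup routine only needs to be written once and works for every backend. A rough sketch, where driver.open_volume() and backup_service.put_chunk() are hypothetical names, not existing Cinder APIs:

    CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per read; arbitrary for the sketch

    def backup_volume(driver, volume, backup_service, backup_id):
        # Generic, backend-agnostic backup: read through the driver's i/o
        # interface and hand the bits to whatever backup service is wired in.
        vol_io = driver.open_volume(volume)   # attach happens here if needed
        try:
            size_bytes = volume['size'] * 1024 ** 3   # Cinder sizes are in GB
            offset = 0
            while offset < size_bytes:
                chunk = vol_io.read(CHUNK_SIZE, offset=offset)
                if not chunk:
                    break
                backup_service.put_chunk(backup_id, offset, chunk)
                offset += len(chunk)
        finally:
            vol_io.close()                    # detach happens here if needed

Drivers could still override this with a backend-specific fast path, which is the "override it for optimization" part of the proposal.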
16:50:03 zhiyan: I don't know what the timing is for cinder-agent etc
16:50:13 jgriffith: to be clear, i didn't mean to contribute that duplicate code, just use it locally in the meantime so he's not stuck
16:50:20 My issue here is that I strongly suspect we'll end up shipping H final with a feature that just plain won't work for ceph/sheepdog, and I think that sets a dangerous precedent
16:50:21 zhiyan: but that's going to be mostly handled by avishay, hemna and the FC folks I think :)
16:50:36 avishay: I think we're on the same page
16:51:01 DuncanT: so this is the dilemma, does that mean we just don't do new features in Cinder?
16:51:01 DuncanT, what breaks ceph?
16:51:42 DuncanT: not saying you're wrong
16:51:52 jgriffith: it means don't depend on implementation details of the backend for generic new features
16:51:57 jgriffith: I don't know how to square the circle...
16:51:58 DuncanT: so maybe the answer is cinder is feature complete
16:52:07 jgriffith: we are on the same page i think, i don't like using duplicate code either, i will use brick
16:52:25 jdurgin1: understood
16:52:32 and i'd like to contribute something to brick if hemna or avishay need me
16:52:43 jdurgin1: but I have to rely on you or other Ceph experts to propose alternatives that work for everybody
16:52:51 jdurgin1: and also don't hinder/hamper anybody else
16:53:10 seems the R/O volume support discussion needs to move to the next meeting?
16:53:16 jgriffith: We work like iscsi in this regard, but I can understand the concerns of people who don't
16:53:34 Ok.. I think we should talk about the Ceph and FC issues in another topic
16:53:47 #topic R/O volumes
16:53:50 k
16:54:05 zhiyan: Just to be clear, I intend to see this land in Cinder at some point
16:54:17 zhiyan: I don't necessarily see all of the concerns
16:54:26 zhiyan: and all the hypervisors seem to have some level of support
16:54:40 zhiyan: if I'm completely wrong and it blows up, well then live and learn
16:54:55 does Xen support it? I thought at the summit they said no?
16:55:31 Xen does, according to their docs
16:55:45 avishay: jgriffith found a way that Xen will support it, as well as VMware
16:56:42 working with the VMware team in HP to switch to a different api to support this than what they had planned on using
16:57:15 For xenapi do ping me if you have issues, there is work on Ceph support and it already does some FC/HBA support
16:58:13 oh, misread, there are read-only volumes too
17:01:36 jgriffith: u there?
17:01:39 @jgriffith hey guys, we have the room in this time slot.
17:02:35 Is the previous meeting over?
17:03:12 we lost jgriffith, so yes, it's over
17:03:20 #endmeeting
17:03:30 (not sure that will work)
17:03:35 I believe he needs to end the meeting
17:03:37 hartsocks: won't work without jgriffith :(
17:03:43 ugh.
17:04:00 Wonder what happens when I do this...
17:04:03 hartsocks: Error: Can't start another meeting, one is in progress.
17:04:12 #endmeeting
17:04:13 nice.
17:04:33 I think you can move folks to #openstack-meeting-alt
17:04:43 the alternate meeting channel
17:05:09 Let me check to see if it's clear.
17:05:52 Okay.
17:05:52 If folks can /join me over on openstack-meeting-alt
17:05:59 * rushiagr hopes this net split disconnects jgriffith :P
17:06:29 #endmeeting