18:08:23 #startmeeting
18:08:24 Meeting started Thu Nov 10 18:08:23 2011 UTC. The chair is renuka. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:08:25 Useful Commands: #action #agreed #help #info #idea #link #topic.
18:08:43 #topic SolidFire volume driver
18:08:51 I was kind of hoping I could get a quick comment from somebody on https://bugs.launchpad.net/nova/+bug/888649 and whether they thought it was a real bug, if we have time?
18:08:53 Launchpad bug 888649 in nova "Snapshots left in undeletable state" [Undecided,New]
18:09:47 I've implemented a SolidFireISCSIDriver in nova/volume/san.py and done a bit of testing here.
18:10:01 Had a couple of questions regarding reviews, submittal, etc.
18:10:16 Also wanted to make sure my assumptions were correct.
18:11:17 jdg: all ears
18:11:24 Ok..
18:11:37 So we behave a bit differently than others.
18:11:55 In order to create a volume you need to have an established account ID.
18:12:14 This account ID also includes all of the CHAP settings and information.
18:12:40 What I ended up with is that the only methods really implemented are create/delete volume.
18:12:55 We don't have any concept of export, assign, etc. When a volume is created it's ready for use.
18:13:30 So my proposal was: the OpenStack administrator would create an account on the SF appliance for each compute node.
18:13:58 They would also set up /etc/iscsid.conf with the appropriate CHAP settings on each compute node.
18:14:25 The only other thing that would be needed is a FLAG for the account ID to use on each compute node.
18:14:48 I didn't want to add anything specific to the base class driver, or the db, etc.
18:14:54 Does this sound reasonable?
18:15:18 Why is the account per compute node, versus how it is normally done, on a per-user basis?
18:15:39 So we have two different accounts we use:
18:15:54 1. The actual management account to send API commands
18:16:13 2. A user account associated with each volume that has the CHAP info embedded
18:16:28 Perhaps I overlooked a way to do this with the existing user accounts?
18:17:03 My thought was that since the compute node will actually make the iSCSI connection to the volume and pass it to the VMs via LVM, this seemed to make sense.
18:17:20 Did I miss something in how the iSCSI implementation works, maybe?
18:17:34 What is typically done is, during the attach call, we have a way of passing connection information (which the volume driver is responsible for) to the compute node that the volume will be attached to.
18:18:17 Right, but I have this chicken-or-egg situation. I can't create a volume without an account.
18:18:58 I have to have the account ID at creation time, which includes CHAP info...
18:19:22 #idea What I could do is dynamically create a random account each time.
18:19:36 This would then fit more into the model that you have today.
18:19:38 are you using some proprietary code for auth?
18:19:49 No, it's just CHAP
18:21:01 we don't have a way of associating this info when we create a user today?
18:21:31 Not that I could find
18:21:45 jdg: Is there a copy of the driver available at all, to see exactly what you did?
18:22:09 I agree, looking at code might be useful
18:22:12 DuncanT: I'm happy to post it/send it.
18:22:51 There's nothing modified in the existing code, just the addition of my subclass and one added FLAG in san.py.
18:23:11 jdg: I am not entirely sure creating random accounts makes sense... sounds more like a hack
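(A minimal sketch of the shape described above: one added flag plus a SanISCSIDriver subclass in nova/volume/san.py. The flag name, the _issue_api_request helper, and the API parameters are illustrative guesses, not the code that was actually submitted.)

```python
# Hypothetical sketch only: a per-node account flag consumed by a
# san.py subclass that implements just create/delete volume.
from nova import flags
from nova.volume import san

FLAGS = flags.FLAGS
flags.DEFINE_string('sf_account_id', None,
                    'SolidFire account ID to use for volumes created '
                    'from this node')


class SolidFireISCSIDriver(san.SanISCSIDriver):
    """Volumes are 'ready for use' at creation; no export/assign step."""

    def create_volume(self, volume):
        # The appliance requires an established account at create time;
        # the CHAP settings live on that account, not on the volume.
        self._issue_api_request('CreateVolume', {
            'accountID': FLAGS.sf_account_id,
            'name': volume['name'],
            'totalSize': int(volume['size']) * 1024 ** 3})

    def delete_volume(self, volume):
        self._issue_api_request('DeleteVolume', {'name': volume['name']})
```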
18:23:23 It's a total hack :)
18:23:35 That's why I thought the account per compute node was a better approach.
18:23:50 The flag / account per node sounds reasonable to me
18:24:02 although how would you deal with it when the user's VM moves from one compute node to another?
18:24:15 or if the user wants to now attach it to a VM on a different compute node
18:24:27 jdg: you will probably have to maintain a mapping of tenants to backend accounts in your backend
18:24:55 I came up with two ideas; one would be to do a clone of the volume (this is really just a map copy for us, so not a big deal)
18:24:56 then you are completely eliminating any kind of auth anyway... so might as well have a single admin account... if the only purpose is to beat some hardware limitation
18:25:24 renuka: and there's the second option (single admin account)
18:25:35 jdg: you could have the driver dynamically create a backend account the first time it sees a given tenant and store the info
18:26:02 vishy: This would be ideal, not sure how this works though?
18:26:14 every request you get contains a context object
18:26:25 vishy: that assumes the user never needs their own credentials
18:26:31 context.project_id
18:26:38 renuka: why would they?
18:26:57 Ahh... so perhaps I could create an account ID based on the project_id
18:26:59 renuka: I don't think you want to give users direct access to the infrastructure that makes the cloud work
18:27:02 jdg: didn't you say the users are created with CHAP info?
18:27:02 jdg: exactly
18:27:13 project_id is the canonical tenant_id from keystone
18:27:15 renuka: yes
18:27:29 so project_id is already verified as the user
18:27:50 When creating the account via our API you need to include the desired CHAP settings.
18:28:13 If I do this via project_id, then I can return the info via the assign/export method.
18:28:15 jdg: so you look up the account based on project_id, and if it doesn't exist
18:28:28 Yep
18:28:28 isn't it cleaner to just have an extension which adds CHAP info for a user
18:28:30 create an account in the backend with a random CHAP password and store it
18:29:15 at the time the user account is created
18:29:24 renuka: that is in keystone
18:29:51 renuka: which means we would have to make a request back to keystone to get the CHAP info
18:29:52 yea, that was my next question... is it worth looking into using keystone for volume?
18:30:16 renuka: long term that might be better, but I don't know if it is worth it short term.
18:30:24 So short term...
18:30:57 Today, who calls the export/assign methods to get the CHAP info after creation? And how is this set up in /etc/iscsid.conf on the compute node?
18:31:01 keystone does support other credential sets. We use them for ec2.
18:31:19 jdg: there is a call called initialize_connection
18:31:33 you pass in an IP address and get back connection info
18:31:46 and the setup on the compute node is different depending on the backend
18:32:04 Ok, so it sounds like the cleanest initial implementation is:
18:32:19 1. call to create_volume comes in
18:32:43 2. I use the project_id to check if an account ID exists; if not, I create it
18:32:50 3. create the volume
18:33:09 yes, the one remaining question is, where is the CHAP info stored?
18:33:14 4. CHAP information is returned to initialize_connection the same as it is today
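(A rough sketch of steps 1 through 3 above. The helpers _get_sf_account, _create_sf_account, and _issue_api_request are hypothetical; only the project_id lookup pattern comes from the discussion.)

```python
from nova.volume import san


class SolidFireISCSIDriver(san.SanISCSIDriver):

    def create_volume(self, volume):
        # Step 2: look up the backend account keyed on the tenant;
        # lazily create it (with randomly generated CHAP credentials)
        # the first time this tenant creates a volume.
        account = self._get_sf_account(volume['project_id'])
        if account is None:
            account = self._create_sf_account(volume['project_id'])
        # Step 3: create the volume under that account.
        self._issue_api_request('CreateVolume', {
            'accountID': account['accountID'],
            'name': volume['name'],
            'totalSize': int(volume['size']) * 1024 ** 3})
```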
18:33:23 will you have multiple drivers connecting to the same backend?
18:33:36 iSCSI only
18:33:48 sorry, I mean multiple copies of the driver code
18:33:57 as in multiple nova-volume hosts
18:34:10 yes
18:34:10 because if so, the CHAP info needs to be stored in the db
18:34:24 so it can be retrieved from another host if necessary
18:34:29 right, but couldn't I do that through model_update?
18:34:37 and there are race conditions that will be a little nasty
18:35:04 CHAP info currently can be stored in a volume but not associated with a project_id
18:35:24 you could do something hacky like look for any volume with the project_id and get the CHAP info from there
18:35:41 (including deleted volumes), but it seems a little fragile
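(A sketch of the model_update idea raised at 18:34:29: the volume manager persists the dict a driver returns from create_volume into that volume's DB row, which is how the existing san.py drivers store connection and CHAP details. The _do_create_volume call, cluster_ip attribute, and field values are illustrative.)

```python
from nova.volume import san


class SolidFireISCSIDriver(san.SanISCSIDriver):

    def create_volume(self, volume):
        sf_vol = self._do_create_volume(volume)  # hypothetical backend call
        # The returned dict is written to the volume's DB row by the
        # volume manager, so any nova-volume host can later hand these
        # details back out via initialize_connection.
        return {'provider_location': '%s:3260,1 %s' % (self.cluster_ip,
                                                       sf_vol['iqn']),
                'provider_auth': 'CHAP %s %s' % (sf_vol['chap_user'],
                                                 sf_vol['chap_secret'])}
```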
18:35:56 jdg: so just to be clear, you are ok with throwing away the initial CHAP credentials generated when the account was created, and from the first create call onwards, you will be using fake ones?
18:36:17 oh? I didn't get that
18:36:21 renuka: no, unfortunately that's not the case
18:36:36 jdg: so you need to be able to get that initial set of creds again
18:36:41 for the second volume
18:37:03 I don't even need the creds again, I just need an account ID (int)
18:37:08 if there is only one copy of the volume driver, you could just store it on disk.
18:37:10 and it has to exist of course
18:37:17 jdg: oh?
18:37:26 jdg: it will pass the creds back to you later?
18:37:43 here's the thing: if this is all being done simply to beat the hardware, might as well have a single account, no?
18:37:44 Yes, you can do a get_account_info call or something along those lines
18:38:04 renuka: I'm leaning towards this idea, at least for the first pass
18:38:12 jdg: oh, in that case I think that is all fine. You don't need to store the creds at all
18:38:29 jdg: this = single account?
18:38:39 renuka: correct
18:38:48 renuka: I agree, it is kind of security theatre, but at least you can look in the backend and see which volumes belong to which account
18:39:12 So to reiterate:
18:39:13 renuka: even if it isn't inherently more secure than using an account per project
18:39:40 The OpenStack admin sets up the SF appliance and creates some "global" SF volume account.
18:40:12 Any compute node that will attach to the SF appliance will use this account for volume creation.
18:41:01 I have all kinds of capabilities to create/return account info; the problem is it's all custom, and I don't know how to build an extension that plays nicely with all the other components.
18:42:34 Does this still make sense, or am I missing something obvious?
18:42:50 jdg: I think the first pass with a single account makes sense at this point
18:43:13 Certainly seems to make sense
18:43:26 Ok, thanks. I'll submit what I've done along with a design doc later today.
18:44:07 as long as the division between create/delete and attach/detach is clean, I think it can be extended to use account info
18:44:09 Not sure of the process, but perhaps someone can help me with that outside of the meeting later today.
18:44:19 once we become more clear on what the driver does
18:44:39 Remember, though, that's the problem: we don't have an attach/detach phase.
18:44:50 We are "ready for use" on creation.
18:45:08 So your iSCSI volumes are all always mounted on every compute host?
18:45:11 jdg: how does a VM on a random compute node connect to the storage?
18:46:05 Sorry, may have started a rat hole. I mean from the SF appliance. There is no separate attach command.
18:46:27 This is where the requirement for the account ID comes into play, because it contains the CHAP info.
18:46:30 oh, the attach command we are talking of is a nova thing
18:46:38 Right... figured that out :)
18:47:06 Ok, I'll send my docs and code and hopefully it will clarify.
18:47:17 Thanks for walking through it with me!!
18:47:17 yea, what I meant was, as long as creating the volume and attaching it to the VM on a compute node have been clearly separated, ...etc
18:47:37 sure
18:47:40 Yes.. that part should be very cleanly separated.
18:47:49 #action jdg to send out docs for SolidFire driver
18:48:12 #action openstack-volume to review SolidFire design
18:51:07 DuncanT: I haven't tried to repro the snapshot bug
18:51:23 is that affecting you?
18:51:28 It is, yes
18:51:48 What I'm trying to get input on is what the correct behaviour should be.
18:52:11 We can have snapshots in existence after the volumes they came from have been deleted just fine.
18:52:20 LVM, for example, can't.
18:53:17 Hence I /think/ that the driver for LVM needs to either block the volume delete if there are snapshots, or delete the snapshots.
18:53:45 makes sense
18:53:53 I'm happy to provide patches for one behaviour or the other, just wanted some input on which to pick.
18:54:03 sounds like a question for the mailing list.
18:54:32 Fair enough, I'll post it up.
18:54:41 DuncanT: I would think delete, but yes, ask for input from the people using it.
18:55:07 vishy: Cheers
18:56:19 anything else before we wrap up?
18:56:58 I'm done for now... will post something to the list about snapshot/backup soon; almost got a sane first pass at a design and example code.
18:58:17 right, thanks all
18:58:22 #endmeeting
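(For reference, a sketch of the "block the delete" option described at 18:53:17, against the LVM driver in nova/volume/driver.py. The _volume_has_snapshots helper is hypothetical; the alternative patch would lvremove the snapshots here instead of raising.)

```python
from nova import exception
from nova import flags

FLAGS = flags.FLAGS


class VolumeDriver(object):
    # ... existing LVM driver methods ...

    def delete_volume(self, volume):
        # Refuse to delete a volume while LVM snapshots of it exist, so
        # snapshots are never left behind in an undeletable state.
        if self._volume_has_snapshots(volume):  # hypothetical helper
            raise exception.Error('volume %s still has snapshots; '
                                  'delete them first' % volume['name'])
        self._try_execute('sudo', 'lvremove', '-f',
                          '%s/%s' % (FLAGS.volume_group, volume['name']))
```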