17:16:38 <johngarbutt> #startmeeting xenapi
17:16:39 <openstack> Meeting started Wed Dec 19 17:16:38 2012 UTC.  The chair is johngarbutt. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:16:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:16:42 <openstack> The meeting name has been set to 'xenapi'
17:16:47 <jgriffith> johngarbutt: sorry about that
17:17:08 <johngarbutt> jgriffith: it happens, no worries, have a good christmas!
17:17:15 <johngarbutt> #topic blueprints
17:17:20 <johngarbutt> hi all
17:17:27 <matelakat> hi
17:17:34 <mikal> Greetings
17:17:36 <johngarbutt> any blueprints to discuss? I think config drive
17:17:48 <mikal> Yes, I'd love to talk about config drive
17:17:49 <johngarbutt> mikal: how's it going? I think matelakat took a look at the code you pushed
17:18:01 <matelakat> y, I have 2 notes
17:18:11 <mikal> I assume it's mostly wrong...
17:18:22 <matelakat> I think it is fine
17:18:44 <matelakat> So, question 1: is it important that the filesystem's label is config-2?
17:18:50 <matelakat> (I guess it is)
17:18:55 <mikal> Yes. cloud-init checks for that.
17:18:57 <zykes-> johngarbutt: hey you :)
17:19:04 <johngarbutt> it looked like what we were planning, from a quick look over the shoulder
17:19:08 <mikal> smoser would be more definitive, but I'm pretty sure it's important
17:19:21 <johngarbutt> that sounds like what I heard too
17:19:29 <matelakat> Okay, so I think the _generate_disk's name-label won't be the filesystem's label.
17:19:47 <matelakat> name-label is a sort of name for the vdi.
17:20:16 <smoser> config-2 is important, yes.
17:20:16 <mikal> Oh, I can see that
17:20:27 <matelakat> so we need to modify _generate_disk, so that when it calls mkfs, it passes a label value (which should be a new argument I guess)
17:20:30 <mikal> It just needs to be passed to the mkfs call in _generate_disk. I can add that.
17:20:38 <johngarbutt> sounds good
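
    A minimal sketch, assuming a hypothetical standalone helper rather than the real _generate_disk code, of what passing the label through to mkfs could look like; 'config-2' is the label cloud-init probes for:

        import subprocess


        def make_labeled_fs(dev_path, fs_type='vfat', label='config-2'):
            # Hypothetical helper: in nova this would become a new label argument
            # on _generate_disk, forwarded to its existing mkfs call.
            # mkfs.vfat takes the volume label via -n; ext-style filesystems use -L.
            label_flag = '-n' if fs_type == 'vfat' else '-L'
            subprocess.check_call(
                ['mkfs', '-t', fs_type, label_flag, label, dev_path])
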
17:20:51 <matelakat> And the other stuff is around the vm_utils modification.
17:21:27 <matelakat> so the _generate_disk creates a vdi, and then creates a link to the instance by creating a vbd.
17:21:40 <matelakat> so after that call, your fresh vdi has a vbd.
17:22:02 <matelakat> And in the next call, you will have another vbd that connects the very same vdi to your compute node.
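
    For context, a rough sketch of the flow matelakat is describing, assuming a context manager along the lines of vm_utils.vdi_attached_here; the exact helper name and signature should be checked against nova/virt/xenapi/vm_utils.py:

        from nova.virt.xenapi import vm_utils


        def write_config_drive_content(session, vdi_ref, populate_fn):
            # The VBD created by _generate_disk keeps the VDI linked to the guest.
            # To write data into the VDI we temporarily plug it into the compute
            # node with a second VBD, then unplug it again on exit.
            with vm_utils.vdi_attached_here(session, vdi_ref, read_only=False) as dev:
                populate_fn('/dev/%s' % dev)  # e.g. mkfs + copy, or dd an image file
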
17:22:03 <smoser> what matters is that from the guest there is a block device present that has an iso9660 filesystem on it with label config-2.
17:22:28 <matelakat> okay, so we definitely need to make that labelling happen.
17:22:29 <johngarbutt> ah, so vfat no good?
17:22:31 <mikal> smoser: this version will be vfat
17:22:45 <smoser> (which is unfortunate, but should probably work)
17:22:49 <mikal> vfat is supported by the code already, I understood there was complexity with ISO9660 for xen?
17:23:09 <matelakat> yes, I guess attaching isos is not as easy as the disks.
17:23:20 <smoser> just for the record, you should attach disks!
17:23:22 <johngarbutt> there isn't a super easy way of adding an ISO without that being the only cd drive, from memory
17:23:23 <smoser> *not* "isos"
17:23:33 <smoser> the disk should have content that happens to be a ISO9660 filesystem.
17:23:45 <matelakat> okay, that could work, I guess.
17:23:49 <smoser> just like if you'd done: mkisofs -o /dev/vdb some_dir
17:24:12 <smoser> that doesn't turn my block device into a cdrom :)
17:24:13 <johngarbutt> I guess that should work, it's just a block device, I think, but would have to check
17:24:17 <johngarbutt> right
17:24:20 <johngarbutt> good point
17:24:30 <mikal> So... It's actually harder to do that anyway.
17:24:45 <johngarbutt> there was a security issue around this before right?
17:24:53 <mikal> As best as I can see I'd have to create the iso9660 filesystem to one side and then dd it onto the vbd
17:24:56 <johngarbutt> it would be good not to add that back
17:25:00 <matelakat> mikal, and what do you think about using the existing config-drive code segments for the filesystem generation?
17:25:12 <mikal> Whereas with vfat I can just mount the new vbd and do the thing
17:25:36 <mikal> Ok, so I think we now have three questions in flight and my brain is full
17:25:45 <johngarbutt> mikal: dd was what we were talking about doing at one point, not very graceful I know
17:25:46 <mikal> Let's stick with the fs format for a sec...
17:25:55 <matelakat> ok.
17:26:14 <mikal> smoser: I thought configdrive supported vfat? It's certainly an option in the code. Will cloud-init get angry with a vfat config drive?
17:26:26 <mikal> smoser: if it wont work, we should remove it from the code
17:26:36 <smoser> the code probably supports writing vfat, but i'd really like to not do that if possible.
17:26:41 <smoser> cloud-init will probably find it.
17:26:55 <smoser> but potentially using vfat just complicates a guest
17:26:58 <mikal> smoser: yeah, the code _definitely_ is willing to write vfat
17:27:09 <mikal> smoser: I don't know if anyone actually does it though
17:27:13 <matelakat> btw, do we have any tests that would show how to use configdrive?
17:27:16 <mikal> smoser: I think it was for backwards compatibility
17:27:30 <smoser> I really don't believe that Xen can possibly be silly enough to inspect the content of a thing it is about to attach and say "oh, that has an ISO9660 filesystem on it, I will attach it as a cdrom"
17:27:51 <johngarbutt> smoser: agreed
17:27:59 <smoser> if it was, and I booted a system with 2 block devices, and then, from the guest, did 'mkisofs -o /dev/vdb some_dir', would a reboot magically make it read-only?
17:28:24 <smoser> to xen, this is just data on a disk.
17:28:42 <johngarbutt> agreed, I was thinking about getting an ISO file read by XenServer
17:28:59 <johngarbutt> if we make a disk contain an ISO, as you say, that should be fine
17:29:02 <mikal> smoser: hmmm. The code as released in Folsom lets users use a flag (config_drive_format) to request vfat
17:29:15 <mikal> So I think we'd have to have a more public discussion if we wanted to drop that
17:29:25 <smoser> mikal, that's fine. I'm not saying rip it out. I'm saying don't proliferate it, or make it the default on a hypervisor.
17:29:36 <mikal> smoser: ok
17:29:41 <smoser> make the working expectation be that it is iso9660 always.
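
    For reference, a hedged nova.conf sketch of the Folsom flag mikal mentions above; iso9660 is the value smoser is asking to be treated as the working default:

        [DEFAULT]
        # Accepted values at the time are iso9660 and vfat; iso9660 is the
        # expectation smoser asks drivers to default to.
        config_drive_format = iso9660
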
17:30:02 <matelakat> So, back to the code?
17:30:03 <mikal> Alright. I will rearrange the code to do an iso9660, which may or may not require some horrible dd hackery
17:30:12 <mikal> Yep, so next I think was the vbd thing.
17:30:21 <mikal> I just saw your review comments. I haven't read them yet.
17:30:27 <mikal> I assume that's just a case of some refactoring?
17:30:46 <matelakat> yes.
17:30:55 <mikal> Ok, I'm not too worried about that one then
17:31:00 <mikal> What was the third thing again?
17:31:18 <matelakat> I had two, the label, and the vbd stuff.
17:31:20 <mikal> Oh, code reuse for generation
17:31:34 <mikal> I think it's a really good idea to keep as much of the logic in virt/configdrive.py as possible
17:31:38 <mikal> That way you get updates for free
17:31:44 <johngarbutt> +1
17:31:47 <matelakat> +1
17:31:58 <johngarbutt> cool, that is looking good
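
    A hedged sketch of the reuse being proposed, modelled loosely on how the libvirt driver uses nova/virt/configdrive.py around this time; the class and method names (InstanceMetadata, ConfigDriveBuilder, make_drive, cleanup) and the admin_pass key are assumptions to verify against the tree:

        from nova.api.metadata import base as instance_metadata
        from nova.virt import configdrive


        def build_config_drive_image(instance, injected_files, admin_password, path):
            # Let the shared config drive code assemble the metadata and write the
            # image file; the xenapi driver then only has to get that file onto a VDI.
            extra_md = {}
            if admin_password:
                extra_md['admin_pass'] = admin_password
            inst_md = instance_metadata.InstanceMetadata(instance,
                                                         content=injected_files,
                                                         extra_md=extra_md)
            cdb = configdrive.ConfigDriveBuilder(instance_md=inst_md)
            try:
                cdb.make_drive(path)  # honours config_drive_format (iso9660 / vfat)
            finally:
                cdb.cleanup()
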
17:32:05 <mikal> A few other quick things -- your file injection didn't support admin passwords. Config drive does. Should config drive in xen set admin passwords?
17:32:09 <matelakat> So basically, that would mean that we won't ask _generate_disk to create the fs.
17:32:16 <johngarbutt> #link https://blueprints.launchpad.net/nova/+spec/xenapi-config-drive
17:32:20 <mikal> matelakat: correct
17:32:38 <mikal> matelakat: well, it will create an FS in a temp file, and then copy it across to the block device
17:32:50 <matelakat> mikal: y
17:33:08 <matelakat> mikal: you mean dd, right?
17:33:14 <mikal> matelakat: yep
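
    Roughly what the "build in a temp file, then copy across" step could look like; a sketch only, with the device path supplied by whatever attaches the VDI to the compute node:

        import subprocess


        def copy_image_to_device(image_path, dev_path):
            # Write the prepared iso9660 (or vfat) image byte-for-byte onto the
            # block device backing the config drive VDI, then flush to disk.
            subprocess.check_call(
                ['dd', 'if=%s' % image_path, 'of=%s' % dev_path,
                 'bs=1M', 'conv=fsync'])
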
17:33:38 <matelakat> So, let's pick up the admin password question.
17:33:41 <johngarbutt> one sec, what is the question about password injection?
17:33:48 <johngarbutt> xen does that using the agent at the moment
17:33:49 <matelakat> :-)
17:33:57 <mikal> configdrive wants to inject passwords onto the config disk
17:34:05 <johngarbutt> I think that is fine
17:34:06 <mikal> Well, I don't understand the agents very well
17:34:15 <mikal> Is there an agent if you're using config drive?
17:34:15 <matelakat> we added some flags, so the agent is optional
17:34:27 <matelakat> let me look for the changeset.
17:34:29 <johngarbutt> I think we turn off the agent for config drive
17:34:35 <johngarbutt> at this stage anyway
17:34:47 <mikal> So therefore we _have_ to have the admin password in the config drive, yeah?
17:34:47 <johngarbutt> we can look at whether there are things it wants to do later
17:34:59 <johngarbutt> mikal: I guess
17:35:11 <mikal> Cool
17:35:36 <matelakat> #link https://review.openstack.org/15212
17:35:40 <johngarbutt> something we could look at doing later: the agent does later password changes
17:35:53 <mikal> Yeah, I had a question about that for smoser
17:35:53 <johngarbutt> hang on, now in English...
17:36:08 <mikal> smoser: does cloud-init only run at boot? How are password changes later done?
17:36:51 <johngarbutt> the agent can currently use xenstore to do two-way communication, so it can reset the password later; there was talk of adding something like a place to post an encrypted password and a place to poll and see if a password reset is required
17:37:04 <smoser> mikal, tcp, puppet, any other daemon.
17:37:44 <mikal> smoser: ok, so cloud-init is boot only and then you have to be an adult? That's cool because there's no attempt to update the configdrive with new data later, which would be ... complicated
17:37:47 <johngarbutt> it's more for Windows; for users just doing things the old way, they need some other way, so maybe it is a bit of an edge case
17:37:57 <matelakat> mikal: xenapi_disable_agent config option could be used to turn off the agent.
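
    For reference, a hedged nova.conf sketch of the option matelakat names; see the review linked below for the authoritative spelling and default:

        [DEFAULT]
        # Skip the in-guest agent entirely and rely on config drive instead.
        xenapi_disable_agent = True
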
17:38:22 <johngarbutt> OK, so any other configdrive things?
17:38:24 <mikal> matelakat: cool. I haven't got as far as actually running this code yet. I need to build a test environment first.
17:38:34 <mikal> No, I think that's it from me. Sorry for taking so much time.
17:38:52 <johngarbutt> mikal: devstack works well for that, not sure what you guys use internally
17:38:52 <matelakat> johngarbutt: done
17:39:21 <johngarbutt> not tried it, but you should be able to run XenServer inside VirtualBox, and run devstack on the VirtualBox VM
17:39:41 <johngarbutt> no problem, it was good to chat about that
17:39:42 <mikal> johngarbutt: that's my plan, but I only downloaded xenserver yesterday
17:39:53 <johngarbutt> cool
17:40:04 <johngarbutt> any other blueprint?
17:40:40 <johngarbutt> anyone got news on the idempotent action stuff?
17:41:06 <zykes-> johngarbutt: speaking of blueprints: https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage
17:41:39 <johngarbutt> pvo: were your guys going to look at OVS support?
17:41:41 <johngarbutt> interesting
17:41:55 <johngarbutt> zykes: I see the plans are KVM only at the moment
17:42:32 <johngarbutt> zykes: I think there is a new SR being added to help with HBA support to attach to random LUNs, so that might allow XenServer to work with these things
17:43:15 <zykes-> johngarbutt: eta?
17:43:44 <johngarbutt> zykes: no idea right now, let me find out, there may be something on the public XCP repos somewhere
17:44:00 <johngarbutt> any more blueprint stuff, before we move to docs?
17:44:10 <zykes-> OVS what, btw?
17:44:18 <johngarbutt> Open vSwitch
17:44:37 <johngarbutt> #topic docs
17:44:49 <johngarbutt> anyone with any specific docs issues today?
17:45:11 <matelakat> The only issue, I guess, is that I need to document the XenAPINFS stuff.
17:45:14 <zykes-> johngarbutt: yeh, but for what :)
17:45:26 <johngarbutt> docs relating to XenServer and XenAPI support
17:45:49 <zykes-> sorry for bothering, but ovs + <what>?
17:45:54 <johngarbutt> #action matelakat to document XenAPI NFS
17:46:10 <johngarbutt> zykes: OVS + XenServer + Quantum
17:46:21 <johngarbutt> #topic bugs
17:46:36 <johngarbutt> any killer bugs people want to discuss, preferably XenServer-related ones
17:46:51 <matelakat> We had some floating-ip issues this week, see the fix here:
17:47:08 <matelakat> #link https://review.openstack.org/18337
17:47:15 <johngarbutt> right, with nova-network HA flatdhcp
17:47:23 <matelakat> multihost
17:47:37 <johngarbutt> sorry yes, that is what I meant with HA
17:47:39 <matelakat> y
17:47:48 <matelakat> And the resize stuff
17:48:12 <matelakat> we ran tempest tests, and the flavor was smaller than the image, and the shrink operation failed.
17:48:24 <johngarbutt> #action matelakat to raise a resize bug
17:48:24 <matelakat> But I haven't raised a bug.
17:48:29 <matelakat> y.
17:48:51 <johngarbutt> OK, so any more?
17:49:03 <johngarbutt> #topic QA
17:49:28 <johngarbutt> not heard from the Rackspace QA team yet
17:49:37 <matelakat> some random failures with volume operations while running a 12.04 guest.
17:49:44 <matelakat> mostly timeout
17:49:48 <johngarbutt> there was hope to start co-ordinating efforts
17:49:55 <johngarbutt> as mentioned in the Folsom release notes, right?
17:50:18 <johngarbutt> OK, moving on if nothing else...
17:50:26 <johngarbutt> #topic AOB
17:50:34 <johngarbutt> Any more for any more?
17:50:42 <matelakat> pass
17:51:01 <zykes-> um, johngarbutt: doesn't it have OVS support already?
17:51:32 <johngarbutt> XenServer has OVS support, Quantum has OVS support, but the two don't play well together
17:51:44 <johngarbutt> there are two patches pending to fix that
17:51:50 <johngarbutt> from Internap
17:52:02 <johngarbutt> #topic date of next meeting
17:52:09 <johngarbutt> next week is Christmas!
17:52:23 <johngarbutt> I vote we skip next week, and chat again the following week?
17:52:37 <matelakat> What's the date exactly?
17:52:56 <johngarbutt> Jan 2nd
17:53:18 <johngarbutt> sounds like that is everything
17:53:20 <johngarbutt> thanks all!
17:53:21 <matelakat> hmm, I don't expect too much activity, but let's go for it.
17:53:28 <johngarbutt> #endmeeting