15:00:23 <BobBall> #startmeeting XenAPI
15:00:24 <openstack> Meeting started Wed Jan 15 15:00:23 2014 UTC and is due to finish in 60 minutes.  The chair is BobBall. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:27 <openstack> The meeting name has been set to 'xenapi'
15:00:36 <BobBall> hah - ok - my first meeting as chair!
15:00:47 <BobBall> Raise hands if you're here
15:00:54 <thouveng> hello bob
15:01:03 <BobBall> howdy thouveng!
15:01:06 <BobBall> Do we have a matel too?
15:01:37 <matel> hi
15:01:44 <BobBall> Good good
15:01:47 <matel> I am here, sorry, was in the middle of stg.
15:01:50 <BobBall> #topic blueprints
15:02:03 <BobBall> So - thouveng - I think that yours is the only formal blueprint under way for I-2
15:02:08 <BobBall> What's the latest?
15:02:35 <thouveng> Will the attachment of the pci device be in the same BP
15:02:42 <thouveng> I mean the same patchset
15:02:53 <BobBall> I'd put them as separate patchsets for starters
15:03:00 <BobBall> it's easier to merge them than separate them
15:03:14 <BobBall> just put them in the same code branch and when you git review it'll add them as a dependency
15:03:14 <thouveng> ok
15:03:28 <BobBall> Are you experienced with git?
15:03:47 <thouveng> I played a little bit with git but not with gerrit
15:04:23 <BobBall> ok - all I was going to say is playing with multiple patches and git review is fun
15:04:28 <BobBall> git rebase -i is your friend :)
15:04:55 <thouveng> ok thanks for the hint. I'll look at the manual
15:05:04 <BobBall> I do "git rebase -i origin/master" regularly, then edit the patch I want to change, git commit -a --amend to update with the new changes and git rebase --continue to carry on with the rest of the patchset and stop the rebase
15:05:16 <BobBall> that way you can modify a patch rather than add patches on the top
15:05:37 <BobBall> normally bad practice, but absolutely the right thing to do for OpenStack since yours is the only copy of the local git repo until it's formally merged through gerrit
15:05:54 <matel> Before doing git rebase, always "backup" your branch -> this way if something goes wrong, you can always go back.
15:06:15 <thouveng> wow, it seems cool... right now I'm only playing with commit -a --amend
15:06:43 <BobBall> It's a very specific workflow for OpenStack I think, but it seems to work well.
15:06:46 <BobBall> Agreed with Matel!
15:07:06 <BobBall> So - I know we've been talking on the patch and on email, but could you give an update for matel + the logs of where things are with the BP?
15:07:14 <matel> if you are on branch b1, do a backup by: git checkout b1; git checkout -b backup-b1; git checkout b1
15:07:41 <thouveng> Currently the resource tracker is working and sees all pci resources
15:08:01 <thouveng> we are using lspci -vmmkn to get information about pci devices on dom0 and
15:08:15 <thouveng> in particular about which kernel driver is used
15:08:24 <BobBall> BTW - this is where merging changesets is substantially easier than splitting.  git rebase -i has an option to merge changesets, but splitting up is entirely manual and quite hard.  Hence the suggestion to start with multiple changesets and only merge if you have to.
15:08:51 <BobBall> So the lspci is running in dom0 then all the logic that actually uses the output is in nova
15:09:01 <matel> I see.
15:09:02 <thouveng> yes
15:09:21 <thouveng> the plugin only returns the output of lspci as a string
15:09:43 <BobBall> So we get a list from dom0 and then use that to see which devices are using the "pciback" driver - these are the ones that can be passed through.
15:09:52 <BobBall> But the list is further restricted by the pci whitelist in nova
15:09:54 <thouveng> if the device is using pciback as device driver then it is added into a list
15:10:08 <thouveng> yes I confirm
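The filtering step described above can be sketched as follows; the `lspci -vmmkn`-style records here are a made-up two-device sample (in the blueprint the string comes back from the dom0 plugin, and the parsing lives in nova):

```shell
#!/bin/sh
# Parse blank-line-separated lspci records and keep only the slots whose
# kernel driver is pciback, i.e. the devices eligible for passthrough.
# The sample data and device ids are invented for illustration.
sample='Slot: 04:00.0
Class: 0300
Vendor: 10de
Device: 11bf
Driver: pciback

Slot: 00:1f.2
Class: 0106
Vendor: 8086
Device: 8c02
Driver: ahci'
# RS='' puts awk in paragraph mode, one record per device.
passthrough=$(printf '%s\n' "$sample" | awk -v RS='' '{
  slot = ""; drv = ""
  for (i = 1; i <= NF; i++) {
    if ($i == "Slot:")   slot = $(i + 1)
    if ($i == "Driver:") drv  = $(i + 1)
  }
  if (drv == "pciback") print slot
}')
echo "$passthrough"
```

Nova then intersects this list with its own PCI whitelist, as Bob notes below.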
15:10:27 <matel> Ok, thanks for that.
15:10:47 <BobBall> Ok - so the next steps are what thouveng ?
15:10:57 <thouveng> I started to work on the attachment of the device by setting the other-config:pci=xxx
15:11:31 <thouveng> I tested on my machine with two K2 and I can start two VMs with one K2 on each
15:11:41 <BobBall> woohoo
15:11:53 <BobBall> through OpenStack? or still manually?
15:11:53 <thouveng> if I start a third one then the filter says that there are no more GPUs
15:12:05 <thouveng> through openstack with a specific flavor
15:12:12 <BobBall> Brilliant!
15:12:17 <thouveng> cool :)
15:12:20 <matel> This sounds great.
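The attach step thouveng mentions uses XenServer's `other-config:pci` key; a hypothetical `xe` session might look like this (the VM uuid and PCI address are placeholders, and this only runs against a XenServer host with the device already bound to pciback):

```shell
# Hypothetical attach step; format is <virtual-slot>/<domain:bus:device.function>.
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:04:00.0
# The device appears in the guest on the next (re)boot:
xe vm-reboot uuid=<vm-uuid>
```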
15:12:46 <thouveng> I need to test with Doom :)
15:13:10 <BobBall> If you need someone to play against thouveng, I'm sure we can arrange something.
15:13:22 <BobBall> It's been a long time since I've had a good frag.
15:13:31 <thouveng> Thanks for the help :)
15:13:32 <BobBall> You a doom player matel ?
15:13:37 <matel> I-2 is just one week.
15:13:51 <matel> I played doom once. :-)
15:14:01 <BobBall> Indeed.  I'm assuming thouveng that this won't make I-2?
15:14:21 <BobBall> That's OK - we can re-target to I-3.  The code is well under way and so we have strong confidence it'll make I-3.
15:14:24 <matel> Can we split it, so that the reporting can go into I-2?
15:14:28 <thouveng> It will be short if we want to test it
15:14:32 <matel> (just an idea)
15:14:46 <thouveng> I think the code will be ready but without that much testing
15:15:04 <matel> You mean unit tests?
15:15:27 <BobBall> I think it'll need the tests completed before we merge.  I'd prefer to update target to I-3 unless you have a strong desire to fit in I-2 thouveng?
15:15:30 <thouveng> it will be safe to target I-3
15:15:51 <thouveng> matel: yes.
15:16:14 <BobBall> Ok - well let's let this one slip then.
15:16:25 <matel> Without unit tests, it's unlikely that you'll pass the eagle-eyed cores.
15:16:27 <BobBall> I believe russellb will soon be retargetting BPs that aren't code complete
15:16:54 <matel> The reporting patch is complete, isn't it?
15:17:16 <matel> If it's ready, I would remove the draft flag from it.
15:17:27 <matel> So that you can get more feedback.
15:17:44 <thouveng> Yes I think it is time to publish it
15:17:52 <BobBall> I think it's ready - but I wouldn't push for it without at least a draft usage patch up and ready (perhaps if that's ready without unit tests that'd be good)
15:18:02 <BobBall> i.e. a patch that proves the theory can work.
15:18:48 <thouveng> OK. It's not clear what I should provide to prove that
15:19:11 <BobBall> I think just the draft patch for the attaching
15:19:33 <thouveng> Ah ok you mean for the attaching part
15:19:40 <BobBall> yup
15:19:53 <thouveng> I can provide this patch quickly. At least a draft
15:20:03 <BobBall> if the draft attaching patch was uploaded I think that it'd be easier to argue that the API part should be accepted
15:20:21 <russellb> BobBall: already done
15:20:30 <thouveng> Ok I see. I will provide this patch quickly
15:21:45 <BobBall> hehe - thanks russellb!
15:22:08 <BobBall> Ok - next topic unless there is more on BP
15:22:15 <Guest56631> one thing from me.
15:22:25 <BobBall> hi Guest56631 !
15:22:43 <BobBall> Ah - Christopher - nice obvious nickname :)
15:22:46 <Guest56631> this is leif.  I just posted a blueprint last night that would love to get feedback on.
15:22:55 <Guest56631> https://blueprints.launchpad.net/nova/+spec/xenapi-image-cache-management
15:22:58 <BobBall> Linky?
15:23:10 <Guest56631> It's around image cache management and leveraging some of the vmware work.
15:23:36 <BobBall> So what's the basic flow ?
15:23:43 <Guest56631> don't need feedback now, but please give it a look and let me know.
15:23:44 <BobBall> It downloads images preemptively?
15:24:10 <Guest56631> Basic flow is to have the driver run a periodic call to the imagecachemanagement object which in turn checks to see what
15:24:20 <BobBall> or once they have been downloaded, it preserves the base image that was previously downloaded from glance?
15:24:32 <Guest56631> images are not being used.  If they haven't been used for a long enough time they are removed from the host.
15:24:46 <matel> Bob - it's about the management of the cache itself.
15:24:59 <Guest56631> we are thinking about preserving "special" base images and preloaded whitelists, but this is to get us to parity with vmware.
15:25:28 <BobBall> So what I'm trying to understand is what this does on top of the current "caching" of glance images
15:25:30 <Guest56631> Right now we have the delete cache images code, but it appears it only gets called out of band (from my reading).
15:25:50 <BobBall> I'm probably missing something obvious :)
15:25:57 <Guest56631> Right now the current caching keeps the images around forever short of an out-of-band configuration.
15:26:05 <Guest56631> This puts it in the nova configuration for aging.
15:26:11 <BobBall> Ahh - ok, I see.
15:26:18 <Guest56631> small steps to start.
15:26:21 <BobBall> I assumed the base images were deleted when the last VM that used it was deleted.
15:26:44 <Guest56631> We did too and John and I reviewed yesterday.  We determined it wasn't the case (i.e. they remain forever).
15:26:48 <BobBall> So this work will ensure the downloaded images are timed-out and deleted
15:27:00 <Guest56631> If they are not in use.
15:27:08 <Guest56631> and only if it's configured.
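The aging policy described above can be sketched like this; the directory, file names, and the 14-day knob are all illustrative, and the real blueprint manages cached images through a periodic task in the driver rather than a file sweep like this one:

```shell
#!/bin/sh
# Sketch of cache aging: drop cached base images unused for too long.
set -e
cache_dir=$(mktemp -d)          # stand-in for the host's image cache
max_age_days=14                 # illustrative nova config knob
# One stale cached image and one freshly used one.
touch "$cache_dir/stale.vhd"
touch -d '30 days ago' "$cache_dir/stale.vhd"
touch "$cache_dir/fresh.vhd"
# The periodic pass: delete anything untouched for longer than the limit.
find "$cache_dir" -name '*.vhd' -mtime +"$max_age_days" -delete
ls "$cache_dir"
```

The in-use check and the "only if configured" guard from the discussion would wrap around the delete step.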
15:27:20 <BobBall> Doesn't matter if they are - as long as you only delete the glance image and leave the base copy.  It'd be merged in
15:27:25 <BobBall> coalesced*
15:27:55 <Guest56631> okay.  maybe we can talk about that offline?
15:28:01 <BobBall> Maybe deleting rare glance images in all cases (even if in use) would be an option if we don't think they will be used again - that could just save a little space
15:28:04 <BobBall> sure
15:28:16 <BobBall> OK - I'll have a butchers at the BP.
15:28:23 <Guest56631> LOL I like it.
15:28:27 <BobBall> #link https://blueprints.launchpad.net/nova/+spec/xenapi-image-cache-management
15:28:30 <BobBall> For the logs.
15:28:53 <BobBall> Okies - bugs!
15:28:56 <BobBall> #topic bugs
15:29:06 <BobBall> I've fixed a couple - reviews are up
15:29:12 <BobBall> A couple of long standing bugs
15:29:31 <BobBall> #link https://review.openstack.org/#/c/60253/ is around monitoring the GC when garbage collecting
15:29:34 <BobBall> worth talking about that one
15:29:48 <BobBall> Guest56631: worth you having a think on that one too - to get another RS perspective.
15:30:08 <BobBall> Basically the question around this one is whether we should abandon early if the GC is believed not to be running
15:30:18 <BobBall> That was the original patch, but John wasn't totally convinced to start.
15:30:30 <BobBall> I think I've convinced him, so he's happy, but I wanted to get a slightly wider view on it too
15:30:47 <Guest56631> BobBall: thanks!
15:31:10 <BobBall> #link https://review.openstack.org/#/c/66597/  fixes the CPU reporting
15:31:13 <BobBall> finally.
15:31:31 <BobBall> I had too many people asking about it in #openstack so I thought I'd just do it
15:32:24 <BobBall> Oh - another thing for BP's - sorry.  garyk is working on the v3 diagnostics API, and I've done the XAPI support for it (very straight forward in the end)
15:32:27 <BobBall> #link https://review.openstack.org/#/c/66338/
15:32:49 <BobBall> All of the above could do with review comments :)
15:32:56 <BobBall> That's me for bugs.
15:33:07 <BobBall> Any more? or shall we move to matel for the update on QA/gate?
15:33:25 <BobBall> #topic QA
15:33:27 <matel> nice bob.
15:33:30 <BobBall> Over to matel then.
15:33:44 <matel> Okay, so I am working on the localrc atm.
15:34:04 <matel> I failed to get the official devstack scripts running.
15:34:25 <BobBall> because of the wrong localrc?
15:34:41 <matel> The "framework" is here: #link https://github.com/citrix-openstack/xenapi-os-testing
15:35:00 <matel> No messages coming out of the VM, and it just cuts me off.
15:35:11 <matel> Probably messing around the interfaces, I guess.
15:35:32 <BobBall> oh
15:35:38 <BobBall> so this is when devstack is running
15:35:53 <BobBall> it runs differently for some reason then we lose internet connection
15:35:57 <BobBall> not surprising I suppose
15:36:10 <matel> So some VM configuration happens here, I think: https://github.com/openstack-infra/devstack-gate
15:36:36 <matel> That's the update.
15:36:50 <BobBall> OK
15:37:15 <BobBall> And I'm right in saying the nodepool changes that we require are all up and waiting for review?
15:37:42 <matel> As far as I know, yes - although I did not have the chance to try them.
15:38:15 <BobBall> matel - check out https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L270
15:38:38 <BobBall> that's the host config it does
15:38:40 <matel> yes, I saw that.
15:38:43 <BobBall> including messing around with iptables
15:38:45 <BobBall> ah ok
15:39:07 <BobBall> I'll shush up then
15:39:09 <BobBall> ok :)
15:39:12 <BobBall> #topic AOB
15:39:14 <matel> So atm, I start that script, and it cuts me off, so I just need to reverse engineer it.
15:39:40 <BobBall> are you running as root?
15:39:42 <BobBall> or stack?
15:39:48 <matel> as jenkins
15:40:03 <BobBall> I see those changes make a stack user - which is what the rest of the script should probably execute as
15:40:10 <BobBall> so maybe the cutting you off is at that point
15:40:19 <matel> I am following this: https://github.com/openstack-infra/devstack-gate/blob/master/README.rst
15:40:28 <matel> https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
15:40:33 <matel> #link https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
15:40:51 <matel> bot does not seem to pick up these links?
15:40:55 <BobBall> ok
15:40:57 <BobBall> I think it does
15:41:01 <BobBall> but only shows them in the output
15:41:13 <BobBall> i.e. no point in repeating "link to XYZ"
15:41:21 <matel> I thought it puts some garbage here as well.
15:41:36 <BobBall> Maybe.  Well, it's running, so hopefully we'll get logs even if no links
15:41:42 <BobBall> Ok - any more for any more?
15:41:49 <matel> I'm done.
15:42:45 <BobBall> Good good - we'll stop the meeting there then.
15:42:51 <BobBall> #endmeeting