15:00:23 #startmeeting XenAPI
15:00:24 Meeting started Wed Jan 15 15:00:23 2014 UTC and is due to finish in 60 minutes. The chair is BobBall. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:27 The meeting name has been set to 'xenapi'
15:00:36 hah - ok - my first meeting as chair!
15:00:47 Raise hands if you're here
15:00:54 hello bob
15:01:03 howdy thouveng!
15:01:06 Do we have a matel too?
15:01:37 hi
15:01:44 Good good
15:01:47 I am here, sorry, was in the middle of something.
15:01:50 #topic blueprints
15:02:03 So - thouveng - I think that yours is the only formal blueprint under way for I-2
15:02:08 What's the latest?
15:02:35 Will the attachment of the PCI device be in the same BP?
15:02:42 I mean the same patchset
15:02:53 I'd put them as separate patchsets for starters
15:03:00 it's easier to merge them than to separate them
15:03:14 just put them in the same code branch and when you git review it'll add them as a dependency
15:03:14 ok
15:03:28 Are you experienced with git?
15:03:47 I've played a little bit with git but not with gerrit
15:04:23 ok - all I was going to say is that playing with multiple patches and git review is fun
15:04:28 git rebase -i is your friend :)
15:04:55 ok thanks for the hint. I'll look at the manual
15:05:04 I do "git rebase -i origin/master" regularly, then edit the patch I want to change, git commit -a --amend to update it with the new changes, and git rebase --continue to carry on with the rest of the patchset and finish the rebase
15:05:16 that way you can modify a patch rather than add patches on top
15:05:37 normally bad practice, but absolutely the right thing to do for OpenStack since yours is the only copy of the local git repo until it's formally merged through gerrit
15:05:54 Before doing git rebase, always "back up" your branch -> that way, if something goes wrong, you can always go back.
15:06:15 wow, it seems cool... right now I'm only playing with commit -a --amend
15:06:43 It's a very specific workflow for OpenStack I think, but it seems to work well.
15:06:46 Agreed with matel!
15:07:06 So - I know we've been talking on the patch and on email, but could you give an update for matel + the logs of where things are with the BP?
15:07:14 if you are on branch b1, do a backup by: git checkout b1; git checkout -b backup-b1; git checkout b1
15:07:41 Currently the resource tracker is working and sees all PCI resources
15:08:01 we are using lspci -vmmkn to get information about PCI devices on dom0 and
15:08:15 in particular about which kernel driver is used
15:08:24 BTW - this is where merging changesets is substantially easier than splitting. git rebase -i has an option to merge changesets, but splitting them up is entirely manual and quite hard. Hence the suggestion to start with multiple changesets and only merge if you have to.
15:08:51 So lspci runs in dom0, and all the logic that actually uses the output is in nova
15:09:01 I see.
15:09:02 yes
15:09:21 the plugin only returns the output of lspci as a string
15:09:43 So we get a list from dom0 and then use that to see which devices are using the "pciback" driver - these are the ones that can be passed through.
15:09:52 But the list is further restricted by the PCI whitelist in nova
15:09:54 if the device is using pciback as its device driver then it is added to the list
15:10:08 yes I confirm
15:10:27 Ok, thanks for that.
15:10:47 Ok - so what are the next steps, thouveng?
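[A minimal sketch of the rebase workflow BobBall and matel describe above, assuming a local branch named b1 carrying the whole patch series; the branch name is illustrative.]

    # back up the branch first, as matel suggests
    git checkout b1
    git checkout -b backup-b1
    git checkout b1
    # interactive rebase onto master; mark the patch you want to change as "edit"
    git rebase -i origin/master
    # ...make your changes, then fold them into that patch...
    git commit -a --amend
    # replay the rest of the series and finish the rebase
    git rebase --continue
    # re-upload the whole dependent series to gerrit
    git review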
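[Roughly what the reporting described above looks for, run by hand in dom0 - a sketch, not the plugin's actual code; the slot and vendor/device IDs are illustrative.]

    # -vmm gives machine-readable key/value records, -k adds the kernel driver in use, -n keeps numeric IDs
    lspci -vmmkn
    # a device is a passthrough candidate when its record names pciback as the driver, e.g.:
    #   Slot:   04:00.0
    #   Vendor: 10de
    #   Device: 11bf
    #   Driver: pciback
    # nova then filters the candidates against its PCI whitelist (pci_passthrough_whitelist in nova.conf)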
15:10:57 I started to work on the attachment of the device by setting other-config:pci=xxx
15:11:31 I tested on my machine with two K2s and I can start two VMs with one K2 on each
15:11:41 woohoo
15:11:53 through OpenStack? or still manually?
15:11:53 if I start a third one then the filter says that there are no more GPUs
15:12:05 through OpenStack with a specific flavor
15:12:12 Brilliant!
15:12:17 cool :)
15:12:20 This sounds great.
15:12:46 I need to test with Doom :)
15:13:10 If you need someone to play against, thouveng, I'm sure we can arrange something.
15:13:22 It's been a long time since I've had a good frag.
15:13:31 Thanks for the help :)
15:13:32 You a Doom player, matel?
15:13:37 I-2 is just one week away.
15:13:51 I played Doom once. :-)
15:14:01 Indeed. I'm assuming, thouveng, that this won't make I-2?
15:14:21 That's OK - we can re-target to I-3. The code is well under way, so we have strong confidence it'll make I-3.
15:14:24 Can we split it, so that the reporting can go into I-2?
15:14:28 It will be tight if we want to test it
15:14:32 (just an idea)
15:14:46 I think the code will be ready but without that much testing
15:15:04 You mean unit tests?
15:15:27 I think it'll need the tests completed before we merge. I'd prefer to re-target to I-3 unless you have a strong desire to fit it into I-2, thouveng?
15:15:30 it will be safer to target I-3
15:15:51 matel: yes.
15:16:14 Ok - well let's let this one slip then.
15:16:25 Without unit tests, it's unlikely that you'll get past the eagle-eyed cores.
15:16:27 I believe russellb will soon be re-targeting BPs that aren't code complete
15:16:54 The reporting patch is complete, isn't it?
15:17:16 If it's ready, I would remove the draft flag from it.
15:17:27 So that you can get more feedback.
15:17:44 Yes I think it is time to publish it
15:17:52 I think it's ready - but I wouldn't push for it without at least a draft usage patch up and ready (perhaps if that's ready without unit tests that'd be good)
15:18:02 i.e. a patch that proves the theory can work.
15:18:48 OK. It's not clear what I should provide to prove that
15:19:11 I think just the draft patch for the attaching
15:19:33 Ah ok, you mean for the attaching part
15:19:40 yup
15:19:53 I can provide this patch quickly. At least a draft
15:20:03 if the draft attaching patch were uploaded, I think it'd be easier to argue that the API part should be accepted
15:20:21 BobBall: already done
15:20:30 Ok I see. I will provide this patch quickly
15:21:45 hehe - thanks russellb!
15:22:08 Ok - next topic unless there is more on BPs
15:22:15 one thing from me.
15:22:25 hi Guest56631!
15:22:43 Ah - Christopher - nice obvious nickname :)
15:22:46 this is leif. I just posted a blueprint last night that I would love to get feedback on.
15:22:55 https://blueprints.launchpad.net/nova/+spec/xenapi-image-cache-management
15:22:58 Linky?
15:23:10 It's around image cache management and leveraging some of the VMware work.
15:23:36 So what's the basic flow?
15:23:43 don't need feedback now, but please give it a look and let me know.
15:23:44 It downloads images preemptively?
15:24:10 Basic flow is to have the driver run a periodic call to the image cache management object, which in turn checks to see what
15:24:20 or once they have been downloaded, it preserves the base image that was previously downloaded from glance?
15:24:32 images are not being used. If they haven't been used for a long enough time they are removed from the host.
15:24:46 Bob - it's about the management of the cache itself.
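[Referring back to the attach work thouveng describes at the start of this section - a sketch of the manual XenServer equivalent and the flavor side; the PCI address, vendor/product IDs, alias name and flavor are illustrative, and the exact mechanism in the patch may differ.]

    # dom0: ask XenServer to pass device 04:00.0 through to the (halted) VM
    xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:04:00.0
    # nova side: expose the device via an alias, then request one per instance from a flavor
    # nova.conf: pci_alias = {"vendor_id":"10de", "product_id":"11bf", "name":"K2"}
    nova flavor-key m1.gpu set "pci_passthrough:alias"="K2:1"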
15:24:59 we are thinking about preserving "special" base images and preloaded whitelists, but this is to get us to parity with VMware.
15:25:28 So what I'm trying to understand is what this does on top of the current "caching" of glance images
15:25:30 Right now we have the delete-cached-images code, but it appears it only gets called out of band (from my reading).
15:25:50 I'm probably missing something obvious :)
15:25:57 Right now the current caching keeps the images around forever, short of an out-of-band configuration.
15:26:05 This puts the aging in the nova configuration.
15:26:11 Ahh - ok, I see.
15:26:18 small steps to start.
15:26:21 I assumed the base images were deleted when the last VM that used them was deleted.
15:26:44 We did too, and John and I reviewed it yesterday. We determined that wasn't the case (i.e. they remain forever).
15:26:48 So this work will ensure the downloaded images are timed out and deleted
15:27:00 If they are not in use.
15:27:08 and only if it's configured.
15:27:20 Doesn't matter if they are - as long as you only delete the glance image and leave the base copy. It'd be merged in
15:27:25 coalesced*
15:27:55 okay. maybe we can talk about that offline?
15:28:01 Maybe deleting rare glance images in all cases (even if in use) would be an option if we don't think they will be used again - that could just save a little space
15:28:04 sure
15:28:16 OK - I'll have a butcher's at the BP.
15:28:23 LOL I like it.
15:28:27 #link https://blueprints.launchpad.net/nova/+spec/xenapi-image-cache-management
15:28:30 For the logs.
15:28:53 Okies - bugs!
15:28:56 #topic bugs
15:29:06 I've fixed a couple - reviews are up
15:29:12 A couple of long-standing bugs
15:29:31 #link https://review.openstack.org/#/c/60253/ is around monitoring the GC when garbage collecting
15:29:34 worth talking about that one
15:29:48 Guest56631: worth you having a think on that one too - to get another RS perspective.
15:30:08 Basically the question around this one is whether we should abandon early if the GC is believed not to be running
15:30:18 That was the original patch, but John wasn't totally convinced to start with.
15:30:30 I think I've convinced him, so he's happy, but I wanted to get a slightly wider view on it too
15:30:47 BobBall: thanks!
15:31:10 #link https://review.openstack.org/#/c/66597/ fixes the CPU reporting
15:31:13 finally.
15:31:31 I had too many people asking about it in #openstack so I thought I'd just do it
15:32:24 Oh - another thing for BPs - sorry. garyk is working on the v3 diagnostics API, and I've done the XAPI support for it (very straightforward in the end)
15:32:27 #link https://review.openstack.org/#/c/66338/
15:32:49 All of the above could do with review comments :)
15:32:56 That's me for bugs.
15:33:07 Any more? or shall we move to matel for the update on QA/gate?
15:33:25 #topic QA
15:33:27 nice bob.
15:33:30 Over to matel then.
15:33:44 Okay, so I am working on the localrc atm.
15:34:04 I failed to get the official devstack scripts running.
15:34:25 because of the wrong localrc?
15:34:41 The "framework" is here: #link https://github.com/citrix-openstack/xenapi-os-testing
15:35:00 No messages coming out of the VM, and it just cuts me off.
15:35:11 Probably messing around with the interfaces, I guess.
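[Referring back to the xenapi-image-cache-management blueprint discussed above - a sketch of the nova.conf aging knobs already used by the libvirt/VMware image cache managers, which the aging leif describes would key off; whether the xenapi BP reuses these exact option names is an assumption, and the values shown are just the usual defaults.]

    # nova.conf
    [DEFAULT]
    # how often the compute manager runs the image cache manager periodic task (seconds)
    image_cache_manager_interval = 2400
    # remove cached base images that no instance on the host is using...
    remove_unused_base_images = True
    # ...but only once they have been unused for at least this long (seconds)
    remove_unused_original_minimum_age_seconds = 86400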
15:35:32 oh
15:35:38 so this is when devstack is running
15:35:53 it runs differently for some reason and then we lose the internet connection
15:35:57 not surprising I suppose
15:36:10 So some VM configuration happens here, I think: https://github.com/openstack-infra/devstack-gate
15:36:36 That's the update.
15:36:50 OK
15:37:15 Am I right in saying the nodepool changes that we require are all up and waiting for review?
15:37:42 As far as I know, yes - although I did not have the chance to try them.
15:38:15 matel - check out https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L270
15:38:38 that's the host config it does
15:38:40 yes, I saw that.
15:38:43 including messing around with iptables
15:38:45 ah ok
15:39:07 I'll shush up then
15:39:09 ok :)
15:39:12 #topic AOB
15:39:14 So atm, I start that script, and it cuts me off, so I just need to reverse-engineer it.
15:39:40 are you running as root?
15:39:42 or stack?
15:39:48 as jenkins
15:40:03 I see those changes make a stack user - which is what the rest of the script should probably execute as
15:40:10 so maybe that's the point where it cuts you off
15:40:19 I am following this: https://github.com/openstack-infra/devstack-gate/blob/master/README.rst
15:40:28 https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
15:40:33 #link https://github.com/openstack-infra/devstack-gate/blob/master/README.rst#simulating-devstack-gate-tests
15:40:51 The bot does not seem to pick up these links?
15:40:55 ok
15:40:57 I think it does
15:41:01 but only shows them in the output
15:41:13 i.e. no point in repeating "link to XYZ"
15:41:21 I thought it would put some garbage here as well.
15:41:36 Maybe. Well, it's running, so hopefully we'll get logs even if no links
15:41:42 Ok - any more for any more?
15:41:49 I'm done.
15:42:45 Good good - we'll stop the meeting there then.
15:42:51 #endmeeting
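[For the logs - a rough sketch of the "Simulating Devstack Gate Tests" steps matel is following; this is from memory of the README linked above, so treat the paths and variables as illustrative and defer to the link for the authoritative version.]

    # run on the freshly built worker VM (matel is running these as the jenkins user)
    export WORKSPACE=/tmp/workspace
    mkdir -p $WORKSPACE && cd $WORKSPACE
    git clone https://github.com/openstack-infra/devstack-gate
    # tell the gate scripts which project/branch to set up (the README lists further ZUUL_* variables)
    export ZUUL_PROJECT=openstack/nova
    export ZUUL_BRANCH=master
    # copy the wrapper so the running copy isn't modified underneath you, then kick it off
    cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
    ./safe-devstack-vm-gate-wrap.sh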