15:06:46 #startmeeting XenAPI
15:06:47 Meeting started Wed Jul 16 15:06:46 2014 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:06:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:50 The meeting name has been set to 'xenapi'
15:06:57 howdy all
15:07:07 #topic Mid cycle meet up
15:07:09 howdy
15:07:20 just thinking, who is going to the meet up?
15:07:26 I am heading over for that
15:07:31 * BobBall keeps his hands in his pockets
15:07:37 the nova mid cycle I mean
15:07:44 Okay, just checking
15:07:45 which makes it more impressive that I can continue to type
15:07:52 * johnthetubaguy giggles
15:07:56 #topic CI
15:08:01 OK, so how is the CI this week
15:08:05 Fun
15:08:20 garyk very helpfully pointed out a break that meant I had to rebase devstack-gate
15:08:37 I'm thinking of a cronjob to rebase d-g which would be exciting
15:08:51 or perhaps what would be better is a merge... Hmmm... anyway
15:09:01 it meant we were broken for around 2 hours
15:09:07 which is a shocking length of time
15:09:22 Apart from that, we're hitting the bugs that are also seen by the gate
15:09:22 ah
15:09:25 so all's good
15:09:43 ah, interesting, I thought we might avoid some of the gate bugs
15:09:46 which ones are you seeing?
15:10:39 https://bugs.launchpad.net/openstack-ci/+bug/1286818 for example
15:10:41 Launchpad bug 1286818 in openstack-ci "Ubuntu package archive periodically inconsistent causing gate build failures" [Low,Triaged]
15:10:45 and https://bugs.launchpad.net/devstack/+bug/1340660
15:10:48 Launchpad bug 1340660 in devstack "Apache failed to start in the gate" [Undecided,New]
15:10:58 (the first one is a known Rackspace problem! :) )
15:11:49 yeah, I never really understand that
15:12:05 people don't seem to agree that's an issue, etc.
15:12:24 really?
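The periodic devstack-gate refresh floated above could be driven from cron by something like the following. This is only a sketch of the idea: the checkout path, and the choice of a fast-forward merge rather than a rebase, are assumptions, not what the CI actually runs.

```python
import subprocess

def refresh_devstack_gate(checkout="/opt/devstack-gate"):
    """Sketch of a periodic devstack-gate refresh, meant to run from cron.

    The path and the merge-over-rebase choice are assumptions. A
    fast-forward-only merge keeps the checkout in lockstep with upstream
    without rewriting local history.
    """
    # Pull the latest upstream refs.
    subprocess.check_call(["git", "fetch", "origin"], cwd=checkout)
    # Fast-forward the local branch to match origin/master.
    subprocess.check_call(["git", "merge", "--ff-only", "origin/master"],
                          cwd=checkout)
```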
15:12:27 I get it a lot
15:12:37 well, you are hitting the ubuntu mirrors, not the rackspace mirrors
15:12:41 you're talking about the Rackspace Ubuntu mirror sometimes being out of date, right?
15:13:06 no - mirror.rackspace.com/ubuntu
15:13:12 I think you need to explicitly point to the rackspace mirror, when I checked up on that
15:13:15 that's the hurtful one
15:13:19 hmm, that sounds like our mirror, lol
15:13:20 the default is the rackspace mirror
15:14:00 but that exception trace in the bug doesn't list the rackspace mirror, I guess that's what was confusing
15:14:06 remembering, we actually point _away_ from the rackspace mirror so we don't hit that problem (that one is gate only) - the second one is the one I was thinking of but I just hit those two in check jobs which is why I was thinking about it
15:14:44 https://bugs.launchpad.net/openstack-ci/+bug/1251117
15:14:45 Launchpad bug 1251117 in openstack-ci "Rackspace package mirror periodically inconsistent causing gate build failures" [Low,Triaged]
15:14:50 That's the job I should have pasted, sorry
15:15:00 -job + bug
15:15:37 ah, OK, so I will try and raise that internally, last time someone dug into that, I got told people were not actually hitting the rackspace mirror
15:15:47 it could be a network issue on the way there I guess
15:16:44 anyways, thanks for the extra detail
15:17:04 I was just checking we didn't see loads of the ssh issues
15:17:10 nah
15:17:11 I was hoping we sidestep those ones
15:17:12 not this week
15:17:56 cool
15:18:01 any more on CI?
15:18:09 any news on getting those extra tests enabled yet?
15:18:30 waiting on a devstack review
15:19:07 OK, you got a link?
15:19:19 I can see if my +1 helps
15:19:37 https://review.openstack.org/#/c/107345/
15:20:11 does that image work OK?
15:20:26 doesn't look like a VHD one, I guess it's a raw one?
15:20:37 it's the three part image
15:20:41 it's what we use in our internal CI
15:20:54 so what about removing the github link altogether?
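"Pointing away from the rackspace mirror", as described above, amounts to rewriting the apt source entries. A minimal sketch of that rewrite, operating on text rather than a real node's /etc/apt/sources.list; the upstream-archive URL and the helper's name are assumptions, only mirror.rackspace.com/ubuntu comes from the discussion:

```python
def repoint_mirror(sources_text,
                   old="http://mirror.rackspace.com/ubuntu",
                   new="http://archive.ubuntu.com/ubuntu"):
    """Rewrite every apt source entry that uses the provider mirror (old)
    to use the upstream Ubuntu archive (new) instead.

    Hypothetical helper: on a real node this would read and rewrite
    /etc/apt/sources.list before running apt-get update."""
    return sources_text.replace(old, new)
```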
15:20:59 https://github.com/citrix-openstack/qa/blob/master/install-devstack-xen.sh#L428
15:21:14 We want to test VHD, so I'd argue against removing it
15:21:51 Both formats have been broken by someone else's changes in the past - so we might as well have both in there because tempest will (by chance) test both for us
15:21:55 ah, so you want to have both, that makes sense
15:22:09 the default is the VHD image
15:22:23 but there is a specific class of tempest tests (that upload images to glance) that break that assertion
15:22:26 so the test that needs the latest image will pick that one?
15:22:47 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py#n490
15:23:05 Specifically, see the except IOError: clause
15:23:11 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py#n504
15:24:31 but all of that is too much info for the commit message :P
15:25:29 yeah
15:25:34 maybe
15:25:42 just can't see why just yet
15:25:44 Oh - I know it's not CI stuff, but can you comment on https://bugs.launchpad.net/bugs/1204165 ?
15:25:46 Launchpad bug 1204165 in nova "xenapi: vm_utils.ensure_free_mem does not take into account overheads" [Low,Triaged]
15:25:52 can't see why what?
15:26:42 well, just not sure why it needs the three part image for the test
15:26:47 vs a qcow2 image
15:26:57 feels very hypervisor specific
15:27:08 because it can't find qcow2, so it assumes everyone must have 3-part? or libvirt supports both, so it tries qcow and falls back to 3-part?
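The fallback being described can be sketched as follows. The file names mirror the cirros UEC layout that shows up in the failure later in the log, but the function itself is illustrative, not tempest's actual code (see the manager.py links above for the real `except IOError:` clause):

```python
import os

def pick_test_image(img_dir):
    """Sketch of tempest's image fallback as discussed above: try the
    single-file (e.g. qcow2) image first, and fall back to the three-part
    UEC image (kernel / ramdisk / blank disk) when it is missing.

    Hypothetical helper: names and return shape are illustrative."""
    single = os.path.join(img_dir, "cirros-0.3.2-x86_64-disk.img")
    try:
        # If the single-file image exists, use it.
        with open(single, "rb"):
            return {"disk_format": "qcow2", "files": [single]}
    except IOError:
        # Otherwise fall back to the 3-part UEC layout.
        base = os.path.join(img_dir, "cirros-0.3.2-x86_64-uec")
        return {
            "disk_format": "ami",
            "files": [os.path.join(base, name) for name in
                      ("cirros-0.3.2-x86_64-vmlinuz",
                       "cirros-0.3.2-x86_64-initrd",
                       "cirros-0.3.2-x86_64-blank.img")],
        }
```

This is why the change above matters: on a XenAPI setup with no qcow2 image provided, the `IOError` path fires and the 3-part image must actually exist on disk.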
15:27:11 yes
15:27:14 it is exceptionally hypervisor specific
15:27:19 and perhaps the answer is to fix tempest
15:27:27 yeah, I think so
15:27:41 but the easy answer for us - and a very useful one for testing - is to have the 3 part image tested by that code path
15:28:00 as I said before, people sometimes break 3-part so we should test it somewhere; and we want to default to VHD
15:28:06 so this change is the right one for us
15:29:00 OK, but why is this stopping you adding some tests, because that bit of tempest can't create the correct image?
15:29:25 without the 3-part image tempest fails because it's not there and we don't support/provide qcow2
15:29:51 http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/34/96734/2/17227/testr_results.html.gz
15:30:00 IOError: [Errno 2] No such file or directory: '/opt/stack/new/devstack/files/images/cirros-0.3.2-x86_64-uec/cirros-0.3.2-x86_64-vmlinuz'
15:30:08 I think I kinda get that now
15:30:13 anyways +1 on that
15:30:27 I'll ask in -infra
15:30:36 better to fix tempest though, but no idea which is quicker to review
15:30:38 cool
15:30:45 OK, any more?
15:30:46 no, better _not_ to fix tempest
15:30:55 as I said, we want UEC images to be tested
15:30:59 and they are not tested anywhere else
15:31:20 so if we can test them here because tempest is hypervisor specific then that's just fine by me
15:31:21 I kinda think we should rip that feature out of the system myself, but OK
15:31:43 perhaps
15:32:04 Yes - any more: https://bugs.launchpad.net/bugs/1204165
15:32:05 Launchpad bug 1204165 in nova "xenapi: vm_utils.ensure_free_mem does not take into account overheads" [Low,Triaged]
15:32:08 Please comment on the bug :)
15:32:15 it's one you raised and the question is whether it's still relevant
15:32:28 I just added a comment
15:33:01 ah then no :(
15:33:29 yeah, we still need that fixing, I should work through some of those soon
15:33:49 I forgot about that one
15:34:03 fair enough
15:34:45 I think the real answer is to remove that check, or at least move it, but that's a different conversation
15:34:49 #topic Bugs
15:34:54 I guess we did that already
15:35:02 #topic Open Discussion
15:35:03 well the one I was talking about yeah...
15:35:05 any more for any more
15:35:17 have you/anyone looked at the updated ocaml-vhd rpm?
15:35:20 for the snapshot bug?
15:35:45 I don't think there is an easy way for us to test that, the bug is not very reproducible
15:36:12 Shame...
15:36:17 or at least, we don't have the information on how to reproduce it right now
15:36:29 well I thought you suggested it should be 100% reproducible :)
15:36:32 done loads and they worked fine, all seemingly doing the same thing
15:37:09 on one particular VM it was reproducible, but we worked around things for that VM I think
15:37:22 shame you didn't copy the VHDs
15:38:07 yeah, dunno what they are doing, it's not my team working on that right now
15:38:22 they may have them, but that's customer data, so I suspect we couldn't do that
15:38:43 dunno the details of that case right now
15:38:49 ok
15:38:55 anyways, any more for any more?
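The fix being argued for in bug 1204165 above is that the free-memory check should account for per-instance overhead (hypervisor bookkeeping), not just the flavor's RAM. A hypothetical sketch of such a check; the names, signature, and exception type are illustrative, not nova's actual vm_utils code:

```python
class InsufficientMemory(Exception):
    """Raised when a host lacks the RAM to start an instance."""


def ensure_free_mem(host_free_mb, instance_mem_mb, overhead_mb):
    """Check that a host can fit an instance, including overhead.

    Hypothetical sketch of the check discussed in bug 1204165: the
    comparison includes the per-instance overhead rather than the
    flavor's RAM alone, which is what the bug says the real
    vm_utils.ensure_free_mem fails to do.
    """
    required = instance_mem_mb + overhead_mb
    if host_free_mb < required:
        raise InsufficientMemory(
            "need %d MB (including %d MB overhead), only %d MB free"
            % (required, overhead_mb, host_free_mb))
```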
15:39:55 I guess we are done
15:40:02 BobBall: thanks, catch you next week
15:40:05 #endmeeting