15:02:20 <johnthetubaguy> #startmeeting XenAPI
15:02:25 <openstack> Meeting started Wed Dec 17 15:02:20 2014 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:28 <openstack> The meeting name has been set to 'xenapi'
15:02:40 <johnthetubaguy> #topic XenServer CI
15:02:51 <johnthetubaguy> so, wondering how things are going in the world of CI
15:03:03 <johnthetubaguy> might wait a little bit for matel to join
15:03:18 <BobBall> So generally things are OK
15:03:22 <BobBall> although we're a bit worried
15:03:27 <BobBall> images aren't rebuilding at the moment
15:03:30 <BobBall> and we don't understand why
15:03:40 <BobBall> there is some weird puppet interaction with iptables
15:03:50 <johnthetubaguy> oh dear, that's annoying
15:03:55 <BobBall> but figuring out the root cause for that is hard
15:04:00 <BobBall> Very short term it's not a problem
15:04:05 <BobBall> because we have active images in each region
15:04:06 <johnthetubaguy> yeah, that stuff is a nightmare to debug
15:04:10 <BobBall> but they can quickly get stale
15:04:14 <johnthetubaguy> agreed
15:04:42 <BobBall> I think that's all I can say for that though
15:04:54 <BobBall> did we mention last week that we've rebuilt the CI?
15:05:06 <BobBall> massive rebuild to pick up the latest nodepool etc
15:05:09 <johnthetubaguy> yeah, or at least, its now easily automated
15:05:15 <BobBall> yup
15:05:21 <johnthetubaguy> ah, cool, not sure you said that was complete
15:05:28 <BobBall> But the rebasing on nodepool should have given us some more stability in the image generation
15:05:32 <BobBall> and reduced the bill for RAX ;)
15:05:51 <BobBall> {'': 33, 'Failed': 29, 'Passed': 224, None: 2}
15:05:57 <BobBall> Failure rate of 10% is not ideal though
15:06:01 <BobBall> Too high for my liking
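[Aside: a quick back-of-envelope check of how that ~10% figure falls out of the counts BobBall pasted above; the bucket names are taken verbatim from the paste, and the unnamed and None buckets are assumed to be runs without a verdict.]

    # Result counts pasted above: 288 runs in this sample.
    counts = {'': 33, 'Failed': 29, 'Passed': 224, None: 2}
    total = sum(counts.values())                  # 288
    rate = counts['Failed'] / float(total)        # 29 / 288
    print("failure rate: %.1f%%" % (rate * 100))  # failure rate: 10.1%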
15:06:32 <johnthetubaguy> yeah 10% seems high
15:06:45 <johnthetubaguy> unless people are uploading lots of very broken patches… which is possible I guess
15:07:00 <johnthetubaguy> did you add in those extra tests?
15:07:09 <johnthetubaguy> just wondering if there is a common cause for the failure
15:07:17 <BobBall> rebuild + resize server are failing intermittently
15:07:24 <BobBall> Failures
15:07:25 <BobBall> -------------------
15:07:25 <BobBall> 19 No tempest failures detected
15:07:25 <BobBall> 3 Fewer than 2 duplicates
15:07:25 <BobBall> 3 tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
15:07:28 <BobBall> 2 tempest.api.compute.servers.test_disk_config.ServerDiskConfigTest.test_resize_server_from_manual_to_auto
15:07:31 <BobBall> 2 tempest.api.compute.admin.test_servers.ServersAdminTest.test_rebuild_server_in_error_state
15:07:33 <BobBall> 2 tempest.api.compute.servers.test_server_actions.ServerActionsTest.test_rebuild_server
15:07:37 <BobBall> 2 tempest.api.compute.servers.test_server_actions.ServerActionsTest.test_resize_server_revert
15:07:40 <BobBall> 2 tempest.api.compute.servers.test_disk_config.ServerDiskConfigTest.test_resize_server_from_auto_to_manual
15:07:43 <BobBall> 2 tempest.api.compute.servers.test_disk_config.ServerDiskConfigTest.test_rebuild_server_with_auto_disk_config
15:07:46 <BobBall> 2 tempest.api.compute.servers.test_server_actions.ServerActionsTest.test_rebuild_server_in_stop_state
15:07:48 <johnthetubaguy> hmm, that's a worry
15:07:49 <BobBall> 2 tempest.api.compute.servers.test_disk_config.ServerDiskConfigTest.test_rebuild_server_with_manual_disk_config
15:07:52 <BobBall> 2 tempest.api.compute.servers.test_disk_config.ServerDiskConfigTest.test_update_server_from_auto_to_manual
15:07:55 <BobBall> but the 19 no-tempest-failures are also frustrating.
15:08:00 <BobBall> implies that devstack failed
15:08:05 <matel> sorry for being late.
15:08:09 <BobBall> or we couldn't collect logs (often seems to be a RAX network failure)
15:08:31 <BobBall> but perhaps adding a retry to collect logs would fix the network failures
15:08:52 <johnthetubaguy> BobBall: yeah, was just about to say a retry, swift can be a bit "fussy" now and then
15:09:50 <BobBall> I thought the issue was with downloading from the guest to the CI machine
15:09:55 <BobBall> but I'm not certain
15:10:02 <BobBall> I might have a look at adding a retry at some point
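[Aside: a minimal sketch of the retry wrapper being discussed, assuming the logs are pulled off the guest with a shell command such as scp; the command, paths, retry count, and delay here are all hypothetical.]

    import subprocess
    import time

    def run_with_retry(cmd, attempts=3, delay=30):
        # Retry a flaky command a few times, sleeping between attempts
        # to ride out transient network failures.
        for attempt in range(attempts):
            if subprocess.call(cmd) == 0:
                return True
            if attempt < attempts - 1:
                time.sleep(delay)
        return False

    # Hypothetical invocation: copy the tempest logs off the test guest.
    run_with_retry(["scp", "-r", "guest:/var/log/tempest", "./logs"])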
15:12:27 <matel> ok
15:12:40 <matel> I guess that's the CI.
15:12:41 <johnthetubaguy> matel: just CI updates
15:12:47 <johnthetubaguy> so yeah, all done I guess
15:12:55 <johnthetubaguy> #topic Open Discussion
15:12:59 <BobBall> Indeed - I don't have anything else to say on OSCI
15:13:04 <BobBall> our internal CI is very poorly
15:13:04 <johnthetubaguy> so any more we need to cover?
15:13:35 <BobBall> We redeployed it, but it hasn't come up, so we're having to put a lot of effort into fixing various things
15:14:07 <matel> yeah.
15:14:39 <BobBall> and Mate has done a stunning job on docs
15:15:54 <johnthetubaguy> awesome stuff, there is certainly "room for growth" in that area...
15:16:04 <johnthetubaguy> appreciate that getting some attention
15:16:12 <johnthetubaguy> it is sure to be hurting adoption
15:18:20 <matel> yeah, we no longer have the warning in xenserver docs
15:18:24 <matel> Which is good.
15:18:33 <johnthetubaguy> cool
15:18:42 <johnthetubaguy> any more for any more?
15:19:10 <BobBall> no, not this year.
15:19:14 <johnthetubaguy> OK, one bit more...
15:19:17 <BobBall> Watch this space for fun things happening next year
15:19:29 <johnthetubaguy> next week, no meeting I guess?
15:19:39 <BobBall> I'm on vacation
15:19:39 <johnthetubaguy> probably the same the week after?
15:19:44 <BobBall> Indeed.
15:20:00 <johnthetubaguy> #info next meeting on 7th Jan
15:20:10 <johnthetubaguy> OK thanks all
15:20:17 <BobBall> matel: :) build-xenserver-core-master-centos worked.
15:20:25 <johnthetubaguy> matel: thanks for all your hard work on this, it really is appreciated :)
15:20:42 <johnthetubaguy> Happy Christmas and happy new year, folks
15:20:45 <johnthetubaguy> #endmeeting