15:00:57 #startmeeting XenAPI
15:00:58 Meeting started Wed Sep 18 15:00:57 2013 UTC and is due to finish in 60 minutes. The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:02 The meeting name has been set to 'xenapi'
15:01:54 hello!
15:01:58 meeting time then :)
15:02:10 yup
15:02:15 who's here for the meeting?
15:02:37 just me ATM
15:02:40 from Citrix anyway
15:02:42 no worries
15:02:48 Mate has stepped out but will be back in a short time
15:02:56 just me from Rackspace (actively anyways)
15:03:04 you got lots of items to raise?
15:03:15 sure
15:03:18 we can talk about a few bugs
15:03:20 a few reviews
15:03:22 cool
15:03:25 and prioritisation of bugs
15:03:26 :)
15:03:30 Want a couple of bugs first?
15:03:33 nah
15:03:35 let's go to reviews
15:03:37 prompted by bugs
15:03:38 :D
15:03:41 just looking at agenda
15:03:47 ok ok
15:03:50 go through the agenda then
15:04:01 #topic Blueprints
15:04:10 No actions, so let's look at blueprints
15:04:20 I have started raising a few for Icehouse
15:04:22 hopefully none since we're in RC!
15:04:24 oh right
15:04:27 what have you been raising?
15:04:40 migration and making it work with ephemeral disks
15:04:49 and general stuff in that direction
15:05:08 I see
15:05:09 would be good to get Icehouse blueprints up there before the summit
15:05:20 what doesn't work atm? just to understand
15:05:31 submitted a XenAPI roadmap talk, I should be available at the moment
15:05:51 BobBall: it's listed in the blueprint, a bit confusing, but ephemeral disks get deleted then re-created at the moment
15:05:58 oh right
15:06:01 got a linky?
15:06:34 #link https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks
15:06:42 * BobBall will have a read
15:06:55 there is a dependent blueprint I may or may not do
15:07:06 that looks a bit confused...
15:07:23 #link https://blueprints.launchpad.net/nova/+spec/xenapi-resize-ephemeral-disks
15:07:27 confused?
15:07:35 I'm confused yeah
15:07:44 why are there three cases there?
15:07:53 hi
15:08:39 hi
15:08:39 BobBall: one is migrate (and resize), one is evacuate, one is live-migrate
15:08:44 looking at https://blueprints.launchpad.net/nova/+spec/xenapi-migrate-ephemeral-disks
15:09:12 oh I get it I think
15:09:13 ok
15:09:14 no problem
15:09:18 no worries
15:09:54 I don't have any other blueprints
15:09:57 can we move to bugs then?
15:10:02 yep
15:10:12 let's get blueprints up soon for Icehouse
15:10:17 #link https://bugs.launchpad.net/nova/+bug/1227019
15:10:19 Launchpad bug 1227019 in nova "Error with run_tests.sh: Invalid version of "six" installed" [Undecided,In progress]
15:10:24 then in a few weeks we can plan the XenAPI summit session
15:10:27 right...
15:10:29 that's a silly one that hit me and Mate
15:10:32 #topic Bugs
15:10:42 but worth showing you so you can approve it as a nova core :D
15:10:58 (and then approve the fix of course :))
15:11:27 but that's not worth talking about here
15:11:29 that's just for info
15:11:34 I want to talk about the next one:
15:11:37 #link https://bugs.launchpad.net/nova/+bug/1226622
15:11:38 Launchpad bug 1226622 in nova "Obscure error when plugins mismatch" [Medium,In progress]
15:11:46 yeah, not been through reviews yet
15:12:02 I've got a fix up there and I would really love it to get into Havana
15:12:29 a bunch of people will be testing with Havana in domU so I want the plugin version check accepted if at all possible
15:13:33 OK, not gone through reviews today, but will hit those when I get a chance later in the week
15:14:14 cool
15:14:31 just bug me on IRC if I forget
15:14:36 I accept the method is slightly ummm iffy - e.g. I'd really want to use MD5 hashes
15:14:43 we have some security bugs on XenAPI
15:14:45 BobBall - pep8 issues.
15:14:45 but hashlib isn't installed in dom0 :(
15:14:54 I thought I fixed those...
15:14:58 maybe I didn't push the latest
15:15:07 I pushed up fixes for them
15:15:09 hmm
15:15:28 which are your patches?
15:15:29 @john - which are the secbugs?
15:15:31 thanks matel - dunno how I missed that email.
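For context on bug 1226622, a minimal sketch of what a dom0 plugin version handshake could look like; the plugin name, function name, and required version below are assumptions for illustration, not the actual patch under review.

    PLUGIN_REQUIRED_VERSION = (1, 0)  # hypothetical expected (major, minor)

    class PluginVersionMismatch(Exception):
        """Raised when the dom0 plugins do not match what the driver expects."""

    def check_plugin_version(call_plugin):
        # call_plugin(plugin, fn) is assumed to invoke a dom0 plugin function
        # and return its result as a string, e.g. "1.0".
        try:
            reported = call_plugin("nova_plugin_version", "get_version")
        except Exception:
            # Plugins too old to even report a version: fail loudly and early
            # instead of with an obscure error part-way through an operation.
            raise PluginVersionMismatch(
                "dom0 plugins do not report a version; update them to match "
                "this nova-compute release")
        found = tuple(int(part) for part in reported.strip().split("."))
        if found[0] != PLUGIN_REQUIRED_VERSION[0] or found < PLUGIN_REQUIRED_VERSION:
            raise PluginVersionMismatch(
                "dom0 plugin version %s does not satisfy required version %d.%d"
                % (reported, PLUGIN_REQUIRED_VERSION[0], PLUGIN_REQUIRED_VERSION[1]))

An early, explicit check like this turns an obscure mid-operation failure into a clear message as soon as nova-compute talks to dom0.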
15:15:51 #link https://bugs.launchpad.net/nova/+bug/1073306
15:15:53 Launchpad bug 1073306 in nova "xenapi migrations don't apply security group filters" [High,In progress]
15:16:01 #link https://bugs.launchpad.net/nova/+bug/1202266
15:16:04 Launchpad bug 1202266 in nova "xenapi: secgroups are not in place after live-migration" [High,In progress]
15:16:15 they are security bugs that got made open
15:16:28 because they were reported in the open
15:17:43 I have only got partial fixes
15:17:48 there is some new stuff here:
15:18:19 why is it partial?
15:18:39 #link https://bugs.launchpad.net/nova/+bug/1224587
15:18:41 Launchpad bug 1224587 in nova "xenapi: secgroups are not in place for a short duration after live-migration" [Medium,Triaged]
15:18:49 basically, only fixed the first bit
15:19:03 so what will be left over after your fix?
15:19:52 the above bug
15:20:03 oh right
15:20:07 sorry
15:20:12 me being slow
15:20:14 understood
15:20:19 I'd seen that
15:20:25 I'm happy with that being leftover ATM
15:20:31 we need to figure out what the right fix is internally
15:20:37 of course I need to push that
15:20:50 sure
15:20:51 OK
15:20:55 can you take that bug then?
15:20:59 and add comments?
15:21:26 we should probably have a bug in github against xapi for it
15:21:40 or when we have an external bug tracker that's where it should go too
15:21:41 :)
15:21:43 I'll add a comment
15:21:52 yeah, once there is an external one available
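To make the live-migration gap concrete, a rough sketch of the kind of post-migration refresh involved, assuming a firewall driver that exposes nova's generic setup/prepare/apply hooks; illustrative only, not the partial fix mentioned above.

    def refresh_secgroups_after_migration(firewall_driver, instance, network_info):
        # Re-create the instance's iptables rules on the destination host.
        # If this only runs once the guest is already reachable, there is a
        # window with no filtering, which is the "short duration" bug 1224587.
        firewall_driver.setup_basic_filtering(instance, network_info)
        firewall_driver.prepare_instance_filter(instance, network_info)
        firewall_driver.apply_instance_filter(instance, network_info)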
15:22:34 So I also went through and prioritised some bugs
15:22:41 OK
15:22:59 but the list isn't easy to see cuz it's on trello
15:23:08 but - from https://bugs.launchpad.net/nova/+bugs?field.tag=xenserver
15:23:09 changed priorities? or non-prioritised ones?
15:23:23 just internal prioritisation cuz I can't do anything other
15:23:33 hmm
15:23:35 https://bugs.launchpad.net/nova/+bug/1030108
15:23:37 Launchpad bug 1030108 in nova "xenapi: bad handling of "volume in use" errors" [Medium,Triaged]
15:23:44 yeah, that's a nasty one
15:23:44 https://bugs.launchpad.net/nova/+bug/1161471
15:23:45 well
15:23:46 Launchpad bug 1161471 in nova "xenapi: guest kernel not cleaned up" [Medium,Triaged]
15:23:50 easier question for you...
15:23:55 https://bugs.launchpad.net/nova/+bug/1192528
15:23:56 Launchpad bug 1192528 in nova "XenAPI VCPU information unavailable" [Medium,Triaged]
15:24:02 which ones are not medium?
15:24:08 https://bugs.launchpad.net/nova/+bug/1215383
15:24:09 Launchpad bug 1215383 in nova "XenAPI: Consider removing safe_copy_vdi" [Medium,Triaged]
15:24:10 all medium
15:24:16 good good
15:24:28 any medium ones you don't think should be medium?
15:24:48 yes
15:24:56 can we go through those
15:25:04 that's useful for upstream readability
15:25:17 but I'm only going on what other bugs are in reviewday
15:25:30 confused
15:25:34 https://bugs.launchpad.net/nova/+bug/1161471 I think should be lower
15:25:36 Launchpad bug 1161471 in nova "xenapi: guest kernel not cleaned up" [Medium,Triaged]
15:25:56 (looking at how other hypervisor bugs have been classified rather than what the "official" criteria are)
15:26:25 yeah, so I am using these:
15:26:30 https://wiki.openstack.org/wiki/BugTriage
15:26:59 yup
15:27:02 others aren't :)
15:27:27 or I should say
15:27:40 made that one low
15:27:43 others don't appear to be adhering to the guidelines
15:27:49 it doesn't break the feature
15:28:00 I think it's more than cosmetic, but low is OK
15:28:09 https://bugs.launchpad.net/nova/+bug/1192528 should be low too IMO
15:28:10 Launchpad bug 1192528 in nova "XenAPI VCPU information unavailable" [Medium,Triaged]
15:28:17 anyways, if you want to do the bugs, please claim them in the bug tracker
15:28:19 (even though I want to fix it now)
15:28:28 We'll claim them when we start work on them
15:28:46 well that breaks the feature of vCPU scheduling
15:29:01 (and really looks bad in horizon)
15:29:09 but I get your point
15:29:20 it's quite a coarse-grained priority
15:29:23 oh, of course
15:29:28 I hadn't thought about the scheduler filter
15:29:32 yes - keep it medium
15:29:48 OK, any more?
15:29:59 probably - but not for today :)
15:30:09 OK
15:30:18 any fixes you want to discuss for the listed bugs?
15:30:27 also, do claim them, if you want them
15:30:46 no fixes yet
15:30:50 I only went through them today
15:31:00 they are the ones I would like to see fixed for Havana
15:31:33 So the goal is to fix everything which is >= medium?
15:31:37 no
15:31:48 hmm, well I don't think any of them are high enough to block Havana, so will not target them for Havana
15:31:48 some of the mediums I think are less urgent for the stable release
15:31:57 agreed - none are blocking
15:32:06 would be good to get fixes up
15:32:08 but they are my focus for what I'd like to get done :)
15:32:17 I have about 17 patches that are mostly bug fixes, up for review
15:32:29 although I would very much like to see https://bugs.launchpad.net/nova/+bug/1226622 as higher priority
15:32:30 but no one is reviewing them at the moment
15:32:30 Launchpad bug 1226622 in nova "Obscure error when plugins mismatch" [Medium,In progress]
15:32:47 purely from a long-term supportability perspective
15:32:55 not because it is a real bug in Havana
15:32:59 "bug" I should say
15:33:07 well, it should be low on the rules, so I thought medium was the best compromise
15:33:10 i.e. nothing goes wrong if you do everything right
15:33:12 agreed
15:33:33 I'm not going to argue for higher - I'm just saying that's the most important one to get in from my perspective
15:33:42 OK, that's cool
15:34:20 the next one is this one maybe:
15:34:21 https://bugs.launchpad.net/nova/+bug/1030108
15:34:23 Launchpad bug 1030108 in nova "xenapi: bad handling of "volume in use" errors" [Medium,Triaged]
15:34:33 do you know how you plan on fixing that?
15:35:01 no
15:35:14 but it is important
15:35:21 so I'd like to spend some time thinking about it
15:36:12 hmm, OK
15:36:18 I have some ideas
15:36:39 it's mostly a mismatch between the xenapi state machine and the cinder state machine
15:36:49 so whatever the fix, I feel they should be in sync
15:37:11 ideally the user should be told that something has gone wrong
15:37:19 I think if you put the instance into error
15:37:32 but leave the volume as attaching
15:37:36 then perform a reboot
15:37:51 then check the volume is now attached after boot
15:38:09 the user can perform a reboot to get the instance out of the error state
15:38:22 I mean, make sure after the reboot, it is properly detached
15:38:32 that might mean making the reboot a turn off then on again
15:38:34 right
15:38:36 anyways
15:38:39 just some ideas
15:39:05 it's a fun one for sure
15:39:12 yeah
15:39:22 it's probably getting fixed in Icehouse now
15:39:26 but we should really fix that one
15:39:31 OK
15:39:34 so ...
15:39:38 #topic Open Discussion
15:39:42 anything else?
15:39:44 You think we should leave that one to Icehouse?
15:39:50 nope
15:39:55 *confused*
15:39:57 just think it will take that long to get right
15:40:01 oh
15:40:03 maybe yeah
15:40:11 but we should get it fixed before Icehouse-1
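A rough sketch of the recovery flow described above for bug 1030108 (instance to ERROR, volume left as attaching, reconcile on a turn-off-then-on reboot); the helper objects and method names are hypothetical stand-ins, not real nova or cinder APIs.

    def handle_failed_volume_attach(instance, compute_api):
        # Surface the failure instead of carrying on silently; the volume is
        # deliberately left in 'attaching' so nothing assumes it is usable.
        compute_api.set_instance_state(instance, "ERROR")

    def hard_reboot_and_reconcile(instance, volume, hypervisor, volume_api):
        # Make the reboot a "turn off then on again" so the block device
        # state is rebuilt rather than trusted.
        hypervisor.power_off(instance)
        hypervisor.power_on(instance)
        if hypervisor.volume_is_attached(instance, volume):
            # The attach actually worked: bring cinder's state machine back
            # in sync with what xenapi reports.
            volume_api.set_state(volume, "in-use")
        else:
            # Make sure it really is detached on the hypervisor side, then
            # let cinder know so the user can retry the attach cleanly.
            hypervisor.detach_volume(instance, volume)
            volume_api.set_state(volume, "available")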
15:40:15 johnthetubaguy: is there a way in os with xs to migrate a vm from a host to another with diff cpu capabilities?
15:40:27 *grin*
15:40:29 ekarlso: yes, but you have to do it in advance
15:40:36 ekarlso: set up CPU masking for the hosts
15:40:38 ekarlso: you can set the CPU masks
15:41:26 We do want to look at per-VM masking so you can migrate to a host with more features than the current host - but you'll never be able to migrate the other way. And that'd need a Xen change.
15:41:30 johnthetubaguy: meaning?
15:41:45 ekarlso: there are hints here http://support.citrix.com/article/CTX127059
15:42:03 ekarlso: basically, make sure Xen makes all your CPUs look the same
15:42:10 to the guests
15:42:18 Also have a look at http://blogs.citrix.com/2011/08/08/create-common-cpu-masks-for-heterogeneous-pools-in-xenserver/ which has a funky tool to work out what the mask should be
15:42:26 ah, cool
15:43:26 so the key is even if you're not using a "pool" we still have to verify the flags are the same in order to do bidirectional migrates
15:43:52 normally we'd do the check on pool-join so it would be _very_ obvious that you couldn't migrate between the two
15:44:11 yeah, the non-pool case is a bit manual
15:44:16 is it a bad thing to do or?
15:44:18 but if you're using live migrate with Xen storage motion then that check occurs at run time when we check that the VM can go to the host
15:44:30 It's perfectly fine to restrict the flags :)
15:44:34 BobBall: can you guys get a CTX article out for that?
15:44:59 specifically for non-pool people doing storage motion
15:45:02 just means the VM won't use some of the capabilities that one host may have that the others don't
15:45:47 Agreed - but probably just to update http://support.citrix.com/article/CTX127059 to talk about XSM as well
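For the non-pool case, one way to check whether two standalone hosts can migrate in both directions is to compare the CPU info XenAPI reports. A minimal sketch using the XenAPI python bindings, assuming the usual "vendor" and "features" keys in host.get_cpu_info; host URLs and credentials are placeholders.

    import XenAPI

    def host_cpu_info(url, user, password):
        # Log in to a standalone host and return its CPU info dictionary.
        session = XenAPI.Session(url)
        session.xenapi.login_with_password(user, password)
        try:
            host_ref = session.xenapi.host.get_all()[0]
            return session.xenapi.host.get_cpu_info(host_ref)
        finally:
            session.xenapi.session.logout()

    a = host_cpu_info("https://host-a", "root", "password")
    b = host_cpu_info("https://host-b", "root", "password")

    for key in ("vendor", "features"):
        if a.get(key) != b.get(key):
            print("Mismatch on %s: %s vs %s; set a common CPU mask "
                  "(see CTX127059) before relying on migration both ways."
                  % (key, a.get(key), b.get(key)))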
15:46:27 cool
15:46:40 so, any more for any more?
15:47:00 not from me!
15:48:04 cool
15:48:08 happy bug fixing
15:48:18 and happy blueprint suggesting for Icehouse
15:48:36 let's catch up in a little bit to talk about what goes in the XenAPI roadmap summit session
15:48:40 #endmeeting