09:30:58 #startmeeting XenAPI
09:30:59 Meeting started Wed May 25 09:30:58 2016 UTC and is due to finish in 60 minutes. The chair is BobBall. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:31:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:31:03 The meeting name has been set to 'xenapi'
09:31:13 Good morning / afternoon / evening all
09:31:21 johnthetubaguy: pingity ping :)
09:31:41 * johnthetubaguy is lurking with intent to read
09:31:50 Good intent
09:31:52 Good morning Bob & johnthetubaguy.
09:31:56 Well - we need you for the first bit johnthetubaguy :)
09:32:05 #topic Blueprints / Reviews
09:32:16 We're rapidly approaching non-priority blueprint freeze
09:32:31 The three blueprints in https://etherpad.openstack.org/p/newton-nova-priorities-tracking are still pending core reviewers
09:32:45 Sorry, four
09:32:48 https://review.openstack.org/#/c/280099/7 - XenAPI: support VGPU via passthrough PCI
09:32:51 https://review.openstack.org/#/c/277452/ - XenAPI independent hypervisor (fixing interaction layer between Nova + Hypervisor)
09:32:54 https://review.openstack.org/#/c/274045/5 - Xenapi: a new VDI store via streaming
09:32:57 https://review.openstack.org/#/c/304377/ - XenServer compute driver support neutron security group
09:33:03 Let's go through them one at a time?
09:33:13 johnthetubaguy: Any further thoughts on https://review.openstack.org/#/c/280099 ?
09:33:13 hi all
09:33:20 That's the VGPU spec
09:33:41 oh
09:33:42 haha
09:33:45 I just added a comment, I think we need to point to the code
09:33:48 Comment 1 minute ago :D
09:33:50 yeah
09:34:05 Oh - against sdague's comment?
09:34:10 Do you want the reference in the spec?
09:34:36 more just in the comments
09:34:45 we don't have any decent docs on this stuff
09:34:52 OK; jianghuaw can you add an update there?
09:34:53 so it's hard to tell when the API is changing
09:35:06 Ah - Moshe Levi added the reference to the code
09:35:20 ah, I was just going to double-check devref
09:35:27 I didn't get much reference on that.
09:35:30 clearly someone is watching the spec :)
09:35:50 Do you have any other comments johnthetubaguy? Or are you close to a +2? :)
09:35:56 But that's the way I know pci pass-through can work.
09:36:42 jianghuaw: It's OK - the reference to the code has been added
09:37:11 ah, yes. I see it.
09:37:12 thanks.
09:37:36 not sure I am close to a +2 yet, just getting my head around it all really
09:38:00 fair enough - if we could request a re-review then that would be appreciated
09:38:01 so follow-on question
09:38:08 "vgpu1:1"
09:38:14 what is the ":1" bit for?
09:38:24 It's the number of instances you are requesting
09:38:35 See the comment above https://github.com/openstack/nova/blob/master/nova/pci/request.py#L181-L185
09:38:35 yes.
09:38:46 The pci_passthrough:alias scope in flavor extra_specs
09:38:46 describes the flavor's pci requests, the key is
09:38:46 'pci_passthrough:alias' and the value has format
09:38:46 'alias_name_x:count, alias_name_y:count, ... '. The alias_name is
09:38:47 defined in 'pci_alias' configurations.
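(For context on the 'alias_name:count' format quoted above: the alias is defined in nova.conf and then referenced, with a count, in the flavor's extra_specs. A minimal sketch of how a "vgpu1:1" request might be wired up - the vendor/product IDs and flavor name are placeholders, not values taken from the spec:)

    # nova.conf (Newton-era options; pci_alias later moved to the [pci] section)
    pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "0000"}
    pci_alias = {"vendor_id": "10de", "product_id": "0000", "name": "vgpu1"}

    # flavor requesting one device that matches the "vgpu1" alias
    nova flavor-key m1.vgpu set "pci_passthrough:alias"="vgpu1:1"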
09:39:05 hmm, I se this bit now: https://github.com/openstack/nova/blob/master/nova/pci/request.py#L134 09:39:48 It's almost always going to be 1 (IMO it would have been better if the original PCI spec had said that if the : was missing it would default to 1) 09:41:03 so the problem here, is we are modifying a bit of code that has few docs, and few folks who understand it, so it takes time to agree things, sadly 09:41:22 anyways, getting there 09:41:59 I just hope we can get there fast enough. Just 1 week to no priority feature freeze 09:42:06 hence us pushing quite hard now :) 09:42:11 oh wait, the alternatives sections... 09:42:32 did we decide to only expose one type per host 09:42:45 Yes 09:42:55 we should cover the alternative, in that alternatives section 09:43:05 so we remember why we are not doing that 09:43:09 Good point 09:43:42 sure, I will update it. 09:43:48 Awesome 09:43:53 Let's move on to the next spec 09:43:59 https://review.openstack.org/#/c/277452/ - XenAPI independent hypervisor (fixing interaction layer between Nova + Hypervisor) 09:44:17 I've addressed your comments and removed the link to the new VDI store via streaming BP 09:44:41 So I think the next step is if you could re-review it and let me know what your thoughts are? 09:45:23 yeah, did you get my concerns on the functional tests 09:45:31 I just worry if we add more code branches 09:45:49 if its just checks about not being able to do certain things, then that doesn't feel as bad, for sure 09:46:16 I guess we will need to always stream config drive to the hypervisor? 09:46:25 to take that approach 09:46:28 I do understand the concerns, yes. I hoped that my comments would reassure you :) 09:46:31 Yes 09:46:49 No reason not to. It's a small enough drive to just create in-guest and then stream to the hypervisor in a 'supported' way 09:47:07 Wrong use of ''s there... Supported + potentially isolated way 09:47:07 :) 09:47:26 yeah, thats all fine 09:48:01 oh, one thing comes to mind about partition_utils 09:48:08 do you know what the load will be on Dom0? 09:48:17 It should be very low 09:48:45 We're not planning to do anything big in there iirc? 09:49:01 I thought we created ext3 filesystems 09:49:05 do you know the load of resizing the partition in domU? 09:49:11 for ephemeral disks 09:49:26 Yeah - but that's quite quick even for large disks 09:49:32 I got the impression the load isn't minimal 09:49:44 ext4 would be quick, but ext3 has to do a lot more work, I believe 09:50:02 I had forgot about that until now 09:50:14 What level of 'load' do you think would be concerning? 09:50:37 honestly, this is based more on my laptop fan getting excited when doing this inside a VM running XenServer 09:50:38 And what load are you thinking of? bytes written to disk? CPU load? 09:50:50 more CPU load, honestly 09:51:03 disk load will kinda be the same, I am guessing 09:51:26 They will both be the same; but in a different place (i.e. dom0 vs the scheduled domU) 09:51:30 I suspect I am overthinking it, its just something thats worth checking 09:51:42 right, but the compute node has throttled CPU, dom0 less so 09:52:06 it would be bad if other guests saw issues during a resize, etc 09:52:18 Well - while it does have access to it's CPUs, it still has Dom0 has a fixed number of them 09:52:31 well, thats the problem 09:52:40 the CPUs in Dom0 are needed to keep the guests responsive 09:52:46 not so for the compute node VM CPUs 09:53:02 Yes; so how many CPUs for how long would be worrying? 09:53:25 i.e. 
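(For context: the agreement above is to run the heavier dom0 work at reduced priority. A rough sketch of what that could look like for formatting a large ephemeral disk - the device path is a placeholder, and adding ionice on top of nice is an assumption, not something the spec commits to:)

    # lower CPU priority (nice) and use the idle I/O class (ionice) for the mkfs
    nice -n 19 ionice -c3 mkfs.ext3 /dev/xvdb1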
09:56:07 OK, next spec.. :)
09:56:13 https://review.openstack.org/#/c/304377/ - XenServer compute driver support neutron security group
09:56:18 Hopefully this is a very simple one :)
09:56:32 The nutshell is that we want to do the same thing as libvirt to get security groups working in Neutron
09:56:46 libvirt sets up a Linux bridge that it can apply iptables rules to
09:56:51 and we want to do the same thing
09:56:55 yes, the main part is creating the linux bridge
09:57:21 It hasn't had a review yet, but if you remember this is the change that you reviewed a while ago and requested a simple spec to make it clear what we were doing
09:57:30 so the linux bridge is created inside the compute VM?
09:57:37 No - Dom0
09:57:39 No
09:57:43 yes, Dom0
09:57:49 so you are running both linux bridge and ovs in Dom0?
09:57:56 yes
09:58:08 yes; which is also what libvirt does (ovs + bridge)
09:58:55 *but* clearly we only want to add a linux bridge if security groups are applied and you're using the appropriate firewall driver
09:59:17 So if you don't select a firewall driver (or you use a different one, which I guess you do at RAX) then it doesn't affect you
09:59:47 But it is clearly critical to getting neutron support as it's the only way we can get security groups with the upstreamed drivers
09:59:52 so the spec doesn't mention this being in a firewall driver
10:00:01 and that it needs to be configured
10:00:04 i.e. it's optional
10:00:10 I will update that
10:00:37 Yes; we will make it clear; i.e. it will not affect Rackspace as I understand your deployment
10:01:06 that's not my main concern here, just trying to work out what the impact is
10:01:18 Understood
10:01:28 Finally https://review.openstack.org/#/c/274045/ - you said you would have a closer look at this spec :)
10:01:53 so just to be clear
10:02:01 Nova is doing the security groups, and not neutron?
10:02:16 or does Nova just put in place the bridge, that neutron detects and updates?
10:02:36 neutron will write the security group rules
10:02:47 but the rules are applied on the linux bridge
10:03:04 My understanding is that Neutron requires security groups to be enabled (e.g. some Neutron tempest tests depend on security groups)
10:03:21 And the linux bridge should be created during VM boot, so we mainly do the work of creating the linux bridge
10:03:51 yeah, it's just some neutron things actually make nova add the rules
10:03:59 Yes
10:03:59 it's a bit odd, so glad that's not the case here
10:04:38 Indeed; this is just 'standard' Nova; just a part that is expected to work :/
10:04:40 yes, neutron does most of the rules on the linux bridge, and nova creates the linux bridge
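(For context: the layout being mirrored here is libvirt's "hybrid" plugging, where each instance VIF is attached to a per-instance Linux bridge carrying the iptables security group rules, which is then patched into the OVS integration bridge. The interface names below follow the libvirt/Neutron conventions; the Dom0 naming used by the XenAPI driver may differ:)

    instance VIF (tap)
      -> qbrXXX            (Linux bridge; iptables security group rules applied here)
      -> qvbXXX / qvoXXX   (veth pair)
      -> br-int            (Open vSwitch integration bridge)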
10:05:46 OK, added comments on the spec, I think that's close
10:05:54 thanks a lot
10:06:07 So, finally https://review.openstack.org/#/c/274045/ - you said you would have a closer look at this spec :)
10:06:33 yeah, and another 20/30 of them, but it's not happened
10:06:45 I can totally understand
10:07:20 I think my existing comments still stand
10:07:33 ah, wait, I am looking at the old version
10:07:58 :)
10:08:34 so testing is the issue here
10:08:40 if we start testing this by default
10:08:45 the old system will break
10:09:20 maybe we keep the neutron CI on the new one, and the old CI on the old one?
10:09:32 Yeah; can do
10:09:53 (Also, we could do the same for the isolated compute change)
10:10:27 But please note: the purpose is to make this new store the default.
10:10:33 yeah, I figured that one is harder, as it forces us towards multi-node
10:11:27 yeah, we should probably get the CI running before we switch over the default
10:11:33 Isolated compute can be tested even if the compute is embedded - just set the flag
10:12:17 yeah, but you don't stop people "doing things they shouldn't", which is probably quite useful
10:12:17 I'm sure we can stop the host from attaching any disks to the guest; which is the main problem
10:12:28 yeah, that could work
10:12:53 OK. Well, I think we've reached time.
10:13:11 So - is there anything else we should cover?
10:13:24 #link: https://review.openstack.org/#/c/242846/
10:13:34 We'll work on updating those specs by tomorrow
10:14:02 and then, johnthetubaguy, do you mind if I nag you again on Friday, given how close we are to non-priority feature freeze?
10:14:10 Hi, I hope this patchset can be reviewed again https://review.openstack.org/#/c/213112/
10:14:10 totally keep bugging me
10:14:28 John, if you have time could you help to re-review this patch set?
10:14:29 https://review.openstack.org/#/c/242846/
10:14:32 jianghuaw: Could we cover that bug next time?
10:14:44 sure.
10:14:46 thanks.
10:14:59 huanxie: Same; I think we should focus on BPs
10:15:09 sure, thanks
10:15:35 The deadline for non-priority blueprints is in 1 week's time, so if we can grab any of johnthetubaguy's time I'd personally rather it was looking at specs than those bug fixes - which we have more time for
10:15:58 yeah, specs should be the focus for the moment
10:16:02 Bob: Got it. Thanks.
10:16:15 OK - then let's close the meeting there.
10:16:19 Thanks for the feedback johnthetubaguy!
10:16:22 #endmeeting