13:00:15 #startmeeting hyper-v
13:00:15 Meeting started Wed Jun 29 13:00:15 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:18 The meeting name has been set to 'hyper_v'
13:00:21 Hi all
13:00:23 hellooo
13:00:27 Hi
13:01:09 hello
13:01:11 anyone else around?
13:02:02 oh well, that means that it might be a very short meeting this time around. :)
13:02:09 hi all
13:02:16 Hi
13:02:20 #topic designate status
13:02:27 so, the os-win patch merged.
13:02:42 #link os-win dnsutils https://review.openstack.org/#/c/327846/
13:02:49 good job abalutoiu. :)
13:02:58 thanks :)
13:03:11 it will be included in the os-win 1.1.0 release
13:03:19 and here's the release request:
13:03:28 #link os-win 1.1.0 release https://review.openstack.org/#/c/335448/
13:03:52 so, once that's done, the designate patch will have to be updated to use os-win 1.1.0
13:04:06 the release will hopefully happen today. will ping some people
13:04:37 Hi All
13:04:38 the designate patch has been reviewed by Graham Hayes; he says it's looking pretty good.
13:04:52 got delayed a bit today
13:05:00 so, it will probably be over soon. :)
13:05:05 sagar_nikam: hellou :)
13:05:25 so, yeah, looking good.
13:05:44 next topic.
13:06:03 #topic networking-hyperv status
13:06:34 kvinod: did you have a chance to test this? https://review.openstack.org/#/c/332715/
13:06:58 No, we have not consumed it yet
13:07:41 We tried recreating the issue around 25 times, but no luck
13:07:41 ah, ok. Well, I've tested it several times and it works fine. Was hoping we could close the bug this week. :)
13:08:06 so I am not sure how far it will go towards solving the issue
13:08:07 strange. what branch?
13:08:20 We are on Liberty
13:08:44 hm, my guess is that would only mean the bug doesn't exist in Liberty.
13:09:08 As we said in the bug, it is not always seen, but we never thought it would never be seen
13:09:36 We will consume it in case we see the issue again on Liberty
13:10:00 kvinod: well, when I confirmed the bug, I actually checked if it was happening.
13:10:04 and it was
13:10:15 I only checked on master though.
13:10:38 anyways, it's good to see it being fixed on master
13:10:55 claudiub: thanks anyways
13:11:00 yeah.. no problem. :)
13:11:22 #link https://review.openstack.org/#/c/328210/
13:11:31 as for the other patch, I saw that there were a couple of new patchsets today.
13:11:33 a new patch is uploaded
13:11:38 Jenkins still fails though.
13:11:48 Jenkins is failing, we are working on it
13:12:23 in the meantime you can have a look at the changes and confirm that they address your comment
13:12:37 yeah. looks ok, will comment on it.
13:12:56 sure, thanks, we will fix the Jenkins failure
13:13:21 cool, thanks. :)
13:13:28 next topic
13:13:30 can you share any update on Microsoft certification for OVS?
13:13:50 last meeting you said you would ask and let us know
13:14:29 claudiub: any update :)
13:14:42 still ongoing, but there isn't anything more I know about the topic.
13:15:12 can you suggest a contact who can give us updates on this?
13:15:34 we can start an email thread, keeping you in the loop
13:16:47 claudiub: the intention was to get an idea of the plan and a probable date
13:17:24 yeah, our colleague aserdean probably knows more about this
13:17:53 as he's been working on OVS on Windows.
13:18:17 ok, I should be able to get aserdean's email id from Launchpad?
13:18:43 I will get in touch with aserdean
13:18:45 thanks
13:18:49 kvinod: yep.
13:19:16 kvinod: https://launchpad.net/~aserdean
13:19:33 fine, thanks
13:19:42 #topic windows containers
13:19:59 atuvenie: hello. :)
13:20:10 hello :)
13:20:23 atuvenie: enlighten us with news on this topic. :)
13:20:24 so, we've had some progress with running kubelets on Windows
13:20:25 nice topic....
13:20:50 now I'm working on making it also work with OVS
13:21:03 to be able to use gre/vxlan
13:21:32 I'm trying to replicate the way it works on Linux
13:22:32 atuvenie: you mean OVS support for containers?
13:23:20 sagar_nikam: yes, we want to use OVS with Kubernetes on Windows
13:24:22 that's where I am now, but the good news is that kubelets seem to work on Windows
13:25:38 atuvenie: is it working?
13:26:07 sagar_nikam: kubelets work on Windows; using OVS with containers is still a work in progress
13:26:32 ok.. thanks
13:27:11 well, having kubelets working on Windows is a big step forward. :)
13:27:28 anyways. next topic.
13:27:39 #topic open discussion
13:28:30 FreeRDP
13:28:33 sooo... a few weeks ago, there was a question about whether we could migrate neutron Hyper-V ports to OVS ports, or something like that. This is just a theory, but I think there's a way to do so with 0 downtime.
13:29:12 claudiub: are you planning to test it?
13:29:48 what I am thinking is this: host A is using networking-hyperv and host B is using OVS. if you live-migrate instances from A to B, then the ports should become OVS ports.
13:29:50 claudiub: what is the way to do it with 0 downtime?
13:30:09 as at live-migration, the ports are migrated as well.
13:30:29 though, as I said, it is just a theory. I didn't test it.
13:30:36 ports migrated from networking-hyperv to OVS?
13:31:05 yeah. basically the ports will be rebound on host B, which has neutron-ovs-agent.
13:32:17 ok
13:32:22 claudiub: you mean in that case we need at least two computes to perform the migration from Hyper-V to OVS
13:32:31 anyways, if anyone is willing to try it out, it would be great. :)
13:32:48 yeah.
13:32:59 kvinod: do you have any plans to check it?
13:33:04 cold migration should work too, but that has some downtime.
13:33:12 ok, good approach
13:33:20 claudiub: yeah, but how does neutron deal with this? Because you are basically moving between networks? The source one is vlan (Hyper-V) and the destination is vxlan?
13:33:46 yeah, of course, you can't move from Hyper-V vlan to OVS vxlan
13:33:59 the network will stay the same, of course.
13:34:06 sagar_nikam: yes, sure, we will try it, but will not commit to dates now; we will have it as an action item later, when we encounter migration
13:34:26 kvinod: thanks
13:34:30 if there's a neutron Hyper-V port, then it probably is vlan. when you live-migrate, you don't change the network.
13:34:31 claudiub: so basically it's just confined to vlan with networking-hyperv -> vlan with OVS?
13:35:25 atuvenie: yep.
13:35:39 gre should work too.
13:35:56 would nvgre with networking-hyperv -> gre with ovs work?
13:36:10 I guess so
13:36:23 oh, you already said that, sorry, I seem to lag a little
13:36:30 yeah, that's my opinion.
13:38:05 as for other news, I have been working on this thingy: https://review.openstack.org/#/c/331889/
13:39:03 it's part of a quite large spec. the main idea is that you can name / tag different instance NICs and block devices (volumes, ephemerals)
13:39:39 which can be very useful if you have something like 10 block devices on an instance. it will be easier to distinguish them, as they have a user-given name.
13:40:30 for example if you do: nova boot --nic net-id=$net_id,tag=lenet test-vm
13:40:43 then that VM's NIC will be tagged as 'lenet'.
13:41:04 soo, yeah.
13:41:53 questions?
13:43:07 not from me
13:43:25 k. anything else?
13:43:43 yes
13:43:58 shoot
13:44:10 we hit an issue with FreeRDP. c64cosmin is working on it
13:44:35 https://github.com/FreeRDP/FreeRDP-WebConnect/issues/149
13:45:09 the issue is that FreeRDP-WebConnect does not authenticate the keystone token if the REST endpoint is TLS-enabled
13:45:33 yes, I'm working on it
13:45:49 waiting for a fix; whenever c64cosmin gets it fixed, we will pick it up and test
13:46:02 it seems to be a cpprestsdk problem; we're using this lib for the authentication
13:46:10 thanks c64cosmin for working on this
13:46:50 thanks sagar for pointing it out
13:47:55 claudiub: next topic, we will soon move to Mitaka; we are planning to check the cluster driver soon
13:47:55 anything else?
13:48:03 possibly this week
13:48:12 we will let you know if we hit any issues
13:48:13 sagar_nikam: I see. :)
13:48:23 sounds good.
13:49:11 I am done with my topics
13:49:19 cool, me too. :)
13:49:24 anyone anything else?
13:49:41 if not, we might have our shortest meeting yet. :)
13:49:45 thanks all... early end today
13:50:14 cool. thanks folks for joining, see you next week!
13:50:23 see ya!
13:50:31 #endmeeting
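
Post-meeting sketch of the port-migration idea from the open discussion: live-migrating instances off a compute node running networking-hyperv onto one running neutron-ovs-agent, so their ports get rebound as OVS ports. This is only a sketch of the untested theory above, using python-novaclient; the host names, credentials, and auth URL are placeholders.

```python
from novaclient import client as nova_client

# Placeholder credentials and auth URL; adjust for the deployment under test.
nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                          'http://controller:5000/v2.0')

# Host A runs networking-hyperv, host B runs neutron-ovs-agent. On
# live-migration the ports should be rebound on the destination host,
# ending up as OVS ports (same network and type, e.g. vlan -> vlan).
for server in nova.servers.list(search_opts={'host': 'hyperv-host-a',
                                             'all_tenants': 1}):
    server.live_migrate(host='hyperv-host-b')
```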
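For the device tagging spec (https://review.openstack.org/#/c/331889/), the expectation is that the user-given tags become visible to the guest through the instance metadata. The layout below (a 'devices' list with a 'tags' field) is an assumption based on the spec still under review at the time of the meeting, not a settled interface.

```python
import requests

# Standard OpenStack metadata endpoint, reachable from inside the guest.
METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'

meta = requests.get(METADATA_URL).json()
# Assumed layout: a 'devices' list whose entries carry the user-given tags,
# e.g. the 'lenet' tag from the nova boot example in the meeting.
for device in meta.get('devices', []):
    print(device.get('type'), device.get('tags', []))
```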
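On the FreeRDP-WebConnect issue (https://github.com/FreeRDP/FreeRDP-WebConnect/issues/149): the actual fix lives in the C++/cpprestsdk code, but as a rough Python illustration, this is what validating a keystone token against a TLS-enabled endpoint amounts to. The keystone URL and CA bundle path are placeholders.

```python
import requests

# Placeholder keystone endpoint and CA bundle for the TLS-enabled deployment.
KEYSTONE_URL = 'https://controller:5000/v3/auth/tokens'
CA_BUNDLE = '/etc/ssl/certs/keystone-ca.pem'


def is_token_valid(service_token, subject_token):
    """Check a user token via keystone v3 (GET /v3/auth/tokens)."""
    resp = requests.get(
        KEYSTONE_URL,
        headers={'X-Auth-Token': service_token,
                 'X-Subject-Token': subject_token},
        # Verifying the TLS endpoint against the deployment's CA is the part
        # that reportedly breaks in the cpprestsdk-based check.
        verify=CA_BUNDLE)
    return resp.status_code == 200
```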