*** jgriffith has joined #openstack-kuryr | 00:26 | |
*** jgriffith has quit IRC | 00:30 | |
*** jgriffith has joined #openstack-kuryr | 00:33 | |
*** jgriffith has quit IRC | 00:36 | |
*** hongbin has quit IRC | 00:43 | |
*** diogogmt has quit IRC | 01:05 | |
*** pmannidi_ has joined #openstack-kuryr | 01:09 | |
*** pmannidi has quit IRC | 01:11 | |
*** yedongcan has joined #openstack-kuryr | 01:35 | |
*** diogogmt has joined #openstack-kuryr | 01:49 | |
*** jgriffith has joined #openstack-kuryr | 01:49 | |
*** hongbin has joined #openstack-kuryr | 02:09 | |
*** jgriffith has quit IRC | 02:12 | |
*** limao has joined #openstack-kuryr | 02:25 | |
*** tonanhngo has quit IRC | 02:48 | |
*** yamamoto_ has joined #openstack-kuryr | 02:48 | |
*** tonanhngo has joined #openstack-kuryr | 02:48 | |
*** tonanhngo has quit IRC | 02:50 | |
*** yamamoto_ has quit IRC | 02:57 | |
*** yamamoto_ has joined #openstack-kuryr | 03:00 | |
*** tonanhngo has joined #openstack-kuryr | 03:04 | |
*** tonanhngo has quit IRC | 03:06 | |
*** jgriffith has joined #openstack-kuryr | 03:36 | |
*** jgriffith has quit IRC | 03:36 | |
*** hongbin has quit IRC | 03:51 | |
*** hongbin has joined #openstack-kuryr | 03:55 | |
*** tonanhngo has joined #openstack-kuryr | 04:04 | |
*** tonanhngo has quit IRC | 04:06 | |
janonymous | Merge conflicts :( | 04:11 |
*** hongbin has quit IRC | 04:13 | |
*** diogogmt has quit IRC | 04:15 | |
*** yamamoto_ has quit IRC | 04:17 | |
openstackgerrit | Dongcan Ye proposed openstack/kuryr-libnetwork: Fullstack: Using the credentials from openrc config file https://review.openstack.org/399585 | 04:17 |
yedongcan | janonymous: :P, bad news. | 04:18 |
janonymous | yedongcan: :P | 04:19 |
*** limao has quit IRC | 04:22 | |
*** tonanhngo has joined #openstack-kuryr | 04:24 | |
*** yamamoto_ has joined #openstack-kuryr | 04:25 | |
*** tonanhngo has quit IRC | 04:25 | |
*** yamamoto_ has quit IRC | 04:27 | |
*** yamamoto_ has joined #openstack-kuryr | 04:32 | |
janonymous | ivc_, vikasc: can i work on vagrant for kuryr-kubernetes once devstack is done? | 04:32 |
vikasc | janonymous, that would be great | 04:32 |
janonymous | vikasc: thanks | 04:33 |
vikasc | janonymous, yw!! | 04:33 |
*** yamamoto_ has quit IRC | 04:34 | |
*** pmannidi_ is now known as pmannidi | 04:37 | |
*** tonanhngo has joined #openstack-kuryr | 04:45 | |
*** tonanhngo has quit IRC | 04:48 | |
*** yamamoto_ has joined #openstack-kuryr | 05:02 | |
*** yamamoto_ has quit IRC | 05:03 | |
vikasc | linux bridge is not letting arp replies pass through it. No iptables rule seems to be blocking; even tried after flushing iptables. Any suggestion? | 05:04 |
*** yamamoto_ has joined #openstack-kuryr | 05:09 | |
janonymous | vikasc: what about security groups? | 05:17 |
janonymous | *rules | 05:17 |
vikasc | janonymous, as i said, i flushed iptable rules, thats where security groups are applied, so ... | 05:18 |
vikasc | janonymous, something else | 05:18 |
janonymous | vikasc: have you checked on ovs ? | 05:21 |
*** yamamoto_ has quit IRC | 05:21 | |
vikasc | janonymous, yes, ovs is not blocking anything. I can see packet entering on one interface of bridge but not coming out of the expected interface | 05:22 |
*** limao has joined #openstack-kuryr | 05:22 | |
*** janki has joined #openstack-kuryr | 05:23 | |
janonymous | vikasc: aah, out of suggestions now :P | 05:26 |
vikasc | janonymous, np :) | 05:26 |
vikasc | janonymous, thanks !! | 05:26 |
janonymous | vikasc: :) | 05:26 |
*** limao has quit IRC | 05:27 | |
*** limao has joined #openstack-kuryr | 05:28 | |
reedip | vikasc : You are getting the Input packet on OVS and Linux Bridge, but not the output packets , right ? | 05:33 |
vikasc | reedip: limao: in the "into the vm" direction, the linux bridge is not forwarding the arp reply onto the tap interface, which i can see entering the linux bridge on the 'qvb' interface from ovs | 05:44 |
limao | Hello vikasc | 05:44 |
vikasc | limao, hello Liping :) | 05:44 |
vikasc | limao, i was trying out trunk port support in neutron manually | 05:45 |
limao | I just join the channel, let me check the log | 05:45 |
*** yedongcan has quit IRC | 05:46 | |
reedip | vikasc : so you have vlan enabled in trunk mode? | 05:46 |
vikasc | limao, created two vlan interfaces in the nova-vm, and plugged them into two namespaces | 05:46 |
*** irenab_ has joined #openstack-kuryr | 05:46 | |
vikasc | reedip, yes | 05:46 |
vikasc | when i am trying to ping one namespace from another (both in the same vm, but on a network different from the vm's), i can see vlan tagged arp packets on the linux bridge | 05:47 |
vikasc | and the strange part is, from within both namespaces i can ping the router ip. The arp reply from the router (which is also tagged) is not getting blocked on the linux bridge | 05:50 |
limao | You created two vlan interfaces and plugged them into different namespaces in the nested vm, but they can't ping each other, right? | 05:50 |
vikasc | limao, yes | 05:50 |
vikasc | limao, on debugging found that, tagged arp reply is not able to pass linux bridge | 05:51 |
limao | vikasc: oh, Let me have a try in my local. | 05:51 |
vikasc | limao, sure, thanks | 05:52 |
*** yedongcan has joined #openstack-kuryr | 05:54 | |
*** tonanhngo has joined #openstack-kuryr | 05:55 | |
*** tonanhngo has quit IRC | 05:56 | |
vikasc | limao, i did launch vm with neutron trunk parent port and created vlan interfaces and child subports | 06:02 |
*** yamamoto_ has joined #openstack-kuryr | 06:02 | |
limao | vikasc: Can you list your neutron commands? so that I can get same env | 06:03 |
vikasc | limao, i followed these commands https://wiki.openstack.org/wiki/Neutron/TrunkPort#API-CLI_mapping | 06:04 |
limao | ok, thanks | 06:04 |
*** tonanhngo has joined #openstack-kuryr | 06:15 | |
limao | hi vikasc, I'm downloading the image (following the guide), just a quick confirm: did you set a default gw in the namespace? | 06:16 |
*** tonanhngo has quit IRC | 06:16 | |
limao | did you create eth0.101 and eth0.102, then add them into different ns? | 06:16 |
vikasc | yes on last line | 06:17 |
vikasc | on default gw, yes i created a neutron router (implemented with a namespace) | 06:17 |
vikasc | on host | 06:17 |
limao | In the NS of eth0.101, did you set up the default GW? | 06:18 |
limao | I mean default route | 06:18 |
vikasc | limao, 50.0.0.0/24 i used as network for vlan interfaces inside ns | 06:18 |
limao | can you show me the output of ip route in the ns? | 06:19 |
vikasc | limao, i did set router ip, 50.0.0.1 as default gw | 06:19 |
limao | vikasc: ok, get it | 06:20 |
vikasc | limao, 50.0.0.6 and 50.0.0.9 are vlan interface ips | 06:20 |
vikasc | limao, 100.0.0.6 is the vm ip, inside which vlan interfaces are created | 06:20 |
limao | oh, you create one subport right? and both of the containers are in same subnet | 06:21 |
limao | (I thought the containers were in different subnets, something like 50.0.0.6, 60.0.0.9) | 06:22 |
vikasc | limao, containers/ns are from same subnet, 50.0.0.0/24 | 06:23 |
vikasc | limao, two subports | 06:23 |
vikasc | limao, one for each vlan interface | 06:23 |
limao | vikasc: OK, let me see | 06:24 |
vikasc | thanks limao | 06:25 |
*** tonanhngo has joined #openstack-kuryr | 06:27 | |
*** janki is now known as janki|lunch | 06:30 | |
*** tonanhngo has quit IRC | 06:30 | |
*** yedongcan has quit IRC | 06:38 | |
openstackgerrit | vikas choudhary proposed openstack/kuryr: Fix container port ipaddress setting in ipvlan/macvlan drivers https://review.openstack.org/397057 | 06:48 |
*** oanson has joined #openstack-kuryr | 06:51 | |
*** reedip has quit IRC | 07:06 | |
limao | Hi vikasc | 07:12 |
vikasc | hi limao | 07:12 |
limao | Did you add two subports which are in the same subnet, but with different vlans? | 07:15 |
vikasc | limao, yes | 07:16 |
vikasc | vlan for separating out their traffic within the vm | 07:16 |
limao | OK, get it... I'm not sure if this is supported, let me try | 07:18 |
*** reedip has joined #openstack-kuryr | 07:20 | |
vikasc | limao, what is supported use-case in your understanding? | 07:20 |
limao | vikasc: different vlan for different subnet(in my mind) | 07:21 |
limao | vagrant@devstack:~/devstack$ openstack port create --network net1 --mac-address "$parent_mac" port3 | 07:23 |
limao | HttpException: Unable to complete operation for network c4f24371-3605-4c84-9471-63069574fb4b. The mac address fa:16:3e:75:20:26 is in use. | 07:23 |
limao | Hi vikasc, I get this error, when I create second subport in the same network | 07:23 |
vikasc | limao, one moment pls, let me check | 07:24 |
limao | vikasc: it should not allow you to create two ports with the same mac in the same subnet (this makes sense) | 07:24 |
vikasc | limao, macs will be different | 07:24 |
limao | vikasc: Did you create two subport with same mac with Parent_mac? | 07:24 |
*** tonanhngo has joined #openstack-kuryr | 07:25 | |
vikasc | limao, no, with neutron provided macs | 07:25 |
limao | # # (a) either create child ports having the same MAC address as the parent port | 07:25 |
limao | # # (remember, they are on different networks), | 07:25 |
limao | # # NOTE This approach is affected by a bug of the openvswitch firewall driver: | 07:25 |
limao | # # https://bugs.launchpad.net/neutron/+bug/1626010 | 07:25 |
openstack | Launchpad bug 1626010 in neutron "OVS Firewall cannot handle non unique MACs" [High,In progress] - Assigned to Jakub Libosvar (libosvar) | 07:25 |
*** tonanhngo has quit IRC | 07:26 | |
limao | Did you follow The guide with a) or b) to do this? | 07:26 |
limao | # # (a) either create child ports having the same MAC address as the parent port | 07:26 |
limao | # # (remember, they are on different networks), | 07:26 |
vikasc | limao, 'b' | 07:26 |
limao | # # (b) or create the VLAN subinterfaces with MAC addresses as random-assigned by neutron. | 07:26 |
limao | OK | 07:26 |
limao | I'm using a) now, let me try b) | 07:26 |
limao | So when you create the sub interface, you assign the mac in the VM, right? | 07:27 |
limao | # ssh vm0 sudo ip link add link eth0 name eth0.101 address "$child_mac" type vlan id 101 | 07:27 |
limao | # # eth0 and eth0.101 have different MAC addresses | 07:27 |
vikasc | limao, i am working on vlan-per-container approach, vlan-per-subnet/network will be another approach | 07:29 |
vikasc | limao, yes, with this neutron provided mac, vlan interface is created | 07:29 |
limao | vikasc: thanks, get it. trying this | 07:32 |
vikasc | limao, thanks !! | 07:32 |
limao | vikasc: how do you configure the eth0.101 ip, with dhcp or manually assign? | 07:44 |
*** janki|lunch is now known as janki | 07:44 | |
*** tonanhngo has joined #openstack-kuryr | 07:45 | |
vikasc | limao, i had dhcp enabled on neutron network 50.0.0.0/24, so whatever neutron dhcp allocated to the ports, i assigned manually to the vlan interfaces | 07:46 |
*** tonanhngo has quit IRC | 07:47 | |
vikasc | limao, 50.0.0.6 and 50.0.0.9 | 07:47 |
limao | vikasc: OK, I find when I add first subport, I can get it from dhcp(dhclient eth0.203) | 07:47 |
limao | when I add second subport(which is in same subnet), it can't get ip from dhcp then | 07:48 |
vikasc | limao, sorry i could not understand | 07:48 |
limao | Let's say your case, for example, you use 101 for 50.0.0.6, and 102 for 50.0.0.9 | 07:49 |
vikasc | limao, ok | 07:49 |
limao | then after ifup eth0.101, I can use dhclient eth0.101 to get ip 50.0.0.6 | 07:50 |
limao | but after ifup eth0.102, I can't use dhclient eth0.102 to get 50.0.0.9 | 07:50 |
vikasc | limao, is it assigning same ip, which is allocated by neutron when corresponding child port was created? | 07:50 |
limao | not same ip, it is allocated by neutron | 07:51 |
limao | which is child port ip | 07:51 |
vikasc | limao, yes , i meant that only... same ip as child port's ip | 07:51 |
vikasc | limao, i just looked up child ports and assigned manually | 07:52 |
limao | vikasc: ohh.. same with child port's ip | 07:52 |
limao | vikasc: OK, I think I need some time to look into it, I need to double confirm if it can work with two subport which are in same subnetwork | 07:53 |
vikasc | limao, sure | 07:53 |
vikasc | limao, will look forward to your findings | 07:53 |
limao | (Two subport in different subnet works in my env) | 07:53 |
vikasc | limao, thats like ipvlan case | 07:54 |
vikasc | limao, right? | 07:54 |
vikasc | limao, or each container will have seperate subnet the? | 07:54 |
vikasc | s/the/then | 07:54 |
limao | (or each container will have a separate subnet...) | 07:55 |
limao | each container will have a separate subnet | 07:55 |
vikasc | limao, how a cluster of containers belonging to same subnet can be created in such case | 07:57 |
vikasc | limao, if each container will have seperate subnet | 07:57 |
limao | vikasc: I have not thought it through clearly yet, I just mean that it can work with the neutron trunk feature :-), I agree that each container should be in the same subnet in our vm-nested case | 07:58 |
vikasc | limao, ok | 07:58 |
vikasc | limao, where do you think two vlan interfaces on same subnet case will fail? | 07:59 |
vikasc | limao, in my understanding there should not be a problem | 08:00 |
limao | vikasc: I mean may have some bug | 08:00 |
vikasc | limao, ok.. | 08:00 |
vikasc | limao, yeah.. i will explore more | 08:00 |
*** tonanhngo has joined #openstack-kuryr | 08:02 | |
*** dimak has joined #openstack-kuryr | 08:04 | |
*** tonanhngo has quit IRC | 08:05 | |
*** tonanhngo has joined #openstack-kuryr | 08:23 | |
*** tonanhngo has quit IRC | 08:24 | |
*** yedongcan has joined #openstack-kuryr | 08:26 | |
*** tonanhngo has joined #openstack-kuryr | 08:44 | |
*** tonanhngo has quit IRC | 08:45 | |
*** gsagie has joined #openstack-kuryr | 08:45 | |
openstackgerrit | Merged openstack/kuryr: [docs] Libnetwork remote driver missing a step https://review.openstack.org/400323 | 08:57 |
*** lmdaly has joined #openstack-kuryr | 09:00 | |
*** tonanhngo has joined #openstack-kuryr | 09:04 | |
*** tonanhngo has quit IRC | 09:06 | |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding https://review.openstack.org/376044 | 09:08 |
ivc_ | irenab, vikasc, apuimedo, i've rebased VIFHandler patch https://review.openstack.org/#/c/376044/3 to use drivers | 09:11 |
ivc_ | also removed the lengthy docstring in favor of the devref that irenab is working on | 09:12 |
vikasc | thanks ivc_ | 09:15 |
openstackgerrit | Louise Daly proposed openstack/kuryr-libnetwork: [WIP]Driver based model for kuryr-libnetwork https://review.openstack.org/400365 | 09:18 |
*** tonanhngo has joined #openstack-kuryr | 09:24 | |
*** tonanhngo has quit IRC | 09:25 | |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Generic VIF controller driver https://review.openstack.org/400010 | 09:27 |
vikasc | irenab, ping | 09:31 |
irenab_ | vikasc, pong | 09:32 |
vikasc | irenab, can you please reword your comment on https://review.openstack.org/#/c/361993/11/kuryr/lib/segmentation_type_drivers/vlan.py | 09:32 |
vikasc | irenab, "I was expecting to see the class deriving from the SegmentationDriver to deal with VLAN type" | 09:33 |
vikasc | irenab, the comment is on this code: | 09:33 |
vikasc | def __init__(self): | 09:33 |
vikasc | self.available_local_vlans = set(moves.range(const.MIN_VLAN_TAG, const.MAX_VLAN_TAG)) | 09:33 |
vikasc | irenab, what do i need to do? | 09:33 |
irenab_ | vikasc, sorry, what is your question? | 09:34 |
vikasc | irenab, " I was expecting to see the class deriving from the SegmentationDriver to deal with VLAN type" | 09:34 |
vikasc | irenab, sorry, i could not get this comment clearly | 09:34 |
vikasc | irenab, what you are expecting | 09:35 |
vikasc | irenab, you want a base class and then specific classes deriving from it? | 09:35 |
vikasc | irenab, or something else | 09:35 |
irenab_ | vikasc, let me explain. The current class is SegmentationDriver, but it deals with VLAN management | 09:35 |
vikasc | ok | 09:36 |
irenab_ | so when VXLAN driver will be added, it will need other handling | 09:36 |
irenab_ | it can either delegate alloc/release to Driver or just have VLanSegDriver | 09:37 |
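The alloc/release handling being discussed can be sketched as a small in-memory pool. This is an illustrative stand-in only: the real driver takes const.MIN_VLAN_TAG/const.MAX_VLAN_TAG from kuryr's constants (via oslo's moves.range), and the method names below are invented for the sketch.

```python
# Assumed stand-ins for kuryr's const.MIN_VLAN_TAG / const.MAX_VLAN_TAG.
MIN_VLAN_TAG = 1
MAX_VLAN_TAG = 4095


class SegmentationDriver:
    """Hands out VLAN segmentation IDs from a local pool."""

    def __init__(self):
        self.available_local_vlans = set(range(MIN_VLAN_TAG, MAX_VLAN_TAG))

    def allocate_segmentation_id(self):
        # set.pop() raises KeyError once the pool is exhausted
        return self.available_local_vlans.pop()

    def release_segmentation_id(self, vlan_id):
        # returning an ID makes it available for the next allocation
        self.available_local_vlans.add(vlan_id)
```

A VXLAN variant would need a different ID range and wire-level handling, which is exactly why the discussion turns to keeping one class name per segmentation-type module.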
vikasc | irenab, i was thinking that vxlan driver will have vxlan.py , using the name of seg_driver from config, i am loading driver dynamically | 09:38 |
vikasc | like this | 09:39 |
vikasc | segmentation_driver = importutils.import_module(cfg.CONF.binding.driver) | 09:39 |
vikasc | driver = segmentation_driver.SegmentationDriver() | 09:39 |
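A rough sketch of that namespace-based loading, using the stdlib importlib in place of oslo's importutils (the helper name `load_segmentation_driver` is made up; only the "same class name, different module" convention comes from the discussion above):

```python
import importlib


def load_segmentation_driver(module_path):
    """Import the configured module and instantiate its SegmentationDriver.

    Each segmentation type (vlan.py, vxlan.py, ...) exposes a class with the
    same name, so switching drivers is just a matter of pointing the config
    option at a different module namespace.
    """
    module = importlib.import_module(module_path)
    return module.SegmentationDriver()
```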
irenab_ | so you are using 'namespace' and not class to differentiate | 09:42 |
vikasc | irenab, yes | 09:42 |
vikasc | irenab, same name in different namespace | 09:42 |
irenab_ | vikasc, somehow missed this, then my comment ir resolved | 09:43 |
irenab_ | s/ir/is | 09:43 |
vikasc | irenab, thanks | 09:43 |
vikasc | irenab, anyways going to update patch to resolve other reviewers comments | 09:43 |
*** tonanhngo has joined #openstack-kuryr | 09:43 | |
vikasc | irenab, wanted to make sure i am not missing something | 09:43 |
irenab_ | vikasc, thank you for the patience | 09:44 |
*** irenab_ has quit IRC | 09:44 | |
*** irenab_ has joined #openstack-kuryr | 09:44 | |
*** tonanhngo has quit IRC | 09:44 | |
vikasc | irenab, s/for/from :) | 09:45 |
*** garyloug has joined #openstack-kuryr | 09:47 | |
openstackgerrit | Merged openstack/fuxi: Fix the .gitignore file https://review.openstack.org/399282 | 09:47 |
openstackgerrit | vikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver https://review.openstack.org/361993 | 09:48 |
apuimedo | cool | 09:49 |
irenab_ | vikasc, is this still WIP? | 09:49 |
janonymous | ivc_,vikasc: one question.. currently hyperkube binary is extracted and run for kuryr.... what is the problem when run directly kubernetes env with restarting with cni driver ? | 09:50 |
vikasc | irenab, test cases are pending | 09:50 |
apuimedo | janonymous: ? | 09:50 |
irenab_ | vikasc, so cannot be merged, right? | 09:50 |
vikasc | irenab, right | 09:50 |
janonymous | apuimedo: hey was going through kuryr-kubernetes | 09:51 |
vikasc | irenab, working on test cases | 09:51 |
janonymous | apuimedo: is that question correct? | 09:51 |
apuimedo | janonymous: could you rephrase it? I'm afraid I didn't get it | 09:52 |
*** lmdaly has quit IRC | 09:52 | |
janonymous | apuimedo:yeah | 09:52 |
vikasc | janonymous, you mean y not individual binaries? | 09:53 |
janonymous | apuimedo: to be specific, in the devstack installation i found that the hyperkube image is extracted, extract_hyperkube --> prepare_kubelet --> run_k8s_kubelet, so when a kubernetes cluster is running, wouldn't placing the cni driver in the appropriate location and restarting hyperkube work to pick up the changes? | 09:55 |
*** apuimedo has quit IRC | 09:55 | |
*** apuimedo has joined #openstack-kuryr | 09:56 | |
janonymous | vikasc: existing binaries | 09:57 |
apuimedo | janonymous: which existing binaries? | 09:58 |
janonymous | apuimedo: like in already running kubernetes cluster, running kubelet,api-server etc.. | 09:58 |
vikasc | apuimedo, no need to rerun already running api-server | 10:00 |
vikasc | sorry | 10:00 |
vikasc | janonymous, ^ | 10:00 |
apuimedo | janonymous: oh, you can reuse existing ones | 10:00 |
apuimedo | look at the devstack settings | 10:00 |
apuimedo | if you specify already running services, IIRC, it won't launch its own | 10:00 |
vikasc | apuimedo, +1 | 10:00 |
apuimedo | if I don't remember correctly, I can fix it | 10:00 |
janonymous | apuimedo: https://git.openstack.org/cgit/openstack/kuryr-kubernetes/tree/devstack/plugin.sh#n340 | 10:01 |
vikasc | janonymous, it is like running kubelet on this node only, on which devstack is being run | 10:01 |
vikasc | janonymous, and then the kubelet will register itself with the apiserver | 10:02 |
vikasc | janonymous, we should be placing the cni driver also at the expected path and run the kubelet accordingly | 10:03 |
janonymous | yes this last point i was referring to | 10:04 |
*** tonanhngo has joined #openstack-kuryr | 10:04 | |
janonymous | apuimedo: vikasc: but sry for not clearly framing the question | 10:04 |
apuimedo | mmm... I'm not sure I got it yet | 10:05 |
apuimedo | you can decide not to run the kubelet | 10:05 |
*** tonanhngo has quit IRC | 10:05 | |
apuimedo | and run it manually | 10:05 |
apuimedo | that's why I put it so that you have to explicitly enable the kubelet service | 10:06 |
vikasc | janonymous, do you mean that you could not find in devstack plugin where is the code for placing cni at expected path? | 10:06 |
janonymous | sorry for creating confusion, let me rephrase it from beginning | 10:07 |
vikasc | :) sure | 10:07 |
janonymous | We have https://git.openstack.org/cgit/openstack/kuryr-kubernetes/tree/devstack/plugin.sh#n340 which has 3 steps as mentioned in it. | 10:07 |
janonymous | Question1) If i do not extract hyperkube and use an already installed hyperkube image with cni parameters, would it be okay? | 10:09 |
janonymous | Question2) also, wouldn't placing cni in the expected path and restarting the kubernetes services be enough to pick up the changes? just like in kuryr-libnetwork we do... place the driver at a path and tell docker to use it | 10:10 |
apuimedo | 1) janonymous: you would have to modify your hyperkube image to place our cni stuff inside | 10:13 |
apuimedo | 2) you need to start the kubelet when the plugin descriptors are already in place | 10:13 |
janonymous | 1) by modify you mean restart with different parameters? or to rebuild hyperkube binary? | 10:17 |
janonymous | 2) right | 10:17 |
limao | Hi vikasc | 10:20 |
ivc_ | janonymous, what is the use-case that you have in mind? why exactly do we want an existing (and thus uncontrolled) kubernetes cluster? | 10:21 |
openstackgerrit | vikas choudhary proposed openstack/kuryr: Fix container port ipaddress setting in ipvlan/macvlan drivers https://review.openstack.org/397057 | 10:22 |
vikasc | limao, hello | 10:22 |
limao | vikasc: just share you what I find right now | 10:22 |
openstackgerrit | vikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver https://review.openstack.org/361993 | 10:23 |
limao | vikasc: when we add two subports in different vlans within the same subnet, mac flapping will happen | 10:23 |
vikasc | limao, mac flapping? | 10:24 |
janonymous | ivc_: not specific, trying to understand...as magnum might be deploying the coes...so was thinking | 10:24 |
*** tonanhngo has joined #openstack-kuryr | 10:24 | |
limao | 09:55:27.377407 fa:16:3e:f6:d3:a0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 204, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.5.1 tell 10.0.5.7, length 28 | 10:24 |
limao | 09:55:27.377449 fa:16:3e:f6:d3:a0 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 203, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.5.1 tell 10.0.5.7, length 28 | 10:24 |
limao | 09:55:27.377465 fa:16:3e:44:81:4b > fa:16:3e:f6:d3:a0, ethertype 802.1Q (0x8100), length 46: vlan 204, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.0.5.1 is-at fa:16:3e:44:81:4b, length 28 | 10:24 |
* vikasc looking | 10:24 | |
apuimedo | irenab: now https://review.openstack.org/#/c/397057/7 looks mergeable to me | 10:25 |
vikasc | limao, on which interface were these captures taken? | 10:25 |
limao | vikasc: I send out an arp request from vlan 204, but when I tcpdump on qbr, it has two arp requests | 10:25 |
vikasc | limao, thats expected | 10:25 |
*** tonanhngo has quit IRC | 10:26 | |
limao | vikasc: This is because there will be vlan flapping between tbr- and br-int | 10:26 |
limao | vikasc: there will be spt-XXX with vlan 203, and spt-YYY with vlan 204 in tbr- | 10:27 |
vikasc | limao, that i understand | 10:27 |
limao | vikasc: but in br-int, the spi-XXX and spi-YYY are in same local vlan | 10:27 |
limao | vikasc: then it will happen that | 10:28 |
limao | vikasc: This will impact the mac learning table on qbr- | 10:28 |
vikasc | limao, let me explain my understanding.. step by step | 10:28 |
limao | vikasc: that's why you can tcpdump the arp response on qbr-, but it did not forward to the vm | 10:29 |
limao | vikasc: sorry, I have to catch a bus very soon | 10:29 |
vikasc | limao, i will catchup with you | 10:29 |
limao | vikasc : brctl showmacs qbrXXX | 10:30 |
vikasc | limao, i will mail you | 10:30 |
limao | vikasc: you can use showmacs to check the mac learning table of qbr | 10:30 |
limao | vikasc: when it has the problem, you will find the mac of the subport is learned on the qvbYYY port; it should be on the tap port | 10:31 |
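limao's diagnosis can be reproduced with a toy model of the qbr's MAC learning (the port and MAC names here are hypothetical stand-ins; a real Linux bridge also ages table entries and floods unknown destinations):

```python
class LearningBridge:
    """Minimal MAC-learning bridge: remember which port each source MAC was
    last seen on, and forward a frame to the learned port of its destination."""

    def __init__(self):
        self.mac_table = {}  # mac -> port it was last learned on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # (re)learn the source
        return self.mac_table.get(dst_mac, "flood")  # egress port, or flood


qbr = LearningBridge()
# 1. The container's ARP request leaves via the tap: child MAC learned there.
qbr.receive("tap", "child_mac", "ff:ff:ff:ff:ff:ff")
# 2. The flapped duplicate of the same frame re-enters on the qvb side,
#    so the bridge re-learns child_mac on qvb.
qbr.receive("qvb", "child_mac", "ff:ff:ff:ff:ff:ff")
# 3. The ARP reply addressed to child_mac now resolves to qvb (its own
#    ingress port), so it is never forwarded out of the tap to the container.
egress = qbr.receive("qvb", "router_mac", "child_mac")
```

With the duplicated frame in step 2, egress resolves to the qvb side rather than the tap, matching the `brctl showmacs` observation that the subport MAC ends up learned on the qvb port.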
vikasc | limao, i will check through yr points and get back | 10:32 |
vikasc | limao, thanks a lot for the time :) | 10:32 |
limao | vikasc: Thanks, and sorry to have to go now | 10:32 |
vikasc | limao, np.. will cathcup | 10:32 |
*** limao has quit IRC | 10:33 | |
janonymous | ivc_,vikasc, apuimedo: Thanks for your responses,i would try from viewpoint mentioned maybe i am wrong at some point:) | 10:33 |
vikasc | janonymous, yw! | 10:37 |
apuimedo | ;-) | 10:38 |
vikasc | apuimedo, do you have time to continue this discussion a bit? | 10:39 |
apuimedo | vikasc: which? | 10:39 |
vikasc | apuimedo, trunk port one | 10:39 |
*** garyloug has quit IRC | 10:40 | |
apuimedo | oh | 10:41 |
apuimedo | sure | 10:41 |
apuimedo | let's go | 10:41 |
apuimedo | what is the issue? | 10:41 |
vikasc | explaining | 10:42 |
vikasc | i have a nova vm launched on parent trunk port 100.XXXXXX | 10:42 |
ivc_ | apuimedo, vikasc, can you point me at where in kuryr-lib do we handle network namespaces (i.e. create ns and move veth peer to it)? | 10:42 |
vikasc | following this doc https://wiki.openstack.org/wiki/Neutron/TrunkPort | 10:42 |
apuimedo | paging ltomasbo | 10:44 |
apuimedo | :-) | 10:44 |
vikasc | ivc_, https://github.com/openstack/kuryr/blob/master/usr/libexec/kuryr/ovs#L25 | 10:44 |
vikasc | apuimedo, i already had discussion with him | 10:45 |
vikasc | apuimedo, since morning i have been telling my story to everybody | 10:45 |
vikasc | :D | 10:45 |
vikasc | ivc_, is this what you were looking for? | 10:45 |
apuimedo | :-) | 10:45 |
vikasc | apuimedo, he said he will try my scenario and get back | 10:45 |
ivc_ | vikasc, nope :) i'm looking for a container side of veth | 10:46 |
vikasc | apuimedo, let forget that i asked you to discuss this :P . ltomasbo will get back | 10:46 |
vikasc | apuimedo, i will update you | 10:47 |
apuimedo | ivc_: IIRC it depends on the COE | 10:47 |
apuimedo | docker moves it for you in libnetwork | 10:47 |
ivc_ | apuimedo, COE is what? | 10:47 |
apuimedo | Container Orchestration Engine | 10:47 |
apuimedo | in this case, the Docker runtime | 10:47 |
apuimedo | libcontainer+libnetwork | 10:47 |
apuimedo | you want to see how to do it in ipdb? | 10:48 |
ivc_ | na | 10:48 |
ivc_ | i was wondering if it is already in kuryr-lib but couldn't find it | 10:48 |
apuimedo | no, I do not think it is there | 10:49 |
openstackgerrit | vikas choudhary proposed openstack/kuryr: [WIP] Nested-Containers: vlan driver https://review.openstack.org/361993 | 10:50 |
apuimedo | ivc_: vikasc: what's the status on https://review.openstack.org/#/c/397853/5 | 10:57 |
apuimedo | I'd like to shorten the outstanding queue | 10:57 |
ivc_ | apuimedo, i don't have any updates planned for the whole chain now (unless there are -1) | 10:59 |
apuimedo | I'm asking about 'He is going to update VIFHandler patch with driver api calls.' | 10:59 |
apuimedo | it seems you and vikasc agreed to some change, is that right? | 10:59 |
ivc_ | apuimedo, i've rebased VIFHandler already :) | 10:59 |
vikasc | apuimedo, ivc_ has done that | 10:59 |
apuimedo | vikasc: then +2 ;-) | 10:59 |
apuimedo | let's get rid of patches! | 11:00 |
vikasc | apuimedo, sure.. allow me couple of minutes :) | 11:00 |
apuimedo | or we'll only be rebasing till we grow white beards | 11:00 |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding https://review.openstack.org/376044 | 11:02 |
ivc_ | ^ rebased as there was an update in the chain and the patch was not visible in 'related changes' | 11:05 |
apuimedo | ok | 11:05 |
apuimedo | ivc_: are you using the current devstack code in your vagrant VM? | 11:06 |
apuimedo | I've been getting some errors starting kubelet | 11:06 |
ivc_ | i'm not using vagrant | 11:06 |
ivc_ | aye you need to set the config options | 11:06 |
ivc_ | for default project_id/subnet etc | 11:06 |
ltomasbo | hi apuimedo, vikasc was in a meeting the whole morning, going to try it now and get back to you | 11:06 |
apuimedo | ltomasbo: so much meeting will make you a politician | 11:07 |
ltomasbo | :D I hope not! | 11:07 |
vikasc | ltomasbo, are you able to scroll back and see liping's comments? | 11:08 |
vikasc | ltomasbo, he also tried same | 11:08 |
ivc_ | apuimedo, do you think we need to update devstack as part of https://review.openstack.org/376044 or are you ok if we do so in separate patch? | 11:08 |
* apuimedo checking | 11:09 | |
ltomasbo | so, not working for him either? | 11:09 |
ivc_ | apuimedo, and while i'm not using vagrant, i do use devstack, just deploying it manually | 11:09 |
* apuimedo will check in a little while actually | 11:09 | |
openstackgerrit | Merged openstack/kuryr-kubernetes: Controller driver base and pod project driver https://review.openstack.org/397853 | 11:09 |
vikasc | ltomasbo, yes.. not working | 11:09 |
apuimedo | ivc_: and devstack launched the kubelet service fine for you at every stacking? | 11:09 |
ltomasbo | I understand that it could be tricky for br-int, but there should be no problem with the qbr | 11:09 |
apuimedo | cause for me there's some flakiness | 11:10 |
vikasc | ltomasbo, thats what i was thinking | 11:10 |
ltomasbo | let me try it anyway! :D | 11:10 |
vikasc | ltomasbo, sure.. and then lets discuss | 11:10 |
apuimedo | ivc_: IntegrityError is rather broad in scope :P | 11:10 |
apuimedo | I propose we subclass it python3 OSError style in the future | 11:11 |
ivc_ | apuimedo, IntegrityError is something that is not expected to happen | 11:11 |
apuimedo | fucksakes, I just proposed subclassing | 11:11 |
ivc_ | you need to set all options for [neutron_defaults] in kuryr.conf | 11:11 |
apuimedo | y=ー( ゚д゚)・∵. | 11:12 |
apuimedo | ivc_: sure. I get that, I'm just saying that it would be nicer to have a NeutronConfError subclassing IntegrityError and subclass taht | 11:12 |
apuimedo | *that | 11:12 |
ivc_ | apuimedo, in fact that should be an RequiredOption error | 11:13 |
apuimedo | ivc_: not necessarily | 11:13 |
ivc_ | can you post an exception stacktrace to paste? | 11:13 |
apuimedo | RequiredOption could be a type of IntegrityError | 11:13 |
apuimedo | because I imagine IntegrityError can also be that the Option Required is present but leading to bad values/resources | 11:14 |
ivc_ | apuimedo, no, i mean i'm raising RequiredOption in case the option is missing | 11:14 |
ivc_ | like https://review.openstack.org/#/c/398324/10/kuryr_kubernetes/controller/drivers/default_subnet.py@50 | 11:14 |
ivc_ | so if you got an IntegrityError, then i've probably missed something | 11:15 |
ivc_ | can you paste a stacktrace? | 11:15 |
apuimedo | ivc_: I'm just reviewing the code | 11:15 |
apuimedo | ;-) | 11:15 |
apuimedo | I meant to say that I prefer that when we get stacktraces, I'd like us to be more specific in the kind that gets raise | 11:16 |
apuimedo | *raised | 11:16 |
apuimedo | that I'm fine catching IntegrityErrors, but for logging it would be nicer if we had subtypes | 11:16 |
ivc_ | there should be an error message | 11:17 |
apuimedo | I know. The error message is nicely specific. Nothing to complain about in the message | 11:18 |
ivc_ | apuimedo, stop teasing me and give me the stacktrace xD | 11:18 |
apuimedo | ivc_: I will when I get there | 11:18 |
apuimedo | network = next(net.obj_clone() for net in subnets.values() | 11:18 |
apuimedo | if net.id == neutron_port.get('network_id')) | 11:18 |
apuimedo | this code is a bit peculiar | 11:19 |
apuimedo | naming wise | 11:19 |
apuimedo | due to how we handle the subnets -> nets mapping | 11:19 |
apuimedo | it gets confusing | 11:19 |
apuimedo | I'll propose a name change | 11:19 |
openstackgerrit | Dongcan Ye proposed openstack/kuryr-libnetwork: Fullstack: Using the credentials from openrc config file https://review.openstack.org/399585 | 11:20 |
ivc_ | apuimedo, ikr, but to fix that we need Subnet.id in os-vif | 11:21 |
apuimedo | ivc_: I just posted the comment | 11:21 |
apuimedo | https://review.openstack.org/#/c/399953/2/kuryr_kubernetes/os_vif_util.py | 11:22 |
apuimedo | line 113 | 11:22 |
ivc_ | aye | 11:23 |
ivc_ | well subnets is in the same format as returned by PodSubnetsDriver.get_subnets | 11:23 |
apuimedo | ivc_: sure. I'm just trying to get it readable for new onlookers and for my future self that tends to forget things | 11:24 |
ivc_ | imo, we can add a docstring | 11:24 |
apuimedo | ivc_: yes. docstring is a good solution | 11:24 |
ivc_ | apuimedo, that whole os_vif_util is like a can of worms because of that missing subnet.id | 11:25 |
apuimedo | I agree on the assessment :-) | 11:25 |
ivc_ | i expect to refactor it once os-vif is updated | 11:25 |
apuimedo | I'm really tempted to send the patch to os-vif | 11:26 |
ivc_ | that'd be cool | 11:26 |
ivc_ | just note that everything OVO-related requires a OVO version bump | 11:26 |
ivc_ | and there might be some resistance on their side | 11:26 |
apuimedo | ivc_: I expect all difficulty to be exactly on that | 11:28 |
ivc_ | apuimedo, can you pls post the stacktrace with the exact error message to paste.openstack.org? | 11:29 |
apuimedo | ivc_: I didn't get any stacktrace. I was just imagining ways in which it could fail | 11:30 |
apuimedo | sorry I wasn't clear enough in my explanation above | 11:30 |
ivc_ | oh | 11:31 |
ltomasbo | vikasc, I'm trying with two subports in the same network | 11:34 |
ltomasbo | and I see that dhclient only gets an IP for the first one, not the second one | 11:34 |
ivc_ | apuimedo, well those IntegrityErrors should never happen (i.e. i don't expect that code to be reachable at all), but it's there so we have some context instead of that StopIteration (or something else) | 11:34 |
vikasc | ltomasbo, problem is because of the arp reply coming from ns2-->linux_br-->trunk_br-->br-int-->trunk_br--->linux_br--> ... so finally the bridge sees that the arp reply is coming from a different direction than the vm | 11:34 |
ltomasbo | and I can only ping from outside to the first one | 11:34 |
apuimedo | ivc_: ok | 11:34 |
ivc_ | in case i've missed something and/or something went completely wrong | 11:34 |
ltomasbo | I'm not even using namespaces | 11:35 |
*** tonanhngo has joined #openstack-kuryr | 11:35 | |
ltomasbo | just the eth0.101 and eth0.102 | 11:35 |
vikasc | ltomasbo, yes .. i also noticed that both namespaces can ping outside ip | 11:35 |
ltomasbo | both can ping out, but only one can be ping in | 11:35 |
vikasc | ltomasbo, that's because in that case the port learned is correct | 11:35 |
*** tonanhngo has quit IRC | 11:36 | |
vikasc | ltomasbo, but when they try to ping each other, just check the linux_br mac table.. the mac of one of them is learned on the outside port 'qvo' of the bridge | 11:36 |
ltomasbo | the problem will also be the route table | 11:37 |
apuimedo | ivc_: ok. Finished reviewing https://review.openstack.org/#/c/399953/2 . Once the docstring is there I can +2 | 11:37 |
ltomasbo | 10.0.5.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0.101 | 11:38 |
ltomasbo | 10.0.5.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0.102 | 11:38 |
ivc_ | apuimedo, cool, thnx | 11:38 |
vikasc | ltomasbo, this route table is from vm? | 11:40 |
ltomasbo | yep, inside the VM | 11:41 |
vikasc | ltomasbo, thats not the case in my vm : | 11:44 |
vikasc | root@vm0 ~]# netstat -nr | 11:44 |
vikasc | Kernel IP routing table | 11:44 |
vikasc | Destination Gateway Genmask Flags MSS Window irtt Iface | 11:44 |
vikasc | 0.0.0.0 100.0.0.1 0.0.0.0 UG 0 0 0 eth0 | 11:44 |
vikasc | 100.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 | 11:44 |
vikasc | 169.254.169.254 100.0.0.1 255.255.255.255 UGH 0 0 0 eth0 | 11:44 |
vikasc | ltomasbo, my interfaces are in namespaces, so not visible in vm's network namespace | 11:45 |
vikasc | ltomasbo, I don't think this issue will be there if ovs is used instead of ovs+linux_br. WDYT? | 11:47 |
*** lmdaly has joined #openstack-kuryr | 11:49 | |
ltomasbo | umm, I can try that too | 11:50 |
ltomasbo | what I see is the arp reply being on the wrong vlan | 11:50 |
ltomasbo | 22:23:36.873910 fa:16:3e:de:ac:83 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 101, p 0, ethertype ARP, Request who-has 10.0.5.12 tell 10.0.5.1, length 28 | 11:51 |
ltomasbo | 22:23:36.873931 fa:16:3e:de:ac:83 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 102, p 0, ethertype ARP, Request who-has 10.0.5.12 tell 10.0.5.1, length 28 | 11:51 |
ltomasbo | 22:23:36.875733 fa:16:3e:e2:1d:71 > fa:16:3e:de:ac:83, ethertype 802.1Q (0x8100), length 46: vlan 101, p 0, ethertype ARP, Reply 10.0.5.12 is-at fa:16:3e:e2:1d:71, length 28 | 11:51 |
ltomasbo | while it should be 102 (and the mac of eth0.102) | 11:51 |
ltomasbo | I just tried removing the route to 10.0.5.0 from eth0.101 | 11:54 |
ltomasbo | and then the eth0.102 works perfectly | 11:54 |
ltomasbo | both in and out | 11:55 |
* vikasc reading | 11:55 | |
ltomasbo | going to try if ovs-firewall can deal with this, and even if it does, if it is doing the right encapsulation | 11:55 |
*** tonanhngo has joined #openstack-kuryr | 11:55 | |
ltomasbo | perhaps trunk ports are not intended to have several vlans on the same network, but rather one vlan per network | 11:55 |
*** tonanhngo has quit IRC | 11:56 | |
vikasc | ltomasbo, above tcpdump is from "tap" interface of "qbr" right? | 11:58 |
vikasc | ltomasbo, if you do tcpdump on the other side interface, "qvo" .. you will see one more arp reply packet with vlan 102 | 11:59 |
vikasc | ltomasbo, the above vlan 101 arp reply is coming from inside the vm and going to br-int via the trunk bridge, and from there it will come back | 12:00 |
ltomasbo | yup, tap interface | 12:00 |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Port-to-VIF os-vif translator for hybrid OVS case https://review.openstack.org/399953 | 12:01 |
ltomasbo | vikasc, not sure I got your last comment | 12:01 |
ivc_ | apuimedo, added docstrings https://review.openstack.org/#/c/399953/2..3/kuryr_kubernetes/os_vif_util.py | 12:02 |
ltomasbo | that was the reply to a ping from the qrouter namespace, not from one ns to another | 12:02 |
vikasc | ltomasbo, this arp reply with vlan 102 | 12:02 |
apuimedo | ivc_: ivc_ why should https://review.openstack.org/#/c/400010/3/kuryr_kubernetes/os_vif_util.py line 198 restrict how many subnets the backing network of a subnet has? | 12:02 |
apuimedo | or am I misreading? | 12:02 |
vikasc | ltomasbo, sorry vlan 101 | 12:02 |
vikasc | ltomasbo, it is coming from inside vm and going to br-int via trunk bridge | 12:02 |
ivc_ | apuimedo, its not restricting networks, but its because of the same mapping problem | 12:03 |
apuimedo | how I read it, if let's say we have a network A with subnets X and Y | 12:03 |
ltomasbo | yep, via qbr and trunk bridge | 12:03 |
ivc_ | apuimedo, you will have {X.id: A[X], Y.id: A[Y]} | 12:04 |
apuimedo | you'd have a mapping {id(X): osvif(a), id(Y): osvif(a)} | 12:04 |
ivc_ | yup | 12:04 |
apuimedo | the trick here is | 12:04 |
vikasc | ltomasbo, from br-int this same packet will turn back and will enter trunk-br again and this time will be tagged with vlan 102.. and you can see this arp reply with 102 on other interface of linux bridge | 12:04 |
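The learning problem vikasc describes can be sketched with a toy model of a learning bridge (illustrative only; `tap` and `qvo` stand for the two qbr ports, and the MAC is the namespace's from ltomasbo's capture). When the ARP reply loops back in from the qvo side, the bridge re-learns the MAC on the wrong port, so later frames toward that MAC head away from the VM.

```python
class LearningBridge:
    """Toy model of linux-bridge source-MAC learning (not real qbr code)."""

    def __init__(self):
        self.fdb = {}  # mac -> port the mac was last seen arriving on

    def frame_in(self, src_mac, port):
        # a bridge (re-)learns the source MAC on whichever port it arrives
        self.fdb[src_mac] = port


br = LearningBridge()
br.frame_in('fa:16:3e:e2:1d:71', 'tap')  # reply leaves the VM: learned on tap
br.frame_in('fa:16:3e:e2:1d:71', 'qvo')  # same reply loops back via qvo:
                                         # now 'qvo' shadows the real location
```

This is also why disabling learning (ageing 0, which makes the bridge flood) hides the symptom.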
ivc_ | apuimedo, but those two osvif(a) will have different subnets attribute | 12:04 |
apuimedo | that's what I thought | 12:04 |
apuimedo | I have to say, that's a bit weird | 12:05 |
ivc_ | its ugly, but i can't help it | 12:05 |
apuimedo | and only a result of how we craft the network osvif objects :P | 12:05 |
ivc_ | no, because we need subnet_id -> subnet mapping while that structure should also have network object | 12:05 |
vikasc | ltomasbo, bluejeans? | 12:06 |
ltomasbo | ok | 12:06 |
apuimedo | ivc_: but the osvif Network object that you get as values in the example above could have references to both X and Y | 12:07 |
apuimedo | in both values | 12:07 |
*** yedongcan has left #openstack-kuryr | 12:07 | |
ivc_ | apuimedo, erm no. those would be copies of the same Network object | 12:08 |
apuimedo | you could have both X and Y in the subnets SubnetList | 12:08 |
ivc_ | nope | 12:08 |
ivc_ | then i'll lose the subnet_id -> subnet mapping | 12:08 |
apuimedo | mmm | 12:08 |
apuimedo | I really misunderstood then. I thought the mapping is subnet_id to osvif Network object | 12:09 |
ivc_ | well, you see, this thing should have been [NetworkA[SubnetA1, SubnetA2], NetworkB[SubnetB1]] | 12:09 |
ivc_ | that mapping is logically subnet_id -> subnet mapping, that also has the Network that contains the subnet | 12:10 |
ivc_ | apuimedo, so as soon as we get a subnet.id, the 'subnets' mapping will be converted into [NetworkA[SubnetA1, SubnetA2], NetworkB[SubnetB1]] | 12:13 |
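The `{X.id: A[X], Y.id: A[Y]}` shape ivc_ describes can be sketched with simplified stand-ins (assumption: the real code uses os-vif's `Network`/`Subnet` versioned objects, which lack a `Subnet.id`; the classes and `make_subnets_mapping` helper below are illustrative only). Each value is a clone of the same Network carrying exactly the one subnet keyed by its id, so the subnet_id -> subnet mapping survives even though the Network is duplicated.

```python
import copy


class Subnet:
    """Stand-in for an os-vif Subnet (no id attribute, as in os-vif)."""

    def __init__(self, cidr):
        self.cidr = cidr


class Network:
    """Stand-in for an os-vif Network holding a list of subnets."""

    def __init__(self, net_id, subnets):
        self.id = net_id
        self.subnets = subnets

    def obj_clone(self):
        return copy.deepcopy(self)


def make_subnets_mapping(network, neutron_subnets):
    # For network A with neutron subnets X and Y this yields
    # {X.id: A[X], Y.id: A[Y]}: copies of the same Network, each
    # restricted to the single subnet its key identifies.
    mapping = {}
    for subnet_id, subnet in neutron_subnets.items():
        net = network.obj_clone()
        net.subnets = [subnet]
        mapping[subnet_id] = net
    return mapping
```

With a `Subnet.id` in os-vif this could collapse into the cleaner `[NetworkA[SubnetA1, SubnetA2], NetworkB[SubnetB1]]` form mentioned above.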
apuimedo | ok | 12:14 |
*** tonanhngo has joined #openstack-kuryr | 12:15 | |
ivc_ | in fact it would probably be much cleaner if the os-vif Network did not have a 'subnets' attribute at all and instead a Subnet had a network_id | 12:17 |
*** tonanhngo has quit IRC | 12:17 | |
apuimedo | ivc_: your database background (foreign key) is showing :P | 12:20 |
ivc_ | haha | 12:20 |
ivc_ | but it would allow bi-directional relationship between Network <-> Subnet | 12:21 |
apuimedo | and I have to say... I tend to agree. But I would probably have both for browsing and to have it lazy load the resource | 12:21 |
apuimedo | ORM style | 12:21 |
apuimedo | killing performance since 2000s | 12:21 |
ivc_ | xD | 12:22 |
apuimedo | lmdaly: mchiappero: have you tried baremetal kuryr-libnetwork with dpdk? | 12:23 |
ivc_ | apuimedo, you'd then also have to solve circular dependency or that lazy loading would kill the server :) | 12:24 |
*** lmdaly has quit IRC | 12:24 | |
apuimedo | ivc_: soft links not to kill also the garbage collector | 12:24 |
ivc_ | apuimedo, during serialization i mean :) | 12:24 |
ivc_ | apuimedo, or add some sort of AI to guess if the user wants net as part of the subnet, or subnet as part of net. | 12:26 |
*** tonanhngo has joined #openstack-kuryr | 12:26 | |
ivc_ | apuimedo, maybe thats how skynet was born | 12:27 |
*** tonanhngo has quit IRC | 12:27 | |
apuimedo | :-) | 12:28 |
*** garyloug has joined #openstack-kuryr | 12:37 | |
irenab_ | ivc_, ping | 12:38 |
ivc_ | irenab_, pong | 12:39 |
irenab_ | ivc_, going through the Controller pod patch and have a few questions | 12:40 |
irenab_ | The drivers that are used by VifHandler are instantiated or referenced by the VifHandler? | 12:41 |
ivc_ | yes | 12:41 |
irenab_ | get_instance sort of implies a singleton | 12:42 |
ivc_ | it is | 12:42 |
irenab_ | So there may be the case that the same Driver will be used by another Handler entity, correct? | 12:42 |
ivc_ | it is possible, yes | 12:42 |
ivc_ | same instance | 12:43 |
irenab_ | ivc_, great. thanks | 12:43 |
ivc_ | irenab, np :) | 12:43 |
irenab_ | ivc_, I think I will drop the drivers hierarchy from the diagrams, it gets too crowded | 12:43 |
*** garyloug has quit IRC | 12:44 | |
ivc_ | yep, just keep the interface ones | 12:44 |
ivc_ | i.e. PodVIFDriver | 12:44 |
*** tonanhngo has joined #openstack-kuryr | 12:45 | |
irenab_ | for the derived ones, the question regarding naming | 12:45 |
irenab_ | there is GenericVIFDriver but the rest are DefaultXXXDriver | 12:46 |
irenab_ | ivc_, by design? | 12:46 |
ivc_ | irenab_, yes, Default not as in 'default driver', but 'default project' or 'default subnet' | 12:47 |
*** tonanhngo has quit IRC | 12:47 | |
ivc_ | irenab_, also i'd prefer if we have base class names (like PodVIFDriver) on the diagram, because it will allow us to keep devref up-to-date even if we change the drivers used by default | 12:48 |
irenab_ | I think we'd better have and explain the default drivers too, since I was confused about whether it's a default driver or default subnet/project | 12:51 |
irenab_ | ivc_, I will complete going through the patch and share the modified devref | 12:52 |
ivc_ | we can do it in text. i'm just worried that diagrams/images are harder to maintain :) | 12:52 |
ivc_ | irenab_, cool, thnx | 12:52 |
*** tonanhngo has joined #openstack-kuryr | 13:04 | |
*** tonanhngo has quit IRC | 13:05 | |
*** limao has joined #openstack-kuryr | 13:10 | |
irenab_ | ivc_, posted few questions on the patch | 13:12 |
mchiappero | apuimedo: no, we never tried, but I guess we will soon | 13:16 |
mchiappero | apuimedo: so, I'm afraid we are busy these days, I would suggest to go for the release and not wait for us | 13:17 |
mchiappero | although we will catch up soon | 13:17 |
mchiappero | now, two big problems we spotted | 13:17 |
limao | hi vikasc,ltomasbo | 13:18 |
vikasc | hi limao | 13:18 |
mchiappero | the first one is that the binding drivers assign IP addresses to interfaces, while a docker guy claims the remote plugin should never do that | 13:19 |
limao | vikasc: just back home and read through your irc log | 13:19 |
vikasc | limao, ltomasbo is going to try with ovs-firewall, to see if this problem is with linux bridge only | 13:20 |
mchiappero | the second very big problem is that it seems that neutron won't let you assign the same mac address to two different neutron ports | 13:20 |
mchiappero | but we can't change the mac address on the ipvlan interface either as it's not supported (but could be, I could easily produce a patch) | 13:20 |
limao | vikasc: OK, cool | 13:20 |
mchiappero | also, I'm wondering whether it's possible to create a neutron port with just IPv4 addresses and no IPv6 ones | 13:21 |
irenab_ | ivc_, apuimedo : ping | 13:28 |
irenab_ | vikasc, ping | 13:28 |
ivc_ | pong | 13:30 |
irenab_ | ivc_, I posted questions on patchset 3 and then realised you rebased it | 13:31 |
ivc_ | irenab_, no problem | 13:31 |
*** yamamoto_ has quit IRC | 13:31 | |
*** tonanhngo has joined #openstack-kuryr | 13:34 | |
*** tonanhngo has quit IRC | 13:35 | |
limao | vikasc: one more question , is the latest design vm-nested(trunk / subport) in this spec https://github.com/openstack/kuryr/blob/master/doc/source/specs/newton/nested_containers.rst | 13:35 |
vikasc | limao, yes.. but it needs update | 13:38 |
ivc_ | irenab_, i've replied to your comments | 13:38 |
limao | vikasc: OK, let me go through that again :-), will check with you if have questions. | 13:39 |
vikasc | limao, sure :) | 13:40 |
irenab_ | ivc_, checking in a min | 13:48 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/fuxi: Updated from global requirements https://review.openstack.org/373745 | 13:52 |
*** jgriffith has joined #openstack-kuryr | 13:53 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/kuryr-libnetwork: Updated from global requirements https://review.openstack.org/351976 | 13:54 |
*** tonanhngo has joined #openstack-kuryr | 13:54 | |
*** tonanhngo has quit IRC | 13:56 | |
*** tonanhngo has joined #openstack-kuryr | 14:15 | |
*** tonanhngo has quit IRC | 14:16 | |
*** yamamoto has joined #openstack-kuryr | 14:16 | |
ivc_ | irenab_, ping | 14:20 |
irenab_ | ivc_, pong | 14:20 |
ivc_ | irenab_, https://review.openstack.org/#/c/376044/3/kuryr_kubernetes/controller/handlers/vif.py@55 | 14:20 |
ivc_ | i thought you suggested that we add 'Filter' abstraction | 14:20 |
irenab_ | I did | 14:20 |
ivc_ | and then you ask me why we need it :) | 14:21 |
ivc_ | i'm confused | 14:21 |
irenab_ | I'm just not sure why ADD/DELETE may need different ones | 14:21 |
irenab_ | the Filter is for not handling the event | 14:21 |
apuimedo | irenab_: pong | 14:21 |
*** lmdaly has joined #openstack-kuryr | 14:21 | |
irenab_ | apuimedo, wanted to ask you not to merge urgently after +2 for non-trivial patches, didn't have time to review :-) | 14:22 |
ivc_ | irenab_, well for VIFHandler the filter would be 'if not self._is_host_network(pod)' and imo abstracting it into a Filter entity does not seem worth it | 14:22 |
irenab_ | I thought you suggested there can be cases when the Filter for ADD may differ from the Filter for DELETE | 14:23 |
ivc_ | and we'll still have the 'if not self._is_pending(pod)' for 'on_present' | 14:23 |
irenab_ | I was more on 'Filter if this k8s instance should be handled by kuryr' | 14:24 |
ivc_ | irenab_, what i mean is i don't see a reason to justify introducing that 'Filter' entity | 14:24 |
ivc_ | for now at least | 14:24 |
irenab_ | is_pending is a logical check | 14:25 |
apuimedo | ok | 14:25 |
ivc_ | irenab_, so is 'is_host_network' | 14:26 |
irenab_ | ivc_, I agree that Filter abstraction is not required right now, possibly we will have to add it in case there will be multiple CNI drivers | 14:26 |
ivc_ | irenab_, agreed | 14:26 |
ivc_ | irenab_, so to make it clear, your comment was for the 'REVISIT' part, right? | 14:27 |
irenab_ | ivc_, yes :-). I think we have enough abstaction/pluggability points for now | 14:27 |
ivc_ | :) | 14:27 |
irenab_ | ivc_, so there are only 2 questions left | 14:29 |
irenab_ | I am not sure I got your response on one time annotation update for the current code (I got your intention for the next step) | 14:29 |
ivc_ | irenab_, regarding the 'why else' | 14:29 |
irenab_ | yes | 14:30 |
ivc_ | do you suggest that we remove 'elif' and move the inner block outside? | 14:30 |
ivc_ | or replace 'elif' with 'else'? | 14:30 |
irenab_ | I was not sure what happens for a Pod that is updated | 14:31 |
irenab_ | seems nothing, but then maybe the code can be simplified | 14:31 |
ivc_ | not sure if i understand what you mean | 14:31 |
ivc_ | there are 3 cases covered there | 14:32 |
irenab_ | I do not understand why Else at line 70 is required | 14:32 |
ivc_ | if there's no VIF -> we create one and annotate (the VIF may have 'active' either True or False - that depends on VIFDriver) | 14:32 |
ivc_ | if VIF.active == True -> do nothing | 14:33 |
ivc_ | if VIF.active == False -> activate it | 14:33 |
irenab_ | so if it is not active, when is it going to be activated? | 14:33 |
apuimedo | irenab_: when you have a moment, merge https://review.openstack.org/#/c/394819/ | 14:33 |
ivc_ | irenab_, if it is not active, it is up to 'activate_vif' to do something about it :) | 14:34 |
ivc_ | in case of neutron-ovs or neutron-linuxbridge 'activate_vif' will poll neutron port status | 14:34 |
ivc_ | (and raise ResourceNotReady to retry if necessary) | 14:35 |
irenab_ | ivc_, but the else prevents entering this block | 14:35 |
*** tonanhngo has joined #openstack-kuryr | 14:35 | |
ivc_ | it does not | 14:35 |
irenab_ | ivc_, seems its time for me to take another cup of coffee :-) | 14:36 |
ivc_ | :) | 14:36 |
ivc_ | it's a tricky one :0 | 14:36 |
*** tonanhngo has quit IRC | 14:36 | |
ivc_ | the thing is that if request_vif returned active==False, we've updated pod annotation | 14:36 |
ivc_ | it triggers another event which will get us to the 'elif' part | 14:37 |
irenab_ | ivc_, where is this event triggered? | 14:37 |
ivc_ | by k8s | 14:37 |
ivc_ | when you annotate pod it triggers another event | 14:37 |
irenab_ | update? | 14:37 |
ivc_ | 'modified', but yes :) | 14:38 |
apuimedo | irenab_: ivc_: thanks for going thoroughly over this ;-) | 14:38 |
irenab_ | ivc_, ok, got it :-). We need sequence diagrams for k8s-kuryr-neutron ... | 14:38 |
apuimedo | irenab_: almost flow charts | 14:39 |
ivc_ | irenab_, the trick here is to get the 'active=False' annotation to the CNI driver so it can plug the VIF so that neutron will update the port status | 14:39 |
irenab_ | ivc_, I sort of like, again, how k8s does things | 14:39 |
irenab_ | apuimedo, ivc_ will try to add them tomorrow to the devref. Need to go now | 14:40 |
ivc_ | irenab_, and when neutron updates the status, activate_vif succeeds and updates annotation which is captured by CNI and causes it to unblock and to return to kubelet | 14:40 |
openstackgerrit | Merged openstack/kuryr: Replaces uuid.uuid4 with uuidutils.generate_uuid() https://review.openstack.org/394819 | 14:40 |
irenab_ | ivc_, so activate_vif blocks till timeout or status is ACTIVE? | 14:40 |
apuimedo | thanks irenab_ for the merge | 14:41 |
ivc_ | irenab_, i expect activate_vif to only 'show_port' once and just raise ResourceNotReady but it is up to the VIFDriver | 14:41 |
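The three on_present cases ivc_ walks through (no VIF yet / VIF not active / VIF active) can be sketched as follows. This is a hedged sketch of the flow in the patch under review, not the merged code: `ResourceNotReady`, `request_vif`, and `activate_vif` mirror the names in the discussion, while the `annotate` callback and dict-shaped pod/VIF are simplifications.

```python
class ResourceNotReady(Exception):
    """Raised to make the handler pipeline retry the event later."""


class VIFHandler:
    def __init__(self, vif_driver, annotate):
        self._drv = vif_driver
        self._annotate = annotate  # writes the VIF into the pod annotation

    def on_present(self, pod):
        vif = pod.get('vif')
        if vif is None:
            # no VIF yet: request one and annotate the pod; annotating
            # triggers a MODIFIED event from k8s that re-enters this handler
            vif = self._drv.request_vif(pod)
            self._annotate(pod, vif)
        elif not vif['active']:
            # VIF exists but the port is not ACTIVE yet: activate_vif
            # checks the port and raises ResourceNotReady to retry
            self._drv.activate_vif(pod, vif)
            self._annotate(pod, vif)
        # else: VIF is active, nothing to do
```

Once the CNI side plugs the VIF and neutron flips the port to ACTIVE, the retried `activate_vif` succeeds and the final annotation unblocks the CNI driver.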
*** garyloug has joined #openstack-kuryr | 14:41 | |
ivc_ | irenab_, https://review.openstack.org/#/c/400010/3/kuryr_kubernetes/controller/drivers/generic_vif.py@59 | 14:42 |
*** hongbin has joined #openstack-kuryr | 14:42 | |
mchiappero | apuimedo: ping | 14:42 |
irenab_ | ivc_, got it, so pipeline will retry. Nice | 14:43 |
ivc_ | :) | 14:43 |
ivc_ | irenab_, so do you see the big picture now? :) | 14:44 |
*** irenab_ has quit IRC | 14:47 | |
apuimedo | mchiappero: pong | 14:48 |
ivc_ | irenab_, i've replied to https://review.openstack.org/#/c/376044/3/kuryr_kubernetes/controller/handlers/vif.py@62 | 14:51 |
apuimedo | jerms: ping | 14:53 |
jerms | apuimedo: sup | 14:53 |
*** tonanhngo has joined #openstack-kuryr | 14:55 | |
apuimedo | jerms: hongbin could use hearing what the storage usage of kubernetes looks like from openshift | 14:55 |
apuimedo | he's working on fuxi (kuryr's storage backend) | 14:56 |
jerms | i see | 14:56 |
*** tonanhngo has quit IRC | 14:57 | |
*** gsagie has quit IRC | 14:57 | |
* hongbin is new to the fuxi project | 14:57 |
hongbin | i just started to pick up fuxi, but let me know if anything i can help | 14:58 |
mchiappero | apuimedo: I'm going to have meetings from now on, but I'm interested in an opinion on the above | 14:58 |
mchiappero | thank you :) | 14:58 |
mchiappero | I'll check the chat later | 14:58 |
hongbin | jerms: ^^ | 14:58 |
limao | hi vikasc | 14:58 |
jerms | i must be missing somethign terribly obvious -- kube already talks to cinder | 14:58 |
limao | vikasc: I have a workaround to let it work "# brctl setageing qbrXXX 0", after this, the containers can ping each other | 14:59 |
ivc_ | apuimedo, vikasc, merge https://review.openstack.org/#/c/398324 ? | 15:01 |
ivc_ | ^s/merge/shall we merge/ :) | 15:01 |
*** hongbin has quit IRC | 15:10 | |
apuimedo | :P | 15:15 |
apuimedo | mchiappero: I'm afraid I missed "the above" | 15:15 |
apuimedo | jerms: it's about finding if there is any gap that needs to be covered. Currently you put annotations for volumes in the pod definitions. Then I suppose the K8s driver picks that up, right? | 15:17 |
jerms | pod only asks for PV and size, and as of 1.4, something called a storageclass | 15:18 |
jerms | kube itself is configure with the storage backend info | 15:18 |
apuimedo | jerms: kubelet? | 15:18 |
apuimedo | ivc_: merged | 15:20 |
*** tonanhngo has joined #openstack-kuryr | 15:20 | |
*** tonanhngo has quit IRC | 15:22 | |
openstackgerrit | Merged openstack/kuryr-kubernetes: Default pod subnet driver and os-vif utils https://review.openstack.org/398324 | 15:24 |
openstackgerrit | Merged openstack/kuryr-kubernetes: Default pod security groups driver https://review.openstack.org/399518 | 15:26 |
*** yamamoto has quit IRC | 15:27 | |
ivc_ | apuimedo, cool. thnx :) | 15:32 |
apuimedo | anytime | 15:33 |
apuimedo | lmdaly: thanks for the new patch. Just posted the review | 15:35 |
apuimedo | mchiappero: I think it should be possible to have ipv4 and no ipv6 | 15:36 |
lmdaly | thanks apuimedo! | 15:36 |
apuimedo | about the binding assigning addresses, I think that is something that we should make libnetwork able to disable on binding | 15:37 |
apuimedo | k8s uses it, docker bans it | 15:37 |
apuimedo | so probably we should have libnetwork not pass ip address info to the binding | 15:37 |
apuimedo | and the binding not attempting to set it if none is passed | 15:37 |
*** tonanhngo has joined #openstack-kuryr | 15:38 | |
*** tonanhngo has quit IRC | 15:40 | |
*** yamamoto has joined #openstack-kuryr | 15:41 | |
*** tonanhngo has joined #openstack-kuryr | 15:55 | |
*** tonanhngo has quit IRC | 15:55 | |
*** yamamoto has quit IRC | 15:59 | |
*** kristianjf has joined #openstack-kuryr | 16:09 | |
*** tonanhngo has joined #openstack-kuryr | 16:16 | |
*** tonanhngo has quit IRC | 16:17 | |
*** oanson has quit IRC | 16:17 | |
*** yamamoto has joined #openstack-kuryr | 16:19 | |
*** dimak has quit IRC | 16:23 | |
*** diogogmt has joined #openstack-kuryr | 16:25 | |
*** hongbin has joined #openstack-kuryr | 16:30 | |
*** yamamoto has quit IRC | 16:31 | |
*** yamamoto has joined #openstack-kuryr | 16:32 | |
*** tonanhngo has joined #openstack-kuryr | 16:35 | |
*** tonanhngo has quit IRC | 16:36 | |
*** yamamoto_ has joined #openstack-kuryr | 16:43 | |
*** yamamoto has quit IRC | 16:46 | |
limao | apuimedo: vikasc: Hi, ltomasbo and I checked with jlibosva in the openstack-neutron channel about the Trunk port feature. Here are some highlights: 1) Neutron only supports the Trunk feature with ovs-fw (if we need sg for subports) 2) We need to make sure the src mac is different for all the containers on the nested VM, because in the guest OS, by default it's gonna have the mac of the parent port even though it goes out from the vlan interface. ( For this | 16:48 |
limao | bug: https://bugs.launchpad.net/neutron/+bug/1626010) | 16:48 |
openstack | Launchpad bug 1626010 in neutron "OVS Firewall cannot handle non unique MACs" [High,In progress] - Assigned to Jakub Libosvar (libosvar) | 16:48 |
*** yamamoto_ has quit IRC | 16:50 | |
*** tonanhngo has joined #openstack-kuryr | 16:55 | |
*** tonanhngo has quit IRC | 16:56 | |
*** janki has quit IRC | 16:59 | |
apuimedo | :/ | 17:01 |
apuimedo | limao: mchiappero: lmdaly: this could be bad news for ipvlan | 17:02 |
limao | apuimedo: looks like only macvlan can work in my mind (if that bug not fixed) | 17:03 |
ltomasbo | why? | 17:03 |
apuimedo | yes, sounds to me like that too | 17:04 |
apuimedo | ltomasbo: ipvlan always has the mac address of the link iface | 17:04 |
ltomasbo | with ipvlan you can have different macs too | 17:04 |
apuimedo | really? | 17:04 |
limao | ltomasbo: ohh | 17:04 |
apuimedo | I thought it always reuses the mac of the parent device | 17:04 |
limao | I do not know this.. | 17:04 |
ltomasbo | yes, we need to build the kuryr plugin in a way that the iface is dynamic | 17:04 |
*** huikang has joined #openstack-kuryr | 17:04 | |
ltomasbo | I tried with ipvlan (lmdaly patch) over a different (hardcoded) iface, e.g., eth0.101 | 17:05 |
ltomasbo | and it worked for me | 17:05 |
ltomasbo | the only thing is to have that iface loaded dynamically, which needs some thinking | 17:05 |
ltomasbo | for the ipvlan binding | 17:06 |
limao | ltomasbo: the src mac will be different from eth0 in that case? (ipvlan on eth0.101) | 17:07 |
apuimedo | ltomasbo: what you mean is the one vlan per network | 17:07 |
ltomasbo | if you create the eth0.101 with a different mac, yes, it should be | 17:07 |
apuimedo | each vlan virtual device gets a different mac | 17:07 |
apuimedo | and then, ipvlans on top | 17:07 |
apuimedo | and it's not an issue then that the different ipvlans of the same vlan have the same mac, is that right? | 17:08 |
limao | I think we can have a try if it will get different src mac in that case | 17:09 |
ltomasbo | if I understood the difference between ipvlan and macvlan | 17:09 |
ltomasbo | it is that with macvlan you differentiate by mac address, while with ipvlan you differentiate by ip, right? | 17:10 |
ltomasbo | but does that mean the mac of the ipvlan needs to be the same? | 17:10 |
ltomasbo | I don't think so | 17:10 |
ltomasbo | didn't try the complete approach, but what I tried with lmdaly ipvlan patch was to have a container attached to iface eth0.101 (with MAC 1) | 17:11 |
ltomasbo | and then change iface (manually) to eth0.102 (with MAC 2) and create another container | 17:12 |
ltomasbo | and they could reach their networks (I don't remember if I tried having them talk to each other) | 17:12 |
*** huikang has quit IRC | 17:15 | |
*** tonanhngo has joined #openstack-kuryr | 17:15 | |
limao | ltomasbo: then it looks like we need one local vlan (to get a different mac) per container in the ipvlan case | 17:16 |
ltomasbo | each subport will have a different mac | 17:16 |
ltomasbo | when you create it | 17:16 |
ltomasbo | we just need to use that one | 17:16 |
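The per-subport setup ltomasbo describes (one VLAN subinterface per trunk subport, carrying the subport's own MAC, with the container interface on top) can be sketched as the iproute2 commands it would take. This is a sketch under assumptions: the helper name and interface names are illustrative, the ipvlan step assumes kernel ipvlan support, and the example MAC is the subport MAC from ltomasbo's capture. `ip link add ... type vlan id N`, `ip link set ... address`, and `ip link add ... type ipvlan mode l2` are standard iproute2 invocations.

```python
def subport_link_cmds(parent, vlan_id, subport_mac, container_if):
    """Build (not run) the iproute2 commands for one trunk subport."""
    vlan_if = '%s.%d' % (parent, vlan_id)
    return [
        # VLAN subinterface matching the subport's segmentation id
        ['ip', 'link', 'add', 'link', parent, 'name', vlan_if,
         'type', 'vlan', 'id', str(vlan_id)],
        # use the subport's MAC so traffic leaving the trunk matches what
        # neutron (and the ovs firewall) expects for that subport
        ['ip', 'link', 'set', vlan_if, 'address', subport_mac],
        # container-side ipvlan slave; it inherits the vlan iface's MAC
        ['ip', 'link', 'add', 'link', vlan_if, 'name', container_if,
         'type', 'ipvlan', 'mode', 'l2'],
    ]


cmds = subport_link_cmds('eth0', 101, 'fa:16:3e:06:af:7c', 'ipvl0')
```

Running these needs root on the VM; here they are only constructed so the sequence is explicit.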
*** tonanhngo has quit IRC | 17:17 | |
ltomasbo | I did a quick check with ovs-firewall: | 17:17 |
ltomasbo | ping from namespace 1 to qroute | 17:17 |
ltomasbo | 17:21:33.115816 fa:16:3e:06:af:7c > fa:16:3e:1d:4e:54, ethertype 802.1Q (0x8100), length 102: vlan 101, p 0, ethertype IPv4, 10.0.5.12 > 10.0.5.1: ICMP echo request, id 950, seq 2, length 64 | 17:17 |
ltomasbo | 17:21:33.116105 fa:16:3e:1d:4e:54 > fa:16:3e:06:af:7c, ethertype 802.1Q (0x8100), length 102: vlan 101, p 0, ethertype IPv4, 10.0.5.1 > 10.0.5.12: ICMP echo reply, id 950, seq 2, length 64 | 17:17 |
ltomasbo | so, vlan 101 and mac fa:16:3e:06:af:7c | 17:18 |
limao | fa:16:3e:06:af:7c is your subport of 101 mac right? | 17:18 |
ltomasbo | and from namespace 2: | 17:18 |
limao | ok | 17:18 |
ltomasbo | 7:22:36.283519 fa:16:3e:a2:81:06 > fa:16:3e:1d:4e:54, ethertype 802.1Q (0x8100), length 102: vlan 102, p 0, ethertype IPv4, 10.0.5.10 > 10.0.5.1: ICMP echo request, id 956, seq 2, length 64 | 17:18 |
ltomasbo | 17:22:36.284507 fa:16:3e:1d:4e:54 > fa:16:3e:a2:81:06, ethertype 802.1Q (0x8100), length 102: vlan 102, p 0, ethertype IPv4, 10.0.5.1 > 10.0.5.10: ICMP echo reply, id 956, seq 2, length 64 | 17:18 |
ltomasbo | so, vlan 102, and mac fa:16:3e:a2:81:06 | 17:18 |
limao | get you | 17:19 |
ltomasbo | both of them different from the vm mac fa:16:3e:90:03:12 | 17:19 |
limao | in the vm-nested case, then why don't we just move the eth0.101 into the container? | 17:20 |
limao | (since it will be one vlan for one container) | 17:20 |
ltomasbo | I guess so, for the 1 vlan per container should be ok | 17:21 |
limao | then maybe there is no need to create a macvlan or ipvlan on top of the vlan interface in this case, in my mind | 17:21 |
ltomasbo | the problem could be for the 1 vlan per network if ipvlan is used on top (I guess) | 17:22 |
limao | Then it would be a little bit similar to our allowed-address-pair case | 17:23 |
ltomasbo | yep, allowed-address-pair should work, true | 17:23 |
limao | great thanks ltomasbo! I'd go to sleep now. Zzzz... ;-) | 17:25 |
ltomasbo | I need to leave now too | 17:25 |
ltomasbo | I'll keep digging tomorrow! | 17:25 |
limao | ltomasbo: see you and thanks for the help! | 17:25 |
ltomasbo | you're welcome! | 17:25 |
*** limao has quit IRC | 17:30 | |
*** tonanhngo has joined #openstack-kuryr | 17:42 | |
mchiappero | I'll catch up with the chat later but meanwhile I'll try to shed some light on IPVLAN | 17:50 |
mchiappero | basically, functionality wise IPVLAN and MACVLAN are the same, with the only difference that IPVLAN uses a single MAC address all the time, to comply with arp/spoofing settings in many setups | 17:52 |
mchiappero | the implementation is different though, so performance might be different too | 17:52 |
mchiappero | now, IPVLAN uses the master interface for egressing packets (if I'm not wrong the L2 header in the skb buffer is actually replaced before leaving the master interface) | 17:54 |
mchiappero | the slave selection for incoming packets is obviously made by looking at the IP addresses (by means of a hashtable) | 17:55 |
mchiappero | so, the mac address of the slave interfaces is never actually used, and by default is equal to the one belonging to the master (that's actually the only one you can ever see) | 17:56 |
*** diogogmt has quit IRC | 17:56 | |
mchiappero | the problem: 1) neutron doesn't allow you to have multiple neutron ports with the same mac address 2) there is no support for changing the mac address in the kernel IPVLAN driver | 17:57 |
mchiappero | however (2) can be solved | 17:58 |
mchiappero | not sure about (1) (I guess no) | 17:58 |
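Problem (1) above can be sketched with a small in-memory model: a registry that, like Neutron, rejects a second port carrying a MAC already in use, which clashes with every IPVLAN slave presenting the master's MAC. This is a toy model of the constraint, not the Neutron API.

```python
# Toy model of Neutron's duplicate-MAC restriction (problem 1 above):
# a second port with a MAC that is already registered is rejected,
# while every IPVLAN slave necessarily shows the master's MAC.
# In-memory illustration only -- not the real Neutron port API.

class DuplicateMacError(Exception):
    pass

class PortRegistry:
    def __init__(self):
        self.macs = set()

    def create_port(self, mac):
        if mac in self.macs:
            raise DuplicateMacError(mac)
        self.macs.add(mac)
        return {"mac_address": mac}

registry = PortRegistry()
master_mac = "fa:16:3e:00:00:01"
registry.create_port(master_mac)       # port for the first slave: accepted
try:
    registry.create_port(master_mac)   # second slave, same MAC: rejected
    conflict = False
except DuplicateMacError:
    conflict = True
print(conflict)  # → True
```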
mchiappero | is anyone still tuned in? :) | 17:58 |
*** diogogmt has joined #openstack-kuryr | 17:59 | |
*** lmdaly has quit IRC | 17:59 | |
mchiappero | of course we can ignore the mac address in the "dangling" neutron port, but it's far from being a good solution | 18:00 |
mchiappero | technically speaking, adding support for changing the mac address of IPVLAN slave devices is just a few lines of code; it might or might not be accepted upstream. But I can see someone pointing out that it could turn out to be confusing | 18:03 |
mchiappero | even though by default it could be the same as the master's, and if you are changing it, it's because you know what you are doing | 18:04 |
mchiappero | so... | 18:04 |
mchiappero | regarding kuryr-lib assigning IP addresses, yes, I think we can provide no IPs (good), a flag (less good), or an additional code path to be invoked by k-k8s only | 18:06 |
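The "flag" option mchiappero lists could look roughly like this: the binding helper takes an extra switch so kuryr-kubernetes can skip address configuration. Every name here (`bind_interface`, `assign_ip`, the emitted commands) is hypothetical, not actual kuryr-lib API.

```python
# Hedged sketch of the "flag" option: kuryr-lib's binding step takes an
# assign_ip flag, so callers like k-k8s can opt out of IP configuration
# and handle addressing themselves. Hypothetical names, not kuryr-lib API.

def bind_interface(ifname, ips, assign_ip=True):
    # bring the interface up in every case
    commands = ["ip link set %s up" % ifname]
    if assign_ip:
        # default path: kuryr-lib configures the addresses itself
        commands += ["ip addr add %s dev %s" % (ip, ifname) for ip in ips]
    # with assign_ip=False the caller is responsible for addressing
    return commands

print(bind_interface("ipvl0", ["10.0.0.5/24"], assign_ip=False))
# → ['ip link set ipvl0 up']
```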
*** huikang has joined #openstack-kuryr | 18:16 | |
*** huikang has quit IRC | 18:20 | |
*** garyloug has quit IRC | 18:25 | |
*** huikang has joined #openstack-kuryr | 18:27 | |
*** oanson has joined #openstack-kuryr | 18:28 | |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Generic VIF controller driver https://review.openstack.org/400010 | 18:34 |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes: Controller side of pods' port/VIF binding https://review.openstack.org/376044 | 18:34 |
*** diogogmt has quit IRC | 18:43 | |
*** diogogmt has joined #openstack-kuryr | 18:45 | |
*** oanson has quit IRC | 18:47 | |
*** jgriffith has quit IRC | 18:51 | |
*** diogogmt has quit IRC | 18:55 | |
*** diogogmt has joined #openstack-kuryr | 19:00 | |
*** huikang has quit IRC | 19:13 | |
*** huikang has joined #openstack-kuryr | 19:25 | |
*** huikang has quit IRC | 19:36 | |
*** oanson has joined #openstack-kuryr | 19:57 | |
*** diogogmt has quit IRC | 20:00 | |
*** huikang has joined #openstack-kuryr | 20:08 | |
*** ltomasbo has quit IRC | 20:19 | |
*** huikang has quit IRC | 20:22 | |
*** ajo has quit IRC | 20:25 | |
*** ltomasbo has joined #openstack-kuryr | 20:25 | |
*** ajo has joined #openstack-kuryr | 20:25 | |
*** huikang has joined #openstack-kuryr | 20:31 | |
*** huikang has quit IRC | 20:32 | |
*** huikang has joined #openstack-kuryr | 20:33 | |
*** diogogmt has joined #openstack-kuryr | 20:37 | |
*** oanson has quit IRC | 20:53 | |
*** oanson has joined #openstack-kuryr | 20:54 | |
*** jgriffith has joined #openstack-kuryr | 21:13 | |
*** jgriffith has quit IRC | 21:17 | |
*** jgriffith has joined #openstack-kuryr | 21:20 | |
*** oanson has quit IRC | 21:24 | |
*** jgriffith has quit IRC | 21:29 | |
*** huikang has quit IRC | 21:36 | |
*** huikang has joined #openstack-kuryr | 21:37 | |
*** huikang has quit IRC | 21:40 | |
*** jgriffith has joined #openstack-kuryr | 21:44 | |
*** jgriffith has quit IRC | 22:06 | |
*** jgriffith_ has joined #openstack-kuryr | 22:07 | |
*** jgriffith_ is now known as jgriffith | 22:13 | |
openstackgerrit | Hongbin Lu proposed openstack/fuxi: Fix the installation guide https://review.openstack.org/399296 | 22:48 |
*** hongbin has quit IRC | 23:51 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!