*** yuanying_ has joined #openstack-kuryr | 00:25 | |
*** yuanying has quit IRC | 00:27 | |
*** gouthamr has joined #openstack-kuryr | 00:42 | |
*** yuanying_ has quit IRC | 01:00 | |
*** yuanying has joined #openstack-kuryr | 01:09 | |
*** caowei has joined #openstack-kuryr | 01:14 | |
*** kiennt26 has joined #openstack-kuryr | 01:19 | |
*** threestrands_ has joined #openstack-kuryr | 02:22 | |
*** threestrands_ has quit IRC | 02:22 | |
*** threestrands_ has joined #openstack-kuryr | 02:22 | |
*** threestrands has quit IRC | 02:22 | |
*** threestrands_ has quit IRC | 02:22 | |
*** threestrands_ has joined #openstack-kuryr | 02:23 | |
*** threestrands_ has quit IRC | 02:23 | |
*** threestrands_ has joined #openstack-kuryr | 02:23 | |
*** salv-orl_ has joined #openstack-kuryr | 02:29 | |
*** threestrands_ has quit IRC | 02:32 | |
*** caowei has quit IRC | 02:32 | |
*** yuanying has quit IRC | 02:32 | |
*** kzaitsev_pi has quit IRC | 02:32 | |
*** salv-orlando has quit IRC | 02:32 | |
*** s1061123 has quit IRC | 02:32 | |
*** dmellado has quit IRC | 02:32 | |
*** ChanServ has quit IRC | 02:32 | |
*** ChanServ has joined #openstack-kuryr | 02:34 | |
*** barjavel.freenode.net sets mode: +o ChanServ | 02:34 | |
*** threestrands_ has joined #openstack-kuryr | 02:36 | |
*** caowei has joined #openstack-kuryr | 02:36 | |
*** yuanying has joined #openstack-kuryr | 02:36 | |
*** kzaitsev_pi has joined #openstack-kuryr | 02:36 | |
*** s1061123 has joined #openstack-kuryr | 02:36 | |
*** dmellado has joined #openstack-kuryr | 02:36 | |
*** gouthamr has quit IRC | 03:17 | |
*** yamamoto has joined #openstack-kuryr | 04:18 | |
*** kiennt26 has quit IRC | 04:36 | |
*** kiennt26 has joined #openstack-kuryr | 04:37 | |
*** caowei has quit IRC | 04:58 | |
*** s1061123 has quit IRC | 05:20 | |
*** s1061123 has joined #openstack-kuryr | 05:25 | |
*** caowei has joined #openstack-kuryr | 05:45 | |
*** janki has joined #openstack-kuryr | 05:48 | |
*** openstackgerrit has quit IRC | 06:47 | |
*** threestrands_ has quit IRC | 06:53 | |
*** juriarte has joined #openstack-kuryr | 07:21 | |
*** kiennt26 has quit IRC | 07:24 | |
*** gcheresh has joined #openstack-kuryr | 07:32 | |
*** janki has quit IRC | 08:15 | |
*** reedip has quit IRC | 08:15 | |
*** reedip has joined #openstack-kuryr | 08:18 | |
*** celebdor has joined #openstack-kuryr | 08:24 | |
*** pmannidi has quit IRC | 08:26 | |
*** openstackgerrit has joined #openstack-kuryr | 08:50 | |
openstackgerrit | Eyal Leshem proposed openstack/kuryr-kubernetes master: Translate k8s policy to SG https://review.openstack.org/526916 | 08:50 |
*** huats has quit IRC | 08:55 | |
*** huats has joined #openstack-kuryr | 08:55 | |
*** janki has joined #openstack-kuryr | 09:36 | |
*** garyloug has joined #openstack-kuryr | 09:44 | |
*** dougbtv_ has joined #openstack-kuryr | 09:50 | |
*** dougbtv has quit IRC | 09:51 | |
*** dougbtv__ has joined #openstack-kuryr | 09:51 | |
*** dougbtv_ has quit IRC | 09:55 | |
*** caowei has quit IRC | 10:05 | |
*** dougbtv_ has joined #openstack-kuryr | 10:14 | |
*** dougbtv__ has quit IRC | 10:17 | |
*** dougbtv__ has joined #openstack-kuryr | 10:33 | |
*** dougbtv_ has quit IRC | 10:36 | |
*** reedip has quit IRC | 10:44 | |
*** reedip has joined #openstack-kuryr | 10:59 | |
*** dougbtv_ has joined #openstack-kuryr | 11:02 | |
*** dougbtv__ has quit IRC | 11:05 | |
*** maysamacedos has joined #openstack-kuryr | 11:11 | |
*** yamamoto has quit IRC | 11:12 | |
openstackgerrit | Eyal Leshem proposed openstack/kuryr-kubernetes master: Kubernetes Network Policy support Spec https://review.openstack.org/519239 | 11:39 |
*** janki has quit IRC | 11:46 | |
*** yamamoto has joined #openstack-kuryr | 11:53 | |
*** yamamoto has quit IRC | 11:54 | |
*** yamamoto has joined #openstack-kuryr | 11:55 | |
*** jerms has joined #openstack-kuryr | 12:12 | |
*** jerms has left #openstack-kuryr | 12:14 | |
*** yamamoto has quit IRC | 12:19 | |
*** reedip has quit IRC | 12:54 | |
*** kzaitsev_pi has quit IRC | 13:03 | |
*** kzaitsev_pi has joined #openstack-kuryr | 13:04 | |
*** yamamoto has joined #openstack-kuryr | 13:07 | |
*** reedip has joined #openstack-kuryr | 13:08 | |
openstackgerrit | Eyal Leshem proposed openstack/kuryr-kubernetes master: Kubernetes Network Policy support Spec https://review.openstack.org/519239 | 13:13 |
*** c00281451_ has quit IRC | 13:28 | |
*** c00281451 has joined #openstack-kuryr | 13:29 | |
*** maysamacedos has quit IRC | 13:34 | |
*** maysamacedos has joined #openstack-kuryr | 13:37 | |
*** yamamoto has quit IRC | 13:41 | |
*** yamamoto has joined #openstack-kuryr | 13:50 | |
*** yamamoto has quit IRC | 13:54 | |
*** kiennt26 has joined #openstack-kuryr | 13:55 | |
*** atoth has joined #openstack-kuryr | 14:00 | |
*** maysamacedos has quit IRC | 14:02 | |
*** yamamoto has joined #openstack-kuryr | 14:14 | |
*** yamamoto has quit IRC | 14:19 | |
*** gouthamr has joined #openstack-kuryr | 14:25 | |
*** kiennt26 has quit IRC | 15:26 | |
*** gcheresh has quit IRC | 15:57 | |
*** juriarte has quit IRC | 16:17 | |
*** maysamacedos has joined #openstack-kuryr | 16:17 | |
dulek | ltomasbo: Hm, just a thought. Does your code have some protection from pools that are "gone"? | 16:23 |
ltomasbo | dulek, what do you mean by pools that are gone? | 16:23 |
dulek | ltomasbo: I mean situation when e.g. node gets removed from the cluster so pools that are related to it are not needed anymore. | 16:23 |
ltomasbo | if the node gets removed from the cluster, the kubernetes scheduler will not allocate pods on it | 16:24 |
ltomasbo | so, we will still have that pool, but we will not use it | 16:24 |
dulek | ltomasbo: Sure, but some ports will still get allocated there and those will just waste subnet addresses. | 16:24 |
ltomasbo | that's true | 16:25 |
ltomasbo | but that could be handled by a periodic cleanup task | 16:25 |
dulek | ltomasbo: In case of nodes that would work pretty well. | 16:25 |
ltomasbo | but...? | 16:26 |
dulek | ltomasbo: It would be a bit more difficult if someone changed the SG id in the config? | 16:26 |
ltomasbo | :D | 16:26 |
ltomasbo | we talked some time ago about having the ports in the pool with some kind of TTL | 16:26 |
dulek | ltomasbo: I'm just not sure what should trigger pool removal in such case. | 16:26 |
ltomasbo | so that they are removed eventually if not used | 16:27 |
ltomasbo | so, we can add some extra info into the pools | 16:27 |
dulek | ltomasbo: Hm, that makes sense. Aaaand… it's pretty easy to implement with the KuryrPort CRD as I can just check the object's age. | 16:28 |
ltomasbo | and have a counter that increases the value every time there is a periodic check | 16:28 |
ltomasbo | and when the port is taken (and later put back into the pool after use), reset that value to 0 | 16:28 |
dulek | ltomasbo: What if your controller is restarted each hour? We don't really have a place to keep such info. | 16:29 |
dulek | I mean - in a persistent way. | 16:29 |
ltomasbo | why would the controller be restarted each hour? | 16:29 |
ltomasbo | still, when it is restarted, it will put the ports into the pools and set the TTL flag to 0 | 16:30 |
ltomasbo | or to the max value | 16:30 |
ltomasbo | ahh, you mean to keep it in sync with the kuryr-cni view... | 16:30 |
dulek | ltomasbo: That's an exaggerated example, but it's normal practice to restart Python applications regularly, as Python's GC will not release memory on its own. | 16:30 |
dulek | ltomasbo: Not really, I don't have a solution now. | 16:31 |
ltomasbo | that should not be a problem unless you are restarting the controller more frequently than the time to live | 16:31 |
dulek | ltomasbo: Sure thing! | 16:31 |
dulek | I'm just not sure what TTL makes sense. It has to balance using unnecessary resources against higher latency when starting a pod. | 16:32 |
ltomasbo | perhaps we can have a ttl per pool? | 16:32 |
dulek | That problem sounds like something you could write a master's thesis about. :D | 16:33 |
ltomasbo | and if the pool is not requested at all in a few hours/days, then assume we can delete it? | 16:33 |
ltomasbo | and of course delete it if the node is not present anymore | 16:33 |
dulek | and no pods exist that would be in such a pool. | 16:33 |
ltomasbo | yep | 16:33 |
dulek | That would work IMO. :) | 16:34 |
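A minimal sketch of the eviction scheme agreed above, for illustration: a per-pool idle counter bumped by the periodic task and reset whenever a port is taken, with a pool released only once the node is gone or the TTL has expired, and no pods still map to it. All names here (PoolReaper, the injected callables, MAX_IDLE_CHECKS) are hypothetical, not actual kuryr-kubernetes code.

```python
MAX_IDLE_CHECKS = 24  # evict a pool after this many idle periodic runs


class PoolReaper:
    """Tracks per-pool idle time and evicts pools that look dead."""

    def __init__(self, node_exists, pods_exist_for, release_pool):
        # Callables are injected to keep the sketch self-contained;
        # in practice these would query Neutron and the K8s API.
        self._node_exists = node_exists
        self._pods_exist_for = pods_exist_for
        self._release_pool = release_pool
        self._idle_checks = {}  # pool_key -> consecutive idle checks

    def on_port_taken(self, pool_key):
        # A port was handed out (or returned): the pool is alive, reset its TTL.
        self._idle_checks[pool_key] = 0

    def periodic_check(self, pool_keys):
        for key in list(pool_keys):
            self._idle_checks[key] = self._idle_checks.get(key, 0) + 1
            dead = (not self._node_exists(key)
                    or self._idle_checks[key] > MAX_IDLE_CHECKS)
            # Never evict while pods still map to the pool.
            if dead and not self._pods_exist_for(key):
                self._release_pool(key)  # frees the pooled Neutron ports too
                self._idle_checks.pop(key, None)
```

A controller restart simply starts the counters from scratch, which only delays eviction, in line with ltomasbo's point that restarts are harmless unless they happen more often than the TTL.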
ltomasbo | also, does it matter if the kuryr-cni is aware of the pools that are removed? | 16:34 |
ltomasbo | dulek, ^^ | 16:34 |
ltomasbo | it will never assign a pod to that pool anyway, and the resources are freed by the kuryr-controller | 16:34 |
dulek | Yeah, I don't think it matters. | 16:35 |
dulek | If a pod comes that matches the pool - it will get created and CNI will see the port. | 16:35 |
ltomasbo | ok | 16:35 |
ltomasbo | dulek, what will happen in the other case | 16:35 |
dulek | ltomasbo: That is? | 16:35 |
ltomasbo | where the kuryr-controller is up all the time and a node (kuryr-cni) reboots? | 16:35 |
ltomasbo | will it read the KuryrPort CRD to recover the info about the pools on that node? | 16:36 |
dulek | ltomasbo: Not much. In current master it doesn't matter, the data is in the annotation. | 16:36 |
dulek | ltomasbo: And in case of what I'm writing currently - it will just re-read the KuryrPort list from k8s API. | 16:36 |
dulek | ltomasbo: I'm trying to make the API the source of truth, so it's fine. | 16:37 |
ltomasbo | dulek, great! | 16:37 |
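For illustration only, rebuilding state by listing a KuryrPort-style CRD could look like the sketch below, using the official kubernetes Python client; kuryr-kubernetes ships its own K8s client, and the group/version/plural values here are assumptions.

```python
# Hypothetical sketch: list KuryrPort custom objects so pools can be
# rebuilt from the K8s API (the source of truth) after a restart.
from kubernetes import client, config


def load_kuryrports():
    config.load_incluster_config()  # use load_kube_config() outside a pod
    api = client.CustomObjectsApi()
    resp = api.list_cluster_custom_object(
        group='openstack.org', version='v1', plural='kuryrports')
    # Each item carries the VIF data needed to repopulate a pool.
    return resp.get('items', [])
```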
dulek | ltomasbo: Am I interrupting too much? One more question comes to my mind. | 16:37 |
dulek | ltomasbo: Well, just ignore it if you're too busy. ;) | 16:38 |
ltomasbo | no! please ask! I'm fed up already with tripleO issues... | 16:38 |
dulek | :D | 16:38 |
ltomasbo | I prefer to switch context a bit! | 16:38 |
dulek | ltomasbo: What's the difference in Neutron port management when we're running in nested configuration? | 16:38 |
ltomasbo | you mean interactions between kuryr-controller and neutron? | 16:39 |
dulek | ltomasbo: Because I see NestedVIFPoolDriver having a lot of additional code. | 16:39 |
dulek | ltomasbo: Yup. I guess some additional checks are needed when deleting the ports? | 16:39 |
dulek | Not sure about adding? | 16:39 |
ltomasbo | the difference is that when creating a pod in BM, you just need to create the port | 16:40 |
ltomasbo | but for the nested case | 16:40 |
ltomasbo | you need to create the port and attach it to the trunk port where the VM is | 16:40 |
ltomasbo | ahh, but you mean from the pool perspective? | 16:40 |
dulek | ltomasbo: Attach through Neutron API? | 16:40 |
ltomasbo | yep | 16:41 |
ltomasbo | without pools, you need to create the port, select a vlan_id, and attach it to the trunk port with that vlan_id | 16:41 |
dulek | ltomasbo: Hm, okay. And I can get the vlan_id for the node from? | 16:42 |
ltomasbo | do you need the vlan id for anything? | 16:42 |
dulek | ltomasbo: Oh, sorry, I misunderstood. | 16:43 |
ltomasbo | and, regarding code length, note there are some extra functions on the nested pool to force repopulation and freeing of the pool | 16:43 |
dulek | ltomasbo: Anyway how do I know the trunk port I should attach the pod port to? | 16:43 |
ltomasbo | and you need to get the trunk info in a different way to speed up the recovery of pre-created ports, avoiding Neutron calls | 16:43 |
ltomasbo | from the kuryr-cni perspective, you are already inside the VM, right? | 16:44 |
ltomasbo | or you mean the controller? | 16:44 |
dulek | ltomasbo: The controller. | 16:44 |
ltomasbo | if it is the controller, the host is in the pod info, and then we find out through Neutron calls which trunk belongs to that host | 16:45 |
ltomasbo | there is a get_parent_port_by_host_ip function for that | 16:45 |
*** yamamoto has joined #openstack-kuryr | 16:46 | |
ltomasbo | in the VIF driver | 16:46 |
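A rough sketch of that nested flow with openstacksdk: create the pod port, then attach it as a VLAN subport of the worker VM's trunk. The trunk lookup and vlan_id choice are simplified stand-ins for what the kuryr drivers (e.g. the get_parent_port_by_host_ip function mentioned above) actually do.

```python
import openstack


def attach_pod_port(conn, network_id, trunk, vlan_id):
    """Create a pod port and attach it to the VM's trunk as a subport."""
    port = conn.network.create_port(network_id=network_id)
    conn.network.add_trunk_subports(trunk, [{
        'port_id': port.id,
        'segmentation_type': 'vlan',
        'segmentation_id': vlan_id,
    }])
    return port


# Usage (cloud name, trunk name and vlan_id are assumed values):
# conn = openstack.connect(cloud='devstack')
# trunk = conn.network.find_trunk('trunk-of-worker-vm')
# port = attach_pod_port(conn, network_id, trunk, vlan_id=101)
```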
dulek | ltomasbo: Okay, so let's say I don't care about recovery in my PoolDriver. | 16:46 |
dulek | ltomasbo: Then I don't need trunk info in there? | 16:47 |
dulek | ltomasbo: Note that I have all the os-vif object saved into K8s API. | 16:47 |
ltomasbo | kuryr-controller will be the one calling neutron to attach the port into that trunk, right? | 16:48 |
dulek | ltomasbo: Uhm… Yes, it will be creating the port. | 16:48 |
dulek | ltomasbo: But it will be going through VIFDriver, right? | 16:49 |
ltomasbo | yep | 16:49 |
ltomasbo | we also use the trunk information when removing pods | 16:49 |
dulek | ltomasbo: Still, the VIFDriver will take care of that for me, won't it? | 16:50 |
ltomasbo | as that will put the port back into the pool or not, depending on the max size you set for the pool | 16:50 |
ltomasbo | not on the kuryr-cni | 16:50 |
ltomasbo | umm, wait | 16:50 |
ltomasbo | yep, if the kuryr-controller still does that (I need to review your patch) | 16:50 |
*** garyloug has quit IRC | 16:52 | |
celebdor | ltomasbo: dulek: I didn't read it all | 16:53 |
dulek | celebdor: Which one? | 16:53 |
celebdor | are you suggesting cleaning up pools when hosts are gone? | 16:53 |
*** yamamoto has quit IRC | 16:54 | |
dulek | celebdor: Ah. So I'm worried about dead pools that take up addresses. | 16:54 |
dulek | celebdor: Especially when running on VMs, as they're a bit more transient than BM. | 16:54 |
celebdor | dulek: this one http://s1.picswalls.com/wallpapers/2015/11/22/deadpool-photo_103702363_293.jpeg ? | 16:54 |
ltomasbo | we may need to delete pools once NetworkPolicy is there | 16:54 |
dulek | celebdor: Oh well. Then I'm more worried. | 16:55 |
celebdor | well, whatever is being used to delete nodes or create nodes | 16:55 |
celebdor | should clear the pools | 16:55 |
celebdor | otherwise the VM won't be destroyed xD | 16:55 |
celebdor | a trunk with ports can't be deleted | 16:55 |
celebdor | so in the openshift-ansible case, the scale down will have to handle it with shade | 16:55 |
dulek | celebdor: Ehm… Doesn't this behavior suck? | 16:56 |
celebdor | dulek: which? the fact that deleting a trunk doesn't automatically remove its subports? | 16:57 |
celebdor | yes, that sucks | 16:57 |
celebdor | XD | 16:57 |
celebdor | big time | 16:57 |
celebdor | If you ask me | 16:57 |
dulek | celebdor: The fact that the admin will need to run this pool manager and free pools. | 16:57 |
dulek | celebdor: Was that the primary reason for pool manager to exist? | 16:58 |
celebdor | dulek: no, no | 16:58 |
celebdor | the expectation is that if you want to add/remove worker nodes | 16:59 |
celebdor | you do it with a deployment tool | 16:59 |
celebdor | in openshift's case, that's openshift-ansible | 16:59 |
celebdor | dulek: if batch operations in Neutron were faster | 16:59 |
celebdor | I wouldn't mind deleting the ports and creating them automatically | 17:00 |
celebdor | but given how expensive and taxing on OpenStack it is | 17:00 |
celebdor | you kinda want the operator to know that's gonna happen | 17:00 |
celebdor | rather than have it happen by accident | 17:00 |
dulek | celebdor: Hm, okay, fine… Still, for the people running without a deployment tool… And in the BM case - dead VIF pools are an issue. | 17:02 |
celebdor | dulek: I agree | 17:02 |
celebdor | and I would make it configurable | 17:03 |
celebdor | that you can configure the controller so that we have a node watcher | 17:03 |
dulek | celebdor: Pool key is composite, it's not only nodes. | 17:03 |
celebdor | dulek: so? | 17:04 |
celebdor | when a node is deleted | 17:04 |
dulek | celebdor: So such dead pools will also get created when the SG is changed in the config. | 17:04 |
celebdor | you do pools.items() | 17:04 |
celebdor | oh, that... | 17:04 |
dulek | celebdor: Or in the future subnet. | 17:04 |
celebdor | we don't have networkpolicy yet | 17:04 |
dulek | celebdor: project_id will be safe in that case. :D | 17:04 |
dulek | celebdor: Yeah, with NP it will be even worse. | 17:05 |
celebdor | in any case, let's take one thing at a time | 17:05 |
celebdor | if the node delete event comes | 17:05 |
dulek | celebdor: That's why ltomasbo came up with this idea of TTL for pools that have no pods. | 17:05 |
celebdor | we can delete all the pools in which the host is in the key | 17:05 |
celebdor | I'd rather not | 17:06 |
celebdor | defeats the purpose of pre-allocation | 17:06 |
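celebdor's pools.items() idea, sketched: on a node-delete event, drop every pool whose composite key references the deleted host. The key layout (host IP as the first element) and the release_port helper are assumptions for the sketch.

```python
def drop_pools_for_node(pools, deleted_host_ip, release_port):
    # pools maps a composite key, e.g. (host_ip, project_id, sg_ids),
    # to a list of pre-created Neutron port IDs.
    for pool_key, port_ids in list(pools.items()):
        if pool_key[0] == deleted_host_ip:
            for port_id in port_ids:
                release_port(port_id)  # detach/delete the Neutron port
            del pools[pool_key]
```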
dulek | celebdor: Yeah, the TTL-to-pool-size ratio would be hard to set according to deployment usage characteristics. | 17:07 |
celebdor | exactly | 17:15 |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice https://review.openstack.org/527243 | 17:38 |
*** maysamacedos has quit IRC | 17:42 | |
*** maysamacedos has joined #openstack-kuryr | 17:43 | |
*** yamamoto has joined #openstack-kuryr | 18:05 | |
*** gouthamr has quit IRC | 18:24 | |
*** reedip has quit IRC | 18:29 | |
*** gouthamr has joined #openstack-kuryr | 18:37 | |
*** reedip has joined #openstack-kuryr | 18:41 | |
*** celebdor has quit IRC | 19:15 | |
*** openstack has joined #openstack-kuryr | 20:30 | |
*** ChanServ sets mode: +o openstack | 20:30 | |
*** gouthamr has quit IRC | 20:47 | |
*** yamamoto has joined #openstack-kuryr | 21:07 | |
*** openstack has joined #openstack-kuryr | 21:10 | |
*** ChanServ sets mode: +o openstack | 21:10 | |
*** yamamoto has quit IRC | 21:15 | |
*** threestrands has joined #openstack-kuryr | 21:31 | |
*** gouthamr_ has joined #openstack-kuryr | 21:45 | |
*** yamamoto has joined #openstack-kuryr | 21:51 | |
*** gouthamr_ is now known as gouthamr | 22:04 | |
*** ChanServ has quit IRC | 22:17 | |
*** ChanServ has joined #openstack-kuryr | 22:24 | |
*** barjavel.freenode.net sets mode: +o ChanServ | 22:24 | |
*** gouthamr has quit IRC | 22:28 | |
*** ChanServ has quit IRC | 22:28 | |
*** ChanServ has joined #openstack-kuryr | 22:31 | |
*** barjavel.freenode.net sets mode: +o ChanServ | 22:31 | |
*** threestrands_ has joined #openstack-kuryr | 23:10 | |
*** threestrands has quit IRC | 23:13 |