openstackgerrit | Maysa de Macedo Souza proposed openstack/kuryr-kubernetes master: Update cni_main.patch file to match the current version of main.py https://review.openstack.org/527273 | 00:03 |
*** portdirect has joined #openstack-kuryr | 00:29 | |
*** salv-orlando has joined #openstack-kuryr | 00:56 | |
*** salv-orlando has quit IRC | 01:00 | |
*** caowei has joined #openstack-kuryr | 01:24 | |
*** yamamoto has joined #openstack-kuryr | 01:41 | |
*** gouthamr has joined #openstack-kuryr | 01:53 | |
*** salv-orlando has joined #openstack-kuryr | 01:57 | |
*** salv-orlando has quit IRC | 02:02 | |
*** reedip has quit IRC | 02:06 | |
*** reedip has joined #openstack-kuryr | 02:17 | |
*** dmellado has quit IRC | 02:42 | |
*** dmellado has joined #openstack-kuryr | 02:48 | |
*** salv-orlando has joined #openstack-kuryr | 03:27 | |
*** salv-orlando has quit IRC | 03:32 | |
sapd_ | Hi everyone. I think kuryr-libnetwork has a bug. | 03:36 |
sapd_ | on this line. https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L1688 | 03:36 |
sapd_ | if req_mac_address == '', neutron returns all ports; we expect only one or none. | 03:36 |
*** dmellado has quit IRC | 03:37 | |
*** dmellado has joined #openstack-kuryr | 03:38 | |
sapd_ | if this is a neutron feature, we have to add a condition on filtered_ports | 03:39 |
sapd_ | if filtered_ports > 2, we have to create a new port | 03:40 |
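For reference, a minimal sketch of the guard sapd_ is describing, assuming the python-neutronclient API; the helper name and surrounding logic are illustrative, not the actual code at controllers.py#L1688:

```python
def list_ports_by_mac(neutron, req_mac_address):
    filters = {}
    # Passing mac_address='' makes Neutron ignore the filter and return
    # every port, so only add the filter when a MAC was actually given.
    if req_mac_address:
        filters['mac_address'] = req_mac_address
    return neutron.list_ports(**filters).get('ports', [])
```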
*** salv-orlando has joined #openstack-kuryr | 04:28 | |
*** salv-orlando has quit IRC | 04:33 | |
*** caowei has quit IRC | 04:46 | |
*** gouthamr has quit IRC | 04:53 | |
*** janki has joined #openstack-kuryr | 04:57 | |
*** caowei has joined #openstack-kuryr | 05:27 | |
*** salv-orlando has joined #openstack-kuryr | 05:29 | |
*** salv-orlando has quit IRC | 05:34 | |
*** yboaron has joined #openstack-kuryr | 05:50 | |
*** salv-orlando has joined #openstack-kuryr | 06:30 | |
*** salv-orlando has quit IRC | 06:34 | |
*** juriarte has joined #openstack-kuryr | 07:10 | |
*** salv-orlando has joined #openstack-kuryr | 07:31 | |
*** salv-orlando has quit IRC | 07:35 | |
*** salv-orlando has joined #openstack-kuryr | 07:44 | |
*** yboaron has quit IRC | 08:04 | |
*** salv-orlando has quit IRC | 08:35 | |
openstackgerrit | Genadi Chereshnya proposed openstack/kuryr-tempest-plugin master: Testing pod to pod connectivity https://review.openstack.org/525626 | 08:42 |
dmellado | sapd_: feel free to send a patch and we'll review it ;) | 08:58 |
sapd_ | dmellado, I think contributing to openstack is hard, | 08:59 |
sapd_ | I do not understand how to contribute :D | 08:59 |
*** janonymous has joined #openstack-kuryr | 09:00 | |
*** salv-orlando has joined #openstack-kuryr | 09:01 | |
sapd_ | dmellado https://github.com/greatbn/kuryr-libnetwork/commit/1ec83f4a38b235696edd84abb39565fef99f55f0, I patched it. | 09:02 |
*** caowei has quit IRC | 09:04 | |
dmellado | sapd_: it's not that hard at all! xD | 09:06 |
dmellado | https://docs.openstack.org/infra/manual/developers.html#getting-started | 09:06 |
dmellado | sapd_: ^^ ;) | 09:06 |
sapd_ | dmellado, thanks I will read it. :D | 09:07 |
dmellado | please do take a look and submit a patch as we won't be accepting PRs ;) | 09:07 |
*** jappleii__ has quit IRC | 09:10 | |
*** dmellado has quit IRC | 09:17 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Use K8s 1.8 with Hyperkube https://review.openstack.org/525502 | 09:18 |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice https://review.openstack.org/527243 | 09:18 |
*** garyloug has joined #openstack-kuryr | 09:25 | |
*** yboaron has joined #openstack-kuryr | 09:28 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Use K8s 1.8 with Hyperkube https://review.openstack.org/525502 | 09:35 |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: WiP: Preview of CNI daemon side VIF choice https://review.openstack.org/527243 | 09:35 |
*** dmellado has joined #openstack-kuryr | 09:48 | |
*** janki has quit IRC | 10:07 | |
*** dmellado has quit IRC | 10:07 | |
*** kiennt26 has quit IRC | 10:09 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Make some Tempest gates voting https://review.openstack.org/527360 | 10:13 |
*** dmellado has joined #openstack-kuryr | 10:20 | |
*** caowei has joined #openstack-kuryr | 10:20 | |
*** caowei has quit IRC | 10:24 | |
*** yamamoto has quit IRC | 10:35 | |
dulek | yboaron: Hi. So it's impossible to run OpenShift+Octavia right now? | 10:39 |
dmellado | due to 403? | 10:43 |
*** yamamoto has joined #openstack-kuryr | 10:48 | |
*** dmellado has quit IRC | 10:50 | |
*** dmellado has joined #openstack-kuryr | 10:59 | |
dulek | dmellado: Dunno, yboaron commented on https://review.openstack.org/#/c/527360/. | 11:07 |
*** dmellado has quit IRC | 11:16 | |
*** dmellado has joined #openstack-kuryr | 11:18 | |
*** dmellado has quit IRC | 11:20 | |
*** dmellado has joined #openstack-kuryr | 11:24 | |
yboaron | dulek, hi, sorry for the late response (lunch). Yes, it's impossible to run openshift with octavia (with firewall=OVS), | 11:32 |
dulek | yboaron: But the job that is set to voting isn't enabling OpenShift. | 11:32 |
dulek | yboaron: So I'm not sure what's the issue here. | 11:32 |
*** dmellado has quit IRC | 11:32 | |
*** dmellado has joined #openstack-kuryr | 11:33 | |
yboaron | dulek, can you elaborate on what the octavia job is testing? | 11:33 |
dulek | yboaron: It installs Kubernetes (Hyperkube) + OpenStack with Octavia and configures Kuryr to wire K8s pods. | 11:34 |
dulek | yboaron: Then we have like 1 test that creates a VM, a pod and tries to ping the pod from the VM. | 11:34 |
yboaron | dulek, OK, if that's the case then it's fine | 11:34 |
yboaron | dulek, min | 11:35 |
dulek | yboaron: If we end up having some OpenShift-specific tempest tests, we'll need to make sure those tests are skipped when running without OpenShift. | 11:35 |
dulek | yboaron: And it's pretty simple to do so with Tempest. | 11:35 |
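A minimal sketch of the Tempest skip pattern dulek is referring to; the `kuryr_kubernetes` config group and the `openshift_enabled` option are hypothetical here, the real kuryr-tempest-plugin would register its own flag:

```python
from tempest import config
from tempest import test

CONF = config.CONF


class OpenShiftOnlyTest(test.BaseTestCase):

    @classmethod
    def skip_checks(cls):
        super(OpenShiftOnlyTest, cls).skip_checks()
        # Hypothetical flag: skip OpenShift-specific tests on plain K8s.
        if not CONF.kuryr_kubernetes.openshift_enabled:
            raise cls.skipException('OpenShift is not deployed')
```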
dulek | yboaron: Now the fun thing is we have the "tempest-kuryr-kubernetes-octavia-openshift" job, which, as you've said, will not work (or at least load balancing will not work). | 11:36 |
dulek | dmellado: ^ | 11:36 |
dmellado | dulek: having a flag is pretty simple | 11:43 |
dmellado | so no worries on that | 11:44 |
dmellado | what's going on with the lbaas on openshift? | 11:44 |
dulek | dmellado: yboaron explains that Octavia will not work with firewall=OVS, which is required for OpenShift to work. | 11:45 |
dulek | Or at least that's what I understood. | 11:45 |
dmellado | hmmm it seems that was working before, at least | 11:47 |
dmellado | so we could ping the octavia folks to get that back working | 11:47 |
yboaron | dulek, yep, there's a bug with the octavia management network when firewall=OVS, so the bottom line is that it's impossible to create a loadbalancer. To be more specific, in the kuryr-k8s case devstack will get stuck at: https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/plugin.sh#L198 | 11:48 |
dmellado | yboaron: did you file that bug on octavia, or know if it's already filled? | 11:49 |
yboaron | dmellado, no, I didn't file a bug, just communicated with bcafarel via email. | 11:50 |
yboaron | dmellado, I'll check it | 11:51 |
dmellado | yboaron: ack, thanks! | 11:51 |
dmellado | if there's a bug already opened I'll try to have it addressed or even take care of it | 11:51 |
yboaron | dmellado, NP, I hope I'll have time to take care of it today | 11:52 |
*** yamamoto has quit IRC | 11:54 | |
*** yamamoto has joined #openstack-kuryr | 11:58 | |
dmellado | thanks yboaron | 12:00 |
dmellado | let me know if you get stuck and I can help in any way | 12:00 |
yboaron | dmellado, you're welcome | 12:02 |
*** yamamoto has quit IRC | 12:03 | |
openstackgerrit | Eyal Leshem proposed openstack/kuryr-kubernetes master: [WIP] Translate k8s policy to SG https://review.openstack.org/526916 | 12:12 |
dulek | ltomasbo: Ping. | 12:18 |
*** yamamoto has joined #openstack-kuryr | 12:35 | |
ltomasbo | dulek, pong | 13:19 |
*** janonymous has quit IRC | 13:20 | |
*** salv-orlando has quit IRC | 13:20 | |
*** atoth has joined #openstack-kuryr | 13:22 | |
dulek | ltomasbo: Can we talk after the scrum? | 13:30 |
ltomasbo | sure! | 13:30 |
*** dougbtv_ has joined #openstack-kuryr | 13:38 | |
*** dougbtv has quit IRC | 13:41 | |
*** salv-orlando has joined #openstack-kuryr | 13:45 | |
*** kiennt26 has joined #openstack-kuryr | 13:48 | |
*** dougbtv_ has quit IRC | 13:51 | |
*** dougbtv_ has joined #openstack-kuryr | 13:52 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Make some Tempest gates voting https://review.openstack.org/527360 | 13:54 |
dulek | dmellado: Looks like zuul will be playing games again - http://zuulv3.openstack.org/ is inaccessible. | 13:58 |
dmellado | ouch, looks like it :\ | 13:58 |
dulek | dmellado: "<odyssey4me> it looks like the zuul dashboard has timed out several times today, and it seems to correlate to jobs ending up timing out if they're running through the dashboard timeout" | 13:58 |
dmellado | just *awesome* | 13:58 |
dmellado | let's check #openstack-infra for updates | 13:58 |
dulek | dmellado: Infra uploaded a new version of this dashboard yesterday, so I guess they'll need to revert it. | 13:58 |
dmellado | probably | 13:59 |
dmellado | dulek: looks like zuulv3 is maxed on ram | 13:59 |
dmellado | so I'd say we'll see some odd things for the time being... | 13:59 |
dulek | dmellado: Yay, fun stuff! | 14:00 |
*** maysamacedos has joined #openstack-kuryr | 14:09 | |
*** salv-orlando has quit IRC | 14:21 | |
*** yamamoto has quit IRC | 14:24 | |
*** yamamoto has joined #openstack-kuryr | 14:24 | |
*** gcheresh has joined #openstack-kuryr | 14:26 | |
*** salv-orlando has joined #openstack-kuryr | 14:33 | |
*** maysamacedos has quit IRC | 14:34 | |
*** maysamacedos has joined #openstack-kuryr | 14:35 | |
*** maysamacedos has quit IRC | 14:40 | |
*** maysamacedos has joined #openstack-kuryr | 14:41 | |
openstackgerrit | Yossi Boaron proposed openstack/kuryr-kubernetes master: Add L7 routing support for openshift context. https://review.openstack.org/523900 | 14:43 |
*** maysamacedos has quit IRC | 14:44 | |
leyal | dulek, the 1.8 patch worked perfectly for me, thanks :) | 14:48 |
dulek | leyal: I'm glad to hear that. :) | 14:49 |
dulek | ltomasbo: So ping again. :) | 14:49 |
*** salv-orl_ has joined #openstack-kuryr | 14:56 | |
*** maysamacedos has joined #openstack-kuryr | 14:57 | |
*** gouthamr has joined #openstack-kuryr | 14:59 | |
*** salv-orlando has quit IRC | 14:59 | |
*** kiennt26 has quit IRC | 15:23 | |
*** maysamacedos has quit IRC | 15:24 | |
*** garyloug has quit IRC | 15:26 | |
ltomasbo | hi dulek | 15:30 |
ltomasbo | tell me! | 15:30 |
dulek | ltomasbo: Why did you decide to leave the subnet out of the pool key? | 15:30 |
ltomasbo | I was thinking of including it actually | 15:32 |
ltomasbo | dulek, but I have that on my to-do list | 15:32 |
ltomasbo | as at that point all the pods were on the same subnet... | 15:32 |
ltomasbo | but we definitely need to add it | 15:32 |
dulek | ltomasbo: Same goes for project_id? :P | 15:32 |
ltomasbo | project id is there, right? | 15:33 |
dulek | ltomasbo: Yes, but it also doesn't change. | 15:33 |
dulek | ltomasbo: Anyway the issue I'm having is as such: | 15:34 |
ltomasbo | ahh, yep, but it was provided in the request_vif already | 15:34 |
ltomasbo | unlike the subnet (I think) | 15:34 |
dulek | ltomasbo: Oh, okay, makes sense. | 15:34 |
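Illustratively, the pool key being discussed might grow to carry the subnet alongside the attributes already in it; this is a sketch of the idea only, not the actual kuryr-kubernetes key layout:

```python
def get_pool_key(host, project_id, security_groups, subnet_id):
    # Ports are only interchangeable between pods that share all four
    # attributes, so each one has to be part of the key.
    return (host, project_id, tuple(sorted(security_groups)), subnet_id)
```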
dulek | I've moved the VIF reservation to the daemon. | 15:35 |
dulek | So kuryr-controller is only responsible for creating ports and adding them as CRDs into K8s API, so daemon will notice them. | 15:35 |
dulek | Easy. | 15:35 |
dulek | But - how do I extend the pools? | 15:35 |
dulek | Pool creation when it doesn't exist is simple - when I get a pod notification I check if the pool exists and create it if it doesn't. | 15:36 |
dulek | But with delayed port reservation it's difficult to increase the sizes of the pools. | 15:36 |
dulek | I could do it as a periodic task; the problem is I have neither the subnet nor the pod info in the periodic task. | 15:37 |
dulek | ltomasbo: What do you think? | 15:38 |
* ltomasbo reading | 15:38 | |
ltomasbo | umm, maybe due to the context change (from tripleO) I'm missing something | 15:40 |
ltomasbo | the pool population is still handled by the kuryr-controller, right? | 15:40 |
*** yboaron has quit IRC | 15:41 | |
ltomasbo | dulek, ^^ | 15:41 |
dulek | ltomasbo: Yes. Currently I'm triggering both initial population and repopulation when a new pod is detected by the controller. | 15:41 |
dulek | ltomasbo: The nice thing about this is that I have all the required info - pod, pool_key, subnet. | 15:42 |
ltomasbo | dulek, then, the kuryr controller can still populate the pool when the number of ports for a specific pool is below X, right? | 15:42 |
dulek | ltomasbo: But if I create more pods at once than the number of vifs added by a single repopulation, some pods will be missing their vifs, as just one repopulation will be triggered. | 15:43 |
dulek | ltomasbo: More or less, but reservation will happen on daemon side. | 15:43 |
dulek | ltomasbo: So the controller doesn't know when a port is taken from the pool | 15:44 |
ltomasbo | dulek, but will the cni retry, as the controller does today? | 15:44 |
*** gcheresh has quit IRC | 15:45 | |
ltomasbo | basically, what we have now is that if there are not enough ports, a repopulation is triggered | 15:45 |
dulek | ltomasbo: Well, CNI doesn't have a way to communicate with Controller, so noticing it on CNI side doesn't help us. | 15:45 |
ltomasbo | and the pod's request_vif will fail with ResourceNotReady and retry later | 15:45 |
dulek | ltomasbo: I could probably count running pods for each pool and all ports for each pool… But I wonder if this will be accurate. | 15:46 |
ltomasbo | dulek, I think I'm missing something | 15:47 |
ltomasbo | the pools will be on the cni, right? | 15:47 |
ltomasbo | but the info there is just the subnet, port_id, pool_key... | 15:47 |
dulek | ltomasbo: Hm, yes. But pool creation happens in the controller. | 15:47 |
ltomasbo | the actual call to neutron still happens at kuryr-controller, right? | 15:47 |
dulek | ltomasbo: Yes, that's the assumption. | 15:48 |
*** mrostecki has joined #openstack-kuryr | 15:48 | |
ltomasbo | ok, and request_vif (from the handler) will still call the controller, right? | 15:48 |
dulek | ltomasbo: Which request_vif? | 15:48 |
ltomasbo | when you create a pod, the vif handler calls request_vif to get a vif for the pod | 15:49 |
ltomasbo | and then the controller check the pool | 15:49 |
ltomasbo | if there is a port it returns it | 15:49 |
ltomasbo | and if there is not, it fails with ResourceNotReady | 15:49 |
ltomasbo | and it will be retried again | 15:49 |
dulek | ltomasbo: Yeah, that's still in there, but I'm now returning None. | 15:50 |
ltomasbo | how will be the flow now with the cni side? | 15:50 |
ltomasbo | is it the cni returning the vif, or just performing the annotation (and removing that from the controller)? | 15:50 |
dulek | ltomasbo: So controller checks the pool and repopulates if needed. That's it, no annotating done on controller side. | 15:51 |
dulek | ltomasbo: Now repopulation creates KuryrPort CRD that kuryr-daemon is watching for. | 15:51 |
dulek | (or CRDs) | 15:51 |
dulek | ltomasbo: Now kuryr-daemon keeps a registry of KuryrPorts. | 15:52 |
dulek | ltomasbo: So when the CNI request comes, CNI just takes a free KuryrPort and annotates it. | 15:52 |
dulek | ltomasbo: It's done in a safe manner using compare-and-swap technique. | 15:52 |
dulek | ltomasbo: And basically that's it. | 15:54 |
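Roughly, the compare-and-swap dulek mentions maps onto Kubernetes' optimistic concurrency: an update that carries the resourceVersion you read is rejected with 409 Conflict if the object changed in between. A sketch under that assumption — the KuryrPort URL, annotation name, and payload shape are all hypothetical, not the merged design:

```python
import requests


def claim_kuryr_port(k8s_base_url, kuryr_port, pod_name):
    annotations = kuryr_port['metadata'].setdefault('annotations', {})
    if annotations.get('kuryr.openstack.org/pod'):
        return False  # already claimed by another CNI request
    annotations['kuryr.openstack.org/pod'] = pod_name
    # The object still carries the resourceVersion we read it with, so
    # the API server performs the "compare" half of compare-and-swap.
    resp = requests.put(
        '%s/%s' % (k8s_base_url, kuryr_port['metadata']['name']),
        json=kuryr_port)
    if resp.status_code == 409:
        return False  # lost the race; caller picks another KuryrPort
    resp.raise_for_status()
    return True
```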
ltomasbo | ok | 15:57 |
ltomasbo | dulek, so, we still know at the kuryr-controller the number of ports left in the pools, right? | 15:57 |
ltomasbo | and we can still trigger the repopulation actions from there | 15:58 |
ltomasbo | the difference will be when CNI does not have a port from the CRD to take | 15:58 |
dulek | ltomasbo: So the source of truth is KuryrPort API. | 15:58 |
dulek | ltomasbo: So kuryr-controller doesn't know *immediately* if a port is missing for a pod. | 15:58 |
ltomasbo | then, the question is how to detect that and make it retry, right? | 15:58 |
*** mrostecki has left #openstack-kuryr | 15:59 | |
ltomasbo | dulek, why will it not know it? | 15:59 |
dulek | ltomasbo: Hm. | 15:59 |
ltomasbo | the kuryrport crd is created by the kuryr-controller, right? | 16:00 |
ltomasbo | when the request_vif comes | 16:00 |
dulek | ltomasbo: https://review.openstack.org/#/c/527243/5/kuryr_kubernetes/controller/drivers/vif_pool.py@639 | 16:00 |
dulek | ltomasbo: Yes, but it's not assigned to a pod. | 16:00 |
ltomasbo | the difference (if I got it right) is that when a repopulation is needed, it will create kuryrPort CRDs, right? | 16:00 |
ltomasbo | the difference will be that we don't care about port IDs anymore at the kuryr-controller, but about the number of pod requests | 16:01 |
ltomasbo | so that we just keep track of the remaining number of ports, right? | 16:01 |
dulek | ltomasbo: Oh… | 16:02 |
ltomasbo | dulek, can we keep track of the pools at the controller in a similar way? without relying on the kubernetesAPI? | 16:02 |
ltomasbo | and just make the cni side relying on API? | 16:03 |
dulek | ltomasbo: Not really. Controller will not know if a port was taken by a pod until CNI annotates it. | 16:03 |
ltomasbo | umm | 16:03 |
dulek | ltomasbo: And it cannot assume that every pod will take a port, e.g. if the pod gets removed before the CNI request is sent. | 16:03 |
ltomasbo | so, kuryr-controller will not be listening anymore on pods creation? | 16:04 |
dulek | ltomasbo: It still listens for that. I've mentioned a bit earlier that I could probably count pods and trigger repopulation when the *total* number of pods in a pool is higher than the number of KuryrPorts in it. | 16:05 |
dulek | ltomasbo: How this ResourceNotReady retry works? | 16:05 |
ltomasbo | something like that but with some margin (to ensure a minimum size of the pool) | 16:06 |
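As a sketch of that trigger — pod and KuryrPort counts would come from the controller's watches on the K8s API; the function name and defaults are made up:

```python
def ports_to_create(num_pods, num_kuryr_ports, min_spare=5, batch=10):
    # Repopulate when the pods scheduled against a pool, plus the
    # margin ltomasbo suggests, outgrow the KuryrPorts created for it.
    deficit = (num_pods + min_spare) - num_kuryr_ports
    if deficit <= 0:
        return 0
    # Round up to whole batches so a burst triggers one big
    # repopulation instead of many tiny ones.
    return ((deficit + batch - 1) // batch) * batch
```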
ltomasbo | that was in the controller side | 16:06 |
dulek | ltomasbo: Suuure, sure. | 16:06 |
ltomasbo | so, whenever there is no port available for a pod, request_vif will trigger a repopulation action (in another thread) and raise ResourceNotReady | 16:06 |
ltomasbo | so that the vif_handler will retry later for that pod | 16:07 |
ltomasbo | and next time it will get the port created by the repopulation | 16:07 |
ltomasbo | there are a few handlers for logging, retries, ... in the handlers folder | 16:07 |
ltomasbo | not sure if they are applicable to the cni side... | 16:07 |
ltomasbo | but something like waiting on the kuryrPorts CRD to be populated may work I guess | 16:08 |
dulek | ltomasbo: VIFHandler doesn't seem to be inheriting from retry? | 16:08 |
ltomasbo | well, it does retry, as the pod gets the vif after the ResourceNotReady... | 16:09 |
ltomasbo | let me see | 16:09 |
dulek | ltomasbo: Okay, ControllerPipeline wraps it in such handler automatically. | 16:09 |
dulek | ltomasbo: I'll try to wrap my head around what we've discussed. | 16:11 |
ltomasbo | yep, it is in the pipeline.py | 16:14 |
ltomasbo | but just for ResourceNotReady | 16:14 |
ltomasbo | so, I guess something similar may help... | 16:14 |
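A condensed sketch of that retry pattern; the real logic lives in kuryr_kubernetes/handlers/retry.py and is wired in by ControllerPipeline, so the timings and the locally defined exception class here are simplifications:

```python
import time


class ResourceNotReady(Exception):
    pass


def with_retry(handler, event, attempts=5, interval=1):
    for attempt in range(attempts):
        try:
            return handler(event)
        except ResourceNotReady:
            # Repopulation is running in another thread; back off and
            # replay the event instead of failing the pod outright.
            time.sleep(interval * (attempt + 1))
    raise ResourceNotReady(event)
```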
dulek | ltomasbo: Looking from another angle - I wonder if a periodic job shouldn't repopulate the pools. | 16:16 |
dulek | ltomasbo: That way it would be free from race conditions. | 16:16 |
dulek | ltomasbo: I wonder… When creating 10 new ports in repopulation action - why is pod passed to request_vifs? | 16:17 |
dulek | ltomasbo: https://github.com/openstack/kuryr-kubernetes/blob/f53188a2b80851403b2e89c3410da692cffba0af/kuryr_kubernetes/controller/drivers/neutron_vif.py#L101 | 16:18 |
dulek | ltomasbo: Does this setting matter at all? Because it will be the same for 10 ports created for the pool. | 16:18 |
ltomasbo | dulek, I'm not sure if that was needed | 16:34 |
ltomasbo | but now I think we use it on the ports to recover them for the non-nested case | 16:35 |
dulek | ltomasbo: Yeaaaah, now I've noticed that it's fine, it's just getting node hostname from pod. | 16:35 |
* dulek needs coffee, but it's too late now. ;) | 16:35 | |
ltomasbo | :D | 16:35 |
ltomasbo | also, the problem with time-based repopulations is that you will still need to handle retries | 16:36 |
ltomasbo | in case of bursts of pod creations | 16:36 |
ltomasbo | in between updates | 16:36 |
dulek | ltomasbo: Not really I think. | 16:36 |
dulek | ltomasbo: I mean - new pool creation will need to be handled. | 16:36 |
dulek | ltomasbo: But repopulation of existing pools can happen in periodic tasks and this should be safe. | 16:37 |
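A minimal sketch of that periodic variant, assuming a hypothetical pool object exposing a repopulate_if_needed() method:

```python
import threading


def start_periodic_repopulation(pools, interval=60):
    def _tick():
        for pool in pools.values():
            pool.repopulate_if_needed()
        # Re-arm the timer; a daemon thread won't block process shutdown.
        timer = threading.Timer(interval, _tick)
        timer.daemon = True
        timer.start()
    _tick()
```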
ltomasbo | if you have 10 ports in a given pool, and you repopulate it, let's say, every minute | 16:37 |
ltomasbo | and you have 20 pods arriving between two repopulation actions | 16:37 |
ltomasbo | what would happen? | 16:37 |
dulek | ltomasbo: Not much, CNI will wait until controller catches up. | 16:38 |
dulek | ltomasbo: And will grab ports once available. | 16:38 |
ltomasbo | but then you add latency to pod creation based on that interval | 16:38 |
ltomasbo | instead of just repopulating it when you see it's needed | 16:39 |
dulek | ltomasbo: Yes, but is it different currently? | 16:39 |
*** juriarte has quit IRC | 16:39 | |
dulek | ltomasbo: When there's a high number of pods created at once, is the pool still updated 10-by-10? | 16:40 |
ltomasbo | ummm | 16:40 |
ltomasbo | yep, we have an update lock so we don't trigger many repopulations | 16:41 |
ltomasbo | dulek, so, perhaps at the end is not that different anyway | 16:42 |
dulek | ltomasbo: Oh well. That's the worst part - I'm starting to wonder if the design I'm working on is actually any better. :P | 16:42 |
ltomasbo | it would just save some time when a burst happens in between updates, if not many of them are triggered together | 16:43 |
dulek | ltomasbo: BTW - why are you doing an interval-based repopulation instead of just doing a failing lock so 2 repopulations will never run in parallel? | 16:44 |
ltomasbo | dulek, that is a good question... | 16:47 |
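The "failing lock" dulek suggests could be as simple as a non-blocking acquire per pool, sketched here with hypothetical helpers:

```python
import threading

_repopulation_locks = {}


def maybe_repopulate(pool_key, repopulate):
    lock = _repopulation_locks.setdefault(pool_key, threading.Lock())
    if not lock.acquire(False):
        return False  # a repopulation for this pool is already running
    try:
        repopulate(pool_key)
        return True
    finally:
        lock.release()
```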
dulek | ltomasbo: Okay, that's enough for today, I'll need to sleep and rethink this with a fresh mind. | 16:51 |
dulek | Thanks! | 16:51 |
ltomasbo | xD | 16:52 |
ltomasbo | you're welcome! | 16:52 |
*** maysamacedos has joined #openstack-kuryr | 17:17 | |
*** salv-orl_ has quit IRC | 18:23 | |
*** gcheresh has joined #openstack-kuryr | 18:33 | |
*** salv-orlando has joined #openstack-kuryr | 18:42 | |
*** maysamacedos has quit IRC | 19:40 | |
*** maysamacedos has joined #openstack-kuryr | 19:42 | |
*** salv-orlando has quit IRC | 19:46 | |
*** leyal has quit IRC | 19:48 | |
*** leyal has joined #openstack-kuryr | 19:49 | |
*** salv-orlando has joined #openstack-kuryr | 20:05 | |
*** maysamacedos has quit IRC | 20:28 | |
*** c00281451_ has joined #openstack-kuryr | 20:32 | |
*** dmellado has quit IRC | 20:32 | |
*** salv-orl_ has joined #openstack-kuryr | 20:32 | |
*** openstackgerrit has quit IRC | 20:34 | |
*** dmellado has joined #openstack-kuryr | 20:34 | |
*** salv-orlando has quit IRC | 20:35 | |
*** zengchen has quit IRC | 20:35 | |
*** s1061123 has quit IRC | 20:35 | |
*** lihi has quit IRC | 20:37 | |
*** maysamacedos has joined #openstack-kuryr | 20:39 | |
*** lihi has joined #openstack-kuryr | 20:40 | |
*** s1061123 has joined #openstack-kuryr | 20:40 | |
*** maysamacedos has quit IRC | 20:42 | |
*** jistr has quit IRC | 20:45 | |
*** jistr has joined #openstack-kuryr | 20:45 | |
*** jappleii__ has joined #openstack-kuryr | 21:23 | |
*** jappleii__ has quit IRC | 21:24 | |
*** jappleii__ has joined #openstack-kuryr | 21:25 | |
*** jappleii__ has quit IRC | 21:26 | |
*** jappleii__ has joined #openstack-kuryr | 21:27 | |
*** jappleii__ has quit IRC | 21:27 | |
*** jappleii__ has joined #openstack-kuryr | 21:28 | |
*** dougbtv has joined #openstack-kuryr | 21:34 | |
*** gcheresh has quit IRC | 21:41 | |
*** maysamacedos has joined #openstack-kuryr | 21:41 | |
*** maysamacedos has quit IRC | 21:43 | |
*** gouthamr has quit IRC | 22:18 | |
*** maysamacedos has joined #openstack-kuryr | 23:10 | |
*** maysamacedos has quit IRC | 23:31 | |
*** maysamacedos has joined #openstack-kuryr | 23:31 | |
*** atoth has quit IRC | 23:35 | |
*** pmannidi has joined #openstack-kuryr | 23:36 |