*** hongbin has quit IRC | 00:04 | |
janonymous | apuimedo: Hi | 00:30 |
apuimedo | janonymous: hi | 00:31 |
janonymous | apuimedo: Was wondering which thing to do first, swarm mode check or tls devstack.. | 00:31 |
janonymous | apuimedo: pls let me know | 00:31 |
apuimedo | janonymous: http://www.quickmeme.com/img/c2/c26809e3a038ae40c48384056be3891a0adcbf3ea2d13053e5ca83d8deb93798.jpg | 00:32 |
apuimedo | XD | 00:32 |
janonymous | :D | 00:32 |
apuimedo | janonymous: I'd say... Quick investigation into whether Docker 1.13 swarm mode allows specifying other drivers like kuryr | 00:34 |
apuimedo | and then, before jumping to code any support, finish the devstack tls and add a fullstack test for it | 00:34 |
apuimedo | janonymous: what do you think? | 00:34 |
janonymous | apuimedo: yeah, sure! | 00:35 |
janonymous | apuimedo: I was checking tls devstack using the USE_SSL=True flag for default services; seems like cinder fails, which leads to devstack failure | 00:35 |
janonymous | apuimedo: not sure if I should log a bug :D | 00:35 |
janonymous | apuimedo: I will proceed as per your suggestions, thanks Toni | 00:36 |
apuimedo | janonymous: asking around in openstack infra irc channel may save you the work of filing a bug | 00:36 |
apuimedo | :P | 00:36 |
janonymous | apuimedo: Haha | 00:37 |
janonymous | yeah i can try that :) | 00:37 |
apuimedo | janonymous: let me know how it goes | 00:43 |
apuimedo | (but later, now I'll go to bed :P ) | 00:43 |
janonymous | :) sure | 00:44 |
*** salv-orlando has joined #openstack-kuryr | 00:49 | |
*** salv-orlando has quit IRC | 00:54 | |
*** salv-orlando has joined #openstack-kuryr | 01:00 | |
*** salv-orlando has quit IRC | 01:06 | |
openstackgerrit | Zhang Ni proposed openstack/fuxi master: Enable Fuxi to use Manila share https://review.openstack.org/375452 | 01:13 |
*** yedongcan has joined #openstack-kuryr | 01:27 | |
openstackgerrit | Zhang Ni proposed openstack/fuxi master: Enable Fuxi to use Manila share https://review.openstack.org/375452 | 01:28 |
*** portdirect is now known as portdirect_away | 01:30 | |
*** hongbin has joined #openstack-kuryr | 01:35 | |
*** yedongcan1 has joined #openstack-kuryr | 01:41 | |
*** yedongcan has quit IRC | 01:43 | |
*** salv-orlando has joined #openstack-kuryr | 02:02 | |
*** salv-orlando has quit IRC | 02:06 | |
*** mattmceuen has joined #openstack-kuryr | 02:12 | |
*** hongbin has quit IRC | 02:19 | |
*** hongbin has joined #openstack-kuryr | 02:21 | |
*** yamamoto has quit IRC | 02:23 | |
*** hongbin has quit IRC | 02:28 | |
janonymous | apuimedo: Seems kuryr works when switched to swarm mode; the only change I had to make was removing --cluster-store etcd://localhost:4001 from the daemon startup: http://paste.openstack.org/show/597474/ | 02:38 |
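For context, the change janonymous describes boils down to dropping the external KV-store flag, since Docker 1.13 swarm mode keeps cluster state in its built-in raft store. A rough sketch of the before/after (flags and the network-create invocation are illustrative; the actual setup is in the paste link above):

    # Classic libnetwork multi-host setup: external KV store required
    dockerd --cluster-store etcd://localhost:4001 ...

    # Swarm mode: drop the flag and let swarm manage cluster state
    dockerd ...
    docker swarm init

    # kuryr networks can then be created as usual, e.g.:
    docker network create --driver kuryr --ipam-driver kuryr testnet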
vikasc | apuimedo, pong | 02:53 |
*** pmannidi has quit IRC | 02:57 | |
*** pmannidi has joined #openstack-kuryr | 02:58 | |
*** salv-orlando has joined #openstack-kuryr | 03:03 | |
*** salv-orlando has quit IRC | 03:07 | |
*** yedongcan1 has quit IRC | 03:15 | |
*** yedongcan has joined #openstack-kuryr | 03:16 | |
*** hongbin has joined #openstack-kuryr | 03:21 | |
*** yamamoto has joined #openstack-kuryr | 03:24 | |
*** mattmceuen has quit IRC | 03:25 | |
*** yamamoto has quit IRC | 03:30 | |
*** hongbin has quit IRC | 03:31 | |
*** yamamoto has joined #openstack-kuryr | 03:58 | |
*** mattmceuen has joined #openstack-kuryr | 04:21 | |
*** mattmceuen has quit IRC | 05:00 | |
*** salv-orlando has joined #openstack-kuryr | 05:04 | |
*** salv-orlando has quit IRC | 05:09 | |
*** yamamoto has quit IRC | 05:10 | |
*** yamamoto has joined #openstack-kuryr | 05:14 | |
*** yamamoto has quit IRC | 05:14 | |
*** jay-ahn has joined #openstack-kuryr | 05:46 | |
*** jay-ahn has quit IRC | 05:47 | |
*** hyunsun has joined #openstack-kuryr | 05:47 | |
*** janki has joined #openstack-kuryr | 06:03 | |
*** salv-orlando has joined #openstack-kuryr | 06:05 | |
*** salv-orlando has quit IRC | 06:10 | |
*** yamamoto has joined #openstack-kuryr | 06:15 | |
*** saneax-_-|AFK is now known as saneax | 06:18 | |
*** yamamoto has quit IRC | 06:19 | |
*** yamamoto has joined #openstack-kuryr | 06:19 | |
*** salv-orlando has joined #openstack-kuryr | 06:57 | |
*** salv-orl_ has joined #openstack-kuryr | 07:49 | |
*** salv-orlando has quit IRC | 07:51 | |
apuimedo | janonymous: sounds great! | 07:59 |
apuimedo | janonymous: Did you verify that the communication is using the Neutron ports? | 07:59 |
apuimedo | vikasc: Morning! | 07:59 |
vikasc | apuimedo, Morning! | 08:00 |
vikasc | apuimedo, i missed your ping yesterday | 08:00 |
*** reedip has quit IRC | 08:01 | |
*** reedip has joined #openstack-kuryr | 08:02 | |
apuimedo | no worries | 08:02 |
*** salv-orl_ has quit IRC | 08:04 | |
*** salv-orlando has joined #openstack-kuryr | 08:04 | |
*** salv-orlando has quit IRC | 08:08 | |
janonymous | apuimedo: yeah, I checked that the neutron logs were showing requests when creating the network | 08:10 |
apuimedo | janonymous: cool! | 08:11 |
apuimedo | I think you should definitely make a youtube video or asciinema recording of it and tweet that kuryr-libnetwork works with Docker swarm mode 1.13 :-0 | 08:12 |
apuimedo | :-) | 08:12 |
janonymous | apuimedo: I prefer to remain anonymous ;) | 08:12 |
apuimedo | true :P | 08:13 |
apuimedo | You can make a twitter profile with https://pbs.twimg.com/profile_images/716059387735965696/_qYqY6BK.jpg as avatar | 08:13 |
apuimedo | xd | 08:13 |
janonymous | Hehee :D | 08:14 |
janonymous | apuimedo: just to add, I checked with a single-node swarm deployment | 08:16 |
apuimedo | ok. We should check with multi-node as well | 08:17 |
janonymous | sure | 08:20 |
apuimedo | I received the new Kuryr logo made by the OpenStack Foundation | 08:38 |
*** hyunsun has quit IRC | 08:39 | |
*** yamamoto has quit IRC | 08:40 | |
janonymous | We should order kuryr t-shirts :D | 08:40 |
apuimedo | https://drive.google.com/file/d/0B7nRaHXyizqSMXBiakh1dGFBYXdCZHFKUko3OU44d1FYTWJr/view?usp=sharing | 08:40 |
apuimedo | I am biased since I made the previous logo | 08:40 |
apuimedo | but I liked it better | 08:40 |
apuimedo | :P | 08:40 |
apuimedo | This platypus doesn't have a cheeky smile | 08:41 |
janonymous | Hahaaa | 08:41 |
janonymous | this is the side view, I guess | 08:42 |
janonymous | :P | 08:42 |
apuimedo | I suppose | 08:42 |
janonymous | But this is not even close to what our previous logo was | 08:43 |
apuimedo | This one has a cheeky smile https://s-media-cache-ak0.pinimg.com/originals/1a/f3/37/1af33760a821e18bb6e7bd3457c55aa2.jpg | 08:43 |
apuimedo | janonymous: well, tbf, the previous one was not a platypus | 08:43 |
janonymous | what about http://docs.openstack.org/developer/kuryr-libnetwork/readme.html#getting-it-running-with-a-service-container | 08:44 |
apuimedo | http://www.environment.nsw.gov.au/images/nature/platypusLg.jpg would have been nice too | 08:44 |
apuimedo | janonymous: you mean that we should fix the upstream container? | 08:45 |
janonymous | sorry, I meant the logo on this page | 08:46 |
apuimedo | that's the one I made | 08:46 |
janonymous | apuimedo: http://docs.openstack.org/developer/kuryr-libnetwork/readme.html#kuryr-libnetwork | 08:46 |
apuimedo | I guess it will go to the trash :'( | 08:46 |
janonymous | I saw some protests on the mailing list about a bear logo | 08:47 |
janonymous | :D , all logos look the same :P | 08:47 |
apuimedo | well, the final version of Ironic's bear is much better than the initial one | 08:49 |
apuimedo | but I'm not sure which feedback to give on this one | 08:49 |
janonymous | Haha | 08:50 |
apuimedo | ivc_: did you see ltomasbo's and my comments on the lbaas patch? | 08:53 |
*** garyloug has joined #openstack-kuryr | 08:59 | |
*** yamamoto has joined #openstack-kuryr | 09:24 | |
*** yamamoto has quit IRC | 09:40 | |
*** salv-orlando has joined #openstack-kuryr | 10:06 | |
*** salv-orlando has quit IRC | 10:11 | |
ivc_ | apuimedo aye, will update it soon | 10:38 |
apuimedo | cool | 10:38 |
apuimedo | ivc_: now ltomasbo is also testing your resourceversion patch | 10:38 |
apuimedo | and stumbled upon a case where the annotations are not yet there | 10:38 |
apuimedo | and it gets an exception | 10:38 |
ltomasbo | yep, going to comment on the patch with what I found | 10:38 |
apuimedo | (when initializing new_annotations) | 10:38 |
apuimedo | ltomasbo: maybe you can share the struggle with ivc_ | 10:39 |
apuimedo | three brains think better than two | 10:39 |
ltomasbo | ivc_, why workflow -1 in the patch btw? | 10:39 |
ivc_ | well i was thinking about baking 'skip stale' functionality into it (that was the original plan, but it is more complicated than the current time-based 'skip-stale' patch) | 10:40 |
*** yamamoto has joined #openstack-kuryr | 10:41 | |
apuimedo | ivc_: but I merged too fast | 10:41 |
ivc_ | but at the same time 'skip stale' got merged so i've not yet decided | 10:41 |
apuimedo | xd | 10:41 |
ivc_ | ^ | 10:41 |
ivc_ | and it also has -1 from irenab anyway | 10:42 |
ivc_ | so it requires a bug on launchpad | 10:42 |
ivc_ | ltomasbo but anyway, what's the issue with the exception? | 10:44 |
ltomasbo | ivc_, ok, thanks | 10:44 |
ltomasbo | in L62 | 10:45 |
ivc_ | KeyError? | 10:45 |
ltomasbo | I'm getting an error as resource['metadata'] has no annotations dict | 10:45 |
ltomasbo | yep | 10:45 |
ltomasbo | so, I modified it to resource['metadata'].get('annotations', {}) | 10:46 |
ivc_ | yup | 10:46 |
ltomasbo | and then, to update the resource_version properly | 10:46 |
ltomasbo | I switched the for loop you have in L63-65 | 10:46 |
ltomasbo | by: | 10:46 |
ltomasbo | for k, v in new_annotations.items(): | 10:46 |
ltomasbo | if v != annotations.get(k, v): | 10:47 |
ltomasbo | break | 10:47 |
ltomasbo | else: | 10:47 |
ltomasbo | ... | 10:47 |
*** yamamoto has quit IRC | 10:47 | |
ltomasbo | so that if there are no annotations, the else branch gets executed and in the next iteration the response is ok | 10:47 |
ltomasbo | not sure though if that could break other use cases... | 10:47 |
ivc_ | i think i had a reason why it was 'annotations' driving the loop | 10:47 |
ltomasbo | that is what I don't know | 10:48 |
ivc_ | i need to think about it, but anyway we can get rid of get(..., {}) and simply "if 'annotations' not in ...['metadata']: ... continue" early | 10:49 |
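For readers following along, the two variants under discussion look roughly like this (a sketch reusing the names from the chat; not the actual patch):

    # ltomasbo's variant: default to an empty dict and drive the loop
    # with new_annotations, so for/else detects "nothing differs".
    annotations = resource['metadata'].get('annotations', {})
    for k, v in new_annotations.items():
        if v != annotations.get(k, v):
            break  # some value differs; proceed with the update path
    else:
        ...  # no differences (covers the "no annotations yet" case too)

    # ivc_'s variant: keep the original loop and bail out early when
    # the resource carries no annotations at all.
    if 'annotations' not in resource['metadata']:
        ...  # e.g. continue to the next event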
ltomasbo | yep | 10:50 |
ltomasbo | I just saw the problem was the timing, as apuimedo suggested | 10:50 |
ltomasbo | I just rebased my patch (without using the annotations one) | 10:50 |
ltomasbo | and now it works as expected | 10:50 |
ivc_ | you probably got the 'skip stale' patch now | 10:51 |
ivc_ | they are designed to work together | 10:51 |
ivc_ | as in 'resource_version' introduces the issue that 'skip stale' fixes | 10:51 |
ltomasbo | yep | 10:51 |
ltomasbo | but with skip stale it is enough for me (right now) | 10:52 |
ltomasbo | anyway, I guess the .get('annotations', {}) | 10:52 |
ltomasbo | could be useful for your patch | 10:52 |
ivc_ | it depends on whether there was an actual reason why 'annotation' is driving the loop instead of 'new_annotation'. but that was more than a month ago and i don't quite remember, so i need to go through it once more | 10:54 |
ivc_ | and it prolly deserves some #NOTE | 10:54 |
ivc_ | ltomasbo also, you'll probably still see that issue in some rare cases even after 'skip stale' | 10:56 |
ltomasbo | could be | 10:57 |
ltomasbo | but it was happening to me all the time | 10:57 |
ltomasbo | even debugging with pdb, where races are less probable | 10:57 |
ivc_ | its not the race | 10:57 |
ivc_ | you were guaranteed to get it :) | 10:58 |
ltomasbo | but anyhow, thanks for the skip stale! it was driving me crazy the whole morning :D | 10:58 |
ivc_ | the flow is that K8s itself causes several events during creation of the resource. i think the 'skip stale' patch has a good description of what's going on | 10:59 |
ivc_ | and what you were seeing is caused by the second event, which still has no annotation but happens after kuryr processed the first and added an annotation | 11:00 |
ivc_ | so it was literally guaranteed to happen | 11:00 |
ivc_ | and without skip_stale/resource_version patch it would cause unnecessary neutron create/delete operations for ports | 11:01 |
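A minimal sketch of the stale-event guard being described (per the chat, the merged patch is time-based; this shows the resourceVersion-based idea ivc_ mentions, with hypothetical names):

    # K8s emits several events while a resource is being created, and a
    # later event can still carry the pre-annotation view of the object.
    # Tracking the highest resourceVersion seen lets the handler drop it.
    last_seen = {}  # resource uid -> highest resourceVersion processed

    def is_stale(event):
        meta = event['object']['metadata']
        # K8s documents resourceVersion as opaque; the numeric compare
        # here is a simplification for the sketch.
        version = int(meta['resourceVersion'])
        if version <= last_seen.get(meta['uid'], -1):
            return True  # older than something already handled: skip
        last_seen[meta['uid']] = version
        return False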
*** yedongcan has left #openstack-kuryr | 11:01 | |
ivc_ | ltomasbo and by the way, if you actually want to test services, just take https://review.openstack.org/#/c/376045/ - it has everything already, i'm just adding tests and splitting it into smaller parts now | 11:07 |
*** salv-orlando has joined #openstack-kuryr | 11:08 | |
ltomasbo | ivc_, I've already tested that! | 11:09 |
ltomasbo | my problem is that my patch was not removing ports with release_vif in the normal way | 11:10 |
ltomasbo | so I found some leftover ports | 11:10 |
ltomasbo | due to not having the skip stale | 11:10 |
ivc_ | ah ok | 11:10 |
ltomasbo | but now it works, although as you said, it may still end up in some race and leave some ports | 11:11 |
*** salv-orlando has quit IRC | 11:12 | |
ivc_ | ltomasbo in your pool implementation do you have any performance numbers already? | 11:15 |
ltomasbo | I'm going to submit a new patch in a few minutes | 11:15 |
ltomasbo | I tried in a local VM here | 11:16 |
ltomasbo | with some containers that need around 12-14 seconds to boot up | 11:16 |
ltomasbo | and I'm getting them (if the port is already there) in 7-10 seconds | 11:16 |
ltomasbo | but I can execute a better benchmarking | 11:17 |
ltomasbo | but it can save 2-3 seconds (in a slow environment) | 11:17 |
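As a rough illustration of the port-pool approach being benchmarked here (a simplified sketch with assumed names, not ltomasbo's actual patch):

    import collections

    class PortPool(object):
        """Keep precreated Neutron ports keyed by e.g. (network, sg),
        so a pod request reuses one instead of paying for port_create."""

        def __init__(self, neutron):
            self._neutron = neutron
            self._free = collections.defaultdict(list)

        def request_port(self, pool_key, port_attrs):
            if self._free[pool_key]:
                port = self._free[pool_key].pop()
                # Reuse: just retag the existing port for the new pod.
                self._neutron.update_port(port['id'], {'port': port_attrs})
                return port
            return self._neutron.create_port({'port': port_attrs})['port']

        def release_port(self, pool_key, port):
            # Return the port to the pool instead of deleting it.
            self._free[pool_key].append(port)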
ivc_ | thing is i've measured port creation with ~20-50 concurrent pods with the original patch | 11:17 |
ivc_ | and imo what you are optimising now is not the bottleneck | 11:18 |
ivc_ | neutron.port_create is quite fast | 11:18 |
ivc_ | so i don't see much gain in optimising it | 11:18 |
ivc_ | the bottleneck is the 'wait for status==ACTIVE' loop | 11:18 |
ltomasbo | well, in my case, it was not that fast | 11:19 |
ltomasbo | plus, the wait status==active will be faster if the port is already created | 11:19 |
ivc_ | i've not added devref/spec/bp for that, but we discussed that 'pool' design before | 11:19 |
ivc_ | ltomasbo it would not | 11:20 |
ivc_ | not with ovs-agent at least | 11:20 |
ltomasbo | I'm using ovs-firewall | 11:20 |
ivc_ | in case of ovs-agent port_create is mostly about adding a record in database | 11:20 |
ivc_ | ovs-firewall is still ovs-agent | 11:21 |
ivc_ | right? | 11:21 |
ltomasbo | and I will also try to speed up the nested case, where there are more calls to neutron (create port, attach, update) | 11:21 |
ltomasbo | yep | 11:21 |
ivc_ | well what i was proposing is a little different | 11:21 |
ivc_ | so instead of optimising port_create we optimise the cni side of things | 11:22 |
ltomasbo | I think I'm seeing a bigger speed-up because I'm testing this on a devstack inside a VM | 11:22 |
ivc_ | so pools are per-node and interfaces/veths are also pooled | 11:22 |
ivc_ | ltomasbo my point is that in a fair test i expect very little gain from your current approach | 11:23 |
ltomasbo | seems one of the bottlenecks is the agent asking the server for the port details | 11:23 |
ivc_ | na | 11:24 |
ltomasbo | so, if we already have those, we speed up that part | 11:24 |
ivc_ | the bottleneck is ovs-agent | 11:24 |
ltomasbo | ovs-agent doing what? | 11:24 |
ltomasbo | attaching the veth to the br-int? | 11:24 |
ivc_ | thats done by nova | 11:25 |
*** salv-orlando has joined #openstack-kuryr | 11:25 | |
ivc_ | :) | 11:25 |
ltomasbo | or creating the linux bridge and applying the iptables rules? | 11:25 |
ivc_ | s/nova/os-vif/ | 11:25 |
ivc_ | linux bridge is also os-vif responsibility (and it is quite fast) | 11:25 |
ivc_ | you know how ovs-agent/nova/neutron-server interact with each other during vm startup? | 11:26 |
ltomasbo | yep | 11:27 |
ivc_ | so we are pretty much like nova here | 11:27 |
ltomasbo | but here we remove the nova | 11:27 |
ltomasbo | yep, agree | 11:28 |
ivc_ | we plug veth same as nova and wait for ovs-agent to configure everything (i.e. flows/iptables) | 11:28 |
ivc_ | and thats where we waste all the time | 11:28 |
openstackgerrit | Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion https://review.openstack.org/426687 | 11:29 |
ivc_ | if you profile port_create -> osvif.plug -> poll show_port you'll see that it is that show_port polling part that takes the majority of time | 11:29 |
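The polling ivc_ refers to looks roughly like this (a sketch assuming python-neutronclient; the timeout and interval values are illustrative):

    import time

    def wait_for_active(neutron, port_id, timeout=60, interval=1):
        # port_create returns almost immediately; the wall-clock cost is
        # here, waiting for ovs-agent to wire the port up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            port = neutron.show_port(port_id)['port']
            if port['status'] == 'ACTIVE':
                return port
            time.sleep(interval)
        raise RuntimeError('port %s did not become ACTIVE' % port_id)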
ltomasbo | I'm using ovs-firewall, so no iptables here | 11:29 |
ivc_ | doesn't really matter, it just replaces it with flows | 11:30 |
ivc_ | (but its prolly faster than with iptables) | 11:30 |
ivc_ | so, just add logging around port_create on the controller side and you'll see it's <1 second | 11:31 |
ivc_ | tho it grows as you add concurrency | 11:32 |
ltomasbo | 1 second is not that small for containers | 11:32 |
ltomasbo | so, still saving | 11:32 |
ivc_ | it's not by itself, but it's 1 second out of 12 seconds | 11:33 |
ivc_ | and by pooling veths you'd be saving all those 12 seconds | 11:33 |
ivc_ | and 12 seconds is quite good actually. when i added ~20 pods it was getting to about 1-2 minute intervals | 11:34 |
ivc_ | so my point is we better wait on pooling optimisation for now | 11:35 |
ltomasbo | 1-2 minutes to create 20 containers? | 11:35 |
ltomasbo | let me try that with/without my patch | 11:35 |
ivc_ | 1-2 minutes max per container | 11:36 |
ivc_ | just do 'kubectl run ... --replicas=20' | 11:36 |
ltomasbo | per container?? | 11:40 |
ivc_ | yup | 11:40 |
ltomasbo | that is a lot! | 11:40 |
ivc_ | aye | 11:40 |
ivc_ | with replicas=50 it didn't finish at all :) | 11:40 |
ivc_ | but that was also before skip_stale and had quite a bit of overhead | 11:41 |
ivc_ | ltomasbo but we are talking about devstack inside a vm here, so these numbers should not be taken very seriously (on real hw with a properly set up OpenStack it would be much better) | 11:44 |
ltomasbo | :D | 11:44 |
ivc_ | ltomasbo but my point is that even on devstack with veth-pooling (as opposed to just neutron port pooling) we can make it almost instant | 11:45 |
apuimedo | meh | 11:46 |
ivc_ | and to get to veth-pooling we first need runfiles implemented (apuimedo is working on it afaik) and also split cni into daemon/exec | 11:46 |
apuimedo | it seems I have quite a backlog to read | 11:46 |
ivc_ | xD | 11:46 |
ivc_ | apuimedo i think we need to make a todo/roadmap. maybe during VTG | 11:47 |
ivc_ | i mean which actionable items we have in the backlog and what the dependencies/order of execution are | 11:49 |
apuimedo | ivc_: what's the plan for avoiding k8s deleting the veth together with the namespace? | 11:54 |
*** pmannidi has quit IRC | 11:54 | |
apuimedo | or is remove from network called before the namespace is deleted? | 11:54 |
apuimedo | If it is, then that's the biggest optimization | 11:55 |
apuimedo | to move the veth back into a namespace | 11:55 |
apuimedo | called kuryr_pool | 11:55 |
apuimedo | (or, with the new logo, platypool | 11:55 |
apuimedo | ) | 11:55 |
ivc_ | apuimedo https://www.youtube.com/watch?v=hdcTmpvDO0I | 11:55 |
ivc_ | move it to 'recycle namespace' | 11:56 |
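The 'recycle namespace' idea, sketched with pyroute2 (the namespace name and helper are hypothetical, and it assumes the CNI delete call runs before the pod netns is destroyed, as discussed above):

    from pyroute2 import NetNS, netns

    RECYCLE_NS = 'kuryr_recycle'  # hypothetical holding namespace

    def park_veth(pod_ns_name, ifname):
        # Move the container-side veth out of the pod's about-to-die
        # namespace so the kernel does not destroy it with the netns.
        if RECYCLE_NS not in netns.listnetns():
            netns.create(RECYCLE_NS)
        with NetNS(pod_ns_name) as ns:
            idx = ns.link_lookup(ifname=ifname)[0]
            # net_ns_fd re-homes the interface into the named namespace
            ns.link('set', index=idx, net_ns_fd=RECYCLE_NS)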
apuimedo | ivc_: I wanted to call a core meeting for monday/tuesday to finalize the vtg schedule and sessions | 11:56 |
apuimedo | lol | 11:56 |
apuimedo | madagascar | 11:56 |
*** huats has quit IRC | 11:56 | |
openstackgerrit | Ilya Chukhnakov proposed openstack/kuryr-kubernetes master: K8s Services support: LBaaSSpecHandler https://review.openstack.org/427440 | 12:06 |
ivc_ | apuimedo ltomasbo ^^ | 12:08 |
*** yamamoto has joined #openstack-kuryr | 12:12 | |
*** dougbtv has quit IRC | 12:15 | |
*** dougbtv has joined #openstack-kuryr | 12:19 | |
*** dougbtv_ has joined #openstack-kuryr | 12:24 | |
*** dougbtv has quit IRC | 12:27 | |
*** garyloug has quit IRC | 12:44 | |
apuimedo | ivc_: thanks. Will check after lunch | 13:04 |
*** salv-orlando has quit IRC | 13:24 | |
*** yamamoto has quit IRC | 13:33 | |
*** janki has quit IRC | 13:35 | |
*** garyloug has joined #openstack-kuryr | 13:45 | |
*** yamamoto has joined #openstack-kuryr | 13:56 | |
*** yamamoto has quit IRC | 14:42 | |
*** pcaruana has quit IRC | 15:11 | |
*** pcaruana has joined #openstack-kuryr | 15:17 | |
*** saneax is now known as saneax-_-|AFK | 15:19 | |
*** yamamoto has joined #openstack-kuryr | 15:28 | |
*** yamamoto has quit IRC | 15:33 | |
*** salv-orlando has joined #openstack-kuryr | 15:44 | |
*** hongbin has joined #openstack-kuryr | 16:06 | |
openstackgerrit | Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion https://review.openstack.org/426687 | 16:31 |
*** yamamoto has joined #openstack-kuryr | 16:32 | |
hongbin | apuimedo: hey antoni, just want to get your early feedback on this one: https://review.openstack.org/#/c/427923/ . i wanted to make sure it is on the right track before doing further work on this feature | 16:36 |
*** yamamoto has quit IRC | 16:38 | |
openstackgerrit | Luis Tomas Bolivar proposed openstack/kuryr-kubernetes master: [WIP] Adding pool of ports to speed up containers booting/deletion https://review.openstack.org/426687 | 16:49 |
*** lakerzhou_ has joined #openstack-kuryr | 16:50 | |
*** jamespage has joined #openstack-kuryr | 17:24 | |
apuimedo | hongbin: I just put a comment on it. I like the approach | 17:26 |
apuimedo | :-0 | 17:26 |
apuimedo | :-) | 17:26 |
apuimedo | Thanks | 17:26 |
apuimedo | ivc_: I'm happy with https://review.openstack.org/#/c/427440/3 | 17:28 |
apuimedo | I would probably have left the set endpoints annotation for the patch that introduces the endpoints handler | 17:28 |
apuimedo | but I don't want to be so picky | 17:28 |
*** jgriffith has quit IRC | 17:56 | |
*** mestery has quit IRC | 17:56 | |
*** mestery has joined #openstack-kuryr | 17:57 | |
*** jgriffith has joined #openstack-kuryr | 17:57 | |
*** pc_m has quit IRC | 18:18 | |
*** pc_m has joined #openstack-kuryr | 18:22 | |
*** neiljerram has quit IRC | 18:34 | |
hongbin | apuimedo: thanks | 18:47 |
*** salv-orlando has quit IRC | 19:05 | |
*** salv-orlando has joined #openstack-kuryr | 19:08 | |
*** salv-orl_ has joined #openstack-kuryr | 19:49 | |
*** salv-orlando has quit IRC | 19:52 | |
*** garyloug has quit IRC | 19:54 | |
*** salv-orl_ has quit IRC | 20:05 | |
*** salv-orlando has joined #openstack-kuryr | 20:06 | |
*** salv-orlando has quit IRC | 21:45 | |
*** russellb has quit IRC | 21:58 | |
*** vikasc has quit IRC | 22:19 | |
*** reedip has quit IRC | 22:19 | |
*** vikasc has joined #openstack-kuryr | 22:19 | |
*** reedip has joined #openstack-kuryr | 22:21 | |
*** neiljerram has joined #openstack-kuryr | 22:24 | |
*** salv-orlando has joined #openstack-kuryr | 22:29 | |
*** lakerzhou_ has quit IRC | 22:33 | |
*** reedip has quit IRC | 22:50 | |
*** reedip has joined #openstack-kuryr | 22:51 | |
*** pmannidi has joined #openstack-kuryr | 23:20 | |
*** pmannidi has quit IRC | 23:26 | |
*** pmannidi has joined #openstack-kuryr | 23:29 | |
*** saneax-_-|AFK is now known as saneax | 23:35 | |
*** salv-orlando has quit IRC | 23:48 | |
*** salv-orlando has joined #openstack-kuryr | 23:49 | |
*** neiljerram has quit IRC | 23:54 | |
*** salv-orlando has quit IRC | 23:54 | |
*** neiljerram has joined #openstack-kuryr | 23:55 | |
*** salv-orlando has joined #openstack-kuryr | 23:57 |