*** salv-orl_ has joined #openstack-kuryr | 00:02 | |
*** salv-orlando has quit IRC | 00:05 | |
*** salv-orl_ has quit IRC | 00:07 | |
*** reedip_away is now known as reedip | 00:49 | |
*** fawadkhaliq has quit IRC | 01:18 | |
*** fawadkhaliq has joined #openstack-kuryr | 01:18 | |
*** fawadkhaliq has quit IRC | 01:19 | |
*** fawadkhaliq has joined #openstack-kuryr | 01:20 | |
*** salv-orlando has joined #openstack-kuryr | 01:35 | |
*** salv-orlando has quit IRC | 01:42 | |
*** fawadkhaliq has quit IRC | 02:01 | |
*** fawadkhaliq has joined #openstack-kuryr | 02:02 | |
*** lezbar__ has quit IRC | 02:08 | |
*** lezbar has joined #openstack-kuryr | 02:09 | |
*** fawadkhaliq has quit IRC | 02:10 | |
*** fawadkhaliq has joined #openstack-kuryr | 02:10 | |
*** banix has joined #openstack-kuryr | 02:23 | |
*** salv-orlando has joined #openstack-kuryr | 02:41 | |
*** tfukushima has joined #openstack-kuryr | 02:47 | |
*** salv-orlando has quit IRC | 02:51 | |
*** fawadkhaliq has quit IRC | 02:51 | |
*** fawadkhaliq has joined #openstack-kuryr | 02:52 | |
*** fawadkhaliq has quit IRC | 03:01 | |
*** fawadkhaliq has joined #openstack-kuryr | 03:01 | |
*** fawadkhaliq has quit IRC | 03:01 | |
*** fawadkhaliq has joined #openstack-kuryr | 03:02 | |
*** yuanying has quit IRC | 03:21 | |
*** yuanying_ has joined #openstack-kuryr | 03:21 | |
*** yuanying_ has quit IRC | 03:24 | |
*** yuanying has joined #openstack-kuryr | 03:24 | |
*** yuanying has quit IRC | 03:29 | |
*** tfukushima has quit IRC | 03:57 | |
*** yuanying has joined #openstack-kuryr | 04:01 | |
*** banix has quit IRC | 04:06 | |
*** salv-orlando has joined #openstack-kuryr | 04:10 | |
*** salv-orlando has quit IRC | 04:14 | |
*** fawadkhaliq has quit IRC | 04:17 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:17 | |
*** yamamoto_ has joined #openstack-kuryr | 04:30 | |
*** fawadkhaliq has quit IRC | 04:35 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:36 | |
*** fawadkhaliq has quit IRC | 04:40 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:41 | |
*** fawadkhaliq has quit IRC | 04:41 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:42 | |
*** fawadkhaliq has quit IRC | 04:42 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:43 | |
*** fawadkhaliq has quit IRC | 04:46 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:46 | |
*** fawadkhaliq has quit IRC | 04:47 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:48 | |
*** fawadkhaliq has quit IRC | 04:48 | |
*** fawadkhaliq has joined #openstack-kuryr | 04:49 | |
*** salv-orlando has joined #openstack-kuryr | 05:42 | |
*** salv-orlando has quit IRC | 05:49 | |
*** fawadkhaliq has quit IRC | 06:03 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:04 | |
*** fawadkhaliq has quit IRC | 06:20 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:21 | |
*** fawadkhaliq has quit IRC | 06:25 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:25 | |
*** fawadkhaliq has quit IRC | 06:26 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:26 | |
*** fawadkhaliq has quit IRC | 06:27 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:28 | |
*** fawadkhaliq has quit IRC | 06:28 | |
*** fawadkhaliq has joined #openstack-kuryr | 06:29 | |
*** wanghua has joined #openstack-kuryr | 06:51 | |
*** salv-orlando has joined #openstack-kuryr | 06:58 | |
*** salv-orlando has quit IRC | 07:15 | |
*** oanson has joined #openstack-kuryr | 08:55 | |
*** openstackgerrit has quit IRC | 09:03 | |
*** openstackgerrit has joined #openstack-kuryr | 09:03 | |
*** salv-orlando has joined #openstack-kuryr | 09:06 | |
*** oanson has quit IRC | 09:58 | |
*** wanghua has quit IRC | 10:04 | |
*** oanson has joined #openstack-kuryr | 10:45 | |
*** apuimedo has joined #openstack-kuryr | 10:55 | |
*** salv-orl_ has joined #openstack-kuryr | 12:02 | |
*** salv-orlando has quit IRC | 12:04 | |
*** yamamoto_ has quit IRC | 12:20 | |
*** oanson has quit IRC | 12:36 | |
*** baohua has joined #openstack-kuryr | 12:39 | |
*** banix has joined #openstack-kuryr | 13:09 | |
*** yamamoto has joined #openstack-kuryr | 13:12 | |
*** oanson has joined #openstack-kuryr | 13:22 | |
*** banix has quit IRC | 13:27 | |
*** lezbar has quit IRC | 13:33 | |
*** lezbar has joined #openstack-kuryr | 13:35 | |
*** mspreitz has joined #openstack-kuryr | 13:58 | |
mspreitz | we have a meeting now about CNI, right? | 14:02 |
apuimedo | #startmeeting k8s-kuryr part II | 14:02 |
openstack | Meeting started Wed Mar 23 14:02:40 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:02 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:02 |
openstack | The meeting name has been set to 'k8s_kuryr_part_ii' | 14:02 |
apuimedo | Welcome to another kubernetes integration meeting | 14:03 |
apuimedo | who is here for it? | 14:03 |
baohua | :) morning & evening | 14:03 |
mspreitz | o/ | 14:03 |
baohua | o/ | 14:03 |
apuimedo | gsagie: fawadkhaliq salv-orl_: ping | 14:04 |
apuimedo | let's hope we get at least 5 people :-) | 14:04 |
baohua | sure, let's wait for a while | 14:05 |
mspreitz | BTW, banix told me he will join but about 20 min late | 14:05 |
apuimedo | baohua: so maybe my mind synced with banix when I told you I thought the meeting was in 30mins :P | 14:06 |
baohua | haha, maybe :) | 14:06 |
apuimedo | while we wait | 14:06 |
apuimedo | tfukushima and I have been working on the prototype based on python 3.4 asyncio | 14:06 |
apuimedo | for the api watcher | 14:07 |
apuimedo | (we call it raven) | 14:07 |
apuimedo | it watches the pod and the service endpoints | 14:07 |
*** salv-orl_ has quit IRC | 14:07 | |
apuimedo | and already adds data for the direct cni plugin to retrieve and use for plugging the container into the neutron provider | 14:07 |
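[Editor's note: the watch loop described above can be sketched as follows. This is an illustrative asyncio sketch, not Kuryr's actual raven code: the event shape follows the k8s watch API, but the `translate` callback and the stream handling are invented for the example, and it uses the later `async def` syntax rather than the Python 3.4 `yield from` style the prototype targeted.]

```python
import asyncio
import json

# The k8s watch API streams one JSON object per line; each event carries
# a type (ADDED/MODIFIED/DELETED) and the affected object.
def parse_watch_event(line):
    """Extract (event_type, kind, name) from one watch-stream line."""
    event = json.loads(line)
    obj = event["object"]
    return event["type"], obj["kind"], obj["metadata"]["name"]

async def watch(reader, translate):
    """Read watch events from a line-oriented stream and hand each one
    to a translator (e.g. something that writes the data the CNI plugin
    later retrieves to plug the container into Neutron)."""
    while True:
        line = await reader.readline()
        if not line:
            break
        translate(*parse_watch_event(line.decode()))
```

A translator producing the per-pod data mentioned above would plug in as the `translate` argument.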
gsagie | here | 14:08 |
baohua | great, and sorry, I may have lost some context; is it watching the k8s resources changing and triggering our plugin to work? | 14:08 |
*** openstack has joined #openstack-kuryr | 14:21 | |
apuimedo | ovs | 14:21 |
gsagie | "hard pinging" heh | 14:21 |
baohua | it works! | 14:21 |
apuimedo | banix: lurking, eh? | 14:21 |
baohua | hi banix | 14:22 |
mspreitz | gsagie: my design goal is to keep the iptables-based kube-proxy working, as we use Neutron. | 14:22 |
banix | they throw a rock at me | 14:22 |
gsagie | mspreitz: but which Neutron.. OVN ? | 14:22 |
banix | hi, sorry for being late | 14:22 |
apuimedo | banix: do you have to carry it up a hill then? | 14:22 |
banix | just joined | 14:22 |
apuimedo | gsagie: ovs | 14:22 |
banix | :) | 14:22 |
gsagie | apuimedo: so the reference implementation? ok thanks | 14:22 |
mspreitz | gsagie: I think my design does not depend on what Neutron plugin / mechanism-drivers are being used. | 14:22 |
apuimedo | with quite a bit of other vendors it would not work with kube-proxy | 14:22 |
mspreitz | apuimedo: *which* kube proxy? | 14:23 |
gsagie | apuimedo, mspreitz: you need to somehow count on some sort of service chaining then, and have the kube-proxy as a Neutron port | 14:23 |
apuimedo | or it would require a bit of hacking around to make the k8s pod network available to the hosts (which is not a default in some neutron vendors) | 14:23 |
gsagie | mspreitz: or you have another plan? | 14:23 |
apuimedo | gsagie: yes. that is an option I considered. Not for kube-proxy. Rather for dns | 14:24 |
apuimedo | to put a port for the skydns in the neutron network | 14:24 |
mspreitz | woa, slow down, too many parallel threads and side topics. | 14:24 |
apuimedo | mspreitz: you are right | 14:24 |
apuimedo | let's start properly now that we have enough people | 14:24 |
mspreitz | My aim is to support the iptables-based kube-proxy, not the userspace based one. | 14:24 |
apuimedo | which topic do you want to cover first? | 14:24 |
apuimedo | let's do updates first | 14:25 |
apuimedo | #topic updates | 14:25 |
apuimedo | mspreitz: can you give us some update on your work? | 14:25 |
gsagie | mspreitz: but this can also be done in a namespace | 14:25 |
mspreitz | I added the requested examples to my devref and slightly updated the design. I also have some open questions posted in the new k8s place for that. | 14:25 |
apuimedo | gsagie: we'll get to this later | 14:25 |
gsagie | okie | 14:26 |
apuimedo | mspreitz: you mean the google docs? | 14:26 |
apuimedo | if so, please link it here | 14:26 |
mspreitz | There is a k8s issue opened for discussing the network policy design: https://github.com/kubernetes/kubernetes/issues/22469 | 14:26 |
apuimedo | #link https://github.com/kubernetes/kubernetes/issues/22469 | 14:26 |
apuimedo | thanks mspreitz | 14:27 |
apuimedo | #link https://review.openstack.org/#/c/290172/ | 14:27 |
mspreitz | The main design work has deferred the questions of access from outside the cluster, but I could not avoid it in doing a plausible guestbook example. | 14:27 |
baohua | is the problem on the lb side? | 14:28 |
apuimedo | I see that you added the examples, thanks | 14:28 |
mspreitz | The first problem with access from the outside is that the design so far has no vocabulary for talking about external clients. | 14:29 |
apuimedo | for those wondering from the last meeting | 14:29 |
apuimedo | https://review.openstack.org/#/c/290172/5..10/doc/source/devref/k8s_t1.rst | 14:29 |
apuimedo | #link https://review.openstack.org/#/c/290172/5..10/doc/source/devref/k8s_t1.rst | 14:29 |
baohua | thanks for the links | 14:29 |
apuimedo | this is the diff against the version that we spoke about the last time | 14:29 |
apuimedo | mspreitz: well, I guess they do assume that if your service defines externalIP or uses loadbalancer, it is externally accessible | 14:30 |
mspreitz | apuimedo: actually, that is contrary to the design approach that has been taken so far... | 14:31 |
mspreitz | Note, for example, that there is explicitly no usage of existing "Service" instances. | 14:31 |
apuimedo | mspreitz: you mean that by default it is not, and you'd have to add it | 14:31 |
baohua | i think if can access through nodeip, then the external should work with lb support. | 14:32 |
mspreitz | I mean that the design approach has been one of orthogonality, the network policy has to say what is intended, no implicit anything except health checking. | 14:32 |
apuimedo | mspreitz: oh, sure. I was talking about what people may be doing provisionally, while we lack the vocabulary :P | 14:32 |
mspreitz | I have not been thinking much about node IP, since my context is a public shared service that will not be offering node IP as an option. | 14:33 |
apuimedo | only node port then? | 14:33 |
baohua | oh, sure, that's the case | 14:34 |
mspreitz | I am focused on the case of network policies allowing connections to a pod's 'cluster IP' address. | 14:34 |
baohua | for external clients? | 14:35 |
apuimedo | mspreitz: do we have any news on the policy language front? | 14:35 |
apuimedo | (from the k8s-sig-network side) | 14:35 |
mspreitz | I am including the problem of external clients. Clearly they have to have an IP route to the cluster IP addresses. As do the pod hosts (minions, nodes), for health checking. | 14:36 |
mspreitz | Configuring external clients has to be beyond the scope of this code, but it has to be something that can be done relatively easily. | 14:36 |
mspreitz | My thought is to put the k8s pods on Neutron tenant networks connected to a Neutron router connected to an "external network" (in Neutron terms). | 14:37 |
apuimedo | mspreitz: well. for what is worth | 14:37 |
apuimedo | what we do is | 14:37 |
mspreitz | That establishes a path, and naturally all the right routing table entries have to be in the right places. | 14:37 |
apuimedo | Pods -> tenant network -> router <- cluster ip network | 14:37 |
apuimedo | and the cluster ip network is where we put LBs that go into the pods (we do not use kube-proxy) | 14:38 |
mspreitz | IIRC, "cluster IP" is the kind of address a pod gets, not a host. | 14:38 |
apuimedo | cluster IP is the IP that brings you to a replica of a pod | 14:38 |
apuimedo | in one host it takes you to one replica, in another, to another | 14:38 |
apuimedo | that's why we made it the VIP of the load balancer that we put in front of the pods | 14:39 |
mspreitz | If I understand the terminology correctly, an RC manages several "replicas", each of which is a "pod". | 14:39 |
apuimedo | then, for external access, the router is connected to a neutron external net | 14:39 |
apuimedo | and we can assign FIPs to the VIPs of the load balancers | 14:39 |
apuimedo | mspreitz: that is right | 14:39 |
apuimedo | so, cluster ip -> pod_x | 14:40 |
apuimedo | where pod_x may be any of the pods that are replicas | 14:40 |
apuimedo | and kube-proxy handles that with its iptables fiddling | 14:40 |
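[Editor's note: the mapping described here, with the service's cluster IP used as the VIP of a Neutron load balancer fronting the pod replicas, amounts to building request bodies like the following. The field names follow the Neutron LBaaS v1 API; all concrete values are made up for illustration, and in the real API the pool must be created first and its id set on the vip and members.]

```python
def lb_resources_for_service(service_name, cluster_ip, port, subnet_id, pod_ips):
    """Build LBaaS v1-style request bodies that expose a k8s service:
    a pool of the pod replicas, with the cluster IP as the VIP address."""
    pool = {"pool": {"name": service_name, "protocol": "TCP",
                     "lb_method": "ROUND_ROBIN", "subnet_id": subnet_id}}
    vip = {"vip": {"name": service_name, "protocol": "TCP",
                   "protocol_port": port, "subnet_id": subnet_id,
                   # the service's cluster IP becomes the load balancer VIP
                   "address": cluster_ip}}
    members = [{"member": {"address": ip, "protocol_port": port}}
               for ip in pod_ips]
    return pool, vip, members
```

External access then follows as discussed below: assign a floating IP to the VIP port via the router's external network.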
mspreitz | apuimedo: are you saying that a given cluster IP is had by several pods (one on each of several hosts)? | 14:40 |
apuimedo | (managing the cluster ip as a sort of a vip) | 14:41 |
apuimedo | mspreitz: that's what I saw looking at the iptables | 14:41 |
apuimedo | of a deployment | 14:41 |
apuimedo | generally, it was redirecting the cluster ip to a pod in the same host | 14:41 |
mspreitz | apuimedo: are you trying to report a fact about kubernetes? | 14:41 |
apuimedo | I don't know what they do if there is no replica in that specific host | 14:41 |
apuimedo | mspreitz: I'm just stating what I saw | 14:42 |
mspreitz | was this using the userspace kube-proxy or the iptables-based one? | 14:42 |
*** dingboopt has quit IRC | 14:42 | |
apuimedo | in hopes that it gives some context as to why we use the cluster ips as VIPs of neutron LBs | 14:42 |
baohua | clusterip is virtual, only meaningful with kube-proxy rules to do the translation to real address | 14:42 |
*** dingboopt has joined #openstack-kuryr | 14:42 | |
mspreitz | slow down, let apuimedo answer | 14:43 |
apuimedo | iptables based one IIRC. But I can't confirm it. tfukushima set it up, I only looked around | 14:43 |
mspreitz | apuimedo: I think you may have confused "cluster IP" and "service IP". The service IPs are virtual, the cluster IPs are real; each cluster IP is had by just one pod. | 14:43 |
apuimedo | but what I saw was, cluster ip only defined in iptables redirects | 14:43 |
mspreitz | apuimedo: so the virtual IP addrs you saw were NOT the addrs that each pod sees itself as having, right? | 14:44 |
banix | i think that is the service ip | 14:44 |
apuimedo | mspreitz: right | 14:44 |
mspreitz | apuimedo: you are saying "cluster IP" where you mean what is actually called "service IP". | 14:44 |
apuimedo | mspreitz: maybe. I've been known to confuse names in the past. I'll try to check it | 14:45 |
baohua | sorry, mspreitz, i think we only have the clusterIP term. | 14:45 |
apuimedo | I only recall cluster ip | 14:45 |
baohua | service ip was the past | 14:45 |
mspreitz | "service IP" and "cluster IP" are distinct concepts | 14:45 |
mspreitz | baohua: if you list services, does each one have an IP address? | 14:45 |
baohua | yes, can u give some link to the concept doc? I've only seen the clusterIP | 14:46 |
apuimedo | mspreitz: https://coreos.com/kubernetes/docs/latest/getting-started.html | 14:46 |
apuimedo | look at the "service_ip_range" | 14:46 |
apuimedo | "Each service will be assigned a cluster IP out of this range" | 14:47 |
mspreitz | baohua: I can cite http://kubernetes.io/docs/user-guide/#concept-guide but it is not 1.2 | 14:47 |
mspreitz | apuimedo: exactly... | 14:47 |
apuimedo | which is distinct from POD_NETWORK | 14:47 |
mspreitz | exactly. | 14:47 |
apuimedo | pods don't get cluster_ips | 14:47 |
apuimedo | cluster ips are for services | 14:47 |
mspreitz | that POD_NETWORK thing configures the range for cluster IP addresses. | 14:47 |
apuimedo | nope | 14:48 |
apuimedo | SERVICE_IP_RANGE=10.3.0.0/24 | 14:48 |
mspreitz | o gosh, I see the verbiage there | 14:48 |
apuimedo | is for cluster ip addresses | 14:48 |
baohua | yes, that's what i think | 14:48 |
mspreitz | fine, so let's say "pod IP" for the kind of address that a pod gets. | 14:48 |
apuimedo | k8s naming conventions are bringing headaches to everybody :-) | 14:48 |
apuimedo | right | 14:49 |
baohua | there're 3 types of ip: node, pod and clusterIP | 14:49 |
baohua | node is for the physical server, pod is for pod, and clusterIP for service | 14:49 |
apuimedo | right | 14:49 |
mspreitz | could we please say "service IP" for those virtual ones? | 14:49 |
baohua | oh, no, pls | 14:49 |
baohua | as this term was utilized in an old release | 14:49 |
baohua | very confusing | 14:49 |
mspreitz | Anyway, back to the design. | 14:49 |
apuimedo | mspreitz: it will end up being more confusing, as people will check the current reference | 14:50 |
baohua | pls unify to clusterIP | 14:50 |
apuimedo | cluster ip is for the service | 14:50 |
mspreitz | My aim is to support the iptables-based kube-proxy | 14:50 |
apuimedo | and that's why we map it to a VIP of Neutron LBs when not using kube-proxy | 14:50 |
apuimedo | mspreitz: for supporting iptables-based-proxy | 14:50 |
apuimedo | can you refresh my memory on what it does with the VIP (since I'm not sure I looked at the userspace one or not) | 14:51 |
apuimedo | with the one I looked at | 14:51 |
apuimedo | It should be enough that the host can route into the neutron network that we use as POD_NETWORK | 14:51 |
apuimedo | (of course, depending on vendor, but probably with ovs it would work) | 14:52 |
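[Editor's note: as a rough illustration of that routing requirement (all addresses are invented; the exact next hop depends on how the Neutron router is exposed to the node), each host would need something along the lines of:]

```shell
# Make the Neutron POD_NETWORK subnet reachable from the host itself,
# so node-local traffic (kube-proxy rules, health checks) can reach
# pod and cluster IPs without extra plumbing.
ip route add 10.1.0.0/16 via 192.168.0.1
```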
mspreitz | apuimedo: we may have another terminology confusion. When you say "kube proxy", do you mean specifically the userspace one and NOT the iptables-based one? | 14:52 |
apuimedo | mspreitz: I mean the one I experienced (I only saw tons of iptables redirect rules, so I assume that it was the iptables-based one) | 14:53 |
apuimedo | that's why I asked what you see in your deployments | 14:53 |
apuimedo | how does it map the cluster ip to a pod | 14:53 |
mspreitz | apuimedo: the userspace based kube-proxy also uses iptables entries. | 14:53 |
apuimedo | to be able to know if my understanding is from one or the other :P | 14:53 |
mspreitz | In the userspace proxy, on each pod host, for each service, there is an iptables rule mapping dest=service & port=serviceport to dest=localhost & port=localaliasofthatservice | 14:54 |
mspreitz | There is a userspace process listening there and doing the loadbalancing. | 14:55 |
apuimedo | and in the new one? | 14:55 |
mspreitz | In the iptables-based kube-proxy, on each host, for each service, there is an iptables rule matching dest=service & port=serviceport and jumping to a set of rules that stochastically choose among the service's endpoints. | 14:56 |
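[Editor's note: the stochastic choice works via iptables `statistic --mode random` matches: rule i (0-based, n endpoints) fires with probability 1/(n-i) of the traffic that reaches it, which gives each endpoint an equal overall 1/n share. A small sketch of that arithmetic; this mirrors the scheme, not kube-proxy's actual code.]

```python
def endpoint_probabilities(n):
    """Per-rule probabilities kube-proxy attaches to n endpoint rules:
    rule i matches 1/(n - i) of the traffic that falls through to it."""
    return [1.0 / (n - i) for i in range(n)]

def overall_share(probs):
    """Overall fraction of connections each rule ends up capturing."""
    remaining, shares = 1.0, []
    for p in probs:
        shares.append(remaining * p)
        remaining *= (1 - p)
    return shares
```

For three endpoints the rules carry probabilities 1/3, 1/2, and 1, and each endpoint receives a third of the connections overall.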
apuimedo | mspreitz: and you want to support the latter, right? | 14:56 |
mspreitz | apuimedo: right | 14:56 |
mspreitz | Because it does not transform the client IP address. | 14:56 |
mspreitz | So the translation from network policy statement to security group rules is pretty direct. | 14:57 |
apuimedo | mspreitz: it would be very useful if you could post in the ML and/or devref examples of those chains that it uses for choosing | 14:57 |
mspreitz | OK, I'll find a way to do that. | 14:57 |
apuimedo | so that we can figure out how best to support this new kube-proxy backend | 14:57 |
apuimedo | #topic others | 14:58 |
apuimedo | any other topic in the last two minutes? | 14:58 |
apuimedo | it may feel like we didn't get a lot decided, but I am very happy to have converged in our understanding of concepts and vocabulary | 14:59 |
banix | good | 14:59 |
mspreitz | I am not sure we are accurately converged on "cluster IP" | 14:59 |
apuimedo | mspreitz: you'll get to love the new name with time | 14:59 |
apuimedo | it's like a shoe that is a bit rough but with time gets familiar, and you keep using it out of habit | 15:00 |
banix | looks like this is the new name for what used to be called service ip. if i understand it correctly. | 15:00 |
apuimedo | I'll never like the overloading of the name "namespaces" though | 15:00 |
apuimedo | banix: that's right | 15:00 |
apuimedo | I don't know what it is nowadays with people changing names and IPs all the time | 15:01 |
baohua | +1 | 15:01 |
apuimedo | anything else, then? | 15:01 |
gsagie | heh | 15:01 |
mspreitz | not from me | 15:01 |
gsagie | nope | 15:01 |
baohua | nope | 15:01 |
apuimedo | let's meet next week? | 15:01 |
gsagie | sure | 15:01 |
baohua | sure, see u then | 15:01 |
banix | 14:30 UTC? | 15:01 |
banix | if on Wednesday | 15:02 |
mspreitz | OK with me | 15:02 |
apuimedo | #info next week 14:30utc k8s-kuryr meeting | 15:02 |
apuimedo | #endmeeting | 15:02 |
openstack | Meeting ended Wed Mar 23 15:02:18 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:02 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/k8s_kuryr_part_ii/2016/k8s_kuryr_part_ii.2016-03-23-14.02.html | 15:02 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/k8s_kuryr_part_ii/2016/k8s_kuryr_part_ii.2016-03-23-14.02.txt | 15:02 |
openstack | Log: http://eavesdrop.openstack.org/meetings/k8s_kuryr_part_ii/2016/k8s_kuryr_part_ii.2016-03-23-14.02.log.html | 15:02 |
apuimedo | thank you all for joining | 15:02 |
banix | thanks | 15:02 |
*** banix has quit IRC | 15:02 | |
apuimedo | gsagie: banix: don't escape yet | 15:02 |
apuimedo | :-D | 15:02 |
apuimedo | we should discuss release planning for the libnetwork driver | 15:02 |
apuimedo | banix: and I'd love to have your re-use-existing-networks support in it | 15:03 |
gsagie | apuimedo: you mean an OpenStack official release? | 15:03 |
*** baohua has quit IRC | 15:04 | |
apuimedo | gsagie: yes | 15:04 |
gsagie | i wonder if we are really stable enough; i personally don't feel we have tested this enough with the IPAM, and backporting fixes to a release can be a bit annoying, but it really depends on your and banix's experience | 15:05 |
apuimedo | gsagie: I would like the cores to take a decision on this | 15:06 |
apuimedo | I feel that we could use some extra pressure of having a first official release | 15:06 |
gsagie | apuimedo: good idea, want me to email everyone? | 15:06 |
apuimedo | gsagie: pretty please :-) | 15:06 |
gsagie | i also need to see if it's not too late for Mitaka | 15:07 |
apuimedo | gsagie: sure | 15:07 |
apuimedo | otherwise we'll just make some tag | 15:07 |
apuimedo | that packagers can use | 15:07 |
gsagie | yeah tag is a good idea regardless | 15:07 |
gsagie | before we start adding Kubernetes integration | 15:08 |
gsagie | apuimedo, will send to all of us and Thierry to make sure we can still create a stable release | 15:09 |
apuimedo | ;-) | 15:09 |
apuimedo | perfect | 15:09 |
apuimedo | well, that's all from me for now. gsagie I'll look at the slides draft this weekend. I'm completely swamped now | 15:09 |
gsagie | apuimedo: np, if you want help send me the Kubernetes slides (if you have something) and i can start integrating | 15:11 |
gsagie | or even just a structure | 15:11 |
apuimedo | ok | 15:11 |
gsagie | and i will build slides (from text) | 15:11 |
*** fawadkhaliq has quit IRC | 15:54 | |
*** mspreitz has quit IRC | 15:56 | |
*** fawadkhaliq has joined #openstack-kuryr | 16:31 | |
*** salv-orlando has joined #openstack-kuryr | 16:31 | |
fawadkhaliq | apuimedo: sorry, I somehow didn't have this on my calendar and woke up after the meeting. I will catch up offline. | 16:32 |
*** fawadkhaliq has quit IRC | 16:35 | |
*** fawadkhaliq has joined #openstack-kuryr | 16:36 | |
*** salv-orlando has quit IRC | 16:38 | |
apuimedo | np ;-) | 16:40 |
*** banix has joined #openstack-kuryr | 16:42 | |
*** salv-orlando has joined #openstack-kuryr | 16:42 | |
*** salv-orlando has quit IRC | 16:54 | |
banix | apuimedo: gsagie sorry had to run/escape… Saw the conversation; I will have the updated external network support done (ready for review) this week. | 17:00 |
*** fawadkhaliq has quit IRC | 17:36 | |
*** fawadkhaliq has joined #openstack-kuryr | 17:37 | |
*** fawadkhaliq has quit IRC | 17:39 | |
*** fawadkhaliq has joined #openstack-kuryr | 17:39 | |
*** fawadkhaliq has quit IRC | 17:41 | |
*** fawadkhaliq has joined #openstack-kuryr | 17:41 | |
*** fawadkhaliq has quit IRC | 17:47 | |
*** yamamoto_ has quit IRC | 17:47 | |
*** fawadkhaliq has joined #openstack-kuryr | 17:47 | |
*** salv-orlando has joined #openstack-kuryr | 17:48 | |
*** yamamoto has joined #openstack-kuryr | 17:52 | |
apuimedo | thanks | 17:53 |
*** salv-orl_ has joined #openstack-kuryr | 18:01 | |
*** salv-orlando has quit IRC | 18:04 | |
*** yuanying_ has joined #openstack-kuryr | 18:07 | |
*** yuanying has quit IRC | 18:09 | |
openstackgerrit | Mike Spreitzer proposed openstack/kuryr: Add devref for kubernetes translate 1 and 2 https://review.openstack.org/290172 | 18:17 |
*** yamamoto has quit IRC | 18:19 | |
*** yamamoto has joined #openstack-kuryr | 18:24 | |
*** mestery_ has joined #openstack-kuryr | 18:28 | |
*** yamamoto has quit IRC | 18:33 | |
*** mestery has quit IRC | 18:33 | |
*** mestery_ is now known as mestery | 18:33 | |
*** yamamoto has joined #openstack-kuryr | 18:33 | |
*** yamamoto has quit IRC | 18:33 | |
*** yamamoto has joined #openstack-kuryr | 18:34 | |
*** mestery_ has joined #openstack-kuryr | 18:34 | |
*** yamamoto has quit IRC | 18:37 | |
*** yamamoto has joined #openstack-kuryr | 18:44 | |
*** yamamoto has quit IRC | 18:44 | |
*** yamamoto has joined #openstack-kuryr | 18:44 | |
*** yamamoto has quit IRC | 18:45 | |
*** mestery_ has quit IRC | 18:46 | |
*** openstack has joined #openstack-kuryr | 19:07 | |
*** openstack has joined #openstack-kuryr | 19:21 | |
*** yamamoto has joined #openstack-kuryr | 19:45 | |
*** yamamoto has quit IRC | 19:53 | |
*** openstack has joined #openstack-kuryr | 20:31 | |
*** fawadkhaliq has quit IRC | 20:53 | |
*** fawadkhaliq has joined #openstack-kuryr | 20:53 | |
*** fawadkhaliq has quit IRC | 21:04 | |
*** fawadk has joined #openstack-kuryr | 21:04 | |
*** oanson has quit IRC | 21:04 | |
*** banix has quit IRC | 21:49 | |
*** fawadk has quit IRC | 22:53 | |
*** fawadkhaliq has joined #openstack-kuryr | 22:53 | |
*** fawadkhaliq has quit IRC | 22:55 | |
*** fawadkhaliq has joined #openstack-kuryr | 22:56 | |
*** fawadkhaliq has quit IRC | 23:07 | |
*** fawadkhaliq has joined #openstack-kuryr | 23:08 | |
*** openstack has joined #openstack-kuryr | 23:22 | |
*** fawadkhaliq has quit IRC | 23:39 | |
*** fawadkhaliq has joined #openstack-kuryr | 23:49 |