*** salv-orlando has quit IRC | 00:06 | |
*** outofmemory is now known as reedip | 00:17 | |
*** banix has joined #openstack-kuryr | 02:08 | |
*** banix has quit IRC | 02:08 | |
*** tfukushima has joined #openstack-kuryr | 02:30 | |
*** tfukushima has quit IRC | 02:33 | |
openstackgerrit | Mohammad Banikazemi proposed openstack/kuryr: Makes use of Neutron tags option https://review.openstack.org/306627 | 02:51 |
*** yuanying_ has quit IRC | 02:51 | |
*** yamamoto_ has joined #openstack-kuryr | 02:59 | |
*** yamamoto_ has quit IRC | 03:11 | |
*** yamamoto_ has joined #openstack-kuryr | 03:22 | |
*** yamamoto_ has quit IRC | 03:43 | |
*** yuanying has joined #openstack-kuryr | 03:46 | |
*** tfukushima has joined #openstack-kuryr | 03:52 | |
*** tfukushima has quit IRC | 03:56 | |
*** tfukushima has joined #openstack-kuryr | 04:07 | |
*** salv-orlando has joined #openstack-kuryr | 04:09 | |
*** salv-orlando has quit IRC | 04:13 | |
*** yamamoto_ has joined #openstack-kuryr | 04:27 | |
*** oanson has joined #openstack-kuryr | 04:29 | |
*** tfukushima has quit IRC | 04:35 | |
*** salv-orlando has joined #openstack-kuryr | 04:48 | |
*** oanson has quit IRC | 04:50 | |
*** oanson has joined #openstack-kuryr | 05:33 | |
*** tfukushima has joined #openstack-kuryr | 05:36 | |
*** tfukushima has quit IRC | 05:40 | |
*** tfukushima has joined #openstack-kuryr | 05:45 | |
*** huats_ has joined #openstack-kuryr | 07:21 | |
*** lezbar has quit IRC | 07:39 | |
*** lezbar has joined #openstack-kuryr | 08:14 | |
*** yamamoto_ has quit IRC | 09:02 | |
*** tfukushima has quit IRC | 09:02 | |
*** yamamoto has joined #openstack-kuryr | 09:02 | |
*** tfukushima has joined #openstack-kuryr | 09:05 | |
*** tfukushima has quit IRC | 09:28 | |
*** tfukushima has joined #openstack-kuryr | 09:28 | |
*** salv-orl_ has joined #openstack-kuryr | 10:04 | |
*** salv-orlando has quit IRC | 10:07 | |
*** apuimedo has joined #openstack-kuryr | 11:28 | |
*** tfukushima has quit IRC | 11:30 | |
*** yamamoto has quit IRC | 12:01 | |
*** tfukushima has joined #openstack-kuryr | 12:25 | |
*** tfukushima has quit IRC | 12:34 | |
*** yamamoto has joined #openstack-kuryr | 12:38 | |
*** yamamoto has quit IRC | 12:45 | |
*** yamamoto has joined #openstack-kuryr | 12:45 | |
*** yamamoto has quit IRC | 12:50 | |
*** yamamoto has joined #openstack-kuryr | 12:54 | |
*** yamamoto has quit IRC | 12:56 | |
*** yamamoto has joined #openstack-kuryr | 12:57 | |
*** yamamoto has quit IRC | 13:02 | |
*** yamamoto has joined #openstack-kuryr | 13:13 | |
apuimedo | gsagie: ping | 13:14 |
gsagie | apuimedo: pong | 13:18 |
irenab | gsagie: apuimedo : did you run devstack recently? | 13:37 |
apuimedo | huats_ did | 13:37 |
irenab | I get issues when running stack.sh | 13:37 |
huats_ | I did indeed | 13:38 |
irenab | ubuntu? | 13:38 |
apuimedo | gsagie: google slides pptx importer is a piece of trash | 13:41 |
apuimedo | it loses pictures and stuff. I'll convert it to odp first, see if it helps | 13:41 |
gsagie | k | 13:41 |
irenab | apuimedo: you just insist to use all the tools on linux :-) | 13:43 |
apuimedo | irenab: I'm a freeman! | 13:44 |
irenab | huats_: did it run smoothly or you had to fix issues? | 13:44 |
gsagie | apuimedo: i think if you upload ppt to google drive it converts it | 13:46 |
huats_ | irenab: I did on Ubuntu yep | 13:47 |
huats_ | I had one issue, not related to Devstack but kuryr... But i fixed it and it is on the repo :) | 13:47 |
irenab | huats_: thanks, trying to rerun the stack.sh | 13:49 |
*** oanson has quit IRC | 13:55 | |
apuimedo | gsagie: it does, and badly :P | 13:55 |
apuimedo | huats_: irenab: can you do an fpaste of your local.conf for reference | 13:56 |
*** tfukushima has joined #openstack-kuryr | 13:56 | |
irenab | http://pastebin.com/qrxFscVG | 13:57 |
*** tfukushima has quit IRC | 13:57 | |
irenab | but the issue I have is related to running docker: stop: Unknown instance: | 13:58 |
irenab | 2016-04-18 13:54:49.105 | Error on exit | 13:58 |
apuimedo | which ubuntu is this? | 13:58 |
irenab | Ubuntu 14.04.4 LTS | 14:01 |
huats_ | I used the same | 14:02 |
huats_ | irenab: why do you rerun the stack.sh | 14:02 |
irenab | I think maybe I need to remove docker before rerunning the stack.sh | 14:02 |
huats_ | usually it is not a good idea IIRC | 14:02 |
huats_ | I think it is the case | 14:03 |
huats_ | (that you should remove docker) | 14:03 |
irenab | trying it, thanks | 14:04 |
huats_ | I'll have a look too | 14:05 |
huats_ | (but I have to say that I am clearly late to work on my talk for next week :)) | 14:05 |
huats_ | (so I am trying to focus :)) | 14:05 |
apuimedo | huats_: link to the talk? | 14:06 |
huats_ | https://www.openstack.org/summit/austin-2016/summit-schedule/events/8307?goback=1 | 14:11 |
huats_ | apuimedo: not related to kuryr I agree :) | 14:11 |
apuimedo | charging is always nice :-) | 14:11 |
huats_ | :) | 14:12 |
huats_ | irenab: I'll have a look if we need to clear docker in the devstack and I'll patch it if needed... | 14:12 |
irenab | huats_: I will let you know if it fixed the problem, it takes ages to deploy … | 14:13 |
huats_ | great ! | 14:16 |
irenab | hmm, got this error | 14:22 |
irenab | dpkg: error processing package docker-engine (--configure): | 14:22 |
irenab | 2016-04-18 14:17:25.044 | subprocess installed post-installation script returned error exit status 1 | 14:22 |
irenab | 2016-04-18 14:17:25.044 | Processing triggers for libc-bin (2.19-0ubuntu6.7) ... | 14:22 |
irenab | 2016-04-18 14:17:25.175 | Errors were encountered while processing: | 14:22 |
irenab | 2016-04-18 14:17:25.175 | docker-engine | 14:22 |
irenab | 2016-04-18 14:17:25.803 | E: Sub-process /usr/bin/dpkg returned an error code (1) | 14:22 |
huats_ | ok | 14:25 |
huats_ | can you do | 14:25 |
huats_ | dpkg --remove --purge docker-engine | 14:25 |
huats_ | and then stack.sh again ? | 14:25 |
huats_ | irenab: ^ | 14:26 |
irenab | dpkg: error: conflicting actions -P (--purge) and -r (--remove) | 14:26 |
huats_ | sorry :) | 14:27 |
huats_ | just use purge | 14:27 |
irenab | running | 14:28 |
irenab | it requests unstack first, so will take time ... | 14:28 |
*** tfukushima has joined #openstack-kuryr | 14:30 | |
huats_ | ok | 14:33 |
huats_ | take your time :) | 14:33 |
irenab | I wish it could be faster … | 14:41 |
irenab | huats_: well, back to where it started | 14:44 |
irenab | dpkg: error processing package docker-engine (--configure): | 14:44 |
irenab | 2016-04-18 14:43:01.848 | subprocess installed post-installation script returned error exit status 1 | 14:44 |
irenab | 2016-04-18 14:43:01.848 | Processing triggers for ureadahead (0.100.0-16) ... | 14:44 |
irenab | 2016-04-18 14:43:03.395 | Errors were encountered while processing: | 14:44 |
huats_ | :( | 14:44 |
irenab | 2016-04-18 14:43:03.395 | docker-engine | 14:44 |
irenab | 2016-04-18 14:43:04.149 | E: Sub-process /usr/bin/dpkg returned an error code (1) | 14:44 |
irenab | 2016-04-18 14:43:04.158 | Error on exit | 14:44 |
huats_ | hum | 14:44 |
huats_ | let me check something (I have started a devstack too in the same time) | 14:44 |
irenab | huats_: need to go for a while, will sync later | 14:46 |
huats_ | ok | 14:47 |
openstackgerrit | Mohammad Banikazemi proposed openstack/kuryr: Makes use of Neutron tags option https://review.openstack.org/306627 | 14:55 |
*** oanson has joined #openstack-kuryr | 15:04 | |
*** oanson has quit IRC | 15:05 | |
*** tfukushima has quit IRC | 15:11 | |
*** asti has joined #openstack-kuryr | 15:13 | |
*** salv-orlando has joined #openstack-kuryr | 16:04 | |
*** salv-orl_ has quit IRC | 16:06 | |
apuimedo | gsagie: new slides sent | 16:20 |
apuimedo | gotta go pack the luggage | 16:20 |
irenab | huats_: I got new clean VM and devstack completed fine | 16:38 |
huats_ | ok | 16:38 |
huats_ | ! | 16:38 |
huats_ | great | 16:38 |
irenab | seems Jaume had similar issue: https://review.openstack.org/#/c/290702/ | 16:41 |
*** banix has joined #openstack-kuryr | 17:03 | |
apuimedo | ;-) | 18:00 |
apuimedo | I think I'm going to hijack jaume's patch to get it in :P | 18:00 |
*** salv-orlando has quit IRC | 18:13 | |
*** salv-orlando has joined #openstack-kuryr | 18:13 | |
*** salv-orlando has quit IRC | 18:18 | |
*** thingee has joined #openstack-kuryr | 18:22 | |
thingee | gsagie: Kuryr's summit schedule shows TBD for a fishbowl session. Can we have that updated today please? | 18:22 |
*** salv-orlando has joined #openstack-kuryr | 18:24 | |
openstackgerrit | Kyle Mestery proposed openstack/kuryr: README.md: Cleanup documentation https://review.openstack.org/307420 | 19:06 |
openstackgerrit | Kyle Mestery proposed openstack/kuryr: README.md: Cleanup documentation https://review.openstack.org/307420 | 19:15 |
*** yamamoto has quit IRC | 19:39 | |
*** salv-orlando has quit IRC | 19:47 | |
*** yamamoto has joined #openstack-kuryr | 19:54 | |
*** salv-orlando has joined #openstack-kuryr | 20:33 | |
mestery | I'm seeing issues with Kuryr using Vagrant: http://paste.openstack.org/show/494512/ | 20:35 |
mestery | gsagie banix apuimedo: Any ideas on what's happening? ^^^ | 20:36 |
banix | mestery: hmmmm not sure what’s the problem | 20:39 |
mestery | banix: Bummer. Looks like it's not loading the kuryr driver in docker perhaps? | 20:40 |
banix | mestery: yes looks like the error happens when trying to load the plugin | 20:49 |
banix | is the kuryr.json there? | 20:52 |
banix | something like this: $ cat /usr/lib/docker/plugins/kuryr/kuryr.json | 20:52 |
banix | { | 20:52 |
banix | "Name": "kuryr", | 20:52 |
banix | "Addr": "http://127.0.0.1:2377" | 20:52 |
banix | } | 20:52 |
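[editor's note: banix's paste above is the libnetwork spec file through which docker discovers the kuryr remote driver. A minimal sketch of writing and reading such a spec, assuming only what the log shows — the name, the address, and the path /usr/lib/docker/plugins/kuryr/kuryr.json; the temp directory in the test stands in for that real location:]

```python
import json
import os

# The spec banix pastes above: docker scans its plugin directories for
# JSON files mapping a plugin name to the HTTP endpoint it listens on.
SPEC = {
    "Name": "kuryr",
    "Addr": "http://127.0.0.1:2377",
}

def write_spec(directory):
    """Write kuryr.json into the given plugin directory, return its path."""
    path = os.path.join(directory, "kuryr.json")
    with open(path, "w") as f:
        json.dump(SPEC, f)
    return path

def read_spec(path):
    """Load the spec back, as docker would when scanning its plugin dirs."""
    with open(path) as f:
        return json.load(f)
```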
* mestery looks | 20:54 | |
mestery | Yup | 20:55 |
mestery | Has the same thing as you have | 20:55 |
mestery | And my kuryr is running | 20:55 |
mestery | vagrant@devstack:~$ ps axuw|grep kuryr | 20:55 |
mestery | root 9564 0.0 0.0 62096 2028 pts/7 S+ 20:27 0:00 sudo PYTHONPATH=:/opt/stack/kuryr SERVICE_USER=admin SERVICE_PASSWORD=pass SERVICE_TENANT_NAME=admin SERVICE_TOKEN=pass IDENTITY_URL=http://127.0.0.1:5000/v2.0 python /opt/stack/kuryr/scripts/run_server.py --config-file /etc/kuryr/kuryr.conf | 20:55 |
mestery | root 9565 0.0 0.9 99336 38240 pts/7 S+ 20:27 0:01 python /opt/stack/kuryr/scripts/run_server.py --config-file /etc/kuryr/kuryr.conf | 20:55 |
mestery | vagrant 17882 0.0 0.0 10464 920 pts/24 S+ 20:55 0:00 grep kuryr | 20:55 |
mestery | vagrant@devstack:~$ | 20:55 |
*** fawadkhaliq has joined #openstack-kuryr | 20:57 | |
apuimedo | mestery: ? | 20:58 |
mestery | apuimedo: Kuryr no worky for me | 20:58 |
apuimedo | latest devstack? | 20:58 |
apuimedo | docker 1.10? | 20:58 |
apuimedo | ovs/ovn? | 20:58 |
mestery | Yes | 20:59 |
mestery | Latest of everything | 20:59 |
apuimedo | mmm.... Irena got it working today and huats_ did use it last week | 21:00 |
apuimedo | not with vagrant though | 21:00 |
mestery | apuimedo: It's an odd error, did you see it? Almost like Docker can't talk to Kuryr | 21:01 |
apuimedo | mestery: may it be that docker started before kuryr? | 21:01 |
mestery | apuimedo: Let me check | 21:01 |
mestery | apuimedo: So, in the screen session, the docker tab is indeed before the kuryr tab | 21:01 |
mestery | But Vagrant just runs a normal devstack | 21:02 |
mestery | So unsure why this would be different | 21:02 |
apuimedo | mestery: try stopping docker and starting it again | 21:02 |
* mestery buckles in and tries | 21:02 | |
apuimedo | probably the registration got botched | 21:02 |
mestery | apuimedo: same | 21:02 |
apuimedo | (╯° °)╯彡┻━┻ | 21:03 |
apuimedo | can you telnet to 127.0.0.1 2377 ? | 21:03 |
* mestery tries | 21:04 | |
mestery | Yup | 21:04 |
mestery | works like a champ | 21:04 |
apuimedo | ok, so at least kuryr is listening | 21:04 |
mestery | Yup | 21:04 |
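[editor's note: the telnet check above can be done programmatically; a small sketch performing the same TCP connect test against kuryr's 2377 endpoint — the function name is mine:]

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connect to host:port succeeds, i.e. something
    is accepting connections there (the telnet check from the log)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

`is_listening("127.0.0.1", 2377)` mirrors what mestery just verified by hand.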
apuimedo | mestery: the error is the same then? Still trying to reach 2375? instead of 2377? | 21:04 |
mestery | Yeah | 21:05 |
apuimedo | mmm | 21:06 |
apuimedo | mestery: can you try changing the docker engine to listen on the unix socket? | 21:06 |
mestery | apuimedo: OK, let me try that | 21:07 |
banix | have to run but will be on line later tonight | 21:07 |
*** banix has quit IRC | 21:07 | |
apuimedo | fawadkhaliq: I'll update you on the slides tomorrow night, gotta travel | 21:08 |
apuimedo | sorry about the delay :( | 21:08 |
fawadkhaliq | apuimedo: thanks thanks! | 21:08 |
apuimedo | mestery: I assume you have that running on a vagrant on your laptop, right? | 21:09 |
mestery | apuimedo: Yes | 21:09 |
mestery | and with a change to local socket I see this: | 21:09 |
mestery | vagrant@devstack:~$ sudo docker -H unix:///var/run/docker.sock network create -d kuryr foo | 21:09 |
mestery | An error occurred trying to connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.23/networks/create: EOF | 21:09 |
mestery | vagrant@devstack:~$ | 21:09 |
apuimedo | mestery: do the other docker commands work? | 21:10 |
mestery | apuimedo: Ack | 21:10 |
mestery | They did when it was listening on the socket as well | 21:10 |
apuimedo | mestery: very well | 21:11 |
mestery | Heh :) | 21:11 |
apuimedo | or very wrong :P | 21:11 |
mestery | Yeah, that one! :) | 21:11 |
apuimedo | mestery: exact docker version? | 21:11 |
mestery | vagrant@devstack:~$ docker --version | 21:11 |
mestery | Docker version 1.11.0, build 4dc5990 | 21:11 |
mestery | vagrant@devstack:~$ | 21:11 |
apuimedo | 1.11... That looks scary... Knowing the lack of regard these folks have for bc | 21:12 |
mestery | apuimedo; lol | 21:12 |
mestery | Well, that's what devstack pulls down | 21:13 |
mestery | And yeah, my guess is now that docker has screwed us again | 21:13 |
mestery | Because: Backwards compatibility! Why does it matter? | 21:13 |
apuimedo | but I think Irena got the devstack working today | 21:13 |
apuimedo | let's do a quick check | 21:14 |
mestery | Thanks apuimedo :) | 21:14 |
apuimedo | mestery: ubuntu, right? | 21:14 |
mestery | apuimedo: Ack | 21:14 |
apuimedo | mestery: trusty? | 21:14 |
mestery | ack | 21:15 |
apuimedo | https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_1.10.3-0~trusty_amd64.deb | 21:15 |
apuimedo | try this one | 21:15 |
apuimedo | if it works. I'll start a jihad | 21:15 |
* mestery downloads | 21:16 | |
mestery | That did it | 21:18 |
mestery | It worked | 21:18 |
mestery | So | 21:18 |
mestery | docker people screwed us :( | 21:18 |
mestery | Meh | 21:18 |
mestery | They do this all the time! | 21:18 |
mestery | Every 3 months I try to use kuryr, docker has broken backwards compatibility again | 21:18 |
apuimedo | mestery: you are the harbinger of bad bc breakages | 21:19 |
mestery | apuimedo: Indeed I am | 21:19 |
* mestery shakes his fist at docker ... again | 21:19 | |
apuimedo | I really can't believe the sloppiness with bc | 21:20 |
apuimedo | I should have worked on nova. This doesn't happen with libvirt | 21:21 |
mestery | rofl | 21:21 |
mestery | I know right? | 21:21 |
mestery | I mean | 21:21 |
mestery | Man | 21:21 |
mestery | I don't think Docker really cares about anything they make pluggable | 21:21 |
apuimedo | mmm... I'll fire up some big VM to try to debug that | 21:21 |
mestery | Thanks apuimedo :) | 21:21 |
apuimedo | docker has problems of care | 21:22 |
mestery | apuimedo: +1000 | 21:22 |
apuimedo | I didn't find any change in the release notes that would signify such breakage | 21:22 |
mestery | Of course you didn't :) | 21:22 |
apuimedo | mestery: checking anything specific? | 21:25 |
mestery | apuimedo: MEaning? | 21:25 |
apuimedo | if you are running kuryr with some specific goal | 21:25 |
mestery | Ah, well, for a demo, so yeah :) | 21:25 |
apuimedo | what will you show? | 21:26 |
mestery | apuimedo: Really basic stuff, it's for an IBM sponsored talk in Austin next week that Phil Estes and I are giving | 21:30 |
apuimedo | ah, cool | 21:31 |
apuimedo | mestery: since I'm starting devstack, maybe I'll test and get merged the container based kuryr | 21:31 |
mestery | apuimedo: Coolio! | 21:32 |
apuimedo | I already have it for the kubernetes integration | 21:32 |
apuimedo | automated builds of the api watcher | 21:32 |
mestery | Cool | 21:33 |
apuimedo | the problem with libnetwork is the stupidity of docker needing to be restarted when you add plugins | 21:33 |
mestery | lol | 21:33 |
apuimedo | and that you can't have a plugin register itself by calling the docker API | 21:33 |
apuimedo | which makes docker based plugins needlessly painful to deploy | 21:34 |
apuimedo | one would imagine they want plugins to be container based | 21:35 |
mestery | apuimedo: Well, one could imagine they really don't even want plugins | 21:35 |
mestery | And that plugins were just to get them some mindshare | 21:35 |
mestery | Because they are building everything themselves :) | 21:35 |
apuimedo | mestery: indeed | 21:35 |
apuimedo | socketplane... | 21:35 |
mestery | Right | 21:36 |
apuimedo | it's quite a difference with how the kubernetes network group works | 21:36 |
mestery | Yeah | 21:36 |
*** asti has quit IRC | 21:49 | |
salv-orlando | let's ditch libnetwork and do CNI!!! Hasta la revoluccion :) | 21:52 |
apuimedo | salv-orlando: you are tempting me :P | 21:54 |
apuimedo | salv-orlando: have you tried standalone rkt cni? | 21:55 |
salv-orlando | apuimedo: not yet, but I did not want to bring my revolution as far as ditching docker as well ;) | 21:55 |
apuimedo | salv-orlando: well, with docker it is very cumbersome to use cni | 21:56 |
salv-orlando | on the other hand I also know little about CNI behaviour wrt backwards compatibility | 21:56 |
apuimedo | you basically have to create a docker container, then use CNI on it | 21:56 |
apuimedo | then you can start the real container that does things | 21:56 |
apuimedo | it really sucks | 21:57 |
apuimedo | and I wish kubernetes didn't have to do that | 21:57 |
apuimedo | (and hope in rkt it is not the case, but I haven't checked, not to be disappointed) | 21:57 |
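[editor's note: the docker-plus-CNI dance apuimedo describes can be sketched as a command plan. Everything here is illustrative — the image names, the netns path, and the plugin invocation are hypothetical; real CNI plugins take their configuration via environment variables and stdin, not positional arguments:]

```python
def cni_attach_plan(infra_image, app_image, netns_path, cni_plugin):
    """Return the command sequence for the workaround described above:
    create a placeholder container first, wire up its network namespace
    with a CNI plugin, then start the real workload sharing that network."""
    return [
        # 1. placeholder container that only holds the network namespace
        ["docker", "run", "-d", "--net=none", "--name", "infra", infra_image],
        # 2. invoke the CNI plugin against the placeholder's netns
        #    (schematic; a real plugin reads CNI_COMMAND etc. from its env)
        [cni_plugin, "ADD", netns_path],
        # 3. the container that does the real work, joining that network
        ["docker", "run", "-d", "--net=container:infra", app_image],
    ]
```

The three-step shape is exactly why apuimedo calls it cumbersome: one extra container and one out-of-band plugin call per workload.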
salv-orlando | indeed. Considering that CNI is native in rkt I expect a seamless experience | 21:58 |
salv-orlando | nevertheless there's no revolution as it's not up to kuryr to decide whether docker should run CNI or CNM | 21:58 |
apuimedo | salv-orlando: :'( | 21:58 |
salv-orlando | so, I guess the only possible thing to do is... | 21:58 |
salv-orlando | suck it up! | 21:59 |
apuimedo | sometimes, especially when docker bc breaks | 21:59 |
apuimedo | I wish everybody would just kick docker out and take over runc | 21:59 |
apuimedo | and do sane things | 21:59 |
*** fawadkhaliq has quit IRC | 22:00 | |
mestery | Yeah | 22:00 |
mestery | no more docker! | 22:00 |
apuimedo | give karma time | 22:02 |
salv-orlando | let's go back to UML | 22:02 |
apuimedo | salv-orlando: you are risking a banning from this channel | 22:02 |
apuimedo | UML and design patterns are haram | 22:02 |
apuimedo | containers based on ubuntu images when they don't need a full system is also | 22:03 |
apuimedo | I should compile my religion tenets | 22:03 |
salv-orlando | apuimedo: but I meant user mode linux!!! | 22:03 |
salv-orlando | the grandfather of all container technology | 22:04 |
apuimedo | salv-orlando: ok then. You almost worried me about the kuryr architecture session | 22:04 |
salv-orlando | well that's chroot actually | 22:04 |
apuimedo | if I see uml in a code review or working session I may get an aneurysm | 22:04 |
apuimedo | salv-orlando: have you taken a look at lxd? | 22:05 |
salv-orlando | apuimedo: yes, when it was announced | 22:05 |
salv-orlando | I was not impressed back then, but I have not followed developments | 22:05 |
apuimedo | well, apart from the decision to base it on zfs, which is against my religion too | 22:06 |
apuimedo | it feels like it could have been done in libvirt | 22:06 |
apuimedo | salv-orlando: how's it going with the policy? | 22:07 |
apuimedo | so far I'm thinking just making policy object -> security group | 22:07 |
apuimedo | and then add/remove SGs from the ports | 22:07 |
salv-orlando | apuimedo: today a blog post on the API for policy has been published here: http://blog.kubernetes.io/2016/04/Kubernetes-Network-Policy-APIs.html | 22:08 |
salv-orlando | I'm working on adding some reliable caches to my watcher | 22:08 |
apuimedo | salv-orlando: for which objects? | 22:08 |
salv-orlando | namespaces, policies and pods | 22:09 |
apuimedo | I made a very naive cache for the reconnects only | 22:09 |
apuimedo | each watcher gets a lru cache | 22:09 |
salv-orlando | my cache is probably even more naive than that | 22:09 |
salv-orlando | it's a dict ;) | 22:09 |
apuimedo | (in raven each endpoint gets a watcher) | 22:09 |
apuimedo | I inherit from ordereddict to do the eviction | 22:09 |
salv-orlando | anyway at some point I will consider whether use Raven too for direct OVN integration | 22:09 |
apuimedo | salv-orlando: lazy minds think alike | 22:09 |
apuimedo | salv-orlando: there is one design decision I have to take this week | 22:10 |
apuimedo | that I'm not sure about yet | 22:10 |
salv-orlando | apuimedo: as for your thinking yes a k8s has been modeled in a way that's very similar to the security group | 22:10 |
salv-orlando | with the different in the "from" bits | 22:10 |
salv-orlando | that in openstack is either a cidr or another security group | 22:10 |
salv-orlando | while in kubernetes is a pod selector | 22:11 |
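[editor's note: the mapping discussed above, sketched as a hedged example — one NetworkPolicy ingress clause becomes a set of Neutron security-group-rule bodies, but only after the pod-selector "from" has been resolved to concrete pod IPs, the step salv-orlando flags as painful since a selector rarely maps to a single CIDR. The helper is hypothetical; the field names follow the Neutron security-group-rule API, and TCP is assumed for simplicity:]

```python
def policy_to_sg_rules(policy_name, ports, peer_pod_ips):
    """Translate one k8s NetworkPolicy ingress clause into Neutron
    security-group-rule request bodies. peer_pod_ips is the already
    resolved pod-selector "from" — the resolution itself is the hard
    part, since pod IPs are scattered rather than one prefix."""
    rules = []
    for port in ports:
        for ip in peer_pod_ips:
            rules.append({
                "direction": "ingress",
                "protocol": "tcp",          # assumption: TCP-only sketch
                "port_range_min": port,
                "port_range_max": port,
                "remote_ip_prefix": "%s/32" % ip,
            })
    return rules
```

One /32 rule per peer pod is the blunt consequence of mapping a selector onto an IP-based model.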
apuimedo | I'm thinking on having the namespace watcher spawn watchers for the other entities for each new namespace | 22:11 |
salv-orlando | that is quite unlikely to map to a cidr | 22:11 |
apuimedo | and cancel the tasks when the namespace is deleted | 22:11 |
apuimedo | instead of having a watcher per resource | 22:11 |
apuimedo | that takes care of all the namespaces | 22:11 |
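[editor's note: the design apuimedo floats — a namespace watcher spawning per-resource watcher tasks for each new namespace and cancelling them on deletion — could look like this asyncio sketch; the class, the method names, and the stand-in watch body are all mine:]

```python
import asyncio

class NamespaceWatchers:
    """One watcher task per (namespace, resource), spawned when the
    namespace appears and cancelled when it goes away."""

    RESOURCES = ("pods", "policies")

    def __init__(self):
        self.tasks = {}   # namespace -> [Task, ...]

    async def watch(self, namespace, resource):
        # Stand-in for a long-lived watch on the k8s API for this
        # namespace/resource pair.
        while True:
            await asyncio.sleep(3600)

    def on_namespace_added(self, namespace):
        self.tasks[namespace] = [
            asyncio.ensure_future(self.watch(namespace, r))
            for r in self.RESOURCES
        ]

    def on_namespace_deleted(self, namespace):
        for task in self.tasks.pop(namespace, []):
            task.cancel()
```

Keeping watchers small and independent like this is what would later allow them to be scheduled across processes, which is the scale-out idea apuimedo mentions.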
salv-orlando | apuimedo: I've done something similar, perhaps. As the policies are only namespaced, every time a namespace event occurs a policy watcher is started | 22:12 |
apuimedo | salv-orlando: that "from" part sucks | 22:12 |
apuimedo | salv-orlando: sounds similar | 22:12 |
salv-orlando | apuimedo: It makes sense from the k8s consumer point of view though. | 22:12 |
apuimedo | my idea was that if I do it with multiple resource watchers, I can potentially have them scale out | 22:12 |
salv-orlando | Mapping into Neutron, OVN, or whatever uses IP addresses is a bit of a pain | 22:12 |
apuimedo | in multiple processes/machines | 22:12 |
apuimedo | salv-orlando: well. When you make something easily usable for people that don't know what they are doing, you have to pay the price in hair loss | 22:13 |
salv-orlando | apuimedo: that is an idea. From my perspective however in order to think about "scaling out" I need first to understand what the scale bottlenecks are | 22:13 |
apuimedo | I now pull my beard, it is more resilient | 22:13 |
salv-orlando | in my experience reality is often different from modeling ;) | 22:14 |
apuimedo | salv-orlando: of course, not going to write any multi process anytime soon | 22:14 |
apuimedo | but having watchers small and independent sounds nice in my head | 22:14 |
salv-orlando | apuimedo: for instance, just as an example, we do not know what kind of load a watcher per resource will ultimately put on etcd | 22:15 |
apuimedo | I hear the echo | 22:15 |
salv-orlando | you might scale out raven but kill etcd ;) | 22:15 |
apuimedo | salv-orlando: the API, not etcd | 22:15 |
apuimedo | right? | 22:15 |
apuimedo | which is worse, because etcd you could just put a ton of them | 22:15 |
apuimedo | I'm not sure it is as easy, putting a lot of api endpoints | 22:16 |
apuimedo | salv-orlando: I'm probably repeating myself but... What would be so wrong if the extensibility was built as a scheduler plugin? | 22:17 |
apuimedo | it would be executed right in the api controllers | 22:17 |
apuimedo | less moving pieces | 22:18 |
apuimedo | no back and forth from cni -> api | 22:18 |
apuimedo | and if you namespaced the attributes they can add, you could run multiple of them at the same time | 22:18 |
apuimedo | I'm probably missing something very evident, but I have not found it yet | 22:19 |
apuimedo | mestery: ok, devstack just finished | 22:34 |
apuimedo | and yeah, it reproduced nicely | 22:34 |
mestery | good! | 22:40 |
apuimedo | mestery: it looks like a big libnetwork bug. I suspect they may have broken the non-socket plugin usage | 22:57 |
apuimedo | *non-unix-socket | 22:58 |
*** asti has joined #openstack-kuryr | 23:34 | |
*** salv-orlando has quit IRC | 23:35 | |
mestery | apuimedo: Bummer! | 23:54 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!