*** rcernin_ has joined #openstack-lbaas | 00:04 | |
*** rcernin has quit IRC | 00:04 | |
*** fnaval has quit IRC | 00:43 | |
*** hongbin has joined #openstack-lbaas | 00:44 | |
*** hongbin_ has joined #openstack-lbaas | 00:56 | |
*** longkb has joined #openstack-lbaas | 00:56 | |
*** JudeC has quit IRC | 00:58 | |
*** hongbin has quit IRC | 00:58 | |
*** sapd has joined #openstack-lbaas | 01:39 | |
*** hongbin_ has quit IRC | 01:45 | |
*** hongbin has joined #openstack-lbaas | 01:45 | |
*** ramishra has joined #openstack-lbaas | 03:21 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Cleanup Octavia create VIP ports on LB delete https://review.openstack.org/581168 | 03:39 |
*** hongbin has quit IRC | 04:08 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False https://review.openstack.org/582929 | 04:17 |
*** JudeC has joined #openstack-lbaas | 05:18 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: Add amphora_flavor field for amphora api https://review.openstack.org/582914 | 05:25 |
*** rcernin_ has quit IRC | 05:25 | |
*** rcernin has joined #openstack-lbaas | 05:26 | |
*** links has joined #openstack-lbaas | 05:33 | |
openstackgerrit | Yang JianFeng proposed openstack/octavia master: Add compute_flavor field for amphora api https://review.openstack.org/582914 | 05:49 |
*** yboaron has joined #openstack-lbaas | 06:06 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False https://review.openstack.org/582929 | 06:14 |
*** phuoc_ has joined #openstack-lbaas | 06:26 | |
*** phuoc has quit IRC | 06:29 | |
*** kobis has joined #openstack-lbaas | 06:30 | |
*** kobis has quit IRC | 06:35 | |
*** yboaron has quit IRC | 06:35 | |
*** velizarx has joined #openstack-lbaas | 06:46 | |
*** tesseract has joined #openstack-lbaas | 06:48 | |
*** ispp has joined #openstack-lbaas | 06:59 | |
*** yamamoto has joined #openstack-lbaas | 07:02 | |
*** velizarx has quit IRC | 07:03 | |
*** JudeC has quit IRC | 07:04 | |
*** peereb has joined #openstack-lbaas | 07:04 | |
*** rcernin has quit IRC | 07:11 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:22 | |
*** velizarx has joined #openstack-lbaas | 07:23 | |
*** ispp has quit IRC | 07:28 | |
*** yboaron has joined #openstack-lbaas | 07:34 | |
*** longkb has quit IRC | 07:39 | |
*** longkb has joined #openstack-lbaas | 07:41 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia master: Correct naming for quota resources https://review.openstack.org/559672 | 07:59 |
*** yboaron_ has joined #openstack-lbaas | 08:03 | |
*** yboaron has quit IRC | 08:06 | |
*** PagliaccisCloud has quit IRC | 08:07 | |
*** PagliaccisCloud has joined #openstack-lbaas | 08:09 | |
*** ispp has joined #openstack-lbaas | 08:15 | |
*** kobis has joined #openstack-lbaas | 08:21 | |
cgoncalves | https://review.openstack.org/#/c/583068/ should fix our gate. I rebased the quota renaming patch ^ on top of that for testing | 08:22 |
cgoncalves | nmagnezi, ^ | 08:22 |
*** yboaron has joined #openstack-lbaas | 08:31 | |
*** yboaron_ has quit IRC | 08:33 | |
*** ktibi has joined #openstack-lbaas | 09:01 | |
*** ispp has quit IRC | 09:02 | |
*** annp has quit IRC | 09:18 | |
*** annp has joined #openstack-lbaas | 09:26 | |
*** salmankhan has joined #openstack-lbaas | 09:30 | |
*** yamamoto has quit IRC | 09:37 | |
bzhao__ | cgoncalves: Nice. :) | 10:02 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Treat null admin_state_up as False https://review.openstack.org/582929 | 10:03 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP jinja template https://review.openstack.org/525420 | 10:03 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [2] https://review.openstack.org/529651 | 10:04 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [3][5][6] https://review.openstack.org/539391 | 10:05 |
*** longkb has quit IRC | 10:30 | |
*** velizarx has quit IRC | 10:54 | |
*** ispp has joined #openstack-lbaas | 10:55 | |
*** velizarx has joined #openstack-lbaas | 10:58 | |
*** rabel has joined #openstack-lbaas | 11:02 | |
rabel | hi there. with lbaasv2: does the vip port have to be on the same network node as the loadbalancer itself? | 11:03 |
rabel | we're experiencing problems with one of our lbaas-loadbalancers and i just saw that the vip port is not on the same network node as the loadbalancer | 11:03 |
*** sapd has quit IRC | 11:03 | |
openstackgerrit | Carlos Goncalves proposed openstack/neutron-lbaas master: Update new documentation PTI jobs https://review.openstack.org/530314 | 11:08 |
*** atoth has joined #openstack-lbaas | 11:14 | |
*** sapd has joined #openstack-lbaas | 11:20 | |
nmagnezi | cgoncalves, o/ | 11:48 |
nmagnezi | cgoncalves, the errors here are also because of the same diskimage-builder issue? https://review.openstack.org/#/c/580724/ | 11:49 |
cgoncalves | nmagnezi, yes: http://logs.openstack.org/24/580724/3/gate/octavia-v1-dsvm-py3x-scenario/145d0e3/logs/devstacklog.txt.gz#_2018-07-16_13_20_39_909 | 11:53 |
*** amuller has joined #openstack-lbaas | 11:54 | |
*** ispp has quit IRC | 12:22 | |
*** yboaron_ has joined #openstack-lbaas | 12:30 | |
*** yboaron has quit IRC | 12:33 | |
*** peereb has quit IRC | 12:56 | |
*** salmankhan has quit IRC | 13:09 | |
*** ispp has joined #openstack-lbaas | 13:17 | |
*** velizarx has quit IRC | 13:35 | |
*** velizarx has joined #openstack-lbaas | 13:35 | |
*** salmankhan has joined #openstack-lbaas | 13:36 | |
*** fnaval has joined #openstack-lbaas | 13:47 | |
*** ispp has quit IRC | 13:48 | |
*** ispp has joined #openstack-lbaas | 13:50 | |
*** ispp has quit IRC | 13:52 | |
*** ispp has joined #openstack-lbaas | 14:01 | |
*** ispp has quit IRC | 14:02 | |
*** yboaron has joined #openstack-lbaas | 14:11 | |
*** kobis has quit IRC | 14:11 | |
*** yboaron_ has quit IRC | 14:13 | |
*** velizarx has quit IRC | 14:27 | |
*** ispp has joined #openstack-lbaas | 14:38 | |
*** ispp has quit IRC | 14:39 | |
openstackgerrit | huangshan proposed openstack/python-octaviaclient master: Support backup members https://review.openstack.org/576530 | 14:39 |
openstackgerrit | huangshan proposed openstack/python-octaviaclient master: Support backup members https://review.openstack.org/576530 | 14:45 |
*** ispp has joined #openstack-lbaas | 15:03 | |
*** JudeC has joined #openstack-lbaas | 15:07 | |
*** JudeC has quit IRC | 15:21 | |
*** ftersin has joined #openstack-lbaas | 15:23 | |
*** tesseract has quit IRC | 15:57 | |
rabel | why are there two haproxy processes per loadbalancer on lbaasv2? | 16:09 |
johnsom | rabel You mean neutron-lbaas? | 16:10 |
johnsom | rabel Which provider driver are you using? | 16:10 |
rabel | johnsom: yes. provider driver is haproxy | 16:10 |
johnsom | So the old namespace driver. Not sure why there would be two as it doesn't support high availability. It could be that one is the parent controlling process and the second is the actual load balancing process. | 16:11 |
johnsom | Maybe they have them configured for nbproc 2, which they can do as they don't do HA. | 16:12 |
rabel | johnsom: the reason why i'm asking: while investigating a broken lbaas loadbalancer, i found that the vip port is on another network node than the loadbalancer itself. in this case there is one haproxy process on each network node. i think something went wrong there, but don't know if this could also work somehow | 16:14 |
johnsom | I'm not very familiar with that driver, but I would expect the VIP port to be bound to the network node that is running the haproxy process for the VIP. | 16:15 |
rabel | so the port should be on the same node as the lbaas-agent from neutron lbaas-agent-hosting-loadbalancer? | 16:17 |
rabel | i can't imagine how it should work otherwise. but it's also a little strange that there are two haproxy services running on two different network nodes for the same loadbalancer. | 16:18 |
*** harlowja has joined #openstack-lbaas | 16:18 | |
johnsom | Yeah, I don't think that is how that driver normally works. It should be one LB tied to one agent running one haproxy | 16:19 |
rabel | ok, thanks a lot | 16:20 |
*** yboaron has quit IRC | 16:22 | |
*** links has quit IRC | 16:24 | |
*** ktibi has quit IRC | 16:32 | |
*** JudeC has joined #openstack-lbaas | 16:36 | |
johnsom | courtesy reminder: The deadline to submit a Berlin summit presentation is July 18 at 6:59 am UTC (July 17 at 11:59 pm PST) | 16:48 |
*** AlexeyAbashkin has quit IRC | 16:59 | |
*** ispp has quit IRC | 17:06 | |
*** atoth has quit IRC | 17:12 | |
*** ramishra has quit IRC | 17:17 | |
*** salmankhan has quit IRC | 17:22 | |
*** harlowja has quit IRC | 17:54 | |
*** atoth has joined #openstack-lbaas | 18:13 | |
*** ianychoi has quit IRC | 18:14 | |
*** KeithMnemonic has joined #openstack-lbaas | 18:16 | |
*** kberger has quit IRC | 18:18 | |
*** harlowja has joined #openstack-lbaas | 18:32 | |
*** phuoc has joined #openstack-lbaas | 19:04 | |
*** phuoc_ has quit IRC | 19:07 | |
*** kberger has joined #openstack-lbaas | 19:50 | |
*** KeithMnemonic has quit IRC | 19:51 | |
*** amuller has quit IRC | 19:52 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 20:22 | |
openstackgerrit | Nir Magnezi proposed openstack/octavia master: Fixes unlimited listener connection limit https://review.openstack.org/580724 | 20:39 |
apuimedo | johnsom: xgerman_: any of you there? | 20:40 |
johnsom | Hi | 20:40 |
apuimedo | Hi! | 20:40 |
apuimedo | :-) | 20:40 |
apuimedo | quick question | 20:40 |
openstackgerrit | German Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep https://review.openstack.org/549295 | 20:40 |
xgerman_ | Hi | 20:41 |
johnsom | Sure | 20:41 |
apuimedo | if I set event_streamer_driver to queue_event_streamer | 20:42 |
apuimedo | when octavia is used standalone without the neutron adapter | 20:42 |
apuimedo | do we get any side effect | 20:42 |
apuimedo | or we just get the health events in rabbitmq for whoever wants to see them? | 20:42 |
johnsom | Don't do it. Yes, horrible performance and eventual rabbit backlogs that tank rabbit | 20:43 |
apuimedo | johnsom: why is that? | 20:43 |
apuimedo | Because nothing will be consuming them? | 20:43 |
johnsom | This was a slapped in thing to try to keep neutron's database straight if you are using neutron-lbaas with octavia driver. It has two issues mainly: 1. it doesn't TTL out the messages, they just accumulate in rabbit. 2. the code is not efficient in that it checks the old state and then sends the event. This will slow down the health manager greatly. | 20:44 |
johnsom | As soon as neutron-lbaas is EOL that code is going away | 20:44 |
apuimedo | johnsom: ok | 20:45 |
apuimedo | I'll give some context | 20:45 |
johnsom | It was never built to be a generic "event streamer" sadly. | 20:45 |
apuimedo | we have an intern working on a project that takes rabbitmq events and turns them into kubernetes/etcd style HTTP watch events | 20:46 |
johnsom | There is a patch to try to help someone that has to run it here: https://review.openstack.org/#/c/581585/ | 20:46 |
nmagnezi | apuimedo, \o/ | 20:46 |
johnsom | Basically they are feeling that pain | 20:46 |
apuimedo | he started trying it with Neutron | 20:46 |
apuimedo | and he gets the port events just fine | 20:46 |
apuimedo | so now he wanted to add support for octavia | 20:46 |
apuimedo | since kuryr-kubernetes makes extensive use of it | 20:46 |
apuimedo | and polling gets old really fast :P | 20:46 |
johnsom | Yeah, senlin wanted it from us too. It's just no one has done the development to do it right yet. | 20:46 |
apuimedo | johnsom: wouldn't it just be making a better queue_event_streamer ? | 20:47 |
apuimedo | Or is there anything else needed? | 20:47 |
johnsom | Well, I would ignore all of that code and start over with a cleaner solution focused on notification style events and not deltas/syncing neutron. | 20:48 |
apuimedo | one that doesn't care about checking the old status and just emits the events | 20:48 |
xgerman_ | we update the queue every couple of seconds so we should be much more selective with what we ship and how often | 20:48 |
johnsom | It wouldn't be hugely crazy work, just doing a decent design and updating the health manager driver(s) | 20:48 |
apuimedo | right | 20:48 |
apuimedo | it should be just on resource creation | 20:49 |
apuimedo | then on status updates | 20:49 |
cgoncalves | create, update, delete | 20:49 |
johnsom | Hmmm, resource creation... Are you looking for the audit events? | 20:49 |
apuimedo | cgoncalves: that's right | 20:49 |
apuimedo | johnsom: audit events? | 20:49 |
johnsom | API audit events | 20:50 |
johnsom | https://docs.openstack.org/keystonemiddleware/latest/audit.html | 20:50 |
johnsom | It's on my back burner to-do list | 20:50 |
johnsom | I think neutron supports it. we don't yet | 20:50 |
apuimedo | johnsom: that would not tell us about health of the LB | 20:50 |
apuimedo | :-) | 20:50 |
apuimedo | like when the LB is really ready and so on | 20:51 |
xgerman_ | ok, so the event streamer was meant to sync two databases; you want more a pub/sub type thingy | 20:51 |
johnsom | Right, but the create/update/delete stuffs. If that is not what you are looking for, I get it, just asking if that is what you are using on the neutron side. Clarification. So it sounds like you are looking for the CRUD events. | 20:51 |
johnsom | Yeah, so, we don't have what you are looking for today. | 20:52 |
xgerman_ | neutron has an event stream where they have stuff like port create/delete | 20:52 |
apuimedo | johnsom: on the neutron side we just use the port rabbit events that get published | 20:52 |
cgoncalves | CUD, not CRUD ;) | 20:52 |
johnsom | Yeah, I am aware of those | 20:52 |
apuimedo | usually they are port_create_start, port_update_end | 20:52 |
apuimedo | and such | 20:52 |
cgoncalves | xgerman_, correct. nova too | 20:52 |
apuimedo | exactly | 20:52 |
xgerman_ | and we would do lb create, update,… | 20:53 |
apuimedo | since I saw that you have rabbitmq stuff and emit in the health | 20:53 |
apuimedo | I thought maybe the scope was the same :P | 20:53 |
johnsom | Nope, sorry | 20:53 |
johnsom | That is bad hackery that needs rm -rf | 20:53 |
*** erjacobs has joined #openstack-lbaas | 20:53 | |
apuimedo | thanks for all the info johnsom xgerman_ and cgoncalves | 20:53 |
apuimedo | johnsom: well, at that point, if it needs writing anew | 20:54 |
apuimedo | probably it's as cheap to make the watch API directly in octavia xD | 20:54 |
*** AlexeyAbashkin has quit IRC | 20:54 | |
johnsom | Sure, NP. Sorry. You can always put in a story for us: https://storyboard.openstack.org/ | 20:54 |
* cgoncalves waits for johnsom's *grin* | 20:55 | |
apuimedo | is that something that would be acceptable to put in your API? | 20:55 |
nmagnezi | cgoncalves, that's his trademark :) | 20:56 |
johnsom | If it makes you feel better, it was on my roadmap.... https://wiki.openstack.org/wiki/Octavia/Roadmap as "Status change notifications via oslo messaging" | 20:56 |
apuimedo | johnsom: good | 20:56 |
cgoncalves | apuimedo, REST API? no changes would be needed AFAIK | 20:56 |
apuimedo | cgoncalves: just to take a &watch=True | 20:57 |
apuimedo | on any resource | 20:57 |
johnsom | Yeah, I'm not sure I see how that would relate to the API. | 20:57 |
apuimedo | and that would put the client with a http chunked transfer | 20:57 |
apuimedo | that would get events for each health update | 20:57 |
apuimedo | it is not trivial | 20:57 |
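A minimal sketch of the watch-endpoint idea apuimedo describes above: a server-side generator that streams one JSON event per status update, the way a chunked HTTP response would deliver them. Everything here (the function name, the event fields) is illustrative, not Octavia or Kubernetes code:

```python
import json


def watch_stream(events):
    """Yield newline-delimited JSON chunks, one per status update.

    A web framework with chunked transfer encoding would iterate this
    generator and flush each chunk to the watching client as it arrives.
    """
    for event in events:
        # Each chunk is a complete JSON object terminated by a newline,
        # kubernetes/etcd watch style.
        yield json.dumps(event) + "\n"


# Example: two hypothetical load balancer status updates.
updates = [
    {"type": "UPDATED", "resource": "loadbalancer",
     "id": "lb-1", "provisioning_status": "PENDING_CREATE"},
    {"type": "UPDATED", "resource": "loadbalancer",
     "id": "lb-1", "provisioning_status": "ACTIVE"},
]
chunks = list(watch_stream(updates))
```

The concern raised below (each watcher pins an API thread for the life of the stream) is exactly what makes this costly in a WSGI-style API server.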
cgoncalves | apuimedo, now I sort of understand what you meant by "watch api" but which watch api? | 20:57 |
xgerman_ | ah, so you would httpstream | 20:57 |
cgoncalves | ah | 20:58 |
apuimedo | xgerman_: that's right | 20:58 |
apuimedo | :-) | 20:58 |
johnsom | Does any OpenStack API support that? | 20:58 |
cgoncalves | apuimedo, consider leveraging aodh for that: alarming as a service | 20:58 |
xgerman_ | mmh, that can quickly get out of hand… | 20:58 |
johnsom | Seems like it would hang an API thread just feeding that, which I suspect people would not like.... | 20:58 |
cgoncalves | yeah. go aodh | 20:59 |
apuimedo | johnsom: no. No service has that. That's why the intern is writing a rabbitmq -> Watch streaming API | 20:59 |
apuimedo | :P | 20:59 |
apuimedo | WEaaS | 20:59 |
apuimedo | Watch Endpoints as a Service | 20:59 |
apuimedo | :-) | 20:59 |
xgerman_ | well, I would think we would add something to the API to feed some event thing so that those threads don't need to poll | 20:59 |
johnsom | I think I like the pub/sub over oslo messaging (rabbit if you like) better than adding that to our API | 21:00 |
xgerman_ | +1 | 21:00 |
johnsom | Ha, dipping your toe into Ceilometer.... | 21:00 |
apuimedo | xgerman_: well, the idea is to prevent the polling from those threads | 21:00 |
apuimedo | johnsom: ok, so rabbit neutron/nova style then, right? | 21:01 |
johnsom | Yeah, maybe done better.. grin | 21:02 |
apuimedo | cgoncalves: how would aodh help? How does it get the events from octavia? | 21:02 |
xgerman_ | well, johnsom and i occasionally talk about using etcd for health/status stuff which might be more trigger-on-update friendly | 21:02 |
johnsom | It doesn't today | 21:02 |
apuimedo | xgerman_: etcd provides watch for free | 21:02 |
apuimedo | on all keys | 21:02 |
cgoncalves | apuimedo, aodh can listen for notification events and triggers an alarm for each tenant that has requested to be informed. it can be via http post or other methods | 21:03 |
johnsom | Yeah, I have crazy dreams of an etcd driver for the health manager... Just no time/resource to develop it. I don't get interns anymore... grin | 21:03 |
apuimedo | cgoncalves: but something has to notify it | 21:03 |
apuimedo | johnsom: you should apply for outreachy mentoring | 21:03 |
cgoncalves | apuimedo, octavia would send out a notification to the oslo messaging bus and that's it | 21:03 |
apuimedo | I got an awesome intern the last cycle | 21:03 |
johnsom | Yeah, my last intern wrote our active/standby code | 21:04 |
apuimedo | cgoncalves: well, if we have to add sending stuff to oslo messaging, the intern project can already consume it | 21:04 |
xgerman_ | +1000 | 21:04 |
apuimedo | since we want http watch endpoints and not aodh http POST | 21:04 |
apuimedo | johnsom: you should trick cgoncalves into writing the etcd health backend | 21:05 |
johnsom | Yeah, I would add the pub/sub code and fire them that way. Then multiple projects can consume them. Once the infrastructure is in the code (oslo setup, etc.) the hook points are pretty clear in our flows and health manager drivers | 21:05 |
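A toy sketch of the CUD pub/sub johnsom describes. The event_type names are assumptions modeled on nova/neutron notification conventions; Octavia does not emit these today, and a real implementation would publish through an oslo.messaging Notifier on a rabbit transport rather than the in-process list used here:

```python
# In-process stand-ins for an oslo.messaging transport and listeners.
subscribers = []


def subscribe(callback):
    """Register a consumer, standing in for an oslo.messaging listener."""
    subscribers.append(callback)


def notify(event_type, payload):
    """Publish a create/update/delete notification to all subscribers.

    In a real deployment this would be a Notifier.info() call on an
    oslo.messaging transport, fired from the hook points in the flows
    and health manager drivers.
    """
    message = {"event_type": event_type, "payload": payload}
    for callback in subscribers:
        callback(message)


received = []
subscribe(received.append)
notify("loadbalancer.create.end",
       {"id": "lb-1", "provisioning_status": "ACTIVE"})
```

The point of the pub/sub shape is the fan-out: kuryr-kubernetes, senlin, or aodh could each subscribe without Octavia knowing about any of them.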
* johnsom tries jedi mind trick on cgoncalves | 21:06 | |
apuimedo | :-) | 21:06 |
* johnsom that is not the code you are looking for. You want to work on johnsom's projects..... | 21:06 | |
* cgoncalves never watched Star Wars \o/ | 21:06 | |
apuimedo | cgoncalves: Star Trek is way better anyway | 21:07 |
johnsom | Oye | 21:07 |
apuimedo | johnsom: maybe make him an offer he won't refuse | 21:07 |
johnsom | Time to go back to working on my internal project | 21:07 |
johnsom | I will offer him a cookie | 21:08 |
rm_work | i almost did a whole etcd health driver, like a year and a half ago | 21:09 |
rm_work | it should work | 21:09 |
rm_work | it's just... meh | 21:09 |
apuimedo | rm_work: why meh? | 21:10 |
cgoncalves | how are you stealing my cookie! | 21:10 |
cgoncalves | *how dare you | 21:10 |
rm_work | it's not as scalable | 21:10 |
apuimedo | rm_work: as scalable as what? | 21:11 |
rm_work | the UDP health driver | 21:11 |
johnsom | cgoncalves Cookie pair: zuul_filter_string=neutron-lbaas | 21:12 |
johnsom | My thought was use it instead of mysql as the backend, not as the amp->controller mech | 21:12 |
johnsom | I'm interested in in-memory with disk checkpointing | 21:13 |
rm_work | yeah we were looking at having the amps do cert-auth to etcd and then keep their status in it via that mechanism -- both aliveness (connection active) and also health of members | 21:14 |
rm_work | and then the HM daemon would just be checking with etcd for failure events | 21:14 |
rm_work | it's actually a little bit SIMPLER imo, and a little faster for long-failover detection | 21:14 |
johnsom | Yeah, not so warm fuzzy on that one | 21:14 |
rm_work | but it doesn't scale as well | 21:14 |
johnsom | right | 21:14 |
rm_work | and it's scary if you don't trust an etcd cluster to stay up properly | 21:15 |
rm_work | (which I don't) | 21:15 |
rm_work | (I don't trust ANY service to stay up properly that hasn't been heavily tested for like 20 years) | 21:15 |
johnsom | We would find a way to break it I'm sure | 21:15 |
rm_work | (and even then barely) | 21:15 |
apuimedo | johnsom: yeah, I thought also instead of mysql | 21:17 |
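A toy stand-in for the etcd-backed health store rm_work and johnsom discuss above: amphorae write heartbeats under a key prefix, and the health manager looks for keys that have gone stale instead of polling mysql. Real code would use an etcd client with lease TTLs and watch semantics; the key prefix and function names below are invented for illustration:

```python
heartbeats = {}  # key -> timestamp of last heartbeat


def report(amphora_id, now):
    """Record an amphora heartbeat, as an etcd PUT under a prefix would."""
    heartbeats["/octavia/health/%s" % amphora_id] = now


def stale(now, ttl):
    """Return amphora keys whose heartbeat has expired.

    With real etcd leases, key expiry would generate a delete event the
    health manager could watch for, instead of this scan.
    """
    return sorted(k for k, seen in heartbeats.items() if now - seen > ttl)


report("amp-1", now=100)
report("amp-2", now=106)
failed = stale(now=110, ttl=5)  # amp-1 last seen 10s ago, past the TTL
```

This illustrates why rm_work calls the design simpler (failure detection falls out of key expiry) and why trusting the etcd cluster itself becomes the weak point.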
*** apuimedo has quit IRC | 21:21 | |
*** erjacobs has quit IRC | 21:28 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: [WIP] Switch amphora agent to use privsep https://review.openstack.org/549295 | 22:31 |
*** rcernin has joined #openstack-lbaas | 22:32 | |
rm_work | did something merge somewhere that causes our py3 tests to all fail? | 22:57 |
rm_work | they're failing on EVERYTHING | 22:57 |
rm_work | well, one specific one is | 22:58 |
rm_work | octavia-v1-dsvm-py3x-scenario is failing on every patchset even on rechecks | 22:58 |
johnsom | Yeah, astroid broke pylint and a bunch of stuff yesterday. | 22:59 |
johnsom | I think the patch merged though | 22:59 |
johnsom | Yep: http://logs.openstack.org/41/566741/7/gate/octavia-v1-dsvm-py3x-scenario/4a34984/logs/devstacklog.txt.gz#_2018-07-16_21_55_33_303 | 23:00 |
johnsom | This is what carlos was talking about yesterday | 23:01 |
cgoncalves | not merged yet. pending on a depends-on from tripleo-ci IIRC | 23:01 |
johnsom | Ah, bummer | 23:02 |
cgoncalves | https://review.openstack.org/#/c/583068/ & https://review.openstack.org/#/c/583102/ | 23:02 |
rm_work | k then our gates are stuck until that goes through <_< | 23:07 |
cgoncalves | in the meantime backport patches could use some reviews :D | 23:16 |
cgoncalves | https://review.openstack.org/#/q/status:open+project:openstack/octavia+NOT+branch:master | 23:16 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!