Wednesday, 2018-03-21

<openstackgerrit> huangshan proposed openstack/python-octaviaclient master: Add loadbalancer status show client api and osc
12:09 <toker_> Hi there (again)! I'm almost satisfied with my octavia setup. I've got everything working except the health-manager. I can see the amphora sending UDP packets to the server where the health-manager is listening (on the correct port), but "openstack loadbalancer show lb1" still shows operating_status OFFLINE
12:15 <nmagnezi_> toker_, hi there!
12:15 <nmagnezi_> toker_, what do the logs have to say?
12:51 <toker_> nmagnezi_: hm, depends on which logs you are referring to. Everything looks good on the octavia side: the loadbalancer is up and running and works as expected, and the healthmonitor is querying my webservice as it's supposed to. Still, I get "operating_status OFFLINE" when running openstack loadbalancer show
12:53 <toker_> As I said, the amphoras are sending packets to the healthmanager (I verified that with tcpdump). However, the healthmanager doesn't seem to update the status of the loadbalancer in openstack
12:54 <toker_> what should 'health_update_driver' be set to? And
13:12 <toker_> For some reason the octavia database isn't updated and I can't really figure out why :(
13:45 <toker_> rm_work: ping me when you are around :P
14:55 <johnsom> toker_ You want "health_db" as the health_update_driver
14:55 <johnsom> Same with stats
14:57 <johnsom> toker_ To update your database, use "octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head"
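The options johnsom names live in octavia.conf on the controller; a minimal sketch of the relevant section (option names follow the Octavia configuration reference, the key value is purely illustrative):

```ini
[health_manager]
# Use the built-in database updaters for heartbeats and statistics
health_update_driver = health_db
stats_update_driver = stats_db
# Any arbitrary string, but it must be identical on the controller
# and inside the amphorae, or heartbeats fail the HMAC check
heartbeat_key = insecure-example-key
```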
<johnsom> Neat, we are leading other projects....
15:34 <xgerman_> another day, another non-run of our periodic job
15:34 <xgerman_> or no result from it
15:34 <toker_> johnsom: Don't think there is anything wrong with the database schema. The problem seems to be that octavia (octavia-healthmanager ???) never updates the database. Everything is working fine, I can query the loadbalancers, the loadbalancers are sending UDP health packets to the controller - BUT everything is operating_status OFFLINE when querying with the openstack cli. I'm fairly positive this worked yesterday *before* removing the
15:34 <johnsom> Hmm, I will see if I can dig up the job on one of the health pages
<toker_> octavia api directly... Here is my config, am I missing something?
15:35 <xgerman_> is blank on the periodic side
15:36 <johnsom> xgerman POST_FAILURE
<johnsom> /usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --out-format=<<CHANGED>>%i %n%L /var/lib/zuul/builds/3674c7b2b3674b068ed2afabe7a360a7/work/artifacts/
15:37 <johnsom> rsync: change_dir "/var/lib/zuul/builds/3674c7b2b3674b068ed2afabe7a360a7/work/artifacts" failed: No such file or directory (2)
15:37 <johnsom> rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
15:39 <johnsom> toker_ looking. Can you do me a favor and restart your health manager process? We once saw a case where it wasn't processing the packets until a restart. Curious if you are seeing that too
15:40 <rm_work> toker_: also make sure your rules for traffic forwarding/listening for your docker container with the healthmanager include the right ports/protocols
15:40 <rm_work> UDP and not just TCP
15:41 <rm_work> it sounds to me like the packets are being sent by the amps but not actually making it to the HM process
15:44 <rm_work> toker_: and any security groups involved on the ports -- they usually don't include UDP by default
15:44 <rm_work> or did you use tcpdump *in the HM containers* to make sure they were seeing the traffic?
15:47 <johnsom> rm_work Are you going to be able to make the meeting today?
15:47 <rm_work> bleh, did some perf stuff with the new 1.8 based containers and it's still shitty for me :(
15:47 <rm_work> johnsom: yeah
15:47 <johnsom> I am considering two topics that it would be nice to have a quorum for.
15:49 <johnsom> Yeah, I tested 1.8 a while ago and didn't see any major improvement, but I also didn't change any settings, so assumed I just hadn't turned on the new stuff yet. Still happy with the restart and systemd fixes they did in 1.8 though
15:50 <johnsom> toker_ I don't see anything obviously wrong with that config that would impact health manager.
<openstackgerrit> German Eichberger proposed openstack/octavia master: Fixes the directory we copy the image to
15:51 <xgerman_> ^^ rm_work , johnsom — let's hope this is the right place
15:51 <rm_work> johnsom: right, just cgoncalves was hoping i would see perf improvements ;P
15:51 <rm_work> i mean, i was too but wasn't hugely expecting anything
15:51 <rm_work> i still need to figure out why it sucks so bad tho
15:52 <rm_work> i can't even really get 1000 RPS :(
15:52 <johnsom> Yeah, I would have expected some scaling improvement with the threading work they did. But you need more than one vCPU for that and some settings I think
15:53 <rm_work> i mean we've got 4 vCPU to work with
15:53 <rm_work> I would just need to enable those
15:53 <rm_work> but i don't know for sure it's a CPU issue...
15:53 <xgerman_> yep, with your weird network there is always - the network -
15:53 <johnsom> Hmm, that is odd. I know in my last testing I could hit 10,000 but my virtual NIC driver was eating up all of my CPU. I wasn't running in an ideal environment, vmware workstation on windows....
15:53 <rm_work> i don't think it's the network
15:53 <rm_work> but *shrug*
15:54 <xgerman_> netperf graphs or it didn't happen
15:54 <rm_work> this is not my area of expertise
15:54 <rm_work> but i can try
15:54 <xgerman_> yeah, you should be able to benchmark vm<->vm
15:55 <rm_work> i mean i can test hitting a member DIRECTLY and compare to hitting the LB
15:55 <xgerman_> also if they employ firewalls - they might slow stuff down as well
15:55 <rm_work> both are just VMs in the same region
15:55 <johnsom> xgerman_ FYI: I found your job here: with pipeline: periodic and project: openstack/octavia
15:58 <toker_> hm, thanks for answering guys. I've restarted octavia-healthmanager loads of times, I also tried running it directly on the controller (from the cli) to make sure nothing fishy was up with the container - but no go.
15:59 <toker_> I'm positive UDP packets are received on the controller. Fairly positive they reach the health-manager..
15:59 <toker_> There is no octavia-health- process with opened RabbitMQ ports (5671,5672) running in the container <- this is seen though from the logs of the container. Don't think I've seen this before.
16:00 <cgoncalves> rm_work: no perf gains with 1.8? :/
16:00 <rm_work> toker_: do you have logs from the health-manager process?
16:00 <cgoncalves> 1.5 FTW!
16:05 <rm_work> cgoncalves: lolno
16:05 <rm_work> this fixes so many other issues
16:06 <johnsom> toker_ You have debug enabled, there should be log messages indicating it received a packet to process
16:06 <rm_work> first two are connecting to the member directly, the third is to the LB
16:07 <rm_work> i turned it down to 1000 for the LB because more and it just bombs entirely
16:08 <rm_work> backend is our golang webserver
16:09 <rm_work> load on the amp barely even budges
16:10 <rm_work> like, 5% or less
16:10 <rm_work> something is f'd?
16:11 <rm_work> do i need to set ulimits inside the amp?
16:12 <johnsom> toker_ Should look like this:
16:12 <johnsom> Mar 21 09:11:33 devstackpy27-2 octavia-health-manager[107797]: DEBUG [-] Received packet from ('', 9818) {{(pid=107986) dorecv /opt/stack/octavia/octavia/amphorae/drivers/health/}}
16:13 <johnsom> rm_work I think haproxy raises the ulimit automatically
16:13 <johnsom> Did you set the connection limit on the LB?
16:13 <rm_work> how can it unless it runs as root?
16:13 <rm_work> no, it's unset
16:13 <johnsom> Ah, set that
16:13 <johnsom> HAProxy starts as root and then drops
16:14 <johnsom> if I remember right.
16:14 <rm_work> so set it to ... something?
16:15 <johnsom> Yeah, set the connection limit, I think there is an open bug that -1 is being translated to 2000
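Setting an explicit connection limit as johnsom suggests can be done from the CLI; a sketch assuming python-octaviaclient is installed (the listener name is a placeholder):

```shell
# Raise the listener's connection limit explicitly instead of leaving it
# at -1 (unlimited), which at the time was reportedly translated into
# haproxy's default maxconn of 2000.
openstack loadbalancer listener set --connection-limit 100000 my-listener
```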
16:15 <toker_> johnsom: oh! cool, I'll definitely look into why the health-manager doesn't receive any packets.
16:16 <rm_work> toker_: yeah something is up between the process and the packet source
16:17 <rm_work> either network, or firewall, or settings
16:17 <rm_work> yeah was just looking at that
16:23 <toker_> <- this is very weird.
16:24 <toker_> I don't understand. I run the healthmanager from the cli, it binds to the correct ip address, udp packets are received on that address, and the healthmanager says nothing.
16:26 <rm_work> toker_: so when i told you to set the config for "heartbeat_key"
16:26 <johnsom> toker_ Hmm, check one thing for me. Stop your health manager process, then double check with ps -ef that there are no old processes still running in the background.
16:26 <rm_work> did you set that correctly *everywhere*?
16:26 <rm_work> and by correctly, I mean, pretty much *any string* as long as it is identical everywhere
16:26 <johnsom> If the key was wrong it would log an hmac mismatch message in the log
16:26 <toker_> rm_work There is only one config file that is used
16:26 <rm_work> and you recreated the amps?
16:26 <toker_> all day
16:27 <toker_> starting the healthmanager seems to spawn three processes, right ?
16:27 <johnsom> Right, parent and two others
16:28 <johnsom> The health check thread and the receiver thread
16:28 <johnsom> process in later versions
16:29 <johnsom> rm_work on the tempest plugin, was it just the deletes in the tests that you didn't like? I can leave the creates, right?
16:30 <rm_work> err, i think so
16:30 <johnsom> I get your point about running more than we are testing. Just trying to balance that with the "each test must run on its own" mandate
16:30 <rm_work> trying to remember
16:31 <johnsom> I plan to ignore people if I have to today to make progress on that patch.... grin
16:31 <toker_> <- here is the actual strace from one of the python processes. You can see it doing a SELECT, COMMIT and ROLLBACK. Not sure if that is normal behaviour.
16:31 <toker_> johnsom: please don't ignore me :P
16:32 <rm_work> you should get log messages before any of that happens
16:32 <toker_> hm that is really weird
16:32 <johnsom> toker_ Yeah, sqlalchemy is a bit strange with the extra rollback, it can be ignored
16:32 <toker_> don't understand where that log message is going
16:32 <johnsom> Yeah, what he said. We don't touch the DB until that packet-received message comes in
16:33 <toker_> when was that debug message added? maybe my version is too old for it
16:33 <johnsom> Like liberty
16:34 <rm_work> debug is for sure on?
16:35 <toker_> I have debug=True in the config file under the default section and --debug on the command line when starting the health-manager
16:36 <johnsom> Ah, I was wrong, that debug came in during Pike
16:36 <toker_> Well I'm in the, I can see it's there.
16:36 <toker_> Let me try to add some debug to see if I can figure out what is going on.
16:46 <toker_> (data, srcaddr) = self.sock.recvfrom(UDP_MAX_SIZE) <- Hm, it simply seems to get stuck on this.
16:49 <rm_work> yeah that's it holding open a listen socket
16:49 <rm_work> that means it never gets any packet
16:49 <rm_work> so something is still wrong in the network layer between the amps and the HM
16:49 <johnsom> Yeah, it should sit there until it gets a packet to process
16:51 <rm_work> only other thing is `heartbeat_interval = 3` is a little fast BTW (though unrelated)
16:51 <rm_work> more like 10 is realistic
<johnsom> FYI, if you have amp image issues around disk space:
16:57 <toker_> yep ok, I've verified this now. From an amp, echo "This is my data" > /dev/udp/xxx/5555 = no message in the log; the same command directly on the controller = message in the logs from the healthmonitor.
16:57 <johnsom> DIB bug
16:58 <toker_> I just need to figure out why.
16:59 <rm_work> woah you can do that? lol
16:59 <rm_work> just echo to /dev/udp
16:59 <toker_> yea, easiest way :p
16:59 <toker_> no need for nc or anything :p
17:33 <toker_> Hm, there is something really fishy going on here that is beyond my skills. I changed it so the health-manager binds on all interfaces (udp UNCONN 0 0 *:5555), and still, when sending packets from anything other than directly from the controller, the packets get seen by tcpdump BUT NOT by the healthmanager.
17:33 <toker_> So how could it be that I'm allowed to send packets to the udp socket locally, but not from anywhere else? (if there were a firewall issue, I would not see the packets with tcpdump)
18:00 <toker_> holy moly, I really need to learn networking better.
18:00 <toker_> Turns out it was the firewall anyway.
18:00 <xgerman_> it always is the firewall — they are evil
18:01 <johnsom> Says the FWaaS guy....
18:01 <toker_> but the tcpdump showed me the packet, so..
18:01 <xgerman_> our iptables stuff is pretty benign compared to the backdoor-ridden commercial things
18:01 <toker_> I thought if iptables was the problem, I wouldn't have seen the packet.
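The confusion here is a common one: on Linux, tcpdump captures inbound packets at the packet-socket layer, before netfilter's INPUT chain is evaluated, so a packet can appear in a tcpdump capture and still be dropped by iptables before it ever reaches the listening socket. A sketch of the kind of rule that would let the heartbeats through (port 5555 as in toker_'s setup; the exact rule he used is not in the log, so this is illustrative):

```shell
# Accept health-manager heartbeat traffic ahead of any DROP/REJECT rules.
# tcpdump would show these packets even without this rule; the socket won't.
iptables -I INPUT -p udp --dport 5555 -j ACCEPT
```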
18:04 <toker_> Lessons learned.
18:05 <toker_> Again, thanks everyone for helping me out!
18:14 <johnsom> Sure, NP
18:16 <rm_work> yep happy to help :)
18:16 <rm_work> cgoncalves: you sure it wouldn't be easier to just switch everyone to native Octavia than actually try to support n-lbaas in OSP12? lol
<openstackgerrit> Michael Johnson proposed openstack/octavia-tempest-plugin master: Revert "Create scenario tests for loadbalancers"
18:53 <toker_> next problem up for grabs: octavia-dashboard :p
18:53 <toker_> I installed it as the readme says, and I got my "loadbalancer" menu back
18:53 <toker_> However, when clicking it, I get stuck in a "redirect loop" that shows the loading spinner, reloads the page, shows the spinner ... this goes on forever...
18:54 <toker_> I actually installed it the same way as the neutron-lbaas-dashboard. And I removed that one so they wouldn't conflict with each other
18:54 <johnsom> Yep, if you use the master or queens dashboard you will see a bunch of improvements
18:54 <toker_> well i did git clone from the github
18:55 <johnsom> Yeah, ok, so master. So let's look at a few things
18:56 <johnsom> under your horizon directory, there is openstack_dashboard/local/enabled Let's double check that there is only in there and not one of the other load balancer enable files
18:57 <johnsom> I hope you skipped the policy setup for now, it's not critical
18:57 <toker_> mm yep the file is there
18:57 <toker_> only that file, no old neutron ones
18:58 <johnsom> Ok, I just wanted to make sure
18:58 <johnsom> When you did the ./ collectstatic I assume it copied a bunch of files over?
18:58 <johnsom> Or did it not really do anything
18:59 <toker_> yep it copied some files, and then I ran compress
18:59 <johnsom> Yeah, and compress takes some time to run
19:00 <johnsom> hmmm, the only thing I can think is the collectstatic didn't work the way it should and there is still some old stuff in there.
19:02 <toker_> hm, yea well something weird is happening. No error or anything is returned, it just keeps "looping"
19:03 <toker_> can I enable some debug from octavia-dashboard somewhere ?
19:03 <johnsom> In the past when doing development I have blown away the static files under horizon and re-run collectstatic, but I can't recommend you do that as I'm not a horizon expert.
19:03 <johnsom> Can you open the browser debug window and see what the paths are that are redirecting?
19:04 <johnsom> ctrl-shift-I on chrome, then the network tab
19:05 <toker_> doesn't seem to be a "redirect" per se, more of a reload of the page.
19:06 <johnsom> hmmm, can you capture that path?
19:06 <toker_> the path is just dashboard/project/load_balancer/
19:07 <johnsom> Ok, so that is the new octavia dashboard path
19:08 <johnsom> the old one was: ngloadbalancersv2
19:09 <johnsom> Darn, I wish I had a dashboard VM running right now.
19:09 <johnsom> There is some logging from horizon, I just don't remember what the paths are off the top of my head
19:09 <toker_> ok no worries
19:09 <toker_> I need to call it a day anyways =)
19:09 <toker_> I'll debug it some more tomorrow =)
19:10 <toker_> Thanks again for all the support!
19:10 <johnsom> Ok, if you are still stuck tomorrow morning here, I can help
19:10 <johnsom> I will be done with one of the other VMs and can boot up a dashboard
19:10 <toker_> Cool cool!
19:10 <johnsom> It's just I've already got four running, so not enough memory to get a dashboard VM going
19:25 <rm_work> ok back to perf testing -- had to move my whole control-plane to a different k8s cluster
19:25 <rm_work> fortunately it only took about an hour, of which 45m was begging someone to let me use their cluster, and 10m was a DNS swing
19:26 <rm_work> anywho, set the connection-limit to a stupidly high number
19:26 <rm_work> didn't change anything
19:27 <rm_work> maxconn 100000
19:27 <rm_work> is in both the places it should be
19:32 <rm_work> i've also done some further tweaks on the amp though i can't be sure they took effect
19:38 <rm_work> johnsom: have you ever tested on centos?
19:38 <rm_work> i wonder if it could be specific
19:40 <johnsom> No, ubuntu only
19:43 <rm_work> hilariously, systemd-journal consistently is using more CPU time than haproxy during load testing, lol
19:44 <johnsom> Ah, all that logging
19:44 <rm_work> but it's still only like 5%
19:45 <rm_work> yeah this performance is abysmal
19:45 <rm_work> i wonder if i can get an ubuntu amp working
19:46 <johnsom> You could try one of german's
19:47 <johnsom> Though you may have local hacks
19:47 <xgerman_> sweet — now if we could make that work ;-)
19:47 <xgerman_> every night ;-)
19:47 <rm_work> yeah uhh not sure actually if any local hacks are necessary with ubuntu
19:48 <rm_work> i can try his O_o
19:48 <rm_work> but, triggered a build really quick
19:48 <johnsom> It will be 1.6 also
19:48 <rm_work> ah :/ well whatever
19:50 <rm_work> ahh yeah i think it won't do the flip right, ugh
19:56 <cgoncalves> rm_work: n-lbaas will still be supported in OSP13 (Queens) for 5 years at least :/
19:58 <rm_work> i feel ... bad for you
19:58 <rm_work> sorry about that :(
20:00 <johnsom> #startmeeting Octavia
20:00 <openstack> Meeting started Wed Mar 21 20:00:15 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at
20:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00 *** openstack changes topic to " (Meeting topic: Octavia)"
20:00 <openstack> The meeting name has been set to 'octavia'
20:00 <johnsom> Hi folks
20:00 <johnsom> #topic Announcements
20:00 *** openstack changes topic to "Announcements (Meeting topic: Octavia)"
20:01 <johnsom> "S" series naming poll closes today - check your e-mail for your voting link
20:01 <johnsom> TC elections are coming up in April
20:01 <johnsom> That is about what I have for announcements this week.
20:01 <johnsom> Any others?
20:02 <xgerman_> summit talks?
20:02 <xgerman_> also travel support is closing today
20:03 <johnsom> Sure, I have two talks scheduled for Vancouver: Octavia project update (40 minutes extended version) and I signed up for an on-boarding session for Octavia.
20:04 <xgerman_> I have a talk on Octavia and K8
20:04 <rm_work> do either of those actually require voting
20:04 <rm_work> or are they just ... set
20:04 <xgerman_> they are all confirmed
20:04 <rm_work> hey i'm doing octavia on k8s :)
20:04 <johnsom> Mine are set by the foundation. They are not yet on the schedule, they will add them later
20:05 <johnsom> Any other announcements?
20:06 <johnsom> #topic Brief progress reports / bugs needing review
20:06 *** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"
20:06 <johnsom> Ugh, trying to remember what I worked on. A bunch of gate related issues.
20:07 <cgoncalves> did we all...
20:07 <johnsom> Oh, helped xgerman_ out with the proxy plugin gate job.
20:07 <johnsom> Today i am starting work on the tempest plugin again to address the comments folks left and the discussion we had at the PTG.
<rm_work> i made a little review thing with mine and german's listed
20:08 <rm_work> having something like that can be useful for us
20:08 <johnsom> Hope to wrap that up today/tomorrow so you all can shoot more holes in it... grin
20:08 <rm_work> if you want something reviewed, add it and i can look :)
20:08 <rm_work> was hoping i could get johnsom to help prioritize and add stuff too
20:08 <johnsom> rm_work should I add that to the channel topic like we do at the end of the cycle?
20:08 <rm_work> we usually do one of these before each release and it seems to be quite helpful for velocity
20:08 <rm_work> I think that'd be good
20:09 <johnsom> Ok, I will work on that after the meeting
20:09 <rm_work> it's sometimes hard for me to tell what i should be reviewing, across so many projects
20:09 <rm_work> and with a lot of stuff WIP
20:09 <johnsom> EVERYTHING! hahahha
20:10 <johnsom> Yeah, it really helps me with my review dashboard if folks use the workflow -1 for WIP patches
20:11 <johnsom> Once the tempest patch is straightened out, I'm on to provider driver work
20:11 <johnsom> I did start a migration from neutron-lbaas to octavia script one evening. It is really just a start. I think rm_work is going to work on it some
20:12 <johnsom> Any other updates?
20:12 <johnsom> I think I saw Nir had some rally stuff in flight, so that is good too
20:13 <nmagnezi_> still in the works
20:13 <nmagnezi_> but will finalize this very soon
20:13 <nmagnezi_> basically two patches are already up
20:13 <nmagnezi_> 1. add octavia pythonclient support
20:13 <nmagnezi_> 2. port the existing scenario to use Octavia
20:13 <nmagnezi_> which is mostly done
20:14 <nmagnezi_> the 3rd patch will contain additional stuff (mostly CRUD for our LB resources)
20:14 <johnsom> I also know I'm behind on dashboard patch reviews. I hope to load that up and do some reviews on that stuff this week. Lots of good work going on there
20:15 <nmagnezi_> oh and just if it interests anyone here, Rally is about to split into two code bases, so I ported my patches to rally-openstack
20:15 <nmagnezi_> johnsom, I can help with those
20:15 <johnsom> Ah, interesting, so there will be an OpenStack-specific Rally and then a general use one?
20:16 <nmagnezi_> from what I understood from the Rally core team, they will have a generic base framework and additional code bases for plugins
20:16 <johnsom> That makes sense
20:16 <cgoncalves> openstack-octavia-rally project? :)
20:17 <nmagnezi_> maybe rally-k8s
20:17 <nmagnezi_> who knows :)
20:17 <johnsom> Yeah, if we need a repo for the plugin let me know
20:17 <nmagnezi_> but anyhow, the split is still WIP
20:17 <nmagnezi_> johnsom, I don't think we will, but will keep you posted
20:17 <johnsom> #topic Other OpenStack activities of note
20:17 *** openstack changes topic to "Other OpenStack activities of note (Meeting topic: Octavia)"
20:17 <johnsom> OpenStack Common Healthcheck WSGI Middleware spec
20:18 <johnsom> Our friend mugsie is proposing a common method to report service health
20:18 <johnsom> Interesting read, worth commenting on.
20:18 * mugsie will reply on that spec soon, I promise
20:19 <johnsom> I know our friends at F5 were interested in how we can expose the controller health in a useful way. This might be the answer
20:19 <johnsom> Proposed "Extended Maintenance" policy for stable branches
20:20 <johnsom> Also of interest, proposals about how to handle extended maintenance for stable branches.
20:21 <johnsom> For those of you not running master with a few days lag....
20:21 <johnsom> Ok, on to the big topic....
20:21 <nmagnezi_> rm_work, i think he pointed at you :D
20:21 <johnsom> #topic Octavia deleted status vs. 404
20:21 *** openstack changes topic to "Octavia deleted status vs. 404 (Meeting topic: Octavia)"
20:22 <johnsom> In my work on the tempest plugin and the proxy plugin gate I noticed we have a problem
20:22 <johnsom> Most services in OpenStack return a 404 when a query comes in for a deleted item.
johnsomThe current Octavia API does not, it returns a record with the provisioning_status marked DELETED.20:23
rm_workyeah ... i also just noticed that nova somehow accepts a --deleted param to show old deleted stuff, otherwise 404s on gets20:23
johnsomThis morning I confirmed that neutron-lbaas also returns a 40420:23
johnsomSo, we have a backward compatibility issue.20:23
xgerman_yep,  my proxy-gate chokes on the DEL:ETED20:23
cgoncalvesbackport material?20:24
johnsomSo, I wrote up this patch, which switches it over to 404 and gives a path to having a --deleted flag.20:24
johnsomWell, here is the part I need your input on....20:24
xgerman_cgoncalves: we are in a funny spot. We released the API and documented the DELETED behavior20:25
johnsomWe have now released two versions with the API doing the "DELETED" bit, even though the api-ref does show the 40420:25
cgoncalvesbackporting would fix and break API at the same time20:25
xgerman_Related: n-lbaas returns 404 instead of 403 (FORBIDDEN)20:25
johnsomActually, I don't think we documented the "DELETED" case. I haven't looked through the whole api-ref, but I know the 404's are there20:25
johnsomxgerman_ I'm going to ignore the 403 thing for now. That is a neutron oddity20:26
xgerman_mmh, we need to make sure we don’t *change* the API after the fact20:26
xgerman_johnsom: well it breaks backward compatibility - but I doubt anyone was using that20:26
johnsomxgeman_ Yeah, but let's focus on one topic at a time20:27
xgerman_ok, we always told people to use octav ia API - we could change our recommendation to use proxy if 100$ compaitibility is needed and octavia-API if you are willing to fix the two or three variations20:27
johnsomYeah, ok, in the API-REF we list the DELETED as a possible provisioning_status, but not in any of the sections. Each section does list 404 however.20:27
johnsomSo, how do we want to handle this issue in Octavia API?20:28
xgerman_mmh, so I am for consistency between servcies…20:29
johnsom1. Consider it an API bug, fix, backport, beg for forgiveness in the release notes.20:29
cgoncalvesit has to be fixed either now or later. I'd say fix it now and backport to queens and perhaps also pike (if taken as a critical issue). existing deployments could eventually start observing a different behavior, yes...20:29
johnsom2. Bump the API version. Likely a major bump as it's not necessarily backward compat20:29
rm_workI would say fix it now20:30
johnsom3. ???20:30
rm_workthe pain will be less20:30
rm_workwe're about to have people switching over en-mass soon20:30
johnsomYeah, I think we need to do it now, I'm just struggling with how...20:30
rm_workmy guess is relatively few have actually seen or would be affected by this20:30
xgerman_let’s not do a 3.0 - people already are freaked out about 2.020:30
nmagnezi_xgerman_, good point20:31
johnsomYeah, I think the most pain would be with the libraries. I can of course fix openstacksdk, but what about gophercloud and openstack4j20:31
johnsomYeah, I really don't want to do 3.0 now. (though there are other result codes that neutron-lbaas used that are wrong IMO)20:32
nmagnezi_johnsom, what usually justifies an API minor version bump? bug fixes or just new features?20:32
rm_worki think we just ... fix it and take the backlash20:32
rm_workif there is any20:32
xgerman_terraform etc. wait for DELETED20:33
*** mlavalle has joined #openstack-lbaas20:33
johnsomRight, A.B.C  A is breaking change to the API, B new features but compat, C is bug fixes20:33
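A minimal sketch of the A.B.C rule johnsom describes, assuming conventional semantic-versioning compatibility (the helper name and the exact compatibility policy are illustrative, not Octavia's actual version-negotiation code):

```python
def client_compatible(client_version, server_version):
    """Return True if a client built for client_version can use server_version.

    Per the A.B.C convention above: A (major) bumps are breaking, so majors
    must match; B (minor) bumps add backward-compatible features, so the
    server minor must be at least the client's; C (patch) is bug fixes only
    and is ignored for compatibility.
    """
    c_major, c_minor, _ = (int(x) for x in client_version.split("."))
    s_major, s_minor, _ = (int(x) for x in server_version.split("."))
    return c_major == s_major and s_minor >= c_minor
```

Under this reading, silently changing a response code from 200 to 404 fits none of the three slots cleanly, which is why the channel debates "bug fix" vs "major bump".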
rm_workcan we... make it a deployer option?20:33
nmagnezi_rm_work, please no :<20:33
rm_worki mean20:33
johnsomBut we don't really have our API versioning story straight yet. Our discovery is broken20:33
rm_workstart it deprecated, but allow people time to flip it over20:34
xgerman_yep, and gophercloud is rewriting anyway20:34
rm_work"you can flip this now, soon it will be flipped for you"20:34
johnsomLOL, I just had a thought.  It's bad, but a thought....   404 with the DELETED body....20:34
xgerman_so if we sneak it in now they should be fine20:34
rm_workI mean... why not?20:34
rm_workthough probably would still break the tools20:34
rm_workbecause they'd see the status first is my guess20:34
johnsomYeah, I think it doesn't really help that much20:35
xgerman_do we have the deletes=True option?20:35
xgerman_so we show deleted ones on request?20:35
johnsomSo, frankly I think people will be happy that it becomes the same as the other services.20:35
johnsomAlso, our client already handles it like a 40420:35
rm_worki think we may just need to cause some temporary breakage20:35
rm_workto get to consistency20:35
xgerman_we are playing with credibility here. People don’t like us breaking things IMHO20:36
johnsomha, people keep bringing up v1 so.... haters are going to hate20:36
rm_workthis would be us NOT breaking things IMO20:36
rm_workbecause i don't think many people have switched yet20:37
johnsomI am more about doing the right thing than about people whining20:37
rm_workto octavia20:37
xgerman_well, I have an install which relies on that feature20:37
rm_workdoes it?20:37
xgerman_yep, both terraform and the k8s provider wait for DELETED20:37
rm_workcan we fix those to work with BOTH20:37
rm_workget patches in for them20:37
rm_workand then do the switch20:37
xgerman_but it’s Pike — so as long as we leave that alone I am +220:37
rm_workmaybe once a patch lands to make them work both ways we can backport?20:38
johnsomOye, I feel like it should be backported all the way to Pike...20:38
nmagnezi_rm_work, if that will add a config option that would be a problem20:38
nmagnezi_to backport..20:38
cgoncalvesjohnsom: +120:38
rm_worknmagnezi_: nah that was a different thought20:39
rm_workso what if we do it in Master20:39
nmagnezi_rm_work, oh, alright20:39
rm_workthen make terraform work with either20:39
rm_workthen once that merges we backport to pike20:39
johnsomxgerman_ do you have the places in the repos for terraform and k8s that we can go do these patches?20:39
xgerman_I can find them — but my problem is that we are doing deploys and our version management isn’t that great20:40
johnsomYeah,  parallel effort this20:40
xgerman_so I’d like this change to come with the change Pike->Queens20:40
xgerman_just my 2ct20:41
johnsomSo let's go around the room and get your thoughts on the situation and how you think we should move forward.20:41
johnsomIt's a bad situation no matter what, we missed it while doing testing for pike20:42
johnsomNo one wants to go first?20:42
johnsomDo we wait and talk about this again next week?20:43
rm_workI mean20:43
rm_workI said my bit20:43
rm_workfix it to 404. fix terraform to work with either and wait for them to release the fix. backport all the way to pike.20:43
xgerman_so did I. It’s easier to communicate "beginning X you need this new terraform thing"20:44
johnsomPersonally, to me the API-REF is the spec. It lists 404 as a result for the calls. So this is a bug in my book. I would fix it, backport it, and be proactive fixing things we think might break.20:45
rm_workif terraform starts breaking, people can look and see the changelog item and get the new release20:45
*** yamamoto has joined #openstack-lbaas20:45
cgoncalvesjohnsom: +120:46
*** velizarx has joined #openstack-lbaas20:46
nmagnezi_johnsom, +220:46
cgoncalvesI will overflow soon with all my +1s20:46
xgerman_I know people and it’s easier to tell them "for Queens you need the new terraform" than "oh, you did some stable Pike release and now everything is broken"20:46
johnsomOk. So please review the patch on master. We seem to all agree we can land that.20:46
johnsomxgerman_ please paste some pointers in the channel so we can be aggressive at adding a fix.  I will go check openstacksdk and fix if needed.20:47
johnsomWe should double check the OSC plugin too.  Someone want to volunteer for that?20:48
xgerman_ok, will do20:48
johnsomI am assuming the will-do is for the links and not the OSC plugin?20:49
xgerman_I will do the terraform fixes20:49
xgerman_and investigate the k8s ones20:49
*** aojea has joined #openstack-lbaas20:49
johnsomOk, if no one has cycles for our client plugin I will take a look there too.20:49
johnsomI will put an agenda item on the meeting for status and when we should start landing backports.20:50
johnsom#topic Open Discussion20:50
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:50
johnsomOther items today?20:50
nmagnezi_I have a question20:51
nmagnezi_about our tempest plugin20:51
nmagnezi_specifically about
*** yamamoto has quit IRC20:51
nmagnezi_just wondering why we adopted a specific role name20:51
cgoncalvesI knew it!20:51
nmagnezi_cgoncalves, :D20:51
nmagnezi_I was struggling with this today20:51
johnsomSo, first part, that code is going away20:51
*** velizarx_ has joined #openstack-lbaas20:52
johnsomreplaced with:
johnsomThe reason for the role is the RBAC20:53
johnsomSo OpenStack is moving towards a richer RBAC scheme.20:53
johnsomCurrently it's either "ADMIN" or "OWNER" by project20:53
openstackgerritGerman Eichberger proposed openstack/neutron-lbaas master: Fix proxy extension for neutron RBAC
johnsomnova and octavia both implemented this new RBAC scheme20:54
nmagnezi_alright so I guess tripleO should configure all the roles mentioned in regardless of tempest20:54
johnsomWhere you need to have a role "member" to access the load-balancer service (or nova)20:54
johnsomThis is what is in as "default", but we provide a policy.json that allows you to set it back to the old way20:55
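An illustrative policy.json fragment of the kind johnsom mentions, overriding the advanced-RBAC default back to the legacy admin-or-project-owner behavior. The rule names here are assumptions for illustration; the shipped sample in the Octavia policy documentation is the authoritative version:

```json
{
    "context_is_admin": "role:admin or role:load-balancer_admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "load-balancer:read": "rule:admin_or_owner",
    "load-balancer:write": "rule:admin_or_owner",
    "load-balancer:read-quota": "rule:admin_or_owner",
    "load-balancer:write-quota": "is_admin:True"
}
```

With an override like this deployed, users no longer need the "member" (or load-balancer-specific) role to reach the API.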
johnsomThat said, there is a proposal out to officially align these across the services.20:55
nmagnezi_johnsom, thanks a lot. will surely read this.20:56
nmagnezi_johnsom,  we came across this when we started to test tripleO based deployments via the tempest plugin20:56
nmagnezi_and since I was not aware of those RBAC related roles, I tried to understand where this is coming from20:57
johnsomThat docs page is the source of truth there...20:57
nmagnezi_xgerman_, thanks!20:57
nmagnezi_johnsom, it's not always ;)20:57
johnsombut we are getting better20:58
cgoncalvesapi-ref is also the source of truth but... DELETED... :P20:58
johnsomThe "default Octavia policies" section is built out of the code, so will stay accurate20:58
johnsomYeah, api-ref is the truth, our code lies20:59
johnsomOne minute left.20:59
johnsomThanks folks.20:59
johnsomIf you have other questions I will be around21:00
*** openstack changes topic to "Discussion of OpenStack Load Balancing (Octavia) | Rocky is open for development!"21:00
openstackMeeting ended Wed Mar 21 21:00:07 2018 UTC.  Information about MeetBot at . (v 0.1.4)21:00
openstackMinutes (text):
rm_workjohnsom: so yeah, i copied the haproxy config from an amp and put it on haproxy on a random VM21:08
rm_workand i get great performance21:08
rm_workso it's not the network here21:08
rm_workit's something about the amp ...21:08
rm_workalso i spun up the go webserver on an amp on another port21:09
*** velizarx_ has quit IRC21:09
rm_workfixed the SG for it21:09
rm_workand i get the same perf issues21:09
rm_workso it's not haproxy21:09
rm_workit's something about the network and the amp OS21:09
*** velizarx has quit IRC21:14
xgerman_scroll back - didn’t I say “network”21:14
*** AlexeyAbashkin has joined #openstack-lbaas21:17
rm_worki mean it's NOT network it seems21:18
rm_worki mean the network config on the amp21:18
*** kobis has quit IRC21:19
xgerman_huh? There isn’t that much the amp can do to screw that up — especially if you netperf21:20
xgerman_mostly MTU and frame lengths21:20
*** AlexeyAbashkin has quit IRC21:21
rm_workAH FFFFFFFF21:22
rm_workI figured it out21:22
rm_workit can be the network when the network is ... not exactly the same between test servers21:22
rm_workwhen i add a FLIP in the mix ... performance dies21:22
rm_workit's our FLIP implementation21:22
rm_workI don't even21:22
*** yamamoto has joined #openstack-lbaas21:47
*** yamamoto has quit IRC21:53
openstackgerritMichael Johnson proposed openstack/octavia master: Fix calls to "DELETED" items
johnsomrebased to make sure this doesn't have a conflict21:55
*** pcaruana has quit IRC21:55
*** mlavalle has quit IRC22:21
*** rcernin has joined #openstack-lbaas22:25
*** threestrands has joined #openstack-lbaas22:29
*** threestrands has quit IRC22:30
*** threestrands has joined #openstack-lbaas22:30
*** threestrands has quit IRC22:30
*** threestrands has joined #openstack-lbaas22:30
xgerman_also, are our gates bonked again?22:38
johnsomIt looks like the multinode is semi-broken22:39
johnsomI was just looking at this:
*** yamamoto has joined #openstack-lbaas22:49
*** aojea has quit IRC22:54
*** aojea has joined #openstack-lbaas22:54
*** yamamoto has quit IRC22:54
*** aojea has quit IRC22:55
*** aojea has joined #openstack-lbaas22:55
*** aojea has quit IRC22:55
xgerman_mmh —22:56
*** fnaval has quit IRC22:56
johnsomYeah, they are working on the "native" multi-node stuff and I think broke something. yesterday it was both multi-node gates failing, today it seems it is only py35.22:58
johnsomI think I might not worry about it for another day and see if they fix whatever it is22:58
johnsomPlus I have been sitting on zoom with Cam all day, so trying to get something done...22:59
*** fnaval has joined #openstack-lbaas23:16
*** fnaval has quit IRC23:16
*** fnaval has joined #openstack-lbaas23:17
openstackgerritMichael Johnson proposed openstack/octavia master: Fix calls to "DELETED" items
*** yamamoto has joined #openstack-lbaas23:51
*** yamamoto has quit IRC23:55
rm_workreproducing the performance issues is proving problematic23:58
rm_worki can only do it in one DC now23:58
rm_workand it's not the DC that had problems when i was perf testing originally23:58
rm_workso it must be transient O_o23:58
rm_workgetting 20k concurrent without too much issue in other DCs23:59
rm_work(even the ones that were failing for me originally)23:59