johnsom | By allowing them to specify the subnet it implies the network. It's just more specific. | 00:00 |
---|---|---|
johnsom | network means octavia has to *GUESS* which subnet the user really wants | 00:00 |
rm_work | ok, but by not allowing them to specify a network-only, it breaks cases involving provider-networks | 00:00 |
johnsom | Hmmm, is that true? Interesting use case | 00:01 |
rm_work | yes | 00:01 |
johnsom | I mean I know technically we have to have a subnet of some sort on the network | 00:01 |
rm_work | yeah but it can be detected | 00:01 |
johnsom | I don't think you can plug ports without it | 00:01 |
johnsom | Hmmm, we might have some bugs for a VIP without a neutron subnet defined on it. | 00:02 |
johnsom | I think there are definitely some assumptions in the code around that. | 00:03 |
johnsom | Which are probably bugs.... | 00:03 |
johnsom | rm_work: Like this: https://github.com/openstack/octavia/blob/master/octavia/api/v2/controllers/load_balancer.py#L122 | 00:04 |
johnsom | Ah, I guess that will work as long as someone doesn't specify the IP, which... would kind of seem valid even if they did, in the provider network idea | 00:05 |
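The "*GUESS* which subnet" concern above boils down to a heuristic like the following sketch. This is a hypothetical helper, not Octavia's actual code: given only a network, the controller has to pick among that network's subnets somehow, and with more than one candidate there is no unambiguously right answer.

```python
def pick_subnet(subnets):
    """Pick a VIP subnet when the user only gave a network.

    `subnets` is a list of neutron-style subnet dicts for the network.
    Sketch of the kind of heuristic a controller would need; Octavia's
    real behavior may differ.
    """
    if not subnets:
        # A port still needs a subnet of some sort to be plugged.
        raise ValueError("Network has no subnets; cannot plug a VIP port")
    # Prefer IPv4 subnets as a (debatable) tie-breaker.
    v4 = [s for s in subnets if s.get("ip_version") == 4]
    candidates = v4 or subnets
    # With more than one candidate, this choice is exactly the guess
    # johnsom is objecting to.
    return candidates[0]
```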
rm_work | newer amp image with older octavia controlplane -- that SHOULD work, right? | 00:09 |
rm_work | specifically looking at rocky amp with queens controller | 00:09 |
johnsom | yes | 00:10 |
rm_work | the issue though is i'm worried about the 1.5->1.8 change | 00:10 |
rm_work | because we normally would check and generate the appropriate config | 00:10 |
rm_work | but i think we ADDED that in the newer controller code | 00:10 |
rm_work | didn't we? | 00:10 |
johnsom | That should be "hidden" in the amphora and abstracted by the agent api | 00:10 |
rm_work | well, the controller asks the amp API for the version | 00:10 |
rm_work | ahhh yeah and the queens controller still knows how to make a 1.8 config | 00:11 |
rm_work | yeah k | 00:11 |
johnsom | Well, we actually hack the config file as we write it out inside the amp | 00:11 |
rm_work | err, do we? | 00:12 |
rm_work | i didn't think the agent touched it | 00:12 |
johnsom | Hmm, maybe that was a different issue, as that doesn't make a ton of sense. But in general, it should work | 00:12 |
rm_work | thought we just laid it down | 00:12 |
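The cross-version compatibility rm_work and johnsom settle on above can be sketched roughly like this. This is a hypothetical helper, not Octavia's actual code; the template names and the 1.5/1.8 cut-over are illustrative. The point is that the controller picks a config the amphora's haproxy can actually parse, which is why a newer (rocky) amp behind an older (queens) controller keeps working.

```python
def choose_haproxy_template(haproxy_version):
    """Pick a config template based on the haproxy version the amp reports.

    Hypothetical sketch of the negotiation discussed above; version
    string parsing and template names are made up for illustration.
    """
    major, minor = (int(x) for x in haproxy_version.split(".")[:2])
    if (major, minor) >= (1, 8):
        # Newer haproxy: safe to render features added in the 1.8 config.
        return "haproxy-1.8.cfg.j2"
    # Older haproxy: fall back to the 1.5-compatible config.
    return "haproxy-1.5.cfg.j2"
```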
rm_work | regarding VIP bugs, yeah, one of the guys here has a patch for one of those bugs I think | 00:13 |
rm_work | he was here a while back and posted it (because i remember reading it in-channel previously) | 00:13 |
johnsom | https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/listener.py#L127 | 00:14 |
*** aojea has quit IRC | 00:15 | |
rm_work | also, i think senlin might be using our statuses incorrectly | 00:16 |
johnsom | Are they polling? | 00:16 |
rm_work | need to look at it, but from what peeps are saying, they might be using our operating_status in a way they shouldn't | 00:16 |
rm_work | like "operating status ERROR/DEGRADED -> failed! -> delete LB and try again" | 00:16 |
rm_work | which happens if any one member didn't respond quick enough | 00:17 |
rm_work | on spin-up | 00:17 |
rm_work | I need to dig into their stuff for a bit first, but just letting you know it's on my radar | 00:17 |
rm_work | i'll let you know if i find it's actually a problem | 00:17 |
rm_work | but poke me if you see anyone else mention anything :P | 00:17 |
johnsom | Oh geez, yeah, that is very wrong | 00:18 |
rm_work | need to make sure that's an upstream thing and not downstream or just user-error | 00:18 |
rm_work | but yeah i'll get back to you | 00:18 |
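The misuse rm_work describes hinges on the difference between Octavia's two status fields: provisioning_status reflects the control-plane lifecycle, while operating_status reflects observed data-plane health and can be transiently DEGRADED or ERROR right after spin-up while members answer their first health checks. A hypothetical polling-decision sketch (not senlin's or Octavia's actual code):

```python
def should_recreate_lb(provisioning_status, operating_status):
    """Decide whether an orchestrator should tear down and retry an LB.

    Sketch of the correct polling logic implied above; status names are
    Octavia's, the decision function itself is hypothetical.
    """
    if provisioning_status == "ERROR":
        return True   # control-plane failure: rebuilding is reasonable
    if provisioning_status in ("PENDING_CREATE", "PENDING_UPDATE"):
        return False  # still provisioning: keep waiting
    # ACTIVE: do NOT delete just because operating_status is
    # DEGRADED/ERROR -- one slow member health check can cause that.
    return False
```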
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: Add v2 two-node scenario test https://review.openstack.org/605163 | 00:21 |
rm_work | one more thing | 00:23 |
rm_work | when we boot amps, we don't really have a way for an operator to provide some custom metadata to nova during the boot, AFAICT | 00:23 |
rm_work | that seems like something we could do, right? just allow an operator to specify some key-value pairs for us to pass through to nova as metadata? | 00:24 |
johnsom | We can easily add that with flavors | 00:24 |
rm_work | ah, flavors would cover that? | 00:24 |
johnsom | Yeah, well, we would add a default in the config file, then allow override in the flavor | 00:24 |
rm_work | didn't know flavors would dive that deep into the nova stuff | 00:24 |
rm_work | but makes sense | 00:24 |
rm_work | right now we just... don't pass custom metadata, IIRC? | 00:25 |
johnsom | It doesn't yet (I have only implemented topology so far), but this is the type of thing | 00:25 |
rm_work | so, yeah, ok | 00:25 |
johnsom | Correct, no nova "metadata" today | 00:25 |
*** blake has quit IRC | 00:30 | |
*** pbandark has quit IRC | 00:38 | |
rm_work | ah yeah, just traced it through, we can EITHER pass metadata, or use personality files for the agent config | 00:45 |
rm_work | there's no doing both | 00:45 |
rm_work | was hoping we could just edit the user data template file to include the metadata we wanted | 00:45 |
rm_work | but it's an either/or | 00:45 |
johnsom | Well, user_data is tiny, like 16k or something. Also, I think the nova client calls things "metadata" that aren't their metadata, so be careful on that. | 00:49 |
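The either/or rm_work traced through can be pictured as building nova boot kwargs where the agent config travels over exactly one channel, leaving nova's separate `meta` field as the place for operator key-value pairs. This is a hypothetical sketch, not Octavia's compute driver; the function and path are made up for illustration.

```python
def build_boot_kwargs(agent_config, user_metadata=None, use_config_drive=True):
    """Sketch of amphora boot kwargs (hypothetical, not Octavia code).

    The agent config is delivered either via config-drive/personality
    files or via user_data, not both; arbitrary operator metadata rides
    in the separate nova `meta` field.
    """
    kwargs = {"meta": dict(user_metadata or {})}
    if use_config_drive:
        kwargs["config_drive"] = True
        kwargs["files"] = {"/etc/octavia/amphora-agent.conf": agent_config}
    else:
        # user_data is small and size-limited (johnsom's "like 16k or
        # something" above), so it cannot grow arbitrarily.
        kwargs["userdata"] = agent_config
    return kwargs
```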
*** aojea has joined #openstack-lbaas | 00:56 | |
*** aojea has quit IRC | 01:00 | |
*** aojea has joined #openstack-lbaas | 01:08 | |
*** aojea has quit IRC | 01:13 | |
*** hongbin has joined #openstack-lbaas | 01:36 | |
openstackgerrit | sapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client https://review.openstack.org/605914 | 01:56 |
*** aojea has joined #openstack-lbaas | 01:56 | |
*** witek has quit IRC | 02:00 | |
*** witek has joined #openstack-lbaas | 02:00 | |
openstackgerrit | Michael Johnson proposed openstack/octavia-tempest-plugin master: Add v2 two-node scenario test https://review.openstack.org/605163 | 02:06 |
*** abaindur has quit IRC | 02:07 | |
*** abaindur has joined #openstack-lbaas | 02:08 | |
*** aojea has quit IRC | 02:19 | |
openstackgerrit | sapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client https://review.openstack.org/605914 | 02:20 |
*** abaindur has quit IRC | 02:20 | |
*** hongbin has quit IRC | 03:07 | |
openstackgerrit | sapd proposed openstack/python-octaviaclient master: Support REDIRECT_PREFIX for openstack client https://review.openstack.org/605914 | 03:07 |
*** aojea has joined #openstack-lbaas | 03:11 | |
*** ramishra has joined #openstack-lbaas | 03:23 | |
*** aojea has quit IRC | 03:37 | |
*** yamamoto has joined #openstack-lbaas | 03:38 | |
*** yamamoto has quit IRC | 03:42 | |
*** hongbin has joined #openstack-lbaas | 04:24 | |
*** yamamoto has joined #openstack-lbaas | 04:32 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix devstack plugin for multi-node jobs https://review.openstack.org/621677 | 04:33 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver https://review.openstack.org/621323 | 04:45 |
*** hongbin has quit IRC | 05:19 | |
*** aojea has joined #openstack-lbaas | 05:22 | |
*** aojea has quit IRC | 05:27 | |
*** yamamoto has quit IRC | 05:34 | |
*** zufar has joined #openstack-lbaas | 05:42 | |
zufar | Hi all, I have question about octavia in openstack queens | 05:42 |
johnsom | Hi, what is your question? | 05:43 |
zufar | do all instances that need the load balancer service have to connect to the `lb-mgmt-net` or another network we define in `amp_network` in /etc/octavia/octavia.conf? | 05:44 |
*** dmellado has quit IRC | 05:46 | |
johnsom | The lb-mgmt-net is used for the controllers (worker, health manager, housekeeping) to communicate with the amphora (and the other way as well). No tenant traffic goes over this network, it is only for management. The lb-mgmt-net you create in neutron is set in the octavia.conf "amp_boot_network_list" setting. | 05:47 |
*** dmellado has joined #openstack-lbaas | 05:48 | |
zufar | Hi johnsom, thank you for the explanation. So when we create a load balancer, for example with this command `openstack loadbalancer create --name test-lb --project admin --vip-subnet-id internal --enable`, the LB instance automatically attaches to the internal subnet and to lb-mgmt-net, right? | 05:51 |
zufar | so the LB instance automatically has 2 IPs, 1 from the internal network and 1 from lb-mgmt-net? | 05:51 |
johnsom | Yes, lb-mgmt-net is attached when the amphora is booted, then, when a user specifies the "vip-subnet-id" we hot-plug that network into the amphora. This vip-subnet is inside a network namespace inside the amphora, so isolated from the lb-mgmt-net | 05:52 |
johnsom | Yes, at the completion of that command, the amphora instance will have two ports plugged and two IP addresses assigned. Just note that they are in different namespaces inside the amphora. | 05:53 |
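The "amp_boot_network_list" setting johnsom mentions lives in octavia.conf roughly as below. Illustrative fragment only; the UUID is a placeholder, and the section/option names should be checked against your Octavia release's configuration reference.

```ini
[controller_worker]
# Management network(s) the amphora is booted on. Carries only
# controller <-> amphora traffic; the tenant-facing VIP port lives in a
# separate network namespace inside the amphora.
amp_boot_network_list = <lb-mgmt-net-uuid>
```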
*** yamamoto has joined #openstack-lbaas | 05:54 | |
zufar | thank you johnsom, it's clear now | 05:54 |
johnsom | Great! Happy to help! | 05:54 |
*** dmellado has quit IRC | 05:55 | |
*** dmellado has joined #openstack-lbaas | 05:57 | |
*** dmellado has quit IRC | 06:02 | |
*** dmellado has joined #openstack-lbaas | 06:14 | |
*** pcaruana has joined #openstack-lbaas | 07:10 | |
*** zufar has quit IRC | 07:15 | |
*** takamatsu has joined #openstack-lbaas | 07:21 | |
*** yboaron has joined #openstack-lbaas | 07:33 | |
*** ccamposr has joined #openstack-lbaas | 07:39 | |
nmagnezi | johnsom, cgoncalves, fyi: https://github.com/CentOS/sig-cloud-instance-images/issues/133 | 07:40 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia master: Fix centos7 gate for centos 7.6 release https://review.openstack.org/621719 | 08:48 |
openstackgerrit | Merged openstack/octavia-dashboard master: Change openstack-dev to openstack-discuss https://review.openstack.org/622021 | 08:50 |
openstackgerrit | Merged openstack/octavia-lib master: Change openstack-dev to openstack-discuss https://review.openstack.org/622442 | 08:52 |
*** celebdor has joined #openstack-lbaas | 08:55 | |
*** aojea_ has joined #openstack-lbaas | 09:03 | |
*** aojea_ has quit IRC | 09:07 | |
*** rcernin has quit IRC | 09:38 | |
cgoncalves | we might not actually need https://review.openstack.org/#/c/622590/ | 09:43 |
cgoncalves | https://review.rdoproject.org/r/#/c/17673/ added golang to the RDO dep repo and RDO repo is enabled in devstack already | 09:43 |
*** witek has quit IRC | 09:50 | |
openstackgerrit | Ying Wang proposed openstack/neutron-lbaas-dashboard master: Display error when entering a non-integer https://review.openstack.org/621141 | 10:10 |
openstackgerrit | Ying Wang proposed openstack/neutron-lbaas-dashboard master: Display error when entering a non-integer https://review.openstack.org/621141 | 10:14 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Update default provider to amphora https://review.openstack.org/622917 | 10:16 |
*** celebdor has quit IRC | 10:20 | |
*** salmankhan has joined #openstack-lbaas | 10:27 | |
*** salmankhan1 has joined #openstack-lbaas | 10:31 | |
*** salmankhan has quit IRC | 10:32 | |
*** salmankhan1 is now known as salmankhan | 10:32 | |
*** celebdor has joined #openstack-lbaas | 10:36 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia master: Enable debug for Octavia services in grenade job https://review.openstack.org/622319 | 11:13 |
*** salmankhan1 has joined #openstack-lbaas | 11:27 | |
*** salmankhan has quit IRC | 11:28 | |
*** salmankhan1 is now known as salmankhan | 11:28 | |
velizarx | Hi folks. I'm seeing a lot of warning messages in the logs with the text: "Amphora <uuid> health message was processed too slowly: 10.7944991589s! The system may be overloaded or otherwise malfunctioning. This heartbeat has been ignored and no update was made to the amphora health entry. THIS IS NOT GOOD." But strangely, all of these messages are about one amphora. Other amphorae work fine without warnings. Does this message indicate | 11:29 |
velizarx | an overload on the amphora, or a problem on the health manager? I don't understand how the problem could concern only one amphora. | 11:29 |
cgoncalves | velizarx, hi. the message regards the time that the health manager took to process the heartbeat message sent from the amphora | 11:32 |
cgoncalves | velizarx, could it be that all other amps are sending messages to other health manager instances and that specific amp to a different health manager instance? | 11:33 |
cgoncalves | or that the health manager receives and processes all the other amps first, and by the time it processes that amp's heartbeat it is overloaded | 11:34 |
velizarx | cgoncalves, Hm, interesting idea, but it's really strange that this problem occurs only with one amphora, on all health managers. And other checks in the previous loop work fine. For example, see this part of the log: https://pastebin.com/xgqsaYQx | 11:43 |
velizarx | can it depend on the response body from this amphora? | 11:44 |
cgoncalves | I don't see how, unless you've changed the response body | 11:47 |
cgoncalves | is that amp running the same image as other amps? | 11:47 |
cgoncalves | I have no clue. I am just trying to help troubleshoot | 11:48 |
*** yamamoto has quit IRC | 11:53 | |
*** yamamoto has joined #openstack-lbaas | 11:53 | |
*** yamamoto has quit IRC | 11:57 | |
velizarx | cgoncalves, yes, the image is the same. Another question: can this message be a reason to run 'octavia-failover-amphora-flow' to try to restore the loadbalancer? | 12:00 |
cgoncalves | velizarx, yes, since the health manager is not able to process the message under 10 seconds it doesn't update the record and after 6 total attempts it triggers amp failover | 12:03 |
velizarx | cgoncalves, Ok. Thank you for your help. I will dig the logs further. | 12:03 |
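The behavior cgoncalves describes above (a heartbeat that takes over ~10 seconds to process is dropped, and enough consecutive drops leave the record stale and trigger amphora failover) can be sketched as follows. Hypothetical code, not Octavia's health manager; the interval and threshold values are illustrative and deployment-dependent.

```python
def heartbeat_disposition(processing_seconds, missed_count,
                          heartbeat_interval=10, failover_threshold=6):
    """Sketch of the drop/failover accounting described in the chat.

    Returns (dropped, new_missed_count, failover_triggered). Values of
    heartbeat_interval and failover_threshold are illustrative only.
    """
    if processing_seconds > heartbeat_interval:
        # Too slow: the heartbeat is ignored and the health record is
        # not updated (the "THIS IS NOT GOOD" warning).
        missed_count += 1
        dropped = True
    else:
        # A healthy, timely update resets the staleness counter.
        missed_count = 0
        dropped = False
    return dropped, missed_count, missed_count >= failover_threshold
```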
cgoncalves | sorry I can't help you much, maybe others may know | 12:04 |
*** salmankhan has quit IRC | 12:22 | |
*** khomesh has joined #openstack-lbaas | 12:22 | |
*** salmankhan has joined #openstack-lbaas | 12:30 | |
*** ramishra has quit IRC | 12:51 | |
*** yamamoto has joined #openstack-lbaas | 12:54 | |
*** ramishra has joined #openstack-lbaas | 13:05 | |
*** yamamoto has quit IRC | 13:06 | |
*** aojea has joined #openstack-lbaas | 13:45 | |
*** velizarx has quit IRC | 13:47 | |
*** khomesh has quit IRC | 13:51 | |
*** velizarx has joined #openstack-lbaas | 13:58 | |
*** bzhao__ has quit IRC | 14:18 | |
johnsom | velizarx We have found that the message you are seeing is tied closely to the database. It could be that the record for that amp is somehow slower to access in your database. You could attempt maintenance on your database to see if that resolves the issue. Also note, there is a patch pending for Queens (already merged in Rocky and Stein) that will significantly improve the database performance and reduce the chance | 14:58 |
johnsom | of that issue occurring. | 14:58 |
velizarx | Thank you johnsom. Yes, after analyzing monitoring data I saw that at the same time there were problems in the database (locks in Octavia's database). I think it is not Octavia's problem, because I saw requests from PMM, and now you confirmed it. | 15:06 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver https://review.openstack.org/621323 | 15:21 |
*** velizarx has quit IRC | 15:26 | |
xgerman | Yeah, the DB is our Achilles' heel | 15:32 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver https://review.openstack.org/621323 | 15:40 |
johnsom | Well, until the patch finally lands... | 15:41 |
johnsom | Happy day, finally got a zuul v3 multi-node test to work! Now to polish it up... | 15:43 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Adds flavor support to the amphora driver https://review.openstack.org/621323 | 16:05 |
*** aojea has quit IRC | 16:14 | |
*** salmankhan has quit IRC | 16:14 | |
*** pcaruana has quit IRC | 16:18 | |
*** salmankhan has joined #openstack-lbaas | 16:19 | |
*** aojea has joined #openstack-lbaas | 16:19 | |
*** aojea has quit IRC | 16:20 | |
*** aojea has joined #openstack-lbaas | 16:25 | |
*** fnaval has joined #openstack-lbaas | 16:30 | |
*** aojea has quit IRC | 16:30 | |
*** aojea has joined #openstack-lbaas | 16:37 | |
*** velizarx has joined #openstack-lbaas | 16:38 | |
*** pbourke has joined #openstack-lbaas | 16:57 | |
pbourke | hi, in the guide it states | 16:58 |
pbourke | "Add appropriate routing to / from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." | 16:58 |
pbourke | dont suppose anyone has an example of how to do this? | 16:58 |
johnsom | Hi, yes, I recently sent an e-mail pointing to some of those. Let me find a link | 16:58 |
johnsom | http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000584.html | 16:59 |
*** yamamoto has joined #openstack-lbaas | 17:04 | |
pbourke | johnsom: thanks! | 17:07 |
*** yamamoto has quit IRC | 17:08 | |
*** velizarx has quit IRC | 17:25 | |
*** salmankhan has quit IRC | 17:26 | |
*** salmankhan has joined #openstack-lbaas | 17:34 | |
*** roukoswarf has joined #openstack-lbaas | 17:40 | |
roukoswarf | does anyone have any insight on how providers are doing on implementing the new drivers? i have some a10s and we will be doing a new deployment, wondering what the best route is. | 17:41 |
rm_work | johnsom: ohhhh nice, zuulv3 multinode!!! grats | 17:42 |
rm_work | johnsom: how did work on ipv6 tempest testing go while i was out? | 17:42 |
johnsom | Yeah, it's been an adventure, but I finally figured out the incantation required. | 17:43 |
rm_work | did we make any progress on that? IIRC we were going to split out whole new test runs | 17:43 |
johnsom | The IPv6 patches are all up for review. Both tests and code fixes. | 17:43 |
johnsom | If you missed it, we have a review backlog: https://etherpad.openstack.org/p/octavia-priority-reviews | 17:43 |
rm_work | noice | 17:43 |
rm_work | yeah as soon as stuff settles down, i'm going to try to get back to reviewing | 17:43 |
johnsom | +1 | 17:44 |
*** celebdor_ has joined #openstack-lbaas | 17:48 | |
*** celebdor has quit IRC | 17:50 | |
*** aojea has quit IRC | 17:53 | |
*** aojea has joined #openstack-lbaas | 17:53 | |
*** aojea has quit IRC | 17:59 | |
*** salmankhan has quit IRC | 18:07 | |
xgerman | sweet | 18:17 |
*** aojea has joined #openstack-lbaas | 18:35 | |
*** aojea has quit IRC | 19:07 | |
*** abaindur has joined #openstack-lbaas | 19:30 | |
*** aojea has joined #openstack-lbaas | 19:52 | |
johnsom | #startmeeting Octavia | 20:00 |
openstack | Meeting started Wed Dec 5 20:00:03 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
*** openstack changes topic to " (Meeting topic: Octavia)" | 20:00 | |
openstack | The meeting name has been set to 'octavia' | 20:00 |
johnsom | Hi folks! | 20:00 |
xgerman | o/ | 20:00 |
cgoncalves | o/ | 20:00 |
johnsom | #topic Announcements | 20:00 |
*** openstack changes topic to "Announcements (Meeting topic: Octavia)" | 20:00 | |
johnsom | The only real announcement I have today is that the Berlin summit videos are now posted. | 20:01 |
nmagnezi | o/ | 20:01 |
johnsom | #link https://www.openstack.org/videos/summits/berlin-2018 | 20:01 |
xgerman | yeah - so you all can see what cgoncalves and I were up to | 20:01 |
johnsom | You all did a pretty good job. Thanks for presenting for us! | 20:02 |
johnsom | Any other announcements today? | 20:02 |
cgoncalves | you taught us well | 20:02 |
xgerman | +1 | 20:02 |
johnsom | There is some discussion of moving projects to a different release model, but I don't think that impacts us for the most part. | 20:03 |
johnsom | Ok, moving on then... | 20:03 |
johnsom | #topic Brief progress reports / bugs needing review | 20:03 |
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)" | 20:03 | |
colin- | o/ sorry i'm late | 20:04 |
johnsom | I have been working on flavors. My latest patch implements the core of flavors for the amphora driver. The first flavor "option" is for the topology. | 20:04 |
johnsom | I have one more patch to add some API stuff, then it's down to the CLI, Tempest plugin, and finally adding additional flavor options. | 20:04 |
johnsom | So, good progress. | 20:05 |
johnsom | I have also figured out the last zuulv3 incantation I needed to get zuulv3 native multi-node gates running. I should have that wrapped up this week. | 20:05 |
cgoncalves | great progress, thank you! | 20:06 |
colin- | am still looking into nova-lxd integration for containerized amphora but am waiting on a nova upgrade before i proceed. expect to have something to report on that after kubecon next week | 20:06 |
johnsom | Short summary, you can't assign IPs for the hosts via devstack any longer. There is an ansible variable that tells you which IP the hosts got.... | 20:06 |
johnsom | colin- Excellent, excited that someone has time to look at that again. | 20:07 |
colin- | still very early days but am hopeful for it. would be really cool | 20:07 |
johnsom | Yes! | 20:08 |
johnsom | Any other updates? I know xgerman updated our testing web server to support UDP. This will enable tempest test for the UDP work added in Rocky. Thanks! | 20:08 |
xgerman | Yep. | 20:09 |
johnsom | #topic CentOS 7 gate | 20:09 |
*** openstack changes topic to "CentOS 7 gate (Meeting topic: Octavia)" | 20:09 | |
johnsom | I put a few gate related topic items on the agenda today. Starting with CentOS 7 | 20:10 |
johnsom | To be frank, this gate has been unstable for a while and with the recent release of CentOS 7.6 it is broken again. | 20:10 |
xgerman | #vote non voting | 20:11 |
johnsom | I will throw shame towards the person that pulled packages out in a dot release..... | 20:11 |
johnsom | lol, xgerman is ahead of me, but yes, I would like to propose we drop the CentOS 7 gate back down to non-voting. | 20:11 |
johnsom | Comments? | 20:11 |
cgoncalves | afaik golang is still present in rhel 7.6 repo | 20:11 |
johnsom | If that is the case, why are the gate runs failing because it can't find it? | 20:12 |
cgoncalves | +1 until it gets fixed | 20:12 |
cgoncalves | gate runs centos, not rhel | 20:12 |
nmagnezi | Yup, stability should be a priority here. +1 from me as well | 20:13 |
cgoncalves | http://paste.openstack.org/show/736720/ | 20:13 |
johnsom | Ok, that is a majority of the folks that raised their hand as attending today, so I will post a patch to drop it to non-voting. | 20:14 |
cgoncalves | oh, rhelosp-ceph-3.0-tools provides it | 20:14 |
colin- | sounds reasonable yeah | 20:14 |
johnsom | Really the issues have been around the package repos and mirrors. This may be another one of those issues. | 20:14 |
colin- | now that it's a safe majority i'll chime in ;) | 20:14 |
xgerman | Lol | 20:14 |
johnsom | #topic Grenade gate | 20:15 |
*** openstack changes topic to "Grenade gate (Meeting topic: Octavia)" | 20:15 | |
johnsom | So the fun continues... The grenade gate mysteriously started failing recently. | 20:15 |
cgoncalves | I spent some time today checking what's going on there, no luck | 20:15 |
johnsom | My quick look at this was apache is doing something wrong and the API calls aren't making it to us. | 20:15 |
johnsom | Ok, thanks for the update. This is one I don't really want to make non-voting. At least until we can see why it is failing. | 20:16 |
cgoncalves | I see it in keystone but then it returns 404 | 20:16 |
cgoncalves | http://logs.openstack.org/19/622319/3/check/octavia-grenade/3ee64ee/logs/apache_config/octavia-wsgi.conf.txt.gz | 20:17 |
johnsom | Yeah, that is a pretty simple file... | 20:17 |
johnsom | I will try to spend some time this afternoon debugging this. Any assistance is very welcome. | 20:18 |
johnsom | It is an important gate to show we can do upgrades... | 20:18 |
xgerman | Let me know how I can help | 20:18 |
cgoncalves | find the root cause :) | 20:19 |
johnsom | xgerman Another set of eyes would be welcome | 20:19 |
johnsom | #link http://logs.openstack.org/38/617838/5/check/octavia-grenade/3479905/ | 20:19 |
johnsom | This is a simple patch that is failing the grenade gate (all of them are now, so not likely this patch) | 20:19 |
johnsom | Thanks folks for taking a look at that. | 20:20 |
johnsom | #topic Multi-node gate(s) and scenario gates | 20:20 |
*** openstack changes topic to "Multi-node gate(s) and scenario gates (Meeting topic: Octavia)" | 20:20 | |
johnsom | Ok, next on the agenda, since I have multi-node gates running I wanted to run an idea by you all. | 20:20 |
johnsom | We have a lot of gates..... (nlbaas retirement is coming!) | 20:21 |
johnsom | Since these multi-node gates will run the scenario test suite, how do you feel about dropping our current two scenario gates and instead using the multi-node version as our primary scenario tests? | 20:21 |
johnsom | Or do folks see value in the single node scenario gates? | 20:22 |
johnsom | Just throwing it out as an idea | 20:23 |
cgoncalves | which two scenario gates? octavia-v2-dsvm-scenario and octavia-dsvm-scenario? | 20:23 |
cgoncalves | or move octavia-v2-dsvm-scenario{,-py3} to multi-node? | 20:23 |
johnsom | octavia-v2-dsvm-scenario and octavia-v2-dsvm-py35-scenario | 20:24 |
nmagnezi | Well, a multi-node is more true to life. The question is whether or not we have something to gain by keeping a single node alongside it | 20:24 |
johnsom | Yeah, the only downside I see is multi-node has more moving parts, so could be more likely to have failures not related to the patch being tested.... | 20:25 |
cgoncalves | right. can we make it non-voting for a while and then decide based on success/failure rate? | 20:25 |
nmagnezi | I assume we'll start with it as non-voting so we can get an indication of stability | 20:25 |
nmagnezi | Just trying to think about the longer term | 20:25 |
johnsom | Yep | 20:26 |
johnsom | I was just trying to reduce the number of duplicate runs of the same tests and save some precious nodepool instances. | 20:26 |
* johnsom Glares at triple-o's nodepool usage | 20:26 | |
nmagnezi | haha | 20:27 |
cgoncalves | tripleo fine folks doing fine work | 20:28 |
johnsom | Ok, I will simply replace the current (broken) zuul v2 octavia-v1 gates with non-voting zuul v3 octavia-v2 gates for now and we can re-consider later. | 20:28 |
nmagnezi | So I think everyone will agree that multi node tests bring a ton of value. Maybe we can just start and see how it plays out? (I know, we'll consume a lot more nodes in the meantime) | 20:28 |
johnsom | I wasn't criticizing triple-o folks, just the heavy nodepool usage.... | 20:28 |
cgoncalves | yes but we could also run tests faster and in parallel | 20:29 |
cgoncalves | johnsom, I know ;) | 20:29 |
johnsom | Yeah, right now we have the tempest concurrency capped at 2. With the multi-node we might be able to raise that. | 20:30 |
cgoncalves | ok, so we're dropping octavia-v1 jobs \o/ | 20:30 |
xgerman | of course, once n-lbaas gets removed | 20:30 |
cgoncalves | johnsom, 2? I think at 1 | 20:30 |
johnsom | #link https://github.com/openstack/octavia-tempest-plugin/blob/master/zuul.d/jobs.yaml#L132 | 20:31 |
cgoncalves | oh, 2 yes | 20:31 |
johnsom | We could also be greedy and spin up a third instance with nova on it... | 20:31 |
xgerman | I wouldn’t call that greedy… OSA is no lightweight either | 20:32 |
* johnsom Hears maniacal laughter from somewhere.... | 20:32 | |
openstackgerrit | Merged openstack/neutron-lbaas master: use neutron-lib for _model_query https://review.openstack.org/617782 | 20:32 |
johnsom | Oh so very true... | 20:32 |
*** Swami has joined #openstack-lbaas | 20:32 | |
cgoncalves | should we also suffix py2-based jobs with -py2 and drop suffix -py3? make python3 officially first class citizen | 20:32 |
johnsom | Ok, I think I have the answer/path forward I was looking for. | 20:33 |
xgerman | k | 20:33 |
johnsom | cgoncalves Yes. I can do that. | 20:33 |
johnsom | #topic Open Discussion | 20:33 |
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)" | 20:33 | |
johnsom | We should also decide how/when we do bionic.... | 20:34 |
johnsom | Do we want parallel xenial and bionic for a bit, or just make the jump? | 20:34 |
cgoncalves | I was typing that but decided not to put more burn on you | 20:34 |
xgerman | parallel with bionic non-voting | 20:34 |
cgoncalves | *burden | 20:35 |
johnsom | We are not yet "bionic" native for the amp, but the compatibility mode seems to work fine. | 20:35 |
johnsom | Well, that is why I get the lousy thankless title of PTL.... Grin. Don't even get the t-shirt. lol | 20:36 |
cgoncalves | noted | 20:36 |
johnsom | Or put another way. We can decide we want to do it, and it will get done at some point in the future*. | 20:36 |
johnsom | * Maybe | 20:36 |
xgerman | johnsom: could be worse… TC | 20:37 |
johnsom | grin | 20:37 |
johnsom | TC gets catering | 20:37 |
johnsom | lol | 20:37 |
xgerman | every time I hang with them they go to food trucks | 20:37 |
johnsom | Oh, I mean, great title with a bunch of perks. Please sign up | 20:37 |
johnsom | Sorry, any topics for open discussion today? | 20:38 |
xgerman | been there, done that :-) | 20:38 |
cgoncalves | Q: why can't I find octavia-v2-dsvm-scenario-ubuntu-bionic in openstack health page? | 20:38 |
cgoncalves | I wanted to check failure rate | 20:38 |
johnsom | Probably because we don't have one | 20:39 |
colin- | oh i watched the berlin project update btw, nice job cgoncalves and xgerman (and slide composer johnsom ) | 20:39 |
colin- | looking forward to the stein stuff after seeing it bundled in that way | 20:39 |
cgoncalves | oh, maybe it only lists gate jobs | 20:39 |
johnsom | oh, we do | 20:39 |
cgoncalves | colin-, thanks! | 20:40 |
johnsom | #link http://zuul.openstack.org/builds?project=openstack%2Foctavia&job_name=octavia-v2-dsvm-scenario-ubuntu-bionic | 20:40 |
cgoncalves | I was looking at http://status.openstack.org/openstack-health/#/?searchProject=octavia-v2-dsvm-scenario&groupKey=build_name | 20:40 |
johnsom | You need to set the drop down to job instead of project | 20:41 |
cgoncalves | I have &groupKey=build_name which is job | 20:41 |
cgoncalves | anyway, bionic looks mostly green | 20:42 |
johnsom | Seems ok-ish. I need to look at that and setup bionic-on-bionic. I think that is amp image only right now | 20:42 |
johnsom | Yeah, I have another patch I was playing with to figure out what all was needed for zuul to do a bionic nodepool instance | 20:43 |
johnsom | I got it working at the PTG | 20:43 |
johnsom | Just need to clean it up | 20:43 |
johnsom | cgoncalves BTW, I saw your comment on the multi-node patch. I learned (the hard way) that if you override anything in a subsection, it *replaces* the whole section in zuul land, so I have to pull in a bunch of vars from parent jobs to make it work. | 20:45 |
cgoncalves | there's no zuul_extra_vars or alike option? :( | 20:46 |
johnsom | The only part that technically isn't required in our repo right now is the nodeset, but I think there is value to just having it there for future layouts and clarity. I will probably create an octavia-controller group | 20:46 |
cgoncalves | ok | 20:46 |
johnsom | I thought it was supposed to just be additive and override if the same name matches, but that override is at the group level, not the variable level... sigh. | 20:47 |
johnsom | So "host-vars: controller: devstack_localrc:" ends up replacing all of the "devstack_localrc:" vars from the parent | 20:47 |
johnsom | A bummer. | 20:48 |
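As johnsom describes it above, overriding anything under a host-vars subsection replaces the whole dict rather than merging at the variable level, so the child job has to repeat the parent's vars. An illustrative pair of zuul job definitions (job names and variables are made up):

```yaml
# Parent job (illustrative):
- job:
    name: parent-scenario
    host-vars:
      controller:
        devstack_localrc:
          EXISTING_VAR_A: value-a
          EXISTING_VAR_B: value-b

# Child job: setting one key under devstack_localrc replaces the whole
# dict, so the parent's vars must be re-declared or they are lost.
- job:
    name: child-multinode
    parent: parent-scenario
    host-vars:
      controller:
        devstack_localrc:
          EXISTING_VAR_A: value-a   # pulled in from the parent by hand
          EXISTING_VAR_B: value-b   # pulled in from the parent by hand
          NEW_VAR_C: value-c        # the actual override
```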
johnsom | However, that said, you are welcome to play around with it and find a better way. There might be one. I'm just happy it works now. | 20:48 |
johnsom | Other items today? | 20:49 |
johnsom | Oh, I will be taking Friday off, so I will be around but with limited bandwidth. | 20:50 |
johnsom | Ok, thanks folks. Have a great week! | 20:50 |
nmagnezi | o/ | 20:50 |
johnsom | #endmeeting | 20:51 |
*** openstack changes topic to "Discussions for Octavia | Stein priority review list: https://etherpad.openstack.org/p/octavia-priority-reviews" | 20:51 | |
openstack | Meeting ended Wed Dec 5 20:51:03 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 20:51 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.html | 20:51 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.txt | 20:51 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-12-05-20.00.log.html | 20:51 |
colin- | ttyl | 20:51 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Make the CentOS 7 scenario gate non-voting https://review.openstack.org/623070 | 21:27 |
johnsom | Cores ^^^^^ please vote to get the gates moving again. | 21:27 |
nmagnezi | johnsom, +2, hopefully it'll get fixed soon | 21:32 |
xgerman | +2/A | 21:36 |
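[Editor's note: making a job non-voting, as the patch above does, is typically a one-line change in the project's Zuul config. The fragment below is a sketch using Zuul's real `voting` job attribute; the file path and exact job name in the Octavia repo are assumptions.]

```yaml
# zuul.d/projects.yaml (sketch): a non-voting job still runs and reports
# results, but its failure no longer blocks the check pipeline.
- project:
    check:
      jobs:
        - octavia-v2-dsvm-scenario-centos-7:   # hypothetical job name
            voting: false
```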
*** yboaron has quit IRC | 21:36 | |
rm_work | oops, missed the meeting T_T | 21:44 |
xgerman | yeah, was wondering if I should ping rm_work a few times | 21:46 |
rm_work | i was at lunch | 21:49 |
rm_work | because the meeting is at noon :P | 21:50 |
rm_work | worst meeting time | 21:50 |
nmagnezi | rm_work, you can always join as rm_mobile.. :D | 21:52 |
*** rcernin has joined #openstack-lbaas | 22:03 | |
rm_work | hard to type when my fingers are covered with shawarma grease :P | 22:05 |
*** hyang has joined #openstack-lbaas | 22:11 | |
xgerman | you are laying off the soylent? | 22:17 |
colin- | tell me about it rm_work i almost always miss it for that reason | 22:17 |
colin- | although i know others are the same timezone and don't complain, so i shouldn't | 22:18 |
colin- | shawarma sounds good | 22:18 |
*** hyang has quit IRC | 22:18 | |
xgerman | also most phones have voice input, “hey, Siri…” | 22:28 |
rm_work | colin-: pfft, i will always complain. if everyone thought the same way, everyone would just suffer in silence. my complaints are a form of public service. :P | 22:29 |
rm_work | xgerman: well, i'm in sunnyvale, so i don't have access to my supply | 22:29 |
johnsom | I just ate my hot-and-spicy pork during the meeting. It's not like it's a video meeting.... | 22:31 |
johnsom | My favorite Korean place, that had to close because some stupid chain sandwich shop took their lease, re-opened as a food truck. I am happy. | 22:32 |
rm_work | mmmmmmm I need some good Korean food | 22:40 |
rm_work | really want to get some 떡볶이 (tteokbokki, spicy rice cakes) | 22:40 |
rm_work | hmmmm, might have to go here for dinner tho: https://www.octavia-sf.com/ | 22:41 |
rm_work | then stop by here after? :P https://www.octaviawellness.com/ | 22:42 |
johnsom | Hmm, not sure they have 떡볶이 at the cart | 22:42 |
rm_work | should ask for me :P if they do I'll be sure to stop by when I eventually make it over there | 22:43 |
johnsom | Lol, you can give them a sticker | 22:43 |
rm_work | Yeah! Though, I'm almost out! T_T | 22:43 |
rm_work | need to get another run printed | 22:43 |
johnsom | She would probably make it for you if we gave her a heads up | 22:43 |
johnsom | I know they have squid and pork, plus some other less interesting things | 22:44 |
johnsom | For the hot-and-spicy plates | 22:44 |
rm_work | mmmmmm | 22:45 |
*** rcernin has quit IRC | 22:52 | |
*** rcernin has joined #openstack-lbaas | 22:52 | |
*** aojea has quit IRC | 22:53 | |
*** aojea has joined #openstack-lbaas | 22:54 | |
*** aojea has quit IRC | 22:58 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Fix the grenade gate https://review.openstack.org/623102 | 23:21 |
johnsom | That *might* be the answer | 23:21 |
johnsom | Ugh, dual job failure race. hmmm | 23:23 |
*** fnaval has quit IRC | 23:26 | |