*** spatel has quit IRC | 00:03 | |
*** TrevorV has quit IRC | 00:10 | |
*** zzzeek has quit IRC | 00:47 | |
*** zzzeek has joined #openstack-lbaas | 00:49 | |
*** openstackgerrit has quit IRC | 00:58 | |
*** sapd1 has joined #openstack-lbaas | 01:14 | |
*** sapd1 has quit IRC | 01:19 | |
*** sapd1 has joined #openstack-lbaas | 01:21 | |
*** sapd1 has quit IRC | 01:25 | |
*** zzzeek has quit IRC | 02:13 | |
*** zzzeek has joined #openstack-lbaas | 02:14 | |
*** zzzeek has quit IRC | 02:28 | |
*** zzzeek has joined #openstack-lbaas | 02:29 | |
*** zzzeek has quit IRC | 02:34 | |
*** zzzeek has joined #openstack-lbaas | 02:36 | |
*** tamas_erdei has joined #openstack-lbaas | 02:54 | |
*** terdei has quit IRC | 02:57 | |
*** spatel has joined #openstack-lbaas | 03:11 | |
*** psachin has joined #openstack-lbaas | 03:17 | |
*** xgerman has quit IRC | 03:31 | |
*** zzzeek has quit IRC | 03:44 | |
*** zzzeek has joined #openstack-lbaas | 03:44 | |
*** rcernin has quit IRC | 03:46 | |
*** rcernin has joined #openstack-lbaas | 03:50 | |
*** sapd1 has joined #openstack-lbaas | 03:58 | |
*** zzzeek has quit IRC | 04:26 | |
*** zzzeek has joined #openstack-lbaas | 04:27 | |
*** zzzeek has quit IRC | 04:41 | |
*** zzzeek has joined #openstack-lbaas | 04:45 | |
*** rcernin has quit IRC | 05:39 | |
*** devfaz has quit IRC | 05:52 | |
*** rcernin has joined #openstack-lbaas | 06:08 | |
*** gcheresh has joined #openstack-lbaas | 06:24 | |
*** zzzeek has quit IRC | 06:41 | |
*** zzzeek has joined #openstack-lbaas | 06:45 | |
*** rcernin has quit IRC | 06:45 | |
*** rcernin has joined #openstack-lbaas | 07:01 | |
*** damien_r has joined #openstack-lbaas | 07:01 | |
*** zzzeek has quit IRC | 07:03 | |
*** zzzeek has joined #openstack-lbaas | 07:05 | |
*** damien_r has quit IRC | 07:06 | |
*** lxkong has quit IRC | 07:09 | |
*** vishalmanchanda has joined #openstack-lbaas | 07:19 | |
*** spatel has quit IRC | 07:20 | |
*** zzzeek has quit IRC | 07:24 | |
*** rcernin has quit IRC | 07:25 | |
*** zzzeek has joined #openstack-lbaas | 07:28 | |
*** luksky has joined #openstack-lbaas | 08:04 | |
*** rpittau|afk is now known as rpittau | 08:10 | |
*** damien_r has joined #openstack-lbaas | 08:20 | |
*** damien_r has quit IRC | 08:24 | |
*** damien_r has joined #openstack-lbaas | 08:25 | |
*** zzzeek has quit IRC | 08:57 | |
*** zzzeek has joined #openstack-lbaas | 08:59 | |
*** lxkong has joined #openstack-lbaas | 09:00 | |
*** devfaz has joined #openstack-lbaas | 09:24 | |
*** openstackgerrit has joined #openstack-lbaas | 09:40 | |
openstackgerrit | XiaoYu Zhu proposed openstack/octavia master: Alternative Distributor for L3 Active-Active, N+1 Amphora Setup https://review.opendev.org/c/openstack/octavia/+/746688 | 09:40 |
*** zzzeek has quit IRC | 10:41 | |
*** zzzeek has joined #openstack-lbaas | 10:42 | |
*** sapd1 has quit IRC | 11:52 | |
*** gcheresh has quit IRC | 11:57 | |
*** gcheresh has joined #openstack-lbaas | 12:03 | |
*** sapd1 has joined #openstack-lbaas | 12:40 | |
*** TrevorV has joined #openstack-lbaas | 13:00 | |
*** gcheresh has quit IRC | 13:02 | |
*** damien_r has quit IRC | 13:05 | |
*** zzzeek has quit IRC | 13:10 | |
*** zzzeek has joined #openstack-lbaas | 13:11 | |
*** damien_r has joined #openstack-lbaas | 13:12 | |
*** gcheresh has joined #openstack-lbaas | 13:12 | |
*** zzzeek has quit IRC | 13:16 | |
*** zzzeek has joined #openstack-lbaas | 13:18 | |
*** spatel has joined #openstack-lbaas | 13:19 | |
*** zzzeek has quit IRC | 13:26 | |
*** zzzeek has joined #openstack-lbaas | 13:28 | |
*** damien_r has quit IRC | 13:50 | |
*** damien_r has joined #openstack-lbaas | 14:04 | |
*** sapd1 has quit IRC | 14:05 | |
*** tkajinam has quit IRC | 14:47 | |
*** tkajinam has joined #openstack-lbaas | 14:47 | |
*** tkajinam has quit IRC | 15:18 | |
rm_work | johnsom: the mishmash of clients we use to contact all the various services (nova, neutron, glance, keystone) all have different ways (or in some cases no way) of passing a cert to use for mutual TLS auth T_T | 16:07 |
rm_work | we're trying to set that up and it looks like it's going to be a bit of a nightmare | 16:07 |
rm_work | may require a number of patches, and some of them may depend on patches to the upstream service clients to even support this <_< | 16:08 |
johnsom | Yeah, many of the other projects don't understand two-way authentication. | 16:08 |
johnsom | They are more focused on token auth | 16:08 |
rm_work | seems glance client does support it (takes cert_file and key_file), and nova does (but only takes cert_file? wtf?) | 16:09 |
rm_work | neutron doesn't at all | 16:09 |
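A minimal sketch of the session-based approach, assuming the clients honor a shared keystoneauth1 session (the endpoint, credentials, and cert paths below are placeholders):

    # Python: mutual-TLS sketch via keystoneauth1
    from keystoneauth1 import identity, session
    import glanceclient

    auth = identity.Password(
        auth_url='https://keystone.example.com/v3',  # placeholder
        username='octavia', password='secret',
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')

    # keystoneauth1 hands `cert` straight to requests, so a
    # (cert, key) tuple enables two-way TLS for every call made
    # through this session -- when the client actually uses it.
    sess = session.Session(
        auth=auth,
        cert=('/etc/octavia/certs/client.pem',
              '/etc/octavia/certs/client.key'))

    glance = glanceclient.Client('2', session=sess)

As rm_work notes, the catch is that each legacy client also has its own cert_file/key_file plumbing (or none at all), so a shared session alone may not cover every code path.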
johnsom | rm_work Any progress on getting stable/stein requirements fixed? | 16:13 |
rm_work | i just got git-review working again literally 5 min ago | 16:13 |
rm_work | so will check now | 16:14 |
rm_work | will take a bit to rebuild environments | 16:17 |
*** psachin has quit IRC | 16:38 | |
*** servagem has quit IRC | 16:40 | |
mchlumsky | hi! I am trying to use the tempest octavia plugin in our monitoring (run a LB scenario every X minutes) and I noticed that there is a requirement for admin credentials for all tests because LoadBalancerBaseTest sets it in credentials and all test classes inherit from it. I naively removed admin from credentials and | 16:42 |
mchlumsky | LoadBalancerBaseTest.setup_clients() and got quite a few scenarios to pass anyway. I'd like to not have to use admin credentials in my monitoring, but what I did is dirty. Any thoughts on this? | 16:42 |
johnsom | The various roles that are setup are mostly for API testing to validate that our RBAC is working correctly. | 16:44 |
*** dulek has quit IRC | 16:44 | |
johnsom | This is a downside to the tempest plugin design that there is one credential setup for the whole plugin. | 16:44 |
*** servagem has joined #openstack-lbaas | 16:45 | |
johnsom | I think there is a way to override that credential setup however via the tempest configuration file. I haven't done it, but I think rm_work has. Let me see if I can find you a link | 16:45 |
johnsom | mchlumsky https://docs.openstack.org/tempest/latest/configuration.html#pre-provisioned-credentials | 16:47 |
johnsom | If you are only running like a smoke test, one of the scenarios, you can probably use this to only use a "test" credential. | 16:48 |
johnsom | The API tests however will need a proper set as those tests also cover the Role Based Access Control (RBAC). | 16:48 |
*** dulek has joined #openstack-lbaas | 16:53 | |
mchlumsky | thank you, I'll dig further. It looks like admin_username can be set. Maybe I can set it to a non-admin account and if it's not used it won't matter that it's not an admin account. | 17:03 |
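For reference, the pre-provisioned mechanism johnsom linked amounts to pointing tempest at a static accounts file instead of letting it create credentials dynamically; the option names are tempest's, the account values below are placeholders:

    # tempest.conf
    [auth]
    use_dynamic_credentials = False
    test_accounts_file = /etc/tempest/accounts.yaml

    # accounts.yaml -- one entry per concurrent test worker
    - username: 'lb-monitor'
      project_name: 'lb-monitoring'
      password: 'placeholder'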
rm_work | johnsom: so on pep8, these bandit failures look legit? | 17:09 |
rm_work | or rather, maybe not legit but bandit is RUNNING and detecting things, though i think the things it's detecting are not real issues | 17:09 |
rm_work | because it's running on our test-files | 17:10 |
rm_work | IE: exclusion is not working | 17:10 |
rm_work | I believe I ran into something like this before | 17:10 |
rm_work | meanwhile, requirements is failing because of `networkx`? | 17:10 |
haleyb | who was the one who added bandit? :) there's been a bunch of failures everywhere due to new pip i believe, it's a mess | 17:11 |
johnsom | The issue isn't results (those have been fine for a long time); it's that stein is running py27 and pulling in the latest bandit, which is py3-only. Getting the requirements and lower-constraints right is the issue. This issue is only on stein BTW | 17:11 |
rm_work | yeah my local bandit issues match what zuul saw | 17:11 |
rm_work | so my tests are working | 17:11 |
rm_work | i can run with this | 17:11 |
rm_work | hmm ok | 17:11 |
rm_work | i mean i checked out your patch, shouldn't the tox.ini be set up to use py27 for pep8? | 17:12 |
johnsom | This log: https://zuul.opendev.org/t/openstack/build/253e25a319fa4e06863c684d1d885b56/log/job-output.txt | 17:12 |
rm_work | ahh ok, so you are not worried about the fact that pep8 is failing on that branch | 17:12 |
johnsom | Ah, line links are working again: https://zuul.opendev.org/t/openstack/build/253e25a319fa4e06863c684d1d885b56/log/job-output.txt#11962 | 17:13 |
rm_work | right, k | 17:14 |
johnsom | That is all I'm trying to solve. My problem is the tox requirements/lower-constraints always passes for me in my VM but fails the gate jobs. My VM is messed up somehow and I was hoping someone else with a working VM could take a pass at fixing those constraints issues. | 17:14 |
rm_work | yeah my requirements job fails | 17:14 |
rm_work | locally | 17:15 |
rm_work | but in a really bad way, more than just bandit | 17:15 |
johnsom | Yeah, I borked my system somehow. | 17:15 |
rm_work | hmmm the tox.ini requirements job looks weird to me for stein | 17:19 |
rm_work | need to compare with other releases | 17:20 |
rm_work | but also, for the record: | 17:22 |
rm_work | Correct: https://github.com/openstack/octavia/blob/stable/stein/tox.ini#L140 | 17:22 |
rm_work | Incorrect: https://github.com/openstack/octavia/blob/stable/stein/tox.ini#L81 | 17:22 |
rm_work | of course that gives me a host of other issues, but i will try to ignore that and focus on your concern presently | 17:24 |
rm_work | i don't exactly understand why this job is running requirements with py27 when the tox.ini has requirements set as basepython=python3 | 17:30 |
rm_work | i guess it auto-downgrades if it isn't available? | 17:30 |
johnsom | Well, the gate job is defined in the zuul repos, it probably overrides it for stein | 17:33 |
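For context, the stanza rm_work links declares the interpreter roughly like this, but a zuul job definition can still pin the node and interpreter regardless of what the file says (a sketch, not the verbatim file):

    # tox.ini -- sketch
    [testenv:requirements]
    basepython = python3
    # the stable/stein gate job can still run this env under py27
    # if the zuul job definition selects a py2 interpreter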
rm_work | uhhh | 17:42 |
rm_work | it looks like your issue is actually in octavia-lib | 17:42 |
johnsom | Yeah, I think it is also impacted | 17:42 |
rm_work | i mean, not also, it IS the impact here | 17:42 |
rm_work | this error is during the octavia-lib install | 17:43 |
*** rpittau is now known as rpittau|afk | 17:43 | |
rm_work | and yeah octavia-lib has no requirements lines for python_version==2.7 | 17:43 |
rm_work | if we merged a fix there to add bandit for 2.7 | 17:43 |
rm_work | this might be better | 17:43 |
rm_work | gotta figure out how to test that... | 17:44 |
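One way to express such a fix in octavia-lib's test-requirements.txt is a PEP 508 environment marker; the exact version cap below is an assumption, not the merged change:

    # test-requirements.txt -- illustrative only
    bandit>=1.1.0,<1.6.3;python_version<'3'   # cap version is an assumption
    bandit>=1.1.0;python_version>='3'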
*** xgerman has joined #openstack-lbaas | 17:54 | |
*** ianychoi__ has quit IRC | 19:44 | |
*** luksky has quit IRC | 19:51 | |
*** TrevorV has quit IRC | 19:56 | |
*** luksky has joined #openstack-lbaas | 20:05 | |
*** admin0 has joined #openstack-lbaas | 20:53 | |
*** itsjg has joined #openstack-lbaas | 21:02 | |
*** gcheresh has quit IRC | 21:12 | |
admin0 | hi all .. is there a way to kickstart the amphora creation process? | 21:44 |
admin0 | i had a wrong ip in the container (using openstack-ansible) | 21:44 |
admin0 | i nuked the containers, recreated them (so that they are in the correct ip range) | 21:44 |
johnsom | What state is the LB in now? | 21:46 |
johnsom | Be sure to gracefully shut down your OpenStack containers; if not, you can get things stuck in certain states. We have a patch for that in Octavia, but it's not enabled by default yet. | 21:47 |
admin0 | i nuked the lbs before deleting and re-creating the container | 21:48 |
johnsom | Ok, and after you re-created the container, you created a new load balancer? | 21:51 |
*** sorrison has joined #openstack-lbaas | 21:51 | |
admin0 | not yet .. i was hoping to see a default amphora image boot up | 21:51 |
johnsom | Oh, you have spares pool enabled? | 21:52 |
admin0 | its set to 1 by default | 21:52 |
admin0 | i did not change it | 21:52 |
johnsom | Yeah, so housekeeping should maintain that pool. If you do "openstack server list --all" do you see an amphora? | 21:52 |
admin0 | i don't see it .. then the question is .. how frequently does housekeeping kick in? | 21:54 |
johnsom | It's a configuration setting. Default is every thirty seconds it will check the pool. | 21:55 |
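The relevant octavia.conf options, with values matching what johnsom and admin0 describe (option names are from octavia's [house_keeping] section):

    [house_keeping]
    spare_check_interval = 30      # seconds between pool checks
    spare_amphora_pool_size = 1    # amphorae to keep on standby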
johnsom | if the server list --all doesn't show one, check "openstack loadbalancer amphora list". If it sees one, but it's no longer in nova, somehow it got deleted from nova but Octavia still thinks it's present. In which case you can "openstack loadbalancer amphora delete <ID>" to clear the database ghost record. | 21:57 |
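Putting those steps together as a shell sequence (the <ID> placeholder comes straight from the advice above):

    # is the amphora VM still in nova?
    openstack server list --all

    # does Octavia still have a record of one?
    openstack loadbalancer amphora list

    # record exists in Octavia but not in nova: clear the ghost
    openstack loadbalancer amphora delete <ID>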
*** rcernin has joined #openstack-lbaas | 21:59 | |
johnsom | Normally the controller would notice that it was a failed spare, but since you had networking problems before, the heartbeat never made it to the controllers and that automatic repair process never started. | 22:00 |
*** damien_r has quit IRC | 22:11 | |
admin0 | are the container IPs hardcoded into the database? | 22:12 |
admin0 | because the controller IPs have changed, is there a possibility the db does not know about the new ones | 22:12 |
johnsom | No | 22:14 |
johnsom | The controller IPs are stored in the octavia.conf file. They get stamped into the amphora when a load balancer is created via cloud-init/config-drive/metadata, etc. | 22:16 |
johnsom | So that list should have been updated when you re-deployed the containers | 22:16 |
johnsom | Well, I guess there is only one container at the moment | 22:17 |
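The list in question lives in octavia.conf; the IP below is a placeholder:

    [health_manager]
    # heartbeat endpoints stamped into each amphora at LB-creation
    # time via config drive, so amphorae built before the change
    # keep the old addresses
    controller_ip_port_list = 172.29.236.10:5555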
*** luksky has quit IRC | 22:18 | |
*** spatel has quit IRC | 22:19 | |
admin0 | yeah . | 22:19 |
admin0 | i will do a full playbook run and see if it makes a difference | 22:19 |
*** luksky has joined #openstack-lbaas | 22:32 | |
admin0 | johnsom, https://gist.githubusercontent.com/a1git/851649fa6f76ebd6e4782f3d1c707501/raw/8970bdd6d4367df81474166f9aaed35c07c52013/gistfile1.txt -- is this caused by octavia ? | 22:44 |
admin0 | the firewall rules | 22:44 |
johnsom | admin0 No, I don't think so. That looks like a neutron bug of some sort | 22:44 |
admin0 | johnsom, https://review.opendev.org/c/openstack/neutron/+/740588 .. looks like a patch is out | 22:55 |
johnsom | Hmm, yeah, we don't do anything with ebtables, via SG or not, so it should have no relation to anything Octavia is doing. | 22:56 |
*** tkajinam has joined #openstack-lbaas | 23:01 | |
*** vishalmanchanda has quit IRC | 23:05 | |
*** strigazi has quit IRC | 23:35 | |
*** strigazi has joined #openstack-lbaas | 23:37 | |
*** gthiemonge has quit IRC | 23:40 | |
*** gthiemonge has joined #openstack-lbaas | 23:43 |