opendevreview | Brian Haley proposed openstack/neutron-fwaas master: Account for iptables-save output spacing differences https://review.opendev.org/c/openstack/neutron-fwaas/+/929658 | 00:30 |
opendevreview | Merged openstack/ovn-bgp-agent stable/2024.2: Update .gitreview for stable/2024.2 https://review.opendev.org/c/openstack/ovn-bgp-agent/+/929060 | 08:21 |
opendevreview | Merged openstack/ovn-bgp-agent stable/2024.2: Update TOX_CONSTRAINTS_FILE for stable/2024.2 https://review.opendev.org/c/openstack/ovn-bgp-agent/+/929063 | 08:30 |
opendevreview | Lajos Katona proposed openstack/neutron master: DNM: test functional jobs https://review.opendev.org/c/openstack/neutron/+/928953 | 09:00 |
opendevreview | Roman Safronov proposed openstack/neutron-tempest-plugin master: Skip NetworkWritableMtuTest when driver is ML2/OVN https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/929633 | 09:18 |
opendevreview | Rodolfo Alonso proposed openstack/neutron-tempest-plugin master: Add 2024.2 (Dalmatian) stable jobs https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/929592 | 09:20 |
*** mnasiadka1 is now known as mnasiadka | 12:28 | |
opendevreview | Rodolfo Alonso proposed openstack/neutron-fwaas stable/2024.2: Stable Only: Use 2024.2 tempest job https://review.opendev.org/c/openstack/neutron-fwaas/+/929629 | 12:34 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: Unbreak lb fip status update when no ls are linked https://review.opendev.org/c/openstack/neutron/+/929772 | 12:57 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: don't serialize all switches on each lb checked https://review.opendev.org/c/openstack/neutron/+/929774 | 13:06 |
lajoskatona | ykarel: Hi, I can't recall if we ever discussed removing weekly jobs from stadiums on stable branches (i.e.: https://opendev.org/openstack/networking-bagpipe/src/branch/stable/2024.1/.zuul.yaml#L74 ) ? | 14:02 |
lajoskatona | ykarel: I faintly remember that we said to remove these jobs but I suppose I just forgot to delete those lines from the .zuul.yamls.... | 14:04 |
ralonsoh | lajoskatona, now you are here: do you agree to remove all "*with-sqlalchemy-main" from stadium projects? | 14:09 |
ralonsoh | I don't think we need to continue testing it | 14:09 |
lajoskatona | ralonsoh: yeah, let's clean these things out | 14:10 |
ralonsoh | I'll push some patches today | 14:10 |
lajoskatona | ralonsoh: thanks | 14:10 |
opendevreview | Brian Haley proposed openstack/neutron master: Bump skip-level lower version to stable/2024.1 https://review.opendev.org/c/openstack/neutron/+/929787 | 14:18 |
opendevreview | Brian Haley proposed openstack/neutron master: Update jobs based on testing runtime for 2025.1 https://review.opendev.org/c/openstack/neutron/+/929788 | 14:18 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: Unbreak lb fip status update when no ls are linked https://review.opendev.org/c/openstack/neutron/+/929772 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: don't serialize all switches on each lb checked https://review.opendev.org/c/openstack/neutron/+/929774 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: refactor: slightly more explicit return value https://review.opendev.org/c/openstack/neutron/+/929792 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: unindent some indents in _handle_lb_fip_cmds https://review.opendev.org/c/openstack/neutron/+/929793 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: don't calculate list of attached lbs for every lb https://review.opendev.org/c/openstack/neutron/+/929794 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: split out code to verify a lb member into a function https://review.opendev.org/c/openstack/neutron/+/929795 | 14:26 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: construct member dict in one go https://review.opendev.org/c/openstack/neutron/+/929796 | 14:26 |
opendevreview | Brian Haley proposed openstack/neutron master: Update jobs based on testing runtime for 2025.1 https://review.opendev.org/c/openstack/neutron/+/929788 | 14:28 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: split out code to verify a lb member into a function https://review.opendev.org/c/openstack/neutron/+/929795 | 14:33 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: construct member dict in one go https://review.opendev.org/c/openstack/neutron/+/929796 | 14:33 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: refactor: remove some unused variables https://review.opendev.org/c/openstack/neutron/+/929810 | 14:52 |
opendevreview | Ihar Hrachyshka proposed openstack/neutron master: WIP: fix race in idl initialization https://review.opendev.org/c/openstack/neutron/+/929811 | 14:52 |
opendevreview | Takashi Kajinami proposed openstack/tap-as-a-service master: Squash tass.ini and taas_plugin.ini https://review.opendev.org/c/openstack/tap-as-a-service/+/929814 | 15:04 |
opendevreview | Takashi Kajinami proposed openstack/tap-as-a-service master: Squash tass.ini and taas_plugin.ini https://review.opendev.org/c/openstack/tap-as-a-service/+/929814 | 15:18 |
opendevreview | Takashi Kajinami proposed openstack/tap-as-a-service master: Squash tass.ini and taas_plugin.ini https://review.opendev.org/c/openstack/tap-as-a-service/+/929814 | 15:24 |
opendevreview | Brian Haley proposed openstack/neutron master: Update jobs based on testing runtime for 2025.1 https://review.opendev.org/c/openstack/neutron/+/929788 | 15:26 |
ykarel | lajoskatona i don't recall discussion for dropping those jobs from stable | 15:28 |
lajoskatona | ykarel, haleyb: then back to the topic of weekly periodic jobs of stadiums: I would remove them for unmaintained branches, what do you think? | 16:01 |
noonedeadpunk | hey folks! can you please give any advice on where to start debugging very slow api responses from neutron? e.g. `curl -X GET http://$IP:9696/v2.0/security-groups/${security_group} -H "X-Auth-Token: ${token}"` takes 50-54 seconds for me right now. | 16:09 |
noonedeadpunk | And that's not DB | 16:09 |
noonedeadpunk | as I don't really see anything staying in the DB for more than a second, so queries are more or less processed instantly there | 16:11 |
noonedeadpunk | and that's on 2024.1 with OVS driver | 16:13 |
noonedeadpunk | oh, well.... right after I've downgraded neutron-server to 2023.1 (from which I've just upgraded) exact same request takes now .05 sec | 16:18 |
noonedeadpunk | *.05 | 16:18 |
noonedeadpunk | argh - half a second :) | 16:18 |
noonedeadpunk | so really - 100 times difference between 2023.1 and 2024.1 o_O | 16:19 |
ihrachys | noonedeadpunk: enable debug; reproduce; note the request-id used; I think it's a candidate for a bug report. request-id is the most important thing; neutron logs a lot of info so usually devs can trace which part of the process takes a long time. but you can start profiling from get_security_group() function in ml2 plugin. | 16:21 |
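ihrachys' suggestion - trace a request by its request-id through the debug log - can be scripted. A minimal sketch (the log lines and req-id values below are invented for illustration; real oslo.log output carries more fields, but a leading timestamp plus the `req-...` token is enough for a rough trace):

```python
import re
from datetime import datetime

# Hypothetical neutron-server debug lines, simplified from oslo.log format.
LOG = """\
2024-09-17 16:09:01.100 DEBUG neutron.api [req-aaa] GET /v2.0/security-groups/123
2024-09-17 16:09:01.250 DEBUG neutron.db [req-aaa] fetching rules
2024-09-17 16:09:53.900 DEBUG neutron.api [req-aaa] returning 200
2024-09-17 16:09:02.000 DEBUG neutron.api [req-bbb] GET /v2.0/ports
2024-09-17 16:09:02.400 DEBUG neutron.api [req-bbb] returning 200
"""

def per_request_duration(log_text):
    """Group log lines by req-id and report the first->last timestamp delta."""
    spans = {}
    for line in log_text.splitlines():
        m = re.match(r"(\S+ \S+) DEBUG \S+ \[(req-\w+)\]", line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        first, last = spans.setdefault(m.group(2), [ts, ts])
        spans[m.group(2)] = [min(first, ts), max(last, ts)]
    return {req: (last - first).total_seconds()
            for req, (first, last) in spans.items()}

durations = per_request_duration(LOG)
```

With the sample data, `req-aaa` spans ~52.8s while `req-bbb` spans ~0.4s, so the slow request (and the gap between its log lines) stands out immediately.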
opendevreview | Brian Haley proposed openstack/neutron master: Use py312 for all neutron jobs https://review.opendev.org/c/openstack/neutron/+/929824 | 16:26 |
noonedeadpunk | I went on and reported https://bugs.launchpad.net/neutron/+bug/2081087 | 16:41 |
haleyb | noonedeadpunk: thanks. have you tested with 2024.2 RC1? I can't remember a change that would have caused that, or if we somehow fixed it later :) | 16:52 |
noonedeadpunk | no, not yet | 16:56 |
noonedeadpunk | was about to check if that could be related to eventlet/greenlet on their own | 16:56 |
noonedeadpunk | but it seems it's not | 16:56 |
noonedeadpunk | also playing on production, so trying to be careful - don't wanna mess up DB | 16:57 |
noonedeadpunk | it also could be that it was backported, but I'm just using older 2024.1 :) | 16:58 |
noonedeadpunk | 2024.2 would allow using uWSGI as well... | 16:59 |
noonedeadpunk | so far I know that it broke between 23.2.0 and 24.0.0 | 17:07 |
noonedeadpunk | haleyb: no, 25.0.0.0rc1 does not fix it | 17:17 |
noonedeadpunk | ah, sorry, I think my balancer re-enabled other backends :) | 17:21 |
noonedeadpunk | nah, still same | 17:24 |
haleyb | noonedeadpunk: so this is just doing an 'openstack security group show $group' ? Are there a lot of items there? | 17:24 |
noonedeadpunk | yup, it does that. Or openstack security group list for some specific project. Well, it's not very small, so I guess it depends. Let me count rules in there (it's not fast...) | 17:28 |
noonedeadpunk | around 100 rules | 17:28 |
noonedeadpunk | `openstack security group rule list` took over a minute as well... | 17:29 |
noonedeadpunk | nah, sorry, 50 rules | 17:29 |
haleyb | that's not too many rules imo. so you are admin? same result if doing with creds of the project? | 17:29 |
noonedeadpunk | yup, that's as admin. frankly I haven't tried with a tenant, as not sure our policy allows to get role assignment to the project. | 17:30 |
noonedeadpunk | so likely I'd need to reproduce the list of rules or smth to test as a user | 17:31 |
noonedeadpunk | probably worth mentioning that there's also a bgp dragent and vpnaas enabled in the env | 17:32 |
noonedeadpunk | but I was not touching them in my "test" backend at least. Will try to drop them from there even... | 17:33 |
ihrachys | sg api should not intersect with service plugins | 17:33 |
ihrachys | I've asked req-ids in the bug and here + logs. I think that's the next step.. | 17:33 |
haleyb | i was just curious if being admin it asked for the world, then filtered - i would have thought it applies the filter in the DB query call (?) | 17:34 |
noonedeadpunk | ^ this totally happens for ports | 17:34 |
noonedeadpunk | and was also a noticeable regression between Xena and 2023.1 IIRC | 17:35 |
noonedeadpunk | but it was bearable, as you generally don't ask for all ports... | 17:35 |
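haleyb's point - let the DB apply the filter instead of fetching everything and filtering in Python - in a stdlib sketch (the table and columns are invented, not neutron's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id INTEGER, project TEXT)")
conn.executemany("INSERT INTO ports VALUES (?, ?)",
                 [(i, "demo" if i % 2 else "admin") for i in range(10)])

# anti-pattern: ask for the world, filter in Python afterwards
everything = conn.execute("SELECT id, project FROM ports").fetchall()
filtered_in_python = [row for row in everything if row[1] == "demo"]

# better: push the filter into the query itself
filtered_in_db = conn.execute(
    "SELECT id, project FROM ports WHERE project = ?", ("demo",)).fetchall()
```

Both produce the same rows, but on a large table the first variant transfers and deserializes every row before discarding most of them, which is exactly the kind of cost that only shows up at production scale.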
noonedeadpunk | ihrachys: so all of these requests are slow. | 17:35 |
noonedeadpunk | I will attach logs right away | 17:35 |
ihrachys | noonedeadpunk: ok. but still, once we have logs and req-id, we can check which part of the process is slow. | 17:36 |
ihrachys | ack, will check logs when they are up | 17:36 |
noonedeadpunk | ihrachys: I've attached log to the bug report: https://launchpadlibrarian.net/750045694/neutron_log.txt | 17:45 |
noonedeadpunk | but it really doesn't show anything... | 17:47 |
noonedeadpunk | I was actually thinking about osprofiler earlier today, but never actually used it for real. | 17:47 |
ihrachys | noonedeadpunk: I doubt these are the only messages you see. are they? | 17:48 |
noonedeadpunk | these are the only | 17:48 |
noonedeadpunk | I've marked this backend as MAINT in LB, so no other requests coming to it | 17:48 |
noonedeadpunk | and this is the only thing I see with debug. | 17:49 |
noonedeadpunk | well, except config options which I cut after restart | 17:49 |
ihrachys | no messages between 17:42:32 and 17:42:50, regardless of req-id? | 17:50 |
noonedeadpunk | https://i.imgur.com/CnT2WNU.png | 17:51 |
ihrachys | thanks, interesting | 17:52 |
noonedeadpunk | ok, well. Does neutron-server coordinate/offload tasks to other "members" through rabbitmq? | 17:52 |
noonedeadpunk | as I did not check any other backends | 17:53 |
noonedeadpunk | like I would for nova-conductor or cinder-scheduler for instance... | 17:53 |
noonedeadpunk | as it feels that neutron-server is more "monolithic" but I can be wrong | 17:54 |
noonedeadpunk | but like log is exactly the same as it would be for 23.2.0, except slower :D | 17:55 |
ihrachys | no it's just neutron + db. I don't think it should communicate beyond neutron-server for this. (so not even agents) | 17:55 |
noonedeadpunk | ++ | 17:55 |
noonedeadpunk | I also just excluded neutron-lib from the equation, as I left `neutron-lib==3.11.1` while just downgrading/upgrading neutron between 24.0.0 and 23.2.0, with the same result | 17:59 |
noonedeadpunk | so, I guess I can try out some commits between these 2 tags now to narrow down the reason | 18:01 |
ihrachys | on first sight, there's really nothing inside get_security_group() that could take a long time. it fetches the SG + its rules and then serializes them. The latter operation may call into additional extensions to add fields as needed. wonder if something slows down before the entry point is hit (in middleware / policy layer) | 18:02 |
ihrachys | noonedeadpunk: that would be very helpful if you can binary search, yes. thanks. | 18:02 |
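The binary search ihrachys asks for is what `git bisect start <bad-sha> <good-sha>` automates over the real commit DAG; the same idea in a few lines of Python (the commit list and the "is it slow" predicate are made up for illustration):

```python
def bisect_first_bad(commits, is_bad):
    """Return the first bad commit, assuming commits are ordered oldest->newest,
    the last one is bad, and badness, once introduced, is never reverted."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is strictly after mid
    return commits[lo]

# Stand-ins for the 23.2.0..24.0.0 range; pretend the regression lands at "g".
commits = list("abcdefghij")
bad_from = commits.index("g")
first_bad = bisect_first_bad(commits, lambda c: commits.index(c) >= bad_from)
```

Each probe here corresponds to one `pip install "git+...@<sha>#egg=neutron"` plus a timed curl, so ten candidate commits need only ~4 installs.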
noonedeadpunk | ok, so whatever breaks it was merged quite a while after 24.0.0.0b2 | 18:18 |
* noonedeadpunk down to 10 patches | 18:25 | |
noonedeadpunk | there were also multiple levels of degradation... | 18:25 |
noonedeadpunk | one raised response time from 0.6 sec to 1.2 sec | 18:25 |
noonedeadpunk | and second from 1.2 to 26 lol | 18:26 |
noonedeadpunk | I mostly care about second one ofc | 18:26 |
ihrachys | I'd check both; do you run lots of requests just to get average, or is it just one? | 18:27 |
noonedeadpunk | like 5 | 18:27 |
noonedeadpunk | or well, when I see 26 I just do 2 :D | 18:28 |
ihrachys | :D | 18:28 |
noonedeadpunk | ihrachys: well, bad news for me... that's the result of https://review.opendev.org/c/openstack/neutron/+/908571 | 18:37 |
ihrachys | noonedeadpunk: why bad for you? | 18:39 |
noonedeadpunk | well, it seems to be a temp solution until a proper one is implemented | 18:40 |
noonedeadpunk | which kinda means it's time consuming and I'd likely need to use the pre-patch version, which was fixing another bug | 18:40 |
ihrachys | ack; ralonsoh see above, looks like something broke in sql queries for sg rules. | 18:42 |
noonedeadpunk | unless I'm really bad in git log... as `d55c591ecde2f6cc4c2cea64fb21a75b6343cd5a` does work for me for sure, but not d499e6421a7f15c18e9eb57fe50d71b80cd215d6 | 18:44 |
ihrachys | there's not much in between. the rest of patches are ovn/docs/agent side | 18:46 |
noonedeadpunk | looking at merges in the git tree - they come one after another... but all the `Fix` patches nearby are already borked | 18:47 |
haleyb | i wonder if changing to lazy='selectin' would work and/or help, until something better | 18:55 |
noonedeadpunk | I can easily check that | 18:56 |
haleyb | we did that other places, but i only play a DB person on TV | 18:56 |
haleyb | think you'd have to be on 2024.2 for that to work? don't know | 18:57 |
noonedeadpunk | but the patch was backported to... 2023.1? | 18:58 |
noonedeadpunk | so I was lucky not to hit that earlier I guess | 18:58 |
haleyb | i'm only thinking of when we started supporting 'selectin', i had to do a neutron-lib change for that | 18:58 |
haleyb | if it does help on master/2024.2 we'll have to figure something out | 18:59 |
noonedeadpunk | hm | 19:02 |
noonedeadpunk | I think I suck at git after all | 19:02 |
noonedeadpunk | ah, was trying to manually patch on another host | 19:04 |
* noonedeadpunk getting tired | 19:04 | |
noonedeadpunk | haleyb: so selectin executes in the same time as dynamic | 19:06 |
haleyb | so back to "fast" | 19:07 |
noonedeadpunk | and I have neutron-lib 3.11.1 now which is from 2024.1 u-c | 19:07 |
noonedeadpunk | it's still just 2 times slower than 2023.2 :D | 19:07 |
noonedeadpunk | but yeah, there was some other "regression" I believe down the road | 19:09 |
haleyb | ack, there might have been just certain neutron changes that needed a -lib change. 2x is better than 100x | 19:10 |
noonedeadpunk | but yeah, both selectin and dynamic do the job with 1.26 sec | 19:10 |
noonedeadpunk | I will try to pinpoint this 2x patch as well just after the dinner | 19:11 |
ihrachys | thanks. please post this data in the bug, really helpful. | 19:13 |
haleyb | we should ban lazy='joined' i guess? my initial change was getting rid of lazy='subquery' | 19:13 |
haleyb | https://docs.sqlalchemy.org/en/13/orm/loading_relationships.html#select-in-loading | 19:22 |
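The `lazy=` knob haleyb is talking about, on a toy model (not neutron's actual models): with `lazy="selectin"`, SQLAlchemy loads the rules for all fetched groups in a single follow-up `SELECT ... WHERE ... IN (...)` query, instead of a per-row lazy load or one big JOIN.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class SecurityGroup(Base):  # toy stand-in, not neutron's real model
    __tablename__ = "securitygroups"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # "selectin" eagerly loads rules with one extra IN-query per load;
    # compare lazy="joined" (LEFT JOIN, row explosion) and the removed
    # lazy="subquery" strategy.
    rules = relationship("SecurityGroupRule", lazy="selectin")

class SecurityGroupRule(Base):
    __tablename__ = "securitygrouprules"
    id = Column(Integer, primary_key=True)
    security_group_id = Column(Integer, ForeignKey("securitygroups.id"))
    protocol = Column(String)

engine = create_engine("sqlite://")  # in-memory DB just for the demo
Base.metadata.create_all(engine)

with Session(engine) as session:
    sg = SecurityGroup(id=1, name="default")
    sg.rules = [SecurityGroupRule(id=i, protocol="tcp") for i in range(50)]
    session.add(sg)
    session.commit()

with Session(engine) as session:
    sg = session.get(SecurityGroup, 1)  # rules arrive via one extra SELECT
    rule_count = len(sg.rules)
```

With 50 rules per group (the size in noonedeadpunk's environment), the difference between strategies is mostly about how many queries fire and how much duplicated data the JOIN variant drags back per parent row.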
noonedeadpunk | I wonder if I have not enough `join_buffer_size` - I bumped it to 4M but mysqltuner kept telling me it's not enough | 19:25 |
noonedeadpunk | but 4M I guess is already quite extreme.... | 19:26 |
ihrachys | haleyb: are we (have we) going to backport selectin to stable neutron-lib? just trying to understand what the plan for stable is. | 19:49 |
haleyb | ihrachys: i think we could backport it since it's just dependent on sqlalchemy>1.3 i think. part of (my) plan was to make the change in dalmatian and see if there were any bad side-effects, i don't think there have been | 19:53 |
haleyb | there was some part of my neutron change that needed that neutron-lib change, but it has been forgotten | 19:54 |
noonedeadpunk | btw, this second regression could also be "nasty". From what I've spotted, on 23.1.0 it's not only 2x faster (with lazy='selectin'), but also capable of returning the value from cache - if I issue the request a couple of times in a row, it responds in 0.08s | 19:59 |
noonedeadpunk | that never happens afterwards | 20:00 |
ihrachys | do you also bisect it? | 20:04 |
noonedeadpunk | yeah | 20:05 |
noonedeadpunk | I think I found it, just making sure right now | 20:06 |
ihrachys | haleyb: I wonder why noonedeadpunk hits the perf issue; do we hit it in gate? (we could check tempest sg tests by req-id where they fetch a sg.) | 20:08 |
ihrachys | if it's not the same in gate (we can validate by reverting the patch and seeing if it changes metrics), then there's some additional factor that is specific to the setup. | 20:09 |
ralonsoh | ihrachys, noonedeadpunk hmmm it makes sense that this patch introduces this degradation. The problem is that without this patch, postgres doesn't work | 20:10 |
noonedeadpunk | so frankly - we haven't spotted it in our full-scale sandbox with 500 networks/routers in it | 20:10 |
ralonsoh | I'm ok with reverting this patch, that clearly adds more sql calls to the DB | 20:10 |
ralonsoh | but we need to re-think what to do with postgres support | 20:10 |
noonedeadpunk | we caught it only in production. and basically due to nova | 20:11 |
haleyb | ralonsoh: so using selectin isn't an option there? | 20:11 |
noonedeadpunk | as `openstack server list --all-projects` went from 1m40s to 5m | 20:11 |
haleyb | and i realize it's late for ralonsoh | 20:11 |
ralonsoh | it is, indeed | 20:11 |
ralonsoh | I'll check that tomorrow morning | 20:11 |
noonedeadpunk | second patch bringing regression is also ralonsoh :D https://review.opendev.org/c/openstack/neutron/+/896273 | 20:12 |
ihrachys | I'd say we revert / patch forward to selectin and let psql folks report back if they are broken. :) | 20:12 |
ihrachys | noonedeadpunk: that's because ralonsoh is a patch beast | 20:12 |
haleyb | ralonsoh: yes, look in the bug, nothing on fire regarding getting dalmatian out the door | 20:12 |
ihrachys | you shoot at random into neutron repo - you hit Rodolfo's offspring | 20:12 |
ralonsoh | it doesn't make sense, the second patch should not affect | 20:13 |
noonedeadpunk | oh, yes, for sure - only those who do nothing make no mistakes | 20:13 |
ralonsoh | no, https://review.opendev.org/c/openstack/neutron/+/896273 cannot be reverted | 20:14 |
noonedeadpunk | https://launchpadlibrarian.net/750065640/regression_pinpint1.txt - that's regarding 896273 | 20:14 |
haleyb | and whatever jobs we have, like rally, didn't see it, or at least i don't even see it running there | 20:14 |
ihrachys | noonedeadpunk: the patch you link to is on POSTs though. Do you still GET? | 20:14 |
noonedeadpunk | um, yes? | 20:14 |
noonedeadpunk | I could be really following "wrong" git log... | 20:15 |
noonedeadpunk | so `294e1c60b41d3422bb830758e2ea6b6cf554ac46` works for me | 20:15 |
noonedeadpunk | and `78027da56ccb25d19ac2c3bc1c174acb2150e6a5` (which in log is the next one for me) is already not | 20:15 |
ihrachys | git log --oneline 294e1..78027 will show you all patches. git log is flat, not as helpful. | 20:16 |
ralonsoh | noonedeadpunk, please open a LP bug with this information, I'll check tomorrow morning | 20:16 |
noonedeadpunk | ralonsoh: I've placed all these in https://bugs.launchpad.net/neutron/+bug/2081087 | 20:17 |
noonedeadpunk | ihrachys: yeah. so they look like they come one after another https://paste.openstack.org/show/bYNu60dSNjZ8HGAm7Act/ | 20:17 |
ihrachys | if I were to blame one, I'd probably pick https://review.opendev.org/c/openstack/neutron/+/883907 | 20:18 |
noonedeadpunk | Well. I was very suspicious about it | 20:18 |
ralonsoh | we can revert this feature but we need another way to implement https://bugs.launchpad.net/neutron/+bug/2019960 | 20:18 |
ralonsoh | or just change the loading method | 20:19 |
ihrachys | since it apparently modifies serialization of sg rules with pulling sg through orm relationship (if I read it correctly) | 20:19 |
ralonsoh | it does, yes | 20:20 |
noonedeadpunk | but as you saw I do install by SHA.... | 20:20 |
noonedeadpunk | ie `pip install "git+https://opendev.org/openstack/neutron@78027da56ccb25d19ac2c3bc1c174acb2150e6a5#egg=neutron"` | 20:20 |
noonedeadpunk | so unless merge order was different... | 20:21 |
ihrachys | noonedeadpunk: as I said, git log output doesn't give you the full picture. can't show a DAG linearly. | 20:21 |
noonedeadpunk | well. Likely this one is already applied indeed | 20:22 |
ralonsoh | noonedeadpunk, how can I locally reproduce this? | 20:22 |
noonedeadpunk | ralonsoh: no idea - testing on production :D | 20:22 |
ralonsoh | what do I need? a lot of SG rules? | 20:22 |
haleyb | i can probably send a WIP patch for changing to selectin before EOD, just to get it rolling if that is a way forward | 20:23 |
noonedeadpunk | in my case there's 50 rules in SG | 20:23 |
noonedeadpunk | so it's not "a lot" I'd say | 20:23 |
ralonsoh | ok, I'll try tomorrow | 20:23 |
ihrachys | GET by id of a single SG with some rules | 20:23 |
ralonsoh | ok | 20:23 |
ralonsoh | https://launchpadlibrarian.net/750065640/regression_pinpint1.txt | 20:24 |
ralonsoh | I'll do it tomorrow | 20:24 |
ihrachys | I am not sure what explains the "caching" behavior noonedeadpunk described above, where without this patch consecutive requests were extremely quick. | 20:24 |
ralonsoh | I'm disconnecting now | 20:24 |
ihrachys | ralonsoh: good night! :) | 20:24 |
noonedeadpunk | ihrachys: yeah, https://review.opendev.org/c/openstack/neutron/+/883907/ is already in | 20:24 |
noonedeadpunk | you're right, it's ^ that's the actual thing bringing in the delays. But they're not that bad I guess given the benefit of being able to set a default sec group for the project/deployment (if I read it right) | 20:33 |
noonedeadpunk | well, it's x2 ofc, so not _that_ negligible, but well | 20:34 |
ihrachys | noonedeadpunk: I think it's worth exploring it at least. can you report a separate bug for this? | 20:39 |
noonedeadpunk | um, I've posted data in the same for now. will create a new one in the morning | 20:41 |
ihrachys | noonedeadpunk: and also, can we maybe drill down a bit on the caching behavior? do you measure this in neutron log timestamps or on curl side? I'd like to make sure the discrepancy is not somewhere above neutron api. | 20:41 |
noonedeadpunk | it's just 10:30pm here :) | 20:41 |
ihrachys | noonedeadpunk: thanks. iiuc it's also very late for you; have a rest and we can discuss / report tomorrow of course. | 20:41 |
noonedeadpunk | curl is launched on the "localhost" | 20:41 |
noonedeadpunk | so there's not much difference between curl and neutron response | 20:42 |
ihrachys | right. but there's still some path to take between neutron-server process and curl. e.g. load balancer / apache? | 20:42 |
noonedeadpunk | under "localhost" I mean same container where neutron-server is launched | 20:42 |
noonedeadpunk | nah, 10.153.11.9:9696 is actually exact same host directly | 20:43 |
noonedeadpunk | I've disabled backend on LB so couldn't use that anyway | 20:43 |
noonedeadpunk | But I can supply timings from neutron logs | 20:44 |
noonedeadpunk | and neutron running in eventlet - so no apache or anything else at all | 20:45 |
ihrachys | I am not mistrusting, just trying to isolate all the variables. :) thanks for this. | 20:46 |
noonedeadpunk | yeah-yeah, I get that. was just lazy to open second terminal for logs :p | 20:49 |
ihrachys | wonder if this is because we apply @cache.cache_method_results (dogpile backed?) on OwnerCheck._extract... for policy | 20:49 |
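The `@cache.cache_method_results` decorator ihrachys mentions is dogpile-backed in neutron; a rough stdlib stand-in for the suspected effect, using `functools.lru_cache` (dogpile adds TTLs and shared backends; everything named here is a hypothetical sketch, not neutron's policy code):

```python
import functools

class OwnerCheckish:
    """Hypothetical stand-in for a policy check with a cached lookup method."""

    def __init__(self):
        self.lookups = 0  # counts how often the "expensive" path actually runs

    # With a method, the self instance is part of the cache key; dogpile's
    # cache_method_results behaves similarly but with configurable backends.
    @functools.lru_cache(maxsize=None)
    def _extract(self, resource_id):
        self.lookups += 1  # stands in for a DB / keystone round trip
        return f"project-for-{resource_id}"

check = OwnerCheckish()
first = check._extract("sg-1")
second = check._extract("sg-1")  # cache hit: the method body never re-runs
```

If the old fast repeat-requests really came from this layer, the [cache] config noonedeadpunk has would explain why only the first request paid the full cost; worth checking whether the new code path bypasses the decorated function.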
noonedeadpunk | yeah, I do have [cache] configured there | 20:49 |
opendevreview | Brian Haley proposed openstack/neutron master: Change to use selectin for all DB lazy loads https://review.opendev.org/c/openstack/neutron/+/929851 | 20:50 |
haleyb | lets see how that goes | 20:51 |
haleyb | and it might be we need to tweak any rally tests, assuming they would have found it | 20:52 |
ihrachys | yes. maybe thresholds are not tight enough there. | 20:56 |
ihrachys | what I don't understand about the extension is 1) why the attribute has to be an api field returned on each request; 2) why we have to calculate it when policy doesn't even use it (I think that's the default behavior). | 20:57 |
ihrachys | this may be an artifact of how policy layer works (I am not in the know here), but seems like a waste if one doesn't need it in the first place. plus the name of the attribute for sg rule returned to api user seems weird - "sg rule belongs to default sg", yeah I know - I am fetching the default SG... | 21:00 |
ihrachys | side note: the new field wasn't documented in api-ref https://docs.openstack.org/api-ref/network/v2/#security-group-default-rules-security-group-default-rules haleyb should it be mentioned there? (the implementation probably hasn't followed a new feature checklist. which we probably don't have? :) | 21:03 |
haleyb | ihrachys: so the rally job variables are in rally-jobs/task-neutron.yaml - we only add 20 SG rules, if we can tweak that to start failing, and patches help, then can be more confident going forward we won't regress | 21:05 |
* haleyb only looked quickly | 21:05 | |
haleyb | ihrachys: perfect thing to document (regarding api-ref) :) | 21:06 |
ihrachys | ETOOMANYSIDEQUESTS | 21:06 |
haleyb | AI to the rescue! or cloning | 21:07 |
ihrachys | reported the rally follow up here https://bugs.launchpad.net/neutron/+bug/2081108 | 21:08 |
ihrachys | api-ref doc bug https://bugs.launchpad.net/neutron/+bug/2081109 | 21:09 |
ihrachys | Dmitri will report another bug for the perf degradation tomorrow... | 21:10 |
ihrachys | what else have we missed.. | 21:10 |
ihrachys | "the attribute name / why do we even calculate it" I don't know if I'm just missing something about policy layer... | 21:11 |
haleyb | i have only missed my other day job :) | 21:11 |
ihrachys | haleyb: who needs that | 21:12 |
haleyb | my retirement fund does :-p | 21:13 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!