*** shaolin__ is now known as shaolin_ | 10:00
*** dtantsur_ is now known as dtantsur | 14:51
JayF | I think search is broken on the mailing list? | 16:05
JayF | Looks like new items are not being indexed | 16:05 |
JayF | reproducer: search "eventlet", search newest first | 16:05 |
JayF | alternatively, my browser may have just failed to execute the JS until I did it a third time (WTF?) | 16:05 |
tonyb | LOL | 16:06 |
tonyb | JayF: so it is or is not something I should look at? | 16:07 |
JayF | PEBKAC/PICNIC/err: id10t/take your pick | 16:07 |
tonyb | JayF: I'm happy to go with something more like "I hate computers". Which you will hear me mumble many, many times each day :) | 16:09 |
fungi | JayF: or it might be slow and then caches results? i went to the main openstack-discuss archive page, entered "eventlet" without quotes in the search field, and got back 52 pages of results | 16:15 |
JayF | This was literally the drop-down field not triggering the re-sort until I did it a second time | 16:15
JayF | which I don't blame the website for until I see it happen more than once | 16:15 |
JayF | because firefox just tends to do weird stuff at times these days, and I compile my own which is a multiplier for that | 16:16 |
fungi | oh, got it | 16:16 |
fungi | i did a similar search for "rabbit" which returned 575 results and the top scoring result was from ~4 years ago. i changed the sort drop-down to latest first and it immediately re-sorted the result for me, bubbling a result from yesterday to the top of the page | 16:18 |
fungi | (also firefox) | 16:18 |
fungi | but yeah, i wouldn't be surprised if ff is inconsistently squirrely | 16:18 |
fungi | the embedded search engine also might briefly lock when it's indexing additions, not really sure what the underlying architecture there is like | 16:21 |
clarkb | I've just updated the meeting agenda and will send it out momentarily | 16:40 |
tonyb | Thanks | 16:41 |
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Add DNS records for mirror02.ord.rax https://review.opendev.org/c/opendev/zone-opendev.org/+/900922 | 16:41 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add new mirror02.ord.rax to inventory https://review.opendev.org/c/opendev/system-config/+/900923 | 16:45 |
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Add DNS records for mirror02.ord.rax https://review.opendev.org/c/opendev/zone-opendev.org/+/900922 | 16:50 |
clarkb | tonyb: some notes on ^ | 16:55 |
tonyb | Thanks | 16:55 |
clarkb | tonyb: oh you also need to bump the serial | 16:55 |
clarkb | I use `date +%s` values | 16:55 |
clarkb | also we tick over to 1700000000 today | 16:56 |
clarkb | I think | 16:56 |
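For reference, clarkb's serial scheme in shell form — a minimal sketch using standard GNU coreutils; the zone file itself isn't shown here:

```sh
# Seconds-since-epoch makes a monotonically increasing zone serial.
serial=$(date +%s)
echo "serial: $serial"

# The epoch ticks over to 1700000000 on 2023-11-14 at 22:13:20 UTC:
date -u -d @1700000000
```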
tonyb | Thanks. I caught the whitespace and serial issues. | 16:56 |
fungi | i was just about to leave the same comments. saved me the trouble! | 16:57 |
tonyb | I assumed the LE stuff came later. | 16:57 |
clarkb | tonyb: no we do the LE stuff upfront and that allows the ansible deployment to do everything at once | 16:57 |
clarkb | it is the big upside to having LE confirmation in DNS, we don't need to do multiple passes in deployment | 16:57 |
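Schematically, that single-pass pattern means the new host's records land alongside a CNAME delegating its ACME DNS-01 challenge. A hedged sketch — the names, addresses, and challenge target below are illustrative, not the actual records from 900922:

```
mirror02.ord.rax                  IN  A      203.0.113.10    ; illustrative
mirror02.ord.rax                  IN  AAAA   2001:db8::10    ; illustrative
_acme-challenge.mirror02.ord.rax  IN  CNAME  acme.opendev.org.
```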
tonyb | Nice. | 16:58 |
tonyb | I assume I do *NOT* want to update the mirror.ord CNAME until we have verified the new server is good | 16:59 |
clarkb | correct | 17:00 |
clarkb | that will be the last step to put the new server into production | 17:00 |
tonyb | Got it. | 17:01 |
SvenKieske | has anybody else trouble reaching review.opendev.org? global down detectors think it's just me | 17:02 |
clarkb | SvenKieske: no issues from here | 17:03 |
SvenKieske | okay it's just really slow here | 17:03 |
SvenKieske | thx | 17:03 |
clarkb | really slow -> that could be a symptom of ipv6 -> ipv4 fallback | 17:03 |
clarkb | if your ipv6 path isn't working | 17:03 |
fungi | or a choked peering point between two backbone providers somewhere in the path from your isp to vexxhost | 17:04 |
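One quick way to compare the two paths clarkb and fungi describe — these are stock curl flags, so just a sketch of the kind of check involved:

```sh
# Force each address family in turn and compare total transfer time.
curl -4 -o /dev/null -sS -w 'ipv4: %{time_total}s\n' https://review.opendev.org/
curl -6 -o /dev/null -sS -w 'ipv6: %{time_total}s\n' https://review.opendev.org/
```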
SvenKieske | I actually seem to have some packet loss (with icmp) to review.opendev.org on both ipv4 and ipv6, though more on ipv6, interesting | 17:05
SvenKieske | I guess that's a hint from the universe to stop working for today | 17:05 |
tonyb | I spent so long teaching vim to always use 4 spaces instead of a tab that I can't seem to make it do the right thing for the zone file ... please hold | 17:10
clarkb | tonyb: I use augroup on file extension matches to set file type specific rules | 17:13 |
* tonyb tries that | 17:14
clarkb | I actually have it set up to only do special rules for certain file types and then don't have special rules for dns zone files | 17:14 |
tonyb | Ahh I should probably try that. | 17:17 |
clarkb | looks like my vim recognizes the bind files as filetype bindzone. I think that means you can use something like: https://paste.opendev.org/show/bgwBX1vDlFjHkw9r9gj8/ | 17:22 |
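Going only by clarkb's description (the paste contents aren't reproduced here), the snippet plausibly has roughly this shape — zone files keep literal tabs while everything else keeps its own rules:

```vim
" Apply zone-file-friendly settings only when vim detects the bindzone
" filetype; other files keep the usual 4-space configuration.
augroup bindzone_indent
  autocmd!
  autocmd FileType bindzone setlocal noexpandtab tabstop=8 shiftwidth=8
augroup END
```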
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Add DNS records for mirror02.ord.rax https://review.opendev.org/c/opendev/zone-opendev.org/+/900922 | 17:23 |
clarkb | I have a sudden urge to update my vimrc :) | 17:23 |
tonyb | You're welcome? | 17:24 |
clarkb | tonyb: some copy pasta updates needed (inline comments for that) otherwise lgtm | 17:28 |
fungi | i'm lazy, so i just '/ ' and then 4s<^V><tab> | 17:29 |
opendevreview | Tony Breeds proposed opendev/zone-opendev.org master: Add DNS records for mirror02.ord.rax https://review.opendev.org/c/opendev/zone-opendev.org/+/900922 | 17:31 |
tonyb | fungi: That'd work. | 17:54 |
opendevreview | Merged opendev/zone-opendev.org master: Add DNS records for mirror02.ord.rax https://review.opendev.org/c/opendev/zone-opendev.org/+/900922 | 19:11 |
artom | Heya, so we're trying to test huge pages in the gate, and need the PDPE1GB CPU flag. We're using the 'nested-virt-ubuntu-jammy' label for our nodeset, and as luck would have it, whenever we get a VM from Vexxhost for that label, we have that flag. Great! | 19:58 |
fungi | i take it there's no way to emulate or mock that | 19:59 |
artom | Looking at the project-config however, it looks as though OVH _also_ provide that label, so I'd like to check whether their VMs also have that flag. I haven't come up with anything smarter than just rechecking https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/900824 until it happens to grab an OVH VM, but so far no luck. | 19:59 |
artom | fungi, the value is specifically in the integration tests on real hardware | 19:59 |
artom | So I'm wondering - is there a way to "force" a specific nodepool provider - or maybe the question I should be asking, are OVH _actually_ still providing that label? Like, maybe the flavor doesn't exist on their cloud, or we have no more quota or something... | 20:00 |
fungi | artom: not really, you can push multiple do-not-merge changes with zuul config adjusted to just the jobs you're trying with, but we don't have a way to limit jobs to specific donor providers. i'll take a quick look at the config for ovh | 20:02 |
artom | It just seems awfully sus that even after half a dozen rechecks, we're always landing on Vexx | 20:03
fungi | i agree that's unexpected | 20:03 |
clarkb | artom: you can see https://opendev.org/openstack/project-config/src/branch/master/nodepool/nl04.opendev.org.yaml is configured to provide the flavor in ovh | 20:03 |
clarkb | and it is the same flavor as all the other nodes in ovh | 20:03 |
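Schematically, the launcher config clarkb points at pairs each label with a flavor per provider pool. A simplified sketch — the real nl04.opendev.org.yaml carries more fields, and the provider, pool, and flavor names here are placeholders:

```yaml
providers:
  - name: ovh-bhs1          # placeholder provider name
    pools:
      - name: main
        labels:
          - name: nested-virt-ubuntu-jammy
            # same flavor as the provider's other labels, per clarkb
            flavor-name: nodepool-default   # placeholder flavor name
```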
tonyb | [tony@thor ~]$ date --date='now + 130mins' +%s | 20:04 |
tonyb | 1700000017 | 20:04 |
tonyb | \o/ | 20:04 |
clarkb | I suspect that what may be happening here is that vexxhost doesn't provide normal nodes anymore so it is able to schedule the special labels quickly | 20:04
clarkb | whereas ovh is also dealing with all the normal requests | 20:04 |
clarkb | in any case you can just check the cpu flags on any job running on ovh as the flavors are the same according to the config there | 20:05 |
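Checking for the flag on any node is a one-liner; pdpe1gb is the standard /proc/cpuinfo spelling of PDPE1GB:

```sh
# Prints "pdpe1gb" once if the 1GB-huge-page CPU flag is present.
grep -o -m1 pdpe1gb /proc/cpuinfo || echo "no pdpe1gb"
```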
clarkb | don't huge pages also imply numa and if the underlying cloud isn't doing numa we can't run them? | 20:05 |
artom | clarkb: aha, interesting. Yeah, that was going to be my question - if every OVN label is the same flavor, then any old OVH node will do. | 20:05
artom | *OVH | 20:05 |
artom | ETOOMANYOV | 20:05 |
artom | OVS, OVN, OVH... | 20:05 |
JayF | is it OVerwhelming? | 20:06 |
fungi | yep, i agree. it looks like ovh would in theory provide those nodes, it's probably just swamped with orders of magnitude more requests for standard labels | 20:06 |
artom | Overly so | 20:06 |
fungi | OVerwHelming | 20:06 |
artom | These puns deserve an OVation | 20:10 |
opendevreview | Merged opendev/system-config master: Add new mirror02.ord.rax to inventory https://review.opendev.org/c/opendev/system-config/+/900923 | 20:48 |
clarkb | looks like the le job failed for ^ | 21:19 |
clarkb | oh! I know why :) | 21:20 |
clarkb | tonyb: so bridge:/var/log/ansible/letsencrypt.yaml.log.2023-11-14T21:13:10 is where the logs for that went. I'm fairly positive the reason it failed is we didn't add the ansible host vars for the LE cert info | 21:21
clarkb | tonyb: you may also need to add an ansible handler for the cert | 21:21 |
clarkb | sorry I should've known that was missing in the inventory addition | 21:21 |
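For context, the missing piece is the host's letsencrypt_certs variable, which maps a cert name to its list of domains. Roughly this shape — the cert name and domain list here are guesses; the real fix landed as 900953:

```yaml
# inventory host_vars for the new mirror (illustrative values)
letsencrypt_certs:
  mirror02-ord-rax-opendev-org-main:
    - mirror02.ord.rax.opendev.org
```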
fungi | i missed it too :( | 21:22 |
tonyb | Ahh I see. I should have done a more complete check. | 22:14 |
tonyb | So is it just a matter of adding those handlers? Or now that the job failed, is the cleanup/recovery more complex? | 22:14
opendevreview | Tony Breeds proposed opendev/system-config master: Add letsencrypt_certs for mirror02.ord https://review.opendev.org/c/opendev/system-config/+/900953 | 22:40 |
opendevreview | Tony Breeds proposed openstack/project-config master: Add tonyb to global IRC admins https://review.opendev.org/c/openstack/project-config/+/900954 | 22:53 |
opendevreview | Tony Breeds proposed opendev/system-config master: Add tonyb to statusbot nicks https://review.opendev.org/c/opendev/system-config/+/900955 | 22:55 |
opendevreview | Merged openstack/project-config master: Add tonyb to global IRC admins https://review.opendev.org/c/openstack/project-config/+/900954 | 23:14 |
ianw | tonyb: should be fine to just run again and will deploy | 23:31 |
ianw | i've had in my todo list "lint against forgetting handlers/rework handler" for ... a long time :) | 23:32 |
tonyb | ianw: If you give me an idea how to do it, I'd be happy to implement it | 23:33 |
ianw | openstack-infra/system-config/playbooks/roles/letsencrypt-create-certs/handlers has a note from 2019 about how listen: wasn't working. i think that perhaps i was thinking at the time it could be made simpler, where every cert doesn't need its own handler | 23:42
ianw | so tasks/acme.yaml could notify: something more generic, instead of "notify: 'letsencrypt updated {{ item.key }}'" which means every cert needs its own handler | 23:43
ianw | because most of the handlers are the same restarting apache. one complication would be it would have to maybe default to apache, but you'd set a variable to override it if you want to do something else on cert renewal ... | 23:43 |
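In Ansible terms, ianw's idea maps to a single shared handler using listen: with an overridable service. A sketch only — the variable name below is hypothetical:

```yaml
# One shared handler instead of one per cert; hosts that don't run
# apache would override letsencrypt_renew_service (hypothetical var).
- name: restart service after cert renewal
  service:
    name: "{{ letsencrypt_renew_service | default('apache2') }}"
    state: restarted
  listen: letsencrypt updated
```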
ianw | to keep it more or less as is, though, i think it just requires a script that walks inventory and pulls out the letsencrypt_certs: variables. then check each variable entry has a handler in playbooks/roles/letsencrypt-create-certs/handlers/main.yaml | 23:47
tonyb | ianw: Ah okay. Definitely more in the "rework" category ;P | 23:47
ianw | "just" ... probably not too much craziness for a a python script | 23:48 |
ianw | parse the handler file into a dict, walk all the inventory files, if you see a "letsencrypt_certs" dict, for each key, make sure handler exists | 23:48 |
ianw | i can't think of any way to do it less bespoke than that, though | 23:49 |
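The bespoke check ianw outlines fits in a short script. A sketch under the discussion's assumptions — the paths and the "letsencrypt updated <cert>" handler naming come from his description above, not verified against the repo:

```python
# Lint: every letsencrypt_certs entry in inventory must have a matching
# handler in the letsencrypt-create-certs role.
import glob
import sys

import yaml

HANDLERS = "playbooks/roles/letsencrypt-create-certs/handlers/main.yaml"

with open(HANDLERS) as f:
    # The handler file is a YAML list of tasks, each with a "name".
    known = {h["name"] for h in yaml.safe_load(f)}

missing = []
for path in glob.glob("inventory/**/*.yaml", recursive=True):
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict):
        continue
    for cert in data.get("letsencrypt_certs") or {}:
        handler = f"letsencrypt updated {cert}"
        if handler not in known:
            missing.append(f"{path}: no handler {handler!r}")

print("\n".join(missing))
sys.exit(1 if missing else 0)
```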
ianw | it certainly is a trap we've fallen into more than once, so probably justified effort :) | 23:49 |
tonyb | Yeah. | 23:49 |
clarkb | I rechecked 900953 in order to gather more data about whether or not that failure is on our end or quays. cc corvus there might be a problem with registry.zuul-ci.org fwiw see https://zuul.opendev.org/t/openstack/build/e045e455ecf44f819b0d72cefba7797d | 23:55 |
corvus | didn't we abandon that? | 23:56 |
tonyb | IIUC quay has been having problems today. | 23:56 |