clarkb | infra-root ok draft is largely completed at https://etherpad.opendev.org/p/V-YLkq0iEJyhBi4hHsFU for an OpenDev update if you have a moment to read it over | 00:01 |
clarkb | want to make sure I didn't forget anything important etc | 00:01 |
clarkb | oh ze01 restarted earlier than I thought. I was looking at the wrong job. It restarted about 40 minutes ago | 00:06 |
corvus | each of the executors is only running a few builds right now... | 00:13 |
corvus | what do folks think about me running a manual pause command on a handful of them to speed up the rolling restart? | 00:13 |
clarkb | corvus: give me a sec to double check that won't interfere with the playbook but that sounds great to me | 00:13 |
corvus | i think when the playbook runs it, it should be a noop, but yeah, that would be the main concern i think :) | 00:13 |
ianw | lgtm, the message | 00:14 |
corvus | clarkb: message lgtm too | 00:15 |
clarkb | corvus: reading zuul-executor's graceful tasks I think we do the right thing | 00:15 |
clarkb | corvus: we already handle the case where the container has exited previously. So the only other concern would be if pausing or gracefulling an already paused/graceful executor is a problem and I don't think it is | 00:15 |
corvus | oh i was thinking of just running zuul-executor pause | 00:16 |
clarkb | thank you for looking at that message, I'm going to send it out shortly | 00:16 |
clarkb | corvus: oh ya pause won't exit like graceful will. But I think both are fine actually | 00:16 |
corvus | then letting the playbook run graceful (which will do another pause first, which will noop, then basically immediately exit since no jobs run) | 00:16 |
clarkb | so ya I think that is fine | 00:16 |
corvus | okay. i agree you're probably right about graceful being safe to run too. somehow i feel like i want to do pause instead, just to try to keep it a light touch. | 00:17 |
clarkb | wfm | 00:17 |
corvus | how many should i do? we have the periodic jobs coming up soon.. so maybe 6? | 00:21 |
clarkb | ya half is probably a good count | 00:21 |
corvus | ok... actually, i'll do 5 so that's a total of 6 that are paused | 00:22 |
clarkb | corvus: the swift arm job that is queued on a node is something I keep meaning to look at too. If you are worried that will impact the restart we can dequeue it manually | 00:22 |
clarkb | but I think it won't cause a problem because it hasn't gotten far enough to need an executor yet | 00:23 |
clarkb | (it is still waiting on a node request) | 00:23 |
corvus | clarkb: yeah shouldn't affect the restart | 00:23 |
corvus | okay ze02 is paused by the playbook, and ze03-ze07 were manually paused by me | 00:23 |
corvus | basically, every executor is either running 4 or 5 jobs right now | 00:24 |
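The manual step corvus describes might look something like the following (a sketch only: the hostnames are from the discussion above, but the container name and ssh invocation are assumptions, and DRY_RUN defaults to echoing the commands rather than running them):

```shell
# Pause ze03-ze07 so they accept no new builds; running builds finish,
# and the playbook's later "graceful" no-ops the pause and exits once idle.
# DRY_RUN=echo by default, so this only prints the commands it would run.
DRY_RUN=${DRY_RUN:-echo}
for host in ze03 ze04 ze05 ze06 ze07; do
  $DRY_RUN ssh "$host" "docker exec zuul-executor zuul-executor pause"
done
```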
Clark[m] | corvus looked like pausing worked. It is onto 08 now | 01:46 |
opendevreview | Yoshi Kadokawa proposed openstack/project-config master: Add Cinder Huawei charm https://review.opendev.org/c/openstack/project-config/+/867589 | 03:10 |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: tox jobs: pin to correct nodesets https://review.opendev.org/c/openstack/diskimage-builder/+/867579 | 03:17 |
opendevreview | Ian Wienand proposed openstack/diskimage-builder master: tox jobs: pin to correct nodesets https://review.opendev.org/c/openstack/diskimage-builder/+/867579 | 04:18 |
*** yadnesh|away is now known as yadnesh | 04:27 | |
*** Tengu_ is now known as Tengu | 07:01 | |
*** ysandeep is now known as ysandeep|lunch | 08:30 | |
*** jpena|off is now known as jpena | 08:38 | |
*** ysandeep|lunch is now known as ysandeep | 10:05 | |
obondarev | Hi folks, it seems my organisation public IP address was banned by https://review.opendev.org/ - can someone please help to remove the ban? | 10:31 |
*** yadnesh is now known as yadnesh|afk | 11:04 | |
*** dviroel|out is now known as dviroel|rover | 11:12 | |
*** rlandy|out is now known as rlandy | 11:12 | |
*** yadnesh|afk is now known as yadnesh | 11:42 | |
*** ysandeep is now known as ysandeep|brb | 11:51 | |
*** dasm|off is now known as dasm | 12:18 | |
opendevreview | Mariusz Karpiarz proposed openstack/project-config master: Add the "api-ref-jobs" template to CloudKitty https://review.opendev.org/c/openstack/project-config/+/867651 | 12:31 |
*** ysandeep|brb is now known as ysandeep | 12:31 | |
fungi | obondarev: we generally only block ip addresses if they're repeatedly trying to open connections and failing to authenticate, so whatever's at that ip address seems to have broken authentication configured. have you corrected it? | 12:37 |
fungi | or at least turned off whatever is repeatedly trying to connect from that address? | 12:38 |
obondarev | @fungi hmm, that's an uplink IP address used by many employees | 12:39 |
obondarev | NAT address I mean | 12:40 |
obondarev | that is 176.74.218.106 | 12:42 |
fungi | obondarev: you'll need to check your router's sessions to see what's failing to connect. we believe it to probably be https://wiki.openstack.org/wiki/ThirdPartySystems/Seagate_CI | 12:47 |
fungi | if you can confirm you've turned that off we'll allow connections from the ip address again | 12:48 |
fungi | ianw attempted to contact the person listed as responsible for that system to notify them of the problem | 12:49 |
obondarev | fungi: ok, I'll check with IT guys and get back here, thank you very much! | 12:49 |
fungi | obondarev: you're welcome | 12:49 |
obondarev | fungi: and where is the person responsible for that system listed? | 12:52 |
fungi | obondarev: there's an e-mail address on the "Contact Information" row of the table on that https://wiki.openstack.org/wiki/ThirdPartySystems/Seagate_CI page | 13:28 |
obondarev | ah, right, sorry | 13:29 |
fungi | obondarev: oh, though that was for a different ip address. it's possible ianw blocked more than one address that day, let me double-check | 13:30 |
obondarev | fungi: yeah, that tristero.net should not be related to Mirantis | 13:31 |
fungi | obondarev: yep, my mistake. ianw blocked three addresses for ssh key negotiation failures, that one we couldn't find any contact info for other than general eudc.cloud administrator addresses | 13:34 |
fungi | that address was the source of hundreds of "Unable to negotiate key" errors in gerrit's error log | 13:35 |
obondarev | fungi: can I provide a contact email for that one so it could be unblocked? | 13:35 |
fungi | i can remove the block temporarily, but please try to figure out what is failing to authenticate | 13:35 |
obondarev | fungi: cool, thanks! | 13:35 |
fungi | obondarev: i've deleted that rule, please try connecting again | 13:39 |
obondarev | fungi: great, thanks, I'll talk to IT | 13:40 |
fungi | you're welcome | 13:45 |
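For the record, the unblock fungi performed was presumably something along these lines (chain, target, and tool are assumptions; the log doesn't show the actual rule):

```
# Locate and delete the per-source rule for the NAT address
iptables -L INPUT -n --line-numbers | grep 176.74.218.106
iptables -D INPUT -s 176.74.218.106 -j REJECT
```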
*** ysandeep is now known as ysandeep|dinner | 14:30 | |
*** dviroel|rover is now known as dviroel|rover|lunch | 15:52 | |
clarkb | The Zuul restart appears to have completed successfully. We are running nodepool and zuul on python3.11 now | 16:07 |
*** dviroel|rover|lunch is now known as dviroel|rover | 16:38 | |
opendevreview | Jeremy Stanley proposed openstack/project-config master: Add the "api-ref-jobs" template to CloudKitty https://review.opendev.org/c/openstack/project-config/+/867651 | 16:41 |
*** marios is now known as marios|out | 16:47 | |
corvus | clarkb: the executors are split, and they do have a behavior difference: https://review.opendev.org/864903 | 17:20 |
corvus | i think that behavior difference is unlikely to be hit by opendev in production, but it's something to be aware of | 17:20 |
clarkb | ya we do run a cleanup but it is small enough I'm not too worried | 17:21 |
corvus | if we see any problems related to that, we'll just want to check the executor version first | 17:22 |
clarkb | I guess if you wanted to have a clearer signal on whether or not that is working as intended we could restart the executors on the older version. But ya I'm not worried about opendev itself | 17:23 |
clarkb | the restart in a couple of days should resynchronize things | 17:23 |
corvus | yeah, i think it's good enough for now as long as we know the caveat | 17:23 |
*** yadnesh is now known as yadnesh|away | 17:27 | |
*** jpena is now known as jpena|off | 17:35 | |
fungi | clarkb: checking the held mm3 node i used for our final import testing, i see the same thing there as in production so i don't think i accidentally changed anything by looking | 17:54 |
fungi | hyperkitty properly filters the mailing list names but postorius doesn't | 17:54 |
fungi | and both the admin interface and django webui are consistent with what we see in production (two mail hosts but sharing a common lists.opendev.org mail host) | 17:55 |
fungi | and hyperkitty is showing the lists.opendev.org name in the corner of the lists.zuul-ci.org site | 17:55 |
fungi | so i guess the next thing to do is try adding/associating separate mailman web hosts (what django calls "sites") with the existing mail hosts | 17:56 |
fungi | we seem to have duplicate copies of web-settings.py in docker/mailman/web/mailman-web and playbooks/roles/mailman3/files | 18:02 |
fungi | both set SITE_ID = 1 which i guess is a default the new list domains all inherit? | 18:02 |
Clark[m] | fungi: the file in playbooks overrides a number of settings | 18:12 |
Clark[m] | They should not be identical | 18:13 |
fungi | diff says they're the same | 18:13 |
*** ysandeep|dinner is now known as ysandeep | 18:13 | |
fungi | nevermind! | 18:13 |
fungi | i was idiotically running `git diff foo bar` rather than `diff foo bar` | 18:14 |
*** ysandeep is now known as ysandeep|out | 18:14 | |
fungi | yes they're definitely different | 18:14 |
Clark[m] | The one in the docker file is kept in sync with upstream to avoid forking too hard. Then we overlay some edits via our deployment | 18:14 |
Clark[m] | The one actually used is the one in playbooks | 18:14 |
fungi | so anyway, i was able to get help output from the manage.py script in the container but it isn't immediately obvious that it supports adding sites/web hosts | 18:15 |
fungi | there is a django_extensions set_default_site subcommand but that's the only obvious match | 18:16 |
fungi | ahh, maybe i want postorius's mmclient subcommand for manage.py | 18:19 |
fungi | whoa... apparently SITE_ID = 0 is magic sauce: https://docs.mailman3.org/en/latest/faq.html#the-domain-name-displayed-in-hyperkitty-shows-example-com-or-something-else | 18:27 |
fungi | "setting SITE_ID = 0 in Django’s settings will cause HyperKitty to display the DISPLAY NAME for the domain whose DOMAIN NAME matches the accessing domain. However, do not set SITE_ID = 0 in a new installation without any existing Sites as this will cause an issue in applying migrations. Only set SITE_ID = 0 after there are domains defined in the Django admin Sites view." | 18:27 |
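The workaround from the FAQ amounts to a one-line change in the settings overlay (per the earlier discussion, the copy actually in use is the one under playbooks/roles/mailman3/files); a config fragment only:

```
# web-settings.py: 0 means hyperkitty matches the accessing domain instead
# of pinning everything to a single django Site row. Per the FAQ, only flip
# this after at least one Site exists, or migrations will fail.
SITE_ID = 0
```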
fungi | guess i'll give that a try on the held node and see what happens | 18:28 |
Clark[m] | Interesting | 18:34 |
fungi | doesn't seem to solve it as far as i can tell | 18:41 |
clarkb | fungi: is there a way to add a second site via the django admin webpage? I wonder if that is what we should try next? | 18:43 |
clarkb | or maybe as admin we can change the web_host value? | 18:44 |
clarkb | actually I think we should look at that first because iirc we found something saying that web_host needs to be unique for the vhosting and currently they are not? | 18:44 |
clarkb | I think that was in the email that corvus found | 18:44 |
fungi | yes, the problem is that the "web host" seems to be synonymous with the django "site" and currently we only have one (lists.opendev.org) | 18:47 |
clarkb | I see | 18:47 |
clarkb | so maybe SITE_ID=0 needs to be done in conjunction with multiple sites? | 18:47 |
fungi | the sites are numbered, and the SITE_ID in settings.py seems to refer to the django site (therefore web host) | 18:47 |
fungi | and yeah, just setting SITE_ID=0 and restarting hasn't solved the display problem, but i'm trying adding sites and associating mail hosts (mailman 3 "mail domains" in the django admin ui) with them | 18:49 |
clarkb | I wonder if we can just insert a record into the db | 18:49 |
clarkb | https://docs.djangoproject.com/en/4.1/ref/contrib/sites/ seems to be the underlying implementation bits? | 18:49 |
fungi | oh, my bad. i didn't actually change it | 18:49 |
fungi | i edited the file on disk with the container down, not merely stopped | 18:50 |
clarkb | oh ya if you down it then when you up it you get it back in a clean state | 18:50 |
fungi | one better. i was editing the copy of that file inside the container file tree, but we bindmount our own into the running container instead | 18:55 |
fungi | so i was making all of this far more complicated than it needed to be | 18:55 |
fungi | so good news and bad news... | 19:02 |
fungi | the good news is that if i create a lists.zuul-ci.org site and associate the lists.zuul-ci.org mail host with it, then with SITE_ID=0 in settings.py it seems to show the correct domain name on the hyperkitty pages now | 19:04 |
fungi | the bad news is that it seems to break the ability to access the interface by any domains not configured for the site but which may resolve to it (for example "localhost" via my ssh socket) | 19:04 |
clarkb | I don't think you need to access it as localhost | 19:05 |
clarkb | you just need to originate from localhost but can hit the external interface | 19:05 |
clarkb | does it filter the lists properly too? | 19:05 |
fungi | well, i do at least need to override my dns resolution to go to the forwarded port on my local machine | 19:06 |
fungi | for accessing the django admin interface | 19:06 |
fungi | just referring to 127.0.0.1 or localhost won't set the right host header in my browser request | 19:06 |
clarkb | ya would need to use SOCKS or similar for what I describe I guess | 19:06 |
fungi | point is, with SITE_ID=0 any access to the mailman web content except by a domain name it knows about will now show an error page | 19:08 |
fungi | but that's probably acceptable | 19:08 |
clarkb | well it did that before too (there is a domains list with a filter but localhost was explicitly added) | 19:09 |
clarkb | The change here really only applies to localhost access I think. And ya that seems fine. We can either drop the admin local requirement or document using socks or something | 19:09 |
clarkb | fungi: does it filter the lists properly too? And what is the process of creating a site like? | 19:10 |
fungi | and no, this doesn't solve the list filtering in postorius, just the site name displayed by hyperkitty | 19:10 |
clarkb | (I'm wondering if we can automate this somehow, especially since we can't flip the 1 to a 0 until after we deploy...) | 19:10 |
clarkb | fungi: I guess you have to associate individual lists with the site too? | 19:11 |
clarkb | via the web_host setting however that is expressed | 19:11 |
fungi | the workaround hinted at in that one ml discussion for the "not till after deploy" problem seems to be to create a dummy site you're not going to use | 19:11 |
clarkb | fungi: we might be able to have the initial pass create everything then update the file and restart containers and not change the content of that file once there is a site present | 19:12 |
clarkb | it's probably doable, just really clunky | 19:12 |
clarkb | probably want to look at the fixes in aggregate though once we've sorted out all the steps we want to take | 19:15 |
clarkb | fungi: now that you created the site is there a way to associate it as the web host to the individual mailing lists? I'm just wondering if that is what we need to filter them on the pages | 19:27 |
fungi | not that i'm able to find so far | 19:30 |
fungi | they're filtered on the hyperkitty pages just not the postorius pages | 19:30 |
clarkb | https://gitlab.com/mailman/hyperkitty/-/blob/master/hyperkitty/views/index.py#L60-79 is how hyperkitty does it | 19:31 |
clarkb | fungi: note in prod it isn't filtered on either of them | 19:31 |
clarkb | https://lists.opendev.org/archives/ shows both sets of lists | 19:31 |
clarkb | is it possible that our archives/ vs hyperkitty/ url is to blame somehow? I swear that this was working in prod though | 19:32 |
clarkb | fungi: ok lists.zuul-ci.org is filtering with hyperkitty in prod but not lists.opendev.org | 19:32 |
clarkb | and then postorius for both doesn't filter | 19:33 |
fungi | oh, that may be due to the sites on prod | 19:33 |
fungi | for the hyperkitty not filtering on the opendev site but doing it on the zuul site | 19:33 |
clarkb | ah because we've only got the one | 19:34 |
fungi | yeah, if i look on the held node, the lists.opendev.org/archives page now lists all the imported mailing lists except the zuul ones | 19:34 |
fungi | presumably because i associated them with the lists.zuul-ci.org site | 19:34 |
clarkb | ya I'm looking at the hyperkitty filter and I think that might be right | 19:35 |
fungi | or, rather, because i associated the lists.zuul-ci.org mail host with the lists.zuul-ci.org web host (django site) i added | 19:35 |
fungi | so all the lists which used that mail host are now being associated with the newly added site | 19:35 |
fungi | rather than the default | 19:35 |
clarkb | hrm though actually I think the filtering should work as is | 19:36 |
clarkb | because it is looking at the requests value and the mail domains | 19:36 |
clarkb | nothing seems to look at the django content | 19:36 |
clarkb | oh maybe it is this line https://gitlab.com/mailman/hyperkitty/-/blob/master/hyperkitty/views/index.py#L69 | 19:36 |
fungi | those values are kept in a django registry of some sort, the django admin page simply gives a way to edit them in one place i think? | 19:37 |
clarkb | that looks up the maildomain then checks site.domain so maybe that converts it to the django site | 19:37 |
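The lookup clarkb is describing can be simulated with plain stand-ins (these dataclasses are illustrative, not the real hyperkitty/django models): a list shows up only when its mail domain's associated site matches the Host the request came in on.

```python
from dataclasses import dataclass

@dataclass
class Site:
    domain: str          # the django "site", i.e. the web host

@dataclass
class MailDomain:
    mail_host: str
    site: Site           # which web host this mail domain is attached to

default = Site("lists.opendev.org")
domains = {
    "lists.opendev.org": MailDomain("lists.opendev.org", default),
    # after the change on the held node this points at its own site
    "lists.zuul-ci.org": MailDomain("lists.zuul-ci.org", Site("lists.zuul-ci.org")),
}

def visible(list_mail_host: str, web_host: str) -> bool:
    """Mimic the index.py check: site.domain must match the request host."""
    return domains[list_mail_host].site.domain == web_host

assert visible("lists.zuul-ci.org", "lists.zuul-ci.org")
assert not visible("lists.opendev.org", "lists.zuul-ci.org")
```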
fungi | they may also be sharing that data through the db, i'm not sure | 19:37 |
clarkb | I think it is all db | 19:37 |
clarkb | the django admin stuff makes use of what django knows about its db models that sites use to let you introspect the db | 19:38 |
clarkb | https://gitlab.com/mailman/postorius/-/blob/master/src/postorius/views/list.py#L1038-1063 for postorius I think. Though I'm having a hard time parsing that | 19:40 |
clarkb | "if there is only one mail_host for this web_host" but we have many mail_hosts | 19:40 |
clarkb | I suspect https://gitlab.com/mailman/postorius/-/blob/master/src/postorius/views/list.py#L1046 is iterating over all the domains that django knows about? | 19:43 |
clarkb | but if that were the case your addition of the site to match the mail_host for zuul-ci.org would've corrected this for postorius I think. | 19:44 |
fungi | clarkb: oh, actually yes it does look like it did fix filtering for postorius | 19:45 |
fungi | 104.130.140.226 is the held server if you want to point your /etc/hosts there | 19:45 |
clarkb | fungi: yup trying now | 19:46 |
fungi | https://lists.zuul-ci.org/mailman3/lists/ shows just the zuul lists | 19:46 |
*** rlandy is now known as rlandy|dr_appt | 19:46 | |
fungi | if you try to look at lists.opendev.org there you'll see all the imported lists for the other domains that aren't the zuul one | 19:46 |
fungi | so this does seem to cover all the bases, question is how best to automate | 19:47 |
clarkb | ya so this isn't quite working | 19:47 |
fungi | oh? | 19:47 |
clarkb | well no we don't want all the lists shown at lists.opendev.org | 19:47 |
clarkb | we should only see opendev lists | 19:47 |
fungi | right, we would if all the other lists were associated with other domains than the lists.opendev.org one | 19:48 |
clarkb | fungi: well the zuul lists still show up there too | 19:48 |
clarkb | and those do have a separate domain | 19:48 |
clarkb | do we need to add those sites on the test node to see if they are all properly associated if that fixes it? | 19:49 |
fungi | checking again. i have to keep flipping back and forth between the public address and my ssh tunnel | 19:49 |
clarkb | I agree lists.zuul-ci.org looks correct now | 19:49 |
fungi | yeah, okay so maybe that's due to it being treated as the "default site" | 19:51 |
fungi | wonder if we need a separate default site which isn't any of the actual sites | 19:51 |
clarkb | I'm trying to reconcile that with what I see in the code and not seeing it yet | 19:51 |
clarkb | it seems to only take the first entry if the list length is 1 | 19:52 |
clarkb | https://gitlab.com/mailman/postorius/-/blob/master/src/postorius/views/list.py#L1049-1050 the filtration there would have to return the lists.zuul-ci.org lists when web_host is lists.opendev.org which makes no sense to me | 19:54 |
clarkb | the mail_host for lists.zuul-ci.org lists is lists.zuul-ci.org not lists.opendev.org so how do we get that back? | 19:54 |
fungi | what do you mean get it back? | 19:56 |
clarkb | MailDomain.objects.get() is doing a data lookup filtering by domain mail_host values. Then if the site.domain for the results matches web_host which is the url we hit we add the mail_host to the list of mail hosts | 19:58 |
clarkb | when I hit lists.opendev.org web_host should be lists.opendev.org so how does the == .site.domain matching lists.zuul-ci.org? | 19:58 |
fungi | i associated the lists.zuul-ci.org mail host with the lists.zuul-ci.org site (web host) that i created | 19:59 |
clarkb | right and that makes lists.zuul-ci.org return the reduced list. But how does that allow lists.opendev.org to continue to return lists.zuul-ci.org lists? | 19:59 |
clarkb | I suppose it could be cached? | 19:59 |
fungi | oh, possibly. i can try restarting the containers | 20:00 |
clarkb | it looks like hyperkitty is filtering the lists.zuul-ci.org results out of the index it produces | 20:00 |
fungi | doing that now, though it takes a few minutes to start back up so we should be mindful of that when doing it in production | 20:00 |
clarkb | postorius isn't so ya maybe it is just caching | 20:00 |
clarkb | the filtering code between the two codebases is very similar | 20:00 |
clarkb | I guess we can look for differences between them if the behavior continues to differ | 20:01 |
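One way to read the postorius branch being puzzled over (a hypothetical reconstruction, not the actual code; the list and domain names are illustrative) is that the filter only engages when exactly one mail_host maps to the requesting web_host, so a default site with several mail hosts attached falls through to showing everything:

```python
# domain -> django site, roughly as on the held node after the change
site_of = {
    "lists.opendev.org": "lists.opendev.org",
    "lists.openstack.org": "lists.opendev.org",  # still on the default site
    "lists.zuul-ci.org": "lists.zuul-ci.org",    # moved to its own site
}
all_lists = [
    ("service-discuss", "lists.opendev.org"),
    ("openstack-discuss", "lists.openstack.org"),
    ("zuul-discuss", "lists.zuul-ci.org"),
]

def lists_for(web_host: str):
    """Filter only when a single mail_host matches this web_host."""
    hosts = [mh for mh, site in site_of.items() if site == web_host]
    if len(hosts) == 1:
        return [(name, mh) for name, mh in all_lists if mh == hosts[0]]
    return list(all_lists)  # multiple matches: no filtering at all

# zuul's vhost filters; opendev's (two mail hosts) shows every list
assert lists_for("lists.zuul-ci.org") == [("zuul-discuss", "lists.zuul-ci.org")]
assert len(lists_for("lists.opendev.org")) == 3
```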
clarkb | but ya I think this fixes like 95% of the problem :) | 20:01 |
fungi | the containers are restarted but the sites will return error pages for a few minutes until everything's running | 20:01 |
clarkb | I think it does a lot of migration checks on startup | 20:01 |
fungi | yeah, and the log complains about some not yet applied too, we might want to check production for the same when we restart it | 20:02 |
clarkb | they still show up in postorius after the restart | 20:02 |
fungi | "Your models in app(s): 'django_mailman3', 'hyperkitty', 'postorius' have changes that are not yet reflected in a migration, and so won't be applied. Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them." | 20:03 |
clarkb | that's weird, we've only ever run the one version? | 20:03 |
fungi | the log also says this: "Setting lists.opendev.org as the default domain ..." | 20:03 |
fungi | i wonder if being "default" conveys extra behaviors in postorius? | 20:03 |
clarkb | maybe? | 20:03 |
clarkb | this might be worth an email to the mm3 list? | 20:04 |
clarkb | we can describe what we've done how it has fixed hyperkitty but not postorius and see if they come back and say "its the default domain" or something | 20:04 |
fungi | though truth be told, i don't mind lists.opendev.org showing a cross-domain view of all the mailing lists we're hosting | 20:04 |
clarkb | ya I'm not sure it is the end of the world. But I think it would be nice to understand at least | 20:05 |
clarkb | really my main concern is that the behavior seemed to change randomly | 20:05 |
fungi | and also, if it is due to the default domain and we deem the behavior untidy, we could add another domain to serve solely as the "default" for the full list of lists | 20:06 |
clarkb | http://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_089/866632/1/gate/system-config-run-lists3/089ad87/bridge99.opendev.org/screenshots/ those are the screenshots for the site-owner change and they all show filtering | 20:07 |
clarkb | without setting site id to 0 or adding content to django | 20:07 |
clarkb | so something is changing in the running state of the system that trips this behavior. Understanding how/why is probably the most important thing if we decide to keep the lists.opendev.org listing as a global overview | 20:07 |
clarkb | Lunch now. Back in a bit | 20:08 |
Clark[m] | Maybe we hold a new node to cross compare? | 20:12 |
Clark[m] | And see if doing the admin thing trips it? | 20:12 |
fungi | probably a good idea. the only thing i can think of is that it seems like there were no mail hosts defined on the server until i logged in as admin and went to the domain management page. like it decided that was the time to populate them all | 20:13 |
fungi | specifically, visiting https://lists.opendev.org/mailman3/domains/ | 20:14 |
fungi | to test that theory, i deleted all 7 mail domains and the zuul site entry in the django admin interface, downed the containers, set SITE_ID back to 1 in settings.py and upped the containers again. seems to be back to the original behavior from before | 20:19 |
fungi | now i've visited https://lists.opendev.org/mailman3/domains/ as admin and then checked the django webui and all 7 mail domains have reappeared | 20:21 |
fungi | so i think that's the trigger | 20:21 |
Clark[m] | Ok cool so we understand the trigger now we can apply the workaround. Probably still worth an email to their list to ask why postorius differs in behavior? | 20:23 |
Clark[m] | I half wonder if this is a bug too since logging in as admin shouldn't change behavior | 20:23 |
Clark[m] | I mean that's a bit crazy to me :) | 20:23 |
fungi | yes, that's why i didn't want to believe what i thought i had witnessed originally | 20:24 |
fungi | like, there's no way that just logging in and looking at the domains list writes new configuration to the database, right? | 20:24 |
clarkb | fungi: the only other thought I've got before sending upstream messages is maybe we add sites for all of the domains in case the issue is having lists without domains that match sites | 21:02 |
clarkb | but I don't expect that to help after looking at the code | 21:02 |
fungi | yeah, i've been trying to draft a post at the bottom of https://etherpad.opendev.org/p/mm3migration but am struggling with how to word it due to the overloaded and mismatched terminology between postorius and django | 21:06 |
clarkb | I'm not sure I'm much help after taking a look. I think you grok the differences there better than any of us right now :) | 21:10 |
*** tosky_ is now known as tosky | 21:12 | |
fungi | okay, i feel like i've got it "summarized" but it still seems like information overload | 21:18 |
clarkb | fungi: one last note and ya I think this is a bit convoluted but I suspect the mailman devs will be able to parse through it better than we can | 21:20 |
fungi | okay, i think i've addressed your last suggestion | 21:25 |
*** dasm is now known as dasm|off | 21:26 | |
clarkb | ya that looks good | 21:27 |
clarkb | I think worst case they ask us for more clarification | 21:27 |
*** dviroel|rover is now known as dviroel|out | 21:31 | |
clarkb | holiday party planning details sent out | 21:31 |
fungi | seems my first post to mailman-users is moderated. i guess merely subscribing isn't sufficient | 21:42 |
clarkb | ya that happened to me too, they get through it quickly iirc | 21:42 |
*** rlandy|dr_appt is now known as rlandy | 21:51 | |
*** rlandy is now known as rlandy|bbl | 22:29 | |
clarkb | In the gerrit mailing list I've found evidence that custom submit requirements do end up in the change list summary view | 23:45 |
clarkb | I'm pretty confident our supposed workaround will work | 23:45 |
clarkb | The 3.6 to 3.7 upgrade is not easily downgradeable because they convert label copyValue settings to copyConditions | 23:50 |
clarkb | copyCondition appears to be supported by 3.6. I suspect that the very first thing we should be doing is converting to copyCondition on all our labels? | 23:51 |
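Sketched as a project.config edit (the predicates come from Gerrit's copy-condition support; the specific flags opendev's labels actually use aren't shown in the log, so treat these as examples):

```
[label "Code-Review"]
    # before: 3.6-era boolean copy flags, dropped by the 3.7 migration
    #   copyAllScoresIfNoCodeChange = true
    #   copyMinScore = true
    # after: an equivalent copyCondition, already supported on 3.6
    copyCondition = changekind:NO_CODE_CHANGE OR is:MIN
```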
clarkb | And maybe at the same time sort out what the submit requirements changes are that zuul quickstart ran into and do those conversions too | 23:52 |
clarkb | probably the first step is building 3.7 images and updating the upgrade job. But we shouldn't do that until we're confident we won't revert to 3.5. At this point a revert seems unlikely; I'll poke at that tomorrow maybe | 23:54 |
clarkb | But then ya I suspect we need to do widespread updates to labels/submit-requirements to accommodate 3.7 early. I worry that the required offline reindexing and schema updates might be very slow... | 23:55 |
clarkb | testing that without a copy of the installations is hard, but we can probably benchmark it with artificial data and see if we think it represents an issue that deserves further testing | 23:56 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!