*** xarlos has quit IRC | 00:06 | |
*** jistr has quit IRC | 00:18 | |
*** jistr has joined #openstack-infra | 00:19 | |
*** hamalq_ has quit IRC | 00:49 | |
*** gyee has quit IRC | 01:01 | |
*** jamesdenton has quit IRC | 01:04 | |
*** jamesden_ has joined #openstack-infra | 01:04 | |
*** iurygregory has quit IRC | 01:16 | |
*** iurygregory has joined #openstack-infra | 01:17 | |
*** __ministry has joined #openstack-infra | 01:17 | |
*** iurygregory has quit IRC | 01:18 | |
*** iurygregory has joined #openstack-infra | 01:18 | |
*** osmanlicilegi has joined #openstack-infra | 01:20 | |
*** gshippey has quit IRC | 01:35 | |
*** openstackgerrit has joined #openstack-infra | 01:44 | |
openstackgerrit | Jeremy Stanley proposed openstack/project-config master: Clean up OpenEdge configuration https://review.opendev.org/c/openstack/project-config/+/783990 | 01:44 |
*** carloss has quit IRC | 01:57 | |
*** iurygregory has quit IRC | 02:08 | |
*** iurygregory has joined #openstack-infra | 02:09 | |
*** rcernin has quit IRC | 02:31 | |
*** rcernin has joined #openstack-infra | 02:38 | |
*** armax has quit IRC | 02:55 | |
*** armax has joined #openstack-infra | 02:59 | |
*** rcernin has quit IRC | 03:07 | |
*** rcernin has joined #openstack-infra | 03:07 | |
*** rcernin has quit IRC | 03:07 | |
*** akahat has quit IRC | 03:08 | |
*** rcernin has joined #openstack-infra | 03:09 | |
*** kopecmartin has quit IRC | 03:09 | |
*** dulek has quit IRC | 03:09 | |
*** nhicher has quit IRC | 03:09 | |
*** rcernin has quit IRC | 03:11 | |
*** rcernin has joined #openstack-infra | 03:12 | |
*** nhicher has joined #openstack-infra | 03:13 | |
*** kopecmartin has joined #openstack-infra | 03:13 | |
*** rcernin has quit IRC | 03:14 | |
*** rcernin has joined #openstack-infra | 03:14 | |
*** rcernin has quit IRC | 03:16 | |
*** rcernin has joined #openstack-infra | 03:16 | |
*** rcernin has quit IRC | 03:18 | |
*** rcernin has joined #openstack-infra | 03:19 | |
*** akahat has joined #openstack-infra | 03:22 | |
*** zer0c00l|afk is now known as zer0c00l | 03:38 | |
*** psachin has joined #openstack-infra | 03:41 | |
*** armax has quit IRC | 03:57 | |
*** ykarel|away has joined #openstack-infra | 04:20 | |
*** vishalmanchanda has joined #openstack-infra | 04:30 | |
*** ykarel|away is now known as ykarel | 04:39 | |
*** zbr|rover4 has joined #openstack-infra | 05:04 | |
*** zbr|rover has quit IRC | 05:06 | |
*** zbr|rover4 is now known as zbr|rover | 05:06 | |
*** whoami-rajat has joined #openstack-infra | 05:17 | |
*** auristor has quit IRC | 05:27 | |
*** adriant has quit IRC | 05:43 | |
*** ralonsoh has joined #openstack-infra | 06:10 | |
*** slaweq has joined #openstack-infra | 06:10 | |
*** ajitha has joined #openstack-infra | 06:11 | |
*** sboyron has joined #openstack-infra | 06:21 | |
*** gfidente|afk is now known as gfidente | 06:21 | |
*** jcapitao has joined #openstack-infra | 06:27 | |
*** eolivare has joined #openstack-infra | 06:30 | |
*** hashar has joined #openstack-infra | 06:45 | |
*** dklyle has quit IRC | 06:51 | |
openstackgerrit | Hervé Beraud proposed openstack/project-config master: Use publish-to-pypi on barbican ansible roles https://review.opendev.org/c/openstack/project-config/+/784011 | 06:52 |
*** dulek has joined #openstack-infra | 07:04 | |
*** jamesden_ has quit IRC | 07:08 | |
*** jamesdenton has joined #openstack-infra | 07:09 | |
*** rcernin has quit IRC | 07:25 | |
*** tosky has joined #openstack-infra | 07:33 | |
hberaud | Hello infra team, please can you have a look ASAP at this patch => https://review.opendev.org/c/openstack/project-config/+/784011 This patch is a bit urgent and it will allow us to land 2 new deliverables within Wallaby. The deadline is close and the gates are blocked on this patch. Thank you for your understanding. | 07:38 |
*** xarlos has joined #openstack-infra | 07:41 | |
*** ykarel has quit IRC | 08:01 | |
frickler | hberaud: done | 08:07 |
*** ociuhandu has joined #openstack-infra | 08:07 | |
*** dpawlik0 is now known as dpawlik | 08:08 | |
*** lucasagomes has joined #openstack-infra | 08:09 | |
openstackgerrit | Merged openstack/project-config master: Use publish-to-pypi on barbican ansible roles https://review.opendev.org/c/openstack/project-config/+/784011 | 08:16 |
hberaud | frickler: thank you very much | 08:28 |
*** psachin has quit IRC | 08:30 | |
*** derekh has joined #openstack-infra | 08:30 | |
*** ociuhandu has quit IRC | 08:36 | |
*** vishalmanchanda has quit IRC | 08:39 | |
*** ykarel has joined #openstack-infra | 08:43 | |
*** ykarel is now known as ykarel|lunch | 08:43 | |
*** jcapitao has quit IRC | 08:51 | |
*** ociuhandu has joined #openstack-infra | 08:52 | |
*** ociuhandu has quit IRC | 08:53 | |
*** ociuhandu has joined #openstack-infra | 08:55 | |
*** vishalmanchanda has joined #openstack-infra | 09:11 | |
*** derekh has quit IRC | 09:21 | |
*** derekh has joined #openstack-infra | 09:21 | |
*** jcapitao has joined #openstack-infra | 09:24 | |
hberaud | frickler: sorry to disturb you again, do you have an idea why this error happens? https://zuul.opendev.org/t/openstack/build/3c94d9fe7dbc41fb82030a7e5adbf88a/log/job-output.txt#4257 | 09:38 |
hberaud | it doesn't seem related to the deliverable itself | 09:39 |
hberaud | it looks more like an environment issue (a network issue or something like that) but I'm not sure | 09:39 |
hberaud | however, another release patch successfully passed more or less at the same time https://review.opendev.org/c/openstack/releases/+/784014 | 09:41 |
hberaud | so I wonder if some other pieces are missing somewhere | 09:41 |
hberaud | (for these ansible roles) | 09:42 |
*** hashar has quit IRC | 09:52 | |
*** jcapitao has quit IRC | 09:56 | |
*** jcapitao has joined #openstack-infra | 09:56 | |
*** yamamoto has quit IRC | 10:00 | |
*** ykarel|lunch is now known as ykarel | 10:04 | |
*** rcernin has joined #openstack-infra | 10:06 | |
*** rcernin has quit IRC | 10:08 | |
*** rcernin has joined #openstack-infra | 10:08 | |
*** jamesdenton has quit IRC | 10:20 | |
*** jamesden_ has joined #openstack-infra | 10:20 | |
*** derekh has quit IRC | 10:26 | |
*** derekh has joined #openstack-infra | 10:26 | |
*** yamamoto has joined #openstack-infra | 10:33 | |
*** jcapitao has quit IRC | 10:36 | |
*** hjensas has joined #openstack-infra | 10:36 | |
*** jcapitao has joined #openstack-infra | 10:36 | |
*** jcapitao is now known as jcapitao_lunch | 10:39 | |
frickler | hberaud: this looks like a rate limit on the pypi side to me, maybe too many things merged at once? maybe some other infra-root can dig deeper | 10:43 |
hberaud | frickler: yeah, we're currently discussing this on #openstack-release and ttx just proposed https://review.opendev.org/c/openstack/releases/+/784068 | 10:44 |
*** yamamoto has quit IRC | 10:46 | |
*** jcapitao_lunch has quit IRC | 10:46 | |
*** jcapitao_lunch has joined #openstack-infra | 10:47 | |
zbr|rover | i observed that a particular job is reaching RETRY_LIMIT very often but i am not able to get any feedback regarding why, zuul provides no logs. | 10:47 |
zbr|rover | https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ansible-centos-8-molecule-tripleo-modules&project=openstack/tripleo-ansible | 10:47 |
*** jcapitao_lunch has quit IRC | 10:51 | |
*** jcapitao_lunch has joined #openstack-infra | 10:51 | |
*** carloss has joined #openstack-infra | 10:55 | |
*** mugsie__ is now known as mugsie | 11:01 | |
*** jcapitao_lunch has quit IRC | 11:04 | |
*** ociuhandu has quit IRC | 11:13 | |
fungi | zbr|rover: we had what looked like a memory leak in the zuul scheduler yesterday which started causing zookeeper disconnects once the server began lightly utilizing swap, and that seemed to be resulting in jobs spontaneously getting retried. we restarted the scheduler to relieve some of the memory pressure | 11:29 |
*** ociuhandu has joined #openstack-infra | 11:29 | |
zbr|rover | fungi: i wonder if there is something particular about this job, i got one running and stuck at https://zuul.opendev.org/t/openstack/stream/b3f23e90b8c448e39dd9aa17830f76e5?logfile=console.log | 11:30 |
zbr|rover | oops, interesting, just got something out of it: 2021-03-31 11:29:49.414003 | centos-8 | "msg": "Data could not be sent to remote host \"104.130.239.177\". Make sure this host can be reached over ssh: ssh: connect to host 104.130.239.177 port 22: Connection timed out\r\n", | 11:30 |
zbr|rover | so that task finally failed after being stuck for 17mins | 11:31 |
zbr|rover | this is how it looks: http://paste.openstack.org/show/804072/ | 11:31 |
fungi | looks like a node somewhere in rackspace | 11:31 |
fungi | and yeah, i'm not seeing the memory pressure reappearing in the scheduler yet, so this is likely unrelated | 11:32 |
zbr|rover | were you able to spot anything else with such a high failure rate? https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ansible-centos-8-molecule-tripleo-modules&project=openstack/tripleo-ansible | 11:34 |
*** ociuhandu has quit IRC | 11:34 | |
*** dtantsur|afk is now known as dtantsur | 11:34 | |
zbr|rover | i failed to spot one, which is why i suspect it may be something particular to this job, or at least the nodeset. | 11:34 |
fungi | so maybe the tripleo-ansible-centos-8-molecule-tripleo-modules job is crashing nodes, or the centos-8-stream nodes could be unstable? | 11:36 |
fungi | and no, i'm still waking up, trying to broadly survey what's going on with everything before i know where to focus my efforts | 11:36 |
fungi | need to follow up on some release failures too | 11:36 |
fungi | see if they're related or not | 11:36 |
zbr|rover | sure, i also need to go and grab lunch or i will end up starving | 11:36 |
*** lpetrut has joined #openstack-infra | 11:37 | |
fungi | we can set an autohold for one of the failing changes if it's repeatable, and then try to see if the node recovers after the build completes, or try to reboot it from the api to get it back online and then investigate | 11:38 |
fungi | i can also try to snag a vm console log from one if we can catch it failing | 11:38 |
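(As an aside: grabbing a server console log the way fungi describes can be done with openstacksdk; a minimal sketch, where the cloud name and server UUID are hypothetical placeholders:)

```python
import openstack

# A sketch of fetching a vm console log via the compute api; "rax" must
# match an entry in clouds.yaml and the UUID is a placeholder.
conn = openstack.connect(cloud="rax")
server = conn.compute.get_server("00000000-0000-0000-0000-000000000000")
log = conn.compute.get_server_console_output(server, length=100)
print(log.get("output", log))
```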
*** ociuhandu has joined #openstack-infra | 11:45 | |
*** jcapitao_lunch has joined #openstack-infra | 11:46 | |
*** ociuhandu has quit IRC | 11:50 | |
*** sshnaidm|off is now known as sshnaidm | 11:57 | |
*** ociuhandu has joined #openstack-infra | 12:01 | |
*** jcapitao_lunch is now known as jcapitao | 12:02 | |
*** auristor has joined #openstack-infra | 12:05 | |
*** ociuhandu has quit IRC | 12:10 | |
*** nweinber has joined #openstack-infra | 12:28 | |
*** yamamoto has joined #openstack-infra | 12:30 | |
*** rcernin has quit IRC | 12:31 | |
*** yamamoto has quit IRC | 12:39 | |
*** smcginnis has quit IRC | 12:53 | |
*** dchen has quit IRC | 13:03 | |
*** yamamoto has joined #openstack-infra | 13:10 | |
*** yamamoto has quit IRC | 13:15 | |
*** yamamoto has joined #openstack-infra | 13:15 | |
*** ociuhandu has joined #openstack-infra | 13:20 | |
*** yamamoto has quit IRC | 13:55 | |
*** rpioso is now known as rpioso|afk | 13:55 | |
*** rcernin has joined #openstack-infra | 14:12 | |
*** rcernin has quit IRC | 14:17 | |
*** armax has joined #openstack-infra | 14:18 | |
*** ociuhandu has quit IRC | 14:19 | |
*** ociuhandu has joined #openstack-infra | 14:20 | |
*** jamesden_ has quit IRC | 14:24 | |
*** jamesdenton has joined #openstack-infra | 14:24 | |
*** ociuhandu has quit IRC | 14:29 | |
*** ociuhandu has joined #openstack-infra | 14:29 | |
*** rcernin has joined #openstack-infra | 14:31 | |
*** xarlos has quit IRC | 14:34 | |
*** rcernin has quit IRC | 14:35 | |
*** yamamoto has joined #openstack-infra | 14:35 | |
*** xarlos has joined #openstack-infra | 14:37 | |
*** ykarel is now known as ykarel|away | 14:40 | |
*** yamamoto has quit IRC | 14:42 | |
clarkb | fungi: frickler hberaud I agree that looks like rate limiting from the pypi side. Looks like that job doesn't run particularly slowly, maybe we can get away with some self-induced sleep between requests to pypi? | 14:56 |
hberaud | clarkb: already done with https://review.opendev.org/c/openstack/releases/+/784068 | 14:56 |
hberaud | And I confirm that fixed the problem | 14:56 |
clarkb | great | 14:57 |
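(A minimal sketch of the "self-induced sleep" clarkb suggests, with generic callables standing in for the real upload steps; no claim is made here about what 784068 actually changed:)

```python
import time

def paced(calls, delay=30):
    """Run each callable with a pause in between, so the job stays
    under pypi's rate limit; the callables stand in for upload steps."""
    results = []
    for call in calls:
        results.append(call())
        time.sleep(delay)  # the self-induced sleep between pypi requests
    return results
```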
*** dklyle has joined #openstack-infra | 14:57 | |
hberaud | clarkb: however ykarel|away just notified us of a problem with the tarballs | 14:57 |
*** dklyle has quit IRC | 14:57 | |
hberaud | Especially with this one => https://tarballs.opendev.org/openstack/barbican-tempest-plugin/?C=M;O=D | 14:58 |
fungi | well, there's no guarantee it fixed the problem (we might have just not retriggered the issue on the next attempt), but it seems like a good measure to help mitigate | 14:58 |
hberaud | fungi: I think I'll rewrite it with tenacity | 14:58 |
hberaud | to better handle this case | 14:59 |
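(tenacity is well suited to this; a minimal sketch of retrying a flaky pypi request with exponential backoff, assuming the failure surfaces as a requests exception — the function and endpoint are illustrative, not the actual release tooling:)

```python
import requests
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    retry=retry_if_exception_type(requests.RequestException),
    wait=wait_exponential(multiplier=2, max=120),  # back off further on each retry
    stop=stop_after_attempt(5),                    # then give up for real
)
def check_pypi(project):
    # illustrative stand-in for the request that was hitting rate limits
    resp = requests.get(f"https://pypi.org/simple/{project}/", timeout=30)
    resp.raise_for_status()  # a 429 raises HTTPError and triggers a retry
    return resp.text
```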
clarkb | possibly related: pypi is going to remove the xml search api | 14:59 |
hberaud | and as elod suggested we should move away from xmlrpc, which is deprecated | 14:59 |
hberaud | clarkb: yes | 14:59 |
clarkb | yup, well and not just deprecated it sounds like it will be removed entirely soon | 15:00 |
hberaud | ok | 15:00 |
hberaud | So we need to act quickly | 15:01 |
clarkb | I'm trying to remember where I last saw updates, I think on a github issue | 15:02 |
clarkb | the pypi-announce list is still quiet though so maybe not as soon as I thought | 15:02 |
*** ajitha has quit IRC | 15:03 | |
*** dklyle has joined #openstack-infra | 15:04 | |
fungi | hberaud: clarkb: looks like the missing additions on the tarballs site are probably due to a delayed vos release for the project.tarballs volume. looking into it now | 15:04 |
fungi | not sure if ianw started doing the ord replica sync for that, but if so maybe he temporarily held a lock or something | 15:05 |
clarkb | https://github.com/pypa/warehouse/issues/8769 implies you can do something similar with the rest api at least | 15:05 |
*** ociuhandu has quit IRC | 15:05 | |
*** ociuhandu has joined #openstack-infra | 15:06 | |
clarkb | hberaud: https://github.com/pypa/warehouse/issues/4321 that has details on the rate limit itself | 15:06 |
hberaud | clarkb: excellent thanks for the link | 15:06 |
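(For reference, the deprecated interface is the classic xmlrpc endpoint; a rough sketch of moving one common lookup over to the json api, per the warehouse issues linked above:)

```python
import xmlrpc.client
import requests

project = "pbr"  # any project name, purely an example

# old: the xmlrpc api that pypi intends to retire
client = xmlrpc.client.ServerProxy("https://pypi.org/pypi")
releases = client.package_releases(project)

# new: the json api exposes the same metadata
resp = requests.get(f"https://pypi.org/pypi/{project}/json", timeout=30)
resp.raise_for_status()
latest = resp.json()["info"]["version"]
```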
fungi | there is a "vos release -v -localauth project.tarballs" running on afs01.dfw since 00:48 utc | 15:07 |
fungi | i have a feeling that's in progress adding the afs01.ord replica | 15:07 |
fungi | so the tarballs site will be stale until that's done | 15:07 |
clarkb | or we serve from the RW replica temporarily | 15:08 |
*** __ministry1 has joined #openstack-infra | 15:10 | |
*** ociuhandu has quit IRC | 15:12 | |
*** mfixtex has joined #openstack-infra | 15:14 | |
*** lpetrut has quit IRC | 15:22 | |
*** rcernin has joined #openstack-infra | 15:26 | |
*** noonedeadpunk has quit IRC | 15:27 | |
*** noonedeadpunk has joined #openstack-infra | 15:28 | |
fungi | yeah, if this takes much longer i may push that change up | 15:29 |
fungi | need to take a look at the traffic graph and try to estimate how much time is remaining | 15:30 |
*** rcernin has quit IRC | 15:31 | |
*** ykarel|away has quit IRC | 15:37 | |
*** ociuhandu has joined #openstack-infra | 15:38 | |
*** hashar has joined #openstack-infra | 15:41 | |
*** ociuhandu has quit IRC | 15:43 | |
mtreinish | I had a weird reno question (not sure if this is the best channel to ask). | 15:47 |
mtreinish | I've been using reno in some projects on github and we're facing a problem with stable branches and point releases | 15:47 |
mtreinish | basically, when we backport fixes with notes to stable branches, they only show up under point releases when we run `reno report` from that stable branch | 15:47 |
mtreinish | When we run it from master, the note from the commit which was backported from the master branch will be shown under the pending release section | 15:48 |
mtreinish | I was thinking maybe it was the branch scanning config settings, but we use the 'stable/0.x' branch name which should match the default patterns | 15:48 |
mtreinish | does anyone have any thoughts? | 15:49 |
*** spotz has joined #openstack-infra | 15:50 | |
fungi | mtreinish: we were discussing this in #openstack-release yesterday | 15:51 |
* fungi gets a link | 15:51 | |
mtreinish | ah, a channel I probably should idle in. (I lost my znc config a while back and forgot to rejoin several channels) | 15:53 |
fungi | http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-03-30.log.html#t2021-03-30T19:49:18 | 15:53 |
mtreinish | fungi: hmm, it's a bit more than that in my case I think. Like point releases tagged on the stable branches don't show up in the output at all unless your current checkout is the stable branch | 15:56 |
fungi | mtreinish: yes, that's always been true i think | 15:56 |
fungi | branches are handled independently, you can create a release notes document per branch | 15:57 |
mtreinish | has it? I could have sworn it used to do the right thing when running from master and would show all stable point releases tagged on the stable branches | 15:58 |
mtreinish | but I could just be misremembering | 15:58 |
*** ociuhandu has joined #openstack-infra | 15:59 | |
fungi | the way openstack works around that is by having one release notes document per release: https://docs.openstack.org/releasenotes/nova/victoria.html | 15:59 |
fungi | so the stable/victoria release notes continue to update that victoria release notes document | 16:00 |
fungi | and things which happened on master after victoria branched go into a wallaby release notes document | 16:00 |
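(Concretely, each of those per-series documents is a small file driven by reno's sphinx directive; roughly, assuming the usual releasenotes/source/ layout:)

```rst
=============================
Victoria Series Release Notes
=============================

.. release-notes::
   :branch: stable/victoria
```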
*** rpioso|afk is now known as rpioso | 16:00 | |
*** lucasagomes has quit IRC | 16:01 | |
*** __ministry1 has quit IRC | 16:01 | |
fungi | projects with all their releases in one document have struggled with it as a result. e.g. when zuul wanted to backport a fix recently we created a temporary branch and put a fix on it with a release note, but then that wouldn't show up in the release notes for the master branch at all. what we ended up having to do was revert all release notes in master since the point where we branched, merge the | 16:02 |
fungi | fix branch back into master with the tag that was on it, then re-add all the master branch release notes with new reno ids after the merge point | 16:02 |
fungi | and i think that only worked because we hadn't tagged anything new on master after the branch point | 16:02 |
mtreinish | ah ok, yeah I just ran some tests locally and understand my confused memory now. On tempest it shows the stable point releases, but tempest is branchless :P | 16:03 |
mtreinish | ok well it's good to know I'm not doing something wrong, and it's a headache for everyone | 16:04 |
fungi | right, and if tempest ever needs to "backport" a fix to an old release, it will probably have the same struggle zuul does | 16:04 |
fungi | as zuul is similarly branchless | 16:04 |
mtreinish | I would have assumed because the documentation says it supports multiple branch scanning it would be able to handle this case but I guess not | 16:04 |
mtreinish | I'll have to take a look at the reno code and see if there is a way to handle doing this (maybe optionally because I assume it won't be perfect) | 16:05 |
mtreinish | thanks for the help | 16:05 |
fungi | the reno maintainers (openstack release management team) would probably be fine accepting a patch which made that better, but i expect the trick will be working out the reintegration logic for multiple branches | 16:06 |
fungi | the problem is not so much that nobody's thought to add support for it, but that the actual logistics are unclear | 16:06 |
mtreinish | yeah, that's why I was thinking it would be an optional thing. I was thinking of doing something like a BFS traversal approach, but then you have to match tags to stable branches, which could get hairy | 16:09 |
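(The tag-to-branch matching can at least be approximated from git itself; a rough sketch, purely illustrative and not reno code:)

```python
import subprocess

def branches_containing(tag):
    """Return branches whose history contains ``tag``; a tag only
    reachable from e.g. origin/stable/0.x was cut from that branch.
    Tags reachable from several branches stay ambiguous, which is
    exactly the hairy part."""
    out = subprocess.run(
        ["git", "branch", "--remotes", "--contains", tag],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]
```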
*** ociuhandu has quit IRC | 16:12 | |
*** hamalq has joined #openstack-infra | 16:18 | |
zbr|rover | what would be the best channel to chat about pbr? | 16:18 |
*** hamalq_ has joined #openstack-infra | 16:19 | |
*** hamalq has quit IRC | 16:22 | |
*** jamesdenton has quit IRC | 16:24 | |
*** jamesdenton has joined #openstack-infra | 16:25 | |
fungi | zbr|rover: maybe #openstack-oslo since it's an official deliverable of the openstack oslo team. i don't do a lot with pbr maintenance, but i patch or review things in it from time to time and am happy to join discussion there | 16:38 |
*** jcapitao has quit IRC | 16:40 | |
*** hamalq_ has quit IRC | 16:41 | |
*** hamalq has joined #openstack-infra | 16:41 | |
zbr|rover | thanks, i did not know that aspect, and the irc channel was not mentioned in the readme. | 16:42 |
*** eolivare has quit IRC | 16:43 | |
openstackgerrit | Sorin Sbârnea proposed openstack/pbr master: Make default envlist generic in tox.ini https://review.opendev.org/c/openstack/pbr/+/757445 | 16:56 |
*** derekh has quit IRC | 17:00 | |
*** dtantsur is now known as dtantsur|afk | 17:04 | |
openstackgerrit | Stephen Finucane proposed openstack/pbr master: Add test for cfg -> py transformation https://review.opendev.org/c/openstack/pbr/+/780658 | 17:09 |
openstackgerrit | Stephen Finucane proposed openstack/pbr master: Reverse ordering of 'D1_D2_SETUP_ARGS' https://review.opendev.org/c/openstack/pbr/+/780659 | 17:09 |
*** TViernion has quit IRC | 17:10 | |
*** TViernion has joined #openstack-infra | 17:16 | |
*** rcernin has joined #openstack-infra | 17:27 | |
*** rcernin has quit IRC | 17:35 | |
*** vishalmanchanda has quit IRC | 17:51 | |
*** timburke has joined #openstack-infra | 17:52 | |
*** gfidente is now known as gfidente|afk | 17:59 | |
*** yamamoto has joined #openstack-infra | 18:50 | |
*** yamamoto has quit IRC | 18:54 | |
*** jamesdenton has quit IRC | 18:56 | |
*** jamesden_ has joined #openstack-infra | 18:56 | |
*** hashar has quit IRC | 19:08 | |
*** rcernin has joined #openstack-infra | 19:31 | |
*** rcernin has quit IRC | 19:36 | |
*** hjensas has quit IRC | 20:10 | |
*** sboyron has quit IRC | 20:12 | |
*** nweinber has quit IRC | 20:14 | |
*** rcernin has joined #openstack-infra | 20:30 | |
*** ralonsoh has quit IRC | 20:31 | |
*** d34dh0r53 has quit IRC | 20:35 | |
ianw | fungi: urgh, yeah i added the ORD sites as mentioned in the meeting and started a vos release ... tarballs is still going | 20:37 |
ianw | it's in a screen on afs01 | 20:37 |
fungi | yep, that seemed to be what it turned out to be, no worries | 20:37 |
ianw | i dunno. it's going at 10mbps (about 1.3MB/s). yesterday i was copying things at 45MiB/s rax->vexxhost. i don't know why it's so slow | 20:38 |
*** jamesden_ has quit IRC | 20:38 | |
clarkb | afs and latency :( | 20:39 |
*** jamesdenton has joined #openstack-infra | 20:39 | |
ianw | is the latency between dfw & ord that big? | 20:39 |
ianw | it seems to be suspiciously close to exactly 10mbps, i would say | 20:40 |
clarkb | I would suspect its in the 30ms range? | 20:40 |
clarkb | which is probably high enough if afs is doing its weird windowing thing | 20:40 |
*** d34dh0r53 has joined #openstack-infra | 20:40 | |
clarkb | though we could also do a transfer from mirror to mirror in both directions and baseline tcp windowing too | 20:40 |
ianw | time=41.3 | 20:40 |
*** whoami-rajat has quit IRC | 20:47 | |
fungi | dallas and chicago aren't exactly next door, even as the jet flies | 20:48 |
ianw | i should probably kill this and try: https://lists.openafs.org/pipermail/openafs-info/2018-August/042502.html (and document it ...) | 20:50 |
fungi | or we can temporarily update the tarballs vhost on static.o.o to serve from the read-write volume, we've done that before more than once | 20:51 |
*** rcernin has quit IRC | 21:05 | |
*** rcernin has joined #openstack-infra | 21:05 | |
*** rcernin has quit IRC | 21:30 | |
*** dansmith has quit IRC | 21:33 | |
*** dansmith has joined #openstack-infra | 21:37 | |
*** rcernin has joined #openstack-infra | 21:55 | |
*** rcernin has quit IRC | 22:00 | |
*** rcernin has joined #openstack-infra | 22:13 | |
*** rcernin has quit IRC | 22:18 | |
*** yamamoto has joined #openstack-infra | 22:30 | |
*** rcernin has joined #openstack-infra | 22:32 | |
*** rcernin has quit IRC | 22:32 | |
*** rcernin has joined #openstack-infra | 22:33 | |
*** jamesdenton has quit IRC | 22:57 | |
*** jamesden_ has joined #openstack-infra | 22:58 | |
openstackgerrit | Merged openstack/project-config master: Clean up OpenEdge configuration https://review.opendev.org/c/openstack/project-config/+/783990 | 23:22 |
*** tosky has quit IRC | 23:30 | |
*** yamamoto has quit IRC | 23:34 | |
*** lamt has joined #openstack-infra | 23:59 |