*** ryohayakawa has joined #openstack-infra | 00:02 | |
*** rcernin has quit IRC | 00:05 | |
*** rcernin has joined #openstack-infra | 00:31 | |
dansmith | clarkb: this is a drive-by since I'm going afk, but I just got one of those functional timeout jobs and I don't see the testr file anywhere, even though I see that post step that was supposed to grab them: https://zuul.opendev.org/t/openstack/build/87acf04afa5f4bec8618a3802978f714 | 00:35 |
*** weshay|ruck has quit IRC | 00:53 | |
*** weshay_ has joined #openstack-infra | 00:53 | |
*** howell has quit IRC | 00:57 | |
*** samueldmq has quit IRC | 00:57 | |
*** donnyd has quit IRC | 00:57 | |
*** zzzeek has quit IRC | 00:57 | |
*** thedac has quit IRC | 00:57 | |
*** jberg-dev has quit IRC | 00:57 | |
*** dtantsur|afk has quit IRC | 00:57 | |
*** owalsh_ has joined #openstack-infra | 01:00 | |
*** samueldmq has joined #openstack-infra | 01:01 | |
*** donnyd has joined #openstack-infra | 01:01 | |
*** howell has joined #openstack-infra | 01:01 | |
*** zzzeek has joined #openstack-infra | 01:01 | |
*** thedac has joined #openstack-infra | 01:01 | |
*** jberg-dev has joined #openstack-infra | 01:01 | |
*** dtantsur|afk has joined #openstack-infra | 01:01 | |
*** gyee has quit IRC | 01:02 | |
*** mordred has quit IRC | 01:03 | |
*** owalsh has quit IRC | 01:04 | |
*** tkajinam has quit IRC | 01:06 | |
*** tkajinam has joined #openstack-infra | 01:06 | |
*** rfolco has quit IRC | 01:08 | |
*** mordred has joined #openstack-infra | 01:10 | |
*** rfolco has joined #openstack-infra | 01:15 | |
*** ociuhandu has joined #openstack-infra | 01:17 | |
*** rfolco has quit IRC | 01:19 | |
*** ociuhandu has quit IRC | 01:21 | |
*** Goneri has quit IRC | 01:25 | |
*** armax has quit IRC | 01:27 | |
*** tkajinam has quit IRC | 01:33 | |
*** tkajinam has joined #openstack-infra | 01:34 | |
*** ldenny has quit IRC | 01:40 | |
*** auristor has joined #openstack-infra | 01:40 | |
*** ldenny has joined #openstack-infra | 01:40 | |
*** ociuhandu has joined #openstack-infra | 01:43 | |
*** ociuhandu has quit IRC | 01:48 | |
*** dchen is now known as dchen|away | 01:56 | |
*** dchen|away is now known as dchen | 02:01 | |
*** rfolco has joined #openstack-infra | 02:08 | |
*** ricolin has quit IRC | 02:15 | |
*** ricolin has joined #openstack-infra | 02:21 | |
*** rfolco has quit IRC | 02:28 | |
*** artom has quit IRC | 02:35 | |
*** dchen is now known as dchen|away | 02:37 | |
zer0c00l | clarkb: Thanks. | 02:44 |
*** ociuhandu has joined #openstack-infra | 02:53 | |
gouthamr | clarkb fungi: circling back to tell you a new change-id resolved the issue i had earlier - thank you for your help! | 02:57 |
*** ociuhandu has quit IRC | 02:57 | |
*** stevebaker has quit IRC | 03:18 | |
*** armax has joined #openstack-infra | 03:28 | |
*** psachin has joined #openstack-infra | 03:37 | |
*** armax has quit IRC | 03:43 | |
*** dchen|away is now known as dchen | 03:58 | |
*** dchen is now known as dchen|away | 04:08 | |
*** ykarel|away has joined #openstack-infra | 04:22 | |
*** ykarel|away is now known as ykarel | 04:29 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-infra | 04:33 | |
*** dchen|away is now known as dchen | 04:42 | |
*** Lucas_Gray has quit IRC | 04:44 | |
*** udesale has joined #openstack-infra | 04:45 | |
*** dchen is now known as dchen|away | 04:55 | |
*** marios has joined #openstack-infra | 05:05 | |
*** ramishra has quit IRC | 05:11 | |
*** ramishra has joined #openstack-infra | 05:16 | |
*** stevebaker has joined #openstack-infra | 05:16 | |
*** tetsuro has joined #openstack-infra | 05:25 | |
*** tetsuro has quit IRC | 05:25 | |
*** tetsuro has joined #openstack-infra | 05:26 | |
*** tetsuro has quit IRC | 05:27 | |
*** dchen|away is now known as dchen | 05:27 | |
*** lmiccini has joined #openstack-infra | 05:30 | |
*** ysandeep|away is now known as ysandeep|rover | 05:35 | |
*** auristor has quit IRC | 05:38 | |
*** auristor has joined #openstack-infra | 05:39 | |
*** slaweq has joined #openstack-infra | 05:52 | |
*** ociuhandu has joined #openstack-infra | 05:59 | |
*** slaweq has quit IRC | 06:02 | |
*** eolivare has joined #openstack-infra | 06:14 | |
*** brtknr has quit IRC | 06:17 | |
*** dchen has quit IRC | 06:22 | |
*** dchen has joined #openstack-infra | 06:23 | |
*** marios has quit IRC | 06:29 | |
*** vishalmanchanda has joined #openstack-infra | 06:39 | |
*** dklyle has quit IRC | 06:42 | |
*** slaweq has joined #openstack-infra | 06:57 | |
*** hashar has joined #openstack-infra | 07:18 | |
*** jcapitao has joined #openstack-infra | 07:19 | |
*** bhagyashris is now known as bhagyashris|lunc | 07:22 | |
*** xek_ has joined #openstack-infra | 07:24 | |
*** zxiiro has quit IRC | 07:26 | |
*** apetrich has joined #openstack-infra | 07:27 | |
*** yonglihe has joined #openstack-infra | 07:31 | |
*** tosky has joined #openstack-infra | 07:39 | |
*** ralonsoh has joined #openstack-infra | 07:40 | |
*** yolanda has quit IRC | 07:40 | |
*** dtantsur|afk is now known as dtantsur | 07:53 | |
*** jpena|off is now known as jpena | 07:55 | |
*** evrardjp has quit IRC | 08:06 | |
*** pkopec has joined #openstack-infra | 08:06 | |
*** evrardjp has joined #openstack-infra | 08:08 | |
*** lucasagomes has joined #openstack-infra | 08:08 | |
*** yolanda has joined #openstack-infra | 08:17 | |
*** derekh has joined #openstack-infra | 08:23 | |
*** bhagyashris|lunc is now known as bhagyashris | 08:40 | |
*** brtknr has joined #openstack-infra | 08:46 | |
*** ociuhandu has quit IRC | 08:56 | |
*** dtantsur is now known as dtantsur|brb | 08:58 | |
*** dchen is now known as dchen|away | 09:03 | |
*** rcernin has quit IRC | 09:12 | |
*** Lucas_Gray has joined #openstack-infra | 09:23 | |
*** Lucas_Gray has quit IRC | 09:27 | |
*** Lucas_Gray has joined #openstack-infra | 09:30 | |
*** ociuhandu has joined #openstack-infra | 09:33 | |
*** ociuhandu has quit IRC | 09:37 | |
*** ociuhandu has joined #openstack-infra | 09:42 | |
*** hashar has quit IRC | 09:42 | |
*** tkajinam has quit IRC | 10:01 | |
*** hashar has joined #openstack-infra | 10:06 | |
*** ramishra has quit IRC | 10:08 | |
*** hashar has quit IRC | 10:35 | |
*** ramishra has joined #openstack-infra | 10:49 | |
*** ricolin has quit IRC | 10:52 | |
*** eolivare has quit IRC | 11:08 | |
*** hashar has joined #openstack-infra | 11:14 | |
*** jcapitao is now known as jcapitao_lunch | 11:15 | |
*** ysandeep|rover is now known as ysandeep|afk | 11:21 | |
*** dtantsur|brb is now known as dtantsur | 11:26 | |
*** xek_ has quit IRC | 11:27 | |
*** hashar has quit IRC | 11:34 | |
*** hashar has joined #openstack-infra | 11:34 | |
*** dchen|away is now known as dchen | 11:34 | |
*** ysandeep|afk is now known as ysandeep|rover | 11:38 | |
openstackgerrit | Slawek Kaplonski proposed openstack/project-config master: Move non-voting neutron tempest jobs to separate graph https://review.opendev.org/743729 | 11:40 |
*** markvoelker has joined #openstack-infra | 11:40 | |
*** hashar has quit IRC | 11:42 | |
*** hashar has joined #openstack-infra | 11:42 | |
*** hashar has quit IRC | 11:44 | |
*** hashar has joined #openstack-infra | 11:45 | |
*** dchen is now known as dchen|away | 11:46 | |
*** markvoelker has quit IRC | 11:47 | |
*** rfolco has joined #openstack-infra | 11:51 | |
*** ryohayakawa has quit IRC | 11:52 | |
*** jpena is now known as jpena|lunch | 11:52 | |
*** hashar has quit IRC | 11:57 | |
*** rlandy has joined #openstack-infra | 12:02 | |
*** artom has joined #openstack-infra | 12:03 | |
*** hashar has joined #openstack-infra | 12:05 | |
*** dciabrin has quit IRC | 12:06 | |
*** dciabrin has joined #openstack-infra | 12:07 | |
*** eolivare has joined #openstack-infra | 12:07 | |
*** udesale_ has joined #openstack-infra | 12:19 | |
*** udesale has quit IRC | 12:21 | |
*** hashar has quit IRC | 12:22 | |
*** jcapitao_lunch is now known as jcapitao | 12:25 | |
*** xek has joined #openstack-infra | 12:27 | |
*** lpetrut has joined #openstack-infra | 12:44 | |
*** dciabrin has quit IRC | 12:44 | |
*** dciabrin has joined #openstack-infra | 12:45 | |
*** markvoelker has joined #openstack-infra | 12:48 | |
*** jpena|lunch is now known as jpena | 12:54 | |
*** weshay_ is now known as weshay|ruck | 12:54 | |
*** Goneri has joined #openstack-infra | 13:01 | |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul-jobs master: ensure-pip: add instructions for RedHat system https://review.opendev.org/743750 | 13:03 |
*** dciabrin has quit IRC | 13:27 | |
*** dciabrin has joined #openstack-infra | 13:27 | |
*** dave-mccowan has joined #openstack-infra | 13:29 | |
*** xek has quit IRC | 13:30 | |
*** ysandeep|rover is now known as ysandeep | 13:31 | |
*** andrewbonney has joined #openstack-infra | 13:32 | |
mwhahaha | clarkb: so the multiple requests are likely the different jobs requesting the same layer. I checked the logs for one of the jobs and d1dded21abdf1872a3e678bb99614c7728e3d1b381d5721169bedae30cda5c61 was requested 2 times. I'll do some testing today to try and figure out the cache busting behavior | 13:36 |
*** dchen|away is now known as dchen | 13:44 | |
*** piotrowskim has joined #openstack-infra | 13:46 | |
mwhahaha | can anyone point me at where the apache cache config lives for the docker.io mirror? I'd like to recreate locally to troubleshoot | 13:49 |
*** d34dh0r53 has joined #openstack-infra | 13:49 | |
clarkb | mwhahaha: I filtered by ip address first then trimmed to timestamps occurring for that job against your change. All of that should mean I counted for that one host on that one job. | 13:53 |
fungi | mwhahaha: https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror/templates/mirror.vhost.j2#L400-L452 | 13:53 |
fungi | assuming you're looking for the v2 protocol proxy | 13:53 |
mwhahaha | clarkb: weird. ok i'll try and track down the other requests | 13:53 |
fungi | if you're looking for the v1 protocol proxy then it's in a similar macro earlier in that template | 13:53 |
mwhahaha | fungi: yes thats it, thanks | 13:53 |
fungi | worth noting, dockerhub isn't designed to be easily proxied, they discourage doing so, and we have a few ugly workarounds in there to get it working at all | 13:55 |
fungi | though apparently our workarounds are only sufficient to get it working with the official docker client, sounds like? | 13:56 |
clarkb | the biggest thing is ignoring url query parameters to make the urls cacheable | 13:57 |
openstackgerrit | Aurelien Lourot proposed openstack/project-config master: Add Keystone Kerberos charm to OpenStack charms https://review.opendev.org/743766 | 13:57 |
clarkb | they are sha256 addressed so should never change | 13:57 |
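One way to ignore query strings in an Apache reverse-proxy cache is mod_cache's CacheIgnoreQueryString directive; this is only an illustrative sketch and may not match the actual mirror.vhost.j2 linked above:

    # Sketch: drop the signed query parameters from the cache key so the
    # sha256-addressed blob path alone identifies the cached object.
    CacheIgnoreQueryString On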
*** ysandeep is now known as ysandeep|away | 14:07 | |
*** xek has joined #openstack-infra | 14:09 | |
openstackgerrit | Aurelien Lourot proposed openstack/project-config master: Add Keystone Kerberos charm to OpenStack charms https://review.opendev.org/743766 | 14:10 |
*** sshnaidm is now known as sshnaidm|bbl | 14:17 | |
clarkb | mwhahaha: I'm double checking and all of the logs I processed yesterday are for https://zuul.opendev.org/t/openstack/build/61aa12e9474840c8969fc8426541fa41/log/job-output.txt#48 that host and the logs run from [2020-07-28 22:06:40.671] to [2020-07-28 22:57:01.165] which is within the time range of that job running | 14:27 |
clarkb | mwhahaha: now I'm spot checking my requests counts to make sure they are accurate | 14:27 |
*** dklyle has joined #openstack-infra | 14:29 | |
clarkb | mwhahaha: ok there was a bug in my sed, it was doubling the counts. So still have multiple requests, but half as many as in the original paste. http://paste.openstack.org/show/796432/ should be accurate | 14:31 |
clarkb | I was using s///p without -n so it printed an extra match I think. Dropped the p and now the counts should be correct | 14:31 |
*** dchen is now known as dchen|away | 14:37 | |
*** ricolin has joined #openstack-infra | 14:40 | |
*** rcernin has joined #openstack-infra | 14:41 | |
*** Lucas_Gray has quit IRC | 14:41 | |
*** lpetrut has quit IRC | 14:43 | |
*** Lucas_Gray has joined #openstack-infra | 14:49 | |
*** lbragstad_ has joined #openstack-infra | 14:51 | |
*** rcernin_ has joined #openstack-infra | 14:51 | |
*** dklyle has quit IRC | 14:53 | |
clarkb | mwhahaha: looking at tcpdumps the docker client drops the Authorization header bearer token when talking to the CDN | 14:53 |
mwhahaha | wat | 14:53 |
clarkb | give me a few minutes to put this all in a paste | 14:53 |
mwhahaha | thanks i'm setting up an apache mirror at the moment to play with it a bit more | 14:53 |
*** fdegir5 has joined #openstack-infra | 14:54 | |
*** rcernin_ has quit IRC | 14:56 | |
*** rcernin has quit IRC | 14:58 | |
*** fdegir has quit IRC | 14:58 | |
*** irclogbot_3 has quit IRC | 14:58 | |
*** zer0c00l has quit IRC | 14:58 | |
*** lbragstad has quit IRC | 14:58 | |
*** irclogbot_3 has joined #openstack-infra | 14:59 | |
zbr | clarkb: why are we not using a generic proxy approach for the mirror_info.sh implementation? | 15:00 |
*** dklyle has joined #openstack-infra | 15:01 | |
clarkb | mwhahaha: http://paste.openstack.org/show/796435/ that is what I see | 15:03 |
clarkb | zbr: I don't understand the question. What do you mean by generic proxy approach? | 15:03 |
zbr | one that you would configure as HTTP_PROXY=.... | 15:03 |
clarkb | zbr: because we don't want open proxies on the internet, and there isn't a good way for us to restrict access to the proxies if we want our jobs to be able to use them | 15:04 |
clarkb | zbr: instead we reverse proxy so that only specific backends can be targetted limiting the risk we'll be used for nefarious reasons | 15:04 |
zbr | to me this seems like creating more maintenance and less flexibility. it is still possible to limit what the proxy would serve or not, or for whom. | 15:05 |
clarkb | we can't IP filter because we use public clouds and get large ranges of IP addrs which can also be used by others. I don't think we can reasonably authenticate the proxies as anyone can write a job to disclose the auth material (particularly since we expect the proxy to be functional throughout the job and not just during pre or post steps) | 15:05 |
clarkb | zbr: how? I don't think we have a method to limit the who in this case | 15:06 |
zbr | IP ranges? | 15:06 |
clarkb | we could limit the what but then every time someone adds a new job they'd have to disable the proxy if they talk to things that aren't already enabled | 15:06 |
clarkb | see my earlier note about IP ranges | 15:06 |
clarkb | due to our use of public clouds that isn't really sufficient | 15:06 |
clarkb | BUT also it wouldn't address this problem at all | 15:06 |
zbr | or we could use auth on them | 15:07 |
clarkb | zbr: see my note about auth :) | 15:07 |
clarkb | if someone can construct a method to do that I'd be happy to hear it but I've been unable to figure out how it would work in a reasonable manner | 15:07 |
clarkb | the problem we have here is a result of pushing requests through a single origin which is true if we forward or reverse proxy | 15:08 |
clarkb | it's made worse by not being able to cache for some reason so all the requests go through. On top of that we have jobs requesting the same resources multiple times. | 15:08 |
clarkb | If we fix the caching issue all of those multiple requests should avoid hitting the remote. And additionally maybe we can stop making all of those extra requests and use the earlier fetched data within the job | 15:09 |
zbr | i am trying to find your note.... | 15:09 |
clarkb | 15:05:54 clarkb | we can't IP filter because we use public clouds and get large ranges of IP addrs which can also be used by others. I don't think we can reasonably authenticate the proxies as anyone can write a job to disclose the auth material (particularly since we expect the proxy to be functional throughout the job and not just during pre or post steps) | 15:10 |
clarkb | zbr: ^ | 15:10 |
zbr | i do not see any damage in not having a perfect lockdown of the proxy, what if few additional IPs would be able to use the proxy? Or what if someone would expose the credential and make use of it? If these proxies are only mirrors for specific domains, it should not matter. | 15:12 |
clarkb | zbr: the setup you describe would capture all traffic, or are you suggesting that every job needs to know to enable the proxy for specific requests? | 15:13 |
*** lmiccini has quit IRC | 15:13 | |
clarkb | generally the suggestion is that we configure it in /etc system wide so that jobs don't need to be aware of it | 15:14 |
zbr | so now every job needs to know how to load and process 30 variables from https://opendev.org/opendev/base-jobs/src/branch/master/roles/mirror-info/templates/mirror_info.sh.j2 | 15:14 |
clarkb | the problem there is we can't limit the traffic much and that makes it a potential avenue for abuse | 15:14 |
zbr | instead of doing it for a single HTTP_PROXY one | 15:14 |
*** zxiiro has joined #openstack-infra | 15:14 | |
clarkb | zbr: that's not true; what you link is the legacy pattern | 15:14 |
clarkb | zbr: the expectation now is that you'll run the role to configure eg docker and it will know to do it for you | 15:14 |
clarkb | which is how roles like the docker role work | 15:15 |
clarkb | but as I said if you configure the single http proxy once then you have to allow all traffic through the proxy and that is the abuse concern | 15:15 |
clarkb | because now anyone on the internet can use us to funnel their traffic | 15:15 |
clarkb | then if we get firewalled all our jobs stop working | 15:15 |
clarkb | I'm not suggesting it is perfect, but given the constraints we operate under I think it is reasonable to do what we do (reverse proxying specific resources). Also, again, changing to a forward proxy would not change the key details of this problem that tripleo is facing | 15:16 |
zbr | do I need to allow all traffic through it? wouldn't it be enough to allow only select locations? | 15:17 |
clarkb | zbr: if you do that then every job task needs to know to set that env var when it makes requests | 15:17 |
zbr | in fact I even see a few extra security benefits to having a proxy: you set it up on the machine and discover if jobs try to access data from "random" (unapproved) locations. | 15:18 |
clarkb | I don't want to be the location police | 15:18 |
zbr | we could even have a locked down mode | 15:19 |
clarkb | jobs can already limit themselves in that manner if they choose, but I don't think it is the CI platforms duty to address that | 15:19 |
clarkb | for example this is what the remove sudo role provides to jobs that want to avoid sudo access | 15:19 |
clarkb | but jobs opt into that and its all job config, not platform setup | 15:19 |
zbr | yeah, that could be used for similar purposes. | 15:20 |
zbr | even such a proxy could be an opt-in feature. | 15:21 |
clarkb | it is, one of the options available to tripleo is to stop using our proxy | 15:21 |
clarkb | then you'd have the background error rate but due to using different IPs would likely avoid the rate limits | 15:21 |
clarkb | job runtime shouldn't go up since we weren't caching anything for those jobs anyway | 15:22 |
fungi | also reduced performance due to fetching them from farther away on every request (though that's currently the case anyway because the requests from tripleo's docker client aren't getting cached for as of yet unknown reasons) | 15:23 |
fungi | er, what clarkb just said | 15:23 |
clarkb | ya I think fixing the problem directly would be an improvement, but dropping the proxies wouldn't be a regression against the current situation | 15:23 |
zbr | clarkb: my question is unrelated to docker issue, is quite the opposite, see https://bugs.launchpad.net/tripleo/+bug/1888700 | 15:25 |
openstack | Launchpad bug 1888700 in tripleo "image build jobs are not using upstream proxy servers" [Critical,Triaged] - Assigned to Sorin Sbarnea (ssbarnea) | 15:25 |
clarkb | zbr: unfortunately "It would far better to have a proxy enabled and make use of it transparently, so when no proxy is configured it would still work." is a problem for our environment | 15:26 |
clarkb | because if we make it transparent then we're unable to control it to a level where abuse can be limited | 15:26 |
clarkb | yes that would likely be better in a perfect world | 15:26 |
clarkb | but we don't have the ability to take advantage of that as far as I can see (because it would require us to put open proxies on the internet) | 15:27 |
zbr | so the main issue here is preventing abuse | 15:27 |
clarkb | yes: if we set up a transparent squid proxy in each of our cloud regions and configured our test nodes to use it, how would we also prevent random internet users from using them? IP restrictions are difficult because we use public clouds and share IP pools and those IP pools change. Authentication is trivially exposed via a push to gerrit since we'd need the proxy to be usable during the run | 15:28 |
clarkb | portion of a job and not just in the trusted pre and post playbooks | 15:28 |
zbr | if abuse prevention is the only issue, i think there are ways to avoid it that are worth investigating. | 15:28 |
clarkb | there may be, but I've spent some time thinking about it (when we initially set up the reverse proxies with pabelanger) and couldn't come up with a reasonable solution | 15:29 |
zbr | zuul could produce a temporary token which gives access to the proxy for limited amount of time | 15:29 |
clarkb | but now its too complicated to bother | 15:29 |
clarkb | what we have is simple and it works | 15:29 |
clarkb | (also there are issues with that approach too which we hit with our first attempt at swift based log storage) | 15:30 |
zbr | until someone comes to you and says: i need mirrors for foo.org; two weeks later, it becomes bar.org | 15:30 |
fungi | yeah, i was about to point out that we tried the temporary token authentication solution for granting nodes access to write to swift containers, and ultimately abandoned that due to the complexity | 15:30 |
clarkb | fungi: complexity and it didn't work reliably as getting timing right is weird | 15:31 |
zbr | i personally do not find that approach to scale well | 15:31 |
clarkb | zbr: yes, I agree it's not perfect, but again given the constraints we have it has worked really really well | 15:31 |
fungi | well, i meant getting the timing right between token generation, authorization, and revocation was complicated | 15:32 |
fungi | it looks like the suggested way to do it with squid is kerberos | 15:34 |
fungi | i can only imagine the new and exciting failure modes we'll encounter setting up each job node as a kerberos client | 15:35 |
zbr | clarkb: fungi: tx, time for me to go back and check how big the need is to cache/mirror requests made towards images.rdoproject.org | 15:35 |
fungi | squid can also support bearer tokens supplied in the authentication header | 15:37 |
*** armax has joined #openstack-infra | 15:37 | |
fungi | but again, key distribution would be the complicated bit | 15:38 |
clarkb | fungi: apache does too fwiw (it's working with the docker hub client http://paste.openstack.org/show/796435/) | 15:38 |
zbr | what i do not understand is why we cannot limit access to a proxy to requests coming from inside the same cloud tenant, but I also do not know how our networking is set up | 15:38 |
*** williampiv has joined #openstack-infra | 15:38 | |
fungi | zbr: it's generally "provider" networking | 15:39 |
clarkb | zbr: because we don't get tenant networking in: rax, vexxhost, ovh | 15:39 |
fungi | we don't have a dedicated network for our nodes | 15:39 |
clarkb | I think we do have tenant networking in limestone, openedge, and inap | 15:39 |
clarkb | not sure about linaro | 15:39 |
clarkb | rax + vexxhost + ovh are probably 70% of our resources | 15:40 |
zbr | but all support them, so we could provision proxies that respond only to internal requests, even without having to "configure" the proxy itself to be aware of that. | 15:40 |
clarkb | I'm not sure that's a true statement | 15:41 |
clarkb | I'm 99% sure OVH does not | 15:41 |
clarkb | I know rax didn't but don't know if that has changed. I think vexxhost may allow us to configure tenant networks if they aren't there by default | 15:41 |
clarkb | OVH networking is particularly interesting. They give you a /32 on ipv4 with a default gateway outside of your subnet (and it works). For ipv6 they don't RA, neutron knows about it, but not config drive or metadata service. So you have to query neutron then statically configure it (so we don't do ipv6 there) | 15:43 |
*** williampiv has quit IRC | 15:43 | |
clarkb | mwhahaha: I notice that Vary: Accept-Encoding is set on the response from cloudflare (wasn't in my earlier paste as I didn't do the last response, whoops). And docker client has Accept-Encoding: Identity while tripleo client does Accept-Encoding: gzip, deflate. I think that means if we do start caching we will cache disparate objects for tripleo client and docker client as the accept encodings are | 15:49 |
clarkb | different. | 15:49 |
clarkb | while I don't think that is a direct cause of not caching at all, it's likely another issue to clean up to avoid duplicate objects | 15:49 |
clarkb | it seems that maybe the Authorization header is related to not caching, however I would've expected your change to address that. Local testing is likely the best next step for sorting that out. | 15:50 |
zbr | i wonder if there is proxy that I can easily reconfigure using REST, so I could tell it to open access to specific IP when I know about it, cleaning it after. | 15:51 |
mwhahaha | clarkb: yea but according to the docs I should be able to set Cache-Control: public with Authorization and caching should take effect | 15:52 |
clarkb | mwhahaha: ya that is why I expected your change would help | 15:52 |
mwhahaha | per https://httpd.apache.org/docs/2.4/caching.html "What Can be Cached" | 15:52 |
clarkb | mwhahaha: which makes me wonder if it is another issue (or maybe two things: authorization and something else) | 15:52 |
mwhahaha | since i grabbed the httpd conf i'm going to see if i need additional stuff | 15:52 |
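A minimal local test vhost along these lines can reproduce the proxy-cache behaviour; port, paths, and hostnames here are placeholders rather than the production mirror configuration, and it assumes mod_proxy, mod_proxy_http, mod_ssl, mod_cache, and mod_cache_disk are loaded:

    Listen 8082
    <VirtualHost *:8082>
        ProxyRequests off
        SSLProxyEngine on
        CacheRoot /var/cache/apache2/proxy
        CacheEnable disk "/v2"
        CacheIgnoreQueryString On
        CacheMaxExpire 86400
        CacheHeader on    # adds an X-Cache: HIT/MISS header, handy for testing
        <Location "/v2">
            ProxyPass https://registry-1.docker.io/v2
            ProxyPassReverse https://registry-1.docker.io/v2
        </Location>
    </VirtualHost>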
*** ykarel has quit IRC | 15:53 | |
clarkb | zbr: related to that is the reason we stopped using cloud provider dns servers. On rax they block IPs that make too many dns requests, but since we boot lots of nodes what would happen is they blocked an IP for too many dns requests when that IP was used by someone else. Then when it was our turn to use the IP they didn't stop blocking the IP and our jobs would fail. This is what prompted us to run | 15:54 |
clarkb | local dns forwarding resolvers and bypass the cloud servers entirely. (fwiw I think it could've been done better and this isn't a reason to not do that, just an interesting story related to a similar mechanism) | 15:54 |
*** ykarel has joined #openstack-infra | 15:58 | |
zbr | afaik, zuul knows the IP of its nodes, so it can (un)lock access to a proxy based on that; the only trick is that we need a proxy that does not need a restart to change its ACL | 15:59 |
zbr | and this could be a generic and optional ensure-proxy role | 15:59 |
zbr | if a job runs on a cloud that does not support a proxy, it may just skip configuring the proxy | 16:00 |
fungi | that seems like a fairly fundamental job behavior we wouldn't want changing at random depending on where the build happened to get scheduled | 16:01 |
*** pkopec has quit IRC | 16:01 | |
*** lucasagomes has quit IRC | 16:02 | |
zbr | i would say behavior on this would be an ops decision :D | 16:02 |
* fungi has no idea what that means | 16:03 | |
*** xek has quit IRC | 16:05 | |
*** jcapitao has quit IRC | 16:09 | |
clarkb | something like that may work now that we have a zuul cleanup phase, but we would also need to switch to always gracefully stopping executors | 16:13 |
*** markvoelker has quit IRC | 16:13 | |
clarkb | otherwise the cleanup may not run and we'll leak connections (or add proxy cleanup to zuul restart procedures) | 16:14 |
*** ykarel has quit IRC | 16:16 | |
*** ociuhandu_ has joined #openstack-infra | 16:21 | |
*** ociuhandu has quit IRC | 16:23 | |
*** udesale_ has quit IRC | 16:24 | |
*** ociuhandu_ has quit IRC | 16:27 | |
*** sshnaidm|bbl is now known as sshnaidm | 16:28 | |
*** jrichard has joined #openstack-infra | 16:28 | |
*** gyee has joined #openstack-infra | 16:28 | |
*** ociuhandu has joined #openstack-infra | 16:35 | |
*** ociuhandu has quit IRC | 16:41 | |
*** hashar has joined #openstack-infra | 16:42 | |
*** psachin has quit IRC | 16:53 | |
*** Lucas_Gray has quit IRC | 16:54 | |
*** pkopec has joined #openstack-infra | 16:56 | |
*** fdegir5 is now known as fdegir | 16:57 | |
*** derekh has quit IRC | 17:00 | |
*** ricolin has quit IRC | 17:01 | |
*** rlandy is now known as rlandy|mtg | 17:02 | |
*** jpena is now known as jpena|off | 17:02 | |
*** sshnaidm is now known as sshnaidm|afk | 17:15 | |
*** armax has quit IRC | 17:19 | |
*** armax has joined #openstack-infra | 17:19 | |
*** dtantsur is now known as dtantsur|afk | 17:21 | |
*** dchen|away has quit IRC | 17:21 | |
*** dchen|away has joined #openstack-infra | 17:24 | |
*** jrichard has quit IRC | 17:30 | |
*** doggydogworld has joined #openstack-infra | 17:32 | |
doggydogworld | hello all, i'm trying to PCI passthrough a NIC, but am running into "Insufficient compute resources: Claim pci failed.", does anyone have some insight as to how to fix this? | 17:33 |
doggydogworld | i'm on train and following this guide | 17:33 |
doggydogworld | https://docs.openstack.org/nova/train/admin/pci-passthrough.html | 17:33 |
doggydogworld | also, using all-in-one RDO packstack | 17:33 |
*** rlandy|mtg is now known as rlandy | 17:43 | |
*** artom has quit IRC | 17:45 | |
*** artom has joined #openstack-infra | 17:46 | |
*** dchen|away is now known as dchen | 17:50 | |
*** artom has quit IRC | 17:52 | |
clarkb | doggydogworld: we help run the developer infrastructure for openstack but don't do a ton of openstack operations ourselves | 17:53 |
clarkb | doggydogworld: you might have better luck in #openstack or emailing openstack-discuss@lists.openstack.org | 17:53 |
doggydogworld | okay, thank you clark | 17:54 |
*** dchen is now known as dchen|away | 18:00 | |
*** artom has joined #openstack-infra | 18:07 | |
clarkb | mwhahaha: I think I figured it out | 18:16 |
EmilienM | woot | 18:16 |
clarkb | the responses to the tripleo client have Cache-Control: public, max-age=14400 and Age: 512824 headers set | 18:17 |
clarkb | the responses to the docker client have Cache-Control: public, max-age=14400 set but no Age header | 18:17 |
clarkb | I wonder if the difference is the authorization header being sent on the request to cloud flare | 18:17 |
clarkb | also rereading the apache docs I think we already do the right thing for Authorization because it is the response to that request that needs to set cache-control, not the request itself, aiui. But then because we are over max-age we don't cache anyway | 18:19 |
*** andrewbonney has quit IRC | 18:21 | |
clarkb | I don't know what sort of logic is expected around the authorization header by the docker image protocol :/ but I'm guessing that is something the upstream client can give us hints on. Then we also want to update the accept-encoding so that the vary header doesn't force us to cache duplicate data | 18:22 |
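A hedged sketch of that accept-encoding cleanup: forcing a single Accept-Encoding on proxied requests means the Vary: Accept-Encoding response header can only ever produce one cached variant per blob. The /cloudflare/ prefix is the CDN-redirect location discussed later in the log:

    # Sketch: normalize the request encoding so the docker client (Identity)
    # and the tripleo client (gzip, deflate) share one cached copy per blob.
    <Location "/cloudflare/">
        RequestHeader set Accept-Encoding "identity"
    </Location>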
mwhahaha | the problem is that it's really docker.io specific :/ | 18:23 |
mwhahaha | we require auth for our blob fetches on registry.redhat.io i think | 18:24 |
clarkb | ya, but the docker client must have logic in there for it? | 18:24 |
clarkb | otherwise docker wouldn't work with any other registry? | 18:24 |
mwhahaha | maybe they just don't Auth on a 307 | 18:24 |
mwhahaha | i'm really uncertain | 18:24 |
mwhahaha | or we're incorrectly setting the scope of our auth tokens | 18:25 |
clarkb | it's also possible something else is tripping the setting of Age by cloudflare | 18:25 |
mwhahaha | anyway going to start digging into that more | 18:25 |
clarkb | but the authorization header stands out as a big difference | 18:25 |
clarkb | also arguably this is a bug in the cloudflare server setup as max-age shouldn't matter for sha256 addressed entities | 18:27 |
clarkb | they cannot change | 18:27 |
fungi | they can go away though via, e.g., deletion | 18:29 |
*** vishalmanchanda has quit IRC | 18:29 | |
fungi | so would switch from 200 to 404 or something like that | 18:29 |
mwhahaha | that being said, shouldn't max age still cache? just not for a long time? | 18:30 |
fungi | not if the age returned is greater than the max-age | 18:30 |
mwhahaha | hmm | 18:31 |
fungi | age 512824 is something like 6 days | 18:31 |
*** artom has quit IRC | 18:31 | |
clarkb | ya normally Age is like 0 | 18:31 |
fungi | lots of sites play games with age and max-age to try to make things uncacheable ("cache busters") | 18:31 |
fungi | at one point i remember having to custom compile a patched squid to ignore some of them and just cache it already please | 18:32 |
fungi | silly games like setting negative max-age | 18:32 |
fungi | maybe apache can be configured to strip/ignore age from responses? | 18:33 |
fungi | but still it's surprising that cloudflare is only sometimes returning an age with those depending on how they're requested | 18:34 |
fungi | could even be a custom filter based on the user agent | 18:35 |
clarkb | oh thats a good point | 18:35 |
fungi | that might be easy to test if the tripleo client can be tweaked to set a user agent | 18:36 |
mwhahaha | it's just python requests so we can change whatever | 18:37 |
fungi | yeah, i think it's an additional string parameter you pass in the connection constructor | 18:38 |
*** hashar has quit IRC | 18:39 | |
clarkb | we can use https://httpd.apache.org/docs/current/mod/mod_headers.html#header to unset the Age header | 18:45 |
EmilienM | mwhahaha: we probably want to name it with an obvious name for better tracking in logs (i guess you already thought about it) | 18:45 |
clarkb | I'm not sure that is the best option here, but if we can't sort out what dockerhub/cloud flare expect then unsetting that seems reasonable enough | 18:46 |
clarkb | in particular we have the docker v2 proxy on its own vhost so we can limit the "damage" | 18:46 |
clarkb | we already set a max expiration for our cache at one day so we'll eventually catch up to something that has been deleted (and when docker hub client is used the header isn't there anyway) | 18:47 |
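With mod_headers that would look roughly like the following; whether the header is actually removed before mod_cache makes its storage decision is the open question raised further down, so this is a sketch of the idea rather than a confirmed fix:

    # Sketch: strip the CDN's Age header on the redirect prefix so the
    # 14400s max-age freshness lifetime isn't already exhausted on arrival.
    <Location "/cloudflare/">
        Header unset Age
    </Location>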
*** artom has joined #openstack-infra | 18:48 | |
*** artom has quit IRC | 18:48 | |
*** artom has joined #openstack-infra | 18:48 | |
openstackgerrit | Andrii Ostapenko proposed zuul/zuul-jobs master: Add ability to use (upload|promote)-docker-image roles in periodic jobs https://review.opendev.org/740560 | 18:49 |
openstackgerrit | Andrii Ostapenko proposed zuul/zuul-jobs master: Add ability to use (upload|promote)-docker-image roles in periodic jobs https://review.opendev.org/740560 | 18:52 |
*** ralonsoh has quit IRC | 18:53 | |
*** lbragstad_ is now known as lbragstad | 19:01 | |
*** zer0c00l has joined #openstack-infra | 19:05 | |
*** zer0c00l has quit IRC | 19:06 | |
*** zer0c00l has joined #openstack-infra | 19:13 | |
clarkb | mwhahaha: fungi https://github.com/moby/moby/blob/master/registry/registry.go#L157-L174 it's basically an "am I talking to docker.com or docker.io" check; if not, then drop authorization | 19:24 |
clarkb | I think the logic there is if we've been redirected away from our actual location then drop Authorization as the authorization no longer applies | 19:25 |
clarkb | whether or not that is actually correct in all instances (see https://github.com/moby/moby/blob/master/registry/registry.go#L140-L155 where they hardcode their own stuff) I don't know | 19:26 |
clarkb | where that gets weird for us is we proxy through the same host so it's by chance that it gets dropped (because we aren't called docker.io) | 19:26 |
mwhahaha | the alternative way is to use docker registry as a passthrough cache | 19:27 |
mwhahaha | have we looked into that (rather than apache)? | 19:27 |
clarkb | mwhahaha: yes, there is no way to prune the registry in that case | 19:27 |
mwhahaha | figures | 19:28 |
mwhahaha | you can query the catalog and write a pruner | 19:28 |
mwhahaha | but yea i guess that's lame | 19:28 |
clarkb | the thing they have built in requires you to stop the service aiui | 19:28 |
clarkb | and so you take an outage while you do a bunch of io | 19:29 |
mwhahaha | though you could round robin them | 19:29 |
mwhahaha | to handle that | 19:29 |
mwhahaha | anyway /me goes back to checking on headers | 19:29 |
clarkb | maybe, it would be nice to not over complicate this. | 19:29 |
mwhahaha | too late | 19:29 |
clarkb | after reading the upstream code I think we can probably drop the Age: header | 19:29 |
clarkb | since they are special casing things poorly in the docker code | 19:29 |
mwhahaha | i mean you should be able to drop the age header on the Dockerv2 config | 19:29 |
clarkb | yup that is what I mean | 19:30 |
mwhahaha | and the risk should be contained | 19:30 |
mwhahaha | Age: 1270269 | 19:30 |
mwhahaha | so i see that | 19:30 |
clarkb | we can also drop the Authorization header on the request side on requests to cloudflare | 19:30 |
mwhahaha | let me see if it's like user agent specific | 19:30 |
clarkb | (maybe do both) | 19:30 |
clarkb | mwhahaha: ++ | 19:30 |
clarkb | if it's UA specific we can change the UA header and lie in the proxy instead | 19:31 |
clarkb | or maybe your client can do that | 19:31 |
clarkb | in general I think the rule that the docker upstream is trying to encode is "if we've been redirected to a third party service then remove authorization as the authorization token is valid only for the origin". Since we proxy everything through the same host the most accurate way to encode that may be to drop Authorization headers on the cloudflare prefix | 19:34 |
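Expressed in the proxy config, that rule would look something like the sketch below; note that a quick test of this later in the log did not obviously change the caching outcome, so treat it as an illustration of the idea:

    # Sketch: mimic the docker client by dropping the bearer token on
    # requests forwarded to the CDN prefix, where the registry-issued
    # token no longer applies.
    <Location "/cloudflare/">
        RequestHeader unset Authorization
    </Location>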
mwhahaha | podman also gets the Age but its useragent is libpod/1.6.4 | 19:36 |
mwhahaha | so docker cli is doing something special | 19:36 |
clarkb | oh, this gets more interesting: they redirect to cloudflare using a docker.com name, so that would be trusted, but because we don't have that same suffix the docker client drops authorization | 19:40 |
clarkb | mwhahaha: are you able to test easily if dropping authorization drops the age header? | 19:40 |
clarkb | I'm working on a change now to drop the age header on the proxy as I'm beginning to think that is most accurate for us | 19:41 |
mwhahaha | so i get an Age: but it's less than 14400 when i just switched the user agent | 19:42 |
clarkb | switched to the docker UA? | 19:43 |
mwhahaha | yea | 19:43 |
clarkb | and fetching the same blob with a different UA has a larger Age? | 19:43 |
mwhahaha | yea | 19:43 |
clarkb | wow | 19:43 |
mwhahaha | going to double check but i got numbers less than the 14400 | 19:44 |
clarkb | I've double checked my tcpdump and it isn't set at all there | 19:44 |
mwhahaha | nm, the Age is consistent regardless of the UA | 19:49 |
mwhahaha | i had one item ETag: 3290ca17424cdcfe2a49209035d13f8b with an age of 171871 | 19:49 |
*** xek has joined #openstack-infra | 19:49 | |
mwhahaha | then i reran with the current ua and it's 172246 | 19:49 |
mwhahaha | let me try dropping the auth header | 19:49 |
fungi | okay, so at least no shenanigans related to ua string | 19:51 |
clarkb | mwhahaha: I need to pop out for a bit now, but you might try applying https://review.opendev.org/743835 to your test setup and see if that is happier | 19:53 |
mwhahaha | the auth header is required for us to fetch the blobs | 19:53 |
mwhahaha | removing it broke it | 19:53 |
mwhahaha | so there must be some other type of thing causing the difference | 19:54 |
clarkb | hrm I'll double check my tcpdumps but I'm fairly certain that they weren't there for the docker hub client | 19:54 |
clarkb | mwhahaha: they need to be there on the first pre redirect request | 19:55 |
clarkb | but ya I don't see them on the post redirect requests in my tcpdump capture | 19:56 |
clarkb | they are there for the pre redirect requests | 19:56 |
*** doggydogworld has quit IRC | 19:57 | |
mwhahaha | they might be handling the redirect separately whereas ours is probably getting handled under the covers by python requests | 20:07 |
clarkb | ya see my github.com/moby links above, they handle it explicitly | 20:08 |
*** eolivare has quit IRC | 20:23 | |
clarkb | mwhahaha: why do we need to remove the proxypass reverse line? | 20:35 |
mwhahaha | the </Location> line is bad | 20:35 |
clarkb | oh wait I see | 20:35 |
clarkb | ya the quote context rendered weird for me | 20:35 |
mwhahaha | i'm not getting any caching but i'm sure i've messed something up trying to hack a config out of this | 20:35 |
clarkb | I've updated the change and if it passes our testing I'll try to apply it manually to a server (mirror.iad.rax.opendev.org is what I've been reading logs on so far) | 20:37 |
mwhahaha | it definitely removes the Age | 20:37 |
mwhahaha | and it "works" to get content | 20:37 |
mwhahaha | my caching just isn't working | 20:37 |
clarkb | it's possible that removing the age happens after apache considers if it should be cached | 20:37 |
clarkb | that would be unfortunate if so | 20:37 |
clarkb | fungi: ^ do you know about order of ops there? | 20:37 |
*** dchen|away is now known as dchen | 20:38 | |
mwhahaha | i'm trying to cheat by running httpd in a container so it might just be me on the caching thing | 20:38 |
*** ociuhandu has joined #openstack-infra | 20:39 | |
*** ociuhandu has quit IRC | 20:43 | |
*** dchen is now known as dchen|away | 20:48 | |
mwhahaha | i feel like i'm missing a rewrite rule or something because the cloudflare stuff isn't being redirected to use the proxy | 20:51 |
clarkb | mwhahaha: the proxypassreverse line is important for that as it will rewrite the 307 location to be the proxy | 20:54 |
mwhahaha | yea it feels like the location bit might have broken it | 20:55 |
* mwhahaha checks | 20:55 | |
mwhahaha | cause it was "working" previously | 20:55 |
mwhahaha | it wasn't actually caching but i was seeing transit on the wire | 20:55 |
clarkb | "When used inside a <Location> section, the first argument is omitted and the local directory is obtained from the <Location>." is what ProxyPassReverse docs say so it should work | 20:56 |
* mwhahaha wonders if the docs are full of lies | 20:56 | |
clarkb | https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypassreverse | 20:56 |
clarkb | that could be :) | 20:57 |
mwhahaha | yea location broke the redirect | 20:58 |
mwhahaha | removing location worked | 20:59 |
clarkb | fungi: any idea what may be happening there? | 20:59 |
mwhahaha | i think the proxy pass bits need to be top level | 20:59 |
mwhahaha | because it would only do that on requests for /cloudflare/ | 20:59 |
mwhahaha | instead of matching the reverse for responses from docker.io | 20:59 |
* mwhahaha tests | 20:59 | |
clarkb | ah yup I bet that is it | 21:00 |
clarkb | if it wasn't a 307 it would be fine but the / rules create the Location that we need proxypassreverse to apply to | 21:00 |
clarkb | mwhahaha: the proxypass stays in the location with the other rules, but the proxypassreverse moves out and gets the prefix back | 21:01 |
clarkb | I think | 21:01 |
mwhahaha | yea | 21:01 |
mwhahaha | i just wrapped the age in a location but you could probably leave the CacheEnable in the location block as well | 21:02 |
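Roughly what that arrangement looks like as a sketch (hostnames and the /cloudflare/ prefix are illustrative): the per-prefix rules stay inside their Location blocks, while the ProxyPassReverse mapping for the CDN sits at the vhost level so it also rewrites the Location header of 307 responses returned for /v2 requests:

    CacheEnable disk "/v2"
    CacheEnable disk "/cloudflare"
    <Location "/v2">
        ProxyPass https://registry-1.docker.io/v2
        ProxyPassReverse https://registry-1.docker.io/v2
    </Location>
    <Location "/cloudflare/">
        ProxyPass https://production.cloudflare.docker.com/
        Header unset Age
    </Location>
    # Vhost-level reverse mapping: rewrites the CDN URL in the registry's
    # 307 Location header back to the /cloudflare/ prefix on this mirror.
    ProxyPassReverse "/cloudflare/" "https://production.cloudflare.docker.com/"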
clarkb | mwhahaha: updated the change if you want to double check what I did with it | 21:03 |
mwhahaha | k let me try that | 21:03 |
mwhahaha | yea that'll work | 21:04 |
* mwhahaha tries to fix his cache | 21:04 | |
fungi | sorry, was on a brief taco break, back now | 21:06 |
clarkb | fungi: I think we've got it sorted out. Now just waiting on updated test results before I use mirror.iad.rax as a real world check | 21:07 |
*** slaweq has quit IRC | 21:07 | |
fungi | yeah, proxypassreverse won't be the same location | 21:08 |
fungi | i only half know what i'm talking about though, apache mod_proxy is a bit of voodoo and the docs are sometimes opaque | 21:09 |
*** xek has quit IRC | 21:10 | |
mwhahaha | so cache enable i think has to be outside as well | 21:23 |
mwhahaha | maybe not tho | 21:23 |
clarkb | I've applied it to mirror.iad.rax and not seeing it cache (yet at least) | 21:34 |
clarkb | we haven't regressed the docker client though | 21:34 |
clarkb | redirects seem to be working and going through the proxy | 21:35 |
clarkb | I can try to unset authorization on the cloudflare location | 21:36 |
clarkb | adding RequestHeader unset Authorization to the cloudflare location doesn't seem to change anything? Though I reloaded and didn't restart | 21:41 |
clarkb | ya I'm wondering if it is something else? or maybe a number of things and we've addressed some of it? | 21:41 |
clarkb | mwhahaha: the other difference that stood out to me was the accept-encoding difference. The response isn't encoded with either accept encoding and maybe apache won't cache something that has an inappropriate encoding? | 21:42 |
mwhahaha | maybe | 21:42 |
mwhahaha | i just installed docker and a pull cached (or at least has cache responses) | 21:42 |
mwhahaha | podman didn't | 21:42 |
mwhahaha | which was weird | 21:43 |
mwhahaha | so something is going on | 21:43 |
clarkb | I'm setting mirror.iad back to normal now | 21:49 |
clarkb | should be back now. Disabled mod_headers too | 21:50 |
mwhahaha | I think it might be the Accept-Encoding: identity | 21:52 |
mwhahaha | let me test that | 21:52 |
*** artom has quit IRC | 22:04 | |
mwhahaha | so yes it's the authentication header | 22:07 |
mwhahaha | and dropping it is a pain | 22:08 |
* mwhahaha flips tables | 22:08 | |
clarkb | hrm I tried dropping it in the apache config and it didn't seem to help. maybe I did it wrong | 22:09 |
mwhahaha | yea i don't think you can drop it in apache | 22:09 |
mwhahaha | it likely does it in the wrong spot | 22:09 |
clarkb | that could be | 22:09 |
mwhahaha | i can add a bunch of hack code in | 22:10 |
mwhahaha | but i think i'll do that tomorrow | 22:10 |
mwhahaha | it means we have to check if the blob response is a redirect and if it is, don't just follow but drop the auth since we're likely switching domains | 22:10 |
mwhahaha | right now it works, it's just not cacheable | 22:10 |
mwhahaha | i hate this code so much | 22:10 |
mwhahaha | but podman's pull is no better | 22:11 |
clarkb | mwhahaha: does dropping authorization drop the Age header on the response? | 22:11 |
mwhahaha | let me see i have the age dropping config in place | 22:11 |
clarkb | because the response cache-control header does say public which should mean the authorization on its own is fine (but maybe not if it adds age) | 22:11 |
mwhahaha | let me take that out | 22:11 |
mwhahaha | we might be hitting: If the response has a status of 200 (OK), the response must also include at least one of the "Etag", "Last-Modified" or the "Expires" headers, or the max-age or s-maxage directive of the "Cache-Control:" header, unless the CacheIgnoreNoLastMod directive has been used to require otherwise. | 22:12 |
clarkb | we should have Etag, Last-modified, and cache-control with max-age | 22:12 |
clarkb | I don't think they send an expires header | 22:13 |
mwhahaha | i wasn't setting max-age in the Cache-Control i was sending | 22:13 |
mwhahaha | so i wonder if i needed to do that | 22:13 |
clarkb | I think apache says it takes the lesser of the two values | 22:13 |
* clarkb looks for where it says that | 22:13 | |
mwhahaha | ah so we couldn't fix that then | 22:14 |
clarkb | "At the same time, the origin server defined freshness lifetime can be overridden by a client when the client presents their own Cache-Control header within the request. In this case, the lowest freshness lifetime between request and response wins." | 22:14 |
clarkb | mwhahaha: another option may be to crank up the apache logging verbosity and see if it says why it is making cache decisions | 22:14 |
mwhahaha | they send age | 22:15 |
clarkb | https://httpd.apache.org/docs/2.4/mod/core.html#loglevel you can change that value on your test setup | 22:15 |
clarkb | (if I do it in prod we'll get so many logs all at once but it is a possibility too) | 22:15 |
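The per-module LogLevel syntax can scope the extra verbosity to mod_cache on a test vhost; a sketch, assuming "cache" is the right module specifier:

    # Keep the global level at warn but log every caching decision.
    LogLevel warn cache:debug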
mwhahaha | http://paste.openstack.org/show/796443/ | 22:16 |
mwhahaha | that's what happens with python, let me check docker again | 22:16 |
mwhahaha | docker gets age too | 22:20 |
mwhahaha | i think it's just the Authorization header | 22:20 |
mwhahaha | http://paste.openstack.org/show/796444/ | 22:22 |
fungi | https://httpd.apache.org/docs/2.4/mod/mod_cache.html under CacheStoreNoStore Directive: "Resources requiring authorization will never be cached." | 22:24 |
fungi | also includes the same caveat under CacheIgnoreCacheControl and CacheStorePrivate | 22:26 |
clarkb | fungi: what good is cache-control public then? | 22:26 |
clarkb | (that is set on these and I thought the intent there was to say yes this was requested with authorization but it is public data) | 22:26 |
fungi | i'm guessing apache doesn't care | 22:26 |
mwhahaha | can we add CacheIgnoreHeaders Authorization ? :D | 22:27 |
mwhahaha | nope that'd be too easy | 22:28 |
clarkb | https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.8 | 22:28 |
clarkb | mwhahaha: we can do RequestHeader unset Authorization but I tested that and it didn't work | 22:29 |
clarkb | fungi: ^ the rfc seems to say the cache-control: public means this is allowed | 22:29 |
clarkb | it would be super annoying if apache said we don't care :( | 22:29 |
mwhahaha | eh i'll just fix the code tomorrow | 22:29 |
clarkb | we may be able to disable CacheQuickHandler then do RequestHeader unset Authorization. I think the reason it is weird there is by default the cache is handled very early on | 22:35 |
clarkb | but disabling the quick handler would allow more processing to happen and potentially allow us to disable the extra bits | 22:35 |
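A sketch of that combination; it is untested here, so it only illustrates the approach being discussed: with the quick handler off, the cache save runs in the normal handler phase, after mod_headers has had a chance to strip the problematic headers.

    CacheQuickHandler off
    <Location "/cloudflare/">
        # Drop the token forwarded to the CDN prefix and the Age header the
        # CDN returns, before mod_cache evaluates whether to store.
        RequestHeader unset Authorization
        Header unset Age
    </Location>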
*** rcernin_ has joined #openstack-infra | 22:35 | |
mwhahaha | https://review.opendev.org/#/c/743629/ | 22:39 |
mwhahaha | if you want to watch logs for that | 22:39 |
mwhahaha | in theory it's what does it | 22:39 |
mwhahaha | it's terrible | 22:39 |
mwhahaha | meh, i broke something | 22:42 |
*** rcernin_ has quit IRC | 22:48 | |
*** rcernin has joined #openstack-infra | 22:48 | |
*** tkajinam has joined #openstack-infra | 22:53 | |
*** ociuhandu has joined #openstack-infra | 23:01 | |
*** ociuhandu has quit IRC | 23:06 | |
*** piotrowskim has quit IRC | 23:12 | |
*** rlandy has quit IRC | 23:16 | |
*** tosky has quit IRC | 23:21 | |
*** dchen|away is now known as dchen | 23:24 | |
*** ramishra has quit IRC | 23:52 |