*** tosky has quit IRC | 00:06 | |
*** hamalq has quit IRC | 00:18 | |
ianw | ok, i think everything is running smoothly on the reprepro front now | 00:27 |
ianw | i've got a change to convert it to releasing using ssh, but i think i'll leave that for a bit while it settles in | 00:27 |
ianw | #status log reprepro mirroring moved to mirror-update.opendev.org. logs now available at https://static.opendev.org/mirror/logs/ | 00:29 |
openstackstatus | ianw: finished logging | 00:29 |
*** johnsom has quit IRC | 02:00 | |
*** rpittau|afk has quit IRC | 02:00 | |
*** TheJulia has quit IRC | 02:00 | |
*** TheJulia has joined #opendev | 02:00 | |
*** rpittau|afk has joined #opendev | 02:01 | |
*** johnsom has joined #opendev | 02:01 | |
*** ykarel|away has joined #opendev | 03:02 | |
*** ykarel|away has quit IRC | 03:09 | |
openstackgerrit | Ian Wienand proposed opendev/system-config master: reprepro: fix apt-puppetlabs volume name https://review.opendev.org/760272 | 03:10 |
openstackgerrit | Merged opendev/system-config master: reprepro: fix apt-puppetlabs volume name https://review.opendev.org/760272 | 04:18 |
*** ykarel|away has joined #opendev | 04:45 | |
*** ykarel|away is now known as ykarel | 04:47 | |
openstackgerrit | Merged opendev/system-config master: Cleanup grafana.openstack.org https://review.opendev.org/739625 | 05:15 |
openstackgerrit | Merged openstack/diskimage-builder master: Allow processing 'focal' ubuntu release in lvm https://review.opendev.org/760156 | 05:30 |
*** marios has joined #opendev | 06:00 | |
openstackgerrit | Rico Lin proposed opendev/irc-meetings master: move heat meeting time to Wednesday 1400 UTC https://review.opendev.org/760286 | 06:25 |
frickler | ianw: do you also want to clean up the dns records for grafana02.openstack.org ? | 06:35 |
*** ysandeep|away is now known as ysandeep|ruck | 06:43 | |
*** ralonsoh has joined #opendev | 06:47 | |
*** DSpider has joined #opendev | 06:51 | |
*** jaicaa has quit IRC | 07:00 | |
*** jaicaa has joined #opendev | 07:01 | |
*** sshnaidm|afk is now known as sshnaidm|rover | 07:01 | |
sshnaidm|rover | hi, is there a process for how and what I can upload to tarballs.o.o? | 07:14 |
*** sboyron has joined #opendev | 07:18 | |
*** eolivare has joined #opendev | 07:25 | |
openstackgerrit | Lajos Katona proposed openstack/project-config master: Add publish-to-pypi template to tap-as-a-service https://review.opendev.org/760293 | 07:50 |
*** slaweq has joined #opendev | 08:04 | |
*** andrewbonney has joined #opendev | 08:09 | |
*** marios has quit IRC | 08:09 | |
*** ysandeep|ruck is now known as ysandeep|lunch | 08:17 | |
AJaeger | infra-root, config-core, I'm on vacation the next 10 days - will only be occasionally online and will shutdown IRC now. | 08:28 |
AJaeger | sshnaidm|rover: tarballs.o.o gets uploads as part of jobs; best explain what you're considering and we'll see whether it fits. It was established to host tarballs of releases and tarballs built after each merge. | 08:29 |
frickler | AJaeger: enjoy your vacation | 08:29 |
sshnaidm|rover | I'd like to upload container tarballs there, which will be downloaded in every job | 08:33 |
sshnaidm|rover | container tarballs will be uploaded once or twice a day max | 08:33 |
ianw | AJaeger: have fun and stay covid safe! | 08:34 |
ianw | sshnaidm|rover: it's more or less an append-only service. it doesn't seem like the right place to put ephemeral things. perhaps job artifacts would work? | 08:36 |
sshnaidm|rover | ianw, I see.. I was looking more for something that can serve multiple jobs, like an "images server", where images would be updated periodically, but not too often | 08:39 |
sshnaidm|rover | ianw, not after each job | 08:39 |
ianw | frickler: yes, we should clear out that record too. grafana.openstack.org though i think is a CNAME (and the cert covers it) | 08:40 |
*** rpittau|afk is now known as rpittau | 08:40 | |
ianw | sshnaidm|rover: well, are you familiar with the intermediate container our docker build jobs use? perhaps that is sufficient to serve the same container to multiple jobs? | 08:41 |
*** AJaeger has quit IRC | 08:42 | |
sshnaidm|rover | ianw, you mean passing registry to dependent jobs | 08:46 |
sshnaidm|rover | ? | 08:46 |
ianw | sshnaidm|rover: yes, basically ; see https://zuul-ci.org/docs/zuul-jobs/docker-image.html | 08:46 |
ianw | i have to head off, but i'm sure others can help. see the system-config repo for examples | 08:47 |
sshnaidm|rover | ianw, I need all jobs in various repos and branches to use these images, in all patches | 08:47 |
sshnaidm|rover | ianw, ack, thanks | 08:47 |
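For context on the pattern ianw points to above: an image is built once per buildset and dependent jobs pull it from the intermediate/buildset registry instead of Docker Hub. A rough sketch of the wiring follows, with placeholder job and image names; the parent job name is an assumption, and the consuming job also needs a parent or pre-run playbook that applies zuul-jobs' use-buildset-registry role.

```yaml
# Illustrative only; see the zuul-jobs docker-image docs linked above for the
# real job names and variables.
- job:
    name: myproject-build-image
    parent: opendev-build-docker-image   # assumed OpenDev wrapper around build-docker-image
    vars:
      docker_images:
        - context: .
          repository: myproject/myimage

- project:
    check:
      jobs:
        - myproject-build-image
        - myproject-functional:
            dependencies:
              - myproject-build-image    # consumer pulls the image from the buildset registry
```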
*** marios has joined #opendev | 08:50 | |
*** tosky has joined #opendev | 08:51 | |
*** ysandeep|lunch is now known as ysandeep|ruck | 08:57 | |
*** webmariner has quit IRC | 09:03 | |
*** webmariner has joined #opendev | 09:03 | |
*** webmariner has quit IRC | 09:08 | |
*** DSpider has quit IRC | 09:45 | |
frickler | slaweq: regarding https://bugs.launchpad.net/bugs/1902002 , I've seen this multiple times myself, but only for our limestone provider which has public v6 but only private v4 for instances and sometimes they even fail to get v4 completely | 09:48 |
openstack | Launchpad bug 1902002 in devstack "Fail to get default route device in CI jobs" [Undecided,New] | 09:48 |
frickler | maybe it is time to teach devstack to give v6 connectivity to instances instead of v4? | 09:49 |
frickler | hmm, from logstash it also seems to happen for other providers, but less frequently | 09:52 |
*** DSpider has joined #opendev | 09:54 | |
frickler | another option would be to make sure that tests all work without external connectivity and then drop this part from devstack | 09:54 |
frickler | otherwise we could check for this situation in the pre phase and fail early, allowing for a retry and hopefully finding a better node | 09:55 |
slaweq | frickler: yes, my first thought was that it may be related to one provider, but from logstash it seems like it's not | 10:00 |
slaweq | and it didn't happen (according to logstash) before 23.10 | 10:00 |
slaweq | but that could be just missing older logs in logstash, idk | 10:01 |
slaweq | also it seems from devstack that this check of the default gw is only needed for jobs where the linuxbridge agent is used, maybe we can make it conditional in devstack? | 10:01 |
slaweq | that would workaround the issue for most of the jobs I think | 10:01 |
*** slaweq has quit IRC | 10:05 | |
*** hashar has joined #opendev | 10:07 | |
*** slaweq has joined #opendev | 10:12 | |
*** ykarel has quit IRC | 10:17 | |
*** ykarel has joined #opendev | 10:19 | |
frickler | slaweq: that's a good idea for a first iteration workaround with low regression potential, I'm creating a patch for that now | 10:38 |
slaweq | frickler: ok, let me check that and propose patch | 10:38 |
slaweq | thx for looking at this issue | 10:38 |
*** ykarel_ has joined #opendev | 10:52 | |
*** ykarel has quit IRC | 10:55 | |
*** ykarel_ is now known as ykarel | 11:06 | |
*** lpetrut has joined #opendev | 11:59 | |
openstackgerrit | Balazs Gibizer proposed opendev/elastic-recheck master: Add query for bug 1901739 https://review.opendev.org/759967 | 12:33 |
openstack | bug 1901739 in OpenStack Compute (nova) " libvirt.libvirtError: internal error: missing block job data for disk 'vda'" [High,Confirmed] https://launchpad.net/bugs/1901739 | 12:33 |
fungi | sshnaidm|rover: would you overwrite the images, or would they accumulate? we don't really have a pruning solution for the files on tarballs.o.o. also it's served from afs which we've tuned to handle mirrors of things like python wheels and distribution packages, but putting very large files in there could represent challenges. the tarballs volume specifically has around 170gb of files in it currently and a | 13:16 |
fungi | 400gb quota, to give you some idea of the capacity and content there right now (we can increase the quota on it if needed, but it's not unbounded) | 13:16 |
sshnaidm|rover | fungi, I think we'll need like 40-50 GB and it's better to prune old files when uploading new ones, we definitely don't want them to accumulate there | 13:28 |
openstackgerrit | Merged openstack/project-config master: Add publish-to-pypi template to tap-as-a-service https://review.opendev.org/760293 | 13:28 |
fungi | sshnaidm|rover: overwriting the old files each time you upload new ones wouldn't suffice, i guess? | 13:30 |
fungi | it's worth noting that afs volume updates are effectively atomic at least, so you shouldn't have files getting served in a half-written state if a download happens during an overwrite | 13:30 |
sshnaidm|rover | fungi, it will have different names, so just removing them would be fine | 13:31 |
sshnaidm|rover | fungi, if I did this, I'd upload files to a new folder, change symlink and delete old folder | 13:31 |
sshnaidm|rover | kind of | 13:31 |
fungi | well, it's the "removing" that's the problem. we don't have anything which removes files from that site, though we do have common job patterns where the file being uploaded overwrites a previous version (notably, branch tip tarballs) | 13:31 |
sshnaidm|rover | fungi, can we delete just a whole folder? | 13:32 |
fungi | the uploads to it are indirect... the executor pulls files from the build node with rsync, and then uses rsync to copy those files into a path in afs. the rsync used in the second phase would probably need to include the --delete flag | 13:34 |
fungi | so that it removes any unrecognized files in the destination path | 13:34 |
sshnaidm|rover | I see.. not sure we can use it then | 13:35 |
fungi | but file deletion isn't a distinct step, it would in theory need to be done with some feature of rsync during the copy phase | 13:35 |
fungi | this is why overwrites are easier to deal with | 13:36 |
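To illustrate the point fungi is making, the second-phase copy on the executor would roughly need something like the following; the AFS path and the task itself are made up for the example, and the real upload roles in zuul-jobs are more involved.

```yaml
# Hypothetical sketch of an executor-side copy into AFS that also prunes
# files which no longer exist in the source tree; --delete is the flag
# doing the pruning.
- name: Publish artifacts into AFS, removing files absent from the source
  command: >
    rsync -a --delete
    {{ zuul.executor.work_root }}/artifacts/
    /afs/.openstack.org/project/tarballs/example/
```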
sshnaidm|rover | fungi, and it's possible from jobs only, right? | 13:36 |
fungi | for docs publication we have particularly complex rsync invocations which look for and ignore subtrees with special root marker files in them to avoid removing content handled by different builds | 13:36 |
sshnaidm|rover | do you have example for those? ^ | 13:37 |
fungi | sshnaidm|rover: yeah, we don't really have a mechanism/workflow in place to grant kerberos credentials except for our automation and occasional systems administration work | 13:37 |
fungi | i'm looking for where the docs upload role is now, just a sec | 13:38 |
sshnaidm|rover | fungi, so if I need to do this periodically, I can only use periodic jobs, and it will be no more often than once a day? | 13:38 |
fungi | yeah, periodic jobs could be used, in theory | 13:39 |
sshnaidm|rover | maybe I'm going in the wrong direction trying to use tarballs.o.o as a container registry.. | 13:39 |
fungi | release management ptg session is slowing down my browser, this will take time | 13:39 |
sshnaidm|rover | fungi, sure, np | 13:40 |
sshnaidm|rover | don't you have a caching registry mirror? If not, I can help to set it up | 13:40 |
fungi | we do have caching proxies for dockerhub and maybe quay, is that what you're asking about? | 13:41 |
sshnaidm|rover | fungi, no, I mean caching container images themselves: https://docs.docker.com/registry/recipes/mirror/ | 13:41 |
fungi | sshnaidm|rover: other than the intermediate registry we're running, no we don't host any container registries at the moment | 13:46 |
sshnaidm|rover | fungi, is there any interest in having it? I can work on an ansible role to set it up and configure it | 13:46 |
fungi | it's worth discussing with the rest of the team. we don't have any more ptg sessions scheduled for opendev but maybe start a thread on the service-discuss ml or something | 13:47 |
fungi | sshnaidm|rover: so the role which does the fancy footwork for afs docs publication with root marker files is part of zuul's stdlib: https://zuul-ci.org/docs/zuul-jobs/afs-roles.html#role-upload-afs-roots | 13:48 |
fungi | and the jobs we use that in are documented here: https://docs.opendev.org/opendev/base-jobs/latest/documentation-jobs.html | 13:48 |
sshnaidm|rover | fungi, cool, thanks | 13:48 |
*** mlavalle has joined #opendev | 14:17 | |
openstackgerrit | Jeremy Stanley proposed openstack/project-config master: Replace old Victoria cycle signing key with Wallaby https://review.opendev.org/760364 | 14:23 |
*** hamalq has joined #opendev | 14:41 | |
*** ykarel has quit IRC | 14:54 | |
*** lpetrut has quit IRC | 14:56 | |
*** tkajinam has quit IRC | 15:05 | |
*** ysandeep|ruck is now known as ysandeep|away | 15:12 | |
openstackgerrit | Merged openstack/project-config master: Create telemetry group and include https://review.opendev.org/760190 | 15:20 |
mordred | fungi, sshnaidm|rover : it might become a more important thing to consider with the new dockerhub rate limiting | 15:24 |
mordred | (running an actual registry that is) | 15:24 |
clarkb | did they ever publish their promised "what CI systems should do" doc? | 15:25 |
clarkb | because they promised a thing like that and I tried to periodically look for it then got distracted | 15:25 |
fungi | i haven't seen one mentioned | 15:25 |
mordred | I haven't seen one mentioned either | 15:25 |
mordred | but 100 container image pulls every six hours is ... yeah | 15:26 |
fungi | i assume that's 100 pulls of a given image from any anonymous source? or is that 100 pulls of any container image from the same ip address? or something else entirely? | 15:28 |
clarkb | fungi: I don't think they've said specifically. However given the earlier rate limiting I expect its 100 pulls of any container image from the same ip | 15:30 |
clarkb | also note that they are no longer rate limiting by blob but now by manifest | 15:30 |
clarkb | which is what completely breaks our caching system (and it also fails to rate limit the thing that is actually expensive. I think this is what irritates me most about their changes: they have explicitly broken what we did to mitigate our impact and ignored the fact that we were already good citizens) | 15:31 |
clarkb | according to their blog post rate limiting the large blob requests was confusing to users so instead we'll do something that makes zero sense :/ | 15:31 |
clarkb | "However, CircleCI has partnered with Docker to ensure that our users can continue to access Docker Hub without rate limits." | 15:32 |
frickler | there was a link for CI owners to register and fill out some questions in order to get more information | 15:34 |
fungi | maybe we just need to write some high-profile blog posts/news articles about how dockerhub is useless for free and open source software | 15:35 |
* clarkb remembers where we last got to re caching this. We need a smarter cache than apache, because apache won't cache the manifest data since it is fetched with authentication (the headers are set such that a smarter proxy could cache them, though) | 15:44 |
*** olaph has quit IRC | 15:44 | |
clarkb | the other thought was to stop proxy caching entirely | 15:45 |
clarkb | because that would diversify our IP addresses | 15:45 |
clarkb | there are also "open source project plans" which don't get published limits | 15:48 |
clarkb | https://www.docker.com/blog/best-practices-for-using-docker-hub-for-ci-cd/ I think their advice may be "upgrade your account" | 15:54 |
fungi | though that would also require auth right? | 15:56 |
clarkb | yes, which we technically already do (just in most cases as the boring anonymous user auth) | 15:56 |
clarkb | I guess the issue then becomes securing the tokens | 15:58 |
*** hashar has quit IRC | 15:59 | |
*** hashar has joined #opendev | 15:59 | |
*** hashar_ has joined #opendev | 16:04 | |
*** hashar has quit IRC | 16:04 | |
sshnaidm|rover | mordred, yeah, that's exactly the reason I raised this.. | 16:24 |
clarkb | I don't think we should use tarballs as a docker registry standin | 16:24 |
clarkb | we should address the problem more directly (not sure what the best solution is though) | 16:24 |
*** hashar_ is now known as hashar | 16:24 | |
*** hashar has quit IRC | 16:25 | |
*** hashar has joined #opendev | 16:25 | |
sshnaidm|rover | tbh I'm not sure if mirroring dockerhub will be fine either, because it's enough for someone to request 100 new containers that are not in the cache - and the cache machine is blocked from pulling from dockerhub for 6 hours | 16:26 |
clarkb | s/mirror/proxy cache/ | 16:27 |
clarkb | if we mirrored it we could presumably use an account without rate limits and download all the things if we had sufficient disk | 16:28 |
clarkb | (note we don't have sufficient disk) | 16:28 |
sshnaidm|rover | I mean a real mirror: https://docs.docker.com/registry/recipes/mirror/ | 16:28 |
sshnaidm|rover | we can configure pruning as frequently as we want | 16:28 |
clarkb | well a real mirror shouldn't have to worry about the rate limits? | 16:28 |
sshnaidm|rover | clarkb, only if it doesn't have the container in it and needs to pull it from dockerhub | 16:28 |
clarkb | your example there is for a pull through cache | 16:28 |
clarkb | which is different than a mirror (and very similar to what we already run) | 16:29 |
sshnaidm|rover | ah, ok | 16:29 |
sshnaidm|rover | I keep calling it mirror :) | 16:29 |
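For reference, the recipe sshnaidm|rover links is just the Docker "distribution" registry run with a proxy section pointing at Docker Hub. A minimal sketch follows, with illustrative paths and optional upstream credentials (the idea clarkb raises later about authenticating the proxy with an account that is not rate limited):

```yaml
# Minimal pull-through cache config for the Docker "distribution" registry
# (docs.docker.com/registry/recipes/mirror/). Values here are illustrative.
version: 0.1
http:
  addr: "0.0.0.0:5000"
storage:
  filesystem:
    rootdirectory: /var/lib/registry   # local disk; size is bounded only by pruning
proxy:
  remoteurl: https://registry-1.docker.io
  # username: <hub-account>    # optional: authenticate upstream pulls
  # password: <hub-password>   # e.g. with an account that is not rate limited
```

Clients would then be pointed at such a cache via the docker daemon's registry-mirrors setting.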
sshnaidm|rover | clarkb, do you run this pull-through-cache? | 16:29 |
clarkb | opendev runs an apache based pull through cache in every test node region | 16:30 |
clarkb | the issue is that they are changing how they apply rate limits to essentially break that setup | 16:30 |
sshnaidm|rover | yeah :/ | 16:30 |
clarkb | shifting from blob rate limits which apache can cache to manifest rate limits which I can't figure out how to get apache to cache | 16:31 |
clarkb | also that doc doesn't seem to have anything on pruning? | 16:31 |
clarkb | it just says it will somehow magically do it itself | 16:31 |
clarkb | (can we configure filesystem limits or frequency of pruning?) | 16:31 |
sshnaidm|rover | clarkb, pruning is available in config for registry container | 16:39 |
sshnaidm|rover | storage: | 16:40 |
sshnaidm|rover |   maintenance: | 16:40 |
sshnaidm|rover |     uploadpurging: | 16:40 |
sshnaidm|rover |       enabled: true | 16:40 |
sshnaidm|rover |       age: 168h | 16:40 |
sshnaidm|rover |       interval: 24h | 16:40 |
*** sboyron has quit IRC | 16:40 | |
sshnaidm|rover | https://docs.docker.com/registry/configuration/#list-of-configuration-options | 16:41 |
sshnaidm|rover | but it's when you run registry as a container | 16:41 |
*** marios is now known as marios|out | 16:41 | |
clarkb | sshnaidm|rover: that is for purging uploads to the registry, which you wouldn't do in this case | 16:42 |
clarkb | (though maybe the docs are just overly specific and it does apply to the cache scenario too?) | 16:42 |
clarkb | https://docs.docker.com/registry/garbage-collection/#more-details-about-garbage-collection | 16:43 |
clarkb | sshnaidm|rover: ^ basically those docs continue to say you can't garbage collect a live registry, it has to be read only (which I think means no new cached content but could serve existing content) or it has to be off | 16:43 |
*** slaweq has quit IRC | 16:44 | |
clarkb | it is possible some setup could be devised from this, but it needs someone to actually spin one up and test it and figure out these details that the docs make ambiguous or confusing | 16:44 |
frickler | I did fill out https://www.docker.com/community/open-source/application in the name of opendev now, waiting to see how they respond | 16:45 |
clarkb | frickler: thank you | 16:45 |
sshnaidm|rover | I don't see where it says it can't garbage collect on a live registry | 16:50 |
sshnaidm|rover | from this page | 16:50 |
clarkb | sshnaidm|rover: "You should ensure that the registry is in read-only mode or not running at all. If you were to upload an image while garbage collection is running, there is the risk that the image’s layers are mistakenly deleted leading to a corrupted image." | 16:51 |
clarkb | again it's not clear how that interacts with pull-through usage | 16:51 |
clarkb | (the docs are really lacking unfortunately) | 16:51 |
sshnaidm|rover | sigh.. | 16:51 |
clarkb | if someone is able to sort out how pruning functions so that we can evaluate it properly as a pull through cache that would be excellent | 16:53 |
clarkb | but I don't think the docs give us a clear enough picture and it requires actual testing | 16:53 |
*** mwhahaha has quit IRC | 16:55 | |
*** mwhahaha has joined #opendev | 16:55 | |
*** rpittau is now known as rpittau|afk | 16:57 | |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Stop using the in region docker mirrors for system-config jobs https://review.opendev.org/760417 | 17:04 |
clarkb | frickler: sshnaidm|rover fungi ^ something like that may also serve as a reasonable temporary mitigation | 17:05 |
openstackgerrit | Clark Boylan proposed openstack/project-config master: Don't use the local docker caches https://review.opendev.org/760419 | 17:09 |
clarkb | corvus: mordred ^ you may also be interested in those | 17:09 |
clarkb | I think it would be good to hear back from docker on frickler's request. Maybe they'll say we can give them a list of IPs to rate limit less aggressively or something but we've now got a couple changes we can land next week if we think it will help | 17:11 |
*** hamalq has quit IRC | 17:13 | |
*** hamalq has joined #opendev | 17:14 | |
*** ralonsoh has quit IRC | 17:15 | |
*** ykarel has joined #opendev | 17:35 | |
*** ykarel has quit IRC | 17:37 | |
*** ykarel has joined #opendev | 17:38 | |
corvus | clarkb: i originally designed zuul-registry as a smart pull-through cache, but later determined i didn't need it and removed that functionality. if we feel like we want a pull-through cache with behavior that we know and control, we can resurrect that. | 17:40 |
corvus | (to be clear, simply using zuul-registry as a pull-through cache to replace the apache mirrors; not trying to incorporate any of the zuul shadowing functionality) | 17:41 |
clarkb | ya that may be useful using a filesystem driver | 17:43 |
clarkb | especially if we can incorporate auto pruning against some size limit | 17:43 |
fungi | though it stores the images in swift right? so having the z-r frontend in every provider may not be any more efficient than having a single one close to the swift backend? | 17:44 |
clarkb | it has multiple drivers | 17:44 |
clarkb | including local filesystem | 17:45 |
fungi | ahh, okay, so we have the option of local storage and multiple frontends then, got it | 17:45 |
*** eolivare has quit IRC | 17:47 | |
fungi | though as pointed out previously, if a job suddenly requests 100 additional images we're probably also stuck until the next day | 17:52 |
clarkb | ya I think the other step in proxying is maybe using credentials that aren't rate limited between the proxy and upstream | 17:53 |
*** ykarel has quit IRC | 17:54 | |
*** webmariner has joined #opendev | 18:03 | |
*** hashar has quit IRC | 18:04 | |
*** marios|out has quit IRC | 18:10 | |
*** andrewbonney has quit IRC | 18:13 | |
*** Green_Bird has joined #opendev | 18:21 | |
*** tosky has quit IRC | 18:29 | |
TheJulia | o/ | 21:35 |
TheJulia | Out of curiosity, more gerrit crawling evil? proxy 502 + slowness/hanging | 21:36 |
johnsom | Yeah, it has been slow for me too | 21:37 |
fungi | stuffing my face but should be able to take a look in a few | 21:38 |
TheJulia | thanks fungi | 21:39 |
* johnsom thinks TheJulia has earned one of the Octavia team's canary stickers for catching issues about the same time we do. | 21:42 | |
TheJulia | \o/ | 21:49 |
*** tosky has joined #opendev | 21:57 | |
ianw | there is something walking all changes, but it looks different to the prior requests | 22:03 |
ianw | interestingly this time it comes from an engineering school in quebec | 22:04 |
TheJulia | sounds like it might be time to dig up a network operations center phone number... | 22:19 |
TheJulia | Because this is sounding remarkably suspicious. | 22:19 |
fungi | well, i've suspected all along this could be related to someone's compsci research project | 22:20 |
fungi | but finding a way to reach them and ask them to go slower was challenging while it was just random ip addresses in google cloud | 22:20 |
johnsom | Someone is training the ML on our code, our jobs are doomed. grin | 22:21 |
TheJulia | or there is a compromised machine, but the simplest possibility is most likely | 22:21 |
fungi | if a machine wants my job, it can have it ;) | 22:21 |
TheJulia | johnsom: Or maybe not and the AI goes "NOOOOO!!!!!" and curls up in a ball and cries having realized the nature of the universe | 22:22 |
johnsom | Do you still have HAProxy in front of it? If so you could consider rate limiting based on source IP, etc. | 22:22 |
johnsom | ^^^ Probably a more likely situation. | 22:22 |
fungi | we haven't had an haproxy in front of it at all (that's the gitea cluster), but yes we've been considering that as a possible avenue | 22:23 |
fungi | an apache module would probably be easier though since there is an apache layer proxying to the gerrit service | 22:23 |
TheJulia | is it new tcp connections or is over a kept alive connection? | 22:23 |
johnsom | Ok, I can help give pointers on how to set that up if/when you decide you want to do it. | 22:23 |
*** slaweq has joined #opendev | 22:27 | |
*** slaweq has quit IRC | 22:42 | |
*** stevebaker has joined #opendev | 22:42 | |
*** rchurch has quit IRC | 22:43 | |
*** rchurch has joined #opendev | 22:45 | |
*** hamalq has quit IRC | 22:57 | |
fungi | clarkb: oh, also there's https://review.opendev.org/760220 up to readd inap-mtl01 to nodepool if you have time | 23:12 |
clarkb | cool lets try it | 23:12 |
*** Green_Bird has quit IRC | 23:17 | |
sean-k-mooney | by the way is https://docs.opendev.org/opendev/system-config/latest/contribute-cloud.html still accurate in terms of 1st party ci requirements | 23:18 |
openstackgerrit | Merged openstack/project-config master: Revert "Temporarily stop booting nodes in inap-mtl01" https://review.opendev.org/760220 | 23:20 |
clarkb | sean-k-mooney: yes I think so. We've been a bit more flexible on the total number of test node requirement but otherwise ya | 23:20 |
sean-k-mooney | ya that is the one i was debating about. i'm considering expanding my home cloud in the next few months | 23:21 |
sean-k-mooney | i'm debating if i will invest enough to possibly be able to meet those requirements | 23:21 |
*** slaweq has joined #opendev | 23:22 | |
sean-k-mooney | i currently have 4 ironic nodes and one vm server with 256GB of ram and 48 cores. i'm thinking of adding a few more servers for vms in the next few months after i see how my third-party ci is going. | 23:25 |
*** mlavalle has quit IRC | 23:30 | |
sean-k-mooney | it's funny that one of these https://www.ebay.ie/itm/4x-Node-Dell-PowerEdge-C6220-2U-Server-8x-E5-2670-OCTA-CORE-1024GB-RAM-Rails/402497036993?hash=item5db6b162c1:g:DfoAAOSwS5tfibmo can now actually provide that | 23:31 |
*** slaweq has quit IRC | 23:55 |