cardoe | clarkb: with zero clue, could we do something like https://github.com/spegel-org/spegel for a nodeset? | 00:01 |
clarkb | we already have what we call a "buildset registry" which is a stateless registry that lives for the lifetime of the jobs that run in a buildset | 00:03 |
clarkb | the idea there is you build a bunch of images and then use them in various other jobs and the buildset registry sorts that out | 00:03 |
clarkb | but also running an entire kubernetes for just a simple http fronted data store kinda illustrates the exact problem with this space | 00:04 |
clarkb | the overhead is massive for what should be simple | 00:04 |
clarkb | it is reassuring that that system does garbage collect images. That's like the main issue with every other registry out there | 00:07 |
clarkb | I don't want infinite growth, I want a cache that evicts cold items and keeps warm items around | 00:08 |
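A minimal sketch of the cache behavior clarkb describes: an LRU store that keeps recently touched image blobs warm and evicts cold ones once a size budget is exceeded. The class name and byte budget are illustrative, not any particular registry's implementation.

```python
from collections import OrderedDict

class BlobCache:
    """LRU cache: touching a blob keeps it warm; the coldest blobs
    are evicted once total size exceeds the configured budget."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self._items = OrderedDict()  # digest -> blob bytes

    def get(self, digest):
        blob = self._items.get(digest)
        if blob is not None:
            self._items.move_to_end(digest)  # mark warm
        return blob

    def put(self, digest, blob):
        if digest in self._items:
            self.total -= len(self._items.pop(digest))
        self._items[digest] = blob
        self.total += len(blob)
        while self.total > self.max_bytes:
            _, cold = self._items.popitem(last=False)  # evict coldest
            self.total -= len(cold)
```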
clarkb | but I don't think we'd rely on anything that requires k8s or helm | 00:08 |
clarkb | currently we would need to deploy ~10 such registries and I'm not really looking forward to running 10 k8s installations just to have a container registry cache. | 00:12 |
clarkb | looks like it would only work for pods within the same cluster too? | 00:16 |
clarkb | so we can't really offer that as a CI level service | 00:16 |
clarkb | it seems to rely on some containerd tricks to make image fetches go to the correct place | 00:16 |
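For context, the usual containerd mechanism for redirecting image fetches is a per-registry hosts.toml mirror entry under /etc/containerd/certs.d/. A sketch of generating one is below; the mirror address is a hypothetical local endpoint, not anything from the log.

```python
from pathlib import Path

# containerd consults /etc/containerd/certs.d/<registry>/hosts.toml to
# decide where pulls for that registry name are routed; a host entry
# pointing at a mirror redirects fetches while keeping the upstream name.
MIRROR = "http://127.0.0.1:5000"  # hypothetical local mirror endpoint

def write_mirror_config(registry="docker.io",
                        upstream="https://registry-1.docker.io",
                        root="/etc/containerd/certs.d"):
    conf = Path(root) / registry / "hosts.toml"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text(
        f'server = "{upstream}"\n'
        f'\n'
        f'[host."{MIRROR}"]\n'
        f'  capabilities = ["pull", "resolve"]\n'
    )
```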
clarkb | but even then to make that work you'd probably need an account to authenticate to the central system, since I'm pretty sure having the central fetch tool was contributing to the rate limit hit rate | 00:19 |
clarkb | (funnel everything through one IP and you hit the per IP limits quick) | 00:19 |
clarkb | you might be able to mitigate that with a special system that refreshes cached images on a rate-limit-aware schedule (which the apache proxy cache did not do) | 00:20 |
clarkb | basically spread out image refreshes and accept some staleness to stay under the limits | 00:20 |
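A sketch of what clarkb describes in words: spread refreshes out with jitter, refresh the stalest image first, and accept bounded staleness so a single IP stays under its pull budget. The budget, the staleness bound, and the pull_image callable are all assumptions for illustration.

```python
import random
import time

PULL_BUDGET_PER_HOUR = 100   # assumed upstream per-IP pull limit
MAX_STALENESS = 6 * 3600     # accept images up to six hours old

def refresh_loop(cached_images, pull_image):
    """cached_images: objects with a refreshed_at timestamp attribute;
    pull_image: callable doing one upstream fetch (both hypothetical)."""
    interval = 3600 / PULL_BUDGET_PER_HOUR  # mean seconds between pulls
    while True:
        now = time.time()
        stalest = min(cached_images, key=lambda i: i.refreshed_at)
        if now - stalest.refreshed_at > MAX_STALENESS:
            pull_image(stalest)        # one upstream fetch
            stalest.refreshed_at = now
        # jitter spreads refreshes out so bursts never exceed the budget
        time.sleep(interval * random.uniform(0.5, 1.5))
```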
spotz[m] | fungi: dang! | 00:35 |
fungi | you took the words right out of my keyboard | 00:36 |
spotz[m] | I know we're celebrating 15 years this year, 10 seems like yesterday | 00:37 |
ralonsoh | hi folks, is it still possible to get a pycharm professional license if you are working on OpenStack? | 09:10 |
*** ralonsoh_ is now known as ralonsoh | 10:07 |
fungi | ralonsoh: most recent mention i find of pycharm licenses on openstack-discuss is from last year: https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/BY55N7MUPWMPHVGWLDCHCYXVWNVY6BSN/ | 13:11 |
fungi | you might try reaching out to dmellado and see if he knows anything | 13:12 |
*** iurygregory_ is now known as iurygregory | 13:31 | |
ralonsoh | fungi, thanks! | 13:44 |
*** iurygregory_ is now known as iurygregory | 14:42 | |
tkajinam | gmann, I noticed that oslo_policy options no longer appear in the option listing in startup debug logs since we merged https://review.opendev.org/c/openstack/heat/+/934567 . | 16:32 |
tkajinam | gmann, I suspect that we should keep the set_defaults call to register oslo.policy options. testing it in https://review.opendev.org/c/openstack/heat/+/936385 ... | 16:33 |
gmann | tkajinam: we do not need to register oslo policy opts in main code for tests. it should be in tests only. policy initialization does register all the opts. I will fix that. | 17:58 |
gmann | tkajinam: but can you point me to the failure link so that i can see if it's unit tests or functional or both that need oslo.policy opt registration | 17:58 |
gmann | here oslo.policy registers all its config options when the enforcer is initialized https://github.com/openstack/oslo.policy/blob/b0473adeae4bb31d0670452f9ba5ad32f2ca82ea/oslo_policy/policy.py#L532 | 18:05 |
gmann | we need to register them explicitly only for unit/functional tests | 18:06 |
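A minimal sketch of the pattern gmann describes: in service code, constructing the Enforcer registers the oslo.policy config options as a side effect of initialization, while unit/functional tests that never build an enforcer must register them explicitly (the set_defaults call the heat change discusses). The CONF usage is illustrative.

```python
from oslo_config import cfg
from oslo_policy import opts, policy

CONF = cfg.CONF

# Service path: constructing the Enforcer registers the oslo.policy
# config options on CONF as a side effect of initialization.
enforcer = policy.Enforcer(CONF)

# Test path: tests that never construct an Enforcer must register
# the options themselves, e.g. in a test fixture or setUp().
opts.set_defaults(CONF)
```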
sean-k-mooney | was there a tc call this week? | 18:13 |
gmann | sean-k-mooney: it was an IRC meeting | 18:14 |
sean-k-mooney | ah i was just checking the youtube channel | 18:14 |
sean-k-mooney | so i was wondering where we are with watcher | 18:14 |
gmann | https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-26-18.01.log.html | 18:14 |
sean-k-mooney | can we proceed with the launchpad and gerrit group membership changes? | 18:14 |
gmann | sean-k-mooney: so we discussed the watcher things and from the TC side it seems all set. it's now on the watcher team to onboard new members in gerrit core or LP | 18:15 |
gmann | sean-k-mooney: yeah, you can discuss it with the current watcher team and proceed on your proposal sent on the ML | 18:15 |
sean-k-mooney | ok but since chenker has not been active or responsive and i have not been added we are still kind of stuck | 18:16 |
gmann | but I did not see a response from chenker on the ML or IRC, did you hear anything? | 18:16 |
sean-k-mooney | no, that's partly why i was proposing adding myself to do the admin work of expanding the team | 18:16 |
gmann | sean-k-mooney: yeah, I will suggest we propose the conclusion in the Watcher meeting and based on agreement there it can be added. for example dansmith or slaweq are still in gerrit core so they can help with adding people if chenker is not active now | 18:17 |
gmann | sean-k-mooney: similarly I can help adding in LP as openstack-admin based on agreement from meeting or ML | 18:17 |
sean-k-mooney | ack cool, i guess we can discuss it in the watcher meeting tomorrow so | 18:18 |
sean-k-mooney | gmann: i think you are listed as TC liaison yes? | 18:18 |
gmann | ++ for transparency, that is better as you have waited for chenker to respond or give feedback and this week is the time to take the next step | 18:18 |
sean-k-mooney | so i'm happy if you want to drive that from the tc side | 18:18 |
gmann | sean-k-mooney: yes | 18:18 |
sean-k-mooney | one of the action items we had was also to discuss the launchpad team creation | 18:19 |
gmann | sean-k-mooney: I will not be able to attend the watcher meeting as it is 4 AM for me but will read the log or can discuss with you later on | 18:19 |
sean-k-mooney | i was suggesting 3 teams (watcher-driver, watcher-bug and watcher-coresec) | 18:19 |
sean-k-mooney | basically i would like to have an open bug team for normal bugs and a watcher-coresec team for vmt | 18:20 |
sean-k-mooney | i.e. security bug triage | 18:20 |
gmann | I think there is no issue with this proposal either, it is ok if the project team wants to manage LP like this. once it is a final decision then one of the openstack-admins can help with the initial setup | 18:20 |
sean-k-mooney | ok i'll add that to the etherpad for tomorrow | 18:21 |
gmann | thanks | 18:21 |
tkajinam | gmann, ok that explains why we see oslo.policy options in designate logs... | 18:33 |
tkajinam | gmann, replied to your comment in https://review.opendev.org/c/openstack/heat/+/936385 but I agree the description is not quite correct and the change is not "fixing" behavior but may affect only logging. | 18:35 |
gmann | tkajinam: it would be helpful to see where it fails, can you point me to the heat test failure? | 18:36 |
sean-k-mooney | general question regarding the "VMware Migration Working Group" that produced https://www.openstack.org/vmware-migration-to-openstack-white-paper they have infrequent meetings but i don't see it listed in https://meetings.opendev.org/ and there don't seem to be meeting minutes outside of the etherpad. | 18:43 |
sean-k-mooney | how is that group functioning currently? is it an official team? is there a fixed/regular cadence for when it meets | 18:43 |
gouthamr | no frequent meeting yet, sean-k-mooney ... they've been meeting ad hoc.. aprice said they'd set one up soon and share details | 18:43 |
gmann | yeah aprice ^^ can tell more on this. | 18:44 |
sean-k-mooney | ok, going by the etherpad there was one meeting since the ptg, on the 11th, but i don't recall any public announcement of that beforehand | 18:44 |
sean-k-mooney | there was no communication on the mailing list that i saw so it's quite opaque currently | 18:45 |
sean-k-mooney | ok finishing for today but i added the agenda items here https://etherpad.opendev.org/p/openstack-watcher-irc-meeting#L12 | 20:40 |