dpawlik0 | thanks fungi and clarkb | 09:03 |
gnuoy | Hi, for pre-commit testing of the sunbeam projects we would ideally like to use 3 x 16GB RAM nodes. I know this is quite a heavy requirement, so is this feasible with the OpenDev infrastructure? | 09:15 |
fungi | gnuoy: depends on how many jobs you want to run like that. we have some 32gb node capacity in one region of vexxhost, but your testing would be dead in the water if that region ends up offline for some reason | 11:16 |
fungi | we standardize on 8gb ram nodes so we can more efficiently pack our available quota on most of our providers, so if you can find a way to get your tests to run on a group of 8gb nodes you'll have more resiliency | 11:17 |
frickler | but also maybe some day we can revisit that decision, which is now how old, 10 years? | 11:19 |
frickler | I don't think running a 6 node job is really a good solution | 11:20 |
stephenfin | fungi: What does one need to do to get old feature branches deleted? Namely the 'feature/r1' branch for openstacksdk and 'feature/osc4' branch for python-openstackclient (CC: gtema) | 11:21 |
fungi | frickler: right, the main concern is that every time in the past we increased default memory capacity on nodes, many projects suddenly started requiring that much ram in order to run their tests because there was no longer any check that someone wasn't adding tons of memory-inefficient changes to projects | 11:24 |
fungi | stephenfin: the openstack release managers have branch deletion permission delegated to them for all official openstack projects | 11:25 |
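(For illustration only: with direct push rights, deleting a remote branch is a one-liner in git, as sketched below. In OpenDev the actual deletion is performed by the release managers through their delegated Gerrit permissions, so this is not the official process, just what the underlying operation looks like; the remote name follows the git-review convention and the branch names come from the question above.)

```shell
# Generic git sketch only; OpenDev branch deletion goes through the
# release managers' Gerrit ACLs rather than a direct push like this.
git push gerrit --delete feature/r1    # in openstack/openstacksdk
git push gerrit --delete feature/osc4  # in openstack/python-openstackclient
```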
fungi | frickler: system-config-run-zuul uses a nodeset with 6 nodes, fwiw, and we have several jobs in opendev/system-config which use 5 | 11:35 |
frickler | hmm, but there isn't much interaction between these nodes, just testing on different platforms I think? | 11:36 |
fungi | we may have some which use even larger nodesets, i just happened to know about those off the top of my head | 11:36 |
frickler | even then, the question would be how helpful it is to make deployment projects test on a setup that is extremely different from the standard setup they are targeting | 11:37 |
fungi | in system-config-run-zuul we have a bridge node which uses ansible to deploy zuul to a merger, executor and scheduler, sets up haproxy on another, zookeeper on yet another, and checks that they work together | 11:38 |
fungi | frickler: yes, but also unless every job runs twice as fast when we give it double the ram, we end up with less job throughput/capacity overall | 11:39 |
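(A back-of-the-envelope sketch of fungi's throughput point, with made-up quota and runtime numbers: under a fixed RAM quota, doubling per-node RAM halves the concurrent node count, so overall throughput drops unless jobs also finish proportionally faster.)

```shell
# Illustrative numbers only, not OpenDev's real quotas or job runtimes.
quota_ram=409600                       # total RAM quota in MiB
nodes_8g=$((quota_ram / 8192))         # 50 concurrent 8GB nodes
nodes_16g=$((quota_ram / 16384))       # 25 concurrent 16GB nodes
# jobs per hour = concurrent nodes * 60 / job runtime in minutes
echo "8GB nodes,  60min jobs: $((nodes_8g * 60 / 60)) jobs/hour"   # 50
echo "16GB nodes, 45min jobs: $((nodes_16g * 60 / 45)) jobs/hour"  # 33
```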
frickler | fungi: I doubt that our quota is RAM bound, though of course we would need to check that with our donors | 11:40 |
frickler | for vexxhost we know that even 4x RAM would not change node capacity | 11:40 |
fungi | well, for one vexxhost region yes, not for the other one | 11:41 |
fungi | you can check ram quotas with openstack limit show i think? | 11:41 |
fungi | historically, ram was the main limiter in our quotas, which is why everything was sized around it | 11:41 |
frickler | fungi: "openstack quota list --compute" seems to work. do you think our RAM quota is considered confidential or could I share it in an etherpad for discussion? | 11:46 |
fungi | none of our quotas should be considered confidential | 11:47 |
frickler | still having an ethercalc around now would have been nice | 11:48 |
fungi | i wonder how that differs from what `openstack limits show --absolute` provides | 11:48 |
fungi | looks like the maxTotalRAMSize from limits show matches the Ram from quota list | 11:50 |
frickler | yes, seems to be mostly the same thing. on vexxhost only the limits command seems to work | 12:00 |
frickler | although ... that claims that we are unlimited there. not sure what to make of that | 12:02 |
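(A minimal sketch of the quota checks discussed above, assuming the clouds are configured in a standard clouds.yaml; the cloud name is a placeholder and the grep patterns are illustrative.)

```shell
# Per-project compute quotas (RAM is reported in MiB)
openstack --os-cloud vexxhost quota list --compute

# Absolute limits from the compute API; maxTotalRAMSize here should
# match the Ram column from `quota list` above
openstack --os-cloud vexxhost limits show --absolute \
  | grep -E 'maxTotal(RAMSize|Cores|Instances)'
```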
fungi | some providers expect us to check with them before increasing max-servers | 12:08 |
fungi | or before changing what flavors we're booting | 12:09 |
fungi | some tie custom flavors for us to host aggregates in order to isolate us to specific hardware too | 12:09 |
frickler | yes, certainly we cannot make this kind of change without discussing it with all providers, I just want to gather some data that may better allow us as opendev to decide what we might want to do | 12:13 |
frickler | ovh limits are very weird by the way, I thought I could get along with only listing maximum ram and instance count, but there max cores seems to be the crucial factor | 12:15 |
frickler | https://etherpad.opendev.org/p/opendev-compute-quota is my starting point, but now it looks like I'll have to do a local spreadsheet and just copy the results | 12:16 |
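(To illustrate the OVH observation that the binding quota varies by cloud: effective node capacity is the minimum across the RAM, core, and instance limits. All numbers below are hypothetical, not OpenDev's actual quotas.)

```shell
# Hypothetical quota values; substitute real output from the commands above.
ram_quota=409600     # MiB
core_quota=200
instance_quota=100
flavor_ram=8192      # MiB per 8GB node
flavor_cores=8
# Whichever of these is smallest is the real capacity limit; with these
# numbers it is cores (25), not RAM (50) or instances (100).
echo "by RAM:       $((ram_quota / flavor_ram))"
echo "by cores:     $((core_quota / flavor_cores))"
echo "by instances: $instance_quota"
```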
*** ykarel_ is now known as ykarel | 14:01 | |
clarkb | note I wouldn't rely on quotas for each resource across the board | 14:36 |
clarkb | it's part of the input, but if we are limited on cpus for example the cloud may not care too much about memory | 14:36 |
clarkb | and set the memory to some large value for example | 14:36 |
clarkb | in ovh for example they have provided us with specific flavors that we may use | 14:37 |
clarkb | I don't think any of those flavors have more than 8gb of memory | 14:37 |
clarkb | we would almost certainly need to engage with rax and ovh if we want to shift the primary flavor to double the ram. inmotion we control ourselves, and it is cpu limited, not memory limited, iirc. and vexxhost we already know is cpu limited, not memory limited | 14:39 |
clarkb | if people want to go through that I think it could be beneficial. That said I think having reasonable artificial restrictions on things does help produce more testable software. But that isn't universally true. I've long believed we should be more clever to work around lack of nested virt (even aws doesn't do it) and everyone basically refuses to work around that one | 14:45 |
*** melwitt_ is now known as melwitt | 18:34 | |
clarkb | openstack ansible repos don't share a gate queue... | 21:56 |
clarkb | they all run the same jobs though so probably should | 21:57 |
fungi | worth pointing out to noonedeadpunk as ptl | 22:03 |