*** efried1 is now known as efried | 02:40 | |
sahid | o/ | 07:44 |
sahid | still with my fighting regarding aggregate and az. There is one thing I wanted to mention | 07:44 |
sahid | I have noticed a difference in behavior between the aggregate multitenancy isolation prefilter in placement and the AggregateMultiTenancyIsolation filter of the scheduler | 07:45
sahid | the scheduler filter accepts hosts which do not belong to an aggregate with filter_tenant_id | 07:46
sahid | so tenant_id can be scheduled to hosts that do not belong to an aggregate with filter_tenant_id=tenant_id, whereas in placement tenant_id can *only* be scheduled to hosts that belong to an aggregate with filter_tenant_id=tenant_id | 07:50
sahid | i'm not sure I'm clear :-) | 07:50 |
bauzas | sahid: you're perfectly clear :) | 08:44 |
bauzas | the placement prefilter behaves differently from the scheduler filter | 08:44
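(For context, a minimal nova.conf sketch of the two mechanisms being compared; the option names come from nova's scheduler configuration, but the exact layout is illustrative:)

```ini
[filter_scheduler]
# Scheduler filter (permissive): hosts outside any aggregate carrying a
# filter_tenant_id key remain schedulable for every tenant.
enabled_filters = AggregateMultiTenancyIsolation,ComputeFilter

[scheduler]
# Placement prefilter (strict): a tenant listed in an aggregate's
# filter_tenant_id can *only* land on hosts of that aggregate.
limit_tenants_to_placement_aggregate = True
```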
sahid | bauzas: thanks :-) the question that I also have is: is there any plan to move all the filters into placement and so remove the scheduler? My point is that the current AggregateMultitenancyIsolation is limited regarding the number of project_ids we can set as part of filter_tenant_id (limited by the db field for the value of an aggregate key, I guess). Does it make sense to provide a patch to remove | 08:55
sahid | that limitation by adding logic like we have in placement? | 08:55
sahid | or do we just not want to evolve the scheduler filter anymore? | 08:55
bauzas | sahid: we eventually came to the conclusion that it was impossible and undesirable to move all the filters into placement prefilters | 08:55
bauzas | but every time we have resources in placement, we should have prefilters, yeah | 08:56
bauzas | back to your question, I'd say that there is no clear stance about the AggregateMultitenancyIsolation filter | 08:57 |
bauzas | it behaves a bit differently | 08:57 |
bauzas | so we could keep it and patch it if required | 08:57 |
bauzas | the limitation you mention is indeed a DB problem | 08:57 |
bauzas | but I assume placement would also have this limitation (at least it's limited by the length of the URL and the size of the response :) ) | 08:58 |
sahid | thanks a lot for all those points | 08:59 |
sahid | for placement, filter_tenant_id is a key prefix: it matches all keys in the aggregate metadata that start with filter_tenant_id | 08:59
sahid | which will remove the limitation :-) | 09:00
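(A sketch of what that prefix matching allows; the aggregate name and project IDs below are made up:)

```sh
# The placement prefilter matches any aggregate metadata key *starting
# with* filter_tenant_id, so tenants can be spread across several keys
# instead of being packed into one length-limited value:
openstack aggregate set \
  --property filter_tenant_id:0=4e66c95e484f40cd9e6958a0e37f7868 \
  --property filter_tenant_id:1=a7316dcb6f6c4c70bfa89413f97aafca \
  my-aggregate
```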
bauzas | ralonsoh: looks like some neutron revert is also needed here : https://9646a7fb82b47fbe6288-a22e2178400a1d74c0dfc0d0570ba9cf.ssl.cf2.rackcdn.com/894945/1/check/nova-grenade-multinode/e307f26/testr_results.html | 12:40 |
zigo | Is there a PBR equivalent to pyproject.toml's : | 12:42 |
zigo | [tool.setuptools.package-dir] | 12:42 |
zigo | packagename = "packagedir" | 12:42 |
jras_ | Hi, we're trying to migrate from nova/libvirt network QoS to Neutron managed QoS. Is there any way to do this without requiring VM downtime? It seems changes made to domain config using virsh domiftune are undone after live migrations. It seems the alternative is to resize to a flavor without these quotas. Is that correct? | 12:43 |
zigo | It should be: | 12:43 |
zigo | [files] | 12:43 |
zigo | packages = | 12:43 |
zigo | nova | 12:43 |
zigo | right? | 12:43 |
bauzas | zigo: https://docs.openstack.org/pbr/latest/user/using.html#files | 12:45 |
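(Roughly, the mapping between the two styles looks like this; treating packages_root as the analogue of the package-dir table is an assumption, since PBR's [files] section relocates the search root rather than mapping individual packages:)

```ini
# pyproject.toml (setuptools)
[tool.setuptools.package-dir]
packagename = "packagedir"

# setup.cfg (PBR) -- assumed rough equivalent; packages_root moves the
# root that packages are found under, not a per-package mapping
[files]
packages_root = packagedir
packages =
    packagename
```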
ralonsoh | bauzas, I pushed a patch for this yesterday | 12:45 |
ralonsoh | I think it's merged | 12:45
bauzas | ack gtk | 12:45 |
bauzas | ralonsoh: yeah the one you said | 12:45 |
bauzas | but that's still failing | 12:45 |
bauzas | fresh fish from today's shelf | 12:46 |
ralonsoh | bauzas, https://review.opendev.org/c/openstack/tempest/+/895167/1/tempest/api/compute/admin/test_live_migration.py isn't that enough? | 12:46 |
jras_ | I forgot to mention that the original flavor had the quotas removed before we used domiftune to edit the domain config; they still seem to be set after live migration. | 12:46
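(A sketch of the behavior being described; the instance, interface, and flavor names are made up, and the domiftune semantics, where an average of 0 removes the QoS on that direction, are per the libvirt docs:)

```sh
# In-place edit: clears the tuning on the running domain, but nova
# regenerates the domain XML from its own records on live migration,
# so the old quotas come back afterwards.
virsh domiftune instance-0000002a tap0 --inbound 0 --outbound 0 --live

# The durable alternative: resize to a flavor without quota:vif_*
# extra specs, then confirm (which implies VM downtime).
openstack server resize --flavor no-qos-flavor my-server
openstack server resize confirm my-server
```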
bauzas | ralonsoh: unstable seems to just be there for marking the test, but it still runs | 12:48
bauzas | https://opendev.org/openstack/tempest/commit/21f53012f76d11e3df327adcf87e67edf9045d09 | 12:48 |
ralonsoh | bauzas, but the job should not fail if the test fails | 12:49 |
bauzas | agreed, I'm now puzzled | 12:49 |
zigo | bauzas: Thanks, but I think what I wrote isn't enough. What's happening to me is: | 12:49
zigo | setuptools.errors.PackageDiscoveryError: Multiple top-level packages discovered in a flat-layout: ['debian', 'pymemcache']. | 12:49 |
zigo | (i.e.: the debian folder is annoying setuptools 66.1.1, though it seems kind of fixed in 66.1.2...) | 12:49
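(One possible workaround, an assumption rather than a confirmed fix: declare the packages explicitly so setuptools skips flat-layout auto-discovery and never looks at the debian/ directory:)

```ini
# setup.cfg
[options]
packages = pymemcache
```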
ralonsoh | bauzas, if in this new recheck it fails again, I'll send a patch to skip it | 12:49 |
bauzas | zigo: there are good reasons why I never put my toe into the packaging mess, and the one you mention looks like another good one to me :) | 12:50
bauzas | ralonsoh: ack | 12:51 |
zigo | Python is a mess, not packaging ! :) | 12:51 |
* bauzas disappears for 20 mins | 12:51 | |
bauzas | zigo: sure, having 3 different official ways to package + a lot of third-party ones is certainly not a mess | 12:52 |
zigo | Well, everyone's wrong, I'm the only one doing things right ... :P | 12:52 |
bauzas | #xkcd927, I agree | 12:54 |
zigo | Oh, I was wrong, pymemcache isn't using PBR, but standard setuptools with a complex PBR-like setup.cfg... | 12:56
zigo | So that's indeed a very good case for #xkcd927 ... | 12:56 |
zigo | But the issue really is Python packaging, and not Deb packaging. | 12:57 |
zigo | (not joking, this time...) | 12:57
zigo | So, setup.cfg's way (the 3rd way that I know of now...) is: | 13:02
zigo | [options] | 13:02 |
zigo | packages = foo | 13:02 |
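(and, to carry the package-dir mapping into that same style, the usual src-layout form; a sketch, where the bare "= src" line remaps the root:)

```ini
[options]
package_dir =
    = src
packages = find:

[options.packages.find]
where = src
```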
opendevreview | Merged openstack/placement master: Update 2023.2 reqs to support os-traits 3.0.0 as min version https://review.opendev.org/c/openstack/placement/+/895186 | 13:12 |
elodilles | zigo: somewhat similar case was this: https://review.opendev.org/q/topic:setuptools-issue-3197 | 13:17 |
elodilles | i think... | 13:17 |
zigo | Thinking about it: it's not even 3 standards that we have, but 4: | 13:37 |
zigo | - setup.py (the legacy way) | 13:37
zigo | - setup.cfg by setuptools | 13:37
zigo | - setup.cfg by PBR | 13:37 |
zigo | - pyproject.toml | 13:37 |
zigo | Fun ... :P | 13:37 |
bauzas | elodilles: want me to rebase the placement RC1 patch with the new SHA1 ? | 13:42 |
elodilles | bauzas: if you could do that that would be awesome o:) | 14:02 |
bauzas | elodilles: this should work https://review.opendev.org/c/openstack/releases/+/894698 | 14:18
bauzas | (I needed to checkout HEAD^ for the bobcat file before using new-release tool) | 14:19 |
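(For anyone replaying that, roughly what it looks like in openstack/releases; the deliverable path and command form are assumptions about the release tooling:)

```sh
# Reset the deliverable file to its previous state, then regenerate
# the RC entry with the new SHA via the new-release helper:
git checkout HEAD^ -- deliverables/bobcat/placement.yaml
tox -e venv -- new-release bobcat placement rc
```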
* bauzas goes taxiing for the kids | 14:20 | |
Uggla | bauzas, I think I have spotted the difference and the "issue" with service token + sdk usage. | 15:02
bauzas | Uggla: happy to hear | 15:40 |
Uggla | I'm currently writing a wrapup of my findings, I will share it Monday. | 15:41 |
Uggla | coz I need to go to a school meeting... | 15:41 |
bauzas | Uggla: haha, me too on both Monday (for Charline) and Tuesday (Clémence) :) | 15:42 |
elodilles | bauzas: thanks, placement rc1 patch is on the gate! | 15:59 |
elodilles | now we need only the nova rc1 :] | 15:59 |
dansmith | we still need to land the revert series | 16:13 |
dansmith | I just rechecked the top one | 16:13 |
dansmith | and looks like the bottom one just hit a gate reset :/ | 16:14 |
opendevreview | OpenStack Release Bot proposed openstack/placement stable/2023.2: Update .gitreview for stable/2023.2 https://review.opendev.org/c/openstack/placement/+/895491 | 16:15 |
opendevreview | OpenStack Release Bot proposed openstack/placement stable/2023.2: Update TOX_CONSTRAINTS_FILE for stable/2023.2 https://review.opendev.org/c/openstack/placement/+/895492 | 16:15 |
opendevreview | OpenStack Release Bot proposed openstack/placement master: Update master for stable/2023.2 https://review.opendev.org/c/openstack/placement/+/895493 | 16:15 |
*** efried1 is now known as efried | 18:30 |