opendevreview | Amit Uniyal proposed openstack/os-vif stable/zed: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/883016 | 04:12 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/yoga: Delete trunk bridges to avoid race with Neutron https://review.opendev.org/c/openstack/os-vif/+/886709 | 05:45 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/yoga: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886710 | 05:45 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/xena: Delete trunk bridges to avoid race with Neutron https://review.opendev.org/c/openstack/os-vif/+/886715 | 06:15 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/xena: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886716 | 06:15 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Add a new NumInstancesWeigher https://review.opendev.org/c/openstack/nova/+/886232 | 10:30 |
dvo-plv | Hello, Nova folks | 11:03 |
dvo-plv | Maybe you will have a chance to check this nova patch | 11:04 |
dvo-plv | https://review.opendev.org/c/openstack/nova/+/876075 | 11:04 |
dvo-plv | This is a part of the virtio packed ring blueprint | 11:04 |
manuvakery1 | Hi .. Is it possible for an admin to create an instance on a disabled host using the --host option? | 11:22 |
manuvakery1 | IIRC this was working in Queens, but in Train it throws "No valid host was found" | 11:39 |
opendevreview | Elod Illes proposed openstack/nova stable/victoria: Remove mentions of removed scheduler filters https://review.opendev.org/c/openstack/nova/+/858051 | 11:54 |
bauzas | manuvakery1: by providing the host value, you ask the scheduler to verify it | 12:36 |
bauzas | manuvakery1: if the host is disabled, then the Compute filter will say no | 12:36 |
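For context, a sketch of the two ways to target a specific host at boot time (host, image, flavor, and network names here are placeholders):

```shell
# Since compute API microversion 2.74, --host requests a specific host
# but the scheduler still runs its filters, so a disabled compute
# service fails with "No valid host was found":
openstack --os-compute-api-version 2.74 server create \
    --image cirros --flavor m1.tiny --network private \
    --host compute-01 test-vm

# The older zone:host syntax historically forced the host and bypassed
# the scheduler filters entirely (exact behavior varies by release):
openstack server create \
    --image cirros --flavor m1.tiny --network private \
    --availability-zone nova:compute-01 test-vm-forced
```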
noonedeadpunk | bauzas: returning back to the discussion in Vancouver regarding cross-AZ scheduling - I see that topic was mentioned in the etherpad, but I can't recall why that's smth we should not do? | 13:20 |
bauzas | noonedeadpunk: hmmm, about what exactly ? | 13:21 |
noonedeadpunk | like having anti-AZ server groups | 13:21 |
noonedeadpunk | Right now we're trying to implement cross-AZ scheduling of Octavia amphoras by defining AZs in their config, and then messing with Octavia flows to ensure it supplies the AZ to the scheduler | 13:22 |
noonedeadpunk | But now I've started thinking whether that's the correct path at all, and if it would be easier to just enable an anti-AZ option (similar to anti-affinity) and let nova handle that | 13:22 |
bauzas | noonedeadpunk: fortunately, you mean you would use soft-anti-affinity group policy ? | 13:23 |
noonedeadpunk | well, soft anti-affinity does ensure that VMs will spawn on different computes, but it doesn't care about different AZs, right? | 13:24 |
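A minimal sketch of the existing soft-anti-affinity workflow under discussion (image, flavor, network, and group names are placeholders); note the policy spreads members across hosts, not across AZs:

```shell
# Create a server group with best-effort host spreading:
openstack server group create --policy soft-anti-affinity amphora-group

# Boot members into the group via a scheduler hint; nova tries to put
# them on different hosts, but both could still land in the same AZ:
GROUP_ID=$(openstack server group show amphora-group -f value -c id)
openstack server create --image cirros --flavor m1.tiny \
    --network private --hint group="$GROUP_ID" amphora-1
openstack server create --image cirros --flavor m1.tiny \
    --network private --hint group="$GROUP_ID" amphora-2
```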
noonedeadpunk | So what I was thinking is smth like that, but specifically for cross-AZs spawning of VMs | 13:25 |
noonedeadpunk | but maybe we should just continue doing that on Octavia side | 13:27 |
noonedeadpunk | ah, sorry, I meant this etherpad https://etherpad.opendev.org/p/nova-vancouver2023-meet-and-greet | 13:28 |
bauzas | sorry, was afk for a second | 13:32 |
bauzas | noonedeadpunk: we also discussed this on the PTG, you can look at my ML email | 13:32 |
noonedeadpunk | ah, true, failure-domain-anti-affinity | 13:34 |
bauzas | right | 13:34 |
noonedeadpunk | but I guess "failure-domain" functionality does not exist yet? | 13:35 |
noonedeadpunk | another thing that can be "interesting": should we consider storage to be shared between failure domains? I wouldn't count on that, I guess | 13:36 |
bauzas | noonedeadpunk: tbc, we let operators define their own value of "failure domain" | 13:39 |
bauzas | in general, this is something larger than a single rack | 13:39 |
noonedeadpunk | ideally it's smth 30km in between :) | 13:40 |
bauzas | yeah, or just a DC room | 13:41 |
bauzas | (as a reminder, I was an operator before joining OpenStack ;) ) | 13:41 |
bauzas | just because in general, the failures are with A/C or network | 13:41 |
noonedeadpunk | or fire in DC :D | 13:42 |
bauzas | so, if the domain is larger than a rack, then it's simple to move instances between hosts | 13:42 |
bauzas | noonedeadpunk: lollylol (sorry OVH friends ;) ) | 13:42 |
noonedeadpunk | so, basically, option is: set metadata to aggregate which defines "failure-domain", then for server group provide the metadata and policy failure-domain-anti-affinity (or failure-domain-affinity)? | 13:43 |
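The aggregate-metadata half of that workflow exists today; the failure-domain-anti-affinity server group policy itself is only a proposal at this point (aggregate, property, and host names below are illustrative):

```shell
# Operators define their own failure domains via aggregate metadata:
openstack aggregate create rack-a
openstack aggregate set --property failure-domain=dc-room-1 rack-a
openstack aggregate add host rack-a compute-01

# Hypothetical, not an existing policy -- sketched per the discussion:
# openstack server group create --policy failure-domain-anti-affinity my-group
```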
bauzas | noonedeadpunk: what we said during the PTG to Bloomberg is that we could have a new API microversion for an admin saying "I'd want to live-migrate this instance from this server group to another" | 13:45 |
noonedeadpunk | that is completely different thing I guess? | 13:46 |
bauzas | and maybe have a new policy too | 13:46 |
bauzas | noonedeadpunk: https://etherpad.opendev.org/p/vancouver-june2023-nova#L121 | 13:46 |
noonedeadpunk | ok, yes, I was also talking about that... | 13:47 |
noonedeadpunk | then I'm slightly confused with live-migration | 13:47 |
noonedeadpunk | going back to our use case with Octavia. Instead of having an AZ scheduler in Octavia code, we could just make a call to create 2 VMs within the same server group, and nova would take care of ensuring they are spawned on different failure domains | 13:49 |
noonedeadpunk | (just theoretically) | 13:49 |
noonedeadpunk | And that's why I got confused about live migrations: in our use case we have completely independent backing storage | 13:51 |
bauzas | noonedeadpunk: sorry I was quickly explaining about what we discussed with Bloomberg | 13:51 |
noonedeadpunk | yeah, ok, that makes sense as well :) | 13:51 |
bauzas | like, given they were using both AZ and server groups, they also wanted to violate their hard policy that they were using for packing instances to the same rack | 13:52 |
bauzas | basically, because sometimes you want to deprecate a rack :p | 13:52 |
noonedeadpunk | yeah, yeah, that makes sense as well | 13:52 |
bauzas | that's also why I provided https://review.opendev.org/c/openstack/nova/+/886232 to help operators to pack instances to hosts :) | 13:53 |
bauzas | instead of having to create some hard policies, which I utterly dislike :D | 13:53 |
noonedeadpunk | oh, that is neat | 13:53 |
noonedeadpunk | and way less complex than the metrics weigher | 13:54 |
noonedeadpunk | Also I'm asking because we're working on the implementation in Octavia now, but maybe it would be better to switch to this failure-domain-anti-affinity policy implementation in nova instead | 13:57 |
noonedeadpunk | but ok, I think I got the gist | 13:57 |
bauzas | noonedeadpunk: we could discuss this on the next vPTG | 14:00 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/yoga: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886710 | 17:47 |
manuvakery1 | bauzas: ok thanks | 18:03 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/xena: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886716 | 18:04 |
opendevreview | Amit Uniyal proposed openstack/os-vif stable/wallaby: set default qos policy https://review.opendev.org/c/openstack/os-vif/+/886778 | 18:30 |
opendevreview | sean mooney proposed openstack/nova master: Remove deprecated AZ filter. https://review.opendev.org/c/openstack/nova/+/886779 | 18:31 |
*** iurygregory_ is now known as iurygregory | 19:25 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!