gibi | #startmeeting nova | 16:00 |
openstack | Meeting started Thu Oct 8 16:00:00 2020 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:00 |
*** openstack changes topic to " (Meeting topic: nova)" | 16:00 | |
openstack | The meeting name has been set to 'nova' | 16:00 |
gmann | o/ | 16:00 |
gibi | \o | 16:01 |
_erlon_ | \o | 16:01 |
navidp | \o | 16:01 |
gibi | let's wait an extra minute for the others | 16:01 |
elod | o/ | 16:03 |
gibi | #topic Bugs (stuck/critical) | 16:04 |
*** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: nova)" | 16:04 | |
gibi | No critical bug | 16:04 |
gibi | #link 6 new untriaged bugs (+2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New | 16:04 |
stephenfin | o/ | 16:04 |
gibi | We have a fix for the Focal bug #link https://bugs.launchpad.net/nova/+bug/1882521 but it has not merged on master yet #link https://review.opendev.org/#/c/755799 | 16:04
openstack | Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,In progress] - Assigned to Lee Yarwood (lyarwood) | 16:04 |
gibi | this means that it is probably not in the Victoria release either | 16:05
gmann | ok, I was just typing the same thing to ask | 16:05
gibi | but we can release from stable/victoria when it is ready | 16:05 |
gibi | I mean we can release an extra point release | 16:06 |
gmann | k | 16:06 |
gibi | any other bug to discuss? | 16:06 |
bauzas | \o (in a meeting) | 16:07 |
gibi | #topic Release Planning | 16:08 |
*** openstack changes topic to "Release Planning (Meeting topic: nova)" | 16:08 | |
gibi | Today is the last day to cut RC2 if needed. | 16:08 |
gibi | Nothing is on the stable/victoria branch that would justify an RC2 | 16:08 |
gibi | so we will release RC1 as the Victoria release | 16:08 |
gibi | any comments? | 16:08 |
gmann | I am holding the integrated compute job switch to Focal until we merge the Focal fix in victoria on the nova side - https://review.opendev.org/#/c/755525/3 | 16:09
gmann | as Tempest master runs on the victoria gate too, it will start failing the victoria gate | 16:10
gibi | so this also means that the focal switch is not part of the Victoria release | 16:10 |
tosky | unless you merge that fix, I guess | 16:10 |
gibi | unless that fix is merged _today_ as today is the last day for an RC2 | 16:11 |
gmann | gibi: it is, and once we merge the fix on victoria, either as part of an RC or as a backport with a new release, we'll start the pending compute job running on Focal | 16:11
gibi | gmann: sure I'm OK with a backport | 16:11 |
stephenfin | no one's going to consume the .0 release; .1 is fine imo | 16:12 |
gmann | gibi: I will also keep an eye on the gate to merge it today | 16:12
stephenfin | in case you were looking for opinions :) | 16:12 |
gibi | gmann: if we merge it today, does it also mean you would like to have RC2? | 16:12 |
gmann | gibi: hmm, either is fine. .1 with a backport is also OK for me. | 16:13
gmann | I think you said there are no other changes pending for RC2, right? | 16:13
gibi | gmann: yeah, we merged nothing to stable/victoria that needs an RC2 | 16:13 |
elod | just a question: if it merges today, will we have time to backport it to victoria to release RC2? (the "Failing device detachments on Focal" fix, I mean) | 16:14
gmann | yeah, the backport also needs time then. | 16:14
gibi | elod: good point | 16:14 |
gmann | I think we can skip RC2 | 16:14
gibi | let's go with RC1 for Victoria. Then release .1 after the Focal fix and the switch to Focal | 16:14
gmann | +1 | 16:15 |
elod | sounds OK | 16:15 |
gibi | anything else about the release? | 16:15 |
gibi | #topic Stable Branches | 16:17 |
*** openstack changes topic to "Stable Branches (Meeting topic: nova)" | 16:17 | |
elod | I think stable branches are OK, no new issues there | 16:17
gibi | cool, thanks elod | 16:18 |
gibi | #topic Sub/related team Highlights | 16:18 |
*** openstack changes topic to "Sub/related team Highlights (Meeting topic: nova)" | 16:18 | |
gibi | API (gmann) | 16:18 |
gmann | nothing specific from me | 16:18 |
gibi | thanks | 16:19 |
gibi | Libvirt (bauzas) | 16:19 |
bauzas | nothing to report, sir. | 16:19 |
gibi | thanks | 16:19 |
gibi | #topic Open discussion | 16:20 |
*** openstack changes topic to "Open discussion (Meeting topic: nova)" | 16:20 | |
gibi | there is one item on the agenda | 16:20 |
gibi | Adding a filter scheduler for server groups on availability zone. #link https://review.opendev.org/#/c/756380 | 16:20 |
gibi | navidp: I think it is yours | 16:20 |
navidp | yap | 16:20 |
navidp | I just wanted to get some reviews and a general idea about the use case. | 16:21
gibi | could you summarize the use case here | 16:21 |
*** navidp has quit IRC | 16:21 | |
*** navidpustchi has joined #openstack-meeting-3 | 16:22 | |
navidpustchi | I got disconnected. | 16:22 |
navidpustchi | The use case is instances with requirements to deploy across AZs or within a single AZ, | 16:24
gibi | navidpustchi: either you give us a short summary about the use case here so we can discuss it quickly or we can continue the review in the spec | 16:24 |
navidpustchi | Instances with higher reliability and availability requirements must be created on different racks or locations, such as hosts from different availability zones. Currently, the group affinity implementation supports instances in the group being created on a single host or on different hosts. Especially for deployments with a large number of availability zones, it is not practical to watch availability zones and instances in the group. The current server group anti-affinity filter only guarantees that instances in a server group with the anti-affinity policy land on different hosts, but these hosts can be in the same availability zone. | 16:25
gibi | so you would need availability zone level affinity and anti-affinity | 16:26 |
navidpustchi | exactly. | 16:26 |
bauzas | that's my understanding | 16:26 |
bauzas | but you get anti-affinity for free with AZs, right? | 16:26 |
gibi | I guess that would need new policies like az-affinity and az-anti-affinity, or a property on the server group that defines the scope of the affinity | 16:26 |
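For context, server-group policies are set at group creation time and attached to a server via a scheduler hint. Below is a minimal openstacksdk sketch of today's host-level anti-affinity; the cloud name and IDs are placeholders, and the az-anti-affinity policy gibi mentions is shown only as the hypothetical extension it is:

```python
# Sketch with openstacksdk; "mycloud" and the IDs are placeholders.
# Only affinity, anti-affinity, soft-affinity and soft-anti-affinity
# exist today -- "az-anti-affinity" below is the hypothetical new policy.
import openstack

conn = openstack.connect(cloud="mycloud")

# Today: host-level anti-affinity via a server group
group = conn.compute.create_server_group(
    name="ha-group", policies=["anti-affinity"])

conn.compute.create_server(
    name="vm-1",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
    # the scheduler hint ties the instance to the group's policy
    scheduler_hints={"group": group.id},
)

# The proposal discussed here would extend the policy vocabulary, e.g.:
# conn.compute.create_server_group(name="...", policies=["az-anti-affinity"])
```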
_erlon_ | Yes, the way you can distribute a group of VMs for an application is to create each one of them with a given AZ. But if you have a large set of AZs it becomes hard to track them | 16:27
bauzas | gibi: I personally think it would be a bad idea to keep adding more to server groups IMHO | 16:27
bauzas | _erlon_: navidpustchi: I'm lost, why can't you use aggregate affinity then? | 16:28
navidpustchi | We can either add to server groups, or use the current server groups and add a scheduler hint. | 16:28
_erlon_ | @bauzas aggregate affinity only aggregates hosts | 16:28
_erlon_ | not AZs; your hosts can be in the same AZ | 16:28
* bauzas tries to find the existing filter that says "exclude my new vm from this agg" | 16:29 | |
bauzas | I'm pretty sure it exists | 16:30 |
_erlon_ | unless you could create a group with all hosts from different AZs? | 16:30
_erlon_ | navidpustchi: do you think that would be possible for your case? | 16:30
gibi | bauzas: I'm not sure it exists. We have two isolation filters, AggregateMultiTenancyIsolation and AggregateImagePropertiesIsolation | 16:30
_erlon_ | bauzas: we couldn't find one either | 16:31
navidpustchi | no, there was a proposal for aggregate affinity but it was never implemented. | 16:31
bauzas | okay, I'm still struggling with the usecase | 16:32 |
navidpustchi | this filter addresses that issue as well. | 16:32
_erlon_ | #link https://blueprints.launchpad.net/nova/+spec/aggregate-affinity | 16:32 |
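To make the gap concrete: nova scheduler filters subclass BaseHostFilter and implement host_passes(). A rough sketch of an AZ-level anti-affinity filter follows; it is not the code under review at https://review.opendev.org/#/c/756380, the policy name is hypothetical, and the group-to-AZ bookkeeping helper is stubbed as an assumption:

```python
# Rough sketch only, not the actual change under review. The
# "az-anti-affinity" policy and _azs_used_by_group() are hypothetical;
# a real series would have to solve that bookkeeping, which is part of
# what makes this hard.
from nova.scheduler import filters


def _azs_used_by_group(group):
    # Hypothetical helper: map the group's existing instances to the
    # AZs they currently run in.
    raise NotImplementedError


class AvailabilityZoneAntiAffinityFilter(filters.BaseHostFilter):
    """Reject hosts whose AZ already holds a member of the server group."""

    RUN_ON_REBUILD = False  # a rebuild stays on the same host anyway

    def host_passes(self, host_state, spec_obj):
        group = spec_obj.instance_group
        if not group or 'az-anti-affinity' not in (group.policies or []):
            return True  # no group, or not using the hypothetical policy

        used_azs = _azs_used_by_group(group)

        # A host's AZ lives in its aggregates' metadata
        host_az = None
        for agg in host_state.aggregates:
            host_az = agg.metadata.get('availability_zone', host_az)

        return host_az not in used_azs
```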
bauzas | so you want to say "I want this VM to not be in this AZ"? | 16:32
bauzas | and "I want this VM to be in this AZ" ? | 16:32 |
bauzas | if so, I can help your case | 16:32 |
gibi | bauzas: they want to tell that these 3 VMs are in different AZs | 16:32 |
navidpustchi | no, it is very similar to host affinity; I want to tell it that this group of instances should be in one AZ | 16:32
bauzas | gibi: with multi-create, gotcha | 16:33 |
navidpustchi | or this group of instances to be on different AZs, meaning they will be created on hosts from different AZs. | 16:33
gibi | bauzas: not just multicreate. You can create VMs one after the other, but if they are in the same server group then the policy of the group would instruct the scheduler to place them in separate AZs | 16:34
navidpustchi | yes | 16:34 |
gibi | it is basically AZ (or aggregate) level (anti-)affinity | 16:34 |
navidpustchi | yes | 16:34 |
bauzas | gibi: well, if that's for sequential creates, you can specify which AZ you want to land on | 16:34
gibi | we have host level today | 16:34 |
bauzas | and then you don't need server groups | 16:35 |
gibi | bauzas: correct, | 16:35 |
navidpustchi | it works for a couple of instances. | 16:35
navidpustchi | but for a group of instances, keeping track of it becomes impossible | 16:35
bauzas | you're asking nova to orchestrate your cloud | 16:35 |
_erlon_ | it's actually the same concept as affinity for server groups | 16:36
bauzas | again, assuming you need this for failure domains | 16:36 |
_erlon_ | yes | 16:37 |
bauzas | I assume that you also know what to land and where | 16:37 |
bauzas | tbc, the only miss I see from nova is multicreate with AZ anti-affinity | 16:37 |
bauzas | because the sequential case is just a matter of knowing what to tell nova to go where | 16:38 |
navidpustchi | I agree, though having this knowledge is not expected from the user who creates the instance. | 16:38 |
navidpustchi | I mean the sequential case. | 16:39 |
bauzas | give me an example of a confusing situation, please | 16:39 |
navidpustchi | I'm a user, trying to create three VMs; I want to have them in different AZs for better reliability, not just on separate hosts. | 16:40
bauzas | so as a user, I ask nova to give me the list of AZs | 16:40 |
bauzas | and then I pick three different ones | 16:40
navidpustchi | for this case I need to know which AZs I can create a VM in, in order to specify them | 16:40
bauzas | right | 16:40 |
bauzas | but the AZ list is known to the user | 16:41 |
navidpustchi | and I want them to stay in different AZs. | 16:41
_erlon_ | you can also do that with hosts, and cast each instance into a different host | 16:41 |
_erlon_ | bauzas: if you have a small number of operators and a modest-sized cloud you know where you want to land; the use case here is that if you have, let's say, dozens of AZs, then controlling that is a problem | 16:41
bauzas | _erlon_: if you specify different AZs per instance, those will necessarily be on different hosts | 16:41
navidpustchi | if there is a maintenance and we need to move a VM, it can break the initial AZs | 16:41
bauzas | _erlon_: again, I'm not convinced: the user can see the long list of AZs, and pick three different ones for each of their VMs | 16:42
bauzas | and they will get the guarantee that the instances will be exclusively running on different hosts | 16:42
bauzas | navidpustchi: I thought this was a user use case, not an operator one | 16:43
navidpustchi | may I ask what part you are not convinced about? | 16:43
navidpustchi | it is a user use case, | 16:43
bauzas | navidpustchi: if you want to do maintenance on a host, just live migrate the VMs; they will stay in the same AZ | 16:43
bauzas | (provided the instances were created with an AZ argument) | 16:43 |
gibi | I think this AZ-level (anti-)affinity question boils down to the fact that nova does not want to implement much more orchestration logic https://docs.openstack.org/nova/rocky/contributor/project-scope.html#no-more-orchestration | 16:43
*** navidpustchi has quit IRC | 16:43 | |
bauzas | hah, I was about to explain why I wasn't convinced | 16:44 |
bauzas | but I'll continue and logs will show | 16:44 |
gibi | nova provides the interface to list the AZs and the way to specify an AZ for an instance, so an external entity can do the orchestration logic | 16:44
*** navidpustchi has joined #openstack-meeting-3 | 16:44 | |
bauzas | ^ this | 16:44 |
gibi | this does not necessarily need to be the end user | 16:45
gibi | we have heat for example for orchestration | 16:45 |
bauzas | whatever the number of AZs a cloud has, a user can be sure that they can boot instances on different hosts by specifying different AZs | 16:45
bauzas | because they just have to pick different AZs from the list | 16:45
bauzas | even if the cloud is running 100000 instances and has 1000 AZs | 16:46
bauzas | pick the first three AZs and spin instances against each of them | 16:46
bauzas | and then you'll get AZ anti-affinity | 16:46 |
bauzas | but, | 16:46 |
bauzas | there is one miss I reckon | 16:46 |
bauzas | you can't do it at once in a single API call | 16:47 |
bauzas | that being said, and people can testify, I'm really not a super-fan of multi-create | 16:47 |
bauzas | and I'm not really happy to add more complexity to multi-create as I'd personally like to leave it to die | 16:48
gibi | yeah, with multi create you can only specify one AZ for all the instances in the request | 16:48 |
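The workflow bauzas describes is easy to script on the client side, and this is also the kind of logic an external orchestrator could own instead of the end user. A minimal openstacksdk sketch, with a placeholder cloud name and IDs, creating the servers sequentially since a single multi-create call accepts only one availability zone:

```python
# Client-side AZ spreading per the workflow above; "mycloud" and the
# IDs are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# nova exposes the AZ list; take three distinct zones
azs = [az.name for az in conn.compute.availability_zones()][:3]

for i, az in enumerate(azs):
    conn.compute.create_server(
        name=f"vm-{i}",
        image_id="IMAGE_ID",
        flavor_id="FLAVOR_ID",
        networks=[{"uuid": "NETWORK_ID"}],
        availability_zone=az,  # a different AZ implies a different host
    )
```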
bauzas | navidpustchi: do you think we answered your question? we can continue off-meeting | 16:49
navidpustchi | makes sense, though if you want to create 500 VMs will it work? | 16:49
gibi | the external orchestrator does not need to be a human ^^ | 16:49 |
bauzas | navidpustchi: if you create 500 VMs sequentially, yes | 16:49 |
navidpustchi | ok | 16:49 |
navidpustchi | we can continue offline. | 16:50
gibi | OK | 16:50
gibi | is there anything else for today's meeting? | 16:50
bauzas | navidpustchi: if you want to have 500 VMs be mutually exclusive at once, use a server group or write a feature | 16:50
_erlon_ | ok, we will discuss the alternatives you guys suggested, and if we have something that can't be accomplished we'll bring that over | 16:50
gibi | _erlon_, navidpustchi: thanks | 16:50 |
bauzas | thanks indeed | 16:51 |
navidpustchi | thanks | 16:51 |
_erlon_ | bauzas++ gibi++ | 16:51 |
_erlon_ | thanks guys | 16:51 |
gibi | if there's nothing else for today, then thanks for joining | 16:51
gibi | o/ | 16:52 |
gibi | #endmeeting | 16:52 |
bauzas | bye | 16:52 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/" | 16:52 | |
openstack | Meeting ended Thu Oct 8 16:52:25 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:52 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.html | 16:52 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.txt | 16:52 |
openstack | Log: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.log.html | 16:52 |
bauzas | gibi: thanks for running it | 16:52 |
elod | thanks o/ | 16:53 |