Thursday, 2020-10-08

00:44 *** ianychoi__ is now known as ianychoi
01:45 *** ricolin_ has joined #openstack-meeting-3
02:41 *** macz_ has joined #openstack-meeting-3
02:46 *** macz_ has quit IRC
04:16 *** psahoo has joined #openstack-meeting-3
05:07 *** psahoo has quit IRC
05:07 *** psahoo has joined #openstack-meeting-3
06:18 *** macz_ has joined #openstack-meeting-3
06:22 *** macz_ has quit IRC
06:29 *** psachin has joined #openstack-meeting-3
06:37 *** ralonsoh has joined #openstack-meeting-3
07:01 *** slaweq has joined #openstack-meeting-3
07:33 *** ttx has quit IRC
07:38 *** ttx has joined #openstack-meeting-3
07:54 *** tosky has joined #openstack-meeting-3
08:01 *** e0ne has joined #openstack-meeting-3
08:02 *** ttx has quit IRC
08:02 *** ttx has joined #openstack-meeting-3
09:55 *** macz_ has joined #openstack-meeting-3
09:59 *** macz_ has quit IRC
10:53 *** psachin has quit IRC
11:03 *** psachin has joined #openstack-meeting-3
11:18 *** lkoranda has joined #openstack-meeting-3
11:25 *** lpetrut has joined #openstack-meeting-3
11:43 *** macz_ has joined #openstack-meeting-3
11:48 *** macz_ has quit IRC
12:00 *** raildo has joined #openstack-meeting-3
12:10 *** lkoranda has quit IRC
12:24 *** njohnston has joined #openstack-meeting-3
12:25 *** _erlon_ has joined #openstack-meeting-3
12:38 *** lkoranda has joined #openstack-meeting-3
13:31 *** macz_ has joined #openstack-meeting-3
13:36 *** macz_ has quit IRC
13:44 *** Luzi has joined #openstack-meeting-3
14:20 *** lkoranda has quit IRC
14:21 *** lpetrut has quit IRC
14:38 *** slaweq has quit IRC
14:42 *** slaweq has joined #openstack-meeting-3
14:59 *** priteau has joined #openstack-meeting-3
14:59 *** psahoo has quit IRC
15:03 *** mlavalle has joined #openstack-meeting-3
15:19 *** macz_ has joined #openstack-meeting-3
15:24 *** macz_ has quit IRC
15:29 *** macz_ has joined #openstack-meeting-3
15:43 *** navidpustchi has joined #openstack-meeting-3
15:46 *** Luzi has quit IRC
15:46 *** navidpustchi is now known as navidp
15:47 *** ygk_12345 has joined #openstack-meeting-3
15:47 *** elod has joined #openstack-meeting-3
15:55 *** e0ne has quit IRC
15:58 *** ygk_12345 has quit IRC
16:00 <gibi> #startmeeting nova
16:00 <openstack> Meeting started Thu Oct  8 16:00:00 2020 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 *** openstack changes topic to " (Meeting topic: nova)"
16:00 <openstack> The meeting name has been set to 'nova'
16:00 <gmann> o/
16:01 <gibi> \o
16:01 <_erlon_> \o
16:01 <navidp> \o
16:01 <gibi> let's wait an extra minute for the others
16:03 <elod> o/
16:04 <gibi> #topic Bugs (stuck/critical)
16:04 *** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: nova)"
16:04 <gibi> No critical bug
16:04 <gibi> #link 6 new untriaged bugs (+2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:04 <stephenfin> o/
16:04 <gibi> We have a fix for the Focal bug #link https://bugs.launchpad.net/nova/+bug/1882521 but it has not merged on master yet #link https://review.opendev.org/#/c/755799
16:04 <openstack> Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,In progress] - Assigned to Lee Yarwood (lyarwood)
16:05 <gibi> this means that it is probably not in the Victoria release either
16:05 <gmann> ok, i was typing the same to ask
16:05 <gibi> but we can release from stable/victoria when it is ready
16:06 <gibi> I mean we can release an extra point release
16:06 <gmann> k
16:06 <gibi> any other bug to discuss?
16:07 <bauzas> \o (in a meeting)
16:08 <gibi> #topic Release Planning
16:08 *** openstack changes topic to "Release Planning (Meeting topic: nova)"
16:08 <gibi> Today is the last day to cut RC2 if needed.
16:08 <gibi> Nothing is on the stable/victoria branch that would justify an RC2
16:08 <gibi> so we will release RC1 as the Victoria release
16:08 <gibi> any comments?
16:09 <gmann> i am holding the integrated compute job switch to Focal until we merge the Focal fix in victoria on the nova side - https://review.opendev.org/#/c/755525/3
16:10 <gmann> as Tempest master runs on the Victoria gate too, it will start failing the victoria gate
16:10 <gibi> so this also means that the Focal switch is not part of the Victoria release
16:10 <tosky> unless you merge that fix, I guess
16:11 <gibi> unless that fix is merged _today_, as today is the last day for an RC2
16:11 <gmann> gibi: it is, and once we merge the fix on victoria (as part of an RC or as a backport with a new release) we switch the pending compute job to run on Focal
16:11 <gibi> gmann: sure, I'm OK with a backport
16:12 <stephenfin> no one's going to consume the .0 release; .1 is fine imo
16:12 <gmann> gibi: i will also keep an eye on the gate to merge it today
16:12 <stephenfin> in case you were looking for opinions :)
16:12 <gibi> gmann: if we merge it today, does it also mean you would like to have an RC2?
16:13 <gmann> gibi: hmm, either is fine. .1 with a backport is also ok for me.
16:13 <gmann> i think you said there are no other changes pending for RC2, right?
16:13 <gibi> gmann: yeah, we merged nothing to stable/victoria that needs an RC2
16:14 <elod> just a question: if it merges today, will we have time to backport it to victoria to release RC2? (the "Failing device detachments on Focal" fix i mean)
16:14 <gmann> yeah, the backport also needs time then.
16:14 <gibi> elod: good point
16:14 <gmann> I think we can leave RC2
16:14 <gibi> let's go with RC1 for Victoria. Then release .1 after the Focal fix and the switch to Focal
16:15 <gmann> +1
16:15 <elod> sounds OK
16:15 <gibi> anything else about the release?
16:17 <gibi> #topic Stable Branches
16:17 *** openstack changes topic to "Stable Branches (Meeting topic: nova)"
16:17 <elod> i think stable branches are OK, no new issues there
16:18 <gibi> cool, thanks elod
16:18 <gibi> #topic Sub/related team Highlights
16:18 *** openstack changes topic to "Sub/related team Highlights (Meeting topic: nova)"
16:18 <gibi> API (gmann)
16:18 <gmann> nothing specific from me
16:19 <gibi> thanks
16:19 <gibi> Libvirt (bauzas)
16:19 <bauzas> nothing to report, sir.
16:19 <gibi> thanks
16:20 <gibi> #topic Open discussion
16:20 *** openstack changes topic to "Open discussion (Meeting topic: nova)"
16:20 <gibi> there is one item on the agenda
16:20 <gibi> Adding a scheduler filter for server groups on availability zone. #link https://review.opendev.org/#/c/756380
16:20 <gibi> navidp: I think it is yours
16:20 <navidp> yep
16:21 <navidp> I just wanted to get some reviews and a general idea about the use case.
16:21 <gibi> could you summarize the use case here?
16:21 *** navidp has quit IRC
16:22 *** navidpustchi has joined #openstack-meeting-3
16:22 <navidpustchi> I got disconnected.
16:24 <navidpustchi> The use case is that for instances with requirements to deploy across AZs or within a single AZ,
16:24 <gibi> navidpustchi: either you give us a short summary about the use case here so we can discuss it quickly, or we can continue the review in the spec
16:25 <navidpustchi> Instances with higher reliability and availability requirements must be created on different racks or locations, such as hosts from different availability zones. Currently, the group affinity implementation supports instances in the group being created on a single host or on different hosts. Especially for deployments with a large number of availability zones, it is not practical to watch availability zones and the instances in the group.
16:25 <navidpustchi> The current server group anti-affinity filter only guarantees that instances in a server group with the anti-affinity policy land on different hosts, but these hosts can be in the same availability zone.
16:26 <gibi> so you would need availability zone level affinity and anti-affinity
16:26 <navidpustchi> exactly.
16:26 <bauzas> that's my understanding
16:26 <bauzas> but you get anti-affinity for free with AZs, right?
16:26 <gibi> I guess that would need new policies like az-affinity and az-anti-affinity, or a property on the server group that defines the scope of the affinity
16:27 <_erlon_> Yes, the way you can distribute a group of VMs for an application is to create each one of them with a given AZ. But if you have a large set of AZs it becomes hard to track them
16:27 <bauzas> gibi: I personally think it would be a bad idea to keep adding more to server groups IMHO
16:28 <bauzas> _erlon_: navidpustchi: I'm lost, why can't you use aggregate affinity then?
16:28 <navidpustchi> We can either add to server groups, or use the current server groups and have a scheduler hint.
16:28 <_erlon_> bauzas: the aggregate affinity only aggregates hosts
16:28 <_erlon_> not AZs, your hosts can be in the same AZ
16:29 *** ricolin_ has quit IRC
16:29 * bauzas tries to find the existing filter that says "exclude my new vm from this agg"
16:30 <bauzas> I'm pretty sure it exists
16:30 <_erlon_> unless you could create a group with all hosts from different AZs?
16:30 <_erlon_> navidpustchi: do you think that would be possible for your case?
16:30 <gibi> bauzas: I'm not sure it exists. We have two isolation filters, AggregateMultiTenancyIsolation and AggregateImagePropertiesIsolation
16:31 <_erlon_> bauzas: we couldn't find one either
16:31 <navidpustchi> no, there was a proposal for aggregate affinity but it was never implemented.
16:32 <bauzas> okay, I'm still struggling with the use case
16:32 <navidpustchi> this filter addresses that issue as well.
16:32 <_erlon_> #link https://blueprints.launchpad.net/nova/+spec/aggregate-affinity
16:32 <bauzas> so you want to say "I want this VM to not be in this AZ"?
16:32 <bauzas> and "I want this VM to be in this AZ"?
16:32 <bauzas> if so, I can help your case
16:32 <gibi> bauzas: they want to say that these 3 VMs should be in different AZs
16:32 <navidpustchi> no, it is very similar to host affinity, I want to tell it that this group of instances should be in an AZ
16:33 <bauzas> gibi: with multi-create, gotcha
16:33 <navidpustchi> or that this group of instances should be on different AZs, meaning they will be created on hosts from different AZs.
16:34 <gibi> bauzas: not just multi-create. You can create VMs one after the other, but if they are in the same server group then the policy of the group would instruct the scheduler to place them in separate AZs
16:34 <navidpustchi> yes
16:34 <gibi> it is basically AZ (or aggregate) level (anti-)affinity
16:34 <navidpustchi> yes
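
To make the idea above concrete, a rough, illustrative sketch of what an AZ-scoped anti-affinity host filter could look like is below. It is not the code proposed in https://review.opendev.org/#/c/756380: the BaseHostFilter interface (host_passes(host_state, spec_obj)) and the aggregate-metadata lookup mirror existing nova filters, while the 'az-anti-affinity' policy name and the helper that resolves which AZs a group already uses are hypothetical -- resolving those AZs is exactly the part such a spec would have to design.

    # Illustrative sketch only -- not the change under review.
    from nova.scheduler import filters


    class AZAntiAffinityFilter(filters.BaseHostFilter):
        """Reject hosts located in an AZ already used by the server group."""

        RUN_ON_REBUILD = False  # a rebuild keeps the instance on its host

        def host_passes(self, host_state, spec_obj):
            group = spec_obj.instance_group
            # Hypothetical policy name; today nova only knows
            # 'affinity', 'anti-affinity' and their 'soft-' variants.
            if not group or group.policy != 'az-anti-affinity':
                return True

            # The AZs already occupied by the group's members.
            used_azs = self._azs_used_by_group(group)

            # A host's AZ comes from the metadata of the aggregates it is in.
            host_azs = {
                agg.metadata['availability_zone']
                for agg in host_state.aggregates
                if 'availability_zone' in agg.metadata
            }
            return not (host_azs & used_azs)

        def _azs_used_by_group(self, group):
            # Placeholder: pretend the group object already carries the AZs
            # of its members; a real filter would have to resolve each
            # member's host to its AZ.
            return set(getattr(group, 'availability_zones', None) or [])
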
16:34 <bauzas> gibi: well, if that's for sequential creates, you can specify which AZ you want to land in
16:34 <gibi> we have host level today
16:35 <bauzas> and then you don't need server groups
16:35 <gibi> bauzas: correct,
16:35 <navidpustchi> it works for a couple of instances.
16:35 <navidpustchi> but for a group of instances, keeping track of it becomes impossible
16:35 <bauzas> you're asking nova to orchestrate your cloud
16:36 <_erlon_> it's actually the same concept as affinity for server groups
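
For comparison, the host-level (anti-)affinity that exists today is driven by server groups; a minimal openstacksdk sketch follows. The cloud name and the image/flavor/network IDs are placeholders, and the exact way the SDK forwards the server-group scheduler hint may vary between releases.

    # Minimal sketch of today's host-level anti-affinity (placeholders for
    # cloud name and IDs). The scheduler keeps group members on separate
    # hosts, but says nothing about availability zones.
    import openstack

    conn = openstack.connect(cloud='mycloud')

    group = conn.compute.create_server_group(
        name='my-ha-group',
        policies=['anti-affinity'],  # newer microversions use policy='anti-affinity'
    )

    for i in range(3):
        conn.compute.create_server(
            name='vm-%d' % i,
            image_id='IMAGE_ID',
            flavor_id='FLAVOR_ID',
            networks=[{'uuid': 'NETWORK_ID'}],
            # Assumption: the SDK maps this to the os:scheduler_hints
            # "group" key, which is how the API ties a boot to a server group.
            scheduler_hints={'group': group.id},
        )
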
16:36 <bauzas> again, assuming you need this for failure domains
16:37 <_erlon_> yes
16:37 <bauzas> I assume that you also know what to land and where
16:37 <bauzas> tbc, the only gap I see in nova is multi-create with AZ anti-affinity
16:38 <bauzas> because the sequential case is just a matter of knowing what to tell nova to go where
16:38 <navidpustchi> I agree, though having this knowledge is not expected from the user who creates the instance.
16:39 <navidpustchi> I mean the sequential case.
16:39 <bauzas> give me an example of a confusing situation, please
16:40 <navidpustchi> I'm a user trying to create three VMs, and I want them in different AZs for better reliability, not just on separate hosts.
16:40 <bauzas> so as a user, I ask nova to give me the list of AZs
16:40 <bauzas> and then I pick three different ones
16:40 <navidpustchi> for this case I need to know which AZs I can create a VM in, so I can specify them
16:40 <bauzas> right
16:41 <bauzas> but the AZ list is known to the user
16:41 <navidpustchi> and I want them to stay in different AZs.
16:41 <_erlon_> you can also do that with hosts, and cast each instance onto a different host
16:41 <_erlon_> bauzas: if you have a small number of operators and a modest sized cloud you know where you want to land; the use case here is that if you have, let's say, dozens of AZs, then controlling that is a problem
16:41 <bauzas> _erlon_: if you specify different AZs per instance, those will necessarily be on different hosts
16:41 <navidpustchi> if there is a maintenance and we need to move a VM, it can break the initial AZs
16:42 <bauzas> _erlon_: again, I'm not convinced: the user can see the long list of AZs, and pick three different ones for each of their VMs
16:42 <bauzas> and they will get the guarantee that the instances will be exclusively running on different hosts
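
The workaround bauzas describes -- list the AZs and pin each instance to a different one -- could look roughly like this with openstacksdk. This is illustrative only: the cloud name and IDs are placeholders, and the availability_zone pass-through is assumed to behave like the plain compute API field.

    # Client-side AZ spreading: any external orchestrator (or the user)
    # can do this with the interfaces nova already exposes.
    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Nova exposes the AZ list to regular users.
    az_names = [az.name for az in conn.compute.availability_zones()]

    # Boot one server per AZ; different AZs imply different hosts.
    for i, az in enumerate(az_names[:3]):
        conn.compute.create_server(
            name='vm-%d' % i,
            image_id='IMAGE_ID',
            flavor_id='FLAVOR_ID',
            networks=[{'uuid': 'NETWORK_ID'}],
            availability_zone=az,  # assumption: forwarded as the server's AZ on create
        )
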
16:43 <bauzas> navidpustchi: i thought this was a user use case, not an operator one
16:43 <navidpustchi> may I ask what part you are not convinced about?
16:43 <navidpustchi> it is a user use case,
16:43 <bauzas> navidpustchi: if you wanna do maintenance on a host, just live migrate the VMs, they will stick to the same AZ
16:43 <bauzas> (provided the instances were created with an AZ argument)
16:43 <gibi> I think this AZ level (anti-)affinity question boils down to the fact that nova does not want to implement much more orchestration logic https://docs.openstack.org/nova/rocky/contributor/project-scope.html#no-more-orchestration
16:43 *** navidpustchi has quit IRC
16:44 <bauzas> hah, I was about to explain why I wasn't convinced
16:44 <bauzas> but I'll continue and the logs will show
16:44 <gibi> nova provides the interface to list AZs and the way to specify an AZ for an instance, so an external entity can do the orchestration logic
16:44 *** navidpustchi has joined #openstack-meeting-3
16:44 <bauzas> ^ this
16:45 <gibi> this does not necessarily need to be the end user
16:45 <gibi> we have heat, for example, for orchestration
16:45 <bauzas> whatever the number of AZs a cloud has, a user can be sure that they can boot instances on different hosts by specifying different AZs
16:45 <bauzas> because they just have to pick different AZs from the list
16:46 <bauzas> even if the cloud is running 100000 instances and has 1000 AZs
16:46 <bauzas> pick the first three AZs and spin instances against each of them
16:46 <bauzas> and then you'll get AZ anti-affinity
16:46 <bauzas> but,
16:46 <bauzas> there is one miss I reckon
16:47 <bauzas> you can't do it at once in a single API call
16:47 <bauzas> that being said, and people can testify, I'm really not a super-fan of multi-create
16:48 <bauzas> and I'm not really happy to add more complexity to multi-create as I'd personally like to let it die
16:48 <gibi> yeah, with multi-create you can only specify one AZ for all the instances in the request
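
To illustrate gibi's point: the multi-create form of the server create API takes min_count/max_count, but only one availability_zone for the whole request, so there is no single call that spreads the instances across AZs. Placeholders and pass-through assumptions as in the earlier sketches.

    # One request, three instances -- but they all share the same AZ.
    import openstack

    conn = openstack.connect(cloud='mycloud')

    conn.compute.create_server(
        name='vm',
        image_id='IMAGE_ID',
        flavor_id='FLAVOR_ID',
        networks=[{'uuid': 'NETWORK_ID'}],
        min_count=3,              # multi-create: three instances from one call
        max_count=3,
        availability_zone='az1',  # a single AZ applies to every instance in the request
    )
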
16:48 <bauzas> (pardon my French (c) )
16:49 <bauzas> navidpustchi: do you think we answered your question? we can continue off-meeting
16:49 <navidpustchi> makes sense, though if you want to create 500 VMs, will it work?
16:49 <gibi> the external orchestrator does not need to be a human ^^
16:49 <bauzas> navidpustchi: if you create 500 VMs sequentially, yes
16:49 <navidpustchi> ok
16:50 <navidpustchi> we can continue offline.
16:50 <gibi> O
16:50 <gibi> is there anything else for today's meeting?
16:50 <bauzas> navidpustchi: if you want 500 VMs at once to be mutually exclusive, use a server group or write a feature
16:50 <_erlon_> ok, we will discuss the alternatives you guys suggested and if there is something that can't be accomplished we will bring it back
16:50 <gibi> _erlon_, navidpustchi: thanks
16:51 <bauzas> thanks indeed
16:51 <navidpustchi> thanks
16:51 <_erlon_> bauzas++ gibi++
16:51 <_erlon_> thanks guys
16:51 <gibi> if there is nothing else for today, then thanks for joining
16:52 <gibi> o/
16:52 <gibi> #endmeeting
16:52 <bauzas> bye
16:52 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
16:52 <openstack> Meeting ended Thu Oct  8 16:52:25 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
16:52 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.html
16:52 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.txt
16:52 <openstack> Log:            http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-10-08-16.00.log.html
16:52 <bauzas> gibi: thanks for running it
16:53 <elod> thanks o/
16:57 *** psachin has quit IRC
17:08 *** mlavalle has quit IRC
17:09 *** mlavalle has joined #openstack-meeting-3
17:23 *** penick has joined #openstack-meeting-3
17:28 *** navidpustchi has quit IRC
17:49 *** penick has quit IRC
19:31 *** priteau has quit IRC
19:50 *** ralonsoh has quit IRC
20:26 *** slaweq has quit IRC
22:22 *** _erlon_ has quit IRC
22:54 *** mlavalle has quit IRC
22:59 *** tosky has quit IRC
23:32 *** macz_ has quit IRC
