Tuesday, 2018-04-10

03:49 *** esberglu has quit IRC
04:32 *** chhagarw has joined #openstack-powervm
06:32 *** AlexeyAbashkin has joined #openstack-powervm
06:49 *** AlexeyAbashkin has quit IRC
06:50 *** AlexeyAbashkin has joined #openstack-powervm
06:54 *** AlexeyAbashkin has quit IRC
07:02 *** AlexeyAbashkin has joined #openstack-powervm
07:53 *** AlexeyAbashkin has quit IRC
07:56 *** AlexeyAbashkin has joined #openstack-powervm
12:04 *** openstackgerrit has quit IRC
12:17 *** edmondsw has joined #openstack-powervm
12:28 *** edmondsw has quit IRC
12:51 *** edmondsw has joined #openstack-powervm
12:55 *** apearson has joined #openstack-powervm
13:26 *** esberglu has joined #openstack-powervm
13:46 <esberglu> efried: edmondsw: Can we talk about https://review.openstack.org/#/c/554688/ quick?
13:47 <esberglu> efried: I think that powervm:proc_units is the extra spec used. I don't think we are missing the proc_units_factor extra spec
13:48 <efried> I looked at this last night.  powervm:proc_units is proc_units.  The proc_units_factor goes into the init of the standardizer, and is not configurable via extra specs.
13:49 <efried> IOW we clearly didn't want people to be able to adjust proc_units_factor on a per-instance basis.  We wanted that to be one-and-done in the config.  And the way people get more processing power is by specifying more proc_units in their flavor.
13:50 <efried> thorst may remember why.  Or Kyle.
13:50 <edmondsw> ah, that makes sense
13:51 <edmondsw> allowing proc_units_factor to change on a per-instance basis wouldn't make sense because then it would be meaningless to say how many proc units a VM has...
13:51 <efried> That's one way to look at it, yeah.
13:51 <edmondsw> (without also knowing the proc_units_factor for the VM)
13:52 <edmondsw> if it's consistent across all VMs, then you can compare apples to apples looking at num proc_units per VM
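
For context, the split discussed above looks roughly like this: proc_units_factor is set once in the host's config, and an individual VM asks for more processing power via its flavor. A minimal illustration (the flavor name is invented; the factor value shown is just the documented default):

    # nova.conf on the compute host: set once, deliberately not per-instance
    [powervm]
    proc_units_factor = 0.1

    # Flavor extra spec: how a VM requests more processing units
    openstack flavor set pvm.medium --property powervm:proc_units=2.0
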
13:53 <esberglu> So mriedem is mistaken in his comments on PS9?
13:53 <edmondsw> esberglu I'll help you reword the commit message and help/comments after our meeting
13:54 <edmondsw> trying to finish a call atm
13:54 * efried looks at the comments...
13:57 * efried responds...
14:01 <efried> esberglu: Done.  edmondsw: I stole your apples.
14:01 <edmondsw> efried thanks
14:01 <edmondsw> #startmeeting PowerVM Driver Meeting
14:01 <openstack> Meeting started Tue Apr 10 14:01:30 2018 UTC and is due to finish in 60 minutes.  The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 *** openstack changes topic to " (Meeting topic: PowerVM Driver Meeting)"
14:01 <openstack> The meeting name has been set to 'powervm_driver_meeting'
14:01 <edmondsw> #link agenda: https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:01 <edmondsw> #topic In-Tree Driver
14:01 *** openstack changes topic to "In-Tree Driver (Meeting topic: PowerVM Driver Meeting)"
14:02 <edmondsw> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:02 <edmondsw> esberglu update on IT status?
14:03 <esberglu> edmondsw: Everything before localdisk is ready for core review
14:03 <esberglu> efried: I've responded to all your comments on localdisk except
14:03 <esberglu> https://review.openstack.org/#/c/549300/18/nova/virt/powervm/disk/localdisk.py@122
14:03 <esberglu> Wasn't sure exactly what you meant there
14:04 <edmondsw> hotplug merged, so I've removed that from the todo etherpad
14:04 <esberglu> jichenjc left a few more comments that I haven't hit yet
14:05 <efried> shall we talk about that comment now?
14:05 <edmondsw> go ahead
14:06 <efried> We're building a ftsk, which is supposed to have a list of VIOSes in it as getters rather than wrappers, which is supposed to allow us to defer the retrieval of the VIOS(es) until we want to do the actual work, which is supposed to minimize the window for conflicts.
14:06 <efried> But we're doing things here that are eliminating those benefits.
14:08 <efried> First off, there's only one VIOS we care about, and we already know which one it is (self._vios_uuid or whatever).  So using build_active_vio_feed_task - which goes out and tries to figure out which of all the VIOSes are "active" (RMC up) and stuffs all of those into the ftsk - will only *hopefully* include that VIOS, and may very well include the other(s) that we don't care about.
14:08 <edmondsw> agree that we only care about one VIOS
14:08 <efried> Second, L137 accesses the .wrapper @property, which prefetches the wrappers in the ftsk, so we're not getting the benefit of deferring that fetch.
14:09 <edmondsw> __init__ only sets self._vios_uuid, it does not cache the vios_w, so we do need a way to get vios_w
14:09 <edmondsw> and we do need to get it to make sure we have the latest info there, right?
14:09 <efried> The main benefit I forgot to mention is running the subtasks in parallel across the VIOSes.  Which is n/a here since there is (should be) only one VIOS we care about.
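
The deferral efried describes comes from the FeedTask holding the VIOS feed as a getter, so nothing is fetched from REST until a wrapper is actually needed. A rough Python sketch of the pattern under discussion (adapter and vios_uuid are assumed to be in scope; this is illustrative, not the patch itself):

    from pypowervm import const as pvm_const
    from pypowervm.tasks import partition as tsk_par

    # Builds a FeedTask over all RMC-active VIOSes.  The feed is held as
    # a getter, so no REST GET happens yet.
    stg_ftsk = tsk_par.build_active_vio_feed_task(
        adapter, xag=[pvm_const.XAG.VIO_SMAP])

    # The anti-pattern from the review comment: touching .wrapper forces
    # the fetch up front, defeating the deferral.  And the "active" feed
    # may include VIOSes we don't care about.
    vios_w = stg_ftsk.wrapper_tasks[vios_uuid].wrapper
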
14:10 <esberglu> efried: So what you're proposing is that instead of adding the rm_func to the stg_ftsk we would just call tsk_map.remove_maps
14:10 <esberglu> Directly after find_maps?
14:10 <edmondsw> so do we need a different stg_ftsk that only retrieves one vios, or do we need to get the vios without a feedtask?
14:10 <efried> I'm saying using a ftsk at all in this method is overkill.
14:10 <efried> unnecessary.
14:11 <efried> esberglu: Let me look; it's possible remove_maps already returns the maps that get removed.
14:11 <edmondsw> are feedtasks only relevant when you're dealing with lists, and not singletons?
14:11 <edmondsw> feed = list?
14:12 <efried> ...which means we could have probably extracted those results out of the ftsk after execute.
14:12 *** chhavi__ has joined #openstack-powervm
14:13 <efried> edmondsw: No, that's not the only advantage.  Doing multiple operations, reverting stuff, etc. (FeedTask is a derivative of TaskFlow)
14:13 <efried> ...yup, remove_maps already returns the list of maps removed.
14:13 <efried> I haven't looked, but I suspect the current code is an artifact of slot manager garbage from OOT, and we're going to have to re-complexify it later when we put that shit back in.
14:14 <efried> but for now, we can make this way simpler.
14:14 <esberglu> efried: So rip out the stg_ftsk & rm_func stuff, rip out find_maps
14:14 <esberglu> And just have
14:14 <esberglu> vios_w = stg_ftsk.wrapper_tasks[self._vios_uuid].wrapper
14:14 <edmondsw> you just said rip out stg_ftsk, so that won't work
14:15 <esberglu> Oh right
14:15 <efried> No stg_ftsk.  Retrieve the VIOS wrapper afresh based on self.vios_uuid
14:15 *** chhagarw has quit IRC
14:15 <edmondsw> yep
14:15 <efried> ...using the SCSI xag
14:15 <efried> VIO_SMAP
14:15 <edmondsw> and then tsk_map.remove_maps on it
14:15 <edmondsw> and done
14:16 <esberglu> Okay got it
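
Putting the agreed simplification together, the method reduces to a direct GET of the one VIOS plus remove_maps, which already returns the removed mappings. A sketch under those assumptions (names mirror the review discussion above, not the final patch):

    from pypowervm import const as pvm_const
    from pypowervm.tasks import scsi_mapper as tsk_map
    from pypowervm.wrappers import storage as pvm_stg
    from pypowervm.wrappers import virtual_io_server as pvm_vios

    from nova.virt.powervm import vm

    def detach_disk(self, instance):
        lpar_uuid = vm.get_pvm_uuid(instance)
        # Fetch the one VIOS we care about, with just the SCSI mapping xag.
        vios_w = pvm_vios.VIOS.get(
            self._adapter, uuid=self._vios_uuid,
            xag=[pvm_const.XAG.VIO_SMAP])
        # No ftsk, no find_maps: remove_maps returns what it removed.
        match_func = tsk_map.gen_match_func(pvm_stg.VDisk)
        removed = tsk_map.remove_maps(vios_w, lpar_uuid,
                                      match_func=match_func)
        # Push the modified mappings back to REST.
        vios_w.update()
        return [rmap.backing_storage for rmap in removed]
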
14:16 <edmondsw> esberglu for vscsi, we already did this right?  "Add a follow on to use pypowervm 1.1.12 for wwpns"
14:16 <edmondsw> so I'm removing that from TODO etherpad
14:17 <esberglu> edmondsw: Yeah
14:17 <edmondsw> has anything merged other than netw hotplug
14:17 *** AndyWojo has quit IRC
14:17 <edmondsw> or have comments we need to address, other than localdisk?
14:17 *** AndyWojo has joined #openstack-powervm
14:17 <esberglu> edmondsw: Nope, nothing has been reviewed, still a few things ahead of us in runways
14:17 <edmondsw> yep
14:18 <edmondsw> any updates on migrate/resize?
14:18 <esberglu> Gonna finish up localdisk today and get it ready for review, then jump back into that
14:18 <edmondsw> cool
14:18 <esberglu> Ready for core review
14:18 <edmondsw> ok, anything else IT?
14:19 <esberglu> Are there any system requirements for SDE installs? My install failed
14:19 <esberglu> And I need that to test localdisk snapshot
14:19 <efried> (esberglu: Just noticed a gaffe in the commit message)
14:20 <edmondsw> esberglu thinking... but check with seroyer
14:21 <edmondsw> I think there are some local disk size requirements?
14:22 <esberglu> edmondsw: I'll ask and try again, might also see if anyone can loan me a system for a couple days
14:22 <edmondsw> #topic Out-of-Tree Driver
14:22 *** openstack changes topic to "Out-of-Tree Driver (Meeting topic: PowerVM Driver Meeting)"
14:22 <edmondsw> #link https://etherpad.openstack.org/p/powervm-oot-todos
14:23 <edmondsw> I've got a meeting set up with the PowerVC folks to talk about the volume refactoring
14:23 <edmondsw> and get everyone on the same page there
14:23 <edmondsw> I'd talked to gfm about this, and he was onboard, but some others on his team are freaking out
14:23 <edmondsw> so need to calm them down
14:24 <edmondsw> I've been working with chhavi__ quite a bit on iscsi
14:24 <edmondsw> I think we're making progress there
14:24 <edmondsw> I need to ping burgerk about https://review.openstack.org/#/c/428433/ again
14:25 <edmondsw> #action edmondsw to ping burgerk about config drive UUID
14:25 <edmondsw> I also need to start writing code for MSP support
14:26 <edmondsw> efried I think the pypowervm support is already there for that, though obviously untested
14:27 <edmondsw> efried I will probably be proposing a change to at least the docstring, though, since it says name where it actually needs IPs
14:27 <edmondsw> and the arg is badly named as well... I'd love to rename it, but that would break backward compat
14:27 <efried> "MSP support"?
14:27 <efried> What arg?
14:27 <efried> What docstring?
14:27 <edmondsw> do you think that's ok since it didn't work before?
14:27 <efried> What's going on here??
14:27 <edmondsw> one sec
14:28 <edmondsw> https://github.com/powervm/pypowervm/blob/master/pypowervm/tasks/migration.py#L52
14:28 <edmondsw> dest_msp_name and src_msp_name should actually be lists of IP addresses
14:28 <edmondsw> not names
14:28 <edmondsw> MSP = mover service partition
14:29 <edmondsw> specifying IPs allows you to dictate which interfaces are used for LPM
14:29 <edmondsw> new for NovaLink, but HMC has had this... presumably the pypowervm code was copied from HMC support
14:30 <edmondsw> efried make more sense now?
14:30 <efried> I thought the "lists of" thing was something new coming down the pipe.
14:30 <efried> And... you're saying those args don't work at all today?
14:31 <edmondsw> efried NovaLink didn't support those in REST until the changes Nicolas has just now been working on
14:31 <edmondsw> so they couldn't have worked (for NovaLink) before
14:32 <efried> OIC, we just copied that method from k2operator or whatever?
14:32 <edmondsw> I assume, yes
14:32 <efried> was REST just ignoring any values passed down there?
14:33 <efried> cause if so, we can't remove/rename them.  If it was erroring, then maybe we can get away with it.
14:33 <edmondsw> efried right, I have to check with Nicolas on that
14:33 <edmondsw> until I know otherwise, I'm assuming we have to leave them and just cleanup the docstring
14:33 <efried> Well...
14:34 <efried> If they can now be lists, we should probably accept (python) lists, and convert 'em to comma-delimited (or whatever) within the method.
14:34 <edmondsw> yes
14:34 <edmondsw> I don't mean there would only be a docstring change... just that I wouldn't rename the args unless they were erroring before
14:35 <efried> Dig.
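
The shape efried suggests keeps the existing argument names for compatibility but accepts real Python lists, flattening them inside the method. A hedged sketch (the comma delimiter is an assumption, per the "or whatever" above):

    def _fmt_msps(val):
        # Accept a single IP string, or a list of them, and produce the
        # delimited form the REST job expects (delimiter assumed here).
        if isinstance(val, (list, tuple)):
            return ','.join(val)
        return val

    # Inside migrate_lpar, before building the job parameters:
    dest_msp_name = _fmt_msps(dest_msp_name)
    src_msp_name = _fmt_msps(src_msp_name)
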
14:35 <edmondsw> anything else to discuss OOT?
14:36 <esberglu> nope
14:36 <edmondsw> #topic Device Passthrough
14:36 *** openstack changes topic to "Device Passthrough (Meeting topic: PowerVM Driver Meeting)"
14:36 <edmondsw> efried you're up
14:36 <efried> I started working on granular.  Some pretty intricate algorithms happening there.
14:36 <efried> Got grudging agreement from jaypipes that the spec as written is the way we should go (rather than switching to separate-by-default)
14:37 <edmondsw> cool
14:37 <efried> he still has to convince Dan, but I think since the path of least resistance is what we've got, it'll just fall off.
14:37 <efried> In case you're interested in looking at the code: https://review.openstack.org/#/c/517757/
14:38 <efried> I need to fix tests, but the general idea is there.
14:38 <edmondsw> I'm interested, but won't have time
14:38 <efried> At this point I've given up waiting for Jay to finish nrp-in-alloc-cands before I do that.
14:38 <edmondsw> :)
14:38 <efried> So whichever one of us wins, the other has to figure out how to integrate granular+NRP.
14:39 <efried> There's a new #openstack-placement channel you may wish to join.
14:39 <edmondsw> efried ah, tx for the heads up
14:40 <efried> upt stuff is mostly merged.  I think my runway expires tomorrow.  But the stuff that's left is pretty nonessential - if it doesn't get in, it's not the end of the world.
14:40 <edmondsw> so the last important one did merge?
14:40 <efried> I think so.  Lemme double check.
14:41 <efried> yeah.  The pending ones are nice-to-have, but we can get by without 'em if we need.
14:42 <efried> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/update-provider-tree
14:42 <edmondsw> efried so are we ready to start making changes in our driver?
14:42 <efried> Yes, if someone else wants to do it.  I'm going to be heads down on granular and reviewing other placement stuff for a while yet.
14:42 <efried> also note that we can't get any mileage out of actual trees until Jay's thing is done.
14:43 <efried> We can do single-provider stuff with traits, but we won't be able to do anything with child providers for GPUs etc.
14:43 <edmondsw> so that'll probably wait a bit longer then, because I have too many other things on my plate right now as well
14:43 *** tjakobs has joined #openstack-powervm
14:44 <efried> We could hack that together with custom resource classes, one per GPU, inventory of 1.  But that would be an interim solution.
14:44 <efried> If we get to the end of Rocky and the NRP work still isn't finished, we may have to do that.
14:44 <edmondsw> k
14:44 <edmondsw> anything else?
14:45 <efried> Well, on your side, have we gotten any further figuring out how we want to map/represent/filter supported devices?
14:45 <edmondsw> I think it's pretty much what we'd talked about before
14:45 <edmondsw> provider per adapter
14:46 <edmondsw> so we can allow selection by unique id if need be
14:47 <edmondsw> for PCI, representation will use PCI vendor/device IDs
14:47 <efried> hm, then I wonder if we actually want to model it with custom resource classes.
14:47 <efried> Nah, the cores will freak about that.
14:47 <edmondsw> the custom bit will be the unique id
14:48 <efried> Right, but we need to use traits for that.
14:48 <edmondsw> so we could use a common provider and have custom traits?
14:48 <edmondsw> if so, great
14:48 <efried> no, if it's a common provider, it would need to be distinct RCs.
14:48 <efried> Separate providers, traits.
14:49 <efried> otherwise there's no way to know which trait belongs to which device.
14:49 <edmondsw> I did not follow that
14:49 <efried> btw, RP names are freeform - no char restrictions - so we can do whatever tf we want with them.
14:50 <efried> meaning that we can use the DRC name (or whatever) for the RP name, and not have to do any weird mapping.
14:50 <efried> Sorry, okay, lemme back up.
14:51 <efried> Traits are on providers, not resources.
14:51 <esberglu> efried: edmondsw: Sorry to butt in, but I've got to present on CI in a few minutes
14:51 <esberglu> Multinode CI status: Have working multinode stack within staging, updated prep_devstack to handle control and compute
14:51 <esberglu> Still seeing a few errors there
14:52 <edmondsw> efried yeah, let's give esberglu a few min on CI and we can continue later
14:52 <edmondsw> #topic PowerVM CI
14:52 *** openstack changes topic to "PowerVM CI (Meeting topic: PowerVM Driver Meeting)"
14:52 <edmondsw> #link https://etherpad.openstack.org/p/powervm_ci_todos
14:52 <esberglu> Next up is getting zuul/nodepool to work with multinode
14:52 <esberglu> And figuring out the tempest failures
14:52 <esberglu> That's pretty much all I have
14:53 <edmondsw> esberglu tempest failures?
14:53 <edmondsw> is that specific to multinode, or in general?
14:53 <esberglu> Seeing cold mig tempest failures (not all, just a few tests)
14:53 <esberglu> On OOT
14:54 <edmondsw> ok
14:54 <esberglu> Gotta run
14:54 <edmondsw> I need to run as well
14:54 <edmondsw> #topic Open Discussion
14:54 *** openstack changes topic to "Open Discussion (Meeting topic: PowerVM Driver Meeting)"
14:55 <edmondsw> anything quick here?
14:55 <efried> edmondsw: If we want to have all of our devices in the same provider, and have them all with the same generic resource class (e.g. "GPU"), it doesn't help us to have all the traits that represent all the devices on that provider, because when you pick one off, you don't know which trait goes with which inventory item.  And we don't want to be editing traits on the fly to indicate that kind of thing.  So if we want all our devices in one provider and individually targetable, we'd need custom resource classes (e.g. "GPU_<drc_index>") and we kinda lose the ability to request different types (e.g. based on vendor/product IDs).
14:56 <efried> So what we want is one RP per device, with the provider name equating to a unique identifier we can correlate back to the real device, and traits on the RP marking the type (vendor/product IDs, capabilities, whatever).
14:56 <efried> each RP has inventory 1 of the generic resource class (e.g. "GPU")
14:56 <efried> If that's still murky, hmu later and we can talk it through s'more.
14:56 <edmondsw> so we can use the common/generic RC
14:57 <edmondsw> but need custom RP
14:57 <efried> We were going to want to do that to some extent anyway.
14:57 <edmondsw> that's what I was hoping
14:57 <efried> Theoretically we could group like devices
14:57 <efried> but then we lose the ability to target a *specific* device.
14:57 <efried> which I gather is something we still want.
14:57 <edmondsw> I think so
14:57 <efried> even though it's not very cloudy.
14:58 <edmondsw> well... there are different definitions of cloud
14:58 <edmondsw> I think you're falling into the nova definition trap :)
14:58 <edmondsw> s/nova/certain nova cores/
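
The one-RP-per-device modeling efried lays out would land in the driver's update_provider_tree. A rough, illustrative sketch (the device-discovery helper, resource class, and trait names are all invented here; only the shape follows the discussion):

    def update_provider_tree(self, provider_tree, nodename, allocations=None):
        for dev in self._discover_gpus():  # hypothetical helper
            # RP names are freeform, so use something that correlates
            # back to the real device, e.g. its DRC name.
            rp_name = '%s_GPU_%s' % (nodename, dev.drc_name)
            if not provider_tree.exists(rp_name):
                provider_tree.new_child(rp_name, nodename)
            # Each device provider holds inventory 1 of a generic class.
            provider_tree.update_inventory(
                rp_name, {'CUSTOM_GPU': {'total': 1, 'max_unit': 1}})
            # Traits mark the type, so a flavor can request by
            # vendor/product rather than by one specific device.
            provider_tree.update_traits(
                rp_name, ['CUSTOM_VENDOR_%s' % dev.vendor_id,
                          'CUSTOM_PRODUCT_%s' % dev.product_id])
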
14:58 <edmondsw> #endmeeting
14:58 *** openstack changes topic to "This channel is for PowerVM-related development and discussion. For general OpenStack support, please use #openstack."
14:58 <openstack> Meeting ended Tue Apr 10 14:58:55 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:58 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-04-10-14.01.html
14:58 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-04-10-14.01.txt
14:59 <openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-04-10-14.01.log.html
14:59 <efried> that is a very fair point.
15:29 *** apearson has quit IRC
15:33 *** apearson has joined #openstack-powervm
15:33 *** AlexeyAbashkin has quit IRC
15:45 *** apearson has quit IRC
15:47 *** AlexeyAbashkin has joined #openstack-powervm
16:00 *** edmondsw has quit IRC
16:01 *** edmondsw has joined #openstack-powervm
16:01 <esberglu> edmondsw: efried: SDE install worked this time, should be able to finish localdisk testing today
16:02 <edmondsw> awesome
16:03 *** apearson has joined #openstack-powervm
16:49 *** apearson has quit IRC
16:59 *** AlexeyAbashkin has quit IRC
17:01 *** openstackgerrit has joined #openstack-powervm
17:01 <openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: WIP: Having iSCSI Initiator locks per VIOS  https://review.openstack.org/557800
17:02 <chhavi__> edmondsw: updated change set with test.
17:02 <edmondsw> chhavi__ ack, thanks... will look after meetings
17:02 <chhavi__> we need additional pypowervm changes to get hdisk from udid
17:07 *** AlexeyAbashkin has joined #openstack-powervm
17:12 *** AlexeyAbashkin has quit IRC
17:14 *** apearson has joined #openstack-powervm
17:34 *** openstackgerrit has quit IRC
18:36 *** AlexeyAbashkin has joined #openstack-powervm
18:46 <esberglu> edmondsw: efried: Just came across something testing localdisk. Saw a SCSI mapping left around after deleting an instance
18:47 <esberglu> Because we are missing https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L625-L641
18:48 <esberglu> Is that needed prior to localdisk? I didn't see any related issues testing prior to this
18:49 <esberglu> Also I don't think the evacuate comment there is accurate, this was just a normal spawn/delete
18:50 <edmondsw> esberglu well it does only mention evacuate as an example
18:50 <edmondsw> doesn't say that's the only case
18:51 <esberglu> edmondsw: Yeah, but I'm inclined to rip that part out if it's seen in a more general case
18:52 <edmondsw> just the (e.g....) ?
18:52 <esberglu> Yeah
18:53 <edmondsw> so you're going to need to update the vscsi commit with this, right?
18:53 <esberglu> edmondsw: Yeah that's what I was thinking, just wanted some more context since I wasn't seeing any issues without it
18:54 <edmondsw> I'm a little surprised you would have seen this because what were you doing that would have removed a volume from the VIOS?
18:55 <efried> snapshot?
18:55 <esberglu> No just testing a spawn/delete with localdisk, no volume stuff at all
18:56 <esberglu> After spawning would see an entry in scsi list like
18:56 <esberglu> |       LPAR       | LPAR Slot |  VIOS | VIOS Slot |      Storage      |
18:56 <esberglu> |    neo39-inst    |     4     | vios1 |     4     |  i_6e8ab136_65fc  |
18:56 <esberglu> After delete that entry was
18:57 <esberglu> |      <None>      |     4     |       |           |                   |
18:58 <esberglu> Added that code block above, and the scsi map gets removed during destroy
18:59 <esberglu> Sorry, that second entry after delete was actually
18:59 <esberglu> |      <None>      |     4     | vios1 |     4     |                   |
19:00 <esberglu> I don't think this is talking about cinder volumes. It's talking about the lvs created during localdisk spawn?
19:07 <esberglu> After spawning, there is an lv on the localdisk volume group named i_6e8ab136_65fc
19:07 <esberglu> The deletion of that works fine, but the SCSI mapping doesn't get cleaned up
19:08 <esberglu> efried: edmondsw: Does that make sense?
19:09 <esberglu> As in, this has nothing to do with vSCSI
19:09 <esberglu> cinder vSCSI
19:38 <esberglu> How does localdisk snapshot work in SDE mode? You can't create vgs when in SDE right?
19:38 <esberglu> *how does localdisk work at all in SDE mode
19:42 <edmondsw> ok, so no changes to the vscsi commit... that's good, since respinning that would bump a bunch of commits based on it
19:42 <edmondsw> tjakobs can you create volume groups in SDE mode?
19:43 <tjakobs> you can create volume groups manually, not sure if there is api support for it
19:44 <tjakobs> from personal experience, I've manually created a "lparHosting" volume group, then set these in local.conf: `DISK_DRIVER=localdisk` and `VOL_GRP_NAME=lparHosting`
19:45 <tjakobs> edmondsw esberglu ^
19:45 <edmondsw> tx
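
For reference, tjakobs' setup in local.conf form (the variable names and volume group name are exactly as he states above; the [[local|localrc]] section is standard devstack):

    [[local|localrc]]
    # localdisk driver backed by a manually created volume group
    DISK_DRIVER=localdisk
    VOL_GRP_NAME=lparHosting
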
19:46 <edmondsw> esberglu I'm still confused as to why you saw the mapping not get cleaned up... I wonder if the code you pointed to was hiding a bug here
19:47 <edmondsw> i.e., it should have been cleaned up somewhere else, and wasn't, but we didn't notice because https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L625-L641 cleaned it up
19:50 <esberglu> edmondsw: I can rip that bit back out and recreate
19:51 *** openstackgerrit has joined #openstack-powervm
19:51 <openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: WIP: Having iSCSI Initiator locks per VIOS  https://review.openstack.org/557800
19:52 <openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: WIP: Having iSCSI Initiator locks per VIOS  https://review.openstack.org/557800
20:02 <edmondsw> esberglu looking at the code, it does delete the disk right before that, and not the mapping, so I guess it's pretty obvious that block is needed to delete the mapping
20:03 <esberglu> edmondsw: This is probably an issue with how I changed the code based on the stg_ftsk discussion this morning
20:03 <esberglu> Now that I'm looking at it again
20:10 <edmondsw> oh?
20:10 <esberglu> edmondsw: Yeah, after reverting back to the latest version of localdisk on gerrit, not seeing it
20:10 <edmondsw> interesting
20:11 <esberglu> edmondsw: Posted my changes there in slack for nicer formatting
20:11 <edmondsw> oh, the detach disk is supposed to remove mappings... good
20:12 <edmondsw> so we just need to fix that
20:16 *** AlexeyAbashkin has quit IRC
20:17 *** AlexeyAbashkin has joined #openstack-powervm
20:24 <chhavi__> please review https://review.openstack.org/#/c/557800/
20:25 *** AlexeyAbashkin has quit IRC
20:29 *** chhavi__ has quit IRC
21:15 *** apearson has quit IRC
21:38 *** tjakobs has quit IRC
21:42 *** edmondsw has quit IRC
21:42 *** edmondsw has joined #openstack-powervm
21:43 *** edmondsw has quit IRC
21:46 *** tjakobs has joined #openstack-powervm
21:52 *** esberglu has quit IRC
22:09 *** edmondsw has joined #openstack-powervm
22:10 *** edmondsw has quit IRC
