Thursday, 2018-05-17

<openstackgerrit> Matthew Edmonds proposed openstack/nova-powervm master: add lower-constraints job  https://review.openstack.org/555964  [01:16]
*** edmondsw has quit IRC  [01:19]
*** tjakobs has joined #openstack-powervm  [02:11]
*** tjakobs has quit IRC  [02:17]
*** edmondsw has joined #openstack-powervm  [02:59]
*** edmondsw has quit IRC  [03:04]
*** prashkre has joined #openstack-powervm  [04:44]
*** edmondsw has joined #openstack-powervm  [04:48]
*** prashkre has quit IRC  [04:53]
*** edmondsw has quit IRC  [04:53]
*** chhagarw has joined #openstack-powervm  [05:24]
*** prashkre has joined #openstack-powervm  [06:12]
<openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: iSCSI Live Migration Support  https://review.openstack.org/567575  [07:01]
*** prashkre has quit IRC  [07:12]
*** prashkre has joined #openstack-powervm  [07:13]
*** AlexeyAbashkin has joined #openstack-powervm  [07:47]
*** prashkre has quit IRC  [07:57]
*** edmondsw has joined #openstack-powervm  [08:24]
*** edmondsw has quit IRC  [08:29]
*** prashkre has joined #openstack-powervm  [08:51]
*** AlexeyAbashkin has quit IRC  [09:53]
*** edmondsw has joined #openstack-powervm  [10:13]
*** edmondsw has quit IRC  [10:17]
*** edmondsw has joined #openstack-powervm  [10:52]
*** AlexeyAbashkin has joined #openstack-powervm  [10:54]
*** prashkre has quit IRC  [11:22]
*** prashkre has joined #openstack-powervm  [11:28]
<openstackgerrit> Chhavi Agarwal proposed openstack/nova-powervm master: iSCSI Live Migration Support  https://review.openstack.org/567575  [11:50]
<edmondsw> efried how do you feel about https://review.openstack.org/#/c/567575/8/nova_powervm/tests/virt/powervm/volume/test_iscsi.py@563 ?  [12:03]
*** prashkre_ has joined #openstack-powervm  [12:04]
<edmondsw> I'd have preferred that mock was done via decorator  [12:04]
<edmondsw> and others like it  [12:04]
<edmondsw> but maybe it doesn't matter so much  [12:04]
<edmondsw> chhagarw has this been tested in a DevStack environment?  [12:05]
*** prashkre has quit IRC  [12:06]
*** prashkre_ has quit IRC  [12:09]
<chhagarw> no, it's not been tested on DevStack, I have to create one using 2 PowerVMs to try out migration  [12:28]
*** chhavi__ has joined #openstack-powervm  [12:40]
*** chhagarw has quit IRC  [12:44]
<efried> edmondsw: You can't use a decorator for a mock.patch.object on a runtime-created object like self.vol_drv  [12:50]
<efried> edmondsw: Could just mock.patch it tho.  [12:51]
<efried> edmondsw: It's not the same thing, but it's likely fine in this case.  [12:51]
<edmondsw> efried right, I'm not saying the mock would be identical  [12:52]
<edmondsw> but just mock the whole class, doesn't need to be on a runtime-created object  [12:52]
<edmondsw> by mock the whole class, I mean mock.patch  [12:53]
<edmondsw> on the method  [12:53]
<efried> edmondsw: Okay, then yes, I generally prefer seeing mocks as decorators if they're just used for the one test, or in setUp if they're used throughout the suite, rather than with context managers, if for no better reason than losing the extra indent.  [12:53]
<efried> edmondsw: But it's a really soft preference.  Do you have stronger reasoning?  [12:53]
<edmondsw> yep  [12:53]
<edmondsw> no, pretty much what you said  [12:54]
<edmondsw> readability, indents, consistency  [12:54]
<efried> now when there's a reason to patch something on an actual object, this is definitely the way to go.  [12:54]
<edmondsw> agreed  [12:54]
<efried> or to patch something for only part of the test case.  [12:54]
<edmondsw> sure  [12:54]
<edmondsw> I'd just prefer we only used it in those cases... I don't like doing things when they're not really needed... confusing  [12:55]
<edmondsw> but I won't hold on that  [12:55]
<efried> one could argue that patching the whole class rather than the specific instance is doing things that aren't needed :P  [12:57]
<edmondsw> :P  [12:59]
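
(For anyone reading along: a minimal, self-contained sketch of the two patching styles being debated here — a decorator-based mock.patch.object against the class, versus a context-manager mock.patch.object against the runtime-created instance. The VolumeAdapter class and its methods below are illustrative stand-ins, not the actual nova-powervm code from the review.)

    import unittest
    from unittest import mock


    class VolumeAdapter(object):
        """Stand-in for the volume driver class discussed above (hypothetical)."""

        def discover_volume(self):
            raise RuntimeError('would talk to real hardware')

        def connect_volume(self):
            return self.discover_volume()


    class VolumeAdapterTest(unittest.TestCase):
        def setUp(self):
            super(VolumeAdapterTest, self).setUp()
            # Runtime-created object, analogous to self.vol_drv in the review.
            self.vol_drv = VolumeAdapter()

        # Style 1: decorator-based patch against the *class*.  This works even
        # though self.vol_drv is built in setUp, because the patch targets the
        # class, not the instance.
        @mock.patch.object(VolumeAdapter, 'discover_volume', return_value='fake-dev')
        def test_connect_volume_decorator(self, mock_discover):
            self.assertEqual('fake-dev', self.vol_drv.connect_volume())
            mock_discover.assert_called_once_with()

        # Style 2: context-manager patch against the *instance*.  This can't be
        # written as a decorator, since self.vol_drv doesn't exist until setUp runs.
        def test_connect_volume_context_manager(self):
            with mock.patch.object(self.vol_drv, 'discover_volume',
                                   return_value='fake-dev') as mock_discover:
                self.assertEqual('fake-dev', self.vol_drv.connect_volume())
                mock_discover.assert_called_once_with()


    if __name__ == '__main__':
        unittest.main()
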
<edmondsw> chhavi__ so I'm -1 on that until you tell me this has been well tested with DevStack... not PowerVC  [13:00]
<openstackgerrit> Matthew Edmonds proposed openstack/networking-powervm master: use master for lower constraints job  https://review.openstack.org/569104  [13:00]
<edmondsw> efried FYI ^ ... I think this is right, but have no way of testing it locally so we'll see what zuul thinks  [13:01]
<efried> ight  [13:01]
<efried> edmondsw: apparently not  [13:01]
<efried> forgot your colon again  [13:01]
<edmondsw> oh, same : issue again :(  [13:01]
<openstackgerrit> Matthew Edmonds proposed openstack/networking-powervm master: use master for lower constraints job  https://review.openstack.org/569104  [13:02]
<edmondsw> one of these days I'll learn  [13:02]
*** esberglu has joined #openstack-powervm  [13:32]
<efried> esberglu: Got some comments back from mriedem on localdisk, you on that?  [13:38]
<esberglu> efried: Not until this afternoon  [13:39]
<efried> okay.  He wanted me to wake you up.  [13:39]
<efried> (I'm sure he was joking.  It's not urgent.)  [13:39]
*** tjakobs has joined #openstack-powervm  [13:56]
<chhavi__> esberglu: for devstack AIO setup, after cloning devstack, do I need to manually download local.conf.aio-sea-localdisk and store it as local.conf before running stack.sh?  [14:05]
<esberglu> chhavi__: Yes. And that local.conf file will only work if you are using SEA networking and localdisk, is that what you want?  [14:06]
<chhavi__> yes I am using SEA  [14:07]
<chhavi__> want to create one for iSCSI,  [14:07]
<chhavi__> localdisk ?  [14:08]
<esberglu> chhagarw: I've never done anything with iSCSI, afraid I won't be much help there  [14:17]
<edmondsw> chhavi__ you have two options for the root disk... our localdisk driver or our SSP driver  [14:28]
<edmondsw> well, you could try boot-from-volume  [14:28]
<edmondsw> esberglu what you're doing for vSCSI may have some similarities. I sent chhavi__ a link to your WIP CI changes  [14:31]
<edmondsw> if you know of anything that's not there that she may need...  [14:32]
<esberglu> edmondsw: chhavi__: Yeah that is probably a good place to start. Take a look at the [[post-config|$CINDER_CONF]] section  [14:36]
<esberglu> I was given all of that config by thorst and it just worked for vSCSI, not sure what would be needed for iSCSI  [14:36]
<efried> edmondsw: Wanna throw some relevant -infra reviewers on https://review.openstack.org/#/c/569104/ ?  [14:47]
<efried> edmondsw: Given that the other ones still fail, it's not clear whether it's actually doing anything.  [14:47]
<edmondsw> efried yeah  [14:48]
<edmondsw> I wanted to do that after zuul passed  [14:48]
*** AlexeyAbashkin has quit IRC  [15:12]
*** AlexeyAbashkin has joined #openstack-powervm  [15:15]
<esberglu> edmondsw: Why can't you use rootvg for the volume group?  [16:05]
<efried> It's not that you can't.  [16:14]
<efried> It's that you shouldn't.  [16:14]
<efried> It allows deployers to fill up your rootvg.  [16:15]
<efried> which can make everything break.  [16:15]
<efried> esberglu: ^  [16:16]
<edmondsw> yeah, that  [16:18]
<esberglu> edmondsw: efried: New localdisk is up  [16:49]
*** AlexeyAbashkin has quit IRC  [17:04]
*** chhavi__ has quit IRC  [17:18]
<esberglu> edmondsw: efried: For vSCSI CI I have a big list of volume tests that are testing the volume API  [17:26]
<esberglu> Initially I was thinking we would run these to make sure that our cinder config is valid  [17:26]
<esberglu> But now I'm leaning the other way. They do a lot of volume create/deletes and don't touch our driver at all  [17:27]
<efried> can you pick out the ones that do touch the driver?  [17:27]
<esberglu> Without them the vSCSI CI will run 1 (or maybe up to 4, still testing out a few) scenario  [17:27]
<esberglu> tests  [17:27]
<esberglu> Those are the ones we actually care about  [17:28]
<esberglu> I think we just do the scenario tests  [17:28]
<esberglu> Not sure if either of you have an opinion  [17:28]
<efried> I'm in favor of skipping tests that don't touch the driver.  If that leaves us with only one (or four), so be it.  I feel like the nova PTBs will accept that explanation, and we can include comments in whatever config file you're going to have to touch to make this happen.  [17:29]
<edmondsw> esberglu agreed  [17:30]
<esberglu> sounds good. I'm planning on setting up the CI pipeline to run only on "powervm:volume-check" comments or something like that  [17:31]
<esberglu> We could also potentially set it up to run on any changes to powervm/volume  [17:32]
<esberglu> I'm hoping to have a working (although not optimized) vSCSI pipeline for OOT master by the end of the week  [17:33]
<esberglu> Then leave it to mujahid to port it to IT, stable branches, and clean it up a little bit  [17:34]
<esberglu> efried: edmondsw: https://review.openstack.org/#/c/568982/  [17:35]
<esberglu> ^ What's the best way to handle that?  [17:35]
<esberglu> I think that we should be able to check the RMC status in that code block, but that will never get merged so we would have to carry a patch long term  [17:36]
<esberglu> Otherwise I could add config option to add a delay before attaching volume  [17:36]
<esberglu> Which also would probably never merge tbh but has a better chance  [17:37]
<edmondsw> esberglu rather than a sleep, could you setup a ping loop?  [17:37]
<edmondsw> might have a better chance of merging  [17:37]
<efried> I doubt any of that will have a chance of merging, tbh.  [17:37]
<edmondsw> oh, this isn't network, it's RMC  [17:37]
<esberglu> edmondsw: Ping and active RMC aren't the same though. It will ping way before active RMC  [17:38]
<efried> because needing to have RMC active to attach a volume is a PowerVM thing.  [17:38]
<edmondsw> yep  [17:38]
<efried> I think you're going to have to carry the patch.  In which case polling for RMC status is probably the most efficient thing.  [17:38]
<esberglu> efried: Yeah, that's why I thought the delay *might have a chance (default to 0, doesn't affect anyone else)  [17:38]
<esberglu> But I doubt it as well  [17:38]
<efried> esberglu: Possible.  Worth a try, I suppose.  [17:38]
<esberglu> Checking the RMC status is so much cleaner though. And it shouldn't be a hard patch to maintain going forward  [17:39]
<esberglu> I say we do that  [17:39]
<efried> ++  [17:39]
<edmondsw> wfm  [17:39]
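
(Sketching what that carried patch might look like: a poll-for-RMC helper that the CI-only patch would call in the volume attach path instead of a fixed delay. The get_rmc_state callable, timeout, and interval below are placeholders — the real patch would use whatever pypowervm/nova-powervm call actually reports the partition's RMC state:)

    import time


    class RMCNotActive(Exception):
        """Raised when RMC does not come up before the timeout expires."""


    def wait_for_rmc_active(get_rmc_state, timeout=600, interval=10):
        """Poll until the partition's RMC connection reports 'active'.

        get_rmc_state: zero-argument callable returning the current RMC state
        string (a stand-in for the real lookup against the LPAR wrapper).
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_rmc_state() == 'active':
                return
            time.sleep(interval)
        raise RMCNotActive('RMC did not become active within %ss' % timeout)
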
<esberglu> Thoughts on triggering the job for all powervm/volume changes? I'm actually leaning against that  [17:40]
<edmondsw> I would not at this point  [17:40]
<esberglu> Something that we could potentially add later  [17:40]
<esberglu> The other thing that I haven't considered at all is how to do cleanup  [17:41]
<esberglu> Any thoughts on that?  [17:41]
<esberglu> The tests will try to delete the volumes even if they fail. But if the volume delete fails there's nothing  [17:42]
<efried> esberglu: [compute]ready_wait may already be that delay you were talking about.  [17:43]
<efried> still back on the RMC thing  [17:43]
<esberglu> efried: Ah I think you're right. I'm spinning up a test instance now to confirm  [17:48]
<efried> esberglu: There also seems to be something called OS-EXT-STS which may do exactly what you want.  If my guess is correct, it's a callback hook that you can use to make tempest ask a function of your choice for things like vm state - which you would override in this case with "lpar active *and* rmc active".  [17:49]
<efried> yeah no  [17:56]
<efried> that's not what it is at all.  [17:56]
<esberglu> efried: I don't see how I would override that?  [17:56]
<esberglu> Oh nvm  [17:56]
<esberglu> Oh yeah that doesn't have any idea about RMC  [17:58]
<efried> No, but if you could somehow manage to set a OS-EXT-STS:task_state (to any value) until RMC comes up, that would do the trick.  [18:00]
<esberglu> efried: ready_wait works, I'm just gonna do that for now  [18:33]
<efried> esberglu: roger that.  [18:33]
<efried> esberglu: tbc, this is going to be a separate job that runs on demand only?  [18:33]
<esberglu> efried: Yes. And it only runs a whitelist of vSCSI specific scenario tests  [18:34]
<efried> esberglu: And let's say RMC comes up in one minute: the extra wasted four minutes will be felt once per test we actually run, divided by however many we can run in parallel?  [18:34]
<esberglu> efried: Right. But I'm not too concerned with runtimes for an on demand job  [18:35]
<efried> esberglu: Well, yeah, unless it's like hours.  [18:35]
<efried> esberglu: Just wondering whether long-term it'd be worth implementing and carrying that wait-for-RMC patch instead.  [18:36]
<efried> esberglu: And thus adding it to the to-do list (edmondsw)  [18:36]
<edmondsw> yep  [18:38]
<esberglu> efried: We're only running 1 (or up to 4, in which case all will be parallel) so we would only feel that extra 4 minutes one time per run  [18:39]
<efried> esberglu: Any sense for how long the overall test takes?  [18:40]
<esberglu> efried: Well right now I'm using a 10 min wait for RMC to be active which can probably be reduced. With that I think the runtime for the single test was ~20 minutes  [18:42]
<esberglu> I haven't been paying a ton of attention to runtime though, just trying to get it passing atm  [18:42]
<efried> esberglu: yeah, I dig it.  Just trying to assess the value.  Let's go ahead and add it to the to-do list, but it'll be a thing to happen after you're gone :)  [18:42]
<edmondsw> esberglu we also need https://review.openstack.org/#/c/567599 backported to pike, right?  [20:56]
<esberglu> edmondsw: Yeah, I thought proper protocol was to wait for the initial patch to merge, is that incorrect?  [20:57]
<edmondsw> I don't think that's necessary for something like this  [20:57]
<edmondsw> if ^ was on master, maybe, but it's already on stable/queens  [20:58]
<edmondsw> probably easier to get someone to review and approve both pike and queens at the same time  [20:58]
<esberglu> edmondsw: ack will do  [21:00]
<edmondsw> tx  [21:01]
*** esberglu has quit IRC  [21:16]
*** edmondsw has quit IRC  [21:46]
*** tjakobs has quit IRC  [22:20]
*** edmondsw has joined #openstack-powervm  [23:03]
*** edmondsw has quit IRC  [23:08]
*** edmondsw has joined #openstack-powervm  [23:46]
