Tuesday, 2018-07-17

*** k0da has quit IRC00:37
*** k0da has joined #openstack-powervm06:24
*** AlexeyAbashkin has joined #openstack-powervm07:22
*** k0da has quit IRC09:05
*** k0da has joined #openstack-powervm10:49
openstackgerritChhavi Agarwal proposed openstack/nova-powervm master: iSCSI volume detach with no UDID  https://review.openstack.org/57603411:46
*** edmondsw has joined #openstack-powervm12:06
openstackgerritChhavi Agarwal proposed openstack/nova-powervm master: iSCSI volume detach with no UDID  https://review.openstack.org/57603413:11
*** esberglu has joined #openstack-powervm13:32
*** chhagarw has joined #openstack-powervm13:47
openstackgerritChhavi Agarwal proposed openstack/nova-powervm master: iSCSI volume detach with no UDID  https://review.openstack.org/57603414:02
*** mujahidali has joined #openstack-powervm14:02
edmondsw#startmeeting PowerVM Driver Meeting14:06
openstackMeeting started Tue Jul 17 14:06:24 2018 UTC and is due to finish in 60 minutes.  The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.14:06
chhagarwedmondsw: while testing the iscsi changes on devstack, there were some issues found14:06
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:06
*** openstack changes topic to " (Meeting topic: PowerVM Driver Meeting)"14:06
openstackThe meeting name has been set to 'powervm_driver_meeting'14:06
edmondsw#link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda14:06
edmondswping efried gman-tx mdrabe mujahidali chhagarw14:07
esbergluo/14:07
edmondswgetting started a bit late... I got too caught up in reviewing the device passthrough spec14:07
efriedō/14:07
efriedheh, good news14:07
edmondswI'm sure efried will mind that :)14:07
edmondsw#topic In-Tree Driver14:07
*** openstack changes topic to "In-Tree Driver (Meeting topic: PowerVM Driver Meeting)"14:07
edmondsw#link https://etherpad.openstack.org/p/powervm-in-tree-todos14:08
edmondswI don't know of anything to discuss here... anyone else?14:08
edmondswI don't believe we've made any more progress on the TODOs there14:09
edmondsweverything on hold as we focus on other priorities14:09
edmondsw#topic Out-of-Tree Driver14:09
*** openstack changes topic to "Out-of-Tree Driver (Meeting topic: PowerVM Driver Meeting)"14:09
edmondsw#link https://etherpad.openstack.org/p/powervm-oot-todos14:09
edmondswchhagarw I saw that you posted a new patch set for your iSCSI work... is that based on something you found with devstack?14:10
edmondswI think you were starting to say that right as I started this meeting14:10
edmondswin which case... good example of why we wanted devstack testing, and thank you for doing that14:10
chhagarwyeah, while testing with devstack there are a couple of issues found14:11
chhagarwI am re-verifying on pvc now, will keep you posted14:11
edmondswas an aside on that... I am trying to spend spare cycles here and there improving our example devstack local.conf files in nova-powervm based on things I've been learning from chhagarw's environment and the CI14:11
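(For context on the local.conf files mentioned above: a devstack local.conf for an out-of-tree driver follows the standard `[[local|localrc]]` layout and pulls the driver in via `enable_plugin`. The snippet below is only an illustrative sketch, not one of the actual nova-powervm example files; the passwords and any values beyond the plugin line are assumptions.)

```
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Pull in the out-of-tree PowerVM driver's devstack plugin
enable_plugin nova-powervm https://git.openstack.org/openstack/nova-powervm
```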
edmondswchhagarw I think the last patch set I saw still had pep8 errors, so make sure you iron those out14:12
chhagarwyeah i am updating14:12
edmondswI had a conversation with mdrabe about the MSP work. I hope he's getting to that here soon14:12
edmondswmdrabe any comments there?14:13
mdrabeedmondsw: I'll be syncing the conf options, but the migration object in pvc will remain the same14:13
edmondswwe can talk about pvc in other forums14:14
edmondswI added a section in our TODO etherpad about docs14:15
edmondswbasically, readthedocs builds are failing since stephenfin's changes14:15
edmondswefried I also figured out how to register in readthedocs to be notified when a docs build fails... thought you might be interested to do the same14:16
efriededmondsw: I would rather get us into docs.o.o.  Is that possible?14:16
edmondswI believe so, and it's on the TODO list14:17
edmondswin fact, I think that may be the only way to solve the current build issues, short of reverting some of what stephenfin did14:17
edmondswwhich I'd rather not do14:17
edmondswthat will probably be the next thing I try14:18
edmondswthat == moving to docs.o.o14:18
edmondswwhile we're talking about docs builds... I also noticed that one of our stable docs builds is broken, and all of our EOL tagged docs builds are broken14:18
edmondswlower priority, but also need to be fixed14:19
edmondswI hope we can also move them to docs.o.o but I'm not sure on that14:19
chhagarwedmondsw: I want the updated code to reviewed once for the LPM change perspective14:19
efrieddocs.o.o is latest only, I thought.14:19
edmondswefried no, it has older stuff too14:19
edmondswchhagarw I'll try to look later today14:20
edmondswanything else to discuss OOT?14:21
edmondsw#topic Device Passthrough14:21
*** openstack changes topic to "Device Passthrough (Meeting topic: PowerVM Driver Meeting)"14:21
edmondswEric has a couple things up for review:14:21
edmondsw#link https://review.openstack.org/#/c/57935914:22
edmondsw#link https://review.openstack.org/#/c/57928914:22
edmondswefried I've started commenting on both in parallel14:22
edmondswefried what do you want to add here?14:22
edmondsw(i.e. I'm done talking, take it away)14:23
efriedReshaper work is proceeding apace. Once that winds down, I'll probably be looking in nova (resource tracker and report client) to make sure nrp support is really there; as well as working through more of that series ^14:23
efriedmdrabe: We're counting on you to be our second core reviewer for this series, in case you didn't have enough to do.14:24
edmondswmdrabe I know you have other things to focus on atm... probably let me get my comments up today first, and then look at it14:25
mdrabesounds good14:25
edmondswefried anything else?14:26
efriedno14:26
chhagarwmdrabe: if you can have a look at https://review.openstack.org/#/c/576034/ I want you to check that this change does not impact NPIV LPM14:26
edmondsw#topic PowerVM CI14:27
*** openstack changes topic to "PowerVM CI (Meeting topic: PowerVM Driver Meeting)"14:27
edmondsw#link https://etherpad.openstack.org/p/powervm_ci_todos14:27
edmondswwe've been having some CI stability issues that mujahidali is working14:27
mdrabechhagarw: will do14:28
edmondswI've helped some there, as has esberglu14:28
esbergluHere's what I think is going on with CI. The underlying systems are a mess. Filesystems are full, vios issues, etc.14:28
edmondswyes14:28
esbergluEverything else is just a symptom of that14:28
edmondswagreed14:28
edmondswquestion is how best to fix it14:28
mujahidaliI looked into the neo-21 and found that pvmctl was not working, so restarted the neo followed by14:28
mujahidalipvm-core14:28
mujahidalipvm-res14:28
mujahidaliand after that pvmctl worked for neo-2114:28
mujahidaliI have cleared the other neo systems as suggested by esberglu but still no luck, so I decided to manually clear the ports. But it seems that after cleaning them manually they are not coming back to an active state14:29
esbergluThe ports are just going to keep failing to delete until the underlying issues are resolved14:29
esbergluAre the /boot/ directories still full on some of the systems?14:30
mujahidalimanagement nodes and most of the neos have only a 30% filled /boot/ directory14:31
mujahidaliesberglu: do we need to redeploy the CI again after the cleanup and neo restart?14:32
esbergluHave you been restarting neos? If you restart them you need to redeploy them14:32
esbergluAnd yes if they are broken because of full filesystems they need to be redeployed14:33
edmondswesberglu is it possible to redeploy a single neo, or do they all have to be redeployed as a group?14:33
mujahidaliso, redeploying the cloud_nodes or only the management_nodes will do the work?14:33
esbergluYou should deploy the compute_nodes and the management_nodes14:34
mujahidaliokay14:34
esbergluYou can redeploy single nodes using the --limit command, I've given mujahidali instructions on that before, but let me know if you need help with that14:35
esbergluAt this point it's probably better to redeploy all of them though14:35
mujahidalisure14:35
edmondswmujahidali have we fixed the VIOS issues?14:36
edmondswand you said "most" of the neos have only 30% filled in /boot... what about the others?14:36
mujahidalifor neo-26 and neo-30 ??14:36
edmondswyes14:36
openstackgerritEric Fried proposed openstack/networking-powervm master: Match neutron's version of hacking, flake8 ignores  https://review.openstack.org/58268614:36
openstackgerritEric Fried proposed openstack/networking-powervm master: Use tox 3.1.1 and basepython fix  https://review.openstack.org/58240414:36
edmondswI want to address as much as we can before you redeploy to increase our chances of that fixing things14:36
mujahidaliI am not getting what exactly went wrong with neo-26 and 3014:37
edmondswok, let's try to look at that together after this meeting, before you redeploy14:38
mujahidalithey(neo-26 and 30) are having sufficient /boot/ space as well14:38
mujahidaliedmondsw: sure14:38
edmondswanything else to discuss here?14:38
esbergluYeah I have some stuff14:39
esberglumujahidali: Have you created all of the zuul merger nodes?14:39
esbergluSo that I can stop maintaining mine soon?14:39
mujahidaliI want to try them with today's deployment for prod14:40
mujahidaliso let me deploy the prod with the new zuul mergers, and if all goes right then you can free yours14:41
esberglumujahidali: Please propose a patch with the changes14:42
mujahidalisure14:42
esberglumujahidali: edmondsw: What's the status on vSCSI CI for stable branches? I think last I heard ocata was still broken there. I gave some suggestions14:43
esbergluIs it still broken with those?14:43
esbergluIs it worth moving forward with vSCSI stable CI for pike and queens only and skipping ocata for now?14:43
edmondswI thought we were going to split that commit into 1) pike and newer 2) ocata so that we could go ahead and merge #114:43
edmondswbut I haven't seen that done yet14:44
mujahidaliI am able to stack it now with changes esberglu suggested14:44
edmondswyay!14:44
mujahidalibut there are 3 tempest failures14:44
edmondswmujahidali ping me the details after the meeting14:45
edmondswand we can work through that14:45
edmondswafter we work through the other thing14:45
mujahidaliokay14:45
esbergluedmondsw: mujahidali: There were a bunch of additional neo systems that we had slated for the CI pool. Did those ever get set up?14:46
mujahidalino14:46
edmondswbecause we've been focused on other things, or is there another reason?14:47
mujahidaliwe were hitting CI breakages very frequently, so we didn't get a chance to look at it.14:47
edmondswI think that's understandable... keeping the CI running takes priority14:48
esbergluLast thing on my list was multinode CI. Any questions for me there mujahidali? I'm guessing not much work has happened there either with the CI stability issues14:49
mujahidaliI redeployed the staging CI using the changes suggested by esberglu for multinode14:49
edmondswand?14:51
mujahidalithe jenkins job failed. can I paste the log link here?14:52
edmondswno, that's another thing we can talk about in slack14:52
mujahidalisure14:52
edmondswI think that's it for CI?14:52
esbergluAll for me14:53
edmondsw#topic Open Discussion14:53
*** openstack changes topic to "Open Discussion (Meeting topic: PowerVM Driver Meeting)"14:53
mujahidaliI will be OOO next monday.14:54
edmondswI meant to bring this up when we were talking about OOT, but efried has fixed our specs so they build now. Thanks efried14:54
edmondswmujahidali got it, tx14:54
edmondsw#endmeeting14:55
*** openstack changes topic to "This channel is for PowerVM-related development and discussion. For general OpenStack support, please use #openstack."14:55
openstackMeeting ended Tue Jul 17 14:55:18 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:55
openstackMinutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-07-17-14.06.html14:55
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-07-17-14.06.txt14:55
openstackLog:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2018/powervm_driver_meeting.2018-07-17-14.06.log.html14:55
*** k0da has quit IRC15:35
*** esberglu has quit IRC16:28
*** k0da has joined #openstack-powervm16:54
*** esberglu has joined #openstack-powervm16:57
*** esberglu has quit IRC16:58
*** esberglu has joined #openstack-powervm16:58
*** AlexeyAbashkin has quit IRC16:59
*** chhagarw has quit IRC17:09
*** mujahidali has quit IRC17:10
*** chhagarw has joined #openstack-powervm17:29
*** k0da has quit IRC17:37
*** chhagarw has quit IRC17:45
mdrabeefried: Is it possible to make a nova-powervm change depend on a nova change?18:21
efriedmdrabe: Depends-On works for zuul but not for CI, IIRC. esberglu?18:22
esbergluYeah18:43
edmondswi.e., use Depends-On to keep things from merging in the wrong order, but we'll need to recheck the CI after the nova change merges18:46
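(For reference, the Depends-On mechanism discussed above is just a footer line in the Gerrit commit message; Zuul reads it and holds the change until the dependency merges. The snippet below is a generic sketch; the subject, review number, and Change-Id are placeholders, not real changes.)

```
Some nova-powervm change that needs a nova change first

Zuul will not let this merge until the referenced nova change merges.
The third-party CI does not honor this footer, so recheck after the
dependency lands.

Depends-On: https://review.openstack.org/#/c/999999/
Change-Id: I0000000000000000000000000000000000000000
```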
*** k0da has joined #openstack-powervm19:01
*** efried has quit IRC19:14
*** efried has joined #openstack-powervm19:14
edmondswefried posted comments on https://review.openstack.org/#/c/57935920:05
edmondswI need to learn to insert "I" when I prefix something with an irc nick...20:06
edmondswor add ":" or something20:06
efriededmondsw: My client adds : automatically. Yours doesn't?20:09
edmondswnope20:09
*** AlexeyAbashkin has joined #openstack-powervm20:22
*** AlexeyAbashkin has quit IRC20:54
efriededmondsw: Got a sec to talk about allow/deny logic?21:14
edmondswefried fire away21:14
efriededmondsw:21:15
efriedI thought we talked about making it an error if the same device was identified more than once in a file. But a) I don't see myself having written that down anywhere :( and b) it would be convenient to support generalized allow=true paragraphs overridden by more specific allow=false paragraphs. However, c) speccing that all out gets pretty complicated. What's simpler is to say that, if you want to allow only X devices ou21:15
efriedway, you just have to specify those X devices in individual paragraphs. Whereupon we don't need 'allow' at all, because you always allow devices that are mentioned and deny ones that aren't.21:15
edmondswthat's probably good enough for the first pass, anyway21:17
edmondswand enhance later if needed?21:17
efriededmondsw: If we did want to support the whole "set logic" thing, the rules would have to look something like:21:19
efried- You can't have a device represented by more than one allow=true paragraph. That's an error. Because we wouldn't know which to use for resource_class, traits, etc. The yaml syntax, at least as we've designed it thus far, gives us the paragraphs in unpredictable order (it's a dict).21:19
efried- You *can* have a device represented by more than one allow=false paragraph.21:19
efried- We process allow=true paragraphs first, then process allow=false paragraphs and remove any devices we find (however many times we find 'em).21:19
efriedMy concern is that that's kind of a complex semantic to try to document for the operator, and also to test.21:20
efriedSo - I can make that a "stretch goal" and/or "future" if you're satisfied with that first thing I said.21:20
edmondswefried yeah, I'm ok with that being either stretch or future.21:22
efriedack21:22
edmondswefried I think you spelled it out there pretty well, so I'm not sure it's too complicated to document... but it's not MVP either21:23
*** edmondsw has quit IRC21:36
*** esberglu has left #openstack-powervm21:38
*** esberglu has joined #openstack-powervm21:38
*** esberglu has quit IRC21:38
*** edmondsw_ has joined #openstack-powervm21:45
*** edmondsw_ has quit IRC21:49
openstackgerritEric Fried proposed openstack/nova-powervm master: Spec: Device Passthrough  https://review.openstack.org/57935922:00
openstackgerritEric Fried proposed openstack/nova-powervm master: Spec: Device Passthrough  https://review.openstack.org/57935922:30
openstackgerritEric Fried proposed openstack/nova-powervm master: Passthrough whitelist schema and loading  https://review.openstack.org/57928922:39
*** k0da has quit IRC23:18

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!