Wednesday, 2016-07-20

00:05 *** tjakobs has joined #openstack-powervm
00:06 *** thorst_ has quit IRC
00:11 *** tjakobs has quit IRC
00:53 *** tjakobs has joined #openstack-powervm
01:09 *** tjakobs has quit IRC
02:10 *** seroyer has quit IRC
02:33 *** tjakobs has joined #openstack-powervm
02:36 *** esberglu has joined #openstack-powervm
02:37 *** tjakobs has quit IRC
02:42 *** apearson has joined #openstack-powervm
02:43 *** esberglu has quit IRC
04:25 *** apearson has quit IRC
04:47 *** kotra03 has joined #openstack-powervm
04:49 *** kotra03 has quit IRC
08:14 *** k0da has joined #openstack-powervm
10:00 *** kotra03 has joined #openstack-powervm
10:19 *** k0da has quit IRC
10:38 *** adi___ has quit IRC
10:39 *** adi___ has joined #openstack-powervm
11:54 *** madhaviy has joined #openstack-powervm
12:01 *** smatzek has joined #openstack-powervm
12:16 *** k0da has joined #openstack-powervm
13:00 *** tblakeslee has joined #openstack-powervm
13:00 *** edmondsw has joined #openstack-powervm
13:08 *** kylek3h has joined #openstack-powervm
13:09 *** kylek3h has quit IRC
13:09 *** kylek3h has joined #openstack-powervm
13:11 *** mdrabe has joined #openstack-powervm
13:13 *** kylek3h has quit IRC
13:16 *** kylek3h has joined #openstack-powervm
13:39 *** apearson has joined #openstack-powervm
13:39 *** esberglu has joined #openstack-powervm
13:50 *** lmtaylor has joined #openstack-powervm
14:05 *** seroyer has joined #openstack-powervm
14:11 *** tjakobs has joined #openstack-powervm
14:41 *** smatzek has quit IRC
14:42 *** smatzek has joined #openstack-powervm
14:43 *** mdrabe has quit IRC
14:49 *** mdrabe has joined #openstack-powervm
14:59 *** apearson has quit IRC
15:00 *** apearson has joined #openstack-powervm
15:11 *** k0da has quit IRC
15:20 <openstackgerrit> Eric Berglund proposed openstack/nova-powervm: DNM: Test Change Set 2  https://review.openstack.org/300232
15:20 <openstackgerrit> Eric Berglund proposed openstack/nova-powervm: DNM: CI Check  https://review.openstack.org/295935
15:21 *** seroyer has quit IRC
15:37 *** seroyer has joined #openstack-powervm
15:49 *** efried has joined #openstack-powervm
15:53 *** efried has quit IRC
16:33 *** apearson has quit IRC
16:33 *** apearson has joined #openstack-powervm
16:49 *** madhaviy has quit IRC
16:51 *** edmondsw has quit IRC
16:52 *** edmondsw has joined #openstack-powervm
17:33 *** thorst_ has joined #openstack-powervm
18:19 *** thorst_ has quit IRC
18:35 *** kotra03 has quit IRC
19:34 *** k0da has joined #openstack-powervm
19:38 *** thorst_ has joined #openstack-powervm
19:38 *** efried has joined #openstack-powervm
19:39 <efried> thorst_, svenkat: we need a way to map deterministically and uniquely between physical network names and SR-IOV pport labels if our design is to work.
19:40 <efried> It would seem as though the pport label can't be more than 16 bytes.
19:40 <efried> I believe there is no such restriction (at least, not such a drastically limiting one) on a phys net name.
19:40 <efried> So we can't just truncate.  That would wind up being wildly ambiguous.
19:41 <efried> So phys nets don't have any kind of ID?
19:44 <efried> thorst_ ^^
19:45 <thorst_> efried: A physical network is defined in the neutron.conf file
19:45 <thorst_> let me find an example
19:46 *** svenkat has joined #openstack-powervm
19:47 <thorst_> looks like it's just the bridge mappings now
19:47 <thorst_> I think...
19:47 <thorst_> that's interesting...
19:47 <efried> what does that mean?
19:47 <thorst_> nothing really.
19:48 <thorst_> but basically this isn't something in the DB
19:48 <thorst_> the neutron network has a physical network name
19:48 <thorst_> it's a string
19:48 <svenkat> @thorst: is it limited to network_vlan_ranges in the ml2_type_vlan section of ml2_conf.ini? I see both default and unknown, and I am able to create VLAN networks with these two
19:48 <thorst_> and if two neutron networks are on the same physical network, they both map to the same string
19:48 <thorst_> svenkat: I think it's more the binding details
19:48 <thorst_> depending on the type of network you may have different things
19:48 <thorst_> and I think it's fed from the mechanism driver.
19:49 <svenkat> ok..
19:50 <svenkat> we can scan all SR-IOV physical ports and get their port labels as physical network names and surface them to neutron via the mechanism driver… so neutron networks can be created for those physical networks
19:50 <svenkat> is this right?
19:50 <thorst_> I believe so, yes
19:51 <thorst_> which then solves the 'what value is this agent providing' question I had
19:51 <svenkat> then how about vlan ranges
19:51 <thorst_> isn't that just another part of the binding details?
19:51 <thorst_> first check is physical networks
19:51 <thorst_> second check is what VLAN ranges are supported?
19:52 <svenkat> network_vlan_ranges = default:1:4094,unknown:1:4094
19:52 <thorst_> really, it's a segmentation check.  It is avoided for Flat networks, used for VLAN
19:52 <thorst_> and vxlan, GRE, or Geneve
19:52 <svenkat> ok… so does this mean that to add a new physical network to neutron, it is surfaced via the mechanism driver and we also update ml2_conf.ini and restart the neutron service?
19:53 <svenkat> or can the VLAN range also be made available to neutron via the mechanism driver?
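For context, the network_vlan_ranges value svenkat quotes above lives in the [ml2_type_vlan] section of ml2_conf.ini, and each comma-separated entry is physical_network:vlan_min:vlan_max. A minimal, illustrative sketch of how such a value breaks down per physical network (the function below is made up for illustration and is not neutron's actual parser):

    def parse_network_vlan_ranges(raw):
        """Return {physnet: [(vlan_min, vlan_max), ...]}.

        A bare 'physnet' entry (no range) still registers the physical
        network, just with no tenant VLAN range allocated to it.
        """
        ranges = {}
        for entry in raw.split(','):
            parts = entry.strip().split(':')
            vlan_ranges = ranges.setdefault(parts[0], [])
            if len(parts) == 3:
                vlan_ranges.append((int(parts[1]), int(parts[2])))
        return ranges

    print(parse_network_vlan_ranges('default:1:4094,unknown:1:4094'))
    # {'default': [(1, 4094)], 'unknown': [(1, 4094)]}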
19:54 <openstackgerrit> Eric Berglund proposed openstack/nova-powervm: Add support_attach_interface to capabilities  https://review.openstack.org/345014
19:58 <efried> So thorst_, svenkat, where did we land with port label vs. net name?
20:00 <svenkat> sounds like, along with updating the port label with the net name, the ml2 conf should also be updated (if a VLAN-type network is involved, not for flat).  I do not see any documentation on a restriction on the length of a net name
20:00 <thorst_> svenkat: I think it's up to the agent code to determine how to surface that information
20:00 <svenkat> ok…
20:00 <thorst_> so for the openvswitch agent, yeah it's a conf file and needs to be restarted
20:01 <thorst_> for SR-IOV, you could look at port labels and just expose it
20:01 <svenkat> you mean, mark the port up if there is a match? (between the created port and a physical port with that name)
20:02 <thorst_> no, sec...let me find you a link
20:02 <efried> Rely on the user to make sure they've named their ports and networks the same?
20:02 <svenkat> efried: agree.
20:03 <efried> Are network names arbitrary, and restricted to the single compute host?
20:03 <svenkat> but we should be able to create a neutron network to start… with a specific physical network name.
20:03 <efried> In other words, can I have a network named 'foo' on one host and the same network named 'bar' on another?
20:03 <efried> svenkat, you can't assume green field in the OpenStack case.
20:04 <openstackgerrit> Eric Berglund proposed openstack/nova-powervm: Adds supports_device_tagging to capabilities  https://review.openstack.org/345021
20:04 <efried> We have to account for the scenario where we're bringing up our driver into an environment that is already established.
20:04 <efried> And which already has names on its networks.
20:05 <efried> Names like this_is_my_network_for_production_work
20:05 <svenkat> efried: agree.
20:06 <efried> I don't think we can ask the user to rename their physnet under that scenario.  thorst_, do you agree with that?
20:07 <thorst_> efried: agree...within reason
20:07 <thorst_> remember, their physical network name in openstack can be unlimited characters
20:07 <thorst_> if we derive those based off the 16 byte field on the port
20:07 <thorst_> we're fine I think
20:07 <efried> Sure, but if "reason" is anything less than "16 chars max", we're screwed.
20:08 <efried> No, we can't drive the neutron net name from the pport label.  See above - "environment that is already established".
20:10 <thorst_> efried: I'm not worried about that TBH
20:10 <thorst_> so you mean an OpenStack that has a physical network with > 16 chars?
20:10 <svenkat> in our sriov vif driver, during the plug operation we need to pick physical ports based on the network name in the incoming vif data… do we agree?
20:10 <efried> thorst_ yeah
20:11 <efried> svenkat, that's the design assumption driving the current discussion, yes.
20:11 <thorst_> efried: I'd say that's exceptional, and we shouldn't spend significant energy solving it.  In fact, I'd rather fail and let an operator come in and tell us how to support it
20:11 <svenkat> thorst: I agree.
20:11 <efried> Fail is a significant design statement, and one I'm more inclined to agree with than "just sanitize the thing and hope for the best".
20:12 <efried> In other words, if we come across a network name that we can't directly, uniquely, deterministically map to a pport label, we raise.
20:12 <efried> thorst_, is that what you meant?
20:13 <svenkat> or, pick the first 16 bytes from the incoming vif data and match that against the port label on the pport…
20:13 <efried> No
20:13 <efried> Can't do that.
20:13 <efried> Because ambiguous.
20:13 <svenkat> efried: you want an exact match…
20:13 <efried> Well, at least deterministic and unambiguous.
20:14 <efried> physical_network1 and physical_network2 will both map to 'physical_network'.  That's bad.
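A two-line illustration of the collision efried is describing when names are truncated to a 16-byte label:

    >>> 'physical_network1'[:16], 'physical_network2'[:16]
    ('physical_network', 'physical_network')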
20:14 <svenkat> thorst_ it's fine… if you agree, we can raise an exception in that case and fail the plug.
20:15 <efried> Thinking it through, I'm not actually sure we're going to have a place to fail.
20:16 <efried> Because it's the user's responsibility to set the pport label to match the network name.  When they try to set the label to something we don't accept, the failure will happen right there, at pypowervm (via pvmctl).
20:16 <efried> So the "fail" will be, "Hey, you told me to set this label to match my phys net name, but that's not working.  What do I do now?  Call IBM support..."
20:18 <thorst_> efried: open a launchpad bug  :-)
20:19 <efried> Yeah, that's what I meant.
20:27 *** apearson has quit IRC
20:33 <thorst_> efried: we have a place to fail.  It's the mechanism binding
20:33 <efried> thorst_, eh?
20:33 <efried> Are you suggesting that if we find a pport label that doesn't match a known physnet, we should fail?  Cause I would be against that.
20:34 <thorst_> efried: nope
20:34 <efried> Likewise, if we fail to find a pport label that matches each known physnet.
20:34 <thorst_> efried: If you are asked to provision a neutron network and there is no matching physical port label, then just fail the vif binding
20:34 <efried> oh, well shoot, of course.
20:34 <thorst_> the presence of a port label indicates a physical network
20:34 <efried> But that has nothing to do with name/format restrictions per se.
20:35 <efried> But it allows us to make the rules very simple.  No sanitization, no truncation.
20:35 <thorst_> efried: right.
20:35 <thorst_> must match explicitly
20:35 <openstackgerrit> Brent Tang proposed openstack/nova-powervm: Add VeNCrypt (TLS/x509) Security VNC config  https://review.openstack.org/345037
20:35 <thorst_> not like "the first 16 characters match, so we're good to go"
20:35 <efried> Agreed.
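Putting the agreed rule in one place: the neutron physical network name has to equal an SR-IOV physical port label exactly, and anything else fails the VIF binding (and therefore the plug). A minimal sketch under that assumption; the helper, exception, and pport objects below are invented for illustration and are not the nova-powervm or networking-powervm code:

    class PhysNetMappingError(Exception):
        """No SR-IOV physical port label matches the physical network."""

    def pports_for_physnet(physnet, pports):
        """Return the physical ports whose label equals physnet exactly.

        No truncation and no sanitization: if nothing matches, raise so
        the VIF binding fails cleanly instead of guessing by prefix.
        """
        matches = [p for p in pports if p.label == physnet]
        if not matches:
            raise PhysNetMappingError(
                "No physical port label matches physical network '%s'; "
                "refusing to guess by prefix." % physnet)
        return matches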
20:50 *** efried has quit IRC
20:51 *** smatzek has quit IRC
20:52 *** smatzek has joined #openstack-powervm
20:52 *** efried has joined #openstack-powervm
20:59 *** esberglu has quit IRC
21:04 *** apearson has joined #openstack-powervm
21:04 *** apearson has quit IRC
21:05 *** thorst__ has joined #openstack-powervm
21:05 *** thorst_ has quit IRC
21:06 *** apearson has joined #openstack-powervm
21:06 *** thorst__ is now known as thorst_
21:08 *** svenkat has quit IRC
21:09 *** smatzek has quit IRC
21:14 *** lmtaylor has left #openstack-powervm
21:17 <thorst_> efried: I'm not seeing an ideal way to change the VNC stuff.  I'm not really sure why having 8 variables in a class is bad...
21:17 <efried> 9.  But who's counting.
21:17 <efried> Kill it.
21:17 <thorst_> just ask Dom to toss it?
21:17 <efried> yuh
21:17 <thorst_> k
21:18 <efried> thorst_, unless you want to put in an exception for that file.
21:20 <thorst_> efried: how does one go about that?
21:21 <thorst_> and I guess take a look at 3608 and see what you think is best
21:21 <efried> sonar-project.properties
21:21 <efried> But thorst_, I still need you to walk me through that change set.  See my comments.
21:24 <thorst_> efried: got it.  Will be doing that
21:31 *** apearson has quit IRC
21:39 <efried> thorst_, what triggers "the VNC pipe goes dead, [so] we know they've 'navigated away'"?
21:40 <thorst_> so from our side the port is closed
21:40 <thorst_> the socket
21:40 <thorst_> what triggers a socket close is the VNC client closing
21:40 <thorst_> we leave the VNC server running so it can accept a new VNC client connection for up to 5 minutes
21:41 <thorst_> and if no new VNC clients come in, we clean up the vterm
21:41 <thorst_> but the flags for this are core socket pipes
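Roughly, the behavior thorst_ is describing: the trigger is the client side of the socket closing; the VNC server then lingers for up to five minutes, and only tears down the vterm if no new client connects in that window. A small sketch under those assumptions (class and callback names are invented; this is not the nova-powervm implementation):

    import threading

    VTERM_GRACE_SECONDS = 300  # "up to 5 minutes"

    class VtermReaper(object):
        def __init__(self, close_vterm):
            self._close_vterm = close_vterm  # e.g. whatever ends up issuing rmvterm
            self._timer = None

        def last_client_disconnected(self):
            # The client side of the socket closed; start the countdown.
            self._timer = threading.Timer(VTERM_GRACE_SECONDS, self._close_vterm)
            self._timer.start()

        def client_reconnected(self):
            # A new VNC client came back within the window; keep the vterm.
            if self._timer is not None:
                self._timer.cancel()
                self._timer = None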
21:43 *** k0da has quit IRC
22:05 *** tblakeslee has quit IRC
22:06 *** mdrabe has quit IRC
22:10 *** tjakobs has quit IRC
22:11 *** kylek3h has quit IRC
22:41 *** seroyer has quit IRC
22:52 *** catintheroof has joined #openstack-powervm
23:23 <efried> thorst_, so if you keep the server alive for 5 minutes, even though they've closed their session, if they then re-open a session to the same LPAR, its state will be restored?
23:25 <thorst_> yes
23:25 <thorst_> efried: ^^
23:26 <efried> Okay.  I think there's a better way to word stuff to make this clear.
23:27 <efried> thorst_, does one VNCServer instance handle more than one vterm?
23:27 <thorst_> no
23:27 <thorst_> one to one
23:27 <thorst_> but a VNCServer can handle multiple VNC Clients
23:28 <thorst_> it only starts the VNCKiller when all clients have disconnected
23:30 <efried> f me, what's the relationship between VNC client and vterm session?
23:30 <efried> The only part I have a visual on is the thing I think of as the "vterm", which is the black square that shows me what's going on at the LPAR's "console".  You can only have one of those open to a particular LPAR at a time.
23:30 <efried> So is there one VNC "client" per vterm?
23:30 <thorst_> efried: there *could* be more than one client per VNC Server.  But one VNCServer per vterm
23:30 <thorst_> so the two of us could each have our own VNC client
23:30 <efried> what's a client, then?
23:30 <thorst_> looking at the same vterm
23:31 <efried> oh, mind blown now.
23:31 <thorst_> RealVNC for instance
23:31 <efried> Can we both interact with it?
23:31 <thorst_> yep.  Any changes I do reflect back on your screen and vice versa
23:31 <efried> dacomon
23:31 <thorst_> ?
23:31 <thorst_> wanna see it?
23:31 <efried> never mind
23:31 <efried> No, I believe you.
23:31 <thorst_> heh
23:31 <efried> I think that was the hurdle I was having trouble with.
23:32 <efried> I was assuming one mutually exclusive vterm, so when you told me one server to multiple clients, I assumed they were separate vterms.
23:32 <thorst_> yeah, unfortunately not
23:32 <thorst_> that's what makes this code so fun
23:32 <thorst_> lol
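So the topology is: one VNC server per vterm (per LPAR console), any number of VNC clients attached to that one server, and the "VNCKiller" cleanup countdown only starts once the last client drops. A sketch of that bookkeeping, with invented names that build on the VtermReaper sketch above; these are not the actual nova-powervm classes:

    class VncServerSketch(object):
        """One instance per vterm; tracks however many clients attach."""

        def __init__(self, lpar_uuid, reaper):
            self.lpar_uuid = lpar_uuid  # one server <-> one vterm/LPAR console
            self._reaper = reaper       # e.g. the VtermReaper sketched earlier
            self._clients = set()

        def add_client(self, sock):
            self._clients.add(sock)
            self._reaper.client_reconnected()  # cancel any pending cleanup

        def remove_client(self, sock):
            self._clients.discard(sock)
            if not self._clients:
                # All clients gone: start the 5-minute cleanup window.
                self._reaper.last_client_disconnected()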
23:39 <efried> thorst_, reviewed.  One more question inline.
23:42 <efried> And how did you know it was local?
23:42 <efried> (thorst_ ^^)
23:44 <efried> Is it because this is running under the VNCRepeaterServer, which is always "local"?
23:44 <efried> noooo...
23:44 <efried> thorst_, seems like you should have been calling close_vterm instead.
23:45 <thorst_> efried: possibly.  We know it's local if we're in that loop.  close_vterm will call down to local anyway
23:45 <efried> Local is based on an adapter trait.  I don't see how we could know from that chunk of code.
23:46 <thorst_> cause that code can't be started unless you had the adapter trait of local
23:47 <thorst_> line 165
23:47 <efried> got it
23:48 <efried> So what does close_vterm do that we weren't doing before?  Were we just leaving the vterm around indefinitely?
23:50 <efried> or was it closing implicitly, like when I do ~~. ?
23:50 <thorst_> it is what calls the rm_vterm
23:50 <thorst_> which we were missing before
23:50 <efried> We must have been cleaning it up somehow, no?
23:50 <thorst_> and then it allows the VNCServer to just 'die'
23:50 <thorst_> efried: uhhh...
23:50 <thorst_> efried: when the VM migrated or was deleted
23:51 <efried> or rather, it must have been cleaning itself up?
23:51 <thorst_> this is kinda 'important'
23:51 <thorst_> no
23:51 <efried> So before, if I opened a vterm, then navigated away, then tried to go back to that same vterm, it would be blocked and I would have to go rmvterm from somewhere else?
23:52 <efried> Cause if we're fixing a bug like that, it should definitely be mentioned in the commit message.  And possibly have a public bug filed.
23:52 <thorst_> no, before it would reconnect a new VNCServer to an existing console
23:52 <thorst_> so same experience (although indefinite sessions)
23:53 <efried> Okay.  Lemme ponder that for a sec.  Initial thought is that the commit message should mention that more clearly.
23:54 <thorst_> that's fair
23:55 <efried> Okay, so before, you could navigate away, come back whenever, and have your session "preserved" - even though it would really be a different window on a still-running vterm - with one (or more) stale nonexistent "windows" hanging around in limbo.
23:56 <efried> And with your change, you can navigate away, and come back in five minutes and wind up on the same session - with no stale windows in limbo?  Or still stale windows in limbo, just with a lifespan limited to "five minutes from the last time you closed"?
23:57 <efried> So you haven't solved the stale window problem, unless I leave for more than five minutes.
23:57 <thorst_> pretty much
23:57 *** thorst_ has quit IRC
