*** tjakobs has joined #openstack-powervm | 00:05 | |
*** thorst_ has quit IRC | 00:06 | |
*** tjakobs has quit IRC | 00:11 | |
*** tjakobs has joined #openstack-powervm | 00:53 | |
*** tjakobs has quit IRC | 01:09 | |
*** seroyer has quit IRC | 02:10 | |
*** tjakobs has joined #openstack-powervm | 02:33 | |
*** esberglu has joined #openstack-powervm | 02:36 | |
*** tjakobs has quit IRC | 02:37 | |
*** apearson has joined #openstack-powervm | 02:42 | |
*** esberglu has quit IRC | 02:43 | |
*** apearson has quit IRC | 04:25 | |
*** kotra03 has joined #openstack-powervm | 04:47 | |
*** kotra03 has quit IRC | 04:49 | |
*** k0da has joined #openstack-powervm | 08:14 | |
*** kotra03 has joined #openstack-powervm | 10:00 | |
*** k0da has quit IRC | 10:19 | |
*** adi___ has quit IRC | 10:38 | |
*** adi___ has joined #openstack-powervm | 10:39 | |
*** madhaviy has joined #openstack-powervm | 11:54 | |
*** smatzek has joined #openstack-powervm | 12:01 | |
*** k0da has joined #openstack-powervm | 12:16 | |
*** tblakeslee has joined #openstack-powervm | 13:00 | |
*** edmondsw has joined #openstack-powervm | 13:00 | |
*** kylek3h has joined #openstack-powervm | 13:08 | |
*** kylek3h has quit IRC | 13:09 | |
*** kylek3h has joined #openstack-powervm | 13:09 | |
*** mdrabe has joined #openstack-powervm | 13:11 | |
*** kylek3h has quit IRC | 13:13 | |
*** kylek3h has joined #openstack-powervm | 13:16 | |
*** apearson has joined #openstack-powervm | 13:39 | |
*** esberglu has joined #openstack-powervm | 13:39 | |
*** lmtaylor has joined #openstack-powervm | 13:50 | |
*** seroyer has joined #openstack-powervm | 14:05 | |
*** tjakobs has joined #openstack-powervm | 14:11 | |
*** smatzek has quit IRC | 14:41 | |
*** smatzek has joined #openstack-powervm | 14:42 | |
*** mdrabe has quit IRC | 14:43 | |
*** mdrabe has joined #openstack-powervm | 14:49 | |
*** apearson has quit IRC | 14:59 | |
*** apearson has joined #openstack-powervm | 15:00 | |
*** k0da has quit IRC | 15:11 | |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: DNM: Test Change Set 2 https://review.openstack.org/300232 | 15:20 |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: DNM: CI Check https://review.openstack.org/295935 | 15:20 |
*** seroyer has quit IRC | 15:21 | |
*** seroyer has joined #openstack-powervm | 15:37 | |
*** efried has joined #openstack-powervm | 15:49 | |
*** efried has quit IRC | 15:53 | |
*** apearson has quit IRC | 16:33 | |
*** apearson has joined #openstack-powervm | 16:33 | |
*** madhaviy has quit IRC | 16:49 | |
*** edmondsw has quit IRC | 16:51 | |
*** edmondsw has joined #openstack-powervm | 16:52 | |
*** thorst_ has joined #openstack-powervm | 17:33 | |
*** thorst_ has quit IRC | 18:19 | |
*** kotra03 has quit IRC | 18:35 | |
*** k0da has joined #openstack-powervm | 19:34 | |
*** thorst_ has joined #openstack-powervm | 19:38 | |
*** efried has joined #openstack-powervm | 19:38 | |
efried | thorst_, svenkat: we need a way to map deterministically and uniquely between physical network names and SR-IOV pport labels if our design is to work. | 19:39 |
efried | It would seem as though the pport label can't be more than 16 bytes. | 19:40 |
efried | I believe there is no such restriction (at least, not such a drastically limiting one) on a phys net name. | 19:40 |
efried | So we can't just truncate. That would wind up being wildly ambiguous. | 19:40 |
efried | So phys nets don't have an kind of ID? | 19:41 |
efried | any* | 19:41 |
efried | thorst_ ^^ | 19:44 |
thorst_ | efried: A physical network is defined in the neutron.conf file | 19:45 |
thorst_ | let me find an example | 19:45 |
*** svenkat has joined #openstack-powervm | 19:46 | |
thorst_ | looks like its just the bridge mappings now | 19:47 |
thorst_ | I think... | 19:47 |
thorst_ | that's interesting... | 19:47 |
efried | what does that mean? | 19:47 |
thorst_ | nothing really. | 19:47 |
thorst_ | but basically this isn't something in the DB | 19:48 |
thorst_ | the neutron network has a physical network name | 19:48 |
thorst_ | its a string | 19:48 |
svenkat | @thorst: is it limited to network_vlan_ranges in ml2_type_vlan section in ml2_conf.ini.. i see both default and unknown and i am able to create vlan network with these two | 19:48 |
thorst_ | and if two neutron networks are on the same physical network, they both map to the same string | 19:48 |
thorst_ | svenkat: I think its more the binding details | 19:48 |
thorst_ | depending on the type of network you may have different things | 19:48 |
thorst_ | and I think its fed from the mechanism driver. | 19:48 |
svenkat | ok.. | 19:49 |
svenkat | we can scan all sriov physical ports and get their port labels as physical network names and surface them to neutron via mechanism driver… so neutron networks can be created for those physical networks | 19:50 |
svenkat | is this right? | 19:50 |
thorst_ | I believe so, yes | 19:50 |
thorst_ | which then solves the 'what value is this agent providing' question I had | 19:51 |
svenkat | then how about vlan ranges | 19:51 |
thorst_ | isn't that just another part of the binding details? | 19:51 |
thorst_ | first check is physical networks | 19:51 |
thorst_ | second check is what VLAN ranges are supported? | 19:51 |
svenkat | network_vlan_ranges = default:1:4094,unknown:1:4094 | 19:52 |
thorst_ | really, its a segmentation check. It is avoided for Flat networks, used for VLAN | 19:52 |
thorst_ | and vxlan, GRE, or Geneve | 19:52 |
svenkat | ok… so does this mean that, to add a new physical network to neutron, it is surfaced via the mechanism driver, and we also update ml2_conf.ini and restart the neutron service? | 19:52 |
svenkat | or can the vlan range also be made available to neutron via the mechanism driver | 19:53 |
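[Editor's note: the `network_vlan_ranges = default:1:4094,unknown:1:4094` value discussed above follows the ml2 `physnet:min:max` comma-separated form. A minimal sketch of parsing that string into per-physnet ranges is below; `parse_vlan_ranges` is a hypothetical helper for illustration, not neutron's actual parser.]

```python
def parse_vlan_ranges(value):
    """Parse 'default:1:4094,unknown:1:4094' -> {'default': (1, 4094), ...}.

    Illustrative sketch only; neutron's real config handling differs.
    """
    ranges = {}
    for entry in value.split(','):
        parts = entry.strip().split(':')
        if len(parts) != 3:
            raise ValueError("bad network_vlan_ranges entry: %r" % entry)
        physnet, vmin, vmax = parts[0], int(parts[1]), int(parts[2])
        # VLAN IDs are constrained to 1..4094 on the wire
        if not 1 <= vmin <= vmax <= 4094:
            raise ValueError("VLAN range out of bounds: %r" % entry)
        ranges[physnet] = (vmin, vmax)
    return ranges
```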
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: Add support_attach_interface to capabilities https://review.openstack.org/345014 | 19:54 |
efried | So thorst_, svenkat, where did we land with port label vs. net name? | 19:58 |
svenkat | sounds like, along with updating the port label with the net name, the ml2 conf should also be updated (if a vlan type network is involved, not for flat). I do not see any documentation on a restriction on the length of the net name | 20:00 |
thorst_ | svenkat: I think its up to the agent code to determine how to surface that information | 20:00 |
svenkat | ok… | 20:00 |
thorst_ | so for the openvswitch agent, yeah its a conf file and needs to be restarted | 20:00 |
thorst_ | for SR-IOV, you could look at port labels and just expose it | 20:01 |
svenkat | you mean, mark the port up if there is a match? (between the created port and a physical port with that name) | 20:01 |
thorst_ | no, sec...let me find you a link | 20:02 |
efried | Rely on the user to make sure they've named their ports and networks the same? | 20:02 |
svenkat | efried: agree. | 20:02 |
efried | Are network names arbitrary, and restricted to the single compute host? | 20:03 |
svenkat | but we should be able to create a neutron network to start… with specific physical network name. | 20:03 |
efried | In other words, can I have a network named 'foo' on one host and the same network named 'bar' on another? | 20:03 |
efried | svenkat, you can't assume green field in the OpenStack case. | 20:03 |
openstackgerrit | Eric Berglund proposed openstack/nova-powervm: Adds supports_device_tagging to capabilities https://review.openstack.org/345021 | 20:04 |
efried | We have to account for the scenario where we're bringing up our driver into an environment that is already established. | 20:04 |
efried | And which already has names on its networks. | 20:04 |
efried | Names like this_is_my_network_for_production_work | 20:05 |
svenkat | efried: agree. | 20:05 |
efried | I don't think we can ask the user to rename their physnet under that scenario. thorst_, do you agree with that? | 20:06 |
thorst_ | efried: agree...within reason | 20:07 |
thorst_ | remember, their physical network name in openstack can be unlimited characters | 20:07 |
thorst_ | if we derive those based off the 16 byte field on the port | 20:07 |
thorst_ | we're fine I think | 20:07 |
efried | Sure, but if "reason" is anything less than "16 chars max", we're screwed. | 20:07 |
efried | No, we can't drive the neutron net name from the pport label. See above - "environment that is already established". | 20:08 |
thorst_ | efried: I'm not worried about that TBH | 20:10 |
thorst_ | so you mean an OpenStack that has a physical network with > 16 chars? | 20:10 |
svenkat | in our sriov vif driver, during plug operation we need to pick physical ports based on the network name in the incoming vif data… do we agree ? | 20:10 |
efried | thorst_ yeah | 20:10 |
efried | svenkat, that's the design assumption driving the current discussion, yes. | 20:11 |
thorst_ | efried: I'd say that's exceptional, and we shouldn't spend significant energy solving it. In fact, I'd rather fail and let an operator come in and tell us how to support it | 20:11 |
svenkat | thorst: i agree. | 20:11 |
efried | Fail is a significant design statement, and one I'm more inclined to agree with than, "just sanitize the thing and hope for the best". | 20:11 |
efried | In other words, if we come across a network name that we can't directly, uniquely, deterministically map to a pport label, we raise. | 20:12 |
efried | thorst_, is that what you meant? | 20:12 |
svenkat | or, pick the first 16 bytes from the incoming vif data and move on, matching with the port label on the pp… | 20:13 |
efried | No | 20:13 |
efried | Can't do that. | 20:13 |
efried | Because ambiguous. | 20:13 |
svenkat | efried: you want equal match… | 20:13 |
efried | Well, at least deterministic and unambiguous. | 20:13 |
efried | physical_network1 and physical_network2 will both map to 'physical_network'. That's bad. | 20:14 |
svenkat | thorst_ its fine… if you agree, we can raise an exception in that case and fail plug. | 20:14 |
efried | Thinking it through, I'm not actually sure we're going to have a place to fail. | 20:15 |
efried | Because it's the user's responsibility to set the pport label to match the network name. When they try to set the label to something we don't accept, the failure will happen right there, at pypowervm (via pvmctl). | 20:16 |
efried | So the "fail" will be, "Hey, you told me to set this label to match my phys net name, but that's not working. What do I do now? Call IBM support..." | 20:16 |
thorst_ | efried: open a launchpad bug :-) | 20:18 |
efried | Yeah, that's what I meant. | 20:19 |
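[Editor's note: the failure mode efried describes is that an over-long label is rejected at label-set time (via pypowervm/pvmctl), before any VIF plug is attempted. A hedged sketch of such a check is below; `validate_pport_label` and the error wording are illustrative, not the actual pypowervm API. The 16-byte limit is the figure cited in the discussion.]

```python
MAX_LABEL_BYTES = 16  # pport label limit cited above

def validate_pport_label(label):
    """Reject a physical port label longer than 16 bytes up front.

    Hypothetical helper: the real enforcement happens inside
    pypowervm/pvmctl when the label is set, which is exactly the
    "fail right there" behavior discussed in the channel.
    """
    if len(label.encode('utf-8')) > MAX_LABEL_BYTES:
        raise ValueError(
            "port label %r exceeds %d bytes; shorten the physical "
            "network name rather than truncating the label"
            % (label, MAX_LABEL_BYTES))
    return label
```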
*** apearson has quit IRC | 20:27 | |
thorst_ | efried: we have a place to fail. Its the mechanism binding | 20:33 |
efried | thorst_, eh? | 20:33 |
efried | Are you suggesting that if we find a pport label that doesn't match a known physnet, we should fail? Cause I would be against that. | 20:33 |
thorst_ | efried: nope | 20:34 |
efried | Likewise, if we fail to find a pport label that matches each known physnet. | 20:34 |
thorst_ | efried: If you are asked to provision a neutron network and there is no matching physical port label, then just fail the vif binding | 20:34 |
efried | oh, well shoot, of course. | 20:34 |
thorst_ | the presence of a port label indicates a physical network | 20:34 |
efried | But that has nothing to do with name/format restrictions per se. | 20:34 |
efried | But it allows us to make the rules very simple. No sanitization, no truncation. | 20:35 |
thorst_ | efried: right. | 20:35 |
thorst_ | must match explicitly | 20:35 |
openstackgerrit | Brent Tang proposed openstack/nova-powervm: Add VeNCrypt (TLS/x509) Security VNC config https://review.openstack.org/345037 | 20:35 |
thorst_ | not like "the first 16 characters match so we're good to go" | 20:35 |
efried | Agreed. | 20:35 |
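[Editor's note: the rule settled on above is exact matching only — no sanitization, no truncation, and the VIF binding fails when no pport label equals the physnet name. A minimal sketch of that rule follows; `pick_pports_for_physnet` is a hypothetical helper name, not the actual agent code.]

```python
def pick_pports_for_physnet(physnet, pports):
    """Return the SR-IOV physical ports whose label exactly equals physnet.

    pports: iterable of (label, port) pairs.
    Raises when nothing matches, which models "just fail the vif
    binding" from the discussion. Note no truncation is attempted:
    'physical_network1' will never match a port labeled
    'physical_network'.
    """
    matches = [port for label, port in pports if label == physnet]
    if not matches:
        raise LookupError(
            "no SR-IOV physical port labeled %r; failing VIF binding"
            % physnet)
    return matches
```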
*** efried has quit IRC | 20:50 | |
*** smatzek has quit IRC | 20:51 | |
*** smatzek has joined #openstack-powervm | 20:52 | |
*** efried has joined #openstack-powervm | 20:52 | |
*** esberglu has quit IRC | 20:59 | |
*** apearson has joined #openstack-powervm | 21:04 | |
*** apearson has quit IRC | 21:04 | |
*** thorst__ has joined #openstack-powervm | 21:05 | |
*** thorst_ has quit IRC | 21:05 | |
*** apearson has joined #openstack-powervm | 21:06 | |
*** thorst__ is now known as thorst_ | 21:06 | |
*** svenkat has quit IRC | 21:08 | |
*** smatzek has quit IRC | 21:09 | |
*** lmtaylor has left #openstack-powervm | 21:14 | |
thorst_ | efried: I'm not seeing an ideal way to change the VNC stuff. I'm not really sure why having 8 variables in a class is bad... | 21:17 |
efried | 9. But who's counting. | 21:17 |
efried | Kill it. | 21:17 |
thorst_ | just ask Dom to toss it? | 21:17 |
efried | yuh | 21:17 |
thorst_ | k | 21:17 |
efried | thorst_, unless you want to put in an exception for that file. | 21:18 |
thorst_ | efried: how does one go about that? | 21:20 |
thorst_ | and I guess take a look at 3608 and see what you think is best | 21:21 |
efried | sonar-project.properties | 21:21 |
efried | But thorst_, I still need you to walk me through that change set. See my comments. | 21:21 |
thorst_ | efried: got it. Will be doing that | 21:24 |
*** apearson has quit IRC | 21:31 | |
efried | thorst_, what triggers "the VNC pipe goes dead, [so] we know they've 'navigated away'"? | 21:39 |
thorst_ | so from our side the port is closed | 21:40 |
thorst_ | the socket | 21:40 |
thorst_ | what triggers a socket close is the VNC Client closes | 21:40 |
thorst_ | we leave the VNC Server running so it can accept a new VNC client connection for up to 5 minutes | 21:40 |
thorst_ | and if no new VNC clients come in, we clean up the vterm | 21:41 |
thorst_ | but the flags for this are core socket pipes | 21:41 |
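[Editor's note: the lifecycle thorst_ describes — server stays up after the last client socket closes, a 5-minute window allows reconnects, and the vterm is cleaned up only if nobody returns — can be sketched as below. This is a toy model under those stated assumptions; `VncSession` and its methods are illustrative names, not the actual nova-powervm code.]

```python
import threading

class VncSession:
    """Toy model: one VNC server per vterm, many clients per server."""

    IDLE_TIMEOUT = 300  # the 5-minute window mentioned above

    def __init__(self, on_expire):
        self._clients = 0
        self._killer = None
        self._on_expire = on_expire  # e.g. rm_vterm + server shutdown

    def client_connected(self):
        # A reconnect inside the window cancels the pending cleanup,
        # so the still-running vterm's state is "restored".
        if self._killer is not None:
            self._killer.cancel()
            self._killer = None
        self._clients += 1

    def client_disconnected(self):
        self._clients -= 1
        if self._clients == 0:
            # Last socket closed: start the cleanup countdown.
            self._killer = threading.Timer(self.IDLE_TIMEOUT,
                                           self._on_expire)
            self._killer.daemon = True
            self._killer.start()
```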
*** k0da has quit IRC | 21:43 | |
*** tblakeslee has quit IRC | 22:05 | |
*** mdrabe has quit IRC | 22:06 | |
*** tjakobs has quit IRC | 22:10 | |
*** kylek3h has quit IRC | 22:11 | |
*** seroyer has quit IRC | 22:41 | |
*** catintheroof has joined #openstack-powervm | 22:52 | |
efried | thorst_, so if you keep the server alive for 5 minutes, even though they've closed their session, if they then re-open a session to the same LPAR, its state will be restored? | 23:23 |
thorst_ | yes | 23:25 |
thorst_ | efried: ^^ | 23:25 |
efried | Okay. I think there's a better way to word stuff to make this clear. | 23:26 |
efried | thorst_, does one VNCServer instance handle more than one vterm? | 23:27 |
thorst_ | no | 23:27 |
thorst_ | one to one | 23:27 |
thorst_ | but a VNCServer can handle multiple VNC Clients | 23:27 |
thorst_ | it only starts the VNCKiller when all clients have disconnected | 23:28 |
efried | f me, what's the relationship between VNC client and vterm session? | 23:30 |
efried | The only part I have a visual on is the thing I think of as the "vterm", which is the black square that shows me what's going on at the LPAR's "console". You can only have one of those open to a particular LPAR at a time. | 23:30 |
efried | So is there one VNC "client" per vterm? | 23:30 |
thorst_ | efried: there *could* be more than one client per VNC Server. But one VNCServer per vterm | 23:30 |
thorst_ | so the two of us could each have our own VNC client | 23:30 |
efried | what's a client, then? | 23:30 |
thorst_ | looking at the same vterm | 23:30 |
efried | oh, mind blown now. | 23:31 |
thorst_ | RealVNC for instance | 23:31 |
efried | Can we both interact with it? | 23:31 |
thorst_ | yep. Any changes I do reflect back on your screen and vice versa | 23:31 |
efried | dacomon | 23:31 |
thorst_ | ? | 23:31 |
thorst_ | wanna see it? | 23:31 |
efried | never mind | 23:31 |
efried | No, I believe you. | 23:31 |
thorst_ | heh | 23:31 |
efried | I think that was the hurdle I was having trouble with. | 23:31 |
efried | I was assuming one mutually exclusive vterm, so when you told me one server to multiple clients, I assumed they were separate vterms. | 23:32 |
thorst_ | yeah, unfortunately not | 23:32 |
thorst_ | that's what makes this code so fun | 23:32 |
thorst_ | lol | 23:32 |
efried | thorst_, reviewed. One more question inline. | 23:39 |
efried | And how did you know it was local? | 23:42 |
efried | (thorst_ ^^) | 23:42 |
efried | Is it because this is running under the VNCRepeaterServer, which is always "local"? | 23:44 |
efried | noooo... | 23:44 |
efried | thorst_, seems like you should have been calling close_vterm instead. | 23:44 |
thorst_ | efried: possibly. We know its local if we're in that loop. close_vterm will call down to local anyway | 23:45 |
efried | Local is based on an adapter trait. I don't see how we could know from that chunk of code. | 23:45 |
thorst_ | cause that code can't be started unless you had the adapter trait of local | 23:46 |
thorst_ | line 165 | 23:47 |
efried | got it | 23:47 |
efried | So what does close_vterm do that we weren't doing before? Were we just leaving the vterm around indefinitely? | 23:48 |
efried | or was it closing implicitly, like when I do ~~. ? | 23:50 |
thorst_ | it is what calls the rm_vterm | 23:50 |
thorst_ | which we were missing before | 23:50 |
efried | We must have been cleaning it up somehow, no? | 23:50 |
thorst_ | and then it allows the VNCServer to just 'die' | 23:50 |
thorst_ | efried: uhhh... | 23:50 |
thorst_ | efried: when the VM migrated or was deleted | 23:50 |
efried | or rather, it must have been cleaning itself up? | 23:51 |
thorst_ | this is kinda 'important' | 23:51 |
thorst_ | no | 23:51 |
efried | So before, if I opened a vterm, then navigated away, then tried to go back to that same vterm, it would be blocked and I would have to go rmvterm from somewhere else? | 23:51 |
efried | Cause if we're fixing a bug like that, it should definitely be mentioned in the commit message. And possibly have a public bug filed. | 23:52 |
thorst_ | no, before it would reconnect a new VNCServer to an existing console | 23:52 |
thorst_ | so same experience (although indefinite sessions) | 23:52 |
efried | Okay. Lemme ponder that for a sec. Initial thought is that the commit message should mention that more clearly. | 23:53 |
thorst_ | that's fair | 23:54 |
efried | Okay, so before, you could navigate away, come back whenever, and have your session "preserved" - even though it would really be a different window on a still-running vterm - with one (or more) stale nonexistent "windows" hanging around in limbo. | 23:55 |
efried | And with your change, you can navigate away, and come back in five minutes and wind up on the same session - with no stale windows in limbo? Or still stale windows in limbo, just with a lifespan limited to "five minutes from the last time you closed"? | 23:56 |
efried | So you haven't solved the stale window problem, unless I leave for more than five minutes. | 23:57 |
thorst_ | pretty much | 23:57 |
*** thorst_ has quit IRC | 23:57 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!