*** dayou has quit IRC | 00:08 | |
rm_work | yeah we just followed the neutron guidelines for filtering and such | 00:08 |
*** rtjure has quit IRC | 00:11 | |
rm_work | johnsom: i'm trying to trace the issue with the tempest stuff now, finally finished with meetings and errands | 00:11 |
johnsom | Ok, I have logged off to make dinner, but might be able to answer questions and such | 00:14 |
rm_work | kk np | 00:14 |
*** rtjure has joined #openstack-lbaas | 00:16 | |
*** fnaval has joined #openstack-lbaas | 00:18 | |
*** armax has joined #openstack-lbaas | 00:18 | |
*** fnaval has quit IRC | 00:20 | |
*** fnaval has joined #openstack-lbaas | 00:21 | |
rm_work | kong: you around? | 00:24 |
rm_work | looks like there's some stuff in server_util.py that you copy/pasted from elsewhere -- i see some of it is from tempest's waiters, but ... what about `_execute` ? | 00:25 |
johnsom | rm_work: btw, I updated the zuul v3 etherpad with ovh and stuff ready for review | 00:44 |
rm_work | kk | 00:44 |
rm_work | err which etherpad is that | 00:44 |
rm_work | ahh i see the issues | 00:48 |
rm_work | one was caused by my latest refactor, and one was ... ok i don't know but i fixed it | 00:48 |
rm_work | running that in our cloud now | 00:51 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: Create scenario tests for loadbalancers https://review.openstack.org/486775 | 00:52 |
rm_work | err, that ^^ | 00:52 |
*** jniesz has quit IRC | 00:52 | |
johnsom | rm_work: https://etherpad.openstack.org/p/octavia-zuulv3-patches | 00:53 |
rm_work | johnsom: rebuild your devstack for tempest with the newest LB patch, if you can | 00:54 |
rm_work | later | 00:54 |
johnsom | I will fire that version up in the morning | 00:54 |
rm_work | kk | 00:54 |
openstackgerrit | Adam Harwell proposed openstack/octavia-tempest-plugin master: WIP: Failover test https://review.openstack.org/501559 | 01:12 |
openstackgerrit | Merged openstack/octavia-dashboard master: Zuul: add file extension to playbook path https://review.openstack.org/516126 | 01:18 |
openstackgerrit | Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s) https://review.openstack.org/435612 | 01:21 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 01:21 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 01:23 |
*** bbbbzhao_ has joined #openstack-lbaas | 01:42 | |
kong | rm_work: hi, the _execute is from the very early version of that patch | 02:10 |
*** dayou has joined #openstack-lbaas | 02:12 | |
*** sanfern has joined #openstack-lbaas | 03:44 | |
sanfern | Hi johnsom | 03:44 |
*** AlexeyAbashkin has joined #openstack-lbaas | 04:18 | |
*** bbbbzhao_ has quit IRC | 04:22 | |
*** AlexeyAbashkin has quit IRC | 04:22 | |
*** yamamoto has joined #openstack-lbaas | 04:24 | |
rm_work | kong: well, i fixed it -- please actually try running the latest revision | 04:46 |
rm_work | i refactored some stuff a little bit (moved it around to different files, and renamed a client), so just to make sure it all works for you | 04:47 |
rm_work | it works for me, but we don't use the FLIP stuff | 04:47 |
*** sanfern has quit IRC | 04:56 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 04:57 | |
*** eN_Guruprasad_Rn has quit IRC | 05:12 | |
*** gcheresh has joined #openstack-lbaas | 05:13 | |
*** gcheresh has quit IRC | 05:21 | |
*** armax has quit IRC | 05:27 | |
*** armax has joined #openstack-lbaas | 05:27 | |
*** armax has quit IRC | 05:27 | |
*** armax has joined #openstack-lbaas | 05:28 | |
*** armax has quit IRC | 05:28 | |
*** armax has joined #openstack-lbaas | 05:29 | |
*** armax has quit IRC | 05:29 | |
*** sanfern has joined #openstack-lbaas | 05:34 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 05:46 | |
*** gcheresh has joined #openstack-lbaas | 05:50 | |
*** pcaruana has joined #openstack-lbaas | 05:58 | |
*** sanfern has quit IRC | 06:06 | |
*** Alex_Staf_ has joined #openstack-lbaas | 06:57 | |
*** annp has quit IRC | 07:16 | |
*** dayou has quit IRC | 07:20 | |
*** rcernin has quit IRC | 07:27 | |
*** Alex_Staf_ has quit IRC | 07:39 | |
openstackgerrit | Nir Magnezi proposed openstack/octavia master: Update devstack plugin and examples https://review.openstack.org/503638 | 07:49 |
*** AlexeyAbashkin has joined #openstack-lbaas | 07:58 | |
*** tesseract has joined #openstack-lbaas | 08:10 | |
*** annp has joined #openstack-lbaas | 08:15 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 08:17 |
*** Tengu has left #openstack-lbaas | 08:24 | |
*** rcernin has joined #openstack-lbaas | 08:26 | |
*** Alex_Staf_ has joined #openstack-lbaas | 08:29 | |
*** oanson has quit IRC | 08:30 | |
*** leyal has quit IRC | 08:30 | |
*** oanson has joined #openstack-lbaas | 08:31 | |
*** spectr has joined #openstack-lbaas | 08:36 | |
*** dayou has joined #openstack-lbaas | 08:39 | |
*** oanson has quit IRC | 08:45 | |
*** oanson has joined #openstack-lbaas | 08:46 | |
*** slaweq_ has quit IRC | 09:08 | |
*** slaweq has joined #openstack-lbaas | 09:16 | |
*** yamamoto has quit IRC | 09:18 | |
*** yamamoto has joined #openstack-lbaas | 09:24 | |
*** yamamoto has quit IRC | 09:28 | |
*** yamamoto has joined #openstack-lbaas | 09:35 | |
*** yamamoto has quit IRC | 09:35 | |
*** links has quit IRC | 09:44 | |
*** yamamoto has joined #openstack-lbaas | 09:57 | |
*** rtjure has quit IRC | 10:17 | |
*** salmankhan has joined #openstack-lbaas | 10:20 | |
*** slaweq has quit IRC | 10:21 | |
kong | rm_work: i rerun the test, still works for me | 10:21 |
*** rtjure has joined #openstack-lbaas | 10:22 | |
*** fzimmermann has joined #openstack-lbaas | 10:33 | |
fzimmermann | Hi, any hints on how to "revive" amphorae after octavia terminated them due to network connection issues? | 10:34 |
*** slaweq has joined #openstack-lbaas | 10:38 | |
*** knsahm has joined #openstack-lbaas | 11:03 | |
knsahm | is it possible to rebuild the amphora instances? | 11:07 |
knsahm | octavia has killed all amphora instances and we want to rebuild the instances... | 11:08 |
knsahm | all load balancers are in provisioning_status "ACTIVE" | 11:08 |
*** yamamoto has quit IRC | 11:16 | |
*** yamamoto has joined #openstack-lbaas | 11:19 | |
*** yamamoto has quit IRC | 11:24 | |
*** sanfern has joined #openstack-lbaas | 11:26 | |
knsahm | Does anybody have any idea? | 11:27 |
*** knsahm has quit IRC | 11:54 | |
*** rcernin has quit IRC | 11:56 | |
*** salmankhan has quit IRC | 12:10 | |
*** sanfern has quit IRC | 12:11 | |
*** yamamoto has joined #openstack-lbaas | 12:11 | |
*** salmankhan has joined #openstack-lbaas | 12:14 | |
*** yamamoto has quit IRC | 12:15 | |
*** yamamoto has joined #openstack-lbaas | 12:15 | |
*** yamamoto has quit IRC | 12:20 | |
*** knsahm has joined #openstack-lbaas | 12:22 | |
*** krypto has joined #openstack-lbaas | 12:26 | |
*** krypto has quit IRC | 12:26 | |
*** krypto has joined #openstack-lbaas | 12:26 | |
*** leitan has joined #openstack-lbaas | 12:38 | |
*** eN_Guruprasad_Rn has quit IRC | 12:38 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 12:39 | |
*** yamamoto has joined #openstack-lbaas | 12:47 | |
*** yamamoto has quit IRC | 12:47 | |
*** yamamoto has joined #openstack-lbaas | 12:50 | |
*** eN_Guruprasad_Rn has quit IRC | 12:55 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 12:55 | |
*** eN_Guruprasad_Rn has quit IRC | 13:03 | |
*** fnaval has quit IRC | 13:05 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 13:05 | |
*** eN_Guruprasad_Rn has quit IRC | 13:10 | |
*** rtjure has quit IRC | 13:15 | |
fzimmermann | Any hints on how to trigger a redeployment of a load balancer if the amphora got terminated and the database was already cleaned up? So how do I tell octavia: I know there are no running amphorae for your config, please create new ones? | 13:17 |
*** rtjure has joined #openstack-lbaas | 13:19 | |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 13:23 | |
*** AlexeyAbashkin has quit IRC | 13:23 | |
*** yamamoto has quit IRC | 13:24 | |
*** yamamoto has joined #openstack-lbaas | 13:28 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 13:45 | |
*** fnaval has joined #openstack-lbaas | 13:49 | |
johnsom | fzimmermann: Still arround? | 14:00 |
fzimmermann | yes | 14:00 |
fzimmermann | johnsom: hi | 14:01 |
fzimmermann | some background: we had network issues and our housekeeping-timeout cleaned all amphora-table entries. | 14:01 |
johnsom | Ok, they are all in provisioning status active? | 14:01 |
fzimmermann | the amphora? | 14:01 |
johnsom | But housekeeping would only clean them up if someone deleted them, not if there was a networking error | 14:02 |
johnsom | The load balancers | 14:02 |
johnsom | Are the load balancers in status Active? | 14:04 |
knsahm | the load balancers are in provisioning status active | 14:04 |
knsahm | yes | 14:04 |
knsahm | but no amphoras are working | 14:04 |
johnsom | Ok, is the health manager process running? | 14:04 |
johnsom | What version of octavia are you running? | 14:05 |
fzimmermann | Newton | 14:05 |
johnsom | What is the status of the health manager processes? O-hm | 14:06 |
fzimmermann | 0.9.2dev3 | 14:06 |
*** lutzb has joined #openstack-lbaas | 14:06 | |
johnsom | It should be attempting to failover those amphora and rebuild them | 14:06 |
fzimmermann | the health-manager is fine. If I create an amphora table entry (from backup) and a suitable amphora_health entry, the health-manager detects the problem and tries to create a new amphora. | 14:07 |
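The detection fzimmermann is leaning on here — the health manager treating a stale amphora_health heartbeat as a dead amphora and kicking off failover — can be sketched roughly like this (names, fields, and the timeout value are illustrative, not Octavia's actual code):

```python
from datetime import datetime, timedelta

# Hypothetical timeout; in Octavia this is a config option, not a constant.
HEARTBEAT_TIMEOUT = timedelta(seconds=60)

def needs_failover(last_update: datetime, now: datetime) -> bool:
    """Return True when an amphora_health row's heartbeat has gone stale."""
    return (now - last_update) > HEARTBEAT_TIMEOUT

def stale_amphorae(health_rows, now):
    """Collect the amphora IDs whose heartbeat expired; these get rebuilt."""
    return [row["amphora_id"] for row in health_rows
            if needs_failover(row["last_update"], now)]
```

This is why inserting a backup amphora row plus a suitably old amphora_health row is enough to trigger a rebuild: the staleness check fires on the next health-manager sweep.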
fzimmermann | yes, but its missing some ports | 14:07 |
knsahm | correction: neutron cli shows lbaas pending state active but the octavia table entries are "ERROR" | 14:08 |
johnsom | Ok, yeah, neutron has a habit of getting out of sync. We should focus on the octavia side for the recovery | 14:09 |
knsahm | yes | 14:09 |
johnsom | we can sync neutron after | 14:10 |
fzimmermann | sure, what could we do to trigger a new amphora-deployment? | 14:11 |
johnsom | Pick one load balancer that is in ERROR state in octavia DB, check the listener and pool to see if they are still active. | 14:12 |
fzimmermann | ok, give me a second. | 14:13 |
fzimmermann | ok, listener is ERR and pools are ACTIVE | 14:15 |
dayou | https://review.openstack.org/#/c/505851/ | 14:16 |
johnsom | So, when you created the amphora health entries, what port was missing? It should rebuild all of the ports except for the VIP which has the VIP IP address assigned | 14:17 |
dayou | Any idea about this patch? Since we are using pike now, and that's the only issue blocks us from creating a tls listener. | 14:18 |
fzimmermann | johnsom: octavia-lb-vrrp-b716ac46-6c26-4b39-bc5c-60ea6e51282c | 14:18 |
dayou | octavia_v2 only seems to work fine so far | 18:18 |
johnsom | dayou It just needs reviews and that comment addressed | 14:19 |
dayou | Alright, thanks | 14:19 |
*** armax has joined #openstack-lbaas | 14:19 | |
fzimmermann | vrrp_port_id | 14:20 |
johnsom | Ok, so that is the VIP port, that is not good. | 14:20 |
johnsom | Oh, wait, no, that is the base port for vrrp. That should be automatically re-built. It may have another IP address, but the VIP IP should still be ok | 14:22 |
johnsom | The load balancer you created a fake amphora and amphora_health record for, it is not up and functional? | 14:23 |
fzimmermann | right. I did the SQL inserts (dump, all values as they were in the backup) and the amphora got removed and recreated. Give me a minute, I will redo it and provide the correct errors. | 14:24 |
johnsom | Ok, yeah, that is the procedure I would follow as well. | 14:26 |
nmagnezi | johnsom, https://review.openstack.org/#/c/503638/ <-- that patch is cursed. I was stacking a node (a CentOS node) and now it fails because of an issue with the devstack plugin (specifically here https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L58-L63 ). will try to finalize it as soon as possible. | 14:27 |
johnsom | I can't say that I have tested a situation where I needed failover but the amphora database records were deleted. We run with a 7 day purge cycle in housekeeping, so failover should be done by then | 14:27 |
johnsom | nmagnezi I was wondering what was up with that patch. Want me to see if I can finish it up today? | 14:28 |
fzimmermann | johnsom: https://pastebin.com/qJnHLBHL | 14:28 |
*** spectr-RH has joined #openstack-lbaas | 14:30 | |
nmagnezi | johnsom, i will try to figure this out very soon, if not I'll let you know. it *should* be working fine, but I wanted to see it with my own eyes before I declare it's done (since it has a chance of breaking stuff) | 14:30 |
johnsom | Crumb, I think I have a bug I am working on for this issue. Can you paste the amphora database record for that amp? | 14:30 |
johnsom | nmagnezi Ok | 14:30 |
fzimmermann | johnsom: https://pastebin.com/PTkbwGA2 | 14:32 |
fzimmermann | maybe we should try the old "backup" amphora? | 14:32 |
*** spectr has quit IRC | 14:33 | |
fzimmermann | johnsom: ok that looks promising.. the backup amphora got deployed successfully | 14:35 |
johnsom | Is there any way in the error log post you can get the log lines before this taskflow error tree? I'm looking for the last few successful lines and then the error line. | 14:37 |
fzimmermann | johnsom: sure - one moment. | 14:38 |
*** eN_Guruprasad_Rn has quit IRC | 14:41 | |
fzimmermann | johnsom: https://pastebin.com/7WN7pGjH | 14:42 |
johnsom | Yeah, ok. So when you create those amphora records, the two ports are going to need to exist in neutron, so you will need to rebuild them or find them in neutron if they still exist. Would you like the neutron commands for that? | 14:47 |
fzimmermann | oh yeah - would be great! | 14:48 |
*** gcheresh has quit IRC | 14:49 | |
*** krypto has quit IRC | 14:51 | |
*** krypto has joined #openstack-lbaas | 14:52 | |
*** krypto has quit IRC | 14:57 | |
*** krypto has joined #openstack-lbaas | 14:57 | |
*** krypto has quit IRC | 14:57 | |
*** krypto has joined #openstack-lbaas | 14:57 | |
*** krypto has quit IRC | 15:02 | |
*** krypto has joined #openstack-lbaas | 15:03 | |
johnsom | This command will create the two ports: | 15:06 |
johnsom | neutron port-create --tenant-id <LB project/tenant ID> --name octavia-lb-vrrp-<amp ID> --security-group lb-<lb ID> --allowed-address-pair ip_address=<VIP IP address> <network ID for VIP> | 15:06 |
fzimmermann | johnsom: thanks a lot! I will try it | 15:07 |
fzimmermann | what should we do to get neutron back in sync? | 15:07 |
johnsom | The ID of this port is the ID that will go into the "vrrp_port_id" field. If you do a 'neutron port-list | grep <VIP IP>' you can see the ID of the "ha_port_id" | 15:08 |
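When recovering more than one amphora, the port-create call johnsom gave above is worth scripting. A tiny helper that assembles the exact CLI invocation as an argv list (the argument layout mirrors his command verbatim; the function name and all IDs are made up):

```python
def build_vrrp_port_create(project_id, amp_id, lb_id, vip_ip, vip_net_id):
    """Assemble the neutron port-create call for a rebuilt amphora's
    base (vrrp) port, as an argv list.

    All IDs are caller-supplied; nothing here talks to neutron. The new
    port's ID is what goes into the amphora row's vrrp_port_id column.
    """
    return [
        "neutron", "port-create",
        "--tenant-id", project_id,
        "--name", "octavia-lb-vrrp-%s" % amp_id,
        "--security-group", "lb-%s" % lb_id,
        "--allowed-address-pair", "ip_address=%s" % vip_ip,
        vip_net_id,
    ]
```

The ha_port_id is then found separately, as described above, by grepping `neutron port-list` for the VIP IP.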
johnsom | You will want to have the event streamer setting enabled with "queue_event_streamer" https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.event_streamer_driver | 15:10 |
johnsom | This setting assumes that your neutron-lbaas is using the same rabbit queue as octavia | 15:10 |
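For reference, the setting johnsom points at, as it would appear in octavia.conf (a sketch; the section name matches the config reference linked above, while the transport_url line is an assumption and must match whatever broker neutron-lbaas consumes from):

```ini
[health_manager]
# Stream amphora/LB status events back to neutron-lbaas over the queue.
event_streamer_driver = queue_event_streamer

# The oslo.messaging transport must be the same RabbitMQ that
# neutron-lbaas listens on (hypothetical URL shown):
# transport_url = rabbit://user:pass@rabbit-host:5672/
```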
*** salmankhan has quit IRC | 15:11 | |
fzimmermann | we already have this setting enabled: /etc/octavia/octavia.conf:event_streamer_driver=queue_event_streamer | 15:11 |
fzimmermann | looks like we are not using the same queue, right? | 15:11 |
johnsom | If it is setup you will see debug messages in neutron service log for the messages coming off the queue. | 15:12 |
johnsom | There is a patch up for queens that allows separate queues, but that won't help here. | 15:13 |
johnsom | You can also do a "neutron lbaas-loadbalancer-update" call to set something, like the description or do an enable/disable cycle on an object. That should force neutron to get back in sync | 15:14 |
*** eN_Guruprasad_Rn has joined #openstack-lbaas | 15:22 | |
*** salmankhan has joined #openstack-lbaas | 15:23 | |
fzimmermann | johnsom: yes got notification in neutron after lb-update-call. Thanks a lot for your support! | 15:31 |
fzimmermann | johnsom: maybe one last question: currently we only have one amphora back. How would you tell octavia to create another (BACKUP) one? | 15:34 |
fzimmermann | same as above? Create ports and suitable amphora-lines? | 15:35 |
johnsom | yes, same as above | 15:36 |
*** slaweq has quit IRC | 15:38 | |
*** bzhao has quit IRC | 15:38 | |
*** yamamoto has quit IRC | 15:41 | |
*** spectr has joined #openstack-lbaas | 15:41 | |
*** yamamoto has joined #openstack-lbaas | 15:43 | |
*** spectr-RH has quit IRC | 15:44 | |
*** yamamoto has quit IRC | 15:48 | |
*** AlexeyAbashkin has quit IRC | 15:53 | |
*** spectr has quit IRC | 15:55 | |
*** yamamoto has joined #openstack-lbaas | 16:04 | |
*** yamamoto has quit IRC | 16:04 | |
*** links has joined #openstack-lbaas | 16:10 | |
johnsom | Getting better: | 16:12 |
johnsom | https://www.irccloud.com/pastebin/kVgeno8y/ | 16:12 |
johnsom | Digging in to find out what went wrong | 16:13 |
*** eN_Guruprasad_Rn has quit IRC | 16:13 | |
*** gcheresh has joined #openstack-lbaas | 16:14 | |
*** gcheresh has quit IRC | 16:34 | |
*** links has quit IRC | 16:35 | |
*** yamamoto has joined #openstack-lbaas | 17:05 | |
*** fzimmermann has quit IRC | 17:06 | |
*** yamamoto has quit IRC | 17:12 | |
Alex_Staf_ | johnsom, this is the plugin test ? | 17:24 |
johnsom | yes | 17:25 |
Alex_Staf_ | which one fails ? | 17:25 |
johnsom | octavia_tempest_plugin.tests.v2.scenario.test_basic_ops.BasicOpsTest.test_basic_ops | 17:25 |
johnsom | I have posted some comments on the patch | 17:25 |
Alex_Staf_ | johnsom, I ran it and I posted comments regarding why it failed for me | 17:26 |
*** sshank has joined #openstack-lbaas | 17:26 | |
johnsom | Yeah, I am seeing slightly different results | 17:26 |
Alex_Staf_ | johnsom, ok. I am running it in a cloud and not devstack FYI | 17:27 |
johnsom | Yeah, I am on devstack. Probably the difference | 17:27 |
Alex_Staf_ | johnsom, it is a tough one to debug with ipdb | 17:27 |
*** sshank has quit IRC | 17:29 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 17:31 | |
*** AlexeyAbashkin has quit IRC | 17:35 | |
*** sshank has joined #openstack-lbaas | 17:36 | |
*** knsahm has left #openstack-lbaas | 17:42 | |
*** lutzb has quit IRC | 17:55 | |
*** SumitNaiksatam has joined #openstack-lbaas | 18:01 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 18:09 | |
*** pcaruana has quit IRC | 18:10 | |
*** AlexeyAbashkin has quit IRC | 18:14 | |
*** tesseract has quit IRC | 18:25 | |
*** gcheresh has joined #openstack-lbaas | 19:02 | |
*** Alex_Staf_ has quit IRC | 19:03 | |
*** salmankhan has quit IRC | 19:48 | |
johnsom | rm_work Not sure what is going on here, looks like it is progressing before the LB goes active? | 19:49 |
johnsom | https://www.irccloud.com/pastebin/ppjqiFX1/ | 19:49 |
*** SumitNaiksatam has quit IRC | 20:10 | |
rm_work | johnsom: what is this i'm looking at? | 20:15 |
rm_work | i mean, is this a single-create? | 20:15 |
johnsom | Yeah, the tempest test from the o-cw log | 20:15 |
rm_work | hmmm | 20:16 |
*** atoth has quit IRC | 20:16 | |
rm_work | ahhhh i see what you mean | 20:16 |
rm_work | on the create listener | 20:16 |
rm_work | err, create pool | 20:16 |
rm_work | actually all of it | 20:16 |
rm_work | so the LB goes active from the create... and then the pool create starts... and then ALSO somehow the listener and member creates come in? before it goes active? | 20:17 |
rm_work | i don't even see how this is possible on the worker side | 20:17 |
rm_work | the tempest stuff doesn't use single-create | 20:17 |
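The ordering rm_work expects the test to enforce — block until the LB's provisioning_status returns to ACTIVE before issuing the next create — is the usual tempest waiter pattern. A generic sketch (the injected show_loadbalancer callable stands in for the plugin's actual client, which is not shown here):

```python
import time

def wait_for_active(show_loadbalancer, lb_id, timeout=300, interval=1,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll provisioning_status until ACTIVE; raise on ERROR or timeout.

    show_loadbalancer is any callable returning a dict with a
    'provisioning_status' key (injected to keep the sketch generic and
    testable). clock/sleep are injectable for the same reason.
    """
    deadline = clock() + timeout
    while True:
        status = show_loadbalancer(lb_id)["provisioning_status"]
        if status == "ACTIVE":
            return
        if status == "ERROR":
            raise RuntimeError("load balancer %s went to ERROR" % lb_id)
        if clock() > deadline:
            raise TimeoutError("LB %s not ACTIVE after %ss" % (lb_id, timeout))
        sleep(interval)
```

If the log shows a pool or member create starting while the LB is still PENDING_UPDATE, either this wait was skipped or the API accepted the call when it should have returned 409.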
johnsom | Yeah, the log is confusing, trying to debug a bit now. I think I just found my problem | 20:18 |
rm_work | O_o | 20:19 |
*** Alex_Staf_ has joined #openstack-lbaas | 20:20 | |
johnsom | I changed the path but didn't notice you have a different path in the screen command | 20:20 |
johnsom | https://www.irccloud.com/pastebin/Mx1a0iXP/ | 20:20 |
rm_work | ? | 20:22 |
rm_work | ah ok so /dev/shm is actually a mountpoint | 20:22 |
rm_work | so yeah we could do that i guess if that's consistent across distros | 20:22 |
rm_work | or i guess it could be ... configurable :P | 20:23 |
rm_work | defaulting to that or something | 20:23 |
rm_work | johnsom: i remember kong had the same issue with space as you did but somehow he fixed it?? | 20:23 |
*** slaweq has joined #openstack-lbaas | 20:26 | |
*** sshank has quit IRC | 20:41 | |
johnsom | rm_work Yeah, after I fixed that the test passed | 20:44 |
rm_work | k | 20:44 |
*** sshank has joined #openstack-lbaas | 20:45 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE with LVS - DIB https://review.openstack.org/499807 | 20:49 |
johnsom | rm_work With those last comments I will be ok to +2 | 20:51 |
Alex_Staf_ | rm_work, I tested the ssh in test_basic_ops and compared it to another neutron test. Regarding your comment about the user: it is possible. I compared it to another test and it worked for me, but the keypair there is without a user id and such | 21:02 |
Alex_Staf_ | rm_work, https://github.com/openstack/neutron/blob/master/neutron/tests/tempest/scenario/test_trunk.py#L74 is what I compared against. The VM here booted with a keypair (nova show shows the key, not "-") | 21:04 |
*** AlexeyAbashkin has joined #openstack-lbaas | 21:08 | |
*** AlexeyAbashkin has quit IRC | 21:13 | |
*** salmankhan has joined #openstack-lbaas | 21:30 | |
*** gcheresh has quit IRC | 21:31 | |
*** salmankhan has quit IRC | 21:35 | |
*** yamamoto has joined #openstack-lbaas | 21:51 | |
*** leitan has quit IRC | 21:52 | |
*** rcernin has joined #openstack-lbaas | 22:01 | |
*** AlexeyAbashkin has joined #openstack-lbaas | 22:08 | |
*** AlexeyAbashkin has quit IRC | 22:13 | |
*** rohara has quit IRC | 22:43 | |
*** fnaval has quit IRC | 22:59 | |
*** slaweq has quit IRC | 23:09 | |
*** salmankhan has joined #openstack-lbaas | 23:10 | |
*** slaweq has joined #openstack-lbaas | 23:12 | |
*** salmankhan has quit IRC | 23:14 | |
*** fnaval has joined #openstack-lbaas | 23:15 | |
*** slaweq has quit IRC | 23:16 | |
*** rcernin has quit IRC | 23:17 | |
*** slaweq has joined #openstack-lbaas | 23:41 | |
*** links has joined #openstack-lbaas | 23:44 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Make the allowed_address_pairs driver better https://review.openstack.org/517455 | 23:52 |
*** cyberde has quit IRC | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!