*** yamamoto has joined #openstack-lbaas | 00:08 | |
*** yamamoto has quit IRC | 01:03 | |
*** sapd1 has joined #openstack-lbaas | 01:39 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add an option to the Octavia V2 listener API for client cert https://review.openstack.org/612268 | 02:55 |
*** sapd1 has quit IRC | 03:05 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Speed up pylint by using multiple cores https://review.openstack.org/637348 | 03:06 |
johnsom | Use the cores Luke.... | 03:08 |
*** KeithMnemonic has quit IRC | 03:18 | |
*** sapd1 has joined #openstack-lbaas | 03:20 | |
*** Dinesh_Bhor has joined #openstack-lbaas | 03:45 | |
*** eandersson has left #openstack-lbaas | 04:05 | |
*** yamamoto has joined #openstack-lbaas | 05:01 | |
*** yamamoto has quit IRC | 05:06 | |
*** dayou has quit IRC | 05:25 | |
*** fnaval_ has joined #openstack-lbaas | 05:38 | |
*** sapd1 has quit IRC | 05:40 | |
*** Dinesh_Bhor has quit IRC | 05:41 | |
*** fnaval has quit IRC | 05:41 | |
*** ltomasbo has quit IRC | 05:41 | |
*** strigazi has quit IRC | 05:41 | |
*** problem_v has quit IRC | 05:41 | |
*** mloza has quit IRC | 05:41 | |
*** strigazi has joined #openstack-lbaas | 05:41 | |
*** dayou has joined #openstack-lbaas | 05:44 | |
*** problem_v has joined #openstack-lbaas | 05:45 | |
*** dayou has quit IRC | 05:49 | |
*** dayou has joined #openstack-lbaas | 05:50 | |
*** yamamoto has joined #openstack-lbaas | 06:23 | |
*** psachin has joined #openstack-lbaas | 07:37 | |
*** phuoc has quit IRC | 07:49 | |
*** psachin has quit IRC | 08:01 | |
*** psachin has joined #openstack-lbaas | 08:06 | |
*** sapd1 has joined #openstack-lbaas | 09:03 | |
*** sapd1 has quit IRC | 09:11 | |
*** yamamoto has quit IRC | 09:13 | |
*** AlexStaf has quit IRC | 11:15 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia-tempest-plugin master: Add an active/standby scenario test https://review.openstack.org/584681 | 11:16 |
*** salmankhan has joined #openstack-lbaas | 11:24 | |
*** salmankhan has quit IRC | 12:53 | |
*** salmankhan has joined #openstack-lbaas | 17:04 | |
*** salmankhan has quit IRC | 17:09 | |
*** ipo has joined #openstack-lbaas | 17:25 | |
ipo | Hello All ! | 17:26 |
*** salmankhan has joined #openstack-lbaas | 18:16 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Speed up pylint by using multiple cores https://review.openstack.org/637348 | 18:50 |
*** salmankhan has quit IRC | 19:13 | |
ipo | johnsom: Hello ! | 19:49 |
johnsom | Hi | 19:50 |
ipo | johnsom: do you have some time to answer questions ? | 19:50 |
ipo | About octavia arch... as usual | 19:51 |
johnsom | I have a little, it's the weekend here, just trying to catch up on some stuff for Stein | 19:51 |
ipo | ok, thanks ! I have the same weekend as you do, I suppose... And I hope you'll just say when you're finished with questions and we'll stop for today. | 19:55 |
johnsom | Ok | 19:55 |
ipo | So what do you think about using legacy mgmt network for octavia services container ? | 19:57 |
johnsom | I don't think I follow the question. Is it about using an existing mgmt network for the octavia lb-mgmt-net? | 19:58 |
ipo | Exact | 19:58 |
johnsom | Should not be a problem. Just make sure the subnet is sized appropriately | 19:58 |
johnsom | All of the traffic on lb-mgmt-net is IP based, there is no L2 requirement | 19:59 |
ipo | yes, but I was thinking there might be reasons, like heavy traffic, to split it onto a dedicated network | 20:00 |
ipo | usually all infra services communicate to each other through mgmt | 20:00 |
johnsom | No, lb-mgmt-net is not that much traffic. The tenant traffic that is load balanced never touches the lb-mgmt-net. | 20:01 |
ipo | ya, I know about net namespaces and so on | 20:01 |
ipo | so if we have routing from mgmt to amphorae network it should be fine | 20:02 |
johnsom | It carries the command/control from the controllers, and then the heartbeats back from the amps. So the bulk of the traffic is one UDP packet every 10 seconds (or whatever you set your heartbeat interval to) from each amp. | 20:02 |
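The heartbeat cadence described above is driven by octavia.conf. A minimal fragment as a sketch (the option names are real `[health_manager]` settings; the key value is a placeholder you must replace):

```ini
[health_manager]
# Interval (seconds) at which each amphora sends its UDP heartbeat;
# 10 is the default johnsom mentions above.
heartbeat_interval = 10
# Shared secret used to sign heartbeat packets; must match on
# controllers and amphorae.
heartbeat_key = insecure-change-me
```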
johnsom | Yes, routing works just fine | 20:02 |
ipo | Are there any code examples to achieve this goal with OSA ? | 20:03 |
johnsom | No | 20:04 |
ipo | As I see it uses special bridge, and eth14 as container interface | 20:04 |
johnsom | Yeah, the OSA default is a pretty complicated setup IMO | 20:06 |
ipo | ok. I will dive into OSA playbooks to try to figure out | 20:06 |
johnsom | I think there is talk of simplifying that, but I don't know what the OSA team timeline is for that. | 20:07 |
*** psachin has quit IRC | 20:09 | |
ipo | Do you have some suggestions on where to place octavia services for a production env ? I noticed, when deploying it on one node, that it starts to consume many cores | 20:09 |
ipo | especially health check processes | 20:10 |
ipo | and the octavia container even starts to hang in the CLI | 20:11 |
johnsom | Then something is wrong with your deployment/configuration | 20:12 |
johnsom | That is not normal. | 20:13 |
ipo | It shouldn't take so much ? I see some config parameters to limit the number of threads for health check | 20:13 |
johnsom | Especially if you only have one load balancer or none, it should have lower load than nova usually uses | 20:13 |
johnsom | For me, it occasionally pops up to 0.7 in top, but nova and glance are all up over 1.0 | 20:15 |
johnsom | I would start by looking in the logs and seeing what it is doing. It's likely some mis-configuration like bad DB connection string. | 20:16 |
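The "bad DB connection string" scenario above also lives in octavia.conf; a sketch of the relevant fragment (host, user, and password are placeholders for your deployment):

```ini
[database]
# SQLAlchemy connection URL for the Octavia database. A wrong host or
# bad credentials here can leave the health manager spinning on
# connection retries, which shows up as the high CPU load discussed above.
connection = mysql+pymysql://octavia:PASSWORD@db-host/octavia
```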
johnsom | Also, I don't know what version you are running of octavia, but make sure you have the bug fix releases | 20:16 |
ipo | we are testing Queens just for a first look. | 20:17 |
ipo | to get feel of it | 20:17 |
johnsom | Ok, make sure you have the 2.0.4 release | 20:17 |
johnsom | I know OSA updated the SHAs, but I don't know when you deployed | 20:18 |
ipo | And from your point of view, should we separate the octavia services to dedicated infra node in case of 500-1000 amphorae ? | 20:18 |
ipo | So as you say about nova I suppose the following estimation: | 20:19 |
johnsom | No, that would be a huge waste. The most important thing to have on dedicated nodes is your database. That is where I have seen OSA deployments have trouble, the mysql container gets overloaded | 20:19 |
johnsom | Well, I am running devstack when I looked at top here. So everything is in one VM | 20:20 |
johnsom | https://www.irccloud.com/pastebin/LRBahIEr/ | 20:21 |
ipo | Thx | 20:21 |
ipo | So basically I could estimate the following in general: if nova services on 3 infra nodes support 2000+ VMs, octavia will not consume much more for 2000 amphorae ? | 20:22 |
johnsom | For the controllers, I recommend you deploy three instances of each, that way you can do rolling upgrades easily | 20:22 |
ipo | So we have to get controllers sized for 3000 VMs (2000 legacy + 1000 Amphorae) | 20:24 |
ipo | and also extra CPU cores in controllers to support octavia stuff | 20:25 |
ipo | same as for nova to support +1000 VMs | 20:25 |
ipo | :) | 20:25 |
*** salmankhan has joined #openstack-lbaas | 20:26 | |
johnsom | Yeah, 2000 amps (1000 LBs in act/stdby) will have some HM CPU load. As long as your cloud is stable and not failing instances all the time, it should not be a big deal. Just make sure you have a good database that is maintained well. | 20:29 |
johnsom | Sorry, it is hard to judge what is needed for different deployments. I do have the HM stress tool if you want to use that. I think I mentioned that once before | 20:30 |
ipo | And this stress tool will load the HM octavia service ? Is it somewhere in the octavia git repo ? | 20:32 |
johnsom | It is not the best code, so I left it in my personal github: https://github.com/johnsom/stresshm | 20:33 |
johnsom | I put it together very quickly to be able to simulate large numbers of amphorae against an HM | 20:33 |
ipo | Do you have any tests results somewhere to compare it with ? | 20:37 |
johnsom | Again, I highly recommend you make sure you have the 2.0.4 octavia if you are running queens | 20:37 |
ipo | ok - I'll highlight it in my notes | 20:37 |
johnsom | No, I couldn't get above 4000 amps or so, I don't have enough CPU available to run the test tool up to large loads | 20:37 |
johnsom | The HM was handling all I could throw at it, I just didn't have enough resource to power the test tool | 20:38 |
ipo | Ok, I will try to dig into the stress test tool, ask you questions, and try to execute tests | 20:39 |
ipo | And I hope it will be helpful for sizing calculations in the future | 20:39 |
johnsom | Pretty much just set up the conf file, then put in the number of LBs you want to simulate. The sub-resources, like members, are how many per-LB it should simulate | 20:40 |
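What a heartbeat stress tool like stresshm simulates can be sketched as follows. This is a toy illustration only, not Octavia's actual wire format or stresshm's code; every name here (the key, the payload fields, the functions) is made up for the example. The idea is the same: many fake amphorae each send a small HMAC-signed UDP packet to the health manager's listening port.

```python
import hashlib
import hmac
import json
import socket
import time

# Shared secret, analogous in spirit to [health_manager] heartbeat_key
# in octavia.conf (the value here is a demo placeholder).
HEARTBEAT_KEY = b"insecure-demo-key"

def build_heartbeat(amphora_id: str) -> bytes:
    """Serialize a minimal status payload and append an HMAC-SHA256 digest."""
    payload = json.dumps({"id": amphora_id, "ts": int(time.time())}).encode()
    digest = hmac.new(HEARTBEAT_KEY, payload, hashlib.sha256).digest()
    return payload + digest

def verify_heartbeat(packet: bytes) -> dict:
    """Split payload from the 32-byte digest, check the HMAC, return the payload."""
    payload, digest = packet[:-32], packet[-32:]
    expected = hmac.new(HEARTBEAT_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("bad heartbeat signature")
    return json.loads(payload)

def send_heartbeats(n_amps: int, dest=("127.0.0.1", 5555)) -> int:
    """Fire one heartbeat per simulated amphora; returns the number sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(n_amps):
        sock.sendto(build_heartbeat(f"amp-{i}"), dest)
    sock.close()
    return n_amps
```

A real stress run would loop `send_heartbeats` on the heartbeat interval and scale `n_amps` up until the health manager under test stops keeping up.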
ipo | so it will be my work to help you somehow | 20:40 |
johnsom | Yes, it is always nice to get feedback | 20:40 |
johnsom | Everyone's cloud infrastructure is different, so it's really hard to give specifics. | 20:40 |
ipo | ok, thanks. I'm finished with questions. I will start looking into OSA internals | 20:43 |
johnsom | Ok, good luck! | 20:43 |
ipo | have a nice evening ! | 20:44 |
*** ipo has left #openstack-lbaas | 20:45 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add client_ca_tls_container_ref to listener API https://review.openstack.org/612267 | 21:11 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add an option to the Octavia V2 listener API for client cert https://review.openstack.org/612268 | 21:15 |
*** salmankhan has quit IRC | 21:33 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/queens: Fix check redirect pool for creating a fully populated load balancer. https://review.openstack.org/637396 | 21:38 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/rocky: Add missing import octavia/opts.py https://review.openstack.org/637397 | 21:39 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/queens: Add missing import octavia/opts.py https://review.openstack.org/637398 | 21:40 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/rocky: Fix possible state machine hole in failover https://review.openstack.org/637399 | 21:44 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/queens: Fix possible state machine hole in failover https://review.openstack.org/637400 | 21:44 |
*** hongbin has joined #openstack-lbaas | 21:44 | |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/rocky: Simplify keepalived lvsquery parsing for UDP https://review.openstack.org/637401 | 21:47 |
openstackgerrit | Carlos Goncalves proposed openstack/octavia stable/queens: Ensure pool object contains the listener_id if passed https://review.openstack.org/637402 | 21:49 |
johnsom | Looks like we are having an outside issue again, the canary is chirping. Some of the scenario gates aren't able to boot the cirros web servers... They were both on vexxhost sjc1, so maybe it's just an issue there | 22:22 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Add an option to the Octavia V2 listener API for client cert https://review.openstack.org/612268 | 22:49 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!