*** yamamoto has quit IRC | 00:03 | |
*** yamamoto has joined #openstack-lbaas | 00:08 | |
*** longkb has joined #openstack-lbaas | 00:52 | |
*** phuoc_ has quit IRC | 00:52 | |
*** phuoc_ has joined #openstack-lbaas | 00:53 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP jinja template https://review.openstack.org/525420 | 01:02 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [2] https://review.openstack.org/529651 | 01:02 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [3][5][6] https://review.openstack.org/539391 | 01:02 |
johnsom | bzhao__ bbbzhao_ Sorry, an issue came up today that I had to deal with, where the barbican client has a bug. Looking at this now. I'm not sure we want to re-order flow items as they are very specific. I will look at the changes, but it makes me worry. | 01:09 |
johnsom | PostVIPPlug is a step after the VIP has been plugged into the amphora that is used to configure the interfaces. | 01:10 |
bzhao__ | johnsom: Never mind. I understand. ;-) . Let me check again the code. | 01:11 |
johnsom | I can look too. I plan to focus on UDP for the next four hours or so | 01:11 |
bzhao__ | johnsom: But the task "PostVIPPlug" will call plug_vip on the agent side. The agent side will plug the VIP interface into the namespace and configure some things. Reading the system default kernel configuration happens there too, so I added the necessary kernel configuration for UDP there as well. These are all based on the generic LB create flow -> listener create flow, and the failover flow doesn't seem to follow that. Maybe I'm not right | 01:18 |
bzhao__ | here. ;-) | 01:18 |
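A sketch of the kind of per-namespace kernel setup bzhao__ describes plug_vip doing for UDP. The sysctl keys and the helper name below are assumptions for illustration, not the exact set the amphora agent configures.

```python
import subprocess

# Assumed, illustrative LVS/UDP-related keys; the real list lives in the
# amphora agent's plug_vip path.
UDP_SYSCTLS = {
    "net.ipv4.ip_nonlocal_bind": "1",  # let keepalived bind the floating VIP
    "net.ipv4.vs.conntrack": "1",      # connection tracking for LVS
}

def configure_udp_sysctls(namespace: str) -> None:
    """Apply UDP/LVS kernel settings inside the amphora network namespace."""
    for key, value in UDP_SYSCTLS.items():
        subprocess.check_call(
            ["ip", "netns", "exec", namespace,
             "sysctl", "-w", f"{key}={value}"])
```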
*** abaindur has quit IRC | 01:19 | |
johnsom | Ah, yes, ok, I misunderstood what you changed the order of. | 01:19 |
bzhao__ | <- Bad description. ;-) | 01:20 |
*** atoth has quit IRC | 01:27 | |
*** ravioli16 has joined #openstack-lbaas | 01:38 | |
*** ravioli16 has quit IRC | 01:38 | |
*** c13 has joined #openstack-lbaas | 01:39 | |
*** OwenBarfield has joined #openstack-lbaas | 01:39 | |
*** ATDT91121 has joined #openstack-lbaas | 01:40 | |
*** OwenBarfield has quit IRC | 01:40 | |
*** ATDT91121 has quit IRC | 01:40 | |
*** c13 has quit IRC | 01:40 | |
johnsom | Joy, spam | 01:41 |
*** puff has joined #openstack-lbaas | 01:43 | |
*** ChanServ sets mode: +r | 01:44 | |
johnsom | Well, we can try that... | 01:45 |
johnsom | see if it stops the spam bots for a bit | 01:46 |
*** hongbin has joined #openstack-lbaas | 01:47 | |
bzhao__ | Seems another trusted auth is needed. ;-) | 01:48 |
*** puff has quit IRC | 01:48 | |
*** hongbin_ has joined #openstack-lbaas | 01:51 | |
*** hongbin has quit IRC | 01:52 | |
johnsom | bzhao__ I am going to start creating bugs for issues I see. These don't need to be fixed before merge, just before final Rocky release candidate. So please don't panic | 02:13 |
johnsom | grin | 02:13 |
bzhao__ | johnsom: Thanks, Michael. I will keep an eye on the bugs and try to fix them. | 02:46 |
johnsom | bzhao__ I am tagging them with UDP so they will show in this list: https://storyboard.openstack.org/#!/story/list?status=active&tags=UDP | 02:46 |
bzhao__ | Yeah, thanks, that's pretty nice. | 02:47 |
bzhao__ | Good for search. ;-) | 02:47 |
johnsom | The member down showing as "DRAINING" is probably the most important so far | 02:48 |
bzhao__ | Yeah, as I just get the info from the config file/kernel file. hmm | 02:50 |
johnsom | We may want to use notify_up/notify_down scripts to write a status file per member, and not use inhibit_on_failure | 02:51 |
johnsom | Or just check known members against the current member list. if it is missing, it is down | 02:52 |
johnsom | again not using inhibit_on_failure | 02:52 |
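A minimal sketch of the notify-script idea, assuming keepalived invokes it via notify_up/notify_down with the member address and state as arguments; the path, argument order, and file format are illustrative assumptions. The health daemon could then read these files instead of inferring member state from inhibit_on_failure weights, which is what makes a down member look like DRAINING.

```python
#!/usr/bin/env python3
# Hypothetical keepalived notify script: write one status file per member.
import json
import sys

STATUS_DIR = "/var/lib/octavia/lvs-member-status"  # assumed location

def main() -> None:
    member_ip, state = sys.argv[1], sys.argv[2]  # e.g. "10.0.0.5", "UP"
    with open(f"{STATUS_DIR}/{member_ip}.json", "w") as status_file:
        json.dump({"member": member_ip, "status": state}, status_file)

if __name__ == "__main__":
    main()
```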
bzhao__ | I see. That's mine. Makes sense now. | 02:52 |
bzhao__ | Lacks the practise. ;-( | 02:53 |
bzhao__ | experience.. | 02:53 |
johnsom | No worries | 02:54 |
*** hongbin_ has quit IRC | 02:54 | |
bzhao__ | Owing to your kind direction. ;-). ha | 02:56 |
*** yamamoto has quit IRC | 03:00 | |
*** yamamoto has joined #openstack-lbaas | 03:06 | |
*** yamamoto has quit IRC | 03:14 | |
*** yamamoto has joined #openstack-lbaas | 03:26 | |
*** yamamoto has quit IRC | 03:34 | |
*** yamamoto has joined #openstack-lbaas | 03:36 | |
*** yamamoto has quit IRC | 04:03 | |
*** yamamoto has joined #openstack-lbaas | 04:04 | |
johnsom | bzhao__ Is this the start of your day? I have some comments on one of the patches. Not sure if you can address tonight or if I should be doing them. | 04:04 |
bzhao__ | johnsom: Here is the time for lunch. I don't find the comments. Let me have a try. ;-). | 04:07 |
johnsom | bzhao__ I have not posted them yet | 04:07 |
bzhao__ | johnsom: Ok. | 04:07 |
johnsom | bzhao__ have lunch and I will post them when you return. Just seeing if you have time to do a couple of fixes today. | 04:07 |
johnsom | This way I can keep reviewing/testing instead of stopping to fix | 04:08 |
johnsom | I have about one hour left tonight I can work. | 04:08 |
bzhao__ | No rest is OK for me. It's time for me to fight for octavia. | 04:09 |
johnsom | grin | 04:09 |
bzhao__ | ;-) | 04:09 |
johnsom | Sorry reviews have been slow on this. We had a lot to get done in Rocky that distracted. | 04:09 |
bzhao__ | Never mind, I real understand. I have done a very gread job in Network Zone of OpenStack.. The rank 1st is you. | 04:10 |
bzhao__ | nonono | 04:11 |
bzhao__ | YOU have done a very gread job in Network Zone of OpenStack.. The rank 1st is you. | 04:11 |
johnsom | Ha | 04:11 |
bzhao__ | what a trerrible mistake I make... | 04:11 |
bzhao__ | s/gread/great/ | 04:12 |
bzhao__ | =.= | 04:12 |
johnsom | No worries | 04:13 |
bzhao__ | ;-). I will leave for lunch. Thanks very much, Michael. | 04:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Followup patch for UDP support https://review.openstack.org/587690 | 05:24 |
johnsom | bzhao__ Ok, I have commented on patch 1 and 3, I will review 2 again tomorrow. I have also created a new patch that adds the release notes, API-ref update, and removes the misc_dynamic setting. | 05:25 |
johnsom | I think the biggest issue might be the one-packet-scheduling setting. I think we don't need that as a separate SP type. It would be the "default" setting, and with it removed the explicit option would be "SOURCE_IP" | 05:26 |
johnsom | Overall they are minor comments. Your validation and tests are very good. | 05:26 |
johnsom | One packet scheduling will behave like a load balancer with no session persistence defined. | 05:28 |
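In keepalived's LVS configuration this maps onto the real virtual_server options ops (one-packet scheduling) versus persistence_timeout. A sketch in the spirit of Octavia's jinja templating, with illustrative template text rather than the project's actual template:

```python
from jinja2 import Template

UDP_VS_TEMPLATE = Template("""
virtual_server {{ vip }} {{ port }} {
    protocol UDP
{%- if session_persistence %}
    persistence_timeout {{ timeout }}
{%- else %}
    ops
{%- endif %}
}
""")

# No session persistence set -> one-packet scheduling, the "default" case.
print(UDP_VS_TEMPLATE.render(vip="10.0.0.3", port=53,
                             session_persistence=False))
```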
openstackgerrit | Michael Johnson proposed openstack/octavia master: Followup patch for UDP support https://review.openstack.org/587690 | 05:45 |
johnsom | Ok, time for sleep. Catch you all in the morning | 05:50 |
*** hvhaugwitz has quit IRC | 05:55 | |
*** hvhaugwitz has joined #openstack-lbaas | 05:55 | |
*** dims has quit IRC | 06:13 | |
*** dims has joined #openstack-lbaas | 06:15 | |
bzhao__ | johnsom: Thanks, Michael, have a good rest. | 06:21 |
*** dims has quit IRC | 06:22 | |
*** dims has joined #openstack-lbaas | 06:25 | |
*** pcaruana has joined #openstack-lbaas | 06:42 | |
*** ispp has joined #openstack-lbaas | 06:57 | |
dmellado | johnsom: how did you manage to stop the spam xD | 06:58 |
dmellado | damn xD | 06:59 |
*** rcernin has quit IRC | 07:05 | |
*** rpittau has quit IRC | 07:24 | |
*** zigo_ is now known as zigo | 07:26 | |
*** threestrands has quit IRC | 07:26 | |
*** velizarx has joined #openstack-lbaas | 07:33 | |
*** nmagnezi has joined #openstack-lbaas | 07:34 | |
*** yamamoto has quit IRC | 07:54 | |
*** velizarx has quit IRC | 07:59 | |
*** yamamoto has joined #openstack-lbaas | 08:01 | |
*** velizarx has joined #openstack-lbaas | 08:04 | |
*** ivve has quit IRC | 08:40 | |
*** salmankhan has joined #openstack-lbaas | 08:56 | |
*** thomasem has quit IRC | 09:28 | |
*** yamamoto has quit IRC | 09:37 | |
*** nmagnezi has quit IRC | 09:38 | |
*** yamamoto has joined #openstack-lbaas | 09:38 | |
*** yamamoto has quit IRC | 09:41 | |
*** yamamoto has joined #openstack-lbaas | 09:45 | |
*** yamamoto has quit IRC | 09:46 | |
*** yamamoto has joined #openstack-lbaas | 09:53 | |
*** ispp has quit IRC | 10:02 | |
*** longkb has quit IRC | 10:14 | |
*** jiteka has quit IRC | 10:22 | |
*** jiteka- has quit IRC | 10:22 | |
*** yamamoto has quit IRC | 10:34 | |
*** yamamoto has joined #openstack-lbaas | 11:08 | |
bzhao__ | johnsom: errrrrr, maybe I made a mistake removing all of OPS. When I see the comment in https://review.openstack.org/#/c/539391/39/octavia/db/migration/alembic_migrations/versions/76aacf2e176c_extend_support_udp_protocol.py@59, you suggest OPS as the "default" setting when pool.session_persistence does not exist, is that right? If yes, I think we don't need SESSION_PERSISTENCE_ONE_PACKET_SCHEDULING anymore, as in the | 11:12 |
bzhao__ | previous way the OPS type could only be set from the API, but right now we set it as the default if there is no pool.session_persistence. Ahhhh, I hope my thought is the same as yours. | 11:12 |
*** threestrands has joined #openstack-lbaas | 11:15 | |
*** rpittau has joined #openstack-lbaas | 11:33 | |
*** nmagnezi has joined #openstack-lbaas | 11:33 | |
*** velizarx has quit IRC | 11:40 | |
*** yamamoto has quit IRC | 11:48 | |
*** yamamoto has joined #openstack-lbaas | 11:51 | |
*** velizarx has joined #openstack-lbaas | 11:59 | |
*** KeithMnemonic has quit IRC | 12:04 | |
*** KeithMnemonic has joined #openstack-lbaas | 12:04 | |
*** nmagnezi has quit IRC | 12:21 | |
*** nmagnezi has joined #openstack-lbaas | 12:24 | |
*** openstack has joined #openstack-lbaas | 12:59 | |
*** barjavel.freenode.net sets mode: +ns | 12:59 | |
*** barjavel.freenode.net sets mode: -o openstack | 13:00 | |
-barjavel.freenode.net- *** Notice -- TS for #openstack-lbaas changed from 1533128361 to 1403021244 | 13:00 | |
*** barjavel.freenode.net sets mode: +crt-s | 13:00 | |
*** nmagnezi has joined #openstack-lbaas | 13:00 | |
*** KeithMnemonic has joined #openstack-lbaas | 13:00 | |
*** velizarx has joined #openstack-lbaas | 13:00 | |
*** yamamoto has joined #openstack-lbaas | 13:00 | |
*** rpittau has joined #openstack-lbaas | 13:00 | |
*** salmankhan has joined #openstack-lbaas | 13:00 | |
*** pcaruana has joined #openstack-lbaas | 13:00 | |
*** dims has joined #openstack-lbaas | 13:00 | |
*** hvhaugwitz has joined #openstack-lbaas | 13:00 | |
*** phuoc_ has joined #openstack-lbaas | 13:00 | |
*** bbbbzhao_ has joined #openstack-lbaas | 13:00 | |
*** cgoncalves has joined #openstack-lbaas | 13:00 | |
*** annp has joined #openstack-lbaas | 13:00 | |
*** andreykurilin has joined #openstack-lbaas | 13:00 | |
*** jmccrory has joined #openstack-lbaas | 13:00 | |
*** sapd has joined #openstack-lbaas | 13:00 | |
*** colin- has joined #openstack-lbaas | 13:00 | |
*** vegarl has joined #openstack-lbaas | 13:00 | |
*** mugsie has joined #openstack-lbaas | 13:00 | |
*** ltomasbo has joined #openstack-lbaas | 13:00 | |
*** ianychoi has joined #openstack-lbaas | 13:00 | |
*** openstackgerrit has joined #openstack-lbaas | 13:00 | |
*** devfaz has joined #openstack-lbaas | 13:00 | |
*** zigo has joined #openstack-lbaas | 13:00 | |
*** bcafarel has joined #openstack-lbaas | 13:00 | |
*** eandersson has joined #openstack-lbaas | 13:00 | |
*** JudeC has joined #openstack-lbaas | 13:00 | |
*** irenab has joined #openstack-lbaas | 13:00 | |
*** ipsecguy has joined #openstack-lbaas | 13:00 | |
*** crazik has joined #openstack-lbaas | 13:00 | |
*** Krast has joined #openstack-lbaas | 13:00 | |
*** strigazi has joined #openstack-lbaas | 13:00 | |
*** dmellado has joined #openstack-lbaas | 13:00 | |
*** oanson has joined #openstack-lbaas | 13:00 | |
*** xgerman_ has joined #openstack-lbaas | 13:00 | |
*** mrhillsman has joined #openstack-lbaas | 13:00 | |
*** sbalukoff_ has joined #openstack-lbaas | 13:00 | |
*** numans has joined #openstack-lbaas | 13:00 | |
*** keithmnemonic[m] has joined #openstack-lbaas | 13:00 | |
*** dulek has joined #openstack-lbaas | 13:00 | |
*** korean101 has joined #openstack-lbaas | 13:00 | |
*** PagliaccisCloud has joined #openstack-lbaas | 13:00 | |
*** zioproto has joined #openstack-lbaas | 13:00 | |
*** dosaboy has joined #openstack-lbaas | 13:00 | |
*** frickler has joined #openstack-lbaas | 13:00 | |
*** amotoki has joined #openstack-lbaas | 13:00 | |
*** colby_ has joined #openstack-lbaas | 13:00 | |
*** johnsom has joined #openstack-lbaas | 13:00 | |
*** lxkong has joined #openstack-lbaas | 13:00 | |
*** coreycb has joined #openstack-lbaas | 13:00 | |
*** beisner has joined #openstack-lbaas | 13:00 | |
*** ctracey has joined #openstack-lbaas | 13:00 | |
*** dayou has joined #openstack-lbaas | 13:00 | |
*** logan- has joined #openstack-lbaas | 13:00 | |
*** bzhao__ has joined #openstack-lbaas | 13:00 | |
*** Eugene__ has joined #openstack-lbaas | 13:00 | |
*** jlaffaye_ has joined #openstack-lbaas | 13:00 | |
*** wolsen has joined #openstack-lbaas | 13:00 | |
*** LutzB has joined #openstack-lbaas | 13:00 | |
*** wayt has joined #openstack-lbaas | 13:00 | |
*** dlundquist has joined #openstack-lbaas | 13:00 | |
*** mordred has joined #openstack-lbaas | 13:00 | |
*** obre has joined #openstack-lbaas | 13:00 | |
*** rm_work has joined #openstack-lbaas | 13:00 | |
*** dougwig has joined #openstack-lbaas | 13:00 | |
*** hogepodge has joined #openstack-lbaas | 13:00 | |
*** amitry_ has joined #openstack-lbaas | 13:00 | |
*** fyx has joined #openstack-lbaas | 13:00 | |
*** mnaser has joined #openstack-lbaas | 13:00 | |
*** ptoohill- has joined #openstack-lbaas | 13:00 | |
*** gigo has joined #openstack-lbaas | 13:00 | |
*** pck has joined #openstack-lbaas | 13:00 | |
*** raginbajin has joined #openstack-lbaas | 13:00 | |
*** ptoohill has joined #openstack-lbaas | 13:00 | |
*** mjblack has joined #openstack-lbaas | 13:00 | |
*** barjavel.freenode.net sets mode: +b *!~pxydzqwy@107.163.64.54 | 13:00 | |
*** barjavel.freenode.net changes topic to "Discussion of OpenStack Load Balancing (Octavia) | https://etherpad.openstack.org/p/octavia-priority-reviews" | 13:00 | |
*** yamamoto has quit IRC | 13:25 | |
*** amuller has joined #openstack-lbaas | 13:29 | |
*** yamamoto has joined #openstack-lbaas | 13:48 | |
*** nmagnezi has quit IRC | 14:02 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: Delete zombie amphora when detected https://review.openstack.org/587505 | 14:04 |
*** nmagnezi has joined #openstack-lbaas | 14:09 | |
*** yamamoto has quit IRC | 14:23 | |
cgoncalves | http://logs.openstack.org/14/587414/2/check/octavia-v2-dsvm-scenario-centos.7/ae0cf08/job-output.txt.gz#_2018-07-31_22_09_05_659322 | 14:27 |
cgoncalves | ^ this is bugging me quite a bit. the job only overrides the nodeset | 14:28 |
cgoncalves | the log until that point is basically the same as the ubuntu-based nodeset so I don't get it | 14:28 |
openstackgerrit | Merged openstack/octavia master: Fix DIB_REPOREF_amphora_agent not set on Git !=1.8.5 https://review.openstack.org/584856 | 14:33 |
*** hongbin has joined #openstack-lbaas | 14:42 | |
*** yamamoto has joined #openstack-lbaas | 14:48 | |
*** yamamoto has quit IRC | 14:51 | |
*** yamamoto has joined #openstack-lbaas | 14:51 | |
*** nmagnezi has quit IRC | 14:52 | |
openstackgerrit | Merged openstack/octavia master: Fix the bionic gate to actually run Ubuntu bionic https://review.openstack.org/586906 | 14:53 |
*** yamamoto has quit IRC | 14:56 | |
*** yamamoto has joined #openstack-lbaas | 14:57 | |
*** yamamoto has quit IRC | 14:57 | |
*** ChanServ sets mode: +f #openstack-unregistered | 15:10 | |
*** pcaruana has quit IRC | 15:15 | |
*** velizarx has quit IRC | 15:19 | |
*** yamamoto has joined #openstack-lbaas | 15:30 | |
johnsom | cgoncalves I happened to have infra's attention so I'm asking about your centos job | 15:33 |
*** openstackstatus has joined #openstack-lbaas | 15:33 | |
*** ChanServ sets mode: +v openstackstatus | 15:33 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP jinja template https://review.openstack.org/525420 | 15:36 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [2] https://review.openstack.org/529651 | 15:36 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: UDP for [3][5][6] https://review.openstack.org/539391 | 15:36 |
johnsom | bzhao__ Morning! lol. I am starting my day and plan to work on UDP as much as I can. Is there something you know of that I should help with, or should I go back to reviewing? | 15:37 |
*** yamamoto has quit IRC | 15:37 | |
bbbbzhao_ | johnsom: lol, just got back home from the office. ;-). Posted the patch from my phone through an RDP app. | 15:46 |
johnsom | bbbbzhao_ Wow, over RDP. lol | 15:47 |
-openstackstatus- NOTICE: Due to ongoing spam, all OpenStack-related channels now require authentication with nickserv. If an unauthenticated user joins a channel, they will be forwarded to #openstack-unregistered with a message about the problem and folks to help with any questions (volunteers welcome!). | 15:48 | |
bbbbzhao_ | I removed OPS and treat it as the default setting if the default pool doesn't contain a session_persistence. | 15:48 |
bbbbzhao_ | ha.. it's an APP developed by Microsoft. | 15:48 |
johnsom | bbbbzhao_ yes, I am familiar with it. Just running from a phone must be hard with small screen, etc. | 15:49 |
bbbbzhao_ | Yeah, my eye is nearly blind. ;-) | 15:50 |
johnsom | bbbbzhao_ Ok, I will pickup work on the patches. Hopefully we can start merging today. | 15:50 |
bbbbzhao_ | johnsom: Thanks very much. I just tested pep8, unit tests and functional tests for the whole patch chain. Haven't run the fullstack tests yet. | 15:53 |
bbbbzhao_ | Just not sure that removing the OPS healthmonitor type is right, but I removed it all... | 15:55 |
bbbbzhao_ | I think a quick look at patches 1 and 3 to make sure they are what we want. And patch 2 needs review; that patch is so big, it may be hard work.. | 15:57 |
cgoncalves | johnsom, they must like you. I asked on qa and infra channels this morning but no answer | 15:59 |
johnsom | cgoncalves lol, no comment. Nah, they are busy and like I said I had their attention already on a nested virt issue. So, the consensus is that it's a permissions issue on centos. They asked if we could inject an "ls -alR" in the job to show the permissions from /home/zuul down | 16:01 |
johnsom | the guess is /home/zuul is not readable by the stack account tox is using | 16:02 |
johnsom | tempest/tox | 16:02 |
cgoncalves | johnsom, that's plausible, yes. I need to check how to inject such a thing in a zuul v3 job. AFK for the next 2 hours | 16:05 |
johnsom | ok | 16:05 |
johnsom | bbbbzhao_ Just to check, I may start posting patch updates for UDP. Is that ok? | 16:34 |
bbbbzhao_ | OK. | 16:35 |
xgerman_ | I don’t think my patch broke dsvm-api | 16:35 |
xgerman_ | https://review.openstack.org/#/c/587505/ | 16:41 |
johnsom | xgerman_ You got a yahtzee, all of the v2 gates failed. | 16:45 |
johnsom | Yes, you did cause those failures: http://logs.openstack.org/05/587505/4/check/octavia-v2-dsvm-noop-api/98b16a5/controller/logs/screen-o-cw.txt.gz#_Aug_01_14_48_27_046104 | 16:45 |
johnsom | You have database lock issues in that new code | 16:46 |
johnsom | Commented. Do you really need "scheduled for delete"? Can't we just go do it like we do failovers? | 16:54 |
openstackgerrit | German Eichberger proposed openstack/octavia master: Delete zombie amphora when detected https://review.openstack.org/587505 | 17:05 |
xgerman_ | yes, we need scheduled for delete. How are you gonna detect that they need to be deleted otherwise? | 17:05 |
*** jiteka has joined #openstack-lbaas | 17:12 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP jinja template https://review.openstack.org/525420 | 17:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP for [2] https://review.openstack.org/529651 | 17:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP for [3][5][6] https://review.openstack.org/539391 | 17:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Followup patch for UDP support https://review.openstack.org/587690 | 17:14 |
johnsom | Doh | 17:14 |
johnsom | Screwed that up... | 17:14 |
johnsom | Apply more coffee, then I will fix | 17:14 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP jinja template https://review.openstack.org/525420 | 17:18 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP for [2] https://review.openstack.org/529651 | 17:21 |
*** salmankhan has quit IRC | 17:23 | |
openstackgerrit | Michael Johnson proposed openstack/octavia master: UDP for [3][5][6] https://review.openstack.org/539391 | 17:24 |
openstackgerrit | Michael Johnson proposed openstack/octavia master: Followup patch for UDP support https://review.openstack.org/587690 | 17:27 |
johnsom | Ok, fixed. Sorry for that..... | 17:27 |
johnsom | Please ignore my bad rebase. | 17:27 |
*** nmagnezi has joined #openstack-lbaas | 17:38 | |
*** nmagnezi has quit IRC | 18:15 | |
*** salmankhan has joined #openstack-lbaas | 18:55 | |
*** salmankhan has quit IRC | 18:59 | |
eandersson | When hitting neutron-lbaas, what is the difference from hitting /v2.0/lbaas/pools/<uuid>/members vs. /v2.0/lbaas/pools/<uuid>/members.json ? | 19:10 |
*** amuller has quit IRC | 19:14 | |
johnsom | eandersson None | 19:17 |
eandersson | I figured as much | 19:17 |
eandersson | Thanks | 19:18 |
johnsom | The old extension path was for when you had an XML option in API responses. | 19:18 |
johnsom | Long since dropped in OpenStack to my knowledge | 19:18 |
*** nmagnezi has joined #openstack-lbaas | 19:19 | |
johnsom | eandersson Docs calls it out here: https://developer.openstack.org/api-ref/network/v2/#response-format | 19:19 |
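A quick way to convince yourself the two paths are equivalent; the endpoint, token, and pool id below are placeholders:

```python
import requests

BASE = "http://neutron.example.com:9696/v2.0/lbaas"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "PLACEHOLDER_TOKEN"}
POOL = "PLACEHOLDER_POOL_UUID"

plain = requests.get(f"{BASE}/pools/{POOL}/members", headers=HEADERS)
suffixed = requests.get(f"{BASE}/pools/{POOL}/members.json", headers=HEADERS)
# ".json" is a leftover format suffix from the XML era; both requests hit
# the same handler and return identical JSON.
assert plain.json() == suffixed.json()
```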
rm_work | i thought we already had code for deleting zombies | 19:22 |
johnsom | No, just ignoring them | 19:23 |
johnsom | This takes it to the next level... lol | 19:23 |
johnsom | So I have visions of the health manager spinning a shotgun. (movie reference) | 19:23 |
jiteka | Hello johnsom rm_work, could one of you tell me if this error: | 19:25 |
jiteka | octavia.network.drivers.neutron.allowed_address_pairs BadRequest: Unrecognized attribute(s) 'project_id' | 19:25 |
jiteka | is due to a bad configuration? Or which parameter is the culprit here? | 19:25 |
jiteka | it seems to happen when octavia calls the neutron API | 19:25 |
jiteka | on a loadbalancer create | 19:25 |
johnsom | jiteka Hmm, what version of Octavia and neutron do you have? | 19:26 |
jiteka | octavia queens and neutron mitaka | 19:27 |
johnsom | This is likely due to neutron not supporting the new name for tenant_id, which is project_id. By new I mean Keystone changed it in Mitaka or Newton if not earlier | 19:27 |
johnsom | jiteka Ah, ok. Yeah, that was added 8 months ago for queens. Kind of surprised neutron doesn't just ignore it. That would be a compatibility bug. We need to ask neutron if it wants "project_id" or "tenant_id" I would guess. | 19:30 |
johnsom | It is here in the code: https://github.com/openstack/octavia/blame/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L392 | 19:31 |
johnsom | Opened a bug for it: https://storyboard.openstack.org/#!/story/2003278 | 19:33 |
jiteka | awesome thanks johnsom | 19:34 |
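One way such a compatibility gap could be papered over on the caller's side, sketched under the assumption that retrying with the older attribute name is acceptable (not necessarily the fix tracked in the story above):

```python
from neutronclient.common import exceptions as neutron_exceptions

def create_port_compat(neutron_client, port_body: dict):
    """Create a port, falling back to the pre-Mitaka 'tenant_id' name
    if this neutron rejects 'project_id'."""
    try:
        return neutron_client.create_port({"port": port_body})
    except neutron_exceptions.BadRequest:
        legacy_body = dict(port_body)
        legacy_body["tenant_id"] = legacy_body.pop("project_id")
        return neutron_client.create_port({"port": legacy_body})
```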
rm_work | xgerman_: i made some comments on that patch | 19:43 |
xgerman_ | k | 19:43 |
rm_work | mostly just naming/doc/log nits, except for the session handling bit... | 19:44 |
rm_work | @johnsom: is the review list up to date? | 19:48 |
johnsom | rm_work reasonably yes | 19:49 |
johnsom | #startmeeting Octavia | 20:00 |
openstack | Meeting started Wed Aug 1 20:00:11 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:00 |
openstack | The meeting name has been set to 'octavia' | 20:00 |
johnsom | Hi folks! | 20:00 |
cgoncalves | o/ | 20:00 |
johnsom | #topic Announcements | 20:00 |
rm_work | o/ | 20:00 |
nmagnezi | O/ | 20:01 |
johnsom | We are still tracking priority bugs for Rocky. We are in feature freeze, but we can still be fixing bugs..... | 20:01 |
johnsom | #link https://etherpad.openstack.org/p/octavia-priority-reviews | 20:01 |
xgerman_ | o/ | 20:01 |
johnsom | As an FYI, Rocky RC1 is next week. This is where we will cut a stable branch for Rocky. We should strive to have as many bug fixes in as we can. | 20:02 |
johnsom | It would be super nice to only do one RC and start work on Stein | 20:02 |
johnsom | I do have some sad news for you however.... | 20:02 |
johnsom | Since no one ran against me, you are stuck with me as PTL for another release..... | 20:03 |
johnsom | #link https://governance.openstack.org/election/ | 20:03 |
* xgerman_ raises the “4 more years” sign | 20:03 | |
cgoncalves | 4 more years \o/ !! | 20:03 |
* nmagnezi joins xgerman_ | 20:03 | |
johnsom | You all are trying to make me crazy aren't you.... | 20:03 |
cgoncalves | crazier | 20:03 |
xgerman_ | just showing our appreciation… | 20:03 |
nmagnezi | johnsom, you scared me for a sec | 20:04 |
nmagnezi | johnsom, not cool :) | 20:04 |
johnsom | Towards the end of the year it will be three years for me. I would really like to see a change in management around here, so.... Start planning your campaign. | 20:04 |
xgerman_ | this is not where we elect PTLs for a year? | 20:04 |
johnsom | Not yet. Maybe the cycle after Stein will be longer than six months.... | 20:05 |
johnsom | Actually Stein is going to be slightly longer than a normal release to sync back with the summits | 20:05 |
xgerman_ | :-) | 20:05 |
johnsom | #link https://releases.openstack.org/stein/schedule.html | 20:05 |
johnsom | If you are interested in the Stein schedule | 20:06 |
johnsom | Also an FYI, all of the OpenStack IRC channels now require you to be signed in with a freenode account to join. | 20:06 |
johnsom | #link http://lists.openstack.org/pipermail/openstack-dev/2018-August/132719.html | 20:07 |
johnsom | There have been bad IRC spam storms recently. I blocked our channel yesterday, but infra has done the rest today. | 20:07 |
cgoncalves | I see the longer Stein release as a good thing this time around | 20:07 |
johnsom | It doesn't mean we can procrastinate though.... grin | 20:08 |
johnsom | I think my early Stein goal is going to be implement flavors | 20:08 |
johnsom | That is all I have for announcements, anything I missed? | 20:09 |
johnsom | #topic Brief progress reports / bugs needing review | 20:09 |
johnsom | I have been pretty distracted with internal stuffs over the week, but most of that is clear now (some docs to create which I hope to upstream). | 20:10 |
johnsom | I have also been focused on the UDP patch and helping there. | 20:10 |
xgerman_ | did two bug fixes: one when nova doesn’t release the port for failover + one for the zombie amps | 20:10 |
johnsom | Yeah, the nova thing was interesting. Someone turned off a compute host for eight hours. Nova just sits on the instance delete evidently and doesn't do it, nor release the attached ports. | 20:12 |
johnsom | If someone has a multi-node lab where they can power off compute hosts, that patch could use some testing assistance. | 20:12 |
xgerman_ | +10 | 20:12 |
xgerman_ | my multinode lab has customers — so I can’t chaos monkey | 20:13 |
cgoncalves | nothing special from my side: some octavia and neutron-lbaas backporting, devstack plugin fixing and CI jobs changes + housekeeping, then some tripleo-octavia bits | 20:13 |
rm_work | interesting, we're ... having that happen here, as we're patching servers on a rolling thing, and some hosts end up down for a while sometimes <_< | 20:13 |
johnsom | Any other updates? nmagnezi cgoncalves ? | 20:13 |
cgoncalves | xgerman_, could your patch (which I haven't looked yet) improve scenarios like https://bugzilla.redhat.com/show_bug.cgi?id=1609064 ? it sounds like it | 20:14 |
openstack | bugzilla.redhat.com bug 1609064 in openstack-octavia "Rebooting the cluster causes the loadbalancers are not working anymore" [High,New] - Assigned to amuller | 20:14 |
nmagnezi | On my end: have been looking deeply into active standby. Will report a bunch of stories (and submit patches) soon | 20:14 |
nmagnezi | Some of the issues were already known; some look new (at least to me) | 20:14 |
nmagnezi | But nothing drastic | 20:14 |
johnsom | rm_work the neat thing we saw once, but couldn't confirm was nova status said "DELETED" but there is a second status in the EXT that said "deleting" | 20:14 |
rm_work | O_o | 20:15 |
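The two fields johnsom mentions can be read off a server record like so; a sketch against python-novaclient, with the settled-delete check itself being an assumption rather than health-manager code:

```python
def nova_delete_settled(nova_client, server_id: str) -> bool:
    """True only when the status is DELETED and no task such as
    'deleting' is still pending in the extended status."""
    server = nova_client.servers.get(server_id)
    task_state = getattr(server, "OS-EXT-STS:task_state", None)
    return server.status == "DELETED" and task_state is None
```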
nmagnezi | johnsom, I actually have a question related to active standby, but that can wait for open discussion | 20:15 |
rm_work | well ... we wouldn't get bugs like cgoncalves linked, as our ACTIVE_STANDBY amps are split across AZs | 20:15 |
rm_work | with AZ Anti-affinity ;P | 20:15 |
rm_work | which I still wish we could merge as an experimental feature, as i have seen at least two other operators HERE that use similar code | 20:16 |
xgerman_ | Stein… | 20:16 |
johnsom | Looks like it went to failover those and there were no compute hosts left: Failed to build compute instance due to: {u'message': u'No valid host was found. There are not enough hosts available.' | 20:16 |
johnsom | Yeah, so too short of a timeout before we start failing over or too small of a cloud? | 20:17 |
cgoncalves | johnsom, right. and I think after that the LBs/amps couldn't failover manually because they were in ERROR. I need to look deeper and need-info the reporter. anyway | 20:17 |
johnsom | hmmm. Ok, thanks for the updates | 20:18 |
johnsom | Our main event today: | 20:18 |
johnsom | #topic Make the FFE call on UDP support | 20:18 |
johnsom | Hmm, wonder if meeting bot is broken | 20:19 |
xgerman_ | mmh | 20:19 |
johnsom | Well, we will see at the end | 20:19 |
cgoncalves | I *swear* I've been wanting to test this :( I even restacked this afternoon with latest patch sets | 20:19 |
johnsom | Current status from my perspective: | 20:19 |
johnsom | 1. the client patch is merged and was in Rocky. This is good and it seems to work great for me. | 20:19 |
johnsom | 2. Two out of the three patches I have +2'd as I think they are fine. | 20:20 |
johnsom | 3. I have started some stories for issues I see, but I don't consider show stoppers: https://storyboard.openstack.org/#!/story/list?status=active&tags=UDP | 20:20 |
johnsom | 4. I have successfully build working UDP LBs with this code. | 20:20 |
johnsom | 5. The gates show it doesn't break existing stuff. (the one gate failure today was a "connection reset" while devstack was downloading a package) | 20:21 |
rm_work | Yeah I think this also falls into "merge, fix bugs as we find them" territory | 20:21 |
rm_work | as with any big feature | 20:21 |
rm_work | so long as it doesn't interfere with existing code paths (which i believe it does not) | 20:22 |
johnsom | Yeah, I'm leaning that way too. I need to take another pass across the middle patch to see if anything recent jumps out at me, but I expect we can turn and burn on that if needed. | 20:22 |
xgerman_ | I am not entirely in love with the additional code path for UDP LB health | 20:22 |
cgoncalves | would it make sense to somehow flag it as experimental? there's not a single tempest test for it IIRC | 20:23 |
xgerman_ | but we can streamline that later | 20:23 |
johnsom | That is in the middle patch I haven't looked at for a bit | 20:23 |
johnsom | cgoncalves we have shipped stuff in worse shape tempest wise... sigh | 20:23 |
xgerman_ | yeah, my other beef is with having a special UDP listener on the amphora REST API… | 20:24 |
johnsom | xgerman_ What do you mean? It's just another protocol on the listener..... | 20:24 |
johnsom | Oh, amphora-agent API? | 20:25 |
xgerman_ | yep: https://review.openstack.org/#/c/529651/57/octavia/amphorae/backends/agent/api_server/server.py | 20:26 |
johnsom | cgoncalves I can probably whip up some tempest tests for this before Rocky ships if you are that concerned. We will need them | 20:26 |
johnsom | Will be a heck of a lot easier than the dump migration tool tests | 20:26 |
xgerman_ | wouldn’t bet on it ;-) | 20:26 |
johnsom | Well, I have been doing manual testing on this a lot so have a pretty good idea how I would do it | 20:27 |
cgoncalves | johnsom, I'd prefer having at least a basic udp test but I won't ask you for that. too much already on your plate | 20:27 |
johnsom | Any more discussion or should we vote? | 20:29 |
xgerman_ | well, how do others feel about the architecture ? | 20:30 |
xgerman_ | or, let’s vote ;-) | 20:30 |
johnsom | #startvote Should we merge-and-fix the UDP patches? Yes, No | 20:30 |
openstack | Begin voting on: Should we merge-and-fix the UDP patches? Valid vote options are Yes, No. | 20:30 |
openstack | Vote using '#vote OPTION'. Only your last vote counts. | 20:30 |
xgerman_ | #vote abstain | 20:31 |
openstack | xgerman_: abstain is not a valid option. Valid options are Yes, No. | 20:31 |
johnsom | No maybe options for you wimps.... Grin | 20:31 |
cgoncalves | #vote yes | 20:31 |
cgoncalves | (do I get to vote?!) | 20:31 |
johnsom | #vote yes | 20:31 |
johnsom | Yes, everyone gets a vote | 20:31 |
nmagnezi | I was not involved in this, but johnsom's reasoning makes sense to me | 20:32 |
nmagnezi | #vote yes | 20:32 |
johnsom | xgerman_ rm_work Have a vote? Anyone else lurking? | 20:32 |
rm_work | ah | 20:32 |
xgerman_ | I thought sitting it out would be like abstain ;-) | 20:33 |
rm_work | #vote yes | 20:33 |
* johnsom needs a buzzer for "abstain" votes | 20:34 | |
rm_work | though I should at least try to get through the patches | 20:34 |
rm_work | to make sure there's nothing that'd be hard to fix later | 20:34 |
johnsom | Yeah, I think 1 and 3 are good. I would like some time on 2 today, so maybe push to merge later today or early tomorrow | 20:34 |
johnsom | Going once.... | 20:35 |
johnsom | Going twice..... | 20:35 |
johnsom | #endvote | 20:35 |
openstack | Voted on "Should we merge-and-fix the UDP patches?" Results are | 20:35 |
openstack | Yes (4): rm_work, nmagnezi, cgoncalves, johnsom | 20:35 |
johnsom | Sold, you are now the proud owners of a UDP protocol load balancer | 20:35 |
xgerman_ | dougwig: will be proud ;-) | 20:36 |
johnsom | So, cores, if you could give your approve votes on 1 and 3. Give us some time on 2. I will ping in the channel if it's ready for the final review pass | 20:37 |
cgoncalves | we now *really* need to fix it for centos amps. I just tried creating a LB and it failed | 20:37 |
xgerman_ | what’s the patch I have to pull for a complete install? 1 or 3? | 20:37 |
johnsom | Ah bummer. cgoncalves can you help with that or too busy? | 20:37 |
cgoncalves | johnsom, I will prioritize that for tomorrow | 20:38 |
johnsom | xgerman_ 3 or https://review.openstack.org/539391 | 20:38 |
xgerman_ | k | 20:38 |
johnsom | I also added a follow up patch with API-ref and release notes and some minor cleanup | 20:38 |
johnsom | https://review.openstack.org/587690 | 20:38 |
johnsom | Which is also at the end of the chain. | 20:38 |
xgerman_ | k | 20:39 |
johnsom | cgoncalves If you have changes, can you create a patch at the end of the chain? That way we can still make progress on review/merge but get it fixed | 20:39 |
cgoncalves | johnsom, sure | 20:39 |
dougwig | UDP, damn straight. | 20:39 |
johnsom | If I get done early with my review on 2 I might poke at centos, but no guarantees I will get there. | 20:39 |
johnsom | dougwig o/ Sorry you missed the vote. Now you can load balance your DNS servers... grin | 20:40 |
xgerman_ | :-) | 20:40 |
dougwig | next up, rewrite in ruby | 20:40 |
johnsom | You had better sign up for PTL if you want to do that.... | 20:41 |
johnsom | grin | 20:41 |
johnsom | #topic Open Discussion | 20:41 |
johnsom | nmagnezi I think you had an act/stdby question | 20:41 |
nmagnezi | johnsom, yup :) | 20:41 |
nmagnezi | johnsom, so did a basic test of spawning a highly available load balancer , and captured the traffic on both amps | 20:42 |
nmagnezi | Specifically, on the namespace that we run there | 20:42 |
nmagnezi | MASTER: https://www.cloudshark.org/captures/1d0a1028c402 | 20:42 |
nmagnezi | BACKUP: https://www.cloudshark.org/captures/8a4ee5b38e18 | 20:42 |
nmagnezi | First question, mm.. I was not expecting to see tenant traffic in the backup one | 20:43 |
nmagnezi | Even if I manually "if down" the VIP interface (who does not send GARPs - I verified that) -> I still see that traffic | 20:43 |
nmagnezi | And that happens specifically when I send traffic towards the VIP | 20:44 |
nmagnezi | btw in this example -> 10.0.0.1 is the qrouter NIC and 10.0.0.3 is the VIP | 20:44 |
johnsom | It's likely the promiscuous capture on the port. | 20:45 |
nmagnezi | johnsom, you mean that it is because I use 'tcpdump -i any" in the namespace? | 20:46 |
johnsom | Oh, I know what it is. It's the health monitor set on the LB. It's outgoing tests for the member I bet | 20:46 |
xgerman_ | +1 | 20:46 |
nmagnezi | IIRC I didn't set any health monitor | 20:46 |
nmagnezi | Lemme double check that real quick | 20:46 |
xgerman_ | mmh | 20:46 |
johnsom | Your member is located outside the VIP subnet (you didn't specify a subnet at member create) | 20:47 |
johnsom | Because on that backup, those HTTP are all outbound from the VIP | 20:47 |
nmagnezi | Checked.. no health monitor set | 20:48 |
nmagnezi | The members reside on the same subnet as the VIP | 20:48 |
nmagnezi | All in the private-subnet that is created by default in devstack | 20:49 |
johnsom | Hmm, they do look a bit odd. Yeah, my bet is the promiscuous setting on the port is picking up the response traffic from the master, let's look at the MAC addresses. | 20:49 |
johnsom | That is probably why only half the conversation is seen on the backup. | 20:50 |
nmagnezi | Yeah that looked very strange.. no SYN packets | 20:50 |
johnsom | If you check, the backup's haproxy counters will not be going up | 20:51 |
nmagnezi | Will check that | 20:51 |
nmagnezi | But honestly I was not expecting to see that traffic on the backup amp | 20:52 |
johnsom | It looks right to me in general. Yeah, generally I wouldn't either, but I'm just guessing it's how the network is setup underneath and the point of capture. | 20:53 |
nmagnezi | I still don't get why it's there actually. I know the two amps communicate for other stuff (e.g. keepalived) | 20:53 |
nmagnezi | okay | 20:53 |
johnsom | The key to helping understand that is to look at the MAC addresses of your ports and the packets. The 0.3 packets will likely have the MAC of the base port on the master | 20:53 |
johnsom | If you switch it over it should be the base port of the backup in those packets. | 20:54 |
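The MAC comparison johnsom suggests can be automated against a saved capture; a sketch assuming scapy, with placeholder MAC and pcap values:

```python
from scapy.all import IP, rdpcap

MASTER_BASE_MAC = "fa:16:3e:00:00:01"  # placeholder: master amphora base port

for pkt in rdpcap("backup_capture.pcap"):  # the capture, saved locally
    if pkt.haslayer(IP) and pkt[IP].src == "10.0.0.3":  # sourced from the VIP
        origin = "master" if pkt.src == MASTER_BASE_MAC else "other"
        print(f"src MAC {pkt.src} -> {origin}")
```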
johnsom | Does that help? | 20:55 |
nmagnezi | I was inspecting the qrouter arp table while doing those tests. It remained consistent with the MASTER MAC address | 20:55 |
nmagnezi | It does, thank you | 20:55 |
nmagnezi | Will keep looking into this | 20:55 |
johnsom | Ok, cool. | 20:55 |
johnsom | Any other items today? | 20:55 |
nmagnezi | If there's time I have another question | 20:56 |
nmagnezi | But let other folks talk first | 20:56 |
nmagnezi | :) | 20:56 |
johnsom | Sure, 5 minutes | 20:56 |
nmagnezi | Going once.. | 20:56 |
johnsom | Just take it | 20:56 |
nmagnezi | ha | 20:56 |
nmagnezi | ok | 20:56 |
nmagnezi | So if we look at the capture from master | 20:56 |
nmagnezi | MASTER: https://www.cloudshark.org/captures/1d0a1028c402 | 20:57 |
nmagnezi | Some connections end with RST, ACK and RST | 20:57 |
nmagnezi | Some not | 20:57 |
nmagnezi | Is that an HAPROXY thing to close connections with pool members? | 20:57 |
nmagnezi | It does not happen with all the sessions | 20:58 |
johnsom | If it is a flow with the pool member, yes, that is the connection between HAProxy and the member server. | 20:58 |
nmagnezi | Okay, no more questions here | 20:59 |
johnsom | If the client on the front end closes the connection to the LB, haproxy will RST the backend. | 20:59 |
nmagnezi | Thank you! | 20:59 |
johnsom | Let me see if I can find that part of the docs. | 20:59 |
johnsom | I will send a link after the meeting | 21:00 |
nmagnezi | Np | 21:00 |
johnsom | Thanks folks! | 21:00 |
johnsom | #endmeeting | 21:00 |
openstack | Meeting ended Wed Aug 1 21:00:39 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.html | 21:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.txt | 21:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/octavia/2018/octavia.2018-08-01-20.00.log.html | 21:00 |
nmagnezi | o/ | 21:00 |
johnsom | nmagnezi Looking at the HAProxy http log entries might help you see why or who did the RST. | 21:02 |
johnsom | The magic decoder ring is here: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#8.5 | 21:02 |
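For reference, the termination state is the four-character field between the captured cookies and the connection counts in an haproxy HTTP log line. A small parsing sketch with an invented sample line, field layout per the linked section 8.5:

```python
import re

SAMPLE = ('10.0.0.1:4711 [01/Aug/2018:20:58:01.123] fe be/srv1 '
          '0/0/0/1/1 200 350 - - CD-- 1/1/0/0/0 0/0 "GET / HTTP/1.1"')

# "- -" are the (uncaptured) cookies; the 4-char flags follow them.
match = re.search(r'- - (\S{4}) \d+/\d+/\d+/\d+/\d+', SAMPLE)
if match:
    flags = match.group(1)
    print("who/why:", flags[0])  # 'C': the client aborted the session
    print("phase:", flags[1])    # 'D': it happened during the data phase
```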
johnsom | There is also a case in the older versions of haproxy where a "reload" might trigger some RST handshakes. | 21:03 |
johnsom | That was fixed in 1.8 | 21:03 |
nmagnezi | johnsom, hint taken ;) | 21:04 |
johnsom | nmagnezi Ha, no hint implied | 21:04 |
johnsom | That is Adam's mission | 21:05 |
nmagnezi | I hope we can share good news about this in the near future | 21:06 |
nmagnezi | We are trying | 21:06 |
nmagnezi | For some time now.. | 21:06 |
openstackgerrit | German Eichberger proposed openstack/octavia master: Delete zombie amphora when detected https://review.openstack.org/587505 | 21:25 |
*** nmagnezi is now known as nmagnezi_ | 21:26 | |
*** nmagnezi_ has quit IRC | 21:46 | |
*** yamamoto has joined #openstack-lbaas | 21:57 | |
rm_work | ah nmagnezi left but ... centos amps already use 1.8 by default :P so the mission is complete ^_^ | 22:17 |
rm_work | oh also | 22:17 |
rm_work | approved for PTG travel through the foundation | 22:17 |
rm_work | so I'm good :) | 22:18 |
rm_work | flights booked, hotel should be covered too | 22:18 |
johnsom | Wahoo! Excellent. Glad you can join us! | 22:19 |
*** hongbin has quit IRC | 22:20 | |
johnsom | rm_work You might find this interesting: https://storyboard.openstack.org/#!/story/2003197 | 22:21 |
johnsom | bbq client is not honoring the endpoint_type | 22:21 |
rm_work | hmm | 22:21 |
rm_work | i wonder if that's my fault | 22:22 |
rm_work | i wrote a lot of that client | 22:22 |
rm_work | though i think it was changed a bunch since then too | 22:22 |
johnsom | lol, well, there might be a reason I thought you might be interested | 22:22 |
rm_work | OH yeah so, the GET calls thing is by design | 22:22 |
rm_work | because technically barbican can do federation stuff | 22:22 |
rm_work | so we can't ever assume anything | 22:22 |
rm_work | the user passes a ref, and *that ref is what we try to get* | 22:22 |
rm_work | period | 22:23 |
rm_work | it could be in another cloud even | 22:23 |
rm_work | so long as federated identity works | 22:23 |
rm_work | why/how is this causing us a problem? | 22:23 |
johnsom | So, if a deployment has internal endpoints and certs for service->service calls that differ from the public certs how do you pull anything out of it? | 22:23 |
rm_work | :/ | 22:24 |
rm_work | we could rewrite those intelligently in our own code | 22:24 |
rm_work | but it's really "not a bug" | 22:24 |
*** rcernin has joined #openstack-lbaas | 22:24 | |
johnsom | Right. It's the roach motel, you can put stuff in, but never get it out | 22:24 |
rm_work | it HAS to work this way | 22:24 |
rm_work | for federation to work | 22:24 |
rm_work | barbican itself can't be in the business of rewriting secret refs because there's zero guarantee if it'll do the right thing | 22:25 |
johnsom | Right, agree, but I am attempting to connect to the Barbican API to hand it an HREF to fetch. Right now the client fails to connect to the bbq API because it doesn't honor which endpoint to use out of keystone. | 22:26 |
johnsom | It ALWAYS goes to public | 22:26 |
johnsom | basically | 22:26 |
johnsom | For gets. For POST it does actually use the endpoint catalog | 22:27 |
rm_work | it goes to the endpoint that the secret ref indicates | 22:27 |
rm_work | it doesn't "pick public" | 22:27 |
rm_work | if the ref was "http://some-other-site.com/secret/1234" it would go there | 22:28 |
rm_work | regardless of what public/internal/admin whatever is set in keystone | 22:28 |
rm_work | because that is where the ref says the secret lives | 22:28 |
johnsom | Right, that is an OpenStack fail IMO | 22:28 |
rm_work | no | 22:28 |
rm_work | this is how federation works | 22:28 |
johnsom | Agree to strongly disagree. It may be how bbq does "federation" but .... | 22:29 |
johnsom | You understand the problem right? Octavia can't connect to public which is what the href is stamped with. It even fails host validation | 22:31 |
johnsom | It looks like glance just doesn't use the bbq client. I wonder if this is why | 22:34 |
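The "rewrite refs intelligently in our own code" idea rm_work floats above could look like the sketch below; the endpoint values are placeholders, and whether rewriting is safe for genuinely federated refs is exactly the open question in this exchange.

```python
from urllib.parse import urlparse, urlunparse

PUBLIC_NETLOC = "barbican.public.example.com:9311"      # placeholder
INTERNAL_NETLOC = "barbican.internal.example.com:9311"  # placeholder

def localize_secret_ref(ref: str) -> str:
    """Point refs stamped with our public endpoint at the internal one;
    leave anything else (a federated ref) untouched."""
    parsed = urlparse(ref)
    if parsed.netloc == PUBLIC_NETLOC:
        parsed = parsed._replace(netloc=INTERNAL_NETLOC)
    return urlunparse(parsed)
```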
bbbbzhao_ | Thanks team, thanks johnsom, it's 6:33am here. I will prepare to go to the office and pick up the rest.. | 22:34 |
rm_work | it might be worth having this discussion in the barbican channel so others can give feedback too | 22:35 |
rm_work | johnsom: how would you propose you allow users to use a federated secret then? | 22:36 |
rm_work | if the client force-directs them to the cloud's native instance | 22:36 |
johnsom | Yeah, I tried, no one has answered since yesterday | 22:36 |
rm_work | prolly gotta ping some folks | 22:36 |
johnsom | I am fine moving there if you want. Not sure if they stopped the spam or not | 22:37 |
rm_work | everywhere did | 22:40 |
rm_work | infra expanded it to global for all openstack channels | 22:41 |
*** abaindur has joined #openstack-lbaas | 22:47 |