rm_work | ummm | 00:12 |
rm_work | http://paste.openstack.org/show/617353/ | 00:12 |
rm_work | wut | 00:12 |
rm_work | johnsom: you know what that means at all? | 00:12 |
rm_work | greghaynes: or you ^^ | 00:12 |
rm_work | started happening to me :/ | 00:13 |
johnsom | The root error probably happened above | 00:13 |
johnsom | That looks like revert/rollback to me | 00:14 |
rm_work | ah | 00:14 |
rm_work | i don't think so | 00:15 |
rm_work | johnsom: http://paste.openstack.org/show/617354/ | 00:16 |
rm_work | there's more | 00:16 |
rm_work | it looks like normal cleanup | 00:16 |
rm_work | ah | 00:16 |
rm_work | https://bugs.launchpad.net/diskimage-builder/+bug/1706386 | 00:16 |
openstack | Launchpad bug 1706386 in diskimage-builder "build-image role failing in master" [Critical,Fix released] - Assigned to John Trowbridge (trown) | 00:16 |
rm_work | whelp | 00:16 |
rm_work | need them to release it i guess | 00:17 |
rm_work | or fix our build process to use dib-master | 00:17 |
johnsom | Pretty soon you will be the canary | 00:18 |
*** http_GK1wmSU has joined #openstack-lbaas | 00:19 | |
*** http_GK1wmSU has left #openstack-lbaas | 00:21 | |
rm_work | am i not already? :/ | 00:25 |
JudeC | Hey johnsom you ever seen anything like this in devstack before: | 00:25 |
johnsom | Oh the things I have seen.... Grin | 00:25 |
JudeC | https://pastebin.com/gdMsXLE1 | 00:26 |
JudeC | this is the nova logs when trying to build an LB | 00:26 |
JudeC | I keep getting provisioning_status = ERROR :( | 00:26 |
johnsom | Yeah, nova is puking on you. Ubuntu? | 00:27 |
johnsom | Ah, yeah | 00:27 |
johnsom | Can you "dpkg --list | grep qemu" and paste the versions? | 00:27 |
rm_work | didn't ubuntu fix their shit? | 00:27 |
rm_work | mine seems to work currently... | 00:27 |
johnsom | No, we had an agree-to-disagree | 00:27 |
rm_work | oh right but we fixed the nova config to fix that | 00:28 |
rm_work | so this is something else | 00:28 |
JudeC | qemu 1:2.8+dfsg-3ubuntu2.3~cloud0 amd64 fast processor emulator | 00:28 |
johnsom | Yeah, ok, so if you are on xenial, you have the same issue I had.... | 00:28 |
JudeC | do you want all of them? | 00:28 |
johnsom | No, that is fine | 00:28 |
rm_work | but i'm on xenial too and mine seems to be fine >_> i think | 00:29 |
johnsom | https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1703423 | 00:29 |
openstack | Launchpad bug 1703423 in qemu (Ubuntu) "zesty qemu packages (2.8) released for xenial-updates" [Undecided,Incomplete] | 00:29 |
rm_work | restacking now | 00:29 |
JudeC | So you know how to fix it :P | 00:29 |
johnsom | Add to nova.conf: [libvirt] live_migration_uri = qemu+ssh://stack@%s/system, cpu_mode = none, virt_type = kvm, hw_machine_type = x86_64=pc-i440fx-xenial https://usercontent.irccloud-cdn.com/file/17Lw8wBp/image.png | 00:29 |
johnsom | That last line, hw_machine_type | 00:29 |
johnsom | now, note, nova also messed up their config files, so you may have two config files in /etc/nova/ that need to be edited. | 00:30 |
rm_work | yeah, we've done that | 00:30 |
rm_work | ah though | 00:30 |
rm_work | his say qemu not kvm | 00:30 |
johnsom | Then restart devstack@n-* and restart your instances or build a VM | 00:30 |
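For reference, the settings johnsom pasted above, laid out as they would appear in the config file; whether you also need to edit a second file such as nova-cpu.conf depends on how your devstack split the nova config, so treat the file names and the systemd unit glob as assumptions about a typical devstack install.

```ini
# /etc/nova/nova.conf (and possibly /etc/nova/nova-cpu.conf on newer devstacks)
[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm
hw_machine_type = x86_64=pc-i440fx-xenial
```

Then restart the nova services, e.g. `sudo systemctl restart 'devstack@n-*'` on a systemd-based devstack, and rebuild or restart the instances.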
rm_work | OH | 00:30 |
rm_work | because you didn't check the box i bet | 00:30 |
rm_work | JudeC: parallels? | 00:30 |
JudeC | yes | 00:31 |
rm_work | there's a box you have to check in config | 00:31 |
rm_work | to enable nested virt | 00:31 |
rm_work | so it will detect and use kvm instead of qemu | 00:31 |
rm_work | that's why mine works | 00:31 |
JudeC | I hate parallels.... | 00:31 |
johnsom | Well, the new "split" config could also be why it didn't pick it up | 00:31 |
rm_work | under CPU & Memory | 00:31 |
rm_work | advanced | 00:31 |
rm_work | johnsom: he has that too | 00:31 |
rm_work | johnsom: he's using devstack_deploy | 00:31 |
johnsom | Ok | 00:32 |
rm_work | so yeah stop your VM, enable nested | 00:32 |
rm_work | and then start and redo your stack | 00:32 |
rm_work | the setting should save across reverts | 00:32 |
johnsom | JudeC ^^^ That will make a huge difference in performance too | 00:32 |
rm_work | yeah did you... just redo it or something | 00:33 |
rm_work | or has your devstack ALWAYS been running like that O_o | 00:33 |
JudeC | I reinstalled parallels at some point | 00:34 |
JudeC | a little while ago and may have wiped that out | 00:34 |
johnsom | Yeah, LB creation goes from taking like 8-10 minutes to start down to ~30 seconds | 00:35 |
rm_work | i think his LB creation has been broken since he redid his VM <_< | 00:36 |
rm_work | he's been on no-op | 00:36 |
JudeC | likely yes I prob never noticed because I am in noop | 00:36 |
johnsom | Poor soul | 00:36 |
JudeC | welp restacking now, thanks for the help lol | 00:39 |
rm_work | yeah somehow it didn't click that all of that said qemu | 00:39 |
rm_work | because i was on qemu for so long myself | 00:39 |
*** bzhao has joined #openstack-lbaas | 01:26 | |
*** gongysh has joined #openstack-lbaas | 01:30 | |
*** armax has joined #openstack-lbaas | 01:34 | |
*** yamamoto has quit IRC | 01:45 | |
*** yamamoto has joined #openstack-lbaas | 01:45 | |
*** sanfern has quit IRC | 02:02 | |
*** sanfern has joined #openstack-lbaas | 02:05 | |
*** sanfern has quit IRC | 02:12 | |
*** ducnc has joined #openstack-lbaas | 02:22 | |
*** ducnc has quit IRC | 02:33 | |
*** yamamoto has quit IRC | 02:41 | |
*** yamamoto has joined #openstack-lbaas | 02:51 | |
*** kbyrne has quit IRC | 02:54 | |
*** tongl has quit IRC | 02:56 | |
*** kbyrne has joined #openstack-lbaas | 02:58 | |
*** yamamoto has quit IRC | 03:33 | |
*** yamamoto has joined #openstack-lbaas | 03:42 | |
*** JudeC has quit IRC | 03:43 | |
*** sanfern has joined #openstack-lbaas | 03:44 | |
rm_work | johnsom: i think housekeeping should also recycle spares pool VMs that are out of date (periodically check the image tag and see which image it is, and recycle the spares if they aren't the same image | 03:46 |
rm_work | ) | 03:46 |
johnsom | Sure | 03:47 |
rm_work | possibly we should track which image an amp was created with | 03:47 |
rm_work | in the DB | 03:47 |
rm_work | since it won't change | 03:47 |
rm_work | and it'd make that easier | 03:47 |
rm_work | adding it to my TODO | 03:47 |
rm_work | stale spares is my bane right now | 03:48 |
rm_work | every time I make an image i have to create a bunch of LBs and delete them | 03:48 |
rm_work | to clear it out | 03:48 |
rm_work | i guess i could try to fail them over... | 03:48 |
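A rough sketch of the periodic spares-recycle check rm_work is describing; every name here (`get_spare_amphorae`, `get_current_image_id`, `trigger_failover`) is a hypothetical placeholder, not existing Octavia code, and it assumes the boot image id gets stored on the amphora record as proposed above.

```python
def recycle_stale_spares(session, amp_repo, image_client):
    """Hypothetical housekeeping task: retire spare amphorae built from a stale image."""
    # Image id that the amphora image tag currently points at in glance
    current_image = image_client.get_current_image_id(tag='amphora')

    for amp in amp_repo.get_spare_amphorae(session):
        # Relies on the boot-time image id being recorded in the amphora table,
        # which is the DB addition rm_work just put on his TODO list.
        if amp.image_id != current_image:
            # Failing over (or deleting) the stale spare lets housekeeping
            # refill the pool with amphorae built from the new image.
            trigger_failover(amp.id)
```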
*** yamamoto has quit IRC | 03:48 | |
johnsom | Man, I thought we did store the image id in the amp table, but I guess not | 03:49 |
johnsom | Yeah, I hope to have the failover API finished soon | 03:49 |
*** yamamoto has joined #openstack-lbaas | 03:49 | |
johnsom | German started it, I offered to finish it, then got busy with other things. | 03:49 |
johnsom | We hoped to get it into pike, but sigh, time | 03:50 |
johnsom | Yeah, you can just nova delete them. Just make sure you have the right IDs.... | 03:51 |
xgerman_ | Well, we can merge as-is and bug the rare case somebody steals the amp. After all it's not worse than before | 03:52 |
johnsom | Feature freeze was last week | 03:53 |
*** yamamoto has quit IRC | 03:54 | |
*** links has joined #openstack-lbaas | 03:54 | |
xgerman_ | Well, it had a patch for a while... | 03:54 |
johnsom | Yep | 03:54 |
johnsom | Just not enough cycles for everything we are trying to do... | 03:55 |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 03:55 |
xgerman_ | Feature freeze used to be no new patches but finish the open ones... | 03:56 |
rm_work | mm | 03:56 |
rm_work | actually i do kinda remember that >_> | 03:56 |
rm_work | BTW can confirm https://review.openstack.org/#/c/489015/ works | 03:57 |
johnsom | stop merging code adding new features, new dependencies, new configuration options, database schema changes, changes in strings… all things that make the work of packagers, documenters or testers more difficult. | 03:57 |
rm_work | deployed it to production here already :) | 03:57 |
johnsom | That is the current definition | 03:58 |
rm_work | ahhhhh | 03:58 |
rm_work | umm | 03:58 |
rm_work | ok | 03:58 |
rm_work | string freeze is later | 03:58 |
rm_work | very specifically tho | 03:58 |
johnsom | Soft string freeze is with feature freeze. Hard freeze is RC1 | 03:59 |
xgerman_ | Yeah, no merging after RC-1 has always been the case | 03:59 |
johnsom | Yeah, I have to say, I agree with drawing a line. It sucks, but it's what we could get done in a reasonable time. | 03:59 |
rm_work | do we, uh.... have translators? | 03:59 |
johnsom | There are a number of patches I wish we could have made happen, it just didn't work out. | 04:00 |
rm_work | oh FYI, I am taking the rest of this week off | 04:00 |
johnsom | Yes, we have translators | 04:00 |
rm_work | i am tired of not having vacations | 04:00 |
johnsom | rm_work Cool, enjoy! | 04:00 |
rm_work | so i am going to ... sit around at home by myself and play video games | 04:00 |
xgerman_ | I don't agree with taking weeks away from Dev time | 04:00 |
rm_work | probably i will be on IRC | 04:00 |
rm_work | >_> | 04:00 |
xgerman_ | Yeah, enjoy | 04:00 |
johnsom | Well, it's still dev time. Just focused on clearing out the pile of bugs that have accumulated.... | 04:01 |
rm_work | yeah it actually allows for a clear focus on ONLY bugs | 04:01 |
rm_work | which is the idea | 04:01 |
johnsom | Like the one rm_work just mentioned | 04:01 |
rm_work | stabilize during the last weeks | 04:01 |
xgerman_ | That's what we did during RC | 04:02 |
johnsom | RC1 should ship ideally | 04:02 |
rm_work | yeah, only supposed to even cut an RC2 or RC3 if you HAVE to | 04:03 |
johnsom | If we get things stabilized, we can release early and start on queens.... | 04:03 |
xgerman_ | I don't have to agree with the new world order | 04:03 |
johnsom | Ha | 04:03 |
johnsom | True | 04:03 |
xgerman_ | Also that's the first time it was mentioned but I don't pay much attention | 04:04 |
xgerman_ | Otherwise i would have filed the FFE | 04:04 |
johnsom | Nah, this has been the case for a few releases now. I want to say mitaka, but maybe newton | 04:04 |
xgerman_ | Newton makes sense - my attention span has been really short since then | 04:05 |
johnsom | I remember feeling really bad about our ocata release because the same thing happened. We didn't get a few patches in that I really hoped we would. | 04:06 |
johnsom | I think the best thing that could help is getting more reviews on patches. The cores can't do all of the reviews.... | 04:07 |
xgerman_ | We can make our own rules - there are projects which even backport features ;-) | 04:08 |
rm_work | we lose some tags if we do that | 04:08 |
xgerman_ | Yeah, more reviewers would be good! | 04:08 |
xgerman_ | Aren't those tags meaningless? Our project not diverse? My a** | 04:09 |
rm_work | eh | 04:10 |
johnsom | The lowest core review is still 4x the next reviewer | 04:10 |
rm_work | hate to say it but... at this point... if we lost rax or gd it'd be bad | 04:10 |
xgerman_ | We might gain Octavia Inc | 04:11 |
rm_work | lol | 04:11 |
johnsom | Ha | 04:11 |
rm_work | hey if we could find funding i'd do it :P | 04:11 |
johnsom | Yeah, would love to read the business plan | 04:11 |
xgerman_ | 1 develop Octavia | 04:11 |
xgerman_ | 2 ? | 04:11 |
xgerman_ | 3 Profit | 04:12 |
johnsom | Hahaha, yeah, as entertaining as I expected | 04:12 |
rm_work | foolproof | 04:12 |
*** gongysh has quit IRC | 04:15 | |
johnsom | rm_work What are you planning to play? | 04:15 |
rm_work | not 100% sure | 04:15 |
rm_work | I do a lot of PlayerUnknown's Battlegrounds | 04:16 |
johnsom | Hahaha, fair enough | 04:16 |
rm_work | and I got a Vive recently | 04:16 |
rm_work | so maybe some VR stuff | 04:16 |
rm_work | Rec Room is fun, Darknet is really cool too | 04:16 |
johnsom | Nice. haven't tried any of the VR gear yet | 04:16 |
rm_work | https://www.darknetgame.com/ | 04:17 |
johnsom | Downside of not being on an HP site any longer | 04:17 |
rm_work | no cool toys to play with? :( | 04:17 |
rm_work | ah when is that Eclipse thing | 04:17 |
*** belharar has joined #openstack-lbaas | 04:17 | |
rm_work | I was tempted to go down to Oregon | 04:17 |
rm_work | I have a friend who's driving down | 04:18 |
johnsom | Aug 21st | 04:18 |
johnsom | It's going to be a cluster here | 04:18 |
rm_work | heh yeah | 04:18 |
rm_work | are you in the "zone"? | 04:18 |
johnsom | Like 150,000 people are supposed to show up here in our 50,000 person town | 04:18 |
rm_work | lol | 04:18 |
johnsom | I am in the zone | 04:18 |
rm_work | nice :P at least you can hide in your house | 04:19 |
rm_work | and just look outside | 04:19 |
johnsom | The university is having a big "event" and opening the dorms | 04:19 |
xgerman_ | Selling rooms on airbnb? | 04:19 |
johnsom | Nah, airbnb creeps me out, but many people here are | 04:19 |
johnsom | We are taking the day off and sheltering-in-place | 04:19 |
rm_work | I use AirBnB all the time | 04:19 |
rm_work | so :P | 04:19 |
johnsom | Maybe walk to the nearby park | 04:20 |
xgerman_ | Get a month mortgage in a few nights | 04:20 |
johnsom | See, my point exactly | 04:20 |
johnsom | In a single night from what I have heard about the hotels | 04:20 |
xgerman_ | Chance of a lifetime | 04:21 |
johnsom | rm_work if you are so motivated, you are of course welcome here | 04:21 |
rm_work | heh not sure what my friend's plan is... i honestly wonder if he's planning to sleep in his car, or try to camp | 04:21 |
rm_work | but i assume all the legal campgrounds are crazy booked too | 04:21 |
rm_work | i will probably end up being way too busy/lazy | 04:22 |
johnsom | Yeah, the permits for most of the area sold out in a few hours | 04:22 |
johnsom | We are too close to the highest point in the coast range (mary's peak) | 04:22 |
rm_work | yeah we were gonna camp on the drive up from TX and then we realized you can't just... camp | 04:22 |
xgerman_ | My Oregon relatives are throwing a big party... | 04:23 |
johnsom | All for less than ten minutes of entertainment in the morning. My guess is tailgating the rest of the day. Sigh. Living in a college town | 04:24 |
johnsom | Thus, why we are sheltering-in-place... grin | 04:24 |
rm_work | yeah what time is it? | 04:24 |
rm_work | early? | 04:25 |
rm_work | i might not even want to be AWAKE that early | 04:25 |
johnsom | Like 9 or 10 am | 04:25 |
rm_work | ugh yeah | 04:25 |
rm_work | that's a bit before my alarm | 04:25 |
johnsom | Hahaha | 04:25 |
xgerman_ | Will be dark so great for sleeping;-) | 04:25 |
johnsom | And if you stare without glasses, it will be dark the rest of the day.... | 04:26 |
johnsom | They have activated the national guard for it here. | 04:26 |
johnsom | Whatever that means | 04:27 |
xgerman_ | There was some big eclipse in Germany when I was in school - so I feel like I've already seen one ;-) | 04:27 |
johnsom | Yeah, I am pretty sure we had one when I was pre-school age. They wouldn't let us go outside so we didn't blind ourselves | 04:28 |
rm_work | <_< | 04:29 |
xgerman_ | Also selling glasses last minute is a lucrative business | 04:29 |
johnsom | hahaha, darn youngins | 04:29 |
johnsom | Yeah, the astronomy group is selling them at the fair. We volunteer each year, so I plan to pick some up there. | 04:30 |
xgerman_ | Just make sure they are not fake ;-) | 04:33 |
johnsom | Yeah, that is a big problem from what I hear. | 04:34 |
xgerman_ | Yep. Was the same with the German eclipse | 04:35 |
xgerman_ | We are contemplating coming up. My wife + kids would like to see the eclipse... | 04:36 |
xgerman_ | But as you said getting there is a cluster | 04:37 |
rm_work | johnsom: you know how to specify a DB time offset in SQLA? | 04:42 |
rm_work | there's func.now() which is just "CURRENT_TIME" in MySQL | 04:43 |
johnsom | No | 04:43 |
rm_work | but I want to be like | 04:43 |
rm_work | func.now()+60 | 04:43 |
johnsom | SQLA is super limited here | 04:43 |
rm_work | blegh | 04:43 |
rm_work | i can probably QUERY the DB for the time | 04:43 |
rm_work | and do it myself | 04:43 |
johnsom | I fought with that when I wanted the DB to be the "single point of truth for time" and gave up. | 04:44 |
rm_work | so is it already not? | 04:44 |
johnsom | They claim it's too DB specific, when I claim BS | 04:44 |
johnsom | Yeah, gave up. The assumption is the controllers are all NTP clients | 04:44 |
rm_work | well i'm doing the initial heartbeat insert thing | 04:44 |
rm_work | and i have everything but this | 04:44 |
johnsom | I despise SQLA | 04:45 |
rm_work | just need to do now+offset as the initial last_update | 04:45 |
johnsom | Yeah, as it is now, just use the host UTC time and python time math | 04:45 |
rm_work | if the time is off from the DB ... what happens is ... either they failover-loop instantly because always past the time | 04:46 |
rm_work | or ... >_> | 04:46 |
rm_work | oh, though already we really need DB-time to be in sync i guess | 04:46 |
rm_work | since the HMs do the retrieval based on their time? | 04:46 |
rm_work | or ... OH no that's a DB-side query | 04:46 |
rm_work | bleh | 04:46 |
johnsom | It came down to I would have had to ask SQLA what engine it is and then push native SQL through it. Totally defeats the purpose (SQLA ??? Purpose???) of SQLA and different DB backends | 04:47 |
rm_work | welllll | 04:48 |
rm_work | you can supposedly do like | 04:48 |
rm_work | select([some_table, func.current_date()]).execute() | 04:48 |
rm_work | and get the time back | 04:49 |
johnsom | At this point, I think NTP is probably good enough for us. We can probably save the DB roundtrip | 04:50 |
johnsom | I don't think we need to go to the PTP level here | 04:51 |
rm_work | k | 04:51 |
rm_work | just hope your DB is the same timezone is all >_> | 04:52 |
johnsom | Yeah, the world should be UTC... Am I right??? | 04:52 |
johnsom | Seems like it is kind of your current timezone.... | 04:56 |
rm_work | I THINK this works tho... | 04:57 |
rm_work | db_apis.get_session().execute(expression.select(functions.now())) | 04:57 |
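Slightly expanded, what that snippet is doing, with the offset applied in Python rather than in SQL; the SQLAlchemy-1.x-style `select([...])` call and the exact Octavia import path are assumptions, so treat this as a sketch rather than the final patch.

```python
from datetime import timedelta

from sqlalchemy.sql import expression, functions

from octavia.db import api as db_apis  # Octavia's session helper


def initial_last_update(offset_seconds=60):
    session = db_apis.get_session()
    # SELECT now() -- evaluated by whichever backend the session points at
    # (MySQL in a real deployment, sqlite in the functional tests).
    db_now = session.execute(expression.select([functions.now()])).scalar()
    # Do the "+offset" math client-side so no dialect-specific SQL is needed.
    return db_now + timedelta(seconds=offset_seconds)
```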
johnsom | Just make sure it works with our sqlite functional tests.... | 04:57 |
rm_work | augh | 04:57 |
johnsom | Since you are kind of bypassing SQLA | 04:57 |
rm_work | yeah hopefully | 04:57 |
rm_work | it's not | 04:57 |
rm_work | i do not do any pure SQL | 04:58 |
rm_work | it's all from SQLAlchemy | 04:58 |
johnsom | Doesn't expression bypass? | 04:58 |
rm_work | but the only thing i send is `functions.now()` | 04:58 |
rm_work | which should work for any DB | 04:59 |
rm_work | right? | 04:59 |
johnsom | Sigh, young padawan... | 04:59 |
johnsom | There will always be a DB exception | 04:59 |
rm_work | T_T | 04:59 |
johnsom | on the path to load balancer enlightenment | 05:00 |
johnsom | Grin | 05:00 |
johnsom | Too long of a day, too many virtual networks on my diagrams. | 05:00 |
rm_work | this seems simple from the outside... (the whole initial heartbeat thing) | 05:00 |
rm_work | but instead i'm going to go watch Game of Thrones and start my vacation | 05:01 |
johnsom | Yeah, Trevor's initial heartbeat patch caused the gates to failover | 05:01 |
rm_work | yeah | 05:01 |
johnsom | Wise man, kick can to next week... grin | 05:01 |
rm_work | i just started over | 05:01 |
rm_work | it was easier | 05:01 |
rm_work | k yeah, i'll be around :P | 05:02 |
johnsom | Yeah, probably a good plan | 05:02 |
*** belharar has quit IRC | 05:02 | |
rm_work | good week though, and good note to leave on | 05:03 |
rm_work | just closed out four stories in JIRA that have been open for a while :P | 05:03 |
rm_work | timing issue is resolved | 05:03 |
rm_work | as are all these member status bugs :) | 05:03 |
rm_work | timing was REALLY because of k8s >_< | 05:03 |
rm_work | they override the resolv.conf on the box with really idiotic settings that break DNS badly, so DNS resolution to hit keystone for token auth was timing out all the time | 05:04 |
johnsom | Ouch, yeah, that will not be fun | 05:05 |
rm_work | s/on the box/on the container/ | 05:05 |
johnsom | Aren't they heavy on the hosts file overloading? | 05:05 |
rm_work | but yeah, default dnsPolicy is "ClusterFirst" .... the one that makes it work right is "Default" ... which is not default. | 05:05 |
rm_work | no, actually none | 05:05 |
rm_work | my fix was actually to add the keystone server to /etc/hosts | 05:06 |
johnsom | I am also watching sous vide youtube videos. Got the wife an immersion heater. | 05:06 |
rm_work | noice, i wanted one of those for a while | 05:06 |
rm_work | now i realize i do not have the patience | 05:07 |
johnsom | Amazon black friday in July deal | 05:07 |
rm_work | but it's awesome supposedly :) | 05:07 |
rm_work | lol i must have missed that | 05:07 |
johnsom | I will let you know | 05:07 |
rm_work | night :) | 05:07 |
johnsom | Catch you next week | 05:08 |
*** armax has quit IRC | 05:13 | |
*** armax has joined #openstack-lbaas | 05:13 | |
*** armax has quit IRC | 05:13 | |
*** armax has joined #openstack-lbaas | 05:14 | |
*** armax has quit IRC | 05:14 | |
*** armax has joined #openstack-lbaas | 05:15 | |
*** armax has quit IRC | 05:15 | |
*** armax has joined #openstack-lbaas | 05:16 | |
*** armax has quit IRC | 05:16 | |
*** JudeC has joined #openstack-lbaas | 05:17 | |
sanfern | hi johnsom, how to identify which amphora is serving in ACTIVE-STANDBY mode ? | 05:27 |
openstackgerrit | Santhosh Fernandes proposed openstack/octavia master: [WIP] Adding exabgp-speaker element to amphora image https://review.openstack.org/490164 | 05:31 |
*** aojea has joined #openstack-lbaas | 05:33 | |
*** aojea has quit IRC | 05:33 | |
*** aojea has joined #openstack-lbaas | 05:33 | |
*** robcresswell has quit IRC | 05:41 | |
*** aojea_ has joined #openstack-lbaas | 05:46 | |
*** aojea has quit IRC | 05:48 | |
*** gcheresh_ has joined #openstack-lbaas | 05:54 | |
*** Guest14 has joined #openstack-lbaas | 05:56 | |
*** jick has joined #openstack-lbaas | 06:01 | |
*** belharar has joined #openstack-lbaas | 06:01 | |
*** aojea_ has quit IRC | 06:03 | |
*** aojea has joined #openstack-lbaas | 06:03 | |
openstackgerrit | ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id https://review.openstack.org/458308 | 06:14 |
*** rcernin has joined #openstack-lbaas | 06:19 | |
*** JudeC has quit IRC | 06:32 | |
*** pcaruana has joined #openstack-lbaas | 06:48 | |
*** aojea_ has joined #openstack-lbaas | 06:57 | |
*** kobis has joined #openstack-lbaas | 06:58 | |
*** aojea has quit IRC | 06:59 | |
*** aojea has joined #openstack-lbaas | 07:02 | |
*** aojea_ has quit IRC | 07:04 | |
*** aojea_ has joined #openstack-lbaas | 07:07 | |
*** ducnc has joined #openstack-lbaas | 07:07 | |
*** aojea has quit IRC | 07:10 | |
*** tesseract has joined #openstack-lbaas | 07:17 | |
*** gtrxcb has quit IRC | 07:24 | |
*** ducnc has quit IRC | 07:28 | |
*** isantosp has joined #openstack-lbaas | 07:32 | |
*** robcresswell has joined #openstack-lbaas | 07:41 | |
*** sanfern has quit IRC | 08:05 | |
*** sanfern has joined #openstack-lbaas | 08:05 | |
*** sanfern has quit IRC | 08:06 | |
openstackgerrit | Nir Magnezi proposed openstack/octavia master: Remove WebTest from test requirements https://review.openstack.org/490382 | 08:15 |
*** openstackgerrit has quit IRC | 08:18 | |
*** yamamoto has joined #openstack-lbaas | 09:41 | |
*** rochaporto has quit IRC | 09:49 | |
*** strigazi_OFF has quit IRC | 09:49 | |
*** rochaporto has joined #openstack-lbaas | 09:50 | |
*** strigazi_OFF has joined #openstack-lbaas | 09:50 | |
*** fnaval has quit IRC | 11:00 | |
*** belharar has quit IRC | 11:11 | |
*** atoth has joined #openstack-lbaas | 11:40 | |
*** catintheroof has joined #openstack-lbaas | 12:36 | |
*** dosaboy has quit IRC | 12:38 | |
*** dosaboy has joined #openstack-lbaas | 12:38 | |
*** sanfern has joined #openstack-lbaas | 13:04 | |
*** yamamoto has quit IRC | 13:06 | |
*** links has quit IRC | 13:06 | |
*** ssmith has joined #openstack-lbaas | 13:38 | |
*** cpusmith has joined #openstack-lbaas | 13:40 | |
*** ssmith has quit IRC | 13:43 | |
*** fnaval has joined #openstack-lbaas | 14:00 | |
*** nakul_d has quit IRC | 14:03 | |
*** nakul_d has joined #openstack-lbaas | 14:04 | |
*** yamamoto has joined #openstack-lbaas | 14:06 | |
*** gcheresh_ has quit IRC | 14:07 | |
*** aojea_ has quit IRC | 14:07 | |
*** yamamoto has quit IRC | 14:11 | |
mnaser | has anyone been able to successfully plumb an octavia controller to a tenant network such as one over vxlan or gre | 14:17 |
mnaser | minus the obvious run it in a VM on the cloud thing | 14:18 |
*** kobis has quit IRC | 14:18 | |
*** fnaval_ has joined #openstack-lbaas | 14:21 | |
*** fnaval has quit IRC | 14:24 | |
xgerman_ | I have a management network on vxlan | 14:26 |
xgerman_ | and that turns into any other neutron network past the linux bridge | 14:27 |
xgerman_ | ^mnaser any specifics? | 14:27 |
mnaser | xgerman_ but how do you get your physical machine to access the management network? | 14:28 |
xgerman_ | VxLAN -> bridge -> controller | 14:29 |
xgerman_ | bridge is one of those virtual br you can create with brctl | 14:29 |
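One way the VxLAN -> bridge -> controller plumbing could look when done by hand; the VNI, device names and address below are made-up placeholders, and as xgerman_ notes later in the conversation, a real deployment would more likely take the vxlan off a trunk port rather than terminate it manually like this.

```shell
# Hypothetical manual plumbing of a controller host onto the lb-mgmt vxlan
sudo ip link add vxlan-lbmgmt type vxlan id 1234 dev eth0 dstport 4789
sudo brctl addbr br-lbmgmt
sudo brctl addif br-lbmgmt vxlan-lbmgmt
sudo ip link set vxlan-lbmgmt up
sudo ip link set br-lbmgmt up
# give the controller an address on the management network
sudo ip addr add 172.31.0.5/24 dev br-lbmgmt
```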
mnaser | oh hm i see | 14:29 |
mnaser | so your switches do the vxlan work? | 14:30 |
mnaser | in this case using neutron-openvswitch-agent + gre tunnels, i can't imagine being able to hook into a network in a physical machine | 14:30 |
xgerman_ | gre tunnels are tricky | 14:31 |
mnaser | unless i create a vxlan management network and configure it manually on the controller node | 14:31 |
mnaser | but im not sure if i can mix those in the cloud | 14:31 |
xgerman_ | well, you should have things come in on a trunk port so you can take one of the vxlans for your management purposes | 14:32 |
mnaser | i suppose that's the way to do it but it involves a fair bit of configs across switches and what not | 14:32 |
mnaser | so i'm trying to see if there's a cleaner way but i could just configure a vlan and use that but yeah | 14:33 |
mnaser | thinking about it maybe my issue is more of an octavia one anyways | 14:34 |
xgerman_ | I think somebody experimented with a VPN | 14:34 |
mnaser | accessing the load balancers from the same network as the mgmt network results in no traffic | 14:35 |
mnaser | the path of the traffic is <client ip> => <load balancer ip> incoming, where client ip is an IP in the mgmt subnet and load balancer is the vip | 14:35 |
xgerman_ | I think rm_work is using mgmt. network as VIP network… but he made patches | 14:35 |
mnaser | however, the path of return traffic is <load balancer vip> => <client ip> .. and i think this is where things go wrong because the load balancer vip and the client ip aren't on the same network | 14:36 |
mnaser | so the traffic comes with srcip in the 10.x network (the one the lbaas is sitting on) | 14:36 |
mnaser | rather than come back to the virtual router doing the floating ip and back to the user | 14:37 |
xgerman_ | we proxy the traffic so it should be VIP -> LB IP -> node | 14:38 |
mnaser | to make it easy.. my floating ip is 192.168.0.220, my lbaas vip is 10.0.0.5, my client ip is 192.168.0.133. making a request hits the backend and comes back, but on the way back, tcpdump shows: 10.0.0.5.webcache > 192.168.0.133.60359 | 14:38 |
mnaser | what should happen is 10.0.0.5 sends traffic back to default gateway (neutron) which will do SNAT to send traffic back | 14:39 |
mnaser | but because there is a route for 192.168.0.0/24, it uses that instead | 14:39 |
xgerman_ | ah, that sounds like a bug | 14:39 |
mnaser | i mean the load balancers have always kinda worked | 14:40 |
mnaser | so i wonder if this is a one time thing that it got stuck | 14:40 |
mnaser | let me try to recreate the load balancer | 14:40 |
xgerman_ | yeah, the LB should proxy and hence send things from 10.x | 14:40 |
*** fnaval_ has quit IRC | 14:41 | |
mnaser | i figured that it sits in its own netns so it shouldn't use the routes of the host but yeah | 14:41 |
mnaser | let me do some checks | 14:41 |
johnsom | It will honor "host_routes" defined in neutron for the neutron networks. So if there are "host_routes" configured for the network in neutron, the netns in the amp will pick those up | 14:42 |
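A quick way to check whether a subnet carries any such routes (standard openstackclient; the subnet id is whatever the VIP/member network uses):

```shell
# Static routes neutron will push to ports on that subnet (empty if none)
openstack subnet show <subnet-id> -c host_routes
```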
*** armax has joined #openstack-lbaas | 14:47 | |
*** yamamoto has joined #openstack-lbaas | 14:47 | |
*** yamamoto has quit IRC | 14:47 | |
*** fnaval has joined #openstack-lbaas | 14:51 | |
mnaser | johnsom no host routes, i wonder if it was a one time oddity | 14:52 |
johnsom | Ok, just wanted to share in case.... | 14:53 |
*** kobis has joined #openstack-lbaas | 14:57 | |
*** kobis has quit IRC | 15:00 | |
mnaser | so packet flow is: 192.168.0.133.60720 > 10.0.0.7.webcache (initial request, DNAT by neutron floating ip), 10.0.0.4.33042 > 10.0.0.12.31841 (request towards backend), 10.0.0.12.31841 > 10.0.0.4.33042 (response from backend), 10.0.0.7.webcache > 192.168.0.133.60720 (response to client) | 15:01 |
mnaser | where it is going wrong is the last one should be hitting the virtual router (aka 10.0.0.1) rather than the route installed by being on the same l2 network | 15:02 |
mnaser | and i believe netns should have made sure this doesnt happen but i guess this is where i have to start troubleshooting | 15:02 |
*** Guest14 is now known as ajo | 15:04 | |
*** kobis has joined #openstack-lbaas | 15:27 | |
mnaser | omg | 15:29 |
mnaser | johnsom got it | 15:29 |
mnaser | mtu of network is not respected in amphorae | 15:29 |
mnaser | 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 | 15:29 |
mnaser | but in reality the network has a lower mtu (which neutron even sets by dhcp) | 15:29 |
mnaser | ip netns exec amphora-haproxy ip link set eth1 mtu 1458 fixed it | 15:29 |
mnaser | so i guess we need to make sure that amphorae respect the correct mtu | 15:30 |
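The check and the one-off fix mnaser ran, put next to the value neutron advertises; the `openstack network show` step is an addition here for comparison, not something from the paste above.

```shell
# MTU neutron advertises for the tenant/VIP network
openstack network show <network-id> -c mtu
# MTU the amphora is actually using inside its namespace
sudo ip netns exec amphora-haproxy ip link show eth1
# one-off workaround until the image honours the neutron MTU (see the commit linked below)
sudo ip netns exec amphora-haproxy ip link set eth1 mtu 1458
```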
mnaser | i guess it worked before because the mtu was not an issue due to the size of the page | 15:30 |
mnaser | hmm https://github.com/openstack/octavia/commit/89f6b2ccefa67d6a5b2d1c518d2361580f83fcc1 | 15:31 |
mnaser | i guess my image was out of date, darn | 15:32 |
*** kobis has quit IRC | 15:38 | |
*** yamamoto has joined #openstack-lbaas | 15:48 | |
*** yamamoto has quit IRC | 15:53 | |
*** kobis has joined #openstack-lbaas | 16:00 | |
*** belharar has joined #openstack-lbaas | 16:10 | |
*** rcernin has quit IRC | 16:12 | |
*** kobis has quit IRC | 16:19 | |
*** kobis has joined #openstack-lbaas | 16:29 | |
*** belharar has quit IRC | 16:37 | |
johnsom | FYI, PTG etherpads: https://wiki.openstack.org/wiki/PTG/Queens/Etherpads | 16:41 |
*** oomichi has quit IRC | 16:41 | |
*** oomichi has joined #openstack-lbaas | 16:44 | |
*** cpusmith_ has joined #openstack-lbaas | 16:49 | |
*** cpusmith has quit IRC | 16:49 | |
*** cpusmith has joined #openstack-lbaas | 16:50 | |
*** cpusmith_ has quit IRC | 16:54 | |
*** JudeC has joined #openstack-lbaas | 16:54 | |
*** pcaruana has quit IRC | 16:54 | |
*** sshank has joined #openstack-lbaas | 16:55 | |
*** rcernin has joined #openstack-lbaas | 17:02 | |
*** armax_ has joined #openstack-lbaas | 17:03 | |
*** armax has quit IRC | 17:03 | |
*** armax_ is now known as armax | 17:03 | |
*** ajo has quit IRC | 17:04 | |
*** kobis has quit IRC | 17:05 | |
*** sanfern has quit IRC | 17:09 | |
*** sanfern has joined #openstack-lbaas | 17:09 | |
*** atoth has quit IRC | 17:20 | |
*** atoth has joined #openstack-lbaas | 17:22 | |
*** sanfern has quit IRC | 17:24 | |
*** sanfern has joined #openstack-lbaas | 17:26 | |
*** tongl has joined #openstack-lbaas | 17:38 | |
*** tesseract has quit IRC | 17:49 | |
*** SumitNaiksatam has joined #openstack-lbaas | 17:54 | |
*** cpusmith has quit IRC | 17:58 | |
*** kobis has joined #openstack-lbaas | 18:17 | |
*** sshank has quit IRC | 18:31 | |
*** gcheresh_ has joined #openstack-lbaas | 19:09 | |
*** openstackgerrit has joined #openstack-lbaas | 19:27 | |
openstackgerrit | Swaminathan Vasudevan proposed openstack/octavia master: Adds support for SUSE distros https://review.openstack.org/488885 | 19:27 |
*** kobis has quit IRC | 19:39 | |
*** harlowja has quit IRC | 19:57 | |
*** sshank has joined #openstack-lbaas | 20:02 | |
xgerman_ | @johnsom how do we envision that distributor stuff working for the end user? | 20:33 |
xgerman_ | are we letting them say how many amphora should be in there or is that in the flavor? | 20:33 |
*** SumitNaiksatam has quit IRC | 20:56 | |
johnsom | xgerman_ That is a flavor setting IMO | 21:00 |
xgerman_ | yeah, my feel as well | 21:00 |
xgerman_ | so we need to get flavor before we can have ACTIVE-ACTIVE | 21:00 |
*** gcheresh_ has quit IRC | 21:04 | |
*** rtjure has joined #openstack-lbaas | 21:05 | |
*** harlowja has joined #openstack-lbaas | 21:15 | |
*** aojea has joined #openstack-lbaas | 21:17 | |
*** yamamoto has joined #openstack-lbaas | 21:21 | |
*** aojea_ has joined #openstack-lbaas | 21:23 | |
openstackgerrit | Merged openstack/neutron-lbaas master: Updated from global requirements https://review.openstack.org/488863 | 21:24 |
*** aojea has quit IRC | 21:24 | |
*** yamamoto has quit IRC | 21:26 | |
*** aojea has joined #openstack-lbaas | 21:28 | |
*** aojea_ has quit IRC | 21:30 | |
*** aojea_ has joined #openstack-lbaas | 21:32 | |
*** aojea has quit IRC | 21:35 | |
*** aojea has joined #openstack-lbaas | 21:37 | |
*** aojea_ has quit IRC | 21:40 | |
*** aojea_ has joined #openstack-lbaas | 21:42 | |
*** aojea has quit IRC | 21:45 | |
*** aojea has joined #openstack-lbaas | 21:47 | |
johnsom | xgerman_ Well, in the interim we can do like we do for act/stdby and just make it a blanket config setting | 21:49 |
xgerman_ | yeah, I am trying to make that distributor driver interface | 21:49 |
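Purely as an illustration of the shape such a driver interface might take (this is not the interface from review 313006; every method name below is a guess):

```python
import abc


class DistributorDriverMixin(abc.ABC):
    """Hypothetical distributor driver interface for ACTIVE-ACTIVE topologies."""

    @abc.abstractmethod
    def create_distributor(self, load_balancer):
        """Set up the distributor in front of a load balancer."""

    @abc.abstractmethod
    def register_amphorae(self, load_balancer, amphorae):
        """Start sending traffic to the given active amphorae."""

    @abc.abstractmethod
    def unregister_amphorae(self, load_balancer, amphorae):
        """Stop sending traffic to the given amphorae."""

    @abc.abstractmethod
    def delete_distributor(self, load_balancer):
        """Tear the distributor down."""
```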
*** aojea_ has quit IRC | 21:50 | |
*** aojea has quit IRC | 21:56 | |
*** aojea has joined #openstack-lbaas | 21:58 | |
*** yamamoto has joined #openstack-lbaas | 21:58 | |
*** aojea_ has joined #openstack-lbaas | 22:03 | |
*** aojea has quit IRC | 22:06 | |
*** aojea has joined #openstack-lbaas | 22:08 | |
*** aojea_ has quit IRC | 22:10 | |
*** apuimedo has quit IRC | 22:11 | |
*** aojea_ has joined #openstack-lbaas | 22:13 | |
*** apuimedo has joined #openstack-lbaas | 22:14 | |
*** fnaval has quit IRC | 22:16 | |
*** aojea has quit IRC | 22:16 | |
*** aojea has joined #openstack-lbaas | 22:18 | |
*** aojea_ has quit IRC | 22:21 | |
*** aojea has quit IRC | 22:26 | |
openstackgerrit | German Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE Topology: Initial Distributor Driver Mixin https://review.openstack.org/313006 | 22:33 |
xgerman_ | okey, that should be a good basis for next steps | 22:34 |
johnsom | Cool | 22:35 |
xgerman_ | so on the OSA front jenkins is obstructing but they are more lenient with deadlines… | 22:46 |
*** fnaval has joined #openstack-lbaas | 22:56 | |
johnsom | What is up with Jenkins? | 22:57 |
*** rcernin has quit IRC | 22:58 | |
xgerman_ | timeouts, tempest disk images not loaded | 22:59 |
johnsom | Bummer. For the glance issue, I wonder if it isn't waiting for glance to mark it ready. That is async and can take a while | 23:22 |
xgerman_ | well, I will keep hitting recheck and hopefully get that stuff merged | 23:32 |