*** mahito has joined #openstack-operators | 00:04 | |
*** alop has quit IRC | 00:27 | |
*** gyee has quit IRC | 00:33 | |
*** `mjr_ has quit IRC | 00:44 | |
*** chlong has quit IRC | 00:52 | |
*** klindgren has joined #openstack-operators | 01:27 | |
*** chlong has joined #openstack-operators | 02:11 | |
*** robksawyer has joined #openstack-operators | 02:18 | |
*** hakimo_ has joined #openstack-operators | 02:52 | |
*** hakimo has quit IRC | 02:55 | |
*** bradjones has quit IRC | 03:03 | |
*** bradjones has joined #openstack-operators | 03:04 | |
*** bradjones has quit IRC | 03:04 | |
*** bradjones has joined #openstack-operators | 03:04 | |
*** Marga_ has quit IRC | 03:13 | |
*** robksawyer has quit IRC | 03:25 | |
*** cpschult has joined #openstack-operators | 03:42 | |
*** mahito has quit IRC | 03:59 | |
*** HenryG has quit IRC | 04:09 | |
*** HenryG has joined #openstack-operators | 04:12 | |
*** cpschult has quit IRC | 04:28 | |
*** robksawyer has joined #openstack-operators | 04:31 | |
*** mahito has joined #openstack-operators | 04:40 | |
*** maishsk has quit IRC | 04:51 | |
*** mahito has quit IRC | 05:04 | |
*** mahito has joined #openstack-operators | 05:05 | |
*** mahito_ has joined #openstack-operators | 05:08 | |
*** mahito has quit IRC | 05:09 | |
*** bvandenh has joined #openstack-operators | 05:39 | |
*** bhunter71 has quit IRC | 05:42 | |
*** mmorais has quit IRC | 05:58 | |
*** robksawyer has quit IRC | 06:07 | |
*** robksawyer has joined #openstack-operators | 06:08 | |
*** Miouge has joined #openstack-operators | 06:15 | |
*** maishsk has joined #openstack-operators | 06:16 | |
*** maishsk has quit IRC | 06:26 | |
*** maishsk has joined #openstack-operators | 06:28 | |
*** chlong has quit IRC | 06:49 | |
*** ruagair has quit IRC | 06:54 | |
*** maishsk has quit IRC | 06:59 | |
*** maishsk has joined #openstack-operators | 07:00 | |
*** bvandenh has quit IRC | 07:00 | |
*** Marga_ has joined #openstack-operators | 07:27 | |
*** vinsh_ has joined #openstack-operators | 07:29 | |
*** vinsh has quit IRC | 07:31 | |
*** vinsh has joined #openstack-operators | 07:36 | |
*** vinsh_ has quit IRC | 07:36 | |
*** subscope has quit IRC | 07:41 | |
*** subscope has joined #openstack-operators | 07:55 | |
*** vinsh_ has joined #openstack-operators | 07:58 | |
*** vinsh has quit IRC | 07:59 | |
*** vinsh has joined #openstack-operators | 07:59 | |
*** vinsh_ has quit IRC | 08:03 | |
*** subscope has quit IRC | 08:09 | |
*** derekh has joined #openstack-operators | 08:25 | |
*** subscope has joined #openstack-operators | 08:25 | |
*** maishsk has quit IRC | 08:30 | |
*** maishsk has joined #openstack-operators | 08:31 | |
*** bvandenh has joined #openstack-operators | 08:45 | |
*** robksawyer has quit IRC | 08:47 | |
*** robksawyer has joined #openstack-operators | 08:47 | |
*** saneax has quit IRC | 09:01 | |
*** mahito_ has quit IRC | 09:06 | |
*** mahito has joined #openstack-operators | 09:07 | |
*** mahito has quit IRC | 09:11 | |
*** bvandenh has quit IRC | 09:28 | |
*** maishsk has quit IRC | 09:34 | |
*** bvandenh has joined #openstack-operators | 09:42 | |
*** saneax has joined #openstack-operators | 10:19 | |
*** maishsk has joined #openstack-operators | 10:44 | |
*** chlong has joined #openstack-operators | 11:08 | |
*** maishsk has quit IRC | 12:14 | |
*** maishsk has joined #openstack-operators | 12:19 | |
*** cpschult has joined #openstack-operators | 12:24 | |
*** j05hk has joined #openstack-operators | 12:26 | |
*** cpschult has quit IRC | 12:32 | |
*** VW_ has joined #openstack-operators | 12:35 | |
*** ferest has joined #openstack-operators | 12:50 | |
*** bvandenh has quit IRC | 12:57 | |
*** ferest has quit IRC | 12:57 | |
*** bvandenh has joined #openstack-operators | 12:59 | |
*** j05hk has quit IRC | 13:38 | |
*** j05hk has joined #openstack-operators | 13:38 | |
*** bvandenh has quit IRC | 13:42 | |
*** dminer has joined #openstack-operators | 13:48 | |
*** belmoreira has joined #openstack-operators | 14:07 | |
*** VW_ has quit IRC | 14:08 | |
*** VW_ has joined #openstack-operators | 14:09 | |
*** VW_ has quit IRC | 14:13 | |
*** zul has quit IRC | 14:14 | |
*** VW_ has joined #openstack-operators | 14:18 | |
*** belmoreira has quit IRC | 14:19 | |
*** rbrooker has joined #openstack-operators | 14:20 | |
*** zul has joined #openstack-operators | 14:27 | |
*** jeh has joined #openstack-operators | 14:40 | |
*** mdamkot_sfdc has joined #openstack-operators | 14:42 | |
*** maishsk has quit IRC | 14:59 | |
*** saneax has quit IRC | 14:59 | |
*** mdorman has joined #openstack-operators | 15:02 | |
*** bhunter71 has joined #openstack-operators | 15:07 | |
*** fawadkhaliq has joined #openstack-operators | 15:07 | |
*** david-lyle has joined #openstack-operators | 15:17 | |
*** rbrooker has quit IRC | 15:30 | |
*** SimonChung1 has quit IRC | 15:36 | |
*** saneax has joined #openstack-operators | 15:37 | |
*** rbrooker has joined #openstack-operators | 15:38 | |
*** mdamkot_sfdc has quit IRC | 15:49 | |
*** kitch_ has joined #openstack-operators | 16:03 | |
kitch_ | Anyone with experience taking a deployment with one network node and splitting it into two? | 16:03 |
klindgren | I assume that would just be moving a router to the new node? | 16:08 |
klindgren | Like neutron l3-agent-router-remove <l3 agent uuid> <router uuid> | 16:08 |
klindgren | neutron l3-agent-router-add <new l3 agent uuid> <router uuid> | 16:09 |
*** signed8bit has joined #openstack-operators | 16:09 | |
klindgren | we (godaddy) don't use routers/l3 agents in our neutron deployment. I know that TWC does, I believe they talked about moving routers/agents around. | 16:10 |
klindgren | mfisch, clayton, med_ ^^ | 16:10 |
med_ | yep | 16:10 |
med_ | one network node: like a single neutron controller and making it several? | 16:11 |
med_ | (for HA) | 16:11 |
med_ | or for segmenting the network kitch_ ? | 16:11 |
med_ | thanks Kris. | 16:11 |
klindgren | I assumed splitting for load | 16:11 |
mfisch | we move routers all the time | 16:11 |
*** rbrooker has quit IRC | 16:12 | |
med_ | kitch_, we do this with Neutron with VXLAN and OVS | 16:12 |
kitch_ | med_: for scale out, we have a singleton and things are not going well, time to split the work. i can take an outage, but i'm not sure what the process would be | 16:12 |
med_ | we (today) have 3 neutron api endpoints and we'll be scaling that up to 5 or more (RSN) | 16:12 |
med_ | not sure you'll need an outage even | 16:12 |
mfisch | kitch_: bring up a new node, move routers over | 16:13 |
kitch_ | so the trick is, i need to move existing networks to that new node, not just take new ones | 16:13 |
med_ | just bring up (hopefully using some sort of config mgmt tool like Puppet &c) another neutron api endpoint | 16:13 |
med_ | and then do a router delete, router add to the new endpoint | 16:13 |
mfisch | moving routers is simple | 16:13 |
mfisch | let me find our tool | 16:13 |
* med_ works directly with mfisch on this | 16:13 | |
mfisch | neutron l3-agent-router-remove ${SRC_AGENT_UUID} ${ROUTER_UUID} | 16:13 |
mfisch | neutron l3-agent-router-add ${DST_AGENT_UUID} ${ROUTER_UUID} | 16:13 |
mfisch | in a nutshell | 16:13 |
med_ | ^ yep | 16:14 |
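Pulling the commands from the exchange above together: a rough sketch of the whole router-move sequence using the neutron CLI of this era. The UUIDs are placeholders, and the discovery commands (agent-list, l3-agent-list-hosting-router, router-list-on-l3-agent) are assumed to exist in the client version in use - check neutron help if in doubt.

    # List agents and note the UUIDs of the source and destination L3 agents.
    neutron agent-list

    # See which agent currently hosts the router, and what the source agent is hosting.
    neutron l3-agent-list-hosting-router ${ROUTER_UUID}
    neutron router-list-on-l3-agent ${SRC_AGENT_UUID}

    # Detach the router from the old agent and attach it to the new one.
    neutron l3-agent-router-remove ${SRC_AGENT_UUID} ${ROUTER_UUID}
    neutron l3-agent-router-add ${DST_AGENT_UUID} ${ROUTER_UUID}

    # Confirm the router landed on the new agent.
    neutron l3-agent-list-hosting-router ${ROUTER_UUID}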
*** serverascode has quit IRC | 16:14 | |
kitch_ | ok, so when new routers are created, do they just get load balanced by virtue of the haproxy in front of the neutron servers? | 16:14 |
mfisch | you should always have >1 node for this anyway, if it dies, you'd be in trouble | 16:15 |
mfisch | kitch_: no | 16:15 |
*** serverascode has joined #openstack-operators | 16:15 | |
mfisch | kitch_: that code is explicitly moving it to a node, hence the SRC and DST agent uuids | 16:15 |
kitch_ | right, i'm saying after i move the existing ones over | 16:15 |
kitch_ | what happens to future routers? | 16:16 |
*** SimonChung has joined #openstack-operators | 16:16 | |
mfisch | disable the agent on the old node if you want to | 16:16 |
mfisch | to prevent stuff from coming back | 16:16 |
mfisch | or otherwise neutrons scheduler figures it out for you | 16:16 |
mfisch | not haproxy afaik | 16:17 |
kitch_ | ah, ok | 16:17 |
kitch_ | that was the part i was missing | 16:17 |
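To keep routers from being scheduled back onto the old node (the "disable the agent" step mentioned above), the old L3 agent can be marked administratively down. The flag below is an assumption about this client version - the underlying API field is the agent's admin_state_up, so check neutron help agent-update for the exact syntax.

    # Mark the old L3 agent administratively down so the scheduler skips it.
    # (Flag name assumed; the API field being toggled is admin_state_up.)
    neutron agent-update --admin-state-down ${SRC_AGENT_UUID}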
kitch_ | can i split JUST the l3 agent? | 16:17 |
kitch_ | we’re building a new architecture for scale from the start, but i need a band-aid with minimal effort to keep folks happy in the interim | 16:17 |
mfisch | I assume you can run a node with just the l3 agent but I'm not sure | 16:18 |
mfisch | why dont you setup this in a VM and try it there first? | 16:18 |
klindgren | kitch_, yes - re: new routers, the scheduler should split the placement between the l3 agents - I dunno if that's tunable or not | 16:18 |
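For what it's worth, placement of new routers is tunable: it is controlled by the L3 scheduler configured in neutron.conf on the neutron-server node. The class paths below are the ones shipped in the neutron tree of this era and may differ in other releases.

    # /etc/neutron/neutron.conf
    [DEFAULT]
    # Pick a random eligible L3 agent for each new router (the long-standing default).
    router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
    # Or place each new router on the agent currently hosting the fewest routers:
    # router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler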
kitch_ | fair enough, i think i'll spin up a test with just another l3 agent, and see if i can remove and add them | 16:18 |
kitch_ | now, question is this: there are active users on those existing routers, and i don't believe neutron is going to let me remove the router if it's in use, no? | 16:19 |
kitch_ | ohh | 16:19 |
kitch_ | sorry | 16:19 |
kitch_ | i'm removing the router from the l3 agent | 16:19 |
kitch_ | not the router itself | 16:19 |
klindgren | so you currently have multiple services on a node and want to move that stuff to just a new node | 16:20 |
klindgren | ? | 16:20 |
kitch_ | yep | 16:20 |
klindgren | s/that stuff/l3-agent | 16:20 |
klindgren | I would make a new l3-agent node | 16:21 |
kitch_ | well, i want to split the one l3 agent on my current node, out to two l3 agents on other nodes | 16:21 |
klindgren | move the routers to the new node | 16:21 |
klindgren | remove the l3 agent from the old node | 16:21 |
kitch_ | yeah, i think im getting it now | 16:21 |
*** Guest100_ has joined #openstack-operators | 16:28 | |
med_ | and there may be a nominal "blip" for the users | 16:30 |
med_ | and how nominal is relative to the number of routers/etc. | 16:31 |
*** signed8bit has quit IRC | 16:31 | |
*** Guest100_ has quit IRC | 16:32 | |
*** bvandenh has joined #openstack-operators | 16:33 | |
*** simonmcc has quit IRC | 16:38 | |
*** simonmcc has joined #openstack-operators | 16:38 | |
*** derekh has quit IRC | 16:46 | |
kitch_ | sure | 16:48 |
kitch_ | Thanks folks for the thoughts and info, ill report back with whatever we end up doing :) | 16:49 |
*** jraim has quit IRC | 17:07 | |
*** jraim has joined #openstack-operators | 17:08 | |
*** alop has joined #openstack-operators | 17:09 | |
*** Guest100_ has joined #openstack-operators | 17:11 | |
*** fawadkhaliq has quit IRC | 17:11 | |
*** fawadkhaliq has joined #openstack-operators | 17:11 | |
kitch_ | Just curious, if one were to create a node with only the l3 agent on it, can i really get away with it being the only openstack service there? | 17:24 |
*** Guest100_ has quit IRC | 17:30 | |
klindgren | kitch_, outside of running whatever L2 agent you use - I don't see why you would need to run anything else on that node - unless you are doing fwaas or something similar. DHCP does not need to be on the same network node as the router. | 17:32 |
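A minimal sketch of what such a dedicated L3 node might run, per the discussion above: just the L2 agent for wiring plus the L3 agent itself. Service names and config values are the usual ones for an OVS-based deployment of this era and may differ by distro or driver choice.

    # Services on the dedicated L3 node:
    #   neutron-openvswitch-agent  - L2 wiring (bridges, VLAN/VXLAN tunnels)
    #   neutron-l3-agent           - routers and floating IPs inside network namespaces

    # /etc/neutron/l3_agent.ini
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    # Only if routers reach the external network through a dedicated bridge:
    # external_network_bridge = br-ex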
*** signed8bit has joined #openstack-operators | 17:33 | |
*** signed8bit is now known as signed8bit_ZZZzz | 17:46 | |
*** signed8bit_ZZZzz is now known as signed8bit | 17:48 | |
*** bvandenh has quit IRC | 17:53 | |
kitch_ | klindgren: ok, so drag along our ml2 and ovs with l3 and we should be set | 17:57 |
*** slaweq has joined #openstack-operators | 18:00 | |
klindgren | kitch_, right | 18:11 |
kitch_ | Wish us luck. I’ll be quite happy to get off of rhel 6.5 and on to multi-ubuntu nodes | 18:13 |
klindgren | so we run dhcp as a standalone service on a set of nodes - it needs the L2 stuff (for us, the openvswitch-agent) to configure it to talk on the correct network, then we only have the dhcp agent running on that node. | 18:13 |
klindgren | we use Cent 6 and Cent7 - we haven't had any issues | 18:13 |
klindgren | outside of the python 2.6 stuff. But I have a workaround for that so *shrug* | 18:14 |
*** signed8bit is now known as signed8bit_ZZZzz | 18:33 | |
*** signed8bit_ZZZzz is now known as signed8bit | 18:35 | |
*** Miouge has quit IRC | 18:36 | |
kitch_ | so no performance issues with things like namespaces on cent6? klindgren | 18:39 |
klindgren | kitch_, we only run dhcp-agent | 18:39 |
klindgren | so not that we have seen | 18:39 |
klindgren | though we are moving those servers to cent7 | 18:40 |
klindgren | actually we are mostly moved to cent7 | 18:40 |
klindgren | we also have all of our instances configured to use config-drive to statically configure their network stuff | 18:40 |
kitch_ | ok :) I was seeing 2-3x the performance on a modern kernel vs the 2.6.x kernel in 6 | 18:40 |
klindgren | because it sucks if your instances stop trying to query for a lease | 18:41 |
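For the config-drive approach mentioned above, the relevant knob is nova's force_config_drive option on the compute nodes; the accepted values changed across releases (older ones take the string "always", newer ones a boolean), so treat this as a sketch and check your nova version.

    # /etc/nova/nova.conf on the compute nodes
    [DEFAULT]
    # Always attach a config drive so instances can configure networking statically
    # instead of depending on DHCP at boot.
    force_config_drive = always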
*** Miouge has joined #openstack-operators | 18:41 | |
klindgren | I assume you were mainly seeing that around the l3-agent stuff? | 18:41 |
klindgren | i.e. iptables inside a netns | 18:42 |
*** bvandenh has joined #openstack-operators | 18:43 | |
kitch_ | yeah | 18:53 |
*** bitblt has joined #openstack-operators | 18:55 | |
*** bitblt has quit IRC | 18:58 | |
*** VW_ has quit IRC | 18:58 | |
*** Miouge has quit IRC | 18:58 | |
*** VW_ has joined #openstack-operators | 18:59 | |
*** VW_ has quit IRC | 18:59 | |
*** VW_ has joined #openstack-operators | 18:59 | |
*** bvandenh has quit IRC | 19:02 | |
klindgren | kitch_, we run all of our VMs on shared provider networks - so no routers for us | 19:05 |
*** yuriy has joined #openstack-operators | 19:12 | |
kitch_ | klindgren: lucky you :D | 19:19 |
mnaser | same ^ | 19:22 |
mnaser | we do offer the ability for users to create their own networks if they want | 19:22 |
mnaser | so it gives them the best level of performance while giving them the ability to still have the whole my-own-network-with-nat-routers stuff | 19:23 |
klindgren | yea - we made modifications to neutron to allow us to inject routes for floating IPs into the network | 19:27 |
klindgren | so that VMs can have floating IPs | 19:27 |
klindgren | and actually have those IPs show up in the VM and be used "normally" | 19:27 |
klindgren | we specifically prevent people from creating routers - 99% of our end users don't care about that stuff anyway | 19:28 |
klindgren | those that do care about that stuff really want L2 adjacency - which we don't support in our model. As a company as a whole we are actively moving away from depending on L2 | 19:29 |
*** bvandenh has joined #openstack-operators | 19:30 | |
klindgren | so they want us to build overlays to solve their problems - which I am not keen on doing, since overlays in my experience are problematic. Plus it gets you to a point where you are always trying to figure out how to scale a router to deal with the load that the VMs behind it serve | 19:31 |
klindgren | i.e. I can put 10 large VMs behind a single router, and the throughput of those VMs is directly tied to the ability of that single router to process packets | 19:32 |
*** bvandenh has quit IRC | 19:37 | |
*** Miouge has joined #openstack-operators | 19:37 | |
*** yuriy has quit IRC | 19:43 | |
*** Miouge has quit IRC | 19:47 | |
*** Miouge has joined #openstack-operators | 20:01 | |
*** Miouge has quit IRC | 20:02 | |
*** bvandenh has joined #openstack-operators | 20:19 | |
*** bhunter71 has quit IRC | 20:20 | |
*** kitch_ has quit IRC | 20:40 | |
*** signed8bit has quit IRC | 20:42 | |
*** markvoelker has quit IRC | 20:42 | |
klindgren | when is the LDT meeting today? - VW_ | 21:11 |
klindgren | 9PM mountain - I can timezone... | 21:13 |
*** VW_ has quit IRC | 21:32 | |
*** VW_ has joined #openstack-operators | 21:33 | |
*** hakimo has joined #openstack-operators | 21:36 | |
*** hakimo_ has quit IRC | 21:36 | |
*** VW_ has quit IRC | 21:37 | |
*** VW_ has joined #openstack-operators | 21:40 | |
VW_ | yep klindgren that is correct | 21:40 |
VW_ | :) | 21:40 |
klindgren | hrm - might not be able to make it today :-/ | 21:41 |
jlk | hrm? | 21:42 |
jlk | strange | 21:42 |
*** fawadk has joined #openstack-operators | 21:47 | |
*** fawadkhaliq has quit IRC | 21:48 | |
jlk | so gmail is pretty bad - it says the time is 03:00 UTC, but clicking through to 22:00 CDT showed the right time. | 21:48 |
*** slaweq has quit IRC | 21:49 | |
*** bhunter71 has joined #openstack-operators | 21:50 | |
*** SimonChung has quit IRC | 21:52 | |
*** dminer has quit IRC | 21:56 | |
*** SimonChung has joined #openstack-operators | 21:59 | |
*** jeh has quit IRC | 22:05 | |
*** hakimo has quit IRC | 22:12 | |
*** hakimo has joined #openstack-operators | 22:13 | |
*** fawadk has quit IRC | 22:31 | |
*** kitch has joined #openstack-operators | 22:32 | |
*** signed8bit has joined #openstack-operators | 22:34 | |
*** bvandenh has quit IRC | 22:37 | |
*** signed8bit has quit IRC | 22:37 | |
*** signed8bit has joined #openstack-operators | 22:38 | |
WormMan | ahh, vif_type=binding_failed... the most useless error ever | 22:42 |
jlk | "Error scheduling instance" | 22:43 |
*** signed8b_ has joined #openstack-operators | 22:45 | |
*** signed8bit has quit IRC | 22:46 | |
*** chlong has quit IRC | 22:52 | |
klindgren | jlk guessing that should be 15:00UTC | 22:56 |
jlk | well, it's 22:57 UTC right now | 22:57 |
jlk | so 3:00 UTC seems right | 22:57 |
klindgren | ah - I think the issue is it would be 3:00 UTC tomorrow | 22:57 |
klindgren | not today | 22:57 |
klindgren | WormMan, and JLK +1 to both of those | 22:58 |
klindgren | or "No Valid Host" | 22:58 |
jlk | gmail is trying to say that 0300 UTC is 8am for me, when in reality it's 8pm for me | 22:59 |
*** david-lyle has quit IRC | 23:02 | |
*** VW_ has quit IRC | 23:12 | |
*** VW_ has joined #openstack-operators | 23:13 | |
*** signed8b_ is now known as signed8bit_ZZZzz | 23:15 | |
*** signed8bit_ZZZzz is now known as signed8b_ | 23:15 | |
*** VW_ has quit IRC | 23:17 | |
*** SimonChung has quit IRC | 23:21 | |
*** VW_ has joined #openstack-operators | 23:27 | |
*** signed8b_ is now known as signed8bit_ZZZzz | 23:28 | |
WormMan | woo, progress... | 23:31 |
WormMan | Failed to start vnc server on my_ip:0 | 23:31 |
WormMan | yea, that's not gonna work :) | 23:31 |
WormMan | but I'm closer to having a cloud | 23:31 |
*** signed8bit_ZZZzz is now known as signed8b_ | 23:32 | |
*** david-lyle has joined #openstack-operators | 23:36 | |
*** mahito has joined #openstack-operators | 23:55 |