*** tbarron_ has quit IRC | 00:32 | |
*** zerick has quit IRC | 00:32 | |
*** kukacz_ has quit IRC | 00:32 | |
*** timburke has quit IRC | 00:32 | |
*** kukacz has joined #openstack-ansible | 00:33 | |
*** zerick has joined #openstack-ansible | 00:33 | |
*** timburke has joined #openstack-ansible | 00:34 | |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_zun master: debian: add support https://review.openstack.org/651003 | 00:37 |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_ceilometer master: debian: add support https://review.openstack.org/651043 | 00:37 |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_gnocchi master: debian: add support https://review.openstack.org/651039 | 00:37 |
*** dxiri has quit IRC | 00:38 | |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_panko master: debian: add support https://review.openstack.org/651022 | 00:38 |
*** partlycloudy has joined #openstack-ansible | 00:53 | |
jsquare | Any idea why the containers might fail to get their network up and running? We had our Rocky deployment process very streamlined until it stopped working because of that. | 00:55 |
mnaser | jsquare: what part of the network containers not working? | 00:57 |
jsquare | mnaser: we've been fine tuning the deployment over the last week; we were deploying the whole thing, then tearing it down, cleaning up, and repeating over and over | 01:00 |
jsquare | at some point today, it started failing, no obvious reason | 01:00 |
jsquare | basically, the infra containers don't get their network configured | 01:00 |
jsquare | we are missing something... | 01:01 |
*** cshen has joined #openstack-ansible | 01:05 | |
*** cshen has quit IRC | 01:09 | |
kplant | it appears to me that on centos 7 both stable/stein and master are trying to install a version of keystone that doesn't exist: https://pastebin.com/DHyw6tMU | 01:10 |
*** gyee has quit IRC | 01:10 | |
openstackgerrit | Merged openstack/openstack-ansible master: Set telemetry debian gate job to non vonting https://review.openstack.org/651643 | 01:16 |
jsquare | mnaser: therefore, task "lxc_container_create : Execute first script" always failing in the setup-hosts playbook | 01:26 |
*** hwoarang has quit IRC | 01:38 | |
*** hwoarang has joined #openstack-ansible | 01:39 | |
*** dave-mccowan has quit IRC | 02:01 | |
*** kmadac has quit IRC | 02:05 | |
*** zhongjun2_ has joined #openstack-ansible | 02:05 | |
*** kmadac has joined #openstack-ansible | 02:07 | |
*** zhongjun2_ has quit IRC | 02:07 | |
*** zhongjun2_ has joined #openstack-ansible | 02:07 | |
*** zhongjun2_ is now known as zhongjun2 | 02:11 | |
*** nurdie has joined #openstack-ansible | 02:30 | |
*** markvoelker has joined #openstack-ansible | 02:31 | |
cloudnull | jsquare I would check to see if the lxc-dnsmasq process is running | 02:32 |
cloudnull | it also may need a restart | 02:33 |
cloudnull | containers will fail to get networking if the first interface is blocking on trying to get DHCP | 02:33 |
openstackgerrit | Merged openstack/openstack-ansible-os_aodh master: debian: add support https://review.openstack.org/651047 | 02:33 |
*** partlycloudy has left #openstack-ansible | 02:38 | |
*** nurdie has quit IRC | 02:40 | |
*** nurdie has joined #openstack-ansible | 02:41 | |
*** nurdie has quit IRC | 02:45 | |
jsquare | cloudnull: yes, all looks fine, actually nothing has changed, apparently | 02:53 |
jsquare | actually, the containers only show the loopback interface | 02:59 |
*** nicolasbock has quit IRC | 03:03 | |
*** markvoelker has quit IRC | 03:05 | |
*** cshen has joined #openstack-ansible | 03:05 | |
*** cshen has quit IRC | 03:09 | |
*** KeithMnemonic has quit IRC | 03:12 | |
*** nurdie has joined #openstack-ansible | 03:22 | |
*** nurdie has quit IRC | 03:30 | |
*** hwoarang has quit IRC | 03:35 | |
*** hwoarang has joined #openstack-ansible | 03:36 | |
mnaser | hmm maybe the systemd_network role didn't run? | 03:40 |
mnaser | kplant: is this a multinode job? | 03:41 |
jsquare | hmmm why do you think it would not? | 03:41 |
mnaser | jsquare: I think that's the part that configures the containers | 03:45 |
mnaser | I believe if you rerun the container create role it .. should do it? | 03:45 |
mnaser | if you get into any containers and run systemctl status systemd-networkd that might give us info | 03:45 |
mnaser | tho that area isn't really my expertise | 03:45 |
*** cmart has joined #openstack-ansible | 03:46 | |
openstackgerrit | Merged openstack/openstack-ansible-os_zun master: debian: add support https://review.openstack.org/651003 | 03:46 |
*** nurdie has joined #openstack-ansible | 03:49 | |
openstackgerrit | Merged openstack/openstack-ansible-os_ceilometer master: debian: add support https://review.openstack.org/651043 | 03:51 |
jsquare | mnaser: systemd-networkd is down inside the containers | 03:52 |
mnaser | status show anything useful jsquare ? | 03:52 |
jsquare | Status: "Shutting down..." | 03:52 |
jsquare | can't find any error anywhere | 03:54 |
jsquare | this issue never happened before; we've deployed the whole thing more than 15 times | 03:56 |
jsquare | i'm at a loss | 03:56 |
mnaser | systemctl restart systemd-networkd inside a container.. does that bring it back? | 03:59 |
openstackgerrit | Merged openstack/openstack-ansible-os_gnocchi master: debian: add support https://review.openstack.org/651039 | 04:00 |
openstackgerrit | Merged openstack/openstack-ansible-os_panko master: debian: add support https://review.openstack.org/651022 | 04:00 |
*** markvoelker has joined #openstack-ansible | 04:02 | |
jsquare | yeah, and stays up for a few seconds with status = processing requests, and then exits successfully | 04:03 |
jsquare | do you know what playbook wires up the host bridge to the containers? | 04:08 |
jsquare | *bridges | 04:09 |
*** cshen has joined #openstack-ansible | 04:20 | |
*** cshen has quit IRC | 04:27 | |
*** markvoelker has quit IRC | 04:36 | |
*** chhagarw has joined #openstack-ansible | 04:38 | |
*** goldenfri has quit IRC | 04:44 | |
mnaser | jsquare: I think that’s lxc container create one | 04:46 |
mnaser | If you’re using lxc | 04:46 |
*** cshen has joined #openstack-ansible | 04:48 | |
*** cshen has quit IRC | 04:52 | |
jsquare | yes, lxc, container setup fails, for some reason the containers don't get attached to lxcbr0 | 05:00 |
*** hwoarang has quit IRC | 05:00 | |
*** hwoarang has joined #openstack-ansible | 05:02 | |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry debian gate job to vonting again https://review.openstack.org/651692 | 05:05 |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry debian gate job to vonting again https://review.openstack.org/651692 | 05:06 |
*** rambo_li has joined #openstack-ansible | 05:13 | |
rambo_li | Hi, I'm having some troubles deploying openstack with the rbd driver. The playbook fails on TASK [Perform online data migrations] and looking at the logs from the cinder-api container I get the following error: "Error attempting to run shared_targets_online_data_migration: MessagingTimeout: Timed out waiting for a reply to message ID". I've probably | 05:15 |
rambo_li | missed something but can't find it | 05:15 |
*** goldenfri has joined #openstack-ansible | 05:16 | |
rambo_li | has anyone met this problem? | 05:19 |
*** goldenfri has quit IRC | 05:32 | |
*** markvoelker has joined #openstack-ansible | 05:33 | |
*** hwoarang has quit IRC | 05:40 | |
*** hwoarang has joined #openstack-ansible | 05:41 | |
*** cmart has quit IRC | 05:43 | |
*** nurdie has quit IRC | 05:59 | |
*** udesale has joined #openstack-ansible | 06:03 | |
*** markvoelker has quit IRC | 06:06 | |
*** kopecmartin|off is now known as kopecmartin | 06:09 | |
*** yetiszaf has quit IRC | 06:13 | |
*** chhagarw has quit IRC | 06:13 | |
*** chhagarw has joined #openstack-ansible | 06:14 | |
*** ahuffman has quit IRC | 06:36 | |
*** chhagarw has quit IRC | 06:39 | |
*** chhagarw has joined #openstack-ansible | 06:39 | |
*** phasespace has quit IRC | 06:40 | |
*** goldenfri has joined #openstack-ansible | 06:48 | |
*** cshen has joined #openstack-ansible | 06:49 | |
*** ahuffman has joined #openstack-ansible | 06:51 | |
*** ivve has joined #openstack-ansible | 06:52 | |
fnpanic | good morninh | 06:53 |
fnpanic | 's/h/g/g' | 06:54 |
*** markvoelker has joined #openstack-ansible | 07:03 | |
*** luksky has joined #openstack-ansible | 07:07 | |
*** mbuil has joined #openstack-ansible | 07:10 | |
*** goldenfri has quit IRC | 07:26 | |
*** CeeMac has joined #openstack-ansible | 07:29 | |
*** markvoelker has quit IRC | 07:36 | |
*** tosky has joined #openstack-ansible | 07:39 | |
*** vnogin has joined #openstack-ansible | 07:42 | |
noonedeadpunk | guilhermesp: no problems:) | 07:49 |
*** phasespace has joined #openstack-ansible | 07:55 | |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Switch to import_task in os_tempest https://review.openstack.org/650054 | 08:01 |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Switch to import_task in os_tempest https://review.openstack.org/650054 | 08:02 |
*** hamzaachi has joined #openstack-ansible | 08:03 | |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible master: Set telemetry jobs to vouting again https://review.openstack.org/651718 | 08:17 |
*** priteau has joined #openstack-ansible | 08:22 | |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer master: Noop change to test gate https://review.openstack.org/651720 | 08:24 |
*** ygk_12345 has joined #openstack-ansible | 08:29 | |
*** yolanda_ has joined #openstack-ansible | 08:29 | |
ygk_12345 | odyssey4me: Hi :) | 08:29 |
noonedeadpunk | guilhermesp: I was not sure whether you were online, so I decided to push the patch | 08:32 |
*** markvoelker has joined #openstack-ansible | 08:34 | |
*** cshen has quit IRC | 08:51 | |
*** cshen has joined #openstack-ansible | 09:03 | |
*** markvoelker has quit IRC | 09:07 | |
*** tbarron has joined #openstack-ansible | 09:21 | |
*** rambo_li has quit IRC | 09:35 | |
*** sum12 has quit IRC | 09:36 | |
*** cshen has quit IRC | 09:38 | |
*** electrofelix has joined #openstack-ansible | 09:40 | |
*** luksky has quit IRC | 09:42 | |
*** sum12 has joined #openstack-ansible | 09:46 | |
*** ygk_12345 has quit IRC | 10:00 | |
*** cshen has joined #openstack-ansible | 10:04 | |
*** markvoelker has joined #openstack-ansible | 10:04 | |
*** ygk_12345 has joined #openstack-ansible | 10:06 | |
*** cshen has quit IRC | 10:09 | |
*** luksky has joined #openstack-ansible | 10:18 | |
*** markvoelker has quit IRC | 10:36 | |
*** yolanda_ has quit IRC | 10:40 | |
*** Kurlee has joined #openstack-ansible | 10:43 | |
*** cshen has joined #openstack-ansible | 10:44 | |
*** nicolasbock has joined #openstack-ansible | 10:46 | |
*** yolanda_ has joined #openstack-ansible | 10:52 | |
ygk_12345 | guilhermesp: hi :) , r u there ? | 10:55 |
*** udesale has quit IRC | 10:57 | |
*** priteau has quit IRC | 10:59 | |
*** ansmith has quit IRC | 11:22 | |
*** dave-mccowan has joined #openstack-ansible | 11:38 | |
*** ygk_12345 has quit IRC | 11:42 | |
kplant | mnaser: yeah, it is a multinode job | 11:46 |
*** flaviosr_ has quit IRC | 11:52 | |
*** nicolasbock has quit IRC | 11:53 | |
*** vnogin has quit IRC | 11:57 | |
*** nicolasbock has joined #openstack-ansible | 12:00 | |
*** flaviosr_ has joined #openstack-ansible | 12:00 | |
phasespace | Question about how rabbitmq is configured. I see you set an HA policy (ha-mode: all) only for the / vhost. That means the queues in other vhosts, e.g. /nova, are not mirrored, no? | 12:09 |
*** ygk_12345 has joined #openstack-ansible | 12:11 | |
phasespace | There is no queue master locator either. Doesn't all this mean that rabbitmq is configured not to be HA, and all queues will end up having the same master? | 12:11 |
noonedeadpunk | Seems that you're right. I was thinking about the same thing a while ago actually | 12:14 |
noonedeadpunk | looks like smth that should be investigated and fixed, as policies are vhost-scoped, and by that logic the policy for / shouldn't be applied to other vhosts | 12:16 |
*** starborn has joined #openstack-ansible | 12:20 | |
noonedeadpunk | phasespace I think you may define policies with https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/defaults/main.yml#L158 | 12:20 |
noonedeadpunk | oh, it seems to be missing vhost parameter | 12:21 |
phasespace | yes, and i'm on rocky: https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/stable/rocky/defaults/main.yml | 12:21 |
guilhermesp | noonedeadpunk: I appreciate the help! let me vote :) | 12:22 |
guilhermesp | we are so close to debian support <3 | 12:23 |
guilhermesp | we just have 3 roles to fix https://review.openstack.org/#/q/topic:osa/debian-support+is:open | 12:23 |
guilhermesp | actually I haven't done anything with nspawn roles yet... they are missing the prerequisites for debian support | 12:24 |
noonedeadpunk | phasespace: seems that it's really not implemented right now. So you may offer a patch for this, or we may work on it when we have some free time | 12:24 |
noonedeadpunk | it seems that we'll need to patch almost every role for their policies support... | 12:29 |
*** ansmith has joined #openstack-ansible | 12:31 | |
*** rambo_li has joined #openstack-ansible | 12:34 | |
phasespace | yes, it would seem so | 12:35 |
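A sketch of what the missing per-vhost policy support discussed above might look like — the `vhost` key below is hypothetical and, per the discussion, does not exist in the rabbitmq_server role today:

```yaml
# user_variables.yml sketch -- the `vhost` key is an assumed extension;
# the rabbitmq_server role does not accept it yet
rabbitmq_policies:
  - name: "HA"
    vhost: "/nova"                      # hypothetical per-vhost scoping
    pattern: '^(?!(amq\.)|(reply_)).*'  # illustrative queue pattern
    priority: 0
    tags: "ha-mode=all"
```

With vhost support, one entry per service vhost (/nova, /cinder, ...) would mirror the queues that today only get mirrored on /.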
* noonedeadpunk writes this down to the list of ToDo things | 12:36 | |
*** udesale has joined #openstack-ansible | 13:00 | |
*** sum12 has quit IRC | 13:01 | |
*** sum12 has joined #openstack-ansible | 13:01 | |
*** rambo_li has quit IRC | 13:08 | |
guilhermesp | mnaser: I think we can revert the "Drop sphinxmark" change https://github.com/openstack/requirements/commit/8d5a0e657612fece0173a79889ad1057b44544c7 | 13:14 |
serverascode | hi, what's the purpose of the cidr_networks tunnel definition in the user config file? I'm working on deploying in a pure layer 3 environment, so br-vxlan interfaces will not be in the same network...any thoughts on how I might get around that? | 13:17 |
serverascode | is it just some kind of validation check to see if all vxlan interfaces are in the same network? or something else? | 13:17 |
*** pcaruana has quit IRC | 13:20 | |
mnaser | guilhermesp: we don't really care about sphinxmark anymore, it's been replaced by openstackdocstheme | 13:22 |
noonedeadpunk | serverascode: actually it's for vxlans. Neutron selects the interface to build vxlan tunnels on based on the ip; this ip is taken from the tunnel segment and placed into the neutron plugin conf. | 13:22 |
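In other words, the tunnel-segment IP ends up rendered into the linuxbridge agent config roughly like this (an illustrative fragment; the address is made up):

```ini
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (illustrative)
[vxlan]
enable_vxlan = true
local_ip = 172.29.240.11   ; the host's address on the tunnel network
```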
mnaser | serverascode: in that case I wouldn't even create br-vxlan ? | 13:23 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_designate master: Fix designate venv build constraints https://review.openstack.org/651784 | 13:23 |
jrosser | guilhermesp: something like that i think ^ ? | 13:24 |
guilhermesp | uooow let me see jrosser | 13:24 |
serverascode | mnaser if I didn't have a br-vxlan where would vxlan be tunneled for tenants in openstack? br-mgmt? | 13:25 |
serverascode | ie. can we just use br-mgmt for tunnels? | 13:25 |
noonedeadpunk | So I don't have br-vxlan - I just use simple interface for this | 13:25 |
serverascode | ok, cool, what is a simple interface? | 13:26 |
mnaser | serverascode: you can just use an actual interface as noonedeadpunk mentioned | 13:26 |
mnaser | jrosser: I added a -1, one more thing missing.. | 13:26 |
mnaser | I'll take the blame for that one :) | 13:26 |
mnaser | brb reboot | 13:26 |
jrosser | hahah barbican :) | 13:27 |
*** goldenfri has joined #openstack-ansible | 13:27 | |
noonedeadpunk | Yeh, I missed that as well ;( | 13:27 |
serverascode | thanks all, I'm just not sure what an actual or simple interface is and how it would be setup in the user config? | 13:28 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_designate master: Fix designate venv build constraints https://review.openstack.org/651784 | 13:30 |
noonedeadpunk | serverascode: http://paste.openstack.org/show/749184/ | 13:31 |
ygk_12345 | can someone help me with my issue please https://storyboard.openstack.org/#!/story/2005431 | 13:31 |
noonedeadpunk | as long as you place neutron agents on baremetal (it's the default) this should work | 13:31 |
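The approach noonedeadpunk describes amounts to pointing the tunnel provider network at a plain interface in openstack_user_config.yml; a sketch, with the interface name, CIDR, and VNI range as illustrative assumptions:

```yaml
# openstack_user_config.yml sketch: tunnel network on a plain interface
cidr_networks:
  tunnel: 172.29.240.0/22              # illustrative

global_overrides:
  provider_networks:
    - network:
        container_bridge: "ib1"        # a plain interface, not a bridge
        container_type: "vlan"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
```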
serverascode | noonedeadpunk: ok let me take a look at that, I think I was finding I had to enter the cidr_network for tunnel and that is not a single network for all of those interfaces | 13:32 |
*** marst has joined #openstack-ansible | 13:33 | |
ygk_12345 | mnaser: hi :). can you help me with my issue please | 13:33 |
noonedeadpunk | the main point is that the ip from the tunnel network should be placed on ib1 (in my case). As ib1 may be any network interface (without a bridge) | 13:33 |
noonedeadpunk | I meant not only bridge:) | 13:34 |
ygk_12345 | waiting for someone to pitch in | 13:34 |
serverascode | ok yeah, but cidr_network -> tunnel will be defined as a single network, and my IPs that are on the vxlan interface (not bridge) will be on different networks on each node | 13:34 |
serverascode | and then I get errors when running | 13:35 |
mnaser | ygk_12345: I have seen that issue before with `--wait` .. I believe it's something in our load balancer stuff but I'm not sure tbh | 13:35 |
mnaser | I would check the haproxy logs | 13:35 |
serverascode | how can I run this without cidr_network defined for tunnel or for that definition to have multiple networks in it | 13:35 |
ygk_12345 | mnaser: it is happening intermittently at the clients end | 13:36 |
noonedeadpunk | serverascode: You'll need some common network... You may create some vlan for this on your equipment | 13:36 |
serverascode | then it wouldn't be pure layer 3 :) | 13:36 |
jrosser | serverascode: what error do you get? | 13:36 |
*** phasespace has quit IRC | 13:37 | |
noonedeadpunk | serverascode: SR-IOV?:) | 13:37 |
ygk_12345 | mnaser: i have taken the tcpdump as well | 13:38 |
serverascode | no it's on packet.com which is by default pure layer 3 | 13:38 |
serverascode | we can do vlans if needed but it's not the default and I'm trying to avoid it b/c it shouldn't be necessary unless required by OSA | 13:38 |
serverascode | or I can set up underlying vxlans for infrastructure but that would be a little weird :) | 13:39 |
jrosser | serverascode: what error do you get ? :) | 13:39 |
jrosser | fwiw i am running L3 between TOR switches so am in a similar position | 13:39 |
serverascode | jrosser I'll run again to grab the error | 13:40 |
serverascode | just need a bit | 13:40 |
serverascode | but that is great to know that you are doing l3 | 13:40 |
jrosser | in the "old days" when the neutron agents were containerised the inventory would have needed to assign IP for the neutron containers | 13:40 |
jrosser | but that is no longer the case | 13:40 |
jrosser | so it could well be that there is spurious logic around from that which no longer matters | 13:41 |
ygk_12345 | mnaser: any idea ? | 13:42 |
*** pcaruana has joined #openstack-ansible | 13:42 | |
mnaser | ygk_12345: I don't know, I don't have time to dig into this in particular, but if you have any tweaks to the haproxy service, I can help land those changes. | 13:42 |
noonedeadpunk | But if you don't have the tunnel network on all compute nodes and the neutron agent node you'll possibly hit an error, as the neutron role won't be able to fill in the linuxbridge_agent plugin template | 13:42 |
serverascode | yeah I can understand that when OSA is effectively scheduling IPs, having a single network would be easiest, but the vxlan interface is not scheduled; I will find the error msg | 13:43 |
jrosser | noonedeadpunk: it just needs to be able to figure out the IP for the VTEP on each node, and if it knows the interface it can find that | 13:43 |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_designate master: debian: add support https://review.openstack.org/651040 | 13:43 |
*** Kurlee has quit IRC | 13:44 | |
jrosser | i have this in neutron_linuxbridge_agent_ini_overrides | 13:45 |
jrosser | local_ip: "{{ hostvars[inventory_hostname]['ansible_' + 'bond1.1941']['ipv4']['address'] }}" | 13:45 |
noonedeadpunk | oh, in this case it might work of course | 13:45 |
jrosser | but that can be done a *whole* lot nicer now since jamesdenton added some extra bits to drive that from user config | 13:45 |
jrosser | i've just not got round to migrating to that yet | 13:46 |
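For context, jrosser's one-liner would sit in user_variables.yml roughly like this — [vxlan] being the section the linuxbridge agent reads local_ip from, and bond1.1941 an interface name specific to his environment:

```yaml
# user_variables.yml sketch; the interface name is site-specific
neutron_linuxbridge_agent_ini_overrides:
  vxlan:
    local_ip: "{{ hostvars[inventory_hostname]['ansible_' + 'bond1.1941']['ipv4']['address'] }}"
```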
*** cshen has quit IRC | 13:50 | |
*** marst has quit IRC | 13:52 | |
*** ygk_12345 has left #openstack-ansible | 13:52 | |
goldenfri | hello all, I started a deploy over, destroyed all the containers and deleted the inventory, and now it fails because cinder-manage can't complete the online_data_migrations; the logs say it's timing out waiting for a reply. any idea where to start troubleshooting this? Should I have deleted more things when I started over? | 13:55 |
noonedeadpunk | goldenfri it's a known problem to be honest, and it seems to be related more to cinder | 14:01 |
goldenfri | oh hrm, is there a workaround? | 14:01 |
noonedeadpunk | https://bugs.launchpad.net/cinder/+bug/1806156 | 14:01 |
openstack | Launchpad bug 1806156 in Cinder "shared_targets_online_data_migration fails when cinder-volume service not running" [Undecided,Confirmed] | 14:01 |
*** gkadam has joined #openstack-ansible | 14:02 | |
noonedeadpunk | so you may try to drop the cinder db and try to deploy it again | 14:02 |
*** gkadam has quit IRC | 14:03 | |
goldenfri | ok thanks, I'm not sure why its even trying to do a data migration since I thought I was starting from scratch | 14:05 |
jamesdenton | mornin | 14:06 |
noonedeadpunk | goldenfri migration is kinda required during setup : https://docs.openstack.org/cinder/rocky/install/cinder-controller-install-ubuntu.html#install-and-configure-components | 14:08 |
*** admin0 has joined #openstack-ansible | 14:09 | |
noonedeadpunk | goldenfri: oh, probably you're kinda right... it's more about population than online migration | 14:10 |
goldenfri | yea | 14:11 |
goldenfri | maybe I'll just try and skip that | 14:11 |
noonedeadpunk | tbh I commented this step out for now | 14:12 |
*** ianychoi has quit IRC | 14:13 | |
*** ianychoi has joined #openstack-ansible | 14:14 | |
*** marst has joined #openstack-ansible | 14:15 | |
*** SimAloo has joined #openstack-ansible | 14:15 | |
*** b1tsh1ft3r has joined #openstack-ansible | 14:19 | |
b1tsh1ft3r | Does anyone have any guides or information on integrating grafana with openstack for monitoring/graphing etc.? Im having a difficult time finding information on how to integrate data sources with what is already available from openstack in ceilometer/gnocchi | 14:21 |
noonedeadpunk | b1tsh1ft3r you may integrate grafana with gnocchi | 14:21 |
noonedeadpunk | But you'll need configured ceilometer for data collection | 14:22 |
noonedeadpunk | https://grafana.com/plugins/gnocchixyz-gnocchi-datasource | 14:22 |
noonedeadpunk | b1tsh1ft3r: or you may setup elk:) https://github.com/openstack/openstack-ansible-ops/tree/master/elk_metrics_6x | 14:32 |
b1tsh1ft3r | hmm.. Ok, ill have to check into the ceilometer config i have now. I've defined the metering-infra-hosts and metrics_hosts in the openstack_user_config when deploying a queens environment, but beyond this i dont believe i have anything that actually stores the collected data. Im assuming that a backend must be present that this data is stored in | 14:35 |
noonedeadpunk | Ceilometer stores data in gnocchi | 14:35 |
noonedeadpunk | But in queens there were some alternatives.. | 14:36 |
*** nurdie has joined #openstack-ansible | 14:36 | |
*** udesale has quit IRC | 14:38 | |
openstackgerrit | Antony Messerli proposed openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file https://review.openstack.org/651617 | 14:46 |
admin0 | noonedeadpunk, i have tried elk on ubuntu 18.04 .. does not work | 14:48 |
admin0 | elk metrics | 14:48 |
noonedeadpunk | hey, cloudnull, heard this? :) ^ | 14:49 |
noonedeadpunk | elk metrics or elk metrics 6? | 14:49 |
admin0 | he knows :D | 14:50 |
admin0 | but i am still stuck :( | 14:50 |
noonedeadpunk | then I may sleep calmly :p | 14:50 |
noonedeadpunk | unfortunately I can't help you - I'm using zabbix instead. And Ceilometer+Gnocchi+Grafana to monitor usage of specific instances (and for some other internal stuff) | 14:51 |
admin0 | my / is always 100% . and there are no files :( | 14:51 |
*** partlycloudy has joined #openstack-ansible | 14:53 | |
b1tsh1ft3r | noonedeadpunk i've setup the gnocchi data source with a token (temporary) and setup the dash in grafana. Looks like i can see a list of instances from the drop down, but no metrics come through. the default panels look to have "request error" | 14:59 |
b1tsh1ft3r | upon checking out the request error im seeing the response as "400 Bad request Your browser sent an invalid request." | 15:00 |
noonedeadpunk | I guess smth is wrong with configuration. It was pretty tricky for me tbh | 15:03 |
noonedeadpunk | I've configured CORS (allowed grafana server) for gnocchi and keystone and using direct access from browser | 15:04 |
noonedeadpunk | I may kinda share my dashboard, but it's still a work in progress, and I'm collecting SNMP data from nodes and using it there. But you may probably catch the idea | 15:06 |
guilhermesp | noonedeadpunk: remember the openstacksdk issue with trove? Take a look at this https://github.com/openstack/python-troveclient/blob/3713149ba61e2ed0dab0f03a470002355591628a/lower-constraints.txt#L41 | 15:09 |
guilhermesp | and http://logs.openstack.org/11/651011/5/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e668bc/logs/ara-report/result/56efc826-a0fe-4296-837e-694c1d194057/ | 15:10 |
guilhermesp | I'm just wondering why only debian is complaining about that | 15:10 |
noonedeadpunk | lol | 15:10 |
*** dhellmann has joined #openstack-ansible | 15:10 | |
noonedeadpunk | guilhermesp: probably we don't need python-troveclient then? | 15:11 |
noonedeadpunk | as since 0.11.2 it's fully integrated with openstacksdk? | 15:11 |
noonedeadpunk | it's just a guess though | 15:12 |
guilhermesp | yeah I suspect so too... we are using openstack modules to create the resources http://logs.openstack.org/11/651011/5/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e668bc/logs/ara-report/file/a150ec7e-6adf-46af-b009-0628651cf0e6/#line-31 | 15:12 |
*** gyee has joined #openstack-ansible | 15:12 | |
guilhermesp | not sure if the python client is needed | 15:12 |
guilhermesp | I can drop it and see the results | 15:13 |
noonedeadpunk | think it's worth a shot | 15:13 |
openstackgerrit | Guilherme Steinmuller Pimentel proposed openstack/openstack-ansible-os_trove master: debian: add support https://review.openstack.org/651011 | 15:14 |
guilhermesp | here we go noonedeadpunk | 15:14 |
noonedeadpunk | btw can you take a look at my backport https://review.openstack.org/#/c/651178/ (unrelated)? | 15:15 |
* noonedeadpunk opens zuul and takes some cookies | 15:16 | |
guilhermesp | done | 15:16 |
guilhermesp | let's do some trade then | 15:16 |
guilhermesp | can you suggest something for this scenario | 15:16 |
guilhermesp | wait | 15:16 |
guilhermesp | https://review.openstack.org/#/c/651050/ qpid is failing on debian because we are defining keys from ubuntu | 16:17 |
guilhermesp | something kind of similar to the issue I was having with zun, but zun was only a silly thing. This qdrouterd role has 3 variables defining the keyserver | 16:17 |
guilhermesp | https://github.com/openstack/ansible-role-qdrouterd/blob/82213e344a01a5ef5359a3c524013700d89c7632/vars/ubuntu.yml#L29 | 15:18 |
guilhermesp | I think we need to add vars for debian, or do some tweak to make the role deal with different keyservers | 15:18 |
noonedeadpunk | Yeah, I think that we should use different var for debian and ubuntu here... | 15:19 |
noonedeadpunk | And seems that qpid is placed in default debian repo | 15:20 |
noonedeadpunk | https://packages.debian.org/stretch/python-qpid | 15:20 |
noonedeadpunk | So no need in this ppa for debian | 15:20 |
*** b1tsh1ft3r has quit IRC | 15:25 | |
*** ivve has quit IRC | 15:36 | |
*** luksky has quit IRC | 15:36 | |
guilhermesp | I think we will need to change here too https://github.com/openstack/ansible-role-qdrouterd/blob/82213e344a01a5ef5359a3c524013700d89c7632/tasks/qdrouterd_install_apt.yml#L16 adding conditionals to ansible_distribution and creating new tasks for debian | 15:37 |
*** dxiri has joined #openstack-ansible | 15:39 | |
noonedeadpunk | I'd probably move L16-L32 under a ubuntu-specific block | 15:42 |
*** pcaruana has quit IRC | 15:42 | |
*** cmart has joined #openstack-ansible | 15:42 | |
noonedeadpunk | But what new tasks do we need for debian? L33-39 should suit debian | 15:42 |
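On that reasoning, the debian vars file could be minimal — the variable name is assumed to mirror the role's ubuntu.yml, and the package name is an assumption based on the Debian archive:

```yaml
# hypothetical vars/debian.yml sketch: qdrouterd installs from the
# default Debian repos, so no PPA/keyserver variables are needed
qdrouterd_distro_packages:
  - qdrouterd
```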
*** cshen has joined #openstack-ansible | 15:45 | |
*** hamzy has quit IRC | 15:47 | |
*** cshen has quit IRC | 15:49 | |
*** chandankumar is now known as raukadah | 16:01 | |
openstackgerrit | Antony Messerli proposed openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file https://review.openstack.org/651617 | 16:05 |
guilhermesp | noonedeadpunk: btw dropping python-troveclient was not enough http://logs.openstack.org/11/651011/6/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e2fae0/logs/ara-report/result/9358f954-8f0f-4afe-b88f-0847f05cf876/ | 16:09 |
*** vnogin has joined #openstack-ansible | 16:13 | |
*** vnogin has quit IRC | 16:14 | |
*** vnogin has joined #openstack-ansible | 16:14 | |
guilhermesp | and yeah noonedeadpunk you're right, no need for task for debian. Let me try it out | 16:15 |
jrosser | guilhermesp: we should vendor the keys like was done for ceph_client; apt_key is broken with proxies | 16:16 |
openstackgerrit | Logan V proposed openstack/openstack-ansible master: Add Calico networking AIO scenario https://review.openstack.org/645831 | 16:19 |
jrosser | guilhermesp: this sort of thing https://review.openstack.org/#/c/636711/ | 16:19 |
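The vendored-key approach jrosser points at sidesteps keyservers (and their proxy problems) by shipping the key inside the role and feeding it to apt_key as inline data; a rough sketch — the file name and task wording are assumptions modeled on the ceph_client change:

```yaml
# hypothetical task sketch: the key file is vendored under files/gpg/
# in the role, so no keyserver round-trip is needed
- name: Add qdrouterd repo key from vendored file
  apt_key:
    data: "{{ lookup('file', 'gpg/qdrouterd.asc') }}"
    state: present
```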
*** ianychoi has quit IRC | 16:20 | |
guilhermesp | hmm jrosser that means we need to refactor the existing role so it could serve both ubuntu/debian? | 16:21 |
jrosser | well, it didnt look like there were actually qdrouterd packages for debian from anywhere but the distro anyway | 16:22 |
jrosser | perhaps that needs investigating first | 16:22 |
jrosser | but the apt_key thing will be an issue at some point in the future anyway if we ever start using that role as standard | 16:23 |
*** chhagarw has quit IRC | 16:25 | |
raukadah | with respect to debian support, are we dropping ubuntu? | 16:26 |
noonedeadpunk | I don't think so... are we? | 16:28 |
guilhermesp | raukadah: no. We're just adding debian support, don't worry :) | 16:28 |
noonedeadpunk | I think we're just extending the supported distributions, as debian is a very similar one and might be added without really big changes. And it might be pretty easily supported | 16:29 |
raukadah | cool, then , thanks! | 16:29 |
noonedeadpunk | jrosser didn't know about apt_key... | 16:29 |
*** cshen has joined #openstack-ansible | 16:30 | |
jrosser | noonedeadpunk: it doesn't hit me because although i have proxies, everything is mirrored inside those | 16:30 |
jrosser | but others who do everything through proxies have come unstuck with all our uses of apt_key | 16:31 |
noonedeadpunk | I see | 16:31 |
*** vnogin has quit IRC | 16:35 | |
*** dave-mccowan has quit IRC | 16:39 | |
*** cshen has quit IRC | 16:55 | |
*** ivve has joined #openstack-ansible | 16:56 | |
*** hamzy has joined #openstack-ansible | 17:00 | |
*** pcaruana has joined #openstack-ansible | 17:05 | |
openstackgerrit | Merged openstack/openstack-ansible master: Set telemetry debian gate job to voting again https://review.openstack.org/651692 | 17:09 |
*** ivve has quit IRC | 17:15 | |
openstackgerrit | Merged openstack/openstack-ansible-os_designate master: Fix designate venv build constraints https://review.openstack.org/651784 | 17:26 |
*** gyee has quit IRC | 17:29 | |
partlycloudy | Hello folks, I have a question about setting up L3 routing (Clos). | 17:36 |
partlycloudy | My edge router is on a dedicated leaf. OSPF is used between leaf and spine. | 17:36 |
partlycloudy | Given this, what is the recommended solution to bring the provider networks across leaf racks? | 17:36 |
*** kopecmartin is now known as kopecmartin|off | 17:40 | |
*** luksky has joined #openstack-ansible | 17:41 | |
*** cmart has quit IRC | 17:43 | |
jsquare | still trying to fix the issue we have with the containers being created without network; we don't see any ifcfg-* files inside them. anybody have a clue? can't find anything in the logs | 17:50 |
jrosser | partlycloudy: segmented provider networks are a thing, may be helpful | 17:50 |
jrosser | partlycloudy: but that’s if you actually need the external net on all the leaf racks | 17:52 |
*** hamzaachi has quit IRC | 17:55 | |
*** ahuffman has quit IRC | 17:56 | |
*** cmart has joined #openstack-ansible | 17:57 | |
partlycloudy | jrosser: i hope to give provider network access to all leaf racks and use OVN to send north-south traffic directly from compute nodes. | 17:58 |
partlycloudy | is Clos appropriate or overkill for a cluster with a total of ~200 compute nodes? | 18:00 |
jrosser | Don’t you have fundamentally two choices then, a pure L3 calico style approach or an overlay? | 18:01 |
partlycloudy | jrosser: My current gear doesn't have BGP support, only OSPF. Does that rule me out of calico or an overlay (like EVPN/VXLAN)? | 18:05 |
openstackgerrit | Logan V proposed openstack/openstack-ansible-os_tempest master: Do not ping router when public net is local https://review.openstack.org/651896 | 18:05 |
openstackgerrit | Logan V proposed openstack/openstack-ansible master: Add Calico networking AIO scenario https://review.openstack.org/645831 | 18:05 |
openstackgerrit | Merged openstack/ansible-hardening master: Fix conditional cast to bool https://review.openstack.org/643694 | 18:06 |
kplant | as someone who is going through a huge evpn rollout, do yourself a favor and make sure all of your POPs and RRs support it.. even the things you forgot to think about | 18:09 |
kplant | if you choose to go that route | 18:09 |
jrosser | partlycloudy: I’m not sure I understand really, distributing your provider network leaf switches with ospf pretty much implies assigning subnets to each leaf, that’s kind of inherent in a routed approach. OVN or not that would be your underlying topology. | 18:09 |
jrosser | kplant: I’m just bringing up bgp-evpn on nxos.... it’s been an “adventure” | 18:10 |
kplant | i've been dealing with it on mainly junos | 18:11 |
kplant | almost all of our stuff was 12.x and evpn support was added in 14.x | 18:12 |
kplant | but not really functional until 16.x | 18:12 |
logan- | jrosser: in my experience everything on nxos is an adventure, and never the fun kind :/ | 18:12 |
kplant | we also had some RRs not accepting type 10 LSAs which made our ospf take a dive | 18:13 |
kplant | all in all, a great experience | 18:13 |
kplant | logan-: have you had to suffer through the poison that is ios-xe? | 18:13 |
* mnaser just wants to replace everything with frrouting boxes | 18:14 | |
* kplant likes junos :-( | 18:15 | |
kplant | i'm kind of a juniper zealot sometimes | 18:15 |
logan- | nope, have not worked with ios-xe. after experiencing early nexus 9k, new deployments are generally junos or arista/eos | 18:15 |
logan- | ++ junos has its downsides at times but generally its super predictable and things just make sense | 18:16 |
kplant | transactional changes are a big one for me | 18:16 |
logan- | yup | 18:17 |
kplant | i know ios-xr eventually ripped it off | 18:17 |
kplant | but it took way too long for others to catch on | 18:17 |
kplant | i've also found that juniper products are stupid cheap compared to the competition | 18:18 |
kplant | especially cisco | 18:18 |
kplant | who cares about things like hsrp and eigrp anyway | 18:18 |
jrosser | Hmm I find the opposite - depends on your discount | 18:19 |
jrosser | White box is more expensive than Cisco for me | 18:19 |
*** electrofelix has quit IRC | 18:19 | |
* jrosser likes a bit of network geek-out | 18:21 | |
kplant | have you gotten into any of the newer cisco platforms with virtualization like NCS? | 18:23 |
kplant | one chassis nsr is pretty neat | 18:24 |
guilhermesp | I think the reason trove is complaining about openstacksdk version is because debian stretch has 0.9.5-2 while the ansible openstack modules require >=0.12 | 18:24 |
guilhermesp | https://packages.debian.org/search?searchon=names&keywords=python-openstacksdk | 18:24 |
guilhermesp | that's why only the debian job is failing | 18:24 |
guilhermesp | http://logs.openstack.org/11/651011/6/check/openstack-ansible-deploy-aio_metal_trove-debian-stable/4e2fae0/logs/ara-report/result/9358f954-8f0f-4afe-b88f-0847f05cf876/ | 18:24 |
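(The mismatch guilhermesp describes is purely a version-ordering one: stretch's 0.9.5 really is older than the required 0.12, even though "0.9.5" sorts after "0.12" as a string. A minimal stdlib sketch, not the actual pip/ansible resolver:)

```python
# Minimal sketch of why Debian stretch's python-openstacksdk 0.9.5
# fails an ">=0.12" requirement: dotted versions compare as integer
# tuples, not as strings.
def parse(version):
    """Turn '0.9.5' into (0, 9, 5) for numeric comparison."""
    return tuple(int(part) for part in version.split("."))

installed = parse("0.9.5")   # what Debian stretch ships
required = parse("0.12.0")   # what the ansible openstack modules want

print(installed < required)  # True: the requirement is not met
```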
*** tosky has quit IRC | 18:26 | |
openstackgerrit | Nicolas Bock proposed openstack/openstack-ansible-lxc_hosts stable/rocky: Switch OBS mirror to branch for stabilization https://review.openstack.org/651623 | 18:38 |
partlycloudy | jrosser: sorry for any confusion. i am still trying to get my head around this Clos thing, so sorry if my questions sound dumb :-p | 18:39 |
mnaser | jrosser: I figure you probably have a much bigger purchasing contract than I do.. | 18:40 |
jrosser | mnaser: across the whole org I expect so yes, I’m sure I’m leveraging that even though my qty is quite modest | 18:41 |
mnaser | this just in: jrosser using his discount to help us all get cheap network equipment | 18:42 |
kplant | hot off the press | 18:42 |
jrosser | partlycloudy: don’t worry - nice thing here is everyone has built different things to meet different requirements so there’s a lot of real life stuff to share | 18:43 |
* jrosser hides | 18:43 | |
*** cmart has quit IRC | 18:45 | |
admin0 | how to change novalocal in ansible | 18:46 |
admin0 | to null/blank domain name | 18:46 |
admin0 | i cannot find the variable | 18:47 |
admin0 | is it nova_dhcp_domain and neutron_dhcp_domain overrides ? | 18:47 |
admin0 | i see openstack.domain and neutron_dns_domain -- | 18:47 |
admin0 | but no values that is novalocal | 18:48 |
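(The "novalocal" suffix admin0 is chasing is the default nova dhcp_domain; OSA doesn't surface it as a single variable everywhere, but its conf-override mechanism can set it. A hedged sketch for /etc/openstack_deploy/user_variables.yml — whether an empty string or a real domain is the right value depends on the nova/neutron versions in play:)

```yaml
# Hedged sketch: drop the "novalocal" instance-name suffix via OSA's
# service conf-override variables instead of hunting for a dedicated var.
nova_nova_conf_overrides:
  DEFAULT:
    dhcp_domain: ""
neutron_neutron_conf_overrides:
  DEFAULT:
    dns_domain: ""
```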
*** cshen has joined #openstack-ansible | 18:52 | |
openstackgerrit | Merged openstack/openstack-ansible-os_designate master: debian: add support https://review.openstack.org/651040 | 18:52 |
jrosser | kplant: we looked at NCS but found the split of features between NCS and ASR9K bad, particularly when a Cat 6500 used to do everything. Juniper MX looked a better bet, depending what features you need really | 18:53 |
*** cmart has joined #openstack-ansible | 18:54 | |
*** hwoarang has quit IRC | 18:54 | |
*** cshen has quit IRC | 18:56 | |
kplant | which series? mx204? or the 10Ks | 18:56 |
*** hwoarang has joined #openstack-ansible | 18:56 | |
openstackgerrit | Merged openstack/openstack-ansible stable/stein: Add masakari-monitors to openstack_services https://review.openstack.org/651178 | 18:56 |
kplant | we have almost 2000 240s, 480s and 960s | 18:59 |
kplant | and we have the 10Ks in the lab right now | 18:59 |
kplant | i wanted to grab some 204s to be lightweight PERs but haven't gotten a chance yet | 19:00 |
*** pcaruana has quit IRC | 19:02 | |
jrosser | I know folks who are having success with mx10003 | 19:03 |
kplant | nice. i haven't gotten a chance to play with one yet | 19:18 |
*** christ0 has quit IRC | 19:21 | |
serverascode | jrosser here's a gist https://gist.github.com/ccollicutt/1d15970f0db20a8b569eeca85d4472d0 of the vxlan interfaces, but they are separate networks so I don't know what to put in cidr_networks -> tunnel | 19:24 |
serverascode | I added an error message I get from trying to run setup-hosts | 19:27 |
*** cshen has joined #openstack-ansible | 19:28 | |
serverascode | do I need ip_from_q for a network definition? | 19:28 |
serverascode | actually nevermind, I see at least one issue there, my mistake | 19:34 |
serverascode | apologies for spam | 19:34 |
*** cshen has quit IRC | 19:34 | |
*** vakuznet has joined #openstack-ansible | 19:47 | |
jrosser | serverascode: do you have all your infra nodes in the same L2 segment? | 19:50 |
serverascode | there's just the one node | 19:52 |
serverascode | right now anyways | 19:52 |
serverascode | and will only be one for this particular deployment | 19:52 |
jrosser | and your container_network, is that similarly L3 routed to your computes? | 19:54 |
jrosser | sorry cidr_networks -> container | 19:54 |
serverascode | yeah but it doesn't seem to matter b/c there is just the one node | 19:54 |
serverascode | but with br-vxlan it needs to be on infra and compute | 19:54 |
jrosser | but each compute needs an ip on the mgmt network | 19:54 |
serverascode | oh really, ok | 19:55 |
serverascode | what is the ip_from_q? | 19:56 |
jrosser | i have one per set of leaves | 19:56 |
jrosser | the config file gets quite large | 19:56 |
jrosser | same goes for the storage network and so on | 19:57 |
serverascode | ok | 19:57 |
openstackgerrit | Nicolas Bock proposed openstack/openstack-ansible-lxc_hosts stable/rocky: Switch OBS mirror to branch for stabilization https://review.openstack.org/651623 | 19:57 |
jrosser | serverascode: have a close read of this before going much further https://docs.openstack.org/openstack-ansible/rocky/user/l3pods/example.html | 19:59 |
serverascode | jrosser thanks, that is a good call, thanks kindly for the help :) | 20:00 |
jrosser | no problem :) | 20:00 |
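(The per-pod pattern in the l3pods example jrosser links is worth sketching: each routed segment gets its own entry in cidr_networks, and ip_from_q names the address queue a host draws from, scoped by reference_group. A trimmed-down, hedged sketch with illustrative names and ranges:)

```yaml
# Hedged sketch of the l3pods openstack_user_config.yml pattern:
# one CIDR and one ip_from_q queue per routed pod/leaf.
cidr_networks:
  pod1_container: 172.29.236.0/24
  pod2_container: 172.29.237.0/24

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "pod1_container"      # addresses come from this queue
        address_prefix: "container"
        reference_group: "pod1_hosts"    # only hosts in this group use it
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    # ...repeated for pod2_container with reference_group pod2_hosts
```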
kplant | is centos 7 support still experimental in osa? | 20:01 |
*** starborn has quit IRC | 20:04 | |
vakuznet | is there plan to add distro install method to os_octavia? | 20:05 |
*** vakuznet has quit IRC | 20:10 | |
*** dxiri has quit IRC | 20:22 | |
*** hamzaachi has joined #openstack-ansible | 20:31 | |
*** ansmith has quit IRC | 20:31 | |
*** hamzy has quit IRC | 20:46 | |
*** ansmith has joined #openstack-ansible | 21:04 | |
*** partlycloudy has quit IRC | 21:11 | |
*** partlycl_ has joined #openstack-ansible | 21:14 | |
*** partlycl_ has left #openstack-ansible | 21:15 | |
*** partlycloudy has joined #openstack-ansible | 21:15 | |
*** nurdie has quit IRC | 21:19 | |
openstackgerrit | Merged openstack/openstack-ansible-lxc_hosts master: Use pkill for lxc-dnsmasq systemd unit file https://review.openstack.org/651617 | 21:21 |
*** hamzaachi has quit IRC | 21:24 | |
mnaser | do we wanna check out https://review.openstack.org/#/c/650561/ | 21:24 |
*** tosky has joined #openstack-ansible | 21:29 | |
*** cshen has joined #openstack-ansible | 21:31 | |
*** cshen has quit IRC | 21:35 | |
*** marst has quit IRC | 21:40 | |
*** gyee has joined #openstack-ansible | 21:49 | |
*** nurdie has joined #openstack-ansible | 22:01 | |
*** nurdie has quit IRC | 22:06 | |
*** luksky has quit IRC | 22:08 | |
*** SimAloo has quit IRC | 22:11 | |
*** tosky has quit IRC | 22:15 | |
openstackgerrit | Merged openstack/openstack-ansible-haproxy_server master: handlers: reload instead of restart https://review.openstack.org/650561 | 22:20 |
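(The idea behind the merged "reload instead of restart" change is that reloading haproxy lets existing connections survive a config change, where a restart drops them. A hedged, minimal handler sketch — not the actual task from the change:)

```yaml
# Hedged sketch: reload rather than restart so in-flight connections
# through the load balancer are not dropped on config changes.
- name: Reload haproxy
  systemd:
    name: haproxy
    state: reloaded
```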
*** chhagarw has joined #openstack-ansible | 22:21 | |
*** nurdie has joined #openstack-ansible | 22:22 | |
openstackgerrit | Merged openstack/openstack-ansible master: Set telemetry jobs to voting again https://review.openstack.org/651718 | 22:25 |
*** nurdie has quit IRC | 22:26 | |
*** chhagarw has quit IRC | 22:35 | |
*** jra has joined #openstack-ansible | 22:47 | |
jra | I'm running a moderately large OSA-deployed OpenStack on Queens, and we're hitting grievous performance issues moving traffic through the containerized neutron agents on control plane nodes. iperf from metal to metal gives consistent 8.5-9.0Gbps, but from metal into the agents container runs anywhere from 100Mbps down to 10kbps, which seems crazy. | 22:50 |
jra | So question one, has anybody encountered this before? | 22:50 |
jra | Following on from that, our team decided to look at moving the neutron agents out of containers onto bare metal in the control plane. I'm doing this process in dev, but the provider network config just profoundly does not work (missing interfaces, trying to enslave a bridge to another bridge, etc). | 22:52 |
jra | I've been up and down the documentation looking for examples of a bare-metal neutron agent provider network config, and have found nothing that looks any different from what we've been doing; but I know the default config is for bare-metal agents in recent releases | 22:53 |
jra | how is this supposed to work? | 22:53 |
mnaser | jra: what release is this? | 23:12 |
mnaser | oh | 23:13 |
mnaser | queens | 23:13 |
mnaser | jra: in rocky we moved the agents to baremetal | 23:13 |
jra | so I read | 23:15 |
jra | but the docs haven't changed, and the openstack_user_config.yml examples don't seem to show any changes | 23:15 |
logan- | pretty sure they moved in queens | 23:16 |
jra | so I'm not sure how I'm supposed to change things | 23:16 |
logan- | pike was the last version with containerized agents | 23:16 |
mnaser | okay so in that case jra should already have bare metal agents? | 23:16 |
logan- | jra: https://docs.openstack.org/openstack-ansible/queens/admin/upgrades/major-upgrades.html#implement-inventory-to-deploy-neutron-agents-on-network-hosts | 23:16 |
jra | we installed from an inventory that had built our pike install, and it had an env.d that specified containerized agents | 23:16 |
jra | @logan- the problem's not getting OSA to install the agents on metal, I've got that done; it's that I don't know how to update our config to actually work with metal-deployed agents | 23:17 |
jra | our config, like those in the example docs, names veth interfaces that don't exist on metal | 23:18 |
mnaser | they shouldn't exist on metal afaik | 23:19 |
jra | agreed - but then, what do we specify? | 23:21 |
jra | so I thought, simple, I'll just name the bridges directly | 23:21 |
*** kmadac has quit IRC | 23:21 | |
*** tbarron has quit IRC | 23:21 | |
*** kukacz has quit IRC | 23:21 | |
*** aspiers has quit IRC | 23:21 | |
*** jillr has quit IRC | 23:21 | |
*** antonym has quit IRC | 23:21 | |
jra | but no! Then you end up with "physical_device_mappings = vlan:br-vlan,ex:br-ex" in your linuxbridge_agent.conf | 23:22 |
jra | and "can't slave a bridge device to a bridge device" in your logs | 23:22 |
logan- | jra: I had a hell of a time getting it working on metal. this is what I ended up with (on a flat networking, single interface cloud): | 23:23 |
logan- | https://github.com/openstack/limestone-ci-cloud/blob/master/examples/interfaces | 23:24 |
*** admin0 has quit IRC | 23:24 | |
logan- | basically dangling veths off of the bridge | 23:24 |
logan- | and then telling neutron to attach to those veths here: https://github.com/openstack/limestone-ci-cloud/blob/3886dbc40de036e7d1e3bb61917793d7067a89b2/openstack_deploy/openstack_user_config.yml#L41-L50 | 23:24 |
jra | I really didn't want to do veths | 23:25 |
jra | thanks for sharing those configs, btw | 23:26 |
jra | Our main issue is the grievous performance issue, and virtualized networking is implicated in that mess | 23:26 |
jra | so how is this supposed to work for brand-new green field installations? | 23:27 |
*** tbarron has joined #openstack-ansible | 23:27 | |
*** kmadac has joined #openstack-ansible | 23:27 | |
*** kukacz has joined #openstack-ansible | 23:27 | |
*** jillr has joined #openstack-ansible | 23:27 | |
*** aspiers has joined #openstack-ansible | 23:27 | |
*** antonym has joined #openstack-ansible | 23:27 | |
logan- | i can't speak too much to how others do it, because the only linuxbridge cloud I have is that one (my larger clouds use calico so no bridges or network nodes), but I think most folks hand neutron physical interfaces directly to build its bridges on | 23:29 |
logan- | iirc my workaround there was necessary because i had a single bond to work with and wanted the containers and neutron to share it | 23:29 |
*** cshen has joined #openstack-ansible | 23:31 | |
logan- | (which you can do using that host_bind_override) | 23:32 |
snadge | i feel like such a noob for asking much simpler questions.. but I have an RDO 1 controller, 3 node setup deployed with ansible onto vsphere vms.. obviously the default is to use vlans, but my lack of vsphere knowledge (and access to peek at and create switches) is hampering me figuring out the best way to get them all talking to each other | 23:33 |
jra | I've got dedicated hardware just for neutron agents, with dual 10G interfaces bonded down to a single interface with a bunch of VLAN devices hanging off of it, and the bridges hanging off of them. I had kinda thought I could just hand OSA those bridges; are you suggesting I should hand it the VLAN devices instead? | 23:33 |
snadge | its just a test environment and there's actually no huge requirement to have the networking actually work.. its more about testing ansible deployment scripts etc rather than actually running instances and doing things with them.. but I guess I just want to learn a bit more about neutron and possibly even vsphere networking as well | 23:35 |
*** cshen has quit IRC | 23:36 | |
logan- | jra: right, neutron will build its own bridges off of the physical interface: https://github.com/openstack/openstack-ansible/blob/e4940799c7dfaeda1f33094bb3e1bc143c4c0880/etc/openstack_deploy/openstack_user_config.yml.singlebond.example#L68-L70 | 23:36 |
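(The fix logan- is pointing jra at: hand neutron the physical/VLAN/bond device via host_bind_override and let neutron build its own bridges, instead of naming an already-built bridge, which is what produces the "can't enslave a bridge to a bridge" error. A hedged provider_networks sketch modeled on the singlebond example; interface names and ranges are illustrative:)

```yaml
# Hedged sketch: on metal hosts, host_bind_override makes the
# linuxbridge agent attach to the raw interface, not a bridge.
- network:
    container_bridge: "br-vlan"
    container_type: "veth"
    container_interface: "eth12"
    type: "vlan"
    range: "101:200"
    net_name: "vlan"
    host_bind_override: "bond0"   # neutron builds its bridges off this
    group_binds:
      - neutron_linuxbridge_agent
```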
snadge | eg.. can a vsphere network switch trunk vlans similar to a physical switch.. or am i better off forgetting about that and defining a flat network setup? | 23:37 |
snadge | given that its only for testing anyway | 23:37 |
jra | @logan- interesting! I guess I didn't know that | 23:37 |
jra | Thanks so much @logan- and @mnaser, you guys have helped me a ton | 23:37 |
kplant | snadge: you could do multiple interfaces per vm and bind each interface to a different portgroup | 23:40 |
kplant | i'm not sure how well vmware deals with encapsulation/QinQ | 23:40 |
snadge | yeah.. each VM appears to have about 5 interfaces.. with one of them dedicated to instances (apparently) | 23:41 |
snadge | but from what I can see.. the previous person who set it all up, gave up on that side of it.. we have staging and prod environments that run on metal, that don't share the same problems | 23:41 |
kplant | looks like you can create portgroups that accept a range of vlan tags | 23:49 |
kplant | is that what you're looking for? | 23:49 |
snadge | quite possibly.. when i look for information about vmware specific to openstack networking.. i seem to find references to full vsphere integration.. im not sure I really want to do that though, since thats very far away from production.. and this is just a test environment | 23:51 |
*** cmart has quit IRC | 23:52 | |
snadge | if i can find a way to accept ranges of vlans in vmware.. ie.. get the "default" vlan networking to work.. then perhaps that's what I should do | 23:52 |
snadge | ie.. like trunking | 23:52 |
snadge | but i'm thinking it might be easier just to use flat networking | 23:52 |
kplant | it sounds to me like you're only using vmware as a means to virtualize your dev environment | 23:55 |
snadge | yeah.. i think its literally just a sandbox.. i don't think there is any expectation of using it for anything productive | 23:57 |
snadge | im just new to this job.. and to openstack, and in some ways vsphere as well.. but not virtualisation (kvm), linux, networking etc | 23:58 |
snadge | the first thing they got me to do was install devstack and play around with that.. which I have.. and this is basically moving on from that to something a little bit more like what is used in staging (dev) and prod.. they seem to have skipped the test environment | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!