*** javeriak_ has quit IRC | 00:04 | |
*** britthouser has quit IRC | 00:12 | |
*** britthouser has joined #openstack-ansible | 00:13 | |
*** sdake has quit IRC | 00:21 | |
*** sdake has joined #openstack-ansible | 00:22 | |
*** britthouser has quit IRC | 00:37 | |
*** sdake has quit IRC | 00:38 | |
*** appprod0 has joined #openstack-ansible | 00:43 | |
*** javeriak has joined #openstack-ansible | 00:47 | |
*** sacharya has joined #openstack-ansible | 00:48 | |
*** daneyon has quit IRC | 00:51 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 01:08 | |
*** sdake has joined #openstack-ansible | 01:37 | |
*** sdake has quit IRC | 01:44 | |
*** daneyon has joined #openstack-ansible | 02:37 | |
*** appprod0 has quit IRC | 02:43 | |
*** javeriak has quit IRC | 02:45 | |
*** daneyon has quit IRC | 03:36 | |
*** daneyon has joined #openstack-ansible | 03:36 | |
*** davidjc has joined #openstack-ansible | 04:04 | |
*** davidjc has quit IRC | 04:12 | |
*** JRobinson__ is now known as JRobinson__afk | 04:32 | |
*** JRobinson__afk is now known as JRobinson__ | 04:57 | |
*** davidjc has joined #openstack-ansible | 05:21 | |
*** davidjc has quit IRC | 05:24 | |
*** davidjc has joined #openstack-ansible | 05:24 | |
*** markvoelker has joined #openstack-ansible | 05:32 | |
*** fangfenghua has quit IRC | 05:43 | |
*** JRobinson__ has quit IRC | 05:50 | |
*** davidjc has quit IRC | 06:04 | |
*** javeriak has joined #openstack-ansible | 06:11 | |
*** sacharya has quit IRC | 06:31 | |
*** markvoelker has quit IRC | 06:33 | |
*** fangfenghua has joined #openstack-ansible | 07:15 | |
*** markvoelker has joined #openstack-ansible | 07:33 | |
*** markvoelker has quit IRC | 07:38 | |
mattt | svg: do we need to update cinder.conf so cinder can write into rbd ? | 08:14 |
svg | not sure what you mean, cinder.conf gets configured via container_vars.cinder_backends as defined in the user config | 08:18 |
svg | doing rbd things is just another backend, so the cinder.conf template didn't need to be patched | 08:19 |
mattt | svg: derp, i was looking at a cinder.conf on a cinder-api node which remained unconfigured for rbd | 08:23 |
mattt | the one on cinder-volume node is fine | 08:24 |
mattt | wonder if cinder-volume doesn't get restarted correctly when cinder.conf is updated, i'll need to dig into that separately | 08:26 |
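As a concrete illustration of that container_vars.cinder_backends mechanism, an rbd backend in the user config would look roughly like the sketch below; the option keys follow the cinder RBD driver and svg's patch, but the pool name and the secret-UUID variable are only illustrative here:

    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: volumes_hdd
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
          rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"   # illustrative variable name

Each key under the backend ends up as an option in that backend's section of cinder.conf, which is why the cinder.conf template itself didn't need patching.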
*** markvoelker has joined #openstack-ansible | 08:34 | |
svg | mattt: that is possible, I filed a similar (unrelated) bug recently: https://bugs.launchpad.net/openstack-ansible/+bug/1449958 | 08:38 |
openstack | Launchpad bug 1449958 in openstack-ansible trunk "galera mysql instances are not restarted after an update to the config" [Undecided,Confirmed] | 08:38 |
*** markvoelker has quit IRC | 08:39 | |
mattt | svg: yeah, in this instance cinder volume creates were going into error, the moment i restarted cinder-volume they started working | 08:39 |
mattt | svg: with your patch, do all nova creates end up on rbd volumes? if so, can we make that part optional? | 08:40 |
svg | mattt: that depends on the backends you define | 08:42 |
svg | you can configure multiple "backends" in cinder | 08:42 |
mattt | svg: so i may want cinder to use RBD but that doesn't mean all nova boots should end up on RBD | 08:42 |
mattt | which if i'm understanding this correctly is how this will end up | 08:43 |
svg | multiple cinder backends show up in horizon as volume types (admin, volumes, second tab) | 08:44 |
svg | one of them is marked as default, don't recall how that one is chosen though | 08:45 |
svg | I presume the first | 08:45 |
mattt | svg: have a look at https://review.openstack.org/#/c/181957/11/playbooks/roles/os_nova/templates/nova.conf.j2 | 08:45 |
mattt | does that not mean that irrespective of cinder, all nova instances will end up on rbd? | 08:45 |
svg | images pool != volumes pool | 08:46 |
mattt | but they're both rbd pools is the issue | 08:46 |
svg | though I agree this is confusing and I don't grasp it completely myself | 08:46 |
mattt | svg: http://ceph.com/docs/master/rbd/rbd-openstack/ is helpful | 08:47 |
mattt | so i see there being a few options | 08:47 |
mattt | 1. rbd-backed cinder volumes (which are implemented nicely with your patch) | 08:47 |
svg | e.g. when you create a vm, you have a choice of multiple "boot sources" | 08:47 |
mattt | 2. configuring nova to allow it to boot from an rbd-backed cinder volume | 08:48 |
mattt | 3. configuring nova to boot instances directly into rbd | 08:48 |
mattt | in your patch #3 seems to happen if #1 is enabled | 08:49 |
mattt | i think they can be mutually exclusive | 08:49 |
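Mapped onto config, and following the ceph.com rbd-openstack guide linked above, the three options look roughly like this sketch (pool names are the guide's examples; option names and sections may vary between releases):

    # 1. rbd-backed cinder volumes (cinder.conf, in the backend's section)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

    # 2. let nova boot from / attach those rbd-backed cinder volumes (nova.conf, [libvirt])
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

    # 3. put instance root (ephemeral) disks directly into rbd (nova.conf, [libvirt])
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf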
svg | hm, I didn't really look at the use case of combining rbd and other cinder backends | 10:51 |
svg | given rbd snapshots/in-pool copies are instantaneous, those are a lot more performant | 08:51 |
svg | have a look at https://dl.dropboxusercontent.com/u/13986042/20150519104921.png and check the select source drop down | 08:52 |
mattt | svg: is it possible to boot nova instances using cli with local storage? | 08:55 |
mattt | s/with/onto/ | 08:56 |
svg | honestly, no clue | 08:59 |
svg | I'm very new with openstack | 08:59 |
svg | it's already a big win we managed to set up something so quickly thanks to this project :) | 08:59 |
mattt | are you guys running openstack in prod now ? | 09:00 |
svg | nope, not yet | 09:01 |
svg | we hope to do that asap, but we're still learning a lot | 09:01 |
svg | atm, we have one osad setup (juno) where we do some tests | 09:01 |
svg | and it's not pretty so far :( | 09:01 |
svg | pushed a deploy of 100vm's at once | 09:02 |
mattt | what happened? | 09:02 |
svg | which ran without (obvious) errors | 09:02 |
svg | but half of them are not reachable via their floating ip | 09:02 |
mattt | we should ping Apsu about that later today, to see if he's seen that happening before and what we can do to address it | 09:03 |
svg | any help will be appreciated; I'm not even sure how to start debugging that | 09:05 |
svg | but didn't look closely yet, it's one of my coworkers that works on that | 09:05 |
svg | it's not easy to track things in logging | 09:06 |
svg | early June we expect a visit of a racker for some help/consultancy/validating the stuff we do etc | 09:06 |
mattt | svg: yeah i'm not overly familiar w/ neutron, Apsu knows it very well though and should be able to advise you where to look | 09:08 |
svg | it might also be a misconfigured network thing on our side.. | 09:27 |
*** markvoelker has joined #openstack-ansible | 09:35 | |
*** markvoelker has quit IRC | 09:40 | |
*** javeriak has quit IRC | 09:48 | |
svg | mattt: right now I'm working on other things, but I keep track of your latest comments | 10:02 |
svg | the one about nova running directly in rbd, is part of something I don't understand very well in openstack | 10:04 |
mattt | svg: i'm not overly familiar w/ ceph so i'm probably not explaining it very well | 10:16 |
svg | this sounded more like a nova thing here | 10:16 |
svg | nova and cinder | 10:16 |
svg | I guess as cinder supports different types of backend, it also allows things that might make less sense with ceph (avoid copying images to volumes, and just do a ceph copy etc.) | 10:17 |
svg | I'm not sure where the nova pool kicks in here | 10:18 |
svg | one has an image (glance), volume (cinder) and vms (nova) pool IIRC | 10:18 |
mattt | svg: so if you take ceph out of the picture, you can do the following | 10:19 |
svg | I don't understand what the nova pool does actually | 10:19 |
mattt | svg: boot an instance onto local storage, boot an instance on local storage and then attach a cinder volume, or boot an instance w/ its root disk on a cinder volume | 10:19 |
mattt | svg: with ceph, you can then configure cinder to use an rbd backend, which will allow you to boot instances w/ local storage and a cinder disk or a root cinder volume attached that's backed by rbd | 10:21 |
mattt | svg: you also have the ability to replace local storage so that any instance booted has its root disk in ceph | 10:21 |
mattt | svg: with your patch, it is now assumed that if you have configured cinder to use rbd then you want _all_ booted instances to have their root disk in rbd (not as cinder volumes) | 10:21 |
svg | "or a root cinder volume attached that's backed by rbd" => on what ceph pool wouold that be then? | 10:22 |
svg | I guess the cinder/volumes pool? | 10:22 |
mattt | svg: yeah whatever you specified as rbd_pool in your openstack_user_variables.yml file | 10:23 |
mattt | (in your default example it is 'volumes_hdd') | 10:23 |
svg | ok | 10:23 |
svg | so in what case does the nova/vms pool get used? | 10:23 |
mattt | svg: as long as you have cinder configured w/ an rbd backend then nova will use the nova pool, which is wrong because that doesn't use cinder | 10:24 |
svg | (the one I configure for nova) | 10:24 |
mattt | so you're making one feature rely on another when they're not related | 10:24 |
svg | I'm afraid I don't follow you here | 10:25 |
mattt | ok so if you look at https://review.openstack.org/#/c/181957/12/playbooks/roles/os_nova/templates/nova.conf.j2 | 10:25 |
mattt | you have {% if cinder_backend_rbd_inuse|bool %} configure stuff {% endif %} | 10:26 |
mattt | so what you're saying is if cinder is configured to use rbd, then force nova to create all instances in rbd | 10:26 |
svg | ok | 10:26 |
mattt | but images_type = rbd is not using cinder, so having it rely on cinder_backend_rbd_inuse|bool isn't right | 10:26 |
svg | ok | 10:27 |
mattt | (and i could be wrong here too, anyone is welcome to step in) | 10:27 |
mattt | i think there is a disadvantage to having all instances go into rbd | 10:27 |
svg | Which is? | 10:27 |
mattt | well one is that it's probably more expensive, due to replicas | 10:28 |
svg | (this configuration was made based on http://ceph.com/docs/master/rbd/rbd-openstack/ btw) | 10:28 |
svg | if you forget the forced cinder_backend_rbd_inuse for a while, I do configure everything on ceph/rbd here, as per those docs | 10:29 |
mattt | svg: yeah i think that aside the configuration is fine | 10:29 |
mattt | i just think the conditional should be based on some user variable that isn't tied to cinder | 10:29 |
mattt | i have no issue w/ that configuration, i just think it should be optional and not dependent on cinder | 10:29 |
mattt | svg: i think there are probably some benefits to using cinder to create rbd volumes which you attach to instances over that nova configuration | 10:30 |
mattt | one obvious one is that you can probably delete the instance while still having the volume in cinder | 10:30 |
svg | I still don't understand what that nova pool is used for; when I deploy an instance, I can choose to boot from a volume or from an image | 10:31 |
mattt | svg: so if you boot from image the root disk of the instance will be stored in the nova pool | 10:32 |
mattt | svg: rather than existing on the compute host's local storage | 10:32 |
svg | but I always choose to boot from an image, as that is the generic case, and booting from a volume is more of a special case | 10:33 |
mattt | svg: correct, but you are assuming that the deployer wants all instances which aren't booted with a cinder volume to be stored in ceph | 10:33 |
mattt | and i think that assumption is wrong | 10:33 |
mattt | they may have a small ceph cluster which they want to expose to cinder/glance only and not have every single instance live in ceph | 10:33 |
mattt | that way if a customer wants a ceph instance they boot from a cinder volume, otherwise they boot an instance and it will land on local storage | 10:34 |
mattt | i don't see any harm in being able to allow people to deploy every instance onto ceph storage, but i think that should be optional and only enabled with a user variable | 10:34 |
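In other words, the images_type = rbd block could be gated on its own user variable instead of the cinder flag; a rough sketch, with a purely hypothetical variable name:

    # user variables (hypothetical name, not part of the current patch)
    nova_libvirt_images_rbd_inuse: true

    # nova.conf.j2 would then test that variable rather than cinder_backend_rbd_inuse
    {% if nova_libvirt_images_rbd_inuse | bool %}
    images_type = rbd
    images_rbd_pool = vms
    {% endif %}

That keeps the two features independent: cinder can use an rbd backend without forcing every instance's root disk into rbd, and vice versa.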
*** markvoelker has joined #openstack-ansible | 10:36 | |
*** markvoelker has quit IRC | 10:40 | |
svg | ok | 10:40 |
svg | I do see harm in using local storage though, as that might be (very) limited | 10:41 |
svg | ok, ok, I checked what we have in different pools, and the volumes pool is actually empty | 10:44 |
svg | we have a couple of images in the image / glance pool | 10:44 |
svg | and all vm's have their disks in the vms/nova pool | 10:45 |
svg | so the nova storage is actually "boot disks"? | 10:46 |
mattt | yep | 10:48 |
mattt | svg: i agree that local storage is limited (you can't live migrate, etc.) | 10:48 |
mattt | but for a lot of non-critical workloads it's fine, and cheap since most hypervisors have a ton of local storage in them | 10:49 |
svg | one thing I seem to have gotten wrong is that I thought ceph can do instantaneous copies (diff copies, it doesn't recreate the data, but reuses the one from the image, sort of COW), but that this only worked within the same pool | 10:49 |
svg | but it seems that works across pools, and this is what happens when deploying from an image to the nova backend | 10:50 |
svg | except when choosing the option "boot from image (creates a new volume)" | 10:50 |
mattt | yeah i thought it did that when you booted an instance from glance in rbd to a cinder volume | 10:51 |
mattt | actually when you boot a nova instance onto rbd that should work also | 10:53 |
svg | wow | 10:54 |
svg | *confused* | 10:54 |
svg | "booted an instance from glance in rbd to a cinder volume" | 10:54 |
svg | or does this mean, first create a volume from an image, then boot from that volume? | 10:54 |
mattt | you can boot an instance from a glance image to a cinder volume and boot from that volume | 10:57 |
svg | ow, ok, just tested, that is what happens with the "boot from image (creates a new volume)" option | 10:57 |
mattt | yeah, so doing that probably offers some advantages over the other method | 10:58 |
mattt | i'm not overly familiar w/ cinder but i'm guessing you can then delete the instance and retain the volume, you can do cinder backups on the volume (snapshots etc.) | 10:58 |
mattt | and whatever else cinder offers :p | 10:58 |
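On the CLI, that "boot from image (creates a new volume)" flow maps roughly onto nova's block-device mapping; a sketch, assuming a novaclient new enough to support --block-device (flavor, image and size here are placeholders):

    nova boot \
      --flavor m1.small \
      --block-device source=image,id=<glance-image-uuid>,dest=volume,size=20,shutdown=preserve,bootindex=0 \
      test-bfv

Booting with just --image and --flavor, by contrast, is what lands the root disk on local storage (or in the nova/vms rbd pool when images_type = rbd is set).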
svg | yes, given cinder is the only component that allows for multiple backends | 11:01 |
svg | we e.g. have separate volumes pools with hdd and ssd | 11:01 |
svg | so e.g. a db instance would get a separate data volume with its database files on ssd | 11:02 |
svg | so nova can only have one type of storage backend, and the choice here is between local storage and ceph | 11:03 |
mattt | svg: yeah exactly | 11:06 |
mattt | svg: which again i think is fine, if that's what you want, but it should not be dependent on whether cinder is configured w/ an rbd backend because they're not related | 11:06 |
svg | sure, I now understand | 11:06 |
mattt | yay! | 11:09 |
svg | is the logging/kibana thing useful? | 11:17 |
svg | i just tried deploying an instance, where it copies and creates a new volume from an image, and it failed | 11:17 |
svg | looking at kibana, no error message is found | 11:18 |
mattt | i don't use kibana personally | 11:25 |
mattt | let me finish something up here and then i'll try it | 11:25 |
mattt | (the boot from volume) | 11:25 |
*** markvoelker has joined #openstack-ansible | 11:36 | |
*** markvoelker has quit IRC | 11:41 | |
*** fangfenghua has quit IRC | 11:53 | |
*** markvoelker has joined #openstack-ansible | 12:37 | |
*** markvoelker has quit IRC | 12:43 | |
*** markvoelker has joined #openstack-ansible | 12:53 | |
*** Mudpuppy has joined #openstack-ansible | 13:26 | |
*** Mudpuppy has quit IRC | 13:40 | |
*** markvoelker has quit IRC | 13:59 | |
*** yaya has joined #openstack-ansible | 14:22 | |
*** sdake has joined #openstack-ansible | 14:42 | |
*** davidjc has joined #openstack-ansible | 14:43 | |
*** sdake_ has joined #openstack-ansible | 14:43 | |
svg | May 19 16:46:22 dc2-rk4-ch1-bl1 dnsmasq-dhcp[4866]: not giving name dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e to the DHCP lease of 10.0.3.165 because the name exists in /etc/hosts with address 10.16.8.24 | 14:47 |
svg | does this ring a bell to anyone? | 14:47 |
*** sdake has quit IRC | 14:47 | |
svg | (getting *lots* of those) | 14:48 |
*** jwagner_away is now known as jwagner | 14:51 | |
*** davidjc has quit IRC | 14:51 | |
svg | prolly not related, but we now suddenly have lots of issues with different containers not being reachable | 14:58 |
svg | and this varies | 14:58 |
svg | when I ping from a metal controller host to all of its containers, say around 10 containers, I get max 2-3 that reply, all the other ip's yield a "ping: sendmsg: Invalid argument" | 14:59 |
svg | and this starts out once we deploy +/- 50 vm's per compute host (second time we notice this) | 15:00 |
*** metral is now known as metral_zzz | 15:06 | |
mattt | svg: so wait, you are losing connectivity to _containers_ when you boot a large number of VMs? | 15:08 |
svg | at least it looks like it | 15:09 |
mattt | svg: odd, do those instances share the same network as your infrastructure? | 15:10 |
svg | define infrastructure? | 15:10 |
svg | they have a standard network setup as per rpc_userconfig | 15:11 |
mattt | svg: are instances being booted in the same network as is defined in cidr_networks for containers? | 15:11 |
svg | (dunno if this is related, but the kernel log shows lots of "[537441.256976] net_ratelimit: 6 callbacks suppressed") | 15:12 |
svg | ah, the instances, no, we use vxlan's so they are separated | 15:12 |
mattt | that is super odd | 15:13 |
svg | mattt I need to run to catch my train, will be back in 20' | 15:13 |
mattt | cool | 15:14 |
*** daneyon has quit IRC | 15:28 | |
*** appprod0 has joined #openstack-ansible | 15:30 | |
svg | back | 15:30 |
svg | on train so not a stable connection | 15:31 |
mattt | welcome back | 15:32 |
svg | so, any funky thoughts? | 15:32 |
mattt | svg: just poking on my dev env | 15:34 |
mattt | svg: i also have a ton of those 'not giving name' dhcp errors on my controllers | 15:34 |
mattt | so that should be unrelated | 15:35 |
svg | i checked and we had those errors pretty much all the time, so I don't expect this to be related | 15:36 |
mattt | would probably be prudent for us to update dnsmasq to not check /etc/hosts so we can prevent those log messages | 15:36 |
mattt | but yeah unrelated | 15:36 |
mattt | so you spin up a ton of VMs, and you start losing access to containers | 15:37 |
mattt | Apsu: you seen this? | 15:38 |
Apsu | mattt: No | 15:38 |
Apsu | That sounds fun | 15:38 |
*** saguilar has joined #openstack-ansible | 15:39 | |
mattt | svg: the containers aren't crashing right? | 15:39 |
svg | not sure if that is the trigger, but if the error is there before at least it makes it more obvious | 15:39 |
svg | containers are not crashing no, I can attach to them | 15:39 |
Apsu | Can't pay attention this second, will read back in a minute | 15:40 |
mattt | svg: do you have nova-compute running on the same controller node that you have containers on? | 15:42 |
svg | no | 15:43 |
*** appprod0 has quit IRC | 15:45 | |
mattt | svg: i'm not too sure unfortunately :( | 15:52 |
svg | on the compute hosts I have several syslog containers that seem to have crashed | 15:54 |
mattt | i wouldn't imagine that'd cause the errors you're seeing | 15:59 |
*** sacharya has joined #openstack-ansible | 15:59 | |
*** markvoelker has joined #openstack-ansible | 16:00 | |
*** saguilar has quit IRC | 16:01 | |
Apsu | svg: You've got overlapping CIDRs, I believe. | 16:04 |
mattt | Apsu: that would have been my guess | 16:05 |
mattt | afk for a bit, back later | 16:05 |
Apsu | Also, mattt, svg: "net_ratelimit: x callbacks suppressed" is when the kernel limits repeat syslog messages so they don't overwhelm the logger | 16:05 |
Apsu | Doesn't actually give you any clue to message content | 16:05 |
svg | Apsu: overlapping on metal or containers or both? | 16:06 |
Apsu | svg: Both probably. You've got a container named in /etc/hosts with a particular IP, yet the IP dnsmasq wants to give it is different | 16:07 |
*** daneyon has joined #openstack-ansible | 16:07 | |
*** markvoelker has quit IRC | 16:07 | |
svg | but that is about the management interface vs the container backend network? | 16:08 |
Apsu | Well, let's find out | 16:09 |
Apsu | Can you show me your host management interface CIDR (10. I assume, so shouldn't be an issue to share), as well as the user_config.yml CIDRs? | 16:11 |
svg | Apsu: I'm on a train right now with flaky connection, I'll dive into it when @home | 16:13 |
*** yaya has quit IRC | 16:17 | |
*** saguilar has joined #openstack-ansible | 16:19 | |
*** daneyon has quit IRC | 16:21 | |
Apsu | svg: Sounds good | 16:33 |
*** yaya has joined #openstack-ansible | 16:37 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 16:51 | |
*** sdake_ is now known as sdake | 16:55 | |
svg | Apsu: so from one of my "controllers", br-mgmt has inet addr:10.16.8.50 Bcast:10.16.9.255 Mask:255.255.254.0 | 17:09 |
svg | the user config for networking is http://sprunge.us/DRjO | 17:09 |
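For context on that paste, a minimal sketch of what the relevant networking section presumably contained, based only on the addresses mentioned around here (br-mgmt is 10.16.8.50 with a 255.255.254.0 mask, and the metal hosts sit in an excluded 10.16.8.0-10.16.8.127 range); the management and used_ips lines come from the log, the tunnel/storage entries are placeholders:

    cidr_networks:
      # Management (same range as br-mgmt on the target hosts)
      container: 10.16.8.0/23
      tunnel: <vxlan cidr>
      storage: <storage cidr>

    used_ips:
      - 10.16.8.0,10.16.8.127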
*** javeriak has joined #openstack-ansible | 17:17 | |
Apsu | svg: Right, but, what about your host's management range? | 17:21 |
svg | the metal hosts you mean? | 17:22 |
Apsu | yeah | 17:22 |
Apsu | I mean | 17:22 |
Apsu | svg | May 19 16:46:22 dc2-rk4-ch1-bl1 dnsmasq-dhcp[4866]: not giving name dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e to the DHCP lease of 10.0.3.165 because the name exists in /etc/hosts with address 10.16.8.24 | 17:22 |
Apsu | This is the error you got | 17:23 |
Apsu | So, something has 10.0.3.165 in its range | 17:23 |
svg | I just checked, the management iface on metal is all within the excluded range - 10.16.8.0,10.16.8.127 | 17:23 |
svg | hm, that is not how I understand that error | 17:23 |
svg | the host has an entry in /etc/hosts for 10.16.8.24 dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e (which osad configures like that) and because of that it refuses to hand out the lease | 17:24 |
Apsu | Well, it's failing to tag the lease with the name | 17:24 |
Apsu | Because it's in the hosts file | 17:24 |
svg | mattt confirmed to me he has similar warnings on his dev setup, and we also had those messages in the past days (before we started having issues with container connectivity) | 17:25 |
svg | so pretty sure that is unrelated | 17:25 |
Apsu | Ok | 17:26 |
Apsu | So you're losing ssh access, regardless of the name association | 17:26 |
Apsu | Does that still occur if you ssh to the IP of the container? | 17:26 |
svg | yup | 17:28 |
svg | (openstack-ansible targets ip's, as container names are not in dns) | 17:28 |
*** willemgf has joined #openstack-ansible | 17:29 | |
Apsu | So this hosts issue is just breaking the dns resolution, which makes sense of course | 17:29 |
Apsu | Ok | 17:29 |
Apsu | We'll have to dig into what's up | 17:30 |
*** saguilar has quit IRC | 17:32 | |
* svg welcomes coworker willemgf | 17:32 | |
*** sdake has quit IRC | 17:40 | |
*** sdake has joined #openstack-ansible | 17:42 | |
*** jwagner is now known as jwagner_lunch | 17:44 | |
*** daneyon has joined #openstack-ansible | 17:50 | |
*** sdake has quit IRC | 17:51 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 17:55 | |
*** sdake has joined #openstack-ansible | 17:56 | |
*** javeriak has quit IRC | 17:56 | |
*** daneyon has quit IRC | 17:59 | |
*** daneyon has joined #openstack-ansible | 18:00 | |
*** dkalleg has joined #openstack-ansible | 18:09 | |
*** saguilar has joined #openstack-ansible | 18:10 | |
*** dkalleg has quit IRC | 18:10 | |
*** dkalleg has joined #openstack-ansible | 18:11 | |
*** britthouser has joined #openstack-ansible | 18:14 | |
*** daneyon has quit IRC | 18:14 | |
*** britthou_ has joined #openstack-ansible | 18:16 | |
*** daneyon has joined #openstack-ansible | 18:16 | |
*** davidjc has joined #openstack-ansible | 18:16 | |
*** yaya has quit IRC | 18:17 | |
*** daneyon has quit IRC | 18:18 | |
*** sdake has quit IRC | 18:18 | |
*** davidjc has quit IRC | 18:18 | |
*** britthouser has quit IRC | 18:19 | |
*** davidjbc has joined #openstack-ansible | 18:19 | |
*** jwagner_lunch is now known as jwagner | 18:39 | |
*** javeriak has joined #openstack-ansible | 18:40 | |
*** willemgf has quit IRC | 18:41 | |
svg | mattt, apsu: thanks for all your help so far, this is really confusing us, and we will first start to dive into our networking setup, to find out if there might be problems on our side. thx! | 18:49 |
Apsu | np! | 18:50 |
*** britthou_ has quit IRC | 19:01 | |
*** davidjbc has quit IRC | 19:03 | |
*** britthouser has joined #openstack-ansible | 19:05 | |
*** openstackgerrit has quit IRC | 19:06 | |
*** openstackgerrit has joined #openstack-ansible | 19:06 | |
*** logan2 has quit IRC | 19:07 | |
*** logan2 has joined #openstack-ansible | 19:08 | |
*** davidjc has joined #openstack-ansible | 19:26 | |
*** britthouser has quit IRC | 19:31 | |
*** dkalleg has quit IRC | 19:31 | |
*** davidjc has quit IRC | 19:41 | |
*** daneyon has joined #openstack-ansible | 19:45 | |
*** jwagner is now known as jwagner_away | 19:49 | |
*** javeriak has quit IRC | 19:50 | |
mattt | Apsu: those errors are because we write out our own /etc/hosts file which doesn't correspond w/ IPs that dnsmasq dishes out | 19:52 |
Apsu | mattt: Sounds like it, yeah. Should file a bug so we can sync them up | 19:53 |
Apsu | Probably by generating a dnsmasq lease file that matches | 19:53 |
mattt | Apsu: not sure it's the right solution, but i passed --no-hosts to dnsmasq and that cleared them up | 19:53 |
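--no-hosts simply tells dnsmasq not to read /etc/hosts. One way it could be wired in persistently, assuming the Ubuntu lxc-net script in use honours LXC_DHCP_CONFILE (an assumption, this varies by lxc version), would be something like:

    # /etc/default/lxc-net  (assumption: variable supported by the lxc version in use)
    LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

    # /etc/lxc/dnsmasq.conf  (dnsmasq conf-file syntax drops the leading dashes)
    no-hosts

Whether that is the right long-term fix is left to the bug mattt files below.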
*** javeriak has joined #openstack-ansible | 19:54 | |
mattt | Apsu: i'll create a bug for it so we can circle back at some point | 19:58 |
mattt | it does beg the question if we could do away with lxc's stock eth0 and create eth0 w/ our container cidr instead | 19:58 |
Apsu | Yeah | 20:00 |
*** javeriak has quit IRC | 20:03 | |
*** sdake has joined #openstack-ansible | 20:06 | |
svg | mattt: is that about the 'container management network'? I never understood what that's for, given there's already an openstack management network | 20:09 |
mattt | svg: openstack management network? | 20:11 |
svg | 24 # Management (same range as br-mgmt on the target hosts) | 20:12 |
svg | 25 container: 10.16.8.0/23 | 20:12 |
svg | as defined in rpc_user_config.yml | 20:12 |
svg | ^^ openstack management network, by container management network I mean the dnsmasq/dhcp network (eth0 on the containers) | 20:13 |
*** jwagner_away is now known as jwagner | 20:13 | |
mattt | svg: i can't see why it's not possible to do away w/ eth0 in the container | 20:15 |
svg | my thought exactly | 20:15 |
mattt | svg: we'd have to ensure dnsmasq dishes out the same IPs every time a container boots though | 20:16 |
svg | (though afaicr, there was no way to de-configure that?) | 20:16 |
svg | euhm, I mean, why do we need dhcp? | 20:16 |
mattt | well i think you'd still need to use dhcp | 20:17 |
mattt | because you can't configure the container's eth0 without networking | 20:18 |
svg | Why? is that lxc-specific? | 20:18 |
mattt | well you can run arbitrary commands in an lxc container, not sure if our lxc ansible module permits that though | 20:19 |
svg | I don't follow | 20:20 |
svg | ok, sorry, now it struck me | 20:21 |
svg | so that's for initial config | 20:22 |
mattt | svg: looks like the initial container stuff doesn't happen over the network tho, which makes sense | 20:22 |
mattt | so yeah i'm not entirely sure dhcp is necessary | 20:22 |
svg | also, given the ansible_ssh_host var is set to the mgmt ip | 20:23 |
svg | I was told (by cloudnull) that dhcp network is essentially to allow a container to connect outside the cluster (to internet..) | 20:24 |
mattt | yeah i was just looking at that | 20:24 |
mattt | looks like the 10.0.3.0/24 lxc network is hardcoded in the lxc-net stuff | 20:24 |
*** daneyon has quit IRC | 20:24 | |
mattt | so you'd be hard pressed to change that | 20:24 |
svg | but, given that routes through dnsmasq on the metal host, those get external access over the mgmt network... | 20:25 |
svg | guess it's one of the quirks :) | 20:25 |
mattt | svg: i created https://bugs.launchpad.net/openstack-ansible/+bug/1456792 so we can look into it further | 20:27 |
openstack | Launchpad bug 1456792 in openstack-ansible "Excessive dnsmasq-dhcp log entries" [Undecided,New] | 20:27 |
* svg subscribed | 20:32 | |
*** sdake has quit IRC | 20:39 | |
*** daneyon has joined #openstack-ansible | 20:41 | |
*** daneyon has quit IRC | 20:45 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 20:52 | |
*** daneyon has joined #openstack-ansible | 20:53 | |
*** daneyon has quit IRC | 20:54 | |
*** britthouser has joined #openstack-ansible | 21:01 | |
*** radek_ has joined #openstack-ansible | 21:29 | |
*** davidjc has joined #openstack-ansible | 21:33 | |
*** davidjc has quit IRC | 21:34 | |
*** javeriak has joined #openstack-ansible | 21:42 | |
*** britthouser has quit IRC | 21:45 | |
*** radek_ has quit IRC | 21:46 | |
*** britthouser has joined #openstack-ansible | 21:49 | |
*** britthouser has quit IRC | 22:09 | |
*** sacharya has quit IRC | 22:10 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 22:32 | |
*** britthouser has joined #openstack-ansible | 22:35 | |
javeriak | hey guys, general question: there's a neutron-agent service being installed that's not really configured for anything, what exactly is the purpose for it? | 22:57 |
*** saguilar has quit IRC | 23:00 | |
prometheanfire | neutron-lb-agent? | 23:10 |
javeriak | nope, just a 'neutron-agent', this is the juno branch code | 23:12 |
*** britthouser has quit IRC | 23:13 | |
*** darrenc is now known as darren_afk | 23:25 | |
*** britthouser has joined #openstack-ansible | 23:36 | |
*** darren_afk is now known as darrenc | 23:44 | |
*** daneyon has joined #openstack-ansible | 23:49 | |
*** daneyon has quit IRC | 23:59 | |
*** daneyon has joined #openstack-ansible | 23:59 |