*** sdake has quit IRC | 00:25 | |
*** markvoelker has joined #openstack-ansible | 00:27 | |
*** markvoelker has quit IRC | 00:31 | |
*** Mudpuppy has joined #openstack-ansible | 00:35 | |
cloudnull | arbrandes: did you get it to go ? | 00:55 |
*** arbrandes1 has joined #openstack-ansible | 00:55 | |
cloudnull | Quick question, just reading through the scrollback: do you have the br-mgmt interface on all of your hosts? | 00:57 |
*** arbrandes has quit IRC | 00:58 | |
cloudnull | Additionally, if you have null entries for your containers, the cause looks to be that no management network was defined for the IP addresses to be pulled from. | 00:58 |
cloudnull | looking at http://paste.openstack.org/show/469139/ | 00:58 |
cloudnull | Line 23 | 00:58 |
cloudnull | ip_from_q: "management" | 00:59 |
cloudnull | management is not defined in cidr_networks | 00:59 |
cloudnull | arbrandes1: ^^ | 01:01 |
cloudnull | just looking at the pasted config, it seems like you need to s/container/management/ in the cidr_networks field and you'll be good to go. | 01:02 |
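A minimal sketch of the fix described above, assuming the rest of the paste stays as-is: the name given to ip_from_q must match a key under cidr_networks, so the "container" key is renamed to "management". The CIDR value and interface names below are illustrative, patterned on the kilo example configs, not taken from the paste.

    cidr_networks:
      management: 172.29.236.0/22          # renamed from "container" so ip_from_q can resolve it

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-mgmt"
            container_type: "veth"
            container_interface: "eth1"
            ip_from_q: "management"        # must name a key in cidr_networks
            type: "raw"
            group_binds:
              - all_containers
              - hosts
            is_container_address: true
            is_ssh_address: true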
cloudnull | In terms of cleaning it up: for the sake of simplicity, I'd probably run the lxc-containers-destroy.yml play. | 01:03 |
cloudnull | rm the openstack_inventory.json file | 01:03 |
cloudnull | fix the entry in config | 01:03 |
cloudnull | and start the plays fresh | 01:03 |
cloudnull | host setup and base config should be all good. | 01:03 |
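Collected into one sketch, that cleanup sequence looks roughly like this (run from the playbooks directory of the checkout; the exact destroy-play name varies slightly between branches):

    # destroy the misbuilt containers
    openstack-ansible lxc-containers-destroy.yml
    # drop the generated inventory so the next run rebuilds it from the fixed config
    rm /etc/openstack_deploy/openstack_inventory.json
    # after correcting cidr_networks in openstack_user_config.yml, start fresh
    openstack-ansible setup-everything.yml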
cloudnull | let me know if it doesn't go. I'm around for the most part this evening. | 01:04 |
cloudnull | FYI for more available community help most of us are available Mon-Fri GMT+1 / GMT-6 | 01:06 |
cloudnull | but I think the only thing you're missing is the misconfiguration between the "container" and "management" network names. | 01:07 |
*** arbrandes1 has quit IRC | 02:06 | |
*** jmccrory has quit IRC | 02:12 | |
*** arbrandes1 has joined #openstack-ansible | 02:23 | |
*** markvoelker has joined #openstack-ansible | 02:28 | |
*** markvoelker has quit IRC | 02:32 | |
*** abitha has joined #openstack-ansible | 02:53 | |
prometheanfire | cloudnull: hi | 03:08 |
prometheanfire | cloudnull: thing tomorrow? | 03:08 |
cloudnull | hi. | 03:10 |
cloudnull | no, hanging in with the wife tomorrow. | 03:10 |
cloudnull | we went wine tasting today. she wasn't feeling well before, and now she's worse | 03:10 |
cloudnull | so tomorrow will be a down day | 03:11 |
prometheanfire | ok | 03:13 |
*** Mudpuppy has quit IRC | 03:53 | |
*** abitha has quit IRC | 03:54 | |
* prometheanfire finally figured out gerrit interdiffing | 04:13 | |
prometheanfire | more reviews | 04:13 |
*** Mudpuppy has joined #openstack-ansible | 04:23 | |
*** Mudpuppy has quit IRC | 04:27 | |
prometheanfire | cloudnull: has any of the automated upgrade stuff used your patch (for the split or the other one)? | 04:29 |
*** markvoelker has joined #openstack-ansible | 04:29 | |
cloudnull | as of today yes | 04:29 |
cloudnull | IE OSA upgrade test failed on OneOffTest-10.1.14-11.2.1-saved2-ref224137 | 04:30 |
cloudnull | ref224137 == https://review.openstack.org/#/c/224137/ | 04:30 |
prometheanfire | nice | 04:32 |
prometheanfire | cloudnull: I'm getting rid of articles in your docs for the split btw | 04:32 |
prometheanfire | might want to have docs look at it though | 04:32 |
cloudnull | huh ? | 04:32 |
prometheanfire | also, s/that// | 04:32 |
prometheanfire | a/and/the | 04:33 |
prometheanfire | bah | 04:33 |
prometheanfire | a/an/the | 04:33 |
cloudnull | okiedokie | 04:33 |
*** markvoelker has quit IRC | 04:33 | |
prometheanfire | ya, more pedantry | 04:34 |
cloudnull | sounds good to me. | 04:35 |
cloudnull | i told the docs people that they could write them or they'd have to deal with the garbage I create. :) | 04:36 |
cloudnull | and now we have the garbage I created so you see how well that worked out :) | 04:36 |
cloudnull | cc Sam-I-Am ^^ | 04:36 |
prometheanfire | yep | 04:37 |
cloudnull | prometheanfire: have you seen this https://bugs.launchpad.net/openstack-ansible/+bug/1497669 <- this issue must be a thing in all of kilo unless rpc is resolving the python-ldap pkg elsewhere? | 04:57 |
openstack | Launchpad bug 1497669 in openstack-ansible trunk "python-ldap is missing from the keystone containers" [High,Triaged] - Assigned to Kevin Carter (kevin-carter) | 04:57 |
prometheanfire | huh, I've not seen it | 04:59 |
prometheanfire | I think I remember it mentioned, but could have been the bug triage meeting | 04:59 |
*** sdake has joined #openstack-ansible | 05:02 | |
prometheanfire | cat is dreaming (doesn't have full cutoff from REM, still acts out) | 05:03 |
cloudnull | nice | 05:04 |
cloudnull | when felipe dreams he's funny | 05:04 |
cloudnull | prometheanfire: did you see my PR in channel ? | 05:05 |
cloudnull | IE https://review.openstack.org/#/c/225469/ | 05:05 |
prometheanfire | I'll look | 05:06 |
prometheanfire | I didn't see it if it was in the last day | 05:06 |
prometheanfire | is the bot dead? | 05:06 |
cloudnull | it seems so | 05:06 |
prometheanfire | neat | 05:06 |
*** sdake_ has joined #openstack-ansible | 05:07 | |
cloudnull | master is not happy right now. | 05:11 |
cloudnull | this is the source of suckage > openstack_auth.User.keystone_user_id: (mysql.E001) MySQL does not allow unique CharFields to have a max_length > 255 | 05:11 |
*** sdake has quit IRC | 05:11 | |
cloudnull | in horizon | 05:11 |
prometheanfire | neat | 05:14 |
*** sdake_ has quit IRC | 05:18 | |
*** sdake has joined #openstack-ansible | 05:20 | |
*** sdake has quit IRC | 05:22 | |
*** sdake has joined #openstack-ansible | 05:23 | |
*** sdake has quit IRC | 05:24 | |
*** sdake_ has joined #openstack-ansible | 05:24 | |
prometheanfire | cloudnull: you're welcome | 05:37 |
cloudnull | tyvm | 05:58 |
cloudnull | now to see if the doc people agree/will go fix it :) | 05:58 |
cloudnull | cc Sam-I-Am ^^ re: https://review.openstack.org/#/c/224137/ | 05:59 |
prometheanfire | :P | 06:02 |
*** elo has joined #openstack-ansible | 06:25 | |
*** markvoelker has joined #openstack-ansible | 06:30 | |
*** markvoelker has quit IRC | 06:34 | |
*** elo has quit IRC | 06:37 | |
*** sdake_ has quit IRC | 06:59 | |
*** arbrandes1 has quit IRC | 08:01 | |
*** arbrandes has joined #openstack-ansible | 08:08 | |
*** markvoelker has joined #openstack-ansible | 08:30 | |
*** markvoelker has quit IRC | 08:35 | |
*** pellaeon has joined #openstack-ansible | 08:50 | |
pellaeon | Hi, previously I had a single network node (an older version of OSAD used this, I remember). Now I want to use infra1~3 as network nodes, as suggested in the example openstack_user_config.yml. How do I do this? | 08:53 |
pellaeon | simply changing openstack_user_config by replacing network_hosts with infra1~3 doesn't work | 08:54 |
pellaeon | because openstack_inventory.json still contains the old network host | 08:55 |
pellaeon | but deleting openstack_inventory.json will cause it to be re-generated, and it will build new containers instead of just using the old containers | 08:57 |
pellaeon | I made some manual modifications to some containers, so I don't want to lose them | 08:58 |
*** agireud has quit IRC | 09:49 | |
*** agireud has joined #openstack-ansible | 09:50 | |
*** markvoelker has joined #openstack-ansible | 10:31 | |
*** markvoelker has quit IRC | 10:35 | |
*** ashishjain has joined #openstack-ansible | 10:59 | |
ashishjain | hello | 10:59 |
ashishjain | Need some help with osad | 10:59 |
ashishjain | while running openstack-ansible haproxy-install.yml -vvv | 11:02 |
ashishjain | I get the following: skipping: no hosts matched | 11:02 |
ashishjain | any clues | 11:02 |
*** markvoelker has joined #openstack-ansible | 11:32 | |
*** markvoelker has quit IRC | 11:37 | |
*** shoutm has joined #openstack-ansible | 11:46 | |
*** gparaskevas has joined #openstack-ansible | 12:13 | |
*** arbrandes has quit IRC | 13:30 | |
*** markvoelker has joined #openstack-ansible | 13:33 | |
*** markvoelker has quit IRC | 13:37 | |
cloudnull | ashishjain: to have haproxy work you need to define a host where it will live. | 13:42 |
cloudnull | something similar to https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L127-L129 | 13:42 |
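For reference, the stanza that link points at looks roughly like this; the host name and address come from the AIO example, so substitute your own:

    haproxy_hosts:
      aio1:
        ip: 172.29.236.100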
cloudnull | pellaeon: if you want to remove a host from inventory there is a script called inventory-manage.py which can pull that one thing out | 13:43 |
ashishjain | cloudnull: Thanks for this. | 13:44 |
ashishjain | cloudnull: Can you please provide me one more help | 13:44 |
cloudnull | usage is : inventory-manage.py -f <path-to-inventory-file> -l | 13:44 |
cloudnull | to remove it's the same. | 13:44 |
cloudnull | usage is : inventory-manage.py -f <path-to-inventory-file> -r <hostname-to-remove> | 13:44 |
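Concretely, with the default inventory location, that would be something like the following; the script path assumes a standard kilo checkout, and "old-network1" is a hypothetical host name:

    # list what the inventory currently holds
    /opt/os-ansible-deployment/scripts/inventory-manage.py -f /etc/openstack_deploy/openstack_inventory.json -l
    # remove the stale network host by name
    /opt/os-ansible-deployment/scripts/inventory-manage.py -f /etc/openstack_deploy/openstack_inventory.json -r old-network1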
ashishjain | cloudnull: I have been trying osad for the last 7 days, initially with juno and then with kilo, w/o any success :( | 13:44 |
cloudnull | ashishjain: sure whats up? | 13:45 |
ashishjain | I will paste my openstack_user_config.yml | 13:45 |
ashishjain | Can you please validate if this is correct | 13:45 |
cloudnull | did you see my comments about your user config last night? | 13:45 |
cloudnull | i think it was you ... | 13:45 |
cloudnull | I don't remember. :) | 13:45 |
ashishjain | cloudnull: No, I don't think it was me | 13:47 |
ashishjain | http://paste.openstack.org/show/471145/ | 13:47 |
ashishjain | cloudnull: Not sure if you have changed your name. But I did get one answer yesterday regarding provider networks being part of global_overrides | 13:47 |
ashishjain | ansible01 is my deployment as well as target host | 13:49 |
*** arbrandes has joined #openstack-ansible | 13:50 | |
cloudnull | ah yes | 13:51 |
cloudnull | one sec | 13:51 |
cloudnull | this is what i said looking back at that last night: | 13:52 |
cloudnull | <cloudnull> arbrandes: did you get it to go ? | 13:53 |
cloudnull | <cloudnull> Quick question, just reading through the scrollback: do you have the br-mgmt interface on all of your hosts? | 13:53 |
cloudnull | <cloudnull> Additionally, if you have null entries for your containers, the cause looks to be that no management network was defined for the IP addresses to be pulled from. | 13:53 |
cloudnull | <cloudnull> looking at http://paste.openstack.org/show/469139/ | 13:53 |
cloudnull | <cloudnull> Line 23 | 13:53 |
cloudnull | <cloudnull> ip_from_q: "management" | 13:53 |
cloudnull | <cloudnull> management is not defined in cidr_networks | 13:53 |
cloudnull | <cloudnull> arbrandes1: ^^ | 13:53 |
cloudnull | <cloudnull> just looking at the pasted config, it seems like you need to s/container/management/ in the cidr_networks field and you'll be good to go. | 13:53 |
cloudnull | <cloudnull> In terms of cleaning it up: for the sake of simplicity, I'd probably run the lxc-containers-destroy.yml play. | 13:53 |
cloudnull | <cloudnull> rm the openstack_inventory.json file | 13:53 |
cloudnull | <cloudnull> fix the entry in config | 13:53 |
cloudnull | <cloudnull> and start the plays fresh | 13:53 |
cloudnull | <cloudnull> host setup and base config should be all good. | 13:53 |
cloudnull | <cloudnull> let me know if it doesn't go. I'm around for the most part this evening. | 13:53 |
cloudnull | <cloudnull> FYI for more available community help most of us are available Mon-Fri GMT+1 / GMT-6 | 13:53 |
cloudnull | <cloudnull> but I think the only thing you're missing is the misconfiguration between the "container" and "management" network names. | 13:53 |
cloudnull | ashishjain: you were also working on http://paste.openstack.org/show/469486/ right? | 13:55 |
ashishjain | cloudnull: ya, looks like my config, but somehow I missed your chat snippets yesterday. I use a web-based chat so I usually don't have chat logs :( | 13:56 |
ashishjain | https://webchat.freenode.net/ | 13:56 |
ashishjain | this is what I use | 13:56 |
ashishjain | currently my latest config is http://paste.openstack.org/show/471145/ | 13:57 |
ashishjain | btw where is lxc cache downloaded to ? | 13:57 |
ashishjain | Can I store it permanently in my system? | 13:58 |
cloudnull | it's downloaded to /var/cache/lxc | 13:58 |
cloudnull | so are you deploying juno ? | 13:58 |
cloudnull | that user config looks like the example from juno. | 13:59 |
ashishjain | cloudnull: no, it is kilo. | 14:00 |
cloudnull | also do you intend to deploy both flat and vlan networks for neutron ? | 14:00 |
ashishjain | cloudnull: I would deploy vlan .... but I thought we need to have both :( | 14:01 |
cloudnull | you can have both | 14:01 |
cloudnull | but its not required | 14:01 |
ashishjain | cloudnull: okay | 14:01 |
ashishjain | cloudnull: Is my kilo config incorrect? | 14:01 |
cloudnull | to have flat networks function you'll need to remove one entry | 14:01 |
cloudnull | rather, change: | 14:02 |
cloudnull | host_bind_override: "eth12" | 14:02 |
ashishjain | git branch * kilo | 14:02 |
ashishjain | when I run the git branch command I get the result as kilo | 14:02 |
cloudnull | line 37-44 i'd remove | 14:03 |
ashishjain | cloudnull: done | 14:03 |
ashishjain | I removed the lines | 14:03 |
ashishjain | now I got only vlan | 14:04 |
cloudnull | http://cdn.pasteraw.com/bchpal2z88b5znjgslmx41hsve5xc69 <- a simple config from an AIO i did today. | 14:05 |
cloudnull | so in the git branch are you running from the kilo head or a tag ? | 14:06 |
cloudnull | and where are you stuck now? | 14:06 |
ashishjain | cloudnull: regarding your config, it seems to be a single node | 14:08 |
ashishjain | cloudnull: But what about multi-node config | 14:08 |
ashishjain | cloudnull: how is haproxy_hosts different from internal_lb_vip_address? | 14:09 |
ashishjain | cloudnull: as per my understanding haproxy_hosts is the one where haproxy will be installed | 14:10 |
ashishjain | what about internal_lb_vip_address ? | 14:11 |
ashishjain | will it be the same as where haproxy is installed? | 14:12 |
ashishjain | cloudnull: I am not sure if I have confused you, but my question is: will haproxy be installed as an lxc container? | 14:14 |
cloudnull | ashishjain: that config is from a single node, that's true, but extending it for multi-node is as simple as adding more nodes. | 14:14 |
cloudnull | haproxy runs on the host | 14:15 |
cloudnull | not in a container | 14:15 |
ashishjain | okay so I think the snippet I pasted is still wrong | 14:15 |
cloudnull | in my case the internal lb vip address is 172.29.236.100, which is on the container management network | 14:15 |
ashishjain | haproxy_hosts: ansible01: ip: 192.168.57.100 | 14:15 |
ashishjain | because I do not have any host with ip 192.168.57.100 | 14:16 |
ashishjain | instead I have got hosts with ip 192.168.57.11, 192.168.57.12 and 192.168.57.13 | 14:16 |
cloudnull | so you can alias the address to an existing network interface if you need. | 14:16 |
cloudnull | or just change it | 14:17 |
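A sketch of the aliasing option, using the addresses from this exchange; the bridge name and prefix length are assumptions, so adjust for the actual host:

    # add the VIP as a secondary address on the management bridge
    ip addr add 192.168.57.100/24 dev br-mgmt
    # verify the address took
    ip -4 addr show br-mgmt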
ashishjain | cloudnull: So I think the correct config will be to have internal_lb_vip_address: 192.168.57.11 | 14:17 |
ashishjain | and haproxy_hosts: ansible01: ip: 192.168.57.11 | 14:17 |
ashishjain | cloudnull: Is this correct? | 14:17 |
cloudnull | yes, if you don't have 192.168.57.100 anywhere | 14:17 |
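So for this environment the consistent pair would look like this sketch, with the values from the exchange above:

    global_overrides:
      internal_lb_vip_address: 192.168.57.11   # must be an address that exists on the haproxy host

    haproxy_hosts:
      ansible01:
        ip: 192.168.57.11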
ashishjain | cloudnull: aah that was another mistake. | 14:17 |
cloudnull | osad won't mangle your network interfaces. it assumes things on the host are already configured | 14:18 |
ashishjain | cloudnull: I was stuck at setup-infrastructure.yml where my galera container was not able to pip install MySQL-python, pycrypto and memcached | 14:18 |
ashishjain | I finally realised that it is trying to connect to a webserver at 192.168.57.100:8181 | 14:19 |
ashishjain | which definitely was not there | 14:19 |
cloudnull | ah , that'll do it | 14:19 |
ashishjain | I think I will start afresh again... this is probably the 6th or 7th time :( | 14:19 |
cloudnull | 8th time is a charm :) | 14:20 |
ashishjain | cloudnull: can you plz validate my config one more time | 14:20 |
cloudnull | sure. | 14:20 |
ashishjain | cloudnull: thanks a ton | 14:20 |
cloudnull | you can remove line 2 and 10 from your config | 14:21 |
cloudnull | environment_version is no longer used | 14:21 |
cloudnull | juno only | 14:21 |
cloudnull | and as you've said 192.168.57.100 is no longer present. | 14:22 |
cloudnull | your cidr networks are only using a /24; that won't give you a lot of room to grow. | 14:22 |
cloudnull | if its a small deployment that should be fine | 14:23 |
ashishjain | cloudnull: just a test deployment for now | 14:23 |
ashishjain | ya made the edits as you have suggested | 14:23 |
cloudnull | but if it's something you're looking to grow over time, I'd recommend using a /22 or larger. | 14:23 |
ashishjain | here is the final config | 14:23 |
cloudnull | kk | 14:23 |
ashishjain | it is on my laptop now but soon I want to move onto industry-grade servers | 14:23 |
ashishjain | http://paste.openstack.org/show/471186/ | 14:24 |
ashishjain | here is my new config | 14:24 |
cloudnull | just to be sure, you have br-vlan, br-vxlan, and br-mgmt on all of your hosts already? | 14:25 |
ashishjain | cloudnull: yes. I will just paste the result from one of my host | 14:25 |
ashishjain | cloudnull: http://paste.openstack.org/show/471207/ | 14:27 |
cloudnull | ok. are you not deploying cinder ? | 14:30 |
cloudnull | or do you want to ? | 14:30 |
ashishjain | cloudnull: not deploying it now, will do it later. | 14:31 |
ashishjain | cloudnull: thought of having something running first and then updating the config to include cinder | 14:32 |
ashishjain | cloudnull: that was the original plan... but if you advise it, then I will configure cinder and swift as well | 14:32 |
ashishjain | cloudnull: currently I have not configured ceilometer also | 14:33 |
cloudnull | http://cdn.pasteraw.com/536s873abyyyzoxfv4snhdsj4ib0f79 | 14:33 |
cloudnull | I made one edit | 14:33 |
cloudnull | infra_hosts / os-infra_hosts | 14:33 |
cloudnull | the old entry would've worked but os-infra is more specific | 14:34 |
cloudnull | i also included commented out sections when you decide to deploy cinder | 14:34 |
ashishjain | okay got it | 14:37 |
*** fawadkhaliq has joined #openstack-ansible | 14:37 | |
ashishjain | cloudnull: why is storage-infra_hosts: different from storage_hosts:? | 14:38 |
cloudnull | storage infra runs the api | 14:38 |
cloudnull | storage hosts is where the volume services will run | 14:38 |
ashishjain | okay | 14:38 |
ashishjain | okay | 14:40 |
ashishjain | btw, just wanted to tell you the final plan is to use the ansible api to deploy osad. Do you think it is viable? | 14:40 |
cloudnull | through tower ? | 14:41 |
ashishjain | no ansible python api | 14:41 |
cloudnull | i've never tried . | 14:42 |
ashishjain | http://docs.ansible.com/ansible/developing_api.html | 14:42 |
cloudnull | i'd be interested if it works. I'd assume it'd go . | 14:42 |
ashishjain | yes hopefully it may work | 14:42 |
cloudnull | we had issues with tower in the past, though I've not tried again for some time. | 14:43 |
cloudnull | it didn't handle complex vars very well. | 14:43 |
cloudnull | but that was ansible 1.5.x | 14:43 |
cloudnull | so I'm sure it's improved, | 14:43 |
cloudnull | I've just not given it another go. | 14:43 |
* cloudnull waiting on ansible 2 | 14:43 | |
ashishjain | ya hopefully it may work | 14:44 |
cloudnull | I don't see why it wouldn't | 14:44 |
ashishjain | yes you are correct. | 14:45 |
ashishjain | Going back to osad: when I run lxc-ls I see a lot of containers... to start afresh, what shall I do? | 14:45 |
ashishjain | I ran the destroy yml also but still I see the same set of containers | 14:45 |
ashishjain | shall I use lxc-destroy to remove them one by one? | 14:46 |
cloudnull | from the playbooks directory . | 14:46 |
ashishjain | yes I ran from the playbooks directory | 14:46 |
cloudnull | run: ansible hosts -m shell -a 'for i in $(lxc-ls); do lxc-destroy -fn $i; done' | 14:47 |
ashishjain | i have messed up I know...i initially deleted everything from /var/lib/lxc manually | 14:47 |
cloudnull | then run: ansible hosts -m shell -a 'rm -rf /openstack' | 14:47 |
cloudnull | then delete the /etc/openstack_deploy/openstack_inventory.json | 14:48 |
cloudnull | and start the deployment again | 14:48 |
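The whole reset, collected into one sketch; it is run from the playbooks directory, and "hosts" is the inventory group the ad-hoc commands target:

    # force-destroy every container on every host
    ansible hosts -m shell -a 'for i in $(lxc-ls); do lxc-destroy -fn $i; done'
    # remove the container data directories left behind on the hosts
    ansible hosts -m shell -a 'rm -rf /openstack'
    # drop the generated inventory so the next run rebuilds it
    rm /etc/openstack_deploy/openstack_inventory.json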
*** Mudpuppy has joined #openstack-ansible | 14:48 | |
cloudnull | ive got to run bbl | 14:52 |
ashishjain | cloudnull: all set to start afresh | 14:54 |
ashishjain | cloudnull: thanks for your help and time, catch you later | 14:55 |
*** galstrom_zzz is now known as galstrom | 15:06 | |
*** galstrom is now known as galstrom_zzz | 15:10 | |
*** fawadkhaliq has quit IRC | 15:26 | |
*** markvoelker has joined #openstack-ansible | 15:33 | |
*** markvoelker has quit IRC | 15:38 | |
*** arbrandes has quit IRC | 16:43 | |
*** fawadkhaliq has joined #openstack-ansible | 16:57 | |
*** arbrandes has joined #openstack-ansible | 17:02 | |
*** fawadkhaliq has quit IRC | 17:11 | |
*** fawadkhaliq has joined #openstack-ansible | 17:14 | |
*** arbrandes has quit IRC | 17:21 | |
*** cloudtrainme has joined #openstack-ansible | 17:21 | |
*** cloudtrainme has quit IRC | 17:26 | |
*** markvoelker has joined #openstack-ansible | 17:34 | |
ashishjain | cloudnull: U there? | 17:37 |
ashishjain | I am hitting another issue now "sg: [ALERT] 262/230555 (20527) : Starting frontend keystone_service-front: cannot bind socket" | 17:38 |
*** arbrandes has joined #openstack-ansible | 17:38 | |
*** markvoelker has quit IRC | 17:38 | |
ashishjain | I suspect some port is already in use | 17:40 |
cloudnull | Is keystone running on your haproxy host and not in a container ? | 17:43 |
cloudnull | Have you rerun the haproxy role/restarted it ? | 17:44 |
*** arbrandes has quit IRC | 17:44 | |
ashishjain | cloudnull: I restarted the process with the -vvv flag and somehow it worked | 17:44 |
ashishjain | "openstack-ansible haproxy-install.yml -vvv" | 17:45 |
ashishjain | is there a way to test if the run was successful... even before I go to setup-infrastructure.yml? | 17:46 |
cloudnull | You can verify haproxy with haproxy-stat -s /var/run/haproxy.sock | 17:48 |
cloudnull | Run lxc-ls -f | 17:48 |
cloudnull | To see your containers and the assigned IPS | 17:48 |
evrardjp | hello everyone | 17:49 |
ashishjain | lxc-ls -f gives all my containers :) | 17:50 |
ashishjain | but I do not have haproxy-stat installed on my host | 17:50 |
cloudnull | I think that's the command. If you installed with the haproxy role it should be there. | 17:51 |
ashishjain | "haproxy -v HA-Proxy version 1.4.24 2013/06/17" | 17:53 |
ashishjain | haproxy -v gives me the version | 17:53 |
ashishjain | but I indeed do not have haproxy-stat | 17:54 |
ashishjain | or /var/run/haproxy.sock | 17:54 |
ashishjain | and I have not installed with haproxy role | 17:54 |
ashishjain | I used "openstack-ansible haproxy-install.yml -vvv" | 17:54 |
cloudnull | That'll do it. | 17:55 |
ashishjain | so this would have probably used the root user to install haproxy | 17:55 |
cloudnull | Yes. | 17:55 |
ashishjain | okay | 17:55 |
cloudnull | type haproxy then press tab a few times. | 17:55 |
ashishjain | did that; there is no such command as haproxy-stat | 17:56 |
cloudnull | I'm mobile right now. | 17:56 |
ashishjain | aah okay | 17:56 |
cloudnull | I'm not at my computer so I may be saying the wrong command. | 17:58 |
ashishjain | cloudnull:okay got it | 17:59 |
ashishjain | cloudnull: I think I can probably start the infrastructure setup | 18:00 |
ashishjain | btw when I look in /etc/haproxy/conf.d | 18:00 |
ashishjain | I can see all the ceilometer_api, heat, nova, keystone etc. config files. | 18:00 |
ashishjain | so looks like it is setup | 18:01 |
ashishjain | cloudnull: one non-osad question: what irc client do you use that keeps you logged in 24x7 and in sync whether on computer or mobile? | 18:02 |
*** arbrandes has joined #openstack-ansible | 18:05 | |
evrardjp | ashishjain: to see haproxy info you can do | 18:07 |
evrardjp | hatop -s /var/run/haproxy.stat | 18:07 |
evrardjp | you'll see your backends/frontends, and their status | 18:07 |
evrardjp | keep in mind that haproxy working doesn't mean your openstack is fully working ;) it's just the load balancer in front of it | 18:08 |
ashishjain | evrardjp: I get this: insufficient permissions for socket path /var/run/haproxy.stat | 18:08 |
ashishjain | I am logged in as a root | 18:08 |
evrardjp | that's weird | 18:08 |
evrardjp | what are you deploying? kilo ? | 18:08 |
ashishjain | kilo | 18:08 |
evrardjp | could you check /etc/haproxy/haproxy.cfg ? | 18:09 |
evrardjp | you should have this: | 18:09 |
evrardjp | stats socket /var/run/haproxy.stat level admin mode 600 | 18:09 |
evrardjp | level admin is the required part to send administrative commands to your stat socket | 18:10 |
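That line lives in the global section of /etc/haproxy/haproxy.cfg; a rough sketch of the surrounding context follows, where everything except the stats socket line is a typical default rather than something taken from this deployment:

    global
        log /dev/log local0
        daemon
        user haproxy
        group haproxy
        # "level admin" is what lets hatop send administrative commands over the socket
        stats socket /var/run/haproxy.stat level admin mode 600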
ashishjain | there is no haproxy.stat in /var/run | 18:10 |
evrardjp | mmm | 18:10 |
evrardjp | that's not normal ;) | 18:10 |
evrardjp | haproxy not running? | 18:10 |
ashishjain | but then the error is misleading | 18:10 |
evrardjp | ls /var/run/ha* ? | 18:10 |
ashishjain | ls /var/run/ha* gives: ls: cannot access /var/run/ha*: No such file or directory | 18:11 |
ashishjain | I think I need to re run the haproxy-install.yml | 18:11 |
evrardjp | file /var/run/ ? | 18:11 |
evrardjp | just to make sure /var/run exists ;) | 18:12 |
evrardjp | ashishjain: haproxy not running? | 18:12 |
ashishjain | evrardjp: I have restarted the yml | 18:14 |
ashishjain | and /var/run exists | 18:14 |
ashishjain | :) | 18:14 |
evrardjp | ps aux |grep haproxy ? | 18:14 |
evrardjp | you should have a long list of the config files loaded | 18:14 |
ashishjain | evrardjp: yml execution finished | 18:14 |
ashishjain | but haproxy is not started | 18:15 |
ashishjain | ps aux | grep haproxy does not return anything | 18:15 |
ashishjain | ansible01 : ok=14 changed=0 unreachable=0 failed=0 | 18:15 |
ashishjain | this is the result of "openstack-ansible haproxy-install.yml" | 18:16 |
evrardjp | you should check why haproxy is not starting | 18:16 |
ashishjain | http://paste.openstack.org/show/471461/ | 18:16 |
evrardjp | (does it have the IP it tries to bind on?) | 18:17 |
ashishjain | ya, the IP of haproxy is the same as the IP of the host it is being installed on | 18:17 |
ashishjain | 192.168.57.11 | 18:17 |
evrardjp | could you check if everything is fine in your generated service configs? | 18:18 |
evrardjp | in /etc/haproxy/conf.d/ | 18:18 |
evrardjp | but always check your logs first ;) | 18:18 |
evrardjp | I must go for today | 18:19 |
evrardjp | don't hesitate to ping me tomorrow | 18:19 |
ashishjain | evrardjp: since I started using osad I feel devoid of logs | 18:19 |
ashishjain | I do not see logs anywhere | 18:19 |
ashishjain | where are the logs for this | 18:19 |
ashishjain | and /etc/haproxy/conf.d has got all the files | 18:19 |
ashishjain | evrardjp: sure I will ping, thanks | 18:20 |
*** abitha has joined #openstack-ansible | 18:20 | |
ashishjain | but it has been a nightmare ;( using osad | 18:20 |
*** abitha has quit IRC | 18:21 | |
evrardjp | don't hesitate to tell us what you want to improve | 18:23 |
*** fawadkhaliq has quit IRC | 18:23 | |
ashishjain | [ALERT] 262/235358 (27141) : Starting frontend keystone_service-front: cannot bind socket | 18:24 |
ashishjain | this is the error when I try to manually start /etc/init.d/haproxy start | 18:24 |
ashishjain | I suspect it is port 5000 that "keystone_service-front" cannot bind | 18:28 |
ashishjain | but somehow I am not sure | 18:28 |
ashishjain | because that port is absolutely free | 18:28 |
cloudnull | Ashishjain nuke the haproxy configs and rerun the play. | 18:29 |
ashishjain | heh | 18:29 |
cloudnull | Maybe you have a duplicate. | 18:30 |
cloudnull | You could grep through the configs | 18:30 |
ashishjain | cloudnull: How do I nuke it... manually delete it? | 18:30 |
cloudnull | rm -rf /etc/haproxy ; openstack-ansible haproxy-install.yml | 18:31 |
ashishjain | I get the same error again msg: [ALERT] 263/000314 (27812) : Starting frontend keystone_service-front: cannot bind socket | 18:33 |
ashishjain | and that is the reason for haproxy not starting | 18:33 |
cloudnull | Look through the configs. There has to be some duplication somewhere, or ports 5000/35357 are in use. | 18:34 |
ashishjain | cloudnull: bind 192.168.1.1:5000 | 18:35 |
ashishjain | cloudnull: this is the external_lb_vip address I have given 192.168.1.1 | 18:36 |
ashishjain | it only shows up in one file; grep 192.168.1.1 * gives: keystone_service:bind 192.168.1.1:5000 | 18:36 |
ashishjain | keystone_service | 18:36 |
ashishjain | keystone_service file in /etc/haproxy/conf.d | 18:37 |
ashishjain | this ip does not exist...in all the other files in "/etc/haproxy/conf.d" you have got *:<port> | 18:37 |
ashishjain | why is that? | 18:37 |
ashishjain | once I change the entry from 192.168.1.1:5000 to *:5000 I am able to start haproxy | 18:39 |
cloudnull | my haproxy configs look like this | 18:42 |
cloudnull | http://cdn.pasteraw.com/zyyf1owjuzvwq7doo6tc478gbwi82o < internal | 18:42 |
cloudnull | sorry ^ external | 18:42 |
cloudnull | http://cdn.pasteraw.com/373nncha68n2q06mtfamk1fiah3q0lm < internal | 18:42 |
cloudnull | and they're working. does the bind address that you've set for the internal and external address not exist on your host that's running haproxy? | 18:43 |
cloudnull | IE 192.168.1.1 | 18:44 |
ashishjain | no 192.168.1.1 does not exist | 18:44 |
ashishjain | moreover, can you tell me the name of the file which has the content in the link? | 18:45 |
ashishjain | I have got only 2 files in /etc/haproxy/conf.d one is keystone_service and other is keystone_admin | 18:45 |
ashishjain | and I do not have any section called "frontend keystone_internal-front" in any of these files; instead I have got frontend keystone_service-front | 18:46 |
cloudnull | i have http://cdn.pasteraw.com/8obh298thqur9nl3jri0e224p1r1h2y | 18:47 |
ashishjain | which is what you pasted as part of your first link "http://cdn.pasteraw.com/zyyf1owjuzvwq7doo6tc478gbwi82o" | 18:47 |
cloudnull | yes | 18:48 |
ashishjain | I do not have keystone_internal | 18:48 |
ashishjain | http://paste.openstack.org/show/471493/ | 18:48 |
ashishjain | but in my case it is keystone_service which is leading to failure, as 192.168.1.1 does not exist | 18:49 |
ashishjain | in your case does this ip exist "104.130.175.168" | 18:49 |
cloudnull | yes | 18:50 |
cloudnull | it does. | 18:50 |
cloudnull | both the internal and external lb vip addresses need to be on the host running haproxy | 18:50 |
cloudnull | they're the interfaces haproxy will use to route all your traffic | 18:50 |
ashishjain | okay so I am screwed again :0) | 18:51 |
ashishjain | what shall I do to correct this now | 18:51 |
ashishjain | in that case I shall give the host IP to this external_lb_vip | 18:52 |
ashishjain | eth1 in my case | 18:52 |
ashishjain | do I need to rerun all the playbooks? | 18:52 |
ashishjain | cloudnull: this is the network configuration of my host where I got haproxy installed | 18:54 |
ashishjain | http://paste.openstack.org/show/471501/ | 18:54 |
ashishjain | here eth1 is basically how I ssh into this VM, which we can probably call the external IP address | 18:54 |
ashishjain | so does it mean I need to give "external_lb_vip_address: 192.168.56.81" | 18:55 |
cloudnull | yes correct the external vip address setting | 18:58 |
cloudnull | to use 192.168.56.81 | 18:58 |
cloudnull | because thats the "external" network interface for your load balancer. | 18:58 |
cloudnull | then simply run setup-everything.yml | 18:58 |
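Putting the fix together as a sketch, with the addresses from this exchange:

    # /etc/openstack_deploy/openstack_user_config.yml (fragment)
    global_overrides:
      internal_lb_vip_address: 192.168.57.11   # on br-mgmt of the haproxy host
      external_lb_vip_address: 192.168.56.81   # eth1, the address used to reach the VM from outside

Then rerun the plays (setup-everything.yml, or just haproxy-install.yml to start with) so the generated haproxy configs pick up the new bind addresses.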
ashishjain | cloudnull: what I am doing now is run /opt/os-ansible-deployment/playbooks/inventory/dynamic_inventory.py | 18:59 |
ashishjain | this will generate the new openstack_inventory.json for me | 18:59 |
ashishjain | and then I will just have to run openstack-ansible haproxy-install.yml | 19:00 |
ashishjain | I think this should suffice | 19:00 |
ashishjain | cloudnull: looks like that worked | 19:02 |
ashishjain | even the command hatop -s /var/run/haproxy.stat is giving a lot of things now | 19:03 |
ashishjain | :) | 19:03 |
ashishjain | cloudnull: I think I can start infrastructure yml now | 19:06 |
*** sdake has joined #openstack-ansible | 19:06 | |
cloudnull | nice! | 19:10 |
cloudnull | ok. im off today. ashishjain good luck i hope it all works out. | 19:20 |
ashishjain | cloudnull: thanks a lot for your time and help. | 19:20 |
ashishjain | cloudnull: thanks for your wishes I really need it :) | 19:21 |
cloudnull | It'll go. We've been running it in prod for some time. So it's just a matter of getting the setup right for your env. | 19:22 |
cloudnull | I'll be back online tomorrow. | 19:22 |
cloudnull | Take care. | 19:22 |
*** markvoelker has joined #openstack-ansible | 19:35 | |
*** markvoelker has quit IRC | 19:40 | |
prometheanfire | m/win 1 | 19:42 |
*** gparaskevas has quit IRC | 20:43 | |
*** abitha has joined #openstack-ansible | 20:44 | |
*** abitha has quit IRC | 20:46 | |
*** sdake_ has joined #openstack-ansible | 20:57 | |
*** sdake has quit IRC | 21:01 | |
*** KLevenstein has joined #openstack-ansible | 21:05 | |
*** KLevenstein has quit IRC | 21:05 | |
*** subscope has quit IRC | 21:11 | |
*** markvoelker has joined #openstack-ansible | 21:36 | |
*** markvoelker has quit IRC | 21:40 | |
*** sdake_ has quit IRC | 22:00 | |
*** Mudpuppy has quit IRC | 22:29 | |
*** ggillies has joined #openstack-ansible | 22:32 | |
*** Mudpuppy has joined #openstack-ansible | 22:51 | |
*** markvoelker has joined #openstack-ansible | 23:21 | |
*** markvoelker has quit IRC | 23:26 | |
*** openstackgerrit has joined #openstack-ansible | 23:26 | |
*** openstackgerrit has quit IRC | 23:31 | |
*** shoutm has quit IRC | 23:47 | |
*** openstackgerrit has joined #openstack-ansible | 23:47 | |
*** openstackgerrit has quit IRC | 23:48 | |
*** openstackgerrit has joined #openstack-ansible | 23:49 | |
*** arbrandes has quit IRC | 23:55 | |
*** markvoelker has joined #openstack-ansible | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!