*** k_stev has joined #openstack-ansible | 00:17 | |
*** k_stev has quit IRC | 00:17 | |
*** tiagogomes has quit IRC | 00:34 | |
*** markvoelker has joined #openstack-ansible | 00:40 | |
*** tiagogomes has joined #openstack-ansible | 00:47 | |
*** abitha has quit IRC | 00:57 | |
*** kerwin_bai has joined #openstack-ansible | 01:46 | |
*** darrenc is now known as darrenc_afk | 02:21 | |
*** kerwin_bai has quit IRC | 02:22 | |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible: Configure HAProxy SSL frontends with cipher suite https://review.openstack.org/226610 | 02:23 |
*** stevelle_ is now known as stevelle | 02:31 | |
cloudnull | logan2: did you get it worked out? hash failure typically means that there's a broken link within the repo infra somewhere. | 02:41 |
logan2 | nope it is still happening | 02:43 |
logan2 | i tried blowing away the repo containers and rebuilding from scratch | 02:43 |
cloudnull | new deployment ? | 02:45 |
logan2 | broken link tip helps.. here's what I am seeing | 02:45 |
logan2 | http://paste.gentoolinux.info/qavuneduru.coffee | 02:46 |
cloudnull | the index process loops through and creates an index.html file with links to all of the built wheels. in the href yaprt sets the md5 content type for hashing the wheel. | 02:46 |
cloudnull | are you cloning the rpc-openstack mirror by chance ? | 02:47 |
cloudnull | sorry. rpc-repo ? | 02:47 |
logan2 | i believe it is using rpc-repo, not 100% clear on how the whole repo setup works yet | 02:47 |
cloudnull | kilo / master based deployment? | 02:48 |
logan2 | kilo, yes | 02:48 |
logan2 | roles/repo_server/files/openstack-wheel-builder.py: 'https://rpc-repo.rackspace.com/pools', | 02:48 |
*** darrenc_afk is now known as darrenc | 02:49 | |
logan2 | http://paste.gentoolinux.info/bahoxosasu.avrasm | 02:51 |
logan2 | looks like it is creating the link to ansible-lint instead of ansible_lint | 02:51 |
logan2 | there are a bunch of broken links in this 11.1.0 directory and at first glance it looks like a lot of them may result from this - vs _ issue | 02:54 |
logan2 | http://paste.gentoolinux.info/ujibuvoqac.avrasm | 02:55 |
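A quick way to spot the symptom logan2 is describing is to list symlinks whose targets no longer exist. This is only a sketch, assuming the repo web root is the /var/www/repo path mentioned later in the conversation:

    # list broken symlinks under the repo web root
    find /var/www/repo -xtype l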
*** kerwin_bai has joined #openstack-ansible | 02:55 | |
cloudnull | in the mirror the links seem to be fine. http://rpc-repo.rackspace.com/os-releases/11.1.0/ | 02:57 |
cloudnull | which likely means there's an issue with the rsync command https://github.com/openstack/openstack-ansible/blob/kilo/playbooks/repo-clone-mirror.yml#L28 | 02:59 |
cloudnull | or maybe it didn't complete ? | 02:59 |
logan2 | i just deleted /var/www/repo and am running the repo-clone-mirror playbook, guessing that is what builds that dir hopefully | 03:01 |
*** markvoelker has quit IRC | 03:01 | |
logan2 | appears to be as it is now filling up | 03:01 |
cloudnull | let me know how it goes . | 03:01 |
logan2 | links are all good now after rsync completed | 03:03 |
logan2 | thanks! | 03:03 |
* logan2 tries repo-build again | 03:03 | |
* cloudnull trying the same :) | 03:06 | |
logan2 | so for a production deployment is it still recommended to clone rpc-repo? or is there a method to rebuild that structure locally | 03:07 |
cloudnull | in prod, we've been just running repo-build.yml | 03:08 |
cloudnull | first repo-server.yml then repo-build.yml which will recreate that structure locally. | 03:09 |
cloudnull | and will only build the wheels that are needed for the given deployment. | 03:09 |
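A minimal sketch of the local build sequence cloudnull describes, assuming the commands are run from the openstack-ansible playbooks directory:

    # prepare the repo containers, then build only the wheels this deployment needs
    openstack-ansible repo-server.yml
    openstack-ansible repo-build.yml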
logan2 | ahhh ok, so essentially what run-playbooks.sh does. I guess somehow that got messed up earlier when I was trying to get the local git sources working, and that broke all those links. | 03:10 |
cloudnull | it's possible. | 03:11 |
cloudnull | it's also confusing. | 03:12 |
cloudnull | maybe we need to remove that from the meta play | 03:12 |
*** Manojit has joined #openstack-ansible | 03:14 | |
Manojit | Hi, after rebooting the controller I'm getting an error when running openstack commands | 03:15 |
Manojit | ERROR (GatewayTimeout): Gateway Timeout (HTTP 504) | 03:15 |
*** markvoelker has joined #openstack-ansible | 03:16 | |
*** cemmason has joined #openstack-ansible | 03:17 | |
cloudnull | Manojit: I assume your loadbalancer is up? is it able to route traffic to the other infra nodes? | 03:17 |
cloudnull | also check your galera cluster to make sure it's up and that wsrep shows nodes in it. | 03:18 |
Manojit | The setup is on AIO | 03:18 |
cloudnull | ah | 03:18 |
Manojit | and haproxy is running | 03:18 |
cloudnull | you need to rerun the galera plays to rebootstrap the node. | 03:18 |
Manojit | I have kept all single container | 03:19 |
*** skamithi13 has joined #openstack-ansible | 03:19 | |
cloudnull | on restart it won't bring back the galera cluster automatically after a catastrophic failure. this is to prevent data loss. | 03:19 |
cloudnull | openstack-ansible galera-install.yml will rebootstrap the galera node(s) and you should be good to go. | 03:20 |
cloudnull | if it presents you with a failure, it should also provide the variable required to make it go. | 03:21 |
Manojit | openstack-ansible galera-install.yml is failing | 03:21 |
logan2 | well cloudnull thanks for the help, the indexes generated this time so I am going to call it a night and keep hacking on it tomorrow. hopefully it is deploying from my local git branches now. :) | 03:21 |
Manojit | openstack-ansible galera-install.yml | 03:21 |
cloudnull | IE you might need to run: openstack-ansible galera-install.yml -e galera_ignore_cluster_state=true | 03:21 |
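Pulling the recovery steps above together, a sketch of the rebootstrap sequence; the extra variable is only needed if the play stops on the cluster-state check:

    # re-bootstrap galera after the host reboot
    openstack-ansible galera-install.yml
    # if the play fails on the cluster state check, force it
    openstack-ansible galera-install.yml -e galera_ignore_cluster_state=true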
cloudnull | logan2: have a good one. | 03:21 |
cloudnull | nice! | 03:22 |
cloudnull | logan2: BTW did you get it to go with SSH ? | 03:22 |
cloudnull | or is it using something else ? | 03:22 |
Manojit | Yes it fixed the issue.. | 03:23 |
logan2 | no i ended up cloning from github to a local http mirror and putting it behind htaccess for now. i even tried using authenticated https against github but it was failing because it seems like the extra @ was messing up yaprt | 03:23 |
Manojit | Thanks cloudnull :) | 03:23 |
Manojit | I have another issue with provisioning VM.. | 03:24 |
logan2 | but if that were fixed I think I could clone directly from github with authenticated https :) | 03:24 |
cloudnull | logan2: I'll ping you later, maybe we can integrate those features in the yaprt code base so that it can take care of those things for you. | 03:25 |
cloudnull | for now, have a good night :) | 03:25 |
cloudnull | Manojit: whats up ? | 03:25 |
logan2 | thanks that would be great! ttyl | 03:25 |
Manojit | Good night.. | 03:25 |
Manojit | Unable to mount image /var/lib/nova/instances/38262ad8-2c67-4f37-870d-0f9d507dd1ea/disk with error libguestfs installed but not usable (/usr/bin/supermin-helper exited with error status 1 | 03:26 |
Manojit | I did "update-guestfs-appliance" and restarted nova-compute service | 03:27 |
Manojit | as root | 03:27 |
Manojit | Still the issue remains.. | 03:28 |
Manojit | So it rebooted the box .. | 03:28 |
Manojit | Let me try again now | 03:28 |
cloudnull | sorry. what are you wanting to do ? | 03:29 |
cloudnull | i think i missed part of that | 03:29 |
Manojit | The issue is that VM provisioning is failing with an error | 03:30 |
Manojit | Unable to mount image /var/lib/nova/instances/38262ad8-2c67-4f37-870d-0f9d507dd1ea/disk with error libguestfs installed but not usable (/usr/bin/supermin-helper exited with error status 1 | 03:30 |
cloudnull | ive not seen that | 03:31 |
Manojit | Seems it is a bug.. | 03:31 |
Manojit | I saw some bug reported.. | 03:32 |
*** fawadkhaliq has joined #openstack-ansible | 03:32 | |
Manojit | https://bugs.launchpad.net/fuel/+bug/1467579 | 03:33 |
openstack | Launchpad bug 1467579 in Fuel for OpenStack "libguestfs doesn't work on Ubuntu without root permissions" [Medium,Confirmed] - Assigned to Alexei Sheplyakov (asheplyakov) | 03:33 |
Manojit | I tried to follow the workaround but the issue still persists | 03:33 |
cloudnull | have you chmod 0644 /boot/vmlinuz* ? | 03:34 |
cloudnull | are the perms still 0600 ? | 03:34 |
Manojit | -rw------- 1 root root 5776416 May 2 2014 /boot/vmlinuz-3.13.0-24-generic -rw------- 1 root root 5821152 Aug 14 18:07 /boot/vmlinuz-3.13.0-63-generic | 03:35 |
Manojit | let me do 644 | 03:35 |
cloudnull | in reading https://bugs.launchpad.net/devstack/+bug/1413142 it looks like an issue for Ubuntu + libguestfs . | 03:35 |
openstack | Launchpad bug 1413142 in OpenStack Compute (nova) "bad configuration for libguestfs" [Medium,Confirmed] | 03:35 |
coolj | Manojit: try chown -R nova:nova /var/lib/nova | 03:35 |
cloudnull | coolj: for prez! | 03:35 |
cloudnull | :) | 03:35 |
Manojit | done chmod 0644 /boot/vmlinuz* | 03:37 |
cloudnull | seems like this was a decision made by ubuntu and one they don't want to undo: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/759725 | 03:38 |
openstack | Launchpad bug 759725 in hobbit-plugins (Ubuntu) "The kernel is no longer readable by non-root users" [Undecided,In progress] - Assigned to Axel Beckert (xtaran) | 03:38 |
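A sketch combining the two workarounds suggested above; which of them is actually needed depends on the environment, and the service name assumes a standard Ubuntu nova-compute install:

    # make the kernel images readable so supermin/libguestfs can build its appliance
    chmod 0644 /boot/vmlinuz*
    # make sure nova owns its instances directory
    chown -R nova:nova /var/lib/nova
    # rebuild the appliance and restart the compute service afterwards
    update-guestfs-appliance
    service nova-compute restart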
Manojit | let me try provisioning again | 03:38 |
openstackgerrit | Miguel Alejandro Cantu proposed openstack/openstack-ansible: Add OpenID Connect RP Apache Module[WIP] https://review.openstack.org/226617 | 03:41 |
Manojit | some progress.. | 03:45 |
Manojit | but new error | 03:45 |
Manojit | 2015-09-22 22:40:18.185 6861 TRACE nova.compute.manager [instance: e9ee0175-79c0-4683-8055-02d63cc86205] _("Unexpected vif_type=%s") % vif_type) 2015-09-22 22:40:18.185 6861 TRACE nova.compute.manager [instance: e9ee0175-79c0-4683-8055-02d63cc86205] NovaException: Unexpected vif_type=binding_failed | 03:45 |
cloudnull | vif binding issues are generally a problem with the user_config file. | 03:45 |
cloudnull | typically we've seen lots of folks running with the flat network entry | 03:46 |
cloudnull | and not setting up the interface in host_bind_override | 03:46 |
cloudnull | that said, if you dont need or want that network type i'd remove it | 03:46 |
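For reference, a sketch of a flat provider network entry in openstack_user_config with host_bind_override set; eth12 is only the example interface name from the config Manojit pastes further down, and has to be a real interface on the host:

      - network:
          container_bridge: "br-vlan"
          container_type: "veth"
          container_interface: "eth12"
          # must name a physical interface that exists on the host
          host_bind_override: "eth12"
          type: "flat"
          net_name: "flat"
          group_binds:
            - neutron_linuxbridge_agent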
cloudnull | im off to sleep. take care all. | 03:47 |
Manojit | pls take rest cloudnull.. | 03:48 |
Manojit | thanks for ur help | 03:49 |
Manojit | i will continue with my timezone people | 03:49 |
Manojit | My user_config file looks like the default | 03:51 |
Manojit | cidr_networks: container: 172.29.236.0/22 tunnel: 172.29.240.0/22 storage: 172.29.244.0/22 used_ips: - "172.29.236.1,172.29.236.50" - "172.29.240.1,172.29.240.50" - "172.29.244.1,172.29.244.50" - "172.29.248.1,172.29.248.50" global_overrides: internal_lb_vip_address: "{{ external_lb_vip_address }}" external_lb_vip_address: 75.126.87.231 tunnel_bridge: "br-vxlan" management_bridge: "br-mgmt" provi | 03:51 |
Manojit | - network: container_bridge: "br-vlan" container_type: "veth" container_interface: "eth12" host_bind_override: "eth12" type: "flat" net_name: "flat" group_binds: - neutron_linuxbridge_agent | 03:52 |
Manojit | where do I set host_bind_override? | 03:53 |
*** tlian2 has joined #openstack-ansible | 04:01 | |
*** tlian has quit IRC | 04:02 | |
*** kerwin_bai1 has joined #openstack-ansible | 04:13 | |
*** kerwin_bai has quit IRC | 04:14 | |
*** kerwin_bai1 is now known as kerwin_bai | 04:14 | |
Manojit | I have the network set up as VXLAN for both the public and private lan in neutron | 04:16 |
Manojit | i have two bridges, one flat and one vxlan | 04:17 |
*** kerwin_bai has quit IRC | 04:18 | |
*** skamithi13 has quit IRC | 04:18 | |
*** skamithi13 has joined #openstack-ansible | 04:18 | |
Manojit | Hi Team.. | 04:18 |
Manojit | getting error NovaException: Unexpected vif_type=binding_failed while vm provisioning.. | 04:19 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible: Allow protocol to be set per endpoint-type https://review.openstack.org/226621 | 04:32 |
*** fawadk has joined #openstack-ansible | 04:33 | |
*** fawadkhaliq has quit IRC | 04:35 | |
Manojit | Has anyone experienced the "NovaException: Unexpected vif_type=binding_failed" error? | 04:47 |
*** elo has joined #openstack-ansible | 05:10 | |
*** elo has quit IRC | 05:23 | |
*** Manojit has quit IRC | 05:46 | |
*** kerwin_bai has joined #openstack-ansible | 05:56 | |
*** elo has joined #openstack-ansible | 05:57 | |
*** elo has quit IRC | 06:07 | |
*** fawadk has quit IRC | 06:14 | |
*** kerwin_bai has quit IRC | 06:15 | |
*** fawadkhaliq has joined #openstack-ansible | 06:17 | |
*** cloudnull has quit IRC | 06:18 | |
*** b3rnard0 has quit IRC | 06:18 | |
*** b3rnard0 has joined #openstack-ansible | 06:18 | |
*** cloudnull has joined #openstack-ansible | 06:21 | |
*** cemmason2 has joined #openstack-ansible | 06:22 | |
*** cemmason has quit IRC | 06:24 | |
*** fawadk has joined #openstack-ansible | 06:32 | |
*** fawadkhaliq has quit IRC | 06:33 | |
*** tlian2 has quit IRC | 06:37 | |
*** skamithi13 has quit IRC | 06:38 | |
*** kerwin_bai has joined #openstack-ansible | 06:40 | |
*** kerwin_bai1 has joined #openstack-ansible | 06:45 | |
*** kerwin_bai has quit IRC | 06:47 | |
*** kerwin_bai1 is now known as kerwin_bai | 06:47 | |
*** fawadkhaliq has joined #openstack-ansible | 06:51 | |
*** kerwin_bai has quit IRC | 06:51 | |
*** fawadk has quit IRC | 06:52 | |
*** fawadk has joined #openstack-ansible | 06:58 | |
*** neilus has joined #openstack-ansible | 06:59 | |
*** fawadkhaliq has quit IRC | 07:00 | |
*** fawadkhaliq has joined #openstack-ansible | 07:09 | |
*** fawadk has quit IRC | 07:10 | |
*** markvoelker has quit IRC | 07:15 | |
*** javeriak has joined #openstack-ansible | 07:18 | |
*** elo has joined #openstack-ansible | 07:31 | |
*** skamithi13 has joined #openstack-ansible | 07:38 | |
*** fawadkhaliq has quit IRC | 07:49 | |
*** fawadkhaliq has joined #openstack-ansible | 07:50 | |
*** cemmason2 has quit IRC | 07:57 | |
*** javeriak has quit IRC | 08:13 | |
*** markvoelker has joined #openstack-ansible | 08:15 | |
*** kukacz has joined #openstack-ansible | 08:19 | |
*** kukacz has quit IRC | 08:20 | |
*** markvoelker has quit IRC | 08:20 | |
*** kukacz|75601 has joined #openstack-ansible | 08:21 | |
*** kukacz|75601 has quit IRC | 08:21 | |
*** cemmason1 has joined #openstack-ansible | 08:22 | |
*** kukacz has joined #openstack-ansible | 08:28 | |
*** vdo has joined #openstack-ansible | 08:34 | |
*** gparaskevas has joined #openstack-ansible | 08:39 | |
*** elo has quit IRC | 08:42 | |
*** neilus has quit IRC | 08:47 | |
*** neilus has joined #openstack-ansible | 08:48 | |
*** neilus has quit IRC | 08:51 | |
*** neilus has joined #openstack-ansible | 08:54 | |
*** skamithi14 has joined #openstack-ansible | 09:29 | |
*** skamithi13 has quit IRC | 09:33 | |
*** cemmason2 has joined #openstack-ansible | 09:33 | |
*** gparaskevas has quit IRC | 09:34 | |
*** cemmason1 has quit IRC | 09:35 | |
tiagogomes | hi, is there a way to disable load balancing for the networking hosts? | 09:55 |
tiagogomes | I would like that the network hosts behaved more like active/passive | 09:56 |
mattt | tiagogomes: not fully sure i understand the question | 09:56 |
mattt | tiagogomes: are you talking about the neutron-agents container ? | 09:56 |
tiagogomes | yes, the agents container (and maybe the neutron server container) as well | 09:58 |
mattt | tiagogomes: it makes sense to LB neutron-server | 09:59 |
mattt | tiagogomes: the services in neutron-agents container aren't behind LB last i recall | 09:59 |
tiagogomes | under my physical network setup, I think that having two active neutron-agents is not going to perform well | 10:00 |
odyssey4me | tiagogomes the way it works is that networks and routers are scheduled to one or the other agent, not both | 10:00 |
mattt | tiagogomes: that's fine, you can just run one neutron-agents container on the desired host | 10:00 |
odyssey4me | the stuff is only rescheduled to the other agent if one of them goes down | 10:01 |
tiagogomes | yes, but "one or the other" is problematic for me. I would like it to always be network host A, unless host A is down | 10:03 |
mattt | odyssey4me: why remove the galera note in https://review.openstack.org/#/c/222831/1/scripts/run-aio-build.sh ? | 10:15 |
mattt | if those details aren't correct can we correct them? | 10:15 |
*** gparaskevas has joined #openstack-ansible | 10:16 | |
*** markvoelker has joined #openstack-ansible | 10:17 | |
odyssey4me | tiagogomes as I recall that is how it gets done anyway as the neutron scheduler isn't too smart - unless they've updated the scheduler to be smarter... there may be a scheduler filter option you can use there | 10:17 |
odyssey4me | mattt heh, I thought the old note was still in there - it looks like the existing note is ok | 10:18 |
mattt | odyssey4me: i'm not actually sure if the details are correct, but having something correct written to MOTD would be helpful! | 10:18 |
mattt | (there is a playbook that will rebootstrap right?) | 10:19 |
odyssey4me | that was a quick two minute review to try and correct the stuff above it | 10:19 |
mattt | or rather a task | 10:19 |
odyssey4me | well, I think the right way is to shut all the containers down, then bring them up in the right order - or something like that | 10:19 |
odyssey4me | anyway, I'll remove that edit | 10:19 |
mattt | ok cool | 10:20 |
*** ashishjain has joined #openstack-ansible | 10:20 | |
ashishjain | hello | 10:20 |
mattt | ashishjain: howdy | 10:20 |
ashishjain | mattt: good...howsz u? | 10:21 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Update the AIO build convenience script https://review.openstack.org/222831 | 10:21 |
*** markvoelker has quit IRC | 10:21 | |
mancdaz | odyssey4me mattt that note makes no sense | 10:21 |
odyssey4me | mancdaz it's git-harry's fault ;) | 10:22 |
mancdaz | to do this execute: "" | 10:22 |
mattt | ashishjain: not bad, how you doing today? | 10:22 |
mattt | mancdaz: yeah that isn't right | 10:22 |
ashishjain | mattt: Surrounded with issues :) | 10:23 |
mattt | ashishjain: well that's not good! how can we help? | 10:23 |
ashishjain | mattt: I have got 3 hosts set up for osad - 2 hosts have all the infra components and one host is the log and compute host. | 10:23 |
mattt | k | 10:24 |
ashishjain | my compute host is unable to ping any container running on one of the infra hosts and vice versa. However, communication between the other infra host and the compute host is working fine | 10:25 |
ashishjain | mattt: any clue what may be wrong? | 10:25 |
mancdaz | ashishjain a related note on that setup - a 2 node cluster for rabbit/galera is dangerous because if you lose one, the cluster loses quorum and will fail | 10:25 |
mancdaz | so you may as well have only one | 10:26 |
ashishjain | macdaz: you got it right...since yesterday I have been facing issues with galera and rabbitmq | 10:26 |
mancdaz | quorum based clusters work best in odd multiples, so 3 would be a minimum for HA | 10:26 |
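The quorum rule mancdaz is referring to, worked through for small cluster sizes (the usual majority requirement for galera and rabbit clustering):

    quorum = floor(n/2) + 1
    n = 2  ->  quorum = 2  (losing one node stalls the cluster)
    n = 3  ->  quorum = 2  (the cluster survives the loss of one node)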
ashishjain | macdaz: how can I get rid of one for time being? | 10:26 |
ashishjain | sorry for the typo mancdaz | 10:27 |
git-harry | mancdaz: odyssey4me yeah, execute nothing and read the manual | 10:27 |
git-harry | makes perfect sense | 10:27 |
gparaskevas | ashishjain: on topic: are your compute hosts and infra hosts VMs on the same hypervisor, VMs on different hypervisors, or physical machines? | 10:27 |
mancdaz | ashishjain you'd need to manually remove the galera container from the galera cluster, and same for rabbit | 10:27 |
mancdaz | the api stuff should be fine | 10:27 |
mattt | i think you're ok w/ 2 rabbit nodes? galera is def. a problem tho | 10:28 |
ashishjain | gparaskevas: My setup is on Vbox vms, so the compute host and infra hosts are all vms using the same hypervisor. | 10:29 |
*** gparaskevas_ has joined #openstack-ansible | 10:30 | |
ashishjain | mattt mancdaz this is just a test setup for now... but the bigger plan is to have osad on industry-grade servers. Would you recommend the same (one rmq and one galera) even for that? | 10:30 |
mancdaz | ashishjain no, at least 3 | 10:30 |
ashishjain | gparaskevas_: My setup is on Vbox vms, so the compute host and infra hosts are all vms using the same hypervisor. | 10:30 |
mattt | ashishjain: minimum 3 for sure | 10:30 |
ashishjain | mancdaz mattt why 3 ? | 10:31 |
ashishjain | why min 3? | 10:31 |
mattt | ashishjain: quorum | 10:31 |
mancdaz | ashishjain because of the way quorum based clustering protocols work | 10:31 |
ashishjain | mattt okay got it | 10:31 |
*** gparaskevas has quit IRC | 10:31 | |
mattt | ashishjain: sorry not sure about your networking issue, sounds like it could be one of many things :( | 10:32 |
ashishjain | mattt: thanks for this will make sure to have 3 nodes min for each | 10:33 |
odyssey4me | ashishjain not three nodes min for each - just three hosts to run the controller containers on | 10:33 |
odyssey4me | ashishjain if your compute vm cannot contact your containers, then there clearly is a problem in the way the networking is setup for those hosts or the virtual environment you're using | 10:34 |
mancdaz | ashishjain for your test environment, you could build an all-in-one | 10:34 |
ashishjain | odyssey4me: got it | 10:34 |
odyssey4me | make sure you don't have something like mac spoof protection in the hypervisor which prevents networking comms from any mac other than the NIC of the vm | 10:35 |
*** javeriak has joined #openstack-ansible | 10:35 | |
odyssey4me | can the compute host talk to its logging container? | 10:35 |
ashishjain | odyssey4me: till yesterday everything was fine. since I was not able to spin up an instance ... I just rebooted all the vm's, and since then all these issues have popped up. initially my nova conductor stopped talking to galera, and since today host communication has also stopped b/w one infra host and the compute host | 10:35 |
*** willemgf has joined #openstack-ansible | 10:35 | |
mancdaz | ashishjain is mariadb/mysql actually running on either of the 2 hosts? | 10:36 |
*** skamithi14 has quit IRC | 10:42 | |
*** skamithi13 has joined #openstack-ansible | 10:42 | |
gparaskevas_ | if you have vlans there may be a problem with that i guess... | 10:46 |
ashishjain | odyssey4me mancdaz sorry got a call | 10:46 |
gparaskevas_ | or after the reboot something is not up | 10:47 |
ashishjain | odyssey4me : the compute host can talk to the logging container | 10:47 |
gparaskevas_ | galera doesn't come up automatically btw | 10:47 |
ashishjain | the compute host is also able to ping one of the rabbitmq hosts but not the other one | 10:47 |
gparaskevas_ | and containers take about 10 minutes to come up after reboot | 10:47 |
ashishjain | mancdaz: yes mariadb is running on 2 different hosts (vm) | 10:48 |
ashishjain | gparaskevas: the problem now is that lxc's on one vm are not reachable from the other vm (compute) | 10:49 |
ashishjain | so as mattt mancdaz have said, I need to first get rid of the galera and rmq cluster | 10:50 |
ashishjain | as I am using 2 nodes | 10:50 |
mancdaz | ashishjain no, if both are up things would work | 10:50 |
mancdaz | 2 nodes is risky in case one goes down | 10:50 |
mancdaz | ashishjain if you say mariadb is running on both nodes, that should not be an issue | 10:51 |
mancdaz | if you're not able to ping one lxc container on one node, from a container on another node, you've got networking issues that are outside the osa deployment | 10:51 |
ashishjain | mancdaz looks like rabbitmq is up and running fine on both the nodes, but mariadb is only up on one node | 10:53 |
ashishjain | mancdaz trying to start it but it seems to be hung | 10:53 |
ashishjain | mancdaz can you please suggest how I can remove one mariadb instance | 10:54 |
mancdaz | ashishjain I would stop mariadb on the other node first | 10:56 |
ashishjain | okay | 10:57 |
ashishjain | done | 10:57 |
mancdaz | then start it back up with 'service mysql start --wsrep-new-cluster' | 10:58 |
mancdaz | then once it's up, start the other one with 'service mysql start' - it should join the other node in the cluster | 10:58 |
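The recovery sequence mancdaz outlines, as a sketch (Ubuntu service names assumed, run inside the galera containers):

    # on the node that should seed the cluster: stop, then bootstrap a new cluster
    service mysql stop
    service mysql start --wsrep-new-cluster
    # on the other node: start normally so it rejoins the cluster
    service mysql start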
mancdaz | ashishjain it may be quicker to just perform a fresh deployment, since we don't know the state of any of the rest of the components and it might take longer to work through those, than it would just to deploy from scratch | 11:03 |
mancdaz | given this is just a testing environment | 11:03 |
ashishjain | mancdaz yes you seem to be correct ... the second mysql instance is not coming up :( | 11:04 |
ashishjain | mancdaz okay I shall try a new deployment | 11:05 |
mancdaz | ashishjain probably easier | 11:05 |
mancdaz | ashishjain maybe run an all-in-one? | 11:05 |
mancdaz | depends what you're testing | 11:05 |
ashishjain | mancdaz regarding the download from rackspace, can I re-use the cache I have downloaded | 11:05 |
ashishjain | mancdaz I want my test environment to be as close to the actual environment as possible, and hence all-in-one will not suit my needs | 11:06 |
ashishjain | I will have to redo the multi node one | 11:06 |
mancdaz | ashishjain then I'd suggest 3 infra nodes | 11:06 |
mancdaz | plus computes/storage | 11:06 |
ashishjain | mancdaz okay I got a laptop with 8 gb ram | 11:06 |
ashishjain | and ubuntu on it | 11:06 |
mancdaz | ashishjain hmm | 11:06 |
ashishjain | can I still have 3 nodes? | 11:07 |
mancdaz | ashishjain it's going to be a bit of a squeeze trying to get 3 infra nodes, plus compute, plus then spinning up some instances | 11:07 |
ashishjain | can I still have 3 nodes for infra and at least one for compute ... what could be the ideal RAM and CPU config for these 4 VMs? | 11:07 |
ashishjain | mancdaz spinning up instances is fine, I will live with cirros | 11:08 |
mancdaz | ashishjain sure, but it takes memory too | 11:08 |
ashishjain | yesterday I was able to use glance, create networks, use horizon etc. trying to spin up an instance was when all the issues started, and then trying to fix it brought my setup to this state :) | 11:09 |
mancdaz | ashishjain fyi the aio actually spins up 3 galera containers, and 3 rabbit containers, in a single host | 11:09 |
ashishjain | so ideally I have eaten half the cake | 11:09 |
ashishjain | what's aio? | 11:09 |
mancdaz | all in one | 11:09 |
ashishjain | ok | 11:09 |
mancdaz | it's a single host, but because everything is container based, you can deploy multiple containers of the same type on a single host | 11:10 |
mancdaz | so you get a 3 node galera cluster, a 3 node rabbit cluster, but only use a single vm | 11:10 |
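A sketch of standing up the AIO mancdaz describes, using the convenience script referenced elsewhere in this log; the checkout path is only an example and the exact steps may differ between branches:

    # example checkout location; any path works
    git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    # convenience script referenced earlier (scripts/run-aio-build.sh); review it before running
    ./scripts/run-aio-build.sh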
ashishjain | mancdaz that's good, but more important than multiple containers, I think setting up the host networking is the part which seems slightly complex. | 11:11 |
ashishjain | I have used br-mgmt, br-vxlan and br-vlan | 11:11 |
ashishjain | and I want to make sure this config is correct | 11:11 |
mancdaz | ashishjain right, but you're dealing with vbox networking, which is not representative of real world networking | 11:11 |
mancdaz | so you could spend a day dealing with something that's specific to vbox | 11:11 |
ashishjain | Ya I do agree, but you know the servers on which I plan to finally deploy openstack have got 2 nics | 11:12 |
ashishjain | out of the 2, one would go to the internet | 11:12 |
ashishjain | and the other would be used to connect with another server with the same config and 2 nic cards | 11:13 |
ashishjain | so ideally I will be using only one nic which is the same as my laptop | 11:13 |
ashishjain | currently I am using vbox | 11:13 |
ashishjain | but on those servers I plan to use libvirt(kvm) to spin up the vm's and libvirt bridges to create the 4 network interfaces | 11:14 |
ashishjain | here I am using vboxnet0,1,2,3 etc | 11:14 |
ashishjain | so you see this setup and the one on the servers will be almost similar | 11:14 |
mancdaz | ashishjain even putting all infra containers into a single vm, and maybe having one or 2 compute nodes, you're still going to be testing the networking setup | 11:14 |
ashishjain | ahhh mancdaz .. you are correct :D | 11:15 |
ashishjain | all right I am all set to use aio | 11:15 |
ashishjain | Is it possible to reuse the cache .... so that there are no downloads from the internet? | 11:15 |
ashishjain | just to speed up the complete process? | 11:16 |
mancdaz | ashishjain given that the cache is inside the current deploy, I think not | 11:16 |
ashishjain | okay mancdaz mattt yesterday pointed out to https://github.com/openstack/openstack-ansible/blob/master/playbooks/repo-install.yml | 11:17 |
mancdaz | odyssey4me how long does the gate usually take? | 11:17 |
mancdaz | ashishjain yes that's a play that builds local repo servers, but they get built in containers in the deployment | 11:17 |
mancdaz | so if you destroy those vms from the last deployment, that gets lost | 11:17 |
ashishjain | I think a better approach will be to have the kilo cache downloaded permanently | 11:17 |
*** markvoelker has joined #openstack-ansible | 11:17 | |
ashishjain | mancdaz how can I replicate this -> http://rpc-repo.rackspace.com/ | 11:18 |
ashishjain | permanently for my use? | 11:18 |
ashishjain | I think this is where all the cache is downloaded from? | 11:18 |
ashishjain | can this be integrated with something like nexus? | 11:19 |
mancdaz | ashishjain you can run those plays and keep the containers around somewhere outside the deployment. then you need to point at it when you run a deployment | 11:19 |
ashishjain | okay but no way to rsync or replicate this http://rpc-repo.rackspace.com/? | 11:20 |
ashishjain | in my local system? | 11:20 |
mancdaz | ashishjain sure, you could just mirror it | 11:20 |
ashishjain | mancdaz: what kind of repo is this -> http://rpc-repo.rackspace.com/?? | 11:21 |
mancdaz | ashishjain it's just a set of files really | 11:22 |
*** markvoelker has quit IRC | 11:22 | |
mancdaz | it's not really a structured repo as such | 11:22 |
ashishjain | mancdaz: how do I mirror it then? | 11:22 |
ashishjain | rsync? | 11:23 |
mancdaz | ashishjain sure | 11:23 |
ashishjain | okay wget may also work I guess? | 11:24 |
mancdaz | ashishjain yeah there are a ton of tools to do that :) | 11:24 |
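One possible sketch, using wget's recursive mirror mode against the release directory mentioned earlier rather than the whole site (rsync would need the server to expose an rsync module, which isn't confirmed here):

    # mirror only the 11.1.0 release tree into ./rpc-repo.rackspace.com/
    wget --mirror --no-parent http://rpc-repo.rackspace.com/os-releases/11.1.0/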
ashishjain | okay cool ... thanks I will try to get the complete content ... this is kilo stuff right? | 11:25 |
*** javeriak has quit IRC | 11:33 | |
mancdaz | ashishjain yeah | 11:41 |
mancdaz | and juno | 11:41 |
mancdaz | and icehouse | 11:41 |
mancdaz | it's all there | 11:41 |
ashishjain | mancdaz: aah nice! thanks a lot for your time and help | 11:45 |
mancdaz | ashishjain np | 11:45 |
*** cristicalin has joined #openstack-ansible | 11:49 | |
odyssey4me | mancdaz sorry - was at the consulate collecting my visa - back on the train now | 11:54 |
mancdaz | odyssey4me np, seems like a normal gate takes around 59 mins | 11:54 |
odyssey4me | mancdaz the gate check completes at around 60-70 mins normally, depending on whether it runs on rax or hp nodepool instances | 11:55 |
mancdaz | odyssey4me I'm just comparing bad/good gate to see where time is being lost because there's no obvious failures that I can see | 11:55 |
mancdaz | odyssey4me k | 11:55 |
odyssey4me | mancdaz is that with regards to https://review.openstack.org/225367 ? | 11:56 |
mancdaz | odyssey4me right | 11:56 |
*** skamithi14 has joined #openstack-ansible | 11:58 | |
odyssey4me | ashishjain I would recommend against mirroring the whole of rpc-repo. If you use the repo-build play (instead of the repo-sync play) then you will get a local mirror of only the python files you need. You can then also implement your own copy of the image file that's downloaded for the container base, and also implement your own apt mirror. You can then override the default URL's with your own in user_variables so that your own mirror | 11:58 |
odyssey4me | gets used. | 11:58 |
odyssey4me | We don't have roles/plays for building an apt mirror at this point, and I'm working on testing a replacement for the container base image to rather build it locally in https://review.openstack.org/225264 - but that is very, very early and not working just yet. | 11:59 |
*** skamithi13 has quit IRC | 11:59 | |
odyssey4me | rpc-repo as it stands has a lot of historical stuff which a modern deployment doesn't need - it contains stuff that goes back to Icehouse. :) | 11:59 |
odyssey4me | mancdaz yeah, I get that we could be more surgical and would like to see that - but I don't get why removing all that stuff ends up with a longer than normal build time. It seems odd. | 12:00 |
mancdaz | odyssey4me right, that's what I'm looking at | 12:01 |
openstackgerrit | Zhao Lei proposed openstack/openstack-ansible: Remove quotes from subshell call in bash script https://review.openstack.org/226714 | 12:04 |
openstackgerrit | Zhao Lei proposed openstack/openstack-ansible: Use pure variable name in $(()) statement https://review.openstack.org/226715 | 12:05 |
odyssey4me | mattt is https://review.openstack.org/226325 something you'd like to see backported to kilo? | 12:07 |
openstackgerrit | Matt Thompson proposed openstack/openstack-ansible: Allow tempest to deploy when no heat in environment https://review.openstack.org/226727 | 12:12 |
mattt | it's pretty trivial, but why not :) | 12:12 |
evrardjp | who doesn't deploy heat these days anyway ;) | 12:13 |
*** kukacz has quit IRC | 12:16 | |
openstackgerrit | Christopher H. Laco proposed openstack/openstack-ansible: Fix for keystone LDAP pkg missing https://review.openstack.org/226740 | 12:20 |
*** markvoelker has joined #openstack-ansible | 12:20 | |
*** fawadkhaliq has quit IRC | 12:21 | |
*** fawadkhaliq has joined #openstack-ansible | 12:22 | |
mancdaz | odyssey4me aside from making the runs take longer, the arp cache flush fix actually doesn't cause breakage | 12:25 |
*** woodard has joined #openstack-ansible | 12:26 | |
odyssey4me | mancdaz ok, that's good - but I would have thought that removing those bits should make it go faster not slower... so what gives? | 12:28 |
mancdaz | odyssey4me dunno | 12:28 |
mancdaz | odyssey4me just everything seems to take longer :/ | 12:28 |
evrardjp | the arp cache flush isn't bad to run the first time (but it's useless to run it that many times), so I agree with the idea of https://review.openstack.org/#/c/225367/ | 12:29 |
odyssey4me | over 30 minutes longer :/ | 12:29 |
mancdaz | os-neutron-install.yml 526 seconds, versus 137 seconds in a 'good' gate | 12:29 |
mancdaz | but it completes just fine | 12:29 |
evrardjp | couldn't we do that once in a separate playbook? | 12:29 |
odyssey4me | evrardjp agreed, but why does not flushing the cache result in such a massive increase in the time taken? | 12:29 |
evrardjp | interesting | 12:30 |
odyssey4me | evrardjp essentially what happens now is that every time the container config changes the container is restarted, and that flushes the cache | 12:30 |
*** cemmason2 has quit IRC | 12:30 | |
evrardjp | that's logical | 12:30 |
evrardjp | I mean, that makes sense | 12:31 |
odyssey4me | oh wow, I see that we flush the cache regardless - on every run | 12:31 |
odyssey4me | so we should perhaps make it conditional instead - flush the cache if the container config changed | 12:31 |
mancdaz | odyssey4me we do it all over the place | 12:31 |
evrardjp | that's what I meant | 12:31 |
mancdaz | and we don't need to flush the cache just because we restarted a container | 12:31 |
odyssey4me | mancdaz we only do it after a container config change | 12:31 |
mancdaz | odyssey4me I mean in each playbook | 12:32 |
mancdaz | odyssey4me point being a full cache flush is not needed | 12:32 |
odyssey4me | mancdaz yes, that was necessary after splitting the container config changes out from one place into the multiple playbooks, to cut down the downtime during an upgrade | 12:32 |
odyssey4me | and yes I agree, a more surgical approach would be far better | 12:33 |
mancdaz | odyssey4me mostly we don't ever need to do that | 12:33 |
odyssey4me | it baffles me why not flushing the cache makes it take so much longer... | 12:33 |
mancdaz | regardless, why it takes *longer* is weird | 12:33 |
mancdaz | yes | 12:34 |
odyssey4me | perhaps we should try an alternative of flushing the cache for just the container that was restarted | 12:34 |
odyssey4me | it might be that the client connections are still open and aren't properly closed when the container restarts | 12:34 |
mancdaz | odyssey4me that doesn't solve the problem we're trying to solve | 12:34 |
odyssey4me | so flushing all connections relating to the container might be better? | 12:34 |
mancdaz | odyssey4me we shouldn't ever need to do an entire arp cache flush anywhere | 12:35 |
odyssey4me | I dunno - I know very little about this level of networking, so I'm just throwing ideas out there. :) | 12:35 |
openstackgerrit | Christopher H. Laco proposed openstack/openstack-ansible: Fix for keystone LDAP pkg missing https://review.openstack.org/226750 | 12:35 |
evrardjp | gratuitous arps are good on container start | 12:35 |
mancdaz | evrardjp only if the IP changed | 12:35 |
mancdaz | if it didn't, it's not needed | 12:35 |
evrardjp | in theory | 12:36 |
*** ashishjain has quit IRC | 12:36 | |
evrardjp | gratuitous arp isn't really bad per se, that's what I mean | 12:37 |
evrardjp | flushing arp cache... that's something I find... disruptive | 12:38 |
odyssey4me | evrardjp yep, that's where this came up - the arp cache flushing affects uptime during upgrades from juno to kilo | 12:38 |
evrardjp | ok | 12:38 |
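A minimal sketch of the conditional flush discussed above, written as an Ansible notify/handler pair; the task, template and variable names here are hypothetical, not the project's actual ones:

    # tasks: only notify the handler when the container config actually changes
    - name: Write LXC container config
      template:
        src: lxc-config.j2                               # hypothetical template
        dest: /var/lib/lxc/{{ container_name }}/config   # hypothetical variable
      notify: Flush ARP cache

    # handlers
    - name: Flush ARP cache
      command: ip neigh flush all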
*** skamithi14 has quit IRC | 12:53 | |
*** skamithi13 has joined #openstack-ansible | 12:57 | |
mancdaz | odyssey4me why do we do this https://github.com/openstack/openstack-ansible/blob/master/scripts/run-playbooks.sh#L66 | 13:01 |
mancdaz | and this https://github.com/openstack/openstack-ansible/blob/master/scripts/run-playbooks.sh#L66 | 13:01 |
*** KLevenstein has joined #openstack-ansible | 13:05 | |
skamithi13 | odyssey4me: what's up? I'm still around. I'm on irc most days. regarding the vagrant stuff, I thought I said I'd have a first draft by end of Oct; if not, that's my plan right now. | 13:06 |
*** tlian has joined #openstack-ansible | 13:11 | |
evrardjp | I have a question about rcbops/rpc-openstack/maas/ | 13:26 |
mattt | evrardjp: shoot | 13:27 |
*** kerwin_bai has joined #openstack-ansible | 13:27 | |
mattt | tho i am on a call, so may be slow to respond :) | 13:27 |
evrardjp | why are you using ipaddr from pip in all the scripts, instead of ipaddress, which is already installed on far more containers by default? | 13:27 |
evrardjp | it's just an "ess" to add on a few lines | 13:27 |
*** pradk has joined #openstack-ansible | 13:28 | |
evrardjp | the methods seem the same | 13:28 |
evrardjp | (at first sight, I'm no expert) | 13:28 |
mattt | evrardjp: i'm not sure personally, let me scan logs ... i'll get back to you | 13:30 |
evrardjp | it's not mandatory, but it avoids maintaining packages that do exactly the same thing as others (which are already installed) | 13:31 |
mattt | evrardjp: agree | 13:31 |
*** cemmason1 has joined #openstack-ansible | 13:34 | |
*** k_stev has joined #openstack-ansible | 13:34 | |
mhayden | mornin' | 13:38 |
*** KLevenstein is now known as klev-dentist | 13:40 | |
mattt | git-harry: looks like you initially chose to use ipaddr, do you know why this was used over ipaddress? | 13:43 |
git-harry | mattt: eh? | 13:46 |
mattt | git-harry: ha, see evrardjp's question above | 13:46 |
odyssey4me | mancdaz that was added by cloudnull, and I have no idea why that was added - note though that you linked the same line twice | 13:47 |
odyssey4me | skamithi13 ah, I thought you'd disappeared - do you need any help getting the spec together? have you sorted out your gerrit account? | 13:49 |
mancdaz | odyssey4me oh the other one was https://github.com/openstack/openstack-ansible/blob/master/scripts/run-playbooks.sh#L84 | 13:50 |
*** cemmason1 has quit IRC | 13:50 | |
odyssey4me | mancdaz the reason is apparently for when you use teardown to rebuild: https://github.com/openstack/openstack-ansible/blob/master/scripts/run-playbooks.sh#L85 | 13:51 |
skamithi13 | odyssey4me: yeah, gerrit acct is sorted out, I can access the review.openstack site. | 13:51 |
git-harry | mattt: evrardjp no idea, all I can offer is educated guesses | 13:51 |
git-harry | patches welcome | 13:52 |
odyssey4me | skamithi13 great! | 13:52 |
skamithi13 | odyssey4me I'm taking my time. openstack is a beast and it's not my day job.. so I'm taking it slow. | 13:54 |
*** skamithi13 has quit IRC | 13:58 | |
*** skamithi13 has joined #openstack-ansible | 13:58 | |
evrardjp | git-harry: what do you prefer for that? PR? it's outside openstack-ansible as it's pure rackspace maas | 13:59 |
*** Mudpuppy has joined #openstack-ansible | 14:00 | |
*** Mudpuppy has quit IRC | 14:00 | |
*** Mudpuppy has joined #openstack-ansible | 14:01 | |
mattt | evrardjp: i'll create a bug for us to look into it | 14:02 |
evrardjp | it's not really a bug | 14:03 |
mattt | well no but how else do you capture this in github? :) | 14:03 |
evrardjp | it's just a possible improvement | 14:03 |
mattt | s/bug/issue/ | 14:03 |
mattt | evrardjp: that's why i wouldn't recommend you just change it, because we'll need to do a bit of testing to ensure it's all good | 14:04 |
evrardjp | if I patch it, I'll also need to test it ;) | 14:04 |
mattt | evrardjp: ok up to you :) | 14:04 |
mattt | if you want me to create the github issue just let me know | 14:04 |
git-harry | I think they're the same code so it should be a straight switch | 14:05 |
git-harry | or basically the same. I think ipaddress is a backport from python 3 and python 3 ipaddress comes from ipaddr | 14:06 |
git-harry | but I could be wrong about that | 14:06 |
evrardjp | it looks like it git-harry | 14:07 |
evrardjp | just doing sed -i 's/ipaddr\./ipaddress\./g' * should work ;) | 14:08 |
evrardjp | or something like that | 14:08 |
evrardjp | without the \ on the second part ofc | 14:09 |
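Cleaned up, the substitution evrardjp sketches might look like the following; this is untested and assumes the two libraries really are call-compatible, which git-harry is only guessing at above:

    # run in the directory containing the affected scripts
    # rename the module usages, then the imports
    sed -i 's/\bipaddr\./ipaddress./g' *
    sed -i 's/import ipaddr\b/import ipaddress/g' *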
*** k_stev has quit IRC | 14:11 | |
*** k_stev has joined #openstack-ansible | 14:11 | |
*** willemgf has quit IRC | 14:14 | |
*** neilus has quit IRC | 14:14 | |
*** fawadkhaliq has quit IRC | 14:15 | |
*** phalmos has joined #openstack-ansible | 14:20 | |
evrardjp | I'm not really familiar with yaprt, what should I do if I want to add a pip package to my repository? | 14:21 |
mattt | evrardjp: there are two playbooks to run | 14:22 |
evrardjp | like for example I'd like to add the pip package django-piwik to my horizon containers, so I'll have my own playbooks/roles to modify what I need, but I need to know what I have to edit | 14:22 |
evrardjp | repo-build I guess | 14:22 |
mattt | yep and repo-pip-setup.yml | 14:22 |
mattt | you're not doing this on your prod deploy are you ? | 14:22 |
evrardjp | nope, but I'll | 14:22 |
evrardjp | at some point I'll have to | 14:23 |
evrardjp | nope, but I will* | 14:23 |
mattt | evrardjp: why? are you planning on using rackspace maas for monitoring? | 14:24 |
mattt | (you are welcome to use this stuff, just not sure why you would :)) | 14:24 |
evrardjp | this is something else, I already moved on | 14:24 |
mhayden | klev-dentist / Sam-I-Am: is there a doc macro of some sort for making an information box or a warning box? | 14:24 |
evrardjp | ;) | 14:24 |
evrardjp | mattt: the maas is used as basis for our monitoring systems | 14:24 |
evrardjp | the python scripts are used to partially get the data out for our systems | 14:25 |
evrardjp | but that's another story | 14:25 |
mattt | ok cool | 14:25 |
mattt | then opensource patches welcome | 14:25 |
mattt | :) | 14:25 |
evrardjp | yeah ofc | 14:26 |
evrardjp | I'll create an openstack-ansible-zabbix-monitoring when I have the time | 14:26 |
evrardjp | but what I mentioned is something different | 14:27 |
odyssey4me | evrardjp so we did an additional repo for extra stuff in rpc-openstack | 14:27 |
odyssey4me | I don't think it's a perfect implementation, but it works | 14:28 |
openstackgerrit | Major Hayden proposed openstack/openstack-ansible: Merge SSL documentation https://review.openstack.org/226533 | 14:29 |
evrardjp | I know | 14:30 |
evrardjp | it's that one right? https://github.com/rcbops/rpc-openstack | 14:30 |
evrardjp | it's that one that I mention for the change /ipaddr/ipaddress/ | 14:31 |
evrardjp | mentioned* | 14:31 |
evrardjp | still for my pip concern, this is something else | 14:31 |
evrardjp | repo-pip-setup doesn't exist for me mattt | 14:32 |
mattt | evrardjp: https://github.com/rcbops/rpc-openstack/blob/master/rpcd/playbooks/repo-pip-setup.yml | 14:33 |
evrardjp | ok | 14:33 |
odyssey4me | evrardjp https://github.com/rcbops/rpc-openstack/blob/master/scripts/deploy.sh#L92-L96 | 14:33 |
odyssey4me | yeah, so that's a custom play which uses the pip lockdown role from OSA but adds the extra repo's link and recompiles pip.conf | 14:34 |
evrardjp | so I shouldn't drop stuff in os-ansible-deployment/playbooks/defaults/repo_packages | 14:34 |
odyssey4me | and https://github.com/rcbops/rpc-openstack/blob/master/rpcd/playbooks/repo-build.yml is a play which executes the repo build for the custom repo | 14:35 |
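Put together, the rpc-openstack flow odyssey4me links above amounts to something like this sketch; the checkout path is only an example, and the playbook names come from the links in this conversation:

    cd /opt/rpc-openstack/rpcd/playbooks   # example checkout location
    # build the extra wheels into the custom repo, then point pip at it
    openstack-ansible repo-build.yml
    openstack-ansible repo-pip-setup.yml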
evrardjp | maybe that is only for git | 14:35 |
odyssey4me | evrardjp of course if you're maintaining a fork then you can simply drop stuff into /playbooks/defaults/repo_pack | 14:35 |
odyssey4me | ... | 14:35 |
evrardjp | I see what you mean with your "..." | 14:36 |
odyssey4me | ideally we should make the repo system more pluggable I think so that one can simply add more packages if they're needed | 14:36 |
evrardjp | odyssey4me: indeed | 14:36 |
evrardjp | I'll just use pip install on my horizon hosts, I'll see what it will do | 14:37 |
evrardjp | I need to understand this pip process more | 14:37 |
odyssey4me | the ideal situation for anything someone wants extra in the repo, whether it be specific wheels needed for storage/network drivers or other additional bits, is that we should allow someone to drop in a file similar to how we do conf.d or env.d, and it'll get included in the repo build | 14:37 |
evrardjp | that would be awesome | 14:39 |
evrardjp | or a list of extra pip packages in user_*.yml | 14:40 |
mattt | that'd be nice yeah | 14:41 |
cloudnull | Morning. Mancdaz odyssey4me - what did I add ? | 14:42 |
cloudnull | ;-) | 14:42 |
odyssey4me | cloudnull some stuff into run-playbooks which no-one understands :p | 14:46 |
odyssey4me | mancdaz has been doing tests related to https://review.openstack.org/225367 | 14:46 |
cloudnull | Ah good. | 14:46 |
odyssey4me | it works, but it is super slow and we have no idea why | 14:46 |
odyssey4me | my guess is that it relates to some sort of tcp timeout which the arp cache flush is taking care of | 14:47 |
odyssey4me | but I know nothing about networking :p | 14:47 |
odyssey4me | cloudnull fyi I'm going to do sha bumps for juno and kilo today - I'm busy prepping the patches now and will do the rpc-repo rebuilds to update them too | 14:48 |
odyssey4me | I see that keystone has released rc1, so I'll drop in a sha bump for that to see what happens :) | 14:48 |
cloudnull | Sweet. I'll hold your beer. | 14:51 |
cloudnull | The commit to remove the flushing bits is slow ? | 14:52 |
cloudnull | Have we rebased that in a while? I've not looked. | 14:53 |
* cloudnull on mobile due to conference call. | 14:53 | |
prometheanfire | lol | 14:53 |
prometheanfire | andymccr: ping? | 14:54 |
andymccr | prometheanfire: hello | 14:54 |
odyssey4me | cloudnull rebased several times, even after the base image improvement the build times out after 90 minutes almost every time | 14:54 |
odyssey4me | there has been a single successful build within 90 mins and more than 10 fails across both hp and rax instances | 14:55 |
cloudnull | That's a bummer. | 14:55 |
prometheanfire | andymccr: keep wednesday 11:15-11:55 open kthnx | 14:56 |
*** phalmos has quit IRC | 14:56 | |
palendae | Sounds like we need Apsu involved there | 14:56 |
prometheanfire | andymccr: for the conf | 14:56 |
andymccr | that sounds ominous prometheanfire ;D | 14:57 |
prometheanfire | that's the container session | 14:57 |
prometheanfire | for ops | 14:57 |
prometheanfire | Infrastructure Containers | 14:57 |
andymccr | cool | 14:57 |
prometheanfire | thought we could pimp things | 14:57 |
andymccr | sure sounds good! | 14:57 |
prometheanfire | https://etherpad.openstack.org/p/TYO-ops-meetup https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=1480678842 | 14:57 |
evrardjp | I'm off for today, see you tomorrow | 14:58 |
prometheanfire | cya | 14:58 |
mattt | later evrardjp | 14:58 |
*** phalmos has joined #openstack-ansible | 15:01 | |
*** cemmason1 has joined #openstack-ansible | 15:09 | |
*** cemmason1 has quit IRC | 15:09 | |
*** k_stev has quit IRC | 15:12 | |
*** phalmos has quit IRC | 15:13 | |
*** k_stev has joined #openstack-ansible | 15:16 | |
tiagogomes | Hi, I am getting this error "One or more undefined variables: 'dict object' has no attribute 'volume_backend_name'" . My user config: http://paste.openstack.org/show/473775/ | 15:16 |
tiagogomes | does anyone have an idea of what the issue is? | 15:17 |
*** phalmos has joined #openstack-ansible | 15:19 | |
*** k_stev1 has joined #openstack-ansible | 15:19 | |
mattt | tiagogomes: should cinder_nfs_client and everything underneath be indented ? | 15:20 |
logan2 | heat-engine and heat-api setup is failing due to ceilometerclient: http://paste.gentoolinux.info/ipiqoqepop.mel .. completely fresh containers/repo built this morning.. any ideas? | 15:20 |
*** k_stev has quit IRC | 15:21 | |
klev-dentist | mhayden: I think there’s something, but I’d have to look it up | 15:22 |
*** klev-dentist is now known as KLevenstein | 15:22 | |
*** jhesketh has quit IRC | 15:26 | |
*** jhesketh has joined #openstack-ansible | 15:27 | |
*** spotz_zzz is now known as spotz | 15:28 | |
*** alejandrito has joined #openstack-ansible | 15:33 | |
*** javeriak has joined #openstack-ansible | 15:34 | |
openstackgerrit | Merged openstack/openstack-ansible: Remove quotes from subshell call in bash script https://review.openstack.org/226714 | 15:42 |
*** cristicalin has quit IRC | 15:42 | |
*** KLevenstein is now known as klev-awa | 15:42 | |
*** alop has joined #openstack-ansible | 15:49 | |
*** jwagner_away is now known as jwagner | 15:50 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Update juno SHA's - 23 Sep 2015 https://review.openstack.org/226861 | 15:52 |
prometheanfire | another gentoo user? | 15:54 |
*** sdake has joined #openstack-ansible | 15:57 | |
logan2 | recovering, sorry. mostly ubuntu these days | 15:57 |
*** javeriak has quit IRC | 16:01 | |
*** javeriak has joined #openstack-ansible | 16:02 | |
prometheanfire | openstack is working fine for me on it :P | 16:03 |
*** sdake has quit IRC | 16:04 | |
*** javeriak has quit IRC | 16:06 | |
palendae | prometheanfire: At what scale? :) | 16:06 |
*** javeriak has joined #openstack-ansible | 16:06 | |
palendae | Actually am curious about that - assume you have a home lab for it? | 16:06 |
*** sdake_ has joined #openstack-ansible | 16:06 | |
prometheanfire | home lab right now | 16:07 |
prometheanfire | I have other users with larger deployments | 16:07 |
prometheanfire | home lab is 3 nodes atm | 16:07 |
prometheanfire | will be 4 eventually | 16:07 |
prometheanfire | iirc one of my users was in belgium, another in russia, not sure of the others | 16:08 |
*** sdake has joined #openstack-ansible | 16:10 | |
*** elo has joined #openstack-ansible | 16:14 | |
*** sdake_ has quit IRC | 16:14 | |
*** phalmos has quit IRC | 16:17 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Add policy changes required for OSSA-2015-018 / CVE-2015-5240 https://review.openstack.org/226872 | 16:28 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Add policy changes required for OSSA-2015-018 / CVE-2015-5240 https://review.openstack.org/226874 | 16:32 |
mhayden | odyssey4me: would it be possible for https://review.openstack.org/#/c/226533/ to get a workflow+1? i'd like to use it to fix up my RabbitMQ SSL review | 16:38 |
odyssey4me | mhayden need another core reviewer cloudnull mattt andymccr d34dh0r53 ^ | 16:40 |
d34dh0r53 | mhayden: odyssey4me reviewing now | 16:41 |
mhayden | odyssey4me: ah, sorry -- still figuring this process out ;) | 16:41 |
*** daneyon has joined #openstack-ansible | 16:43 | |
*** daneyon_ has quit IRC | 16:43 | |
d34dh0r53 | mhayden: ask away, that is the process :) | 16:45 |
d34dh0r53 | mhayden: odyssey4me reviewed and +W | 16:45 |
stevelle | mhayden: I'm still confused about the ca_cert | 16:45 |
mhayden | so d34dh0r53, what is the secret of life? | 16:45 |
d34dh0r53 | 42 | 16:45 |
palendae | mhayden: You just need to know the question | 16:45 |
odyssey4me | stevelle so the ca cert may be dropped regardless of whether someone is using self-signed or user-provided certs | 16:45 |
odyssey4me | whether the ca cert is provided or not makes no difference to the self-signed process | 16:46 |
mhayden | stevelle / odyssey4me: there does arise a sticky situation if a user provides only cert + key and no cacert | 16:46 |
stevelle | that was my concern, but honestly that can be touched on in a following patch | 16:46 |
odyssey4me | mhayden in that case if the user provides no ca cert then the user expects that the target OS already knows the CA | 16:46 |
mhayden | i'd like to synchronize the ssl logic everywhere as well | 16:47 |
mhayden | odyssey4me: but the conf files specify a CA file -- which won't exist | 16:47 |
mhayden | that's the larger issue | 16:47 |
stevelle | the docs state clearly that the ca_cert will be required for any user-provided cert | 16:47 |
mhayden | it would be silly to deploy cert/key with no CA | 16:47 |
stevelle | so relying on the os to know it seems to violate the docs | 16:47 |
odyssey4me | mhayden the apache conf files skip the ca config entry if the ca doesn't exist | 16:47 |
mhayden | ah, rabbitmq ones don't | 16:47 |
* mhayden winks | 16:47 | |
stevelle | exactly | 16:47 |
odyssey4me | mhayden sounds like you need a patch then ;) | 16:47 |
mhayden | i'll go over the ssl logic for apache, keystone, rabbit, and horizon later today to ensure they have similar logic | 16:48 |
mhayden | in the code, not the docs | 16:48 |
odyssey4me | there are cases where the ca would already be known to the OS | 16:48 |
mhayden | and verify that the docs match fully | 16:48 |
odyssey4me | alternatively the deployer may have concatenated the ca cert into the server cert | 16:48 |
mhayden | odyssey4me: good point | 16:48 |
mhayden | didn't think about that last situation | 16:48 |
stevelle | it was funny mhayden because I went down a rabbit hole after reviewing your general security spec yesterday and was looking up what it would take to secure rabbit. I parked it and went to check reviews and noticed you had already submitted the patchset for it. | 16:49 |
odyssey4me | prior to the normalised logic in the ssl certs, that was the expected way of deploying | 16:49 |
mhayden | stevelle: oops :) | 16:49 |
odyssey4me | stevelle nicely picked up - I missed the lack of optional ca cert in the rabbitmq bits :) | 16:50 |
odyssey4me | luckily it's not yet merged, so mhayden can fix that up :) | 16:50 |
odyssey4me | mhayden you could also rebase your patch on the ssl docs patch, that way you can take care of the docs duplication in the same patch set :) | 16:50 |
mhayden | yup - just commented on 223717 | 16:50 |
mhayden | that's the plan, odyssey4me ;) | 16:50 |
odyssey4me | mhayden you can create dependent patches :) want to give that a whirl? | 16:51 |
openstackgerrit | Christopher H. Laco proposed openstack/openstack-ansible: Add net.netfilter.nf_conntrack_max to Swift Storage https://review.openstack.org/226880 | 16:51 |
mhayden | i've heard that this christopher h. laco submits good code | 16:53 |
mhayden | from reputable people | 16:53 |
openstackgerrit | Christopher H. Laco proposed openstack/openstack-ansible: Add net.netfilter.nf_conntrack_max to Swift Storage https://review.openstack.org/226880 | 16:53 |
openstackgerrit | Merged openstack/openstack-ansible: Merge SSL documentation https://review.openstack.org/226533 | 16:53 |
mhayden | yay docs | 16:54 |
mhayden | will hopefully get rabbitmq review updated by EOD | 16:54 |
odyssey4me | mhayden it may be best to -w it now quickly to prevent someone else doing the workflow bit :) | 16:56 |
*** gparaskevas_ has quit IRC | 16:57 | |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible: Apply correct websocket URI scheme for spice-html5 https://review.openstack.org/226462 | 16:59 |
odyssey4me | mhayden I think that https://review.openstack.org/226533 could do with a backport to kilo :) | 17:01 |
*** abitha has joined #openstack-ansible | 17:01 | |
*** abitha has quit IRC | 17:02 | |
odyssey4me | mhayden another thing - I have been toying with the idea for some time to have a role which deploys an internal CA, and another role which can generate a cert on that CA for servers so that plays can request a cert and distribute it appropriately. | 17:02 |
odyssey4me | I've already done some work in another role I work on in spare time (LoL) to have a working CA - it's far from done, but I thought that it would be way better to replace all self-signed certs with an internal CA. | 17:04 |
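As a rough illustration of the internal-CA idea odyssey4me describes (plain openssl, nothing that exists in the roles yet; all names are placeholders):

    # one-off: create the internal CA key and self-signed CA certificate
    openssl genrsa -out ca.key 4096
    openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=internal-osa-ca" -out ca.crt

    # per service: key + CSR, then sign the CSR with the internal CA
    openssl genrsa -out rabbitmq.key 2048
    openssl req -new -key rabbitmq.key -subj "/CN=rabbitmq.example.internal" -out rabbitmq.csr
    openssl x509 -req -in rabbitmq.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out rabbitmq.crt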
stevelle | odyssey4me: ++ | 17:06 |
stevelle | that was part of my rabbit hole yesterday as well | 17:06 |
odyssey4me | self signed certs are useless IMO, you may as well not bother | 17:06 |
stevelle | I almost feel the same way for SSL terminated at the LB | 17:07 |
stevelle | :) | 17:07 |
*** cloudtrainme has joined #openstack-ansible | 17:08 | |
odyssey4me | well, ssl at the LB is ok in my books as long as your internals are properly protected in other ways | 17:11 |
odyssey4me | if someone can sniff your internals, you're in trouble regardless | 17:11 |
odyssey4me | but using self signed certs for endpoints is just stupid - you end up having to make all clients operate in insecure mode, so they never validate anything - even if a man gets in the middle | 17:12 |
*** cloudtrainme has quit IRC | 17:26 | |
*** skamithi13 has quit IRC | 17:27 | |
*** cloudtrainme has joined #openstack-ansible | 17:27 | |
*** skamithi13 has joined #openstack-ansible | 17:27 | |
openstackgerrit | Miguel Grinberg proposed openstack/openstack-ansible: Put horizon in its own process https://review.openstack.org/226889 | 17:30 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Update kilo SHA's - 23 Sep 2015 https://review.openstack.org/226890 | 17:31 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Add ebtables to neutron agent configuration https://review.openstack.org/217103 | 17:32 |
*** cloudtrainme has quit IRC | 17:33 | |
*** phalmos has joined #openstack-ansible | 17:41 | |
*** jwagner is now known as jwagner_lunch | 17:44 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Add ebtables to neutron agent configuration https://review.openstack.org/217103 | 17:49 |
*** abitha has joined #openstack-ansible | 18:06 | |
*** kerwin_bai has quit IRC | 18:09 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Update Keystone to Liberty RC1 https://review.openstack.org/226917 | 18:16 |
*** elo has quit IRC | 18:18 | |
*** phalmos has quit IRC | 18:23 | |
*** phalmos has joined #openstack-ansible | 18:36 | |
openstackgerrit | Merged openstack/openstack-ansible: Fix for keystone LDAP pkg missing https://review.openstack.org/226750 | 18:42 |
*** klev-awa is now known as KLevenstein | 18:44 | |
*** jwagner_lunch is now known as jwagner | 18:45 | |
*** phalmos has quit IRC | 18:49 | |
openstackgerrit | Major Hayden proposed openstack/openstack-ansible: Add SSL/TLS listener to RabbitMQ https://review.openstack.org/223717 | 18:56 |
*** Bjoern_ has joined #openstack-ansible | 19:06 | |
*** Bjoern_ is now known as BjoernT | 19:06 | |
*** phalmos has joined #openstack-ansible | 19:12 | |
*** phalmos has quit IRC | 19:22 | |
*** phalmos has joined #openstack-ansible | 19:31 | |
*** cloudtrainme has joined #openstack-ansible | 19:32 | |
*** cloudtrainme has quit IRC | 19:37 | |
openstackgerrit | Miguel Alejandro Cantu proposed openstack/openstack-ansible: Add OpenID Connect RP Apache Module[WIP] https://review.openstack.org/226617 | 19:56 |
*** fawadkhaliq has joined #openstack-ansible | 20:02 | |
*** fawadkhaliq has quit IRC | 20:05 | |
*** kukacz has joined #openstack-ansible | 20:19 | |
*** javeriak has quit IRC | 20:19 | |
*** sdake_ has joined #openstack-ansible | 20:22 | |
*** alop has quit IRC | 20:22 | |
*** sdake has quit IRC | 20:25 | |
*** k_stev1 has quit IRC | 20:33 | |
mhayden | getting some apt-get failures in jenkins | 20:42 |
mhayden | weird | 20:42 |
*** sigmavirus24_awa has quit IRC | 20:54 | |
openstackgerrit | Merged openstack/openstack-ansible: Configure HAProxy SSL frontends with cipher suite https://review.openstack.org/226610 | 20:54 |
*** d34dh0r53 has quit IRC | 20:55 | |
*** d34dh0r53 has joined #openstack-ansible | 20:55 | |
*** eglute has quit IRC | 20:55 | |
*** eglute has joined #openstack-ansible | 20:55 | |
*** sigmavirus24_awa has joined #openstack-ansible | 20:58 | |
openstackgerrit | Miguel Grinberg proposed openstack/openstack-ansible: Put horizon in its own process https://review.openstack.org/226889 | 21:02 |
*** mgariepy has quit IRC | 21:08 | |
*** woodard has quit IRC | 21:14 | |
*** Mudpuppy_ has joined #openstack-ansible | 21:33 | |
*** Mudpuppy_ has quit IRC | 21:34 | |
*** Mudpuppy has quit IRC | 21:36 | |
*** skamithi14 has joined #openstack-ansible | 21:48 | |
*** skamithi13 has quit IRC | 21:50 | |
*** spotz is now known as spotz_zzz | 21:50 | |
*** k_stev has joined #openstack-ansible | 21:50 | |
*** kukacz has quit IRC | 21:52 | |
*** sdake_ has quit IRC | 21:59 | |
*** k_stev has quit IRC | 22:00 | |
*** galstrom_zzz is now known as galstrom | 22:11 | |
*** kerwin_bai has joined #openstack-ansible | 22:13 | |
*** openstackgerrit has quit IRC | 22:16 | |
*** openstackgerrit has joined #openstack-ansible | 22:16 | |
*** galstrom is now known as galstrom_zzz | 22:21 | |
*** alejandrito has quit IRC | 22:36 | |
*** KLevenstein has quit IRC | 22:45 | |
*** jwagner is now known as jwagner_away | 22:46 | |
openstackgerrit | seetha ramaiah munnangi proposed openstack/openstack-ansible: Add Administration Capabilites to the Haproxy Stats GUI https://review.openstack.org/227042 | 22:59 |
*** phalmos has quit IRC | 23:39 | |
*** skamithi14 has quit IRC | 23:55 | |
*** skamithi13 has joined #openstack-ansible | 23:55 |