*** sacharya has joined #openstack-ansible | 00:48 | |
*** KLevenstein has joined #openstack-ansible | 00:59 | |
*** KLevenstein has quit IRC | 01:00 | |
*** sacharya has quit IRC | 01:03 | |
*** daneyon has joined #openstack-ansible | 01:27 | |
*** daneyon has quit IRC | 01:31 | |
*** stevemar has joined #openstack-ansible | 02:40 | |
javeriak | hey, can we pre-define what ips the infra containers will take? | 03:06 |
*** stevemar has quit IRC | 03:06 | |
*** froots has quit IRC | 03:06 | |
*** hughsaunders has quit IRC | 03:06 | |
openstackgerrit | Nolan Brubaker proposed stackforge/os-ansible-deployment-specs: Propose new developer documentation spec https://review.openstack.org/173155 | 03:10 |
*** stevemar has joined #openstack-ansible | 03:13 | |
*** froots has joined #openstack-ansible | 03:13 | |
*** hughsaunders has joined #openstack-ansible | 03:13 | |
*** daneyon has joined #openstack-ansible | 03:16 | |
*** bluebox has joined #openstack-ansible | 03:17 | |
*** sacharya has joined #openstack-ansible | 03:19 | |
*** britthouser has joined #openstack-ansible | 03:19 | |
*** daneyon has quit IRC | 03:21 | |
openstackgerrit | Nolan Brubaker proposed stackforge/os-ansible-deployment-specs: Propose new developer documentation https://review.openstack.org/173155 | 03:21 |
*** britthou_ has joined #openstack-ansible | 03:21 | |
*** javeriak has quit IRC | 03:23 | |
*** britthouser has quit IRC | 03:24 | |
*** sdake has joined #openstack-ansible | 03:56 | |
*** sdake_ has quit IRC | 03:58 | |
*** sdake_ has joined #openstack-ansible | 04:02 | |
*** sdake has quit IRC | 04:04 | |
*** JRobinson__ is now known as JRobinson__afk | 04:18 | |
*** britthou_ has quit IRC | 04:22 | |
*** britthouser has joined #openstack-ansible | 04:23 | |
*** bluebox has quit IRC | 04:27 | |
*** JRobinson__afk has quit IRC | 04:32 | |
*** JRobinson__afk has joined #openstack-ansible | 04:34 | |
*** JRobinson__afk is now known as JRobinson__ | 04:37 | |
*** bilal has joined #openstack-ansible | 04:41 | |
*** bluebox has joined #openstack-ansible | 04:42 | |
*** javeriak has joined #openstack-ansible | 04:43 | |
*** bluebox has quit IRC | 04:43 | |
bilal | I have a 3-controller rackspace 10 setup up and running. when i try to create a network im getting ERROR: neutronclient.shell <html><body><h1>504 Gateway Time-out</h1> The server didn't respond in time. | 04:44 |
*** sacharya has quit IRC | 04:53 | |
*** daneyon has joined #openstack-ansible | 05:04 | |
*** daneyon has quit IRC | 05:09 | |
*** mahito has joined #openstack-ansible | 05:10 | |
*** ishant has joined #openstack-ansible | 05:14 | |
*** JRobinson__ has quit IRC | 05:57 | |
*** javeriak has quit IRC | 06:01 | |
*** javeriak has joined #openstack-ansible | 06:01 | |
*** javeriak has quit IRC | 06:08 | |
*** javeriak has joined #openstack-ansible | 06:08 | |
*** javeriak has quit IRC | 06:13 | |
*** javeriak has joined #openstack-ansible | 06:14 | |
*** javeriak has quit IRC | 06:43 | |
*** daneyon has joined #openstack-ansible | 06:53 | |
*** daneyon has quit IRC | 06:58 | |
*** stevemar has quit IRC | 07:09 | |
odyssey4me | bilal it sounds like your load balancer can't speak to the back-end service, which is most likely a service misconfiguration of some sort | 08:06 |
bilal | odyssey4me: which configuration files should i check for? | 08:07 |
odyssey4me | bilal no, I mean that you've either misconfigured an IP somewhere or the LB IP or something like that - is your load balancer working, check its health and status | 08:08 |
odyssey4me | A pertinent question would probably also be whether you've configured a load balancer at all? | 08:09 |
bilal | load balancer is configured and it's working. also the request to authenticate from keystone etc is going to the right ip when i see the logs | 08:18 |
*** sdake has joined #openstack-ansible | 08:22 | |
bilal | not sure about neutron.. every neutron request should also go to lb first. right? | 08:22 |
*** sdake_ has quit IRC | 08:26 | |
odyssey4me | yes, all the services are configured to go through the LB address unless you've changed them to do otherwise | 08:30 |
*** sdake has quit IRC | 08:30 | |
odyssey4me | the 500 message usually comes back from the LB | 08:30 |
odyssey4me | I guess you'll have to track down whether it is coming from the LB or from one of the back-ends | 08:30 |
mattt | odyssey4me: that message looks like an haproxy message | 08:31 |
odyssey4me | mattt agreed | 08:32 |
bilal | here is the trace back:neutron --debug net-create net1 DEBUG: keystoneclient.session REQ: curl -g -i -X GET http://10.22.37.149:35357/v2.0 -H "Accept: application/json" -H "User-Agent: python-keystoneclient" DEBUG: keystoneclient.session RESP: [200] date: Tue, 14 Apr 2015 06:21:57 GMT vary: X-Auth-Token content-length: 423 content-type: application/json server: Apache/2.4.7 (Ubuntu) RESP BODY: {"version": {"status": "stable | 08:41 |
odyssey4me | bilal you'll need to put that into a pastebin, not into IRC | 08:42 |
bilal | : [{"href": "http://10.22.37.149:35357/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}} DEBUG: stevedore.extension found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter') DEBUG: stevedore.extension found extension EntryPoint.parse('shell = cliff.formatters.shell:ShellFormatter') DEBUG: stevedore.extension found extension EntryPoint.parse(' | 08:42 |
bilal | oh ok | 08:42 |
odyssey4me | try using http://paste.openstack.org/ | 08:42 |
*** daneyon has joined #openstack-ansible | 08:42 | |
bilal | http://paste.openstack.org/show/203827/ | 08:43 |
mattt | bilal: try logging into the neutron-server containers to see what is going on | 08:46 |
mattt | bilal: have a poke at /var/log/neutron/neutron-server.log etc. | 08:47 |
*** daneyon has quit IRC | 08:47 | |
bilal | mattt: neutron-server.log saying AMQP unreachable. it is trying to find it in localhost. i dont see any amqp service/process running on neutron-server container. should it be running on this container or a separate one? http://paste.openstack.org/show/203828/ | 08:55 |
odyssey4me | bilal it runs in a cluster in its own containers | 08:56 |
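[Editor's note: a hedged debugging sketch for the 504 discussion above. The haproxy stats socket path and the rabbitmq container name are assumptions; check your haproxy.cfg ("stats socket ...") and `lxc-ls` output for the real values.]

```shell
# Ask haproxy which backends are not UP; in the "show stat" CSV the 18th
# field is the status column. The socket path is an assumption.
STATS_SOCKET=/var/run/haproxy.stat
if [ -S "$STATS_SOCKET" ]; then
    echo "show stat" | socat stdio "UNIX-CONNECT:$STATS_SOCKET" \
        | awk -F, 'NF && $1 !~ /^#/ && $18 != "UP" {print $1, $2, $18}'
fi

# On an infra host, check the rabbitmq cluster the neutron-server container
# cannot reach (container name is hypothetical -- list yours with lxc-ls):
# lxc-attach -n infra1_rabbit_mq_container -- rabbitmqctl cluster_status
```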
mattt | bilal: get it ? | 09:15 |
openstackgerrit | Serge van Ginderachter proposed stackforge/os-ansible-deployment: [WIP] first shot at implementing Ceph/RBD support https://review.openstack.org/173229 | 09:29 |
mattt | svg: woo! will test this when i get some free time! | 09:34 |
* svg just got a mail from Rackspace Support Sales telling Ceph will be supported in kilo+1 release | 09:35 | |
mancdaz | svg ish | 09:38 |
svg | I'm looking at https://review.openstack.org/#/c/173229/ can't seem to find some 'review' button? | 09:40 |
hughsaunders | svg: are you logged in to gerrit? | 09:40 |
svg | yes | 09:40 |
svg | just want to add the -1 (WIP) | 09:41 |
svg | Is that a permission thing? | 09:41 |
odyssey4me | svg that's a pretty good start - can see a few things which look odd and it's unfortunate that it's not a master patch, but thanks for the submission - it provides a great base! | 09:42 |
hughsaunders | svg: do you have this button? http://i.imgur.com/EtKWFh9.png | 09:42 |
hughsaunders | I think we'll also need a spec for ceph support | 09:46 |
hughsaunders | so we can agree on an approach, and potentially implement it in stages | 09:47 |
hughsaunders | svg: I've WIPd it for you. | 09:47 |
svg | hughsaunders: no, I have https://dl.dropboxusercontent.com/u/13986042/20150414114810.png | 09:48 |
odyssey4me | hmm, what browser are you using - the issue may be browser related | 09:49 |
svg | firefox | 09:49 |
odyssey4me | oh, that should work | 09:49 |
hughsaunders | svg: thats the new change screen, you need to click reply | 09:50 |
svg | ok, could do a -1 from there, but still a totally different screen | 09:51 |
svg | same in chromium | 09:52 |
svg | perhaps I don't have review permission? | 09:53 |
svg | odyssey4me: right now, it's all about getting juno to work, and ready to possibly deploy it in production; but I'll definitely work on a master patch later; | 09:54 |
odyssey4me | svg have you signed the CLA? you may not be part of a general group which would give you access | 09:54 |
svg | I did, but there was an issue with that, and someone here pointed me to another way to do it. | 09:56 |
svg | I do have a status Verified on the ICLA in my settings | 09:56 |
odyssey4me | svg ah, that's fairly common - I think it might need some input from #openstack-infra to determine what's causing the button to be missing | 09:56 |
*** ctgriffiths_ has joined #openstack-ansible | 09:57 | |
svg | ok; nothing urgent for now, I'll check on -infra later, thx | 09:57 |
openstackgerrit | Merged stackforge/os-ansible-deployment: Fix bug in playbooks/library/neutron https://review.openstack.org/171489 | 09:57 |
hughsaunders | svg: so do you have any radio boxes for workflow in the reply popup? | 09:58 |
svg | yes, workflow & code-review | 09:58 |
openstackgerrit | Merged stackforge/os-ansible-deployment: Correctly deploy neutron_metering_agent_check https://review.openstack.org/171577 | 09:58 |
hughsaunders | svg: so all you need to do to mark a review as wip is to set workflow to -1 | 09:59 |
svg | hughsaunders: yes, that's what I did in the meantime now; | 10:00 |
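[Editor's note: the Workflow -1 that was applied through the UI can also be set from the command line via gerrit's ssh interface. A hedged sketch: USERNAME and the `173229,1` change,patchset pair are placeholders, and this assumes your account has the same review permissions as in the web UI.]

```shell
# Mark a change as work-in-progress by setting Workflow -1 over ssh.
# USERNAME and 173229,1 (change number, patchset) are placeholders.
ssh -p 29418 USERNAME@review.openstack.org \
    gerrit review --label Workflow=-1 173229,1
```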
mancdaz | hughsaunders what determines if one sees the 'new' screen or not? | 10:00 |
svg | patch on master + spec: should that still be - possibly - timely for 'kilo'? When is the v11/kilo release planned? | 10:01 |
mancdaz | svg we are going to cut an rc1 for kilo in a few days | 10:02 |
*** ctgriffiths has quit IRC | 10:02 | |
mancdaz | anything requiring a new spec would likely make it into one of the minor kilo releases 11.1, 11.2 ... | 10:03 |
hughsaunders | mancdaz: settings > preferences > change view | 10:04 |
mancdaz | hughsaunders ah k | 10:04 |
svg | Ah, ok, I had the new shiny view enabled. Should have reverted that as I didn't find any ponies. | 10:07 |
odyssey4me | erm, interesting... this'll take some getting used to | 10:07 |
* odyssey4me is trying out the new view for no particular reason | 10:07 | |
hughsaunders | I disabled it because it makes the patchset chain less clear | 10:07 |
*** mahito has quit IRC | 10:07 | |
hughsaunders | all related patches appear in the same list | 10:08 |
hughsaunders | in the old view required and dependant patches are in separate lists - very clear. | 10:08 |
hughsaunders | The only real pony in the new view is live zuul job status | 10:08 |
odyssey4me | the collapsed history is quite nice, although it defaults to expanded for me (even though the button says expand) | 10:10 |
odyssey4me | I don't see a live zuul status, where is that? | 10:10 |
svg | mancdaz: Does this means, in time, there might be an updated 10.x with ceph added? | 10:10 |
mancdaz | svg it's not something that the core group would be directly working on backporting into juno | 10:11 |
mancdaz | that's not to say somebody couldn't do it... | 10:12 |
svg | yes, but it might be possible if I work on it then | 10:12 |
mancdaz | svg there's going to be a lot of activity working on getting ceph (block) storage support in to master | 10:12 |
mancdaz | whether it's worth waiting and attempting a backport, or working completely independently on juno, I'm not sure | 10:13 |
svg | ok, i c | 10:13 |
mancdaz | juno and master are quite different in terms of the way ansible hangs together, so the solutions are likely to be quite different | 10:14 |
mancdaz | but if we waited for the master work, and then based the juno work on that, at least the approach would be consistent | 10:14 |
openstackgerrit | Merged stackforge/os-ansible-deployment: Ensure OpenStack commands are run as correct user https://review.openstack.org/172368 | 10:14 |
svg | it's just the I am in the middle of things, and will need to start using ceph support before it gets merged upstream | 10:15 |
svg | so trying to find out which will be the best approach to handle that | 10:15 |
hughsaunders | odyssey4me: Can't find a live example https://twitter.com/sdague/status/583603459775193088 | 10:17 |
openstackgerrit | git-harry proposed stackforge/os-ansible-deployment: Add HP monitoring playbook https://review.openstack.org/171223 | 10:26 |
openstackgerrit | git-harry proposed stackforge/os-ansible-deployment: Add network.yml monitoring playbook https://review.openstack.org/170062 | 10:29 |
*** daneyon has joined #openstack-ansible | 10:31 | |
odyssey4me | hughsaunders I don't see the same in my view - it seems that I'm missing the results box on the right under the commit msg | 10:35 |
*** daneyon has quit IRC | 10:36 | |
hughsaunders | odyssey4me: yeah, I'm not seeing it at the moment.. maybe it has to be disabled for some reason :-/ | 10:36 |
*** ishant has quit IRC | 10:50 | |
*** daneyon has joined #openstack-ansible | 12:20 | |
*** daneyon has quit IRC | 12:25 | |
cloudnull | morning | 12:26 |
*** markvoelker has joined #openstack-ansible | 13:00 | |
*** markvoelker_ has joined #openstack-ansible | 13:06 | |
*** markvoelker has quit IRC | 13:09 | |
*** KLevenstein has joined #openstack-ansible | 13:30 | |
*** britthouser has quit IRC | 13:32 | |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 13:33 |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 13:34 |
*** sdake has joined #openstack-ansible | 13:35 | |
openstackgerrit | Matthew Kassawara proposed stackforge/os-ansible-deployment: Update keystone middleware in neutron for Kilo https://review.openstack.org/173318 | 13:36 |
*** KLevenstein has quit IRC | 13:37 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 13:37 | |
*** sdake has quit IRC | 13:40 | |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 13:46 |
svg | hey guys, I think I already asked this, but still not sure: regarding the network interfaces for the metal hosts, what exactly is meant by "Container management bridge br-mgmt"? | 13:51 |
svg | and how does that relate to the container's eth0 that gets defined with a 10.0.3.x address | 13:52 |
Sam-I-Am | svg: thats the interface used for managing containers | 13:52 |
svg | Sam-I-Am: as in, the sysadmin accessing it from his machine? | 13:52 |
Sam-I-Am | svg: containers use snat to access the outside world... through that 10.0.3 range | 13:52 |
Sam-I-Am | but to get into them, the eth1 in the container attaches to br-mgmt on the host which attaches to a physical interface on the host | 13:53 |
Sam-I-Am | svg: http://docs.rackspace.com/rpc/api/v10/bk-rpc-installation/content/sec_overview_host-networking.html | 13:53 |
svg | accessing outside world, in our setup, would happen here through the management network, too | 13:54 |
svg | I'm not sure why I would need that extra network | 13:54 |
Sam-I-Am | what extra network? | 13:54 |
Sam-I-Am | by default, the outbound container traffic will go through whatever has the default route | 13:55 |
svg | extra network = the container's eth0 network with snat. We have the mgmt network for that already | 13:56 |
Sam-I-Am | that network doesn't see the light of day | 13:56 |
svg | basically, I'd like to disable that snat network, and not deploy it, and have my default gateway point to the network attached to br-mgmt | 13:57 |
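[Editor's note: to make Sam-I-Am's description concrete, a hedged sketch of the host-side networking: eth0 in each container attaches to lxcbr0 (the 10.0.3.x SNAT network that "doesn't see the light of day"), while eth1 attaches to br-mgmt, which bridges onto a physical interface. Interface names and addresses below are illustrative, not taken from the log.]

```
# /etc/network/interfaces fragment on the metal host (illustrative values)
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_ports bond0.10      # physical/VLAN interface carrying mgmt traffic
    address 172.29.236.11
    netmask 255.255.252.0
# Container side: eth0 -> lxcbr0 (10.0.3.x, SNAT for outbound traffic)
#                 eth1 -> br-mgmt (deployment/management access)
```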
*** sdake has joined #openstack-ansible | 13:57 | |
sigmavirus24 | stevelle: ping | 13:57 |
stevelle | pong sigmavirus24 | 13:58 |
sigmavirus24 | did you fix the horizon ssl cipher suite thing you pinged me about? | 13:59 |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 13:59 |
stevelle | sigmavirus24: yes | 13:59 |
sigmavirus24 | thank you kind sir | 13:59 |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 14:01 |
*** britthouser has joined #openstack-ansible | 14:03 | |
*** jaypipes has joined #openstack-ansible | 14:06 | |
*** daneyon has joined #openstack-ansible | 14:09 | |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173317 | 14:13 |
*** daneyon has quit IRC | 14:13 | |
*** markvoelker has joined #openstack-ansible | 14:19 | |
*** markvoelker_ has quit IRC | 14:19 | |
*** stevemar has joined #openstack-ansible | 14:30 | |
*** Mudpuppy has joined #openstack-ansible | 14:33 | |
openstackgerrit | Merged stackforge/os-ansible-deployment: Updated the repo scripts https://review.openstack.org/171777 | 14:37 |
*** alextric_ is now known as alextricity | 14:41 | |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Update tempest to use admin user https://review.openstack.org/173317 | 14:42 |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173358 | 14:45 |
*** stevemar has quit IRC | 14:49 | |
*** stevemar has joined #openstack-ansible | 14:49 | |
svg | Any extra resources available on how to design the hosts/containers layout (rpc_user_config.yml)? I'm looking at a two datacenter setup for starters. | 14:51 |
Sam-I-Am | what do you mean design? | 14:53 |
svg | how many metal hosts are needed, and especially which components to put where | 14:59 |
Sam-I-Am | in general, metal hosts are just compute | 15:00 |
Sam-I-Am | i think swift storage nodes are too | 15:00 |
svg | default setup talks about 3 hosts, where all components are equally put on all, also neutron - other openstack projects tend to put neutron on a separate dedicated hosts | 15:01 |
Sam-I-Am | the default is 3 infra, 1 storage, 1 compute iirc | 15:01 |
svg | yes | 15:01 |
cloudnull | odyssey4me: the issue is "playbooks/inventory/dynamic_inventory.py:20:1: F401 'hashlib' imported but unused" | 15:01 |
alextricity | The AIO script is failing on the XEN Server Information section. Anybody seeing the same thing? | 15:02 |
odyssey4me | alextricity it shouldn't, that has || true - unless you're using a very old clone - which branch is that? | 15:02 |
cloudnull | which went in here https://github.com/stackforge/os-ansible-deployment/commit/5341949f02a1c0ae056e84eeaf4a295ebf4a86f5 | 15:03 |
odyssey4me | svg one more for logging | 15:03 |
alextricity | odyssey4me: it's master. All I get is [ Error Info -275 0 ] | 15:03 |
alextricity | Then [ Status: Failure ] | 15:03 |
svg | I know about the defaults :) | 15:03 |
svg | and network_hosts is a separate group also | 15:04 |
svg | infra_hosts: storage_hosts: log_hosts: network_hosts: compute_hosts: | 15:04 |
odyssey4me | cloudnull odd how that didn't show up in the build result | 15:04 |
svg | the latter computes are obviously separate dedicated ones | 15:04 |
svg | infra, storage, network and log can share certain hosts | 15:05 |
svg | also, log with one host wouldnt be redundant | 15:05 |
openstackgerrit | Kevin Carter proposed stackforge/os-ansible-deployment: Fix lint issue with dynamic_inventory.py https://review.openstack.org/173369 | 15:06 |
openstackgerrit | Jesse Pretorius proposed stackforge/os-ansible-deployment: Add network.yml monitoring playbook https://review.openstack.org/170062 | 15:07 |
svg | say e.g. I have 4 servers to use, spread amongst two datacenters, two in each dc, how would I define those target hosts, and optionally which containers to put where | 15:07 |
odyssey4me | svg container membership is group names is in the environment file | 15:13 |
svg | odyssey4me: I understand that yes | 15:13 |
odyssey4me | changing that will require quite a bit of fiddling though | 15:13 |
svg | I'm talking about how to fill in infra_hosts: storage_hosts: log_hosts: network_hosts: in rpc_user_config.yml | 13:14 |
svg | and possibly using the optional limit_host option | 15:14 |
svg | (or was it limit container) | 15:14 |
odyssey4me | so essentially your user_config needs the hosts in their groups - you can set a host from each DC in whichever groups, and as long as your inter-DC link is great then it'll be no different than everything being in one DC | 15:14 |
odyssey4me | hmm, not sure - multi-DC configurations are something we have planned to tackle after kilo releases | 15:15 |
mattt | svg: sorry if i'm stating obvious, but we typically spread containers across 3 nodes, w/ logging going on a dedicated host (for IO reasons) | 15:23 |
mattt | svg: rabbitmq/galera clusters across datacentres could be interesting, and also keep in mind galera should have an odd # of nodes | 15:30 |
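[Editor's note: a hedged sketch of the rpc_user_config.yml layout svg is asking about, applying the advice above (three infra nodes, dedicated log host) across two DCs. Hostnames and IPs are invented; the odd third infra member follows mattt's galera note.]

```
infra_hosts:
  dc1-infra01:
    ip: 172.29.236.11
  dc1-infra02:
    ip: 172.29.236.12
  dc2-infra01:        # third member keeps galera/rabbitmq at an odd count
    ip: 172.29.237.11
log_hosts:
  dc1-logger01:       # dedicated host, for IO reasons
    ip: 172.29.236.21
compute_hosts:
  dc1-compute01:
    ip: 172.29.236.31
  dc2-compute01:
    ip: 172.29.237.31
```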
*** jwagner_away is now known as jwagner | 15:39 | |
*** sacharya has joined #openstack-ansible | 15:41 | |
*** markvoelker has quit IRC | 15:44 | |
openstackgerrit | Matthew Kassawara proposed stackforge/os-ansible-deployment: Use proposed/kilo branch instead of master https://review.openstack.org/173397 | 15:47 |
*** stevemar has quit IRC | 15:49 | |
*** stevemar has joined #openstack-ansible | 15:49 | |
openstackgerrit | Matt Thompson proposed stackforge/os-ansible-deployment: Use different passwords for admin and keystone users https://review.openstack.org/173358 | 15:50 |
b3rnard0 | is anyone else seeing issues with OS etherpad? i can't get anything to load | 15:55 |
Sam-I-Am | b3rnard0: you need to clear your cache | 15:55 |
Sam-I-Am | i had that problem this morning | 15:56 |
b3rnard0 | cool thanks for the pointer | 15:56 |
*** daneyon has joined #openstack-ansible | 15:58 | |
cloudnull | helo peoples | 16:00 |
Sam-I-Am | EHLO ? | 16:00 |
* Sam-I-Am speaks in smtp | 16:01 | |
cloudnull | # /quit | 16:01 |
cloudnull | ready for another exciting day of bug sifting ? | 16:01 |
Apsu | Sam-I-Am: I don't want your enhanced greetings. I'll take the standard old SMTP thanks. | 16:01 |
b3rnard0 | hello | 16:01 |
Apsu | cloudnull: Bug sifting is life | 16:01 |
Sam-I-Am | Apsu: HELO | 16:01 |
Apsu | Sam-I-Am: +1 | 16:02 |
b3rnard0 | when are we going to get meetbot in here so he can handle bug triage? | 16:02 |
Sam-I-Am | Apsu: mail from: | 16:02 |
*** daneyon has quit IRC | 16:02 | |
b3rnard0 | https://etherpad.openstack.org/p/openstack_ansible_bug_triage.2015-04-14-16.00 | 16:02 |
cloudnull | i thought you were the meat bot b3rnard0 | 16:02 |
Sam-I-Am | rcpt to | 16:02 |
cloudnull | :) | 16:02 |
Sam-I-Am | cloudnull: he's hipsterbot | 16:02 |
hughsaunders | b3rnard0: you should create meatbot that orders lunch | 16:03 |
cloudnull | oh thats where i went wrong | 16:03 |
cloudnull | ^ hughsaunders +1 | 16:03 |
b3rnard0 | no more soup for you, hughsaunders | 16:03 |
cloudnull | b3rnard0 action item | 16:03 |
cloudnull | so without further ado | 16:04 |
cloudnull | https://bugs.launchpad.net/openstack-ansible/+bug/1441363 | 16:04 |
openstack | Launchpad bug 1441363 in openstack-ansible "nf_conntrack schould be unloaded on swift object server" [Undecided,New] | 16:04 |
cloudnull | "causing nf_conntrack to be violated" poor nf_conntrack | 16:04 |
cloudnull | this seems like a sensible request. | 16:05 |
Sam-I-Am | yes, it does | 16:05 |
cloudnull | andymccr: you around ? | 16:05 |
andymccr | yeh im reading it now | 16:05 |
Apsu | I disagree with this request. It assumes that swift object servers will be all by themselves owning an entire physical host. While that may be the current state or even a mostly desired state, it also reduces flexibility and really isn't necessary at all. | 16:06 |
Apsu | If you've got TIME_WAIT issues, set the reuse sysctl. | 16:06 |
Apsu | If you've got extremely aggressive connections, raise the conntrack limit or set the recycle sysctl. | 16:06 |
andymccr | i think Apsu is correct, as an example or possibility we may deploy rsyslog containers | 16:07 |
andymccr | or the swift hosts | 16:07 |
Apsu | This isn't a new problem and it's an inflexible hack to just remove conntrack. | 16:07 |
andymccr | or some other generic container that people may require | 16:07 |
Apsu | And a really old way of trying to solve connection problems in Linux :P | 16:07 |
andymccr | it also makes aio testing difficult | 16:07 |
Apsu | ^ | 16:07 |
*** Bjoern__ has joined #openstack-ansible | 16:07 | |
cloudnull | boom! "wont-fix'd" | 16:07 |
Apsu | +1 | 16:08 |
andymccr | can we resolve the actual issue in a different way though? | 16:08 |
cloudnull | so can we or should we set the recycle by default ? | 16:08 |
Apsu | Yes. Set the reuse sysctl. | 16:08 |
andymccr | ok cool so we can fix that | 16:08 |
Apsu | Definitely not recycle. That's a last ditch effort on super busy boxen. | 16:08 |
Apsu | It can cause TCP state errors and lost connections | 16:08 |
cloudnull | 's/recycle/reuse/' | 16:08 |
Apsu | +2 | 16:08 |
andymccr | yeh ok cool so leave it open with that as the triage solution | 16:09 |
cloudnull | Apsu can you drop some knowledge in that issue. | 16:09 |
Apsu | Sure. | 16:09 |
andymccr | thanks Apsu! | 16:09 |
Apsu | np | 16:09 |
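[Editor's note: a hedged sketch of Apsu's suggestion: prefer tcp_tw_reuse and a larger conntrack table over unloading nf_conntrack. The file path and the conntrack ceiling are illustrative values, not from the log.]

```shell
# Write a sysctl fragment implementing the triage outcome above.
cat <<'EOF' > /tmp/99-swift-tcp.conf
# Reuse TIME_WAIT sockets for new outbound connections (safe, unlike tw_recycle)
net.ipv4.tcp_tw_reuse = 1
# Raise the conntrack table ceiling instead of unloading nf_conntrack
net.netfilter.nf_conntrack_max = 262144
EOF
# Apply (requires root). tcp_tw_recycle is deliberately left alone -- it can
# break NATed clients and cause the TCP state errors mentioned above.
# sysctl -p /tmp/99-swift-tcp.conf
```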
cloudnull | what do we think the priority is on that ? | 16:10 |
cloudnull | also do we want to backport to juno ? | 16:10 |
andymccr | sure | 16:10 |
andymccr | it seems sensible to me | 16:10 |
andymccr | its causing issues so lets backport it | 16:10 |
cloudnull | ok | 16:10 |
svg | mattt: odyssey4me fyi both dc's have a 10gb interlink | 16:10 |
cloudnull | i've set medium and confirmed the issue. Apsu once you drop some knowledge in that can you change it to triaged? | 16:11 |
cloudnull | next issue: https://bugs.launchpad.net/openstack-ansible/+bug/1441800 | 16:11 |
openstack | Launchpad bug 1441800 in openstack-ansible "add secure_proxy_ssl_header for heat" [Undecided,New] | 16:11 |
*** sdake_ has joined #openstack-ansible | 16:11 | |
*** sdake_ has quit IRC | 16:11 | |
Apsu | cloudnull: can do | 16:11 |
cloudnull | tyvm sir | 16:12 |
*** sdake_ has joined #openstack-ansible | 16:12 | |
cloudnull | so this also seems like a sensible request. looks like a templated config option would do the trick. | 16:14 |
cloudnull | is miguelgrinberg around ? | 16:14 |
miguelgrinberg | cloudnull: yup | 16:14 |
cloudnull | can you make this go https://bugs.launchpad.net/openstack-ansible/+bug/1441800 ? | 16:14 |
openstack | Launchpad bug 1441800 in openstack-ansible "add secure_proxy_ssl_header for heat" [Undecided,New] | 16:14 |
*** markvoelker has joined #openstack-ansible | 16:14 | |
miguelgrinberg | I certainly can | 16:14 |
*** sdake has quit IRC | 16:15 | |
cloudnull | i confirmed the issue and tagged it medium for both juno and trunk. | 16:16 |
cloudnull | tyvm miguelgrinberg | 16:16 |
miguelgrinberg | sure, I'll look into it today, just back from pycon ready to start on something | 16:17 |
cloudnull | much appreciated miguelgrinberg | 16:17 |
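[Editor's note: for reference, a hedged sketch of what the requested heat.conf option looks like when heat sits behind an SSL-terminating load balancer. The header name depends on what your load balancer actually sets, so treat the value as an assumption.]

```
[DEFAULT]
# Header set by the SSL-terminating proxy; X-Forwarded-Proto is the common
# choice, but check what your load balancer sends.
secure_proxy_ssl_header = X-Forwarded-Proto
```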
cloudnull | is regex master Bjoern__ here ? | 16:18 |
cloudnull | https://bugs.launchpad.net/openstack-ansible/+bug/1442239 | 16:18 |
openstack | Launchpad bug 1442239 in openstack-ansible "Commit c5d488059d9407f1b9b96552159ffc298c8dc547 is invalidating sshd_config" [Undecided,New] | 16:18 |
*** Bjoern__ is now known as BjoernT | 16:18 | |
BjoernT | lol | 16:18 |
cloudnull | ^ there have been some updates on that issue from people in the community . did you see that ? | 16:18 |
BjoernT | Regex for dummies helps, lol | 16:18 |
BjoernT | I did see approd0 request | 16:18 |
cloudnull | it seems that it's confirmed that the issues are ansible related. which i assume is "1.6.10"? | 16:19 |
cloudnull | in master we're running "v1.9.0.1-1" curious if we update to the latest stable if we see that same issue? | 16:20 |
BjoernT | in the end we agree that the missing linefeed triggers this issue and we're just talking about how to fix it | 16:20 |
cloudnull | yup | 16:21 |
BjoernT | supposedly not. I did only test with our 1.6.10 ansible version | 16:21 |
cloudnull | im of the opinion that we update to the latest stable ansible. | 16:22 |
cloudnull | we only stayed on 1.6.10 because 1.7 had delegation issues. | 16:22 |
Apsu | So you're saying the lineinfile module is behaving differently on whole line matching depending on if you have a newline or not, i.e., if it's the last line in the file or not? | 16:22 |
hughsaunders | deja-vu on that bug, I feel like I've fixed it before | 16:23 |
Apsu | In python regex that behavior is generally dictated by the presence or absence of $, but it looks like the regexp in question doesn't have that. | 16:23 |
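[Editor's note: the missing-linefeed failure mode BjoernT mentions can be reproduced without Ansible. A minimal sketch follows; the file path and config lines are invented, and the guard idiom is one possible workaround to run before lineinfile on the old Ansible version, not the upstream fix.]

```shell
# Reproduce: appending to a file whose last line lacks a trailing newline
# fuses the two lines, producing an invalid sshd_config entry.
printf 'UseDNS no' > /tmp/sshd_demo              # note: no trailing \n
printf 'PermitRootLogin no\n' >> /tmp/sshd_demo
cat /tmp/sshd_demo                               # -> UseDNS noPermitRootLogin no

# Workaround sketch: ensure the file ends in a newline before appending.
# (read fails when tail's single byte is not a newline, triggering the echo.)
printf 'UseDNS no' > /tmp/sshd_demo
tail -c1 /tmp/sshd_demo | read -r _ || echo >> /tmp/sshd_demo
printf 'PermitRootLogin no\n' >> /tmp/sshd_demo
```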
Sam-I-Am | then you didnt fix it? :) | 16:23 |
andymccr | https://bugs.launchpad.net/openstack-ansible/+bug/1416626 its basically this. | 16:25 |
openstack | Launchpad bug 1416626 in openstack-ansible trunk "invalid sshd_config entry after ssh_config.yml task runs" [Low,Confirmed] - Assigned to Miguel Grinberg (miguelgrinberg) | 16:25 |
andymccr | there is a link to the ansible bug | 16:25 |
miguelgrinberg | the ansible bug has been fixed in a newer release than the one we use | 16:25 |
andymccr | ^ that | 16:26 |
*** markvoelker has quit IRC | 16:26 | |
cloudnull | so if we backport our ansible-bootstrap.sh script to juno, we can be done with it | 16:26 |
miguelgrinberg | have you verified the version we use in master has this fix? I haven't | 16:27 |
Sam-I-Am | i think we use 1.9.0? | 16:28 |
cloudnull | yes. | 16:28 |
miguelgrinberg | I think this wasn't a released fix back when I tested it, it was in master, haven't looked since then if the fix got released | 16:28 |
cloudnull | v1.9.0.1-1 | 16:28 |
cloudnull | ok decreed. we'll backport that and do the things | 16:30 |
cloudnull | so next: https://bugs.launchpad.net/openstack-ansible/+bug/1442366 | 16:31 |
openstack | Launchpad bug 1442366 in openstack-ansible "nova user is removed from libvirtd group" [Undecided,New] | 16:31 |
cloudnull | so this seems like something we need to dig into for juno | 16:34 |
cloudnull | idk that it effects trunk | 16:34 |
Sam-I-Am | hmmm | 16:34 |
BjoernT | fyi, I was not able to reproduce the issue | 16:35 |
BjoernT | from what I heard, a wrong ansible version (lower than 1.6.10) was causing this issue. | 16:35 |
Sam-I-Am | on master now, i have nova in the libvirtd group | 16:35 |
Sam-I-Am | dont have a juno box atm | 16:36 |
BjoernT | yeah, I read the playbooks and was not seeing any way how that can happen | 16:36 |
cloudnull | ok so. do we have anything else we want to talk about here? | 16:40 |
Sam-I-Am | cloudnull: did you want to bring up the neutron bits thing? | 16:41 |
Sam-I-Am | or just on this bug... | 16:41 |
* Sam-I-Am needs more coffee | 16:41 | |
cloudnull | ah yes ,this one https://bugs.launchpad.net/openstack-ansible/+bug/1443927 | 16:41 |
openstack | Launchpad bug 1443927 in openstack-ansible "Neutron configuration files should depend on container type" [Undecided,New] | 16:41 |
cloudnull | i say fix in kilo | 16:41 |
cloudnull | dont bp to juno | 16:41 |
Sam-I-Am | makes sense | 16:42 |
Sam-I-Am | its not fixing stuff that is breaking | 16:42 |
cloudnull | its just a update to best practices. | 16:43 |
Sam-I-Am | da | 16:43 |
Sam-I-Am | only case for l3/meta agents on compute nodes would be if we use dvr | 16:44 |
Sam-I-Am | which implies... dun dun dun... ovs. | 16:44 |
cloudnull | wont-fix | 16:44 |
cloudnull | :) | 16:44 |
Sam-I-Am | exactly | 16:44 |
Sam-I-Am | everyone has a soft spot for ovs :) | 16:44 |
openstackgerrit | Merged stackforge/os-ansible-deployment: Fix lint issue with dynamic_inventory.py https://review.openstack.org/173369 | 16:44 |
cloudnull | so confirmed and targeted to 11 | 16:44 |
cloudnull | and we're done here | 16:44 |
cloudnull | thanks everyone . | 16:45 |
Sam-I-Am | excellent. | 16:45 |
* cloudnull goes to eat | 16:45 | |
Sam-I-Am | mmmfood | 16:45 |
Sam-I-Am | i should do that | 16:45 |
Sam-I-Am | also might need to rebase all of these patches for the lint fix | 16:45 |
BjoernT | can I get the status of https://bugs.launchpad.net/openstack-ansible/+bug/1428833 ? | 16:45 |
openstack | Launchpad bug 1428833 in openstack-ansible trunk "Add novnc console support in favor of spice" [High,Triaged] - Assigned to Andy McCrae (andrew-mccrae) | 16:45 |
openstackgerrit | Miguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP] https://review.openstack.org/173067 | 16:46 |
b3rnard0 | BjoernT: we are officially done but that has been triaged and assigned | 16:46 |
BjoernT | Yeah I need to get someone working on it | 16:46 |
BjoernT | I fixed all the issues, but the spice mouse issue is still there and won't go away, it looks like | 16:47 |
b3rnard0 | andymccr appears to be handling it for juno; we just need to determine the target milestone/priority | 16:47 |
andymccr | regarding the previous comment about grub, there isn't anything we can fix there | 16:47 |
andymccr | the other comment is new so i havent looked at it yet | 16:47 |
BjoernT | the primary issue is windows, not linux | 16:47 |
BjoernT | so we either spend more time than I already did to get the mouse synchronized with the spice-html5 proxy, or we talk about getting vnc back | 16:48 |
BjoernT | yesterday I tested the libvirt support for vdagent, enabling it in libvirt and inside the windows guest, but no luck | 16:49 |
openstackgerrit | Matthew Kassawara proposed stackforge/os-ansible-deployment: Use proposed/kilo branch instead of master https://review.openstack.org/173397 | 16:49 |
BjoernT | Apart from the fact that openstack does not support enabling vdagent in libvirt, I did use an ugly workaround to enable it in the libvirt instance | 16:49 |
openstackgerrit | Matthew Kassawara proposed stackforge/os-ansible-deployment: Update keystone middleware in neutron for Kilo https://review.openstack.org/173318 | 16:50 |
BjoernT | b3rnard0: let's talk once you are back | 16:51 |
alextricity | Has anyone seen this error? http://pastebin.com/rFp3hMqN | 16:57 |
alextricity | It looks like the AIO is having a hard time creating containers | 16:57 |
alextricity | Something about the template, but I don't have enough info to make it out | 16:57 |
Sam-I-Am | alextricity: version? | 17:00 |
alextricity | Master | 17:00 |
Sam-I-Am | resources available? | 17:02 |
alextricity | It's a rackspace standard-16 | 17:05 |
alextricity | VM | 17:05 |
alextricity | 15GB RAM, 6vCPUS | 17:06 |
alextricity | 620GB system disk | 17:06 |
alextricity | Maybe someone else can spin up an instance and give it a go. I'm following these instructions: https://github.com/stackforge/os-ansible-deployment/blob/master/development-stack.rst | 17:07 |
*** sdake has joined #openstack-ansible | 17:07 | |
*** jwagner is now known as jwagner_away | 17:07 | |
Sam-I-Am | have you tried rerunning that playbook? | 17:08 |
alextricity | Well, I'm simply running the gate script. | 17:09 |
alextricity | But if you are asking if I tried rerunning the lxc-create play, then no | 17:10 |
alextricity | I have not | 17:10 |
*** sdake_ has quit IRC | 17:11 | |
Sam-I-Am | i'd do that first. could be a fluke. | 17:13 |
*** sdake_ has joined #openstack-ansible | 17:14 | |
*** sdake has quit IRC | 17:18 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 17:30 | |
*** sdake has joined #openstack-ansible | 17:31 | |
*** javeriak has joined #openstack-ansible | 17:31 | |
*** sdake_ has quit IRC | 17:35 | |
*** javeriak has quit IRC | 17:39 | |
*** javeriak has joined #openstack-ansible | 17:40 | |
*** sdake_ has joined #openstack-ansible | 17:44 | |
*** sdake has quit IRC | 17:45 | |
cloudnull | who has a v10 install that they want to break? https://gist.github.com/cloudnull/b3471271e78bb82938d4 <- WIP upgrade script to kilo - should work (TM) | 17:47 |
openstackgerrit | Tom Cameron proposed stackforge/os-ansible-deployment: Kilofication of Neutron playbooks https://review.openstack.org/173435 | 17:47 |
cloudnull | BOOM rackertom in da house! | 17:47 |
*** sdake_ has quit IRC | 17:48 | |
rackertom | Had to stop my birds from eating actual paint for a minute there. | 17:48 |
*** sdake has joined #openstack-ansible | 17:48 | |
cloudnull | hahahaha | 17:48 |
cloudnull | Sam-I-Am, can you sync up with rackertom on https://bugs.launchpad.net/openstack-ansible/+bug/1443927 | 17:48 |
openstack | Launchpad bug 1443927 in openstack-ansible trunk "Neutron configuration files should depend on target location" [Low,Confirmed] | 17:48 |
cloudnull | that way we can get those fixes into the neutron kilo work, if at all possible | 17:49 |
rackertom | Is that a bug where the playbooks are putting configs on all hosts receiving a neutron component? | 17:49 |
rackertom | Sorry that was supposed to say "...all configs on all hosts..." | 17:50 |
*** sdake has quit IRC | 17:52 | |
*** sdake has joined #openstack-ansible | 17:52 | |
svg | odyssey4me: in what way does oad not have multi-dc configurations? what would be missing for that? | 17:53 |
cloudnull | rackertom: no, that's not a bug, it's cleanup. | 17:53 |
cloudnull | we're dropping config in places where it's not needed. | 17:53 |
cloudnull | and when we branch out beyond ml2 / linuxbridge, we're going to have to do some of that to clean things up | 17:54 |
cloudnull | svg, if you're just looking to orchestrate between multiple DCs that have internal access to both sides, i.e. a VPN mesh, this shouldn't be a problem in OSAD. | 17:55 |
cloudnull | just fill in the user_config.yml with the IP addresses of the hosts. | 17:55 |
cloudnull | but if you're looking for region x, region y for use within openstack for the various DCs then that will be a post Kilo release item . | 17:56 |
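For reference, the host-to-IP mapping cloudnull describes lives in the user config file. A minimal illustrative sketch follows; the group names, host names, and addresses here are examples only, so check the example config files shipped with your branch for the exact schema:

```yaml
# Illustrative fragment of /etc/rpc_deploy/rpc_user_config.yml (Juno)
# or /etc/openstack_deploy/openstack_user_config.yml (Kilo).
# Host names and addresses below are placeholders.
infra_hosts:
  infra1:
    ip: 172.29.236.101
  infra2:
    ip: 172.29.236.102
compute_hosts:
  compute1:
    ip: 172.29.236.111
```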
svg | I have a 10gb connection between both dc's :) | 18:01 |
svg | the idea would be to use availability zones | 18:02 |
svg | and configure separate storage hosts in each dc with a separate ceph backend | 18:02 |
*** javeriak has quit IRC | 18:04 | |
*** bilal has quit IRC | 18:09 | |
*** jwagner_away is now known as jwagner | 18:13 | |
*** daneyon has joined #openstack-ansible | 18:16 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 18:16 | |
*** daneyon_ has joined #openstack-ansible | 18:20 | |
cloudnull | svg: that should work. AZs already work, but they're not something that OSAD sets up for you. | 18:23 |
*** daneyon has quit IRC | 18:23 | |
svg | not needed, that's more os config afterwards | 18:23 |
cloudnull | for sure. | 18:23 |
*** markvoelker has joined #openstack-ansible | 18:25 | |
*** sdake_ has joined #openstack-ansible | 18:25 | |
*** jwagner is now known as jwagner_away | 18:26 | |
*** sdake has quit IRC | 18:29 | |
*** jwagner_away is now known as jwagner | 18:29 | |
*** markvoelker has quit IRC | 18:31 | |
*** sdake has joined #openstack-ansible | 18:39 | |
*** sdake_ has quit IRC | 18:43 | |
openstackgerrit | Miguel Alejandro Cantu proposed stackforge/os-ansible-deployment: Implement Ceilometer[WIP] https://review.openstack.org/173067 | 18:48 |
*** javeriak has joined #openstack-ansible | 18:57 | |
*** erikmwil_ has joined #openstack-ansible | 19:01 | |
*** erikmwilson is now known as Guest22108 | 19:01 | |
*** erikmwil_ is now known as erikmwilson | 19:01 | |
*** jwagner is now known as jwagner_away | 19:01 | |
*** erikmwilson_ has joined #openstack-ansible | 19:01 | |
*** javeriak has quit IRC | 19:05 | |
*** javeriak_ has joined #openstack-ansible | 19:07 | |
javeriak_ | hey, does anyone know if the ip's assigned to containers can be pre-configured? | 19:08 |
*** markvoelker has joined #openstack-ansible | 19:29 | |
*** markvoelker has quit IRC | 19:34 | |
*** sdake_ has joined #openstack-ansible | 19:37 | |
mattt | alextricity: you still there ? | 19:38 |
*** BjoernT has quit IRC | 19:38 | |
mattt | alextricity: not sure it's related, but make sure you use the PVHVM image | 19:38 |
*** rromans has quit IRC | 19:40 | |
*** sdake has quit IRC | 19:41 | |
*** jwagner_away is now known as jwagner | 19:46 | |
*** sdake has joined #openstack-ansible | 19:51 | |
*** sdake_ has quit IRC | 19:54 | |
alextricity | mattt: Thanks. I tried both images, but neither worked for me :( | 19:55 |
alextricity | On another note, is there work being done on the nova_console? | 19:56 |
alextricity | I see that it has been removed from the deployment | 19:57 |
alextricity | If that's the case then it should also be removed from the haproxy config: https://github.com/stackforge/os-ansible-deployment/blob/master/playbooks/vars/configs/haproxy_config.yml | 19:57 |
*** jwagner is now known as jwagner_away | 20:04 | |
*** sdake_ has joined #openstack-ansible | 20:05 | |
*** sdake has quit IRC | 20:07 | |
*** rrrobbb has joined #openstack-ansible | 20:08 | |
alextricity | javeriak_: I guess you could modify your inventory file before running the plays. | 20:10 |
alextricity | I'm not sure if the inventory_manage script does that | 20:10 |
alextricity | but if not you can modify them yourself | 20:10 |
*** rromans has joined #openstack-ansible | 20:11 | |
javeriak_ | alextricity: I suppose /etc/rpc_deploy/rpc_user_config.yml is the appropriate place to put these? | 20:13 |
alextricity | javeriak: Not necessarily. I'm talking about /etc/openstack_deploy/openstack_inventory.json | 20:14 |
alextricity | or in your case, /etc/rpc_deploy/rpc_inventory.json | 20:15 |
javeriak_ | alextricity: nope, I don't have the openstack_deploy directory, and as far as I can tell /etc/rpc_deploy/rpc_inventory.json gets generated after the play runs and the containers get created; I don't have it on a new setup | 20:18 |
alextricity | javeriak_: Ah. Then you're wanting to change the IPs of existing containers? | 20:20 |
*** sdake has joined #openstack-ansible | 20:23 | |
javeriak_ | alextricity: Nope, I would like to define them before they get created | 20:24 |
javeriak_ | That's in case they aren't already pre-defined, and I don't see that being done anywhere | 20:25 |
*** sdake_ has quit IRC | 20:25 | |
alextricity | javeriak_: You can generate the inventory file before running the plays that create the containers by running playbooks/inventory/dynamic_inventory.py. Then go in there and edit the IPs from the assigned ones to your desired ones. | 20:26 |
alextricity | You should see entries for "container_address" for each container in the inventory file | 20:27 |
alextricity | Other than that, I don't know any other ways | 20:27 |
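The manual-edit approach described above can be sketched as a small script. `set_container_ip` is a hypothetical helper, not part of the project; the `_meta`/`hostvars` layout is standard Ansible dynamic-inventory JSON and `container_address` is the key named in the discussion, but verify both against your generated inventory file before relying on this:

```python
import json

def set_container_ip(inventory_path, container_name, new_ip):
    """Override one container's container_address in a generated
    dynamic-inventory JSON file, before the container-create plays run."""
    with open(inventory_path) as f:
        inventory = json.load(f)
    # Per-host variables live under _meta/hostvars in Ansible
    # dynamic-inventory output.
    hostvars = inventory["_meta"]["hostvars"]
    hostvars[container_name]["container_address"] = new_ip
    with open(inventory_path, "w") as f:
        json.dump(inventory, f, indent=2)
```

Usage would be something like `set_container_ip("/etc/rpc_deploy/rpc_inventory.json", "infra1_galera_container-abc123", "172.29.236.50")`, run after `dynamic_inventory.py` generates the file and before the lxc-create plays.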
*** rrrobbb has quit IRC | 20:28 | |
*** jwagner_away is now known as jwagner | 20:29 | |
*** markvoelker has joined #openstack-ansible | 20:33 | |
javeriak_ | alextricity: okay cool, thanks | 20:36 |
*** markvoelker has quit IRC | 20:37 | |
cloudnull | alextricity the nova_spice_console has been renamed to nova_console . | 20:52 |
alextricity | cloudnull: yeah I see that now :/ was using an old inventory | 20:52 |
alextricity | lol | 20:52 |
cloudnull | it happens. :) | 20:52 |
*** jwagner is now known as jwagner_away | 20:59 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 21:03 | |
*** KLevenstein has joined #openstack-ansible | 21:05 | |
*** KLevenstein has quit IRC | 21:17 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 21:17 | |
*** sacharya has quit IRC | 21:20 | |
*** daneyon has joined #openstack-ansible | 21:30 | |
*** daneyon_ has quit IRC | 21:32 | |
*** daneyon has quit IRC | 21:44 | |
*** Mudpuppy_ has joined #openstack-ansible | 21:45 | |
*** Mudpuppy has quit IRC | 21:49 | |
*** Mudpuppy_ has quit IRC | 21:49 | |
*** KLevenstein has joined #openstack-ansible | 21:50 | |
*** JRobinson__ has joined #openstack-ansible | 21:54 | |
*** KLevenstein has quit IRC | 22:00 | |
*** markvoelker has joined #openstack-ansible | 22:10 | |
*** markvoelker_ has joined #openstack-ansible | 22:10 | |
*** markvoelker has quit IRC | 22:14 | |
*** markvoelker has joined #openstack-ansible | 22:36 | |
*** markvoelker_ has quit IRC | 22:37 | |
*** markvoelker_ has joined #openstack-ansible | 22:41 | |
*** markvoelker has quit IRC | 22:44 | |
*** britthouser has quit IRC | 22:44 | |
*** javeriak has joined #openstack-ansible | 22:46 | |
*** javeriak_ has quit IRC | 22:46 | |
*** erikmwilson has quit IRC | 22:50 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 22:59 | |
*** javeriak has quit IRC | 23:01 | |
*** javeriak has joined #openstack-ansible | 23:02 | |
*** erikmwilson_ is now known as erikmwilson | 23:09 | |
javeriak | hey, I'm pulling the 10.1.2 playbooks and they seem to be generating an aio container in the inventory, which then errors out in the play. This isn't a problem, but I was wondering if it's supposed to be there? | 23:41 |
*** britthouser has joined #openstack-ansible | 23:50 | |
*** markvoelker_ has quit IRC | 23:54 | |
cloudnull | javeriak: check to see if you have something in /etc/openstack_deploy/conf.d/ | 23:57 |
cloudnull | likely you have a swift.yml file in there that is creating it | 23:57 |
cloudnull | thats from the examples. | 23:58 |
javeriak | There is no openstack_deploy under /etc, is that supposed to be there on juno too? | 23:58 |
palendae | On Juno it's rpc_deploy, I think | 23:58 |
javeriak | cloudnull: you mean the deploy node right? | 23:59 |
*** britthouser has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!