openstackgerrit | Nish Patwa(nishpatwa_) proposed openstack/openstack-ansible-ops: Adding influx relay to make the existing monitoring stack highly available(WIP) https://review.openstack.org/390128 | 00:36 |
drifterza | Hello boys | 07:29 |
neith | hello guys | 11:08 |
neith | regarding this page, http://docs.openstack.org/developer/openstack-ansible/install-guide/targethosts-networkconfig.html should I do the network config manually or OSA will do it? | 11:09 |
admin0 | manually | 11:10 |
admin0 | OSA does not touch networking | 11:10 |
neith | admin0: ok | 11:12 |
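As admin0 says, OSA leaves host networking to the deployer; the install guide page neith linked has targets configure their bridges by hand before running any playbooks. A minimal sketch of one such bridge for Ubuntu's /etc/network/interfaces follows — the bond, VLAN tag, and address are illustrative placeholders, not values from this conversation:

```
# /etc/network/interfaces fragment (illustrative values)
# Container management bridge, per the OSA target-host network guide.
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports bond0.10          # placeholder member interface
    address 172.29.236.11          # placeholder management address
    netmask 255.255.252.0
```

The same pattern repeats for the other bridges the guide describes (storage, tunnel, and so on), each on its own VLAN.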
neith | does OSA work on kvm guest instances? for evaluation purposes | 12:03 |
sigmavirus | neith: there's http://git.openstack.org/cgit/openstack/openstack-ansible-ops which has a way of building a "multi-node" deployment of OSA on a single node using kvm/virsh iirc | 12:04 |
sigmavirus | neith: that would probably give you a good way of evaluating OSA | 12:05 |
neith | sigmavirus: sounds cool, but I also wanted to be close from a real deployment using several kvm guests | 12:06 |
neith | sigmavirus: especially dealing with network configuration | 12:07 |
mgariepy | good morning everyone | 13:27 |
cloudnull | mornings | 13:56 |
cloudnull | neith: have a look at https://github.com/openstack/openstack-ansible-ops/tree/master/multi-node-aio -- builds ~14 vms and creates everything using a partition scheme and nic layout which I use in production. | 13:58 |
klamath | can someone give me the command syntax to remove an old swift node from ansible inventory? | 14:01 |
sigmavirus | klamath: from your checkout of openstack-ansible "scripts/inventory-manage -r hostname" | 14:09 |
sigmavirus | er scripts/inventory-manage.py | 14:09 |
klamath | cool, thank you | 14:09 |
sigmavirus | klamath: that script explains to you how to use it | 14:11 |
sigmavirus | klamath: so if you do ./scripts/inventory-manage.py -h | 14:11 |
sigmavirus | It tells you what it can do and how | 14:11 |
klamath | cool, im decoming some swift nodes, i assume remove them from swift.yml, then remove them from inventory, run swift-sync and should be done right? | 14:13 |
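The removal workflow sigmavirus describes can be sketched as below. The hostname is a placeholder, and only the `-h` and `-r` flags mentioned in the conversation are shown; check `-h` output in your own checkout before relying on anything else:

```
# Run from the openstack-ansible checkout on the deployment host.
./scripts/inventory-manage.py -h               # show usage and available options
./scripts/inventory-manage.py -r swift-node2   # remove placeholder host "swift-node2" from inventory
```

After removing the host from conf.d/swift.yml and the inventory, re-running the swift playbooks rebalances the rings without the decommissioned node.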
sigmavirus | I'm no swift expert, I can't verify that. | 14:14 |
cloudnull | klamath: that should be all that's needed. -cc andymccr_ | 14:16 |
openstackgerrit | Marc Gariépy proposed openstack/openstack-ansible-lxc_hosts: LXC version to 2.0.5 on CentOS https://review.openstack.org/390353 | 14:17 |
jwitko | Hey All, My setup-openstack.yml keeps failing on the swift install. The testing/check script is failing. I've detailed my config, the errors, and the host here http://cdn.pasteraw.com/a1cl6gn0u1i236uufjgwaic3676wtz9 It looks like something is not happy in the storage ring? I've tried to follow the docs as closely as possible | 14:23 |
jwitko | this is the latest newton release on ubuntu 16.04, fyi | 14:24 |
cloudnull | o/ jwitko looking now | 14:24 |
jwitko | hey cloudnull! thank you! | 14:24 |
jwitko | are you in Barcelona? | 14:24 |
cloudnull | no sadly. | 14:25 |
cloudnull | is this a new setup? | 14:25 |
kysse_ | whats happenin in barcelona? | 14:26 |
jwitko | cloudnull, yes this is a brand new setup | 14:26 |
mgariepy | can someone review this please ? https://review.openstack.org/390353 | 14:32 |
cloudnull | mgariepy: looking | 14:34 |
jwitko | also looking | 14:34 |
cloudnull | jwitko: it looks like a node is missing from the righ | 14:34 |
cloudnull | *ring | 14:34 |
mgariepy | great :) thanks guys | 14:34 |
cloudnull | mgariepy: LGTM | 14:35 |
jwitko | cloudnull, Yea but why/how? I'm very new to swift so I don't fully understand the ring but I thought I configured OSA correctly | 14:35 |
jwitko | mgariepy, I don't have the ability to review this one :( | 14:35 |
jwitko | but LGTM as well :) | 14:35 |
mgariepy | jwitko, well it's only to use lxc-2.0.5 instead of the 2.0.4 on centos :) | 14:37 |
jwitko | haha yea I can see the code change. But I think onlu cloudnull has permission to review | 14:37 |
jwitko | only* | 14:37 |
cloudnull | jwitko: you can +1 :) | 14:38 |
jwitko | Really? I don't see the normal button | 14:38 |
jwitko | oh, doh. | 14:38 |
mgariepy | I'm currently having some issues with my containers not starting eth0 at boot sometimes. | 14:38 |
cloudnull | or -1 -- all of which are important. | 14:38 |
jwitko | i see it, thanks | 14:38 |
jwitko | +1 | 14:38 |
jwitko | cloudnull, so yea was there something I was supposed to do in addition for setting up this host? | 14:42 |
cloudnull | still thinking. | 14:43 |
cloudnull | jwitko: so if this is new and there's no data within it maybe just nuke the rings and rerun the playbooks ? | 14:48 |
jwitko | sure, I'm down for that | 14:48 |
jwitko | is there an ansible way of nuking? | 14:48 |
cloudnull | I'm not sure how / where it went off the rails | 14:48 |
jwitko | well I've run it like a dozen times trying to get it to work | 14:48 |
jwitko | so I'm sure this is my fault | 14:48 |
cloudnull | ansible -m shell -a 'rm -rf /etc/swift' swift_all; openstack-ansible os-swift-install.yml | 14:49 |
jwitko | re-running | 14:49 |
mgariepy | cloudnull, can you +w https://review.openstack.org/#/c/390353/ or I need to find someone else ? :D | 14:54 |
cloudnull | mgariepy: sadly we need another core to +2 before we can workflow it | 14:54 |
mgariepy | Oh, I thought we only needed one +2 | 14:55 |
cloudnull | and i believe most of them are at the summit | 14:55 |
cloudnull | mgariepy: no we need 2 +2's then it can be +w | 14:55 |
mgariepy | cloudnull, how come you are not at the summit ? | 14:55 |
cloudnull | good question :) | 14:55 |
mgariepy | you missed the plane ? | 14:56 |
mgariepy | hehe | 14:56 |
cloudnull | I wish. Then I'd have myself to blame. | 14:56 |
jwitko | cloudnull, same failure http://cdn.pasteraw.com/2wfq5s2suk3v6qcv9xzyouur1shd5fm | 15:00 |
cloudnull | jwitko: are the drives mounted ? | 15:01 |
cloudnull | k2_swift{1,3} on all of the nodes? | 15:01 |
cloudnull | within /srv/node | 15:01 |
jwitko | yes, but while creating a paste to show you that | 15:03 |
jwitko | I just saw an error | 15:03 |
jwitko | http://cdn.pasteraw.com/hwkxoltt1elqcdkbb0yuzbhv0wuleu8 | 15:03 |
jwitko | should /srv/node be owned by swift:swift or root:root ? | 15:04 |
jwitko | wow... wtf | 15:05 |
jwitko | cdn.pasteraw.com/slcvzzwunyko28vuf5b026on75hz7la | 15:05 |
cloudnull | http://cdn.pasteraw.com/p4cb81rspfpuaxd8xhgmtau8uu4lt5a | 15:05 |
cloudnull | mine are all owned by swift:swift | 15:05 |
jwitko | ok, /srv/node i had owned by root. I changed that but these other errors about the iscsi directories are alarming | 15:07 |
jwitko | the directories are all mounted http://cdn.pasteraw.com/cd7vpksti8ojaiogs5izjweg7n4kf2u | 15:08 |
cloudnull | is that mounted on all 3 of the swift nodes? | 15:11 |
jwitko | There is only one swift node | 15:11 |
jwitko | http://cdn.pasteraw.com/a1cl6gn0u1i236uufjgwaic3676wtz9 -- config here | 15:11 |
cloudnull | is 10.89.112.3 == swift-node1 ? | 15:12 |
jwitko | yes | 15:12 |
jwitko | thats the br-storage interface | 15:12 |
cloudnull | ok. | 15:12 |
jwitko | the interface is up and it can ping its gateway no problem | 15:13 |
cloudnull | ok. | 15:13 |
jwitko | so I unmounted those LUNs | 15:13 |
jwitko | and the directory is still screwed up | 15:13 |
jwitko | I can't cd to it or anything | 15:13 |
cloudnull | interesting. | 15:13 |
cloudnull | cold boot ? | 15:13 |
jwitko | i was able to delete them | 15:14 |
cloudnull | maybe iscsi is having a bad time . | 15:14 |
jwitko | no this is the OS, I unmounted the dirs no problem | 15:14 |
cloudnull | do you have something executing on that path and is there anything in a sleepwait state? | 15:14 |
jwitko | no, but check this out. I just touched a file in those dirs as a test | 15:15 |
jwitko | and I guess they needed to be "primed" so to speak | 15:16 |
jwitko | because the file created and now I can cd and ls and everything | 15:16 |
jwitko | running again to see if that fixes it | 15:16 |
cloudnull | ha. gremlins | 15:17 |
jwitko | this hasn't happened on this SAN with ext4 formatted volumes | 15:17 |
jwitko | might be something going on with this san/xfs | 15:17 |
jwitko | it didn't fix it... but I also didn't do the rm -rf /etc/swift stuff | 15:24 |
jwitko | removing the existing configs and running again | 15:24 |
jwitko | cloudnull, ok so with the directories working/fixed and the permissions matched for swift to own /srv/node and below I am still receiving the same error | 15:35 |
jwitko | and this is after I removed all the existing config | 15:35 |
jwitko | the 6000,6001,6002 ports are open and listening. I can connect to them. | 15:37 |
cloudnull | hum. if you run the swift ring command manually does it tell you anything else? | 15:38 |
jwitko | sorry not sure how to do that ? | 15:39 |
cloudnull | sadly the people most knowledgable about the ring and OSA are at the summit. | 15:39 |
cloudnull | the command : "/etc/swift/scripts/swift_rings_check.py -f /etc/swift/scripts/account.contents" | 15:40 |
jwitko | yea, thats the one that fails | 15:41 |
cloudnull | I'm looking through https://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2 now | 15:41 |
cloudnull | trying to figure out what/where/why | 15:41 |
jwitko | yea, I did that last night | 15:42 |
jwitko | https://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2#L107 | 15:42 |
jwitko | so it only runs the check on the first inventory item for swift_hosts | 15:43 |
jwitko | when: inventory_hostname == groups['swift_hosts'][0] | 15:43 |
jwitko | no sorry thats not right. | 15:43 |
jwitko | cloudnull, I may have found a possible cause | 15:48 |
cloudnull | Looking at the config I think its the config w/ r1,2,3 | 15:49 |
jwitko | well I tried to recreate the command to build the ring | 15:49 |
jwitko | to see what was happening | 15:49 |
jwitko | http://cdn.pasteraw.com/k3gkzpv0ptn785eb97182obklxpmiwt | 15:49 |
jwitko | I didn't have repl_number set in conf.d/swift.yml which defaults to '3' | 15:49 |
cloudnull | I also believe http://cdn.pasteraw.com/rdxpypois1sq31v1luxsp8i6ej48890 this could be it | 15:50 |
jwitko | oh yea? | 15:50 |
cloudnull | you have r1,2,3 which translates to regions 1,2,3 | 15:50 |
cloudnull | I have a similar setup with a single storage node and all of my region settings are the same using only r1 | 15:51 |
jwitko | ah wow, ok thank you | 15:51 |
jwitko | if you also look at my paste | 15:52 |
jwitko | it looks like the directory name is getting mashed | 15:52 |
jwitko | due to the underscore? | 15:52 |
* cloudnull looking now | 15:52 | |
openstackgerrit | Nolan Brubaker proposed openstack/openstack-ansible: Add command to remove IPs from inventory https://review.openstack.org/390375 | 15:52 |
cloudnull | jwitko: that may be too | 15:53 |
cloudnull | https://github.com/openstack/openstack-ansible-os_swift/blob/master/templates/swift_rings_check.py.j2#L26 | 15:53 |
cloudnull | looks like it's doing string replacement in a few places and could be dropping the "_.*" I'm not sure why that'd be quite yet but it makes sense based on your paste | 15:55 |
jwitko | ok, I've renamed all "r2" and "r3" to "r1" in the config. I've also renamed and remounted the LUNs to disk1,2,3 | 15:56 |
jwitko | Edited the deploy config to reflect those new mounts | 15:56 |
jwitko | deploying again | 15:56 |
cloudnull | cool :) | 15:56 |
cloudnull | sorry i dont have good answers on these bits | 15:57 |
jwitko | oh please, like you ever need to apologize | 15:58 |
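The two suspected problems above — region prefixes r1/r2/r3 on a single node, and drive names containing underscores being mangled by string replacement — can be illustrated with a toy parser. This is not the actual swift_rings_check.py logic, and the device-string format shown (r&lt;region&gt;z&lt;zone&gt;-&lt;ip&gt;:&lt;port&gt;/&lt;drive&gt;) is only a simplified stand-in for swift's ring device notation:

```python
# Toy illustration (not OSA code) of the two failure modes discussed:
# underscores in drive names, and mismatched region prefixes.

def parse_device(dev: str) -> dict:
    """Split a simplified ring-style device string into its parts."""
    region_zone, rest = dev.split("-", 1)   # "r1z1", "10.89.112.3:6000/k2_swift1"
    host, drive = rest.split("/", 1)        # keep the drive name whole
    region = int(region_zone.split("z")[0].lstrip("r"))
    zone = int(region_zone.split("z")[1])
    return {"region": region, "zone": zone, "host": host, "drive": drive}

# Splitting on the structural separators keeps "k2_swift1" intact...
dev = parse_device("r1z1-10.89.112.3:6000/k2_swift1")
print(dev["drive"])                    # k2_swift1

# ...whereas code that treats "_" as a field separator truncates it:
print("k2_swift1".split("_")[0])       # k2
```

This is why renaming the mounts to underscore-free names (disk1,2,3) and collapsing everything to a single r1 region was a reasonable workaround.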
jwitko | cloudnull, it passed that error! :) | 15:59 |
cloudnull | woot! | 15:59 |
cloudnull | on to a new one i'm sure ;) | 15:59 |
cloudnull | :p | 16:00 |
jwitko | "ensure services are started" - we're at a holding pattern here. was OK on the bare metal but hanging on the containers it looks like | 16:00 |
cloudnull | I remember there was a loop that caused it to cycle a bunch in some cases. | 16:01 |
cloudnull | automagically: you around? | 16:01 |
cloudnull | did you look into that, the swift system init script issue? | 16:01 |
cloudnull | I thought that got resolved. but maybe not or not in your checkout? | 16:02 |
jwitko | cloudnull, it looks to be proceeding | 16:06 |
jwitko | still on the same step but progress | 16:06 |
jwitko | cloudnull, it finished, no errors! | 16:12 |
jwitko | no how in the world to test haha | 16:12 |
cloudnull | nice! | 16:12 |
cloudnull | you can upload an object and run swift stat | 16:12 |
jwitko | going to try and create a container | 16:12 |
jwitko | for glance_images | 16:12 |
cloudnull | that's a good one too. if glance is configured to use swift you can upload a bunch of images. if they save it's working | 16:13 |
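The smoke test cloudnull suggests — upload an object, then check stats — looks roughly like the following. Credentials file, container name, and object name are placeholders:

```
# Illustrative swift smoke test; run where the openstack/swift CLIs
# are installed and an openrc with valid credentials exists.
source ~/openrc
echo "hello swift" > /tmp/testobj.txt
openstack container create glance_images          # create a container
openstack object create glance_images /tmp/testobj.txt   # upload an object
swift stat glance_images                          # object count should be 1
```

If glance is backed by swift, uploading images through glance exercises the same path end to end.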
jwitko | hm, so I see this error in the logs when I set up a tail before I created the container | 16:13 |
jwitko | http://cdn.pasteraw.com/56gxkzp8i3tggym4ea1eoigcqr3qdtq | 16:13 |
jwitko | the file does exist though | 16:14 |
jwitko | so maybe it recovered | 16:14 |
cloudnull | https://github.com/openstack/openstack-ansible-ops/blob/master/multi-node-aio/openstack-service-setup.sh#L97-L168 | 16:14 |
cloudnull | that's generally my step on uploading a mess of images | 16:14 |
jwitko | container created! | 16:15 |
jwitko | thats further than before haha | 16:15 |
cloudnull | also in newton there are no flavors by default so this maybe handy https://github.com/openstack/openstack-ansible-ops/blob/master/multi-node-aio/openstack-service-setup.sh#L7-L43 | 16:16 |
cloudnull | going to grab a coffee . back in a few | 16:17 |
jwitko | ty! | 16:18 |
jwitko | cloudnull, so the image imported successfully according to the CLI | 16:20 |
jwitko | but the swift container shows an object count of zero | 16:20 |
jwitko | however glance says it has the image and the status is active lol | 16:21 |
chris_hultin | jwitko: Can you actually build a server with the image? | 16:43 |
jwitko | chris_hultin, about to try now | 16:43 |
jwitko | probably not I'd guess | 16:43 |
jwitko | Error: Failed to perform requested operation on instance "test1", the instance has an error status: Please try again later [Error: Build of instance 45915aa3-91f6-45d7-9afd-4da80fab76d9 aborted: Block Device Mapping is Invalid.]. | 16:47 |
jwitko | looks like I have some block storage issues | 16:48 |
cloudnull | back | 16:49 |
cloudnull | jwitko: was that created using horizon | 16:49 |
jwitko | cloudnull, think I found a bug in the cinder setup | 16:49 |
jwitko | http://cdn.pasteraw.com/mky3b4ge863s288in2qv0wdk9cgud7w | 16:49 |
cloudnull | seeing if i can reproduce that now. | 16:51 |
jwitko | my lvm "cinder-volume" is running | 16:53 |
cloudnull | jwitko: was that during volume attachment or create? | 17:02 |
jwitko | cloudnull, looks like creation | 17:03 |
*** weezS has quit IRC | 17:03 | |
jwitko | cloudnull, i manually installed tgt package | 17:03 |
jwitko | I'll paste new full error logs | 17:03 |
cloudnull | do you have the cinder-volumes VG ? | 17:03 |
jwitko | cloudnull, yes I do. | 17:04 |
jwitko | cloudnull, http://cdn.pasteraw.com/sbd0u1vanm3r5odkj1qsy4hfq3ym69r | 17:04 |
jwitko | so it looks like tgt is expecting a certain config | 17:04 |
jwitko | that my basic apt package install didn't provide | 17:05 |
cloudnull | my test env is using 14.04 so it's not exactly like yours but I'm not seeing tgt specific issues at this point. | 17:08 |
cloudnull | cinder is spawning volumes and nova is attaching without issues. | 17:08 |
cloudnull | I do see cinder volume disconnects from the DB from time to time. but i generally chalk that up as normal. | 17:09 |
jwitko | cloudnull, it looks like the issue is there is no tgt package install for 16.04 cinder. due to this the tgt conf that allows targets to cinder volumes is not in place | 17:10 |
jwitko | I installed tgt manually and I'm running os-cinder-install again to verify | 17:10 |
jwitko | looks like it might be skipping TASK [os_cinder : Ensure cinder tgt include] *********************************** | 17:13 |
cloudnull | those packages should be there using this var https://github.com/openstack/openstack-ansible-os_cinder/blob/master/vars/ubuntu-16.04.yml#L41-L44 | 17:14 |
cloudnull | curious if it's skipping for some other reason or if it just didn't complete first time ? | 17:14 |
jwitko | when: | 17:15 |
jwitko | - inventory_hostname in groups['cinder_volume'] | 17:15 |
jwitko | - cinder_backend_lvm_inuse | bool | 17:15 |
cloudnull | those packages should be installed here https://github.com/openstack/openstack-ansible-os_cinder/blob/master/tasks/cinder_install_apt.yml#L51-L62 | 17:15 |
cloudnull | lol, yes. that task | 17:15 |
jwitko | so the first conditional I can confirm is true | 17:16 |
cloudnull | the second one should auto enable https://github.com/openstack/openstack-ansible-os_cinder/blob/master/defaults/main.yml#L201 | 17:17 |
cloudnull | assuming that the cinder config has lvm set as the backend type | 17:17 |
cloudnull | http://cdn.pasteraw.com/kvkd9grr7xka1xz1zrrqxn7odz1busb | 17:18 |
jwitko | oh wow | 17:18 |
jwitko | I am missing the entire "cinder_backends: " block | 17:18 |
jwitko | I have everything under it | 17:18 |
jwitko | but not itself | 17:18 |
jwitko | http://cdn.pasteraw.com/9zzd55m67js0t3s2v9p510lbdxcog44 | 17:19 |
cloudnull | that'll do it | 17:19 |
jwitko | sorry | 17:19 |
cloudnull | no worries. | 17:19 |
jwitko | I should just be able to run os-cinder-install again right? | 17:19 |
cloudnull | yup | 17:19 |
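The missing piece jwitko found was the `cinder_backends:` key itself — the backend entries were present but not nested under it, so `cinder_backend_lvm_inuse` never evaluated true and the tgt tasks were skipped. A sketch of the expected shape in openstack_user_config/conf.d, with illustrative values:

```yaml
# Sketch only -- values are placeholders; see the OSA example configs
# for the full storage_hosts entry this nests inside.
container_vars:
  cinder_backends:                  # this key was what went missing
    lvm:
      volume_backend_name: LVM_iSCSI
      volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group: cinder-volumes  # must match an existing VG on the host
```

With an `lvm` driver present under `cinder_backends`, the role's default for `cinder_backend_lvm_inuse` flips on and the iSCSI target packages get installed.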
jwitko | I see it went through this time. ugh. | 17:22 |
jwitko | not sure how I lost that guy lol | 17:22 |
cloudnull | shit happens | 17:23 |
agrebennikov | hi folks, have a question about variables topology in OSA | 17:45 |
agrebennikov | I put the version of openstack to user_variables.yml | 17:46 |
*** h5t4 has joined #openstack-ansible | 17:46 | |
agrebennikov | and it turned out that the deployment process took the one from /opt/openstack-ansible/playbooks/inventory/group_vars/all.yml | 17:46 |
agrebennikov | is this expected behaviour? | 17:46 |
*** thorst_ has quit IRC | 17:47 | |
*** gouthamr has quit IRC | 17:47 | |
*** thorst_ has joined #openstack-ansible | 17:47 | |
*** johnmilton has quit IRC | 17:48 | |
cloudnull | agrebennikov: which var are you setting? | 17:49 |
agrebennikov | openstack_version | 17:49 |
agrebennikov | struggled for about 2 hours trying to figure out where the wrong one comes from | 17:49 |
cloudnull | I dont know where openstack_version is set . | 17:51 |
cloudnull | I know we have openstack_release | 17:51 |
agrebennikov | ah, sorry | 17:51 |
agrebennikov | my bad | 17:51 |
agrebennikov | that one | 17:51 |
*** allanice001 has joined #openstack-ansible | 17:52 | |
agrebennikov | so in stable it keeps changing | 17:52 |
agrebennikov | while I needed static one | 17:52 |
agrebennikov | but in the playbooks (not everywhere) the one from all.yml was used | 17:52 |
agrebennikov | I made some debugging and found out that "openstack_release" itself was right, but while creating urls for venvs - the wrong one was used | 17:53 |
cloudnull | interesting. that may be an ansible variable precedence problem. | 17:54 |
cloudnull | setting anything in user_variables.yml should always win | 17:54 |
mgariepy | agrebennikov, which version of openstack-ansible ? | 17:55 |
agrebennikov | mitaka | 17:55 |
*** johnmilton has joined #openstack-ansible | 17:55 | |
*** electrofelix has quit IRC | 17:56 | |
*** thorst_ has quit IRC | 17:56 | |
cloudnull | mitaka used 1.9.x which does have precedence issues in certain situations. | 17:56 |
cloudnull | 2.x is more strict | 17:56 |
cloudnull | http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable | 17:56 |
*** johnmilton has joined #openstack-ansible | 17:56 | |
cloudnull | ansible should ALWAYS use the var specified on the CLI which is what user_.*yaml does | 17:57 |
agrebennikov | yeah, looking to this page right now | 17:57 |
cloudnull | but i've heard of folks running into these types of issues with 1.9.x | 17:57 |
mgariepy | like this one : https://bugs.launchpad.net/openstack-ansible/+bug/1605302 | 17:57 |
openstack | Launchpad bug 1605302 in openstack-ansible "it's not possible to change cinder or nova ceph client via user_variables for the ceph_client role" [Low,Fix released] - Assigned to Marc Gariépy (mgariepy) | 17:57 |
agrebennikov | which is telling me that user_vars have a precedence | 17:57 |
*** poopcat has joined #openstack-ansible | 17:58 | |
mgariepy | 1.9.4 has a vars precedence issue and 1.9.5 - 1.9.6 have other issues.. | 17:58 |
cloudnull | yea. there really should be a 1.9.final release where those things were fixed but there never was. | 17:59 |
jwitko | hm... I'm trying to create an instance from a volume snapshot and I'm seeing the following error from nova | 18:00 |
jwitko | AddrFormatError: failed to detect a valid IP address from 'openstack-int.lab.mgmt.saucelabs.net' | 18:00 |
jwitko | it created the first instance no problem | 18:00 |
jwitko | looks like it moves past that error though and finds an address | 18:02 |
jwitko | but then I end up with Error: Failed to perform requested operation on instance "test2", the instance has an error status: Please try again later [Error: Build of instance 0a23d982-5568-41e9-b4ec-728be592764f aborted: Block Device Mapping is Invalid.]. and nothing in the cinder logs | 18:02 |
cloudnull | agrebennikov: if you can figure out a way to force it to be the highest precedence then we can backport that into mitaka | 18:04 |
cloudnull | which I think folks will really appreciate. | 18:04 |
cloudnull | but right now its simply a known issue. | 18:04 |
agrebennikov | :) gotcha | 18:04 |
agrebennikov | but is it ansible problem or OSA? | 18:05 |
jwitko | it looks like my volume is actually being created without issue but Horizon doesn't like the amount of time it takes | 18:05 |
jwitko | and the instance creation times out waiting for it | 18:05 |
mgariepy | agrebennikov, https://github.com/ansible/ansible/pull/14652 | 18:05 |
mgariepy | it's ansible 1.9.4 issue | 18:06 |
*** poopcat has quit IRC | 18:06 | |
agrebennikov | and just to clarify - st/newton is already on 2.x, isn't it? | 18:06 |
openstackgerrit | Nolan Brubaker proposed openstack/openstack-ansible: Add command to remove IPs from inventory https://review.openstack.org/390375 | 18:07 |
cloudnull | agrebennikov: yes. newton is on 2.1 | 18:07 |
agrebennikov | hm.. so the fix is backported in march... | 18:07 |
cloudnull | and should a 2.2 release not break everything, I think we'll end up on it in future newton releases. | 18:07 |
agrebennikov | so basically everybody just has to switch to 1.9.5 then | 18:09 |
agrebennikov | (for mitaka) | 18:09 |
cloudnull | yea. 1.9.5 fixed vars but broke lxc. | 18:09 |
cloudnull | https://github.com/ansible/ansible/issues/15093 | 18:10 |
*** admin0 has quit IRC | 18:11 | |
cloudnull | sorry wrong bug that was the bug where they broke host/groupvars in 1.9.5 | 18:12 |
cloudnull | 1.9.6 broke lxc which was here https://github.com/ansible/ansible-modules-extras/issues/2042 | 18:12 |
cloudnull | last I recall the fix was added to the stable branch | 18:12 |
cloudnull | but never released. | 18:13 |
cloudnull | agrebennikov: if you dont mind giving the stable/1.9 branch a try we could lock in on the sha for mitaka | 18:14 |
cloudnull | assuming it fixes those issues | 18:14 |
baz | In Horizon (Newton), are ipv6 FIPs supposed to be available? | 18:14 |
cloudnull | it can be | 18:15 |
baz | currently only getting v4 addresses as options on a network with both v4 and v6 | 18:15 |
cloudnull | https://github.com/openstack/openstack-ansible-os_horizon/blob/master/defaults/main.yml#L95-L96 | 18:16 |
cloudnull | oh FIPs | 18:16 |
baz | yeah FIPs | 18:16 |
cloudnull | IDK if neutron supports FIPs with v6 | 18:16 |
baz | works fine for directly attached | 18:16 |
cloudnull | http://docs.openstack.org/newton/networking-guide/config-ipv6.html | 18:17 |
cloudnull | I think the best you can do is assign the port and attach a fixed IP if you need a specific address. | 18:18 |
cloudnull | you can do a v4 subnet on the same network and request a FIP from it. | 18:19 |
cloudnull | but that's not what you were asking for. | 18:19 |
*** poopcat has joined #openstack-ansible | 18:19 | |
baz | It's not a huge deal, at some point it'll be in OS. If someone really wants a dual stack router they can make their own with a VM. | 18:22 |
cloudnull | in the osic cloud1 we're using a dual stack net but its v6 direct return with v4 on a router. | 18:23 |
cloudnull | it'd be nice if it worked both ways. | 18:24 |
baz | I haven't tried yet, but can OS routers not be NATs? | 18:28 |
cloudnull | no i dont think so . | 18:28 |
cloudnull | at least not with the ML2 plugins | 18:28 |
baz | which is why v6 FIPs wouldn't make sense. | 18:28 |
cloudnull | not that i've seen that is | 18:28 |
agrebennikov | cloudnull, sorry, was out... what is the proposal? do you want me to try to pull stable 1.9 and retry the same I got stuck with? | 18:28 |
cloudnull | agrebennikov: if you have time that'd be great! | 18:29 |
*** thorst_ has joined #openstack-ansible | 18:31 | |
*** McMurlock1 has joined #openstack-ansible | 18:32 | |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron: Update paste, policy and rootwrap configurations 2016-10-21 https://review.openstack.org/389705 | 18:32 |
agrebennikov | cloudnull, sure can do it | 18:35 |
cloudnull | that'd be awesome. | 18:37 |
*** McMurlock1 has quit IRC | 18:46 | |
*** Jeffrey4l has joined #openstack-ansible | 18:48 | |
*** weezS has joined #openstack-ansible | 18:49 | |
*** alij has quit IRC | 18:50 | |
c-mart | deployment pulled from newton/stable last friday, doesn't know about its compute nodes, even though I specified two. Anyone seen this? | 18:54 |
*** irtermit- is now known as irtermite | 18:55 | |
c-mart | the nova-compute service is running on both of the nodes | 18:56 |
*** drifterza has joined #openstack-ansible | 18:57 | |
*** alij has joined #openstack-ansible | 18:57 | |
c-mart | but they aren't in the nova database, and not shown in Horizon UI | 18:59 |
*** alij has quit IRC | 18:59 | |
*** alij has joined #openstack-ansible | 19:00 | |
*** alij has quit IRC | 19:01 | |
*** alij has joined #openstack-ansible | 19:02 | |
*** ricardobuffa_ufs has quit IRC | 19:03 | |
*** alij has quit IRC | 19:05 | |
*** alij has joined #openstack-ansible | 19:05 | |
*** Jack_Iv has quit IRC | 19:07 | |
cloudnull | they're not in the db ? | 19:09 |
cloudnull | no stack traces on the running compute services? | 19:10 |
cloudnull | also do you see anything in the logs showing that it's checking in ? | 19:10 |
cloudnull | c-mart: also if you get a chance can you review https://review.openstack.org/#/c/389965/ | 19:11 |
cloudnull | I believe that should resolve the issue with the repo-server selecting the wrong node to clone or build wheels into | 19:11 |
c-mart | correct, not in the DB. Logs show "2016-10-24 19:11:50.912 95637 WARNING nova.conductor.api [req-85a5d38d-fdc9-43df-bed7-265c61e7e3bc - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? Reattempting establishment of nova-conductor connection..." | 19:12 |
c-mart | cloudnull, awesome, i'll take a look. I've been seeing the repo container sync issue with every rebuild. | 19:14 |
cloudnull | c-mart: i wonder if its just a matter of restarting conductor then the compute nodes? | 19:16 |
c-mart | aha! conductor service wasn't running | 19:17 |
c-mart | I have had to manually start several services that should have been brought up automatically by OSA. Nova API metadata service, compute service, and now the conductor service. | 19:19 |
c-mart | and there are my hypervisors :) | 19:19 |
c-mart | now in the database and shown in Horizon. | 19:20 |
*** schwicht has joined #openstack-ansible | 19:22 | |
mgariepy | cloudnull, https://github.com/openstack/openstack-ansible-ops/tree/master/cluster_metrics would that works with mitaka and ansible 1.9.4 ? | 19:23 |
*** johnmilton has quit IRC | 19:23 | |
*** alij has quit IRC | 19:24 | |
*** allanice001 has quit IRC | 19:27 | |
*** thorst_ has quit IRC | 19:29 | |
jwitko | cloudnull, so everything seems to be sorted well now. can't create volumes off of volume snapshots because the creation time takes longer than the timeout for spinning up an instance and waiting for its block device lol | 19:32 |
jwitko | but thats probably because of my crap hardware and network in this lab | 19:32 |
jwitko | the one weird thing is my glance_images container still shows no objects within it | 19:32 |
*** Mudpuppy_ has joined #openstack-ansible | 19:32 | |
jwitko | which makes me almost think its hidden ? | 19:32 |
jwitko | and the one I created is just not being used | 19:32 |
cloudnull | jwitko: that'll be owned by the swift tenant . | 19:34 |
cloudnull | so you'll need to use the swift creds which you can extract from the glance-api.conf to see those objects. | 18:34 |
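Pulling those credentials out is a one-liner; the config fragment below is a made-up stand-in so the example is self-contained (on a real deployment you would grep `/etc/glance/glance-api.conf` inside the glance container instead, and the values are placeholders):

```shell
# Write a stand-in config so the grep can run anywhere; the three
# swift_store_* keys are the ones glance uses for its swift backend.
cat > /tmp/glance-api.conf <<'EOF'
[glance_store]
swift_store_user = service:glance
swift_store_key = SECRET_PLACEHOLDER
swift_store_auth_address = http://172.29.236.9:5000/v3
EOF

# Extract the credentials; these are what you feed to the swift CLI
# to list the objects owned by the glance service tenant.
grep -E '^swift_store_(user|key|auth_address)' /tmp/glance-api.conf
```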
cloudnull | mgariepy: yes that'll work with mitaka just fine | 19:34 |
mgariepy | cloudnull, ok great:) thanks | 19:35 |
cloudnull | we've been using that in the osic since liberty | 19:35 |
*** Mudpuppy has quit IRC | 19:35 | |
cloudnull | https://cloud1.osic.org/grafana/ | 19:35 |
cloudnull | https://cloud1.osic.org/grafana/dashboard/db/openstack-all-compute-aggregates | 19:35 |
cloudnull | in case you were curious. | 19:35 |
mgariepy | nice :D | 19:36 |
*** admin0 has joined #openstack-ansible | 19:36 | |
*** Mudpuppy_ has quit IRC | 19:36 | |
cloudnull | right now it'll only work on 1 of the logging nodes by default. | 19:36 |
cloudnull | with this pr https://review.openstack.org/#/c/390128/ | 19:36 |
cloudnull | we should be able to get it working on more than 1 for data queries and storage. | 19:37 |
admin0 | \o | 19:37 |
cloudnull | o/ admin0 | 19:37 |
admin0 | arrived in barcelona :D | 19:37 |
cloudnull | how is it ? | 19:37 |
admin0 | hot :) | 19:38 |
admin0 | netherlands is 9° | 19:38 |
admin0 | now | 19:38 |
admin0 | got a question .. i see that on friday from 2:00 - 6:00 is OpenStackAnsible: Contributors meetup — who will be there ? | 19:38 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible: [TEST] Update to Ansible v2.2.0.0-0.2.rc2 https://review.openstack.org/389292 | 19:39 |
cloudnull | andymccr_: odyssey4me: RE: [14:38] <admin0> got a question .. i see that on friday from 2:00 - 6:00 is OpenStackAnsible: Contributors meetup — who will be there ? | 19:40 |
cloudnull | admin0: sadly idk | 19:40 |
cloudnull | evrardjp: automagically: ^ idk if either of you are at the summit | 19:40 |
admin0 | logan pjm and winggundamth and me are here | 19:41 |
*** javeriak has joined #openstack-ansible | 19:42 | |
*** allanice001 has joined #openstack-ansible | 19:42 | |
*** z-_ is now known as z | 19:43 | |
*** z is now known as z- | 19:44 | |
*** z- has quit IRC | 19:44 | |
*** z- has joined #openstack-ansible | 19:44 | |
*** admin0_ has joined #openstack-ansible | 19:46 | |
*** admin0 has quit IRC | 19:46 | |
*** admin0_ is now known as admin0 | 19:46 | |
*** tschacara has quit IRC | 19:58 | |
*** pjm6 has joined #openstack-ansible | 19:58 | |
*** tschacara has joined #openstack-ansible | 19:59 | |
cloudnull | going afk for a bit, bbl. | 20:00 |
*** c-mart has quit IRC | 20:00 | |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-repo_build: Ensure the venv creation process is able to work with isolated deps https://review.openstack.org/386765 | 20:03 |
castulo | cloudnull: I still have issues using the run-upgrade script from openstack-ansible, I have run the script probably like 8 times and it keeps failing in different part every time.... for example last time it failed here: infra03_neutron_server_container-3f7c461a : ok=45 changed=9 unreachable=0 failed=1 but it is a different place every time :S | 20:03 |
castulo | you guys have not seen problems using it to upgrade from stable/mitaka to stable/newton? | 20:04 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-repo_build: Ensure the venv creation process is able to work with isolated deps https://review.openstack.org/386765 | 20:05 |
*** adrian_otto has joined #openstack-ansible | 20:06 | |
cloudnull | castulo: I've not been testing upgrade from stable/mitaka > stable/newton quite yet. | 20:06 |
*** alij has joined #openstack-ansible | 20:07 | |
cloudnull | in truth some of the projects are still wrestling with bugs within their own upgrade process so we're very likely going to have to wait to get that tackled. | 20:07 |
cloudnull | that said, if you have a list of possible break points it'd be great to get those noted. | 20:08 |
cloudnull | castulo: do you have an etherpad for the issues you've been seeing ? | 20:09 |
cloudnull | or a list of launchpad bugs? | 20:09 |
cloudnull | castulo: also -- for example last time it failed here: infra03_neutron_server_container-3f7c461a : ok=45 changed=9 unreachable=0 failed=1 -- where is there? what was the name of the task where it failed? | 20:11 |
*** alij has quit IRC | 20:12 | |
*** c-mart has joined #openstack-ansible | 20:14 | |
*** askb has joined #openstack-ansible | 20:14 | |
cloudnull | castulo: can you outline what the hardware and config profile looks like for these tests. distro, release, kernel, openstack_user_config, enabled services. | 20:16 |
cloudnull | ? | 20:16 |
*** Mudpuppy has joined #openstack-ansible | 20:17 | |
*** Mudpuppy has quit IRC | 20:17 | |
*** Mudpuppy has joined #openstack-ansible | 20:18 | |
*** Mudpuppy_ has joined #openstack-ansible | 20:20 | |
*** Mudpuppy has quit IRC | 20:22 | |
c-mart | admin0, we should talk about Ceph later | 20:26 |
c-mart | I have it working :) | 20:26 |
castulo | I think this was the task: TASK [os_neutron : Add Ubuntu Cloud Archive Repository] | 20:26 |
admin0 | mine is working as well :) | 20:26 |
admin0 | with multiple pools | 20:26 |
c-mart | what did it take? | 20:26 |
admin0 | a bug report :D | 20:27 |
admin0 | hold | 20:27 |
castulo | cloudnull I'm not sure I understand what you mean by hardware and config profiles, remember I'm very new to ansible | 20:28 |
admin0 | c-mart: https://bugs.launchpad.net/openstack-ansible/+bug/1636018 | 20:28 |
openstack | Launchpad bug 1636018 in openstack-ansible "cinder ceph not working if documentation followed " [Undecided,New] - Assigned to Praveen N (praveenn) | 20:28 |
admin0 | drifterza is the detective here to find this :) | 20:28 |
drifterza | huh | 20:29 |
drifterza | I'm hating icehouse | 20:29 |
drifterza | jeeze | 20:29 |
mattt | icehouse?? | 20:30 |
drifterza | yeah | 20:30 |
drifterza | stuck in migration phase | 20:30 |
cloudnull | castulo: I want to setup a test env similar to what you have. so I'd like to know what the setup is ? | 20:31 |
cloudnull | drifterza: is that icehouse deployed by OSA ? | 20:32 |
drifterza | no it was a manual thing dude | 20:33 |
cloudnull | that sounds like a blast ! | 20:33 |
drifterza | before OSA was even a twinkle in odyssey4me's eyes | 20:33 |
cloudnull | it's cowboy cloud | 20:33 |
admin0 | our region1 is similar :) | 20:34 |
admin0 | cowboy icehouse | 20:34 |
drifterza | I'm just struggling with live migration | 20:34 |
cloudnull | I've taken part in a quite a few cowboy clouds. great times had by all. | 20:34 |
drifterza | so my current issue is with the conductor | 20:36 |
drifterza | Exception during message handling: Migration error: 'ComputeNode' object has no attribute 'ram_allocation_ratio' | 20:36 |
castulo | cloudnull: ooh we are using an onmetal server from Rackspace private cloud, and using the playbooks here to deploy openstack: https://github.com/osic/qa-jenkins-onmetal | 20:36 |
*** dxiri has joined #openstack-ansible | 20:37 | |
drifterza | but I'm no python genius | 20:37 |
cloudnull | is that an option set within the nova.conf file? | 20:39 |
drifterza | well usually its ram_allocation_ratio | 20:40 |
drifterza | im trying to migrate to an instance thats already calculated the ram as fully used. | 20:40 |
drifterza | but it lies | 20:40 |
*** dxiri has quit IRC | 20:41 | |
drifterza | Unable to migrate 79464cf4-cec6-4543-bb8b-9327c0583e13 to cp13.ran1.isoc.co.za: Lack of memory(host:-94910 <= instance:2048) | 20:42 |
*** dxiri has joined #openstack-ansible | 20:42 | |
drifterza | https://bugs.launchpad.net/nova/+bug/1413119 | 20:43 |
openstack | Launchpad bug 1413119 in OpenStack Compute (nova) "Pre-migration memory check- Invalid error message if memory value is 0" [Low,Confirmed] | 20:43 |
drifterza | https://bugs.launchpad.net/nova/+bug/1214943 | 20:43 |
*** allanice001 has quit IRC | 20:43 | |
openstack | Launchpad bug 1214943 in OpenStack Compute (nova) "Live migration should use the same memory over subscription logic as instance boot" [High,Fix released] - Assigned to Sylvain Bauza (sylvain-bauza) | 20:43 |
drifterza | I cant get the latter bug-fix to work | 20:44 |
drifterza | cos icehouse code is so old | 20:44 |
drifterza | dammit | 20:44 |
cloudnull | yea, that's going to be tough. | 20:45 |
*** admin0 has quit IRC | 20:45 | |
*** admin0_ has joined #openstack-ansible | 20:45 | |
*** h5t4 has quit IRC | 20:46 | |
admin0_ | “Simplify Day 2 Operations (And Get Some Sleep!) Through Craton Fleet Management” — is this a good topic ? | 20:49 |
*** c-mart has quit IRC | 20:52 | |
*** retreved has quit IRC | 20:53 | |
*** dxiri has quit IRC | 20:54 | |
*** dxiri has joined #openstack-ansible | 20:54 | |
*** aludwar has quit IRC | 20:55 | |
cloudnull | admin0_: ++ | 20:59 |
*** dxiri has quit IRC | 20:59 | |
cloudnull | castulo: I have a couple upgrade env's running using the head of stable/mitaka to stable/newton | 21:02 |
cloudnull | I should know more in a bit | 21:02 |
*** aludwar has joined #openstack-ansible | 21:02 | |
*** kvcobb has joined #openstack-ansible | 21:04 | |
*** c-mart has joined #openstack-ansible | 21:08 | |
*** dxiri has joined #openstack-ansible | 21:14 | |
*** javeriak has quit IRC | 21:22 | |
*** schwicht has quit IRC | 21:24 | |
*** tschacara has quit IRC | 21:30 | |
*** jheroux has quit IRC | 21:32 | |
castulo | cool :) | 21:33 |
*** javeriak has joined #openstack-ansible | 21:35 | |
baz | so I'm having difficulty getting Neutron QoS (traffic shaping) to work in Newton from an OSA deploy. | 21:35 |
*** adrian_otto has quit IRC | 21:38 | |
baz | http://cdn.pasteraw.com/dpl8g0ff3y68pw9mapkw54htkbqy89h | 21:41 |
baz | the port in that paste was created after the QoS policy was attached to the network. According to the documentation, any QoS policy applied to the network is supposed to be the policy used by any port that's created (or recreated) as long as the port itself doesn't have a qos_policy_id as well (most specific wins). | 21:43 |
baz | however the behavior is, when using linux bridge, no tc commands appear to ever be run during creating all the virtual interfaces and bridges associated with instance creation. | 21:44 |
*** schwicht has joined #openstack-ansible | 21:45 | |
cloudnull | baz: I've not messed with QoS /w neutron, so I'm not really sure what should and should not be there. | 21:46 |
cloudnull | do you see anything in the logs indicating there's a fault ? | 21:46 |
cloudnull | or is it just not doing anything ? | 21:46 |
baz | there's no errors being thrown, it creates the stack as if the QoS stuff wasn't there. | 21:47 |
cloudnull | I assume qos was added "neutron_plugin_base" and that you're seeing qos as an extension driver? | 21:48 |
baz | I've turned on debugging so it logs which iproute2 and brctl commands it's issuing, and there's nothing from tc, even tho there's rootwrapper for it, and the qos settings are where they should be on the compute hosts, and server/agent containers. | 21:49 |
baz | neutron ext-list has a line for: | qos | Quality of Service | | 21:50 |
cloudnull | if you # grep extension_drivers /etc/neutron/plugins/ml2/ml2_conf.ini | 21:53 |
cloudnull | do you see qos in the extension driver list ? | 21:53 |
baz | http://cdn.pasteraw.com/60mu9a1g0tgdpbn9cfxiiyw1yo6ddkp | 21:53 |
cloudnull | ok | 21:53 |
baz | from the agents container | 21:53 |
baz | the agents and server containers *are* on the compute hosts, but that shouldn't matter since they're containers. | 21:54 |
cloudnull | I for sure see the reno for it https://github.com/openstack/neutron/blob/master/releasenotes/notes/QoS-for-linuxbridge-agent-bdb13515aac4e555.yaml | 21:54 |
*** admin0_ has quit IRC | 21:55 | |
baz | the bare metal compute /etc/neutron/plugins/ml2/ml2_conf.ini is identical to the paste above. | 21:55 |
cloudnull | ok. so that's all good. | 21:56 |
baz | yeah | 21:56 |
baz | it *should* be working | 21:56 |
baz | at least as far as I can tell from the neutron docs on qos | 21:57 |
*** sdake_ has joined #openstack-ansible | 21:57 | |
cloudnull | baz: looking at the docs | 21:59 |
cloudnull | http://docs.openstack.org/mitaka/networking-guide/config-qos.html | 21:59 |
*** schwicht has quit IRC | 22:00 | |
cloudnull | while the config reference there is for ovs | 22:00 |
cloudnull | maybe we need to add qos to the agent section as an extension ? | 22:00 |
cloudnull | that can be done with a config_template override globally. | 22:00 |
cloudnull | but if that turns out to be the fix we should update the templates when qos is enabled | 22:01 |
cloudnull | which would make it automatic | 22:01 |
baz | k, will have a look | 22:02 |
*** hughmFLE_ has joined #openstack-ansible | 22:02 | |
cloudnull | adding http://cdn.pasteraw.com/p8czlnm06v2jj9rynqvi9ge50cugxck into your user-variables.yml file and then running ``openstack-ansible os-neutron-install.yml --tags neutron-config`` should do the needful | 22:03 |
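The pasteraw link is not preserved in this log; the usual shape of such an override (assuming the role's `neutron_ml2_conf_ini_overrides` config_template hook, and assuming the missing piece really is the agent-side extension, per the discussion above) would be roughly:

```yaml
# Hypothetical user_variables.yml entry: config_template merges this
# into /etc/neutron/plugins/ml2/ml2_conf.ini, adding the qos extension
# to the [agent] section so the linuxbridge agent loads it.
neutron_ml2_conf_ini_overrides:
  agent:
    extensions: qos
```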
*** asettle has quit IRC | 22:03 | |
*** chris_hultin is now known as chris_hultin|AWA | 22:03 | |
*** asettle has joined #openstack-ansible | 22:04 | |
*** berendt has joined #openstack-ansible | 22:04 | |
*** hughmFLEXin has quit IRC | 22:05 | |
*** thorst_ has joined #openstack-ansible | 22:05 | |
cloudnull | castulo: running the upgrade on the first system has encountered an issue within ceilometer | 22:05 |
cloudnull | but I'm past neutron and nova | 22:05 |
cloudnull | and so far my instances have stayed up without an interruption in ping or ssh | 22:06 |
baz | so if that turns out to work, the change to OSA's template should be fairly straight forward. | 22:06 |
cloudnull | yes. | 22:06 |
cloudnull | assuming that's the cause | 22:06 |
cloudnull | we should be able to use the same conditional found here for qos | 22:07 |
cloudnull | https://github.com/openstack/openstack-ansible-os_neutron/blob/master/templates/plugins/ml2/ml2_conf.ini.j2#L8 | 22:07 |
baz | exactly | 22:07 |
*** asettle has quit IRC | 22:08 | |
*** berendt has quit IRC | 22:09 | |
baz | ml2_conf.ini should be identical between the agents container and the compute host, correct? | 22:11 |
*** adrian_otto has joined #openstack-ansible | 22:13 | |
*** karimb has joined #openstack-ansible | 22:15 | |
*** klamath has quit IRC | 22:15 | |
*** phalmos has joined #openstack-ansible | 22:17 | |
*** alij has joined #openstack-ansible | 22:18 | |
*** alij has quit IRC | 22:22 | |
*** johnmilton has joined #openstack-ansible | 22:29 | |
baz | well look at that: qdisc ingress ffff: dev tap524993ab-51 parent ffff:fff1 ---------------- | 22:30 |
baz | so good sign so far. | 22:30 |
*** sdake_ has quit IRC | 22:31 | |
*** schwicht has joined #openstack-ansible | 22:34 | |
*** schwicht has quit IRC | 22:37 | |
*** karimb has quit IRC | 22:37 | |
*** weezS has quit IRC | 22:44 | |
*** phalmos has quit IRC | 22:46 | |
*** thorst_ has quit IRC | 22:46 | |
*** thorst_ has joined #openstack-ansible | 22:47 | |
*** javeriak has quit IRC | 22:49 | |
baz | cloudnull: that looks like the fix so far. I can change the policy with neutron and it applies while live. | 22:49 |
*** javeriak has joined #openstack-ansible | 22:50 | |
dxiri | hey everyone! | 22:52 |
dxiri | quick question, what is the container_interface setting I am seeing multiple times on the config file? (in openstack_user_config.yml) | 22:55 |
dxiri | for example container_interface: "eth10" | 22:55 |
*** thorst_ has quit IRC | 22:55 | |
*** gtrxcb has joined #openstack-ansible | 22:56 | |
*** schwicht has joined #openstack-ansible | 23:00 | |
*** agrebennikov has quit IRC | 23:01 | |
*** hughmFLE_ has quit IRC | 23:01 | |
*** Mudpuppy_ has quit IRC | 23:01 | |
*** Mudpuppy has joined #openstack-ansible | 23:02 | |
*** hughmFLEXin has joined #openstack-ansible | 23:03 | |
*** schwicht has quit IRC | 23:09 | |
*** hughmFLEXin has quit IRC | 23:14 | |
*** alij has joined #openstack-ansible | 23:19 | |
*** pmannidi has quit IRC | 23:20 | |
*** pmannidi has joined #openstack-ansible | 23:21 | |
*** alij has quit IRC | 23:24 | |
*** hughmFLEXin has joined #openstack-ansible | 23:24 | |
*** javeriak has quit IRC | 23:25 | |
c-mart | dxiri: those are the "name of unique interface in containers to use for this network. Typical values include 'eth1', 'eth2', etc." | 23:28 |
dxiri | must they match my physical interfaces? | 23:29 |
*** hughmFLEXin has quit IRC | 23:29 | |
c-mart | no. in fact, they probably shouldn't to avoid confusion | 23:29 |
dxiri | ok so, eth10 is good | 23:29 |
dxiri | since I don't have eth10 at all | 23:29 |
c-mart | yeah, I just used the names from the boilerplate, and it works fine | 23:30 |
dxiri | that will be the name of the nic inside the container itself | 23:30 |
c-mart | yes. hopefully that's namespaced away from the NICs known to the physical host. but I don't know, I'm new, and just stuck with the recommended config here :) | 23:30 |
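For context, this is roughly where `container_interface` shows up in `openstack_user_config.yml` (interface and bridge names below follow the documented boilerplate style, not dxiri's actual config): it names the veth end created *inside* each container, so it does not need to match any interface that exists on the host.

```yaml
# Sketch of a provider network entry; eth1 here exists only inside
# the containers and is wired to the host's br-mgmt bridge.
global_overrides:
  provider_networks:
    - network:
        group_binds:
          - all_containers
          - hosts
        type: "raw"
        container_bridge: "br-mgmt"
        container_interface: "eth1"
        container_type: "veth"
        ip_from_q: "container"
        is_container_address: true
```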
*** javeriak has joined #openstack-ansible | 23:31 | |
c-mart | on a different topic: has anyone had to manually start a bunch of systemctl services inside their containers after the OSA deployment finished? | 23:32 |
c-mart | nova-conductor, nova-scheduler, nova-api-os-compute, and nova-api-metadata all had to be started manually here | 23:33 |
c-mart | I had to re-run the nova playbook a couple of times, perhaps it got in a weird state that's unique to me :) just curious. | 23:34 |
*** maeker has quit IRC | 23:41 | |
*** johnmilton has quit IRC | 23:43 | |
*** shasha_t_ has quit IRC | 23:43 | |
*** irtermite has quit IRC | 23:43 | |
*** coolj_ has quit IRC | 23:43 | |
*** z- has quit IRC | 23:43 | |
*** Trident has quit IRC | 23:43 | |
*** shananigans has quit IRC | 23:43 | |
*** jrosser has quit IRC | 23:43 | |
*** bsv has quit IRC | 23:43 | |
*** neillc has quit IRC | 23:43 | |
*** rackertom has quit IRC | 23:43 | |
*** maximov_ has quit IRC | 23:43 | |
*** kong has quit IRC | 23:43 | |
*** dolphm has quit IRC | 23:43 | |
*** jroll has quit IRC | 23:43 | |
*** homerp_ has quit IRC | 23:43 | |
*** evrardjp has quit IRC | 23:43 | |
*** castulo has quit IRC | 23:43 | |
*** jwitko has quit IRC | 23:43 | |
*** mgagne has quit IRC | 23:43 | |
*** arif-ali has quit IRC | 23:43 | |
*** hughsaunders has quit IRC | 23:43 | |
*** hughsaunders has joined #openstack-ansible | 23:44 | |
*** johnmilton has joined #openstack-ansible | 23:44 | |
*** shananigans has joined #openstack-ansible | 23:44 | |
*** shasha_t_ has joined #openstack-ansible | 23:44 | |
*** irtermite has joined #openstack-ansible | 23:44 | |
*** coolj_ has joined #openstack-ansible | 23:44 | |
*** z- has joined #openstack-ansible | 23:44 | |
*** Trident has joined #openstack-ansible | 23:44 | |
*** jrosser has joined #openstack-ansible | 23:44 | |
*** bsv has joined #openstack-ansible | 23:44 | |
*** neillc has joined #openstack-ansible | 23:44 | |
*** maximov_ has joined #openstack-ansible | 23:44 | |
*** rackertom has joined #openstack-ansible | 23:44 | |
*** kong has joined #openstack-ansible | 23:44 | |
*** dolphm has joined #openstack-ansible | 23:44 | |
*** jroll has joined #openstack-ansible | 23:44 | |
*** homerp_ has joined #openstack-ansible | 23:44 | |
*** evrardjp has joined #openstack-ansible | 23:44 | |
*** castulo has joined #openstack-ansible | 23:44 | |
*** jwitko has joined #openstack-ansible | 23:44 | |
*** mgagne has joined #openstack-ansible | 23:44 | |
*** arif-ali has joined #openstack-ansible | 23:44 | |
*** rackertom has quit IRC | 23:46 | |
*** maximov_ has quit IRC | 23:46 | |
*** kong has quit IRC | 23:46 | |
*** Drago has joined #openstack-ansible | 23:50 | |
*** LiYuenan has quit IRC | 23:52 | |
*** kong has joined #openstack-ansible | 23:52 | |
*** thorst has joined #openstack-ansible | 23:52 | |
*** Mudpuppy has quit IRC | 23:54 | |
*** maximov_ has joined #openstack-ansible | 23:56 | |
*** rackertom has joined #openstack-ansible | 23:58 | |
*** gouthamr has joined #openstack-ansible | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!