Tuesday, 2019-12-03

*** skyraven has joined #openstack00:04
*** jistr has quit IRC00:07
*** jistr has joined #openstack00:08
*** Warped has joined #openstack00:09
*** ivve has quit IRC00:16
*** skyraven has quit IRC00:16
*** coboluxx has joined #openstack00:38
*** Adri2000 has joined #openstack00:46
*** brokencycle has quit IRC00:55
*** mikecmpbll has quit IRC00:55
*** shibboleth has quit IRC01:18
*** gyee has quit IRC01:48
*** Goneri has quit IRC01:57
*** awalende has joined #openstack02:00
*** awalende has quit IRC02:04
*** skyraven has joined #openstack02:12
*** davee_ has quit IRC02:16
*** davee_ has joined #openstack02:17
*** skyraven has quit IRC02:28
*** yaawang has quit IRC02:36
*** yaawang has joined #openstack02:36
*** macz has quit IRC02:37
*** chenhaw has joined #openstack02:45
*** maddtux has joined #openstack03:17
*** macz has joined #openstack03:46
*** macz has quit IRC03:51
*** msalo has joined #openstack03:58
*** jangutter has joined #openstack03:59
*** yaawang has quit IRC04:01
*** yaawang has joined #openstack04:02
*** msalo has quit IRC04:02
*** jangutter has quit IRC04:03
*** cyberworm54 has quit IRC04:12
*** negronjl has quit IRC04:17
*** negronjl has joined #openstack04:19
*** skyraven has joined #openstack04:24
*** igordc has quit IRC04:34
*** skyraven has quit IRC04:39
*** aakarsh has quit IRC04:41
*** jangutter has joined #openstack04:46
*** jangutter has quit IRC04:51
*** surpatil has joined #openstack04:58
*** SurajPatil has joined #openstack05:13
*** surpatil has quit IRC05:15
*** surpatil has joined #openstack05:16
*** SurajPatil has quit IRC05:18
*** surpatil has quit IRC05:21
*** surpatil has joined #openstack05:22
*** msalo has joined #openstack05:35
*** skyraven has joined #openstack05:39
*** msalo has quit IRC05:40
*** links has joined #openstack05:52
*** skyraven has quit IRC05:58
*** ircuser-1 has joined #openstack06:19
*** ymasson has quit IRC06:25
*** skyraven has joined #openstack06:31
*** tonythomas has quit IRC06:37
*** sauvin has joined #openstack06:39
*** msalo has joined #openstack06:43
*** skyraven has quit IRC06:44
*** jtomasek has joined #openstack06:44
*** jangutter has joined #openstack06:47
*** yaawang has quit IRC06:47
*** yaawang has joined #openstack06:48
*** msalo has quit IRC06:51
*** jangutter has quit IRC06:52
*** jab416171 has joined #openstack06:56
*** msalo has joined #openstack06:58
*** msalo has quit IRC07:03
*** brault has joined #openstack07:05
<dirtwash> I'm confused about networking in OpenStack; can't I just tell it to use a certain bridge? I'm not sure what is meant by a provider network, is that an OpenStack term?  07:19
*** jab416171 has quit IRC07:22
*** jab416171 has joined #openstack07:28
*** msalo has joined #openstack07:34
<jrosser> dirtwash: you can tell neutron to use an interface, not a bridge (in the context of your external network)  07:35
<jrosser> a provider network is one managed and configured by the cloud operator (aka the provider), rather than one created on demand by a user to use privately within their project  07:37
*** msalo has quit IRC07:38
<dirtwash> well, I'm trying this now: https://docs.openstack.org/newton/install-guide-ubuntu/launch-instance-networks-provider.html  07:39
<dirtwash> but I'm already failing at executing any openstack command; I got the RC file and sourced it, but there are no openstack binaries on my controller node  07:40
*** msalo has joined #openstack07:40
*** msalo has quit IRC07:41
<dirtwash> jrosser: well, if there is a simpler way... but the admin interface said I need a provider name if I want to create a network  07:42
<dirtwash> so I figured I have to create a provider network?  07:42
<dirtwash> all I want is to spin up a VM that uses my LAN interface, to test whether that works  07:43
<dirtwash> with most management platforms you just tell it to use interface X and the rest is done via a bridge  07:43
*** msalo has joined #openstack07:44
*** slaweq has joined #openstack07:45
<jrosser> you need to have the neutron config define the network name “provider” and which interface it is bound to; you can see that in your link, in ml2_conf.ini  07:45
*** surpatil is now known as surpatil|lunch07:46
<jrosser> it is unfortunate that in that example they chose to make the name of the physical network “provider”, which is also the conceptual term for this kind of network  07:46
*** yaawang has quit IRC07:46
<jrosser> anyway, you need to get the physical interface mapped to a logical network name in the neutron config files first, probably via whatever deployment tool you are using  07:47
<jrosser> then use that logical name to create a provider network with the openstack CLI, with the setup you need  07:47
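(For reference, a minimal sketch of the mapping jrosser is describing, modelled on the linked linuxbridge guide; the interface name eth1 is an assumption, not something from dirtwash's setup:)

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2_type_flat]
    flat_networks = provider                        # logical name of the physical network

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [linux_bridge]
    physical_interface_mappings = provider:eth1     # eth1 is illustrative; use your LAN interface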
*** jab416171 has quit IRC07:48
<dirtwash> yeah, I understood that  07:48
<dirtwash> but I'm stuck with not being able to execute the openstack command  07:48
*** yaawang has joined #openstack07:48
<dirtwash> because it's not an existing binary  07:48
<dirtwash> deployment was via Kolla btw  07:48
<jrosser> I use openstack-ansible, so I would go to my “utility container” - I don’t know what the same construct is for kolla  07:49
<dirtwash> openstack network create...  <-- this just leads to "command not found"  07:49
<dirtwash> well, the docs say to execute it on the controller node  07:49
<jrosser> or you can install them anywhere with pip  07:49
<dirtwash> or did they mean the deployment instance?  07:50
*** calbers has quit IRC07:50
<dirtwash> I swear... IT and terms, everybody just makes up their own words  07:50
*** jklare has quit IRC07:50
*** CobHead has quit IRC07:50
*** Roedy86 has quit IRC07:50
*** jc__ has joined #openstack07:51
*** trident has quit IRC07:51
*** jc_ has quit IRC07:51
*** calbers has joined #openstack07:51
* dirtwash hates human languages  07:51
*** johanssone has quit IRC07:51
*** whyz has quit IRC07:51
*** Roedy has joined #openstack07:51
*** trident has joined #openstack07:51
*** jklare has joined #openstack07:52
*** johanssone has joined #openstack07:52
<jrosser> pip install python-openstackclient  07:53
<jrosser> from here: https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html  07:53
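(A sketch of the steps being discussed: install the client, source the RC file, then create the provider network and a subnet on it. The RC file path, the physical network name "provider", and the 192.168.1.0/24 addressing are assumptions for illustration:)

    pip install python-openstackclient
    source /etc/kolla/admin-openrc.sh        # path assumed; use wherever your RC file lives
    openstack network create --external \
        --provider-network-type flat --provider-physical-network provider \
        provider-net
    openstack subnet create --network provider-net \
        --subnet-range 192.168.1.0/24 --gateway 192.168.1.1 \
        --allocation-pool start=192.168.1.100,end=192.168.1.200 \
        provider-subnet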
<dirtwash> jrosser: yeah, already found it, thanks  07:56
<dirtwash> just trying to figure out where the config files I need to edit are  07:56
<dirtwash> the ml2 and linuxbridge_agent ones  07:56
<dirtwash> I figure I have to edit some kolla files to adjust those  07:56
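(With kolla-ansible the usual knob is neutron_external_interface in globals.yml on the deploy host, plus optional per-service overrides merged from /etc/kolla/config/; a sketch, with the interface name assumed:)

    # /etc/kolla/globals.yml
    neutron_external_interface: "eth1"

    # optional service-level overrides, e.g. /etc/kolla/config/neutron/ml2_conf.ini, then apply with:
    kolla-ansible -i <inventory> reconfigure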
*** damien_r has joined #openstack08:00
*** jangutter has joined #openstack08:04
*** msalo has quit IRC08:04
*** macz has joined #openstack08:12
*** rainmanjam has joined #openstack08:13
*** awalende has joined #openstack08:14
*** tesseract has joined #openstack08:16
*** macz has quit IRC08:16
*** miloa has joined #openstack08:17
*** tonythomas has joined #openstack08:17
*** bengates has joined #openstack08:22
*** alexmcleod has joined #openstack08:22
*** fsimonce has joined #openstack08:26
*** msalo has joined #openstack08:29
*** avivgta has joined #openstack08:35
*** msalo has quit IRC08:36
*** skyraven has joined #openstack08:40
*** rpittau|afk is now known as rpittau08:43
*** ivve has joined #openstack08:51
*** avivgt has joined #openstack08:54
*** skyraven has quit IRC08:55
*** msalo has joined #openstack09:03
*** citadelcore has joined #openstack09:10
<citadelcore> Hello. I recently deployed OpenStack with Cinder/Ceph as its storage backend and I am running into some issues deploying VMs  09:11
<citadelcore> It fails to allocate hosts because Placement thinks there is not enough storage to run them on  09:11
<citadelcore> Looking at the available hosts listed in Placement, it only shows their local OS disks and not the Ceph volume created by Cinder, which appears to be the cause of the problem  09:12
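(One way to see what Placement actually knows about each compute node, assuming the osc-placement CLI plugin is available; the resource amounts and provider UUID are placeholders:)

    pip install osc-placement
    openstack resource provider list
    openstack resource provider inventory list <provider-uuid>
    openstack allocation candidate list --resource VCPU=1 --resource MEMORY_MB=2048 --resource DISK_GB=20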
*** brokencycle has joined #openstack09:17
*** surpatil|lunch is now known as surpatil09:22
*** gmoro has joined #openstack09:32
*** random_yanek has quit IRC09:37
*** admin0 has quit IRC09:41
*** wpp has quit IRC09:41
*** do1jlr has quit IRC09:42
*** dasp has quit IRC09:42
*** do1jlr has joined #openstack09:42
*** dasp has joined #openstack09:43
*** yaawang has quit IRC09:45
*** yaawang has joined #openstack09:46
*** imega has joined #openstack09:48
*** suuuper has joined #openstack09:51
*** random_yanek has joined #openstack09:55
*** k_mouza has joined #openstack09:58
*** msalo has quit IRC10:00
*** msalo has joined #openstack10:01
*** wpp has joined #openstack10:03
*** rcernin has quit IRC10:06
*** pcaruana has joined #openstack10:08
*** msalo has quit IRC10:16
*** electrofelix has joined #openstack10:22
*** coboluxx has quit IRC10:22
*** msalo has joined #openstack10:23
*** fandi has joined #openstack10:25
<citadelcore> Is Placement even necessary? I've seen some bundles omit it completely  10:25
*** sshnaidm|afk is now known as sshnaidm10:26
*** chenhaw has quit IRC10:27
*** soniya29 has joined #openstack10:28
*** fandi has quit IRC10:34
*** lpetrut has joined #openstack10:34
*** brokencycle has quit IRC10:43
*** skyraven has joined #openstack10:51
*** rgogunskiy has joined #openstack10:58
*** skyraven has quit IRC11:06
<DHE> the placement API is mandatory as of the O or P release, I think...  11:11
<citadelcore> ah  11:13
<citadelcore> well, it's not doing its job :P  11:13
*** rpittau is now known as rpittau|bbl11:18
*** SurajPatil has joined #openstack11:22
<citadelcore> I can't deploy any servers, as I get back "No valid host was found."  11:23
<citadelcore> and looking at the logs, this is caused by Placement, because it thinks the only storage is the local disks of the hosts  11:23
*** surpatil has quit IRC11:24
*** msalo_ has joined #openstack11:24
<citadelcore> but my storage is Ceph RBD via Cinder...  11:24
*** msalo__ has joined #openstack11:25
*** msalo has quit IRC11:25
*** msalo has joined #openstack11:26
*** msalo_ has quit IRC11:29
*** msalo__ has quit IRC11:29
*** laurent\ has quit IRC11:30
*** laurent\ has joined #openstack11:31
*** bengates has quit IRC11:33
*** bengates has joined #openstack11:33
*** awalende_ has joined #openstack11:37
*** trident has quit IRC11:40
*** awalende has quit IRC11:41
*** msalo_ has joined #openstack11:42
*** trident has joined #openstack11:42
*** msalo_ has quit IRC11:44
*** msalo_ has joined #openstack11:45
*** msalo has quit IRC11:46
*** SurajPatil has quit IRC11:51
*** SurajPatil has joined #openstack11:51
*** lpetrut has quit IRC11:53
*** surpatil has joined #openstack11:54
*** bengates has quit IRC11:55
*** zbr_ has quit IRC11:55
*** mikecmpbll has joined #openstack11:56
*** bengates has joined #openstack11:56
*** zbr has joined #openstack11:56
*** SurajPatil has quit IRC11:57
*** gmoro has quit IRC11:59
*** gmoro has joined #openstack12:01
*** gmoro has quit IRC12:02
*** gmoro_ has joined #openstack12:02
<frickler> citadelcore: if you only use boot-from-volume, you could likely use flavors with a 0 GB root disk size. Otherwise you may want to configure nova to also place root disks on RBD  12:04
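(A sketch of the first suggestion: a flavor with a 0 GB root disk, so boot-from-volume instances don't request local DISK_GB from Placement; the flavor name and sizes are arbitrary. The second suggestion, putting root disks on RBD, is sketched after frickler's link further down:)

    openstack flavor create --vcpus 2 --ram 4096 --disk 0 m1.volume-backed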
*** msalo_ has quit IRC12:06
*** SurajPatil has joined #openstack12:06
*** msalo has joined #openstack12:06
*** gmoro__ has joined #openstack12:08
*** surpatil has quit IRC12:09
*** gmoro_ has quit IRC12:11
*** gmoro has joined #openstack12:11
*** SurajPatil is now known as surpatil12:12
*** gmoro__ has quit IRC12:13
*** gmoro_ has joined #openstack12:13
*** gmoro has quit IRC12:15
*** bengates has quit IRC12:15
*** bengates has joined #openstack12:16
*** msalo has quit IRC12:24
*** macz has joined #openstack12:33
*** msalo has joined #openstack12:33
*** gmoro has joined #openstack12:34
*** servagem has joined #openstack12:35
*** gmoro_ has quit IRC12:36
*** macz has quit IRC12:38
*** msalo has quit IRC12:38
*** chrizl has joined #openstack12:43
*** gmoro has quit IRC12:46
*** morazi has joined #openstack12:46
*** gmoro has joined #openstack12:46
*** msalo has joined #openstack12:49
*** msalo has quit IRC12:54
*** msalo has joined #openstack12:55
*** drewlander has joined #openstack13:02
*** skyraven has joined #openstack13:02
*** msalo has quit IRC13:11
*** dasp has quit IRC13:12
*** dasp has joined #openstack13:12
*** skyraven has quit IRC13:16
*** msalo has joined #openstack13:20
*** maddtux has quit IRC13:21
*** schwicht has joined #openstack13:23
*** msalo has quit IRC13:24
*** msalo has joined #openstack13:29
<citadelcore> frickler: yeah, I believe it should be set to boot from volume... but I'm thinking this could be some other issue  13:30
<citadelcore> as it was working when I performed a deploy in admin_domain  13:31
<citadelcore> but in a new project in a new domain I created, it has this issue  13:31
<citadelcore> just curious, how would I configure nova to put root disks on the RBD store?  13:32
*** lpetrut has joined #openstack13:40
*** mikecmpbll has quit IRC13:40
<frickler> citadelcore: hmm, the documentation on ceph.com looks broken; mainly something like this: https://cloud.garr.it/support/kb/cloud/rbd_backend_for_ehpemeral/  13:43
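(Roughly what that page configures: pointing nova's ephemeral/root disks at Ceph in nova.conf on the compute nodes. The pool, user, and secret values follow the usual Ceph-guide conventions and are assumptions here:)

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>
    # then restart nova-compute on each hypervisor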
<citadelcore> Ah cool, thanks  13:45
*** rpittau|bbl is now known as rpittau13:46
*** schwicht has quit IRC13:49
*** mikecmpbll has joined #openstack13:53
*** mcornea has joined #openstack13:57
<citadelcore> frickler: Thank you, that fixed the issue!  14:01
<citadelcore> It shows the correct size of the RBD volume in Placement and everything  14:02
*** leanderthal has joined #openstack14:06
*** schwicht has joined #openstack14:09
*** __cf has joined #openstack14:10
*** Goneri has joined #openstack14:12
*** tinwood_ is now known as tinwood14:13
*** strobert1 has quit IRC14:14
*** strobert1 has joined #openstack14:17
*** trident has quit IRC14:24
*** trident has joined #openstack14:25
*** jhesketh has quit IRC14:27
*** jhesketh has joined #openstack14:28
*** ab2434_ has joined #openstack14:32
*** pcaruana has quit IRC14:33
*** surpatil has quit IRC14:35
*** aconole has joined #openstack14:40
*** soniya29 has quit IRC14:46
*** aakarsh has joined #openstack14:56
*** igordc has joined #openstack14:57
*** aakarsh|2 has joined #openstack14:58
*** admin0 has joined #openstack14:59
*** aakarsh has quit IRC15:00
*** Lucas_Gray has joined #openstack15:01
*** pcaruana has joined #openstack15:04
*** aakarsh|2 has quit IRC15:06
*** rgogunskiy has quit IRC15:12
*** skyraven has joined #openstack15:12
*** lightstalker has quit IRC15:13
*** igordc has quit IRC15:17
*** oncall-pokemon has joined #openstack15:21
*** igordc has joined #openstack15:25
*** awalende_ has quit IRC15:25
*** awalende has joined #openstack15:26
*** skyraven has quit IRC15:27
*** mcornea has quit IRC15:28
*** igordc has quit IRC15:30
*** awalende has quit IRC15:30
*** miloa has quit IRC15:31
*** miloa has joined #openstack15:32
*** tesseract has quit IRC15:37
*** msalo has quit IRC15:41
*** jtomasek has quit IRC15:42
*** mcornea has joined #openstack15:42
*** tesseract has joined #openstack15:48
*** morazi has quit IRC15:52
*** jmlowe has quit IRC15:52
*** msalo has joined #openstack15:56
*** morazi has joined #openstack15:57
*** miloa has quit IRC16:00
*** miloa has joined #openstack16:00
*** miloa has quit IRC16:00
*** miloa has joined #openstack16:03
*** jmlowe has joined #openstack16:09
*** ivve has quit IRC16:15
*** jab416171 has joined #openstack16:17
*** gyee has joined #openstack16:19
*** msalo has quit IRC16:19
*** ymasson has joined #openstack16:25
*** jamesdenton has quit IRC16:25
*** avivgta has quit IRC16:26
*** jeremy_h has joined #openstack16:28
<jeremy_h> I'm trying to use web-download to import an image into glance, but it's stuck in the queued status. journalctl doesn't return errors from any service in the logs. Any idea what could cause this?  16:28
<jeremy_h> working with glance in devstack, stable/stein branches for both  16:29
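(For context, a hedged sketch of the web-download flow and the glance-api.conf options that commonly leave an import stuck in "queued" when misconfigured; the image name, staging path, and URI are placeholders, and the CLI assumes a reasonably recent python-glanceclient:)

    # glance-api.conf
    [DEFAULT]
    enabled_import_methods = [glance-direct,web-download]
    node_staging_uri = file:///tmp/staging/

    # two-step import via the interoperable image import API
    glance image-create --name test-img --disk-format qcow2 --container-format bare
    glance image-import <image-id> --import-method web-download --uri <http-url-of-image>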
*** electrofelix has quit IRC16:29
*** rainmanjam has quit IRC16:32
*** chrizl has quit IRC16:37
*** lpetrut has quit IRC16:38
*** damien_r has quit IRC16:40
*** leanderthal has quit IRC16:42
*** aakarsh has joined #openstack16:42
*** leanderthal has joined #openstack16:43
*** bengates has quit IRC16:43
*** mikecmpbll has quit IRC16:45
*** miloa has quit IRC16:45
*** suuuper has quit IRC16:45
*** ejat has joined #openstack16:46
*** avivgt has quit IRC16:46
*** jathan has joined #openstack16:58
*** jklare has quit IRC16:59
*** tesseract has quit IRC17:03
*** aedc has joined #openstack17:04
*** jklare has joined #openstack17:04
*** morazi has quit IRC17:11
*** morazi has joined #openstack17:11
*** rpittau is now known as rpittau|afk17:12
*** mikecmpbll has joined #openstack17:15
*** Lucas_Gray has quit IRC17:15
*** mikecmpbll has quit IRC17:20
*** skyraven has joined #openstack17:22
*** aconole has quit IRC17:24
*** aconole has joined #openstack17:28
*** dudek has joined #openstack17:36
*** skyraven has quit IRC17:36
*** jonaspaulo has joined #openstack17:38
*** links has quit IRC17:40
*** jhesketh has quit IRC17:42
*** jhesketh has joined #openstack17:44
*** rgogunskiy has joined #openstack17:55
*** rgogunskiy has quit IRC17:59
*** k_mouza has quit IRC18:00
*** renich has joined #openstack18:06
*** aedc has quit IRC18:09
*** jmlowe has quit IRC18:15
*** gagan662 has quit IRC18:15
*** igordc has joined #openstack18:15
*** jmlowe has joined #openstack18:15
*** sshnaidm is now known as sshnaidm|afk18:21
*** jmlowe has quit IRC18:24
*** jtomasek has joined #openstack18:25
*** awalende has joined #openstack18:27
*** k_mouza has joined #openstack18:30
*** awalende has quit IRC18:31
*** k_mouza has quit IRC18:36
*** mcornea has quit IRC18:36
*** elico has joined #openstack18:36
*** mcornea has joined #openstack18:36
*** skyraven has joined #openstack18:37
*** jmlowe has joined #openstack18:40
*** bobmel has joined #openstack18:44
*** bobmel has quit IRC18:49
*** tonythomas has quit IRC18:50
*** jeremy_h has quit IRC18:51
*** k_mouza has joined #openstack18:56
*** aakarsh|2 has joined #openstack18:56
*** damien_r has joined #openstack18:57
*** aakarsh has quit IRC18:59
*** gmann is now known as gmann_afk19:00
*** jamesdenton has joined #openstack19:00
*** sauvin has quit IRC19:02
*** laerlingSAP has joined #openstack19:02
*** engine20191 has joined #openstack19:16
*** awalende has joined #openstack19:20
*** morazi has quit IRC19:21
*** awalende has quit IRC19:24
*** ivve has joined #openstack19:28
*** aakarsh|3 has joined #openstack19:38
*** aakarsh|2 has quit IRC19:41
*** msalo has joined #openstack19:43
*** msalo has quit IRC19:46
*** iniazi_ has joined #openstack19:53
*** iniazi has quit IRC19:57
*** aakarsh|3 is now known as aakarsh19:58
*** laerlingSAP has quit IRC19:59
*** msalo has joined #openstack20:16
*** msalo has quit IRC20:19
*** msalo has joined #openstack20:20
*** gmann_afk is now known as gmann20:25
<citadelcore> Has anyone else experienced intermittent timeouts with various API services?  20:26
<citadelcore> I've traced it back to Keystone giving 503 errors once in a while...  20:26
<citadelcore> Nothing in the logs about it though  20:26
*** cyberworm54 has joined #openstack20:45
*** vesper11 has quit IRC20:49
*** vesper11 has joined #openstack20:51
*** avivgt has joined #openstack20:57
<umbSublime> citadelcore: Is that for long-running operations?  21:09
<umbSublime> How do you reproduce the issue? Is it only when talking to keystone, or once you've got your token and start using other services?  21:10
*** aedc has joined #openstack21:18
*** idlemind has quit IRC21:21
*** daniel1302 has joined #openstack21:21
<daniel1302> Hi :)  21:21
<daniel1302> I have installed MicroStack on Ubuntu Server 18.10, then ran sudo microstack.init. But now it shows "Waiting for 10.20.20.1:5672"  21:22
<daniel1302> And it has been showing that for the last 15 minutes  21:22
<daniel1302> How can I debug it?  21:22
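(Port 5672 is RabbitMQ, so the init script is waiting for the message queue to come up. A few generic checks, assuming the snap-packaged MicroStack; none of these commands are MicroStack-specific:)

    snap services microstack        # which services the snap runs, and their current state
    snap logs microstack            # recent logs from the snap's services
    ss -tlnp | grep 5672            # is anything listening on the rabbitmq port?
    ip addr | grep 10.20.20.1       # does the expected bridge address exist?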
*** idlemind has joined #openstack21:24
*** RickDeckard has joined #openstack21:26
*** engine20191 has quit IRC21:29
*** Goneri has quit IRC21:33
*** servagem has quit IRC21:33
*** dudek has quit IRC21:35
*** aedc has quit IRC21:40
*** aedc has joined #openstack21:41
<citadelcore> umbSublime: I believe it's after the token is acquired, as the errors seem to come from the various API services that are called...  21:41
<citadelcore> All of them have the same message, which is "The Keystone service is temporarily unavailable." with a 503 code  21:41
<citadelcore> And no, not for long-running operations... it just happens randomly  21:42
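(One way to narrow down where the 503 comes from is to probe keystone directly, bypassing any proxy or load balancer in front of it; the host and port are assumptions based on the default keystone endpoint:)

    curl -i http://<controller-ip>:5000/v3/    # keystone itself; should return a version document
    openstack token issue --debug              # full request/response trace against the catalog
    openstack catalog list                     # confirm which endpoints the clients are actually hitting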
*** skyraven has quit IRC21:43
*** aedc_ has joined #openstack21:43
<umbSublime> Do you have, say, many nova-api's running on different "controller" nodes?  21:44
<citadelcore> Nope... just one nova-api unit.  21:44
*** aakarsh has quit IRC21:46
<citadelcore> Also, it's worth noting I had a look at the keystone apache2 logs, and there don't seem to be any 503 errors in the request log.  21:46
*** aedc has quit IRC21:47
<umbSublime> citadelcore: how do you see this? from the openstack CLI?  21:52
<umbSublime> if so, can you share an example output  21:52
*** rgogunskiy has joined #openstack21:55
<citadelcore> It's most prevalent on the horizon dashboard, where it was first observed (lots of errors when navigating around), but I swear it happened in the CLI as well... am trying to reproduce that to make sure it isn't just the dashboard  21:58
*** aedc_ has quit IRC21:58
<citadelcore> https://i.imgur.com/aDm4G37.png  21:58
*** aedc has joined #openstack21:58
<citadelcore> This can happen with pretty much any endpoint it tries to make a request to; hell, I've even had it happen to the dashboard itself, where it would just straight up return a 503  21:59
*** awalende has joined #openstack22:00
*** rgogunskiy has quit IRC22:00
*** awalende has quit IRC22:04
*** pcaruana has quit IRC22:06
*** mcornea has quit IRC22:07
*** aedc_ has joined #openstack22:09
*** aedc has quit IRC22:11
*** msalo has quit IRC22:13
*** msalo has joined #openstack22:13
*** thorre has quit IRC22:14
*** skyraven has joined #openstack22:15
*** slaweq has quit IRC22:15
*** thorre has joined #openstack22:17
*** aedc_ has quit IRC22:18
*** msalo has quit IRC22:20
*** alexmcleod has quit IRC22:22
*** skyraven has quit IRC22:25
*** aakarsh has joined #openstack22:29
*** elico has quit IRC22:36
<ozzzo> citadelcore: how many nodes in your cluster? If it's a large number, maybe you're maxing out a keystone connection setting  22:47
*** bobmel has joined #openstack22:48
*** elico has joined #openstack22:51
*** elico has quit IRC22:52
*** elico has joined #openstack22:52
*** msalo has joined #openstack22:54
*** jab416171 has quit IRC22:56
*** rcernin has joined #openstack22:57
*** avivgt has quit IRC22:57
*** k_mouza has quit IRC23:01
*** aakarsh has quit IRC23:08
*** jonaspaulo has quit IRC23:10
*** epei has joined #openstack23:16
*** cyberworm54 has quit IRC23:17
*** epei has quit IRC23:19
<citadelcore> ozzzo: only four nova-compute nodes  23:20
<citadelcore> it's a small deployment  23:21
*** elico has quit IRC23:23
*** k_mouza has joined #openstack23:23
*** k_mouza has quit IRC23:23
*** epei has joined #openstack23:24
<citadelcore> https://i.imgur.com/fRNsYUN.png -> even horizon won't load sometimes  23:25
*** slaweq has joined #openstack23:25
*** ab2434_ has quit IRC23:27
*** fsimonce has quit IRC23:28
*** msalo has quit IRC23:30
*** slaweq has quit IRC23:31
<ozzzo> citadelcore: are you using a LB, or is .188 a controller?  23:32
<citadelcore> .188 is just the LXD container running the horizon dashboard.  23:32
*** epei has quit IRC23:33
*** msalo has joined #openstack23:35
*** msalo has quit IRC23:40
<citadelcore> ozzzo: it seems like there's a good chance it could be some performance issue, but I haven't been able to nail down what  23:41
<citadelcore> every time I do something as simple as try to download an image from glance for conversion, the whole node appears to fall over  23:42
<citadelcore> I am fairly certain it's not hitting resource limits... it's on an SSD, has a 10Gbps interconnect, and has plenty of RAM left  23:43
<citadelcore> (and by "fall over" I mean API services timing out and the containers being generally slow, up to the point where they can become unresponsive)  23:44
*** calbers has quit IRC23:44
*** satanist has quit IRC23:46
*** satanist has joined #openstack23:46
*** epei has joined #openstack23:47
*** ivve has quit IRC23:47
*** epei has quit IRC23:50
<citadelcore> Nvm. I think I might see the issue. The host is out of memory and it has also run out of swap  23:53
*** epei has joined #openstack23:55
*** calbers has joined #openstack23:55
<citadelcore> MySQL has spawned a shit-ton of processes  23:55
*** linuxdaemon has left #openstack23:56
<citadelcore> about 60 or so, and they're all demanding 700M each  23:57
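(A sketch of how one might confirm and bound this; what looks like ~60 mysqld processes is often a single process with many threads whose per-thread RSS figures overlap, so the checks below are usually more telling. The max_connections value is an arbitrary example:)

    mysql -e "SHOW VARIABLES LIKE 'max_connections';"
    mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
    ps -o pid,rss,cmd -C mysqld --sort=-rss | head

    # my.cnf / mariadb.cnf
    [mysqld]
    max_connections = 200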

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!