openstackgerrit | Rosario Di Somma proposed openstack-infra/shade master: WIP: Add pagination for the list_volumes call https://review.openstack.org/466927 | 00:29 |
*** jamielennox is now known as jamielennox|away | 01:54 | |
*** gkadam has joined #openstack-shade | 03:51 | |
*** jamielennox|away is now known as jamielennox | 04:23 | |
*** jamielennox is now known as jamielennox|away | 05:11 | |
*** jamielennox|away is now known as jamielennox | 05:51 | |
*** jamielennox is now known as jamielennox|away | 06:23 | |
*** ioggstream has joined #openstack-shade | 07:39 | |
*** openstackgerrit has quit IRC | 08:18 | |
*** cdent has joined #openstack-shade | 11:26 | |
*** gouthamr_ has joined #openstack-shade | 11:36 | |
*** cdent has quit IRC | 11:45 | |
*** cdent has joined #openstack-shade | 11:50 | |
*** gouthamr_ has quit IRC | 12:02 | |
*** noshankus has joined #openstack-shade | 12:08 | |
*** rods has quit IRC | 12:11 | |
noshankus | Hi guys, I'm trying to execute operator_cloud.get_compute_usage() from a remote machine, however it always seems to use the internal API service URL rather than the public one specified in clouds.yaml. When using openstack_cloud.get_compute_limits(), it works fine for the same cloud... any ideas? | 12:11
*** rods has joined #openstack-shade | 12:14 | |
noshankus | Looks like I can specify: operator_instance = shade.operator_cloud(cloud=cloud.name, interface='public'), which does return, but fails to find any resource usage | 12:17
mordred | noshankus: hrm. that definitely seems like a bug for sure - let me poke for a second | 12:19 |
mordred | hrm. both of those are still using novaclient under the covers - lemme go check to see if novaclient is doing something _weird_ | 12:20 |
mordred | noshankus: I do not see anything in the code anywhere that would cause the thing you're talking about - but this is openstack, that doesn't mean anything. :) I'm going to put together a quick local reproduction and see what's up - may ask you to run something in just a bit | 12:25 |
noshankus | @mordred Sure - thanks | 12:42 |
noshankus | So, looking at the code, interface=admin by default - setting to public as above gets a response, but I don't get any usage stats. Do I have to enable the gathering of usage stats somehow? | 12:44 |
mordred | noshankus: where are you seeing the interface=admin bit? that should only be for keystone v2 ... oh, wait - I'm looking at master | 12:45 |
mordred | I believe that part is a bug we fixed that we're about to release | 12:45 |
noshankus | I see... it looks like when I specify "interface: public" in my extra_config in clouds.yaml, openstack_cloud uses the parameter, but operator_cloud does not, so it must be specified explicitly | 12:46
noshankus | Yeah - in __init__'s operator_cloud, it checks for "interface" in kwargs - if it's missing, it sets it to admin | 12:48
noshankus | That's the difference I guess | 12:48 |
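A minimal sketch of the workaround being discussed, assuming hypothetical cloud and project names; passing interface='public' explicitly overrides the admin default that the operator_cloud factory sets when the kwarg is missing:

```python
import shade

# Without an explicit interface, shade.operator_cloud() falls back to
# interface='admin' (the factory-function bug discussed above).
cloud = shade.operator_cloud(cloud='mycloud', interface='public')

# Fetch usage for a project; 'myproject' is a placeholder name.
usage = cloud.get_compute_usage('myproject')
```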
*** jamielennox|away is now known as jamielennox | 12:49 | |
mordred | AHA! | 12:50 |
mordred | thank you | 12:50 |
mordred | that's totally a bug | 12:50 |
mordred | (turns out interface=admin is actually a thing that pretty much should never be used and is really only a thing for keystonev2. we fixed that elsewhere for this release, but I missed the factory function) | 12:51
noshankus | @mordred No problem - happy to help :) | 12:52 |
mordred | however - I'm definitely also getting empty responses back from the cloud I just checked - which is fair, since I've never booted a server in that project - let me check a different project | 12:54 |
noshankus | Still can't get any usage stats via operator_cloud.get_compute_usage() however.... Any ideas? It does hand off to nova_client | 12:54 |
*** openstackgerrit has joined #openstack-shade | 13:01 | |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Fix get_compute_usage normalization problem https://review.openstack.org/467226 | 13:01 |
mordred | noshankus: that should fix the interface issue and a different one that was causing the dict returned to be broken | 13:01 |
mordred | noshankus: I'm getting data on projects that had usage, and a bunch of 0s on a project that hasn't had any servers booted in it for the time period | 13:02 |
Shrews | hrm, we really have no difference between OpenStackCloud and OperatorCloud now | 13:04
Shrews | except for code modularization | 13:04 |
noshankus | @mordred - cool - I'll have a look and try it out... also noticed in "def get_compute_usage", "if not proj" -> "name=proj.id" should be "name_or_id" - can't get id of proj if proj doesn't exist :) | 13:05 |
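A sketch of the error path noshankus is pointing at (the real shade code may differ): when the project lookup fails, proj is None, so the message has to be built from the caller's name_or_id rather than proj.id:

```python
from shade import exc

def _require_project(cloud, name_or_id):
    proj = cloud.get_project(name_or_id)
    if not proj:
        # proj is None here, so proj.id would raise AttributeError;
        # use the name_or_id the caller passed in instead.
        raise exc.OpenStackCloudException(
            "project does not exist: {name}".format(name=name_or_id))
    return proj
```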
Shrews | mordred: +3'd if you want to redo the release stuff | 13:05 |
mordred | noshankus: haha. good catch. | 13:08 |
mordred | Shrews: and yah - as a general concept that turned out to be not nearly as useful as we may have originally thought | 13:08 |
mordred | Shrews: although at least the modularization puts documentation of functions into a different place so people don't think they can call them when they can't ... but we could also do that just with docstrings | 13:09
noshankus | @mordred do you actually get empty responses, or the message: "Unable to get resources usage for project:" | 13:12 |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Fix get_compute_limits error message https://review.openstack.org/467231 | 13:13 |
mordred | noshankus: I get empty responses from the cloud, and then I get a dict back from shade with a bunch of 0 values | 13:14
noshankus | Strange, I don't - I always get that message for each project. Do you happen to know if I need to enable resource usage gathering somewhere, or is it auto-built-in? | 13:15 |
mordred | I don't ... if you turn on debug logging, does it give you more info? | 13:25
*** cdent has quit IRC | 13:26 | |
mordred | noshankus: also - we'll be able to give a better error message once we convert this to REST - turns out there are useful error messages in the REST responses that we're not picking up via novaclient atm | 13:26
mordred | Shrews: mind doing https://review.openstack.org/#/c/467231/ too? | 13:28 |
Shrews | done | 13:30 |
noshankus | @mordred - cool thanks, how far out is the REST implementation? Also, don't forget the "def get_compute_usage", "if not proj" -> "name=proj.id" should be "name_or_id" patch :) | 13:31 |
rods | mordred looks like the default value for `osapi_max_limit` in devstack is set to 1000 (that's for cinder) - it's going to be hard to have a functional test for https://review.openstack.org/#/c/466927/ | 13:34
noshankus | @mordred - nothing special with debug - I can see it ran the task, but still unable to get resource usage. I've been trying some of the other "list_" functions but only some work, like "list_hypervisor", but not "list_endpoints" or "list_services" - running against Mitaka btw | 13:40 |
mordred | noshankus: that makes me sad :( | 13:43 |
mordred | rods: hrm. well - we could override that in our devstack job config I suppose | 13:44 |
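One way that override might look in a devstack local.conf (a sketch; the target config file and the value 20 are assumptions, see the later discussion about picking a number that only one dedicated test exceeds):

```ini
# Lower cinder's per-request page size so a functional test can
# exercise pagination without creating 1000 volumes.
[[post-config|$CINDER_CONF]]
[DEFAULT]
osapi_max_limit = 20
```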
mordred | noshankus: perhaps the user account you're using doesn't have access to those calls and we're not detecting/communicating that properly? | 13:46 |
noshankus | Hmm, am using the default "admin" account... In fact, when calling list_services() I get a "The resource could not be found. (HTTP 404)" - so maybe the API endpoints are off somehow? | 13:47 |
*** pabelanger has joined #openstack-shade | 13:51 | |
pabelanger | ohai | 13:51 |
pabelanger | mordred: so, is building caching logic for SSH keys something we'd consider for shade? Today, there is no easy way to update an existing SSH keypair | 13:52
*** rcarrillocruz has quit IRC | 13:55 | |
Shrews | pabelanger: what do you mean? you want to avoid the delete_keypair()/create_keypair() to update it? | 14:01 |
mordred | noshankus: for list_services - are you using keystone v2 or keystone v3? | 14:02 |
pabelanger | Shrews: since we switched nodepool to use keypairs for infra-root users, I've noticed an issue. We cannot update said keypair, infra-root-keys. So, if we need to rotate a user (because we store 8 keys in infra-root-keys), we have to delete / create the key again with shade (ansible-role-cloud-launcher). Which is okay, but as soon as we delete the key, nodepool will fail to create a server because infra-root-keys is missing | 14:04
pabelanger | so, looking for a way to do this without staging 4 patches and nodepool outages | 14:04
pabelanger | https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/openstack/os_keypair.py#L149 | 14:05 |
pabelanger | So, might be possible to update os_keypair in ansible to do this, just trying to see what options are | 14:06 |
Shrews | pabelanger: i'm not familiar with this nodepool change. is there a review up? spec? | 14:08 |
mordred | pabelanger: I mean, the real problem is that you can't update keypairs - you can only create/delete | 14:08 |
pabelanger | mordred: yes | 14:08 |
mordred | pabelanger: so what we'd ACTUALLY need to do is create a new keypair, update the nodepool config to use the new keypair, then delete the old keypair | 14:08
mordred | sadly | 14:08 |
Shrews | well, there's more than that. | 14:09 |
pabelanger | Shrews: https://review.openstack.org/#/c/455480/ | 14:09 |
Shrews | if you have nodes ready with the old keypair, they become unreachable. which is why i'm concerned about this change to nodepool | 14:09 |
pabelanger | we use this key for root user only, so nodepool will not be affected. It uses jenkins / zuul users | 14:10 |
rods | mordred ya, that should do it. Not sure what value we should set osapi_max_limit to though - I don't know how big the devstack instance is. Do you see value in a functional test for that change? should we wait to implement pagination support for all of the list calls? | 14:10
mordred | Shrews: good point - you actually need to do this twice | 14:10 |
mordred | you need a new keypair with both the old and new keys in it. you need to update the nodepool config with that key name - then you need to wait until all the nodes with the old key name are gone. then you can make a new keypair with only the new keys, then update the nodepool config, wait for servers to go away again, then you can delete the old and temp keypairs | 14:12
pabelanger | mordred: yes, that's how we'd need to do it today. Which is fine, just a lot of code churn | 14:12 |
mordred | it is - but there's no other way to do it - other than eating nodepool boot failures | 14:12 |
mordred | I'd honestly say just eat the boot failures | 14:12 |
mordred | and do the delete/re-create quickly | 14:12 |
Shrews | you can only supply a single keypair to nodepool, right? i must be missing something | 14:13 |
mordred | Shrews: the situation is no better for key rotation when the keys are baked into the images | 14:13
mordred | Shrews: right. that's why I mentioned updating the nodepool config | 14:13 |
pabelanger | If we can avoid updating nodepool.yaml, that would be good. I could live with boot failures for the create / delete window | 14:14
mordred | we'd ALSO need to be able to tell nodepool about more than one local private key - so that nodepool can have the old and the new key at the same time | 14:14 |
mordred | or- just update the nodepool copy of the private key when you update the keypairs | 14:14 |
pabelanger | why do we want nodepool to have both keys again? | 14:15 |
mordred | there'll be a little window - but test job operation will still work during that time | 14:15 |
mordred | pabelanger: wait - this is just the keypair with the infra-root keys in it, right? | 14:15
pabelanger | yes, for root SSH user | 14:16 |
mordred | pabelanger: or is nodepool also actually using key information from this for anything itself? | 14:16 |
pabelanger | no | 14:16 |
pabelanger | nodepool does not use this key | 14:16 |
mordred | gotcha. then yah - I think it's fine | 14:16 |
pabelanger | that key is still baked into image | 14:16 |
mordred | just eat the boot failures for the few seconds during the rotation | 14:16 |
pabelanger | Okay, so how would that look like | 14:17 |
mordred | nova will just return an error that it couldn't boot the node becaus of invalid key_name | 14:17 |
mordred | shade does not sanity check key name given to create_server | 14:18 |
pabelanger | 2 patches to cloud_layout.yaml? or add logic to os_keypair (ansible) to do the delete / create in a single operation | 14:18
pabelanger | otherwise, we have a 1hr (I think) window between cloud launcher runs | 14:19
mordred | pabelanger: GOTCHA - I understand your concern now | 14:19 |
*** rcarrillocruz has joined #openstack-shade | 14:19 | |
mordred | pabelanger: yes - I think adding logic to os_keypair that would simulate an "update" with a create/delete - we'll need to add a flag because that's a behavior change | 14:20
mordred | and we'd want people to opt-in to such a change | 14:20 |
mordred | Shrews: ^^ that make sense to you? | 14:20 |
pabelanger | Agree about flag | 14:20 |
mordred | so the normal os_keypair will go "hey, I've got one named that, nothing more to do" - with the flag, it'll go "I've got one, let me check to see if its key info matches what I was given, if it does, cool, if not I will delete the existing key and create a new one with the same name" | 14:21 |
pabelanger | state latest / updated, something along those lines | 14:21
mordred | yah. something like that | 14:21
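A sketch of the opt-in behavior just described, built on shade's existing keypair calls; the helper and flag names are hypothetical, not the real os_keypair module interface:

```python
import shade

def ensure_keypair(cloud, name, public_key, replace_on_change=False):
    existing = cloud.get_keypair(name)
    if existing is None:
        return cloud.create_keypair(name, public_key)
    if not replace_on_change or existing['public_key'] == public_key:
        # Default behavior: a keypair with that name exists, nothing to do.
        return existing
    # Nova keypairs cannot be updated in place, so simulate an update:
    # delete the old keypair and recreate it under the same name.
    cloud.delete_keypair(name)
    return cloud.create_keypair(name, public_key)

# Usage sketch: opt in to the replace behavior explicitly.
# cloud = shade.operator_cloud(cloud='mycloud')
# ensure_keypair(cloud, 'infra-root-keys', new_key_material,
#                replace_on_change=True)
```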
pabelanger | k, I'll add it to list of things to hack on | 14:21 |
Shrews | so is the zuul key baked into the image then? | 14:22 |
pabelanger | yes: http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/jenkins-slave/install.d/20-jenkins-slave#n22 | 14:23 |
Shrews | i guess it'd have to be. i didn't realize that | 14:23 |
pabelanger | I don't think we all agreed about moving that key to nova | 14:23 |
Shrews | ok, i think i'm now catching up. | 14:23 |
Shrews | mordred: yes, that makes sense. since nodepool retries launches anyway, hopefully there won't be too many launch failures. we could also increase that value | 14:25 |
Shrews | or even add some sort of incremental delay between attempts | 14:26 |
Shrews | pabelanger: ^^^, if you want something to hack on :) | 14:26 |
mordred | Shrews: ++ | 14:27 |
openstackgerrit | Merged openstack-infra/shade master: Fix get_compute_usage normalization problem https://review.openstack.org/467226 | 14:27 |
mordred | Shrews: you're going to love the patch I'm about to push - based on this morning's fun with noshankus | 14:27 |
mordred | I got annoyed by an interface | 14:27 |
*** cdent has joined #openstack-shade | 14:28 | |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Allow a user to submit start and end time as strings https://review.openstack.org/467257 | 14:28 |
mordred | Shrews: I didn't add a unit test because you can't mock datetime.datetime | 14:31
Shrews | mordred: problem | 14:31 |
Shrews | :) | 14:31 |
mordred | Shrews: once we convert that call to rest I believe it'll be easier to test the outcome in the rest call itself | 14:31 |
mordred | since the parameter passed to the rest call is a string | 14:31 |
mordred | Shrews: also - please enjoy how nova expects an iso datetime but will actually reject any iso datetime that includes a tz offset | 14:32
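A minimal sketch of the idea behind https://review.openstack.org/467257 (the accepted string format here is an assumption; the actual patch may parse differently): accept either a datetime or a string, and serialize without a tz offset since nova rejects one:

```python
import datetime

def to_nova_timestamp(value):
    # Accept a datetime or an ISO-style string (format is an assumption).
    if isinstance(value, str):
        value = datetime.datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
    # Nova wants an ISO datetime but rejects one that carries a tz
    # offset, so drop tzinfo before serializing.
    if value.tzinfo is not None:
        value = value.replace(tzinfo=None)
    return value.isoformat()
```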
*** gkadam has quit IRC | 14:33 | |
noshankus | @mordred - I was using v2.0 - switching over to v3, I now get BadRequest: Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) | 14:36
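That 400 is the usual symptom of keystone v3 auth missing domain scoping. A hypothetical clouds.yaml entry showing the domain fields (cloud name, URL, and credentials are placeholders):

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com/v3
      username: admin
      password: secret
      project_name: admin
      # keystone v3 requires domain scoping; leaving these out is what
      # typically produces "Expecting to find domain in project".
      user_domain_name: Default
      project_domain_name: Default
    identity_api_version: '3'
```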
mordred | noshankus: kk. so - there is a fix for list_services in latest os-client-config master and latest shade master | 14:40 |
mordred | noshankus: working on getting both released today so it should all work properly | 14:41 |
noshankus | @mordred - excellent - and your new patch showed me the light with get_compute_usage() - I was passing in a datetime as a string for the start date! +1 to the simplicity of being able to pass times in as strings :) | 14:43
noshankus | @mordred - thanks for all the help today - much appreciate your help and time! | 14:44
mordred | noshankus: yay! thanks for pointing out the issue- this was a real mess | 14:45 |
noshankus | @mordred - no problem, I'm in the middle of playing around with a lot of this stuff, so if, and as I find more, it's good to know you guys are here | 14:46 |
mordred | yes - please let us know - if you have problems, they're bugs :) | 14:46
noshankus | cool, will do | 14:46 |
openstackgerrit | Merged openstack-infra/shade master: Fix get_compute_limits error message https://review.openstack.org/467231 | 14:51 |
*** cdent has quit IRC | 14:51 | |
*** cdent has joined #openstack-shade | 14:52 | |
*** slaweq has joined #openstack-shade | 15:00 | |
*** slaweq has quit IRC | 15:52 | |
*** slaweq has joined #openstack-shade | 15:53 | |
*** slaweq has quit IRC | 15:58 | |
*** gouthamr has joined #openstack-shade | 16:35 | |
morgan | mordred: ok about to start replacing keystoneclient calls now that i am caught up on emails and such post vacation | 17:04 |
morgan | mordred: since i really need a break from de-mocking tests. | 17:04 |
*** ioggstream has quit IRC | 17:08 | |
*** slaweq has joined #openstack-shade | 17:17 | |
rods | mordred about the functional tests for https://review.openstack.org/#/c/466927/2, I wonder if we should wait to add pagination support to all of the list calls | 17:19 |
*** slaweq has quit IRC | 17:46 | |
*** slaweq has joined #openstack-shade | 17:46 | |
*** slaweq has quit IRC | 17:51 | |
*** slaweq has joined #openstack-shade | 18:28 | |
*** slaweq has quit IRC | 18:32 | |
mordred | morgan: awesome! | 18:49 |
mordred | morgan: so - before you get TOO far ... | 18:50 |
mordred | morgan: let's sync up on discovery ... I owe slaweq_ and rods a quick writeup on thoughts on how the discovery api-wg specs apply here ... | 18:50 |
mordred | although keystoneauth already implements version discovery for keystone - so you're going to be _mostly_ fine - or at least fine enough to start and be upwards compat | 18:51 |
mordred | but if I can get that braindump out real quick and make sure you agree - it might be good to keep an eye out for gotchas as you work | 18:51 |
morgan | sure | 19:00 |
morgan | i figured i'd work on the easier ksc-ectomy bits. since discovery is already in KSA, it wouldn't really be touched | 19:01
morgan | for now. | 19:01 |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Allow a user to submit start and end time as strings https://review.openstack.org/467257 | 19:06 |
mordred | morgan: ++ | 19:07 |
mordred | morgan: sounds great to me | 19:07 |
*** slaweq has joined #openstack-shade | 19:08 | |
*** slaweq has quit IRC | 19:24 | |
*** slaweq has joined #openstack-shade | 19:25 | |
openstackgerrit | Rosario Di Somma proposed openstack-infra/shade master: Add pagination for the list_volumes call https://review.openstack.org/466927 | 20:08 |
*** slaweq has quit IRC | 20:28 | |
*** slaweq has joined #openstack-shade | 20:29 | |
*** slaweq has quit IRC | 20:54 | |
*** slaweq has joined #openstack-shade | 20:55 | |
*** gouthamr has quit IRC | 21:05 | |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address https://review.openstack.org/467385 | 21:07 |
-openstackstatus- NOTICE: The logserver has filled up, so jobs are currently aborting with POST_FAILURE results; remediation is underway. | 21:23 | |
*** ChanServ changes topic to "The logserver has filled up, so jobs are currently aborting with POST_FAILURE results; remediation is underway." | 21:23 | |
*** slaweq has quit IRC | 21:30 | |
*** slaweq has joined #openstack-shade | 21:30 | |
*** slaweq has quit IRC | 21:35 | |
*** jroll has quit IRC | 21:47 | |
*** jroll has joined #openstack-shade | 21:47 | |
*** jroll has quit IRC | 21:49 | |
*** jroll has joined #openstack-shade | 21:53 | |
*** cdent has quit IRC | 22:09 | |
*** gouthamr has joined #openstack-shade | 22:21 | |
*** slaweq has joined #openstack-shade | 23:09 | |
*** slaweq has quit IRC | 23:15 | |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address https://review.openstack.org/467385 | 23:19 |
openstackgerrit | Monty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address https://review.openstack.org/467385 | 23:37 |
mordred | rods: so - I think yah, adding it to all of the list calls is likely a whole new effort we should think about. for the volumes, I think it would not be too hard to set osapi_max_limit to something super low like 2... or maybe something like 20 so that most of our tests don't hit it - but we could make one test that creates 30 tiny volumes and then lists them to test that pagination actually works, maybe? | 23:40
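A sketch of the functional test mordred describes; it assumes shade's functional test conventions (a self.user_cloud fixture) and that osapi_max_limit has been lowered to 20 as above:

```python
def test_list_volumes_pagination(self):
    # Create more volumes than fit in one page, then verify that
    # list_volumes() pages through and returns all of them.
    volumes = [
        self.user_cloud.create_volume(size=1, name='paging-%02d' % i)
        for i in range(30)
    ]
    listed_ids = {v['id'] for v in self.user_cloud.list_volumes()}
    for volume in volumes:
        self.assertIn(volume['id'], listed_ids)
```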