*** queria is now known as Guest614 | 02:25 | |
*** queria is now known as Guest616 | 02:31 | |
*** queria is now known as queria^afk | 06:43 | |
*** rpittau|afk is now known as rpittau | 07:28 | |
*** queria is now known as queria^afk | 07:37 | |
*** queria^afk is now known as queria | 07:42 | |
proximm | Hello. Can anyone tell me whether I can update a stack (with several servers) not all at once but one at a time? I read about rolling_updates, but it looks like it is only available in autoscaling. | 08:00 |
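For reference, Heat's rolling updates are not limited to autoscaling: OS::Heat::ResourceGroup also accepts a rolling_update policy, so a plain group of servers can be updated in batches. A minimal sketch, with illustrative names, sizes, and timings:

```yaml
heat_template_version: 2016-04-08

resources:
  server_group:
    type: OS::Heat::ResourceGroup
    update_policy:
      rolling_update:
        max_batch_size: 1   # update one server at a time
        min_in_service: 2   # keep at least two servers up throughout
        pause_time: 30      # seconds to wait between batches
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          name: web-%index%
          image: my-image     # placeholder image
          flavor: m1.small    # placeholder flavor
          networks:
            - network: private  # placeholder network
```

With max_batch_size set to 1, a stack update touches the group's servers one at a time instead of all at once.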
*** queria is now known as queria^afk | 09:02 | |
steule | Hi, I am trying to influence VM scheduling by setting a property in a glance image. The image has: "| properties | aggregate_instance_extra_specs:xxyz='xxx' |" and the compute aggregate has: "| properties | xxyz='xxx' |". For some reason the image property seems to be ignored during scheduling. Could someone point me to the correct way to set up scheduling to a specific aggregate based on an image property? Thanks a lot | 09:29 |
cz3 | steule: you need to add appropriate filters into nova configuration | 10:14 |
cz3 | steule: AggregateInstanceExtraSpecsFilter | 10:15 |
steule | this filter is already there | 10:15 |
steule | also the property works fine for flavor, however I need to make it work for image too | 10:16 |
cz3 | oh | 10:16 |
cz3 | that's another filter iirc | 10:16 |
cz3 | one sec | 10:16 |
cz3 | AggregateImagePropertiesIsolation - check this one | 10:17 |
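For context, a sketch of what the scheduler side of nova.conf could look like with that filter enabled; the filter list and the optional namespace settings below are assumptions, not steule's actual configuration:

```ini
[filter_scheduler]
# AggregateImagePropertiesIsolation compares image properties against
# host aggregate metadata; AggregateInstanceExtraSpecsFilter only looks
# at flavor extra_specs, which is why the flavor case already worked.
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ImagePropertiesFilter,AggregateInstanceExtraSpecsFilter,AggregateImagePropertiesIsolation

# Optional: restrict the isolation filter to namespaced aggregate keys,
# e.g. only metadata keys starting with "xxyz." would be considered.
#aggregate_image_properties_isolation_namespace = xxyz
#aggregate_image_properties_isolation_separator = .
```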
steule | cz3: thanks! I will give it a try | 10:21 |
steule | cz3: it seems it is bugged (https://bugs.launchpad.net/nova/+bug/1741810) and custom properties are not supported by the scheduler | 11:26 |
steule | cz3: anyway, thank you for pointing me to the correct way | 11:27 |
cz3 | steule: huh? that's weird. we have a similar use case (spawn windows instances in a host aggregate) | 11:27 |
cz3 | and it does work | 11:27 |
cz3 | lemme take a look at my deployment | 11:27 |
steule | cz3: thanks! any info is more than welcome | 11:28 |
*** queria^afk is now known as queria | 11:33 | |
*** Guest7261 is now known as marlinc | 11:38 | |
cz3 | steule: oh, I see - you are trying to use a *custom* property, as in not a predefined one like "os_type"? | 11:39 |
marlinc | Can anyone maybe give their insight on my question on Serverfault about database latency in Keystone? https://serverfault.com/questions/1078114/requesting-access-token-through-openstack-keystone-very-slow | 11:39 |
steule | cz3: yes, I wanted to use a custom one, but maybe I could go with some predefined property mentioned in the bug report. | 11:40 |
marlinc | TL;DR: Keystone seems to be doing about 100 database queries when requesting a token; together with 2-3 ms per query, this easily results in token requests that take at least 500 ms. Is it normal for Keystone to need this many queries for, for example, a token creation, and if so, how do others solve this? One way could be to always | 11:41 |
marlinc | connect to the database over localhost | 11:41 |
cz3 | steule: you can also try doing a copy/paste of https://opendev.org/openstack/nova/src/branch/master/nova/scheduler/filters/aggregate_image_properties_isolation.py into a separate filter which would match the exact metadata keys that you need | 11:41 |
cz3 | steule: I mean, copy/paste that into my_own_imageprops_filter.py and make it match your property explicitly | 11:42 |
cz3 | I'm not a fan of diverging from upstream code, but sometimes there's no choice | 11:43 |
cz3 | I think creating such a modified copy of the filter will be enough as far as the nova code goes; after that you just append it to enabled_filters (MyOwnImagePropsFilter, depending on how you name the class) | 11:47 |
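A minimal sketch of the kind of copied filter cz3 describes, keyed to the predefined os_distro property (per the bug linked above, custom image properties are dropped before they reach the scheduler, so a predefined field has to be used); it mirrors the shape of the upstream filter but is not the upstream code:

```python
# Hedged sketch, not upstream code: a reduced copy in the spirit of
# nova/scheduler/filters/aggregate_image_properties_isolation.py that
# matches a single predefined image property (os_distro as an example).
from nova.scheduler import filters
from nova.scheduler.filters import utils


class MyOwnImagePropsFilter(filters.BaseHostFilter):
    """Pass only hosts whose aggregates match the image's os_distro."""

    # Aggregate membership does not change within a single request.
    run_filter_once_per_request = True

    def host_passes(self, host_state, spec_obj):
        image_props = spec_obj.image.properties if spec_obj.image else None
        if image_props is None or not image_props.obj_attr_is_set('os_distro'):
            # The image does not request the property; accept any host.
            return True
        wanted = image_props.os_distro
        # Maps each metadata key to the set of values found across all
        # of the host's aggregates.
        metadata = utils.aggregate_metadata_get_by_host(host_state,
                                                        key='os_distro')
        # Rejecting hosts with no value at all mirrors steule's fix of
        # tagging every aggregate, so untagged hosts cannot match.
        return wanted in metadata.get('os_distro', set())
```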
cz3 | marlinc: is your database running from some sort of slow storage? | 11:48 |
cz3 | well, actually, scratch that, that's probably not the point. | 11:48 |
cz3 | marlinc: which OpenStack version are you using? | 11:49 |
marlinc | cz3 the database is running on SSD storage, but it's mostly the network latency and the number of queries that are the issue; this is currently on Ussuri (that's what images existed for when I set this up) | 11:50 |
cz3 | okay, so you are using fernet tokens | 11:50 |
marlinc | I checked the release notes to see if there were any performance-related changes, but I couldn't find much | 11:50 |
marlinc | Yes fernet tokens | 11:50 |
marlinc | Right now I am considering either moving the database locally or using slave_connection with a slave database to do as many reads as possible over a localhost connection | 11:54 |
marlinc | To avoid the 0.2 ms network hop | 11:54 |
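The option marlinc is referring to is oslo.db's slave_connection; a sketch of how it could look in keystone.conf, with placeholder hosts and credentials (note that only the queries keystone explicitly routes to the slave benefit from it):

```ini
[database]
# Writes, and any reads not routed to the slave, go to the cluster VIP:
connection = mysql+pymysql://keystone:SECRET@db-vip.example.com/keystone
# Reads routed to the slave can hit a local replica and skip the
# network hop:
slave_connection = mysql+pymysql://keystone:SECRET@localhost/keystone
```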
cz3 | Hm. I remember that I had a similar situation (long keystone queries), but I don't think we've ever done anything with that. | 11:56 |
cz3 | I have no clue about that right now | 11:57 |
cz3 | you can also try mailing openstack-discuss@ about that. there's a bit more movement on that mailing list and people are usually more responsive than here. | 11:57 |
marlinc | I will try the mailing list to see if there are some people who might know more | 12:00 |
marlinc | How are you running Keystone's database connection and what kind of request times are you seeing when requesting tokens? | 12:01 |
cz3 | openstack --debug token issue 0.74s user 0.07s system 46% cpu 1.734 total | 12:03 |
cz3 | as for the setup, nothing out of the ordinary - three MySQL VMs fronted by haproxy | 12:04 |
marlinc | Yeah exactly, MySQL over the network via a load balancer, so similar to this | 12:04 |
marlinc | And also a similar latency it seems | 12:04 |
cz3 | I can also try hammering the API for more reqs | 12:05 |
cz3 | sec | 12:05 |
cz3 | yeah 100 parallel token requests indeed make it slow | 12:12 |
cz3 | :-) | 12:13 |
cz3 | still running | 12:13 |
cz3 | https://www.irccloud.com/pastebin/NX47oHwS/ | 12:15 |
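For anyone who wants to reproduce this kind of measurement, a minimal sketch of issuing tokens in parallel against the Keystone v3 API; the endpoint, credentials, and concurrency level are placeholders:

```python
# Hedged sketch of a parallel token-issue benchmark; endpoint,
# credentials, and the concurrency level are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

KEYSTONE = "http://keystone.example.com:5000"  # placeholder endpoint
BODY = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {
                "name": "demo",                  # placeholder user
                "domain": {"id": "default"},
                "password": "SECRET",            # placeholder password
            }},
        },
        "scope": {"project": {"name": "demo",
                              "domain": {"id": "default"}}},
    }
}


def issue_token(_):
    # Time a single POST /v3/auth/tokens round trip.
    start = time.monotonic()
    resp = requests.post(f"{KEYSTONE}/v3/auth/tokens", json=BODY, timeout=30)
    resp.raise_for_status()
    return time.monotonic() - start


with ThreadPoolExecutor(max_workers=100) as pool:
    times = list(pool.map(issue_token, range(100)))

print(f"min {min(times):.3f}s  avg {sum(times) / len(times):.3f}s  "
      f"max {max(times):.3f}s")
```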
marlinc | Are you using an application credential to connect or something else? | 12:24 |
cz3 | username/password | 12:24 |
marlinc | Yeah, it seems like an application credential makes it a bit slower, that's why I was wondering | 12:25 |
marlinc | Thanks for your time cz3, it's definitely very helpful to hear the experience of someone else also running OpenStack | 12:45 |
cz3 | you're welcome! | 12:50 |
steule | cz3: Thank you, it seems to be working for me with the os_distro property. I just had to tag the other aggregate with a different value for this property, otherwise the VM was spawning there too. | 12:58 |
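A sketch of the tagging steule describes, with placeholder aggregate and image names; giving every aggregate its own value for the property is what keeps VMs from landing in an untagged one:

```console
$ openstack aggregate set --property os_distro=ubuntu agg-ubuntu
$ openstack aggregate set --property os_distro=windows agg-windows
$ openstack image set --property os_distro=ubuntu my-ubuntu-image
```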
*** arxcruz is now known as arxcruz|ruck | 13:14 | |
*** rpittau is now known as rpittau|afk | 15:45 | |
-opendevstatus- NOTICE: Zuul has been restarted in order to address a performance regression related to event processing; any changes pushed or approved between roughly 17:00 and 18:30 UTC should be rechecked if they're not already enqueued according to the Zuul status page | 18:35 | |
*** slaweq_ is now known as slaweq | 19:26 |