*** igordc has quit IRC | 00:10 | |
openstackgerrit | sean mooney proposed openstack/nova-specs master: resubmit image metadata prefiltering spec for ussuri https://review.opendev.org/683258 | 00:21 |
*** mriedem_away has quit IRC | 00:22 | |
*** gyee has quit IRC | 00:28 | |
*** elod has quit IRC | 00:29 | |
*** mmethot_ has quit IRC | 00:40 | |
*** mmethot has joined #openstack-nova | 00:40 | |
*** mmethot has quit IRC | 00:45 | |
*** TxGirlGeek has quit IRC | 00:45 | |
*** mmethot has joined #openstack-nova | 00:45 | |
*** markvoelker has joined #openstack-nova | 00:46 | |
*** mmethot has quit IRC | 00:46 | |
*** mmethot has joined #openstack-nova | 00:47 | |
*** elod has joined #openstack-nova | 00:49 | |
*** markvoelker has quit IRC | 00:50 | |
*** mmethot has quit IRC | 00:55 | |
*** mmethot_ has joined #openstack-nova | 00:55 | |
*** Tianhao_Hu has joined #openstack-nova | 01:06 | |
*** Tianhao_Hu has left #openstack-nova | 01:06 | |
*** mlavalle has quit IRC | 01:09 | |
*** larainema has quit IRC | 01:15 | |
*** takashin has left #openstack-nova | 01:30 | |
*** yedongcan has joined #openstack-nova | 01:37 | |
*** ricolin has joined #openstack-nova | 01:55 | |
*** boxiang has joined #openstack-nova | 02:13 | |
*** tbachman has quit IRC | 02:24 | |
*** mkrai_ has joined #openstack-nova | 02:45 | |
*** mkrai_ has quit IRC | 02:50 | |
*** tinwood has quit IRC | 02:50 | |
*** tinwood has joined #openstack-nova | 02:52 | |
*** boxiang has quit IRC | 02:54 | |
*** zhubx has joined #openstack-nova | 02:54 | |
*** mkrai has joined #openstack-nova | 02:57 | |
*** cfriesen has quit IRC | 02:59 | |
*** ricolin_ has joined #openstack-nova | 03:04 | |
*** ricolin has quit IRC | 03:06 | |
*** zhubx has quit IRC | 03:13 | |
*** boxiang has joined #openstack-nova | 03:13 | |
*** TxGirlGeek has joined #openstack-nova | 03:27 | |
*** psachin has joined #openstack-nova | 03:35 | |
*** dave-mccowan has quit IRC | 04:19 | |
*** BjoernT has joined #openstack-nova | 04:28 | |
*** BjoernT_ has joined #openstack-nova | 04:32 | |
*** BjoernT has quit IRC | 04:33 | |
*** ratailor has joined #openstack-nova | 04:40 | |
*** _mmethot_ has joined #openstack-nova | 04:41 | |
*** mmethot_ has quit IRC | 04:41 | |
*** Luzi has joined #openstack-nova | 05:01 | |
*** BjoernT_ has quit IRC | 05:02 | |
*** BjoernT has joined #openstack-nova | 05:04 | |
*** BjoernT has quit IRC | 05:05 | |
*** TxGirlGeek has quit IRC | 05:07 | |
*** BjoernT has joined #openstack-nova | 05:07 | |
*** BjoernT has quit IRC | 05:11 | |
*** TxGirlGeek has joined #openstack-nova | 05:11 | |
*** BjoernT has joined #openstack-nova | 05:12 | |
*** pcaruana has joined #openstack-nova | 05:28 | |
*** BjoernT has quit IRC | 05:29 | |
*** yedongcan has quit IRC | 05:38 | |
*** pcaruana has quit IRC | 05:53 | |
*** ricolin_ is now known as ricolin | 05:54 | |
*** jawad_axd has joined #openstack-nova | 05:58 | |
*** jawad_ax_ has joined #openstack-nova | 06:02 | |
*** jawad_axd has quit IRC | 06:03 | |
*** TxGirlGeek has quit IRC | 06:08 | |
*** slaweq has joined #openstack-nova | 06:19 | |
*** psachin has quit IRC | 06:23 | |
*** xek has joined #openstack-nova | 06:36 | |
*** psachin has joined #openstack-nova | 06:38 | |
*** tetsuro has joined #openstack-nova | 06:44 | |
*** tetsuro has quit IRC | 06:47 | |
*** zbr is now known as zbr|ruck | 06:48 | |
*** tetsuro has joined #openstack-nova | 06:48 | |
*** trident has quit IRC | 06:49 | |
*** luksky has joined #openstack-nova | 06:50 | |
*** tetsuro has quit IRC | 06:51 | |
*** markvoelker has joined #openstack-nova | 06:57 | |
*** trident has joined #openstack-nova | 07:01 | |
*** markvoelker has quit IRC | 07:01 | |
*** cshen has joined #openstack-nova | 07:02 | |
*** tetsuro has joined #openstack-nova | 07:03 | |
*** trident has quit IRC | 07:07 | |
*** damien_r has joined #openstack-nova | 07:08 | |
*** maciejjozefczyk has joined #openstack-nova | 07:11 | |
*** ccamacho has joined #openstack-nova | 07:13 | |
*** zhubx has joined #openstack-nova | 07:13 | |
*** boxiang has quit IRC | 07:14 | |
*** awalende has joined #openstack-nova | 07:17 | |
*** trident has joined #openstack-nova | 07:17 | |
*** udesale has joined #openstack-nova | 07:20 | |
*** rcernin has quit IRC | 07:29 | |
*** rpittau|afk is now known as rpittau | 07:31 | |
*** zhubx has quit IRC | 07:32 | |
*** zhubx has joined #openstack-nova | 07:32 | |
*** tbachman has joined #openstack-nova | 07:42 | |
*** ralonsoh has joined #openstack-nova | 07:44 | |
*** ttsiouts has joined #openstack-nova | 07:45 | |
*** ivve has joined #openstack-nova | 07:45 | |
*** tbachman has quit IRC | 07:47 | |
*** huth has joined #openstack-nova | 07:49 | |
*** huth has left #openstack-nova | 07:49 | |
*** tkajinam has quit IRC | 08:09 | |
*** avolkov has joined #openstack-nova | 08:10 | |
*** cshen has quit IRC | 08:27 | |
*** cshen has joined #openstack-nova | 08:44 | |
*** tetsuro has quit IRC | 08:45 | |
*** cshen has quit IRC | 08:49 | |
*** mkrai has quit IRC | 08:51 | |
*** ociuhandu has joined #openstack-nova | 08:52 | |
*** dtruong has quit IRC | 08:54 | |
*** rcernin has joined #openstack-nova | 08:55 | |
luyao | efried_pto, stephenfin: Hi, could you help review the vpmems doc if you get time, since you are both familiar with the vpmems series? :D I think I have the content close to done; it focuses on the current functionality. https://review.opendev.org/#/c/680300. | 08:59 |
luyao | And the patch 'objects: use all_things_equal from objects.base' is ready to merge; it needs a +W: https://review.opendev.org/#/c/681397/13 | 09:02 |
*** mkrai has joined #openstack-nova | 09:05 | |
*** cshen has joined #openstack-nova | 09:05 | |
*** tetsuro has joined #openstack-nova | 09:05 | |
openstackgerrit | Sylvain Bauza proposed openstack/nova master: Add a prelude for the Train release https://review.opendev.org/683327 | 09:07 |
*** cshen has quit IRC | 09:10 | |
*** pcaruana has joined #openstack-nova | 09:12 | |
*** igordc has joined #openstack-nova | 09:15 | |
*** igordc has quit IRC | 09:20 | |
*** derekh has joined #openstack-nova | 09:27 | |
*** dtantsur|afk is now known as dtantsur | 09:27 | |
*** ociuhandu has quit IRC | 09:30 | |
*** ociuhandu_ has joined #openstack-nova | 09:30 | |
*** luksky has quit IRC | 09:37 | |
*** cshen has joined #openstack-nova | 09:38 | |
*** igordc has joined #openstack-nova | 09:39 | |
*** rcernin has quit IRC | 09:49 | |
*** zhongjun2__ has joined #openstack-nova | 09:50 | |
*** zhongjun2__ has quit IRC | 09:50 | |
*** AdamMork has joined #openstack-nova | 09:53 | |
*** tetsuro has quit IRC | 09:55 | |
*** sean-k-mooney has quit IRC | 09:57 | |
*** sean-k-mooney has joined #openstack-nova | 09:58 | |
AdamMork | Hello, friends! I have an OpenStack deployment with 2 controllers and 4 compute nodes. I need to enable AES-NI for one or more VMs. OpenStack was deployed via kolla and runs in Docker containers (nova container, neutron container, etc. on the controllers). I read this manual: https://software.intel.com/en-us/articles/openstack-epa-feature-breakdown-and-analysis . On the | 09:59 |
AdamMork | compute node I enabled AES-NI in the BIOS, but inside the VM the command (cat /proc/cpuinfo | grep aes) does not show aes support. How do I configure nova to enable the aes instructions? | 09:59 |
*** jaosorior has quit IRC | 10:03 | |
*** tetsuro has joined #openstack-nova | 10:05 | |
*** ociuhandu_ has quit IRC | 10:07 | |
*** ociuhandu has joined #openstack-nova | 10:07 | |
*** ttsiouts has quit IRC | 10:11 | |
*** ttsiouts has joined #openstack-nova | 10:12 | |
*** tetsuro has quit IRC | 10:13 | |
*** brinzhang has quit IRC | 10:14 | |
*** ociuhandu has quit IRC | 10:14 | |
*** tetsuro has joined #openstack-nova | 10:14 | |
*** ttsiouts has quit IRC | 10:17 | |
*** ociuhandu has joined #openstack-nova | 10:18 | |
*** luksky has joined #openstack-nova | 10:22 | |
*** ociuhandu has quit IRC | 10:22 | |
AdamMork | ok! How can I modify the nova config? "The Nova* libvirt driver takes its configuration information from a section in the main Nova file /etc/nova/nova.conf." I have docker containers, so how do I find /etc/nova/nova.conf? Should I connect to the nova container? | 10:28 |
*** tetsuro has quit IRC | 10:30 | |
*** ociuhandu has joined #openstack-nova | 10:32 | |
*** artom has quit IRC | 10:32 | |
*** ociuhandu has quit IRC | 10:33 | |
*** ociuhandu has joined #openstack-nova | 10:34 | |
*** ttsiouts has joined #openstack-nova | 10:35 | |
*** ociuhandu has quit IRC | 10:39 | |
*** sapd1_x has joined #openstack-nova | 10:40 | |
*** ociuhandu has joined #openstack-nova | 10:40 | |
*** brault has joined #openstack-nova | 10:47 | |
*** mkrai has quit IRC | 10:52 | |
*** mkrai_ has joined #openstack-nova | 10:52 | |
*** pcaruana has quit IRC | 10:56 | |
*** panda is now known as panda|lunch | 11:03 | |
*** artom has joined #openstack-nova | 11:05 | |
*** ccamacho has quit IRC | 11:21 | |
*** igordc has quit IRC | 11:26 | |
*** ociuhandu has quit IRC | 11:27 | |
*** jaosorior has joined #openstack-nova | 11:27 | |
AdamMork | ok. I read more manuals and understand now: in globals.yml I need to uncomment node_custom_config: "/etc/kolla/config" and override the base config for nova (as of now kolla only supports config overrides for ini based configs): https://github.com/openstack/kolla-ansible/blob/master/doc/source/admin/advanced-configuration.rst THX | 11:28 |
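A minimal sketch of what such an override could look like, assuming kolla-ansible's per-service node_custom_config layout; the file path and the choice of cpu_mode value are assumptions to adapt to the deployment, while `cpu_mode` itself is the `[libvirt]` option in nova.conf that controls which CPU model (and therefore which flags, such as aes) guests see:

```ini
# Hypothetical /etc/kolla/config/nova/nova-compute.conf override; verify the
# path against your kolla-ansible version's node_custom_config layout.
[libvirt]
# host-passthrough exposes the host CPU (including the aes flag) to guests;
# host-model is a somewhat more live-migration-friendly alternative.
cpu_mode = host-passthrough
```

After a `kolla-ansible reconfigure` of the compute nodes and a hard reboot of the guest, `grep aes /proc/cpuinfo` inside the VM should then show the flag.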
*** pcaruana has joined #openstack-nova | 11:29 | |
*** AdamMork has quit IRC | 11:40 | |
*** pcaruana has quit IRC | 11:42 | |
*** pcaruana has joined #openstack-nova | 11:42 | |
*** jaosorior has quit IRC | 11:47 | |
*** udesale has quit IRC | 11:52 | |
*** udesale has joined #openstack-nova | 11:54 | |
*** mgariepy has joined #openstack-nova | 12:01 | |
*** artom has quit IRC | 12:04 | |
*** ociuhandu has joined #openstack-nova | 12:06 | |
*** ociuhandu has quit IRC | 12:06 | |
*** markvoelker has joined #openstack-nova | 12:07 | |
*** ociuhandu has joined #openstack-nova | 12:07 | |
*** mrch_ has quit IRC | 12:08 | |
*** mrch_ has joined #openstack-nova | 12:10 | |
*** sapd1_x has quit IRC | 12:11 | |
*** panda|lunch is now known as panda | 12:16 | |
*** ttsiouts has quit IRC | 12:18 | |
*** ttsiouts has joined #openstack-nova | 12:19 | |
*** ttsiouts has quit IRC | 12:23 | |
*** ratailor has quit IRC | 12:25 | |
*** psachin has quit IRC | 12:28 | |
*** awalende has quit IRC | 12:36 | |
*** jaosorior has joined #openstack-nova | 12:38 | |
*** tetsuro has joined #openstack-nova | 12:39 | |
*** ociuhandu has quit IRC | 12:40 | |
*** ociuhandu has joined #openstack-nova | 12:45 | |
*** ociuhandu has quit IRC | 12:45 | |
*** ociuhandu has joined #openstack-nova | 12:46 | |
*** mkrai_ has quit IRC | 12:46 | |
*** mkrai has joined #openstack-nova | 12:47 | |
*** mriedem has joined #openstack-nova | 12:52 | |
*** dave-mccowan has joined #openstack-nova | 12:54 | |
mriedem | gibi: i saw your resize reschedule rpc pin bug, do you know if that's a new regression in train? | 12:54 |
gibi | mriedem: not yet. I'm about to push a reproduction patch, then I will look into when the fault was introduced | 12:55 |
gibi | it is clear that the problem is with the legacy request spec in prep_resize ending up in the conductor during a re-schedule with rpc pinned to 5.0 | 12:55 |
*** eharney has joined #openstack-nova | 12:56 | |
*** rcernin has joined #openstack-nova | 12:56 | |
*** ociuhandu has quit IRC | 12:57 | |
*** ociuhandu has joined #openstack-nova | 12:57 | |
*** Luzi has quit IRC | 12:58 | |
gibi | mriedem: this is where it blows https://github.com/openstack/nova/blob/9b2e00e015f22b2d876cd3c239af8e139040c8c8/nova/conductor/manager.py#L327 | 12:58 |
mriedem | i know that in compute rpc api 5.1 we send the RequestSpec to compute https://opendev.org/openstack/nova/src/branch/master/nova/compute/rpcapi.py#L833 | 12:58 |
mriedem | but backlevel it to the dict form if we can't | 12:58 |
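As a rough paraphrase (helper name is made up and the real logic lives inline in prep_resize() in nova/compute/rpcapi.py), the backlevel pattern mriedem is pointing at boils down to:

```python
def _spec_for_prep_resize(client, request_spec):
    """Pick the compute RPC version and payload for prep_resize: send the
    RequestSpec object on 5.1+, otherwise down-convert it to the legacy
    dict that a 5.0-pinned compute understands (sketch only)."""
    if client.can_send_version('5.1'):
        return '5.1', request_spec
    return '5.0', request_spec.to_legacy_request_spec_dict()
```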
mriedem | gibi: ah yeah - that's my patch | 12:59 |
mriedem | remember? | 12:59 |
mriedem | https://review.opendev.org/#/c/680762/ | 12:59 |
mriedem | so if you have a recreate we can lay that on top | 13:00 |
gibi | mriedem: ohh you have a fix for it. cool. Yeah I will push a functional repro soon and then you can rebase | 13:00 |
gibi | or I can rebase | 13:00 |
gibi | your series | 13:00 |
mriedem | you can rebase it, | 13:01 |
mriedem | also, i've duplicated your bug against 1843090 | 13:01 |
gibi | mriedem: thanks. I will update my patch accordingly | 13:01 |
gibi | mriedem: I reached this bug while trying to create a func test for the bandwidth case when rpc is pinned | 13:01 |
gibi | mriedem: it seems there will also be extra issues | 13:02 |
mriedem | yeah it was this patch that made me think of it in your bw series for prep_resize where you were using the request spec | 13:02 |
gibi | mriedem: btw do you know about a bug regarding booting an instance with --availability-zone az:node and then migrating it? I saw that in this case nova does not try to re-schedule the migration if the first dest host fails in prep_resize | 13:03 |
gibi | it seems that the filter_properties are not populated for re-schedule | 13:03 |
gibi | if I boot the server with the new host parameter then the re-schedule works | 13:04 |
mriedem | gibi: that's working as designed https://opendev.org/openstack/nova/src/branch/master/nova/scheduler/utils.py#L801 | 13:05 |
mriedem | using az::node will set force_nodes | 13:05 |
mriedem | do you see "Re-scheduling is disabled due to forcing a host" in the logs? | 13:06 |
gibi | mriedem: but forcing the host during the boot allows me to migrate it without dest host specified, but it does not allow nova to re-schedule | 13:06 |
gibi | during that migrate | 13:06 |
gibi | mriedem: I have to go back and re-create the situation as I saw this while I created the func test for the rpc pin bug. | 13:07 |
mriedem | gibi: which release? https://review.opendev.org/#/q/I3f488be6f3c399f23ccf2b9ee0d76cd000da0e3e | 13:07 |
gibi | mriedem: master :) | 13:07 |
mriedem | oh nvm that's ignore_hosts | 13:07 |
mriedem | so i think force_hosts/nodes gets persisted for some reason, i'm not sure why, but for every move operation we call this to basically unset those fields https://opendev.org/openstack/nova/src/branch/master/nova/objects/request_spec.py#L692 | 13:08 |
mriedem | it would be simpler if we just didn't persist those values | 13:08 |
mriedem | gibi: so maybe we're calling ^ *after* populate_retry | 13:09 |
*** ociuhandu has quit IRC | 13:09 | |
mriedem | yup https://opendev.org/openstack/nova/src/branch/master/nova/conductor/tasks/migrate.py#L293 | 13:10 |
*** spatel has joined #openstack-nova | 13:11 | |
gibi | mriedem: so is the order of populate_retry and reset wrong there? | 13:11 |
mriedem | it appears so | 13:11 |
gibi | mriedem: OK I will create a reproduction for that as well | 13:12 |
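The shape of the reordering being discussed, as a hedged sketch against MigrationTask._execute in nova/conductor/tasks/migrate.py; this shows the intent only, the actual patch may look different and the surrounding code is elided:

```python
def _execute(self):
    # Clear any force_hosts/force_nodes persisted on the RequestSpec at
    # boot time *before* the legacy filter_properties are built, so that
    # populate_retry() below does not see them and disable rescheduling
    # for this migration.
    self.request_spec.reset_forced_destinations()

    legacy_props = self.request_spec.to_legacy_filter_properties_dict()
    scheduler_utils.setup_instance_group(self.context, self.request_spec)
    scheduler_utils.populate_retry(legacy_props, self.instance.uuid)
    # ... rest of the method unchanged ...
```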
mriedem | i'm checking if that's a regression | 13:12 |
*** ociuhandu has joined #openstack-nova | 13:13 | |
*** brault has quit IRC | 13:13 | |
mriedem | gibi: looks like it's been that way since newton https://review.opendev.org/#/c/284974/18/nova/conductor/tasks/migrate.py | 13:13 |
gibi | mriedem: then nobody cares :D | 13:14 |
mriedem | ha | 13:14 |
gibi | mriedem: btw there are re-schedule func tests that use az:node boot, so those will start behaving differently if we fix this | 13:14 |
mriedem | or nobody just ever thought to ask if that's wrong | 13:14 |
mriedem | dansmith: bauzas: can you think of any reason why we persist RequestSpec.force_hosts/nodes? | 13:15 |
mriedem | like many of the other request spec fields, persisting that seems to only cause headaches | 13:15 |
bauzas | PEBKAC ? | 13:15 |
bauzas | I thought we have a method for this | 13:16 |
mriedem | i understand that when request spec was originally written everything was persisted, but we've rolled back a lot of that piece by piece as we find problems | 13:16 |
*** ociuhandu has quit IRC | 13:16 | |
mriedem | bauzas: yeah, but that's kind of ... dumb, right? | 13:16 |
bauzas | mriedem: https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L692 | 13:16 |
*** ociuhandu has joined #openstack-nova | 13:16 | |
mriedem | we could just not persist the damn field and then we wouldn't have to explicitly reset for every move operation | 13:16 |
bauzas | mriedem: yeah, this | 13:16 |
*** jawad_ax_ has quit IRC | 13:16 | |
bauzas | mriedem: when alaski provided the persistence for the spec, he and I forgot to discuss this :( | 13:17 |
bauzas | so we created this method | 13:17 |
*** jawad_axd has joined #openstack-nova | 13:17 | |
mriedem | ok but going forward, to avoid new ways of shooting off our toes, we could just stop persisting force_hosts/nodes and only rely on that method as a workaround for existing request specs | 13:18 |
mriedem | these are the fields i count that we've changed to not be persisted on the requestspec: ignore_hosts, requested_destination, retry, network_metadata, requested_resources | 13:19 |
mriedem | the latter 2 are newer and were not persisted since they were introduced, but the first 3 were all retroactive | 13:20 |
mriedem | oh and the instance_group members/hosts https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L614 | 13:20 |
mriedem | which reminds me, if any stable core cares about pike and ocata https://review.opendev.org/#/q/topic:bug/1830747+status:open | 13:21 |
bauzas | mriedem: cool with this | 13:21 |
*** jawad_axd has quit IRC | 13:22 | |
bauzas | mriedem: I was also thinking about some way to say which specific fields shouldn't be persisted | 13:22 |
bauzas | mriedem: and sure, will look at the stable changes | 13:22 |
*** mrch_ has quit IRC | 13:22 | |
mriedem | gibi: as for existing functional resize reschedule tests using az::node, i'd have to see the impacts to assess really | 13:23 |
bauzas | mriedem: efried_pto: btw. https://review.opendev.org/#/c/683327/ | 13:23 |
mriedem | but if we ignore the forced host/node to boot the server during resize scheduling, it seems we should allow rescheduling to alternates | 13:24 |
gibi | mriedem: sure. we will see when I manage to write the reproduction and reorder the populate_retry call | 13:26 |
mriedem | stephenfin: are you working on an admin guide docs patch for pcpu? | 13:27 |
stephenfin | mriedem: yup | 13:27 |
stephenfin | atm, in fact | 13:27 |
stephenfin | should have something ready to go before EOD | 13:27 |
mriedem | bauzas: notes on the prelude patch | 13:35 |
bauzas | cool | 13:36 |
*** zhubx has quit IRC | 13:37 | |
*** zhubx has joined #openstack-nova | 13:37 | |
*** zhubx has quit IRC | 13:38 | |
mriedem | weee https://zuul.opendev.org/t/openstack/build/e3f767dbd32247ffa5aa7daa79e0e5af/log/job-output.txt#32748 | 13:40 |
openstackgerrit | Balazs Gibizer proposed openstack/nova master: Func test for migrate reschedule with pinned compute rpc https://review.opendev.org/683385 | 13:40 |
gibi | mriedem: reproduction for 1843090 ^^ | 13:41 |
gibi | mriedem: now I'm going to do the rebase of your fix on it | 13:41 |
mriedem | gibi: when you rebase you might as well strike whatever word efried_pto didn't like from my comment | 13:42 |
gibi | mriedem: sure thing | 13:42 |
*** BjoernT has joined #openstack-nova | 13:45 | |
*** cshen has quit IRC | 13:46 | |
*** mkrai has quit IRC | 13:48 | |
*** mkrai_ has joined #openstack-nova | 13:48 | |
*** tbachman has joined #openstack-nova | 13:48 | |
*** tetsuro has quit IRC | 13:49 | |
*** JamesBenson has joined #openstack-nova | 13:52 | |
mriedem | gibi: comments in your functional recreate patch | 13:52 |
*** artom has joined #openstack-nova | 13:54 | |
gibi | mriedem: thanks. regarding the request_spec ugliness: I can cook up something that is test only. But sooner or later we (I) need to go back and fix this ugliness as it is now needed in two places (as noted) | 13:54 |
*** macz has joined #openstack-nova | 13:55 | |
gibi | mriedem: the rest of your comments are valid and clear. | 13:55 |
mriedem | yeah i have no idea why that object is a problem b/c we pass request spec over rpc (select_destinations and such) all the time | 13:55 |
mriedem | in tests i mean | 13:56 |
gibi | mriedem: in the meantime I can tell you that your fix works according to the functional test | 13:56 |
mriedem | \o/ | 13:56 |
*** mlavalle has joined #openstack-nova | 13:56 | |
gibi | ... what a nice Friday it is | 13:56 |
*** belmoreira has quit IRC | 13:56 | |
*** belmoreira has joined #openstack-nova | 14:00 | |
mriedem | my kid is also staying home from school today claiming to be sick and i can't tell if she's playing opossum | 14:02 |
mriedem | i need to put her to work | 14:02 |
*** mkrai_ has quit IRC | 14:04 | |
artom | How good is she with Python? Weren't we looking for interns for all the osc gaps? | 14:05 |
gibi | mriedem: Is there a change that we consider this https://review.opendev.org/#/c/672577/ as a bug and backport it to stable/pike? (yeah I have a bug day today) | 14:05 |
gibi | artom, mriedem: hm osc gaps, and osc bugfix backport ^^ | 14:05 |
*** openstackgerrit has quit IRC | 14:06 | |
mriedem | aspiers: you might want to talk to some suse developers downstream about their upstream "fix only on the branch i care about" practices https://review.opendev.org/#/c/676882/ | 14:07 |
mriedem | gibi: s/change/chance/ ? | 14:08 |
gibi | mriedem: yes, sorry | 14:08 |
mriedem | going to pike is probably going to be tough... | 14:08 |
mriedem | but, that fix should be compatible... | 14:09 |
gibi | mriedem: OK. I will look into that as well | 14:09 |
mriedem | it's clearly a bug if you're using >= 2.53 | 14:09 |
gibi | mriedem: or find somebody who has time | 14:09 |
mriedem | pike is in extended maintenance upstream so i'm not sure how much dtroyer is going to care about backporting it that far | 14:09 |
gibi | mriedem: yes, we have pike deployemnts out there using 2.53 and seeing openstack client fail | 14:09 |
mriedem | gibi: are they pinning the version to 2.latest or something? | 14:10 |
mriedem | or they are just pinning to 2.53 since that's the max in pike? | 14:10 |
gibi | mriedem: using 2.53 by default, but sure, we can go back to an older microversion for this command as a workaround | 14:10 |
gibi | mriedem: 2.53 is used as that is the max in pike | 14:11 |
*** tbachman has quit IRC | 14:11 | |
gibi | it seems I was able to hook in elod to do these client backports... | 14:11 |
mriedem | i started stein for you https://review.opendev.org/#/c/683394/1 | 14:11 |
gibi | mriedem: thanks, elod ^^ | 14:12 |
mriedem | gibi: btw, where do you get your packages? some upstream distro or do you build your own? | 14:12 |
mriedem | b/c for pike you might have to patch your osc package | 14:12 |
gibi | mriedem: packages come from Mirantis | 14:13 |
gibi | mriedem: so if I fail to do a clean thing upstream I will dump the "downstream" work there | 14:13 |
dtroyer | gibi, mriedem: that is enough of a bug it would be back-portable. OSC follows the same rules of working backwards through releases on backports. Of course, standard disclaimer of just using a modern OSC goes here, I understand that packagers don't do that, I wish they would/could… | 14:15 |
gibi | dtroyer: thanks | 14:16 |
*** BjoernT has quit IRC | 14:16 | |
mriedem | the biggest barrier to backporting osc fixes is if they rely on something in novaclient that's not in whatever stable branch you're targeting | 14:17 |
mriedem | it looks like in this case you might get lucky | 14:17 |
*** BjoernT has joined #openstack-nova | 14:19 | |
*** rcernin has quit IRC | 14:23 | |
*** cfriesen has joined #openstack-nova | 14:29 | |
*** jaosorior has quit IRC | 14:30 | |
*** dtantsur is now known as dtantsur|afk | 14:33 | |
*** liuyulong has joined #openstack-nova | 14:34 | |
*** tbachman has joined #openstack-nova | 14:36 | |
hemna | hey guys, any idea why I might be getting these failures against stable/pike: http://paste.openstack.org/show/778056/ | 14:36 |
hemna | tox -epy27 against stable/pike gives about 6300 of those errors | 14:37 |
*** rcernin has joined #openstack-nova | 14:38 | |
*** ociuhandu has quit IRC | 14:39 | |
*** damien_r has quit IRC | 14:39 | |
mriedem | oh hemna | 14:42 |
mriedem | hemna: KeithMnemonic1 must have some dirt on you | 14:43 |
KeithMnemonic1 | lol | 14:43 |
mriedem | hemna: so you're trying to figure out why these are failing https://storage.bhs1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_6e4/683008/2/check/openstack-tox-py27/6e40a0c/testr_results.html.gz right? i'm not sure why you'd be hitting that weird db issue locally - should be using sqlite | 14:45 |
mriedem | hemna: i'd try cleaning pycs and rebuilding the venv: | 14:45 |
mriedem | find . -name "*.pyc" -delete | 14:45 |
mriedem | tox -r -e py27 | 14:45 |
bauzas | mriedem: just to make it clear, you want me to provide the links to https://releases.openstack.org/train/highlights.html#nova-compute-service in https://review.opendev.org/#/c/683327/1/releasenotes/notes/train-prelude-3db0f5f6a75cc57a.yaml ? | 14:46 |
*** BjoernT_ has joined #openstack-nova | 14:46 | |
mriedem | bauzas: in the commit message? | 14:47 |
mriedem | i would | 14:47 |
mriedem | wait no not the latter | 14:47 |
bauzas | mriedem: looks like you also want to have links in the release note | 14:47 |
mriedem | the etherpad and the published highlights doc | 14:47 |
mriedem | bauzas: no | 14:47 |
*** damien_r has joined #openstack-nova | 14:47 | |
mriedem | i'm telling you, pull your content from the highlights, | 14:47 |
bauzas | oh, k | 14:48 |
mriedem | plus additional stuff not in the highlights that's in the etherpad, like nova-cells/consoleauth removal and xenapi driver deprecation | 14:48 |
bauzas | ok, understood | 14:48 |
mriedem | we intentionally didn't include ^ in highlights | 14:48 |
mriedem | b/c they aren't highlights | 14:48 |
bauzas | cool | 14:48 |
bauzas | will do it later | 14:48 |
mriedem | xenapi driver deprecation is a brown light | 14:48 |
*** BjoernT has quit IRC | 14:48 | |
*** ociuhandu has joined #openstack-nova | 14:49 | |
*** ociuhandu has quit IRC | 14:49 | |
*** ociuhandu has joined #openstack-nova | 14:50 | |
hemna | ok I'll try that | 14:50 |
hemna | I'm not sure I'll be successful with this conflict resolution on that patch | 14:50 |
*** udesale has quit IRC | 14:50 | |
hemna | since so much changed between pike and queens | 14:50 |
*** udesale has joined #openstack-nova | 14:51 | |
mriedem | i didn't do the tests for a reason | 14:51 |
mriedem | i could, but i'd need some incentive$$$ | 14:52 |
*** openstackgerrit has joined #openstack-nova | 14:52 | |
openstackgerrit | Walter A. Boring IV (hemna) proposed openstack/nova stable/pike: WIP: Avoid redundant initialize_connection on source post live migration https://review.opendev.org/683008 | 14:52 |
hemna | fixed the pep8 issues at least..... | 14:53 |
hemna | heh | 14:53 |
KeithMnemonic1 | mriedem will you be in Shanghai. I can incent you there | 14:54 |
*** TxGirlGeek has joined #openstack-nova | 14:55 | |
*** TxGirlGeek has quit IRC | 14:56 | |
*** TxGirlGeek has joined #openstack-nova | 14:57 | |
*** luksky has quit IRC | 14:58 | |
*** macz has quit IRC | 14:59 | |
mriedem | i don't think so | 14:59 |
gibi | whaaat? | 15:02 |
gibi | who will be in Shanghai then? | 15:02 |
mnaser | its friday and i dont want to deal with this >:( | 15:04 |
mnaser | nova_api.instance_mappings shows a mapping for an instance | 15:04 |
bauzas | gibi: \o | 15:04 |
*** _mmethot_ has quit IRC | 15:04 | |
gibi | bauzas: :) | 15:04 |
mnaser | nova.instances shows the instance there with deleted=0, vm_state=error though | 15:04 |
*** _mmethot_ has joined #openstack-nova | 15:05 | |
bauzas | at least we can ask a room for two :p | 15:05 |
openstackgerrit | Balazs Gibizer proposed openstack/nova master: Func test for migrate reschedule with pinned compute rpc https://review.opendev.org/683385 | 15:05 |
openstackgerrit | Balazs Gibizer proposed openstack/nova master: Handle legacy request spec dict in ComputeTaskManager._cold_migrate https://review.opendev.org/680762 | 15:05 |
openstackgerrit | Balazs Gibizer proposed openstack/nova master: Isolate request spec handling from _cold_migrate https://review.opendev.org/680763 | 15:05 |
mriedem | stephenfin: you might have thoughts on this https://bugs.launchpad.net/nova/+bug/1844721 | 15:05 |
openstack | Launchpad bug 1844721 in OpenStack Compute (nova) "Need NUMA aware RAM reservation to avoid OOM killing host processes" [Undecided,New] | 15:05 |
* bauzas should consider playing ping-pong there | 15:05 | |
mnaser | api gives 404 when looking up the instance.. | 15:05 |
mnaser | but its there when doing a list | 15:05 |
*** bnemec is now known as beekneemech | 15:06 | |
stephenfin | mriedem: Yup, that's a long standing issue. There's definitely at least one existing report of that. Possibly many | 15:06 |
stephenfin | Unfortunately the impact of a fix will likely be significant, which is why we've punted it continuously /o\ | 15:06 |
*** belmoreira has quit IRC | 15:07 | |
mriedem | do you know if it's documented as a known issue? | 15:07 |
sean-k-mooney | they can get numa aware ram reservation by setting hw:mem_page_size=small | 15:07 |
sean-k-mooney | if you are using cpu pinning but not hugepages you should set that | 15:07 |
mriedem | yeah the bug says "Many mitigation are "invented", but those mitigation all have some form of technical or operational "difficulties". One mitigation, for example, is to enable huge pages, and put VMs on huge pages." | 15:07 |
*** maciejjozefczyk has quit IRC | 15:07 | |
*** gyee has joined #openstack-nova | 15:07 | |
aspiers | mriedem: I'm sure it was honest ignorance of the policy rather than deliberately avoiding work. Your -1 should be sufficient to get it fixed. | 15:08 |
*** jmlowe has joined #openstack-nova | 15:08 | |
sean-k-mooney | mriedem: hugepages will prevent it but hw:mem_page_size=small will use 4k pages and still fix the issue | 15:08 |
mriedem | aspiers: i didn't say anyone was deliberately avoiding doing the work, | 15:08 |
stephenfin | mriedem: yeah, what sean-k-mooney said | 15:08 |
mriedem | but i've also seen it across several projects from multiple suse developers | 15:08 |
mriedem | so it's a pattern | 15:08 |
mriedem | hence my post to the ML | 15:08 |
aspiers | mriedem: OK, I'll forward it internally | 15:09 |
mriedem | aspiers: thanks | 15:09 |
mriedem | sean-k-mooney: stephenfin: ok it would be good if we had this documented as a limitation somewhere if we don't already | 15:09 |
mriedem | i'm not sure if that's best in the numa topo admin docs, flavor extra spec, reserved host ram config option, other? | 15:09 |
stephenfin | I think we should, but I'll check. It will be easy to tack onto this existing doc | 15:09 |
sean-k-mooney | ya we should document the requirement to set hw:mem_page_size=small if you set hw:cpu_policy=dedicated and don't use hugepages | 15:10 |
mriedem | ok so maybe it's best to just document alongside that extra spec https://docs.openstack.org/nova/latest/user/flavors.html | 15:10 |
sean-k-mooney | the reason this happens is that when you enable pinning without setting hw:mem_page_size we only look at available cpus in the numa topology filter, not also at available memory on the numa nodes | 15:11 |
*** belmoreira has joined #openstack-nova | 15:11 | |
sean-k-mooney | if you enable hw:mem_page_size=small|large it validates the available memory too | 15:11 |
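For reference, both knobs are flavor extra specs; a hedged example of applying them to an existing flavor (the flavor name is made up):

```console
$ openstack flavor set pinned.large \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=small
```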
stephenfin | sean-k-mooney: isn't there some issue with how we track mempages though | 15:12 |
stephenfin | none of this is obviously in placement yet | 15:12 |
sean-k-mooney | you cant mix vms that use hw:mem_page_size with vms that dont on the same host or it invalidates the tracking | 15:12 |
sean-k-mooney | but you should not mix numa and non numa vms on the same host anyway | 15:12 |
mriedem | "but you should not mix numa and non numa vms on the same host anyway" - is that documented? | 15:13 |
mriedem | we have a shit load of tribal knowledge about this stuff | 15:13 |
mriedem | and by we i mean you | 15:13 |
mriedem | all i see on https://docs.openstack.org/nova/latest/admin/cpu-topologies.html is "Host aggregates should be used to separate pinned instances from unpinned instances as the latter will not respect the resourcing requirements of the former." | 15:14 |
sean-k-mooney | i dont think its in the upstream docs or redhat's, it was in the intel tuning guide i helped write in like 2014 | 15:14 |
sean-k-mooney | so yes i should document my tribal knowledge around this | 15:14 |
*** cfriesen has quit IRC | 15:14 | |
*** trident has quit IRC | 15:14 | |
sean-k-mooney | the extension of that warning to numa instead of pinned is because of the OOM behavior | 15:15 |
sean-k-mooney | i can try and write something for this on monday if you / stephen can correct the spelling after | 15:16 |
kashyap | sean-k-mooney: Happy to look as well :-) | 15:16 |
mriedem | ah here is the duplicate https://bugs.launchpad.net/nova/+bug/1792985 | 15:17 |
openstack | Launchpad bug 1439247 in OpenStack Compute (nova) "duplicate for #1792985 Small pages memory are not take into account when not explicitly requested" [Medium,Confirmed] | 15:17 |
mriedem | which is a duplicate of another bug | 15:17 |
aspiers | mriedem: forwarded internally. If you see it happen again feel free to ping me privately | 15:17 |
mriedem | ack | 15:17 |
sean-k-mooney | mriedem: do you want to assing one of those bugs to me and i can update the documentation regarding the mixing of numa and non numa instances and the use of hw:mem_page_size=small|large to avoid OOM events for numa affined instances | 15:19 |
mriedem | artom: this isn't fixed with your numa live migration series right? https://bugs.launchpad.net/nova/+bug/1496135 - that's the thing you were half fixing but we had you remove it? | 15:19 |
openstack | Launchpad bug 1496135 in OpenStack Compute (nova) "live-migration will not honor destination vcpu_pin_set config" [Medium,Confirmed] | 15:19 |
mriedem | sean-k-mooney: i can only do so much assing in one day | 15:20 |
sean-k-mooney | see, already indicating my spelling mistakes :) | 15:20 |
mriedem | sean-k-mooney: i don't think we probably need to assign it to you, a docs patch would just be a related bug anyway | 15:21 |
sean-k-mooney | um, i think that should be fixed by the numa live migration changes | 15:21 |
artom | mriedem, not clear - they talk about policy, which I assume means hw:cpu_policy, which my thing *did* fix | 15:21 |
mriedem | you can track a todo however you want though | 15:21 |
*** ivve has quit IRC | 15:21 | |
sean-k-mooney | oh | 15:21 |
sean-k-mooney | this is the edgecase we removed | 15:21 |
artom | But... then they also talk about vcpuset | 15:21 |
sean-k-mooney | yes | 15:22 |
artom | So, it's confusing | 15:22 |
sean-k-mooney | this is migration of a non numa instance between host with vcpu_pin_set defined | 15:22 |
sean-k-mooney | so the thing we added then removed | 15:22 |
sean-k-mooney | so this is not fixed yet | 15:22 |
artom | I guess? In any case, that's what it can become, because we need something to track that | 15:22 |
mriedem | ok i left a comment https://bugs.launchpad.net/nova/+bug/1496135/comments/10 | 15:23 |
openstack | Launchpad bug 1496135 in OpenStack Compute (nova) "live-migration will not honor destination vcpu_pin_set config" [Medium,Confirmed] | 15:23 |
mriedem | artom: while you're here, are we going to try and get https://review.opendev.org/#/c/672595/ into train yet? | 15:24 |
mriedem | i'm not sure how much fun that would be to backport | 15:24 |
artom | mriedem, I'm always here man | 15:24 |
*** trident has joined #openstack-nova | 15:24 | |
mriedem | though it's all test stuff | 15:24 |
mriedem | you weren't yesterday when i needed to bug you | 15:24 |
artom | PTO, apple picking with son's daycare | 15:24 |
mriedem | see, you're not *always* here | 15:25 |
artom | Bought some cider, good stuff | 15:25 |
artom | In my hear I am | 15:25 |
artom | *heart, even | 15:25 |
mriedem | heart | 15:25 |
artom | Anyways | 15:25 |
mriedem | meanwhile i'm just assing around | 15:25 |
artom | Yeah, the func test should land in Train | 15:25 |
artom | If you and dansmith are ready to dive back in, I can pick it up again | 15:25 |
artom | Although it conflicts with some of the PCPU stuff that's still in flight | 15:26 |
mriedem | i've added it to https://etherpad.openstack.org/p/nova-train-release-todo | 15:26 |
artom | https://review.opendev.org/#/c/681060/ specifically | 15:26 |
mriedem | i'm not going to review that today | 15:26 |
artom | ^^ needs to land first, anyways, then I rebase and continue | 15:27 |
mriedem | so likely rc2 at this point | 15:27 |
*** luksky has joined #openstack-nova | 15:27 | |
artom | I guess yeah | 15:28 |
*** trident has quit IRC | 15:29 | |
mriedem | what is the end of the pcpu series? the reshaper patch? because i was thinking about rebasing https://review.opendev.org/#/c/683011/ on top of that so we can approve it to make sure it gets into rc1 | 15:29 |
sean-k-mooney | if its a test only patch we could technically backport it right | 15:30 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Revert "Temporarily skip TestNovaMigrationsMySQL" https://review.opendev.org/683011 | 15:30 |
sean-k-mooney | so does it need to land now | 15:30 |
mriedem | sean-k-mooney: correct | 15:30 |
mriedem | sean-k-mooney: what i worry about is saying "sure we'll get to it in ussuri" and then 4 months go by and dansmith and i have lost all context on that stuff and then we don't want to land it and it just never lands | 15:30 |
mriedem | strike while the iron is hot and all that | 15:31 |
sean-k-mooney | i think we should time box any of this stuff to ensure it lands before m1 | 15:31 |
sean-k-mooney | ya | 15:31 |
sean-k-mooney | i was just suggesting that if it did not make rc1/2 it could still end up in train | 15:32 |
mriedem | if anyone is looking for glance image properties docs to work on https://bugs.launchpad.net/nova/+bug/1763761 | 15:35 |
openstack | Launchpad bug 1763761 in Glance "CPU topologies in nova - doesn't mention numa specific image properties" [Medium,Triaged] | 15:35 |
*** belmoreira has quit IRC | 15:37 | |
mriedem | looking at old numa bugs, it looks like https://review.opendev.org/#/c/458848/ just needed some tests, | 15:39 |
mriedem | might be something for a core to pick up | 15:39 |
*** trident has joined #openstack-nova | 15:40 | |
mriedem | artom: i wonder if this is the same numa topology claim race failure thing described in the ML https://bugs.launchpad.net/nova/+bug/1829349 | 15:41 |
openstack | Launchpad bug 1829349 in OpenStack Compute (nova) "Resource Tracker failed to update usage in case numa topology conflict happen" [Undecided,In progress] - Assigned to leehom (feli5) | 15:41 |
artom | mriedem, not sure that's still applicable, stephenfin did a thing around that a long time ago | 15:41 |
mriedem | note they are enabling he workaround config | 15:41 |
artom | mriedem, I don't think that's related - their bug is "if two instances have the same pins, RT blows up" | 15:42 |
artom | But with NUMA LM we're not supposed to end up in that situation in the first place | 15:43 |
mriedem | "supposed to" | 15:43 |
mriedem | but if we have a race with claims not reporting correctly or something, we could still race and blow up right? | 15:43 |
artom | Right | 15:44 |
mriedem | anyway there are some details from the reporter on what they think the problem is and where, so maybe would help | 15:44 |
*** rcernin has quit IRC | 15:44 | |
artom | mriedem, well they proposed https://review.opendev.org/#/c/661208/ | 15:45 |
artom | Which is... no | 15:45 |
artom | I need food | 15:46 |
sean-k-mooney | mriedem: if you're looking for numa related bugs we should fix sooner rather than later, we should aim to merge and backport https://review.opendev.org/#/c/662522/ | 15:52 |
sean-k-mooney | it's alex_xu's and stephenfin's fix for https://bugs.launchpad.net/nova/+bug/1805767 | 15:52 |
openstack | Launchpad bug 1805767 in OpenStack Compute (nova) "The new numa topology in the new flavor extra specs weren't parsed when resize" [Medium,In progress] - Assigned to Stephen Finucane (stephenfinucane) | 15:52 |
sean-k-mooney | i kind of think all move operations bar shelve are broken in some way if you have a numa topology | 15:53 |
sean-k-mooney | there are specific things that work but it feels like there are more edge cases that don't than do sometimes | 15:54 |
sean-k-mooney | that said we have a ton of fixes for them pending in general | 15:54 |
mriedem | this is where i make a generic statement about lack of integration test coverage with tempest for numa, | 15:55 |
mriedem | and then you say, "i'm working on a ci job for that" | 15:55 |
sean-k-mooney | i still think there are enough latent bugs to keep us going for U before i'm going to be happy we have hardened it enough | 15:55 |
sean-k-mooney | well its true | 15:55 |
*** jawad_axd has joined #openstack-nova | 15:55 | |
sean-k-mooney | but also we have been without any real ci coverage for like 18 months so :( | 15:56 |
mriedem | you've done more to try and get numa tempest integration testing on nova changes than anyone so i'm not faulting you | 15:56 |
sean-k-mooney | oh i know, i just wish i had more time to spend on it | 15:56 |
mriedem | it's just that my eyes glaze over when i hear "we have a bug with x that's related to numa" | 15:57 |
sean-k-mooney | although for the next week or two im going to continue looking at testing and ci | 15:57 |
*** TxGirlGeek has quit IRC | 15:57 | |
*** rpittau is now known as rpittau|afk | 15:57 | |
*** TxGirlGeek has joined #openstack-nova | 15:58 | |
*** elod has quit IRC | 15:58 | |
sean-k-mooney | mriedem: i just pushed this by the way https://review.opendev.org/#/c/683431/ | 15:58 |
sean-k-mooney | that will allow us to test cpu pinning hugepages and dpdk more or less reliably | 15:59 |
*** jawad_axd has quit IRC | 15:59 | |
sean-k-mooney | im going to wait until after RC* to follow up with getting multi numa flavor on vexxhost and limestone | 16:00 |
openstackgerrit | Sylvain Bauza proposed openstack/nova master: Add a prelude for the Train release https://review.opendev.org/683327 | 16:00 |
mriedem | stephenfin: i'm going to rebase https://review.opendev.org/#/c/682267/4 on top of https://review.opendev.org/#/c/674895/44 since it sounds like we're holding back on merging the former so as to not interrupt the latter | 16:04 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: libvirt: Get the CPU model, not 'arch' from get_capabilities() https://review.opendev.org/682267 | 16:04 |
mriedem | there wasn't even a merge conflict so i'm not sure what gerrit was complaining about | 16:05 |
mriedem | bauzas: do you want to re-approve https://review.opendev.org/#/c/681750/ since it was rebased? | 16:06 |
*** damien_r has quit IRC | 16:06 | |
mriedem | the rest of that stack is stuck behind it | 16:06 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: docs: Update CPU topologies guide to reflect the new PCPU world https://review.opendev.org/683437 | 16:08 |
donnyd | sean-k-mooney: LGTM | 16:08 |
stephenfin | mriedem: Makes sense. Also, there the start of the doc ^ | 16:08 |
mriedem | yup | 16:08 |
donnyd | I gave it my whole +1 | 16:08 |
bauzas | mriedem: cool | 16:09 |
bauzas | and I call it a day | 16:09 |
mriedem | stephenfin: man i wish there wasn't a big refactor in that patch | 16:10 |
sean-k-mooney | donnyd: thanks the first thing i hope to use it for is ovs-dpdk testing with https://review.opendev.org/#/c/656580/ | 16:10 |
stephenfin | I literally just said I shouldn't have done that to sean-k-mooney /o\ | 16:10 |
stephenfin | (who's working up in the RH office in Dublin with me for the day) | 16:10 |
sean-k-mooney | yep you did | 16:11 |
stephenfin | mriedem: Gimme five to separate it out | 16:11 |
mriedem | i know it's a compulsion | 16:11 |
mriedem | obligatory :P | 16:11 |
*** elod has joined #openstack-nova | 16:13 | |
sean-k-mooney | stephenfin: by the way once i get the ovs-dpdk job working on networking-ovs-dpdk i assume you have no issue with me adding it as non voting to check in os-vif and then promoting it to voting around m1 | 16:15 |
stephenfin | not in the slightest | 16:15 |
sean-k-mooney | cool i think the vhost user path is the only one without tempest coverage in os-vif currently | 16:16 |
*** jdillaman has quit IRC | 16:16 | |
sean-k-mooney | i also need to port one of the jobs to zuul v3 which i hope to do next week | 16:16 |
*** pcaruana has quit IRC | 16:19 | |
mriedem | mnaser: does the instance mapping have a cell mapping? | 16:23 |
mriedem | does the instance have a stale build request? | 16:23 |
mriedem | in that weird GET /servers/{server_id} 404 case before i've handled the InstanceNotFound in the API controller code, logged the traceback and then re-raised | 16:23 |
mriedem | so you can see where we're coming from, e.g. build request or what | 16:24 |
mriedem | mnaser: log the traceback here https://github.com/openstack/nova/blob/stable/stein/nova/api/openstack/common.py#L471 | 16:25 |
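A paraphrased sketch of that suggestion; the real handler in nova/api/openstack/common.py differs slightly, this only shows where the extra logging would go:

```python
def get_instance(compute_api, context, instance_id, expected_attrs=None):
    """Look up a server for the API layer, logging the lookup path that
    raised InstanceNotFound before converting it into a 404 (sketch)."""
    try:
        return compute_api.get(context, instance_id,
                               expected_attrs=expected_attrs)
    except exception.InstanceNotFound as e:
        # Temporary debugging aid: the traceback shows whether the build
        # request lookup or the cell DB lookup raised the 404.
        LOG.exception('Lookup of instance %s failed', instance_id)
        raise webob.exc.HTTPNotFound(explanation=e.format_message())
```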
mriedem | i'm pretty sure you wrote a script to deal with the case that the instance is in a cell, the build request is gone, but the instance mapping doesn't have a cell mapping | 16:26 |
mriedem | yeah which i used here https://review.opendev.org/#/c/655908/ | 16:27 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: docs: Clarify everything CPU pinning https://review.opendev.org/683437 | 16:27 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: docs: Update CPU topologies guide to reflect the new PCPU world https://review.opendev.org/683485 | 16:27 |
openstackgerrit | OpenStack Release Bot proposed openstack/os-vif stable/train: Update .gitreview for stable/train https://review.opendev.org/683488 | 16:28 |
openstackgerrit | OpenStack Release Bot proposed openstack/os-vif stable/train: Update TOX/UPPER_CONSTRAINTS_FILE for stable/train https://review.opendev.org/683489 | 16:28 |
openstackgerrit | OpenStack Release Bot proposed openstack/os-vif master: Update master for stable/train https://review.opendev.org/683490 | 16:28 |
stephenfin | mriedem: Split ^ | 16:28 |
stephenfin | wait, that doesn't look right | 16:29 |
* stephenfin tries again | 16:29 | |
stephenfin | :( | 16:29 |
mriedem | i left some other comments/questions about upgrade and quota details, not sure if you saw those while splitting | 16:31 |
stephenfin | ack | 16:32 |
*** BjoernT_ has quit IRC | 16:32 | |
sean-k-mooney | lyarwood: can you take a look at the proposal bot patches for os-vif above on stable? they look sane to me | 16:34 |
sean-k-mooney | lyarwood: also it's not super urgent so it can wait till next week but they are trivial | 16:35 |
mriedem | dansmith: finally got around to replying to our comments on this heal instance mappings command https://review.opendev.org/#/c/655908/ - since there was no vote i hadn't noticed | 16:38 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: docs: Clarify everything CPU pinning https://review.opendev.org/683437 | 16:39 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: docs: Update CPU topologies guide to reflect the new PCPU world https://review.opendev.org/683485 | 16:39 |
*** sean-k-mooney has quit IRC | 16:42 | |
*** derekh has quit IRC | 16:42 | |
*** ociuhandu has quit IRC | 16:43 | |
dansmith | mriedem: okay I'll have to re-read to build my context back up | 16:45 |
mnaser | mriedem: the instance does have a mapping, which is where this is different from every other time | 16:51 |
mnaser | instance_mapping in nova_api exists, and instance in nova exists.. | 16:51 |
*** igordc has joined #openstack-nova | 16:55 | |
*** udesale has quit IRC | 16:57 | |
*** gbarros has joined #openstack-nova | 16:58 | |
*** ociuhandu has joined #openstack-nova | 17:00 | |
*** ociuhandu has quit IRC | 17:04 | |
mnaser | waaaaaaaaaaait | 17:10 |
mnaser | the cell mapping points at cell0 | 17:10 |
mnaser | but the instance exists in error state.. in the actual cell (cell1 in this case) | 17:11 |
mnaser | and the cell0 one _is_ marked as deleted | 17:12 |
mnaser | given my cell knowledge so far, you cant start from cell1 and then get buried in cell0 right? | 17:12 |
mnaser | you can't end up in cell0 if you're in cell1? | 17:13 |
dansmith | correct | 17:13 |
dansmith | I mean.. that's the intent/rule | 17:14 |
openstackgerrit | Artom Lifshitz proposed openstack/nova master: Move pre-3.44 Cinder post live migration test to test_compute_mgr https://review.opendev.org/683597 | 17:17 |
mriedem | artom: "The previous patch (Id0e8b1c32600d53382e5ac938e403258c80221a0) created" you're learning! | 17:22 |
artom | mriedem, next step: multicellular organisms | 17:22 |
mriedem | mnaser: that's really strange, there are only two places we bury in cell0, | 17:24 |
mriedem | 1. if scheduling fails https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1359 | 17:24 |
mriedem | 2. or scheduling picks a host that doesn't have a mapping https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1386 | 17:24 |
mriedem | i wonder if you had hosts incorrectly mapped to cell0? | 17:25 |
mriedem | https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1397 | 17:25 |
mriedem | there shouldn't ever be host mappings pointing at cell0 but we don't explicitly fail if you try to do that | 17:25 |
mriedem | e.g. if you started a compute service pointing at the wrong cell database and it was registered in cell0 and then got mapped there in the host mapping | 17:26 |
mriedem | dansmith: do you think it'd be worthwhile to have a sanity check around https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1421 such that if the cell is actually cell0 in that case we blow up with some kind of "idk what you're doing but this is definitely wrong" | 17:27 |
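Roughly what that sanity check could look like, as a sketch near the host-mapping lookup in conductor's schedule_and_build_instances (exact placement and exception type are assumptions):

```python
host_mapping = objects.HostMapping.get_by_host(context, host.service_host)
cell = host_mapping.cell_mapping
if cell.uuid == objects.CellMapping.CELL0_UUID:
    # A compute host should never be mapped to cell0; treat this as an
    # operator/deployment error rather than quietly building there.
    raise exception.NovaException(
        'Host %s is mapped to cell0; this deployment is misconfigured'
        % host.service_host)
```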
*** mrch_ has joined #openstack-nova | 17:30 | |
mriedem | that still wouldn't explain how the instance in mnaser's case went from cell0 to cell1, unless he has a busted heal script | 17:30 |
mriedem | maybe a host mapping pointing at cell0 isn't possible after all https://github.com/openstack/nova/blob/stable/stein/nova/objects/host_mapping.py#L253 | 17:32 |
* mnaser catching up on backlog | 17:33 | |
mnaser | yeah so in this case, i have a db record for this instance both in cell0 and cell1 | 17:33 |
mnaser | and i dont think the config changed anytime recently.. | 17:33 |
mriedem | is the host value set for either instance record? | 17:33 |
mnaser | both null | 17:34 |
mnaser | but the cell0 one is deleted and cell1 is not (aka deleted=0) | 17:34 |
mriedem | which has the older created_at time? | 17:34 |
mnaser | oh good idea | 17:34 |
mnaser | the same exact time. | 17:35 |
mriedem | wtf | 17:35 |
mriedem | you don't have some janky homebrew scripts that try to heal and revive an instance in cell0 and put it into cell1? | 17:35 |
mnaser | not that i know of | 17:36 |
mnaser | and also in case im losing it | 17:36 |
mnaser | http://paste.openstack.org/show/778146/ | 17:36 |
mriedem | some sort of weird mariadb cluster issue or something? | 17:37 |
mriedem | you wouldn't be clustering those together though unless you really screwed up | 17:37 |
mnaser | its just plain old galera, and those are two different databases too.. | 17:38 |
mnaser | so i dont know how it would like "oh let me make a cell0 thing" | 17:38 |
mriedem | it's clearly a phantom, some kind of 9/11 phantom, and it showed up too early for halloween | 17:38 |
mriedem | no obvious weirdness for that instance uuid in the logs? | 17:38 |
mnaser | stuff is mostly rotated out but im looking at all the fields to see the diffs | 17:39 |
mnaser | oh hm | 17:39 |
mnaser | so different fields are | 17:40 |
openstackgerrit | Balazs Gibizer proposed openstack/nova master: Remove functional test specific nova code https://review.opendev.org/683609 | 17:40 |
mnaser | updated_at (obviously cause deleted was changed), deleted_at (for the one that was deleted), id (different ids from cell0 and cell1), availability zone and launched_on (it looks like it was scheduled?!) | 17:40 |
dansmith | mriedem: hmm, that's probably not a bad idea | 17:41 |
mnaser | let me check if there's anything intersting on the host it was provisioned on.. | 17:41 |
mriedem | it's very weird that launched_on would be set but host is not | 17:41 |
openstackgerrit | OpenStack Release Bot proposed openstack/python-novaclient stable/train: Update .gitreview for stable/train https://review.opendev.org/683625 | 17:42 |
dansmith | mriedem: I agree that this sounds highly fishy | 17:42 |
openstackgerrit | OpenStack Release Bot proposed openstack/python-novaclient stable/train: Update TOX/UPPER_CONSTRAINTS_FILE for stable/train https://review.opendev.org/683626 | 17:42 |
openstackgerrit | OpenStack Release Bot proposed openstack/python-novaclient master: Update master for stable/train https://review.opendev.org/683627 | 17:42 |
mriedem | dansmith: not even sounds, smells! | 17:42 |
mnaser | ok, timed out waiting for network-vif-plugged | 17:42 |
dansmith | mriedem: reeks of fishy sounds | 17:42 |
mriedem | mnaser: this is what sets launched_on https://github.com/openstack/nova/blob/f4aaa9e229c98a97af085f31e43509189e2e4585/nova/compute/resource_tracker.py#L544 | 17:42 |
mriedem | oh i bet i know why, | 17:42 |
mnaser | BuildAbortException: Build of instance 10d8a93a-bb6b-44ef-83f1-be6b21336651 aborted: Failed to allocate the network(s), not rescheduling. | 17:42 |
mriedem | we fail the build, set host/node to None but not launched_on or az | 17:43 |
mriedem | https://github.com/openstack/nova/blob/f4aaa9e229c98a97af085f31e43509189e2e4585/nova/compute/manager.py#L2069 | 17:43 |
mriedem | yeah _nil_out_instance_obj_host_and_node should also null out launched_on | 17:44 |
mriedem | anyway, that doesn't explain why the instance is also in cell0 | 17:44 |
mnaser | yeah that explains half the story i guess | 17:44 |
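The half that is explained here, launched_on surviving a failed build, would look something like this as a fix; a sketch with an illustrative function name, the real helper is _nil_out_instance_obj_host_and_node in nova/compute/manager.py and today it only clears host/node:

```python
def nil_out_instance_host_fields(instance):
    """On a failed build, clear every host-related field set during the
    attempt, not just host/node (sketch of the follow-up mriedem
    describes; the caller is expected to instance.save() afterwards)."""
    instance.host = None
    instance.node = None
    instance.launched_on = None
    instance.availability_zone = None
```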
mriedem | mnaser: which copy of the instance has launched_on set? | 17:44 |
mriedem | cell1? | 17:44 |
mnaser | yeah | 17:44 |
mriedem | and that compute host that launched_on is set to, the host_mapping in the api db is pointing at cell1? | 17:45 |
mnaser | yeah the one is cell1 is correct | 17:45 |
mnaser | good question about host_mapping | 17:46 |
mriedem | does the name of the instance have an index-like suffix on it, i.e. was it part of a multi-create request? | 17:46 |
mriedem | e.g. my-vm-1, my-vm-2 etc | 17:46 |
mnaser | its not a multicreate request (afaik) but the title looks like it is | 17:46 |
mnaser | centos-7-large-tf-ci-0000006321 | 17:47 |
mnaser | a zuul instance though | 17:47 |
mriedem | oh that's not the kind of index i'd be looking for | 17:47 |
mriedem | simple integers | 17:47 |
mnaser | yeah | 17:47 |
mnaser | i was just wondering in case nova had some logic to match <name>-<number> | 17:47 |
mnaser | but i didn't think we'd be that wild :) | 17:47 |
mriedem | i don't know how this could happen, unless some kind of weird rabbit thing where like we actually got 2 copies of the same build request message to conductor, one went to cell1 and one failed scheduling and went to cell0 or something | 17:48 |
mnaser | yeah host_mappings is the right one | 17:48 |
mnaser | yeah im pretty confused.. | 17:49 |
*** markvoelker has quit IRC | 17:49 | |
dansmith | that sounds pretty fantastical | 17:49 |
dansmith | especially without other MAJOR problems showing up all over the place | 17:49 |
mriedem | btw the request_specs.num_instances field for the instance would tell you if it was a multi-create request | 17:49 |
mriedem | each instance in a multi-create request gets a unique id but i've seen some weird things with multi-create requests that blow up the scheduler and trigger automatic retries to the scheduler | 17:50 |
mriedem | we removed that in stein though i think | 17:50 |
mriedem | this thing https://github.com/openstack/nova/blob/stable/queens/nova/scheduler/utils.py#L807 | 17:51 |
mnaser | i mean, given the instance failed to deploy because of a timeout | 17:51 |
mnaser | there may be other weird things happening | 17:52 |
mriedem | well the timeout was likely a failure on the ovs agent | 17:52 |
mnaser | i .. don't know how it can result in something like scheduling it twice though | 17:52 |
mriedem | causing vif plugging to fail | 17:52 |
*** markvoelker has joined #openstack-nova | 17:52 | |
dansmith | if the failure was vif plugging, we should be well-settled on the destination host and in the right cell | 17:53 |
dansmith | way way way away from anything that would re-create it in cell0 | 17:53 |
mnaser | yeah that's why i think the only way it could have happened is if it got scheduled twice | 17:54 |
mriedem | mnaser: do you see instance actions for that instance in both dbs? | 17:59 |
mnaser | oh good idea | 17:59 |
mriedem | and if so, are there any differences in the events? | 17:59 |
mriedem | the instance_actions_events table i mean | 17:59 |
mriedem | i'm not sure we actually have instance action events until we have picked a cell though | 18:00 |
mnaser | theres no column for instance in instance_actions_events or am i losing it? | 18:00 |
mriedem | yeah we create the action once we've picked a cell https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1483 | 18:00 |
mriedem | mnaser: instance -> instance_actions -> instance_actions_events | 18:00 |
mnaser | ohhhh okay | 18:00 |
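In other words, the events hang off instance_actions; something like the following, run against each cell database, walks the chain (a sketch with placeholder connection values):

```python
import pymysql

# Placeholder connection values; point this at a cell database (cell0/cell1).
conn = pymysql.connect(host='db-host', user='nova', password='secret',
                       database='nova_cell1')
with conn.cursor() as cur:
    # instance_actions keys off instance_uuid; instance_actions_events
    # keys off the action id.
    cur.execute(
        "SELECT ia.action, iae.event, iae.result "
        "FROM instance_actions ia "
        "JOIN instance_actions_events iae ON iae.action_id = ia.id "
        "WHERE ia.instance_uuid = %s",
        ('10d8a93a-bb6b-44ef-83f1-be6b21336651',))
    for action, event, result in cur.fetchall():
        print(action, event, result)
```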
mnaser | create in cell1, nothing in cell0 | 18:01 |
*** BjoernT has joined #openstack-nova | 18:01 | |
mnaser | compute__do_build_and_run_instance in actions | 18:01 |
mriedem | yeah i guess that's what i'd expect since we bury in cell0 before creating the "create" action | 18:01 |
mnaser | so the only thing is somehow it got scheduled .. twice? | 18:02 |
mnaser | one said "nope" and the other said "yep" | 18:02 |
mriedem | idk how that could happen | 18:02 |
mnaser | probably because of a db exception for some constraint | 18:02 |
mnaser | i mean at this point i'm convinced it might be an infra thing | 18:02 |
mnaser | again the VM failed to provision because port plugging timed out so that does hint at things likely being weird | 18:03 |
mriedem | well that's why i brought up rabbit | 18:03 |
mnaser | and not an issue in galera, yeah, it would surely be a rabbit issue | 18:03 |
mriedem | e.g. is it possible that rabbit thought the message to conductor (or scheduler) wasn't received and resent it? | 18:03 |
mnaser | i guess that is the most likely theory | 18:04 |
dansmith | I don't think so | 18:04 |
mriedem | pre-cells v2 we wouldn't have a problem b/c instances.uuid is unique per cell db | 18:04 |
mnaser | unless rabbit thought the cluster was split or something | 18:04 |
dansmith | I think that when you enqueue a message, it goes into a queue for a given receiver, if there is one waiting | 18:04 |
mriedem | do you have rabbit logs going back to when that instance was created? | 18:04 |
dansmith | but yeah, rabbit split brain could have done something I guess | 18:04 |
dansmith | mriedem: what if, in bury_in_cell0, we check last minute to see if there's already a host mapping and if so, we log and abort? | 18:05 |
dansmith | I mean, I would think that's where we fail in that routine anyway, | 18:05 |
*** eharney has quit IRC | 18:05 | |
dansmith | so there should be some log fallout from us failing to create the cell0 mapping anyway right? | 18:05 |
mriedem | is this one of the many reasons people say to not cluster rabbit? | 18:06 |
mriedem | beyond performance? | 18:06 |
mnaser | rabbitmq is the worlds biggest pita :( | 18:06 |
*** BjoernT has quit IRC | 18:06 | |
mnaser | ok so at least to clean things up i will update the instance_mappings to point at the right cell | 18:07 |
mnaser | and then at least the instance will be delete-able | 18:08 |
mnaser | and then the cell0 will just get archived and disappear | 18:08 |
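The cleanup being described boils down to repointing the instance_mappings row in the API database at cell1; roughly the following (a sketch with placeholder values; this touches the database directly, so take a backup and double-check the cell id first):

```python
import pymysql

conn = pymysql.connect(host='db-host', user='nova', password='secret',
                       database='nova_api')
with conn.cursor() as cur:
    # Look up the cell1 cell_mappings id, then repoint the instance mapping.
    cur.execute("SELECT id FROM cell_mappings WHERE name = %s", ('cell1',))
    cell1_id = cur.fetchone()[0]
    cur.execute(
        "UPDATE instance_mappings SET cell_id = %s WHERE instance_uuid = %s",
        (cell1_id, '10d8a93a-bb6b-44ef-83f1-be6b21336651'))
conn.commit()
```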
mriedem | dansmith: i'm not sure what you mean by "already a host mapping" in bury_in_cell0 | 18:08 |
dansmith | mriedem: when we bury in cell0, we also have to create a mapping for it to be there | 18:09 |
dansmith | mriedem: we should fail to do that if we're late to the party because we're a failed reschedule right? | 18:09 |
*** gbarros has quit IRC | 18:09 | |
mnaser | hmm yeah, that's true, i guess the bury doesn't check if there's an existing one there | 18:09 |
mriedem | dansmith: so here https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1282 ? | 18:09 |
*** henriqueof has joined #openstack-nova | 18:09 | |
mriedem | before doing that, double check that instance_mapping.cell_mapping is None, right? | 18:09 |
mriedem | i mean we'd want to check way before that though, before this https://github.com/openstack/nova/blob/stable/stein/nova/conductor/manager.py#L1260 | 18:11 |
mriedem | i'm not opposed to adding a sanity check | 18:11 |
mriedem | worst case is it doesn't hurt anything, | 18:11 |
mriedem | best case is it avoids crazy rabbit clustering split brain weirdness | 18:11 |
dansmith | mriedem: well, I'm thinking more like direct hit the database to make sure | 18:12 |
dansmith | mriedem: but again, we should fail to create a duplicate mapping there | 18:12 |
mnaser | is this just a matter of `if inst_mapping.cell_mapping is not None:` before updating it? | 18:13 |
dansmith | no | 18:14 |
dansmith | mriedem: oh, hmm | 18:19 |
dansmith | mriedem: we look up the mapping and set it to cell0 there and then save, not a create | 18:20 |
dansmith | that's why no dupe | 18:20 |
dansmith | so maybe the cell0 save happens first and then the cell1 one comes in later? | 18:20 |
dansmith | so, mnaser maybe yes to your above question | 18:20 |
*** igordc has quit IRC | 18:20 | |
dansmith | I was forgetting how this worked, that we create the mapping early and then update it later | 18:20 |
mriedem | mnaser: yes that's what i was thinking | 18:21 |
mriedem | dansmith: the create happens in the api | 18:21 |
mriedem | atomically with the build request and request spec | 18:21 |
mriedem | yeah that (sorry was catching up) | 18:21 |
mriedem | mnaser: so i'd move the code that gets the instance mapping earlier, before we create the instance record in cell0, check whether instance_mapping.cell_mapping is already set, and if it is, log an error or something and return | 18:22 |
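Roughly the shape of that check, pulled out as a standalone helper for illustration (this is not the actual patch; the real version would sit inside _bury_in_cell0's per-instance loop, before the cell0 instance.create):

```python
import logging

from nova import objects

LOG = logging.getLogger(__name__)


def _already_mapped_to_a_cell(context, instance_uuid):
    """Return True if the instance mapping already points at a cell.

    Sketch of the sanity check discussed above: if something (e.g. a
    duplicated build request delivery) already mapped the instance to a
    real cell, _bury_in_cell0 should log and skip it rather than create
    a second copy of the instance in cell0.
    """
    inst_mapping = objects.InstanceMapping.get_by_instance_uuid(
        context, instance_uuid)
    if inst_mapping.cell_mapping is not None:
        LOG.error('Instance %(uuid)s is already mapped to cell %(cell)s; '
                  'not burying it in cell0.',
                  {'uuid': instance_uuid,
                   'cell': inst_mapping.cell_mapping.uuid})
        return True
    return False
```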
mriedem | note we still have a "remove after ocata" thing in that method :) | 18:22 |
dansmith | mriedem: move the getting of the mapping? I would think you'd want it very tight like it is now so you had the best opportunity to see that it has been set by a competitor | 18:23 |
*** igordc has joined #openstack-nova | 18:26 | |
*** ricolin has quit IRC | 18:27 | |
*** ricolin has joined #openstack-nova | 18:27 | |
*** adriant has quit IRC | 18:28 | |
*** adriant has joined #openstack-nova | 18:28 | |
*** jistr has quit IRC | 18:28 | |
*** redrobot has quit IRC | 18:28 | |
*** jistr has joined #openstack-nova | 18:28 | |
*** BjoernT has joined #openstack-nova | 18:28 | |
*** jkulik has quit IRC | 18:29 | |
*** dansmith has quit IRC | 18:29 | |
*** dansmith has joined #openstack-nova | 18:29 | |
*** jkulik has joined #openstack-nova | 18:30 | |
*** BjoernT_ has joined #openstack-nova | 18:34 | |
*** BjoernT has quit IRC | 18:35 | |
mriedem | dansmith: as long as it's before we create the instance record in cell0 | 18:37 |
mriedem | b/c otherwise you have to roll that back | 18:38 |
mriedem | anyway, i think the shed should be brown | 18:38 |
*** mriedem has quit IRC | 18:40 | |
*** mriedem has joined #openstack-nova | 18:40 | |
dansmith | maybe we're talking about different things | 18:43 |
mriedem | mnaser: were you going to push a patch for this sanity check? | 18:47 |
mriedem | it'd be easier to discuss in review | 18:47 |
*** eharney has joined #openstack-nova | 18:52 | |
*** igordc has quit IRC | 19:13 | |
*** ricolin has quit IRC | 19:16 | |
openstackgerrit | Artom Lifshitz proposed openstack/nova master: Poison netifaces.interfaces() in tests https://review.opendev.org/671773 | 19:18 |
*** igordc has joined #openstack-nova | 19:19 | |
*** slaweq has quit IRC | 19:32 | |
mnaser | mriedem: i was going to, *if* i was talking about the right thing | 19:37 |
mnaser | even if i pushed the initial patch, tbh i don't know if i have enough bandwidth to drive it all the way through review and everything right now | 19:38 |
mnaser | but i can do the initial if statement before the update there.. | 19:38 |
mnaser | with a test | 19:38 |
openstackgerrit | Artom Lifshitz proposed openstack/nova master: Rename Claims resources to compute_node https://review.opendev.org/679470 | 19:40 |
* mriedem finally got a travis ci build with an encrypted file to work properly, yay | 19:45 | |
mriedem | travis --com, travis --org, what a mess | 19:45 |
mriedem | mnaser: that's good enough to start, it shouldn't be a difficult change, i think the complexity is mostly in the test since we want to assert that (1) we fail if the instance mapping already has a cell mapping set in _bury_in_cell0 and (2) we check that before calling instance.create | 19:46 |
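For what it's worth, reduced to a toy stand-in rather than nova's real test classes, the test described above has roughly this shape (illustrative only):

```python
import unittest
from unittest import mock


def bury_in_cell0(inst_mapping, instance, cell0):
    """Toy stand-in for the conductor path discussed above."""
    if inst_mapping.cell_mapping is not None:
        # Already mapped to a real cell; do not create a cell0 copy.
        return False
    instance.create()
    inst_mapping.cell_mapping = cell0
    inst_mapping.save()
    return True


class TestBuryInCell0(unittest.TestCase):
    def test_skips_already_mapped_instance(self):
        inst_mapping = mock.Mock(cell_mapping=mock.sentinel.cell1)
        instance = mock.Mock()
        self.assertFalse(
            bury_in_cell0(inst_mapping, instance, mock.sentinel.cell0))
        # The two assertions from the discussion: no duplicate create(),
        # and the mapping is left pointing at the cell that already won.
        instance.create.assert_not_called()
        inst_mapping.save.assert_not_called()


if __name__ == '__main__':
    unittest.main()
```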
*** ociuhandu has joined #openstack-nova | 19:49 | |
*** ociuhandu has quit IRC | 20:03 | |
*** markvoelker has quit IRC | 20:05 | |
*** ralonsoh has quit IRC | 20:05 | |
*** artom has quit IRC | 20:10 | |
*** tbachman has quit IRC | 20:15 | |
*** tbachman has joined #openstack-nova | 20:22 | |
*** KeithMnemonic1 has quit IRC | 20:23 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Clear instance.launched_on when build fails https://review.opendev.org/683725 | 20:24 |
*** CeeMac has quit IRC | 20:29 | |
*** redrobot has joined #openstack-nova | 20:34 | |
*** BjoernT has joined #openstack-nova | 20:37 | |
*** BjoernT_ has quit IRC | 20:39 | |
*** tbachman has quit IRC | 20:52 | |
*** tbachman has joined #openstack-nova | 20:59 | |
mriedem | maybe we should finally fix that bug where nova-compute creates volumes but doesn't name them... | 21:01 |
*** _mmethot_ has quit IRC | 21:05 | |
*** mmethot_ has joined #openstack-nova | 21:05 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: WIP: Sanity check instance mapping in _bury_in_cell0 https://review.opendev.org/683730 | 21:10 |
mriedem | mnaser: dansmith: ^ this is what i was thinking | 21:11 |
mriedem | it's a bit fuglier than i expected | 21:11 |
dansmith | mriedem: okay I thought you meant adding it quite a bit earlier | 21:12 |
dansmith | er, moving | 21:12 |
mriedem | so did i, | 21:12 |
mriedem | but realized we have a list here | 21:12 |
mriedem | so it has to go in the loop | 21:12 |
dansmith | mriedem: the other thing is, we should probably do the same sanity check when we set it for a non cell0 cell, | 21:12 |
dansmith | because if cell0 is faster (likely) then we'll create it there and map it, | 21:12 |
dansmith | then remap it to cell1 | 21:12 |
dansmith | and we should at least warn that that happened for the forensic value | 21:13 |
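Sketched out, the non-cell0 side of that check might look something like this (illustrative only; names like selected_cell are made up, and the real code would live where conductor points the mapping at the scheduler-selected cell):

```python
import logging

LOG = logging.getLogger(__name__)


def point_mapping_at_scheduled_cell(instance_uuid, inst_mapping,
                                    selected_cell):
    """Illustrative only: warn if the mapping was already claimed.

    If cell0 won the race first, leave a forensic trail before trusting
    the scheduler-selected cell.
    """
    old = inst_mapping.cell_mapping
    if old is not None and old.uuid != selected_cell.uuid:
        LOG.warning('Instance %(uuid)s was already mapped to cell %(old)s '
                    'before being scheduled to cell %(new)s; trusting the '
                    'scheduled cell, but the stale copy needs cleanup.',
                    {'uuid': instance_uuid,
                     'old': old.uuid,
                     'new': selected_cell.uuid})
    inst_mapping.cell_mapping = selected_cell
    inst_mapping.save()
```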
mriedem | yeah - can you leave a comment so i don't forget? | 21:13 |
dansmith | I mean...if we think this is what is happening | 21:13 |
dansmith | done | 21:14 |
mriedem | danke | 21:14 |
mriedem | replied with a question, | 21:18 |
mriedem | mostly about leaving 2 copies in different dbs which could mess up listing, | 21:18 |
mriedem | i guess the warning could just be "uh oh spaghettios this is in cell0 and now it's in cell1 too - we're going to trust the cell1 version but you'll need to cleanup cell0" | 21:19 |
dansmith | it's going to be in both places anyway, apparently, so mapped appropriately and logged is minimal best case I think | 21:20 |
*** liuyulong has quit IRC | 21:20 | |
dansmith | you could also nuke the cell0 version, but I kinda want to see evidence that this is really happening first | 21:20 |
mriedem | right i don't want to overengineer this | 21:21 |
mriedem | anyway, not going to happen today, i'm taking off | 21:21 |
*** mriedem is now known as mriedem_away | 21:21 | |
dansmith | ack | 21:21 |
*** JamesBenson has quit IRC | 21:29 | |
*** JamesBenson has joined #openstack-nova | 21:31 | |
*** markvoelker has joined #openstack-nova | 21:32 | |
*** JamesBenson has quit IRC | 21:35 | |
*** eharney has quit IRC | 21:39 | |
*** markvoelker has quit IRC | 21:42 | |
*** Ben78 has joined #openstack-nova | 21:44 | |
*** dave-mccowan has quit IRC | 21:55 | |
mlavalle | I am working on a devstack built with stable rocky. I modified and restarted the nova api with the following changes: https://review.opendev.org/#/c/674038 and https://review.opendev.org/#/c/645452. I also modified https://github.com/openstack/nova/blob/master/nova/policies/servers.py#L58 to base.RULE_ADMIN_API or base.SYSTEM_READER | 22:07 |
mlavalle | granted alt_demo user the reader role with system(all) scope. alt_demo is still unable to do list server --all-projects. what am I missing? | 22:08 |
mlavalle | lbragstad: ^^^^ any advice? | 22:09 |
*** xek has quit IRC | 22:12 | |
lbragstad | mlavalle what kind of token are you using to make the request to nova? | 22:14 |
lbragstad | mlavalle you could put something like https://pasted.tech/pastes/7d06348fdea072ad4784fa75940c142dc3d63f86.raw in your clouds.yaml and then export OS_CLOUD=devstack-alt-system to ensure you're using a system-scoped token in your request | 22:19 |
lbragstad | i assume you're using openstackclient | 22:19 |
mlavalle | lbragstad: duuh. that was the problem. I was getting a project scoped token. I just tested with a system scoped token and it worked | 22:19 |
lbragstad | mlavalle nice | 22:19 |
mlavalle | thanks! | 22:19 |
mlavalle | have a nice weekend | 22:19 |
lbragstad | no problem | 22:20 |
lbragstad | you, too | 22:20 |
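For reference, a generic clouds.yaml entry of the sort lbragstad describes (this is not the contents of the paste above; auth_url, username and password are placeholders, and the key line is system_scope: all):

```yaml
clouds:
  devstack-alt-system:
    auth_type: password
    auth:
      auth_url: http://keystone.example/identity
      username: alt_demo
      password: secret
      user_domain_id: default
      system_scope: all     # request a system-scoped token
    identity_api_version: 3
```

With that in place, export OS_CLOUD=devstack-alt-system and openstackclient will request system-scoped tokens, which is what a SYSTEM_READER-style policy rule expects.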
*** gbarros has joined #openstack-nova | 22:29 | |
*** tbachman has quit IRC | 22:30 | |
*** gbarros has quit IRC | 22:40 | |
*** tbachman has joined #openstack-nova | 22:41 | |
*** spatel has quit IRC | 22:44 | |
*** gbarros has joined #openstack-nova | 23:08 | |
*** mgoddard has quit IRC | 23:17 | |
*** mgoddard has joined #openstack-nova | 23:19 | |
*** avolkov has quit IRC | 23:20 | |
*** jawad_axd has joined #openstack-nova | 23:24 | |
*** JamesBenson has joined #openstack-nova | 23:25 | |
*** jawad_axd has quit IRC | 23:28 | |
*** JamesBenson has quit IRC | 23:29 | |
*** markvoelker has joined #openstack-nova | 23:43 | |
*** markvoelker has quit IRC | 23:47 | |
*** luksky has quit IRC | 23:56 | |
*** gbarros has quit IRC | 23:58 |