tonyb | mriedem: any chance you can take a quick look at: https://review.openstack.org/#/c/543348 ? | 00:03 |
*** salv-orlando has quit IRC | 00:06 | |
*** salv-orlando has joined #openstack-nova | 00:07 | |
*** claudiub has quit IRC | 00:09 | |
openstackgerrit | Nicolas Bock proposed openstack/nova stable/pike: Fix SUSE Install Guide: Placement port https://review.openstack.org/545167 | 00:09 |
melwitt | mnaser, mriedem: I'm going through your comments on https://review.openstack.org/#/c/340614/18/nova/compute/api.py@2055 and I'm not seeing what the 'available' volume scenario is ... | 00:11 |
*** salv-orlando has quit IRC | 00:12 | |
melwitt | that is, even back in liberty, if the instance failed on the host, _cleanup_volumes in the compute manager would have called volume_api.delete, so maybe that's why the detach would fail in the compute api local delete ... but that doesn't add up with having a volume left in 'available' state | 00:13 |
*** AlexeyAbashkin has joined #openstack-nova | 00:13 | |
mnaser | melwitt: in that case you are right, there wouldn't be any possible scenario where you would end up with an available volume if it failed in compute | 00:13 |
melwitt | so I'm not really sure what case the new except block is handling | 00:14 |
melwitt | (I didn't add that part but back when this was going on in liberty, an operator we were working with added that saying they ran into it when they were testing the patch) | 00:14 |
*** mlavalle has quit IRC | 00:15 | |
melwitt | I'm trying to figure out whether to nix it from this patch or if there's some scenario I'm missing where we need to handle it like this | 00:15 |
mnaser | melwitt: I just got home so I'm on mobile, but can you boot from volume with an existing volume that's also set to delete on termination? | 00:16 |
mnaser | That would be a scenario where you’d want us to delete it, assuming scheduling failed and we want to follow through and delete the volume as requested | 00:16 |
mnaser | Though that might be super confusing to the user and unexpected | 00:16 |
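For concreteness, the scenario mnaser describes is a boot-from-volume request that reuses an existing cinder volume and asks nova to delete it on termination. A server-create body along these lines would set it up (the volume UUID and flavor are placeholders):

```json
{
    "server": {
        "name": "bfv-server",
        "flavorRef": "1",
        "block_device_mapping_v2": [{
            "boot_index": 0,
            "source_type": "volume",
            "destination_type": "volume",
            "uuid": "VOLUME-UUID",
            "delete_on_termination": true
        }]
    }
}
```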
*** AlexeyAbashkin has quit IRC | 00:18 | |
cfriesen | mnaser: in that case wouldn't the instance sit around in an ERROR state with the volume still attached? | 00:20 |
cfriesen | ie the volume should stay attached until the instance is deleted | 00:20 |
melwitt | mnaser: cfriesen it was that way until very recently https://review.openstack.org/#/c/528385/ | 00:21 |
mnaser | Will the volume ever end up in attached state? I think it would be in attaching because it never reached the compute node to finish the attach | 00:23 |
*** hiro-kobayashi has joined #openstack-nova | 00:27 | |
openstackgerrit | Merged openstack/nova stable/queens: Refine waiting for vif plug events during _hard_reboot https://review.openstack.org/542738 | 00:27 |
openstackgerrit | Merged openstack/nova stable/queens: Don't JSON encode instance_info.traits for ironic https://review.openstack.org/545037 | 00:27 |
openstackgerrit | Merged openstack/nova stable/queens: Use correct arguments in task inits https://review.openstack.org/544109 | 00:27 |
*** chyka has quit IRC | 00:27 | |
openstackgerrit | Merged openstack/nova stable/queens: Update UPPER_CONSTRAINTS_FILE for stable/queens https://review.openstack.org/542657 | 00:28 |
melwitt | mnaser: yeah, you're right, in the scenario you're describing, BFV with already existing volume + delete_on_termination, if scheduling fails, the volume would never have been attached | 00:28 |
melwitt | so it would be available, so detach would fail, so we would want to delete the volume. | 00:28 |
melwitt | er, detach might just do a no-op if available, but I'm not sure on that | 00:29 |
mnaser | melwitt: wouldn’t it actually be in “attaching” state because the API layer reserves the volume | 00:31 |
melwitt | mnaser: oh, yeah good point. it will be whatever the reserve did | 00:32 |
mnaser | melwitt: yeah so it would have an attachment or reservation depending on flow, same issue we’re trying to resolve here | 00:32 |
mnaser | melwitt: I do think the extra try except added in this patch is useless if detach is noop | 00:34 |
mnaser | If detach is noop, nothing will happen. If it fails because the volume doesn’t exist, then we won’t try to delete it anyways | 00:34 |
melwitt | yeah. that's where I'm at too | 00:34 |
mnaser | Now put multiattach in that equation lol.. | 00:35 |
melwitt | but at least I think now I understand why it was added long ago, if detach didn't used to do a no-op if not attached | 00:35 |
mnaser | Yeah it’s a bit of old code | 00:36 |
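To summarize the flow being debated, here is a rough sketch of the local-delete volume cleanup; the function name and exception class are illustrative, not the exact nova.compute.api code:

```python
class VolumeNotFound(Exception):
    """Stand-in for the 'volume already gone' error from cinder."""


def cleanup_volumes(volume_api, context, bdms):
    for bdm in bdms:
        if not bdm.is_volume:
            continue
        try:
            # If the volume was never attached (e.g. scheduling failed
            # before reaching a compute), detach is effectively a no-op
            # today, which is why the extra except block looks redundant.
            volume_api.detach(context, bdm.volume_id)
        except VolumeNotFound:
            # Volume already deleted; nothing to detach or delete.
            continue
        if bdm.delete_on_termination:
            volume_api.delete(context, bdm.volume_id)
```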
*** gyee has quit IRC | 00:41 | |
*** Dinesh_Bhor has joined #openstack-nova | 00:42 | |
*** Dinesh_Bhor has quit IRC | 00:42 | |
*** andreas_s has joined #openstack-nova | 00:45 | |
*** chyka has joined #openstack-nova | 00:47 | |
*** andreas_s has quit IRC | 00:49 | |
*** tbachman has joined #openstack-nova | 00:50 | |
*** hiro-kobayashi_ has joined #openstack-nova | 00:51 | |
*** chyka has quit IRC | 00:52 | |
*** hamzy has joined #openstack-nova | 01:02 | |
*** felipemonteiro_ has quit IRC | 01:03 | |
*** salv-orlando has joined #openstack-nova | 01:08 | |
*** salv-orlando has quit IRC | 01:12 | |
*** AlexeyAbashkin has joined #openstack-nova | 01:13 | |
*** AlexeyAbashkin has quit IRC | 01:18 | |
*** jobewan has joined #openstack-nova | 01:42 | |
*** jobewan has quit IRC | 01:49 | |
openstackgerrit | Merged openstack/nova stable/queens: Cleanup the manage-volumes admin doc https://review.openstack.org/545141 | 01:51 |
openstackgerrit | Merged openstack/nova stable/queens: Add admin guide doc on volume multiattach support https://review.openstack.org/545142 | 01:51 |
*** jobewan has joined #openstack-nova | 01:52 | |
*** jobewan has quit IRC | 01:58 | |
*** Tom-Tom has quit IRC | 01:58 | |
*** Tom-Tom has joined #openstack-nova | 01:58 | |
*** yamahata has quit IRC | 02:04 | |
*** acormier has joined #openstack-nova | 02:04 | |
*** salv-orlando has joined #openstack-nova | 02:08 | |
*** hiro-kobayashi_ has quit IRC | 02:10 | |
*** salv-orlando has quit IRC | 02:13 | |
*** hongbin has joined #openstack-nova | 02:18 | |
*** tidwellr has quit IRC | 02:26 | |
*** mriedem1 has joined #openstack-nova | 02:29 | |
*** mriedem has quit IRC | 02:29 | |
mriedem1 | cfriesen: the method on the instance is just a helper method | 02:29 |
*** mriedem1 is now known as mriedem | 02:29 | |
mriedem | melwitt: mnaser: did you sort out the available volume thing? | 02:30 |
mriedem | tonyb: done | 02:30 |
tonyb | mriedem: Thanks. | 02:32 |
*** takashin has left #openstack-nova | 02:32 | |
mriedem | imacdonn: if you can, leave a comment on the patch that fixed the issue for you to support its greatness | 02:34 |
mriedem | imacdonn: nvm i see you already did, thanks | 02:34 |
mriedem | lyarwood: can you hit the +2ed stable/queens backports in your morning? https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens - then we'll cut RC2 | 02:35 |
mriedem | tonyb: or i guess you could do that ^ | 02:35 |
tonyb | mriedem: Oh I can? I didn't think I could between rc1 and release | 02:36 |
* tonyb checks | 02:36 | |
mriedem | smcginnis was able to, so i assume you can | 02:36 |
tonyb | mriedem: Oh look at that ... | 02:36 |
* tonyb was intentionally ignoring queens as opposed to pike and ocata where it was just an accident ;p | 02:37 | |
mnaser | mriedem: i think so? | 02:41 |
tonyb | mriedem: done | 02:45 |
*** slaweq has joined #openstack-nova | 02:46 | |
*** slaweq has quit IRC | 02:50 | |
mriedem | tonyb: nice thanks | 02:50 |
tonyb | mriedem: If you create the tag request with a -W I can update it with the final SHA once they merge ... or we can wait 'til next week | 02:51 |
*** jogo has quit IRC | 02:51 | |
openstackgerrit | Merged openstack/osc-placement master: tox.ini settings for global constraints are out of date https://review.openstack.org/543348 | 02:54 |
mriedem | i shall do so | 02:55 |
tonyb | mriedem: cool | 02:56 |
*** vladikr has quit IRC | 02:57 | |
*** vladikr has joined #openstack-nova | 02:58 | |
*** jogo has joined #openstack-nova | 02:58 | |
tonyb | mriedem, stephenfin, melwitt: I seem to recall that we said for backports of docs-only changes we'd only require 1 stable core to approve. Did I make that up? | 02:59 |
*** dave-mccowan has joined #openstack-nova | 03:01 | |
mriedem | idk | 03:02 |
*** claudiub has joined #openstack-nova | 03:02 | |
mriedem | if it's a docs bug fix then i think that's probably ok | 03:02 |
mriedem | if it's another stable core that backported it then i'm also ok with fast approve on those | 03:02 |
mriedem | alright gotta go | 03:06 |
*** mriedem has quit IRC | 03:07 | |
*** salv-orlando has joined #openstack-nova | 03:09 | |
*** dave-mcc_ has joined #openstack-nova | 03:14 | |
*** salv-orlando has quit IRC | 03:14 | |
*** dave-mccowan has quit IRC | 03:15 | |
*** sree has joined #openstack-nova | 03:19 | |
*** claudiub has quit IRC | 03:22 | |
*** harlowja_ has quit IRC | 03:23 | |
*** vivsoni has joined #openstack-nova | 03:24 | |
*** bhujay has joined #openstack-nova | 03:26 | |
*** bhujay has quit IRC | 03:37 | |
*** bhujay has joined #openstack-nova | 03:37 | |
*** abhishekk has joined #openstack-nova | 03:39 | |
*** dave-mcc_ has quit IRC | 03:41 | |
*** acormier has quit IRC | 03:43 | |
*** jaianshu has joined #openstack-nova | 03:46 | |
*** sridharg has joined #openstack-nova | 03:54 | |
*** bhujay has quit IRC | 03:55 | |
*** bhujay has joined #openstack-nova | 03:56 | |
*** bhujay has quit IRC | 04:01 | |
*** yamamoto has joined #openstack-nova | 04:05 | |
*** mdnadeem has joined #openstack-nova | 04:08 | |
*** udesale has joined #openstack-nova | 04:09 | |
*** bhujay has joined #openstack-nova | 04:12 | |
*** andreas_s has joined #openstack-nova | 04:13 | |
*** andreas_s has quit IRC | 04:17 | |
*** salv-orlando has joined #openstack-nova | 04:23 | |
*** psachin has joined #openstack-nova | 04:25 | |
*** bhujay has quit IRC | 04:28 | |
*** salv-orlando has quit IRC | 04:29 | |
*** bhujay has joined #openstack-nova | 04:29 | |
*** cfriesen has quit IRC | 04:29 | |
*** sree has quit IRC | 04:34 | |
*** sree has joined #openstack-nova | 04:34 | |
*** janki|afk has joined #openstack-nova | 04:35 | |
*** lpetrut has joined #openstack-nova | 04:36 | |
*** hongbin has quit IRC | 04:45 | |
*** bhujay has quit IRC | 04:45 | |
*** harlowja has joined #openstack-nova | 04:46 | |
*** slaweq has joined #openstack-nova | 04:46 | |
*** slaweq has quit IRC | 04:51 | |
*** yamamoto_ has joined #openstack-nova | 04:51 | |
openstackgerrit | Merged openstack/nova stable/queens: Bindep does not catch missing libpcre3-dev on Ubuntu https://review.openstack.org/544108 | 04:52 |
openstackgerrit | Merged openstack/nova stable/queens: doc: fix the link for the evacuate cli https://review.openstack.org/543507 | 04:52 |
openstackgerrit | Merged openstack/nova stable/queens: Add regression test for BFV+IsolatedHostsFilter failure https://review.openstack.org/543594 | 04:52 |
*** janki|afk has quit IRC | 04:54 | |
*** janki has joined #openstack-nova | 04:54 | |
*** yamamoto has quit IRC | 04:55 | |
*** ameeda has quit IRC | 04:57 | |
*** harlowja has quit IRC | 04:59 | |
openstackgerrit | Merged openstack/nova stable/queens: Handle volume-backed instances in IsolatedHostsFilter https://review.openstack.org/543595 | 05:00 |
openstackgerrit | Merged openstack/nova stable/queens: Fix docs for IsolatedHostsFilter https://review.openstack.org/543596 | 05:00 |
*** rmcall has joined #openstack-nova | 05:02 | |
*** ansiwen has quit IRC | 05:13 | |
*** mdbooth has quit IRC | 05:13 | |
*** lpetrut has quit IRC | 05:13 | |
openstackgerrit | Merged openstack/nova stable/queens: Make bdms querying in multi-cell use scatter-gather and ignore down cell https://review.openstack.org/543489 | 05:15 |
openstackgerrit | Merged openstack/nova stable/queens: VGPU: Modify the example of vgpu white_list set https://review.openstack.org/542882 | 05:16 |
*** priteau has joined #openstack-nova | 05:19 | |
*** lpetrut has joined #openstack-nova | 05:19 | |
*** harlowja has joined #openstack-nova | 05:19 | |
*** priteau has quit IRC | 05:23 | |
*** inara has quit IRC | 05:23 | |
*** salv-orlando has joined #openstack-nova | 05:25 | |
*** harlowja has quit IRC | 05:27 | |
*** salv-orlando has quit IRC | 05:29 | |
*** inara has joined #openstack-nova | 05:31 | |
*** lpetrut has quit IRC | 05:35 | |
*** claudiub has joined #openstack-nova | 05:40 | |
*** hoonetorg has quit IRC | 05:40 | |
*** acormier has joined #openstack-nova | 05:43 | |
*** acormier has quit IRC | 05:48 | |
*** hoonetorg has joined #openstack-nova | 05:53 | |
*** Tom-Tom has quit IRC | 06:10 | |
*** bhujay has joined #openstack-nova | 06:11 | |
openstackgerrit | melanie witt proposed openstack/nova master: Clean up ports and volumes when deleting ERROR instance https://review.openstack.org/340614 | 06:13 |
openstackgerrit | melanie witt proposed openstack/nova master: Add functional recreate test of deleting a BFV server pre-scheduling https://review.openstack.org/545123 | 06:13 |
openstackgerrit | melanie witt proposed openstack/nova master: Detach volumes when deleting a BFV server pre-scheduling https://review.openstack.org/545132 | 06:13 |
*** threestrands has quit IRC | 06:20 | |
*** salv-orlando has joined #openstack-nova | 06:25 | |
*** openstackstatus has quit IRC | 06:27 | |
*** openstack has joined #openstack-nova | 06:29 | |
*** ChanServ sets mode: +o openstack | 06:29 | |
*** Tom-Tom has joined #openstack-nova | 06:29 | |
*** salv-orlando has quit IRC | 06:30 | |
*** salv-orlando has joined #openstack-nova | 06:32 | |
*** jafeha__ is now known as jafeha | 06:36 | |
*** yamahata has joined #openstack-nova | 06:37 | |
openstackgerrit | melanie witt proposed openstack/nova master: Add periodic task to clean expired console tokens https://review.openstack.org/325381 | 06:37 |
openstackgerrit | melanie witt proposed openstack/nova master: Use ConsoleAuthToken object to generate authorizations https://review.openstack.org/325414 | 06:37 |
openstackgerrit | melanie witt proposed openstack/nova master: Convert websocketproxy to use db for token validation https://review.openstack.org/333990 | 06:37 |
*** andreas_s has joined #openstack-nova | 06:57 | |
*** andreas_s has quit IRC | 07:01 | |
*** lajoskatona has joined #openstack-nova | 07:19 | |
*** threestrands has joined #openstack-nova | 07:22 | |
*** threestrands has quit IRC | 07:22 | |
*** threestrands has joined #openstack-nova | 07:23 | |
*** threestrands has quit IRC | 07:23 | |
*** threestrands has joined #openstack-nova | 07:23 | |
*** AlexeyAbashkin has joined #openstack-nova | 07:26 | |
*** threestrands has quit IRC | 07:34 | |
*** stelucz_ has joined #openstack-nova | 07:40 | |
*** yamamoto_ has quit IRC | 07:40 | |
*** rcernin has quit IRC | 07:41 | |
*** yamamoto has joined #openstack-nova | 07:42 | |
*** yamamoto has quit IRC | 07:42 | |
*** yamamoto has joined #openstack-nova | 07:43 | |
stelucz_ | Hello, is there a nova client command to delete a resource_provider from the database? After running nova service-delete <id>, the record for the compute node still exists in the resource_providers table, so reprovisioning the compute node ends up with the message: Another thread already created a resource provider with the UUID 7b01cc27-c101-4c05-aaed-5958ef1270a1. Grabbing that record from the placement API. | 07:43 |
*** lpetrut has joined #openstack-nova | 07:44 | |
*** slaweq has joined #openstack-nova | 07:44 | |
*** dtantsur|afk is now known as dtantsur | 07:46 | |
*** yamamoto has quit IRC | 07:48 | |
*** sree has quit IRC | 07:50 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/nova master: Imported Translations from Zanata https://review.openstack.org/541561 | 07:57 |
*** vladikr has quit IRC | 07:57 | |
*** vladikr has joined #openstack-nova | 07:58 | |
*** jafeha has quit IRC | 08:01 | |
openstackgerrit | Hiroaki Kobayashi proposed openstack/osc-placement master: Use jsonutils of oslo_serialization https://review.openstack.org/545231 | 08:01 |
*** jafeha has joined #openstack-nova | 08:01 | |
*** damien_r has joined #openstack-nova | 08:07 | |
*** andreas_s has joined #openstack-nova | 08:10 | |
*** alexchadin has joined #openstack-nova | 08:13 | |
*** trinaths has joined #openstack-nova | 08:23 | |
*** alexchadin has quit IRC | 08:24 | |
*** alexchadin has joined #openstack-nova | 08:25 | |
*** tesseract has joined #openstack-nova | 08:27 | |
*** sree has joined #openstack-nova | 08:31 | |
*** hiro-kobayashi has quit IRC | 08:31 | |
*** salv-orlando has quit IRC | 08:32 | |
*** amoralej|off is now known as amoralej | 08:32 | |
*** salv-orlando has joined #openstack-nova | 08:32 | |
*** salv-orlando has quit IRC | 08:37 | |
*** gibi has joined #openstack-nova | 08:40 | |
*** gibi has quit IRC | 08:41 | |
*** gibi has joined #openstack-nova | 08:41 | |
*** jpena|off is now known as jpena | 08:41 | |
*** gibi is now known as giblet | 08:42 | |
*** yamamoto has joined #openstack-nova | 08:44 | |
*** salv-orlando has joined #openstack-nova | 08:45 | |
*** david-lyle has quit IRC | 08:46 | |
openstackgerrit | Tetsuro Nakamura proposed openstack/nova master: [libvirt] Add _get_numa_memnode() https://review.openstack.org/529906 | 08:47 |
openstackgerrit | Tetsuro Nakamura proposed openstack/nova master: [libvirt] Add _get_XXXpin_cpuset() https://review.openstack.org/527631 | 08:47 |
openstackgerrit | Tetsuro Nakamura proposed openstack/nova master: Add NumaTopology support for libvirt/qemu driver https://review.openstack.org/530451 | 08:47 |
openstackgerrit | Tetsuro Nakamura proposed openstack/nova master: disable cpu pinning with libvirt/qemu driver https://review.openstack.org/531049 | 08:47 |
*** slaweq_ has joined #openstack-nova | 08:48 | |
*** yamamoto has quit IRC | 08:50 | |
*** ralonsoh has joined #openstack-nova | 08:51 | |
*** slaweq_ has quit IRC | 08:53 | |
*** tssurya has joined #openstack-nova | 08:55 | |
*** yangyapeng has joined #openstack-nova | 08:57 | |
*** yamamoto has joined #openstack-nova | 08:59 | |
*** salv-orlando has quit IRC | 09:04 | |
*** salv-orlando has joined #openstack-nova | 09:05 | |
hrw | how to mock a nova.conf option in tests? | 09:05 |
*** mgoddard_ has joined #openstack-nova | 09:05 | |
*** salv-orlando has quit IRC | 09:09 | |
*** pcaruana has joined #openstack-nova | 09:11 | |
*** belmoreira has joined #openstack-nova | 09:11 | |
*** rmcall has quit IRC | 09:14 | |
*** yamamoto has quit IRC | 09:15 | |
*** rmcall has joined #openstack-nova | 09:15 | |
*** ociuhandu has joined #openstack-nova | 09:18 | |
*** yamahata has quit IRC | 09:21 | |
*** rmcall has quit IRC | 09:23 | |
*** ociuhandu has quit IRC | 09:23 | |
*** priteau has joined #openstack-nova | 09:26 | |
*** derekh has joined #openstack-nova | 09:29 | |
*** bauzas is now known as bauwser | 09:30 | |
bauwser | good Friday everyone | 09:30 |
*** acormier has joined #openstack-nova | 09:34 | |
*** stephenfin is now known as finucannot | 09:35 | |
openstackgerrit | Tetsuro Nakamura proposed openstack/nova master: trivial: omit condition evaluations https://review.openstack.org/545248 | 09:36 |
gmann_ | hrw: you can by set_override - https://github.com/openstack/nova/blob/bfae5f28a4c3b39f7978d5f3015c1b32be81215d/nova/tests/functional/api_sample_tests/test_hide_server_addresses.py#L30 | 09:41 |
hrw | gmann_: thx | 09:41 |
hrw | oslo_config.cfg.NoSuchOptError: no such option num_of_pcie_slots in group [libvirt] | 09:44 |
hrw | now just have to find where the tests fake the whole nova.conf ;d | 09:44 |
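Putting gmann_'s pointer together with hrw's new option, a minimal test would look something like the sketch below; self.flags() wraps CONF.set_override() and cleans up after the test, and the NoSuchOptError above just means the option isn't registered until the patch adding it is applied:

```python
import nova.conf
from nova import test

CONF = nova.conf.CONF


class PcieSlotsTestCase(test.NoDBTestCase):
    def test_num_of_pcie_slots_override(self):
        # self.flags() is the test-friendly wrapper around
        # CONF.set_override() and resets the option on cleanup.
        self.flags(num_of_pcie_slots=28, group='libvirt')
        self.assertEqual(28, CONF.libvirt.num_of_pcie_slots)
```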
*** links has joined #openstack-nova | 09:47 | |
*** links has quit IRC | 09:49 | |
*** acormier has quit IRC | 09:51 | |
*** chyka has joined #openstack-nova | 09:54 | |
*** priteau has quit IRC | 09:56 | |
*** priteau has joined #openstack-nova | 09:57 | |
*** links has joined #openstack-nova | 09:58 | |
*** abhishekk has quit IRC | 09:58 | |
*** chyka has quit IRC | 09:59 | |
*** alexchadin has quit IRC | 10:01 | |
*** priteau has quit IRC | 10:03 | |
*** giblet has quit IRC | 10:03 | |
openstackgerrit | Marcin Juszkiewicz proposed openstack/nova master: Allow to configure amount of PCIe ports in aarch64 instance https://review.openstack.org/545034 | 10:07 |
*** udesale_ has joined #openstack-nova | 10:07 | |
hrw | this one adds a test and a new config option | 10:08 |
*** udesale has quit IRC | 10:10 | |
stelucz_ | Hello, is there a nova client command to delete a resource_provider from the database? After running nova service-delete <id>, the record for the compute node still exists in the resource_providers table, so reprovisioning the compute node ends up with the message: Another thread already created a resource provider with the UUID 7b01cc27-c101-4c05-aaed-5958ef1270a1. Grabbing that record from the placement API. | 10:11 |
*** links has quit IRC | 10:12 | |
*** yamamoto has joined #openstack-nova | 10:15 | |
*** udesale__ has joined #openstack-nova | 10:16 | |
*** openstackgerrit has quit IRC | 10:18 | |
*** ccamacho has joined #openstack-nova | 10:19 | |
*** udesale_ has quit IRC | 10:19 | |
*** yamamoto has quit IRC | 10:20 | |
*** priteau has joined #openstack-nova | 10:23 | |
*** jafeha__ has joined #openstack-nova | 10:25 | |
*** jafeha has quit IRC | 10:27 | |
*** dosaboy has quit IRC | 10:33 | |
*** stakeda has quit IRC | 10:33 | |
*** dosaboy has joined #openstack-nova | 10:38 | |
*** trinaths has quit IRC | 10:38 | |
*** sambetts|afk is now known as sambetts | 10:41 | |
*** sree has quit IRC | 10:42 | |
*** sree has joined #openstack-nova | 10:43 | |
*** cdent has joined #openstack-nova | 10:46 | |
*** sree has quit IRC | 10:47 | |
* cdent blinks | 10:50 | |
*** yamamoto has joined #openstack-nova | 10:51 | |
*** alexchadin has joined #openstack-nova | 10:54 | |
*** lucas-afk is now known as lucasagomes | 11:03 | |
*** giblet has joined #openstack-nova | 11:05 | |
*** alexchadin has quit IRC | 11:10 | |
*** openstackgerrit has joined #openstack-nova | 11:15 | |
openstackgerrit | Merged openstack/nova master: api-ref: provide more detail on what a provider aggregate is https://review.openstack.org/539033 | 11:15 |
*** alexchadin has joined #openstack-nova | 11:22 | |
*** chyka has joined #openstack-nova | 11:25 | |
*** belmoreira has quit IRC | 11:28 | |
*** udesale_ has joined #openstack-nova | 11:29 | |
*** chyka has quit IRC | 11:29 | |
*** udesale__ has quit IRC | 11:32 | |
*** udesale_ has quit IRC | 11:34 | |
*** AlexeyAbashkin has quit IRC | 11:37 | |
*** sdague has joined #openstack-nova | 11:39 | |
*** AlexeyAbashkin has joined #openstack-nova | 11:40 | |
*** alexchadin has quit IRC | 11:41 | |
*** alexchadin has joined #openstack-nova | 11:42 | |
openstackgerrit | Marcin Juszkiewicz proposed openstack/nova master: Allow to configure amount of PCIe ports in aarch64 instance https://review.openstack.org/545034 | 11:45 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: trivial: Move __init__ function https://review.openstack.org/538223 | 11:48 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: tox: Add mypy target https://review.openstack.org/538221 | 11:48 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: tox: Store list of converted files https://review.openstack.org/538222 | 11:48 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: mypy: Add type annotations to 'nova.pci' https://review.openstack.org/538224 | 11:48 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: zuul: Add 'mypy' job https://review.openstack.org/539168 | 11:48 |
*** jaianshu has quit IRC | 11:49 | |
*** acormier has joined #openstack-nova | 11:52 | |
*** yamamoto has quit IRC | 11:56 | |
*** acormier has quit IRC | 11:56 | |
*** yamamoto has joined #openstack-nova | 11:59 | |
*** links has joined #openstack-nova | 12:03 | |
*** cdent has quit IRC | 12:08 | |
*** yamamoto has quit IRC | 12:08 | |
*** salv-orlando has joined #openstack-nova | 12:10 | |
*** yamamoto has joined #openstack-nova | 12:11 | |
*** psachin has quit IRC | 12:14 | |
*** bhujay has quit IRC | 12:15 | |
*** chyka has joined #openstack-nova | 12:24 | |
*** psachin has joined #openstack-nova | 12:25 | |
*** larsks has joined #openstack-nova | 12:25 | |
*** elmaciej has joined #openstack-nova | 12:26 | |
larsks | Hey nova folk: there are some libvirt guests on our pike compute nodes that have an empty <nova:owner> attribute in the libvirt xml (which is causing ceilometer to blow up). Is an empty owner a known problem? Or is that expected behavior in some circumstances? | 12:26 |
hrw | larsks: maybe owner account got removed from system? | 12:27 |
larsks | hrw: the owner is the associated project, right? The project still exists. | 12:28 |
*** elmaciej_ has joined #openstack-nova | 12:28 | |
larsks | And would nova update the xml on a running guest like that in any case? | 12:28 |
hrw | <nova:owner><nova:user /><nova:project /></nova:owner> | 12:29 |
*** chyka has quit IRC | 12:29 | |
hrw | larsks: sorry, no idea. I just hack nova | 12:29 |
hrw | and use bigger and bigger knives each time | 12:30 |
larsks | Fair enough :). | 12:30 |
*** elmaciej has quit IRC | 12:32 | |
*** janki has quit IRC | 12:35 | |
*** cdent has joined #openstack-nova | 12:38 | |
*** udesale has joined #openstack-nova | 12:41 | |
*** yamamoto has quit IRC | 12:41 | |
*** belmoreira has joined #openstack-nova | 12:43 | |
*** sree has joined #openstack-nova | 12:43 | |
*** jpena is now known as jpena|lunch | 12:48 | |
cdent | finucannot: has mypy found some juicy bugs in nova yet? | 12:49 |
*** slaweq_ has joined #openstack-nova | 12:50 | |
*** slaweq_ has quit IRC | 12:54 | |
*** sree has quit IRC | 12:55 | |
*** janki has joined #openstack-nova | 13:06 | |
*** sree has joined #openstack-nova | 13:06 | |
*** dave-mccowan has joined #openstack-nova | 13:08 | |
*** nicolasbock has joined #openstack-nova | 13:10 | |
*** nicolasbock has quit IRC | 13:10 | |
*** sree has quit IRC | 13:14 | |
*** r-daneel has joined #openstack-nova | 13:14 | |
*** yamamoto has joined #openstack-nova | 13:15 | |
*** janki has quit IRC | 13:25 | |
*** hshiina has quit IRC | 13:29 | |
*** jaypipes is now known as leakypipes | 13:31 | |
*** dave-mccowan has quit IRC | 13:31 | |
*** vladikr has quit IRC | 13:31 | |
hrw | anyone know how to tell whether a guest instance is i440fx or q35? | 13:32 |
*** leakypipes has quit IRC | 13:33 | |
hrw | found. guest.os_mach_type | 13:35 |
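For anyone searching later, the check hrw landed on amounts to something like this (the helper name is made up; the attribute is on the libvirt guest config object):

```python
def uses_q35(guest):
    # guest.os_mach_type holds the machine type string, e.g.
    # 'pc-i440fx-2.11' for i440fx or 'pc-q35-2.11' for q35.
    return bool(guest.os_mach_type and 'q35' in guest.os_mach_type)
```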
finucannot | cdent: Nothing exciting yet but it will as it expands. I started with 'nova.pci' simply because comprehending that stuff is currently a nightmare and types help | 13:37 |
* cdent nods at finucannot | 13:37 | |
cdent | Will be cool to see how things play out | 13:38 |
*** psachin has quit IRC | 13:42 | |
*** eharney has quit IRC | 13:43 | |
*** awaugama has joined #openstack-nova | 13:49 | |
*** jpena|lunch is now known as jpena | 13:51 | |
*** acormier has joined #openstack-nova | 13:52 | |
*** liverpooler has joined #openstack-nova | 13:55 | |
*** mlavalle has joined #openstack-nova | 13:56 | |
*** acormier has quit IRC | 13:56 | |
*** pchavva has joined #openstack-nova | 13:59 | |
*** cdent has quit IRC | 14:00 | |
*** ralonsoh_ has joined #openstack-nova | 14:00 | |
openstackgerrit | Marcin Juszkiewicz proposed openstack/nova master: Allow to configure amount of PCIe ports in aarch64 instance https://review.openstack.org/545034 | 14:01 |
hrw | one step closer | 14:01 |
hrw | shit. have to fix commit message now | 14:01 |
openstackgerrit | Marcin Juszkiewicz proposed openstack/nova master: Allow to configure amount of PCIe ports https://review.openstack.org/545034 | 14:02 |
*** mriedem has joined #openstack-nova | 14:03 | |
*** ralonsoh has quit IRC | 14:03 | |
openstackgerrit | Eric Young proposed openstack/nova master: Enable native mode for ScaleIO volumes https://review.openstack.org/545304 | 14:06 |
*** dave-mccowan has joined #openstack-nova | 14:07 | |
hrw | one simple thing, but several projects involved to find out what is going on: nova, libvirt, qemu, uefi :( | 14:07 |
*** edleafe is now known as figleaf | 14:08 | |
*** acormier has joined #openstack-nova | 14:08 | |
*** udesale has quit IRC | 14:08 | |
*** udesale_ has joined #openstack-nova | 14:08 | |
*** sree has joined #openstack-nova | 14:10 | |
*** acormier has quit IRC | 14:11 | |
*** acormier has joined #openstack-nova | 14:12 | |
*** sree has quit IRC | 14:15 | |
*** acormier has quit IRC | 14:16 | |
*** andreas_s has quit IRC | 14:24 | |
*** rmcall has joined #openstack-nova | 14:24 | |
*** andreas_s has joined #openstack-nova | 14:25 | |
*** mdnadeem has quit IRC | 14:25 | |
*** eharney has joined #openstack-nova | 14:26 | |
*** jaypipes has joined #openstack-nova | 14:28 | |
*** sree has joined #openstack-nova | 14:28 | |
*** jaypipes is now known as leakypipes | 14:28 | |
*** acormier has joined #openstack-nova | 14:29 | |
*** stelucz_ has quit IRC | 14:29 | |
mnaser | this simple clean up has been a lot messier than expected :( | 14:30 |
*** felipemonteiro_ has joined #openstack-nova | 14:30 | |
*** lucasagomes is now known as lucas-hungry | 14:31 | |
mnaser | melwitt: for some reason that example you have listed has multiple attachments, so that's probably why it's still marked as reserved? mriedem can maybe confirm this? | 14:31 |
*** elmaciej_ has quit IRC | 14:32 | |
*** elmaciej has joined #openstack-nova | 14:33 | |
*** felipemonteiro__ has joined #openstack-nova | 14:34 | |
*** acormier has quit IRC | 14:34 | |
*** efried is now known as fried_rice | 14:34 | |
*** sree has quit IRC | 14:35 | |
*** andreas_s has quit IRC | 14:35 | |
finucannot | bauwser: Could you knock this one through? https://review.openstack.org/#/c/538223/ | 14:36 |
fried_rice | stelucz: I believe osc has the command you're looking for. But it doesn't sound like you need to delete it in that scenario. The message you're seeing is not an error. | 14:37 |
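The osc-placement command fried_rice means should be along these lines, reusing the UUID from the message stelucz_ quoted:

```
$ openstack resource provider delete 7b01cc27-c101-4c05-aaed-5958ef1270a1
```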
*** alexchadin has quit IRC | 14:37 | |
*** felipemonteiro_ has quit IRC | 14:37 | |
*** elmaciej has quit IRC | 14:39 | |
*** elmaciej has joined #openstack-nova | 14:40 | |
*** sree has joined #openstack-nova | 14:42 | |
*** amoralej is now known as amoralej|lunch | 14:43 | |
*** dansmith is now known as superdan | 14:43 | |
*** elmaciej_ has joined #openstack-nova | 14:43 | |
*** elmaciej has quit IRC | 14:46 | |
*** jistr is now known as jistr|mtg | 14:52 | |
*** sree has quit IRC | 14:52 | |
*** andreas_s has joined #openstack-nova | 14:54 | |
melwitt | mnaser: multiple attachments? no | 14:54 |
lyarwood | sean-k-mooney: https://bugs.launchpad.net/os-vif/+bug/1749972 - finucannot pointed me in your direction regarding this odd `brctl setageing $bridge 0` behaviour I've just stumbled across with the latest 16.04 kernel, would you mind taking a look? | 14:55 |
openstack | Launchpad bug 1749972 in os-vif "`brctl setageing $bridge 0` fails on Ubuntu 16.04 4.4.0-21-generic" [Undecided,New] | 14:55 |
*** salv-orl_ has joined #openstack-nova | 14:55 | |
mnaser | melwitt: in your curl example, I saw multiple attachments returned | 14:55 |
mnaser | I count 4 | 14:55 |
mnaser | (To 4 different instances) | 14:55 |
openstackgerrit | Eric Fried proposed openstack/nova master: api-ref: Further clarify placement aggregates https://review.openstack.org/545356 | 14:56 |
fried_rice | figleaf: ^^^ | 14:56 |
melwitt | mnaser: oh, weird. I didn't notice that | 14:56 |
fried_rice | mriedem, superdan: bauwser also ^ | 14:56 |
mnaser | melwitt: I can only guess that maybe you tried to run this before applying patch so a few got accumulated? Or maybe it’s a multiattach volume? | 14:57 |
melwitt | I had been reusing the same volume as I messed up a bunch of tests. I'll try again with a new one | 14:57 |
bauwser | fried_rice: +2 | 14:58 |
fried_rice | thx | 14:58 |
bauwser | we can iterate long about docs :) | 14:58 |
*** r-daneel has quit IRC | 14:58 | |
*** salv-orlando has quit IRC | 14:58 | |
bauwser | but I'm fine | 14:58 |
*** andreas_s has quit IRC | 14:58 | |
*** mdbooth has joined #openstack-nova | 14:58 | |
*** ansiwen has joined #openstack-nova | 14:58 | |
*** pooja_jadhav has quit IRC | 15:00 | |
fried_rice | As long as each iteration is an improvement over the previous, agree that we need not make every patch perfect. | 15:00 |
melwitt | mnaser: indeed you're right, it works fine with a fresh volume with only one attachment. thanks for pointing that out | 15:04 |
*** andreas_s has joined #openstack-nova | 15:04 | |
*** sree has joined #openstack-nova | 15:04 | |
*** amodi has joined #openstack-nova | 15:04 | |
melwitt | I'll respin it to remove that code comment I wrote that was wrong | 15:04 |
mnaser | melwitt: awesome! | 15:04 |
*** felipemonteiro__ has quit IRC | 15:06 | |
*** felipemonteiro_ has joined #openstack-nova | 15:06 | |
*** andreas_s has quit IRC | 15:08 | |
*** acormier has joined #openstack-nova | 15:09 | |
*** acormier has quit IRC | 15:13 | |
*** lbragstad has quit IRC | 15:14 | |
*** jistr|mtg is now known as jistr | 15:15 | |
*** lbragstad has joined #openstack-nova | 15:15 | |
melwitt | correction: with a fresh volume it actually has no attachments if I make it fail to build on the compute. it ends up 'reserved' with no attachments, then the detach makes it 'available', then the delete deletes the volume | 15:16 |
openstackgerrit | melanie witt proposed openstack/nova master: Clean up ports and volumes when deleting ERROR instance https://review.openstack.org/340614 | 15:17 |
openstackgerrit | melanie witt proposed openstack/nova master: Add functional recreate test of deleting a BFV server pre-scheduling https://review.openstack.org/545123 | 15:17 |
openstackgerrit | melanie witt proposed openstack/nova master: Detach volumes when deleting a BFV server pre-scheduling https://review.openstack.org/545132 | 15:17 |
*** vladikr has joined #openstack-nova | 15:18 | |
tobasco | i have an ongoing issue right now: nova metadata api returns error code 300 to neutron-metadata-agent, so cloud-init fails; neutron metadata gives error 500 to cloud-init | 15:18 |
tobasco | mitaka, anybody know what would cause a 300 error code? i'm gonna dump the request and try manually now | 15:19 |
*** kholkina has quit IRC | 15:25 | |
*** kholkina has joined #openstack-nova | 15:25 | |
*** amoralej|lunch is now known as amoralej | 15:25 | |
*** felipemonteiro__ has joined #openstack-nova | 15:26 | |
*** vladikr has quit IRC | 15:27 | |
*** vladikr has joined #openstack-nova | 15:27 | |
*** lucas-hungry is now known as lucasagomes | 15:28 | |
*** acormier_ has joined #openstack-nova | 15:29 | |
*** kholkina has quit IRC | 15:29 | |
*** felipemonteiro_ has quit IRC | 15:29 | |
*** acormie__ has joined #openstack-nova | 15:30 | |
*** acormier_ has quit IRC | 15:30 | |
*** cfriesen has joined #openstack-nova | 15:31 | |
mriedem | it sucks that we have to provide a token to list the versions for the compute api | 15:31 |
*** lpetrut has quit IRC | 15:31 | |
*** acormie__ has quit IRC | 15:35 | |
*** acormier has joined #openstack-nova | 15:36 | |
*** andreas_s has joined #openstack-nova | 15:36 | |
*** cdent has joined #openstack-nova | 15:38 | |
fried_rice | mriedem: We fixed that for placement. | 15:39 |
fried_rice | We should fix it for nova too | 15:39 |
*** r-daneel has joined #openstack-nova | 15:40 | |
fried_rice | mriedem: https://review.openstack.org/#/c/522002/ | 15:40 |
*** sridharg has quit IRC | 15:40 | |
fried_rice | Because the root URI is supposed to be auth-less. mordred will tell you so. | 15:40 |
*** acormier_ has joined #openstack-nova | 15:40 | |
mriedem | yeah i remember | 15:40 |
superdan | mordred will tell you lots of things | 15:41 |
fried_rice | It just makes sense too though. | 15:41 |
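For comparison, with the placement fix linked above the version document is reachable without a token, which is what mriedem wants for compute too (host name is a placeholder):

```
# placement's root version document needs no X-Auth-Token
$ curl -s http://controller/placement/
# the compute version list is the complaint here: it still wants a token
$ curl -s -H "X-Auth-Token: $token" http://controller/compute/
```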
fried_rice | superdan: Are you saying mordred tends to the garrulous? | 15:41 |
* superdan googles | 15:41 | |
superdan | oh I dunno about that | 15:42 |
*** slaweq has quit IRC | 15:42 | |
superdan | he's just mordredulous | 15:42 |
fried_rice | mordredoquacious | 15:42 |
*** slaweq has joined #openstack-nova | 15:42 | |
*** acormier has quit IRC | 15:43 | |
mriedem | huh, this is kind of fun | 15:44 |
mriedem | stack@queens:~$ openstack server list | 15:44 |
mriedem | +--------------------------------------+---------+--------+----------+-------+--------+ | 15:44 |
mriedem | | ID | Name | Status | Networks | Image | Flavor | | 15:44 |
mriedem | +--------------------------------------+---------+--------+----------+-------+--------+ | 15:44 |
mriedem | | fd20384d-c0e5-40ab-b9f5-c3fae406379b | server1 | ERROR | | | | | 15:44 |
mriedem | +--------------------------------------+---------+--------+----------+-------+--------+ | 15:44 |
mriedem | this server failed in the api i think | 15:44 |
mriedem | on create | 15:44 |
mriedem | it's volume-backed so that's why no image, but no flavor? | 15:44 |
*** david-lyle has joined #openstack-nova | 15:44 | |
*** slaweq has quit IRC | 15:45 | |
mriedem | melwitt: mnaser: heh, as a user, i just hit the bug we're trying to fix, | 15:45 |
*** slaweq has joined #openstack-nova | 15:45 | |
mriedem | volume-backed server create failed, i delete it, then went to do it again but the volume is reserved | 15:46 |
mriedem | so now i have to switch to admin to force detach it | 15:46 |
melwitt | that bug so hot right now | 15:46 |
*** swamireddy has quit IRC | 15:47 | |
*** READ10 has joined #openstack-nova | 15:48 | |
openstackgerrit | Marcin Juszkiewicz proposed openstack/nova master: Allow to configure amount of PCIe ports https://review.openstack.org/545034 | 15:49 |
*** slaweq has quit IRC | 15:49 | |
*** Tom-Tom_ has joined #openstack-nova | 15:50 | |
cdent | I think a t-shirt with "that bug so hot right now" would work pretty well. It rings. | 15:52 |
*** Tom-Tom has quit IRC | 15:53 | |
mordred | superdan, mriedem, fried_rice: yes please to not having auth on discovery urls | 15:54 |
fried_rice | cdent: Picture of a cockroach with a smug grin and a pompadour | 15:55 |
*** r-daneel has quit IRC | 15:55 | |
ingy | mordred: hey o/ | 15:56 |
cdent | fried_rice: that will do nicely | 15:56 |
* mordred waves to ingy | 15:57 | |
*** pcaruana has quit IRC | 15:57 | |
finucannot | FYI, I'm gone for the next week so if anyone pings me don't expect me to answer. See everyone at the PTG! | 16:00 |
ingy | cdent is OnIt™ | 16:00 |
*** salv-orl_ has quit IRC | 16:00 | |
cdent | ingy: always | 16:00 |
*** salv-orlando has joined #openstack-nova | 16:00 | |
openstackgerrit | Stephen Finucane proposed openstack/nova master: Rename '_numa_get_constraints_XXX' functions https://review.openstack.org/385072 | 16:01 |
openstackgerrit | Stephen Finucane proposed openstack/nova master: Standardize '_get_XXX_constraints' functions https://review.openstack.org/385071 | 16:01 |
*** felipemonteiro__ has quit IRC | 16:02 | |
*** bnemec is now known as beekneemech | 16:03 | |
*** r-daneel has joined #openstack-nova | 16:03 | |
*** mrjk_ has joined #openstack-nova | 16:03 | |
mriedem | am i making some obvious mistake here? | 16:05 |
mriedem | curl -d '{"os-force_detach": {}}' -H "accept: application/json" -H "x-auth-token: $token" http://199.204.45.19/volume/v3/e9d773beeef2435eb59f7c6eeaf685a9/volumes/126c8d4b-c582-484a-8c09-fe901a7dc17f/action | 16:05 |
mriedem | {"badRequest": {"message": "There is no such action: None", "code": 400}} | 16:05 |
*** salv-orlando has quit IRC | 16:05 | |
mriedem | https://developer.openstack.org/api-ref/block-storage/v3/#force-detach-a-volume | 16:05 |
*** slaweq has joined #openstack-nova | 16:06 | |
melwitt | did you include a request body? | 16:06 |
mriedem | yeah, -d | 16:06 |
*** yamahata has joined #openstack-nova | 16:07 | |
melwitt | oh, I'm blind | 16:07 |
*** acormier_ has quit IRC | 16:08 | |
*** acormier has joined #openstack-nova | 16:09 | |
*** andreas_s has quit IRC | 16:09 | |
mriedem | aha | 16:09 |
mriedem | Feb 16 16:08:49 queens devstack@c-api.service[1549]: DEBUG cinder.api.openstack.wsgi [None req-c7279a60-f7ba-4a11-98f2-8fa2b2ec281d demo demo] Unrecognized Content-Type provided in request {{(pid=1723) get_body /opt/stack/cinder/cinder/api/openstack/wsgi.py:724}} | 16:09 |
mriedem | excellent UX | 16:09 |
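So the 400 was cinder failing to deserialize the body: without a Content-Type header the action parses as None. The fix is just adding that header to the request from above:

```
curl -d '{"os-force_detach": {}}' \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -H "X-Auth-Token: $token" \
     http://199.204.45.19/volume/v3/e9d773beeef2435eb59f7c6eeaf685a9/volumes/126c8d4b-c582-484a-8c09-fe901a7dc17f/action
```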
*** finucannot is now known as stephenfin | 16:10 | |
cdent | a != b | 16:10 |
*** mlavalle has quit IRC | 16:10 | |
*** mlavalle has joined #openstack-nova | 16:10 | |
*** slaweq has quit IRC | 16:11 | |
mriedem | yeah my fault | 16:13 |
mriedem | cinder api can figure out if i'm missing the content-type header though and let me know | 16:14 |
mriedem | rather than just 'no such action, f u' | 16:15 |
*** david-lyle has quit IRC | 16:15 | |
smcginnis | if user == mriedem: return "f u" | 16:15 |
mriedem | why i aughta | 16:15 |
* mriedem pushes a patch | 16:15 | |
melwitt | lol | 16:15 |
*** hongbin has joined #openstack-nova | 16:16 | |
*** slunkad_ has joined #openstack-nova | 16:17 | |
*** jafeha has joined #openstack-nova | 16:18 | |
*** jafeha__ has quit IRC | 16:19 | |
*** david-lyle has joined #openstack-nova | 16:22 | |
mrjk_ | Hi, I hit this problem: https://bugs.launchpad.net/nova/+bug/1579213. The comments there are pretty explicit as well. So I tried changing the filter order and lowering scheduler_driver_task_period to 30 (was 60), but nothing worked. | 16:23 |
openstack | Launchpad bug 1579213 in OpenStack Compute (nova) "ComputeFilter fails because compute node has not been heard from in a while" [Undecided,Invalid] | 16:23 |
*** sree has quit IRC | 16:23 | |
mriedem | mrjk_: are you using the caching scheduler? | 16:23 |
*** markvoelker has quit IRC | 16:23 | |
mrjk_ | Now, I only have the service_down_time>60 option, but I don't like it as it will impact all of my services | 16:24 |
mrjk_ | mriedem, lemme check | 16:24 |
*** damien_r has quit IRC | 16:24 | |
mriedem | if you're not using the caching_scheduler, scheduler_driver_task_period is not used | 16:24 |
mrjk_ | mriedem, no caching_scheduler in place (I'm running liberty) | 16:25 |
mriedem | then scheduler_driver_task_period isn't used | 16:26 |
mriedem | you're sure that you don't have scheduler_driver set in nova.conf? | 16:26 |
*** sree has joined #openstack-nova | 16:26 | |
mrjk_ | Got this: scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler | 16:27 |
*** andreas_s has joined #openstack-nova | 16:27 | |
*** markvoelker_ has joined #openstack-nova | 16:27 | |
mriedem | then the compute filter is probably failing because you have a down compute service | 16:28 |
mriedem | nova service-list will show you the compute service that is down | 16:28 |
openstackgerrit | Merged openstack/nova master: trivial: Move __init__ function https://review.openstack.org/538223 | 16:28 |
mrjk_ | Actually it fails when I try to launch a lot of VMs (from 30 up) | 16:29 |
mrjk_ | I have some nodes down, but I always have some; that shouldn't matter. The scheduling process may well take up to 1 minute as well | 16:29 |
mriedem | mrjk_: ok, well, that could be lots of things potentially so you're going to have to dig through some logs; and you're on a long EOL release | 16:29 |
mrjk_ | yep, I know ... The log said the host heartbeat is too old | 16:30 |
mrjk_ | The thing I don't get is how this works. Does it take a snapshot of its currently available hosts, and then process it for all instances? | 16:31 |
*** sree has quit IRC | 16:31 | |
cfriesen | mrjk_: basically, yes | 16:31 |
*** AlexeyAbashkin has quit IRC | 16:33 | |
mrjk_ | So I've no choice but to increase service_down_time :/ | 16:33 |
cfriesen | mrjk_: is the service actually down, or does it just look down due to missed updates from the compute node due to load? | 16:34 |
*** slaweq has joined #openstack-nova | 16:34 | |
mrjk_ | Services are up, I only have a few nodes down (3/150). In the log I see the scheduler querying each compute's status, and then after 1 minute it starts saying all computes seem dead because of no heartbeat | 16:36 |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Add a nova-caching-scheduler job to the experimental queue https://review.openstack.org/539260 | 16:37 |
mrjk_ | And looking into the code, the is_up check is made at query time; it does not seem to query any cached service state (my 2cts) | 16:37 |
*** udesale_ has quit IRC | 16:37 | |
*** imacdonn has quit IRC | 16:37 | |
*** imacdonn has joined #openstack-nova | 16:38 | |
mriedem | are you running conductor and scheduler on the same host? do you only have 1 conductor? maybe you need more conductor workers. | 16:38 |
mriedem | sounds like a scaling problem | 16:38 |
*** slaweq has quit IRC | 16:39 | |
mriedem | anyway, debugging liberty deployment scaling issues isn't really the focus for this channel, you can try #openstack or #openstack-operators maybe | 16:39 |
*** Dave has quit IRC | 16:39 | |
cfriesen | mriedem: if you were scheduling a whole bunch of instances, such that by the time you get to the last one the cached "last checkin" time on a service was more than "service_down_time" ago, wouldn't that cause this sort of thing? | 16:39 |
mriedem | possible, idk, i don't create a bunch of instances in a single request | 16:40 |
mriedem | i know doing so has all sorts of issues | 16:40 |
*** Dave has joined #openstack-nova | 16:40 | |
cfriesen | mrjk_: is there a reason why you are creating so many in one request? | 16:40 |
mriedem | like if you create 1000 instances in a single request, we don't limit that, and we can cause the rpc call from conductor to scheduler to timeout and retry it, thus increasing the load and failure | 16:40 |
mriedem | https://review.openstack.org/#/c/510235/ | 16:41 |
cfriesen | mrjk_: as a general rule, I'd suggest keeping the number of servers in a single boot request small enough that the scheduling time is safely below "service_down_time" | 16:44 |
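For reference, the two knobs interacting here live in nova.conf's [DEFAULT] section (values below are illustrative): computes heartbeat every report_interval seconds, and the ComputeFilter treats a service as down once its last heartbeat is older than service_down_time.

```ini
[DEFAULT]
# how often each nova-compute reports in
report_interval = 10
# how stale a heartbeat may be before the service counts as down;
# keep batch scheduling time safely below this
service_down_time = 120
```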
*** felipemonteiro has joined #openstack-nova | 16:45 | |
*** andreas_s has quit IRC | 16:46 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Provide a hint when performing a volume action can't find the method https://review.openstack.org/545382 | 16:48 |
*** acormier has quit IRC | 16:52 | |
*** itlinux has joined #openstack-nova | 16:55 | |
mriedem | melwitt: i asked laura last night if she knew what "get on the horn" meant and she had no idea what i was talking about | 16:56 |
superdan | wat | 16:58 |
melwitt | hah, *solidarity* | 16:59 |
*** vladikr has quit IRC | 17:00 | |
*** r-daneel has quit IRC | 17:02 | |
giblet | just a quick heads up, I will be mostly unavailable during next week. see you in Dublin! | 17:04 |
cfriesen | melwitt: given the new behaviour in https://review.openstack.org/#/c/528385/ it seems the new expectation if you fail to build is that you need to create a new cinder volume and create a new instance. previously you could try doing a "rebuild" operation after fixing whatever the problem was. | 17:04 |
*** slaweq has joined #openstack-nova | 17:04 | |
mriedem | cfriesen: what is the new behavior here? that the volume isn't left stuck in a non-available status? | 17:06 |
melwitt | cfriesen: it would only be a new volume if it was set to delete_on_termination. otherwise, it's just detached. and yes, create new instance | 17:06 |
cfriesen | mriedem: about the instance.info_cache.network_info vs self.network_api.get_instance_nw_info(context, instance) you said the method on the instance is a helper method...but instance.info_cache isn't a method, it's an object. | 17:07 |
melwitt | tbh, I wasn't thinking of anyone using rebuild to fix a failed build | 17:07 |
*** elmaciej_ has quit IRC | 17:07 | |
*** chyka has joined #openstack-nova | 17:07 | |
mriedem | melwitt: not necessarily a new volume | 17:08 |
cfriesen | mriedem: I think previously you would have an instance in ERROR state with an attached volume. I think you could have done a "rebuild" on it from the error state. | 17:08 |
mriedem | melwitt: i can create a volume in cinder, and use it for bfv, and specify delete_on_termination | 17:08 |
mriedem | i'm not sure why you'd do that though | 17:08 |
*** sambetts is now known as sambetts|afk | 17:08 | |
mriedem | cfriesen: i thought you were asking about instance.get_network_info() | 17:08 |
mriedem | not self.network_api.get_instance_nw_info(context, instance) | 17:08 |
mriedem | self.network_api.get_instance_nw_info(context, instance) will rebuild the info cache | 17:09 |
melwitt | mriedem: if delete_on_termination was set and it failed the build and you had to delete the instance and start again, you would have to create a new volume, right? | 17:09 |
mriedem | instance.get_network_info() is the same as instance.info_cache.network_info | 17:09 |
mriedem | melwitt: yes | 17:09 |
melwitt | that's what I was saying to cfriesen, you would only have to create a new volume if you had used delete_on_termination. else the volume would be only detached and you could use it again with a new instance | 17:10 |
smcginnis | You might want a bfv deleted on instance delete if you are using it just to manage your storage capacity separate from your local n-compute local storage. | 17:10 |
*** felipemonteiro has quit IRC | 17:10 | |
cfriesen | mriedem: how do I know when I can call instance.get_network_info()? Specifically I'm looking at nova.compute.rpcapi.pre_live_migration()....I want to adjust the RPC timeout based on the number of network ports. | 17:11 |
mriedem | smcginnis: so not the root disk | 17:11 |
mriedem | application data | 17:11 |
mriedem | i just figure people that create a volume directly in cinder and use it to bfv care about re-using the volume, | 17:11 |
mriedem | and people that let nova create the volume for you, don't care | 17:11 |
smcginnis | Eh, less useful maybe just for application data, but still can be used in that way. | 17:11 |
mriedem | cfriesen: you can call it at any time...? | 17:12 |
smcginnis | mriedem: I would think that is usually the case that they do care about that data though if they create in cinder first. | 17:12 |
mriedem | smcginnis: yeah | 17:12 |
mriedem | cfriesen: presumably we rebuild the nw info cache before starting live migration anyway | 17:12 |
mriedem | cfriesen: yes we do | 17:13 |
mriedem | network_info = self.network_api.get_instance_nw_info(context, instance) | 17:13 |
mriedem | in pre_live_migration in the ComputeManager | 17:13 |
cfriesen | mriedem: I'm confused (clearly). in the existing code, in some places where they want network_info they look at instance.info_cache.network_info, and in other places they call | 17:13 |
mriedem | that will refresh the nw info cache | 17:13 |
cfriesen | self.network_api.get_instance_nw_info(context, instance) | 17:13 |
mriedem | so at that point, the instance.info_cache.network_info is up to date | 17:13 |
mriedem | so if you're going to use it to count ports to adjust rpc call timeout, you should be good | 17:13 |
*** derekh has quit IRC | 17:14 | |
cfriesen | no, I need to have the ports in the rpc code that is *calling* pre_live_migration on the dest node. | 17:15 |
cfriesen | so updating in pre_live_migration doesn't help | 17:15 |
mriedem | let me dig up the live migration call chart | 17:15 |
mriedem | of which i still owe fried_rice a shiny nickel | 17:15 |
fried_rice | \o/ | 17:16 |
cfriesen | I think live_migration() on the source node calls pre_live_migration() on the dest | 17:16 |
cfriesen | or rather _do_live_migration() I guess | 17:16 |
mriedem | f me if i can ever find anything in our docs | 17:17 |
mriedem | https://docs.openstack.org/nova/latest/reference/live-migration.html | 17:17 |
mriedem | cfriesen: yeah you're right | 17:17 |
openstackgerrit | Matthew Edmonds proposed openstack/nova-specs master: PowerVM Virt Integration (Rocky) https://review.openstack.org/545111 | 17:18 |
mriedem | and _do_live_migration doesn't refresh the instance nw info cache before calling pre_live_migration | 17:18 |
cfriesen | okay, so I probably need to call self.network_api.get_instance_nw_info(context, instance) ? | 17:18 |
*** links has quit IRC | 17:18 | |
mriedem | cfriesen: so from _do_live_migration if you call instance.get_network_info() then you're just getting whatever is current for the instance in the db | 17:18 |
mriedem | if you really want to refresh to get the latest | 17:19 |
cfriesen | what would cause that to be stale? if someone was in the middle of attaching an interface or something? | 17:19 |
mriedem | but we have the network events from neutron if that changes, and the heal_instance_info_cache periodic | 17:19 |
mriedem | umm | 17:19 |
mriedem | yeah, | 17:20 |
mriedem | since we don't set a task_state for attaching/detaching an interface, you can live migrate the server at the same time | 17:20 |
mriedem | if the instance.task_state was not None (attaching/detaching), you couldn't do a live migration | 17:20 |
mriedem | thta's something i've always wondered about, | 17:20 |
mriedem | why we don't change task_state while attaching/detaching interfaces and volumes | 17:21 |
*** itlinux has quit IRC | 17:22 | |
cfriesen | might be a fun stress test. ping-pong live migrations while attaching/detaching interfaces/volumes | 17:22 |
*** swamireddy has joined #openstack-nova | 17:24 | |
*** READ10 has quit IRC | 17:24 | |
mriedem | fun like a hernia | 17:24 |
*** itlinux has joined #openstack-nova | 17:25 | |
mriedem | cfriesen: so you're probably better off just starting small and getting the current info cache in _do_live_migration and basing the rpc timeout on that | 17:26 |
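A sketch of what that could look like; the helper name, base timeout, and per-port figure are illustrative, not merged nova code:

```python
BASE_TIMEOUT = 60   # seconds; the normal rpc_response_timeout
PER_PORT_COST = 2   # seconds of headroom per port for neutron updates


def pre_live_migration_timeout(instance):
    # instance.get_network_info() reads the cached info_cache from the
    # DB, so counting ports here costs no extra neutron round trip.
    network_info = instance.get_network_info()
    return BASE_TIMEOUT + PER_PORT_COST * len(network_info)
```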
*** acormier has joined #openstack-nova | 17:27 | |
*** belmoreira has quit IRC | 17:29 | |
*** acormier has quit IRC | 17:29 | |
*** acormier has joined #openstack-nova | 17:29 | |
*** acormier has quit IRC | 17:30 | |
*** acormier has joined #openstack-nova | 17:30 | |
cfriesen | mriedem: agreed, it should be good enough. Is this idea of varying the RPC timeout based on number of attached interfaces something that you think would be generally useful? We're seeing it take ~1.5 sec per interface in pre_live_migration() to update the ports, but I'm not sure if that's typical or due to our neutron changes. | 17:36 |
superdan | I don't really like the idea of doing that | 17:36 |
cfriesen | and in nova those calls to neutron are serialized, so with a large number of interfaces it's possible to eat up a good chunk of the RPC timeout just doing the port updates | 17:37 |
superdan | I'd rather see something in olso.messaging that heartbeats running calls so we have a soft and hard timeout range | 17:37 |
mriedem | i seem to remember having similar conversations for other rpc calls that can take a long time, but can't remember details, | 17:37 |
mriedem | or if they were just "should we add a specific rpc timeout config option for this *one* really bad operation?" | 17:37 |
superdan | or work on not chaining rpc calls together | 17:37 |
mriedem | cfriesen: and this is with https://review.openstack.org/#/c/465787/ applied right? | 17:38 |
*** giblet is now known as gibi_off | 17:38 | |
cfriesen | we internally already hack the pre_live_migration timeout for block-live-migration to allow time to download the image from glance. | 17:38 |
mriedem | cfriesen: do you happen to have any rough numbers on how much ^ helps with live migration with >1 ports? | 17:38 |
mriedem | are you using the image cache? | 17:39 |
cfriesen | mriedem: yes, it's with that applied. let me see if I can dig something up. | 17:39 |
cfriesen | mriedem: I think so, but that still means it could hit the first instance that wants that image. | 17:39 |
*** ttsiouts has quit IRC | 17:40 | |
mriedem | on the dest host | 17:41 |
mriedem | the image would be cached on the source host but that doesn't help you | 17:41 |
mriedem | you guys don't use ceph? | 17:41 |
cfriesen | mriedem: up to the end user | 17:41 |
mriedem | ok; figured with all of the live migration you seem to do, you'd push ceph | 17:41 |
cfriesen | mriedem: we support compute nodes with ceph, with local qcow2, and with local thin LVM | 17:41 |
cfriesen | mriedem: some installs are really small, like 2 all-in-one nodes | 17:42 |
*** READ10 has joined #openstack-nova | 17:47 | |
cfriesen | mriedem: looking at my notes, that patch cut the neutron load by a factor of 3 and reduced lock contention in nova. We also had an oslo.lockutils change to introduce fair locks; they merged it, but then it seemed to cause mysterious issues in the CI for a couple of projects, so they reverted it. | 17:48 |
mriedem | how many ports on that instance? | 17:49 |
mriedem | in that test? | 17:49 |
cfriesen | actually, wait, that factor of 3 reduction was for removing redundant calls for the same instance | 17:50 |
cfriesen | 16 ports on the instance (our max) | 17:51 |
*** vladikr has joined #openstack-nova | 17:54 | |
*** harlowja has joined #openstack-nova | 17:55 | |
*** lbragstad has quit IRC | 17:55 | |
*** lbragstad has joined #openstack-nova | 17:56 | |
cfriesen | mriedem: found some numbers. as of april last year each call to get_instance_nw_info() cost roughly 200ms plus about 125ms per port. And without that patch there are O(N) network-changed events in a live-migration (where N is the number of interfaces), so the overall cost is O(N^2) | 17:57 |
cfriesen | with that patch, it drops to O(N) | 17:57 |
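Plugging cfriesen's numbers into a quick back-of-envelope check (constants taken straight from the figures quoted above):

```python
# Each get_instance_nw_info() call costs ~200ms plus ~125ms per port, and
# the unpatched code makes O(N) such calls per live migration.
def nw_info_cost_ms(ports, calls):
    return calls * (200 + 125 * ports)

ports = 16
print(nw_info_cost_ms(ports, calls=ports))  # unpatched O(N^2): 35200 ms
print(nw_info_cost_ms(ports, calls=1))      # patched O(N): 2200 ms
```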
*** hemna_ has quit IRC | 17:57 | |
*** mgoddard_ has quit IRC | 17:58 | |
mriedem | cool | 17:58 |
*** mriedem is now known as mriedem_lunch | 17:58 | |
mriedem_lunch | i'll feast on those results at lunch | 17:58 |
*** hemna_ has joined #openstack-nova | 18:00 | |
*** gyee has joined #openstack-nova | 18:00 | |
* cdent notes the date of an O-notation sighting | 18:02 | |
*** r-daneel has joined #openstack-nova | 18:05 | |
*** Swami has joined #openstack-nova | 18:06 | |
*** jpena is now known as jpena|off | 18:09 | |
*** hemna_ has quit IRC | 18:09 | |
*** salv-orlando has joined #openstack-nova | 18:10 | |
*** hemna_ has joined #openstack-nova | 18:11 | |
*** yamahata has quit IRC | 18:11 | |
*** david-lyle has quit IRC | 18:12 | |
*** brault has quit IRC | 18:17 | |
*** hemna_ has quit IRC | 18:20 | |
*** AlexeyAbashkin has joined #openstack-nova | 18:24 | |
*** hemna_ has joined #openstack-nova | 18:25 | |
*** ralonsoh_ has quit IRC | 18:26 | |
*** tesseract has quit IRC | 18:26 | |
*** AlexeyAbashkin has quit IRC | 18:28 | |
*** dtantsur is now known as dtantsur|afk | 18:29 | |
openstackgerrit | Ken'ichi Ohmichi proposed openstack/nova master: Trivial: Update help of enabled_filters https://review.openstack.org/545431 | 18:30 |
openstackgerrit | Ken'ichi Ohmichi proposed openstack/nova master: Trivial: Update help of enabled_filters https://review.openstack.org/545431 | 18:31 |
*** r-daneel has quit IRC | 18:37 | |
*** r-daneel has joined #openstack-nova | 18:38 | |
*** jamesdenton has quit IRC | 18:39 | |
*** psachin has joined #openstack-nova | 18:41 | |
*** harlowja has quit IRC | 18:44 | |
*** oomichi has joined #openstack-nova | 18:46 | |
*** harlowja has joined #openstack-nova | 18:48 | |
*** weshay is now known as weshay|bbiab | 18:50 | |
*** r-daneel has quit IRC | 18:54 | |
*** lpetrut has joined #openstack-nova | 18:55 | |
*** lajoskatona has quit IRC | 18:55 | |
*** mgoddard_ has joined #openstack-nova | 18:56 | |
*** cdent has quit IRC | 18:58 | |
*** r-daneel has joined #openstack-nova | 18:58 | |
*** yamahata has joined #openstack-nova | 19:04 | |
mrjk_ | mriedem, ok, thank you for your hints | 19:06 |
mnaser | if nova creates neutron ports when it boots an instance, and the port is manually detached with nova instance-detach, the port is deleted (as per preserve_on_delete=False).. anyone else feel this behaviour is bleh? | 19:08 |
mnaser | if i detach a port manually, it means i want to use it | 19:08 |
*** harlowja has quit IRC | 19:10 | |
*** hemna_ has quit IRC | 19:11 | |
*** lucasagomes is now known as lucas-afk | 19:12 | |
*** lpetrut has quit IRC | 19:12 | |
*** david-lyle has joined #openstack-nova | 19:13 | |
imacdonn | hmm, interesting .... it could be argued that if you want to make your ports available for use with other instances, you should pre-create them separately | 19:15 |
*** hemna_ has joined #openstack-nova | 19:15 | |
*** gyee has quit IRC | 19:16 | |
imacdonn | assuming that, perhaps you shouldn't be allowed to detach a port that was created along with the instance (thinking out loud) | 19:18 |
*** fullmetaljackiet has joined #openstack-nova | 19:18 | |
*** lpetrut has joined #openstack-nova | 19:20 | |
*** sambetts|afk has quit IRC | 19:22 | |
*** mgoddard_ has quit IRC | 19:23 | |
*** sambetts_ has joined #openstack-nova | 19:24 | |
*** pchavva1 has joined #openstack-nova | 19:27 | |
TheJulia | jroll: do you, or anyone else, remember if there was any outcome from the duplicate placement issues we encountered yesterday with ironic's multinode grenade job on queens->master? It passed without issues once; I revised the patch slightly and hit the same issue again | 19:29 |
jroll | TheJulia: I didn't see anything, fried_rice said he might look into it | 19:29 |
jroll | I probably won't get bandwidth to investigate today | 19:30 |
jroll | all I got was that yes, we likely triggered a rebalance of ironic n-cpu things | 19:30 |
mrjk_ | cfriesen, I'm reading back your comments, "I'd suggest keeping the number of servers in a single boot request small enough" => Google does not answer this question :) | 19:36 |
mrjk_ | That means I don't have control over this issue? (I was looking for a max cap on instances allowed per request) | 19:37 |
*** mrjk_ has quit IRC | 19:38 | |
*** mrjk has joined #openstack-nova | 19:39 | |
*** pchavva1 has quit IRC | 19:42 | |
*** pchavva has quit IRC | 19:42 | |
*** mriedem_lunch is now known as mriedem | 19:44 | |
imacdonn | mriedem: I have another "pool volume handling" situation to run by you ... wondering if it'd be covered by any existing work, or if I should create a new bug for it | 19:46 |
mriedem | mrjk: there is no limit, besides the user quota, on --max-count instances in a single server create request | 19:46 |
openstackgerrit | Merged openstack/nova master: Only log during pop retry phase https://review.openstack.org/541655 | 19:46 |
mriedem | mrjk: so if you allow users to have a quota of instances to be 30, and they can create up to 30 instances in a single request, then you have to account for that in your deployment | 19:47 |
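As a hypothetical example of the multi-create request mriedem describes, using python-novaclient (keystone_session, image_id, and flavor_id are assumed to exist):

```python
# A single boot request asking for up to 30 servers; nova bounds this only
# by the user's instance quota, not by any per-request cap.
from novaclient import client as nova_client

nova = nova_client.Client('2.1', session=keystone_session)  # session assumed
nova.servers.create(name='many-vms', image=image_id, flavor=flavor_id,
                    min_count=1, max_count=30)
```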
mriedem | imacdonn: pool volume handling? | 19:47 |
imacdonn | mriedem: SA reported that a volume was attached to an instance that didn't exist .... from log-trawling, I discovered that the volume was attached to an instance, and the instance was terminated while the cinder backend was down - cinder-volume threw a VolumeBackendAPIException, but nova-compute ignored it, and proceeded to delete the instance anyway - https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2404 | 19:49 |
*** weshay|bbiab is now known as weshay | 19:50 | |
*** harlowja has joined #openstack-nova | 19:50 | |
*** amoralej is now known as amoralej|off | 19:53 | |
mriedem | imacdonn: and then | 19:53 |
imacdonn | mriedem: and then we have a volume that's attached to an instance that doesn't exist | 19:53 |
mriedem | yup | 19:53 |
mriedem | you'll have to force-detach the volume | 19:53 |
imacdonn | not only that, but it left behind broken SCSI/multipath devices on the compute node | 19:54 |
imacdonn | shouldn't the instance termination fail in this case ? | 19:54 |
mriedem | this is the way it's always worked, i'm not exactly sure why, besides nova just wants to delete the instance | 19:55 |
cfriesen | mriedem: I can see an argument for not deleting the instance if it means leaving neutron/cinder in a confused state | 19:56 |
mriedem | well, | 19:56 |
mriedem | i think nova ignores it, | 19:56 |
mriedem | because at this point, nova has already destroyed the guest | 19:56 |
mriedem | https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2354 | 19:56 |
mriedem | similarly, if we can't unbind the port, we log an exception but don't reraise it https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L563 | 19:58 |
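Reduced to a sketch, the pattern being described looks roughly like this. It paraphrases the linked manager/neutronv2 code paths; the helper and method names (_cleanup_after_destroy, unbind_port) are illustrative, not the actual nova source.

```python
import logging

LOG = logging.getLogger(__name__)

def _cleanup_after_destroy(volumes, ports, volume_api, network_api):
    # "Log and keep going": the guest is already destroyed, so cleanup
    # failures are logged and the delete proceeds, leaving any broken
    # volume/port state for manual cleanup.
    for bdm in volumes:
        try:
            volume_api.terminate_connection(bdm.volume_id)
        except Exception:
            LOG.exception('Failed to detach volume %s; manual force-detach '
                          'in cinder may be needed', bdm.volume_id)
    for port_id in ports:
        try:
            network_api.unbind_port(port_id)
        except Exception:
            LOG.exception('Failed to unbind port %s', port_id)
```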
mrjk | mriedem, ok so I guess I can't really solve this issue then, but I'm still annoyed because users report errors now. Increasing service_down_time has a wider impact than just the scheduler, right? | 19:58 |
cfriesen | anyone else find it disconcerting that _shutdown_instance() is closer to "destroy/delete" than just "shutdown"? | 19:59 |
imacdonn | hmm that log message for the port really should be LOG.warn(), not debug() | 19:59 |
mriedem | mrjk: yes service_down_time is used per service, | 19:59 |
mriedem | mrjk: if you have your control services running on different hosts, then you could have a separate config value for them | 19:59 |
mriedem | mrjk: if you are going to allow your users to have a high enough quota to create lots of instances in a single request, and scheduler/conductor is taking too long, then you need to scale something out | 20:00 |
mrjk | hmm, it's a bit hackish, but I will definitely consider this | 20:00 |
mriedem | maybe conductor | 20:00 |
imacdonn | also, a port not existing is different from a "something went badly wrong" exception, IMO | 20:00 |
mriedem | imacdonn: it's not logging an exception on port not found | 20:01 |
mrjk | I'll continue to investigate the conductor, to see if I find anything more | 20:01 |
mriedem | it doesn't care about the port not found b/c it's trying to unbind it | 20:01 |
mrjk | Thx for your help | 20:01 |
mriedem | if the actual port update fails with a 400 or 500 or something, it logs an exception trace | 20:01 |
mriedem | but keeps going | 20:01 |
mriedem | like i said, the guest is gone by this point | 20:01 |
fried_rice | jroll, TheJulia: My investigation stalled at "Yup, you tried to create a provider with the same name but a different UUID." | 20:01 |
imacdonn | oh, right, yeah | 20:01 |
mriedem | so if volume/port cleanup fails, | 20:01 |
mriedem | the guest in the hypervisor is already gone, and you have manual cleanup to do in cinder/neutron | 20:02 |
mriedem | if there are other better historical reasons for this, i'm hoping maybe leakypipes or superdan can chime in | 20:02 |
*** r-daneel_ has joined #openstack-nova | 20:02 | |
*** ccamacho has quit IRC | 20:02 | |
smcginnis | And if the cinder or neutron failure was due to that volume or port being deleted externally, you wouldn't want your broken instances stuck. | 20:03 |
*** salv-orlando has quit IRC | 20:03 | |
*** salv-orlando has joined #openstack-nova | 20:04 | |
imacdonn | I'm actually more concerned about the left-behind SCSI/multipath devices on the compute node ... we actually monitor for that, because it's caused us a lot of pain .. it may be less painful with Gorka's work on os-brick, but it's still messy to leave that stuff laying around | 20:04 |
openstackgerrit | Merged openstack/nova master: api-ref: Further clarify placement aggregates https://review.openstack.org/545356 | 20:04 |
openstackgerrit | Merged openstack/nova master: Fix and update compute schedulers config guide https://review.openstack.org/544010 | 20:04 |
openstackgerrit | Merged openstack/nova master: Fix warn api_class is deprecated, use backend https://review.openstack.org/543830 | 20:05 |
superdan | lyarwood is probably your man for understanding the multipath residue | 20:05 |
*** r-daneel has quit IRC | 20:05 | |
*** r-daneel_ is now known as r-daneel | 20:05 | |
superdan | I definitely don't have much context on that | 20:05 |
imacdonn | if nova's going to take the ostrich approach to cinder failures, perhaps it should at least clean up the os-brick stuff .... I suppose I should try to confirm that it doesn't do that with the latest code .. the case I diagnosed was on Ocata | 20:05 |
mriedem | imacdonn: anything to do with os-brick, | 20:06 |
mriedem | would be when driver.destroy is called to delete the guest | 20:06 |
mriedem | which presumably didn't fail | 20:07 |
mriedem | right here https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1055 | 20:07 |
*** salv-orlando has quit IRC | 20:08 | |
imacdonn | hmmm | 20:08 |
imacdonn | something failed, because the devices were left behind | 20:08 |
mriedem | the thing you pointed out earlier, was when nova detaches the volume on the cinder side | 20:08 |
mriedem | not the compute host | 20:08 |
mriedem | well, calls terminate_connection to remove the export | 20:09 |
mriedem | detach_volume in cinder is just updating the volume status to 'available' | 20:09 |
mriedem | multipath shenanigans would be in os-brick when the libvirt driver calls disconnect_volume | 20:09 |
mriedem | btw, i don't know what an SA is | 20:10 |
mriedem | except super america | 20:10 |
mriedem | gas station | 20:10 |
imacdonn | heh .. sys admin | 20:10 |
imacdonn | seems like I'm going to have to try to reproduce this .... when I got to the case I diagnosed, the broken devices had already been cleaned up manually ... I only have non-debug logs to go on | 20:13 |
*** eharney has quit IRC | 20:13 | |
imacdonn | actually... just had a thought ... it may be that the devices DID get unplumbed, but then a subsequent instance creation caused an iSCSI rescan, and they were rediscovered .. if that's the case, Gorka's os-brick work (in Pike) should solve that | 20:15 |
mriedem | imacdonn: if os-brick failed we should have a warning from https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1064 | 20:15 |
*** lpetrut has quit IRC | 20:16 | |
*** fullmetaljackiet has quit IRC | 20:16 | |
*** lpetrut has joined #openstack-nova | 20:17 | |
imacdonn | ("Gorka's work" referring to https://review.openstack.org/#/c/445943/) | 20:18 |
imacdonn | or whatever that turned into .. that doesn't look like the right review | 20:19 |
imacdonn | https://review.openstack.org/#/c/433104/ | 20:20 |
imacdonn | given that, it's not so bad .... I still think it's kinda not great to leave cinder believing that the volumes are attached to instances that don't exist, but I think I can live with it | 20:22 |
*** BlackDex has quit IRC | 20:23 | |
*** salv-orlando has joined #openstack-nova | 20:34 | |
*** READ10 has quit IRC | 20:36 | |
*** BlackDex has joined #openstack-nova | 20:41 | |
*** andreas_s has joined #openstack-nova | 20:43 | |
*** andreas_s has quit IRC | 20:47 | |
TheJulia | fried_rice: jroll: In that case, it almost sounds like the list of nodes being serviced by that individual compute node is out of sync... and such a constraint almost sounds like we can't have multiple nova-compute processes running at all 8( | 20:48 |
fried_rice | TheJulia: As long as your compute node RPs are actually the same, you should be fine. Why aren't they being created with the same name & uuid? | 20:50 |
fried_rice | TheJulia: In resource tracker _update, the compute_node param has a UUID and a name. The report client is using both of them to _ensure_resource_provider. | 20:53 |
TheJulia | so, I guess then I have a data model question due to a lack of understanding on my part. is the uuid ironic's baremetal node uuid that we are talking about? Because I thought based on comments yesterday that it was getting posted as the name of the entry and the uuid was the actual nova-compute node? | 20:53 |
fried_rice | TheJulia: This is where I'm also pretty confused. But it seems to me like, whichever is the case, its uuid/name should be immutable. | 20:54 |
TheJulia | ugh | 20:54 |
fried_rice | Especially since the name is a UUID (which is even more confusing) | 20:54 |
TheJulia | if it is the compute node uuid.... then.... we have lost functionality | 20:55 |
fried_rice | We're creating the RP in placement with compute_node.uuid, compute_node.hypervisor_hostname | 20:55 |
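Conceptually, the failing interaction boils down to something like the sketch below; this is not the report client's actual code, and placement_session and handle_name_conflict are stand-ins.

```python
# The provider is keyed by compute_node.uuid but named by
# compute_node.hypervisor_hostname; placement returns 409 Conflict when
# that name already belongs to a provider with a different uuid.
resp = placement_session.post(
    '/resource_providers',
    json={'uuid': compute_node.uuid,
          'name': compute_node.hypervisor_hostname})
if resp.status_code == 409:
    # The "duplicate provider" failure TheJulia is hitting: same name (the
    # ironic node uuid), new compute_node uuid after a rebalance.
    handle_name_conflict()
```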
*** markvoelker_ has quit IRC | 20:55 | |
TheJulia | so they can never fail over between compute nodes.... heh | 20:56 |
fried_rice | Under what circumstances could I get a compute_node with the same hypervisor_hostname but a *different* uuid? | 20:56 |
*** markvoelker has joined #openstack-nova | 20:56 | |
TheJulia | if a compute process went down too long and the hash ring recalculated | 20:56 |
TheJulia | the other compute process would take over for the other nodes | 20:56 |
TheJulia | or at least, that is what we were doing as i understand it | 20:57 |
*** liverpooler has quit IRC | 20:57 | |
fried_rice | Cool cool. But why doesn't the second compute process perceive the nodes as having the same UUID as the first compute process did? | 20:57 |
TheJulia | and now I'm confused :) | 20:58 |
* TheJulia switches gears and cracks open the nova code | 21:00 | |
fried_rice | TheJulia: resource_tracker.py in the 900s | 21:00 |
fried_rice | sorry, 500s | 21:00 |
TheJulia | thanks | 21:00 |
fried_rice | In particular, I thought I4253cffca3dbf558c875eed7e77711a31e9e3406 was supposed to be attacking this very problem. | 21:01 |
*** eharney has joined #openstack-nova | 21:05 | |
*** sambetts_ has quit IRC | 21:06 | |
fried_rice | TheJulia: I think I get it. Is the hypervisor_hostname assigned based on compute_node.host? | 21:06 |
TheJulia | that is what I think, it is just I've never dug through this portion of nova, so I'm not grasping it very well | 21:06 |
TheJulia | also *squirrel* *blink* *blink* | 21:07 |
fried_rice | mriedem: Executive summary on the difference between nodename and hypervisor_hostname? | 21:07 |
jroll | in ironic-land, nodename == ironic node uuid, hypervisor_hostname == nova-compute hostname (rabbit queue) | 21:08 |
fried_rice | Aha | 21:09 |
fried_rice | I think therein lies the boggle. | 21:09 |
fried_rice | Because in non-ironic-land, I *think* they are the same. | 21:09 |
fried_rice | And we're using the wrong one. | 21:09 |
jroll | the patch you mentioned was meant to handle this, but I think only for the compute_nodes table, not resource providers | 21:09 |
*** r-daneel has quit IRC | 21:10 | |
jroll | (emphasis on I think) | 21:10 |
fried_rice | L869 | 21:11 |
jroll | nice. | 21:11 |
mriedem | for non-ironic, compute_nodes.host and compute_nodes.hypervisor_hostname are the same | 21:12 |
* fried_rice hacks | 21:12 | |
jroll | wait, 869 can't be wrong, we would have never had any resources | 21:13 |
jroll | that would have blown up all over the place | 21:13 |
jroll | did I have this backwards? | 21:13 |
TheJulia | hmmmmmmmm | 21:13 |
jroll | fried_rice: I'm sorry, I lied. hypervisor_hostname is the ironic node uuid | 21:15 |
jroll | compute_node.host is the nova-compute hostname | 21:16 |
jroll | your use of nodename threw me off, that is equivalent in (most? all?) places to hypervisor_hostname | 21:16 |
fried_rice | So then we go back to the original question: How does compute_node.uuid *change* when compute_node.hypervisor_hostname is the same? | 21:17 |
jroll | compute_node.uuid does not | 21:18 |
jroll | but resource provider uuid does | 21:18 |
jroll | afaict | 21:18 |
* jroll goes back to the logs | 21:18 | |
fried_rice | No, because the error is happening via _ensure_resource_provider(compute_node.uuid, compute_node.hypervisor_hostname) | 21:18 |
fried_rice | and we're running into a conflict on the latter. | 21:18 |
jroll | is resource provider UUID always the same as compute node uuid? | 21:19 |
fried_rice | For compute node resource providers, in Queens, yes. | 21:19 |
fried_rice | Uhm. | 21:19 |
fried_rice | Yes. | 21:19 |
fried_rice | Was gonna say get_inventory might be able to muck with it, but no. | 21:20 |
fried_rice | Is this because nova is creating the ComputeNode entries afresh, with an autogenerated UUID? | 21:20 |
jroll | seems like it would be, yes. but that's what https://review.openstack.org/#/c/508555/ should have fixed | 21:20 |
fried_rice | Exacitically | 21:20 |
fried_rice | L518-9 in fact | 21:21 |
jroll | right | 21:21 |
*** tssurya has quit IRC | 21:22 | |
*** oomichi has quit IRC | 21:22 | |
jroll | we demonstrate that code is running: http://logs.openstack.org/50/544750/10/check/ironic-grenade-dsvm-multinode-multitenant/d7a1ee7/logs/subnode-2/screen-n-cpu.txt.gz#_Feb_16_17_21_04_613786 | 21:23 |
* fried_rice hacks more... | 21:23 | |
fried_rice | I'm sorta guessing you can't change the UUID of a ComputeNode in the db. mriedem? | 21:24 |
jroll | both compute nodes appear to be attempting to create an RP for that node (9de4d7b4-51c9-4088-99b4-cd648332504e), at different times of course | 21:24 |
fried_rice | Right; the first one succeeds but the second one barfs | 21:24 |
fried_rice | right? | 21:24 |
jroll | um | 21:24 |
jroll | well, both fail | 21:24 |
mriedem | fried_rice: changing the uuid would be kinda bad | 21:24 |
jroll | but my um: how do we feel about adding a cn.save() here: https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py?utf8=%E2%9C%93#L525 | 21:24 |
* jroll checks if that's done elsewhere | 21:25 | |
fried_rice | jroll: I was going to suggest that. | 21:25 |
fried_rice | but also not knowing if it's The Right Thing. | 21:25 |
fried_rice | I was also going to suggest setting the UUID before save()ing. | 21:25 |
fried_rice | But mriedem won't come to my birthday party if I do that. | 21:25 |
jroll | nah, the UUID should be fine, we're updating an existing CN | 21:25 |
fried_rice | But I thought that was the whole problem. | 21:26 |
fried_rice | btw, assuming _resource_change returns True, that .save() should be getting done at L859. | 21:26 |
* jroll unwinds his brain | 21:26 | |
jroll | yeah, I'm not sure it will, the schedulable resources should be the same as before, only the host attribute is changing | 21:27 |
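jroll's proposal from a few lines up, sketched out (compare resource_tracker.py around the linked L525; this is the idea being floated, not a merged patch, and the surrounding names follow the resource tracker's conventions):

```python
# When an existing ComputeNode is claimed by this host after a hash-ring
# rebalance, persist the host change right away instead of relying on
# _resource_change() to notice -- it may not, since the schedulable
# resources are unchanged.
cn = self._get_compute_node(context, nodename)
if cn and cn.host != self.host:
    cn.host = self.host
    cn.save()  # the cn.save() jroll is asking about
```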
jroll | ok, I get it now, we did get a new UUID, wtf | 21:30 |
fried_rice | w, indeed, tf | 21:30 |
jroll | sorry, I got confuzzled | 21:30 |
jroll | you know... | 21:31 |
jroll | does a compute_node record get deleted at some point if the compute service disappears? | 21:32 |
jroll | and if so, do we clean up the resource providers? | 21:32 |
fried_rice | I could answer that second question if I knew the answer to the first. But I don't. mriedem? (Or mriedem if you'd care to delegate, who's around who knows this stuff?) | 21:33 |
jroll | so um | 21:33 |
jroll | this is pike->queens, afaik | 21:33 |
jroll | can we please land https://review.openstack.org/#/c/527423/ . | 21:33 |
openstackgerrit | Eric Fried proposed openstack/nova master: WIP: Make sure rebalance saves the compute node https://review.openstack.org/545464 | 21:33 |
fried_rice | jroll: Here's a quickie to make sure that save() is happening ^ | 21:34 |
fried_rice | jroll: But where the error happens, we're clearly running that code. | 21:34 |
jroll | er, the one I'm looking at is queens->master | 21:34 |
jroll | yeah | 21:34 |
fried_rice | So whereas I agree we should land that backport, that's not gonna be your fix here. | 21:35 |
mriedem | jroll: it doesn't | 21:35 |
jroll | sorry, we've been having issues with this job since before queens was cut, so I've been getting confused | 21:35 |
mriedem | compute_nodes hang out until manually removed | 21:35 |
jroll | damn | 21:35 |
mriedem | there are some bugs that tssurya opened for removing compute nodes and providers | 21:35 |
jroll | I was hoping compute nodes did get deleted but not RPs | 21:36 |
jroll | that would explain things | 21:36 |
mriedem | https://bugs.launchpad.net/nova/+bug/1749734 | 21:36 |
openstack | Launchpad bug 1749734 in OpenStack Compute (nova) "Purge the compute_node records, resource provider records and host_mappings when doing force delete of the host" [Medium,Confirmed] - Assigned to Surya Seetharaman (tssurya) | 21:36 |
jroll | thanks | 21:36 |
fried_rice | jroll: I can get a little more aggressive with the hacking. | 21:37 |
jroll | fried_rice: feel free, I'm just trying to wrap my head around some of this | 21:38 |
jroll | this seems wrong. http://logs.openstack.org/50/544750/10/check/ironic-grenade-dsvm-multinode-multitenant/d7a1ee7/logs/screen-n-cpu.txt.gz#_Feb_16_17_06_33_592313 | 21:38 |
jroll | oh, there we are http://logs.openstack.org/50/544750/10/check/ironic-grenade-dsvm-multinode-multitenant/d7a1ee7/logs/screen-n-cpu.txt.gz#_Feb_16_17_04_22_814020 | 21:39 |
jroll | but we can't talk to placement, so the RP stays | 21:39 |
jroll | because Feb 16 17:04:22.591129 ubuntu-xenial-rax-ord-0002580076 nova-compute[28778]: DEBUG nova.virt.ironic.driver [None req-969cdb75-026b-4cac-ba09-3f3be962a09d service nova] Returning 0 available node(s) {{(pid=28778) get_available_nodes /opt/stack/old/nova/nova/virt/ironic/driver.py:757}} | 21:39 |
jroll | because ironic is down | 21:40 |
jroll | got dang. | 21:40 |
* jroll stabs everything | 21:40 | |
jroll | I remember this code landing to fix something else | 21:40 |
fried_rice | jroll: Is there any chance that this ironic node being rebalanced has allocations? | 21:40 |
jroll | fried_rice: yes, we create an instance before the upgrade AFAIK | 21:41 |
fried_rice | So we have to do more than just delete the old RP. We have to move his allocations to the new one. This ain't gonna work. | 21:41 |
jroll | this crap is burning us: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L607-L616 | 21:41 |
jroll | right, we shouldn't be deleting the compute node or the RP | 21:41 |
jroll | so here's what's going on, in short: | 21:42 |
fried_rice | The answer is quite simply to make sure the compute node doesn't change its freakin uuid. | 21:42 |
jroll | n-cpu has a bunch of ironic nodes it's managing | 21:42 |
jroll | ironic goes down for upgrade | 21:42 |
jroll | n-cpu does a RT update | 21:42 |
jroll | n-cpu can't reach ironic | 21:42 |
jroll | n-cpu thinks all the ironic nodes are gone, for good, as if ironic returned I have no nodes | 21:43 |
jroll | n-cpu deletes the compute_node records | 21:43 |
jroll | ironic comes back | 21:43 |
jroll | n-cpu does an RT update, sees nodes, creates compute_node records | 21:43 |
jroll | meanwhile, n-cpu couldn't delete the resource providers from placement, and so it tries to create new ones and *boom* | 21:44 |
* jroll burns this entire driver to the ground | 21:44 | |
fried_rice | n-cpu deletes the compute_node records? | 21:45 |
jroll | btw, s/ironic goes down for upgrade/keystone goes down for upgrade/, which is why neither ironic nor placement can be reached | 21:45 |
fried_rice | I thought we decided that wasn't happening. | 21:45 |
jroll | yes | 21:45 |
jroll | http://logs.openstack.org/50/544750/10/check/ironic-grenade-dsvm-multinode-multitenant/d7a1ee7/logs/screen-n-cpu.txt.gz#_Feb_16_17_04_22_814020 | 21:45 |
jroll | because: http://logs.openstack.org/50/544750/10/check/ironic-grenade-dsvm-multinode-multitenant/d7a1ee7/logs/screen-n-cpu.txt.gz#_Feb_16_17_04_22_591129 | 21:45 |
jroll | because: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L607-L616 | 21:45 |
jroll | (maybe it is apache that is down, don't know, don't care, things are down and n-cpu can't deal) | 21:46 |
jroll | for the curious, the commit message and bug here explain why we're returning an empty list of nodes there: https://review.openstack.org/#/c/487925/ | 21:47 |
jroll | fried_rice: that all make sense? | 21:48 |
jroll | TheJulia: ^ fyi, I think I nailed it down. | 21:48 |
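The failure mode jroll just walked through hinges on the ironic driver reporting "no nodes" when it cannot reach ironic. Roughly, as a paraphrase of the linked driver.py lines (the _get_node_list helper is hypothetical):

```python
def get_available_nodes(self, refresh=False):
    # Paraphrase, not the exact nova code: an unreachable ironic is
    # reported as an empty node list, so the resource tracker treats every
    # node as gone and deletes the compute_node records.
    try:
        node_list = self._get_node_list()  # hypothetical ironic API helper
    except Exception:
        return []  # "lying to the RT": down is reported as empty
    return [node.uuid for node in node_list]
```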
*** r-daneel has joined #openstack-nova | 21:48 | |
* TheJulia hands jroll accelerant to assist with the burning the driver to the ground | 21:50 | |
jroll | thanks, my gas can is nearly empty | 21:50 |
jroll | turns out lying to the resource tracker is wrong, who'da thought | 21:53 |
* jroll thinks we should just kill that with fire, but then n-cpu can't start without ironic up, need to work that out too | 21:53 | |
*** sree has joined #openstack-nova | 21:54 | |
* TheJulia hands jroll magnesium for good measure | 21:54 | |
jroll | so the only way n-cpu will blow up by ironic not being reachable is because of this: https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L524 | 21:57 |
jroll | which will happen on the first RT run | 21:57 |
jroll | or not even? wtf | 21:57 |
jroll | oh, there used to be a _refresh_cache() there | 21:58 |
jroll | but even before 487925 we would return an empty list | 21:58 |
*** sree has quit IRC | 21:59 | |
jroll | ah jeez https://github.com/openstack/nova/commit/cce06a1e9855d9eed3f7c653200853f23466d791 | 21:59 |
* jroll thinks we can kill all this, test what happens when starting n-cpu without ironic available, fix that stuff without lying to RT, go from there | 22:00 | |
*** lpetrut has quit IRC | 22:00 | |
*** edmondsw has quit IRC | 22:01 | |
jroll | hm, 5pm friday | 22:01 |
* jroll chugs a coffee and hacks | 22:01 | |
fried_rice | jroll: Need anything from me? | 22:01 |
*** edmondsw has joined #openstack-nova | 22:01 | |
jroll | fried_rice: whiskey may be needed | 22:01 |
jroll | :) | 22:01 |
fried_rice | Can https://review.openstack.org/545464 be abandoned? | 22:01 |
jroll | yes, believe so | 22:02 |
fried_rice | jroll: The fact that you're a time zone ahead of me indicates I have no way of getting you a bottle in time to save you. | 22:02 |
TheJulia | jroll: I will buy you whiskey in Dublin | 22:02 |
fried_rice | Yeah, that ^ | 22:02 |
jroll | heh | 22:02 |
TheJulia | And next time I'm through your part of the country, I'll make a point of bringing really good whiskey on my RV | 22:02 |
jroll | :o <3 | 22:03 |
*** psachin has quit IRC | 22:04 | |
*** edmondsw has quit IRC | 22:06 | |
*** fullmetaljackiet has joined #openstack-nova | 22:09 | |
*** amodi has quit IRC | 22:10 | |
*** awaugama has quit IRC | 22:13 | |
*** slaweq has quit IRC | 22:15 | |
mrjk | About nova.conf, something is not clear. Let's say I have conductor, api, and other services. Most settings are in the DEFAULT section, but is it possible to override some default parameters for a specific service? Let's say I want to change the debug mode for only one service, and not the others ... | 22:16 |
mriedem | debug is global | 22:17 |
mriedem | as long as you have all of your controller services running on the same host, they are going to share config from the [DEFAULT] section | 22:17 |
mrjk | How could I find out this info by myself ? | 22:17 |
mriedem | you could split configs and create an /etc/nova/nova-api.conf which has config specific to your API service | 22:17 |
mriedem | and remove debug from the base /etc/nova/nova.conf | 22:17 |
mriedem | then run the service with both config files | 22:18 |
mrjk | Ok, this would be the way to go. I was unsure; I believed there was some kind of defaulting/override of values | 22:18 |
mriedem | nova-api --config-file /etc/nova/nova.conf --config-file /etc/nova/nova-api.conf | 22:18 |
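A minimal sketch of that layout; the file contents are illustrative, and with oslo.config, values in later --config-file arguments override earlier ones:

```ini
# /etc/nova/nova-api.conf -- hypothetical per-service override file layered
# on top of the shared /etc/nova/nova.conf via a second --config-file arg.
[DEFAULT]
debug = True
```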
*** mchlumsky_ has quit IRC | 22:18 | |
*** jafeha__ has joined #openstack-nova | 22:18 | |
*** jafeha has quit IRC | 22:21 | |
*** Guest48782 has quit IRC | 22:24 | |
openstackgerrit | Matthew Treinish proposed openstack/nova master: Remove single quotes from posargs on stestr run commands https://review.openstack.org/545476 | 22:26 |
mtreinish | melwitt: ^^^ but lets test this and make sure I'm not just seeing things... | 22:26 |
melwitt | k, I'll try it | 22:27 |
*** acormier_ has joined #openstack-nova | 22:28 | |
*** acormier has quit IRC | 22:32 | |
*** zzzeek has quit IRC | 22:38 | |
*** kuzko has joined #openstack-nova | 22:40 | |
*** priteau has quit IRC | 22:41 | |
*** zzzeek has joined #openstack-nova | 22:42 | |
openstackgerrit | Matt Riedemann proposed openstack/nova master: Fix error handling in compute API for multiattach errors https://review.openstack.org/545478 | 22:42 |
mriedem | well i wish i would have found this before we cut RC2 ^ because that's an annoying UX problem | 22:42 |
openstackgerrit | Jim Rollenhagen proposed openstack/nova master: ironic: stop lying to the RT when ironic is down https://review.openstack.org/545479 | 22:46 |
jroll | TheJulia: fried_rice: ^ that fixes it, but will crash at startup if ironic is down | 22:46 |
* jroll is going to walk his dog before sunlight runs out | 22:47 | |
fried_rice | jroll: Maybe we *should* crash at startup if ironic is down. | 22:47 |
jroll | fried_rice: yeah, I kind of agree, kind of don't, regardless crashing when ironic is down was a huge pain in CI in the past that I don't want to live again | 22:48 |
jroll | I also feel like I want to be able to start my computes whenever and have them do stuff when ironic comes back | 22:48 |
jroll | though I don't believe in upgrading nova and ironic at the same time (or even the same maintenance window), other people do and this makes their life easier | 22:49 |
mriedem | nova-compute doesn't start if we can't connect to libvirt | 22:49 |
mriedem | i think the same for powervm? | 22:49 |
mriedem | not sure about hyperv/xen/vmware | 22:49 |
* fried_rice recalls doing something for this in powervm... | 22:49 | |
* fried_rice looks... | 22:50 | |
mriedem | nova-compute shouldn't come up, | 22:50 |
mriedem | because then the service will say it's up, and be around for scheduling, | 22:50 |
mriedem | and will just not work if the scheduler picks it and the hypervisor is gone | 22:50 |
jroll | mriedem: yeah, but libvirt isn't some external service, that just means you've configured your hypervisor wrong | 22:50 |
mriedem | vcenter is an external service | 22:51 |
jroll | and at least in ironic's case, there won't be any resources to schedule to, until it can connect to ironic | 22:51 |
jroll | or I guess there will, sigh | 22:51 |
fried_rice | In powervm, it looks like we'll hold up init_host for a while if we can't talk to the hypervisor, and then we'll ultimately blow up. | 22:51 |
mriedem | i thought someone's dog was going to get walked? | 22:51 |
mriedem | i can hear him whining from here | 22:51 |
jroll | good point | 22:51 |
fried_rice | But I've got a nice TODO there to make it work like I73a34eb6e0ca32d03e54d12a5e066b2ed4f19a61 which will actually disable the compute service (but not crash it) in that case. | 22:52 |
jroll | bbiab | 22:52 |
mriedem | melwitt: did you figure this out? https://review.openstack.org/#/c/340614/18/nova/compute/api.py@2029 | 22:53 |
mriedem | you had >1 attachment, right? | 22:53 |
mriedem | and that's why the volume status wasn't changing to 'available'? | 22:53 |
melwitt | mriedem: I had multiple attachments because I was having trouble getting the code path to hit in devstack. so I tried the scenario multiple times with the same volume by reset-state on it | 22:55 |
melwitt | and didn't notice it was building up attachments | 22:55 |
melwitt | so once I started from a clean slate, new volume and did the scenario, it worked as expected. the volume actually had no attachments in the fresh volume case. it was 'reserved' with no attachments | 22:56 |
melwitt | then the attachment_delete change it from 'reserved' -> 'available', then the volume_api.delete deleted the volume properly | 22:56 |
melwitt | so I think all is well | 22:57 |
mriedem | that was tied to the thing on L2063 too? | 22:58 |
mriedem | if detach fails, you definitely can't delete | 22:58 |
mriedem | i just updated the comments after you realized it was a test env issue | 23:00 |
mriedem | and yes volume attachments can build up if not managed | 23:00 |
mriedem | that's why i was using force detach earlier, | 23:00 |
mriedem | because you can do reset-state on the volume, but that doesn't remove the old attachments | 23:00 |
mriedem | really need a CLI for force detach in cinder | 23:01 |
melwitt | yeah | 23:03 |
melwitt | yeah, I know that if the detach fails you definitely can't delete. I was just trying to work out whether we should add a new try-except there to try the delete even if the detach fails | 23:04 |
melwitt | but I think that was legacy from when detach could fail with "nothing to detach" | 23:04 |
melwitt | so I took it out | 23:04 |
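The force-detach mriedem mentions is exposed as a cinder volume action; absent a dedicated CLI at the time, it takes a raw POST like the sketch below. Endpoint and token handling are assumed, and the body follows cinder's os-force_detach action format.

```python
import requests

def force_detach(cinder_endpoint, token, volume_id):
    # Admin-only cinder action; clears attachments server-side even when
    # nova no longer knows about the instance.
    body = {'os-force_detach': {'attachment_id': None, 'connector': None}}
    resp = requests.post(
        '%s/volumes/%s/action' % (cinder_endpoint, volume_id),
        json=body, headers={'X-Auth-Token': token})
    resp.raise_for_status()
```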
*** burt has quit IRC | 23:07 | |
TheJulia | jroll: fwiw, we put some restarts in for issues with having to restart nova-compute due to restarts. We should just be able to make it a default thing... I think.... we'll likely want to verify that it works across multinode grenade jobs for ironic since they are different | 23:09 |
jroll | TheJulia: so what you're saying is allowing a crash at startup shouldn't be an issue for CI? | 23:10 |
TheJulia | afaik it should not be | 23:10 |
TheJulia | we have a default restart if it is not multinode if I'm remembering correctly | 23:11 |
*** acormier_ has quit IRC | 23:11 | |
TheJulia | which likely is why... | 23:11 |
TheJulia | ugh | 23:11 |
* TheJulia puts laptop down, and goes and has a drink | 23:11 | |
*** acormier has joined #openstack-nova | 23:11 | |
jroll | hrm | 23:11 |
jroll | agree, this sounds like a monday thing | 23:12 |
*** weshay is now known as weshay_PTO | 23:14 | |
TheJulia | jroll: yes ++ | 23:14 |
mriedem | melwitt: ok +2 on https://review.openstack.org/#/c/340614/ now | 23:14 |
mriedem | melwitt: now we need to rope in superdan to peruse the series | 23:15 |
mriedem | at 3:15pm on a friday | 23:15 |
mriedem | melwitt: did you take a look at https://review.openstack.org/#/c/545123/ and the one after it? | 23:15 |
*** acormier has quit IRC | 23:17 | |
*** figleaf is now known as edleafe | 23:22 | |
melwitt | \o/ hallelujah | 23:37 |
melwitt | mriedem: not yet, it's next on my list | 23:38 |
melwitt | good, the patches are small. yess | 23:41 |
openstackgerrit | Eric Fried proposed openstack/nova master: New-style _set_inventory_for_provider https://review.openstack.org/537648 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: SchedulerReportClient.update_from_provider_tree https://review.openstack.org/533821 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: Use update_provider_tree from resource tracker https://review.openstack.org/520246 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: Fix nits in update_provider_tree series https://review.openstack.org/531260 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: Move refresh time from report client to prov tree https://review.openstack.org/535517 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: Make generation optional in ProviderTree https://review.openstack.org/539324 | 23:43 |
openstackgerrit | Eric Fried proposed openstack/nova master: WIP: Add nested resources to server moving tests https://review.openstack.org/527728 | 23:43 |
fried_rice | Because you know I'm all about rebase, 'bout rebase... | 23:43 |
*** hemna_ has quit IRC | 23:50 | |
*** hongbin has quit IRC | 23:51 | |
fried_rice | edleafe: Making sure I'm not seeing things - do we not have GET /resource_providers?with_traits=... ? | 23:54 |
*** sree has joined #openstack-nova | 23:55 | |
*** chyka has quit IRC | 23:58 | |
*** chyka has joined #openstack-nova | 23:58 | |
*** sree has quit IRC | 23:59 |