opendevreview | Brin Zhang proposed openstack/nova master: Replaces tenant_id with project_id from List/Update Servers APIs https://review.opendev.org/c/openstack/nova/+/764292 | 01:46 |
opendevreview | Brin Zhang proposed openstack/nova master: Replaces tenant_id with project_id from List/Update Servers APIs https://review.opendev.org/c/openstack/nova/+/764292 | 02:01 |
opendevreview | Brin Zhang proposed openstack/nova master: [Trival] Fix wrong microversion in TestClass name https://review.opendev.org/c/openstack/nova/+/816778 | 02:12 |
opendevreview | Brin Zhang proposed openstack/nova master: Replace all_tenants with all_projects in List Server APIs https://review.opendev.org/c/openstack/nova/+/765311 | 02:47 |
opendevreview | Brin Zhang proposed openstack/nova master: Replaces tenant_id with project_id from Rebuild Server API https://review.opendev.org/c/openstack/nova/+/766380 | 02:58 |
opendevreview | Brin Zhang proposed openstack/nova master: Replaces tenant_id with project_id from List/Update Servers APIs https://review.opendev.org/c/openstack/nova/+/764292 | 05:59 |
opendevreview | Brin Zhang proposed openstack/nova master: Replace all_tenants with all_projects in List Server APIs https://review.opendev.org/c/openstack/nova/+/765311 | 05:59 |
opendevreview | Brin Zhang proposed openstack/nova master: Replaces tenant_id with project_id from Rebuild Server API https://review.opendev.org/c/openstack/nova/+/766380 | 05:59 |
kashyap | frickler: Morning: https://listman.redhat.com/archives/libvir-list/2021-November/msg00171.html | 07:33 |
*** songwenping_ is now known as songwenping | 07:45 | |
frickler | kashyap: nice, thx for that. we did change the bullseye job to use swap for now, that seems rather stable and is still faster than the serial variant | 08:17 |
kashyap | No problem; I think the series will be respun to use -accel for accelerator in general. (https://gitlab.com/libvirt/libvirt/-/issues/233) | 08:19 |
gibi | elodilles: hi! you can upgrade your vote ;) on the victoria part of https://review.opendev.org/q/topic:bug/1944759 as the wallaby patches have landed | 09:46 |
elodilles | gibi: \o/ | 10:00 |
elodilles | gibi: done | 10:00 |
gibi | thanks a lot | 10:00 |
elodilles | no problem :) | 10:00 |
frickler | kashyap: already there. nice monster patch updating umpteen tests ;) | 10:10 |
kashyap | frickler: Yep, v2 (from Michal) is large because of the test noise - https://listman.redhat.com/archives/libvir-list/2021-November/msg00194.html | 10:11 |
kashyap | frickler: I'm just wiring up the XML config class in Nova. And writing a commit message | 10:11 |
EugenMayer | is there really no better way, or even a usual default, to start VMs after a node reboot than adding resume_guests_state_on_host_boot to the nova conf and/or using virsh to add the autostart flag? There is no GUI or nova API flag (openstack server create) to automatically create an instance with autostart? | 10:45 |
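For reference, the option EugenMayer mentions is a plain nova.conf setting on each compute node; a minimal sketch (the value is illustrative):

```ini
[DEFAULT]
# nova-compute restarts guests that were running when the host went down
resume_guests_state_on_host_boot = true
```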
opendevreview | Balazs Gibizer proposed openstack/nova master: Use ReplaceEngineFacade fixture https://review.opendev.org/c/openstack/nova/+/816820 | 10:50 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Fix interference in db unit test https://review.opendev.org/c/openstack/nova/+/814735 | 10:52 |
gibi | EugenMayer: I think nova considers such automatic recovery as mostly outside of nova's scope. An external service can detect the compute host failure and can implement automatic evacuation for some VMs while for others it can wait for the compute host recovery and implement auto startup. | 10:53 |
EugenMayer | gibi interesting. I understand that this is a good option, but it would be somewhat nice to have this as a possible built-in default too, no? | 10:55 |
EugenMayer | but i understand, if you run hundreds of VMs the strategy to recover could be more sophisticated | 10:55 |
gibi | EugenMayer: OpenStack has tools to build such behavior on top of nova. I.e. https://docs.openstack.org/self-healing-sig/latest/use-cases/heat-mistral-aodh.html | 10:56 |
gibi | the use case is valid, I just don't think the implementation needs to be inside nova | 10:56 |
EugenMayer | thank you for that article, will look into that. | 10:58 |
opendevreview | Lee Yarwood proposed openstack/nova master: nova-next: Deploy noVNC from source instead of packages https://review.opendev.org/c/openstack/nova/+/816738 | 11:07 |
opendevreview | Kashyap Chamarthy proposed openstack/nova master: libvirt: Introduce config classes for QEMU's "tb-cache" https://review.opendev.org/c/openstack/nova/+/816823 | 11:09 |
kashyap | Huh, plural; it's a single class | 11:10 |
opendevreview | Kashyap Chamarthy proposed openstack/nova master: libvirt: Introduce config class for QEMU's "tb-cache" https://review.opendev.org/c/openstack/nova/+/816823 | 11:12 |
kashyap | Not sure if the above WIP requires a bp yet ... but I filed one preemptively (https://blueprints.launchpad.net/nova/+spec/control-qemu-tb-cache) | 11:14 |
gibi | kashyap: I think a specless bp is enough | 11:35 |
kashyap | gibi: Cool; guessed as much. :) | 11:36 |
sean-k-mooney | EugenMayer: https://docs.openstack.org/masakari/latest/ is really the service you likely want for automatic evacuation of HA instances | 11:41 |
sean-k-mooney | you can manually do it with mistral, heat and aodh but masakari is a single service designed to provide instance HA | 11:42 |
EugenMayer | migrating from proxmox to openstack, i have VMs with multiple disks. I understand that i can import disks as images using 'openstack image create --import' - but how to create a multi-disk VM? | 11:43 |
EugenMayer | sean-k-mooney interesting. Currently, since we are not planning on cinder, auto-evacuation is not possible anyway | 11:43 |
sean-k-mooney | EugenMayer: the important thing to remember, however, for the "core" services like nova is that the openstack design principle is not based on a declarative model, it's imperative. i.e. it is not intended to take action autonomously, only when you interact with the system. so nova should never alter the state of a vm unless you make an api call | 11:44 |
EugenMayer | sean-k-mooney besides doing it from backup, which is rather a manual decision. So not as cloudish as masakari would do it | 11:44 |
sean-k-mooney | EugenMayer: i hesitate to say this cause i dislike this code path but without cinder you can use masakari if you put the instance state dir on an nfs share | 11:45 |
EugenMayer | sean-k-mooney understood. So it is more an API wrapper with high-level tasks, but it does not just act, that is for others to implement | 11:45 |
sean-k-mooney | EugenMayer: correct | 11:45 |
EugenMayer | sean-k-mooney an nfs share is nothing else than cinder on horrible storage :) | 11:45 |
sean-k-mooney | yep and it uses a different code path than the one we normally use so it's less well tested | 11:45 |
sean-k-mooney | which is why its better to avoid it | 11:46 |
EugenMayer | right now, we go for local disks only, at least for the legacy VMs. K8s cluster is not yet decided. So most of the evacuate features / hot/live migration are not for us with the legacy vms | 11:46 |
sean-k-mooney | and the performance sucks vs a real shared storage system | 11:46 |
sean-k-mooney | so back to your multi disk question | 11:46 |
sean-k-mooney | you can create multi disk vms but it's not how nova is typically used | 11:46 |
EugenMayer | the performance is the main / only reason we are not jumping on ceph or similar. We have to be realistic, our network is 1GB | 11:46 |
EugenMayer | with an MTU of 1400, no jumbo or anything | 11:47 |
sean-k-mooney | in the flavor there are 3 storage options: disk, ephemeral and swap | 11:47 |
sean-k-mooney | disk is the root disk size | 11:47 |
EugenMayer | can i create VMs manually with any amount of disks since it's virsh in the end? I will only need this for the VMs i migrate - the ones i create from scratch are based on cloud-init and the proper disks you get using nova directly | 11:48 |
sean-k-mooney | ephemeral is the total amount of additional ephemeral storage, i believe by default if you don't say otherwise all the ephemeral storage will be created as a single additional disk | 11:48 |
sean-k-mooney | you can however subdivide the ephemeral storage into multiple disks on the command line when you create the vm | 11:48 |
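A rough sketch of what sean-k-mooney describes (flavor name and sizes are made up); as far as I know, repeating the --ephemeral option on the legacy nova CLI is what splits the flavor's ephemeral allowance into separate disks:

```console
# flavor with a 20G root disk, 30G of ephemeral space and 1G of swap
$ openstack flavor create --vcpus 2 --ram 4096 \
    --disk 20 --ephemeral 30 --swap 1024 m1.multidisk

# split the 30G of ephemeral space into two extra disks at boot time
$ nova boot --flavor m1.multidisk --image <image> \
    --ephemeral size=10,format=ext4 --ephemeral size=20,format=ext4 my-server
```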
EugenMayer | we stopped using ephemeral storage, since the bug from 2016 that it cannot be resized :) | 11:49 |
sean-k-mooney | EugenMayer: no, as an end user you are not allowed to know the hypervisor in use | 11:49 |
EugenMayer | nice, openstack server create is not the tool to go with here i guess | 11:49 |
sean-k-mooney | so you can't just assume it's virsh in the end | 11:49 |
EugenMayer | so using 'nova' is still hypervisor agnostic, so it would be somewhat right? using virsh is discouraged - that's what you mean, right? | 11:50 |
sean-k-mooney | openstack provides an abstraction api over multiple hypervisors like libvirt/kvm, hyperv, vmware so yes the api is hypervisor agnostic, mostly | 11:51 |
sean-k-mooney | EugenMayer: using virsh is entirely unsupported | 11:51 |
sean-k-mooney | with openstack you can use virsh to inspect the xml for debugging | 11:52 |
EugenMayer | right now, i'm planning how to migrate my old VMs which might have 1-3 disks. I know how to do it with one-disk setups: openstack image create --import .. then launch from snapshot. With multiple disks it all seems more complicated | 11:52 |
sean-k-mooney | but beyond that you should never use virsh to modify any vm created by nova | 11:52 |
EugenMayer | i see, understood | 11:53 |
sean-k-mooney | with multi disk unfortunately the only option that is supported via the api would be cinder | 11:53 |
EugenMayer | which is not an option (i understand, those would be volumes) | 11:53 |
sean-k-mooney | the unsupported way would be to create a multi disk vm, stop it and then copy the data | 11:53 |
EugenMayer | copy the data you mean, fs to fs? | 11:53 |
EugenMayer | or as an image | 11:53 |
sean-k-mooney | yes or via a rescue image but avoiding glance | 11:54 |
sean-k-mooney | so boot a 3 disk vm then put it in rescue mode using a clonezilla image and have clonezilla reimage the disks using the original vm as the image source | 11:54 |
sean-k-mooney | something like that | 11:54 |
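Sketched out, that unsupported workflow would look roughly like this (flavor and image names are placeholders):

```console
# boot a server whose flavor provides the extra ephemeral disks
$ openstack server create --flavor m1.multidisk --image <any-bootable-image> migrated-vm

# rescue it with a clonezilla/grml style live image, then clone the old
# disks onto the new ones from inside the rescue environment
$ openstack server rescue --image <clonezilla-image> migrated-vm
# ... copy the data across ...
$ openstack server unrescue migrated-vm
```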
EugenMayer | wow, that sounds quite complicated. But understood. I somehow create an instance with x disks, then use grml to somehow mount the new disks and old disks and sync the filesystems | 11:55 |
sean-k-mooney | there are some generic tools that might help, virt-v2v i think is one? i have never used them myself | 11:56 |
sean-k-mooney | https://libguestfs.org/virt-v2v.1.html | 11:56 |
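As an example of virt-v2v's local output mode (paths are placeholders), something like this converts a foreign disk image into a flat qcow2 that could then be uploaded to glance:

```console
$ virt-v2v -i disk old-vm-disk.raw -o local -os /var/tmp/v2v -of qcow2
```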
EugenMayer | interesting, thanks | 11:58 |
EugenMayer | sean-k-mooney i assume that the disk i find on my compute is, even though named 'disk', nothing else than just a qcow disk | 11:59 |
EugenMayer | so i really consider going full force and trying to use neither the nova api nor the virsh api, but replacing the disks. I understand this is not supported, but in the end, i do not run multiple types of hypervisors and i need to migrate | 12:00 |
sean-k-mooney | yes you just need the qcow | 12:00 |
sean-k-mooney | ya a simple scp and overriding the disk might work | 12:01 |
sean-k-mooney | provided the disks don't have backing files | 12:01 |
EugenMayer | i will need to either adjust the size via the XML or via the API | 12:01 |
sean-k-mooney | you might need to first convert the qcow to a flattened one | 12:01 |
EugenMayer | yeah, aware of how to do this, but i usually use flat | 12:01 |
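Roughly (the path depends on your instances_path setting; /var/lib/nova/instances is the usual default):

```console
# check whether nova's disk has a backing file before overwriting it
$ qemu-img info /var/lib/nova/instances/<server-uuid>/disk

# flatten the source qcow2 (removes any backing-file dependency)
$ qemu-img convert -O qcow2 proxmox-disk.qcow2 flat-disk.qcow2
```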
EugenMayer | thank you a lot, again! | 12:02 |
EugenMayer | One last question, is there an API based way to activate nested virtualization on a VM? I use host-passthrough already, but not sure that is enough. I need this to run an ESXi which i solely run for image disk conversion | 12:05 |
sean-k-mooney | no, nested virt will be available to the vm automatically if the host is configured to allow it | 12:05 |
sean-k-mooney | on older kernels you used to have to set the nested virt flag in the kvm_intel or kvm_amd kernel module options | 12:06 |
sean-k-mooney | in newer kernels it defaults to enabled so that is no longer required | 12:06 |
sean-k-mooney | EugenMayer: you can check by installing libvirt in the guest and running virt-host-validate | 12:07 |
sean-k-mooney | or just look for vmx (or svm on the amd side) in lscpu in the guest i think | 12:07 |
sean-k-mooney | so ya if your host is set up correctly then it should just work | 12:08 |
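A quick checklist version of the above (use kvm_amd instead of kvm_intel on AMD hosts):

```console
# on the compute host: is nested virt enabled in the kvm module?
$ cat /sys/module/kvm_intel/parameters/nested        # Y / 1 when enabled

# on older kernels, enable it explicitly (then reload the module or reboot)
$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# inside the guest: look for the vmx/svm CPU flag and sanity-check the virt stack
$ lscpu | grep -E 'vmx|svm'
$ virt-host-validate
```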
sean-k-mooney | with that said my experience with nested virt is mainly kvm on kvm and limited experience with windows (docker/linux subsystem) use cases | 12:10 |
sean-k-mooney | both of those can be made to work in openstack but i have never tried esxi | 12:10 |
kashyap | EugenMayer: sean-k-mooney: Nested is enabled only in the newer _upstream_ kernels, though. | 12:17 |
kashyap | Some distributions might not enable it by default | 12:17 |
sean-k-mooney | well it's enabled on rhel for intel by default now but not for amd | 12:18 |
kashyap | EugenMayer: So check before you run. A handy tool is `virt-host-validate` (it'll check for a whole bunch of things, including /dev/kvm) | 12:18 |
kashyap | You can run it on a VM too | 12:18 |
sean-k-mooney | yep, also said that above. it's pretty handy but not well advertised in my experience | 12:18 |
kashyap | sean-k-mooney: No, not even for RHEL for Intel | 12:19 |
kashyap | sean-k-mooney: RHEL made it tech preview for AMD *and* Intel. | 12:19 |
kashyap | There it is, public docs: | 12:19 |
kashyap | https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/creating-nested-virtual-machines_configuring-and-managing-virtualization | 12:19 |
kashyap | sean-k-mooney: Ah, I missed your virt-host-validate reference earlier; maybe we should include it in Nova docs somewhere too | 12:20 |
sean-k-mooney | well we retroactively made it tech preview; it was GA'd, then we found issues with the intel support | 12:20 |
EugenMayer | thank you both - sorry was at dinner | 12:21 |
kashyap | Yes, I'm saying about the current status. "Retroactive" == well, if you spot blocker issues, then you must put it back to tech-preview | 12:21 |
kashyap | EugenMayer: No prob. In general, don't feel pressure to respond "instantly" on IRC | 12:22 |
kashyap | It's called "instant messaging"; not "instant responding" :D | 12:22 |
EugenMayer | if i get helped i tend to keep myself at 'instant responding' - but i don't expect that of anybody else. I find this somewhat respectful of the time of the answer-giver (whatever that word is :) ) | 12:23 |
kashyap | Yes, sure. That's contextual | 12:24 |
EugenMayer | i currently run debian:bullseye with the stable kernel 5.10, will check what status it has and maybe upgrade to backports or spin up a compute on a different OS | 12:24 |
kashyap | EugenMayer: Some 7 years ago I wrote this thing, because we used to a see a lot of "naked pings" - https://www.rdoproject.org/contribute/irc-etiquette/ | 12:24 |
sean-k-mooney | EugenMayer: i think the change was made upstream after 5.10, in 5.14, so with 5.10 it might still be disabled by default | 12:25 |
* kashyap --> AFK; back later | 12:27 | |
EugenMayer | kashyap laters! | 12:27 |
EugenMayer | sean-k-mooney https://packages.debian.org/bullseye-backports/linux-image-amd64 so 5.14 could be it | 12:28 |
opendevreview | Lee Yarwood proposed openstack/nova master: nova-next: Deploy noVNC from source instead of packages https://review.opendev.org/c/openstack/nova/+/816738 | 13:39 |
opendevreview | Lee Yarwood proposed openstack/nova master: nova-next: Deploy noVNC from source instead of packages https://review.opendev.org/c/openstack/nova/+/816738 | 13:55 |
dansmith | gmann: if I have a policy rule with a specific scope_types= set, and then I override that rule with enforcer.set_rules({}) do you know what happens to the scope_types from the default? | 16:07 |
dansmith | it seems like they are being kept through the override (which I would expect), but that's preventing me from overriding some rules to @ in the tests, like check_image before check_image:allow_volume_backed | 16:08 |
dansmith | lbragstad: ^ | 16:08 |
opendevreview | Merged openstack/nova master: [Trival] Fix wrong microversion in TestClass name https://review.opendev.org/c/openstack/nova/+/816778 | 16:10 |
lbragstad | dansmith i think you're right in that it shouldn't override the scope_type | 16:13 |
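A minimal oslo.policy sketch of the behaviour being discussed (the rule name and check strings are placeholders, not nova's actual defaults):

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults([
    policy.RuleDefault('compute:servers:create_image', 'role:member',
                       scope_types=['project']),
])

# Relaxing the check string in a test does not touch the registered
# scope_types ...
enforcer.set_rules(
    policy.Rules.from_dict({'compute:servers:create_image': '@'}),
    overwrite=False)

# ... so with [oslo_policy] enforce_scope = True a system-scoped request
# still fails the scope check even though the check string is now '@'.
```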
dansmith | okay, so some of the tests we have hit two policies, and the tests try to @ the first one to make sure we test the second one, | 16:13 |
dansmith | but in the case of the "what happens when we enable scope checking" tests, I'm not sure how to get past that | 16:14 |
lbragstad | oh... | 16:14 |
dansmith | other than to just assume that scope violations get caught by the first one | 16:14 |
lbragstad | the only way i could think of doing that today would be to re-register the rules so that you can reset the scope type of the one you want to pass through | 16:15 |
lbragstad | but that sounds clunky | 16:15 |
dansmith | yeah | 16:15 |
lbragstad | what's the scope_type for check_image, project? | 16:16 |
dansmith | or mock the enforce with a side_effect=[skip(), orig()] | 16:16 |
dansmith | it was both (of course, like all of them) but I'm moving it back to just project since it's on an instance | 16:16 |
dansmith | (and it's create_image, typo above) | 16:16 |
lbragstad | ah | 16:17 |
dansmith | the tests allow me to skip the check that makes sure it's the exact rule name that triggers the failure, | 16:17 |
dansmith | so I could just do that, but it kinda sidesteps the actual verification (although only for the system: contexts, which isn't so bad) | 16:18 |
lbragstad | and you want to test the second check fails with a system-scoped token? | 16:18 |
dansmith | I mean it would still ensure the behavior, it's just that the tests currently disable the first check with @ to hit the second for all the context | 16:18 |
dansmith | just trying to change as little as possible about the test behavior of course | 16:18 |
dansmith | maybe I'll just put a #NOTE in there and leave it for review | 16:18 |
lbragstad | sure - and that's not necessarily realistic given the discussions the other day | 16:19 |
dansmith | https://termbin.com/z7yz | 16:22 |
dansmith | this ^ works | 16:22 |
dansmith | so we keep the strict checking for the non-scope-enforcing case, but relax it (as allowed by common_policy_check()) if we are scope-obsessed | 16:23 |
dansmith | oh you know, I could use the base rule name instead of none to be even more explicit | 16:24 |
dansmith | the problem is I would get this from the checker: | 16:24 |
dansmith | testtools.matchers._impl.MismatchError: !=: | 16:24 |
dansmith | reference = "Policy doesn't allow os_compute_api:servers:create_image:allow_volume_backed to be performed." | 16:24 |
dansmith | actual = "Policy doesn't allow os_compute_api:servers:create_image to be performed." | 16:24 |
dansmith | because we failed on the earlier check before we got to create_image:allow_volume_backed | 16:24 |
dansmith | anyway, I'm mostly just talking to myself out loud at this point :) | 16:26 |
lbragstad | yeah - that makes sense | 16:29 |
lbragstad | i guess my question now is if we think nesting checks like that is going to be a common pattern, or a one off thing here and there | 16:30 |
lbragstad | feels like a one-off thing | 16:31 |
dansmith | it's more a result of nova's test pattern than anything | 16:32 |
dansmith | but yeah I'll see as I keep going through it | 16:32 |
lbragstad | if it's common, i think we should have a better answer for it? | 16:33 |
lbragstad | or potentially consolidate the checks | 16:33 |
dansmith | yup | 16:34 |
opendevreview | Sylvain Bauza proposed openstack/nova master: [doc] propose Review-Priority label for contribs https://review.opendev.org/c/openstack/nova/+/816861 | 16:43 |
gmann | dansmith: lbragstad yeah, we have this type of multi policy enforcement in nova and I think in neutron there are many. | 16:50 |
gmann | but do we have mixed types of scope_type in such multi-policy in a single API? | 16:51 |
gmann | that sounds not correct if we do have | 16:51 |
dansmith | gmann: so far that hypervisors "give a different view for projecty people" is the only multi-scope one I know of | 16:51 |
gmann | dansmith: or it is different APIs like resource creation for GET test etc | 16:52 |
gmann | dansmith: lbragstad I am thinking to allow tests to disable (set to None) the scope_type but yes only for testing | 16:53 |
dansmith | gmann: yeah, let me continue through more of this to see how many are a problem | 16:53 |
gmann | dansmith: in nova's case, where these are unit tests, we can handle it by mocking the API controller method itself if it is a different API. | 16:54 |
dansmith | gmann: yeah, maybe, or just ensure that a system token gets stopped at the parent policy check | 16:54 |
gmann | yeah, with switching CONF.enforce_scope to false first and then true | 16:56 |
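As a rough sketch of that toggle pattern (assuming nova's test.TestCase flags() helper; the test body is elided):

```python
from nova import test


class ScopeToggleExample(test.TestCase):
    def test_second_policy_with_scope(self):
        # do the prerequisite API calls with scope enforcement off
        self.flags(enforce_scope=False, group='oslo_policy')
        # ... create the server / set up the resource ...

        # then enable scope checks for the policy actually under test
        self.flags(enforce_scope=True, group='oslo_policy')
        # ... assert the system-scoped context is rejected here ...
```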
dansmith | gmann: yeah, index:get_all_tenants and index are in the same boat :/ | 17:06 |
gmann | dansmith: you mean with the new direction or existing scope_type? | 17:08 |
gmann | but in both cases anyone allowed to do index:get_all_tenants should be allowed to do index, right? | 17:09 |
dansmith | gmann: I mean checking index before index:get_all_tenants and failing the scope check | 17:09 |
dansmith | gmann: that's not the problem | 17:09 |
gmann | but both are same scope_type | 17:09 |
gmann | oh no, sorry | 17:09 |
dansmith | gmann: the problem is that both aren't allowed for system:* but we'll fail on the first one before we get to the second one, even if we set the check_str=* | 17:09 |
dansmith | it's not actually a problem because as you mention, their scope restriction will be the same, it's just a matter of verification in the tests | 17:10 |
gmann | yeah | 17:10 |
dansmith | let me propose a change to common_policy_check and see what you think | 17:10 |
gmann | +1 | 17:10 |
gibi | bottom line: ospurge works and can be used but it does not handle default security group yet. | 17:15 |
gibi | hups | 17:15 |
gibi | wrong window | 17:15 |
opendevreview | Dan Smith proposed openstack/nova master: WIP: Allow per-context rule in error messages https://review.opendev.org/c/openstack/nova/+/816865 | 17:37 |
dansmith | gmann: ^ | 17:37 |
dansmith | gmann: that lets me pass a function that will decide if we should fail on index or index:get_all_tenants | 17:37 |
dansmith | it means that we don't actually pass the parent check to make sure the child check is the one that fails, but only in the case where the scope_types= is what causes us to fail | 17:38 |
dansmith | (in the case where we have overridden the parent check to be @) | 17:39 |
gmann | dansmith: can you give some example of rule_name in this ^^ case? | 17:43 |
dansmith | gmann: https://termbin.com/0w1b | 17:45 |
dansmith | that's snips from multiple places, hopefully that makes sense | 17:47 |
dansmith | I would push up my actual test changes, but they are a TOTAL mess atm :) | 17:48 |
gmann | dansmith: so in case of a project scoped context they will fail on all-tenant as they will pass the index policy | 17:49 |
gmann | and for system reader case, it will fail on index but we want to skip that and test all-tenant right? | 17:50 |
dansmith | for project member, it will fail index:get_all_tenants | 17:51 |
dansmith | for project admin, it will succeed | 17:51 |
dansmith | for system * it will fail on index (because of scope check requiring project) | 17:51 |
dansmith | the callable makes sure we assert the right rule name in the error message for system:* and project:member | 17:51 |
gmann | yeah for system*, so how will you test index:get_all_tenants when index will fail first | 17:52 |
dansmith | well, that's my point.. we won't :) | 17:52 |
gmann | yeah | 17:52 |
dansmith | in reality, it doesn't matter because they will be stopped there (operator can't override scope_types in their config file) | 17:52 |
dansmith | if you think we really need to, then we have to do something more complicated | 17:53 |
dansmith | your NOTE in there about rule_name=None, however, seems intended for this purpose, but is less specific than what I have there | 17:53 |
gmann | dansmith: no, I was thinking from operator perspective. so index policy has to be system+project scoped so yeah test passing/failing on system for all-tenant is ok | 17:54 |
gmann | dansmith: by this "for system * it will fail on index (because of scope check requiring project)" do you mean you will modify index policy to project scoped only? | 17:55 |
gmann | if so then we need to do a little magic in multi-policy, like if it's a system scoped token then check the all-tenant policy only and not the index policy. | 17:57 |
dansmith | gmann: yeah that's what I'm doing, working on making those project-only again | 17:57 |
dansmith | I think for these cases, enforce_scope=project will handle that for us | 17:58 |
gmann | so we want system user to do only 1. get all tenant instances and 2. get single tenant instances with all-tenant=true but NOT 3. get specific tenant instances directly | 17:59 |
gmann | ? | 17:59 |
dansmith | gmann: what we discussed on wednesday (and before) is that system users don't get to see project resources, so they can never list instances | 18:00 |
gmann | as filtering instances of single tenant-id is only allowed with all-tenant parameter | 18:00 |
gmann | dansmith: ^^ this is one way to do that for them in the nova implementation | 18:00 |
gmann | we may want to change that in that case | 18:00 |
dansmith | gmann: okay I'm confused | 18:02 |
gmann | dansmith: maybe I am confusing current and new things. by system I mean the domain admin who will be allowed to do all-tenant instances | 18:02 |
dansmith | I'm working on https://review.opendev.org/c/openstack/governance/+/815158/7/goals/selected/yoga/consistent-and-secure-rbac.rst line 144 | 18:03 |
dansmith | gmann: ah, no I'm talking about system:* specifically :) | 18:03 |
gmann | dansmith: ohk, so new-system as per our discussion | 18:03 |
dansmith | yeah | 18:03 |
gmann | dansmith: say with domain-admin, we want index to be project+domain or just project ? | 18:04 |
gmann | because we have to think on filter instance by tenant also in this case | 18:05 |
dansmith | gmann: yeah I think it'll be domain+project at that point | 18:05 |
gmann | ohk, +1 | 18:05 |
gmann | that will make sense. that's what i was asking but sorry, confused it with system | 18:05 |
dansmith | all good, it's all very confusing :) | 18:05 |
gmann | so now onwards we talk on new things on old way:) | 18:05 |
gmann | I make note of that | 18:06 |
gmann | s/on/not | 18:06 |
dansmith | heh | 18:07 |
gmann | dansmith: I am sure this multi-policy testing stuff will be more conflicted in the 'GET server with additional attributes' | 18:07 |
dansmith | okay, I haven't gotten to that yet | 18:07 |
gmann | there are many embedded confusing policy there | 18:07 |
dansmith | yeah | 18:08 |
dansmith | gmann: I wish I had thought of this before I started the refactor, | 18:11 |
dansmith | but I also think these tests should provide just the list of who can do a thing, and have it subtract those from the all_contexts lists, and assert which ones can't | 18:12 |
dansmith | it would make things a lot less verbose and easier to read I think, | 18:12 |
dansmith | and since you assert that auth+unauth==all, I don't think you lose anything by making it less verbose | 18:12 |
gmann | dansmith: yeah, and I am realizing now that asserting on the actual scope_type (for multi-policy APIs) can give us more clarity on 'who can do a thing' so that we fix the multi-policy things on scope_type if there is under- or over-permission | 18:16 |
gmann | I am thinking not to skip/mock scope_type checks for multi-policy APIs, skip/mock scope_type only when a different API's policy needs to be skipped for testing a separate operation on the same resource | 18:17 |
dansmith | gmann: I think I know what you mean, and I think I agree :) | 18:18 |
gmann | dansmith: are you handling the server-boot-on-specific-host case also? this one- https://review.opendev.org/c/openstack/nova-specs/+/793011 | 18:20 |
gmann | if not then I can re-spin the spec as it needs a microversion bump also. I think we have a clear direction now as per the wed discussion. | 18:20 |
dansmith | gmann: not yet | 18:20 |
dansmith | gmann: I'll skip that one in what I'm doing for now then | 18:21 |
gmann | ok | 18:21 |
EugenMayer | is there any way to open a serial tty on the terminal when using nova? | 18:31 |
EugenMayer | I would like to have this on the terminal so c&p is integrated | 18:34 |
dansmith | gmann: I'm still failing flavor policy but I think I'm going to clean up what I have a little and push it up to get some read on it before I delve into another module | 18:36 |
dansmith | (failing flavorextraspecs because they depend on server policy to be clear) | 18:36 |
gmann | dansmith: ohk, in that case can we just set_override CONF.enforce_scope=false for server request and then enable before flavorextraspecs ? | 18:45 |
gmann | or you mean 'os_compute_api:os-flavor-extra-specs:index' policy | 18:45 |
sean-k-mooney | EugenMayer: you can specify additional serial ports | 18:46 |
sean-k-mooney | in the image properties | 18:46 |
sean-k-mooney | the first will be used for the console but the rest could be used for host communication if that is what you wanted | 18:46 |
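Hedged examples of both knobs being discussed (names are placeholders; the serial console proxy also needs the [serial_console] section enabled in nova.conf on the computes):

```console
# ask for two serial ports via the image property
# (hw:serial_port_count on the flavor is the matching extra spec)
$ openstack image set --property hw_serial_port_count=2 my-image

# fetch the serial console websocket URL for a server
$ openstack console url show --serial my-server
```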
dansmith | gmann: I mean those tests appear to do things like servers/detail before/after flavor extra specs work and fail with contexts that don't work anymore | 18:47 |
dansmith | gmann: I will clean them up, or hack around it in the first patch, but I'm about out of steam for the week | 18:47 |
dansmith | and want to get something up that is close | 18:47 |
gmann | dansmith: with the new direction, this policy needs to be made more granular now. 1. for the flavor API showing extraspecs for system users 2. flavor extraspecs in GET servers for project users | 18:49 |
dansmith | ah, right because of the nested flavors right? | 18:50 |
gmann | yeah | 18:50 |
sean-k-mooney | gmann: personally i think even project reader should be able to see extra specs | 18:50 |
sean-k-mooney | you can't really choose between flavors properly without seeing them | 18:50 |
gmann | sean-k-mooney: so indexing the flavor extra specs is a system+project thing but having extraspecs in the GET server response is only a project thing | 18:51 |
sean-k-mooney | i'm not sure what you mean by indexing flavor extra specs | 18:51 |
gmann | dansmith: we might need such granularity in more policies as we de-couple the system from project resources but need to check case by case | 18:51 |
opendevreview | Dan Smith proposed openstack/nova master: WIP: Revert project-specific APIs for servers https://review.opendev.org/c/openstack/nova/+/816206 | 18:51 |
dansmith | gmann: ack | 18:51 |
gmann | sean-k-mooney: i mean this '/flavors/{flavor_id}/os-extra_specs/' | 18:52 |
sean-k-mooney | i think that should really be readable by everyone | 18:52 |
sean-k-mooney | i know there are reasons to hide it in some cases but in general by default i think it should be open | 18:53 |
sean-k-mooney | and i think the same really should be true of the server show | 18:53 |
dansmith | sean-k-mooney: what kinds of things could be in flavor extra specs that would be sensitive? | 18:54 |
dansmith | topology of the instance isn't really sensitive if you can see the rest of the instance, IMHO | 18:54 |
dansmith | trying to think of what else could be in there though | 18:54 |
sean-k-mooney | dansmith: non-standard extra specs used for filters | 18:54 |
sean-k-mooney | that's about it | 18:55 |
dansmith | something they passed during create to cause it to be scheduled in some way right? | 18:55 |
sean-k-mooney | we could exclude un-namespaced extra specs and the filter ones by default for non-admins | 18:55 |
dansmith | that's not really something we should show to member but hide from reader | 18:55 |
dansmith | (which I think is your point) | 18:56 |
sean-k-mooney | yes | 18:56 |
sean-k-mooney | we likely would want to hide https://github.com/openstack/nova/blob/master/nova/api/validation/extra_specs/aggregate_instance_extra_specs.py#L53 | 18:57 |
sean-k-mooney | anything prefixed with aggregate_instance_extra_specs: or that is not namespaced | 18:57 |
sean-k-mooney | from member and reader | 18:57 |
sean-k-mooney | but hw:mem_page_size=large, e.g. this flavor uses hugepages, is probably relevant to members | 18:57 |
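Not nova code, just a sketch of the kind of filtering sean-k-mooney is describing (the hidden prefixes and the admin check are illustrative):

```python
HIDDEN_PREFIXES = ('aggregate_instance_extra_specs:',)


def visible_extra_specs(extra_specs, is_admin):
    """Hide scheduler-oriented and un-namespaced extra specs from non-admins."""
    if is_admin:
        return dict(extra_specs)
    return {
        key: value for key, value in extra_specs.items()
        if ':' in key and not key.startswith(HIDDEN_PREFIXES)
    }
```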
dansmith | do we hide that today? | 18:58 |
sean-k-mooney | i don't know what the default is since i'm almost always admin when i interact but it's controlled by policy | 18:58 |
sean-k-mooney | however it's all or nothing | 18:58 |
dansmith | okay | 18:58 |
sean-k-mooney | we don't filter | 18:58 |
dansmith | but regardless, I think your point (which I agree with) is that there doesn't likely need to be any that we distinguish between project member and reader | 18:59 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/nova/policies/flavor_extra_specs.py | 18:59 |
dansmith | and that's kinda the point of reader, AIUI | 18:59 |
gmann | it is reader by default currently | 18:59 |
sean-k-mooney | PROJECT_READER_OR_SYSTEM_READER | 18:59 |
sean-k-mooney | yep | 18:59 |
dansmith | ++ | 18:59 |
sean-k-mooney | gmann: i thought you were suggesting changing that | 19:00 |
* dansmith thought so too | 19:01 | |
gmann | sean-k-mooney: no, i mean we need to make it granular for GET servers so that we can remove system_reader from GET server as they cannot GET a server, right | 19:01 |
sean-k-mooney | flavors, unless you use the flavor access api, to me are system resources | 19:01 |
gmann | flavor API will be same so no change | 19:01 |
dansmith | gmann: yes, because you can get these either through servers or flavors, | 19:02 |
gmann | yeah | 19:02 |
dansmith | yeah | 19:02 |
sean-k-mooney | i can't find it but i thought we had a policy for whether this is shown in server detail by the way | 19:03 |
sean-k-mooney | maybe we never added it | 19:03 |
sean-k-mooney | i guess ya we just have this one policy https://github.com/openstack/nova/blob/master/nova/policies/flavor_extra_specs.py#L76 | 19:05 |
gmann | yeah, this one | 19:08 |
lyarwood | melwitt: FWIW the nova-next change to noVNC from source did actually fail on a novnc test with the latest run https://zuul.opendev.org/t/openstack/build/f626656bc01b4cb7aee847e21f3b4b4c - I don't have the energy to look into it now but will take a look on Monday. | 19:27 |
sean-k-mooney | is openstack server restore undelete? | 19:35 |
sean-k-mooney | i thought it was restore from backup but i guess not | 19:35 |
lyarwood | https://docs.openstack.org/api-ref/compute/#restore-soft-deleted-instance-restore-action - yup it restores soft deleted instances | 19:37 |
sean-k-mooney | ok in my head i put backup and restore together | 19:37 |
lyarwood | sean-k-mooney: I thought you just spawned a new instance from the backup image? | 19:38 |
sean-k-mooney | you do a rebuild | 19:39 |
sean-k-mooney | well it depends on why you are doing it | 19:39 |
sean-k-mooney | infra failure then sure, but if it's just to roll back guest state then rebuild | 19:40 |
melwitt | lyarwood: ack, I'll look and see if anything stands out to me | 19:50 |
opendevreview | Lee Yarwood proposed openstack/nova master: DNM - Test tempest-integrated-compute-centos-8-stream https://review.opendev.org/c/openstack/nova/+/799996 | 19:58 |
*** blmt is now known as Guest5036 | 20:04 | |
opendevreview | melanie witt proposed openstack/nova master: nova-next: Deploy noVNC from source instead of packages https://review.opendev.org/c/openstack/nova/+/816738 | 20:39 |
*** yoctozepto8 is now known as yoctozepto | 21:28 |