*** salv-orlando has joined #openstack-api | 00:08 | |
woodster_ | elmiko: take care! | 00:09 |
---|---|---|
woodster_ | miguelgrinberg: thanks for the thoughts | 00:10 |
miguelgrinberg | woodster_: np, if you find out more about the race condition issue with heat let me know | 00:10 |
woodster_ | miguelgrinberg: will do | 00:11 |
*** salv-orlando has quit IRC | 00:15 | |
*** kfox1111 has joined #openstack-api | 00:18 | |
kfox1111 | miguelgrinberg: alive? | 00:18 |
miguelgrinberg | kfox1111: and kicking ;-) | 00:18 |
kfox1111 | :) | 00:18 |
kfox1111 | woodster_ mentioned you were interested in the barbican acl race. | 00:18 |
miguelgrinberg | yes, I was wondering what that was about | 00:19 |
kfox1111 | say you create a heat autoscaling group that launches heat stacks that look like the following: | 00:19 |
kfox1111 | 1 resource that launches a vm paused and creates a keystone user for it; 1 resource that takes the user from the first resource and tries to grant that user access to barbican secret X; a resource that, after all acl's are done, unpauses the vm; a wait condition that pauses until the vm's ready; and a resource that adds the vm to a load balancer. | 00:21 |
kfox1111 | now, heat will automatically launch and delete vms at whim. | 00:21 |
kfox1111 | it may launch 2 vm's at roughly the same time, launch one while deleting another, or delete several at a time. | 00:22 |
miguelgrinberg | right | 00:22 |
kfox1111 | If the barbican acl api is "download the acl document, change it, upload a new one" | 00:22 |
kfox1111 | each of the instances of the heat barbican acl resource will do that behavior and potentially upload incorrect documents. | 00:23 |
kfox1111 | i.e., download, download, change, change, upload, upload: one of the two changes is dropped. | 00:23 |
kfox1111 | but both think they succeeded. | 00:23 |
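The lost-update sequence kfox1111 describes can be sketched in a few lines of Python. This is a toy in-memory stand-in for a "download the acl document / upload a new one" style API, not actual Barbican code; the store, function names, and user names are all made up for illustration:

```python
import copy

# Toy stand-in for a document-style ACL API: clients download the whole
# document, change it locally, and upload a replacement.
acl_store = {"users": ["admin"]}

def download_acl():
    return copy.deepcopy(acl_store)

def upload_acl(doc):
    acl_store.clear()
    acl_store.update(doc)

# Two Heat ACL resources interleave: both download before either uploads.
doc_a = download_acl()             # resource A downloads
doc_b = download_acl()             # resource B downloads
doc_a["users"].append("vm-user-a")
doc_b["users"].append("vm-user-b")
upload_acl(doc_a)                  # A uploads its change
upload_acl(doc_b)                  # B uploads, silently overwriting A's change

# A's change is gone, yet both uploads "succeeded".
print(acl_store["users"])          # ['admin', 'vm-user-b']
```

Both clients get a success response, but only the last writer's change survives, which is exactly the "both think they succeeded" failure mode.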
miguelgrinberg | okay, I think I understand. So what I suggested to woodster_ is to expose the ACL as a collection, then you can use REST semantics | 00:23 |
kfox1111 | exactly. :) | 00:24 |
woodster_ | kfox1111: so two resources are configuring state for one VM? | 00:24 |
kfox1111 | no. each acl resource maps to just one vm. but they point at the same secret/acl | 00:24 |
kfox1111 | for example, a wild card certificate. :) | 00:24 |
miguelgrinberg | right, so let's say you have your secret endpoint as /secrets/{secret_id} | 00:24 |
woodster_ | kfox1111: so two VMs share one secret? | 00:25 |
kfox1111 | yes. :) | 00:25 |
miguelgrinberg | instead of having the ACL inside that resource representation, add another endpoint /secrets/{secret_id}/users | 00:25 |
kfox1111 | in a heat autoscaling group, the vm's are identical in function. it lets you scale with load. | 00:25 |
miguelgrinberg | then to add a user send a PUT request to /secrets/{secret_id}/users/miguel | 00:25 |
miguelgrinberg | to delete same thing but with DELETE | 00:26 |
miguelgrinberg | then both can be changing the ACL list at the same time | 00:26 |
kfox1111 | miguelgrinberg: exactly what I suggested yesterday. :) | 00:26 |
kfox1111 | we're on the same page. :) | 00:26 |
miguelgrinberg | awesome, that's very RESTful in my opinion | 00:26 |
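miguelgrinberg's per-user collection design can be sketched like this. The route names mirror the discussion (`PUT`/`DELETE` on `/secrets/{secret_id}/users/{user_id}`), but the handlers and in-memory storage are hypothetical, not Barbican's actual implementation:

```python
# Each ACL entry is its own sub-resource keyed by user, so PUT/DELETE
# touch exactly one key and never read-modify-write a shared document.
acls = {}  # {secret_id: set of user_ids}

def put_user(secret_id, user_id):
    """PUT /secrets/{secret_id}/users/{user_id} -- idempotent grant."""
    acls.setdefault(secret_id, set()).add(user_id)
    return 204

def delete_user(secret_id, user_id):
    """DELETE /secrets/{secret_id}/users/{user_id} -- idempotent revoke."""
    acls.get(secret_id, set()).discard(user_id)
    return 204

# Two Heat resources grant their VM users concurrently; neither grant
# can clobber the other because they address different sub-resources.
put_user("wildcard-cert", "vm-user-a")
put_user("wildcard-cert", "vm-user-b")
print(sorted(acls["wildcard-cert"]))  # ['vm-user-a', 'vm-user-b']
```

Because each grant and revoke is an idempotent operation on a distinct sub-resource, the interleaving order no longer matters, which is what removes the race.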
woodster_ | I don't think per user policy is the correct way to handle that. keystone is considering a user group approach that would support group based ACLs | 00:26 |
kfox1111 | yeah. | 00:27 |
kfox1111 | woodster_: that may simplify things a bit, but the problem is not just isolated to heat. | 00:27 |
kfox1111 | if heat is modifying an acl and a user modifies it at the same time, the same problem happens. | 00:27 |
kfox1111 | it really should be solved RESTfully. :/ | 00:28 |
woodster_ | Well I would agree that is better than PATCHing! | 00:28 |
miguelgrinberg | this has nothing to do with heat, heat just happens to make lots of calls into lots of APIs | 00:28 |
kfox1111 | of all places you don't want a race condition, it's in your authorization subsystem. :) | 00:28 |
kfox1111 | yeah. | 00:29 |
miguelgrinberg | well, problem solved :) | 00:30 |
woodster_ | I'm still not comfortable with multiple processes modifying shared resources concurrently....seems dangerous. | 00:30 |
kfox1111 | sure. but it's unavoidable. it will happen. | 00:30 |
miguelgrinberg | that's true of any API, you can't control what your clients do | 00:30 |
woodster_ | I think user groups are really the correct way to manage this | 00:31 |
kfox1111 | assume it will happen, and design api's that can handle it safely. | 00:31 |
kfox1111 | You're going to see the same issue when two admins try to modify the same acl at the same time. | 00:32 |
kfox1111 | groups don't solve it. | 00:32 |
woodster_ | I agree as a general statement, but for orchestration in particular it makes for a more complicated system. | 00:32 |
kfox1111 | groups are a performance optimization only. | 00:32 |
kfox1111 | heh. we're already about 100x more complicated than my previously suggested solution to the vm-integration system. ;) | 00:33 |
woodster_ | If I could put shared secrets in a group that all VM instance users are part of, I only have to mod ACL on that secret once | 00:33 |
woodster_ | Less chatter as a minimum | 00:33 |
kfox1111 | the keystone groups stuff will make it 120%. :/ | 00:33 |
kfox1111 | lets see.... | 00:34 |
kfox1111 | you would create a group in the outer template, and associate it with the secret, | 00:34 |
woodster_ | 120% for traffic you mean? Why would that be meta data passed along to the service, like roles? | 00:34 |
kfox1111 | then in each instance you would still have to bind the instance user to the group. it wouldn't decrease traffic, just move it from barbican acl api to keystone group api. | 00:35 |
woodster_ | That would *not* be meta data, from keystone, that is | 00:35 |
kfox1111 | no, I mean 120% more complicated than the initial vm-integration solution I proposed. ;) | 00:35 |
kfox1111 | the simple nova plugin -> barbican api tweaks one. :) | 00:35 |
kfox1111 | the instance user one so far is like 9 blueprints across almost as many projects. :/ | 00:36 |
woodster_ | kfox1111: that's where policy/access stuff should live :) | 00:36 |
kfox1111 | and I'm thinking there are still a few. | 00:36 |
kfox1111 | woodster_: heh. then why do acl's at all? ;) | 00:36 |
kfox1111 | it's keystone's job. | 00:36 |
kfox1111 | I don't necessarily disagree, but getting the keystone folks to accept that is a very hard thing. :/ | 00:37 |
kfox1111 | The vm-integration spec/actual implementation took me like 8 hours to write. The instance-user variant will probably take weeks to months, and longer to actually land. :/ | 00:38 |
woodster_ | Well I think we are hogging the API channel with Barbican foo now, sorry miguelgrinberg :) | 00:38 |
kfox1111 | I'm willing to do it because I think it will make the guest agent problem better too. | 00:38 |
miguelgrinberg | no problem :) | 00:38 |
kfox1111 | but projects really need to stop pushing all the work to the other projects. :/ | 00:39 |
*** annegentle has joined #openstack-api | 00:39 | |
kfox1111 | one more quick one miguelgrinberg, | 00:39 |
miguelgrinberg | sure | 00:39 |
kfox1111 | barbican has secrets, and containers. both have acls. | 00:39 |
kfox1111 | for the heat stuff, I'm mostly interested in containers, and getting all the secrets out to the vm. | 00:39 |
kfox1111 | currently the api only allows you to set acls on containers and secrets separately. an acl on a container doesn't affect the secrets listed in the container. | 00:40 |
kfox1111 | and one secret can be in multiple containers. | 00:40 |
kfox1111 | If I make a heat resource that takes in a container name, adds the vm user to the acl on the container, and then iterates over all of the secrets and adds the user to each of them too, I think that's pretty racy as well. | 00:41 |
miguelgrinberg | do you need the acls on the container and all secrets updated? | 00:42 |
miguelgrinberg | can you just rely on container-level ACLs? | 00:42 |
miguelgrinberg | sorry if it is a dumb question :-) | 00:42 |
kfox1111 | I just want the vm to be able to download any secret listed in the specified container. | 00:42 |
kfox1111 | currently it's not implemented that way. I was hoping it would be that simple. | 00:42 |
miguelgrinberg | it sounds like you'd want the barbican side to do the iteration | 00:43 |
kfox1111 | should we push to change it so container acl's ripple down to the secrets listed in them? | 00:43 |
miguelgrinberg | so you add a user to the container ACL and that user shows up on all the secrets in it | 00:43 |
kfox1111 | yeah. | 00:43 |
miguelgrinberg | and on any secrets added later | 00:43 |
kfox1111 | yeah. | 00:44 |
kfox1111 | oh, and secrets removed. | 00:44 |
miguelgrinberg | right, that needs to be handled too | 00:44 |
miguelgrinberg | which is an expensive way of saying we are only going to use container ACLs | 00:44 |
kfox1111 | wouldn't want to forget to remove users from secrets that were removed from the container. | 00:44 |
kfox1111 | k. | 00:44 |
kfox1111 | woodster_: should I file blueprint for this? | 00:45 |
miguelgrinberg | ideally you want secrets to inherit the container ACL, plus their own | 00:45 |
kfox1111 | yeah. | 00:45 |
woodster_ | The multiple container per secret makes this very tough | 00:46 |
miguelgrinberg | the key is in the word "inherit". not sure about the implementation, but the idea is that the lookup is done in both places | 00:46 |
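The "lookup in both places" idea can be sketched as follows. The data model, secret names, and user names here are all hypothetical; the point is that an access check consults the secret's own ACL plus the ACL of every container listing that secret, which is what makes multiple containers per secret expensive:

```python
# Hypothetical data model: per-secret ACLs, per-container ACLs, and the
# container membership map (one secret may appear in many containers).
secret_acls = {"wildcard-key": {"alice"}}
container_acls = {"web-certs": {"vm-user-a"}}
container_members = {"web-certs": {"wildcard-key"}}

def can_read(user, secret_id):
    # The secret's own ACL is checked first...
    if user in secret_acls.get(secret_id, set()):
        return True
    # ...then every container listing this secret. This fan-out across
    # all containers is the cost woodster_ points out.
    return any(
        secret_id in members and user in container_acls.get(cid, set())
        for cid, members in container_members.items()
    )

print(can_read("vm-user-a", "wildcard-key"))  # True, inherited via the container
print(can_read("mallory", "wildcard-key"))    # False
```

Adding or removing a secret from a container changes the result of `can_read` immediately, with no per-secret ACL copies to keep in sync.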
kfox1111 | What about adding back the container secret download endpoint I was talking about before. | 00:47 |
miguelgrinberg | you have to search the secret and container ACLs | 00:47 |
kfox1111 | if you try to download the secret through the container, then it only has to check that one container's acl. | 00:47 |
miguelgrinberg | well, okay, then it isn't too complicated | 00:48 |
kfox1111 | so container acl's would only apply when accessing /container/<uuid>/secret/<name>/ | 00:48 |
kfox1111 | or /container/<name>/secret/<name>/ | 00:48 |
kfox1111 | very similar to the api I had in the vm-integration spec. | 00:49 |
kfox1111 | I've got most of an implementation there already too. | 00:49 |
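A sketch of the container-scoped download endpoint under that design. The handler, storage, and names are illustrative (this is not Barbican's implemented API): because the request addresses the secret *through* a specific container, only that one container's ACL is consulted, avoiding the search across every container a secret belongs to:

```python
# Hypothetical storage: per-container ACLs and the secrets each container holds.
container_acls = {"web-certs": {"vm-user-a"}}
container_secrets = {"web-certs": {"tls-key": "dummy-secret-payload"}}

def get_secret_via_container(user, container_id, secret_name):
    """GET /container/{container_id}/secret/{secret_name}/

    Only the addressed container's ACL is checked; there is no fan-out
    over other containers that might also list the same secret.
    """
    if user not in container_acls.get(container_id, set()):
        return 403, None
    secret = container_secrets.get(container_id, {}).get(secret_name)
    return (200, secret) if secret is not None else (404, None)

status, body = get_secret_via_container("vm-user-a", "web-certs", "tls-key")
print(status)  # 200
```

Note the ordering choice: the ACL check runs before the existence lookup, so an unauthorized caller gets 403 without learning whether the secret name exists in the container.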
kfox1111 | Ok. I think I'll write up a spec for this. it's getting complicated enough that we should nail down all the details explicitly. | 00:49 |
kfox1111 | but it doesn't seem very hard to actually implement. | 00:49 |
miguelgrinberg | glad I could help | 00:50 |
kfox1111 | yup. thanks. :) | 00:50 |
kfox1111 | have a great weekend guys. :) | 00:50 |
woodster_ | kfox1111: you too | 00:51 |
kfox1111 | I'll try and post a spec monday. | 00:51 |
miguelgrinberg | you too :) | 00:51 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!