14:02:20 #startmeeting glance
14:02:21 Meeting started Thu Oct 4 14:02:20 2018 UTC and is due to finish in 60 minutes. The chair is jokke__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:23 I thought I was late
14:02:25 The meeting name has been set to 'glance'
14:02:29 o/
14:02:38 o/
14:02:40 o/
14:02:42 o/
14:02:44 o/
14:02:55 smcginnis, rosmaita o/
14:04:53 ok so I actually don't have any updates for us ... quiet start for the cycle
14:04:58 #topic tip jobs
14:05:18 ohhh
14:05:32 #topic tip jobs
14:05:41 client, store are all green
14:05:59 cursive is suddenly failing py35-functional on master
14:06:13 which is weird, i don't think that library is being actively maintained
14:06:22 they are timeouts
14:06:31 i will take a look later
14:06:35 functional-py35 was failing yesterday on master due to timeout
14:06:38 There was some dependency update issue with cursive over the weekend I thought, but I also thought it was cleared up.
14:06:45 ahh
14:06:53 this morn passing again
14:06:55 i have not been reading the ML
14:06:59 But that manifested itself as a stacking failure I thought.
14:07:05 ok, maybe it has fixed itself
14:07:49 ok, i'll look at today's run (happens in 8 hours or so) and see what's up
14:07:56 next item
14:08:04 how to configure the jobs
14:08:15 there is a small controversy happening
14:08:33 quick reminder of what we're talking about
14:08:46 during rocky, we configured periodic "tips" jobs so we will get advance notice if some project we depend on merges a change to their master that breaks glance tests
14:09:11 problem is, when we cut stable/rocky from master, the periodic tests also started running in stable/rocky, which is pointless
14:09:22 solution was to modify .zuul.yaml so that the periodic jobs only run against master no matter what branch .zuul.yaml is in
14:09:36 but, there is some pushback from the infra team because now we have all this cruft in the stable branch that is nonoperational and possibly confusing to people looking at the file
14:09:55 so, our options are:
14:09:56 1) do what we did with glance_store and just leave the cruft in the stable branch
14:09:56 2) have the release CPL remove the cruft manually at the time the stable branch is cut
14:09:56 3) andreas's suggestion: move the periodic jobs to project-config, where they will be defined to run only against master, and then they won't show up in our local .zuul.yaml and won't confuse anyone
14:10:09 editorial comment from me: if we do 3), then any change to these jobs has to go through the infra team for approval (makes us less agile)
14:10:24 another editorial comment: the infra manual says that jobs like this (only run against master) should be defined in project-config, but infra team members in IRC have indicated that they aren't interested in policing this, it's more of a suggestion
14:10:38 anyway, the changes for glance and glanceclient have yet to be approved, so it would be good to have a decision about what we want to do here
14:10:49 questions?
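[For context, a minimal sketch of the glance_store-style approach under discussion: a periodic "tips" job pinned to master via a branch filter, so it lives in the repo's own .zuul.yaml but never fires on stable branches. The job name and required-projects list below are illustrative, not the actual glance definitions.]

    - job:
        name: glance-tox-py35-oslo-tips      # illustrative name
        parent: openstack-tox-py35
        description: Unit tests against the master tip of depended-on libraries.
        branches: master                     # makes the job a no-op on stable branches
        required-projects:
          - openstack/oslo.config
          - openstack/oslo.messaging

    - project:
        periodic:
          jobs:
            - glance-tox-py35-oslo-tips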
14:10:58 not only that, but there was a very strong request not to have the jobs defined there and to move them to the project repo
14:11:18 yes, their position on that seems a bit inconsistent
14:11:31 which causes IMO even more confusion when people start trying to find where these jobs are coming from, as they are not in .zuul.yaml
14:11:44 jokke__: ++
14:12:01 i agree entirely, it's always a PITA to figure out where the stuff is coming from
14:12:50 Personally I think the cruft in the .zuul.yaml is not that big of a deal ... if one does not know a) what it is doing and b) why it's there, they likely should not touch it
14:13:28 ok, so first item is: (1) we will keep the periodic tips jobs in our local .zuul.yaml
14:13:38 i mean, we agree on that
14:13:46 like all of that was implemented exactly so we don't need to be manually playing with the .zuul.yaml on every release and risking breaking something
14:13:55 right
14:14:25 We can have a branch filter on the job in .zuul.yaml. Even though I think overall the use of that was supposed to be avoided.
14:14:42 ok, so second point of agreement: (2) we will modify the .zuul.yaml for glance and glanceclient the same way as glance_store is now
14:15:38 smcginnis: i think we have the branch filter (at your suggestion) in glance_store
14:15:55 :)
14:16:01 if there is too much confusion about it, maybe we should try to document a bit better inline what we are achieving with those definitions
14:16:19 the problem is that since it makes the tests nonoperational in stable/rocky, andreas would prefer to see them removed
14:16:24 so what was the reason we avoided the branch filter in the first place?
14:16:38 i didn't know about it
14:16:45 ahh
14:16:48 but the one we merged has the branch filter
14:17:02 it makes the periodic job defs a bit verbose
14:17:21 take a look at .zuul.yaml in glance_store
14:17:43 well, I'm more than happy to review his contributions cleaning them up from each stable branch when they are cut, if he has a problem with them being there and doing nothing ;)
14:18:08 ok, that's fine then
14:18:43 so let's merge these:
14:18:48 https://review.openstack.org/599837
14:18:58 https://review.openstack.org/599844
14:19:10 and then i will backport them to stable/rocky
14:19:24 and then someone else can clean up the cruft if they are sufficiently motivated
14:19:41 and i will put up a patch to the release CPL docs about it
14:19:52 sound good?
14:21:15 so we do need to specify the branch for each job, we can't do that at group level?
14:21:40 yes, don't remember why, but that was the way to do it
14:21:49 (i consulted with top professionals)
14:22:02 sure ... oh well
14:22:14 it is what it is
14:22:54 it may be doable, someone else can try it out
14:23:10 but the key thing is i want to stop running the periodic jobs against stable/rocky
14:23:17 kind of a daily waste of resources
14:23:24 ++
14:23:55 ok moving on
14:24:01 ok, that's all from me ... please review & approve the reviews linked above!
14:24:23 #topic json-patching and Glance MIME types
14:24:36 i guess that's me too
14:24:43 i left comments on the spec
14:24:59 problem is that the current proposal is not consistent with json-patch
14:25:33 because it adds or replaces an object that isn't actually there in the representation
14:26:01 #link https://review.openstack.org/#/c/597648/
14:26:47 jokke__ is going to hate this, but the "proper" way to do this will be to directly patch /checksum, etc
14:26:49 I'm not sure I followed the recommendation for how to handle that with the schema and such.
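[To make the objection concrete: a standard json-patch (RFC 6902) document operates on members that actually exist in the resource representation, so patching the checksum fields directly would look like the sketch below. The values are illustrative; the point is that there is no freestanding validation_data member to "add" or "replace".]

    [
      { "op": "replace", "path": "/checksum",
        "value": "d8e8fca2dc0f896fd7cb4cb0031ba249" },
      { "op": "replace", "path": "/os_hash_algo", "value": "sha512" }
    ]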
14:27:24 I'd like to hear other opinions, but I'd rather define that new version than open the can of worms of opening up those attributes
14:27:55 that will be an absolute nightmare to maintain and will likely bring us a couple of new security bugs before we get it right
14:28:03 agree
14:28:14 i'm not so sure about that
14:28:38 we have readonly_properties or something internally, we keep them in there
14:28:56 we would just change them in the schema to not have readOnly: true
14:28:58 or
14:29:11 we could leave them as readOnly in the schema
14:29:24 it's allowed by the json-patch standard
14:29:49 i discussed this with ian in the glance channel yesterday, you can look at the log
14:29:54 has some references to the standard
14:30:34 but, if we want to change the image schema to allow validation_data as a location attribute
14:30:38 we can look into that
14:31:25 i guess we just make it not a required property
14:31:37 then when it never shows up in the response, that's fine
14:31:56 and it will be rejected in most PATCH calls
14:32:12 so we can't change multiple objects at once with json-patch ... meaning that limiting the ability to set the checksums to only when setting the location, and all the side effects of not doing that in a single call, will be a nightmare to handle
14:32:53 jokke__: well, we have a validator on the request deserializer
14:33:12 i think ian has a patch or paste up that does that
14:33:28 we can talk with him later
14:33:32 like we would need to reject all upload and import calls once one sets the checksum
14:33:49 which is ridiculously bad user experience
14:34:00 But will it not be a patchy then?
14:34:27 or we would need to rework how the image gets activated when the locations are set, causing the same
14:34:41 yeah, it is starting to sound bad
14:35:10 ok, i will take a crack at how we can add validation_data to the image schema so this will work in a standard json-patch way
14:37:04 i guess that's all from me about this
14:37:23 although, a schema change means an api bump
14:37:38 which maybe means this needs to be a full spec, not a spec-lite?
14:38:34 like we have already defined our own MIME type for the patch calls, we're not accepting just any json-patch call even if it is within our parameters ... I look forward to seeing if there is a better way to do it, but we had a lengthy discussion about this at the PTG, trying not to screw over our users while still making it possible to have the checksums in images that currently can't have them
14:39:06 well, right now we accept a strict subset of json-patch calls
14:39:31 if we make the change as the spec is now, we will be accepting some stuff that is ruled out by json-patch
14:39:48 which is not helpful to users
14:39:52 true, it still doesn't change the fact that we follow the standard only partially ;)
14:40:21 well, it's all about the containment relation ... we follow it partially, but what we do follow is all allowed by the spec
14:40:43 that will no longer be true if we accept this proposal as it is
14:41:46 I guess my point is that the audience for this change is very small, and the work needed to do it within json-patch spec limitations vs. bumping the mime version and extending json-patch for what we need to do (and documenting it) is quite a bit more overhead
14:42:14 like said, I'm more than happy to hear if there is a cleaner, better way to do this. We just couldn't figure one out
14:42:27 so looking forward to what you can come up with
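[A sketch of the schema direction floated above: validation_data as an optional (non-required) property of a locations entry, so it never appears in responses but can be validated when supplied via PATCH. The property names follow the spec under review; the exact shape here is an assumption.]

    "locations": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "url": {"type": "string"},
          "metadata": {"type": "object"},
          "validation_data": {
            "type": "object",
            "properties": {
              "checksum": {"type": "string"},
              "os_hash_algo": {"type": "string"},
              "os_hash_value": {"type": "string"}
            }
          }
        },
        "required": ["url", "metadata"]
      }
    }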
14:42:55 ok, so to be clear, if we accept the spec as written now, we will have to introduce a new openstack-json-patch-v2.2
14:43:14 yes, I do see that
14:43:16 i think changing the image spec so that we can stick with v2.1 will be easier
14:43:21 ok, just want to be sure
14:43:27 is it openstack-json-patch we call it?
14:43:51 application/openstack-images-v2.1-json-patch
14:44:38 ok, so it's not claiming to be openstack-wide, it has a glance/image-specific component in it
14:44:39 basically, what we do is constrain json-patch to what we have implemented by requiring this MIME type
14:44:48 oh yeah, this is just us
14:45:01 i don't know if anyone else uses PATCH
14:45:15 yeah, I was worried it was something openstack-wide that we would have had to coordinate much more
14:45:23 right
14:45:36 the only coordination would be to discuss with the API SIG
14:45:53 which i sort of did by speaking with cdent yesterday (notes are on the review)
14:46:09 so likely _if_ we do it this way we want to have a couple of conditions and support both v2.1 and v2.2, just to not break all the clients out there, as it's a clean extension
14:46:38 and just not allow them if the mime type is v2.1
14:46:38 we also still support v2.0
14:46:43 yeah
14:46:57 *shrug*
14:46:58 but yeah, we should check the mime type
14:47:30 but actually, if we change the image schema, we can keep v2.1
14:47:41 won't have to change the mime type
14:48:02 the change will be ruled out by the older image schema that doesn't have 'validation_data' as a locations property
14:49:02 sorry that this has dragged on so long ... i will see what i can come up with on an etherpad and send a link to the ML
14:49:07 well it's still affecting the other object (image) instead of the location
14:49:15 sure, that would be gr8
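[For reference, a client-side sketch of how the versioned MIME type constrains PATCH calls today: the Content-Type tells glance which subset of json-patch the client is speaking. The endpoint, token, and patch body below are illustrative.]

    import json
    import requests

    url = "http://glance.example.com/v2/images/IMAGE_ID"  # illustrative endpoint
    patch = [{"op": "replace", "path": "/name", "value": "renamed-image"}]
    resp = requests.patch(
        url,
        headers={
            # Glance's image-specific json-patch media type
            "Content-Type": "application/openstack-images-v2.1-json-patch",
            "X-Auth-Token": "TOKEN",  # illustrative keystone token
        },
        data=json.dumps(patch),
    )
    resp.raise_for_status()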
14:49:36 lets hop to the encryption before we run out of time
14:49:54 #topic image encryption clarifications
14:50:10 hi
14:50:33 we had a mail on the ML and a discussion in cinder, as you suggested
14:50:47 Luzi: quick question ... when you have one of these encrypted images on a disk, what's the result of doing 'file' on it?
14:51:16 it should be: gpg symmetrically encrypted data
14:51:22 cool, ty
14:51:34 we have found some further questions we would like to ask you
14:51:51 when handling encrypted images we need keys for encryption and decryption, which we retrieve from Barbican
14:52:08 default OpenStack behavior for encrypted resources is to automatically generate and delete keys on demand
14:52:26 in our current proposal we rely on the user to create and delete keys in Barbican for the image encryption
14:52:40 do you think we should change our approach to completely automate this for images as well?
14:53:28 This would mean the glance server needing to delete Barbican keys upon image deletion – which is actually an open TODO from Cinder already: https://github.com/openstack/cinder/blob/1060074dcb80dc10a4f3353e6bd7c39402f0c321/cinder/api/contrib/volume_actions.py#L225
14:54:22 I personally really hate the idea of the system doing something I did not ask it to do
14:54:33 same here ;)
14:54:47 meaning that's kind of like making nova delete the image when the user deletes the instance
14:55:05 also, if you want to import an image that is already encrypted (which i think is the use case), you can't have glance generate the key, can you?
14:55:25 in the cinder case: no, the key is just copied
14:56:26 oh, users would love that amount of database overhead when every single time the same data is copied over and over again
14:56:36 how about we don't do that? ;P
14:57:02 it is needed, because when you create an image from an encrypted volume it is automatically encrypted
14:57:29 so cinder copies the key, to be able to decrypt the image even when the original volume is deleted
14:57:52 the current cinder case is the only one where encrypted images already exist in openstack
14:58:12 and that's exactly the reason why cinder shouldn't be deleting the user's resource on a different service in the first place ;)
14:59:21 that might take longer, so before the time is over, we have other questions too:
14:59:21 I guess the one thing we should make sure of is that the key id is retained when the image gets deleted, so those keys can be tracked and cleaned out
14:59:32 quick
14:59:42 we are currently planning to outsource the encryption mechanism implementations into the openstacksdk, because it is used across several components
14:59:53 seems like the other teams have differing opinions on where to outsource the code to – what's yours?
15:01:10 we can also discuss this further in the glance chat
15:01:10 we are out of time
15:01:25 I really don't have an opinion on that ... neither cinder nor nova uses it to access glance though, iiuc, so it might be problematic to tap in
15:01:34 okay
15:01:37 yeah ... let's continue this on #openstack-glance
15:01:41 #endmeeting
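[A minimal sketch of the automated key cleanup debated above, assuming castellan as the key-manager interface to Barbican; the hook function and the 'cinder_encryption_key_id' property name are assumptions, not merged glance behavior.]

    from castellan import key_manager

    def delete_image_encryption_key(context, image):
        # Hypothetical hook run during image deletion: if the image
        # carries a key reference, delete the key from Barbican too.
        key_id = image.extra_properties.get('cinder_encryption_key_id')
        if key_id:
            key_manager.API().delete(context, key_id)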