14:02:20 <jokke__> #startmeeting glance
14:02:21 <openstack> Meeting started Thu Oct  4 14:02:20 2018 UTC and is due to finish in 60 minutes.  The chair is jokke__. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:23 <abhishekk> I thought I was late
14:02:25 <openstack> The meeting name has been set to 'glance'
14:02:29 <jokke__> o/
14:02:38 <Luzi> o/
14:02:40 <abhishekk> o/
14:02:42 <mhen> o/
14:02:44 <smcginnis> o/
14:02:55 <abhishekk> smcginnis, rosmaita o/
14:04:53 <jokke_> ok so I actually don't have any updates for us ... quiet start for the cycle
14:04:58 <jokke_> #topic tip jobs
14:05:18 <jokke_> ohhh
14:05:32 <jokke__> #topic tip jobs
14:05:41 <rosmaita> client, store are all green
14:05:59 <rosmaita> cursive is suddenly failing py35-functional on master
14:06:13 <rosmaita> which is weird, i don't think that library is being actively maintained
14:06:22 <rosmaita> they are timeouts
14:06:31 <rosmaita> i will take a look later
14:06:35 <jokke__> functional-py35 was failing yesterday on master due to timeout
14:06:38 <smcginnis> There was some dependency update issue with cursive over the weekend I thought, but I also thought it was cleared up.
14:06:45 <rosmaita> ahh
14:06:53 <jokke__> this morn passing again
14:06:55 <rosmaita> i have not been reading the ML
14:06:59 <smcginnis> But that manifested itself as a stacking failure I thought.
14:07:05 <rosmaita> ok, maybe it has fixed itself
14:07:49 <rosmaita> ok, i'll look at today's run (happens in 8 hours or so) and see what's up
14:07:56 <rosmaita> next item
14:08:04 <rosmaita> how to configure the jobs
14:08:15 <rosmaita> there is a small controversy happening
14:08:33 <rosmaita> quick reminder of what we're talking about
14:08:46 <rosmaita> during rocky, we configured periodic "tips" jobs so we will get advance notice if some project we depend on merges a change to their master that breaks glance tests
14:09:11 <rosmaita> problem is, when we cut stable/rocky from master, the periodic tests also started running in stable/r, which is pointless
14:09:22 <rosmaita> solution was to modify .zuul.yaml so that the periodic jobs only run against master no matter what branch .zuul.yaml is in
14:09:36 <rosmaita> but, there is some pushback from infra team because now we have all this cruft in the stable branch that is nonoperational and possibly confusing to people looking at the file
14:09:55 <rosmaita> so, our options are:
14:09:56 <rosmaita> 1) do what we did with glance_store and just leave the cruft in the stable branch
14:09:56 <rosmaita> 2) have the release CPL remove the cruft manually at the time the stable branch is cut
14:09:56 <rosmaita> 3) andreas's suggestion: move the periodic jobs to project-config, where they will be defined to be only run against master and then won't show up in our local .zuul.yaml and won't confuse anyone
14:10:09 <rosmaita> editorial comment from me: if we do 3), then any change to these jobs has to go through the infra team for approval (makes us less agile)
14:10:24 <rosmaita> another editorial comment: the infra manual says that jobs like this (only run against master) should be defined in project-config, but, infra team members in IRC have indicated that they aren't interested in policing this, it's more of a suggestion
14:10:38 <rosmaita> anyway, the changes for glance and glanceclient have yet to be approved, so it would be good to have a decision about what we want to do here
14:10:49 <rosmaita> questions?
14:10:58 <jokke__> not only that, but there was a very strong request not to have the jobs defined there and to move them to the project repo
14:11:18 <rosmaita> yes, their position on that seems a bit inconsistent
14:11:31 <jokke__> which causes IMO even more confusion when people start trying to find where these jobs are coming from as they are not in .zuul.yaml
14:11:44 <rosmaita> jokke__: ++
14:12:01 <rosmaita> i agree entirely, it's always a PITA to figure out where the stuff is coming from
14:12:50 <jokke__> Personally I think the cruft in the .zuul.yaml is not that big of a deal ... if one does not know a) what it is doing b) why it's there they likely should not touch it
14:13:28 <rosmaita> ok, so first item is: (1) we will keep the periodic tips jobs in our local .zuul.yaml
14:13:38 <rosmaita> i mean, we agree on that
14:13:46 <jokke__> like all that was implemented exactly so we don't need to manually play with the .zuul.yaml on every release and risk breaking something
14:13:55 <rosmaita> right
14:14:25 <smcginnis> We can have a branch filter on the job in .zuul.yaml. Even though I think overall the use of that was trying to be avoided.
14:14:42 <rosmaita> ok, so second point of agreement: (2) we will modify the .zuul.yaml for glance, glanceclient same way as glance_store is now
14:15:38 <rosmaita> smcginnis: i think we have the branch filter (at your suggestion) in glance_store
14:15:55 <smcginnis> :)
14:16:01 <jokke__> if there is too much confusion about it, maybe we should try to document it a bit better inline what we are achieving with those definitions
14:16:19 <rosmaita> the problem is that since it makes the tests nonoperational in stable/r, andreas would prefer to see them removed
14:16:24 <jokke__> so what was the reason we avoided the branch filter in the first place?
14:16:38 <rosmaita> i didn't know about it
14:16:45 <jokke__> ahh
14:16:48 <rosmaita> but the one we merged has the branch filter
14:17:02 <rosmaita> it makes the periodic job defs a bit verbose
14:17:21 <rosmaita> take a look at .zuul.yaml in glance_store
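(For reference, a minimal sketch of what such a branch-restricted periodic job looks like in a project's .zuul.yaml; the job name here is illustrative, not the exact one merged in glance_store:)

    - project:
        periodic:
          jobs:
            # Run the tips job only on master; without the branch matcher
            # the periodic pipeline would also fire on stable branches.
            - glance_store-src-tips:
                branches: master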
14:17:43 <jokke__> well, I'm more than happy to review his contributions cleaning them up from each stable branch when it's cut, if he has a problem with them being there doing nothing ;)
14:18:08 <rosmaita> ok, that's fine then
14:18:43 <rosmaita> so let's merge these:
14:18:48 <rosmaita> https://review.openstack.org/599837
14:18:58 <rosmaita> https://review.openstack.org/599844
14:19:10 <rosmaita> and then i will backport them to stable/r
14:19:24 <rosmaita> and then someone else can clean up the cruft if they are sufficiently motivated
14:19:41 <rosmaita> and i will put up a patch to release CPL docs about it
14:19:52 <rosmaita> sound good?
14:21:15 <jokke__> so we do need to specify the branch for each job, we can't do that at the group level?
14:21:40 <rosmaita> yes, don't remember why, but that was the way to do it
14:21:49 <rosmaita> (i consulted with top professionals)
14:22:02 <jokke__> sure ... oh well
14:22:14 <jokke__> it is what it is
14:22:54 <rosmaita> it may be doable, someone else can try it out
14:23:10 <rosmaita> but the key thing is i want to stop running the periodic jobs against stable/r
14:23:17 <rosmaita> kind of a daily waste of resources
14:23:24 <jokke__> ++
14:23:55 <jokke__> ok moving on
14:24:01 <rosmaita> ok, that's all from me ... please review & approve the reviews linked above!
14:24:23 <jokke__> #topic json-patching and Glance MIME types
14:24:36 <rosmaita> i guess that's me too
14:24:43 <rosmaita> i left comments on the spec
14:24:59 <rosmaita> problem is that the current proposal is not consistent with json-patch
14:25:33 <rosmaita> because it adds or replaces an object that isn't actually there in the representation
14:26:01 <rosmaita> #link https://review.openstack.org/#/c/597648/
14:26:47 <rosmaita> jokke__ is going to hate this, but the "proper" way to do this will be to directly patch /checksum, etc
14:26:49 <smcginnis> I'm not sure I followed the recommendation for how to handle that with the schema and such.
14:27:24 <jokke__> I'd like to hear other opinions, but I'd rather define that new version than open the can of worms of making those attributes writable
14:27:55 <jokke__> that will be an absolute nightmare to maintain and will likely bring us a couple of new security bugs before we get it right
14:28:03 <abhishekk> agree
14:28:14 <rosmaita> i'm not so sure about that
14:28:38 <rosmaita> we have readonly_properties or something internally, we keep them in there
14:28:56 <rosmaita> we would just change them on the schema to not have readOnly: true
14:28:58 <rosmaita> or
14:29:11 <rosmaita> we could leave them as readOnly in the schema
14:29:24 <rosmaita> it's allowed by the json-patch standard
14:29:49 <rosmaita> i discussed this with ian in glance channel yesterday, you can look at the log
14:29:54 <rosmaita> has some references to the standard
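(A standards-conformant patch along the "directly patch /checksum" lines might look like this; the hashes are made-up illustrative values, and os_hash_algo/os_hash_value are the multihash fields, assuming they would be opened the same way:)

    [
        { "op": "replace", "path": "/checksum",
          "value": "d41d8cd98f00b204e9800998ecf8427e" },
        { "op": "replace", "path": "/os_hash_algo", "value": "sha512" }
    ]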
14:30:34 <rosmaita> but, if we want to change the image schema to allow validation_data as a location attribute
14:30:38 <rosmaita> we can look into that
14:31:25 <rosmaita> i guess we just make it not a required property
14:31:37 <rosmaita> then when it never shows up in the response, that's fine
14:31:56 <rosmaita> and it will be rejected in most PATCH calls
14:32:12 <jokke__> so you can't change multiple objects at once with json-patch ... meaning that limiting the ability to set the checksums to only when setting the location, and all the side effects of not doing that in a single call, will be a nightmare to handle
14:32:53 <rosmaita> jokke__: well, we have a validator on the request deserializer
14:33:12 <rosmaita> i think ian has a patch or paste up that does that
14:33:28 <rosmaita> we can talk with him later
14:33:32 <jokke__> like we would need to reject all upload and import calls once one sets the checksum
14:33:49 <jokke__> which is ridiculously bad user experience
14:34:00 <abhishekk> But won't it be patchy then?
14:34:27 <jokke__> or we would need to rework how the image gets activated when the locations are set causing the same
14:34:41 <rosmaita> yeah, it is starting to sound bad
14:35:10 <rosmaita> ok, i will take a crack at how we can add validation_data to the image schema so this will work in a standard json-patch way
14:37:04 <rosmaita> i guess that's all from me about this
14:37:23 <rosmaita> although, schema change means api bump
14:37:38 <rosmaita> which maybe means this needs to be a full spec, not spec-lite?
14:38:34 <jokke__> like we already have defined our own MIME type for the patch calls, we're not accepting a plain json-patch call even if it is within our parameters ... I look forward to seeing if there is a better way to do it, but we had a lengthy discussion about this at the PTG trying not to screw over our users while still making it possible to have the checksums in images that currently can't have them
14:39:06 <rosmaita> well, right now we accept a strict subset of json-patch calls
14:39:31 <rosmaita> if we make the change as the spec is now, we will be accepting some stuff that is ruled out by json-patch
14:39:48 <rosmaita> which is not helpful to users
14:39:52 <jokke__> true, it still doesn't change the fact that we follow the standard only partially ;)
14:40:21 <rosmaita> well, it's all about the containment relation ... we follow it partially but what we do follow is all allowed by the spec
14:40:43 <rosmaita> that will no longer be true if we accept this proposal as it is
14:41:46 <jokke__> I guess my point is that the audience for this change is very small, and compared to the work needed to do it within the json-patch spec's limitations, bumping the mime version and extending json-patch for what we need to do (and documenting it) is quite a bit less overhead
14:42:14 <jokke__> like I said, I'm more than happy to hear if there is a clean, better way to do this. We just couldn't figure one out
14:42:27 <jokke__> so looking forward to what you can come up with
14:42:55 <rosmaita> ok, so to be clear, if we accept the spec as written now, we will have to introduce a new openstack-json-patch-v2.2
14:43:14 <jokke__> yes, I do see that
14:43:16 <rosmaita> i think changing the image schema so that we can stick with v2.1 will be easier
14:43:21 <rosmaita> ok, just want to be sure
14:43:27 <jokke__> is it openstack-json-patch we call it?
14:43:51 <rosmaita> application/openstack-images-v2.1-json-patch
14:44:38 <jokke__> ok, so it's not claiming to be openstack-wide, it has a glance/image-specific component in it
14:44:39 <rosmaita> basically, what we do is constrain json-patch to what we have implemented by requiring this MIME type
14:44:48 <rosmaita> oh yeah, this is just us
14:45:01 <rosmaita> i don't know if anyone else uses PATCH
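(For context, the MIME type is sent as the Content-Type of the PATCH request, so a call constrained this way looks roughly like the following; the op shown is just an ordinary property update:)

    PATCH /v2/images/{image_id} HTTP/1.1
    Content-Type: application/openstack-images-v2.1-json-patch

    [ { "op": "replace", "path": "/name", "value": "new-name" } ]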
14:45:15 <jokke__> yeah, I was worried it was something openstack-wide that we would have had to coordinate much more
14:45:23 <rosmaita> right
14:45:36 <rosmaita> the only coordination would be to discuss with the API SIG
14:45:53 <rosmaita> which i sort of did by speaking with cdent yesterday (notes are on the review)
14:46:09 <jokke__> so likely _if_ we do it this way, we want to have a couple of conditions and support both v2.1 and v2.2, just to not break all the clients out there, as it's a clean extension
14:46:38 <jokke__> and just not allow the new operations if the mime type is v2.1
14:46:38 <rosmaita> we also still support v2.0
14:46:43 <jokke__> yeah
14:46:57 <jokke__> *shrug*
14:46:58 <rosmaita> but yeah, we should check mime type
14:47:30 <rosmaita> but actually, if we change the image schema, we can keep v2.1
14:47:41 <rosmaita> won't have to change the mime type
14:48:02 <rosmaita> the change will be ruled out by the older image schema that doesn't have 'validation_data' as a locations property
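(Under that schema change, adding a location together with its checksum data might look something like this; the field names follow the spec under review and could still change, and the values are placeholders:)

    [
        { "op": "add", "path": "/locations/-",
          "value": {
              "url": "rbd://pool/image-id",
              "metadata": {},
              "validation_data": {
                  "checksum": "d41d8cd98f00b204e9800998ecf8427e",
                  "os_hash_algo": "sha512",
                  "os_hash_value": "<sha512 hex digest>"
              }
          }
        }
    ]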
14:49:02 <rosmaita> sorry that this has dragged on so long ... i will see what i can come up with on an etherpad and send a link to the ML
14:49:07 <jokke__> well it's still affecting the other object (image) instead of the location
14:49:15 <jokke__> sure, that would be gr8
14:49:36 <jokke__> lets hop to the encryption before we run out of time
14:49:54 <jokke__> #topic image encryption clarifications
14:50:10 <Luzi> hi
14:50:33 <Luzi> we had a mail on the ML and a discussion in cinder, as you suggested
14:50:47 <rosmaita> Luzi: quick question ... when you have one of these encrypted images on a disk, what's the result of doing 'file' on it?
14:51:16 <Luzi> it should be: gpg symmetrically encrypted data
14:51:22 <rosmaita> cool, ty
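(A quick way to reproduce that locally; the exact file(1) wording varies by version:)

    $ gpg --symmetric --cipher-algo AES256 image.qcow2
    $ file image.qcow2.gpg
    image.qcow2.gpg: GPG symmetrically encrypted data (AES256 cipher)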
14:51:34 <Luzi> we have found some further questions, we would like to ask you
14:51:51 <Luzi> when handling encrypted images we need keys for encryption and decryption, which we retrieve from Barbican
14:52:08 <Luzi> default OpenStack behavior for encrypted resources is to automatically generate and delete keys on-demand
14:52:26 <Luzi> in our current proposal we rely on the user to create and delete keys in Barbican for the image encryption
14:52:40 <Luzi> do you think we should change our approach to completely automate this for images as well?
14:53:28 <Luzi> This would mean the glance server needing to delete Barbican keys upon image deletion – which is actually an open TODO from Cinder already: https://github.com/openstack/cinder/blob/1060074dcb80dc10a4f3353e6bd7c39402f0c321/cinder/api/contrib/volume_actions.py#L225
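(A rough sketch of what that automation could look like on the glance side, using castellan, the key-manager abstraction cinder already uses; the surrounding function is hypothetical, and treating 'cinder_encryption_key_id' as the relevant image property is an assumption:)

    # Hypothetical sketch only -- not glance's actual delete path.
    from castellan import key_manager

    def delete_image_with_key(context, image_repo, image):
        # Assumed property name: cinder sets this on images created
        # from encrypted volumes.
        key_id = image.extra_properties.get('cinder_encryption_key_id')
        image_repo.remove(image)
        if key_id:
            try:
                # Best effort: the key may be shared or already deleted.
                key_manager.API().delete(context, key_id)
            except Exception:
                pass  # log and move on; key cleanup must not block image delete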
14:54:22 <jokke__> I personally really hate the idea of system doing something I did not ask it to do
14:54:33 <mhen> same here ;)
14:55:05 <jokke__> meaning that's kind of like making nova delete the image when the user deletes the instance
14:55:05 <rosmaita> also, if you want to import an image that is already encrypted (which i think is the use case), you can't have glance generate the key, can you?
14:55:25 <Luzi> in the cinder case: no, the key is just copied
14:56:26 <jokke__> oh, users would love that amount of database overhead, with the same data being copied over and over again every single time
14:56:36 <jokke__> how about we don't do that? ;P
14:57:02 <Luzi> it is needed, because when you create an image from an encrypted volume it is automatically encrypted
14:57:29 <Luzi> so cinder copies the key, to be able to decrypt the image even when the original volume is deleted
14:57:52 <mhen> the current cinder case is the only one where encrypted images already exist in openstack
14:58:12 <jokke__> and that's exactly the reason why cinder shouldn't be deleting the user's resource on a different service in the first place ;)
14:59:21 <Luzi> that might take longer, so before the time is over, we have other questions too:
14:59:21 <jokke__> I guess the one thing we should make sure is that the key id is retained when the image gets deleted so those keys can be tracked and cleaned out
14:59:32 <jokke__> quick
14:59:42 <Luzi> we are currently planning to outsource the encryption mechanism implementations into the openstacksdk, because it is used across several components
14:59:53 <Luzi> seems like the other teams have differing opinions on that in regards to where to outsource the code to – what’s yours?
15:01:10 <Luzi> we can also discuss this further in the glance chat
15:01:10 <abhishekk> we are out of time
15:01:25 <jokke__> I really don't have an opinion on that ... neither cinder nor nova uses it to access glance though, iiuc, so it might be problematic to tap in
15:01:34 <Luzi> okay
15:01:37 <jokke__> yeah ... lets continue this on #openstack-glance
15:01:41 <jokke__> #endmeeting