14:01:21 <markwash> #startmeeting glance
14:01:22 <iccha> 0/
14:01:22 <openstack> Meeting started Thu Oct 10 14:01:21 2013 UTC and is due to finish in 60 minutes.  The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:25 <openstack> The meeting name has been set to 'glance'
14:01:37 <zhiyan> hi!
14:02:00 <rosmaita> \o
14:02:26 <markwash> Agenda for today is again at the following link
14:02:34 <markwash> #link https://etherpad.openstack.org/glance-team-meeting-agenda
14:02:41 <markwash> it's an etherpad, so add to it as you see fit
14:03:24 <markwash> Okay, looks like we can get started
14:03:24 <flwang> o/
14:03:31 <markwash> #topic new core members
14:03:49 <markwash> I'm sure you all know zhiyan and flwang were nominated for glance core
14:04:04 <markwash> I added them to the group in gerrit, so now they can +2
14:04:18 <zhiyan> hi folks, this is zhiyan, 刘知言 is my Chinese name :) nice to see you, and glad to work with you.
14:04:26 <zhiyan> and thanks for your support! again
14:04:54 <markwash> I'm very glad to have them on the team
14:05:18 <markwash> #topic glance havana rc2
14:05:26 <flwang> markwash: thanks, glad to work with you guys
14:05:45 <markwash> perhaps you've noticed that ttx and I set up a glance rc2 thanks to all the great bug finding/triaging/fixing you guys have been doing (among some others)
14:06:26 <markwash> #link https://launchpad.net/glance/+milestone/havana-rc2
14:06:52 <markwash> Most of those have already landed in milestone-proposed
14:07:30 <markwash> or anyway they've been approved and are going through the gate at the moment
14:07:47 <markwash> there are two or three that need more attention
14:08:17 <markwash> #link https://review.openstack.org/#/c/48943/
14:08:35 <markwash> zhiyan: you are +1 on that patch. . wanna go change it to +2 approve now that you're core?
14:08:41 <markwash> its okay if you want more time to revisit it
14:08:50 <zhiyan> done
14:08:52 <iccha> thanks for listing them in etherpad markwash
14:08:57 <zhiyan> just check it again
14:09:01 <markwash> cool
14:09:20 <markwash> then there is this critical review
14:09:21 <markwash> https://review.openstack.org/#/c/49009/
14:09:42 <markwash> mclaren: it looks like there is a tiny bit of back and forth on that review, what's the status?
14:09:52 <markwash> nikhil: I think you were looking at it too, with a -1
14:10:47 <mclaren> markwash: let me see
14:11:15 <nikhil> markwash: ya
14:11:22 <mclaren> markwash: hadn't noticed the -1 from nikhil
14:11:51 <nikhil> one min, looking more
14:11:59 <markwash> there's at least one change (in version_negotiation.py) that looks like maybe it was unintentional (or maybe I misunderstand the significance)
14:13:00 <nikhil> markwash: I was hoping to see if people agreed to move all those logs in the debug mode
14:13:01 <mclaren> markwash: that was intentional but I can revert it np
14:13:12 <nikhil> some of them are in warn and error
14:13:43 <mclaren> nikhil: we run in debug mode in production, we find we need that level of verbiage to trace problems
14:13:51 <nikhil> I've monitored the keystone client patch for a similar change and it was painful
14:14:07 <markwash> mclaren: hmm, I guess it's actually not a problem to make that change in version_negotiation.py
14:14:17 <markwash> strange that it's not a translated string, oh well
14:14:26 <nikhil> mclaren: gotcha, my suggestion was to have both LOG.debug and then LOG.warn/error
14:15:02 <nikhil> so that if someone needs to go firefight, there is debug mode to help even if there are no logs in warn/error
14:15:07 <venkatesh> mclaren: i left a small comment for clarification there. did not do -1 though. would be great to know your thoughts.
14:15:36 <mclaren> nikhil: not sure I understand
14:16:14 <mclaren> nikhil: log output is already differentiated between debug/warn/error, I think I'm missing something here
14:16:36 <nikhil> mclaren: my suggestion was to have something like:
14:16:48 <nikhil> LOG.debug(log my creds here)
14:16:55 <nikhil> followed by
14:17:09 <nikhil> LOG.error(just print uri without creds)
14:17:17 <nikhil> kinda thing
14:17:39 <markwash> nikhil: I think mclaren was saying they run with debug on all the time, regardless of these specific messages. . so duplicating these in error and debug wouldn't remove their need to run debug
14:17:49 <mclaren> yes
14:18:09 <markwash> I think in general we have a commitment not to reveal credentials in logs of any form
14:18:18 <markwash> I don't know if there are some exceptions to that
14:18:40 <nikhil> markwash: I don't want to be a pain here, however the keystone client patch never went in
14:19:05 <nikhil> which was removing creds from logs
14:19:07 <markwash> nikhil: hmm, you don't happen to have a link to that patch?
14:19:11 <mclaren> leaking the glance admin password would be catastrophic...
14:19:17 <nikhil> it's in the review, I'll post here
14:19:25 <markwash> ah, sorry I missed it
14:19:29 <nikhil> https://bugs.launchpad.net/horizon/+bug/1004114 https://review.openstack.org/#/c/42467/
14:19:33 <nikhil> markwash: ^^
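For the record, the scrubbing idea under discussion — logging a URI without its embedded credentials — could be sketched roughly as below. This is a minimal illustration, not the actual 49009 patch; the function name and masking style are made up.

```python
from urllib.parse import urlsplit, urlunsplit

def mask_credentials(uri):
    """Return a copy of ``uri`` with any embedded password replaced.

    Illustrative only: relies on the URI parsing cleanly, which a
    malformed store URI may not.
    """
    parts = urlsplit(uri)
    if parts.password is None:
        return uri
    # netloc looks like "user:password@host"; hide the password part
    netloc = parts.netloc.replace(parts.password, '***', 1)
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```

A URI like `swift://user:secret@host/v1/container/obj` would come back with `secret` replaced by `***`, so it can be logged at any level without leaking the glance admin password.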
14:20:30 <mclaren> venkatesh: thanks will take a look at your comment
14:20:42 <venkatesh> mclaren: thanks. :)
14:21:05 <markwash> hmm.  I think I might just differ with dolph on this one
14:21:34 <mclaren> venkatesh: which patch set was it against? (I don't see it on #8) thanks
14:21:35 <markwash> I think it makes sense to run debug some of the time in production at least, and isolation of debug logs to a secure container may not really be a reliable option
14:22:02 <nikhil> markwash: mclaren It was just a suggestion, I can remove my -1 if glance decides to go the other way .. just wanted to keep sanity within all projects in openstack otherwise it's a deployer's pain
14:22:20 <venkatesh> mclaren: it was against PS4
14:22:34 <mclaren> venkatesh: thanks
14:22:37 <markwash> nikhil: but thanks for your thoroughness, I wouldn't have considered the issue as much without it
14:22:42 <nikhil> I'm trying to fight a bug with swift auth version 2 atm so there's that pain which is being added here :)
14:22:48 <venkatesh> mclaren: np
14:22:58 <nikhil> markwash: thanks
14:23:05 <markwash> looks like we'll get another patchset here soon, mclaren? I'd love to submit it to milestone-proposed in the next 4 hours or so
14:23:22 <markwash> but really any time before tomorrow is probably okay
14:23:47 <mclaren> venkatesh: could it be patch PS3?
14:24:41 <venkatesh> mclaren: its part of the review comment section. its not an inline comment. https://review.openstack.org/#/c/49009/ sorry about the confusion
14:25:14 <markwash> it looks like the question is about why bother changing the migration scripts
14:25:49 <mclaren> markwash: sure, happy to help out.
14:25:57 <mclaren> markwash: do we definitely need a new PS?
14:26:04 <venkatesh> markwash: yes mark. also the current log message might not give us much detail on which row had the problem
14:26:28 <markwash> mclaren: maybe not. . I was just guessing based on the back and forth here
14:26:52 <markwash> it seems +2'able to me, but I want to wait to hear folks out
14:27:00 <mclaren> venkatesh: the migrate script will log to syslog/wherever in the  same way as the api
14:27:33 <mclaren> venkatesh: so I figured the other changes were pointless if we still ended up with the password in the log files anyway
14:28:29 <markwash> mclaren: venkatesh: it seems that log is only for invalid store uris. Would it be possible to add a little more identifying info to that log, just to help narrow down the search if such a message appears? something like the related image_id?
14:28:45 <mclaren> markwash: sure, makes sense
14:28:53 <zhiyan> +1
14:29:09 <venkatesh> markwash: +1
14:29:13 <zhiyan> image_id may be a good candidate
14:29:19 <markwash> okay cool
14:29:22 <mclaren> zhiyan: you had suggested similar elsewhere if I remember correctly
14:29:45 <zhiyan> yes, but not for migrate script. mclaren
14:29:59 <markwash> any other potential blockers people know of for https://review.openstack.org/#/c/49009/ ?
14:30:06 <zhiyan> it's store driver iirc. mclaren ^
14:30:41 <zhiyan> markwash: i'm ok, but prefer adding something to the migrate script, like venkatesh proposed.
14:31:16 <markwash> zhiyan: sounds good, I think mclaren is okay with making that change in another patchset. . pls correct me
14:31:19 <mclaren> markwash: I'll push up a new patch with the id in migrate script in the next hour
14:31:36 <zhiyan> mclaren: cool. thanks.
14:31:44 <mclaren> np
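The shape markwash asked for — an error log that identifies the affected image without echoing a possibly credential-bearing URI — might look like the sketch below. Names are hypothetical, and as mclaren notes later in the meeting, the image_id may not actually be in scope at the point in the code where the malformed URI is detected.

```python
import logging

LOG = logging.getLogger(__name__)

def bad_location_message(image_id):
    # Hypothetical helper: identify the affected row by image id
    # while deliberately withholding the URI itself, since a
    # malformed store URI may still embed credentials.
    return ("Invalid location URI for image %s "
            "(URI withheld: it may contain credentials)" % image_id)

def log_bad_location(image_id):
    # Shared by the hypothetical migrate script and API paths so
    # both log the same credential-free text.
    LOG.error(bad_location_message(image_id))
```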
14:31:49 <markwash> great, thanks guys
14:31:50 <venkatesh> mclaren: thank you. :)
14:32:03 <mclaren> sure
14:32:18 <markwash> any more rc2 notes/issues/bugs?
14:32:39 <flwang> markwash: a tiny issue
14:33:01 <flwang> markwash: should we update the glance man page dates? https://github.com/openstack/glance/blob/master/doc/source/man/glanceapi.rst
14:33:16 <flwang> seems all of them went out of date long ago
14:33:46 <markwash> wow that is pretty bad :-)
14:34:24 <flwang> seems we're assuming the deployer never looks at it :)
14:34:27 <markwash> flwang: is there a bug for that?
14:34:39 <flwang> I guess the content is out of date as well, isn't it?
14:34:47 <flwang> not yet, just noticed it
14:35:15 <markwash> flwang: would you mind filing one? I guess it's not the most critical issue but it'd be nice to at least fix it up in Icehouse
14:35:31 <flwang> markwash: sure, will do
14:35:40 <markwash> okay, if nothing else, moving on to check in with the ongoing async work
14:35:57 <markwash> #topic async tasks status check in
14:36:10 <zhiyan> 1 sec markwash
14:36:14 <nikhil> venkatesh: ^^
14:36:27 <zhiyan> not sure you notice https://bugs.launchpad.net/glance/+bug/1226013 and https://bugs.launchpad.net/glance/+bug/1236868
14:36:48 <zhiyan> do you think they are good candidates for rc2
14:37:38 <markwash> I'm not sure. . it's definitely a bug, but I guess I can't see what it would break exactly?
14:37:49 <markwash> it seems like it would just be an unfortunate "huh that's unexpected" moment for a user
14:37:58 <markwash> and then they would delete it again?
14:38:20 <dosaboy> markwash: depends how serious it is if a deleted image ends up in 'killed' state
14:38:55 <dosaboy> it's just a race condition really
14:38:57 <zhiyan> i think they don't break anything
14:39:12 <dosaboy> but as you say, if the user can just delete the killed (but deleted) image
14:39:13 <flwang> markwash: for 1236868, I think it's impacting the image state
14:39:15 <dosaboy> that's fine
14:39:49 <markwash> flwang: right. . but can I just delete the image again if it's in the 'killed' state?
14:39:54 <zhiyan> but there are 2 points to me: 1. the status is wrong for the operation sequence from the end user's perspective, like you said. 2. the v2 and v1 apis behave differently for the same operation sequence.
14:40:35 <dosaboy> there is also the point that if an image is in killed state, the user expects the data to be extant
14:40:40 <dosaboy> if i understand correctly
14:41:20 <zhiyan> dosaboy: are you saying image content can be partially leaked?
14:41:21 <flwang> markwash: TBH, I think it's tricky, just like I mentioned in the comments. depends on how we define the root cause of the failure
14:41:36 <markwash> I guess for rc2 I only care about the ease of workarounds really
14:41:42 <dosaboy> zhiyan: well, its kind of the reverse i think
14:41:50 <markwash> if it put the image in a state it could never recover from, we should fix it for rc2
14:42:02 <markwash> otherwise we should just fix it in master
14:42:13 <dosaboy> if the image gets deleted completely but is left in 'killed' state
14:42:30 <dosaboy> markwash: agreed
14:42:32 <zhiyan> markwash: ok, i'm fine for master.
14:42:36 <dosaboy> need to verify that
14:42:43 <flwang> dosaboy: as I said, I think it depends on the root cause of the image create failure
14:42:55 <dosaboy> flwang: very possibly
14:43:12 <dosaboy> i'm not familiar with all paths that can lead to 'killed'
14:43:32 <dosaboy> you guys are the experts :)
14:43:50 <flwang> I will take a look as well
14:44:05 <flwang> markwash: async workers?
14:44:12 <markwash> okay, thanks for bringing those up
14:44:20 <markwash> async workers!!!!1
14:44:26 <nikhil> markwash: https://review.openstack.org/#/c/50788/
14:44:42 <nikhil> venkatesh: and I spent some time on the db patch last evening
14:44:45 <markwash> we had a meeting yesterday, at 1400 UTC and I think we're planning on another next Tuesday at 1400 UTC
14:44:49 <venkatesh> nikhil: thanks
14:45:12 <markwash> the main thing we talked about yesterday was a slight change to the HTTP and DB apis
14:45:48 <venkatesh> markwash: yes mark. the patches that we submitted yesterday are related to those changes.
14:45:49 <markwash> essentially, the goal is to have better mysql performance by not doing bulk list queries on tables with TEXT columns
14:46:04 <markwash> I know most of the folks here were at the meeting, I just want to summarize for the record
14:47:05 <markwash> venkatesh: I saw that patch. . I have to say I really hate to land something that makes it so, for the same task, task_repo.list()[0] != task_repo.get(<task_id>)
14:47:48 <venkatesh> markwash: you are right mark. that was the first cut. i am still working on it.
14:47:58 <markwash> okay cool.
14:48:07 <markwash> I mean, i think we could live with that patch as an intermediate step for sure
14:48:18 <nikhil> markwash: more comments please, so that we can be confident about landing :)
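The invariant markwash objects to losing can be shown with a toy repo (names are illustrative, not Glance's actual domain model): a task returned by list() should equal the same task fetched by get(), and a "sparse" list() that drops the TEXT-backed fields would violate that.

```python
class Task:
    # Toy task: 'input' and 'result' stand in for the heavy
    # TEXT-backed fields a sparse list() would omit.
    def __init__(self, task_id, status, input=None, result=None):
        self.task_id = task_id
        self.status = status
        self.input = input
        self.result = result

    def __eq__(self, other):
        return vars(self) == vars(other)

class TaskRepo:
    def __init__(self):
        self._tasks = {}

    def add(self, task):
        self._tasks[task.task_id] = task

    def get(self, task_id):
        return self._tasks[task_id]

    def list(self):
        # Returns full objects, so list()[0] == get(id) holds; a
        # sparse list() returning trimmed Tasks would break it.
        return list(self._tasks.values())
```

With this toy repo, `repo.list()[0] == repo.get(task_id)` is true by construction; the intermediate patch under review trades that property away for query performance.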
14:48:34 <flwang> venkatesh: and I saw there are the same "created_at", "updated_at", "deleted" columns, too bad
14:48:53 <nikhil> oh run_at too
14:48:58 <flwang> as I said, seems we need to delete twice for 1 task, don't we?
14:49:08 <nikhil> nope
14:49:44 <venkatesh> flwang: no. the model base that we are currently extending is forcing us to use those columns
14:49:47 <markwash> I wonder if we could also have a quick chance to review the proposed change to the http api, since that's the most significant thing we deliver (due to the commitment to backwards compatibility with it)
14:49:56 <markwash> #link https://wiki.openstack.org/wiki/Glance-tasks-api#List_Tasks
14:50:11 <nikhil> for soft-deletes the entry in other table won't matter
14:50:29 <flwang> venkatesh: i know, but I don't like it :)
14:50:33 <venkatesh> flwang: may be i can add the points that we know as need to be fixed
14:50:39 <rosmaita> so i don't have "created_at", just "updated_at"
14:50:44 <venkatesh> flwang: me neither. :)
14:50:46 <nikhil> markwash: flwang venkatesh: one thing we observed about the task formatted back from the db: even while doing a show, it did not have input, message, result
14:51:28 <markwash> venkatesh: would it make sense to skip created_at, updated_at, and deleted_at for the TaskInfo table?
14:51:55 <flwang> nikhil: yep, maybe we should allow update the 'input'
14:52:02 <flwang> however
14:52:04 <zhiyan> do we have plan to give 1 task have more taskinfo in future?
14:52:09 <venkatesh> markwash: yes mark that makes sense. i am trying to figure out a better solution, as it's all derived from a single ModelBase
14:52:09 <zhiyan> venkatesh: ^
14:52:19 <markwash> venkatesh: yeah, that may make it complicated
14:52:36 <flwang> given the task will be run immediately, so I doubt if there is a chance to update 'input'
14:52:43 <markwash> venkatesh: there might be some stuff in openstack common to make this easier though. . that is, they might have separated some of those concerns already for us
14:53:09 <markwash> (hmm, we're running a bit low on time)
14:53:14 <rosmaita> flwang: +1 no task update, at least not now
14:53:20 <nikhil> flwang: I think we should allow all
14:53:22 <venkatesh> markwash: yes mark. there is a patch already submitted to bring in oslo/db. maybe that might help us fix this easily
14:53:27 <nikhil> flwang: we can discuss offline
14:54:00 <flwang> nikhil: sure
14:54:15 <markwash> just to kind of close things out for now
14:54:17 <markwash> #link https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:bp/async-glance-workers,n,z
14:54:36 <markwash> I had penciled in some talk about other future blueprints
14:54:40 <mclaren> markwash: o/
14:54:48 <markwash> mclaren: go for it
14:54:52 <markwash> #topic open discussion
14:54:54 <mclaren> markwash venkatesh: I took a quick look at https://review.openstack.org/#/c/49009/ ... it may not be that easy to print the image_id. All you have to work with at that point in the code is a uri which is known to be malformed.
14:55:47 <rosmaita> can i get an "amen" on what we are planning to expose in the (sparse) task list response?
14:55:56 <markwash> rosmaita: Amen!
14:56:02 <markwash> (from me at least :-)
14:56:04 <iccha> markwash: when free can u check this out https://review.openstack.org/#/c/48076/
14:56:10 <nikhil> me too
14:56:59 <markwash> iccha: sure thing
14:57:11 <flwang> rosmaita: so after discussing with the UI guys, is that the final proposal?
14:57:14 <iccha> mclaren: too ^
14:57:46 <rosmaita> flwang: it was inconclusive
14:57:46 <mclaren> iccha: I'll try :-)
14:57:47 <markwash> iccha: if you're okay with the suggestion we came up with in email, I think it's a good approach
14:58:17 <venkatesh> mclaren: yes it needs a little bit more work. we might have to throw the exception and handle it in 'migrate_location_credentials' probably
14:58:18 <flwang> rosmaita: ok
14:58:19 <iccha> yeah i mainly wanted mclaren to check it out, cause will be working with hemanth_ on it today/tomorrow
14:58:59 <rosmaita> the only thing holding me back is that including 2 times would make sorting more complicated ... is that stupid?
14:59:48 <markwash> rosmaita: are you talking about including another schema in the list schema?
14:59:55 <markwash> not sure I follow
14:59:57 <flwang> I think we don't want to sort Text columns
15:00:17 <flwang> rosmaita: are you talking about this?
15:00:42 <nikhil> we're past the hour now
15:00:57 <rosmaita> flwang: let's move to openstack-glance channel
15:00:58 <markwash> true
15:01:02 <markwash> thanks errbody!
15:01:06 <nikhil> and I missed the chance to bring up a possible bp
15:01:11 <markwash> #endmeeting