16:01:23 <krotscheck> #startmeeting Storyboard
16:01:24 <openstack> Meeting started Mon Jan 19 16:01:23 2015 UTC and is due to finish in 60 minutes.  The chair is krotscheck. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:28 <openstack> The meeting name has been set to 'storyboard'
16:01:31 <NikitaKonovalov> o/
16:01:48 <SergeyLukjanov> o/
16:02:09 <krotscheck> So, before I post an agenda, what quorum do we actually need?
16:02:55 <krotscheck> We’ve got 3/4 cores here.
16:02:58 <krotscheck> That works for me.
16:03:03 <krotscheck> Agenda: https://wiki.openstack.org/wiki/StoryBoard#Agenda
16:03:14 <krotscheck> #topic Actions from last week
16:03:15 <krotscheck> None!
16:03:23 <rcarrillocruz> hey
16:03:24 <rcarrillocruz> so
16:03:34 <krotscheck> Hey!
16:03:36 <rcarrillocruz> it seems there's a bunch of people off today?
16:03:39 <jedimike> o/
16:03:51 <rcarrillocruz> wasn't aware of the US having a bank holiday, what is it?
16:03:55 <krotscheck> rcarrillocruz: There aren’t. ttx is on a plane, that’s the only person we’re missing.
16:04:05 <jedimike> i'm here, but lagging, builders are here doing things to the house
16:04:09 <krotscheck> It’s Martin Luther King day in the U.S.
16:04:27 <krotscheck> #topic Urgent Items: Truncated OAuth Tokens Work (rcarrillocruz)
16:04:46 <rcarrillocruz> ah ok
16:04:56 <yolanda> hi
16:05:03 <yolanda> was in wrong window...
16:05:06 <krotscheck> rcarrillocruz: Any progress on that?
16:05:28 <CTtpollard> hi
16:06:14 <rcarrillocruz> nope, haven't looked at it yet, sorry
16:06:24 <krotscheck> Alrightey, postponed to next week.
16:06:26 <krotscheck> #topic User Feedback
16:06:29 <krotscheck> Any new user feedback?
16:06:53 <krotscheck> Honestly, the UI has been pretty stagnant. We’ve got a large backlog of UI issues and I’m still tied up with email :/
16:07:20 <CTtpollard> my work allocation for our internal storyboard instance has dried up atm so I don't have any new feedback
16:07:28 <krotscheck> No worries, CTtpollard
16:07:30 <krotscheck> So we’ll move on.
16:07:37 <krotscheck> #topic Discussion Topics
16:07:46 <krotscheck> Nothing on the agenda, anyone have something they want to discuss?
16:07:49 <rcarrillocruz> yeah
16:08:01 <krotscheck> The floor is yours.
16:08:03 <rcarrillocruz> i wanted to show the deferred queue/replay events
16:08:21 <rcarrillocruz> so, rabbitmq has something called exchange to exchange binding
16:08:30 <rcarrillocruz> i.e., you can bind an exchange to another exchange
16:08:40 <krotscheck> yep
16:08:54 <rcarrillocruz> so, in order to have a queue that logs all events (a deferred queue for processing) and to give websocket clients the ability to replay events
16:09:00 <rcarrillocruz> we can do something like this:
16:09:01 <rcarrillocruz> http://stackoverflow.com/questions/6148381/rabbitmq-persistent-message-with-topic-exchange
16:09:11 <rcarrillocruz> (look at the diagram)
16:09:34 <rcarrillocruz> a fanout exchange that we push all events to, and that's bound to a queue that logs ALL events
16:10:00 <rcarrillocruz> and to another topic exchange, where we create on-demand queues with the given filters (i want just tasks, with id 1 for example)
16:10:07 <rcarrillocruz> what do you think?
16:10:28 <rcarrillocruz> if a websocket client triggers 'replay history', we bind the client to the deferred queue
16:10:57 <krotscheck> I like it. We already have a topic exchange, however the topic format won’t yet allow us to subscribe to specific events.
16:10:59 <rcarrillocruz> and upon binding, all events are pushed to the client. We could filter on the websocket client so that only events meeting a date criterion are shown
16:11:05 <krotscheck> All it filters on right now is type.
16:11:25 <rcarrillocruz> yeah, that topic format needs to be changed to address the pubsub spec
16:11:43 <rcarrillocruz> all good then?
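As a rough illustration of the layout rcarrillocruz describes (a fanout exchange feeding both a catch-all deferred queue and a topic exchange that filtered, on-demand queues hang off), here is a minimal Python sketch using the pika client; the exchange and queue names and the routing-key scheme are hypothetical, not StoryBoard's actual configuration.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Fanout exchange that every event gets published to.
    channel.exchange_declare(exchange='events.fanout', exchange_type='fanout')

    # Topic exchange fed from the fanout via an exchange-to-exchange binding;
    # filtered, on-demand queues are bound to this one.
    channel.exchange_declare(exchange='events.topic', exchange_type='topic')
    channel.exchange_bind(destination='events.topic', source='events.fanout')

    # Durable "deferred" queue that logs ALL events, bound straight to the fanout.
    channel.queue_declare(queue='events.deferred', durable=True)
    channel.queue_bind(queue='events.deferred', exchange='events.fanout')

    # Example of an on-demand, filtered queue (say, only events for task 1),
    # assuming a hypothetical routing-key scheme of '<resource>.<id>.<verb>'.
    result = channel.queue_declare(queue='', exclusive=True)
    channel.queue_bind(queue=result.method.queue,
                       exchange='events.topic',
                       routing_key='task.1.*')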
16:12:20 <NikitaKonovalov> rcarrillocruz: is all the history relayed on request?
16:12:27 <krotscheck> Yeah, let’s talk a bit more about that though. We may be able to add some kind of an ack/nack layer to websocket clients so they themselves can manage which events they’ve handled.
16:12:49 <NikitaKonovalov> it could be a huge amount of data then
16:12:56 <rcarrillocruz> NikitaKonovalov: yes, so we need some filtering mechanism, since a 'replay' would contain a param saying 'just replay since day X from time Y'
16:13:46 <krotscheck> Rather than thinking of it from a ‘replay history’ perspective, we could think of it from an atomic level and assert that individual messages were received.
16:13:57 <krotscheck> That way we can forget about anything that wasn’t received.
16:14:00 <krotscheck> Sorry
16:14:10 <krotscheck> That way we can forget about anything that WAS received.
16:14:31 <rcarrillocruz> so you mean keeping a list of what's being sent per client connection?
16:15:04 <rcarrillocruz> so when the client triggers 'replay' we only send what was NOT sent?
16:15:06 <krotscheck> rcarrillocruz: Well, I say create a persistent queue that collects messages and has some nice high upper bound, one that can store messages through a day or so of zuul being down.
16:15:26 <rcarrillocruz> ah
16:15:33 <rcarrillocruz> i think rabbitmq queues have a TTL
16:15:41 <krotscheck> And on our version of rabbit that’ll slowly but surely accumulate, until a client connects and starts acknowledging that it’s consuming messages.
16:15:43 <rcarrillocruz> i.e. keep stuff just for X days
16:16:06 * rcarrillocruz digging
16:16:42 <rcarrillocruz> https://www.rabbitmq.com/extensions.html
16:16:47 <krotscheck> I figure, that way we make rabbit do all of the history management for us, and don’t have to implement some crazy on-disk cache.
16:16:55 <rcarrillocruz> check for Per-Queue Message TTL
16:17:00 <rcarrillocruz> yep
16:17:22 <rcarrillocruz> so let's agree that, sure, a client can trigger replay on-demand, but not since the beginning of mankind
16:17:35 <rcarrillocruz> just up to a day or something (obv configurable)
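The Per-Queue Message TTL extension referenced above is set as a queue argument at declare time, so the capped replay queue could look roughly like the sketch below; the queue name and the one-day cap are only placeholders for whatever ends up being configurable.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Cap the hypothetical global replay queue at roughly one day of messages;
    # x-message-ttl is in milliseconds, and older messages are dropped by rabbit.
    ONE_DAY_MS = 24 * 60 * 60 * 1000
    channel.queue_declare(queue='events.replay',
                          durable=True,
                          arguments={'x-message-ttl': ONE_DAY_MS})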
16:18:10 <krotscheck> rcarrillocruz: I don’t think it’s our job to cache messages in case a client dies horribly.
16:18:23 <rcarrillocruz> nod
16:18:41 <krotscheck> All we can really guarantee is that we’ll send them every message, and assume they can handle it sanely
16:20:03 <krotscheck> rcarrillocruz: But with that said, we can definitely create two queues, one as a replay cache and the other as the consumer cache. I’m just concerned about memory
16:20:26 <rcarrillocruz> yeah, i thought about that also
16:20:46 <rcarrillocruz> creating a 'backup' queue when the websocket is opened and the normal queue is created
16:21:12 <rcarrillocruz> but this is twice the memory... so if we have a lot of consumers it can quickly kill the instance
16:21:32 <NikitaKonovalov> well, then we could keep only a very short history, so that the client can recover from a short outage or a network issue
16:21:48 <rcarrillocruz> so i lean towards having a global replay queue that is capped (as you say, a day or something)
16:21:55 <NikitaKonovalov> I don't see a case where a client needs events for a whole day
16:21:58 <krotscheck> I think we have two use cases here. One is replay of already-handled messages, while the other is guaranteed delivery.
16:21:58 <rcarrillocruz> and it's the client that says 'ok, replay me everything from the last 3 hours'
16:22:09 <krotscheck> rcarrillocruz: That makes sense.
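One way to read the "forget anything that WAS received" idea is a consumer using manual acknowledgements, so the broker only discards a message once the client has acked it and redelivers anything unacked after a drop; a rough pika sketch, with the queue name and handler being hypothetical:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    def on_event(ch, method, properties, body):
        handle_event(body)                              # hypothetical application handler
        ch.basic_ack(delivery_tag=method.delivery_tag)  # only now is the message dropped

    channel.basic_consume(queue='events.replay',        # hypothetical replay queue
                          on_message_callback=on_event,
                          auto_ack=False)               # manual acks, per the discussion
    channel.start_consuming()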
16:22:56 <rcarrillocruz> ok, i think we have a plan here
16:23:23 <rcarrillocruz> second topic: refresh tokens. The more i think about it, the more i think i should not handle this (i'm only working on the backend thing)
16:23:27 <krotscheck> rcarrillocruz: If there’s a global replay queue, how do we guarantee that the queue will remain full for other clients, if one client drains it?
16:24:28 <rcarrillocruz> i think there's a persistence setting for messages for that
16:24:31 <krotscheck> kk
16:24:31 <rcarrillocruz> but it's a good point
16:24:45 <rcarrillocruz> let's put a work item for me to have a POC on this idea
16:24:52 <krotscheck> You got it
16:24:53 <rcarrillocruz> and i'll get back to you at next week's meeting
16:24:58 <NikitaKonovalov> I thought a persistent queue means the messages are kept until TTL
16:25:11 <krotscheck> #action rcarrillocruz Figure out global replay queue edge cases.
16:25:31 <krotscheck> NikitaKonovalov: I think none of us actually know the actual behavior of the system right now :)
16:25:42 <krotscheck> rcarrillocruz: So, refresh tokens.
16:25:43 <rcarrillocruz> heh, indeed
16:25:46 <rcarrillocruz> so yeah
16:25:58 <krotscheck> #topic Discussion: Subscription API Refresh Tokens
16:26:04 <rcarrillocruz> refresh tokens: i think that should be handled on the frontend, not the backend (which is what i'm working on right now)
16:26:20 <rcarrillocruz> i've been looking at the code and the refresh code is mainly done in the SB-webclient
16:26:44 <krotscheck> That mostly makes sense. What cases are we looking at here?
16:26:48 <rcarrillocruz> i.e. the client holds a refresh token from the OAuth endpoint; when the token expires, it uses the refresh token to get a new one
16:26:55 <rcarrillocruz> but nothing on the backend
16:27:02 <rcarrillocruz> the backend simply cares about tokens
16:27:16 <rcarrillocruz> that it is a Bearer token, it's in db storage and it's valid
16:27:20 <krotscheck> Ok, so if I connect with a valid token and get a socket, and then that token expires…
16:27:24 <rcarrillocruz> this raises another thing:
16:27:40 <rcarrillocruz> we should have a section in the SB webclient for streaming
16:27:41 <rcarrillocruz> :-)
16:27:51 <rcarrillocruz> something i have no clue about there, i'm not a frontend guy
16:27:54 <NikitaKonovalov> if a client notices that its token expires soon, it may refresh it through REST and re-establish the connection
16:28:25 <krotscheck> NikitaKonovalov: I agree. What should happen on the socket connection handler serverside though? Should it drop the connection?
16:28:26 <rcarrillocruz> yolanda: would you work on this with me?
16:28:40 <krotscheck> If we have ack/nack on the client, a suddenly dropped connection wouldn’t result in data loss.
16:28:46 <zaro> hi
16:28:48 <yolanda> rcarrillocruz, sure
16:29:11 * yolanda cannot attend 100% to this meeting, tied with on-call issues
16:29:32 <rcarrillocruz> krotscheck: so the way i have it right now is to close the websocket if check_access_token is not cool
16:29:41 <krotscheck> yolanda: No worries.
16:30:04 <rcarrillocruz> so, access tokens, but i don't check anything on the backend for refresh tokens
16:30:07 <krotscheck> And then leave it for the webclient to get a good token.
16:30:15 <rcarrillocruz> yep
16:30:17 <NikitaKonovalov> krotscheck: what about sending a warning while the token is still valid
16:30:26 <rcarrillocruz> NikitaKonovalov: good point
16:30:39 <NikitaKonovalov> if a client does not care it will be disconnected
16:30:40 <krotscheck> Yeah, building a streaming handler for oauth tokens is basically trying to rewrite the oauth spec. Let’s not do that.
16:30:46 <rcarrillocruz> we can check for token validity, if it's going to expire we return to the websocket client 'hey, your token is gonna die'
16:31:08 <rcarrillocruz> but dunno...i think we should leave that to the frontend
16:31:19 <rcarrillocruz> a web browser websocket thingy can handle this
16:31:31 <rcarrillocruz> for command line websocket clients, we can leave that to implementors
16:31:33 <NikitaKonovalov> rcarrillocruz: sure the consumer should care, not the backend
16:31:35 <krotscheck> NikitaKonovalov: I don’t even think that’ll be necessary. If a client is disconnected, they’ll try to reconnect, get a 401, trigger the refresh token flow, then reconnect and keep consuming where they left off.
16:31:37 <rcarrillocruz> i really think it's out of the scope
16:32:37 <krotscheck> And as long as the server queue remembers where we left off, we’re good.
16:32:50 <NikitaKonovalov> krotscheck: then let's send a "Disconnecting for expired token" message before the connection is physically dropped
16:33:05 <NikitaKonovalov> so it would not look like an unexpected error
16:33:21 <rcarrillocruz> krotscheck: so, that implies we have a grace period for a websocket connection death. meaning, we leave the associated queue open for some time, just in case they reconnect?
16:33:38 <krotscheck> NikitaKonovalov: That’s fair. That way the API explains itself.
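The reconnect flow krotscheck sketched (connection drops, the client gets a 401, runs the refresh-token grant, then reconnects) could look something like the following on the client side; the token URL, connect_stream, handle_event and AuthenticationError are hypothetical stand-ins, and only the OAuth2 refresh_token grant itself is standard.

    import requests

    TOKEN_URL = 'https://storyboard.example.org/api/v1/openid/token'  # hypothetical

    def refresh_access_token(refresh_token):
        # Standard OAuth2 refresh_token grant.
        resp = requests.post(TOKEN_URL, data={'grant_type': 'refresh_token',
                                              'refresh_token': refresh_token})
        resp.raise_for_status()
        return resp.json()['access_token']

    def consume_stream(access_token, refresh_token):
        while True:
            try:
                # connect_stream() is a hypothetical helper that opens the
                # websocket with the bearer token and yields events until the
                # server closes the connection (e.g. on token expiry).
                for event in connect_stream(access_token):
                    handle_event(event)               # hypothetical handler
            except AuthenticationError:               # hypothetical 401 signal
                access_token = refresh_access_token(refresh_token)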
16:34:08 <krotscheck> rcarrillocruz: Is that something the client could ask for when they connect?
16:34:23 <krotscheck> A queue TTL perhaps?
16:34:28 * krotscheck ponders this.
16:34:34 <krotscheck> That’s going to get dangerous :/
16:34:35 <rcarrillocruz> krotscheck: it could be done. When the websocket is opened and the queue is created, we can return to the client 'ok, this is your queue ID'
16:34:54 <rcarrillocruz> so if the websocket dies, the client at least knows the id to bind to again when the connection is established again...
16:35:36 <rcarrillocruz> but we must weigh how long we keep those queues around....to not have leaked queues filling the instance memory...
16:36:34 <krotscheck> Exactly, that’s what worries me.
16:36:46 <krotscheck> That would fix our replay queue problem though :)
16:36:50 <NikitaKonovalov> rcarrillocruz: is it an expensive operation to create a queue? if not, then it makes more sense to drop and recreate it every time
16:37:05 <NikitaKonovalov> and replay will help if some messages were missed
16:37:19 <rcarrillocruz> hmm
16:37:31 <rcarrillocruz> let's see if the client drain of the global replay queue is doable
16:37:49 <rcarrillocruz> if not, we can look at keeping on-demand queues around when the websocket dies...
16:37:53 <rcarrillocruz> decisions decisions...
16:37:55 <rcarrillocruz> :-)
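If per-client queues do get kept around for a reconnect grace period, RabbitMQ's queue expiry argument could bound the leak rcarrillocruz is worried about; a sketch, where the ten-minute grace period, naming scheme, and exchange are made up for illustration:

    import uuid

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Per-client queue that rabbit deletes after ten minutes of being unused,
    # so a dropped client can rebind by queue id within the grace period while
    # abandoned queues don't pile up in broker memory.
    GRACE_PERIOD_MS = 10 * 60 * 1000
    queue_id = 'client-%s' % uuid.uuid4()   # returned to the websocket client
    channel.queue_declare(queue=queue_id,
                          arguments={'x-expires': GRACE_PERIOD_MS})
    channel.queue_bind(queue=queue_id,
                       exchange='events.topic',  # hypothetical topic exchange
                       routing_key='task.#')     # example client filter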
16:38:28 <krotscheck> Have fun with that :). Shall we move on?
16:38:45 <rcarrillocruz> yeah, sorry to hijack, thx folks!
16:38:46 <yolanda> i wanted feedback for that
16:38:47 <yolanda> https://review.openstack.org/147105
16:39:16 <yolanda> so that's a button to remove all recent events; the question raised was whether there is a need to open a confirmation box or not
16:39:54 * krotscheck doesn’t have an opinion, but can appreciate the frustration of an accidental click.
16:40:04 <NikitaKonovalov> yolanda: what about a confirmation box with a "Don't ask me again" option?
16:40:29 <yolanda> NikitaKonovalov, and store the pref in user settings?
16:40:47 <NikitaKonovalov> yes, that's the good place to store options
16:40:59 <yolanda> i was thinking of the modal box, but i also appreciate the situation where you have to clear tons of events and click through a modal box every time
16:41:05 <zaro> yolanda: maybe work on that as a separate change?
16:41:34 <yolanda> zaro, an incremental change looks fine to me, yes
16:41:45 <yolanda> but i wanted to get feedback from people
16:42:24 <krotscheck> It feels like we’re undecided, because none of us know how it would feel to use it.
16:42:47 <yolanda> ttx would have an opinion too
16:42:55 <zaro> yeah, that would be my preference. just merge this one and work on the user preference as another change.
16:43:03 <yolanda> so i can complete the change without any confirmation box, then wait for feedback, sounds good
16:43:12 <krotscheck> kk
16:43:15 <krotscheck> I’m ok with that
16:43:26 <NikitaKonovalov> I'm ok also
16:43:32 <yolanda> i'm on call this week but i'll try to find a hole to finish that
16:43:39 <rcarrillocruz> +1
16:43:46 <krotscheck> I’ve got a couple of thoughts on improving the event list now that we have story_ids in all relevant events.
16:44:10 <krotscheck> What if we grouped the results we get from the server by story?
16:44:20 <krotscheck> And then say: Here’s all the changes that happened in story X.
16:44:32 <krotscheck> Someone commented, someone updated status, etc etc.
16:44:36 <NikitaKonovalov> krotscheck: sounds good
16:44:46 <krotscheck> And then only offer one ‘remove’ button which flushes all relevant events.
16:45:10 <NikitaKonovalov> and the ui could have a hide/show toggle so the results are displayed in a more compact way
16:45:17 <krotscheck> yep
16:45:27 <krotscheck> 12 things happened to story X: (show more)
16:45:42 <yolanda> sounds good to me, yes
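The grouping krotscheck describes is essentially a bucket-by-story pass over the flat event list before rendering; an illustrative Python sketch, where the event fields and sample data are assumptions rather than the actual API schema:

    from collections import defaultdict

    # Assumed shape of events coming back from the API; field names are guesses.
    events = [
        {'story_id': 1, 'created_at': '2015-01-19T16:01:00', 'event_type': 'user_comment'},
        {'story_id': 1, 'created_at': '2015-01-19T16:05:00', 'event_type': 'task_status_changed'},
        {'story_id': 2, 'created_at': '2015-01-19T16:03:00', 'event_type': 'story_created'},
    ]

    def group_events_by_story(events):
        groups = defaultdict(list)
        for event in events:
            groups[event['story_id']].append(event)
        # Stories with the most recent activity first.
        return sorted(groups.items(),
                      key=lambda item: max(e['created_at'] for e in item[1]),
                      reverse=True)

    for story_id, story_events in group_events_by_story(events):
        print('%d things happened to story %s' % (len(story_events), story_id))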
16:45:56 <krotscheck> Coool. Any other discussion topics?
16:46:04 <yolanda> what do you think about sorting that? by story id?
16:46:11 <yolanda> by latest update of some event?
16:47:05 <krotscheck> I’d start with just sorting by the order they arrive in.
16:47:05 <zaro> i like that.
16:47:06 <NikitaKonovalov> yolanda: I think the story with the latest event should be on top, so yes by update times
16:47:23 <krotscheck> Which is chronologically by date.
16:47:27 <zaro> i almost want an entire page dedicated to events that i can sort and chop up any way i want
16:47:28 <NikitaKonovalov> yep
16:48:15 <yolanda> makes sense to me, but let's first have something simple in the dashboard, then we could add an independent events screen
16:48:49 <krotscheck> yep
16:48:56 <krotscheck> Ok, let’s move on to ongoing work.
16:49:02 <zaro> so why fuss over how it should look or what it should return? just leave it user-defined.
16:49:31 <krotscheck> zaro: I guess I don’t understand what you mean by that.
16:49:55 <krotscheck> Because to be able to have a user define something, the UI has to be able to render it that way first.
16:50:12 <zaro> leave it as is. make something that lets users sort and filter whichever data they want out of it
16:51:03 <krotscheck> zaro: Ok, do you want to take a stab at putting that together?
16:51:19 <zaro> hmm, good point, not atm
16:51:25 <krotscheck> :D
16:51:32 <krotscheck> Alright, moving on.
16:51:33 <krotscheck> #topic Ongoing Work (rcarrillocruz)
16:51:47 <krotscheck> We more or less covered your stuff in discussion, anything else you want to add?
16:51:59 <rcarrillocruz> just what was discussed earlier, nothing else, nope
16:52:03 <krotscheck> Coool
16:52:12 <krotscheck> #topic Ongoing Work (jedimike)
16:52:23 <krotscheck> Eaten by bandits?
16:53:12 <krotscheck> I guess he’s not here.
16:53:20 <krotscheck> #topic Ongoing Work (NikitaKonovalov)
16:53:26 <krotscheck> How’s the client API coming?
16:53:47 <NikitaKonovalov> The Stories/Tasks support is in progress
16:54:15 <NikitaKonovalov> what I've noticed is that we cannot access tasks as a story subresource
16:54:49 <NikitaKonovalov> so I'm now refactoring API controllers to support both /tasks and /stories/id/tasks endpoints
16:55:05 <NikitaKonovalov> at least the API will look more consistent
16:55:24 <krotscheck> NikitaKonovalov: Ironic does something interesting with that, you might want to talk to that team to see how they have automatically nested resources.
16:55:58 <NikitaKonovalov> krotscheck: we already have this in a lot of places like teams/users and project_groups/projects
16:56:12 <NikitaKonovalov> and those work fine
16:56:20 <krotscheck> Works for me.
16:56:30 <NikitaKonovalov> so there is no reason why tasks would not work
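For reference, the sort of nesting being discussed can be expressed in Pecan by mounting one RestController inside another, so the same tasks controller serves both /tasks and /stories/<id>/tasks; a simplified sketch of the pattern, not the actual StoryBoard controllers (which add WSME types and database access):

    import pecan
    from pecan import rest


    class TasksController(rest.RestController):
        def __init__(self, story_id=None):
            self.story_id = story_id   # None when mounted at the top-level /tasks

        @pecan.expose('json')
        def get_all(self):
            # For /stories/<id>/tasks the id captured below is available here.
            return {'story_id': self.story_id, 'tasks': []}


    class StoryController(rest.RestController):
        def __init__(self, story_id):
            self.tasks = TasksController(story_id)   # exposes /stories/<id>/tasks


    class StoriesController(rest.RestController):
        @pecan.expose()
        def _lookup(self, story_id, *remainder):
            # Route /stories/<id>/... to a per-story controller instance.
            return StoryController(story_id), remainder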
16:56:43 <NikitaKonovalov> Also I've finally updated the tags change
16:56:51 <krotscheck> They managed to work some magic where any controller ended up being magically embedded as a subcontroller for every other controller
16:57:09 <NikitaKonovalov> krotscheck: I'll have a look
16:57:11 <krotscheck> Yay tags!
16:57:17 <NikitaKonovalov> link for tags https://review.openstack.org/#/c/114217/
16:57:24 <krotscheck> NikitaKonovalov: It might be too much magic.
16:57:37 <NikitaKonovalov> most comments resolved I hope
16:57:48 <krotscheck> Yay tags :)
16:58:05 <krotscheck> Is that all?
16:58:14 <NikitaKonovalov> all from me
16:58:16 <krotscheck> #topic Ongoing Work (yolanda)
16:58:59 <yolanda> krotscheck, sorry, i'm in 2 conversations at the same time
16:59:04 <yolanda> something is on fire here
16:59:13 <krotscheck> Ok, we’ll let you pass then :)
16:59:19 <krotscheck> #topic Ongoing Work (krotscheck)
16:59:34 <krotscheck> I’ve been working on various utilities necessary for email & simplification.
16:59:57 <krotscheck> The email templating engine is up, and I’ve written a new, tested algorithm to evaluate the correct list of subscribers for a given resource.
17:00:33 <krotscheck> I’ve also got a new UI where you can actually see which story a given comment was left on.
17:00:50 <krotscheck> Small change, but omg so much better.
17:01:22 <krotscheck> Anyway, my next big push is going to be to build an email outbox handler that just sends emails.
17:01:31 <krotscheck> After that I’ll build the digest and the individual email consumers.
17:01:45 <krotscheck> And now we’re out of time.
17:01:46 <krotscheck> DOH
17:01:49 <krotscheck> Thanks everyone!
17:01:55 <krotscheck> #endmeeting