16:02:02 #startmeeting oslo
16:02:03 Meeting started Mon Oct 20 16:02:02 2014 UTC and is due to finish in 60 minutes. The chair is dhellmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:07 The meeting name has been set to 'oslo'
16:02:22 who's around to finish the discussion of meeting topics?
16:02:26 o/
16:02:48 hi, bnemec
16:02:58 I think I may have told everyone the wrong time on Friday. :-/
16:03:05 oh?
16:03:12 * dhellmann looks at meeting logs
16:03:54 Ah, I got my local time wrong, but the right UTC.
16:04:02 ah, ok, good
16:04:24 dimsum_, jd__, flaper87|afk, harlowja_away - ping?
16:05:08 bnemec: I think you're right about maybe needing > 1 session for oslo.messaging
16:05:24 I'm interested in learning more about how harlowja_away thinks we can drop the library
16:05:42 esp. if he's signing up to rewrite everything currently using it :-)
16:05:53 :-)
16:08:10 The py3 subpoint for oslo.messaging will probably be covered in the py3 session.
16:08:32 pong
16:08:49 I thought the first meeting was after the summit
16:09:14 jd__: we need to finalize our summit schedule; bnemec scheduled this slot during Friday's meeting
16:09:55 ok cool, missed that :)
16:10:56 np
16:11:16 last time I just did this, but we were supposed to be collaborating on it this time around
16:11:35 I was told jd__ was travelling too, so I really should have sent something to the list for anyone who wasn't at the meeting Friday. :-/
16:12:07 jd__: the "local" module keeps coming up. We keep thinking that we only use it in one place, but then find other places it is used (logging and rpc have something like it, for example). Do you know the full status there?
16:12:14 bnemec: ah, right, forgot jd__ was travelling
16:12:21 that doesn't explain why no one else is around :-/
16:12:37 I chalk that up to Monday.
16:12:41 yeah
16:13:18 dhellmann: I asked dimsum_ about the local module and he thought we could cover it in the graduation session.
16:13:57 we may not need a full session on it, but I need someone to write down all of the users and figure out where it should live before we get into a room in Paris
16:14:17 Yeah, makes sense.
16:14:39 dhellmann: rpc used to use it but doesn't anymore AFAIK
16:14:45 the request context is used for logging and rpc, and it's the main user, but zzzeek also had something about thread-locals come up for oslo.db recently
16:14:46 only log is using it now AFAIK
16:15:34 jd__: messaging now has a separate implementation: http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/localcontext.py
16:15:58 why? does it actually work properly?
16:17:07 I'm not seeing any calls to that module outside of oslo.messaging, so I wonder if that context is the same as the one I'm thinking of
16:17:33 can we call that a separate implementation? I mean, it's so thin it's basically just a small module used internally by oslo.messaging. I don't think it's something that should/could be outsourced
16:18:00 dhellmann: yeah, I think there the key is more hardcoded
16:18:16 well, yeah, but my point is why does that not use the fancy module we set up in the incubator? is it missing something important by not using that module?
16:18:24 having a whole lib about one usage of threading.local() really looks like overkill to me :)
16:18:47 it would be, except for the dependency management issues we'll have if we put it in one of the other libraries
16:19:13 dhellmann: well, basically oslo.messaging.localcontext would only use openstack.common.local.strong_store (which is a threading.local() instance)… really not worth it
16:19:28 it doesn't need the weak store?
16:19:32 Yeah, it really belongs in concurrency IMHO, but that pulls in oslo.config, i18n, and utils.
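For reference, the "thin module" pattern being debated here — a few functions hiding a threading.local() instance behind a module-private, randomized attribute key — fits in about a dozen lines. This is a hypothetical reconstruction for illustration, not the actual oslo.messaging localcontext code; the function and variable names are invented:

```python
import threading
import uuid

# Sketch of an oslo.messaging-localcontext-style module. The
# randomized key makes it hard for unrelated code to read or
# clobber the attribute by guessing its name.
_KEY = '_context_%s' % uuid.uuid4().hex
_STORE = threading.local()


def set_local_context(ctxt):
    """Attach a context object to the current thread."""
    setattr(_STORE, _KEY, ctxt)


def get_local_context():
    """Return the current thread's context, or None if unset."""
    return getattr(_STORE, _KEY, None)


def clear_local_context():
    """Detach the context from the current thread."""
    if hasattr(_STORE, _KEY):
        delattr(_STORE, _KEY)
```

As jd__ notes, something this thin buys dependency isolation rather than code reuse, which is why splitting it into its own library looks like overkill to him.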
:-(
16:19:33 o/
16:19:41 dhellmann: not according to http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/localcontext.py
16:20:07 I see that it doesn't use a weak store, but I wonder why we have a special weak store at all and whether we should be using it here, too
16:20:42 that setting for the key looks weird, too. Why do we use a random value there?
16:20:51 * dhellmann wishes some of this code had more comments
16:20:59 http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/localcontext.py#n25
16:21:13 dunno
16:21:50 dhellmann: also note that AFAIK WeakLocal is not used anywhere either
16:22:10 so the only thing that is used is the threading.local() instance in local.py
16:22:15 sigh
16:22:24 really, AFAIK this is all dead code
16:22:46 jd__: I'll bet there's something in nova depending on the weakref stuff
16:23:00 let's ask the Grep God
16:23:16 dhellmann: can't find it :)
16:23:41 this wouldn't be the first time grep didn't find it, but that it was there
16:23:55 dhellmann: I can try a test patch that removes it and see if it fails?
16:24:11 I think lockutils might have used weak references from local.py at one point.
16:24:21 we did that once with something that only rackspace was using in production
16:24:32 bnemec: interesting
16:24:56 jd__: we removed something from oslo.db that depended on a patched version of eventlet and there was a firestorm on the ML
16:25:12 O_o
16:25:15 jd__: so now I prefer to be very cautious about removing things
16:25:16 yeah
16:25:33 woauh oO
16:25:54 anyway
16:26:34 actually the local.store is used to store the context for the logger to find
16:26:37 (by logger I mean log.py)
16:26:50 but it seems that's not used anymore? who was responsible for putting the context in there?
16:27:15 I'm not sure how that part works. I was going to try to figure it out as part of graduating oslo.log
16:27:20 ok
16:27:27 I think the context class had code to inject itself?
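The strong vs. weak store distinction being questioned above can be illustrated with a small thread-local subclass that keeps only weak references, so storing a context does not extend its lifetime. This is an approximate sketch of the pattern, not the incubator's exact local.py code:

```python
import threading
import weakref


class WeakLocal(threading.local):
    """Thread-local storage holding weak references to its values.

    Reading an attribute dereferences the weakref, so it returns
    None once the stored object has been garbage collected.
    """

    def __getattribute__(self, attr):
        rval = super(WeakLocal, self).__getattribute__(attr)
        if isinstance(rval, weakref.ref):
            rval = rval()  # None if the referent was collected
        return rval

    def __setattr__(self, attr, value):
        super(WeakLocal, self).__setattr__(attr, weakref.ref(value))


# A strong store is just a plain threading.local(); anything
# assigned to it stays alive for the lifetime of the thread.
strong_store = threading.local()
weak_store = WeakLocal()
```

A consumer holding only the weak store's reference sees the value vanish as soon as the real owner drops it, which is presumably why a request context that must outlive the request handler would go in the strong store instead.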
16:27:38 (into the logging thread-local context, that is)
16:28:06 dhellmann: no :(
16:28:15 not sure that's used anymore, in fact
16:28:43 https://github.com/openstack/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L174 is what I was thinking of regarding lockutils, but I think it's actually unrelated now that I look closer.
16:28:52 dhellmann: I'll take the #action to build a test patch on Nova to remove local.py
16:28:59 jd__: ok
16:29:01 see if it breaks anything
16:29:13 let's try to remove it from projects that think they need it before we remove it from the incubator
16:29:21 (I wrote the patch while we were chatting; I'm testing it right now)
16:29:38 I guess the worst case scenario is someone hollers and we restore it.
16:30:08 or we gag him
16:30:23 right
16:31:15 So what about https://github.com/openstack/nova/blob/master/nova/context.py#L97
16:31:24 I think ^ is what dhellmann was talking about.
16:31:31 ah! yes, that's it
16:33:07 bnemec: I suppose if we put the threading.local() instance in oslo.log, we could do that setup from the library instead of from the subclass
16:33:19 ^from the library^from the base class
16:34:07 dhellmann: You mean if everyone was inheriting our Context class?
16:34:21 does it seem weird to put the context class in oslo.log when it is also used for web requests? should we just have oslo.context for that?
16:34:23 bnemec: yes
16:35:24 the context provides features for oslo.log and oslo.messaging, which makes finding the logical home for it difficult
16:35:41 Yeah, log does seem like kind of a weird place for it to live.
16:35:58 maybe it should be stand-alone, even though it would be a tiny library
16:36:53 Yeah, that's probably safest. If it didn't add a bunch of bug/blueprint/release overhead I'd say there's no question.
16:37:18 I'll work on updating the blueprints I've already done related to context
16:37:26 So then local would live in oslo.log, and context would be its own lib?
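The pattern at the nova link above — a context whose constructor stores the instance in a module-level thread-local — is what dhellmann suggests moving into a shared base class. A hypothetical sketch of that arrangement (all names invented for illustration; the eventual oslo.context API may differ):

```python
import threading

_store = threading.local()


def get_current():
    """Return the last context stored for this thread, if any."""
    return getattr(_store, 'context', None)


class RequestContext(object):
    """Base class that registers itself as the thread's current
    context, so every subclass gets the behavior without carrying
    its own thread-local bookkeeping."""

    def __init__(self, user=None, tenant=None, overwrite=True):
        self.user = user
        self.tenant = tenant
        # The nova-style "inject into thread-local" step, done once
        # here instead of in each project's subclass.
        if overwrite or get_current() is None:
            self.update_store()

    def update_store(self):
        _store.context = self


class ProjectContext(RequestContext):
    """Hypothetical project subclass; injection comes for free."""

    def __init__(self, user=None, tenant=None, is_admin=False):
        super(ProjectContext, self).__init__(user=user, tenant=tenant)
        self.is_admin = is_admin
```

With this shape, a consumer like oslo.log would simply call get_current() when formatting a record, rather than each project pushing its context into the logging code — matching the "from the outside, you would only query the context" description later in the discussion.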
16:37:45 And context would have a dep on oslo.log so it can inject itself.
16:38:33 the other way around - context would create a public threading.local() instance and store itself there, and oslo.log would look in oslo.context to get the current context
16:38:49 actually we'd probably make a couple of functions to hide the threading.local() object
16:38:55 like oslo.messaging has
16:39:14 except from the outside, you would only query the context, because it would automatically be stored
16:40:17 Ah, okay. I think that makes sense.
16:40:31 let's talk functional testing
16:40:42 Yeah, that's one I needed to ask you about.
16:41:05 we need to set up some jobs for oslo.messaging and oslo.db that use real services like mysql and rabbit but don't require all of openstack to be installed to run, like tempest does
16:41:29 I haven't thought much further than that about how to do it, so we need someone to drive that work
16:41:43 I'm not sure if we're going to get that done this cycle, given the other work we're trying to do
16:41:50 Okay, kgiusti expressed interest in helping.
16:42:25 I think there are some other really important things to do with oslo.messaging first -- mainly clearing our backlog and building up a strong review team
16:42:45 I'm inclined to punt this to L, frankly
16:43:12 Yes, we are critically short on reviewers.
16:44:11 oslo.db kind of already has functional testing with their opportunistic test cases, right?
16:44:31 yeah, true
16:45:34 how about the quota library?
16:46:05 should we encourage a from-scratch library, an incubated library, something happening directly in neutron and then being turned into a library later, or some other approach?
16:46:21 * bnemec was not a fan of the existing quota code last time he looked
16:46:56 I'm content to let salv-orlando handle creating something new and useful, but I'm not sure where that work should happen. I worry a bit about it happening inside neutron.
16:47:30 dhellmann: because everything that grows inside neutron is poisonous?
16:47:33 just kidding.
16:47:49 Yeah, I guess the tradeoff is that it would be easier to develop in a single project and then import to incubator, but then we're more likely to end up with project-specific bits in it.
16:47:57 salv-orlando: no, because libs that grow inside of any project tend to be project-specific in unfortunate ways
16:48:19 but really I can do it wherever you feel is more appropriate. I can do that in oslo-incubator and periodically push things into neutron to validate them
16:48:28 assuming the other neutron cores are on board with this idea
16:48:35 salv-orlando: if we do it in neutron directly, I think it will take 3 cycles to graduate (neutron, incubator & another project adopting, lib)
16:48:57 dhellmann: agree, this is the drawback I identified as well.
16:49:03 salv-orlando: yeah, the idea with the incubator is you have a tight feedback loop with the consumers, so you might push each patch right into neutron, too
16:49:31 dhellmann: cool. Let's say that I will prepare a specification for oslo incubator then.
16:49:52 if we go straight to a library (bypassing the incubator and neutron) we get a library faster, but may have to live with an API we don't like if we have to make changes to allow projects other than neutron to use it
16:49:53 unless you feel it is better to discuss that at the summit first (I reckon ML + gerrit is enough)
16:50:25 dhellmann: I think the incubator path has worked so far, so let's keep doing that.
16:50:32 salv-orlando: we also need to figure out how an incubated module adds to the neutron db schema
16:50:40 I'm not sure we've done that before?
16:51:03 That sounds a little scary.
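The quota discussion below converges on an enforcement library that owns no database tables of its own and delegates limits and usage to the hosting project (or a service such as keystone). A speculative sketch of such a pluggable design — every name here is invented for illustration, since the spec had not been written at this point:

```python
class OverQuota(Exception):
    """Raised when a request would exceed a tenant's limit."""


class QuotaDriver(object):
    """Interface the hosting project implements; the enforcement
    library itself adds nothing to the database schema."""

    def get_limit(self, tenant_id, resource):
        raise NotImplementedError

    def get_usage(self, tenant_id, resource):
        raise NotImplementedError


class QuotaEnforcer(object):
    """Project-agnostic enforcement logic over a supplied driver."""

    def __init__(self, driver):
        self.driver = driver

    def enforce(self, tenant_id, resource, requested=1):
        """Raise OverQuota if granting 'requested' more units of
        'resource' would push the tenant past its limit."""
        limit = self.driver.get_limit(tenant_id, resource)
        usage = self.driver.get_usage(tenant_id, resource)
        if usage + requested > limit:
            raise OverQuota('%s: %d used + %d requested > limit %d'
                            % (resource, usage, requested, limit))


class InMemoryDriver(QuotaDriver):
    """Toy driver a consuming project might supply in its tests."""

    def __init__(self, limits, usage):
        self.limits = limits
        self.usage = usage

    def get_limit(self, tenant_id, resource):
        return self.limits[resource]

    def get_usage(self, tenant_id, resource):
        return self.usage.get(resource, 0)
```

The driver boundary is what lets the library "assume the project where you're plugging it already has the models with a given interface", as salv-orlando puts it, instead of shipping migration scripts of its own.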
16:51:07 yeah
16:51:14 so does doing it in the library, though, actually
16:51:16 dhellmann: I was planning to avoid this… by having a quota enforcement module, which assumes the project where you're plugging it already has the models
16:51:19 which a given interface
16:51:25 sorry, with a given interface
16:51:49 salv-orlando: so you would provide classes to use, but not the migration scripts to create the tables?
16:52:52 salv-orlando: there are a couple of tricky aspects to figuring out packaging for this. do you think you'll have time to prepare an oslo spec before the summit?
16:53:23 dhellmann: yes, I will be able to. Also, I would defer conversation on the db migrations to the ML as it's going to steal too much meeting time
16:54:26 salv-orlando: we can see about that one. That's the part I'm most worried about, because we have lots of experience evolving APIs but little with database schema sharing
16:55:05 Just to complicate things further, there was some discussion of storing quotas in a completely separate project too, wasn't there?
16:55:08 salv-orlando: ok, we'll watch for the spec
16:55:08 dhellmann: I think that having libraries that extend a db schema is probably not a good design idea in the long term.
16:55:21 bnemec: yeah, keystone I think
16:55:37 salv-orlando: yeah, does that mean a separate quota service, then?
16:56:03 bnemec: yes. And this is why I was thinking of having a quota enforcement library which would call a quota manager class that can then interface with keystone or something else
16:56:09 salv-orlando, dhellmann, that was a longer-term thought iirc
16:56:14 the separate quota service
16:56:28 dhellmann: no quota service - that's the whole point. We need to keep things simple.
If people need a quota management & reservation service, they will use blazar
16:56:49 * dhellmann makes a note to look up what blazar is after the meeting
16:57:05 Heh, I'm reading their wiki page right now :-)
16:57:11 dhellmann: blazar, or the project formerly known as murano
16:57:36 * morganfainberg can't keep up with all the project names
16:57:49 salv-orlando: if there is a quota enforcement library, and it needs to use the database, it also needs to own the part of the schema it manages, doesn't it?
16:58:07 we'll definitely need a session on the quota management
16:58:13 dhellmann, ++
16:58:21 +1
16:58:26 I was just typing the same thing. :-)
16:58:34 ok, so we have 1 slot left
16:58:38 dhellmann: a session perhaps, but the quota enforcement library can exist without adding things of its own to the schema.
16:58:42 we could use that on taskflow
16:58:43 that's all from me ;)
16:59:04 salv-orlando: ok, I can't imagine how that would work so I'll wait for your spec :-)
16:59:56 dhellmann: On Friday, harlowja_away was okay with rolling some of his taskflow topics into the oslo.messaging session.
17:00:41 yeah, I think the problem there is harlowja_away doesn't like oslo.messaging so doesn't want to use it in taskflow, and I'm trying to understand why taskflow then recreated some of the abstractions that oslo.messaging has and how we can avoid having 2 implementations of RPC
17:01:08 but that's where we started, and we're out of time now so we should clear the room
17:01:19 bnemec, salv-orlando, jd__, morganfainberg: thanks for your help today!
17:01:24 * bnemec -> openstack-oslo
17:01:30 dhellmann: np
17:01:32 bnemec: yep
17:01:36 let's move to our room to finish up
17:01:38 #endmeeting