14:00:35 <markmc> #startmeeting oslo
14:00:36 <openstack> Meeting started Fri Jun  7 14:00:35 2013 UTC.  The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 <openstack> The meeting name has been set to 'oslo'
14:00:40 <markmc> roll up, roll up
14:00:43 <markmc> who's around?
14:00:46 <flaper87> o/
14:00:52 <zyluo> hi
14:00:53 <markmc> #link https://wiki.openstack.org/wiki/Oslo/Messaging
14:00:55 <dhellmann> o/
14:01:08 <markmc> #link https://wiki.openstack.org/wiki/Meetings/Oslo
14:01:39 * markmc will wait a minute or two for others
14:02:22 <jd__> o/
14:02:39 * jd__ . o O (hey a meeting, let's join!)
14:02:56 <markmc> jd__, :)
14:02:59 <markmc> lbragstad, hey
14:03:08 <markmc> again, etherpad is https://etherpad.openstack.org/HavanaOsloMessaging
14:03:32 <markmc> I've pinged simo
14:03:40 <markmc> no sign of ewindisch
14:03:50 <dhellmann> #link https://etherpad.openstack.org/HavanaOsloMessaging
14:03:58 <markmc> or markwash
14:03:58 <lbragstad> markmc hey
14:04:14 <markmc> beagles maybe ?
14:04:21 <markmc> ok, let's get going
14:04:27 <lbragstad> mrodden: here?
14:04:33 <markmc> so, it's been about a month since the last meeting
14:04:44 <markmc> I've listed the updates that have happened since then in the etherpad
14:04:46 <beagles> o/
14:05:08 <markmc> I think that's pretty solid progress, but a pity we don't have unit tests or a real driver yet
14:05:23 <markmc> so, I think they're definitely the next big things to tackle
14:05:54 <markmc> anyone got any thoughts?
14:06:15 <markmc> the "executor" name was the only thing I could think of worth hashing out here
14:06:19 <lbragstad> specifically, where are you looking for unit test contribution in oslo?
14:06:23 <markmcclain> o/
14:06:25 <markmc> apart from "who's got time to do what?"
14:07:22 * dhellmann goes to remind himself what the executor does
14:07:25 <markmc> lbragstad, this is about the new messaging API we're working on here: https://github.com/markmc/oslo-incubator/tree/messaging
14:07:43 <markmc> dhellmann, polls the listener for messages, and then passes them to the dispatcher
14:07:45 <lbragstad> ok
14:07:49 <dhellmann> markmc: what's the best branch to look at in your github repo?
14:07:55 <markmc> dhellmann, the messaging branch
14:08:09 <dhellmann> #link https://github.com/markmc/oslo-incubator/tree/messaging/openstack/common/messaging
14:08:14 <flaper87> I agree that unit tests should be tackled next
14:08:16 <markmc> dhellmann, the messaging-rebasing branch is me cleaning up the patch set to something which could be submitted to gerrit
14:08:23 <dhellmann> ah, got it
14:08:25 <lbragstad> markmc do these changes get pushed to a gerrit?
14:08:42 <markmc> lbragstad, they will do soon, once we have unit tests and a real driver
14:09:13 <lbragstad> ok, cool, so for right now we are just pushing to your branch
14:09:15 <flaper87> I was thinking the other day whether it would make sense to have the executor's code as a separate package in oslo. Can we abstract it a bit more? There are some projects (and here I'm speaking about glance) that might benefit from an "Executor" package without the queue / rpc part.
14:09:16 <dhellmann> markmc: do you like "Receiver" better as a name? I don't remember where Executor came from, but it seems like it was mirroring a name used in another library somewhere.
14:09:26 <markmc> dhellmann, eventlet executor has a thread that polls for messages and then methods are dispatched in a thread per message
14:10:06 <markmc> dhellmann, I think the idea you had was to mirror http://www.python.org/dev/peps/pep-3148/#executor
14:10:22 <dhellmann> ah, yeah, that's probably it. Did I come up with that name? :-)
14:10:24 <markmc> dhellmann, i.e. we've got an executor that we use to call the dispatcher
14:10:43 <markmc> dhellmann, and whether that's a greenthread or a real thread or whatever depends on what executor you use
14:10:55 <dhellmann> ah, right
14:10:55 <markmc> dhellmann, except we do a bit more than that in our executor
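The executor loop markmc describes above — poll the listener for messages, then hand each one to the dispatcher — might look roughly like the sketch below. The class and method names are illustrative only, not the actual oslo-incubator code:

```python
class BlockingExecutor:
    """Minimal sketch of an executor: poll a listener, dispatch each message.

    An eventlet or tulip variant would replace the plain loop with
    greenthreads or coroutines; callers would only pick the class.
    """

    def __init__(self, listener, dispatcher):
        self.listener = listener
        self.dispatcher = dispatcher
        self._running = False

    def start(self):
        self._running = True
        while self._running:
            message = self.listener.poll()
            if message is None:  # listener drained or shut down
                break
            self.dispatcher.dispatch(message)

    def stop(self):
        self._running = False
```

An eventlet executor would spawn a greenthread per message at the dispatch step instead of calling the dispatcher inline, which is the "bit more" markmc alludes to.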
14:11:11 <markmc> lbragstad, right, pull requests for now
14:11:32 <dhellmann> flaper87: how would glance use this without the rpc part?
14:11:42 <markmc> flaper87, I'm not sure what about executors are reusable outside of messaging
14:12:25 <dhellmann> markmc: In your blog post (I think) you brought up a question about how much of the executor's behavior would need to be exposed to code outside of the rpc library. For cooperative multitasking, for example.
14:12:36 <flaper87> dhellmann: We're thinking about having async workers in glance and I'm afraid we'll end up writing yet another (Async)TaskExecutor implementation
14:12:58 <markmc> dhellmann, I'm thinking we have BlockingRPCServer, EventletRPCserver, TulipRPCServer, etc.
14:13:01 <dhellmann> I'd like to avoid that, but perhaps the "current executor" should be accessible and have a public API so tasks can yield the context, etc.?
14:13:50 <markmc> dhellmann, the server methods definitely need to be written with knowledge about what executor is going to be used
14:14:10 <markmc> dhellmann, e.g. if you're using the tulip executor, you'll need to write the methods as coroutines
14:14:26 <markmc> dhellmann, but I don't think you need to actually talk directly to the executor
14:14:31 <simo> markmc: can't we assume threads?
14:15:19 <dhellmann> markmc: I thought putting that info in the executor would mean we could avoid having separate server implementations
14:15:19 <markmc> dhellmann, in the case of tulip, you might want to talk to the right event loop ... but you'd pass the event loop to the TulipRPCServer constructor
14:15:22 <dhellmann> but maybe not
14:15:44 <markmc> dhellmann, basically, I'm thinking we don't expose an interface to the executor
14:15:51 <markmc> dhellmann, just the ability to choose an executor
14:16:05 <markmc> dhellmann, i.e. "I'm using tulip, give me an RPCServer that does the right thing"
14:16:18 <dhellmann> I thought the goal was to have the executor encapsulate all of that, so the server could be reused
14:16:33 <markmc> simo, no, I don't think we want to use threads by default
14:16:35 <dhellmann> do you think that's not possible?
14:16:46 <markmc> the server is being re-used
14:16:47 <markmc> totally
14:16:52 <simo> markmc: I won't insist, but I have done async programming for 10 years now
14:17:05 <simo> markmc: it's a world of hurt, not worth for anything really complex
14:17:22 <simo> it's very good for the things it is good at
14:17:26 <simo> very bad for anything else
14:17:33 <markmc> dhellmann, all EventletRPCServer does is pass executor_cls=EventletExecutor or whatever
14:17:35 <dhellmann> sorry, not "reused" but so that we'd only need one server implementation
14:17:39 <simo> and we are now in the 'else' camp in openstack
14:17:43 <simo> it's too big a project
14:17:51 <markmc> simo, let's not play the who has more experience game
14:17:58 <markmc> simo, openstack uses eventlet now
14:17:59 <simo> I am not playing it
14:18:10 <markmc> simo, we're not going to be changing that any time soon
14:18:15 <markmc> simo, I'd like to, but that's not the goal here
14:18:21 <dhellmann> yes, please, let's not derail this discussion
14:18:35 <simo> markmc: just trying to avoid having something built that will make it impossible to move away
14:18:36 <dhellmann> markmc:
14:18:40 <dhellmann> oops
14:18:52 <markmc> simo, the point is just to design the API so that we can support multiple I/O handling strategies without breaking the API
14:19:00 <markmc> simo, so, yes - exactly
14:19:03 <simo> markmc: right
14:19:07 <dhellmann> markmc: you said the server implementation would need to know about its executor, which made me think the servers were more than just a thin wrapper that set the executor
14:19:07 <simo> then I am fine
14:19:15 <markmc> simo, we can't do that by "assuming threads"
14:19:28 <flaper87> dhellmann: that got me a bit confused as well
14:19:53 <simo> markmc: well it depends, in many event systems you can simply use threads right away, with just a little bit of abstraction around a 'fake' async request
14:19:54 <markmc> dhellmann, https://github.com/markmc/oslo-incubator/blob/messaging/openstack/common/messaging/eventlet.py#L43
14:20:30 <markmc> simo, we can't dispatch methods to threads because openstack code is written to assume it's running in greenthreads
14:20:58 <simo> markmc: yeah that's the problem, we should stop assuming green threads in the medium term
14:20:59 <dhellmann> markmc: OK. I don't think that class is necessary as it stands right now. _RPCServer could be public, and the caller could give it the executor class.
14:21:06 <markmc> dhellmann, all I want is to avoid exposing the executor interface if we don't need to
14:21:08 <flaper87> markmc: is that needed? Can the Server be "executor agnostic"
14:21:13 <flaper87> ?
14:21:23 <markmc> dhellmann, so we can change it
14:21:25 <flaper87> dhellmann: you beat me
14:21:29 <markmc> dhellmann, but also, for convenience
14:21:41 <markmc> dhellmann, why make every openstack project have to provide executor_cls
14:21:49 <markmc> dhellmann, rather than just use EventletRPCServer
14:22:13 <markmc> dhellmann, would you prefer a get_eventlet_rpc_server() helper ?
14:22:19 <markmc> dhellmann, because that's fine by me too
14:22:22 <simo> just to make clear what is the concern here: secure messaging requires 'slow' operations, you want them in their own thread, not have them slow the whole server
14:22:27 <dhellmann> convenience is fine, I see the point of that, and either path works
14:22:30 <flaper87> markmc: so basically, we'd assume that if X is using eventletServer it is capable of using EventletExecutor which works better with that server implementation
14:22:32 <dhellmann> but I want to make sure we don't bake something into that EventletRPCServer that knows about eventlet since that knowledge is supposed to be in the executor
14:23:16 <dhellmann> if the server needs to do something different based on which executor it is using, then I want that to be a feature of the executor that the server uses, rather than built into the server itself
14:23:28 <markmc> yeah, that's fair
14:23:34 <dhellmann> not sure that's clear...
14:23:34 <flaper87> +1
14:23:47 <markmc> ok, let's go with helper factory methods things
14:23:56 <markmc> i.e. eventlet.get_rpc_server()
14:23:58 <dhellmann> so a wrapper class is ok, and a helper function is ok, but the helper function feels like maybe it's "safer" in that respect
14:23:59 <dhellmann> yeah
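The design just agreed on — an executor-agnostic server plus thin per-executor factory helpers — could be sketched like this. All names are illustrative stand-ins, not the real oslo-incubator API:

```python
class _Executor:
    """Base executor; subclasses carry all I/O-strategy knowledge."""
    def __init__(self, dispatcher):
        self.dispatcher = dispatcher

class BlockingExecutor(_Executor):
    name = "blocking"

class EventletExecutor(_Executor):
    # Stand-in: the real one would poll in a greenthread and
    # dispatch each message in its own greenthread.
    name = "eventlet"

class RPCServer:
    """Executor-agnostic server: nothing eventlet-specific lives here."""
    def __init__(self, dispatcher, executor_cls):
        self.executor = executor_cls(dispatcher)

def get_rpc_server(dispatcher, executor_cls=BlockingExecutor):
    return RPCServer(dispatcher, executor_cls)

# A separate "eventlet" module would then expose only a helper, so
# projects never name the executor class themselves:
def get_eventlet_rpc_server(dispatcher):
    return get_rpc_server(dispatcher, executor_cls=EventletExecutor)
```

The helper-function shape keeps eventlet knowledge out of the server class while sparing every project from passing executor_cls explicitly, which is the compromise dhellmann and markmc settle on here.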
14:24:47 <markmc> simo, having the secure messaging stuff do its slow computation in threads is workable too
14:24:49 <dhellmann> practical question: if we provide another eventlet implementation, how/when do we expect projects to move off of it? or do we?
14:25:01 <markmc> simo, definitely adds a new dimension, I'll keep it in mind for sure
14:25:12 <markmc> dhellmann, another executor you mean ?
14:25:23 <flaper87> dhellmann: as soon as there's another executor ?
14:25:49 <markmc> dhellmann, e.g. moving to tulip ?
14:25:52 <flaper87> I mean, another implementation that's a drop-in replacement for eventlet's
14:25:57 <markmc> dhellmann, each project would need to opt-in to it
14:26:05 <lbragstad> this might be an elementary question but, how is using eventlet.get_rpc_server() going to affect projects that are attempting to isolate eventlet dependencies? (e.g. Keystone)?
14:26:17 <simo> markmc: I am not going to add threading on my own
14:26:31 <dhellmann> yes
14:26:31 <dhellmann> I thought one of our goals was to start moving away from eventlet? Or do we think it's safe to keep using it?
14:26:31 <markmc> lbragstad, if you do 'import openstack.common.messaging' eventlet isn't imported
14:26:45 <simo> I would rather be able to swap out a bad greenthread implementation for a real threaded implementation in the medium term
14:26:46 <flaper87> dhellmann: I'd prefer moving away from it
14:26:47 <markmc> lbragstad, you have to do 'import openstack.common.messaging.eventlet' and then eventlet.get_rpc_server()
14:26:54 <flaper87> but I guess we'd have to first support it
14:26:57 <markmc> lbragstad, we could have e.g. openstack.common.messaging.tulip in future
14:27:00 <flaper87> and then make everybody cry :P
14:27:06 <lbragstad> ahh ok
14:27:08 <markmc> lbragstad, basically, the eventlet dependency is not baked in
14:27:10 <dhellmann> markmc: opt-in makes sense; I wasn't sure if that was still even a goal.
14:27:18 <lbragstad> so projects don't *have* to use it if they don't want to
14:27:44 <simo> dhellmann: eventlet is bad, monkey patching is bad, non-threaded services do not scale on multicore cpus
14:27:50 <simo> dhellmann: make your conclusions
14:28:05 <dhellmann> simo: python threads don't work across multiple cores, either
14:28:25 <flaper87> simo: python Threads don't use multiple cores
14:28:26 <simo> dhellmann: the gil is being worked on though, and C helpers can work in parallel
14:28:34 <markmc> simo, we use multiple processes to scale across cores
14:28:36 <markmc> ok guys
14:28:52 <markmc> let's move on :)
14:28:53 <flaper87> markmc: +1
14:28:55 <simo> ok::)
14:29:08 <markmc> so
14:29:21 <markmc> who's up with contributing to some of the outstanding tasks for the messaging stuff
14:29:23 <dhellmann> nova imports something from eventlet outside of the common code 134 times
14:29:39 <dhellmann> so I'm wondering if changing the rpc api is enough to make it possible to move off of eventlet :-)
14:30:10 <flaper87> markmc: I can help creating some of the tests
14:30:25 <flaper87> but I'd say we should split them since I'd also like to get the qpid implementation done
14:31:03 <markmc> flaper87, if you're working on the qpid driver, I'd definitely say focus on that
14:31:13 <flaper87> markmc: oke-doke
14:31:15 <markmc> flaper87, I'd love to have a real driver
14:31:36 <flaper87> btw, we'd have to sync with markwash, I think he was going to work on some tests
14:31:41 <dhellmann> my plate is pretty full right now, or I would commit to working on the rabbit driver :-/
14:31:49 <markmc> havana-2 is 2013-07-18
14:31:53 <markmc> dhellmann, ok, thanks
14:32:15 <flaper87> markmc: makes sense, I was kind of blocked by the URL stuff but it's merged now, I'll move on with it
14:32:15 <markmc> if it's just me and flaper87, I'd say it's pretty unlikely we'll have it in shape by then
14:32:25 <markmc> "in shape" == ready for e.g. nova to use it
14:32:34 <markmc> flaper87, sorry about that
14:32:49 <flaper87> markmc: no worries, didn't want to complain, just update everyone
14:32:56 <dhellmann> oh, I wanted to ask about the status of the url config discussion
14:33:05 <markmc> before nova could use it
14:33:13 <dhellmann> did we settle on definitely using urls?
14:33:25 <markmc> we need the API finalized, tests and at least qpid/rabbit drivers
14:33:40 <markmc> so, I guess it's someone keen to work on the rabbit driver we really need
14:33:49 <markmc> dhellmann, yes, I think it's pretty well settled in there now
14:33:53 <dhellmann> ok, good
14:33:57 <flaper87> dhellmann: I'd say so
14:34:04 <flaper87> we're discussing the url format
14:34:15 <dhellmann> do we need a migration strategy for the old config format?
14:34:16 <flaper87> #link https://github.com/markmc/oslo-incubator/pull/7
14:34:19 <markmc> dhellmann, see https://github.com/markmc/oslo-incubator/blob/messaging/openstack/common/messaging/transport.py#L78
14:34:32 <markmc> ok, let's talk briefly about the URL format
14:34:46 <markmc> flaper87, remind me where you took the multi-host format from?
14:35:27 <flaper87> markmc: right, the example I have is mongodb URI's http://docs.mongodb.org/manual/reference/connection-string/
14:35:33 <flaper87> which is basically what I used
14:35:47 <flaper87> it chains multiple hosts separated by ","
14:35:59 <simo> I am not sure that is a valid URI by RFC
14:36:21 <markmc> it probably doesn't need to be a valid URI
14:36:55 <flaper87> simo: is there an RFC that supports multiple hosts / ports / credentials / options for URLs ?
14:36:56 <markmc> we basically want a string format that encapsulates the configuration of a transport driver
14:37:06 <markmc> suitable for config files and database entries
14:37:28 <markmc> a json dict would be ok, if we didn't want it to be human-editable in config files
14:38:09 <simo> flaper87: probably not, because a URI represent _a_ resource
14:38:21 <simo> multiple URIs in a json file is probably better
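The multi-host format flaper87 borrows from MongoDB connection strings — hosts chained with "," between the credentials and the virtual host — could be parsed along these lines. The field names and the exact URL shape are illustrative assumptions, not the settled oslo format:

```python
def parse_transport_url(url):
    """Parse 'driver://user:pass@host1:port1,host2:port2/virtual_host'.

    Multi-host chaining follows the MongoDB connection-string style
    discussed in the meeting; not guaranteed to be a valid RFC 3986 URI.
    """
    scheme, _, rest = url.partition("://")
    creds = None
    if "@" in rest:
        # rsplit so a ':' or '@' inside the password doesn't break parsing
        creds, rest = rest.rsplit("@", 1)
    hosts_part, _, virtual_host = rest.partition("/")
    hosts = []
    for hostport in hosts_part.split(","):
        host, _, port = hostport.partition(":")
        hosts.append((host, int(port) if port else None))
    username = password = None
    if creds:
        username, _, password = creds.partition(":")
    return {
        "driver": scheme,
        "username": username,
        "password": password,
        "hosts": hosts,
        "virtual_host": virtual_host or None,
    }
```

As simo notes, a multi-host string like this is arguably not a single URI per the RFC, which is why a JSON list of URIs is floated as an alternative.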
14:39:18 <markmc> ok, we're 40 minutes in
14:39:25 <markmc> noted the json idea in the etherpad
14:39:32 <flaper87> so, any objections about current URI implementation?
14:39:32 <markmc> #topic secure messaging
14:39:37 <markmc> you're up simo :)
14:39:40 <simo> ok
14:39:45 <flaper87> awww :( next time :D
14:39:50 <markmc> flaper87, fine for now, let's be open to doing something else IMHO
14:39:50 <simo> I have most of the code done
14:39:59 <flaper87> markmc: sure
14:40:05 <simo> I am having some difficulty due to my lack of knowledge in properly wiring it in
14:40:15 <simo> need to pass topic around a lot
14:40:25 <simo> and there seem to be quite a few places where stuff is initialized
14:40:29 <simo> even in nova
14:40:33 <markmc> interesting
14:40:39 <simo> api seem not to use the Service thing others are using
14:40:40 * markmc would be happy to look at a WIP patch
14:40:55 <simo> I started using traceback.print_stack in serialize_msg to see where things come from
14:41:05 <simo> markmc: one sec
14:41:12 * ttx lurks
14:41:12 <markmc> heh, good idea :)
14:41:28 <simo> markmc: http://fedorapeople.org/cgit/simo/public_git/oslo.git/log/?h=shared-key-msg
14:41:38 <simo> 3 patches
14:41:52 <simo> they have everything from crypto to embedded client to fetch keys from kds
14:41:55 <markmc> simo, ah, I meant the nova integration patch
14:42:16 <markmc> simo, I've taken a quick look at the oslo patches already, they look good at a high level
14:42:17 <simo> kds implementation here: http://fedorapeople.org/cgit/simo/public_git/keystone.git/log/?h=shared-key-msg
14:42:26 <markmc> ah, cool
14:42:28 <simo> markmc: still working on nova
14:42:33 <simo> I do not have a patch yet
14:42:57 <markmc> simo, yeah, just hard to help with "need to pass topic around a lot" without seeing it, that's all
14:43:02 <simo> markmc: will be here once I have something: http://fedorapeople.org/cgit/simo/public_git/nova.git/log/?h=shared-key-msg
14:43:05 <markmc> simo, even if it's a totally non-working patch
14:43:08 <markmc> simo, ok
14:43:16 <simo> markmc: hard to find out indeed :)
14:43:31 <simo> markmc: just git grep serialize_msg
14:43:51 <simo> but I think for the most important cases it may be easy enough
14:44:01 <markmc> git grep serialize_msg ... and ... ?
14:44:09 <simo> and see where it is used
14:44:22 <simo> I need to add the destination topic as an argument
14:44:31 <markmc> ah, ok - that explains it
14:44:32 <simo> in some cases topic seem readily available in the caller
14:44:47 <simo> the other problem though was to find out what initializes the rpc services
14:45:00 <simo> I thought Service would suffice as I set the server's own topic there
14:45:10 <simo> but apparently nova-api uses different ways
14:45:23 <simo> I probably just need to do something in wsgi.py
14:45:27 <markmc> well, nova-api is just an RPC client, maybe that's the difference
14:45:58 <simo> markmc: bottom line please make the new RPC code have a very clearly defined class you call for initialization so all these things can be plugged in there :)
14:46:14 <markmc> simo, client side or server side ?
14:46:18 <simo> markmc: both
14:46:23 <markmc> ok, we have both
14:46:34 <markmc> ok
14:46:41 <simo> both need knowledge of their own name and key in order to be able to do any form of signing
14:46:43 <markmc> how is the kds work going ?
14:46:49 <simo> markmc: basically done
14:46:51 <markmc> is it proposed in gerrit for keystone yet ?
14:47:15 <simo> markmc: no I want to test it first, and there is some work going on in keystone to make it easier to add contrib stuff
14:47:27 <simo> (I have tested it manually, it works)
14:47:35 <simo> but want to see the whole thing runnig
14:47:42 <markmc> what is this "contrib stuff" ? :)
14:47:44 <simo> then a few changes ayoung have ready
14:47:56 <markmc> would be good to get a WIP patch in gerrit anyway
14:48:01 <simo> markmc: ayoung wants to bring the kds in via 'contrib'
14:48:02 <markmc> to allow people start looking at it
14:48:08 <simo> it's a place where new stuff lives
14:48:13 <simo> guess sorta an incubator
14:48:22 <simo> ec2 stuff is there for example
14:48:27 <markmc> yeah
14:48:32 * markmc hates "contrib"
14:48:44 <markmc> we could call it "preview" or something
14:48:51 <markmc> as long as it's in and usable
14:48:53 <simo> well ec2 is not really preview
14:48:58 <simo> not sure why it is not core
14:48:58 <markmc> I know it's not
14:49:04 <simo> I would call them just 'plugins'
14:49:11 <markmc> that's why using "contrib" to mean two different things is silly
14:49:15 <mordred> ++
14:49:21 <simo> yup, not my choice
14:49:26 <markmc> what does "contrib" even mean for ec2 ?
14:49:28 <markmc> yeah, I know
14:49:31 * markmc rants
14:49:36 <mordred> also, I think it's trying to put things in the tree without being responsible for them, which doesn't work
14:49:42 <markmc> indeed
14:49:45 <simo> markmc: the only reason why it makes sense in this case is that we want to be able to easily move kds out of keystone
14:49:55 <simo> in fact I am creating a new endpoint for it
14:49:57 * dhellmann thinks we've drifted off topic again
14:50:04 <markmc> we might make a statement like "we may make incompatible changes to this in the next release"
14:50:07 <markmc> that would be fine
14:50:10 <mordred> dhellmann: ++
14:50:19 <simo> however currently I am not 'discovering' it but rather have a configuration option for oslo-incubator
14:50:20 <markmc> dhellmann, we can't merge the message security stuff without a kds
14:50:32 <simo> because changing all the callers to also give me that info was a bit too hard for now
14:50:37 <simo> we'll do later
14:50:42 <markmc> so, getting an update on how likely kds is going to be merged into keystone in havana is important
14:51:02 <simo> yup
14:51:15 <dhellmann> I meant the "contrib" discussion
14:51:22 <markmc> I still don't have a feeling either way of how likely this is to get in for havana
14:51:24 <markmc> dhellmann, fair
14:51:25 <simo> markmc: I think it is likely, but not settled 100% yet
14:51:29 <markmc> I know ttx is nervous too :)
14:51:34 <markmc> which is why he's lurking :)
14:51:39 <markmc> ok
14:51:55 <markmc> anyway, cool stuff - very nice progress
14:52:02 <markmc> only got a few minutes left
14:52:08 <markmc> and lbragstad had something to discuss
14:52:09 <simo> markmc: for all I care we could put the kds in nova too :)
14:52:16 <simo> it's pretty well self contained
14:52:23 <markmc> #topic keystone/eventlet/notifications dependency
14:52:35 <markmc> simo, well, I'm open to other options
14:52:45 <simo> ok
14:52:50 <markmc> simo, but it'd be good to know sooner rather than later we need to consider other options
14:52:56 <simo> yep
14:52:56 <lbragstad> markmc I just wanted to discuss what I needed to help with to get eventlet separation from oslo so we can use it in Keystone
14:53:00 <simo> so far I haven't got a no
14:53:04 <simo> so we are going forward
14:53:04 <ayoung> markmc, the thing is that I want the SQL migrations for simo 's stuff to be in a separate migration repo
14:53:18 <simo> ayoung: details :)
14:53:26 <ayoung> so it can be split off etc from the rest of Keystone if we decide to move it around
14:53:33 <markmc> ayoung, we've moved on
14:53:46 <markmc> lbragstad, so, the issue is using the logging -> notifications thing ?
14:53:54 <markmc> lbragstad, you want keystone to be able to send notifications
14:54:02 <lbragstad> markmc yep
14:54:05 <markmc> lbragstad, even if it's e.g. running in mod_wsgi without eventlet ?
14:54:48 <mrodden> that's tough, because the default notifier driver is RPC
14:54:48 <markmc> lbragstad, is that the concrete example - mod_wsgi without eventlet ?
14:54:56 <ayoung> markmc, yes
14:54:59 <markmc> ok
14:55:08 <simo> we really want to go mod_wsgi in keystone
14:55:08 <markmc> has anyone tried it, where does it break?
14:55:17 <markmc> just because the code imports eventlet
14:55:20 <markmc> doesn't mean it uses it
14:55:24 <markmc> just to send an RPC message
14:55:29 <markmc> I'm not sure it does use it
14:55:29 <dhellmann> that may be an issue for pecan-based services in the future, too
14:55:34 <lbragstad> and Keystone wants to use the oslo implementation but doesn't want to have any hard dependencies on eventlet: https://github.com/openstack/oslo-incubator/blob/master/openstack/common/local.py#L22
14:55:50 <ayoung> markmc, we haven't tried it with notifier yet.  It might be fine, so long as no monkey patching is required for the code to be properly executed
14:55:57 <lbragstad> this is more pertaining to the logging for Keystone
14:55:57 <markmc> ah, ok - it's the thread local store
14:56:01 <ayoung> but it is kind of dumb to pull in those deps
14:56:01 <lbragstad> yep
14:56:13 <markmc> has anyone tried replacing just that ?
14:56:18 <flaper87> FWIW, we faced the same issue in Marconi
14:56:25 <lbragstad> I was hoping ayoung would chime in :)
14:56:26 <ayoung> for Apache in prefork, we should be able to use just the normal store
14:56:30 <ayoung> er
14:56:34 <ayoung> just normal memory
14:56:49 <markmc> ok, my main point is
14:56:56 <markmc> ignore the fact that eventlet gets imported
14:57:07 <markmc> just hacking things so we can send a notification without *using* eventlet
14:57:08 <simo> is it possible to import eventlet only conditionally?
14:57:18 <markmc> e.g. allowing an alternate to eventlet.corolocal
14:57:28 <markmc> simo, importing eventlet isn't the issue
14:57:31 <simo> ok
14:57:32 <ayoung> simo, in this case, the problem was that the impl depended on it. I am ok with a import that is unused for now
14:57:33 <markmc> importing it doesn't have side-effects
14:57:47 <markmc> and later, with the new messaging API, we'll be able to avoid the imports
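The "alternate to eventlet.corolocal" idea markmc raises — let the common code default to plain thread-locals and let eventlet-based services install the coroutine-local class instead — might look like this sketch (the function names are illustrative, not the actual openstack.common.local API):

```python
import threading

# Default store class: plain thread-locals, so importing this module
# never pulls in eventlet. A service running under eventlet could call
# set_local_impl(eventlet.corolocal.local) at startup instead.
_local_cls = threading.local

def set_local_impl(local_cls):
    """Install an alternate local-storage class (e.g. eventlet's corolocal)."""
    global _local_cls
    _local_cls = local_cls

def new_store():
    """Create a fresh context store using whichever class is installed."""
    return _local_cls()
```

This is the shape that would let Keystone under mod_wsgi send notifications without the thread-local store dragging in an eventlet dependency.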
14:58:00 <simo> ack
14:58:02 <ayoung> markmc, sounds good.  I can work with lbragstad to set up a test
14:58:09 <markmc> cool stuff
14:58:14 <simo> as long as we all understand the problem I think we achieved the goal of the topic
14:58:19 <markmc> ok, 2 minutes left :)
14:58:24 <markmc> #topic open floor
14:58:27 <markmc> anything else ?
14:58:36 <simo> markmc: can you give deadlines ?
14:58:41 <markmc> simo, for?
14:58:46 <simo> I will have some PTO in the next few days
14:58:52 <lbragstad> markmc thanks for meeting up this week, I know it cleared things up for me!
14:58:54 <markmc> the dates for the milestones are all in launchpad
14:58:55 <simo> next deadline and following one
14:59:01 <simo> for code to be in
14:59:14 <markmc> simo, I think havana-3 is feature freeze
14:59:16 * markmc checks
14:59:22 <simo> markmc: so anything that lands until 1 minute before those dates is fine ?
14:59:26 <flaper87> I'm almost done with the cache blueprint and will submit it for review next week (or today)
14:59:32 <flaper87> for the curious ones https://github.com/FlaPer87/oslo-incubator/tree/cache/openstack/common/cache
14:59:43 <markmc> simo, https://wiki.openstack.org/wiki/Havana_Release_Schedule
15:00:09 <flaper87> not expecting you to go there and review. I'll first submit the general API and then specific implementations (and by general I mean, API + in-memory support)
15:00:11 <markmc> simo, I'll start -2ing large new features a couple of weeks before feature freeze, I think
15:00:24 <markmc> simo, bear in mind, it's the feature freeze of e.g. nova and keystone you need to worry about
15:00:27 <simo> flaper87: good to know, I have my local cache in the securemessage code, will see if what you propose there applies
15:00:41 <flaper87> simo: awesome! thnx!
15:01:08 <markmc> ok, we're out of time
15:01:10 <markmc> thanks everyone
15:01:13 <markmc> #endmeeting