14:00:35 #startmeeting oslo
14:00:36 Meeting started Fri Jun 7 14:00:35 2013 UTC. The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:39 The meeting name has been set to 'oslo'
14:00:40 roll up, roll up
14:00:43 who's around?
14:00:46 o/
14:00:52 hi
14:00:53 #link https://wiki.openstack.org/wiki/Oslo/Messaging
14:00:55 o/
14:01:03 #link https://wiki.openstack.org/wiki/Oslo/Messaging
14:01:08 #link https://wiki.openstack.org/wiki/Meetings/Oslo
14:01:39 * markmc will wait a minute or two for others
14:02:22 o/
14:02:39 * jd__ . o O (hey a meeting, let's join!)
14:02:56 jd__, :)
14:02:59 lbragstad, hey
14:03:08 again, etherpad is https://etherpad.openstack.org/HavanaOsloMessaging
14:03:32 I've pinged simo
14:03:40 no sign of ewindisch
14:03:50 #link https://etherpad.openstack.org/HavanaOsloMessaging
14:03:58 or markwash
14:03:58 markmc hey
14:04:14 beagles maybe ?
14:04:21 ok, let's get going
14:04:27 mrodden: here?
14:04:33 so, it's been about a month since the last meeting
14:04:44 I've listed the updates that have happened since then in the etherpad
14:04:46 o/
14:05:08 I think that's pretty solid progress, but a pity we don't have unit tests or a real driver yet
14:05:23 so, I think they're definitely the next big things to tackle
14:05:54 anyone got any thoughts?
14:06:15 the "executor" name was the only thing I could think of worth hashing out here
14:06:19 specifically, where are you looking for unit test contribution in oslo?
14:06:23 o/
14:06:25 apart from "who's got time to do what?"
14:07:22 * dhellmann goes to remind himself what the executor does
14:07:25 lbragstad, this is about the new messaging API we're working on here: https://github.com/markmc/oslo-incubator/tree/messaging
14:07:43 dhellmann, polls the listener for messages, and then passes them to the dispatcher
14:07:45 ok
14:07:49 markmc: what's the best branch to look at in your github repo?
14:07:55 dhellmann, the messaging branch
14:08:09 #link https://github.com/markmc/oslo-incubator/tree/messaging/openstack/common/messaging
14:08:14 I agree that unit tests should be tackled next
14:08:16 dhellmann, the messaging-rebasing branch is me cleaning up the patch set to something which could be submitted to gerrit
14:08:23 ah, got it
14:08:25 markmc do these changes get pushed to gerrit?
14:08:42 lbragstad, they will do soon, once we have unit tests and a real driver
14:09:13 ok, cool, so for right now we are just pushing to your branch
14:09:15 I was thinking the other day whether it would make sense to have the executor's code as a separate package in oslo. Can we abstract it a bit more? There are some projects (and here I'm speaking about glance) that might benefit from an "Executor" package without the queue / rpc part.
14:09:16 markmc: do you like "Receiver" better as a name? I don't remember where Executor came from, but it seems like it was mirroring a name used in another library somewhere.
14:09:26 dhellmann, the eventlet executor has a thread that polls for messages and then methods are dispatched in a thread per message
14:10:06 dhellmann, I think the idea you had was to mirror http://www.python.org/dev/peps/pep-3148/#executor
14:10:22 ah, yeah, that's probably it. Did I come up with that name? :-)
14:10:24 dhellmann, i.e. we've got an executor that we use to call the dispatcher
14:10:43 dhellmann, and whether that's a greenthread or a real thread or whatever depends on what executor you use
14:10:55 ah, right
14:10:55 dhellmann, except we do a bit more than that in our executor
14:11:11 lbragstad, right, pull requests for now
14:11:32 flaper87: how would glance use this without the rpc part?
14:11:42 flaper87, I'm not sure what about executors is reusable outside of messaging
14:12:25 markmc: In your blog post (I think) you brought up a question about how much of the executor's behavior would need to be exposed to code outside of the rpc library. For cooperative multitasking, for example.
14:12:36 dhellmann: We're thinking about having async workers in glance and I'm afraid we'll end up writing yet another (Async)TaskExecutor implementation
14:12:58 dhellmann, I'm thinking we have BlockingRPCServer, EventletRPCServer, TulipRPCServer, etc.
14:13:01 I'd like to avoid that, but perhaps the "current executor" should be accessible and have a public API so tasks can yield the context, etc.?
14:13:50 dhellmann, the server methods definitely need to be written with knowledge about what executor is going to be used
14:14:10 dhellmann, e.g. if you're using the tulip executor, you'll need to write the methods as coroutines
14:14:26 dhellmann, but I don't think you need to actually talk directly to the executor
14:14:31 markmc: can't we assume threads ?
14:15:19 markmc: I thought putting that info in the executor would mean we could avoid having separate server implementations
14:15:19 dhellmann, in the case of tulip, you might want to talk to the right event loop ... but you'd pass the event loop to the TulipRPCServer constructor
14:15:22 but maybe not
14:15:44 dhellmann, basically, I'm thinking we don't expose an interface to the executor
14:15:51 dhellmann, just the ability to choose an executor
14:16:05 dhellmann, i.e. "I'm using tulip, give me an RPCServer that does the right thing"
14:16:18 I thought the goal was to have the executor encapsulate all of that, so the server could be reused
14:16:33 simo, no, I don't think we want to use threads by default
14:16:35 do you think that's not possible?
14:16:46 the server is being re-used
14:16:47 totally
14:16:52 markmc: I won't insist, but I have done async programming for 10 years now
14:17:05 markmc: it's a world of hurt, not worth it for anything really complex
14:17:22 it's very good for the things it is good at
14:17:26 very bad for anything else
14:17:33 dhellmann, all EventletRPCServer does is pass executor_cls=EventletExecutor or whatever
14:17:35 sorry, not "reused" but so that we'd only need one server implementation
14:17:39 and we are now in the 'else' camp in openstack
14:17:43 it's too big a project
14:17:51 simo, let's not play the who-has-more-experience game
14:17:58 simo, openstack uses eventlet now
14:17:59 I am not playing it
14:18:10 simo, we're not going to be changing that any time soon
14:18:15 simo, I'd like to, but that's not the goal here
14:18:21 yes, please, let's not derail this discussion
14:18:35 markmc: just trying to avoid having something built that will make it impossible to move away
14:18:36 markmc:
14:18:40 oops
14:18:52 simo, the point is just to design the API so that we can support multiple I/O handling strategies without breaking the API
14:19:00 simo, so, yes - exactly
14:19:03 markmc: right
14:19:07 markmc: you said the server implementation would need to know about its executor, which made me think the servers were more than just a thin wrapper that set the executor
14:19:07 then I am fine
14:19:15 simo, we can't do that by "assuming threads"
14:19:28 dhellmann: that got me a bit confused as well
14:19:53 markmc: well it depends, in many event systems you can simply use threads right away, with just a little bit of abstraction around a 'fake' async request
14:19:54 dhellmann, https://github.com/markmc/oslo-incubator/blob/messaging/openstack/common/messaging/eventlet.py#L43
14:20:30 simo, we can't dispatch methods to threads because openstack code is written to assume it's running in greenthreads
14:20:58 markmc: yeah that's the problem, we should stop assuming green threads in the medium term
14:20:59 markmc: OK. I don't think that class is necessary as it stands right now. _RPCServer could be public, and the caller could give it the executor class.
14:21:06 dhellmann, all I want is to avoid exposing the executor interface if we don't need to
14:21:08 markmc: is that needed? Can the Server be "executor agnostic"?
14:21:23 dhellmann, so we can change it
14:21:25 dhellmann: you beat me
14:21:29 dhellmann, but also, for convenience
14:21:41 dhellmann, why make every openstack project have to provide executor_cls
14:21:49 dhellmann, rather than just use EventletRPCServer
14:22:13 dhellmann, would you prefer a get_eventlet_rpc_server() helper ?
14:22:19 dhellmann, because that's fine by me too
14:22:22 just to make clear what is the concern here: secure messaging requires 'slow' operations, you want them in their own thread, not have them slow the whole server
14:22:27 convenience is fine, I see the point of that, and either path works
14:22:30 markmc: so basically, we'd assume that if X is using eventletServer it is capable of using EventletExecutor which works better with that server implementation
14:22:32 but I want to make sure we don't bake something into that EventletRPCServer that knows about eventlet since that knowledge is supposed to be in the executor
14:23:16 if the server needs to do something different based on which executor it is using, then I want that to be a feature of the executor that the server uses, rather than built into the server itself
14:23:28 yeah, that's fair
14:23:34 not sure that's clear...
14:23:34 +1
14:23:47 ok, let's go with the helper factory method thing
14:23:56 i.e. eventlet.get_rpc_server()
14:23:58 so a wrapper class is ok, and a helper function is ok, but the helper function feels like maybe it's "safer" in that respect
14:23:59 yeah
14:24:47 simo, having the secure messaging stuff do its slow computation in threads is workable too
14:24:49 practical question: if we provide another eventlet implementation, how/when do we expect projects to move off of it? or do we?
14:25:01 simo, definitely adds a new dimension, I'll keep it in mind for sure
14:25:12 dhellmann, another executor you mean ?
14:25:23 dhellmann: as soon as there's another executor ?
14:25:49 dhellmann, e.g. moving to tulip ?
14:25:52 I mean, another implementation that's a drop-in replacement for eventlet's
14:25:57 dhellmann, each project would need to opt-in to it
14:26:05 this might be an elementary question but, how is using eventlet.get_rpc_server() going to affect projects that are attempting to isolate eventlet dependencies? (e.g. Keystone)?
14:26:17 markmc: I am not going to add threading on my own
14:26:31 yes
14:26:31 I thought one of our goals was to start moving away from eventlet? Or do we think it's safe to keep using it?
14:26:31 lbragstad, if you do 'import openstack.common.messaging' eventlet isn't imported
14:26:45 I would rather be able to swap out a bad greenthread implementation for a real threaded implementation in the medium term
14:26:46 dhellmann: I'd prefer moving away from it
14:26:47 lbragstad, you have to do 'import openstack.common.messaging.eventlet' and then eventlet.get_rpc_server()
14:26:54 but I guess we'd have to first support it
14:26:57 lbragstad, we could have e.g. openstack.common.messaging.tulip in future
14:27:00 and then make everybody cry :P
14:27:06 ahh ok
14:27:08 lbragstad, basically, the eventlet dependency is not baked in
14:27:10 markmc: opt-in makes sense; I wasn't sure if that was still even a goal.
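(Editor's note: a minimal sketch of the executor/factory split discussed above. All names here — RPCServer, BlockingExecutor, get_blocking_rpc_server — are illustrative assumptions, not the actual oslo.messaging code the meeting refers to.)

```python
# Illustrative sketch only: names are assumptions, not the real
# oslo.messaging API under discussion.

class BlockingExecutor(object):
    """Polls the listener in the calling thread and dispatches inline.

    An eventlet or tulip executor would instead hand each message to a
    greenthread or coroutine, which is why the choice is pluggable.
    """

    def __init__(self, listener, dispatcher):
        self.listener = listener
        self.dispatcher = dispatcher

    def start(self):
        while True:
            message = self.listener.poll()
            if message is None:
                break
            self.dispatcher.dispatch(message)


class RPCServer(object):
    """Executor-agnostic server: it only knows how to construct one."""

    def __init__(self, listener, dispatcher, executor_cls):
        self._executor = executor_cls(listener, dispatcher)

    def start(self):
        self._executor.start()


def get_blocking_rpc_server(listener, dispatcher):
    # Convenience factory so callers never pass executor_cls themselves --
    # the "eventlet.get_rpc_server()" idea agreed in the meeting.
    return RPCServer(listener, dispatcher, BlockingExecutor)
```

Under this shape, an eventlet.get_rpc_server() helper would simply pass a different executor_cls, keeping all eventlet knowledge out of RPCServer itself.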
14:27:18 so projects don't *have* to use it if they don't want to
14:27:44 dhellmann: eventlet is bad, monkey patching is bad, non-threaded services do not scale on multicore cpus
14:27:50 dhellmann: make your conclusions
14:28:05 simo: python threads don't work across multiple cores, either
14:28:25 simo: python threads don't use multiple cores
14:28:26 dhellmann: the gil is being worked on though, and C helpers can work in parallel
14:28:34 simo, we use multiple processes to scale across cores
14:28:36 ok guys
14:28:48 STOP WITH THE THREADS VS ASYNC DEBATE :)
14:28:52 let's move on :)
14:28:53 markmc: +1
14:28:55 ok :)
14:29:08 so
14:29:21 who's up for contributing to some of the outstanding tasks for the messaging stuff
14:29:23 nova imports something from eventlet outside of the common code 134 times
14:29:39 so I'm wondering if changing the rpc api is enough to make it possible to move off of eventlet :-)
14:30:10 markmc: I can help create some of the tests
14:30:25 but I'd say we should split them since I'd also like to get the qpid implementation done
14:31:03 flaper87, if you're working on the qpid driver, I'd definitely say focus on that
14:31:13 markmc: oke-doke
14:31:15 flaper87, I'd love to have a real driver
14:31:36 btw, we'd have to sync with markwash, I think he was going to work on some tests
14:31:41 my plate is pretty full right now, or I would commit to working on the rabbit driver :-/
14:31:49 havana-2 is 2013-07-18
14:31:53 dhellmann, ok, thanks
14:32:15 markmc: makes sense, I was kind of blocked by the URL stuff but it's merged now, I'll move on with it
14:32:15 if it's just me and flaper87, I'd say it's pretty unlikely we'll have it in shape by then
14:32:25 "in shape" == ready for e.g. nova to use it
14:32:34 flaper87, sorry about that
14:32:49 markmc: no worries, didn't want to complain, just update everyone
14:32:56 oh, I wanted to ask about the status of the url config discussion
14:33:05 before nova could use it
14:33:13 did we settle on definitely using urls?
14:33:25 we need the API finalized, tests and at least qpid/rabbit drivers
14:33:40 so, I guess it's someone keen to work on the rabbit driver we really need
14:33:49 dhellmann, yes, I think it's pretty well settled in there now
14:33:53 ok, good
14:33:57 dhellmann: I'd say so
14:34:04 we're discussing the url format
14:34:15 do we need a migration strategy for the old config format?
14:34:16 #link https://github.com/markmc/oslo-incubator/pull/7
14:34:19 dhellmann, see https://github.com/markmc/oslo-incubator/blob/messaging/openstack/common/messaging/transport.py#L78
14:34:32 ok, let's talk briefly about the URL format
14:34:46 flaper87, remind me where you took the multi-host format from?
14:35:27 markmc: right, the example I have is mongodb URIs http://docs.mongodb.org/manual/reference/connection-string/
14:35:33 which is basically what I used
14:35:47 it chains multiple hosts separated by ","
14:35:59 I am not sure that is a valid URI by RFC
14:36:21 it probably doesn't need to be a valid URI
14:36:55 simo: is there an RFC that supports multiple hosts / ports / credentials / options for URLs ?
14:36:56 we basically want a string format that encapsulates the configuration of a transport driver
14:37:06 suitable for config files and database entries
14:37:28 a json dict would be ok, if we didn't want it to be human-editable in config files
14:38:09 flaper87: probably not, because a URI represents _a_ resource
14:38:21 multiple URIs in a json file is probably better
14:39:18 ok, we're 40 minutes in
14:39:25 noted the json idea in the etherpad
14:39:32 so, any objections to the current URI implementation?
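(Editor's note: the comma-separated multi-host URL format mentioned above can be parsed with a short helper along these lines. This is a hedged sketch — the function name and returned dict shape are made up, not the transport.py code linked in the meeting.)

```python
# Hypothetical parser for a mongodb-style multi-host transport URL,
# e.g. rabbit://user:pass@host1:5672,host2:5672/virtual_host
# Illustrative only; not the actual oslo.messaging transport code.

def parse_transport_url(url):
    transport, sep, rest = url.partition('://')
    if not sep:
        raise ValueError('expected <transport>://...')
    creds = ''
    if '@' in rest:
        # rsplit so a '@' in the password does not confuse the host part
        creds, rest = rest.rsplit('@', 1)
    hosts_part, _, virtual_host = rest.partition('/')
    username, _, password = creds.partition(':')
    hosts = []
    for host in hosts_part.split(','):
        name, _, port = host.partition(':')
        hosts.append((name, int(port) if port else None))
    return {
        'transport': transport,
        'username': username or None,
        'password': password or None,
        'hosts': hosts,
        'virtual_host': virtual_host or None,
    }
```

For example, parse_transport_url('rabbit://me:secret@h1:5672,h2/vh') yields transport 'rabbit' with hosts [('h1', 5672), ('h2', None)] — the comma-chained host list flaper87 describes, which is indeed not a valid URI per RFC 3986, as simo points out.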
14:39:32 #topic secure messaging
14:39:37 you're up simo :)
14:39:40 ok
14:39:45 awww :( next time :D
14:39:50 flaper87, fine for now, let's be open to doing something else IMHO
14:39:50 I have most of the code done
14:39:59 markmc: sure
14:40:05 I am having some difficulty due to my lack of knowledge in properly wiring it in
14:40:15 need to pass topic around a lot
14:40:25 and there seem to be quite a few places where stuff is initialized
14:40:29 even in nova
14:40:33 interesting
14:40:39 api seems not to use the Service thing others are using
14:40:40 * markmc would be happy to look at a WIP patch
14:40:55 I started using traceback.print_stack in serialize_msg to see where things come from
14:41:05 markmc: one sec
14:41:12 * ttx lurks
14:41:12 heh, good idea :)
14:41:28 markmc: http://fedorapeople.org/cgit/simo/public_git/oslo.git/log/?h=shared-key-msg
14:41:38 3 patches
14:41:52 they have everything from crypto to embedded client to fetch keys from kds
14:41:55 simo, ah, I meant the nova integration patch
14:42:16 simo, I've taken a quick look at the oslo patches already, they look good at a high level
14:42:17 kds implementation here: http://fedorapeople.org/cgit/simo/public_git/keystone.git/log/?h=shared-key-msg
14:42:26 ah, cool
14:42:28 markmc: still working on nova
14:42:33 I do not have a patch yet
14:42:57 simo, yeah, just hard to help with "need to pass topic around a lot" without seeing it, that's all
14:43:02 markmc: will be here once I have something: http://fedorapeople.org/cgit/simo/public_git/nova.git/log/?h=shared-key-msg
14:43:05 simo, even if it's a totally non-working patch
14:43:08 simo, ok
14:43:16 markmc: hard to find out indeed :)
14:43:31 markmc: just git grep serialize_msg
14:43:51 but I think for the most important cases it may be easy enough
14:44:01 git grep serialize_msg ... and ... ?
14:44:09 and see where it is used
14:44:22 I need to add the destination topic as an argument
14:44:31 ah, ok - that explains it
14:44:32 in some cases topic seems readily available in the caller
14:44:47 the other problem though was to find out what initializes the rpc services
14:45:00 I thought Service would suffice as I set the server's own topic there
14:45:10 but apparently nova-api uses different ways
14:45:23 I probably just need to do something in wsgi.py
14:45:27 well, nova-api is just an RPC client, maybe that's the difference
14:45:58 markmc: bottom line, please make the new RPC code have a very clearly defined class you call for initialization so all these things can be plugged in there :)
14:46:14 simo, client side or server side ?
14:46:18 markmc: both
14:46:23 ok, we have both
14:46:34 ok
14:46:41 both need knowledge of their own name and key in order to be able to do any form of signing
14:46:43 how is the kds work going ?
14:46:49 markmc: basically done
14:46:51 is it proposed in gerrit for keystone yet ?
14:47:15 markmc: no, I want to test it first, and there is some work going on in keystone to make it easier to add contrib stuff
14:47:27 (I have tested it manually, it works)
14:47:35 but want to see the whole thing running
14:47:42 what is this "contrib stuff" ? :)
14:47:44 then a few changes ayoung has ready
14:47:56 would be good to get a WIP patch in gerrit anyway
14:48:01 markmc: ayoung wants to put the kds in via 'contrib'
14:48:02 to allow people to start looking at it
14:48:08 it's a place where new stuff lives
14:48:13 sorta an incubator, I guess
14:48:22 ec2 stuff is there for example
14:48:27 yeah
14:48:32 * markmc hates "contrib"
14:48:44 we could call it "preview" or something
14:48:51 as long as it's in and usable
14:48:53 well ec2 is not really preview
14:48:58 not sure why it is not core
14:48:58 I know it's not
14:49:04 I would call them just 'plugins'
14:49:11 that's why using "contrib" to mean two different things is silly
14:49:15 ++
14:49:21 yup, not my choice
14:49:26 what does "contrib" even mean for ec2 ?
14:49:28 yeah, I know
14:49:31 * markmc rants
14:49:36 also, I think it's trying to put things in the tree without being responsible for them, which doesn't work
14:49:42 indeed
14:49:45 markmc: the only reason why it makes sense in this case is that we want to be able to easily move kds out of keystone
14:49:55 in fact I am creating a new endpoint for it
14:49:57 * dhellmann thinks we've drifted off topic again
14:50:04 we might make a statement like "we may make incompatible changes to this in the next release"
14:50:07 that would be fine
14:50:10 dhellmann: ++
14:50:19 however currently I am not 'discovering' it but rather have a configuration option for oslo-incubator
14:50:20 dhellmann, we can't merge the message security stuff without a kds
14:50:32 because changing all the callers to also give me that info was a bit too hard for now
14:50:37 we'll do it later
14:50:42 so, getting an update on how likely kds is going to be merged into keystone in havana is important
14:51:02 yup
14:51:15 I meant the "contrib" discussion
14:51:22 I still don't have a feeling either way of how likely this is to get in for havana
14:51:24 dhellmann, fair
14:51:25 markmc: I think it is likely, but not settled 100% yet
14:51:29 I know ttx is nervous too :)
14:51:34 which is why he's lurking :)
14:51:39 ok
14:51:55 anyway, cool stuff - very nice progress
14:52:02 only got a few minutes left
14:52:08 and lbragstad had something to discuss
14:52:09 markmc: for all I care we could put the kds in nova too :)
14:52:16 it's pretty well self contained
14:52:23 #topic keystone/eventlet/notifications dependency
14:52:35 simo, well, I'm open to other options
14:52:45 ok
14:52:50 simo, but it'd be good to know sooner rather than later if we need to consider other options
14:52:56 yep
14:52:56 markmc I just wanted to discuss what I needed to help with to get eventlet separation from oslo so we can use it in Keystone
14:53:00 so far I haven't got a no
14:53:04 so we are going forward
14:53:04 markmc, the thing is that I want the SQL migrations for simo's stuff to be in a separate migration repo
14:53:18 ayoung: details :)
14:53:26 so it can be split off etc from the rest of Keystone if we decide to move it around
14:53:33 ayoung, we've moved on
14:53:46 lbragstad, so, the issue is using the logging -> notifications thing ?
14:53:54 lbragstad, you want keystone to be able to send notifications
14:54:02 markmc yep
14:54:05 lbragstad, even if it's e.g. running in mod_wsgi without eventlet ?
14:54:48 that's tough, because the default notifier driver is RPC
14:54:48 lbragstad, is that the concrete example - mod_wsgi without eventlet ?
14:54:56 markmc, yes
14:54:59 ok
14:55:08 we really want to go mod_wsgi in keystone
14:55:08 has anyone tried it, where does it break?
14:55:17 just because the code imports eventlet
14:55:20 doesn't mean it uses it
14:55:24 just to send an RPC message
14:55:29 I'm not sure it does use it
14:55:29 that may be an issue for pecan-based services in the future, too
14:55:34 and Keystone wants to use the oslo implementation but doesn't want to have any hard dependencies on eventlet: https://github.com/openstack/oslo-incubator/blob/master/openstack/common/local.py#L22
14:55:50 markmc, we haven't tried it with notifier yet. It might be fine, so long as no monkey patching is required for the code to be properly executed
14:55:57 this is more pertaining to the logging for Keystone
14:55:57 ah, ok - it's the thread local store
14:56:01 but it is kind of dumb to pull in those deps
14:56:01 yep
14:56:13 has anyone tried replacing just that ?
14:56:18 FWIW, we faced the same issue in Marconi
14:56:25 I was hoping ayoung would chime in :)
14:56:26 for Apache in prefork, we should be able to use just the normal store
14:56:30 er
14:56:34 just normal memory
14:56:49 ok, my main point is
14:56:56 ignore the fact that eventlet gets imported
14:57:07 just hacking things so we can send a notification without *using* eventlet
14:57:08 is it possible to import eventlet only conditionally ?
14:57:18 e.g. allowing an alternate to eventlet.corolocal
14:57:28 simo, importing eventlet isn't the issue
14:57:31 ok
14:57:32 simo, in this case, the problem was that the impl depended on it. I am ok with an import that is unused for now
14:57:33 importing it doesn't have side-effects
14:57:47 and later, with the new messaging API, we'll be able to avoid the imports
14:58:00 ack
14:58:02 markmc, sounds good. I can work with lbragstad to set up a test
14:58:09 cool stuff
14:58:14 as long as we all understand the problem I think we achieved the goal of the topic
14:58:19 ok, 2 minutes left :)
14:58:24 #topic open floor
14:58:27 anything else ?
14:58:36 markmc: can you give deadlines ?
14:58:41 simo, for?
14:58:46 I will have some PTO in the next few days
14:58:52 markmc thanks for meeting up this week, I know it cleared things up for me!
14:58:54 the dates for the milestones are all in launchpad
14:58:55 next deadline and the following one
14:59:01 for code to be in
14:59:14 simo, I think havana-3 is feature freeze
14:59:16 * markmc checks
14:59:22 markmc: so anything that lands up to 1 minute before those dates is fine ?
14:59:26 I'm almost done with the cache blueprint and will submit it for review next week (or today)
14:59:32 for the curious ones https://github.com/FlaPer87/oslo-incubator/tree/cache/openstack/common/cache
14:59:43 simo, https://wiki.openstack.org/wiki/Havana_Release_Schedule
15:00:09 not expecting you to go there and review. I'll first submit the general API and then specific implementations (and by general I mean, API + in-memory support)
15:00:11 simo, I'll start -2ing large new features a couple of weeks before feature freeze, I think
15:00:24 simo, bear in mind, it's the feature freeze of e.g. nova and keystone you need to worry about
15:00:27 flaper87: good to know, I have my local cache in the securemessage code, will see if what you propose there applies
15:00:41 simo: awesome! thnx!
15:01:08 ok, we're out of time
15:01:10 thanks everyone
15:01:13 #endmeeting
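(Editor's note: the eventlet.corolocal point raised in the keystone/eventlet topic amounts to a conditional thread-local store, roughly like the sketch below. This is an illustration in the spirit of openstack/common/local.py, not its actual contents.)

```python
# Conditional thread-local store: prefer eventlet's greenthread-local
# storage when eventlet is importable, fall back to plain
# threading.local otherwise (e.g. keystone under mod_wsgi without
# eventlet). Illustrative sketch only.
try:
    from eventlet import corolocal
    store = corolocal.local()
except ImportError:
    import threading
    store = threading.local()

# Attributes set on `store` are isolated per (green)thread.
store.context = {'request_id': 'req-123'}
```

This keeps the eventlet import optional at module load time, which is the distinction drawn in the discussion: importing eventlet has no side effects, but depending on eventlet.corolocal does.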