14:01:30 #startmeeting oslo
14:01:31 Meeting started Fri Nov 15 14:01:30 2013 UTC and is due to finish in 60 minutes. The chair is dhellmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:34 The meeting name has been set to 'oslo'
14:01:36 hey
14:01:38 #link https://wiki.openstack.org/wiki/Meetings/Oslo
14:01:43 Show of hands, please?
14:01:44 o/
14:01:49 o/
14:01:52 \o
14:02:02 o/
14:02:24 \o
14:02:34 luisg, here?
14:02:53 ok, we'll start without him...
14:02:55 #topic Pecan/WSME status
14:03:01 bnemec, the floor is yours
14:03:04 o/
14:03:09 Okay
14:03:21 This came up with some of the work around Tuskar/Ironic.
14:03:50 They apparently just copy-pasted the basic code for Pecan/WSME and were wondering if there's anything going on in Oslo to make some of that common code available.
14:04:16 so pecan comes with a "project scaffold", but it's really meant for web applications
14:04:25 we had planned to create some common base code, but had not yet
14:04:44 Doug and I discussed the possibility of one that was basically a pecan+wsme REST API scaffold
14:04:56 yeah, ironic API stuff was based on ceilometer and tuskar was based on ironic
14:05:02 definitely some stuff copied around
14:05:18 o/
14:05:35 I didn't see a blueprint for any of this.
14:05:39 o/
14:05:40 Should we get one opened to track it?
14:05:46 +1
14:05:53 we should start figuring out how much of that we can share through oslo
14:05:57 yes, a blueprint would make sense
14:06:06 bnemec_, want to open that?
14:06:08 what would the common code look like? I wonder if it makes sense to release an oslo lib for that, or just putting that in Pecan makes sense
14:06:21 I can, but I don't know much about it so it's going to be pretty basic if I write it.
14:06:37 jd__, yeah I'm not sure -- let's get it into the incubator, and then decide where it makes sense to put it
14:06:45 I suppose someone else can add to it later.
14:06:46 bnemec_, ok, I'll open one
14:06:46 works for me :)
14:06:53 dhellmann: Okay, thanks.
14:07:09 +1
14:07:15 #action dhellmann open blueprint to track adding common pecan/wsme API code to the incubator
14:07:26 I guess that's all I was looking for on that topic.
14:07:33 looks like pecan exposes a setuptools entrypoint for defining new scaffolds: https://github.com/stackforge/pecan/blob/master/setup.py#L118
14:07:33 ok, thanks
14:07:44 (in other packages)
14:08:44 yeah, there's scaffold code and then there's truly sharable code that we can place in a library
14:08:53 right
14:08:53 we need to work out how much of the latter we really have
14:09:16 next up...
14:09:18 #topic python3 requirements file
14:09:23 #link https://review.openstack.org/#/c/55606/
14:09:25 dims?
14:09:39 * jd__ hides
14:09:44 lol
14:09:47 lol
14:10:00 so, do we want to maintain py3 requirements in another file? :)
14:10:03 -1, needs a better commit message :-)
14:10:11 jd__ and I discussed this at the summit
14:10:20 I'm ok to go to the front and ask -infra FWIW
14:10:40 we need to track the requirements for each version of python separately (e.g., mox vs. mox3)
14:10:46 +1 to that jd__ since they need to think about global-requirements and devstack etc
14:11:08 yeah, we need to get -infra thoughts and considerations here
14:11:17 basically the requirements are the same, except that python 3 can't have everything installed
14:11:33 would be good if we had requirements.txt, requirements-py2.txt and requirements-py3.txt
14:11:39 and for most things to live in requirements.txt
14:11:39 but we could have mox3 and mox installed in parallel for all Python to minimize divergence
14:11:47 only exceptions in py2.txt and py3.txt
14:11:57 markmc: +1
14:12:05 markmc: we just need requirements.txt and requirements-py2.txt then
14:12:14 markmc: that would be more complicated to track upstream in the requirements project, I think
14:12:15 that's the common way to do it, AFAIK
14:12:20 that was one option we talked about
14:12:27 oh, there's precedent?
14:12:39 dhellmann, yeah - maybe what I describe only belongs in the requirements repo
14:12:57 dhellmann, and we autogenerate distinct python2 and python3 requirements files from the sources in the requirements repo
14:13:14 markmc: I like that
14:13:24 +1
14:13:46 nice
14:14:01 if we leave the python 2 requirements file in each project "requirements.txt" then we don't have to change devstack
14:14:20 doesn't it merge the requirements together now?
14:14:36 oh, no, nevermind, the way they implemented that was to update the requirements in each repo before installing anything
14:15:01 still, it feels like keeping python 2 in requirements.txt will result in fewer changes overall
14:15:42 mmh, but I think requirements.txt should have Py2 and Py3 requirements
14:16:11 otherwise we won't be able to know which packages are not py3k compatible
14:16:41 (unless I'm missing something)
14:16:56 how does combining them tell us which are not py3 compatible?
14:17:39 it doesn't but at least we know they work for both.
14:17:59 ahh, you are suggesting to have requirements.txt for py2 and requirements-py3.txt for py3k
14:18:06 i like 3 files - common/py2-only/py3-only approach, then we know the py2 won't work in py3 and vice versa and common will work on both
14:18:19 flaper87: yes
14:18:29 damn, it's friday! :D
14:18:39 Yes
14:18:41 yeah, that would make it easier
14:18:43 Yes it is :-)
14:18:49 dhellmann, in that approach won't you have some stuff in the py2 file that won't work in py3?
14:19:19 the only downside is that we would have some packages duplicated in both requirements, IIUC
14:19:24 dims: the idea would be to ignore requirements.txt when installing under python 3
14:19:38 thinking about where we need to make changes, this is going to be complicated
14:19:39 So I gather from this conversation that "remove all deps that don't work in py3" is not an option?
14:19:40 dhellmann, got it.
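For illustration only, one way the "keep requirements.txt for Python 2, ignore it under Python 3" option above might look on disk and in tox. Nothing here was decided in the meeting; the requirements-py3.txt name, the package entries, and the tox environment are assumptions (mox vs. mox3 is the example from the discussion):

    # requirements.txt -- the Python 2 list, unchanged, so pbr and devstack
    # keep working as they do today
    six
    mox

    # requirements-py3.txt -- hypothetical Python 3 list, possibly
    # autogenerated from the requirements repo, identical except for
    # per-version substitutions such as mox -> mox3
    six
    mox3

    # tox.ini -- sketch of a py33 environment installing the py3 list
    # instead of requirements.txt
    [testenv:py33]
    deps = -r{toxinidir}/requirements-py3.txt
           -r{toxinidir}/test-requirements.txt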
14:19:47 pbr will have to look at different file(s)
14:19:56 the requirements updater tool will have to modify more files
14:20:12 the mirror building tool will have to look at all of these files
14:20:50 I wonder if we can keep a single global requirements list in the requirements project, and only have separate files in the repos where we run tox
14:21:18 that would work as long as we don't require different versions of the same library under python 2 and 3
14:21:26 which I don't think we do now
14:22:00 jd__: I think we need a blueprint, and input from infra on this, but I think we can make it work
14:22:22 dhellmann: +1 for bp and input from -infra
14:22:23 yeah, I'm just waiting for instructions on how to build it basically
14:22:29 either 2 files or 3 files
14:22:30 and devstack folks
14:22:47 I think this affects other projects as well so, definitely -infra input is worth it.
14:22:48 #action jd__ open a blueprint for tracking python 3 requirements and solicit input from -infra
14:23:01 dims: I think we can assume devstack runs python 2 for the time being
14:23:05 dhellmann: bp on oslo ?
14:23:20 jd__: probably on the requirements project, but oslo if there isn't a launchpad project for that
14:23:41 dhellmann: I don't know nor think there's a LP for that indeed :)
14:24:03 I can use https://launchpad.net/openstack-ci
14:24:09 that works
14:24:10 jd__, worst case mailing list
14:24:36 dims: yeah, worst case, I'd like to avoid bikeshedding :D
14:24:46 once we figure out how to manage the requirements, we should update the python 3 page in the wiki with directions, too
14:24:49 mailing-list +1
14:24:53 jd__, agree :)
14:25:05 dhellmann: fair enough
14:25:14 luisg: ping?
14:25:20 dhellmann, here
14:25:27 hi!
14:25:34 anything else on python 3 requirements before we move on?
14:25:55 #topic Message code changes & implications to the current code base
14:25:56 #link https://etherpad.openstack.org/p/i18nmessages
14:26:00 #link https://review.openstack.org/#/c/56093/
14:26:04 luisg, you have the floor
14:26:18 thx, sorry i was late
14:26:38 so i implemented the changes that we had agreed on at our last meeting on this topic
14:26:53 and i have been running into several issues, which i documented in that etherpad
14:27:20 basically the one that worries me most is the first one in the etherpad, about logging exceptions
14:27:22 for the first issue, with passing exceptions to the logging code, I like option 3
14:27:45 however, I think this is why we at one point said we would support .__unicode__() on Message
14:28:24 to cut down on code churn in the other projects, where exceptions are turned into strings explicitly like this
14:28:40 dhellmann, yeah, i'm finding that not supporting unicode() can be problematic
14:29:02 what do you think about supporting it for now, and phasing it out later?
14:29:02 even for non-exceptions, like in the example of logging a connection object
14:29:06 smaller steps...
14:29:57 if we phase it out eventually, would u want to phase it out in favor of a "translate()" interface?
14:30:11 or how would we want to address that and the other issues
14:30:46 it seems like we'll always be merging an exception with a message object, right?
14:30:57 _('some text: %s') % the_exception
14:31:24 if that's the case, we can have Message look for exceptions when doing the final interpolation, and treat them differently
14:31:53 the *right* way to do it is to pass them separately: log.error(_('bad thing happened: %s'), the_exception)
14:32:01 or just log.exception(the_exception)
14:32:16 but that really just delays the point at which the % operator is used
14:32:29 so we still have to look for them explicitly
14:32:38 make sense?
14:32:40 yeah.. whether they use % or a comma, it is ultimately the same
14:32:46 right
14:32:55 the problem is that not all exceptions have their message in the same field
14:33:09 some use explanation, some use message, details..
14:33:22 we should definitely not look inside the exception object
14:33:24 so it is up to each exception to choose its representation in their __unicode__
14:33:41 not knowing what they use would kind of force us to look inside
14:33:43 we should either have an exception base class that supports translate() or just require it to support __unicode__
14:33:47 unless we somehow unify where to look
14:34:10 * beekneemech notes that we just removed our exception base class :-)
14:34:16 yeah
14:34:41 ok that addresses exceptions only, but we are still limiting devs from logging any other object
14:34:47 i.e. the connection example
14:34:49 I wonder, if we do nothing, would unicode(the_exception) return a Message instance?
14:34:56 they would only be allowed to log "translatables"
14:36:01 because if that's the case, we could have our log formatter turn the exception into a message, and then call translate on it
14:36:43 what i am seeing is that in the basic case, for instance in that cinder example
14:36:58 where we log message, unicode(e)
14:37:23 the exception was raised, but i need to see if that is b/c unicode was invoked inside Fault on the msg
14:37:47 right
14:37:54 that is issue 1, what do u guys think about the other issues?
14:38:05 ok, let's err on the side of avoiding exceptions and support __unicode__ using the default translation
14:38:37 issues with messages not being logged with the proper language are then bugs in the app or library code that is handling the exception improperly, right?
14:39:10 although I guess we need to provide a path for getting a Message out of an exception, still
14:39:33 yeah so that we can do the secondary log
14:39:54 ok, I'm going to have to spend more time thinking about that, I think
14:39:58 if we lose the Message object, say by supporting __unicode__, we need to be able to recreate the msg
14:40:06 k
14:40:34 I think we'll have to handle it explicitly, so if someone calls unicode() on an exception, translation of the result will no longer be supported
14:40:42 but there might be a get_message() method or something
14:40:47 or get_translatable()
14:40:59 but in any case, let's start by avoiding unicode exceptions
14:41:16 k
14:41:30 how about the keystone context formatter issue?
14:41:41 do they not use it because they don't like it, or because no one has added it?
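For reference, a minimal sketch of the direction agreed just above: support __unicode__() using the default translation, keep an explicit translate() hook, and coerce exceptions at interpolation time so both log.error(_('bad thing happened: %s'), the_exception) and the older _('some text: %s') % the_exception pattern keep working. This is illustrative only, not the oslo incubator code; the class shape, the gettext catalog lookup, and the _coerce() helper are assumptions, written in Python 2 style to match the __unicode__ discussion.

    import gettext

    class Message(object):
        """Translatable string that delays translation until it is rendered."""

        def __init__(self, msgid, domain='oslo'):
            self.msgid = msgid
            self.domain = domain
            self.params = None

        def __mod__(self, other):
            # Remember the parameters, coercing exceptions to unicode now so
            # the message id itself stays translatable.
            copied = Message(self.msgid, self.domain)
            if isinstance(other, dict):
                copied.params = dict((k, self._coerce(v))
                                     for k, v in other.items())
            else:
                copied.params = self._coerce(other)
            return copied

        @staticmethod
        def _coerce(value):
            if isinstance(value, Exception):
                return unicode(value)
            return value

        def translate(self, desired_locale=None):
            # Hypothetical catalog lookup; real code would locate the right
            # translation files for the domain.
            languages = [desired_locale] if desired_locale else None
            catalog = gettext.translation(self.domain, languages=languages,
                                          fallback=True)
            translated = catalog.ugettext(self.msgid)
            if self.params is not None:
                translated = translated % self.params
            return translated

        def __unicode__(self):
            # Default translation, so existing unicode(msg) call sites (and
            # exceptions sent over RPC) keep working for now.
            return self.translate()

With something along these lines the existing call sites keep rendering, while a later change can still phase unicode() out in favor of the explicit translate() interface.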
14:42:18 yeah that is another tricky one, and it comes back to the fact that we now depend on that formatter, or there will be no translation() and things would crash
14:42:26 due to the explicit translation required
14:42:34 i think if we allow unicode, this would not be a problem
14:42:39 but need to double check paths
14:42:45 i don't know why it's not used tbh
14:42:48 except that they would not have translated logs
14:42:53 but out of the box it's not used
14:43:07 do you feel up to proposing a patch to keystone to have it used?
14:43:22 dhellmann, yeah i could do that
14:43:32 cool -- add me as a reviewer
14:43:44 ok, the + operator
14:43:54 I thought we said we did need to go ahead and support that?
14:43:58 i was thinking that is the smaller issue, we just go in and change all to %
14:44:04 we said no add
14:44:06 only % and format
14:44:15 we can add add or change that to %
14:44:29 ok, it seems to me that the use of + in the specific example you've provided is a mistake anyway
14:44:32 line 22 says where we said no add
14:44:41 I believe you :-)
14:44:46 haha..
14:44:53 the reason i pointed out the line was b/c of the rationale
14:45:03 would it help to support __add__ but have it log a deprecated message?
14:45:31 it says that it would imply translation in improper ways
14:45:34 but can't remember what that was about
14:45:42 Do we have any idea how many places that's happening?
14:45:43 yeah, I'm thinking pragmatically again: can we get something working, then remove some of the parts we don't want long term?
14:45:48 i have only seen it in noa
14:45:50 *nova
14:45:51 Because it's definitely wrong to add translated strings.
14:45:57 are there a lot of cases like that in nova?
14:46:01 So that should be fixed anyway.
14:46:03 the whole exception module
14:46:10 k
14:46:14 anything outside of that module?
14:46:21 maybe beekneemech is right, and we should just fix nova first
14:46:31 i did not see, but there may still be stuff in there
14:46:38 ok
14:47:00 well, I'm willing to be flexible on __add__ and __radd__ if it gets us to the point where things mostly work, and we can phase out their use later
14:47:25 rather than trying to make it all work right at one time, if you know what I mean
14:47:33 Yeah, that makes sense to me.
14:47:35 yeah that makes sense
14:47:46 Logging the message would be nice so we can find out how much this is happening.
14:48:00 yeah
14:48:13 (a deprecation message, not a Message message :-)
14:48:19 right :-)
14:48:32 ok, luisg, did we cover all of the issues you had?
14:48:41 we didn't talk about the issue of sending exceptions through RPC
14:49:00 that would be fixed with unicode working i guess
14:49:07 supporting __unicode__ should fix that for now -- right
14:49:20 so still im worried about several things, 1) we can't log "non-translatables" 2) we depend on the context formatter and 3) strings don't support all operations, and 4) how to recover the Message after unicoding for the secondary log
14:49:25 and when we know what explicit interface we will provide, we can address it differently later
14:49:41 non-translatables?
14:49:42 i was wondering if u would allow me to send out a WIP with what I propose at the bottom
14:50:07 what is a non-translatable?
14:50:30 sorry, 1) is not an issue, only 4)
14:50:56 2) relying on that class to handle translation is just going to have to be ok
14:51:01 3) what operations?
14:51:12 4) why would we need to do that? what is a "secondary log"?
14:51:22 like these places where ppl use +
14:51:47 ok, we can generalize that approach: if you find other operators that are needed, support them with deprecation warnings
14:51:48 i thought we were considering the requirement where we would log in different languages
14:52:12 I thought we wanted to let the operator choose to log in one language, that might not be english
14:52:12 the default, but if there was another handler that wants to log in a diff lang than the default
14:52:50 in that case, the log record would be given to 2 formatters for 2 loggers and the translation would just happen twice, right?
14:52:51 there is another req to basically create different logs, e.g. one in INFO in the default locale, and a debug one in a diff one
14:53:05 although the formatter would have to get the language value from somewhere other than the locale
14:53:23 yeah, from the log config most likely
14:53:27 luisg, do we really need that "feature"?
14:53:27 let's get this working properly for one language at a time before we try to make it work in parallel
14:53:35 right
14:53:42 one step at a time
14:53:55 dims, it's a controversial one, but there was a blueprint pushed out from havana
14:54:41 I won't say no to supporting it, but like I said, let's get one language working first :-)
14:54:48 +1
14:54:52 ok, we're almost out of time and I need to talk schedule
14:54:53 luisg, agree with dhellmann
14:55:02 #topic scheduling
14:55:03 ok
14:55:06 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
14:55:09 #link https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Oslo
14:55:16 I need to know estimates for targets for the blueprints that are going into icehouse
14:55:44 so if you have a blueprint, either go ahead and set the target or email me the info (I'm not sure what permissions are required to change that field)
14:56:09 I'll be reviewing the summit etherpads today and at least setting the release on all of the blueprints
14:56:18 but we need to be more specific about which milestone, too
14:57:02 does anyone have any concerns about the blueprints based on what we discussed at the summit? are there any that we need to include that did not have a session?
14:57:07 I think we need to split oslo.config side effects into several blueprints
14:57:26 flaper87: good point, I have that on my list
14:58:32 I was also going to open blueprints for graduating each part of the incubator, but not assign them to icehouse until we think we're actually going to be able to do them
14:58:50 that will give us a place to track what would be required for graduation for each piece, though
14:59:23 sounds good, that'll also let us discuss the graduation aspects in the blueprint itself
14:59:28 for each module
14:59:33 right
14:59:48 ah damn, I didn't read your last message
14:59:51 lol
15:00:01 I'll take a stab at grouping some of the smaller modules into libraries, too, although that's up for discussion
15:00:22 Can we start discussing the grouping on an etherpad?
15:00:31 if there isn't one already
15:00:38 flaper87: sure, we could do it that way
15:00:49 I'll start one and send an email
15:00:51 to the ml
15:00:54 then we can translate that into a blueprint!
15:00:56 awesome
15:01:08 we're at the end of our time slot for the meeting
15:01:17 thanks everyone!
15:01:19 good meeting!
15:01:21 thanks guys!
15:01:29 markmc, sad to see the oslo.messaging abandoned :(
15:01:29 thank you!
15:01:35 #action dhellmann create etherpad to start grouping oslo modules into libraries
15:01:38 patch to nova i mean
15:01:44 temporarily
15:01:54 whew ok
15:01:57 good
15:02:06 #endmeeting
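For illustration, the "support + but emit a deprecation warning" compromise discussed during the Message topic could look roughly like this. It is a toy sketch in Python 2 style, not the agreed implementation: subclassing unicode, the warning text, and the example string are all assumptions.

    import warnings

    class Message(unicode):
        """Toy stand-in for the translatable message type, showing only the
        deprecation-warning behaviour for the + operator."""

        def _warn(self):
            warnings.warn('Message concatenation with + is deprecated; '
                          'interpolate one translatable string with % or '
                          '.format() instead.',
                          DeprecationWarning, stacklevel=3)

        def __add__(self, other):
            self._warn()
            # Concatenation loses translatability, so fall back to a plain
            # unicode string.
            return unicode(self) + unicode(other)

        def __radd__(self, other):
            self._warn()
            return unicode(other) + unicode(self)

    # Example: still works, but issues a DeprecationWarning (subject to the
    # active warning filters) pointing at the caller.
    msg = Message(u'Instance could not be found') + u': some detail'

The same pattern extends to any other operator that turns out to be needed: keep it working, warn, and use the warnings to find the call sites (e.g. in the nova exception module) that should move to % or .format() interpolation.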