16:00:47 #startmeeting oslo
16:00:47 Meeting started Mon Feb 29 16:00:47 2016 UTC and is due to finish in 60 minutes. The chair is dims. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:47 courtesy ping for GheRivero, amotoki, amrith, bknudson, bnemec, dansmith, dhellmann, dougwig, e0ne, flaper87, garyk, harlowja, haypo,
16:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:48 courtesy ping for ihrachyshka, jd__, jecarey, johnsom, jungleboyj, kgiusti, kragniz, lifeless, lintan, ozamiatin, redrobot, rpodolyaka, spamaps
16:00:48 courtesy ping for sergmelikyan, sreshetnyak, sileht, sreshetnyak, stevemar, therve, thinrichs, toabctl, viktors, zhiyan, zzzeek, gcb
16:00:50 courtesy ping for dukhlov, lxsli, rbradfor, mikal, nakato, tcammann1, browne,
16:00:51 hi
16:00:51 The meeting name has been set to 'oslo'
16:00:53 o/
16:00:53 o/
16:00:54 o/
16:00:55 o/
16:00:58 hello
16:01:03 o/
16:01:09 hi
16:01:13 o/
16:01:22 hi gcb johnsom rpodolyaka ozamiatin_ haypo jecarey toabctl rbradfor_
16:01:31 o/
16:01:35 o/
16:01:36 o/
16:01:39 ahoy
16:01:59 o/
16:02:02 hi everyone, let's get started
16:02:02 o/
16:02:06 #topic Red flags for/from liaisons
16:02:22 none for keystone. We have our own problems with leap days
16:02:27 Nothing to report this week
16:02:40 Nothing from Cinder.
16:02:44 we wrapped up all the Mitaka releases for oslo libraries, so no more changes unless we find bugs or do requirements updates
16:02:53 neutron here. we were wondering whether the oslo.config set_override behaviour shown here: https://review.openstack.org/#/c/285278/ is as designed.
16:02:55 bknudson johnsom jungleboyj thanks
16:02:58 don't approve new features?
16:03:11 neutron was hit by it in one of the patches on review, so we wanted to check with oslo.config folks
16:03:17 bknudson : yep, please don't approve new features until we have stable branches
16:03:25 dims: are you going to go through and -2?
16:03:56 bknudson : i could use help if you are volunteering? :)
16:04:18 dims: I can help
16:04:35 ./
16:04:40 trove got hit by some change to oslo.context
16:04:41 ihrachys: my understanding is that it's expected. the deprecation mechanism allows you to have old names in a config file, not in the code
16:04:44 am still gathering details
16:04:51 awesome, anyone else want to help? please let bknudson and i know
16:05:03 rpodolyaka: I see. makes sense I guess.
16:05:12 ihrachys : right, CONF.old_name will not work
16:05:15 ihrachys: not sure if it's documented, though
16:05:23 *the behaviour
16:05:33 dims: it may be a tiny bit problematic if external code accesses the options
16:06:06 * ihrachys believes that any code that directly relies on config options is broken by design, but not everyone in neutron is on the same page
16:06:10 ihrachys : right, this behavior is not recent, it's always been this way
16:06:11 amrith, are you referring to the roles[] added to context for policy changes?
16:06:29 ihrachys : let me poke through that after the meeting
16:06:32 ihrachys: ++
16:06:39 rbradfor_, I believe so, yes.
16:06:59 amrith, I thought projects got patches for that fix.
16:07:13 dims: great, thanks
16:07:19 as we use this for communication with the guest, this change breaks backward compatibility with old guest images.
16:07:34 yes, there's a patch. but nothing in the CI tests for backward compatibility
16:07:45 one more thing from neutron side is: https://review.openstack.org/#/c/282090/ but I believe we won't get it in this cycle. Basically, without it a new fixture for oslo.versionedobjects is not usable
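
For reference, a minimal sketch of the oslo.config behaviour ihrachys asked about above (the option names here are made up, not taken from the neutron patch): a deprecated_name is honoured when the old name appears in a configuration file, but code-level access, whether attribute lookup or set_override(), only knows the canonical name.

```python
from oslo_config import cfg

# Hypothetical option: 'report_interval' renamed from 'agent_report_interval'.
opts = [cfg.IntOpt('report_interval',
                   deprecated_name='agent_report_interval',
                   default=30)]

conf = cfg.ConfigOpts()
conf.register_opts(opts)
conf([])  # no CLI arguments or config files for this sketch

print(conf.report_interval)                # 30 -- canonical name works
conf.set_override('report_interval', 10)   # fine

# A config file line "agent_report_interval = 10" would still be accepted,
# but in code the deprecated name is unknown, so both of these raise
# cfg.NoSuchOptError:
#   conf.agent_report_interval
#   conf.set_override('agent_report_interval', 10)
```
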
16:07:47 it assumes that clients and servers are all upgrading in lock step
16:08:20 amrith : http://git.openstack.org/cgit/openstack/trove/tree/trove/common/context.py does not have roles explicitly so what broke? (some projects like heat had roles in their constructor)
16:09:15 amrith, I see your point about the need for backward compatibility. not sure if your problem is then roles related.
16:09:19 amrith : so let's do this on the oslo channel when you have some details
16:09:27 agree with rbradfor_
16:09:46 #topic Bugs needed to be fixed for Mitaka
16:09:46 #link https://review.openstack.org/#/c/285294/ (Revert "Use tempfile.tempdir for lock_path if OSLO_LOCK_PATH is not set")
16:09:54 we need consensus on that one
16:10:13 sc68cal, can you please explain a bit?
16:10:37 dims: i have one more fix here https://review.openstack.org/#/c/286093
16:10:56 we used to fail fast when OSLO_LOCK_PATH was not set, and an earlier review from sc68cal added code to default to the temp dir
16:11:29 Yeah - any consumer of oslo.concurrency has to understand lock_path
16:11:32 otherwise it blows up
16:11:34 ozamiatin_ : ack
16:11:54 gettempdir() does not return /tmp
16:12:07 print tempfile.gettempdir()
16:12:07 /var/folders/5t/rnpx9j6d137fxcxh81xkybp4qb2pvn/T
16:12:14 dhellmann : do you remember the previous discussions?
16:12:15 jimbobhickville: depends on your OS
16:12:19 ah
16:12:38 but it's different on every run of a process
16:12:45 and probably for python process forks too
16:12:50 jimbobhickville: what system are you running on?
16:12:58 yeah, that was on my mac, derp
16:13:00 carry on
16:13:02 My point being - if the default is an environment variable that probably nobody uses - and the program crashes or errors out if you don't set it - that's not a default
16:13:24 sc68cal : so essentially what that means is that there is no intra process locking as everyone is looking at a different directory
16:13:31 sorry, inter
16:13:42 o/
16:14:02 sc68cal : so if the code expects some interprocess locks to work it will fail
16:14:18 hi lxsli_web
16:14:19 It's not that it fails
16:14:36 there's a bit of code deeper in oslo.concurrency that actually makes lock_path a required option
16:14:49 take a look at the paste that I link in the commit message
16:15:08 the help text for the option says it's required to set it depending on how the library is used
16:15:33 it must only be needed for interprocess locks
16:15:37 sc68cal : ack. let me ping dhellmann and bnemec as well
16:15:40 couldn't we just make a default folder and set the permissions as we want them to be if it doesn't exist?
16:16:00 /tmp/oslo.locks or something?
16:16:03 So then why does it depend on an environment variable as a default? My point is - this library is not as helpful as it could be
16:16:22 Frankly I didn't care about IPC and locks - oslo.concurrency forced me to
16:16:31 I had assumed that was taken care of for me
16:16:46 since it's well... a library
16:17:06 instead - we're cargo culting these config options around in devstack
16:17:28 sc68cal : see my email from Dec 3, 2013 :) http://markmail.org/message/57fsfojqqshjbcc5
16:17:38 I'm of the opinion that you make it configurable, but do the right thing by default. If they don't specify a folder, create one and set up security correctly
16:17:59 jimbobhickville: exactly
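
To make the lock_path point concrete, a sketch using oslo.concurrency's lockutils (the directory is illustrative; deployers normally set [oslo_concurrency]/lock_path or the OSLO_LOCK_PATH environment variable): an external file lock only excludes other processes when every process resolves the same lock_path, so a per-process temp directory quietly means no interprocess locking at all.

```python
import tempfile

from oslo_concurrency import lockutils

# tempfile.gettempdir() depends on the platform and on TMPDIR, so two
# services can easily end up with different "default" directories.
print(tempfile.gettempdir())

# An external (file-based) lock serialises access across processes only
# if they all agree on lock_path. '/var/lock/mysvc' is just an example path.
with lockutils.lock('my-resource', external=True, lock_path='/var/lock/mysvc'):
    print('holding the interprocess lock')
```
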
16:18:16 "Still, obviously not cool when your locks don't lock, which is why we made the unpleasant change to require lock_path"
16:18:20 dims: mkdtemp works for me
16:18:27 http://markmail.org/message/ytw33eirpkgccedg
16:19:16 sc68cal : each process creates its own directory, and if there's code that you expect to lock it won't lock
16:19:41 jimbobhickville: what permissions would we set on /tmp/oslo.locks ? e.g. nova and cinder would probably be running on behalf of different users. make it writable for all?
16:19:53 rpodolyaka: sticky bit
16:20:24 sc68cal : jimbobhickville : let's get bnemec to chime in please
16:20:34 You can't automatically create a temporary directory at start. You're dealing with multiple processes, and they all have to point at the same directory.
16:20:43 ah there you are :)
16:20:44 And you can't use a known path because that becomes a security risk.
16:20:55 Because we're creating files there.
16:20:59 sc68cal: ah, missed that. yeah, that should help
16:21:26 So basically we know these issues exist, and we're throwing it on devstack and deployers to figure it out - despite oslo being a library?
16:21:31 I know there was a patch up that attempted to make it somewhat safer to use a known path by not overwriting any existing files in the lock path.
16:21:58 sc68cal: If you can come up with a solution that isn't a security risk and doesn't result in no locking, I'm all ears. :-)
16:22:07 zookeeper? :P
16:22:07 It's not as if we haven't tried to solve this before.
16:22:16 sc68cal : just laying out what the problems are
16:22:25 right bnemec
16:22:42 jimbobhickville : you can switch to zookeeper with tooz, yes
16:22:53 right - I understand that there is a security risk - but does that mean we're happy with what we have now?
16:23:17 An external locking service probably is the answer. Basically you need something outside the services themselves that knows how to coordinate between them.
16:23:22 sc68cal : so everyone who runs into it becomes aware that their locks may not work properly unless they deal with it
16:23:28 Right now it's the deployer giving them all a single path.
16:23:49 Potentially it could be a locking service like zookeeper that all the services talk to instead.
16:24:06 right, we can do this in N
16:24:23 dims: and don't you think a deployer would maybe get upset that openstack can't even do something this low level without him/her getting involved
16:24:26 oslo.concurrency -> tooz -> zookeeper
16:24:42 * bnemec would be perfectly happy if he never had to discuss file locking again :-)
16:25:07 sc68cal : it's up to puppet, fuel etc to set things up for deployers.. no?
16:25:30 sc68cal : we have a problem and we don't know the right solution
16:25:34 fuel and puppet are band-aids for this
16:25:42 just like devstack is a band-aid
16:25:58 Honestly we as a project need to make more of an effort to be sane by default
16:26:01 Wait, what? https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L44
16:26:10 When did that gettempdir go back in?
16:26:12 sc68cal : if you are ok with deployers taking the default and all their locks don't work that would be worse i'd think
16:26:24 bnemec : last week or so
16:26:35 Ah, hence this discussion. I get it now. :-)
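
The "just create a sane default directory" fallback floated above (something like /tmp/oslo.locks with the sticky bit) would look roughly like the sketch below. This is not what oslo.concurrency does, and bnemec's objection still applies: a predictable, world-writable path where lock files get created is a classic symlink/tmp-file attack surface.

```python
import os
import stat

# Hypothetical fallback, NOT current oslo.concurrency behaviour.
DEFAULT_LOCK_DIR = '/tmp/oslo.locks'

def ensure_shared_lock_dir(path=DEFAULT_LOCK_DIR):
    # Mode 1777, like /tmp itself: nova, cinder, etc. running as different
    # users can all create lock files, but the sticky bit stops them from
    # deleting each other's.
    mode = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO | stat.S_ISVTX
    try:
        os.mkdir(path, mode)
    except FileExistsError:
        pass
    os.chmod(path, mode)  # os.mkdir's mode is masked by umask, so reapply
    return path
```
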
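And a sketch of the external-coordination direction dims and bnemec point at for N ("oslo.concurrency -> tooz -> zookeeper"): the ZooKeeper endpoint, member id and lock name below are placeholders, and tooz supports other backends (redis, memcached, ...) as well.

```python
from tooz import coordination

# Placeholder endpoint and member id.
coordinator = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                           b'oslo-demo-worker-1')
coordinator.start()

lock = coordinator.get_lock(b'my-resource')
if lock.acquire(blocking=True):
    try:
        print('holding the distributed lock')  # critical section
    finally:
        lock.release()

coordinator.stop()
```
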
16:26:44 dims: that's the only solution? no default will ever work?
16:27:12 why can't we have a default that A) Works B) is reasonable
16:27:14 sc68cal : one suggestion above was to move away from file locks
16:27:38 sc68cal : so what's the solution?
16:27:54 that does both A and B? :)
16:28:15 dims: I don't know. I took a stab at it but looks like it wasn't good enough.
16:28:32 It turns out concurrency is hard. Who knew? :-)
16:28:51 sc68cal : right. the only thing we can think of right now is to switch to external services, which is a bit late for M
16:29:05 we've been struggling with this for a while sc68cal
16:29:22 Honestly, it's going to take significant effort to get off file locks even if you start at the beginning of a cycle.
16:29:38 right - i've seen the big threads on the ml.
16:29:48 There are services that implicitly rely on the file lock implementation details, unfortunately.
16:29:49 bnemec : the hardcore debug logging around file locks still gives me night mates
16:29:51 mares
16:30:13 would it be safe to use a directory in /tmp if that directory was owned by the user? you have to be root to change the owner anyways?
16:30:30 bknudson: All of the processes have to know to use that directory.
16:30:50 and must be running as users that have read/write on that directory too
16:31:09 * bnemec suspects we aren't going to solve this before the end of the meeting
16:31:20 we have a lot of options, and all of them suck
16:31:30 :D
16:31:34 Pretty much. :-)
16:31:51 bnemec : sc68cal : jimbobhickville : bknudson : so i'm going to +2A the revert for now for M
16:31:52 We picked the one that at least fails loudly if you do it wrong.
16:32:29 ok changing topics :)
16:32:31 #topic Austin Design Summit space needs
16:32:31 Fishbowl / Workroom / Meetup : 3 / 5 / 0
16:32:47 was the number of meetings we had last time enough?
16:33:18 any opinions?
16:33:47 do we skip the full day meetup?
16:34:12 newer cores... lxsli_web, rbradfor_ any thoughts?
16:34:33 If there's a meetup day I'll probably be meeting with keystone.
16:34:46 bknudson : ack
16:34:50 dims, I'm looking forward to being about to contribute. "what do you mean by full day meetup?"
16:35:10 s/about/able/
16:35:19 on the last day we can ask for a room where all oslo cores can hang out the whole day
16:36:00 Fishbowl is the biggest room, so it's more towards showcasing what we did / what we want people in other projects to do next
16:36:05 in addition to the 8 sessions?
16:36:10 I haven't seen any proposed topics for oslo so don't have any opinion about the sessions
16:36:29 workroom is a standard room with everyone around big tables
16:36:34 I wasn't able to make all the sessions last summit
16:36:34 dims: Is the etherpad for proposing sessions open now?
16:36:49 both workroom and fishbowl are around an hour each
16:37:12 jungleboyj : bknudson : we haven't started yet, will get that going today
16:37:47 dims: Cool. Thanks.
16:37:49 ok, so let's try to plan for what we had last time, get the etherpad populated and revisit if needed
16:38:02 #topic Using our CI instead of travis
16:38:03 #link https://etherpad.openstack.org/p/dims-periodic-jobs
16:38:19 So all the jobs are set up, links to jobs and logs are there.
16:38:42 the idea is that before we go request a release, we look at what failed and fix it before asking for a release
16:39:05 there's oslo.* master running against py27 and py34 of various projects
16:39:13 where do we look?
16:39:22 bknudson : see etherpad
16:39:59 if any liaisons want these jobs against their projects, please ping me or just add a project-config review
16:40:17 there's one job that runs dsvm+tempest against oslo.* master as well
16:40:56 that one uses neutron as well
16:41:42 the health check url is a bit wonky as something that collects stats has had problems the last week
16:41:49 http://logs.openstack.org/periodic/
16:42:05 so here's for keystone http://logs.openstack.org/periodic/periodic-keystone-py27-with-oslo-master/
16:42:06 periodic-.*-py27-with-oslo-master/
16:42:18 periodic-tempest-dsvm-oslo-latest-full-master/
16:42:20 yep
16:42:37 it would fail if it ran today
16:42:45 not because of oslo
16:42:57 bknudson : yep
16:43:02 the leap year problem
16:43:37 dims: I will check with Cinder and see if that is something we want to add.
16:43:46 So no more travis and all of us have to go look at stuff
16:44:04 jungleboyj : +1
16:44:14 jungleboyj : very easy to add it
16:44:50 jungleboyj : then you can blame the person requesting a release that they did not do due diligence :)
16:44:58 Cool. I will check into it.
16:45:10 thanks jungleboyj
16:45:23 #topic Open discussion
16:45:58 please check g-r release version ranges and make sure they reflect what you need in various projects
16:46:12 this week is the requirements freeze
16:46:57 gcb johnsom rpodolyaka ozamiatin_ haypo jecarey toabctl rbradfor_ : anything else?
16:47:03 dims, what's the policy for new reviews (work not for M)? do we mark them as WIP, or do we wait to submit?
16:47:05 Nope
16:47:19 No
16:47:22 no
16:47:27 no
16:47:37 kgiusti : jungleboyj : jimbobhickville : sc68cal : ihrachys : anything else?
16:47:43 rbradfor_ : WIP should be fine
16:47:54 nope - all good.
16:47:54 dims, good to know, thanks
16:47:56 not from me
16:48:00 bknudson and i will go around -2'ing too
16:48:23 dims: https://etherpad.openstack.org/p/oslo-mitaka-freeze
16:48:28 dims: Nothing from me.
16:48:34 also, I need to reboot for security patches
16:48:47 ok thanks everyone
16:48:55 talk to you next week.
16:48:58 #endmeeting