16:00:47 <dims> #startmeeting oslo
16:00:47 <openstack> Meeting started Mon Feb 29 16:00:47 2016 UTC and is due to finish in 60 minutes.  The chair is dims. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:47 <dims> courtesy ping for GheRivero, amotoki, amrith, bknudson, bnemec, dansmith, dhellmann, dougwig, e0ne, flaper87, garyk, harlowja, haypo,
16:00:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:48 <dims> courtesy ping for ihrachyshka, jd__, jecarey, johnsom, jungleboyj, kgiusti, kragniz, lifeless, lintan, ozamiatin, redrobot, rpodolyaka, spamaps
16:00:48 <dims> courtesy ping for sergmelikyan, sreshetnyak, sileht, stevemar, therve, thinrichs, toabctl, viktors, zhiyan, zzzeek, gcb
16:00:50 <dims> courtesy ping for dukhlov, lxsli, rbradfor, mikal, nakato, tcammann1, browne,
16:00:51 <bknudson> hi
16:00:51 <openstack> The meeting name has been set to 'oslo'
16:00:53 <gcb> o/
16:00:53 <johnsom> o/
16:00:54 <rpodolyaka> o/
16:00:55 <ozamiatin_> o/
16:00:58 <haypo> hello
16:01:03 <jecarey> o/
16:01:09 <toabctl> hi
16:01:13 <rbradfor_> o/
16:01:22 <dims> hi gcb johnsom rpodolyaka ozamiatin_ haypo jecarey toabctl rbradfor_
16:01:31 <kgiusti> o/
16:01:35 <sc68cal> o/
16:01:36 <jungleboyj> o/
16:01:39 <jimbobhickville> ahoy
16:01:59 <stevemar> o/
16:02:02 <dims> hi everyone, let's get started
16:02:02 <ihrachys> o/
16:02:06 <dims> #topic Red flags for/from liaisons
16:02:22 <bknudson> none for keystone. We have our own problems with leap days
16:02:27 <johnsom> Nothing to report this week
16:02:40 <jungleboyj> Nothing from Cinder.
16:02:44 <dims> we wrapped up all the Mitaka releases for oslo libraries, so no more changes unless we find bugs or do requirements updates
16:02:53 <ihrachys> neutron here. we were wondering whether oslo.config set_override behaviour shown here: https://review.openstack.org/#/c/285278/ is as designed.
16:02:55 <dims> bknudson johnsom jungleboyj thanks
16:02:58 <bknudson> don't approve new features?
16:03:11 <ihrachys> neutron was hit by it in one of patches on review, so we wanted to check with oslo.config folks
16:03:17 <dims> bknudson : yep, please don't approve new features until we have stable branches
16:03:25 <bknudson> dims: are you going to go through and -2?
16:03:56 <dims> bknudson :  i could use help if you are volunteering? :)
16:04:18 <bknudson> dims: I can help
16:04:35 <amrith> o/
16:04:40 <amrith> trove got hit by some change to oslo.context
16:04:41 <rpodolyaka> ihrachys: my understanding is that it's expected. deprecating mechanism allows you to have old names in a config file, not in the code
16:04:44 <amrith> am still gathering details
16:04:51 <dims> awesome, anyone else wants to help? please let bknudson and i know
16:05:03 <ihrachys> rpodolyaka: I see. makes sense I guess.
16:05:12 <dims> ihrachys : right CONF.old_name will not work
16:05:15 <rpodolyaka> ihrachys: not sure if it's documented, though
16:05:23 <rpodolyaka> *the behaviour
16:05:33 <ihrachys> dims: it may be a tiny bit problematic if external code accesses the options
16:06:06 * ihrachys believes that any code that directly relies on config options is broken by design but not everyone in neutron is on the same page
16:06:10 <dims> ihrachys : right, this behavior is not recent, it's always been this way
16:06:11 <rbradfor_> amrith, are you referring to the roles[] added to context for policy changes?
16:06:29 <dims> ihrachys : let me poke through that after the meeting
16:06:32 <sc68cal> ihrachys: ++
16:06:39 <amrith> rbradfor_, I believe so, yes.
16:06:59 <rbradfor_> amrith, I thought projects got patches for that fix.
16:07:13 <ihrachys> dims: great, thanks
16:07:19 <amrith> as we use this for communication with the guest, this change breaks backward compatibility with old guest images.
16:07:34 <amrith> yes, there's a patch. but nothing in the CI tests for backward compatibility
16:07:45 <ihrachys> one more thing from neutron side is: https://review.openstack.org/#/c/282090/ but I believe we won't get it in this cycle. basically, without it a new fixture for oslo.versionedobjects is not useable
16:07:47 <amrith> it assumes that clients and servers are all upgrading in lock step
16:08:20 <dims> amrith : http://git.openstack.org/cgit/openstack/trove/tree/trove/common/context.py does not have roles explicitly so what broke? (some projects like heat had roles in their constructor)
16:09:15 <rbradfor_> amrith, I see your point about need for backward compatibility. not sure if your problem is then roles related.
16:09:19 <dims> amrith : so let's do this on oslo channel when you have some details
16:09:27 <dims> agree with rbradfor_
16:09:46 <dims> #topic Bugs needed to be fixed for Mitaka
16:09:46 <dims> #link https://review.openstack.org/#/c/285294/  (Revert "Use tempfile.tempdir for lock_path if OSLO_LOCK_PATH is not set")
16:09:54 <dims> we need consensus on that one
16:10:13 <dims> sc68cal, can you please explain a bit?
16:10:37 <ozamiatin_> dims: i have one more fix here https://review.openstack.org/#/c/286093
16:10:56 <dims> we used to fail fast when OSLO_LOCK_PATH was not set and an earlier review from sc68cal added code to default to temp dir
16:11:29 <sc68cal> Yeah - any consumer of oslo.concurrency has to understand lock_path
16:11:32 <sc68cal> otherwise it blows up
16:11:34 <dims> ozamiatin_ : ack
16:11:54 <jimbobhickville> gettempdir() does not return /tmp
16:12:07 <jimbobhickville> print tempfile.gettempdir()
16:12:07 <jimbobhickville> /var/folders/5t/rnpx9j6d137fxcxh81xkybp4qb2pvn/T
16:12:14 <dims> dhellmann : do you remember the previous discussions?
16:12:15 <sc68cal> jimbobhickville: depends on your OS
16:12:19 <jimbobhickville> ah
16:12:38 <rpodolyaka> but it's different on every run of a process
16:12:45 <rpodolyaka> and probably for python processes forks too
16:12:50 <bknudson> jimbobhickville: what system are you running on?
16:12:58 <jimbobhickville> yeah, that was on my mac, derp
16:13:00 <jimbobhickville> carry on
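[editor's note: jimbobhickville's paste illustrates that tempfile.gettempdir() is platform- and environment-dependent — it honours TMPDIR/TEMP/TMP and falls back to a platform default (/tmp on Linux, /var/folders/... on macOS) — so it is not a stable rendezvous point for multiple processes unless they all inherit the same environment. A stdlib sketch, assuming a Unix-like system:]

```python
import os
import tempfile

# Force a known value for the demo; gettempdir() caches its result in
# tempfile.tempdir, so clear the cache before re-reading the environment.
os.environ['TMPDIR'] = '/tmp'
tempfile.tempdir = None
print(tempfile.gettempdir())  # now reflects TMPDIR rather than the OS default
```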
16:13:02 <sc68cal> My point being - if the default is an environment variable that probably nobody uses - and the program crashes or errors out if you don't set it - that's not a default
16:13:24 <dims> sc68cal : so essentially what that means is that there is no interprocess locking as everyone is looking at a different directory
16:13:42 <lxsli_web> o/
16:14:02 <dims> sc68cal : so if the code expects some interprocess locks to work it will fail
16:14:18 <dims> hi lxsli_web
16:14:19 <sc68cal> It's not that it fails
16:14:36 <sc68cal> there's a bit of code deeper in oslo.concurrency that actually makes lock_path a required option
16:14:49 <sc68cal> take a look at the paste that I link in the commit message
16:15:08 <bknudson> the help text for the option says it's required to set it depending on how the library is used
16:15:33 <rpodolyaka> it must only be needed for interprocess locks
16:15:37 <dims> sc68cal : ack. let me ping dhellmann and bnemec as well
16:15:40 <jimbobhickville> couldn't we just make a default folder and set the permissions as we want them to be if it doesn't exist?
16:16:00 <jimbobhickville> /tmp/oslo.locks or something?
16:16:03 <sc68cal> So then why does it depend on an environment variable as a default? My point is - this library is not as helpful as it could be
16:16:22 <sc68cal> Frankly I didn't care about IPC and locks - oslo.concurrency forced me to
16:16:31 <sc68cal> I had assumed that was taken care of for me
16:16:46 <sc68cal> since it's well... a library
16:17:06 <sc68cal> instead - we're cargo culting these config options around in devstack
16:17:28 <dims> sc68cal : see my email from Dec 3, 2013 :) http://markmail.org/message/57fsfojqqshjbcc5
16:17:38 <jimbobhickville> I'm of the opinion that you make it configurable, but do the right thing by default.  if they don't specify a folder, create one and set up security correctly
16:17:59 <sc68cal> jimbobhickville: exactly
16:18:16 <dims> "Still, obviously not cool when your locks don't lock, which is why we made the unpleasant change to require lock_path"
16:18:20 <sc68cal> dims: mkdtemp works for me
16:18:27 <dims> http://markmail.org/message/ytw33eirpkgccedg
16:19:16 <dims> sc68cal : each process creates its own directory and if there's code that you expect to lock it won't lock
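[editor's note: dims' point can be demonstrated with a stdlib sketch (fcntl, Unix-only). File locks rendezvous on a path, so if each process picks its own mkdtemp() directory, two lockers never contend and "the locks don't lock". This is an illustration of the failure mode, not oslo.concurrency's actual implementation.]

```python
import fcntl
import os
import tempfile

def try_lock(lock_dir, name):
    """Try a non-blocking exclusive flock on lock_dir/name; return fd or None."""
    fd = os.open(os.path.join(lock_dir, name), os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except OSError:
        os.close(fd)
        return None

# Shared lock_path: the second locker is correctly excluded.
shared = tempfile.mkdtemp()
first = try_lock(shared, 'resource')
second = try_lock(shared, 'resource')
print(second is None)  # True: the lock actually locks

# Per-process mkdtemp() dirs: both lockers "succeed", i.e. no locking at all.
a_dir, b_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
print(try_lock(a_dir, 'resource') is not None and
      try_lock(b_dir, 'resource') is not None)  # True: locks don't lock
```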
16:19:41 <rpodolyaka> jimbobhickville: what permissions would we set on /tmp/oslo.locks  ? e.g. nova and cinder would probably be running on behalf of different users. make it writable for all?
16:19:53 <sc68cal> rpodolyaka: sticky bit
16:20:24 <dims> sc68cal : jimbobhickville : let's get bnemec to chime in please
16:20:34 <bnemec> You can't automatically create a temporary directory at start.  You're dealing with multiple processes, and they all have to point at the same directory.
16:20:43 <dims> ah there you are :)
16:20:44 <bnemec> And you can't use a known path because that becomes a security risk.
16:20:55 <bnemec> Because we're creating files there.
16:20:59 <rpodolyaka> sc68cal: ah, missed that. yeah, that should help
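[editor's note: sc68cal's sticky-bit suggestion, sketched below with the stdlib. A world-writable directory with the sticky bit set (mode 1777, like /tmp itself) lets services running as different users create lock files while preventing them from unlinking each other's files. The path here is a stand-in; bnemec's objection below — that any well-known path is a security risk — still applies.]

```python
import os
import stat
import tempfile

# Stand-in for a fixed, well-known path such as /tmp/oslo.locks.
lock_dir = os.path.join(tempfile.mkdtemp(), 'oslo.locks')
os.mkdir(lock_dir)

# rwxrwxrwt: world-writable plus the sticky bit, like /tmp itself.
os.chmod(lock_dir, 0o1777)

mode = os.stat(lock_dir).st_mode
print(bool(mode & stat.S_ISVTX))  # sticky bit set: only a file's owner may unlink it
```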
16:21:26 <sc68cal> So basically we know these issues exist, and we're throwing it on devstack and deployers to figure it out - despite oslo being a library?
16:21:31 <bnemec> I know there was a patch up that attempted to make it somewhat safer to use a known path by not overwriting any existing files in the lock path.
16:21:58 <bnemec> sc68cal: If you can come up with a solution that isn't a security risk and doesn't result in no locking, I'm all ears. :-)
16:22:07 <jimbobhickville> zookeeper? :P
16:22:07 <bnemec> It's not as if we haven't tried to solve this before.
16:22:16 <dims> sc68cal : just laying out what the problems are
16:22:25 <dims> right bnemec
16:22:42 <dims> jimbobhickville : you can switch to zookeeper with tooz. yes
16:22:53 <sc68cal> right - I understand that there are security risks - but does that mean we're happy with what we have now?
16:23:17 <bnemec> An external locking service is probably the answer.  Basically you need something outside the services themselves that knows how to coordinate between them.
16:23:22 <dims> sc68cal : so everyone who runs into it gets aware that their locks may not work properly unless they deal with it
16:23:28 <bnemec> Right now it's the deployer giving them all a single path.
16:23:49 <bnemec> Potentially it could be a locking service like zookeeper that all the services talk to instead.
16:24:06 <dims> right, we can do this in N
16:24:23 <sc68cal> dims: and don't you think a deployer would maybe get upset that openstack can't even do something this low level without him/her getting involved
16:24:26 <dims> oslo.concurrency -> tooz -> zookeeper
16:24:42 * bnemec would be perfectly happy if he never had to discuss file locking again :-)
16:25:07 <dims> sc68cal : it's up to puppet, fuel etc to set things up for deployers.. no?
16:25:30 <dims> sc68cal : we have a problem and we don't know the right solution
16:25:34 <sc68cal> fuel and puppet are band-aids for this
16:25:42 <sc68cal> just like devstack is a band-aid
16:25:58 <sc68cal> Honestly we as a project need to make more of an effort to be sane by default
16:26:01 <bnemec> Wait, what?  https://github.com/openstack/oslo.concurrency/blob/master/oslo_concurrency/lockutils.py#L44
16:26:10 <bnemec> When did that gettempdir go back in?
16:26:12 <dims> sc68cal : if you are ok with deployers taking the default and all their locks don't work that would be worse i'd think
16:26:24 <dims> bnemec : last week or so
16:26:35 <bnemec> Ah, hence this discussion.  I get it now. :-)
16:26:44 <sc68cal> dims: that's the only solution? no default will ever work?
16:27:12 <sc68cal> why can't we have a default that A) Works B) is reasonable
16:27:14 <dims> sc68cal : one suggestion above was to move away from file locks
16:27:38 <dims> sc68cal : so what's the solution?
16:27:54 <dims> that does both A and B? :)
16:28:15 <sc68cal> dims: I don't know. I took a stab at it but looks like it wasn't good enough.
16:28:32 <bnemec> It turns out concurrency is hard.  Who knew? :-)
16:28:51 <dims> sc68cal : right. only thing we can think of right now is switch to external services which is a bit late for M
16:29:05 <dims> we've been struggling with this for a while sc68cal
16:29:22 <bnemec> Honestly, it's going to take significant effort to get off file locks even if you start at the beginning of a cycle.
16:29:38 <sc68cal> right - i've seen the big threads on the ml.
16:29:48 <bnemec> There are services that implicitly rely on the file lock implementation details, unfortunately.
16:29:49 <dims> bnemec : the hard-core debugging/logging around file locks still gives me nightmares
16:30:13 <bknudson> would it be safe to use a directory in /tmp if that directory was owned by the user? you have to be root to change the owner anyways?
16:30:30 <bnemec> bknudson: All of the processes have to know to use that directory.
16:30:50 <dims> and must be running as users that have read/write on that directory too
16:31:09 * bnemec suspects we aren't going to solve this before the end of the meeting
16:31:20 <jimbobhickville> we have a lot of options, and all of them suck
16:31:30 <jimbobhickville> :D
16:31:34 <bnemec> Pretty much. :-)
16:31:51 <dims> bnemec : sc68cal : jimbobhickville : bknudson : so i'm going to +2A the revert for now for M
16:31:52 <bnemec> We picked the one that at least fails loudly if you do it wrong.
16:32:29 <dims> ok changing topics :)
16:32:31 <dims> #topic Austin Design Summit space needs
16:32:31 <dims> Fishbowl / Workroom / Meetup : 3  / 5  / 0
16:32:47 <dims> was the number of meetings we had last time enough?
16:33:18 <dims> any opinions?
16:33:47 <dims> do we skip the full day meetup?
16:34:12 <dims> newer cores... lxsli_web, rbradfor_ any thoughts?
16:34:33 <bknudson> If there's a meetup day I'll probably be meeting with keystone.
16:34:46 <dims> bknudson : ack
16:34:50 <rbradfor_> dims, I'm looking forward to being able to contribute. what do you mean by full day meetup?
16:35:19 <dims> on the last day we can ask for a room where all oslo cores can hang out the whole day
16:36:00 <dims> Fishbowl is the biggest room, so it's more towards showcasing what we did/ what we want people in other projects to do next
16:36:05 <rbradfor_> in addition to the 8 sessions?
16:36:10 <bknudson> I haven't seen any proposed topics for oslo so don't have any opinion about the sessions
16:36:29 <dims> workroom is a standard room with everyone around big tables
16:36:34 <bknudson> I wasn't able to make all the sessions last summit
16:36:34 <jungleboyj> dims: Is the etherpad for proposing sessions open now?
16:36:49 <dims> both workroom and fishbowl are around an hour each
16:37:12 <dims> jungleboyj : bknudson : we haven't started yet, will get that going today
16:37:47 <jungleboyj> dims: Cool.  Thanks.
16:37:49 <dims> ok, so let's try to plan for what we had last time and get the etherpad populated and revisit if needed
16:38:02 <dims> #topic Using our CI instead of travis
16:38:03 <dims> #link https://etherpad.openstack.org/p/dims-periodic-jobs
16:38:19 <dims> So all the jobs are set up, links to jobs and logs are there.
16:38:42 <dims> the idea is that before we go request a release, we look at what failed and fix it before asking for a release
16:39:05 <dims> there's oslo.* master running against py27 and py34 of various projects
16:39:13 <bknudson> where do we look?
16:39:22 <dims> bknudson : see etherpad
16:39:59 <dims> if any liaisons want these jobs against their projects, please ping me or just add project-config review
16:40:17 <dims> there's one job that runs dsvm+tempest against oslo.* master as well
16:40:56 <dims> that one uses neutron as well
16:41:42 <dims> the health check url is a bit wonky as something that collects stats has had problems the last week
16:41:49 <dims> http://logs.openstack.org/periodic/
16:42:05 <bknudson> so here's for keystone http://logs.openstack.org/periodic/periodic-keystone-py27-with-oslo-master/
16:42:06 <dims> periodic-.*-py27-with-oslo-master/
16:42:18 <dims> periodic-tempest-dsvm-oslo-latest-full-master/
16:42:20 <dims> yep
16:42:37 <bknudson> it would fail if it ran today
16:42:45 <bknudson> not because of oslo
16:42:57 <dims> bknudson : yep
16:43:02 <dims> the leap year problem
16:43:37 <jungleboyj> dims: I will check with Cinder and see if that is something we want to add.
16:43:46 <dims> So no more travis and all of us have to go look at stuff
16:44:04 <dims> jungleboyj : +1
16:44:14 <dims> jungleboyj : very easy to add it
16:44:50 <dims> jungleboyj : then you can blame the person requesting a release for not doing due diligence :)
16:44:58 <jungleboyj> Cool.  I will check into it.
16:45:10 <dims> thanks jungleboyj
16:45:23 <dims> #topic Open discussion
16:45:58 <dims> please check g-r release version ranges and make sure they reflect what you need in various projects
16:46:12 <dims> this week is the requirements freeze
16:46:57 <dims> gcb johnsom rpodolyaka ozamiatin_ haypo jecarey toabctl rbradfor_ : anything else?
16:47:03 <rbradfor_> dims, what's the policy for new reviews (work not for M) do we mark as WIP, or do we wait to submit?
16:47:05 <johnsom> Nope
16:47:19 <jecarey> No
16:47:22 <rpodolyaka> no
16:47:27 <gcb> no
16:47:37 <dims> kgiusti : jungleboyj : jimbobhickville : sc68cal : ihrachys : anything else?
16:47:43 <dims> rbradfor_ : WIP should be fine
16:47:54 <kgiusti> nope - all good.
16:47:54 <rbradfor_> dims, good to know, thanks
16:47:56 <sc68cal> not from me
16:48:00 <dims> bknudson and i will go around -2'ing too
16:48:23 <bknudson> dims: https://etherpad.openstack.org/p/oslo-mitaka-freeze
16:48:28 <jungleboyj> dims: Nothing from me.
16:48:34 <bknudson> also, I need to reboot for security patches
16:48:47 <dims> ok thanks everyone
16:48:55 <dims> talk to you next week.
16:48:58 <dims> #endmeeting