20:00:29 <ying_zuo> #startmeeting horizon
20:00:30 <openstack> Meeting started Wed Nov 29 20:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is ying_zuo. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:31 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:33 <openstack> The meeting name has been set to 'horizon'
20:00:41 <ying_zuo> Hi everyone o/
20:00:47 <rdopiera> o/
20:00:51 <e0ne> hi
20:01:51 <ying_zuo> #topic Queens-2 milestone
20:01:57 <ying_zuo> I plan to tag the release next Tuesday or Wednesday
20:02:06 <ying_zuo> here are the tickets planned for this milestone
20:02:20 <ying_zuo> #link https://launchpad.net/horizon/+milestone/queens-2
20:03:02 <ying_zuo> #topic Fetch resources in parallel discussion (again) (e0ne)
20:03:28 <e0ne> I want to get some feedback on this proposal
20:03:38 <e0ne> #link http://lists.openstack.org/pipermail/openstack-dev/2017-October/124129.html
20:04:02 <e0ne> there are a number of patches up for review and some are already merged
20:04:23 <e0ne> but we still don't have a decision on how to go forward with futurist
20:06:20 <ying_zuo> are we discussing a specific patch?
20:07:01 <e0ne> ying_zuo: you can find patches in scope of https://blueprints.launchpad.net/horizon/+spec/fetch-resources-in-parallel
20:07:29 <e0ne> but we need general agreement on how to deal with futurist in horizon
20:08:05 <rdopiera> e0ne: what I'm concerned with is how this works with the web servers, such as apache, which do their own thread management
20:08:41 <e0ne> rdopiera: for now, we use python threads and allow them in mod_wsgi config
20:08:57 <e0ne> rdopiera: my proposal is to use greenthreads instead
20:09:17 <rdopiera> e0ne: you don't need stackless python for that?
20:09:44 <e0ne> rdopiera: no, just eventlet + futurist(it's already in our requirements.txt)
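The proposal above (fanning a view's API calls out to an executor instead of calling them serially) can be sketched with the stdlib `concurrent.futures`; futurist's `ThreadPoolExecutor` and `GreenThreadExecutor` expose the same `submit()`/`result()` interface, so this is the pattern regardless of which executor is chosen. The three fetch functions here are hypothetical stand-ins for the nova/cinder/glance client calls a Horizon view would make:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the API calls a Horizon view makes;
# a real view would call the nova, cinder and glance clients here.
def get_instances():
    return ["vm-1", "vm-2"]

def get_volumes():
    return ["vol-1"]

def get_images():
    return ["img-1"]

# Fan the three calls out to worker threads and block until all
# results are in; total wall time is roughly max() of the call
# latencies instead of their sum.
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(f)
               for f in (get_instances, get_volumes, get_images)]
    instances, volumes, images = [f.result() for f in futures]

print(instances, volumes, images)
```

With futurist, swapping `ThreadPoolExecutor` for `GreenThreadExecutor` keeps this code shape unchanged; only the concurrency backend (OS threads vs. eventlet greenthreads) differs.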
20:10:46 <ying_zuo> amotoki asked for the performance data on the patch/blueprint https://review.openstack.org/#/c/426493/
20:11:09 <e0ne> ying_zuo: there are performance testing results in my email
20:11:28 <e0ne> amotoki asked it before I sent a message to openstack-dev
20:11:51 <ying_zuo> I see. can you reply there with the link?
20:12:09 <e0ne> #link https://docs.google.com/spreadsheets/d/14zDpdkPUfGDR_ZycaGUoT64kHsGi1gL9JOha8n5PVeg/edit?usp=sharing
20:12:50 <e0ne> I added the link ^^ to the blueprint
20:14:13 <amotoki> hi
20:14:14 <amotoki> i managed to wake up earlier
20:14:54 <amotoki> e0ne: do you have some data without futurist too?
20:14:59 <ying_zuo> good morning :)
20:15:06 <e0ne> amotoki: good morning! I hope, you enjoyed your morning coffee
20:15:21 <e0ne> amotoki: no, why do we need it?
20:15:47 <e0ne> amotoki: futurist is just a wrapper over threading libraries
20:16:24 <amotoki> in my understanding, we first need to discuss whether futurist (or parallel loading) is needed or not.
20:16:31 <e0ne> amotoki: I hope, everybody agrees that parallel API calls are better for our case
20:17:02 <amotoki> even though it speeds up loading time per user, i am not sure how this affects overall performance.
20:17:20 <e0ne> amotoki: I don't get your point
20:17:52 <amotoki> in general, web server serves several requests in parallel.
20:18:12 <cshen_> do we have benchmark for the current used libraries?
20:18:23 <amotoki> total throughput and response time per request is a different thing.
20:18:30 <e0ne> cshen_: yes, there's a link above
20:19:06 <amotoki> i am wondering how much additional overhead futurist introduces.
20:19:36 <rdopiera> if I remember correctly, we started the whole AngularJS rewrite to get parallel requests
20:19:39 <e0ne> amotoki: there is almost no overhead for horizon
20:20:04 <e0ne> rdopiera: yes, but I propose to speed-up our server-side code too
20:20:50 <rdopiera> e0ne: sure
20:21:03 <ying_zuo> it would be good to get some data on how much performance gain this will give
20:21:35 <e0ne> #link https://github.com/openstack/horizon/blob/04894a0858adda0ae161d9ac1f1a8da017ba1c74/openstack_dashboard/dashboards/project/instances/views.py#L122
20:21:43 <e0ne> ^^ that's how we use futurist now
20:21:47 <amotoki> okay, so have we agreed introducing parallel loading using futurist as a baseline?
20:22:17 <e0ne> @ying_zuo: did you see my spreadsheet with tests results?
20:22:21 <cshen_> e0ne, how many API calls were in your benchmark?
20:22:31 <e0ne> amotoki: I hope so...
20:22:48 <amotoki> anyway some baseline performance data (without parallel loading) would be useful
20:22:53 <rdopiera> I wonder if we should have settings like max_workers exposed in the settings, instead of hardcoded...
20:22:57 <ying_zuo> e0ne: yes, I mean compared with no parallel loading
20:23:23 <rdopiera> in particular, to give the admins the ability to limit or even disable parallel loading
20:23:26 <e0ne> cshen_: it was 3 API calls
20:23:59 <amotoki> rdopiera: are you talking about max_workers of ThreadPoolExecutor ?
20:24:21 <e0ne> rdopiera: I still don't understand why we need to have this configurable
20:24:42 <rdopiera> amotoki: I'm not sure about how that particular option works, I'm thinking about the ability to limit the number of concurrent requests, not sure how we would do it exactly
20:25:03 <e0ne> ying_zuo: I don't have such data now, but I'm sure it will be slower (I did some testing in the past)
20:25:42 <rdopiera> e0ne: with certain network settings you don't want to have too many transfers at once
20:26:02 <rdopiera> e0ne: that is also why web browsers only fetch up to 4 resources at a time
20:26:22 <e0ne> rdopiera: that's not true for current web browsers
20:26:44 <rdopiera> e0ne: the option is there
20:27:26 <e0ne> rdopiera: seriously, if your production network and APIs don't allow 3-4 requests in parallel, it's not a problem of horizon
20:28:06 <rdopiera> e0ne: seriously, if horizon doesn't work at customer's site and there is no option to make it work, it is a problem of Horizon
20:28:43 <e0ne> I can't argue
20:28:58 <rdopiera> e0ne: you are making assumptions about how horizon is being used that are not necessarily true
20:29:39 <e0ne> rdopiera: I agree
20:29:44 <rdopiera> I'm also thinking that the ability to limit it to 1 request at a time may be useful to help debug race conditions
20:29:59 <e0ne> good point
20:30:04 <rdopiera> especially when there are multiple requests to the same service
20:30:12 <rdopiera> or to services that use the same resource
20:30:20 <e0ne> but we won't have race conditions with only 1 request
20:30:56 <rdopiera> exactly -- so if the problem goes away when you limit it to 1 request, you know it's a race condition :)
20:31:06 <e0ne> :)
20:31:13 <rdopiera> so far most use cases are for reading, so it's not very risky
20:31:40 <rdopiera> but we might at some point also use it for requests that modify data
20:32:22 <ying_zuo> so we should make that configurable
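The operator-facing knob discussed above could look like the sketch below. Note this is purely hypothetical: `API_CALL_MAX_WORKERS` is an invented setting name, not an existing Horizon setting, and `Settings` stands in for a Django settings object. A value of 1 disables parallel loading entirely, which is also the race-condition debugging mode rdopiera describes:

```python
# Hypothetical sketch: read the pool size from a Django-style settings
# object. API_CALL_MAX_WORKERS is an invented name, not a real Horizon
# setting; max_workers=1 effectively disables parallel loading.
class Settings:
    API_CALL_MAX_WORKERS = 1  # operator-set value in local_settings.py

def get_max_workers(settings, default=4):
    # Fall back to a default when the operator hasn't set anything.
    return getattr(settings, "API_CALL_MAX_WORKERS", default)

print(get_max_workers(Settings()))   # operator override
print(get_max_workers(object()))     # unset: falls back to the default
```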
20:32:45 <ying_zuo> anything else on this topic?
20:33:07 <e0ne> ying_zuo: do we have any decision on this topic?
20:33:58 <ying_zuo> ThreadPoolExecutor is being used in horizon now, do you want to change it to something else?
20:34:08 <amotoki> i am not sure whether max_workers=1 and serial API calls are the same. Does max_workers=1 create one new thread?
20:34:24 <e0ne> amotoki: in the current case - yes
20:34:34 <e0ne> I mean with futurist
20:35:32 <amotoki> in such case, does futurist execute each submit() one by one in a single worker?
20:36:21 <e0ne> it does, AFAIR
20:37:33 <amotoki> thanks
20:38:13 <e0ne> np
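The behaviour amotoki asked about can be verified directly: with `max_workers=1` a single worker thread drains the queue, so submitted callables run one by one in submission order. This demo uses the stdlib `concurrent.futures` executor; futurist's `ThreadPoolExecutor` wraps the same semantics:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# With max_workers=1 there is exactly one worker thread, so submitted
# callables execute sequentially in FIFO submission order.
order = []
workers = set()

def task(n):
    order.append(n)
    workers.add(threading.current_thread().name)

with ThreadPoolExecutor(max_workers=1) as executor:
    for n in range(5):
        executor.submit(task, n)
# Leaving the with-block waits for all submitted tasks to finish.

print(order)         # tasks completed in submission order
print(len(workers))  # a single worker thread handled everything
```

So `max_workers=1` is serial execution, just on one extra thread rather than the request-handling thread itself.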
20:39:28 <amotoki> regarding ying_zuo's question, I am not sure which is better, ThreadPoolExecutor or GreenThreadExecutor
20:40:45 <e0ne> I prefer GreenThreadExecutor
20:41:21 <e0ne> it's faster and doesn't require additional webserver configuration
20:41:52 <rdopiera> it also uses less memory
20:41:55 <e0ne> ying_zuo: I propose to switch to GreenThreadExecutor
20:41:58 <e0ne> rdopiera: +1
20:42:11 <rdopiera> however, it does monkey-patch the system functions, iirc
20:42:28 <rdopiera> to add greenthread support to them
20:42:51 <amotoki> yes, eventlet assumes related modules are monkey-patched.
20:43:21 <rdopiera> but I think it has been around long enough for most bugs to be shaken out
20:43:42 <e0ne> rdopiera: +1
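The monkey-patching caveat above is a deployment/setup concern: eventlet-based executors only cooperate if blocking stdlib modules are patched before anything else imports them. A minimal setup fragment (assumes the third-party eventlet package is installed; shown defensively since eventlet is not in the stdlib):

```python
# WSGI entry-point sketch (assumes the third-party eventlet package).
# eventlet.monkey_patch() replaces blocking stdlib primitives (socket,
# time, threading, ...) with cooperative versions, so greenthreads
# yield instead of blocking the whole process. It must run before any
# other module imports the modules it patches.
try:
    import eventlet
    eventlet.monkey_patch()
except ImportError:
    pass  # eventlet not installed; plain OS threads will be used
```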
20:44:17 <ying_zuo> e0ne: please update the blueprint for your proposal
20:44:28 <e0ne> ying_zuo: ok, I'll do that
20:44:43 <ying_zuo> thanks. we can continue the discussion there
20:45:00 <ying_zuo> #topic Merge openstack-auth (amotoki)
20:45:03 <e0ne> ying_zuo: thanks
20:45:08 <ying_zuo> #link https://blueprints.launchpad.net/horizon/+spec/merge-openstack-auth
20:45:20 <ying_zuo> #info Upgrade issue of openstack_auth in pip environment
20:45:33 <ying_zuo> here's some information from amotoki:
20:45:49 <ying_zuo> https://www.irccloud.com/pastebin/AaEbFVsG/
20:46:50 <amotoki> it only affects operators who use pip directly to install horizon.
20:47:02 <ying_zuo> would be good to have this information on the release note
20:47:17 <amotoki> perhaps it does not affect distro users. venv/container users are not affected either
20:48:19 <amotoki> i assume most operators with a venv replace the venv when upgrading :)
20:48:33 <rdopiera> the whole change also affects packagers
20:48:57 <rdopiera> I propose to let them know about such changes in the future, as they might not be following everything all the time
20:49:10 <e0ne> rdopiera: +1
20:49:31 <ying_zuo> there was an email a while back
20:49:51 <rdopiera> I missed it completely and so did zigo
20:50:06 <rdopiera> it's our fault, of course
20:50:14 <ying_zuo> release note will be a good place for this :)
20:50:16 <zigo> rdopiera: Missed what?
20:50:24 <zigo> Merge of openstack-auth ?
20:50:26 <rdopiera> zigo: the merging of openstack-auth
20:50:28 <rdopiera> yeah
20:50:29 <zigo> Right.
20:50:33 <zigo> I'm fine with it, really.
20:50:44 <zigo> One less package to maintain ! :)
20:50:51 <rdopiera> yeah, it's just the kind of change that would be nice to be coordinated
20:51:06 <amotoki> the mail was http://lists.openstack.org/pipermail/openstack-dev/2017-July/119731.html
20:51:43 <rdopiera> ok, that is exactly the kind of information that I meant, I withdraw my statement
20:51:47 <zigo> What concerns me more is django 2.x, Horizon IMO needs to support that ASAP.
20:51:48 <rdopiera> it's my fault I missed it
20:52:24 <e0ne> zigo: I think we can start work on it right after the release
20:52:43 <zigo> e0ne: That's already too late for Debian at least.
20:52:51 <e0ne> :(
20:53:20 <zigo> I'm not sure when, but "soon" django 2.x will replace 1.11 in Sid. When this happens, horizon won't work anymore in Debian.
20:53:43 <amotoki> but django 2.x isn't released yet, is it?
20:53:45 <e0ne> I didn't test horizon with django2.0
20:53:54 <e0ne> amotoki: it's not released yet
20:54:34 <e0ne> zigo: oh.. it will be painful for anybody who doesn't support python3 and django 2.0+
20:55:04 <e0ne> 5mins reminder
20:55:10 <zigo> There's the rc1 already.
20:55:40 <ying_zuo> I thought django 1.11 is LTS so we are good for a while
20:55:48 <e0ne> I think it's a good idea to add a non-voting job with django 2.0
20:55:59 <amotoki> Doesn't Debian give django app maintainers time to catch up their apps after the release?
20:56:58 <cshen_> django 2.0 seems not to be an LTS version.
20:57:06 <ying_zuo> we will need a blueprint and a volunteer
20:57:08 <e0ne> ying_zuo: you're right, 1.11 will be supported until 2020
20:57:29 <e0ne> ying_zuo: I'll file a blueprint and propose a patch with non-voting job
20:57:43 <e0ne> cshen_: +1
20:57:44 <amotoki> https://www.djangoproject.com/weblog/2015/jun/25/roadmap/
20:57:50 <e0ne> #link https://www.djangoproject.com/weblog/2015/jun/25/roadmap/
20:58:26 <ying_zuo> :)
20:58:27 <cshen_> django 2.2 will be a LTS version, which is scheduled to be released in 2019.
20:58:27 <amotoki> returning to the django-openstack-auth topic, I need volunteers to finish the remaining work
20:58:54 <e0ne> robcresswell: are you around? :)
20:58:56 <amotoki> there is a lot of remaining work outside of the horizon repos.
20:59:09 <amotoki> requirements, devstack, project-config......
20:59:45 <e0ne> unfortunately, I don't have time for it :( there are still some bugs in my queue
21:00:18 <rdopiera> I'm finishing up with downstream pike, hopefully should have some time now
21:00:19 <ying_zuo> we are running out of time. If anyone has some bandwidth, please pick up the todos in the blueprint
21:00:35 <ying_zuo> thanks everyone for joining
21:00:49 <amotoki> i am not sure i have covered all items..... cross-check is required
21:00:49 <ying_zuo> #endmeeting