00:00:51 #startmeeting congressteammeeting
00:00:51 Meeting started Thu Nov 24 00:00:51 2016 UTC and is due to finish in 60 minutes. The chair is ramineni_. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:54 The meeting name has been set to 'congressteammeeting'
00:01:00 hello!
00:01:35 here is our today's agenda
00:01:43 1. Gate issues
00:01:43 2. Ocata-1
00:01:43 3. Newton 4.0.1
00:01:43 4. status updates
00:01:58 anything else?
00:03:12 #topic gate issues
00:03:35 ekcs: you want to give an update on it?
00:03:45 yup.
00:05:07 Last Thursday a change to aodh made it so it didn't work with the mysql version on trusty test nodes, causing all those tempest tests to fail and block the gate.
00:05:52 It took a while for us to track it down and make the right changes to project-config to run xenial instead of trusty on the newer branches of congress.
00:06:18 The issue was finally resolved and we were able to merge a logjam of patches.
00:06:34 ekcs: is it an occasional failure? when the aodh patch merged, everything looked green, right?
00:07:32 ramineni_: no, it was an every-time failure. the aodh project merged the change last thursday or so, after we added aodh to the tempest devstack config.
00:07:52 We seem to have a new gate issue though.
00:08:06 ekcs: ah, ok.. got it
00:08:39 starting yesterday all tempest tests seem to be failing with
00:08:40 2016-11-23 07:58:19.519 | publicURL endpoint for policy service in RegionOne region not found
00:08:41 2016-11-23 07:58:19.549 | exit_trap: cleaning up child processes
00:08:41 2016-11-23 07:58:19.550 | ./stack.sh: line 493: kill: (19488) - No such process
00:08:56 not sure if anyone knows more about it.
00:10:45 ekcs: I'm not seeing any logs regarding it in https://review.openstack.org/#/c/401090/
00:10:48 btw here's the aodh patch that broke compatibility with old mysql.
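[Editor's note] The devstack failure quoted above comes from a service-catalog lookup: the stack asks keystone for the public "policy" (congress) endpoint in RegionOne and aborts when it is missing, which is consistent with the later guesses that the endpoint was never registered or congress never fully started. A minimal sketch of that lookup, using a hypothetical sample catalog (all names and URLs are illustrative, not taken from the log, and this is not devstack's actual code):

```python
# Sketch of the service-catalog lookup behind
# "publicURL endpoint for <service> service in <region> region not found".
# The sample catalog below is hypothetical, for illustration only.

SAMPLE_CATALOG = [
    {"type": "identity", "endpoints": [
        {"interface": "public", "region": "RegionOne",
         "url": "http://203.0.113.10:5000/v3"}]},
    # Note: no "policy" (congress) entry - e.g. because the service
    # was never registered with keystone, or registration failed.
]

def find_public_url(catalog, service_type, region):
    """Return the public endpoint URL for service_type in region."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] == "public" and ep["region"] == region:
                return ep["url"]
    raise LookupError(
        "publicURL endpoint for %s service in %s region not found"
        % (service_type, region))

print(find_public_url(SAMPLE_CATALOG, "identity", "RegionOne"))
try:
    find_public_url(SAMPLE_CATALOG, "policy", "RegionOne")
except LookupError as exc:
    print(exc)  # mirrors the devstack error message above
```

Under this model, checking the keystone (apache) logs for the endpoint-registration calls, as suggested below, is the natural next step.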
https://review.openstack.org/#/c/372586/5
00:12:04 Right. I don't see any congress errors either. I guess we'll recheck again and see if it goes away. If not, we may need to dig deeper.
00:12:09 ekcs: is that error saying it couldn't find keystone?
00:12:31 Or that keystone is throwing an error?
00:13:48 thinrichs: hmm, could be saying it couldn't find keystone. or the URL is not set correctly.
00:13:57 i guess we should see what we can find in the keystone log.
00:14:23 maybe the congress endpoint is not registered with keystone
00:14:55 or congress is not successfully started?
00:15:51 ramineni_: maybe. all three congress logs show congress running for about 20s without errors.
00:16:22 oh ok
00:16:36 right now I can't see the logs because of the infra cinder problem. we'll just have to figure it out later.
00:17:05 but oh, what's the name of the keystone logs in devstack?
00:17:30 the logs will be inside apache, i guess
00:17:56 got it.
00:18:08 anyway, that's all i have on the topic.
00:18:52 there is one more issue where py27 jobs are failing more frequently this time
00:19:17 with an "engine service timeout" error
00:20:33 i debugged it a little.. the engine service is up, but it looks like it's not waiting for the response.. so i have raised the rpc_timeout in the tests
00:20:40 https://review.openstack.org/#/c/401090/
00:22:46 anything else from anyone on this topic?
00:23:38 short update from my side.
00:24:14 I checked the Congress driver error, and it would be fixed by this patch:
00:24:40 https://review.openstack.org/#/c/397096/
00:25:14 that's my update for the gate issue.
00:26:09 masahito_: the doctor driver error, right? what is the reason for its failure?
00:26:43 the root cause is not the doctor driver; it's from the api server.
00:27:18 when deleting a datasource, the api server calls the delete_datasource method on its node.
00:28:23 and then, the node that the api server runs on calls synchronize_datasource and re-creates all configured datasources on it.
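[Editor's note] The failure mode masahito_ describes — a delete that gets undone because a synchronizer still treats the datasource as configured — can be sketched as a toy model. The class and method names below are illustrative, not Congress's actual API, and the "fix" at the end only shows the general shape of a solution, not the content of the linked patch:

```python
# Toy model of the bug described above: delete_datasource removes a
# datasource from the node's running state, but a later synchronize pass
# re-creates every datasource still present in the configured list.
# All names here are illustrative, not Congress's real API.

class Node:
    def __init__(self, configured):
        self.configured = list(configured)  # what the node thinks should exist
        self.running = list(configured)     # what is actually instantiated

    def delete_datasource(self, name):
        if name in self.running:
            self.running.remove(name)

    def synchronize_datasources(self):
        # Reconcile running state with configuration.
        for ds in self.configured:
            if ds not in self.running:
                self.running.append(ds)  # resurrects deleted datasources!

node = Node(["nova", "neutron"])
node.delete_datasource("neutron")   # API server handles the DELETE
node.synchronize_datasources()      # periodic sync runs afterwards
print(node.running)                 # "neutron" is back - the bug

# One way to make the delete stick: remove the datasource from the
# configuration first, so the synchronizer converges on the delete.
node.configured.remove("neutron")
node.delete_datasource("neutron")
node.synchronize_datasources()
print(node.running)                 # only "nova" remains
```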
00:29:48 ok, thanks for digging it up
00:30:33 moving on then
00:30:44 #topic ocata-1
00:31:05 ocata-1 was released last week
00:31:14 #link https://github.com/openstack/congress/releases/tag/5.0.0.0b1
00:31:51 unfortunately most of the patches didn't make it because of the gate issues
00:32:55 they would be targeted for ocata-2, which is around Dec 12, I suppose
00:33:19 Any thoughts/comments?
00:34:32 sounds good
00:35:42 #topic Newton 4.0.1
00:36:27 We need to do a 4.0.1 release because of some critical (for multi-node) bugs resolved by the following patches:
00:36:47 https://review.openstack.org/#/c/395875/
00:36:47 https://review.openstack.org/#/c/400322/
00:37:20 ekcs: thinrichs: is there a timeline for newton releases?
00:37:38 Looking…
00:37:42 ramineni_: I think it's up to us.
00:38:14 I read somewhere that people sometimes do biweekly releases (as needed) for the latest stable.
00:39:01 http://docs.openstack.org/project-team-guide/stable-branches.html
00:39:11 http://docs.openstack.org/project-team-guide/stable-branches.html#proactive-backports
00:39:49 Can't seem to find any announcements on the ML or on the usual release schedule webpage
00:39:49 http://docs.openstack.org/project-team-guide/stable-branches.html#release-often
00:40:04 "Proactive backporting process is expected to trigger higher volume of changes in stable branches. To make releases more granular, it's advised participating projects create new stable releases often. It may be done on a bi-weekly basis, or any other schedule that fits better the project and its actual backports volume."
00:40:30 They definitely used to do synchronized releases. Could ping the release team if we want. I'll check one more thing quickly..
00:42:09 No luck. Can't discern a pattern from tags on Nova/Neutron.
00:42:31 Docs seem pretty clear that it's up to us.
00:43:07 ok
00:44:14 we can check what other patches can be targeted for Newton
00:44:50 If this is a simple fix, then maybe we can get this in: https://bugs.launchpad.net/congress/+bug/1637172
00:44:50 Launchpad bug 1637172 in congress "rule using policy:table(…) reference fails to create" [High,Confirmed] - Assigned to Tim Hinrichs (thinrichs)
00:45:38 And I think we should backport this too if ready in time: https://review.openstack.org/#/c/400643/
00:46:07 https://bugs.launchpad.net/congress/+bug/1641501 ?
00:46:07 Launchpad bug 1641501 in congress "Horizon unable to get policies using keystone v3" [High,Confirmed] - Assigned to Anusha (anusha-iiitm)
00:46:37 Haven't had the time to diagnose that one.
00:46:41 I'll look right now
00:47:14 aimeeu: do you still want to work on the horizon bug?
00:47:38 ramineni_: I don't think I'm going to have time.
00:48:05 aimeeu: ok, np
00:48:08 I will finish the minor refactoring assigned to me for Ocata-2
00:48:41 ramineni_: I'd lean toward not backporting that horizon v3 fix to newton because there is a working workaround in place. what do you think?
00:49:54 ekcs: yes, but v2 is kind of deprecated in newton, i suppose
00:50:18 ramineni_: i see.
00:50:46 ekcs: or yes, it can be targeted for ocata
00:52:13 ekcs: are you targeting the release next week? for newton
00:52:52 ramineni_: we can decide that based on what we feel we can get in soon.
00:53:10 sure
00:53:25 anything else on this topic?
00:54:11 #topic open discussion
00:54:55 we have around 5 mins.. would anyone like to discuss anything?
00:55:00 Attended the policy meeting again today.
00:55:02 just a reminder to wait for another infra update before running any rechecks.
00:55:28 Someone did a POC hooking up Apache Fortress (a standard RBAC system) to oslo.policy and keystone.
00:55:53 Seemed like not much of a change to oslo.policy.
00:56:20 It's worth us thinking about doing that for those users who want to proactively enforce policy at the API layer.
00:56:25 Let me see if I can find the link.
00:57:17 https://review.openstack.org/#/c/237521/
00:57:44 One of the challenging things about that POC was that Apache Fortress isn't natively multi-tenant.
00:58:04 So it'll answer the question: is user-object-right permitted?
00:58:32 But the question we have to answer is whether user-object-right is permitted for the project that the object belongs to.
00:58:39 Which Congress can do well.
00:59:01 ah, interesting.
01:00:22 time to wrap up the meeting. Happy Thanksgiving to all!
01:00:27 #endmeeting
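[Editor's note] The multi-tenancy gap discussed at the end of the meeting can be sketched in a few lines: a plain RBAC check answers "is (user, object, right) permitted?", while the question OpenStack needs is scoped to the project that owns the object. The sketch below is a hypothetical illustration with made-up data; it does not use the real Apache Fortress or oslo.policy APIs:

```python
# Plain RBAC: role assignments with no project scoping.
# All users, roles, objects, and projects below are hypothetical.
ROLE_GRANTS = {("alice", "admin")}
ROLE_PERMS = {("admin", "server", "delete")}

def rbac_permitted(user, obj_type, right):
    """Unscoped check: does any of the user's roles grant the right?"""
    return any((role, obj_type, right) in ROLE_PERMS
               for (u, role) in ROLE_GRANTS if u == user)

# Tenant-aware check: the role grant carries a project scope, and the
# object's owning project must match the scope of the grant.
SCOPED_GRANTS = {("alice", "admin", "project-a")}
OBJECT_PROJECT = {"server-1": "project-b"}

def scoped_permitted(user, obj_id, obj_type, right):
    """Scoped check: the right must hold in the object's own project."""
    project = OBJECT_PROJECT[obj_id]
    return any(p == project and (role, obj_type, right) in ROLE_PERMS
               for (u, role, p) in SCOPED_GRANTS if u == user)

print(rbac_permitted("alice", "server", "delete"))             # True
print(scoped_permitted("alice", "server-1", "server", "delete"))  # False
```

The unscoped check says yes because alice is an admin somewhere; the scoped check says no because her admin role is granted on project-a while the server belongs to project-b — the distinction the meeting notes Congress can express well.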