20:07:14 #startmeeting deb_packaging
20:07:15 Meeting started Mon Sep 26 20:07:14 2016 UTC and is due to finish in 60 minutes. The chair is zigo. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:07:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:07:18 The meeting name has been set to 'deb_packaging'
20:07:26 Sorry for being late.
20:07:42 o/
20:07:51 #chair onovy zigo tlbr
20:07:51 Current chairs: onovy tlbr zigo
20:08:07 #topic project status
20:08:19 so, we have everything up to rc1 except Horizon.
20:08:36 libjs-tv4 has been accepted by the Debian FTP masters, so we can finally start the Horizon packaging.
20:08:43 great
20:09:04 Ironic & Ironic-client also need updates to 6.2.0 and 4.2.0 respectively.
20:09:28 Then we need to figure out how to run the tempest stuff...
20:10:05 fungi: clarkb: pabelanger: mordred: Any of you here? We'd need some advice.
20:10:40 Nope, never mind.
20:10:58 onovy: tlbr: Any idea how we should start running tempest in infra?
20:11:07 Maybe a periodic job?
20:11:32 is it hard to set up on OS infra?
20:11:43 I'm not sure yet.
20:11:57 periodic job sounds good
20:11:59 ok, so we need help from the infra guys
20:12:24 My idea was that, once we get the tempest stuff in and Newton is finally released and uploaded to Sid, we should run it as a check job for any packaging change.
20:12:41 Not right now though, as it may take too long to run.
20:12:42 i'm around, just busy
20:13:01 yep
20:13:10 fungi: We were wondering how we could set up a tempest validation script for packaging.
20:13:26 fungi: We have that already written, we just need to figure out how to set it up on infra.
20:13:35 the obvious choice is to run tempest against each proposed patch to your packaging repos
20:14:00 which will take ages to finish
20:14:10 for example i sent ~200 patches today
20:14:19 fungi: Yeah, but maybe do that only once Newton is finally released, so we kind of "lock" things, without too much disturbance while we work on the release.
20:14:22 i don't want tempest for each one :]
20:14:39 onovy: But once we have everything released, that's fine for you, no?
20:14:51 zigo: not sure. tempest takes a long time to finish
20:14:52 i'm assuming that would have to: 1. build a package with the prospective change checked out and applied, 2. install that package along with the already official packages, 3. start tempest and report the results
20:15:20 fungi: Yes, something like that.
20:15:27 also depends on whether you run all tempest tests or just a subset (smoke for example)
20:15:28 it will slow down the release process of a package
20:15:38 but it will improve quality :)
20:15:51 fungi: One of my fears is the MAC filter in the nodes.
20:16:15 fungi: My current setup uses a bridge, and then we get the MAC of that bridge instead of eth0's.
20:16:20 fungi: Will it be a problem?
20:16:25 i mean, our ci is designed to test proposed changes and find bugs before you merge them. it has less efficient solutions for finding bugs after merging and reporting those
20:16:52 so let's do it the same way
20:17:14 but we have some lock-in problems
20:17:15 I had the MAC filter issue on the Mirantis OpenStack deployment we have (accessed through the Mirantis VPN, very private...).
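[Editor's note: a minimal sketch of the check-job flow fungi outlines at 20:14:52 (build the package with the proposed change applied, install it next to the official packages, run a tempest subset, report the result). The workspace path, the dpkg-buildpackage options and the `tempest run --smoke` invocation are assumptions for illustration, not the actual infra job definition.]

```python
#!/usr/bin/env python3
"""Sketch of a packaging check job: build, install, run tempest."""

import glob
import os
import subprocess
import sys


def run(cmd, check=True, **kwargs):
    # Echo the command so it ends up in the streamed console log.
    print('+ ' + ' '.join(cmd), flush=True)
    return subprocess.run(cmd, check=check, **kwargs)


def main(workspace='/tmp/workspace/pkg'):
    # 1. build the package from the checked-out change (assumes debian/ exists)
    run(['dpkg-buildpackage', '-us', '-uc', '-b'], cwd=workspace)

    # 2. install the freshly built .debs alongside the official packages;
    #    dpkg -i may complain about missing deps, apt-get -f install pulls them in
    debs = glob.glob(os.path.join(workspace, '..', '*.deb'))
    run(['dpkg', '-i'] + debs, check=False)
    run(['apt-get', '-y', '-f', 'install'])

    # 3. run a tempest subset and report the outcome through the exit code,
    #    so the job launcher can mark the change as passed or failed
    result = subprocess.run(['tempest', 'run', '--smoke'], cwd='/opt/tempest')
    return result.returncode


if __name__ == '__main__':
    sys.exit(main())
```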
20:17:24 example: we can't update a lib and the python wrapper around it at the same time
20:17:26 but we need to
20:17:27 zigo: i don't know exactly what bridge issues you're describing, but we strongly suggest jobs don't rely on the system-provided network interfaces directly and instead use an abstraction you can control
20:17:48 fungi: What do you mean?
20:18:13 don't rely on being able to do things that communicate outside the test node
20:18:36 the job should be entirely encapsulated within the node (or multinode group) on which the job is initiated
20:18:50 we are running single-node tempest i think
20:19:22 and if it needs to test network connections to things, it should do them over a loopback or similar local connection to another local process
20:19:23 fungi: Ok, got you, though our script used to be controlled over ssh, so we'll have to fix that.
20:19:56 It shouldn't be so hard, and yeah, I didn't think about that bit: we don't really care about outside connectivity in the case of infra.
20:20:04 right, the job launcher will ssh in and run a script that initiates your job, and collect results/artifacts over rsync+ssh
20:20:14 fungi: How can I start the test stuff then?
20:20:24 Oh...
20:20:29 but everything else needs to try to stay local to the machine
20:20:33 But what if ssh / rsync fails with no connectivity?
20:20:49 So, same issue then.
20:21:01 Anyway, I guess best is to just try.
20:21:07 it streams stdout of the main job process continuously, so you'll at least hopefully get a console log with output up to the point where the job node became unreachable
20:21:41 fungi: All the sources are currently located within "openstack-meta-packages", maybe I can start it as a non-voting check for this package?
20:21:50 seems reasonable
20:21:51 (then slowly fix things there...)
20:22:31 fungi: Ok, thanks for your help, moving on then.
20:22:42 i don't know how feasible it is, but if you can use devstack to deploy a system with your packages then that might be the easiest solution
20:23:05 anyway, yes, this doesn't need to be designed in your meeting. find us in the infra channel
20:23:05 fungi: I have designed everything already, there's not much to rewrite.
20:23:13 great
20:23:27 fungi: The only thing that needs a rewrite is how to be smart when calling the scripts.
20:23:36 fungi: It used to run on a bare metal node...
20:23:41 fungi: Oh, that's another question!
20:24:01 fungi: 8GB of RAM may be a bit short for installing everything and running tempest...
20:24:09 fungi: Or is that enough?!?
20:24:23 I used to run with 32 GB on a real bare metal node.
20:25:03 depends on what "everything" is
20:25:23 i mean, we do limit the default service list we run for most devstack+tempest jobs
20:25:24 fungi: Nova, Neutron, Cinder, Glance, Swift, Keystone, Horizon...
20:25:34 (at least)
20:25:54 Ceilometer too.
20:26:00 but use existing devstack+tempest jobs as a guideline for what we're able to run within 8gb, at least
20:27:17 fungi: I think I'll just try and see how it goes. If I make sure to reduce the number of threads per server, that should be fine, normally.
20:27:35 yep, i encourage the scientific method there
20:27:55 :)
20:28:22 #action Zigo to add a non-voting job to deb-openstack-meta-packages to start tempest tests.
20:28:45 #topic unit test re-enabling
20:28:53 Has anyone worked on that?
20:28:56 tlbr: ?
20:29:14 nope
20:29:14 There aren't that many remaining.
20:29:19 yep
20:29:22 i did it in swiftclient
20:29:32 testtools, traceback2 and reno are the only ones remaining.
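[Editor's note: a tiny helper sketch for the 8GB vs. 32GB exchange above (20:24-20:27), i.e. sizing the number of API workers per service from the RAM actually available on the test node instead of bare-metal defaults. The per-worker budget and the service count are guesses, not measured values.]

```python
"""Sketch: derive a per-service API worker count from node memory."""

import multiprocessing


def available_ram_mb(meminfo='/proc/meminfo'):
    """Read MemTotal from /proc/meminfo and return it in megabytes."""
    with open(meminfo) as f:
        for line in f:
            if line.startswith('MemTotal:'):
                return int(line.split()[1]) // 1024
    raise RuntimeError('MemTotal not found in %s' % meminfo)


def worker_count(ram_mb, services=7, mb_per_worker=256):
    """Cap workers so services x workers x mb_per_worker fits in ~half the RAM."""
    budget = ram_mb // 2  # leave room for the database, rabbitmq and tempest itself
    per_service = max(budget // (services * mb_per_worker), 1)
    return min(per_service, multiprocessing.cpu_count())


if __name__ == '__main__':
    ram = available_ram_mb()
    print('node RAM: %d MB -> %d API worker(s) per service' % (ram, worker_count(ram)))
```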
20:29:45 btw: i created a section in https://etherpad.openstack.org/p/openstack-deb-packaging which tracks the progress of my packages, for anyone interested :]
20:29:59 onovy: Do you want to do them?
20:30:17 testtools,...? no :)
20:30:29 really, i just pushed ~200 patches and many of them failed
20:30:44 Failed, how?
20:30:48 With failed downloads?
20:30:56 missing deps, etc.
20:31:02 Oh...
20:31:06 Got to fix that then.
20:32:08 #topic onovy mass commits
20:32:24 onovy: Another thing is that I don't think it's very nice to push so many patches at once...
20:32:34 Then it becomes very hard to push patches.
20:32:46 why very hard?
20:32:49 onovy: Could you imagine something that would push at a slower pace?
20:33:13 onovy: Let's say I add libjs-tv4 to the auto-backports, then it will take forever to do its POST job.
20:33:40 right. so it's not about pushing, but reviewing
20:34:15 Well, yeah...
20:34:29 onovy: Anyway, it's a *very* good thing to do such a mass rebuild of everything.
20:34:54 we should have a better solution to rebuild everything - without commit/review ;]
20:34:55 onovy: It will prevent "Luca Neusbaum" kind of bugs from getting into the BTS! :)
20:35:12 and we should rebuild periodically
20:35:55 onovy: How would you get the feedback then?
20:36:08 you can get feedback from the periodic job
20:36:12 How?
20:36:17 email?
20:36:22 No idea! :)
20:36:25 I'm asking... :P
20:36:32 email, automatic bug creation
20:36:57 ok
20:37:15 * zigo receives a fair amount of robot spam already...
20:37:17 :P
20:37:55 #topic open discussion
20:37:58 Anything else you guys wanna talk about?
20:38:13 i think everyone should go through the review list
20:38:15 and fix the -1s
20:38:19 or abandon them
20:38:23 we should clean it up
20:38:27 Yeah, I'll do that.
20:38:31 https://review.openstack.org/#/q/projects:openstack/deb-+AND+is:open
20:38:33 Some are obvious.
20:38:35 this link will help
20:38:43 Like the glance-store one (i.e. add enum34 as a build-depends)
20:38:51 anyone else? :)
20:39:02 I'll do some of them for sure.
20:39:16 I just wonder how it could have been built the first time though.
20:39:45 maybe it wasn't built the first time
20:40:01 Some didn't build, indeed.
20:40:06 Like the Horizon plugins.
20:40:14 Some others did, like glance-store.
20:40:21 there was a package which didn't have any reviews at all
20:40:21 I didn't look deeply enough yet to be able to tell.
20:40:27 so nobody tried to build it
20:40:30 Right.
20:40:50 There's that as well indeed.
20:41:10 #action zigo & onovy to check on the mass commit FTBFS
20:41:19 and tlbr? :]
20:42:23 He's watching a series? :P
20:42:36 It's too late for Moscow, I guess.
20:42:43 Maybe we should move the meeting to an earlier time.
20:42:44 i'm here
20:42:47 Oh! :)
20:43:05 tlbr: Will you try to fix some of the failed-to-build packages in the mass commit of onovy?
20:43:18 tlbr: Do you have time for that, or do you prefer working on the Horizon packaging maybe?
20:43:21 zigo, ok, i can do that tomorrow if that is ok?
20:43:34 cool, thanks
20:43:34 tlbr: I don't think we'll be done in a single day.
20:43:41 yes, I've just finished working on tenacity
20:43:52 #action tlbr to also check on the mass commit FTBFS
20:43:52 zigo, ok then
20:44:13 tlbr: onovy: Anything else you guys wanna talk about?
20:44:23 i need to remove python-eventlet from backports
20:44:27 Oh.
20:44:34 onovy: Ask pabelanger about it.
20:44:40 onovy: In #openstack-infra
20:44:55 zigo: let's talk about it outside of the meeting
20:45:00 Ok.
20:45:04 We're done then.
20:45:12 #endmeeting
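[Editor's note: a hedged sketch for the review-cleanup action discussed at 20:38, listing open changes on the openstack/deb-* projects that already carry a -1 so they can be fixed or abandoned. It uses the Gerrit REST API behind the review link pasted in the log; the exact label query and result limit are assumptions that may need tweaking.]

```python
"""Sketch: list open deb-* changes with a Code-Review -1 via the Gerrit REST API."""

import json

import requests

GERRIT = 'https://review.openstack.org'
QUERY = 'projects:openstack/deb- is:open label:Code-Review-1'


def open_negative_reviews(limit=100):
    # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI; strip
    # that first line before parsing.
    resp = requests.get(GERRIT + '/changes/', params={'q': QUERY, 'n': limit})
    resp.raise_for_status()
    return json.loads(resp.text.split('\n', 1)[1])


if __name__ == '__main__':
    for change in open_negative_reviews():
        print('%s  %s  %s' % (change['_number'], change['project'], change['subject']))
```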