15:00:45 #startmeeting RDO meeting (2016-06-29)
15:00:45 Meeting started Wed Jun 29 15:00:45 2016 UTC. The chair is imcsk8. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:45 Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:45 The meeting name has been set to 'rdo_meeting_(2016-06-29)'
15:00:46 Meeting started Wed Jun 29 15:00:45 2016 UTC and is due to finish in 60 minutes. The chair is imcsk8. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:51 The meeting name has been set to 'rdo_meeting__2016_06_29_'
15:00:55 o/
15:00:57 #topic roll call
15:01:17 #chair jpena leifmadsen
15:01:17 Current chairs: imcsk8 jpena leifmadsen
15:01:18 Current chairs: imcsk8 jpena leifmadsen
15:01:35 o/
15:01:39 o/
15:01:49 #chair trown amoralej
15:01:49 Current chairs: amoralej imcsk8 jpena leifmadsen trown
15:01:50 Current chairs: amoralej imcsk8 jpena leifmadsen trown
15:02:25 i guess we can wait for more folks to join
15:02:53 apevec: meeting?
15:04:05 \o/
15:04:44 #chair chandankumar
15:04:44 Current chairs: amoralej chandankumar imcsk8 jpena leifmadsen trown
15:04:45 Current chairs: amoralej chandankumar imcsk8 jpena leifmadsen trown
15:05:07 we need to stop the openstack bot.
15:05:28 ok, we need an op for that
15:06:07 ok, i guess we can start
15:06:32 #topic pinning some packages in RDO Trunk-not-all-master ?
15:06:46 this was proposed by apevec
15:07:13 he does not appear to be here
15:07:29 his idea was that, since we are having some troubles with clients and oslo libraries (due to CI not testing master but releases), we could have an intermediate RDO Trunk branch
15:07:57 it would be chasing trunk, but for clients and oslo we would be using a tagged release, chasing what's currently in upper-constraints
15:08:13 +1
15:08:14 from the DLRN side, it requires some work, as it currently assumes branches
15:08:22 +1
15:08:43 so, that could justify a more elaborate approach to pinning packages
15:08:56 and of course, any CI currently using master should switch to that new repo
15:09:06 that was the initial one we thought of, creating an overrides repo
15:09:19 ideally, testing both would be perfect
15:09:19 * apevec back
15:09:22 yes, I think it's a better idea
15:09:28 well, oslo and clients are also part of OS, so we still need to chase their trunk, so the intermediate branch is a good thing
15:09:29 thanks for the summary jpena !
15:09:30 #chair apevec
15:09:30 Current chairs: amoralej apevec chandankumar imcsk8 jpena leifmadsen trown
15:09:30 Current chairs: amoralej apevec chandankumar imcsk8 jpena leifmadsen trown
15:09:44 number80, yes, we need both
15:09:46 apevec: I had it fresh after today's talk :)
15:10:04 * number80 slow typing this week
15:10:15 but to unblock the rest of the pipeline, we need almost-master
15:10:22 name suggestions welcome!
15:10:50 rdo-master-upper-constraints
15:11:00 ah nice one
15:11:06 maybe rdo-master-it-worked-on-devstack(tm) :P
15:11:06 +1
15:11:07 rdo-master-we-hope-this-pass-ci
15:11:18 jpena: i would like to assist in this task since i've been only reading reviews related to this
15:11:19 rdo-restricted
15:11:26 rdo-master-it-worked-in-devstack
15:11:33 rdo-miracle
15:11:40 ok ok :)
15:12:17 re. -upper-constraints is good b/c we actually will need to track tags from u-c
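
[Editor's note, not part of the meeting log: to make the pinning idea above concrete, here is a minimal sketch of how the oslo/client pins for an "rdo-master-upper-constraints"-style repo could be derived from upstream upper-constraints.txt. The URL, the name-matching heuristic, and the function name are assumptions for illustration only; the actual DLRN/rdoinfo mechanics were deliberately left to the proposal apevec agreed to post to rdo-list.]

# Sketch (assumptions noted inline): read upper-constraints.txt and keep
# only the pins for oslo.* libraries and python-*client packages, i.e. the
# set the intermediate repo would build from tagged releases instead of trunk.
import re
import urllib.request

UC_URL = ("https://git.openstack.org/cgit/openstack/requirements/"
          "plain/upper-constraints.txt")  # assumed location of u-c

def pinned_versions(url=UC_URL):
    """Return {project: version} for oslo libraries and clients."""
    pins = {}
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    for raw in text.splitlines():
        line = raw.split(";")[0].strip()  # drop environment markers
        match = re.match(r"^([A-Za-z0-9_.\-]+)===(\S+)$", line)
        if not match:
            continue
        name, version = match.groups()
        # heuristic: oslo libs and python-*client are the packages discussed above
        if name.startswith("oslo.") or re.fullmatch(r"python-.*client", name):
            pins[name] = version
    return pins

if __name__ == "__main__":
    for name, version in sorted(pinned_versions().items()):
        print(name, version)
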
15:12:29 so that it matches what is used in upstream gate
15:13:02 proposal bot could be keeping it in sync
15:14:00 rdo-works-on-my-machine
15:14:13 ok, action time, I can write a proposal and post it on rdo-list
15:14:33 is u-c updated when new point releases are released ?
15:14:36 #action apevec post rdo-trunk-upper-constraints to rdo-list
15:14:43 amoralej, there's proposal bot
15:14:50 ok
15:15:07 * apevec is looking for an example
15:15:37 also, we should work on DLRN to support tags. apevec, is the patch you mentioned today still around?
15:15:42 ah for oslo there's now a bump on release: https://review.openstack.org/#/q/status:open+project:openstack/requirements+branch:master+topic:new-release
15:15:59 but to have both in parallel we'll need more workers == more infra to build and test
15:16:02 jpena, it was just a local hack when I did RC builds last year
15:16:03 will that be a problem?
15:16:16 amoralej, yes and yes
15:16:29 amoralej: we'll probably need more disk space (unless we aggressively purge)
15:16:55 but we can work around that by alternating CI pipeline runs
15:17:16 [sensu] NEW: master.monitoring.rdoproject.org - check-delorean-newton-current @ http://uchiwa.monitoring.rdoproject.org/#/client/rdo-monitoring/master.monitoring.rdoproject.org?check=check-delorean-newton-current |#| Build failure on centos7-master/current: ceilometer, oslo.config: http://trunk.rdoproject.org/centos7-master/report.html
15:17:17 and lowering frequency for rdo trunk stable pipelines
15:17:18 yeap,
15:18:34 amoralej, here are reviews bumping u-c based on latest pypi releases https://review.openstack.org/#/q/project:openstack/requirements+topic:openstack/requirements/constraints
15:19:16 but we're concerned with oslo and clients
15:19:37 then u-c is almost the same as the last point release
15:20:12 yes, unless it fails CI, but coverage is not that great
15:20:16 i was thinking about implementation details :)
15:20:45 so things passing reqs CI might still break things
15:22:29 yeah, let's not get into impl details here
15:22:48 anything else on this topic?
15:23:20 can we move on?
15:23:42 3.2.1, let's
15:23:44 yes, I think so
15:23:51 #topic Does it make sense to have an RDO ISO installer?
15:23:57 that's mine
15:24:22 imcsk8: what would it install?
15:24:35 RDO is not an installer but package repos
15:24:55 RDO is an OpenStack distribution right?
15:25:20 right, but not an installer itself, it includes many installers
15:25:27 yes, but there are multiple installers using RDO
15:25:31 i was playing with kickstart files from that standpoint and created a proof of concept
15:25:50 do you have that pushed to github so it could be reviewed?
15:26:07 hard to tell what it is about without something concrete :)
15:26:20 i know, but i thought that it could be a good way for new users to try a PoC install with packstack
15:26:39 i wanted to ask to see if i should polish this or move on
15:27:06 imcsk8: so the idea is a kickstart installation that runs packstack --allinone at the end (or something like that)?
15:27:06 so how does it relate to the remix last year?
15:27:19 jpena: yes
15:27:21 GSoC project
15:28:16 * chandankumar not tried rdo remix, just posted the link on etherpad.
15:28:17 apevec: it's not really related since my attempt is to install openstack without rebooting
15:28:18 not sure env in kickstart %post is ready enough for full installation, wasn't that the problem in gsoc remix?
15:29:06 https://github.com/asadpiz/org_centos_cloud
15:29:12 for reference, the GSoC code
15:29:14 i was in contact with the guy that did that but didn't get too much into the details, you were his advisor right apevec ?
15:29:28 I was not
15:29:39 i think rbowen is the mentor
15:29:50 yup
15:29:55 I just remember it vaguely from the list
15:30:22 imcsk8: after running GSoC for a few years, don't expect students to come back, most don't
15:30:35 (mine was hired by CoreOS to work on their cloud-init fork)
15:30:53 imcsk8, I guess you could push what you have publicly then start discussion on the list,
15:30:59 well, my experiment is about avoiding rebooting in order to install OpenStack, right now i'm using packstack
15:31:13 but I know there's also a usbkey based on tripleo-quickstart
15:31:25 so maybe better to consolidate efforts
15:31:47 weshay, trown - is ooo-usbkey in some repo?
15:31:47 live-USB may be a better promotion tool
15:31:53 not sure they are the same idea really, as quickstart usbkey is not bootable
15:32:23 apevec: it's in the tripleo-quickstart repo under ci-scripts I think
15:32:25 trown, ah, I dunno details
15:32:27 or at least triggered from there
15:32:40 well, i'll send the bait to the list with the current code and see what happens
15:32:43 apevec: needs docs on how to create it, but ya it is CI'd in quickstart tree https://github.com/openstack/tripleo-quickstart/tree/master/ci-scripts/usbkey
15:33:22 apevec, internal msg.. re image location. I need to host it somewhere public as well
15:33:30 ok, let's advertise that on the list and collect/consolidate ideas
15:33:31 sudo gate job for it is here https://ci.centos.org/view/rdo/view/tripleo-gate/job/tripleo-quickstart-gate-mitaka-delorean-quick-ooo-usbkey/
15:33:54 imcsk8, let's use your post to trigger that discussion?
15:34:04 apevec: ok
15:34:22 #action imcsk8 to send message to ML about ISO PoC installer
15:34:57 that's it for me on this topic
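
[Editor's note, not part of the meeting log: a minimal sketch of the kickstart idea discussed above, i.e. a %post section that ends with packstack --allinone, wrapped in a small helper that appends it to a kickstart file. This is not imcsk8's actual proof of concept; the repo URL, package name, and file path are assumptions, and whether the %post environment is complete enough for a full install is exactly the open question raised in the meeting.]

# Sketch: append an all-in-one packstack %post section to a kickstart file.
KS_POST = """\
%post --log=/root/rdo-post.log
# enable RDO and install packstack (repo URL and package name are assumptions)
yum -y install https://rdoproject.org/repos/rdo-release.rpm
yum -y install openstack-packstack
# attempt the all-in-one install without rebooting first; whether the %post
# environment is ready for this is the open question from the discussion above
packstack --allinone
%end
"""

def append_post_section(ks_path="rdo-aio.ks"):
    """Append the %post section to an existing kickstart file (hypothetical path)."""
    with open(ks_path, "a") as ks:
        ks.write(KS_POST)

if __name__ == "__main__":
    append_post_section()
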
15:35:18 we have a lot of time and no more topics so...
15:35:21 weshay, image could be posted under http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/
15:35:28 open floor !
15:35:32 #topic open floor
15:35:50 apevec, k..
15:35:59 FYI item could be that DLRN production was migrated to ci.centos
15:36:16 kudos to all involved, jpena++ amoralej++ dmsimard++ trown++
15:36:19 apevec: Karma for jpena changed to 2 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:36:21 who did I miss?
15:36:21 apevec: Karma for amoralej changed to 2 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:36:24 apevec: Karma for dmsimard changed to 1 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:36:27 apevec: Karma for trown changed to 2 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any
15:36:37 was a bit bumpy, but we got there :)
15:36:51 good progress w/ python3 migration, I hope to finish this week, but I'm doubtful as I'm unable to use a hand for a few days
15:36:59 *hoped
15:37:12 trown, last item is to make ooo take images from buildlogs by default
15:37:13 number80: then you should be having some rest!
15:37:39 jpena: I'm fine, just typing slower
15:37:47 I didn't make a topic for it, but I want to switch out the TripleO HA job in the promote pipeline for a single-node pacemaker setup with ceph
15:38:19 until we have the ability to choose which services to install, the HA deploy is too bloated to pass on 3 of the 4 chassis in cico
15:38:41 single node with pacemaker at least tests the code path of actual HA
15:39:00 if we throw in ceph we have gained more than we are losing in terms of coverage
15:39:05 trown, so increasing timeout didn't help?
15:39:40 apevec: well, we have another issue on master, so hard to say, but that is not a very long term solution
15:39:59 because we are so close to the edge of everything falling over
15:40:38 yeah https://etherpad.openstack.org/p/delorean_master_current_issues never ends
15:40:57 lol yep
15:41:00 trown, so 5. is still open?
15:41:23 i.e. DEBUG: testing increase timeout for tripleo CI. didn't help?
15:41:26 apevec: that patch is not in consistent, I don't think, so hard to tell
15:43:12 i'll send a change in oslo-config to temporarily disable the tests, so that we can move the consistent link
15:43:19 amoralej, ack
15:43:24 thanks amoralej
15:43:37 I was just looking at topic:rdo-FTBFS
15:44:19 for tripleo-common there's a review created for master, but it is an issue on mitaka
15:44:25 oslo.config is the only one left on master, mitaka has o-t-common, I'll take that
15:44:26 iirc, oslo-config is the only one
15:44:30 yeah
15:45:00 I merged an rpm-master change which broke mitaka, but do not want to create rpm-mitaka just yet
15:45:49 trown, when is the ooo-HA job change going to happen?
15:46:05 apevec: I plan to put up patches for it this afternoon
15:46:23 cool, 25% luck rate is not that nice to have :)
15:47:04 if we don't get a fix for issue 7 in the etherpad we may have to pin osc again
15:47:21 ya, if we had single job retry in the multijob it would be livable, but the weirdo jobs and the minimal job are not 100%, so we sometimes get the lucky chassis on the ha job only to have some other job fail
15:47:35 amoralej, heh, so back to the first topic today :)
15:47:41 yes
15:47:49 that is something I want to brainstorm with dmsimard when he is back on
15:48:36 ok, so we can have that as a topic for next week
15:48:49 I think we could save a lot of CI resources that way too
15:49:49 trown, btw, have you tried the new liberty oslo-concurrency build?
15:50:07 upgrades worked both ways locally for me with the -3 cbs build
15:50:23 apevec: ya seems to have fixed that issue, there is something else failing on liberty, but it is later and I haven't looked closely
15:50:29 just saw undercloud install is working now
15:50:31 cool
15:51:03 ok, next week we should have less red in https://ci.centos.org/view/rdo/view/promotion-pipeline/
15:51:13 we can always hope :)
15:52:19 should we end the meeting?
15:52:22 we're almost at the top of the hour, should we wrap it up?
15:52:29 yep
15:52:47 okay
15:52:47 hi guys
15:52:50 #endmeeting