19:17:41 <greghaynes> #startmeeting tripleo
19:17:42 <openstack> Meeting started Tue Jan 27 19:17:41 2015 UTC and is due to finish in 60 minutes.  The chair is greghaynes. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:17:43 * bnemec is eating lunch
19:17:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:17:46 <openstack> The meeting name has been set to 'tripleo'
19:17:51 <greghaynes> #topic agenda
19:17:53 <greghaynes> * bugs
19:17:55 <greghaynes> * reviews
19:17:57 <greghaynes> * Projects needing releases
19:17:59 <greghaynes> * CD Cloud status
19:18:01 <greghaynes> * CI
19:18:03 <greghaynes> * Specs
19:18:05 <greghaynes> * open discussion
19:18:07 <greghaynes> Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes, feel free to tag it
19:18:13 <greghaynes> #topic bugs
19:18:26 <greghaynes> #link https://bugs.launchpad.net/tripleo/
19:18:28 <greghaynes> #link https://bugs.launchpad.net/diskimage-builder/
19:18:30 <greghaynes> #link https://bugs.launchpad.net/os-refresh-config
19:18:32 <greghaynes> #link https://bugs.launchpad.net/os-apply-config
19:18:34 <greghaynes> #link https://bugs.launchpad.net/os-collect-config
19:18:36 <greghaynes> #link https://bugs.launchpad.net/os-cloud-config
19:18:38 <greghaynes> #link https://bugs.launchpad.net/os-net-config
19:18:40 <greghaynes> #link https://bugs.launchpad.net/tuskar
19:18:42 <greghaynes> #link https://bugs.launchpad.net/python-tuskarclient
19:19:04 <greghaynes> We still have https://bugs.launchpad.net/tripleo/+bug/1374626, which I believe is still blocked on SpamapS ENOTIME
19:19:58 <greghaynes> also https://bugs.launchpad.net/tripleo/+bug/1401300, that's blocked on me getting distracted by CI fails
19:20:26 <greghaynes> any other bugs people want to mention?
19:20:54 <bnemec> nope
19:21:01 <greghaynes> https://bugs.launchpad.net/diskimage-builder/+bug/1407828 is weird
19:21:15 <greghaynes> maybe one of the Red Hat folks could help out with that?
19:21:34 <bnemec> greghaynes: I responded to it already.
19:21:36 <greghaynes> oh, looks like bnemec did :)
19:21:46 <bnemec> The link works for me, so I can't make any more progress on it.
19:22:16 <greghaynes> ah, ok, since it's been over a week maybe we should just mark it as invalid
19:23:07 <greghaynes> ok, thats all I got
19:23:10 <greghaynes> moving on...
19:23:29 <greghaynes> #topic reviews
19:23:36 <greghaynes> #info There's a dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:23:38 <greghaynes> #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:23:40 <greghaynes> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:23:42 <greghaynes> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:24:29 <greghaynes> I'm pretty sure our list of oldest reviews is different this week, which seems like a good thing
19:25:05 <greghaynes> I dont really have anything to add to that...
19:25:21 <greghaynes> we seem to be doing fine with our review backlog imo so *shrug*
19:26:30 <greghaynes> oh, I think we said this week SpamapS would look at culling some core reviewers if their numbers stayed low?
19:26:53 <bnemec> Well, in the near future.
19:27:10 <bnemec> I'd be inclined to wait maybe for the next time we meet at this time.
19:27:10 <slagle> didnt he already?
19:27:20 <bnemec> Just to make sure we're >30 days out from the holidays.
19:27:30 <bnemec> He removed a couple who said they were no longer interested.
19:27:47 <bnemec> I think he was holding off on the forced removals though.
19:27:57 <greghaynes> yep, seems reasonable
19:28:36 <greghaynes> ok, so looks like our meeting wiki page isn't updated with our new agenda
19:28:38 <bnemec> #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/054575.html
19:29:23 <greghaynes> #action Update https://wiki.openstack.org/wiki/Meetings/TripleO to reflect our new agenda
19:29:44 <greghaynes> Our next topic was supposed to be operators?
19:30:09 <greghaynes> #topic live cloud status
19:30:40 <greghaynes> Do we have anyone here who is running / representing a live cloud?
19:31:15 <greghaynes> I think the two folks who have been working on the HP CI clouds tend to come to the other meeting
19:31:17 <bnemec> Or anyone at all besides the three of us? :-)
19:31:39 <greghaynes> heh, yea
19:32:18 <greghaynes> ooook. Welp going to move on to....
19:32:28 <greghaynes> #topic Projects needing releases
19:32:44 <greghaynes> I released last week, new yubikey is awesome
19:33:31 <greghaynes> anyone want to take it this week?
19:34:27 <greghaynes> *crickets*
19:34:43 <greghaynes> ok, ill do it if things look like they need releasing
19:34:51 <greghaynes> #action greghaynes to release all the things
19:35:11 <greghaynes> #topic CI
19:35:27 <greghaynes> so, I hope people have something to say about this :)
19:35:59 <bnemec> It's broken.
19:36:03 <greghaynes> that it is
19:36:05 <TheJulia> It looks like a fix was proposed
19:36:12 <greghaynes> a fix?
19:36:16 <TheJulia> for neutron
19:36:18 <greghaynes> ah
19:36:25 <greghaynes> yep, just in merge queue
19:36:58 <greghaynes> I'd also like to point out http://logs.openstack.org/19/149819/3/check-tripleo/check-tripleo-ironic-overcloud-f20-nonha/3e39a48/logs/get_state_from_host.txt.gz
19:37:08 <greghaynes> which shows why our f20 jobs have no seed logs
19:38:36 <greghaynes> So there's still the spurious failure that is causing most f20 jobs to fail, and also the spurious nodes going into an "error deleting" state in heat
19:39:13 <greghaynes> I think it would be awesome if we could rally some support around getting those fixed, they seem to be causing a lot of pain for anyone trying to use our CI :)
19:39:46 <bnemec> So I caught part of the conversation about the seed_logs thing.
19:40:09 <bnemec> Were we thinking that it's because there are no logs at all?
19:40:34 <greghaynes> oh, no, there's a failure on f20, but an unrelated bug makes us get no logs for the seed in this certain failure condition
19:40:42 <greghaynes> so debugging the failure is basically impossible
19:40:58 <greghaynes> http://logs.openstack.org/19/149819/3/check-tripleo/check-tripleo-ironic-overcloud-f20-nonha/3e39a48/ is an example of such a job
19:41:59 <bnemec> And we don't have a working theory yet?
19:42:18 <greghaynes> and we've now determined the reason for no seed logs to be that tar --exclude somefile causes tar to exit 1 if somefile does not exist, and we are --excluding a file that doesn't exist on these jobs
19:42:39 <bnemec> Ah.
19:42:51 <bnemec> Maybe we should just touch that file before doing the tar.
19:42:55 <greghaynes> I'm working on a patch for that atm (pretty easy), but we'll still have the f20 fail to fix once that goes in
19:42:59 <greghaynes> yep
19:43:14 <bnemec> Okay, cool
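A minimal sketch of the touch-before-tar workaround bnemec suggests above; the log directory and excluded filename are placeholders, not the real paths from the CI scripts:

    # Guarantee the excluded path exists so tar doesn't exit 1 when the
    # file is absent (per the behavior described above).
    EXCLUDED=/var/log/seed/console.log    # placeholder path
    touch "$EXCLUDED"
    tar -czf seed-logs.tar.gz --exclude="$EXCLUDED" /var/log/seed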
19:43:58 <greghaynes> Any other comments about CI? Maybe other spurious failures I didnt mention
19:45:14 <greghaynes> ooook
19:45:21 <greghaynes> #topic Specs
19:45:44 <greghaynes> #link https://review.openstack.org/#/q/status:open+project:openstack/tripleo-specs,n,z
19:46:26 <greghaynes> looks like there are two specs to review, one of which (the selinux one) could maybe be merged, but more +1's would be awesome
19:46:38 <bnemec> Agreed.
19:47:59 <greghaynes> I doubt theres any more comments on that... since theres only two specs
19:48:11 <greghaynes> #topic open discussion
19:48:38 <greghaynes> #action Everyone read the mid-cycle agenda and add/comment on what we have!
19:48:50 <greghaynes> #link https://etherpad.openstack.org/p/kilo-tripleo-midcycle-meetup
19:49:12 <pcm_> Had a question related to OoO if folks could entertain...
19:49:22 <greghaynes> sure!
19:49:45 <pcm_> Have a private OpenStack cloud on which I'm trying to spin up two VMs and a connecting network.
19:49:50 <greghaynes> aaand looks like etherpad.o.o is down, so maybe dont go editing the agenda right now ;)
19:50:03 <pcm_> Want to run DevStack on each of the two VMs and test VPNaaS between them.
19:50:36 <pcm_> With VBox, I would just have an L2 net, and devstack would create a br-ex with an IP on the public net and eth1 in that bridge.
19:50:56 <pcm_> However, with OpenStack, I'm having problems trying to set up the interconnecting network in this manner.
19:51:01 <pcm_> Is it possible?
19:51:21 <greghaynes> Are you using tripleo to do this?
19:51:55 <pcm_> No. Mostly because I don't know how, and am constrained by the undercloud that I have to use.
19:52:10 <pcm_> But, I was wondering if OoO has tackled the same issue.
19:52:35 <pcm_> Two VMs running openstack, talking over a network that was created on the undercloud.
19:53:19 <bnemec> We don't run our vms on OpenStack (yet...)
19:53:26 <bnemec> They're created directly with virsh.
19:53:58 <pcm_> I can put IPs on eth1 and ping, so connectivity works, but if I add eth1 into br-ex to run DevStack, I lose connectivity.
19:54:03 <pcm_> bnemec: Ah.
19:54:05 <greghaynes> or they're actual baremetal for a prod deployment
19:54:13 <bnemec> Well, that too. :-)
19:54:28 <bnemec> pcm_: Yeah, it's a little tricky running OpenStack nested in itself.
19:54:42 <pcm_> bnemec: Is it possible?
19:54:57 <bnemec> Neutron locks down even private networks pretty hard, so if the traffic it sees coming from a vm isn't exactly what it expects, it blocks it.
19:55:19 <greghaynes> this sounds similar to the multinode devstack testing deal
19:55:28 <bnemec> pcm_: The only way I've gotten it to work is hacking up Neutron to allow more traffic.
19:55:40 <bnemec> Which may not be the only way, but it's what I've got.
19:55:46 <greghaynes> where they just made a gre tunnel to make an L2 network between the nodes
19:55:47 * bnemec is not a networking expert
19:55:53 <pcm_> bnemec: Yeah, that's what I seem to be seeing. If I move the IP to br-ex and swap macs, I can ping the br-ex, but not anything else (like the public IP of the Neutron router).
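One possibly lighter-weight alternative to the Neutron hacking bnemec mentions, assuming the allowed-address-pairs extension is enabled on the undercloud; PORT_ID is a placeholder for the VM's Neutron port, and this only relaxes IP anti-spoofing, so MAC-level filtering can still drop traffic:

    # Allow the port to send traffic from any source IP. Support for the
    # 0.0.0.0/0 wildcard can vary by Neutron release.
    neutron port-update PORT_ID \
        --allowed-address-pairs type=dict list=true ip_address=0.0.0.0/0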
19:56:34 <pcm_> greghaynes: Maybe I can explore that. I essentially want an L2 network between the VMs.
19:56:57 <greghaynes> yep, that's what they did. It's less than ideal obviously (tubes inside of tubes) but it should technically work
19:57:17 <greghaynes> also one gotcha they ran into was that nova sets ebtables rules
19:57:33 <greghaynes> might make sure you're not running into that if you're seeing packets not get where they should be going
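A rough sketch of the GRE approach, assuming the VMs can already reach each other on eth1 (192.0.2.1 and 192.0.2.2 are placeholder addresses) and that devstack's br-ex is an OVS bridge:

    # On VM1 (swap local/remote on VM2): build an L2-over-GRE link
    sudo ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
    sudo ip link set gretap1 up
    # Attach the tunnel endpoint to br-ex in place of eth1
    sudo ovs-vsctl add-port br-ex gretap1

Since the outer GRE packets are sourced from each VM's own fixed IP and MAC, Neutron's anti-spoofing rules shouldn't object to them, which is presumably why the multinode devstack setup works.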
19:58:23 <greghaynes> ok, I think meeting time is basically up
19:58:26 <pcm_> greghaynes: Thanks. I'll ping others here that (hopefully) know how to setup GRE tunnels and give it a try.
19:58:30 <pcm_> thanks all!
19:58:31 <greghaynes> pcm_: np
19:58:38 <greghaynes> Thanks for attending all
19:58:40 <greghaynes> #endmeeting