21:00:19 #startmeeting networking
21:00:20 Meeting started Mon Jan 18 21:00:19 2016 UTC and is due to finish in 60 minutes. The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:24 The meeting name has been set to 'networking'
21:00:38 lots of folks for a holiday.
21:00:52 Hi
21:00:54 Holiday for lots of folks
21:00:56 for those who made it, thanks for joining!
21:00:57 hello all!
21:01:00 ola
21:01:00 Holidays are for people who take time off dougwig.
21:01:01 night for some too!
21:01:14 night but not holiday
21:01:21 * carl_baldwin likes holidays
21:01:26 holidays, nights and time off are overrated :)
21:01:29 you have a good quorum armax
21:01:38 armax: Spoken like a true workaholic!
21:01:43 I suppose
21:01:47 ok, let's dive in
21:02:02 we'll see if we can keep this small
21:02:04 short
21:02:07 I mean
21:02:20 the agenda for today:
21:02:22 #link https://wiki.openstack.org/wiki/Network/Meetings
21:02:23 small AND short
21:02:33 #topic Announcements
21:02:41 this week is milestone week
21:02:57 mestery already has a patch up
21:02:59 #link https://review.openstack.org/#/c/269265/
21:03:08 I'm stealthy AND fast
21:03:09 o/
21:03:16 I think we're gonna aggressively release as soon as we can confirm that all projects are sane
21:03:22 How many BPs do we have for M2 armax?
21:03:24 meaning that their gate CI is green
21:03:28 I mean, that finished?
21:03:32 #link https://launchpad.net/neutron/+milestone/mitaka-2
21:03:38 28 bps, and 71 bugs
21:03:47 thanks
21:04:09 3 blueprints implemented so far
21:04:37 for those of you whose neutron work is affected by nova
21:04:41 hello - sorry a bit late
21:04:48 next week the nova team meets in Bristol, UK
21:04:57 the etherpad for the mid-cycle
21:04:58 https://etherpad.openstack.org/p/mitaka-nova-midcycle
21:05:00 #link https://etherpad.openstack.org/p/mitaka-nova-midcycle
21:05:24 if you have something that you would like to bring to their attention or raise awareness of, please reach out to me
21:05:36 next one
21:05:37 anyone from neutron going to the nova midcycle? i've got a conflict.
21:05:44 armax: any plans for a Neutron mid-cycle?
21:05:46 dougwig: carl_baldwin and armax are
21:05:54 Sukhdev: you beat me to it
21:06:10 if you guys recall, we decided to call the Neutron mid-cycle off during the early part of the cycle
21:06:29 dougwig and a few others are looking into the logistics for a mid-cycle closer to the end of the cycle
21:06:40 right now we're looking at the last week of Feb
21:06:43 somewhere on the east coast
21:06:43 So it's not really a mid-cycle then ...
21:06:52 mestery: pedant.
21:06:55 FLAHRIDA
21:06:56 mestery: call it whatever you fancy
21:07:02 :)
21:07:10 OK: "Beers with friends"
21:07:14 can it be first week of march?
21:07:19 last week of feb is infra
21:07:22 I'm allowed to invite my friends?
21:07:32 kevinbenton: Wait, you're coming?
21:07:37 I suppose the east coast is relatively easier to get to from both sides of the pond
21:07:38 anteaya: It can't
21:07:43 Last week of feb
21:08:01 we'll sort things out
21:08:07 kevinbenton: can I be your friend?
21:08:27 i charge by the hour :)
21:08:33 since your attendance may be determined by the topics of the meetup
21:08:43 watch out for a mail that I'll send later today
21:08:52 kevinbenton: I pay in Indian Rupees :-)
21:09:00 are there any main topics in mind yet?
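
[Editor's note: the blueprint and bug counts quoted above come from the Launchpad milestone page linked at 21:03:32. As a minimal sketch, the same kind of bug numbers can be pulled programmatically with launchpadlib; the consumer name below is arbitrary, and the milestone name 'mitaka-2' is taken from the link above.]

    # Requires: pip install launchpadlib
    from launchpadlib.launchpad import Launchpad

    # Anonymous, read-only access to production Launchpad.
    lp = Launchpad.login_anonymously('neutron-meeting-stats', 'production')

    neutron = lp.projects['neutron']
    milestone = neutron.getMilestone(name='mitaka-2')

    # Count bug tasks targeted to the milestone, split by Launchpad's
    # standard bug task statuses. (Blueprints are listed on the same
    # milestone page but are not bug tasks.)
    open_tasks = neutron.searchTasks(
        milestone=milestone,
        status=['New', 'Confirmed', 'Triaged', 'In Progress'])
    closed_tasks = neutron.searchTasks(
        milestone=milestone,
        status=['Fix Committed', 'Fix Released'])

    print('open: %d, closed: %d' % (len(open_tasks), len(closed_tasks)))
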
21:09:05 armax: just be aware that the QA midcycle is near the end of feb - https://wiki.openstack.org/wiki/QA/CodeSprintMitakaBoston
21:09:21 sc68cal: thanks for the heads-up
21:09:27 or will it just be whatever blueprints are outstanding?
21:09:28 sc68cal: yeah it is the same week as infra
21:09:33 boston @ the end of february - that doesn't sound like good planning
21:09:38 rofl
21:09:39 kevinbenton: I'll follow up on the ML
21:10:00 regXboi: winter is coming, Jon Snow
21:10:07 next one
21:10:11 #topic blueprints
21:10:23 #link https://launchpad.net/neutron/+milestone/mitaka-2
21:10:37 existing assignments for mitaka
21:10:38 #link https://blueprints.launchpad.net/neutron/mitaka/+assignments
21:11:01 if you are an assignee or approver for a blueprint, please work together to update the BP whiteboard
21:11:18 if your work doesn't have substantial code/documentation in place, most likely it will fall off Mitaka
21:11:36 as for RFEs, we'll go through a similar diet exercise
21:11:54 after M2 is cut
21:12:21 at the next meeting we'll go over the list to see what can be spared from the slaughter
21:12:37 * sc68cal has already moved fwaas v2 out of mitaka
21:12:55 sc68cal: you ruin all the fun
21:13:48 sc68cal: thanks for letting me know
21:14:01 I need to document the steps to make deferred BPs show up in this dashboard
21:14:04 https://blueprints.launchpad.net/neutron
21:14:05 #link https://blueprints.launchpad.net/neutron
21:14:14 o/
21:14:14 the appropriate way
21:15:06 as for the folks involved in the existing backlog, I'll try to reach out offline if I have questions/doubts
21:15:51 I would be eternally grateful if you drop a note on the whiteboard/report page of the RFE/BP you're assigned to in whichever capacity
21:17:49 ok, the prolonged silence is my cue for the next section
21:18:08 #topic Bugs
21:19:01 so far it looks like we fixed nearly 50% of the bugs submitted since Mitaka started
21:19:03 that's impressive
21:19:20 but this means we have another 50% to go
21:19:27 which roughly amounts to 300+ bugs
21:20:04 am I alone in this meeting?
21:20:13 armax: we're all behind you
21:20:15 no :)
21:20:19 * mestery grabs popcorn
21:20:27 I'm here :)
21:20:28 * ihrachysh makes a silly comment for stats
21:20:29 ah, for a second I thought I was speaking to myself
21:20:32 nice job with bugs
21:20:38 * mestery gives ihrachysh a gold star
21:20:53 I wasn't asking for cheerleading, but I'll take it nonetheless :)
21:21:30 since we're getting closer to the end of the release
21:21:50 I would encourage people to be conscious of the state of the gate when pushing stuff into the merge queue
21:22:32 I'll amend that to: I'd encourage people to generally always be conscious of the state of the gate
21:22:35 so give priority to targeted stuff, unless the gate is relatively clear or the patch runs on a smaller, more reliable set of jobs
21:22:39 mestery: +1000
21:22:40 mestery: ++
21:22:52 But I get your point armax, sorry for the distraction
21:22:52 mestery: ++
21:23:14 A great tool to be aware of the state of the gate is: http://docs.openstack.org/developer/neutron/dashboards/check.dashboard.html
21:23:16 we had a few setbacks over the past weeks
21:23:32 with the gate being broken, hopefully it'll be better in the future
21:23:32 not setbacks, fire drills
21:23:34 amuller: ++
21:23:42 and
21:23:44 #link http://docs.openstack.org/developer/neutron/dashboards/gate.dashboard.html
21:23:47 better?
you gotta be kidding
21:23:57 ihrachysh: ok
21:24:01 For example you can clearly see that the multinode regular and DVR jobs are broken ATM
21:24:08 we're doomed and it's gonna be worse from now on
21:24:10 ihrachysh: better?
21:24:22 amuller: we're way ahead of you :)
21:24:33 armax: that's to be expected
21:24:34 armax: now I hear an engineer speaking
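
[Editor's note: the check and gate dashboards linked above are built on zuul's job metrics in OpenStack's graphite instance. As a rough sketch of where the pass/failure percentages debated below come from, the graphite render API can be queried directly; the metric path here is illustrative only, and the real paths for a given job should be read out of the dashboard definitions.]

    # Rough sketch: compute a job's recent failure rate from graphite.
    # The metric path is illustrative; copy real paths from the dashboards.
    import requests

    GRAPHITE = 'http://graphite.openstack.org/render'
    JOB = 'stats_counts.zuul.pipeline.gate.job.gate-tempest-dsvm-neutron-full'

    def total(target, hours=24):
        resp = requests.get(GRAPHITE, params={
            'target': target, 'from': '-%dhours' % hours, 'format': 'json'})
        resp.raise_for_status()
        # Each series holds [value, timestamp] pairs; None means no data.
        return sum(v for series in resp.json()
                   for v, _ in series['datapoints'] if v)

    failures = total(JOB + '.FAILURE')
    successes = total(JOB + '.SUCCESS')
    if failures + successes:
        print('failure rate: %.1f%%'
              % (100.0 * failures / (failures + successes)))
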
21:25:02 amuller: re multinode, it seems like the failures are gate timeouts, could be just an issue of long bootstrap
21:25:11 amuller: but yeah I hope someone is on top of the recent failures, I know regXboi has been briefly looking into it?
21:25:35 luckily the dvr issue is not affecting the gate queue
21:25:40 at some point we should address that our baseline for a "good" gate is ~12% failure rate. that shaky foundation doesn't help any of the jobs built on top of that base.
21:25:51 I went and looked at the recent failures that I could find in jenkins and ihrachysh is correct - most of them are timeouts of the job itself
21:26:08 dougwig: You're a prophet sent from the future!
21:26:39 dougwig: 0%
21:27:14 if I look at
21:27:16 #link http://status.openstack.org/openstack-health
21:27:17 what caused the timeout? a bug we should file against infra?
21:27:20 we're at a 97% pass rate
21:27:21 I am wondering if we've added enough tests that we are blowing past the top of the runtime limit
21:27:27 on the gate queue
21:27:32 dougwig: that's for the check queue I think. If you look at the gate queue it's normally 0%, and you can also look at openstack-health which armax just linked, it also shows near 0% on the gate queue
21:27:40 speaking of the multinode gate, we have some MTU related patches to push that should fix some issues with ssh connection checks in the gate: https://review.openstack.org/#/q/topic:multinode-neutron-mtu
21:27:54 so I'd say 100% pass rate with +/- 3%? :P
21:27:56 right - and due to it being MTU, that chews up a ton of retry time
21:27:59 regXboi: someone should walk thru the logs and check when tests are started and how that compares to regular runs
21:28:09 it takes 5 minutes before SSH gives up and disconnects
21:28:17 ihrachysh: I might be able to spend some time tomorrow attempting that
21:28:30 regXboi: keep me informed
21:28:31 3% is great.
21:28:39 btw the linuxbridge job on the gate is a bit flaky recently
21:28:48 I wonder if sc68cal has had a glimpse into that
21:28:55 #action regXboi to look at jenkins logs to compare run times of multinode tests
21:28:59 one thing to also remember: I think mtreinish mentioned that openstack-health only tracks failures where it gets to running the tests. It doesn't track failures early in the setup
21:29:23 sc68cal: I think so
21:29:31 openstack-health runs code that inserts the test results into a mysql DB at the end of the run; the run has to complete
21:29:32 sc68cal: that's what we want. unless neutron causes setup failures
21:29:50 sc68cal: that is my understanding also
21:30:02 before we move on from bugs
21:30:13 anyone willing to be deputy for this week and the weeks coming next?
21:30:16 we're exposed right now
21:30:18 so, the openstack-health % is a datapoint, but not the full picture. I think regXboi's graphite queries measure the bigger picture
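
[Editor's note: on the MTU patches mentioned at 21:27:40 — when the path MTU is smaller than the instance believes, large packets such as SSH's key exchange are silently dropped and the gate's connectivity check burns its whole timeout. A quick sketch of probing whether a given frame size fits, using Linux ping's don't-fragment mode; the host address and MTU values are placeholders.]

    # Sketch: check whether a frame of the given MTU survives the path to
    # 'host' without fragmentation. Relies on Linux ping's -M do (set DF).
    import subprocess

    def mtu_fits(host, mtu=1500):
        # payload = MTU minus 20 bytes IP header and 8 bytes ICMP header
        payload = mtu - 28
        rc = subprocess.call(
            ['ping', '-c', '1', '-w', '2', '-M', 'do',
             '-s', str(payload), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return rc == 0

    # Example: a VXLAN overlay on a 1500-byte physical network typically
    # leaves only ~1450 bytes for the guest, so this would return False.
    print(mtu_fits('10.0.0.5', 1500))  # hypothetical guest address
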
21:30:19 !!!
21:30:20 armax: Error: "!!" is not a valid command.
21:30:27 oops
21:30:53 sc68cal: actually, the graphite is the overall; openstack-health should eventually get us to a better drill down
21:31:06 at least once it includes the check pipelines
21:31:24 * ihrachysh will need to skip the bug duty for the next three weeks but is happy to get on board after
21:31:29 I can have a go this week
21:31:42 mlavalle you interested?
21:31:45 regXboi: I don't know if openstack-health will ever include check
21:32:01 anyone else?
21:32:11 sc68cal: ?
21:32:18 anteaya: any idea whether it will include stats for non-tempest jobs, like the neutron functional job?
21:32:21 armax: Next week I can
21:32:27 ok
21:32:29 cool
21:32:32 i can this week
21:32:42 #action mlavalle deputy for the week of 25th
21:32:49 ZZelle_: sweet
21:32:54 anteaya: I'm negotiating on that one :)
21:33:03 #action ZZelle_ deputy for week of 18th
21:33:03 amuller: I don't believe that is the goal of this tool
21:33:30 I believe it was created to be a gate health tool
21:34:12 sorry to derail
21:34:24 anteaya: yeah, I was about to tell you off :P
21:34:28 * armax kids
21:34:34 :)
21:34:40 I'll talk to mtreinish offline
21:34:46 ok, anything else in this section? Anything burning that we should be aware of and aren't?
21:34:53 sc68cal: so what is our false failure rate, then?
21:34:55 anteaya: again, I'm still negotiating that one
21:35:12 * regXboi now goes and looks for the re-railer
21:35:45 dougwig: haven't checked lately
21:36:01 next one
21:36:06 #topic Docs
21:36:17 mmm docs
21:36:17 emagana I know that Sam-I-Am is not around
21:36:28 I was hoping he was
21:36:31 he mentioned
21:36:34 that the networking guide now publishes stable releases rather than continuously, so people using an older version of OpenStack can find information relevant to it
21:36:34 He had some points to discuss
21:36:48 armax: indeed, that is completed
21:37:00 he'll be updating the networking guide scenarios for Liberty. Once those are done, I think he'll split time between populating the intro content and updates for Mitaka.
21:37:18 armax: that is correct
21:37:31 well, kudos to him :)
21:37:49 now we have http://docs.openstack.org/liberty/networking-guide/ (liberty version) from the top page of docs.o.o :)
21:37:51 we don't want to create confusion about scenarios, but we also don't want to create more very similar ones
21:37:54 I personally have this review https://review.openstack.org/#/c/253229/
21:37:59 sitting forever in the queue
21:38:08 armax: I will take care of it
21:38:17 this should help us tag the automatic doc bugs in LP
21:38:24 emagana: that's not a documentation patch
21:38:28 armax: sounds good
21:39:03 armax: oh got it.. will review anyway
21:39:05 amotoki: so I assume that we don't publish trunk changes at all, do we?
21:39:11 emagana: sure, thanks
21:39:37 armax: I think so too
21:39:38 armax: we do..
21:39:41 ok
21:39:47 we do have both Liberty and current
21:41:09 Sam-I-Am wanted to discuss the ability to attach VMs directly to public/external networks
21:41:47 but I do not have all the context on what we wanted to review with you all. We have a networking guide meeting this Thursday. I will review it with him and the rest of the folks attending the meeting
21:41:48 emagana: what about it?
it should work fine if people aren't using that stupid 'external_network_bridge' option in the l3 agent
21:42:14 kevinbenton: maybe it's as simple as a good explanation of how to use it
21:42:20 I think the issue is dhcp
21:42:28 an external network can either be a 'real' network that is wired up by the L2 agent
21:42:41 kevinbenton: indeed, that is the way we use it..
21:42:46 in which case everything works completely fine (even dhcp agents can bind to it if you have dhcp enabled)
21:43:11 or you use the 'external_network_bridge' option on the l3 agent, and then it's completely out of reach of everything but the l3 agent
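
[Editor's note: a sketch of the 'real network wired by the L2 agent' setup kevinbenton describes, for an ML2/OVS deployment of this era. The bridge and network names ('public', 'br-ex') are examples; the key point is leaving external_network_bridge empty so the L3 agent's external port is wired by the L2 agent like any other port, which is what lets VMs attach to the external network directly.]

    # l3_agent.ini -- leave external_network_bridge empty so external
    # networks are wired by the L2 agent instead of being plugged into a
    # bridge that only the l3 agent knows about:
    [DEFAULT]
    external_network_bridge =

    # ml2_conf.ini / openvswitch agent -- map a physical network label to
    # the external bridge:
    [ml2_type_flat]
    flat_networks = public
    [ovs]
    bridge_mappings = public:br-ex

    # Then create the external network as a real provider network; VMs
    # (and the DHCP agent, if enabled) can attach to it directly:
    #   neutron net-create public --router:external \
    #       --provider:network_type flat --provider:physical_network public
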
21:43:14 The only issue is developer envs - most likely they already have a DHCP service running
21:44:07 sc68cal: yes, for a developer workflow they would need to have dhcp disabled
21:44:22 or hope dnsmasq is faster :)
21:44:26 :)
21:44:43 emagana: you can loop me into that conversation later
21:44:45 ok
21:44:49 shall we wrap it up?
21:44:51 kevinbenton: I will do that
21:44:51 correction: we have the trunk (Mitaka) version of the networking guide under /draft/ now. http://docs.openstack.org/draft/networking-guide/
21:44:59 * regXboi goes and looks for the bow ?!?!
21:45:11 amotoki: thanks for the link. That is what I thought
21:45:20 amotoki: thanks for the link
21:45:29 #link http://docs.openstack.org/draft/networking-guide/
21:45:41 armax: about the networking schedule, we ran into the same odd/even bi-weekly meetings
21:45:41 for tracking master changes to openstack-manuals
21:45:51 but we can figure that out on the ML
21:46:05 emagana: ok
21:46:12 armax: nothing else, but as always extending the invite to help complete the missing sections in the networking guide
21:46:34 #link https://etherpad.openstack.org/p/networking-guide
21:46:48 feel free to grab a section :-)
21:47:12 #topic Open Discussion
21:47:23 Swami added a topic to the agenda
21:47:30 obondarev might be around
21:47:35 but I don't see Swami
21:48:01 having said that, this topic is being raised at the nova mid-cycle
21:48:13 or whichever way mestery wants to call it
21:48:26 friends with beers?
21:48:40 armax: After a few of either, the order doesn't matter.
21:48:57 mestery: it sure does not! :)
21:49:00 does anyone want to discuss anything in lieu of that?
21:49:12 if not, we can get 11 minutes of our lives back
21:49:19 What time is the next meeting?
21:49:22 and keep breaking the gate, ihrachysh am I right?
21:49:33 "repent! cried the tick tock man"
21:49:35 as always sir
21:49:35 hichihara: same time, same place, same day
21:49:35 friendly note: The OpenStack Foundation recommended Austin hotels are running out of rooms
21:49:46 armax: OK. Thanks
21:49:53 amuller: Thanks for sharing
21:50:00 amuller: thanks for the heads up
21:50:02 * mestery senses the rush to the foundation website from those without a room
21:50:03 amuller: thanks
21:50:05 amuller: good lad
21:51:14 austin shouldn't be too hot. Bring your camping gear
21:51:37 ok, for the folks who stayed late, for those on time off, and for those who simply participated: thanks for joining
21:52:00 have a blast!
21:52:04 #endmeeting