20:00:13 #startmeeting octavia
20:00:14 Meeting started Wed Sep 9 20:00:13 2015 UTC and is due to finish in 60 minutes. The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:18 The meeting name has been set to 'octavia'
20:00:25 hi
20:00:25 o/
20:00:26 o/
20:00:27 o/
20:00:28 #chair blogan
20:00:29 Current chairs: blogan xgerman
20:00:48 o/
20:00:59 o/
20:01:03 #topic Announcements
20:01:17 hello!
20:01:24 Liberty Release schedule - #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
20:01:27 hi
20:01:29 o/
20:01:49 RC1 is on 9/21 so we need to be on our toes + rumor has it the CLI is already cut
20:02:14 o/
20:02:16 Howdy, howdy!
20:02:16 o/
20:02:19 Mitaka design summit topics
20:02:26 #link https://etherpad.openstack.org/p/neutron-mitaka-designsummit
20:02:44 we need to start adding our topics (e.g. Octavia Active-Active)
20:03:52 Lastly: Flavor stuff
20:03:54 o/
20:04:14 jwarendts has that working on his machine
20:04:31 cool
20:04:44 and it'll start with provider and flavor together and eventually move to just flavor?
20:04:56 well neutron-lbaas
20:05:00 yes, it will show both
20:05:07 and only neutron-lbaas
20:05:15 for now
20:05:22 and hopefully RC1
20:05:53 but, yeah, pretty fun: neutron lbaas-loadbalancer-create --flavor GOLD
20:06:03 Question on that for operators: Do you see yourself actually using a flavor that works on more than one provider?
20:06:06 o/
20:06:33 sbalukoff I don't see myself doing that but it is a valid future use case
20:06:37 I would be surprised if we do that. Rather, I expect we'll probably have multiple flavors per provider.
20:06:53 yeah i think so too
20:06:54 yeah, that's what's implemented
20:06:57 xgerman: Possibly-- if we know someone is going to use it, eh. I'm a fan of not building things nobody uses. :)
20:07:17 sbalukoff if I bought let's say A10 lbs and then get a good deal on Netscaler
20:07:18 But that's a bit of a digression.
20:07:23 and want both to be gold flavor
20:07:55 GOLD_PLUS, GOLD_PREFERRED
20:07:56 it supporting multiple providers was a diplomatic thing to convince certain folks that it wasn't a vendor lock.
20:07:57 but that really complicates things since now you need weights and a scheduler
20:07:58 :P
20:08:02 Easy, gold-a and gold-b
20:08:18 johnsom: way more boring than my suggestions :P
20:08:49 I work for the "cold dead fish" company, what can I say....
20:08:52 or just confuse the heck out of people with "GOLD_PLATINUM" (pretty sure i have seen that on a credit card before)
20:08:55 Haha
20:09:03 hahaha
20:09:22 Anyway, we really don't need to discuss this now, IMO. We've got... healthier fish to fry.
20:09:25 sounds like airline tickets
20:09:31 indeed
20:09:39 i don't understand how that would cause vendor lock
20:09:40 * rm_work is on an airplane, so it may have come to mind
20:09:47 Haha
20:09:49 blogan: me either, but whatever
20:10:01 :)
20:10:51 #topic Liberty deadline stuff
20:11:14 Failover, Heartbeat, oh my
20:11:19 we need to get Failover merged
20:11:20 jeebus
20:11:27 yep
20:11:29 +100 on failover merged!
20:11:29 that and fix the subnet/port delete issue
20:11:30 L7 is out. I wasn't able to get my Octavia work to the point where I'm ready to commit anything yet, and nobody has reviewed any of the L7 Neutron LBaaS stuff.
20:11:35 i believe it is working for both ssh and rest now
20:11:40 blogan: the deletes?
20:11:45 rm_work: no, failover
20:11:45 it wasn't as of yesterday afternoon
20:11:47 oh
20:11:47 k
20:11:54 rm_work: i don't fully comprehend the deletes though, i thought this was fixed
20:12:00 can you point me to some failures to look at?
20:12:00 +1
20:12:13 let's be organized
20:12:18 if failover works we merge it
20:12:23 delete should be its own patch
20:12:26 well i've tested it out, has anyone else?
20:12:36 would like to get double verification, or triple
20:12:43 https://review.openstack.org/#/c/214058/
20:12:44 I haven't had a chance to try it with the latest patches
20:12:46 blogan: I'll try it out in the next day or two.
20:12:52 Can this afternoon though
20:12:53 delete lb ?
20:12:59 johnsom awesome
20:13:05 blogan: yeah i will get you logs
20:13:07 delete ports
20:13:18 where are we with the tempest job with an octavia backend?
20:13:28 bad news
20:13:33 so it'll work with an existing amphora that stops sending heartbeats; in the case of there not being an existing amphora, it will not work, as that will require some major flow changes
20:13:33 johnsom explain
20:13:38 blogan: basically the tempest api tests in neutron-lbaas create a LB, and then when they try to clean up, octavia tries to delete the subnet and fails because there is still a port (because nlbaas owns the port)
20:13:52 dougwig: that's what i was working on with min and what we need the subnet delete fix for
20:13:57 dougwig https://jenkins07.openstack.org/job/gate-neutron-lbaasv2-octavia-dsvm-api/11/console
20:13:57 rm_work: octavia tries to delete the subnet?
20:14:02 rm_work: that doesn't make sense
20:14:03 blogan: i think so
20:14:07 it's on o-cw
20:14:07 It's a topic item later on the list.
20:14:14 i will double-check
20:14:17 it seems really odd
20:14:22 Semi-working. Extremely slow test though
20:14:25 either way, the cleanup totally barfs, and leaves a subnet around
20:14:27 2+ hours
20:14:31 yeah
20:14:39 i think on RAX images it should be more like 80min
20:14:49 Eew.
20:14:52 oh, is that an hp vs rax throwdown that i see?
20:14:53 rm_work explain?
20:14:57 and the infra peeps said they'd take a look at it too once we get the job actually passing, in any amount of time
20:14:58 fanatical support?
20:15:03 johnsom: it's just... faster
20:15:18 Haha
20:15:36 johnsom: min and I tested RAX vs HP images yesterday and found average boot time for VMs to be ~4m on RAX-8G and ~8m on HP-30G (which is what the gate uses)
20:15:36 rm_work do we have a bug for the subnet?
20:15:37 2 hours for one test, that seems acceptable
20:15:41 Hahaha. The big issue is the gate systems not emulating VT-x, so qemu drops to TCG mode, which is painfully slow software emulation for our amps
20:15:42 * blogan ends sarcasm
20:15:46 xgerman: not sure
20:15:59 ok, let's create one so we can track
20:16:00 blogan: i mean it's the whole test run, not "one test"
20:16:08 rm_work: oh, well still
20:16:13 2 hours for all tests? Not bad...
20:16:15 some already take 50m
20:16:19 for the dsvm that is normal
20:16:29 yeah but 2 hours is over double
20:16:40 yeah we figured out it is only creating one VM per file, not one VM per test, so it's really only like 15 VM creates, not 150
20:16:42 but the general Q is: can we get infra to provision VT-x hosts for us?
20:17:00 xgerman: i don't know, like i said, once we can get the experimental gate PASSING, they will look at it with us
20:17:05 dougwig as our liaison...
20:17:10 but until then it is hard for me to give them useful data
20:17:15 does trove have a similar issue?
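(For context on the teardown failure rm_work describes above: the subnet delete is rejected while the neutron-lbaas-owned VIP port still exists on it, so the port has to go first. A minimal sketch with the stock neutron CLI; the IDs are placeholders, the error text is paraphrased, and the device-owner filter assumes the standard neutron:LOADBALANCERV2 value used for LBaaS v2 VIP ports:)

    # what the tempest cleanup effectively attempts:
    neutron subnet-delete <subnet-id>     # fails: ports still allocated on subnet
    # the lingering nlbaas-owned VIP port is visible via:
    neutron port-list --device-owner neutron:LOADBALANCERV2
    # remove the port, then the subnet delete succeeds:
    neutron port-delete <vip-port-id>
    neutron subnet-delete <subnet-id>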
20:17:16 I have been working a lot with them recently as well
20:17:21 not sure
20:17:27 blogan they only spin up three vms total
20:17:33 blogan you may want to work a bit on your patch, i posted a comment there, i think it is related to the port delete https://review.openstack.org/#/c/214058/
20:18:00 blogan: i promise to get you logs by tomorrow
20:18:03 minwang2: ah you're using that patch to run the tests?
20:18:30 or you can try spinning up the test yourself -- my script works now 100%: https://gist.github.com/rm-you/f7585ca4932b3ee1eed9
20:18:32 blogan: ^^
20:18:35 minwang2: does it work without that patch?
20:18:37 from a fresh Ubuntu 14.04
20:18:46 blogan: i was testing without it
20:19:00 100% eh? bold claim sir
20:19:04 heh
20:19:13 100% to the point where you can run tempest tests with "tox -e apiv2"
20:19:15 rm_work: you were testing without and it still got the same issue?
20:19:20 yes
20:19:23 the first time i got the error for teardown, ptoohill pointed out that this patch might help, i cherry-picked it but it didn't seem to help much
20:19:26 okay so not that patch then
20:19:51 okay i'll run some local tests as well and see what happens
20:19:54 can we move the troubleshooting to after the meeting?
20:20:28 xgerman: +1
20:20:39 yes
20:20:43 thanks
20:20:50 Just an aside, huge thanks to rm_work for helping with this
20:21:03 yep, he should come to Seattle more often!!
20:21:04 +1
20:21:23 my job right now seems to be basically making sure everything else works, since I don't have the bandwidth to take a specific task <_<
20:21:34 so just hopping around helping with random stuff seems to be useful :P
20:21:41 rm_work: That's an extremely important job. :)
20:21:48 rm_work did you try Red Bull? That works for dougwig...
20:21:51 heh
20:22:06 i had some of the seattle cold brew while i was in the office, love that stuff
20:22:14 anyway, what is next if we're done troubleshooting?
20:22:17 yep, it's awesome!!
20:22:24 we merge everything
20:22:27 heh
20:22:34 Yes, merge, merge, merge
20:22:40 And then go nuts fixing bugs
20:22:41 well there's an additional patch in front of failover, right?
20:22:46 except Bertrans's thing
20:22:47 but do we think we can merge that and failover ... today?
20:22:51 ok so assuming failover and the gate work, what's the next thing we need to get in for ref?
20:22:51 I will give +2's, etc this afternoon after I test the latest code
20:23:10 UDP listener at least
20:23:20 okay
20:23:20 ideally the followup for eventstreamer but that won't be dealbreaking
20:23:23 well, the DB stuff works
20:23:28 I tried that a while back
20:23:36 not because of the eventstreamer itself, but because of the refactor it includes for the DB updates
20:24:06 mmh, so need to try again...
20:24:09 leaving those non-mixins as mixins is ... irksome
20:24:22 k
20:24:24 though it doesn't affect functionality really, it's just gross
20:24:25 we can rename ;-)
20:24:30 yeah that is the refactor
20:24:43 Can we do that after we do the chain merge?
20:24:49 +1
20:25:04 yeah i mean if we get through to UDP merged it's good
20:25:15 Sweet.
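(For anyone reproducing the run rm_work describes above: once his gist's setup script has brought up devstack on a fresh Ubuntu 14.04 box, the tempest API tests quoted in the transcript are kicked off from the neutron-lbaas checkout. The /opt/stack path is the devstack default and assumed here:)

    cd /opt/stack/neutron-lbaas
    tox -e apiv2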
20:25:50 ok, so moving on: johnsom did some benchmarks
20:26:00 so dougwig can be proud :-)
20:26:05 Yeah, I would like to see everything in that chain up to UDP merged today/tomorrow if we can get the reviews/tests done
20:26:11 isn't dougwig on vacation or something
20:26:13 Yeah, ok
20:26:19 just commented on https://review.openstack.org/#/c/220747/5 but willing to merge it despite comments
20:26:20 I saw his ghost earlier
20:26:23 he came back early, after mechanical issues.
20:26:23 all minor code quality stuff
20:26:24 caveats - DevStack inside ESXi 6 on 14.04 guest - 4vCPU, 16GB RAM, 50GB disk. Two amphora ubuntu nova instances with GWAN web server, serving 100 byte text file.
20:26:33 I used two benchmarks, apache bench (ab) and httperf (full disclaimer: HP Labs code).
20:26:36 mechanical issues? Pictures?
20:27:02 On httperf Octavia did 88.5% of the requests per second of the namespace driver.
20:27:22 johnsom: i had some erlang-based swarm testing code somewhere that ptoohill and I worked on a while back, we might throw that at some LBs too and see if we get similar results
20:27:26 On AB Octavia did 103% of the requests per second
20:27:38 Tsung
20:27:40 yeah
20:27:43 Tsung is GREAT
20:27:51 It is.
20:27:56 Locust is getting better
20:27:59 so johnsom, they look similar
20:28:19 Close enough not to make people hate Octavia on the basis of performance, at least.
20:28:22 but that's workable, even at ~90% that's fine
20:28:25 yep
20:28:33 sbalukoff +1
20:28:36 The test system was really CPU-bound to get peak performance numbers. It really should be run on 4+ bare metal systems to get true requests-per-second #s
20:28:38 sbalukoff: yeah and octavia has more going for it as well
20:28:51 like the name Octavia
20:28:52 blogan: Indeed it does! Octavia has a bright future!
20:28:58 johnsom: ok, remind me later and we can look at that
20:29:16 the more benchmarks the better
20:29:34 absolutely — I like to be on stage and say "now with 1000% more performance"
20:29:41 Yep. This highly qualifies as "quick and dirty"
20:29:46 those numbers will do fine, especially since we now have scalability and (soon) failover.
20:29:55 Just to make dougwig sleep slightly better at night
20:30:08 Hehe!
20:30:17 moving on
20:30:23 Horizon
20:30:23 We are all extremely concerned about dougwig's sleep.
20:30:51 so Aish on our team tried installing that without success so far
20:31:11 anyone from ebay/paypal here today?
20:31:25 we have been in touch with them
20:31:48 so chances are that it will work
20:32:06 but I am not sure how we would package/release it
20:33:07 let's get it working and merged, then we can sort out the release timeline.
20:33:15 ok
20:33:20 sounds good
20:33:37 next up: Active-Passive
20:34:06 johnsom reviewed and Sherif is fixing the bugs currently
20:34:35 Yeah, I think he is pretty close to addressing all of the comments I left.
20:34:37 I talked with mestery and we have an FFE for that
20:35:00 hopefully the failover and udp reviews go faster and get merged so I can focus on that one and test it out well
20:35:13 yeah, that would be good
20:36:36 #Open Discussion
20:36:40 #topic Open Discussion
20:36:53 have we mentioned merges?
20:37:01 yes, we did :-)
20:37:12 oh
20:37:14 and we don't want to merge things which are not critical right now
20:37:23 I have patches up on both Octavia and neutron-lbaas for adding a member state of "NO_MONITOR".
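(The transcript doesn't record johnsom's exact benchmark invocations; for reference, ab and httperf runs against a 100-byte file of the sort he describes would look roughly like the following. The VIP address, file name, connection counts, and rate are purely illustrative:)

    # apache bench: 100k requests, 100 concurrent, against the LB VIP
    ab -n 100000 -c 100 http://<vip-address>/100b.txt
    # httperf equivalent: fixed connection count at a fixed request rate
    httperf --server <vip-address> --port 80 --uri /100b.txt --num-conns 100000 --rate 1000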
20:37:26 design sessions: should we have one for neutron-lbaas and one for octavia, or have it combined now?
20:38:23 As I alluded to last week, I have an IBM group based in Israel that is very interested in contributing to Octavia.
20:38:23 cool
20:38:23 I am working on getting them up to speed as to how to engage with the community effectively.
20:38:23 sbalukoff: they can sync up with sam and evgeny well too!
20:38:23 blogan — not sure how tight the space is but we can combine
20:38:24 i am working on octavia gate setup and cert rotation
20:38:24 I don't know if they'll make it to this meeting that often, as they are in a crappy timezone for it. But I'mma try to get them into our IRC channel.
20:38:24 yeah, cert/anchor stuff will be M
20:38:30 sbalukoff: yeah, same reason sam and evgenyf don't make it most of the time
20:38:43 so we need a new time?
20:38:47 alternate?
20:38:51 For what it's worth, they're most interested in working on the active-active code, a heat-compute driver, and starting work on making horizontal service delivery an actual reality.
20:38:59 xgerman: let's discuss that at the summit
20:39:01 So, they're probably going to be pushing hard for / diving into that.
20:39:03 xgerman: or make an ML
20:39:23 yeah, this clearly is an ML
20:40:04 So, expect to hear a lot more about that in the near-ish future.
20:40:12 cool
20:40:45 we are excited
20:40:52 On another note: I feel like the Neutron LBaaS pool sharing stuff should be ready to merge, though since nothing in L7 is going to get in in Liberty, that will probably remain on hold until the Liberty stuff is tagged at least.
20:41:01 What about doc updates for Liberty / Octavia?
20:41:11 johnsom: Yep, we need to do that.
20:41:14 +1
20:41:17 +1
20:41:32 first we make something that works… then :-)
20:41:40 xgerman: +1
20:41:53 I'm happy to work on that once the merge-fest is done and we're in bugfix mode for a while.
20:42:02 thanks
20:42:17 I am happy to help as well + remember we have a hands-on lab in about a month
20:42:26 Right.
20:42:28 yes
20:42:33 which i *will* be there for
20:42:39 flights and hotel booked yesterday
20:42:44 Sweet!
20:42:52 I'm also 100% confirmed to be there.
20:42:57 cool
20:42:59 It doesn't look promising for me to show up, guys... Makes me sad, and jealous.
20:42:59 cool
20:43:08 TrevorV: there might be another round
20:43:15 TrevorV just hide in blogan's suitcase
20:43:15 and i was definitely WAY under what they expected for budget
20:43:17 It's unlikely they'll pick me anyway.
20:43:27 You gonna hostel it, rm_work?
20:43:34 no, but cheap hotel
20:43:37 I've signed up as a speaker, last week sometime.
20:43:38 staying at the same one as brandon
20:43:43 woo 3/5 star hotel
20:43:49 In Japan, that might be bad.
20:43:50 lulz
20:43:52 heh yeah
20:44:01 Westin is 5-star <_<
20:44:07 worried a little about 3-star
20:44:07 it is bad
20:44:38 I'll make the mistake of getting into a 5-star, blowing the budget ideals there, and then getting in trouble and never being able to travel again... Except I'll have gone to Tokyo, so "worth it" is all I'll say :P
20:44:40 it'll be a little room
20:44:52 heh
20:44:57 Haha
20:45:24 My objective is to not end up in a "pod" hotel...
20:45:26 https://www.youtube.com/watch?v=K0ViYfN420k
20:45:40 so once we are done with main topics, i will ask who all is staying afterwards and if anyone wants to go to Osaka with me :P
20:46:08 I'm there from 10/24 to 11/14
20:46:17 i want to ride on a plane that is tall-person friendly
20:46:17 and no plans for week 2 yet
20:46:19 well, 10/31 is a big kid holiday
20:46:29 11/03 is Culture Day
20:46:38 festivals and stuff that i think are cool
20:46:41 if you stay longer
20:46:46 mmh
20:46:53 anywho, any other real topics?
20:46:56 where were we?
20:47:03 I think no real topics
20:47:13 done early?
20:47:16 Don't have the vacation time for it, myself, unfortunately. Otherwise I'd love to, rm_work. :P
20:47:20 Done!
20:47:26 #endmeeting