20:00:12 <xgerman> #startmeeting Octavia
20:00:13 <openstack> Meeting started Wed Apr  8 20:00:12 2015 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:17 <openstack> The meeting name has been set to 'octavia'
20:04:34 <johnsom> o/
20:04:34 <fnaval> o/
20:04:34 <ajmiller> o/
20:04:34 <xgerman> #chair blogan
20:04:34 <rm_work> o/
20:04:34 <jorgem> o/
20:04:34 <blogan> hi!
20:04:34 <TrevorV> o/
20:04:34 <openstack> Current chairs: blogan xgerman
20:04:34 <xgerman> #topic Announcements
20:04:34 <ptoohill> o/
20:04:35 <xgerman> meetbot stopped yielding to my commands
20:04:35 <dougwig> #action xgerman pull out hair
20:04:35 <dougwig> i asked about it in infra.
20:04:35 * blogan kicks meetbot
20:04:35 <xgerman> thanks
20:04:35 <dougwig> i think we can carry on...
20:04:35 <xgerman> yeah
20:04:36 * TrevorV kicks blogan
20:04:36 <xgerman> ok, after all the kicking - any announcements?
20:04:36 <xgerman> Anything from the Neutron meeting?
20:04:36 <dougwig> two notes.
20:04:36 <dougwig> RC1 is being cut tomorrow, so any last minute kilo bug fixes need to be in the merge queue today.
20:04:37 <dougwig> and the neutron mid-cycle for Liberty has been announced: https://etherpad.openstack.org/p/neutron-liberty-mid-cycle
20:04:37 <dougwig> #link https://etherpad.openstack.org/p/neutron-liberty-mid-cycle
20:04:44 * blogan hugs meetbot
20:05:03 <johnsom> meetbot is sleepy today
20:05:07 <xgerman> ok, meetbot was just getting a  coffee
20:05:12 <ptoohill> +1, sleepy day
20:05:49 <jorgem> no more announcements?
20:06:02 <xgerman> nope
20:06:14 <xgerman> #topic Brief progress reports
20:06:36 <dougwig> none from me. i need to switch from Kilo to the neutron-lbaas driver this week.
20:06:40 <ptoohill> I've made a couple updates to templater. Beginning to test drivers
20:06:42 <TrevorV> ssh_driver under review
20:06:51 * blogan kicks dougwig into action
20:07:01 <xgerman> api server almost done
20:07:01 <TrevorV> I'm also updating the API PUT methods to have database updates after the queue
20:07:02 <johnsom> Controller worker is progressing.  All of the framework is there.  I'm adding driver plugins with stevedore today.
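[Editor's note: johnsom mentions loading driver plugins with stevedore. The sketch below shows the idea behind stevedore-style driver loading in miniature; Octavia's real code uses stevedore.driver.DriverManager over setuptools entry points, and the namespace and driver names here are invented for illustration.]

```python
# Minimal sketch of stevedore-style driver loading. The real implementation
# resolves drivers from setuptools entry points via stevedore's DriverManager;
# here a plain dict stands in for the entry-point registry. All names are
# illustrative, not Octavia's actual namespaces.

class NoopAmphoraDriver:
    """Stand-in driver so the sketch is self-contained."""
    def start(self, amphora_id):
        return "started %s" % amphora_id

# In stevedore, this mapping would live in each package's entry_points, e.g.
#   octavia.amphora.drivers =
#       amphora_noop_driver = octavia.amphorae.drivers.noop:NoopAmphoraDriver
_ENTRY_POINTS = {
    ("octavia.amphora.drivers", "amphora_noop_driver"): NoopAmphoraDriver,
}

def load_driver(namespace, name):
    """Look up and instantiate a driver, like DriverManager(invoke_on_load=True)."""
    try:
        cls = _ENTRY_POINTS[(namespace, name)]
    except KeyError:
        raise RuntimeError("No driver %r in namespace %r" % (name, namespace))
    return cls()
```

The payoff of this pattern is that swapping drivers (noop, SSH, REST) becomes a config change rather than a code change.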
20:07:15 <ajmiller> Octavia devstack plugin is getting close.
20:07:35 <ajmiller> #link https://review.openstack.org/#/c/167796/
20:07:38 <blogan> network driver is almost complete, i will need to coordinate with johnsom on the changes needed from the controller worker's perspective
20:07:42 <johnsom> TrevorV, I want to talk about that later in the agenda
20:07:58 <jorgem> johnsom: Do you have a timeframe on when you think the controller worker will no longer be a WIP?
20:08:24 <ajmiller> I have one TODO in there about shutdown tasks, for which I need working control plane to finish.
20:08:48 <johnsom> jorgem That depends on our discussion about the api/db.  I hope in the next week or two
20:09:08 <jorgem> johnsom: Gotcha, I'm guessing we will be talking about that later in the meeting
20:09:10 <blogan> ajmiller: thanks, this will be great to have once we have an end 2 end
20:09:40 <xgerman> yep, people will want to download and play with it right after the demo in 6 weeks
20:10:47 <TrevorV> sorry, disconnected for a bit there...
20:10:51 <xgerman> also mwang2 did some more work on the health manager and sballe is closer with the REST based driver
20:10:55 <mwang2> we had methods for health check in amphora driver, please review and comment on the code too
20:11:03 <mwang2> #link https://review.openstack.org/#/c/170599/
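[Editor's note: mwang2's review adds health-check methods to the amphora driver. Without quoting the review itself, a health-check hook on a driver interface might look roughly like this; the method name, mixin, and return shape below are guesses for illustration, not what the patch actually contains.]

```python
# Hypothetical sketch of a health-check surface on an amphora driver.
# Names and return format are assumptions, not Octavia's real interface.
import abc

class AmphoraHealthMixin(abc.ABC):
    @abc.abstractmethod
    def get_health(self, amphora):
        """Return a dict like {'healthy': bool, 'detail': str}."""

class SshHealthDriver(AmphoraHealthMixin):
    def get_health(self, amphora):
        # A real driver would SSH into the amphora or poll an endpoint;
        # stubbed out here so the sketch stands alone.
        return {"healthy": True, "detail": "haproxy running on %s" % amphora}
```

A health manager could then poll `get_health()` per amphora and trigger failover when `healthy` goes false.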
20:12:06 <xgerman> any more progress?
20:12:21 <TrevorV> johnsom you had to talk to me about something?  Do you mean outside this meeting?
20:12:29 <blogan> mwang2: just commented on that review
20:12:54 <johnsom> TrevorV No, we can hit it in the API/DB agenda item
20:13:01 <TrevorV> Alrighty sounds good
20:13:05 <mwang2> blogan: thank you , let me take a look
20:13:05 <xgerman> ok,
20:13:13 <xgerman> #topic Summer Midcycle for LBaaS/Octavia
20:13:45 <xgerman> dougwig suggested having one and I was thinking about opening up to VPNaaS and FWaaS as well
20:14:02 <blogan> dougwig wanted to do it in boise
20:14:32 <dougwig> i wouldn't be opposed to anywhere.  certainly boise is available, but the group can decide.
20:14:52 <johnsom> Doesn't everything close at 7 in Boise?
20:14:57 <xgerman> I can offer Seattle
20:15:11 <blogan> we can offer San Antonio
20:15:14 * blogan shrugs
20:15:16 <jorgem> lol
20:15:23 <dougwig> johnsom: ha.
20:15:25 <jorgem> How about Hawaii?
20:15:33 <jorgem> :)
20:15:38 <johnsom> +2 for Hawaii
20:15:44 <johnsom> RAX is sponsoring!
20:15:45 <jorgem> We won't be able to go but you guys can!
20:15:51 <xgerman> lol
20:15:56 <xgerman> Hawaii, TX?
20:16:01 <dougwig> in boise, we can go out into the desert and shoot holes into hard drives containing v1.
20:16:05 <jorgem> Sadly, that probably exists
20:16:14 <ptoohill> +1 dougwig
20:16:17 <ptoohill> sounds fun!
20:16:24 <blogan> anyway, vpnaas, fwaas i dont mind joining, but that means there might need to be separated areas
20:16:36 <dougwig> vpn is usually just paul.  :)
20:16:59 <blogan> oh well i dont want him to come
20:17:01 <xgerman> and FWaaS is a cardboard cutout
20:17:15 <blogan> im kidding
20:17:48 <blogan> fwaas guys usually push for the bay area
20:19:09 <xgerman> well, let's start an etherpad?
20:19:19 <xgerman> and kick around some dates/venues
20:19:50 <xgerman> also if we don't do some 150 mile radius around San Antonio how many of you can come?
20:20:00 <johnsom> I will start one
20:20:07 <ptoohill> :/
20:20:13 <xgerman> #action johnsom start midcycle etherpad
20:20:23 <ptoohill> i will pay my own way to drive a day away. I would like to be part of these
20:20:29 <blogan> xgerman: hard to say right now, we shall see
20:20:47 <dougwig> mexico!
20:20:56 <xgerman> +1
20:21:00 <ptoohill> That's def within a day's drive ;)
20:21:00 <blogan> juarez?
20:21:26 <jorgem> ptoohill: so you saying stay in Texas then right?
20:21:28 <jorgem> lol
20:21:30 <rm_work> i'm down for the bay
20:21:38 <rm_work> :P
20:21:44 <ptoohill> eh, i can no-doze it
20:21:45 <dougwig> bay area attracts too many tourists, IMO.
20:21:46 <johnsom> #link https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup
20:21:56 <ptoohill> I just want to be part of these.
20:21:59 <blogan> dougwig: maybe for neutron
20:22:33 <xgerman> ok, let's move on
20:22:52 <xgerman> #topic Confirm API Server vs. Controller Worker database updates
20:22:54 <TrevorV> Wait is this conversations concerning Octavia meetup or LBaaS meetup?
20:23:10 <xgerman> Octavia, LBaaS, VPN, FW
20:23:10 <blogan> both
20:23:25 <xgerman> since the last two don't have a home
20:23:48 <johnsom> Ok, so I am about to update the controller worker code for the database changes we have talked about previously.
20:24:01 <dougwig> TrevorV: merged octavia/lbaas.
20:24:10 <johnsom> I want to make sure that the Consumer Worker as coded here: https://review.openstack.org/#/c/149789/16/octavia/controller/queue/endpoint.py
20:24:17 <johnsom> is the current plan.
20:24:46 <jorgem> johnsom: yes, the idea was to make controller workers calls
20:24:58 <johnsom> Meaning, when I am passed an ID the database was updated by something upstream of controller worker and I will reference.
20:25:31 <jorgem> negative I believe the worker is updating the database
20:25:42 <jorgem> the queue consumer is just a pass through delegator
20:25:46 <johnsom> For updates I will be passed an object representing the end game for the object and I will update after success.
20:25:48 <blogan> johnsom: the controller would update the entity to the object that was passed to it
20:26:09 <blogan> johnsom: the object passed to it will probably be a dictionary, and updated as a PATCH
20:26:25 <jorgem> on updates you are given the updates as well as what is currently in the db
20:26:39 <jorgem> oh wait err
20:27:00 <jorgem> updates are passed to you along with id
20:27:08 <jorgem> so that you can make the appropriate db changes
20:27:20 <blogan> and it will be update by patch, not full replace
20:27:21 <johnsom> So, where is the data when I get "load_balancer_id" passed in for create_loadbalancer?
20:27:38 <jorgem> in the database
20:27:39 <blogan> johnsom: yes in the db
20:27:57 <blogan> johnsom: the api will insert on create
20:27:59 <sballe> xgerman: I am here sorry my other meeting ran over :-(
20:28:24 <xgerman> cool
20:28:59 <jorgem> johnsom: The reason you need to update is that in the case of failure there is no way to go back to a good state in the db
20:29:04 <johnsom> Ok, so where I only get IDs, you guys have already handled the DB insert.  I only need to update status when I'm done and update DB for the changes passed in the update dict.
20:29:19 <jorgem> correct
20:29:25 <johnsom> Yeah, I'm good on the update.  Just wanted to make sure on the rest
20:29:39 <blogan> johnsom: yes, you can always assume the object is there, its just on updates and deletes, you will need to make the update and delete calls to the db after the entire workflow has been successful
20:29:53 <johnsom> Perfect!
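[Editor's note: the contract settled in the exchange above is: the API server inserts the row, the controller worker is handed only an ID on create and a patch dict on update, and the worker writes back to the DB only after the workflow succeeds, with PATCH rather than full-replace semantics. A schematic sketch, where the function names, statuses, and in-memory "table" are illustrative, not Octavia's actual repositories:]

```python
# Sketch of the API/worker DB contract from the discussion above. A dict
# stands in for the load_balancer table; names are illustrative only.

DB = {}

def api_create_load_balancer(lb_id, attrs):
    # API server inserts the record before queueing the create.
    DB[lb_id] = dict(attrs, provisioning_status="PENDING_CREATE")

def worker_create_load_balancer(lb_id):
    lb = DB[lb_id]            # data is already in the DB; worker only reads by ID
    # ... run the create workflow using lb ...
    lb["provisioning_status"] = "ACTIVE"   # status update only after success

def worker_update_load_balancer(lb_id, update_dict):
    # ... run the update workflow first; only on success: ...
    DB[lb_id].update(update_dict)          # PATCH semantics, not full replace
```

Usage: `api_create_load_balancer("lb1", {"name": "web"})` followed by `worker_create_load_balancer("lb1")` leaves the row ACTIVE with its original attributes intact.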
20:30:10 * TrevorV feels like johnsom didn't actually need him
20:30:18 <jorgem> if updates fail you rollback and the db doesn't get updated
20:30:33 <xgerman> what about marking ERROR?
20:30:35 <jorgem> still on the fence about putting the lb in an "ERROR" state in that case
20:30:41 <xgerman> +1
20:30:45 <jorgem> this is for 0.5 however
20:30:46 <blogan> xgerman: i think it should be marked as ERROR, but it'll still be running
20:30:56 <jorgem> for 1.0 we would want to make this more robust
20:31:06 <blogan> yeah thats how i see it
20:31:13 <xgerman> mmh, so we have ERROR = broken and ERROR=still running
20:31:13 <johnsom> TrevorV You mentioned API PUTs db and queue, so thought it might be related
20:31:28 <TrevorV> It is, ha ha, my teammates just answered for me
20:31:30 <jorgem> or perhaps a new status like "UPDATE_FAILED"
20:31:30 <TrevorV> :(
20:31:32 <blogan> well there needs to be some way to tell the user that something bad happened and their changes did not happen
20:31:39 <xgerman> jorgem +1
20:31:52 <jorgem> again this if for 0.5
20:32:01 <jorgem> so we can still live with ERROR for that
20:32:05 <blogan> not sure i like a lot of statuses
20:32:09 <johnsom> Currently I am putting ERROR in on failure
20:32:09 <ptoohill-oo> Do we not have an event feed planned?
20:32:11 <jorgem> I really want to get to something demoable ASAP
20:32:15 <TrevorV> I like putting it into error state... since error state doesn't mean they're not serving traffic, it means something failed.
20:32:16 <xgerman> well, you know how bad things have a life of their own :-)
20:32:19 <blogan> maybe we should add another field that can say what went wrong
20:32:38 <ptoohill-oo> An event feed would be better solution
20:32:41 <dougwig> a boolean field, called "towed"
20:32:48 <blogan> ptoohill-oo: that will come in the future
20:32:57 <xgerman> lol
20:32:58 <johnsom> +1 towed
20:33:02 <ptoohill-oo> So add a field for now?
20:33:16 <ptoohill-oo> Then remove later because useless?
20:33:34 <blogan> how bout just put it in ERROR for now, and improve it after the demo?
20:33:45 <ptoohill-oo> Too easy
20:33:46 <jorgem> I say leave as ERROR status since we are going to update the whole provisioning error stuff in 1.0 anyway
20:33:46 <TrevorV> I think OP_STATUS = ONLINE and PROV_STATUS = ERROR makes sense.
20:33:48 <xgerman> I think status codes are cheap
20:33:49 <johnsom> Let's get controller worker in with ERROR and revisit
20:34:00 <jorgem> +1
20:34:02 <johnsom> +1 blogan
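[Editor's note: the interim behavior agreed here is that a failed update leaves the DB record untouched (the rollback keeps the last good state) and only flips provisioning status to ERROR, while operating status stays ONLINE because, as TrevorV notes, the load balancer may still be serving traffic. A minimal sketch of that failure path; function and field names are invented for illustration:]

```python
# Sketch of the 0.5 failure handling discussed above: on workflow failure,
# keep the pre-update values (no partial write), set provisioning_status to
# ERROR, and leave operating_status alone. Names are illustrative.

def apply_update(lb, update_dict, run_workflow):
    try:
        run_workflow(lb, update_dict)
    except Exception:
        lb["provisioning_status"] = "ERROR"   # requested changes NOT applied
        return lb
    lb.update(update_dict)                    # success: patch the record
    lb["provisioning_status"] = "ACTIVE"
    return lb
```

This is exactly the "ERROR = still running" ambiguity xgerman raises: the record says ERROR while the amphora keeps serving, which is why a richer status (or event feed) is deferred to 1.0.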
20:34:18 <jorgem> again we want to have a demo in time for summit right?
20:34:19 <xgerman> but +1
20:34:24 <blogan> xgerman: they're cheap, but feature creep likes to creep
20:34:37 <johnsom> Yes, we really want a demo for summit
20:34:47 <xgerman> 6 weeks
20:34:49 <ptoohill-oo> <six weeks
20:35:00 <crc32> 4 weeks now
20:35:18 <blogan> johnsom: i still need to get with you on the network driver changes, and also the amphora driver changes
20:35:25 <jorgem> johnsom: Was the db thing the only item you had questions on?
20:35:28 <xgerman> crc32 you are taking vacation?
20:35:32 <blogan> bc that will change what you insert in the db, and what you call
20:35:40 <johnsom> jorgem, yes, that was my topic
20:35:56 <johnsom> blogan after this meeting chat?
20:35:57 <xgerman> ok, moving on?
20:36:01 <jorgem> johnsom: Cool let me know when I can review once you get the rest of your changes in
20:36:10 <xgerman> #topic Review what still needs to be completed for the end to end demo
20:36:13 <blogan> johnsom: sure or i can just bring one of the items up as another topic at the end
20:36:23 <xgerman> +1
20:36:28 <TrevorV> Still could use more eyes on the ssh_driver review
20:36:30 <TrevorV> I'll link
20:36:41 <johnsom> jorgem you can hook up now, just no guarantee it will do the right thing yet
20:36:46 <TrevorV> #link https://review.openstack.org/#/c/160964/
20:37:24 <johnsom> blogan topic at the end works too
20:37:44 <xgerman> ok, so mostly we are lacking dougwig's lbaas-octavia driver
20:37:47 <crc32> don't forget to review --> https://review.openstack.org/#/c/149079/
20:38:25 <xgerman> k
20:38:57 <xgerman> so we have compute, network, amphora (ssh + hopefully soon REST)
20:39:07 <xgerman> controller worker, queue consumer
20:39:57 <blogan> its almost end to end
20:40:21 <johnsom> We are close
20:40:24 <xgerman> +1
20:40:34 <dougwig> any chance HP can help accelerate horizon, so we can go from horizon to an amphora?
20:41:03 <xgerman> mmh, will try
20:41:12 <xgerman> also vijay might have that
20:42:30 <crc32> #link https://review.openstack.org/#/c/149079/
20:43:24 <xgerman> #topic Should Octavia use tempest or Rally for integration tests? - 2
20:43:51 <crc32> should we vote?
20:44:00 <xgerman> did you guys have a chance to look?
20:44:00 <blogan> i still believe tempest simply bc it is what openstack uses
20:44:11 <TrevorV> +1 blogan
20:44:15 <dougwig> +1
20:44:22 <xgerman> it's not about religion...
20:44:35 <crc32> +1 tempest
20:45:28 <dougwig> xgerman: do you have a compelling reason to switch?
20:45:32 <rm_work> i don't care either way REALLY, but that makes me side with tempest just because then we're not different from how the rest of Openstack operates -- does that make sense? :/
20:45:43 <xgerman> I like the UI and also tempest-lib is a mess right now
20:45:56 <blogan> its not, but i fear the day that we would have to refactor tests from rally to tempest because of some rule
20:45:58 <johnsom> Wondering if we are asking the right question...  Doesn't Rally run tempest as an action?
20:46:00 <fnaval> +1 cloudcafe
20:46:10 <rm_work> lol
20:46:12 <sballe> Do we know how many openstack projects have plans to move to rally?
20:46:14 * crc32 slaps fnaval around a bit with a large trout
20:46:20 <fnaval> oh, i mean opencafe
20:46:51 <dougwig> data points: rally was just accepted as an official openstack project.  also, i don't know of any plans for neutron itself to abandon tempest.
20:47:12 <dougwig> my gui is SSH, so i have no opinion on that point.
20:47:28 <rm_work> same, lol
20:47:32 <blogan> your gui is OS X crap
20:47:40 <ptoohill> i would be fine using rally if it was req and/or other projects are moving to it. But, on that note, theres nothing stopping anyone from doing rally and/or tempest is there?
20:48:20 <blogan> yeah similar to what we do with opencafe/cloudcafe
20:48:25 <TrevorV> ptoohill the only thing is that we should have consensus.  If we're using rally, we should use rally.  Supporting 2 different technologies to complete a test suite is a bad idea.
20:48:26 <crc32> time to vote?
20:48:33 <dougwig> well, our jobs are running vanilla devstack-gate, which is tempest aware.  but then we override it and call tox, so I guess it could run anything.
20:49:03 <dougwig> a vote is fine with me.
20:49:15 <ptoohill> bad idea?
20:49:25 <blogan> dougwig: it could, but tempest has the "seal of approval" for openstack, even with all its warts
20:49:33 <ptoohill> Tell that to all the people doing the same sort of thing now because different people have different reqs
20:50:28 <dougwig> blogan: i said we "could". that's just sharing a fact. scroll up, and see that i'm pro-tempest.
20:50:35 <TrevorV> ptoohill just remember when we had Cloud Cafe when it sprung up but we still had SOAP tests.  It was a nightmare.  I don't want something like that again.
20:50:36 <blogan> not to mention people who write tests for openstack will know tempest, they may or may not know rally, so there would be a learning curve for them
20:50:48 <blogan> dougwig: i just like to counter everything you say
20:50:50 <ptoohill> we migrated away from soap because they sucked
20:51:15 <dougwig> anything in the proximity of soap sucks.  that mess infects everything near it.
20:51:16 <ptoohill> blogan: *rally?
20:51:20 <crc32> I thought it was because it was SOAP
20:51:26 <johnsom> #link https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler
20:51:34 <blogan> they mean soapui, which was worse than soap
20:51:54 <crc32> I'm all SOAPed out just from CLB 1.0
20:51:55 <blogan> ptoohill: yeah, correction
20:52:02 <TrevorV> Either way, I don't like supporting both.  One or the other.
20:52:05 <ptoohill> that was the reason we went with opencafe stuffs. My point was that if we do something in tempest because we want to follow what other openstack projects are doing theres nothing stopping another company from writing rally tests and submitting those
20:52:36 <crc32> lets just vote. I'm wondering where every one stands.
20:52:57 <xgerman> we can always defer after the summit and see how things look there
20:53:00 <johnsom> I still think they are two different animals and not an either/or choice
20:53:01 <ptoohill> rally seems to be more geared towards performance testing with the ability to do other things
20:53:04 <xgerman> since we have nothing to test anyway
20:53:32 <fnaval> rally for performance testing
20:53:36 <blogan> xgerman: im fine with re-evaluating later when we actually start writing tests
20:53:42 <ptoohill> with the ability to do other things
20:53:44 <blogan> it is a bit premature
20:53:55 <dougwig> we can keep deferring this until german gets his UI, but ...
20:54:00 <ptoohill> lol
20:54:35 <johnsom> It does make nice pointy hair complaint pictures....
20:55:17 <xgerman> ok, let's defer after the summit with a preference for tempest tests
20:55:20 <dougwig> didn't that blog link show how to use rally to run tempest?  so can't you use rally either way?
20:55:31 <xgerman> probably
20:55:35 <blogan> yeah, so if you want to use rally, just do that?
20:55:36 <dougwig> i can't say as i feel strongly either way.
20:56:31 <xgerman> ok, I guess this is settled somewhat
20:56:46 <dougwig> anyone that wants to peek at a devstack fix, to make our job cleaner: https://review.openstack.org/#/c/171402/
20:56:55 <dougwig> oh wait, we're not in open discussion.  sorry.
20:56:56 <xgerman> #topic Rally automatically installs and configures Tempest, and automates running Tempest tests.
20:57:14 <xgerman> sorry
20:57:19 <xgerman> #topic Open Discussion
20:57:23 <blogan> lol nice topic
20:57:37 <blogan> xgerman: back to rally vs tempest, sounds like rally would just be the test runner then
20:58:24 <blogan> okay on to my topic
20:58:33 <xgerman> you have 2 minutes ;-)
20:58:47 <blogan> just like we need a post_network_plug method in the amphora driver, we need a post_vip_plug method
20:59:19 <blogan> bc when you plug a vip, you may have to do some work on the amphora (such as bringing the interface up)
20:59:33 <johnsom> Ok
20:59:36 <blogan> but plug_vip is only called once, so post_vip_plug would only be called once
20:59:47 <dougwig> worst case, post_vip is "pass".
20:59:52 <blogan> yep
21:00:05 <blogan> which is why it would not be tagged as an abstractmethod
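[Editor's note: blogan's proposal reduces to a base-class hook with a no-op default, so drivers that need no post-plug work (dougwig's "worst case, post_vip is 'pass'") simply inherit it. A simplified sketch; the base class and method bodies here are illustrative, not Octavia's actual driver interface:]

```python
# Sketch of the proposed amphora driver hook: plug_vip stays abstract,
# post_vip_plug gets a no-op default so it is NOT an abstractmethod.
# Class and return values are illustrative only.
import abc

class AmphoraDriverBase(abc.ABC):
    @abc.abstractmethod
    def plug_vip(self, amphora, vip):
        """Every driver must implement VIP plugging."""

    def post_vip_plug(self, amphora):
        # Deliberately not abstract: drivers with no post-plug work
        # (e.g. nothing to bring up on the amphora) inherit this no-op.
        pass

class SshDriver(AmphoraDriverBase):
    def plug_vip(self, amphora, vip):
        return "plugged %s on %s" % (vip, amphora)

    def post_vip_plug(self, amphora):
        # e.g. bring the newly plugged interface up inside the amphora
        return "ifup on %s" % amphora
```

Like post_network_plug, the controller worker would call post_vip_plug exactly once, right after plug_vip succeeds.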
21:00:33 <ptoohill> times up, game over
21:00:41 <xgerman> #endmeeting