16:00:18 <sdake> #startmeeting kolla
16:00:19 <openstack> Meeting started Wed Aug  3 16:00:18 2016 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:23 <openstack> The meeting name has been set to 'kolla'
16:00:29 <sdake> #topic rollcall
16:00:31 <britthouser> 0/
16:00:31 <duonghq> o/
16:00:33 <inc0_> o/
16:00:33 <pbourke> hello
16:00:40 <wirehead_> ( ˘▽˘)っ♨
16:00:43 <rhallisey> hey
16:00:48 <sdake> wirehead_ ftw :)
16:00:50 <pbourke> ha wirehead_
16:00:53 <coolsvap> o/
16:00:56 <sdake> lets make it harder ;)
16:01:14 <wirehead_> No, you are not going to make it harder by taking away my tea.
16:01:16 <akwasnie> Hi
16:01:27 <Jeffrey4l> o/
16:01:37 <sdake> gatoraid is thirst aid
16:02:40 <sdake> Daviey you here?
16:02:46 <sdake> #topic announcements
16:02:56 <inc0_> I have one
16:03:00 <sdake> shoot
16:03:12 <inc0_> We just got access to 132 hardware nodes in osic cluster
16:03:17 <inc0_> literally 10min ago
16:03:18 <sdake> hot
16:03:28 <Jeffrey4l> thats cool
16:03:31 <sdake> i did see the credentials notification but haven't yet read the email
16:03:33 <britthouser> w00t!
16:03:36 <inc0_> I've sent credentials to sdake and rhallisey, we'll waterfall them from there
16:04:13 <inc0_> because of all the schedule changes, we got 4 weeks instead of 3
16:04:14 <sdake> when waterfalling please use secure email rather than irc - tia :)
16:04:15 <inc0_> which is cool
16:05:00 <sdake> cool 4 weeks is great
16:05:12 <sdake> i dont have any specific announcements
16:05:14 <inc0_> let's talk some more about plan later in the meeting
16:05:23 <sdake> anyone else from community have any?
16:06:07 <sdake> #topic osic cluster planning
16:06:10 <inc0_> ok
16:06:20 <sdake> at midcycle we defined a whole slew of test scenarios
16:06:23 <sdake> inc0_ mind linking those?
16:06:24 <inc0_> so, for anyone not familiar with topic
16:06:29 <sdake> my bookmarks are busted
16:06:31 <inc0_> sure, 1min
16:06:52 <inc0_> we got 132 hardware nodes, strong ones, 10gig networking everywhere and 12 intel ssds each
16:06:56 <inc0_> 256 gig of RAM
16:07:05 <inc0_> so nice plaything
16:07:14 <inc0_> we need to make use of them and test whatever we can test
16:07:31 <inc0_> and then come back to osic board with results and "publications"
16:07:53 <inc0_> publication might be a whitepaper, blog, video, bugs on launchpad
16:08:09 <inc0_> anything that will prove that we used cluster for the betterment of community
16:08:26 <sdake> publications as in plural?
16:08:35 <sdake> I thought we agreed on one whitepaper with analysis
16:09:01 <inc0_> sdake, it doesn't have to be anything really
16:09:07 <inc0_> just some public outcome
16:09:17 <inc0_> even blueprints and bugs will be ok
16:09:42 <sdake> cool - well I think a whitepaper would have the biggest impact in terms of communicating that kolla can actually run at scale
16:09:57 <sdake> (unless it can't, in which case bugs and blueprints would work:)
16:10:17 <inc0_> yeah
16:10:37 * coolsvap echo
16:10:40 <inc0_> #link https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:10:46 <sdake> sup coolsvap
16:10:56 <coolsvap> echo the same :)
16:11:01 <sdake> oh duh on my part
16:11:05 <sdake> sorry - early here :)
16:11:19 <sdake> and had a drinking contest last night with my wife
16:11:20 <sdake> she won
16:11:28 <inc0_> so first thing we need is to deploy bare metal
16:11:50 <sdake> shame sean isn't around
16:12:11 <sdake> i have been following the work on bifrost and it looks close
16:12:13 <britthouser> I thought OSIC had their own baremetal
16:12:13 <inc0_> any volunteers to lead this topic?
16:12:16 <sdake> I am not sure if its ready to go or not
16:12:31 <inc0_> britthouser, what I mean is to install base operating system
16:12:35 <inc0_> we have IPMI access
16:12:37 <sdake> britthouser osic uses bifrost from what i've seen
16:13:07 <britthouser> Ok I'm with you now
16:13:10 <sdake> pbourke since you're fresh off PTO - up for the job?
16:13:37 <pbourke> sure
16:13:41 <coolsvap> i can help
16:13:42 <inc0_> we have several options on this front
16:13:47 <britthouser> OSIC uses cobbler it looks like
16:13:51 <britthouser> https://github.com/osic/ref-impl/blob/master/documents/bare_metal_provisioning.md
16:14:01 <inc0_> well, it's not osic per se
16:14:02 <sdake> britthouser cool - didn't know that
16:14:10 <inc0_> we can use whatever
16:14:21 <sdake> inc0_ lets hear the options?
16:14:25 <inc0_> this guide is good, I tested it over and over again
16:14:33 <inc0_> we can try our experimental bifrost
16:14:44 <britthouser> Let's keep the bifrost experiment for the last week
16:14:54 <inc0_> fair enough
16:15:05 <inc0_> so we can just follow this guide
16:15:06 <britthouser> use what is known to work so we can get to the big stuff first.
16:15:13 <inc0_> my personal request
16:15:16 <sdake> ya britthouser good point
16:15:29 <pbourke> sdake: sean-k-mooney may want to drive it seeing as he's done the bifrost work but happy to work with him on it. same timezones also
16:15:32 <sdake> britthouser as much as I'd like to see bifrost worked out - I think your proposal makes good sense
16:15:37 <inc0_> for anyone who follows this guide, record times and stuff and let me know if guide is lacking in any place
16:15:56 <inc0_> this guide was prepared specifically for hardware we're dealing with
16:16:05 <sdake> nice
16:16:24 <britthouser> I can help with baremetal stuff between now and Friday.  I'm traveling for work next week and probably wouldn't be available much
16:16:43 <sdake> ^^ britthouser knows a whole bunch about bare metal deployment
16:16:45 <inc0_> so once we deploy it once, we don't need to repave ideally
16:16:54 <inc0_> all glory to kolla! and her containers
16:17:16 <rhallisey> :)
16:17:18 <sdake> ya we repave the last week to sort out bifrost at scale
16:17:30 <pbourke> sounds good
16:17:32 <sdake> and give sean the reins (if he is willing) at that point
16:17:51 <britthouser> So me and you pbourke?
16:17:56 <pbourke> britthouser: lets do it
16:18:01 <inc0_> I'll send you creds in a minute
16:18:04 <britthouser> thx
16:18:07 <inc0_> pm with emails plz
16:18:15 <sdake> ya - i'd like to see sean's work on bifrost not interrupted with setup of osic if possible
16:18:23 <sdake> unless he wants to be involved
16:19:12 <inc0_> ok, that's it
16:19:14 <sdake> Jeffrey4l you need creds as well - i'll send those along
16:19:14 <inc0_> from me
16:19:18 <inc0_> I'm here to help
16:19:25 <sdake> inc0_ I think we aren't quite done with this topic
16:19:27 <inc0_> I dealt with this hardware so ping me
16:19:30 <sdake> who else needs or wants creds?
16:19:32 <Jeffrey4l> thanks.
16:19:55 <inc0_> I'll send them over to whole core team
16:20:08 <sdake> inc0_ sounds good
16:20:14 <sdake> anyone outside of core team want access?
16:20:24 <britthouser> o/
16:20:32 <sdake> we already established you britthouser  :)
16:20:37 <inc0_> britthouser, obviously;) pm me with email plz
16:21:10 <sdake> inc0_ can you link our etherpad discussion from midcycle?
16:21:30 <inc0_> already did
16:21:37 <sdake> oh
16:22:02 <sdake> can you link again - i don't see it in scrollback
16:22:16 <sdake> i keep short scrollback logs by default
16:22:28 <Jeffrey4l> https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:22:32 <sdake> thanks Jeffrey4l
16:22:39 <sdake> can folks open that up
16:23:28 <sdake> inc0_ line 22 is wrong
16:23:30 <britthouser> I'm guessing the dates have changed?
16:23:41 <britthouser> and line 1
16:23:59 <inc0_> yes
16:24:05 <inc0_> 4 weeks from today
16:24:05 <pbourke> is rally testing useful to kolla?
16:24:18 <sdake> pbourke i think that could be helpful yes
16:24:27 <inc0_> as a validation after deployment, I'd say yes
16:24:39 <inc0_> and results can be published later
16:24:41 <berendt> pbourke yes for tests after deployments
16:24:41 <sdake> so what I'd like to see is usage of the cluster as much as possible
16:24:44 <inc0_> all the pretty htmls
16:24:56 <pbourke> guess can also show performance impact of containers(if any)
16:25:10 <sdake> pbourke that would be hard to show - we would have to compare it to something
16:25:17 <inc0_> and features
16:25:20 <sdake> and comparisons just end in a shit show
16:25:21 <inc0_> like live migration and stuff
16:25:53 <sdake> who is doing #3
16:26:05 <sdake> if you're interested, please register yourself ;)
16:26:09 <Jeffrey4l> we need test with tempest and rally to generate a full report.
16:26:28 <berendt> Jeffrey4l there is a tempest integration inside rally
16:26:42 <Jeffrey4l> berendt, yep.
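[Editor's note: the rally/tempest validation discussed above is typically driven by a Rally task file. A minimal sketch of one follows; the scenario name `NovaServers.boot_and_delete_server` is a real Rally scenario, but the flavor and image names, iteration counts, and exact file format are illustrative assumptions, not from the meeting.]

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.small"},
        "image": {"name": "cirros"}
      },
      "runner": {"type": "constant", "times": 100, "concurrency": 10},
      "context": {"users": {"tenants": 2, "users_per_tenant": 2}}
    }
  ]
}
```

[Running such a file with `rally task start scale.json` and then `rally task report --out report.html` would generate the HTML results inc0_ mentions; both commands exist in Rally, though flags vary by release.]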
16:27:14 <Jeffrey4l> I can take 4, if no one take it. :)
16:27:24 <sdake> berendt you interested in credentials for osic cluster to assist with scale testing
16:28:00 <sdake> inc0_ what are the proper dates for line 23
16:28:07 <berendt> sdake assisting would be fine, but I think I do not have the time to take over whole tests
16:28:37 <sdake> berendt roger - its up to you - not sure if there was something you would like to validate or not in our 3 week window
16:28:51 <inc0_> sdake, I'll confirm and correct dates
16:29:07 <sdake> who is taking on line 5?
16:29:09 <sdake> rather item 5
16:29:22 <sdake> ceph caching layer
16:29:22 <berendt> I added "HA tests" to postinstall testing, that is something that is important for my own environment and I am doing a lot of tests there at the moment
16:29:47 <sdake> berendt if you can help there - that would be fantastic
16:30:00 <britthouser> I can work on 5 or 6 if I'm back from travel by then...just not sure what dates those will fall on
16:30:07 <sdake> berendt people always question our HA - and a definitive answer one way or another would help us either prove it works or fix what is busted
16:30:26 <sdake> britthouser roger - schedule is fluid as are who will be doing the work
16:30:39 <rhallisey> berendt, maybe team up with a deployment and test HA as one of the tests after a deplo
16:30:50 <coolsvap> britthouser: add your name :)
16:30:53 <sdake> Jeffrey4l can you take #8 (reconfigure liberty)
16:30:56 <sdake> and I'll take #9
16:31:09 <Jeffrey4l> sdake, np
16:31:09 <sdake> (after you fixed any bugs found :)
16:31:17 <inc0_> so I propose standing tmux session
16:31:17 <inc0_> on first node
16:31:22 <inc0_> so we don't really need to do this separately
16:31:23 <sdake> see what i did there - you got the hard part :)
16:31:31 <berendt> rhallisey yes makes sense, but we have to make all services HA ready first. At least MongoDB is missing at the moment and I think Elasticsearch is also not HA ready at the moment.
16:31:36 <inc0_> tmux + hangouts permanently would be great coop tool
16:31:57 <sdake> tmux is good
16:31:58 <inc0_> berendt, known gaps can be omitted
16:32:07 <sdake> inc0_ agree re gaps
16:32:17 <inc0_> so let's figure out deployment node
16:32:27 <inc0_> and use it as a gateway for everyone
16:32:37 <sdake> berendt if you can take #10 that would rock
16:32:40 <inc0_> also use tmux at all times so we won't step on each other's toes
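[Editor's note: the shared-session workflow inc0_ proposes can be sketched with tmux as below; the session name `osic` is illustrative, not from the meeting.]

```shell
# On the deployment node: create the shared session once, detached.
# "tmux new-session" fails if the session already exists, so ignore that case.
tmux new-session -d -s osic 2>/dev/null || true

# Everyone then attaches to the same session; all attached terminals
# mirror the same panes, so work is visible to the whole team.
tmux attach-session -t osic
```

[A detached shared session like this is also what makes the "work around the clock" handoff easy: a teammate in another timezone attaches and sees exactly where the last person left off.]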
16:32:45 <Jeffrey4l> 4 and 5 almost the same. may be we can work together rhallisey :D
16:32:51 <rhallisey> Jeffrey4l, sure
16:32:56 <berendt> inc0_ ok, then I volunteer to write down some basic HA tests that should be run
16:32:59 <mdnadeem_home> can anyone post the link where the task numbers are defined
16:33:04 <coolsvap> i think Jeffrey4l and I can work together more
16:33:09 <berendt> this should be #10
16:33:10 <inc0_> berendt, much appreciated
16:33:14 <Jeffrey4l> coolsvap, that's cool
16:33:24 <inc0_> let's make use of our global distribution:)
16:33:26 <britthouser> mdnadeem_home: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:33:27 <inc0_> and work around the clock
16:33:40 <mdnadeem_home> britthouser, thanks
16:33:48 <coolsvap> we also need to sort out the timezone overlap so that we make the most of it
16:33:48 <sdake> berendt writing down the test cases would be good so they are documented, but can you execute them too, or not enough time?
16:34:07 <pbourke> coolsvap: I'll help you out with 7) if that's cool
16:34:11 <inc0_> I suggest we make leaders based on geo;)
16:34:16 <coolsvap> pbourke: sure
16:34:24 <berendt> sdake I think I can do parts of the tests, but always plan a backup for me
16:34:25 <sdake> no need to have leaders per geo
16:34:31 <sdake> we all work well together
16:34:39 <pbourke> inc0_: also need somewhere good to take notes
16:34:42 <coolsvap> no leaders just we need to be aware
16:34:44 <pbourke> inc0_: etherpadd too messy?
16:34:51 <inc0_> sdake, by "leaders" I mean someone who will know what's happening and will be person to talk to
16:35:00 <berendt> And I think my location is perfect for postinstall tests, I am from Europe
16:35:10 <inc0_> berendt, same as pbourke
16:35:37 <sdake> inc0_ ok that makes sense then
16:35:52 <inc0_> I'll make this section in the etherpad
16:36:01 <Jeffrey4l> suggestion: 1. kolla deployment is fast. I think kolla can deploy the 100 nodes in 1 hour. so deployment is not a big deal.
16:36:06 <sdake> inc0_ we have US, EMEA, and APAC
16:36:37 <sdake> Jeffrey4l one thing we need to do soon is record a list of information we want to capture in each of the test scenarios
16:36:46 <sdake> we have part of that on line 55
16:36:48 <Jeffrey4l> yes.
16:36:53 <sdake> but we need more i think
16:37:42 <sdake> ok I think we have a basic plan in place
16:37:50 <sdake> lets iterate on the mailing list
16:37:54 <sdake> inc0_ care to start a thread?
16:37:55 <inc0_> let's use next few days to figure out cooperation model
16:38:03 <inc0_> will do
16:38:36 <sdake> i'll take on filling out lines 55+ today
16:38:40 <sdake> with something a little more concrete
16:38:51 <inc0_> ok
16:39:15 <inc0_> britthouser, will you be able to start deployment of stuff today?
16:39:32 <inc0_> I'll help you
16:39:47 <sdake> mdnadeem_home you will need credentials
16:39:59 <mdnadeem_home> I can take part in point 7(upgrade),
16:40:11 <mdnadeem_home> Please provide me credential :)
16:40:12 <sdake> mdnadeem_home can you send your email in a pm inc0_ so he can send you the info?
16:40:29 <mdnadeem_home> sdake, sure I will
16:41:31 <Jeffrey4l> don't the US folks have timezone UTC-x? why do you have UTC+x?
16:42:09 <britthouser> I was just trying to figure that out myself @Jeffrey4l
16:42:26 <rhallisey> ya britthouser was right
16:42:27 <sdake> ok
16:42:28 <sdake> cool
16:42:39 <sdake> well that was fantastic planning - now the tough part - following through :)
16:42:57 <pbourke> given myself and britthouser are on step #1 when exactly can we kick off
16:43:07 <sdake> pbourke asap
16:43:12 <rhallisey> inc0_, sound the horn
16:43:16 <sdake> pbourke the machines are ready to rock
16:43:23 <duonghq> Do we have any backlog for planing and working?
16:43:40 <sdake> duonghq you mean for milestone #3?
16:43:49 <inc0_> ok core team should get mail soon on gerrit email addr;)
16:43:58 <duonghq> sdake: I mean this osic
16:44:07 <sdake> duonghq its in the etherpad
16:44:18 <sdake> duonghq i've really got to move the agenda along, we are running short on time
16:44:19 <inc0_> duonghq, https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:44:27 <sdake> can we discuss further during overflow in #openstack-kolla?
16:44:37 <sdake> #topic openstack-kolla
16:44:45 <sdake> #undo
16:44:46 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x7f5bbffa3350>
16:44:47 <sdake> too many things :)
16:44:47 <rhallisey> ha
16:44:56 <sdake> #topic kolla-kubernetes
16:45:04 <rhallisey> ok so
16:45:09 <rhallisey> focus is on neutron + nova
16:45:21 <rhallisey> https://blueprints.launchpad.net/kolla-kubernetes/+spec/nova-kubernetes
16:45:25 <rhallisey> https://blueprints.launchpad.net/kolla-kubernetes/+spec/neutron-kubernetes
16:45:42 <duonghq> sdake, inc0_ I know this etherpad but I need some more details, but I will ask you later
16:45:44 <rhallisey> there are work items filled out in the neutron bp
16:45:54 <sdake> if we have any new contributors that want in pretty much on ground floor development, kolla-kubernetes is the place to contribute
16:46:10 <sdake> duonghq lets discuss in overflow - that is also logged
16:46:20 <rhallisey> I also added bps for other services
16:46:32 <wirehead_> I'll start filling out some more of the BP's with concrete places to get started.
16:46:34 <rhallisey> for example heat, cinder, etc...
16:46:49 <srwilkers_> as a new contributor, i think those opportunities would be awesome
16:46:56 <sdake> srwilkers_ welcome to the party!
16:47:03 <srwilkers_> and our organization is looking to proof out some concepts with kolla-kubernetes
16:47:05 <srwilkers_> thanks :D
16:47:17 <rhallisey> srwilkers_, https://blueprints.launchpad.net/kolla-kubernetes/+spec/heat-kubernetes
16:47:18 <britthouser> Is there a template to follow when working on these rhallisey? or is it breaking new ground?
16:47:20 <rhallisey> that's a good place to start
16:47:30 <srwilkers_> thanks rhallisey
16:47:38 <sdake> srwilkers_ rhallisey is unofficially leading the kolla-kubernetes effort
16:47:39 <rhallisey> britthouser, you can follow the existing work in place
16:47:57 <rhallisey> so mariadb, glance, keystone, etc...
16:48:01 <sdake> rhallisey you can either ping him for ideas for getting ramped up or any of the other kolla-kubernetes devs
16:48:02 <britthouser> ok thx rhallisey
16:48:04 <inc0_> srwilkers_, we'll be preparing demos during next couple of days if you're interested
16:48:09 <sdake> rather srwilkers_ ^^
16:48:18 <rhallisey> the only piece that's not in place is the ansible orchestration on the front end
16:48:23 <rhallisey> but don't worry about that yet
16:48:25 <srwilkers_> absolutely :D
16:48:26 <wirehead_> Oh, so we closed the general openstack-services blueprint.  Because things are mostly there and behaving.
16:48:48 <sdake> wirehead_ ya its typical to close a blueprint once its finished
16:48:57 <sdake> wirehead_ if something needs fine tuning another blueprint can be opened
16:49:13 <wirehead_> Okay, that's how it should work.  I was going to say that maybe we should do exactly that. :)
16:49:24 <rhallisey> to reiterate, new contributors I'd recommend picking a service in here: https://blueprints.launchpad.net/kolla-kubernetes
16:49:33 <rhallisey> and ask questions in #openstack-kolla
16:49:46 <srwilkers_> great, thanks
16:50:14 <rhallisey> ok all set sdake
16:50:22 <sdake> cool
16:50:25 <rhallisey> srwilkers_, welcome :)
16:50:47 <sdake> in terms of work items, i see we are in need of some neutron work - but some work there has merged
16:50:55 <mdnadeem_home> rhallisey, Can you please assign this bp to me https://blueprints.launchpad.net/kolla-kubernetes/+spec/rabbitmq-kubernetes
16:51:12 <rhallisey> mdnadeem_home, sure
16:51:21 <mdnadeem_home> rhallisey, Thanks
16:51:37 <sdake> mdnadeem_home one quick note - i think it makes sense to set a short deadline on the basic services to unblock others
16:51:41 <sdake> is 1 week doable for you?
16:52:00 <mdnadeem_home> sdake, yup, I'll try
16:52:14 <sdake> obligatory yoda quote inserted here :)
16:52:38 <mdnadeem_home> sdake: :)
16:52:40 <sdake> ok anything else from kolla-kubernetes cats?
16:52:41 <duonghq> Do we have any deadline on *-kubernetes?
16:52:58 <sdake> duonghq compute kit in 1-4 weeks
16:53:02 <rhallisey> not so much a deadline, but we'd like to have a demo soon
16:53:16 <duonghq> thank sdake, rhallisey
16:53:17 <rhallisey> like within ~2 weeks ideally
16:53:45 <wirehead_> Yeah.  And a lot of those services are at least mostly there.  Just things like making sure that all of the details of operation are taken care of.
16:54:10 <sdake> lets move like I type - fast with lots of mistakes
16:54:24 <sdake> and clean up the mess after we get a working compute kit
16:54:39 <mdnadeem_home> cool idea  ^^
16:54:56 <sdake> wirehead_ thoughts on that approach?
16:55:14 <wirehead_> Well, that's mostly how we've been going.
16:55:14 <wirehead_> :)
16:55:20 <sdake> good
16:55:28 <rhallisey> sounds good
16:55:38 <sdake> the code base isn't stable and we don't claim it as such
16:55:59 <sdake> while its an official deliverable in the governance repo, its on an independent release model
16:56:25 <sdake> we are expected to release on milestones and whatnot
16:56:57 <sdake> but not follow the freeze policies we have in place for kolla-ansible
16:57:17 <sdake> any questions?
16:57:48 <sdake> #topic open discussion
16:57:52 <sdake> 3 minutes
16:58:07 <sdake> - apologies planning for osic cluster took so long leaving little time for open discussion
16:58:16 <sdake> if there is significant discussion needed, we can overflow into #openstack-kolla
16:58:38 <coolsvap> +1
16:58:44 <duonghq> o/
16:59:37 <sdake> ok then i'll close meeting
16:59:38 <sdake> thanks everyone for coming
16:59:43 <duonghq> We ran out of time, thanks
16:59:46 <pbourke> bye
16:59:46 <sdake> and contributing to the osic work especially :)
16:59:49 <rhallisey> thanks
16:59:57 <sdake> and a big thank you to osic for loaning us gear
17:00:02 <mdnadeem_home> Thank you all
17:00:18 <sdake> inc0_ if you can send me contact info for who organized that, I'd like to send them a personal thank you
17:00:20 <sdake> thanks :)
17:00:22 <sdake> #endmeeting