20:00:28 <hub_cap> #startmeeting trove
20:00:29 <openstack> Meeting started Wed Sep 11 20:00:28 2013 UTC and is due to finish in 60 minutes.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:33 <openstack> The meeting name has been set to 'trove'
20:00:43 <SlickNik> here
20:00:46 <dmakogon_> o/
20:00:47 <imsplitbit> o/
20:00:55 <robertmyers> o/
20:00:56 <juice> o/
20:00:58 <cp16net> o^/
20:01:00 <vipul> \o
20:01:00 <kevinconway> 7o7
20:01:01 <isviridov_> o/
20:01:07 <hub_cap> can someone ask grapex to join? ;)
20:01:13 <pdmars> o/
20:01:18 <hub_cap> kevinconway: are you walking like an egyptian?
20:01:28 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:28 <dmakogon_> lol
20:01:33 <hub_cap> #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-09-04-20.02.html
20:01:33 <imsplitbit> I don't think I can get my arms to make that shape
20:01:46 <hub_cap> im thinking thats his hands
20:01:52 <hub_cap> anyhooooooo
20:02:01 <hub_cap> #topic action items
20:02:02 <imsplitbit> lets go
20:02:28 <hub_cap> cp16net: did you add the db model to schedule_task?
20:02:49 <cp16net> sorry i've dropped the ball and i am picking up the pieces now
20:02:54 <hub_cap> smh
20:02:54 <amytron> grapex is coming back. sorry - i distracted him with something else
20:03:04 <hub_cap> oh amytron!
20:03:04 <cp16net> i plan to update that by the end of the week.
20:03:09 <cp16net> for real this time.
20:03:10 <amytron> my bad hub_cap
20:03:12 <hub_cap> ok re-add plz cp16net
20:03:13 <cp16net> its on record.
20:03:22 <hub_cap> cp16net: it was on record last wk!!!! ;)
20:03:26 <cp16net> #action cp16net add the db model to schedule_task
20:03:31 <cp16net> shhhhhh
20:03:34 <cp16net> :-P
20:03:38 <SlickNik> cp16net: ping in #openstack-trove when you add it.
20:03:44 <cp16net> word
20:03:49 <SlickNik> sweet
20:03:50 <hub_cap> so,  consistent JSON notation across API?
20:03:58 <hub_cap> im on that, and i assume it has not happened?
20:04:05 <grapex> o/
20:04:11 <SlickNik> hub_cap: we talked about it.
20:04:19 <vipul> it's a touchy subject
20:04:20 <juice> underscores underscores underscores
20:04:26 <kevinconway> we decided hungarian notation is best
20:04:28 <hub_cap> ok cool. i know there was some talk of it
20:04:29 <imsplitbit> I *think* we said underscores not camel case
20:04:29 <juice> in my best Ballmer impersonation
20:04:33 <juice> which isn't that good
20:04:47 <juice> kevinconway: nice
20:04:50 <vipul> motion to agree on underscores?
20:04:52 <SlickNik> basically majority were of the opinion that we should stick with underscores.
20:04:58 <hub_cap> sSize kevinconway?
20:05:07 <hub_cap> err
20:05:11 <hub_cap> iSize
20:05:17 <isviridov_> underscores
20:05:22 <imsplitbit> underscores
20:05:27 <robertmyers> underscores
20:05:30 <SlickNik> Since that was what the python guidelines suggest and what we've been mostly using anyway (least deviation from current API)
20:05:30 <dmakogon_> +1 to underscore
20:05:33 <kevinconway> are the underscores for json only or are we reforming xml to match?
20:05:43 <vipul> you know this meetbot does voting well :)
20:06:03 <imsplitbit> I think we said new stuff would use underscores
20:06:15 <hub_cap> kevinconway: id think it would be best to use _ for both
20:06:15 <SlickNik> yah this is for new API.
20:06:16 <imsplitbit> then v2 would be full refactor yes?
20:06:28 <hub_cap> for sure
20:06:51 <imsplitbit> good deal
20:06:53 <hub_cap> ok so moving on?
20:07:04 <dmakogon_> yes
20:07:05 <SlickNik> yeah, it's looking like underscores to me. Move on.
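The underscore convention agreed above can be illustrated with a small sketch (hypothetical helper, not part of the trove codebase) that normalizes camelCase API keys to the snake_case style the team settled on:

```python
import re

def camel_to_snake(name):
    """Convert a camelCase JSON key to the agreed underscore style."""
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def normalize_keys(payload):
    """Normalize every top-level key of a JSON payload; e.g. what a
    v2 full-refactor pass over the API bodies might do."""
    return {camel_to_snake(k): v for k, v in payload.items()}
```

Keys already in underscore form pass through unchanged, so this is safe to apply to mixed payloads.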
20:07:12 <hub_cap> #topic reducing pep8 ignores
20:07:16 <vipul> i'll update the api next wiki
20:07:16 <hub_cap> dmakogon_: thats u eh?
20:07:17 <dmakogon_> mine
20:07:25 <kevinconway> +1
20:07:28 <dmakogon_> https://review.openstack.org/#/c/46063/
20:07:39 <dmakogon_> i've started to do that
20:07:48 <hub_cap> so off the bat
20:07:58 <hub_cap> are there any rules we _want_ to ignore?
20:07:59 <dmakogon_> so now trove has meaningless ignored rules
20:08:08 <hub_cap> as in, we have consensus that XXXX is stupid and we will continue to ignore
20:08:09 <dmakogon_> my idea to reduce most of them
20:08:25 <SlickNik> not any that I'm aware of.
20:08:31 <cp16net> i dont know what all those numbers mean by heart so i cant say
20:08:38 <dmakogon_> no, we need to reduce the amount of ignored rules
20:08:43 <kevinconway> i have a gist somewhere of all the codes we ignore and what they mean
20:08:46 <kevinconway> but i lost it
20:08:58 <hub_cap> kevinconway: whats your github username?
20:09:02 <hub_cap> u can just keep pouring thru your old gists
20:09:06 <dmakogon_> http://pep8.readthedocs.org/en/latest/intro.html#error-codes
20:09:06 <hub_cap> itd be nice to have
20:09:14 <kevinconway> it was an openstack gist
20:09:19 <SlickNik> Most of the ignored rules are in place cause we have code that would break the tests if they weren't. (And we wanted to gate on the tests)
20:09:33 <hub_cap> LOL kevinconway ya u have like 3 gists
20:09:39 <SlickNik> kevinconway: That gist'd be golden if you can find it.
20:09:46 <dmakogon_> SlickNik: ok
20:09:47 <hub_cap> ok so then assuming we dont have any that are just stupid
20:09:53 <hub_cap> im all for dmakogon_ tackling these
20:10:00 <hub_cap> plz look @ his review
20:10:03 <hub_cap> great work dmakogon_
20:10:07 <dmakogon_> thanks)
20:10:09 <hub_cap> and NO MERGES till tomorrow!!!!!!!!!!!!!!!!!!!!!!!!!!
20:10:09 <cp16net> yeah lookin good
20:10:11 <SlickNik> Go for it dmakogon_
20:10:22 <dmakogon_> SlickNik: thanks
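For reference, the ignore list being reduced lives in the flake8 section of trove's tox.ini; the change is roughly of this shape (the codes shown here are illustrative, see the linked review for the actual set):

```ini
[flake8]
# before: a long blanket list of ignored checks
# ignore = E125,E126,E128,E711,E712,H301,H404
# after: fix the violations in the code, then keep only the checks
# the team has consensus on ignoring
ignore = E125
show-source = True
exclude = .venv,.tox,dist,doc,*egg
```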
20:10:24 <hub_cap> SlickNik: vipul grapex NO MERGES!!!! tomorrow rc1 is cut
20:10:30 <vipul> fine :P
20:10:33 <hub_cap> just gonna randomly say that all day today
20:10:41 <dmakogon_> lol, mad hub_cap)))
20:10:48 <hub_cap> ANGRY HUBCAP
20:10:53 <SlickNik> heh
20:10:54 <isviridov_> will it be separate branch?
20:10:56 <grapex> hub_cap: So, we still just look at stuff and +2 it right? Because there's a ton of pull requests
20:11:03 <hub_cap> #topic project/branch status
20:11:11 <hub_cap> so isviridov_ (and all)
20:11:20 <hub_cap> when RC1 is cut, icehouse will be "open for development"
20:11:28 <hub_cap> which means RC1 will be cut to a branch
20:11:38 <hub_cap> and trunk will be open for merging icehouse stuff
20:11:53 <dmakogon_> so, for now HavanaRC is closed
20:11:58 <juice> this is new
20:11:58 <hub_cap> if we find critical havana bugs, ill have to manually backport them to RC1
20:11:58 <vipul> so if there is a critical issue in RC1, does gerrit support pushing to that branch?
20:12:07 <hub_cap> vipul: yes and no
20:12:17 <hub_cap> ill have to do manual work but it goes thru gerrit yes
20:12:47 <vipul> so checkout rc1 branch, push to gerrit.. and that's it?
20:12:58 <hub_cap> its outlined somewhere
20:13:00 <hub_cap> its not a ton of work
20:13:01 <dmakogon_> vipul: in common - yes
20:13:06 <SlickNik> #link https://wiki.openstack.org/wiki/StableBranch
20:13:07 <hub_cap> but if we have 50 bugs, then its a lot of work lol
20:13:19 <hub_cap> thats why they request the stable projects take ~1 wk to do RC1
20:13:19 <vipul> kk
20:13:20 <juice> thanks SlickNik
20:13:28 <hub_cap> if you look @ other RC1's there are like 50+ bugs on them
20:13:44 <dmakogon_> hub_cap: vipul: SlickNik: we should re-target all BPs and Bugs
20:14:02 <hub_cap> we should retarget realistically
20:14:05 <vipul> Yea I don't think we have an icehouse target
20:14:08 <hub_cap> if we think itll make i1 then yes
20:14:10 <hub_cap> we do vipul
20:14:10 <vipul> so those should be created first
20:14:15 <hub_cap> icehouse-1 is out
20:14:15 <vipul> nvm then
20:14:27 <dmakogon_> hub_cap: vipul: SlickNik:yes, that is why we have 'future' target
20:14:28 <hub_cap> https://launchpad.net/trove/+milestone/icehouse-1
20:14:34 <dmakogon_> hub_cap: vipul: SlickNik:not even ice house
20:14:41 <vipul> thanks hub_cap
20:14:41 <hub_cap> looks like our friends @ mirantis are being awesome
20:14:49 <hub_cap> and have already retargetted their bps
20:15:24 <hub_cap> so if we think its going to be ok for i1, bring it up and we can retarget
20:15:24 <dmakogon_> hub_cap: you are welcome)
20:15:34 <hub_cap> thx dmakogon_!
20:15:44 <hub_cap> so tomorrow we will be in merge/rebase/hell
20:15:51 <dmakogon_> i think we could retarget our stuff manually
20:15:58 <hub_cap> the one with the highest bribe will get merged first
20:15:59 <dmakogon_> by ourselves
20:16:05 <hub_cap> dmakogon_: i think so if you own it
20:16:11 <hub_cap> but only retarget if its realistic
20:16:21 <hub_cap> i dont want to have to keep moving stuff out. id rather move more stuff in
20:16:43 <dmakogon_> hub_cap: vipul: SlickNik: for the next meeting we could collect a pack of BPs and bugs for retargeting
20:16:46 <SlickNik> agreed hub_cap
20:17:00 <hub_cap> ya we should take a pass dmakogon_, and talk to the people they are assigned to, if any
20:17:07 <hub_cap> and try to move realistic items into i1
20:17:19 <hub_cap> i had to do a lot of "move this to h2, move this to h3" this time around :)
20:17:23 <vipul> Sure, i'll try to look at them between now an then
20:17:59 <SlickNik> dmakogon_: We can talk about candidates, but ultimately it's up to the people doing them to target them to the correct milestone...
20:18:19 <hub_cap> yes
20:18:31 <hub_cap> the project is dependent on people getting paid by companies to do the work :)
20:18:33 <dmakogon_> hub_cap: vipul: SlickNik: i think we should organise our future work this way - stay up-to-date with new BPs and Bugs
20:18:42 <hub_cap> yup
20:18:45 <vipul> unless you are an independent like me
20:18:54 <hub_cap> vipul: :)
20:18:57 <cp16net> and me
20:18:59 <cp16net> heh
20:19:07 <hub_cap> put the "nice to haves" in "trove next", and the "waaaaay future" in "trove future"
20:19:12 <hub_cap> ok good to move on?
20:19:13 <dmakogon_> that will lead us to focused development and planning
20:19:16 <hub_cap> weve got a packed meeting
20:19:20 <hub_cap> yes agreed dmakogon_
20:19:27 <SlickNik> yup, agreed
20:19:36 <SlickNik> good to move on
20:19:40 <hub_cap> #topic secgroup perms/ownership
20:19:41 <dmakogon_> yes
20:19:47 <hub_cap> #link https://review.openstack.org/#/c/44380/
20:19:51 <dmakogon_> which one ?
20:19:51 <hub_cap> amcrn around?
20:19:54 <amcrn> yes
20:19:58 <hub_cap> gogogo
20:20:08 <amcrn> just would like folks to read the review above, review the gist, and get a consensus
20:20:17 <amcrn> done :)
20:20:28 <hub_cap> hah
20:20:31 <hub_cap> not off the hook that easy
20:20:37 <hub_cap> anything in particular you'd like to bring up
20:20:42 <hub_cap> that may cause wrenches to be thrown
20:20:44 <SlickNik> amcrn I'm all for this. But I had a question.
20:20:48 <vipul> kinda agree with the comment there.  the only port that should be available to open is the port the instance is listening on
20:20:52 <amcrn> SlickNik: sure, what's up?
20:20:58 <SlickNik> Same as vipuls
20:21:04 <SlickNik> now that he's mentioned it.
20:21:05 <dmakogon_> about review - looks good
20:21:09 <SlickNik> (beat me to it)
20:21:18 <amcrn> There's a lot more involved than just the port, please review the gist.
20:21:19 <hub_cap> well wait
20:21:30 <hub_cap> imsplitbit: can tell u that MANY companies have compliance issues
20:21:34 <hub_cap> and will NOT run mysql on 3306
20:21:35 <hub_cap> period
20:21:38 <hub_cap> PERIOD
20:21:42 <amcrn> exactly, hence the point in my gist :)
20:21:44 <imsplitbit> correct
20:21:53 <SlickNik> Should we allow them to open up ports that the service of service_type is clearly not running on?
20:21:56 <amcrn> https://gist.github.com/amcrn/14501657c5a5e9ee78dd
20:22:05 <hub_cap> amcrn: :P
20:22:07 <amcrn> SlickNik: Make that configurable, see ^^
20:22:22 <dmakogon_> hub_cap: vipul: SlickNik: until trove has parameter groups, we should leave this question as it is
20:22:36 <SlickNik> Well, the guest agent doesn't support running it on a diff port.
20:22:44 <hub_cap> can u explain dmakogon_
20:22:48 <hub_cap> SlickNik: yet
20:23:07 <dmakogon_> hub_cap: vipul: SlickNik: we still use default ports for services
20:23:19 <hub_cap> sure but when configuration edits drops
20:23:20 <amcrn> with parameter groups you could pass a different port, etc.
20:23:21 <amcrn> agreed
20:23:22 <SlickNik> And even if it did, you could update the port in the config to the port it was using.
20:23:22 <hub_cap> all bets could be off
20:23:44 <amcrn> the suggestion in the gist will accomodate future changes like that
20:23:48 <vipul> But the port info would still be tied to that instance
20:23:54 <dmakogon_> hub_cap: vipul: SlickNik: when the (Amazon-style) parameter groups implementation is done, the user could specify optional ports to be opened
20:24:05 <hub_cap> so maybe we do this
20:24:13 <vipul> instead of allowing the user to specify the port in the API, we should figure out a way to pick it from the instance
20:24:15 <hub_cap> the api says "open the port the guest is listening on"
20:24:21 <vipul> regardless of what port it is
20:24:23 <hub_cap> and the guest says "what port am i on", ok, "open it"
20:24:40 <SlickNik> hub_cap / vipul ++
20:24:49 <dmakogon_> vipul: but ports are specified by configs
20:24:58 <hub_cap> sure dmakogon_ and the guest will know that
20:25:13 <vipul> ask the guest.. or push that info to the guest if need be
20:25:21 <vipul> (on provision)
20:25:27 <SlickNik> dmakogon_: it could be different for different service types.
20:25:28 <dmakogon_> vipul: configs of each service, and parameter groups allow users to configure the service as they want it
20:25:50 <SlickNik> So only good way to do that is to check with the guest which will know its service type and have that info
20:25:56 <vipul> Sure, but I think what we are saying is the Guest knows what that config is
20:26:08 <dmakogon_> vipul: +1 for pushing specific data to GA
20:26:11 <SlickNik> well not _only_ good way, but _a_ good way.
20:26:41 <hub_cap> amcrn: does that meet your needs?
20:26:50 <hub_cap> just have a "open mah port"
20:26:56 <SlickNik> Suggestion
20:26:59 <dmakogon_> SlickNik: you mean to store networking confs of each service type ?
20:27:40 <SlickNik> dmakogon_: not networking confs, but probably ports that the service communicates on.
20:27:49 <SlickNik> It's possible that it could be multiple ports.
20:27:56 <SlickNik> depending on service type.
20:27:56 <amcrn> i fail to see how the suggestion doesn't fit the model I described
20:28:32 <dmakogon_> hub_cap: vipul: SlickNik: amcrn: my idea to (for now) keep it as it is, then, when params. group will come, specify ports in it
20:28:57 <amcrn> again, dmakogon_, your suggestion doesn't preclude what I've described in the gist
20:29:03 <amcrn> you still need a way of specifying what ports are eligible
20:29:07 <amcrn> for which service_type
20:29:19 <dmakogon_> hub_cap: vipul: SlickNik: and trove should create sec. rules for the default port (as SlickNik said) and custom ones specified by the user
20:29:47 <vipul> amcrn: I think the one contention is the user is still the one specifying the ports
20:29:51 <dmakogon_> amcrn: we need mechanism to register services
20:29:58 <hub_cap> id really prefer to make this easier for the user by having a "create security group"
20:30:00 <amcrn> vipul: not true, the gist explains how that can be turned off
20:30:07 <dmakogon_> in it will be defined default port
20:30:12 <dmakogon_> for each service
20:30:13 <hub_cap> and then letting the app figure out what ports are open
20:30:15 <hub_cap> the guest knows
20:30:22 <hub_cap> the guest owns the service
20:30:31 <hub_cap> and the logic to find out what port its on
20:30:32 <vipul> amcrn: apologize if it's already in there.. first time looking at it
20:30:46 <hub_cap> amcrn: lol do u expect us to do our homework?!?!?!!?!?!
20:30:49 <hub_cap> ;)
20:30:54 <amcrn> :|
20:31:01 <dmakogon_> hub_cap: vipul: SlickNik: amcrn: GA could store ports as meta-data on instance
20:31:09 <SlickNik> amcrn: reading the gist, I think the idea is exactly the same
20:31:37 <hub_cap> ok lets do this
20:31:40 <hub_cap> table this discussion
20:31:42 <amcrn> I think we're all in consensus, we just don't know it yet ;)
20:31:42 <hub_cap> and read the gist
20:31:47 <hub_cap> and talk about it tomorrow
20:31:48 <SlickNik> except that the onus is on the guest to do the port config vs the api / taskmgr (the way it is today)
20:31:51 <hub_cap> since amcrn knwos this stuff
20:31:58 <SlickNik> amcrn: I suspect you are correct :)
20:31:58 <hub_cap> i believe him
20:32:01 <hub_cap> he says we are in consensus
20:32:06 <vipul> the main thing is we should keep this a managed service
20:32:08 <hub_cap> still, table it
20:32:13 <vipul> and not have to let the user figure all this out
20:32:15 * hub_cap picks up the gavel
20:32:22 <hub_cap> we have too much on agenda
20:32:27 <vipul> next..
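A rough sketch of the "ask the guest" approach converged on above (all names here are hypothetical; real trove code would route this over RPC to the guest agent):

```python
# Default ports per service_type; in practice the guest reads these from
# the service's own config, so non-default ports (e.g. compliance
# deployments that refuse to run mysql on 3306) are picked up automatically.
SERVICE_PORTS = {"mysql": [3306], "mongodb": [27017]}

def guest_listening_ports(service_type, config_overrides=None):
    """Guest side: report the ports the service is actually listening on."""
    ports = list(SERVICE_PORTS.get(service_type, []))
    if config_overrides and "port" in config_overrides:
        ports = [int(config_overrides["port"])]
    return ports

def build_secgroup_rules(service_type, config_overrides=None):
    """API/taskmanager side: open exactly what the guest reports."""
    return [{"protocol": "tcp", "from_port": p, "to_port": p}
            for p in guest_listening_ports(service_type, config_overrides)]
```

This keeps it a managed service: the user never specifies ports in the API; the guest, which owns the service and its config, is the source of truth.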
20:32:38 <hub_cap> #topic MongoDB support
20:32:39 <dmakogon_> moving on
20:32:52 <dmakogon_> i thinks we
20:32:55 <SlickNik> aha
20:33:01 <SlickNik> so this came up last week
20:33:02 <dmakogon_> i think we're done with it
20:33:04 <hub_cap> i like it! lets do mongo
20:33:06 <hub_cap> good
20:33:08 <hub_cap> moving on
20:33:09 <amcrn> lol
20:33:16 <isviridov_> )
20:33:28 <dmakogon_> main goal for Icehouse - global refactoring
20:33:37 <dmakogon_> for pluggability
20:33:38 <hub_cap> refactoring everything into globals?
20:33:42 <SlickNik> lol
20:33:43 <hub_cap> hehehe i kid i kid!!!!
20:33:44 <kevinconway> +1 hub_cap
20:33:44 <robertmyers> haha
20:33:48 <dmakogon_> lol
20:33:51 <pdmars> yes
20:33:58 <hub_cap> period
20:34:00 <hub_cap> but yes i agree we need refactoring first
20:34:01 <hub_cap> PERIOD
20:34:04 <isviridov_> global HEATing
20:34:10 <hub_cap> lol nice
20:34:12 <hub_cap> #topic virgo
20:34:17 <hub_cap> so who came up w/ this doozie?
20:34:21 <hub_cap> and dont say me
20:34:23 <hub_cap> cuz i didnt
20:34:27 <imsplitbit> you did
20:34:28 <hub_cap> i mightve tried to plant seeds
20:34:30 <dmakogon_> so many IRONY(c)
20:34:30 <cp16net> hub_cap
20:34:32 <cp16net> :-P
20:34:34 <SlickNik> isviridov: wanted some clarification
20:34:36 <hub_cap> but i didnt water them and they died
20:34:39 <SlickNik> again from last week.
20:34:43 <hub_cap> ok whats up
20:35:16 <isviridov_> hub_cap, came from trove channel, seems from you. Please comment if it is on roadmap or something
20:35:20 <dmakogon_> anything new to discuss ?
20:35:21 <hub_cap> fwiw, i think we need to fundamentally rule out a python guest before we move to virgo
20:35:29 <hub_cap> isviridov_ shhhhhhhhh
20:35:33 <SlickNik> lol
20:35:51 <isviridov_> hub_cap, i'm all silence
20:35:53 <hub_cap> http://summit.openstack.org/cfp/details/53
20:35:55 <cp16net> or any other lang
20:35:55 <kevinconway> do we only need to rule out the standard python impl?
20:36:08 <kevinconway> or can jython be suggested?
20:36:25 <grapex> kevinconway: I like it
20:36:28 <dmakogon_> i would like to stay pure python
20:36:28 <hub_cap> wait you arent running this whole infra in jython kevinconway?
20:36:35 <vipul> what happened to the my little pony one
20:36:36 <hub_cap> dmakogon_: lets pray hes joking
20:36:42 <kevinconway> no, i'm running iron python
20:36:44 <hub_cap> vipul: lady rainicorn still lives
20:36:45 <SlickNik> my little pony one?
20:36:46 <dmakogon_> hub: ok :)
20:36:50 <hub_cap> :)
20:36:55 <SlickNik> oh, that was adventure time methinks
20:36:56 <kevinconway> so i can use my vs2014
20:37:07 <hub_cap> i need to put some feelers out between projects that use guests
20:37:12 <hub_cap> see if there is enough overlap between them
20:37:23 <hub_cap> or if we should just say screw it and make our own (outside trove) guest
20:37:40 <imsplitbit> I think the community already has plenty of those
20:37:47 <grapex> My feeling is a guest by definition should be very small, so the basis of a common implementation is less important than a common interface.
20:37:50 <dmakogon_> hub_cap: vipul: SlickNik: do not forget about OpenStack TC acceptance
20:37:51 <imsplitbit> I'd love to see a semi-unified agent
20:37:58 <hub_cap> dmakogon_: fair point
20:38:11 <isviridov_> dmakogon_, +1
20:38:23 <kevinconway> what's the clever name we would use for an openstack guest project?
20:38:28 <kevinconway> i think that's the most important part
20:38:32 <imsplitbit> or just a common reference spec at the very least
20:38:42 <hub_cap> kevinconway: obvi
20:38:45 <hub_cap> imsplitbit: agreed
20:38:50 <vipul> imsplitbit: i think a common implementation that works in production for all should be the goal
20:38:59 <hub_cap> ok so lets move on. ther is not much to do here for us in this meeting
20:39:02 <hub_cap> we are all singing in concert
20:39:04 <juice> vipul +1
20:39:06 <grapex> If kevinconway or someone wants to use a guest on a Windows OS maybe they'll even use a more VS2014-centric language to code it. :)
20:39:08 <hub_cap> sans kevinconway and his iron jython
20:39:14 <imsplitbit> but grapex +100000 I think python isn't necessarily the best way to implement a guest agent, esp for really small vms/containers
20:39:20 <dmakogon_> kevinconway: burglar
20:39:26 <SlickNik> +1 to moving on.
20:39:30 <imsplitbit> +
20:39:32 <imsplitbit> ++1
20:39:37 <hub_cap> #topic trove refactoring
20:39:38 <isviridov_> move on, thx hub_cap
20:39:41 <hub_cap> dmakogon_: go go go
20:39:42 <cp16net> 1++
20:39:45 <grapex> imsplitbit: And once you get there, I think enforcing an implementation only seems like a good idea if you've never encountered other people's specific problems.
20:39:47 <hub_cap> are these all from last wk?
20:39:54 <dmakogon_> yes
20:40:08 <imsplitbit> hub_cap: yep
20:40:14 <dmakogon_> we are already doing some approved stuff
20:40:21 <dmakogon_> so, nothing new
20:40:33 <dmakogon_> i think we could move on
20:40:49 <SlickNik> sounds good
20:40:54 <hub_cap> ok good
20:41:02 <hub_cap> ill clean this up (the meeting stuff)
20:41:03 <hub_cap> for next wk
20:41:13 <hub_cap> #topic moving guest into a new repo
20:41:17 <dmakogon_> hub_cap: sounds good
20:41:28 <hub_cap> so we talked a while ago abotu moving the guest
20:41:32 <hub_cap> this kinda goes back to topic -1
20:41:42 <SlickNik> Yes, we discussed it in the past.
20:41:44 <hub_cap> i think its still a good idea (tm)
20:41:46 <dmakogon_> hub_cap: vipul: SlickNik: i have something to say
20:41:50 <hub_cap> dmakogon_: plz do
20:41:55 <SlickNik> go on
20:42:06 <SlickNik> I also think it's a good idea.
20:42:17 <dmakogon_> hub_cap: vipul: SlickNik: we could create a separate setup for the GA package
20:42:37 <dmakogon_> hub_cap: vipul: SlickNik: 2 setup.py and 2 setup.cfg in one repo
20:42:46 <SlickNik> dmakogon_: In order to do that we need to move it out into a separate package
20:43:02 <dmakogon_> hub_cap: vipul: SlickNik: but still in one repo
20:43:03 <SlickNik> dmakogon_: openstack ci doesn't support 2 setup.py's in the same package.
20:43:06 <hub_cap> 2 setup.py in same repo is not good i believe from mordred...
20:43:11 <vipul> we could.. just wouldn't work too well with the tooling CI gives us
20:43:15 * hub_cap waits for mordred to magically appear
20:43:18 <mordred> correct
20:43:23 <isviridov_> With different repos, active development will we have double reviews in gerrit?
20:43:24 <SlickNik> never fails.
20:43:28 * hub_cap waits for mordred to magically vanish
20:43:32 <amcrn> didn't even have to say his name three times
20:43:38 <dmakogon_> lol
20:43:45 <SlickNik> heh
20:43:50 <hub_cap> HAHAHAHA that was awesome
20:43:52 <mordred> 2 setup.py and 2 setup.cfg in one repo completely unsupported
20:43:58 <hub_cap> good timing mordred
20:44:02 <mordred> one repo == one release artifact
20:44:17 <hub_cap> == one happy ci team
20:44:23 <mordred> now - what are you talking about? :)
20:44:29 <isviridov_> mordred, double reviews in gerrit?
20:44:34 <hub_cap> mezcal
20:44:38 <mordred> isviridov_: why would you do double reviews?
20:44:49 <hub_cap> lets not rabbit hole this
20:44:53 <hub_cap> its not supported
20:44:58 <hub_cap> i trust thre are good reasons
20:45:00 <dmakogon_> hub_cap: vipul: SlickNik: mordred: we could use the second setup on the instance, while for tests CI builds trove with the original setup.py
20:45:10 <hub_cap> mordred: and team are super smart, so lets go w. them on this
20:45:38 <vipul> I don't know if the arguments for keeping them in the same repo are > keeping them separate
20:45:41 <isviridov_> mordred, If we are adding feature and changing core, and changing guest agent...
20:45:41 <mordred> hub_cap: basically, python tooling is not good enough for this, it's confusing, it's hard to reason about, it breaks everything, and it will cause your children to lose all of their limbs
20:45:44 <hub_cap> like a client, it can be a separate artifact. i think its a perfectly sane idea
20:46:06 <hub_cap> oh god mordred but my sons only 10mo old! hes barely understanding his limbs
20:46:06 <mordred> hub_cap: ++
20:46:15 <mordred> hub_cap: yup. they'll fall off
20:46:20 <hub_cap> shiiiiiiii
20:46:20 <dmakogon_> hub_cap +1 for separate artifact
20:46:24 <vipul> isviridov_: sure, there may be those cases, but you have the same issue with troveclient when you change the trove API
20:46:30 <hub_cap> so this was more about the timeline
20:46:34 <hub_cap> not whether to do it
20:46:38 <hub_cap> we decided to do it already :)
20:46:41 <hub_cap> like 6mo ago
20:46:41 * mordred goes back into his hole
20:46:50 <hub_cap> thx mordred, say hi to the other rabbits
20:46:56 <mordred> squirrel!
20:47:00 <dmakogon_> and to Alice
20:47:03 <hub_cap> HA
20:47:10 <SlickNik> thanks mordred, say hi to clarkb for me :)
20:47:10 <isviridov_> vipul, means not so critical. Thx
20:47:35 <hub_cap> ok so my gift to the group
20:47:40 <SlickNik> So, this kinda ties in with the trove conductor work that datsun180b's been working on
20:47:47 <kevinconway> hub_cap: bbq?
20:47:49 <hub_cap> ill make sure i get a better timeline for this
20:47:58 <hub_cap> hehe kevinconway nice
20:47:59 <vipul> we should shoot for icehouse
20:48:04 <dmakogon_> btw, what about conductor ?
20:48:07 <hub_cap> we need to shoot for i1 vipul
20:48:15 <vipul> even better
20:48:18 <hub_cap> thats when we get to rip shit out like a mad scientist
20:48:31 <SlickNik> all the big changes happen in 1 :)
20:48:33 <hub_cap> i need a scalpel and duct tape STAT
20:48:43 <hub_cap> do we need to discuss versions / service_types
20:48:48 <hub_cap> or
20:48:48 <hub_cap> guest_agent service registry
20:48:48 <dmakogon_> no
20:48:52 <vipul> no
20:48:58 <hub_cap> again i dont knwo if these are last wks
20:48:59 <SlickNik> both were remnants of last week
20:49:01 <dmakogon_> last one interesting
20:49:03 <hub_cap> pdmars: configuration management  ya?
20:49:06 <hub_cap> #topic configuration management
20:49:06 <pdmars> yes
20:49:11 <hub_cap> #link https://wiki.openstack.org/wiki/Trove/Configurations
20:49:12 <pdmars> so i picked this bp up
20:49:12 <SlickNik> go for it pdmars
20:49:23 <pdmars> mostly makes sense, but i have some questions
20:49:35 <pdmars> specifically about handling dynamic vs non-dynamic mysql vars
20:50:00 <pdmars> dynamic don't require a restart, non-dynamic do
20:50:48 <pdmars> i was thinking that when a configuration group is attached to an instance, it should set the dynamic vars and inform the user they need to restart to set non-dynamic
20:50:54 <pdmars> do others have thoughts/opinions on that?
20:51:17 <hub_cap> i think thats a valid point... maybe having a message upon group creation too?
20:51:18 <vipul> what about returning whether one is dynamic or not in an API call
20:51:25 <pdmars> hub_cap: sure
20:51:30 <vipul> so list of all available options
20:51:36 <pdmars> yeah, so that info is in /configurations/parameters
20:51:37 <hub_cap> vipul: there is some api around "avail options"
20:51:41 <vipul> ok
20:51:42 <amcrn> pdmars: are you thinking of a state verification (like seen in a resize), force sending of ack=true, or ?
20:51:43 <hub_cap> ya what pdmars says
20:51:45 <pdmars> it lists what you can change, what the bounds are, and if it's dynamic
20:52:02 <SlickNik> pdmars: that is good
20:52:04 <vipul> so the question is whether to rquire the user to issue a restart vs. us doing it
20:52:13 <hub_cap> amcrn: i think that maint windows would be good for restart but thats a ways off
20:52:21 <amcrn> sorry, i think i worded it poorly
20:52:23 <pdmars> hub_cap: amcrn: right
20:52:24 <amcrn> i was asking what vipul is
20:52:33 <pdmars> vipul: yes
20:52:37 <hub_cap> hes american
20:52:39 <hub_cap> sry
20:52:41 <hub_cap> Merican
20:52:43 <vipul> lol
20:52:46 <amcrn> MURICA
20:52:52 <SlickNik> heh
20:53:02 <hub_cap> ha
20:53:18 <pdmars> heh
20:53:23 <dmakogon_> let's keep this question for the next meeting
20:53:33 <hub_cap> well we have 7 min left, i think its ok to discuss
20:53:34 <amcrn> you could send in a 'force-apply' in the request stating that if a restart-applicable parameter is included, that the instance be restarted
20:53:35 <vipul> i think that's fair.. we shouldn't restart it
20:53:35 <SlickNik> So I prefer informing the user that a restart is needed.
20:53:41 <pdmars> i'd like some initial feedback today
20:53:43 <pdmars> if possible
20:53:44 <dmakogon_> and. i think, we need some specs for it
20:53:45 <hub_cap> amcrn: i think that will naturally happy
20:53:47 <hub_cap> *happen
20:53:54 <hub_cap> dmakogon_: see the link
20:54:00 <cp16net> amcrn: i was thinking something similar
20:54:02 <SlickNik> amcrn: I like that approach (force-apply)
20:54:03 <kevinconway> maybe we should patch mysql to never need restart
20:54:04 <hub_cap> when i change subject
20:54:08 <hub_cap> guys
20:54:15 <dmakogon_> hub_cap: sorry missed that
20:54:16 <hub_cap> if the config is edited
20:54:20 <hub_cap> and a restart happens
20:54:22 <hub_cap> you get a force apply
20:54:23 <hub_cap> period
20:54:28 <hub_cap> resizes restart an instance
20:54:30 <vipul> kevinconway: maybe _you_ should :P
20:54:34 <hub_cap> this will naturally happen
20:54:36 <pdmars> hub_cap: right
20:54:38 <dmakogon_> hub_cap: i mean specs for force applying
20:54:39 <cp16net> like an api to /apply gets a response do what it needs to do
20:54:40 <hub_cap> we dont need a flag for this
20:54:50 <pdmars> hub_cap: agreed
20:54:53 <cp16net> either restart or say you're good
20:55:04 <hub_cap> on no i dont think we need a response
20:55:08 <hub_cap> thatll be complicated
20:55:21 <hub_cap> just warn "these wont go into effect till you /instances/id/actions {restart}
20:55:27 <SlickNik> so if a user doesn't want a restart, he doesn't change the config?
20:55:28 <hub_cap> "
20:55:29 <hub_cap> w/o the newline
20:55:31 <pdmars> hub_cap: yes
20:55:36 <dmakogon_> hub_cap: the algorithm for this operation is not as clear as it should be, has someone got a perfect vision of this task?
20:55:40 <hub_cap> SlickNik: the dynamics get changed
20:55:44 <cp16net> maybe i was mistaken... so you can push >1 config at a time?
20:55:46 <hub_cap> mysql in to the instance, change what you can
20:55:49 <cp16net> or not?
20:56:01 <hub_cap> the non dynamics, which require restart, will just sit there in the config file
20:56:02 <kevinconway> 1 config or 1 config option?
20:56:04 <hub_cap> *config
20:56:22 <hub_cap> dmakogon_: thats why pdmars is asking these questions :)
20:56:30 <amcrn> I see what you're saying hub_cap, that works
20:56:51 <hub_cap> if you submit things that are dynamic and non dynamic (is there a better word for that)
20:56:55 <hub_cap> the dynamic ones get applied
20:56:56 <amcrn> as long as on an instance-get, it has some sort of flag saying outstanding changes haven't been applied
20:56:59 <hub_cap> and both get written
20:57:01 <amcrn> otherwise you might forget
20:57:08 <hub_cap> amcrn: thats a fair point
20:57:11 <pdmars> amcrn: hmm
20:57:11 <hub_cap> restart_pending
20:57:12 <vipul> are we storing these configs?
20:57:17 <hub_cap> vipul: yes
20:57:25 <vipul> besides just on the my.cnf right?
20:57:25 <cp16net> maybe yeah a different state change
20:57:26 <pdmars> vipul: we write them to an overrides.cnf file for mysql
20:57:31 <hub_cap> thatll be useful for maint windows (restart_pending or whatever)
20:57:41 <cp16net> yeah i like that
20:57:44 <hub_cap> maint window can see if a user needs to restart
20:57:44 <vipul> i see use cases where you may want to spin up a new instance with an existing config
20:57:53 <hub_cap> that is valid too vipul
20:57:57 <kevinconway> would this be a use case for: 205 Reset Content
20:57:58 <pdmars> vipul: also can do
20:57:59 <hub_cap> its in the spec i believe
20:58:02 <vipul> cool
20:58:05 <pdmars> hub_cap: yes
20:58:22 <pdmars> so the other major question is the unassign ... same basic idea
20:58:23 <SlickNik> I'm fine with the warn as long as we have the restart_pending
20:58:37 <pdmars> SlickNik: ok, i think that's reasonable
20:58:44 <hub_cap> +1 SlickNik amcrn et al
20:58:45 <SlickNik> else the user has no way to know that only half his config is applied upon inspection of the instance
20:58:47 <amcrn> pdmars: after we fully read your wiki, where would you like comments/questions posted? Is there a bug/bp, or in-line on the wiki?
20:59:00 <robertmyers> why not just always warn they need to restart?
20:59:06 <hub_cap> i wish bp had a better way to do this amcrn :(
20:59:11 <amcrn> word
20:59:15 <pdmars> in line, or in irc to me/the group, whatever works for you
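The dynamic/non-dynamic split pdmars describes could look roughly like this (illustrative only; the real set of dynamic variables comes from the /configurations/parameters listing, and the staged values get written to overrides.cnf):

```python
# Illustrative subset: MySQL vars that can be changed without a restart.
DYNAMIC_VARS = {"max_connections", "wait_timeout"}

def apply_configuration(overrides):
    """Apply dynamic vars immediately; stage the rest and flag a restart."""
    applied, staged = {}, {}
    for name, value in overrides.items():
        target = applied if name in DYNAMIC_VARS else staged
        target[name] = value
    # Everything is persisted to the config file either way; staged values
    # only take effect after the user issues a restart instance action.
    return {"applied": applied, "staged": staged,
            "restart_pending": bool(staged)}
```

An instance-get would then surface restart_pending, so the user can't forget that only half the config is live.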
20:59:23 <hub_cap> ok meeting is over. we didnt get to finish
20:59:23 <amcrn> pdmars: ok, cool.
20:59:32 <hub_cap> lets move this discussion back to #openstack-trove
20:59:39 <hub_cap> to get consensus on last item for pdmars
20:59:45 <hub_cap> and to have open discussion
20:59:46 <pdmars> ok
20:59:50 <SlickNik> sounds good
20:59:56 <dmakogon_> ok
20:59:59 <hub_cap> any final words of wisdom?
21:00:08 <SlickNik> I want to make a suggestion
21:00:11 <dmakogon_> Love for everyone !
21:00:18 <SlickNik> the wiki page for the meeting.
21:00:26 <cp16net> <3
21:00:30 <SlickNik> Can we have a _previous_ meeting section
21:00:31 <dmakogon_> <3
21:00:39 <grapex> <3
21:00:40 <hub_cap> not a bad idea SlickNik
21:00:42 <dmakogon_> SlickNik +1
21:00:44 <hub_cap> HUGS
21:00:45 <hub_cap> #endmeeting