17:59:38 <SlickNik> #startmeeting trove
17:59:39 <openstack> Meeting started Wed Mar 12 17:59:38 2014 UTC and is due to finish in 60 minutes.  The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:59:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:59:42 <openstack> The meeting name has been set to 'trove'
18:00:09 <robertmyers> o/
18:00:12 <SlickNik> #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
18:00:13 <imsplitbit> o/
18:00:15 <mat-lowery> o/
18:00:16 <kevinconway> o/
18:00:20 <grapex> o/
18:00:22 <amcrn> o/
18:00:23 <k-pom> \o
18:00:29 <cweid> o/
18:00:33 <abramley> o/
18:00:38 <cp16net> o\
18:00:46 <denis_makogon> o/
18:00:57 <SlickNik> Last meetings logs: http://eavesdrop.openstack.org/meetings/trove/2014/trove.2014-03-05-18.02.html
18:00:59 <SlickNik> #link http://eavesdrop.openstack.org/meetings/trove/2014/trove.2014-03-05-18.02.html
18:01:04 <glucas> o_
18:01:12 <amytron> o|
18:01:27 <SlickNik> #topic Action Items
18:01:44 <SlickNik> Just one from last meeting, and it's complete.
18:01:54 <SlickNik> 1. File a BP for adding datastore type/version to backup model
18:02:00 <denis_makogon> SlickNik, done
18:02:12 <SlickNik> Yup, thanks denis_makogon
18:02:24 <SlickNik> #topic Trove Guest Agent Upgrades bp follow up
18:02:30 <denis_makogon> #link https://blueprints.launchpad.net/trove/+spec/validation-on-restore-by-the-backup-strategy
18:03:00 <SlickNik> esp, around to take it away?
18:03:16 <SlickNik> #link https://blueprints.launchpad.net/trove/+spec/upgrade-guestagent
18:03:51 <denis_makogon> SlickNik, i guess it's approved
18:04:26 <esp> o/
18:04:27 <denis_makogon> SlickNik, i'd like to raise one question in terms of this topic
18:04:30 <esmute> o/
18:04:34 <amcrn> a wild esp appeared!
18:04:38 * amcrn throws a pokeball
18:04:40 <denis_makogon> esp, nice to see you
18:04:50 <juice> he's working on it
18:04:52 <ViswaV> o/
18:04:53 <SlickNik> denis_makogon: it's still in discussion.
18:04:55 <esp> amcrn: sorry, asleep at the keyboard again
18:05:04 <esp> hello denis_makogon :)
18:05:18 <grapex> So at the meetup we had a discussion on how to handle RPC calls if the guest agent was too old. I'd like to submit that we move that to a separate blueprint to follow this one
18:05:44 <grapex> It would let this one focus on making a good reference implementation of the update call
18:05:59 <esp> grapex: sounds good to me
18:06:00 <amcrn> fair enough, sounds reasonable
18:06:10 <denis_makogon> agreed
18:06:17 <SlickNik> Sounds good to me.
18:06:21 <grapex> One thing I'd like to bring up is there was talk in this blueprint of an audit trail in the database
18:06:45 <esp> shoot
18:06:53 <grapex> My concern is I think there are multiple routines in Trove which could benefit from this and it might be wise to make that into its own blueprint / feature
18:07:06 <denis_makogon> grapex, agreed
18:07:12 <grapex> so it could be re-used elsewhere. For example this could help with resize operations
18:07:19 <kevinconway> +1 for audit tables
18:07:27 <esp> grapex: yep I hear ya
18:07:35 <kevinconway> there are some sqlalchemy samples that add it to your models
18:07:48 <grapex> and who knows how useful it might be with replication in the future
18:08:02 <esp> kevinconway: cool, I'll ping ya on this
18:08:16 <amcrn> grapex: are we convinced that we can land a generic audit trail solution in a reasonable timeframe? as long as it's prioritized and doesn't land months and months after guest-upgrades does, i'm with you.
18:08:56 <grapex> amcrn: I think we could. If we made it too simple, I'm not sure there'd be major backwards compatibility concerns if we wanted to change it later
18:08:58 <esp> grapex: could we come back and implement the audit trail after the first cut of the upgrades? or is it gonna be too messy?
18:09:01 <denis_makogon> is it possible to do these tasks in parallel?
18:09:11 <amcrn> grapex: alright, sounds good
18:09:22 <grapex> Maybe "audit" log is too much- to me this is just a way in the database to give a better idea of what status some action ended up in or what happened to a long-running job
18:10:12 <kevinconway> well, there's technically nothing stopping a deployer from adding triggers to existing tables that record row state on INSERT/UPDATE
18:10:13 <grapex> And not to get too far down this rabbit hole, but this sort of ties into an ancient problem with Trove's API in that sometimes an API call which is async might fail but there aren't good methods for showing this failure
18:10:47 <esp> one thing I might throw into the conversation is that backups kind of already keep a record of backup history.
18:10:49 <grapex> kevinconway: We could, but I think the fact so much work was done for an audit trail on this upgrade feature shows a hunger exists for a more general solution
18:10:58 <grapex> esp: I agree
18:11:05 <SlickNik> We spoke of something like this when we talked about "action-events" in the past.
18:11:17 <kevinconway> #link http://docs.sqlalchemy.org/en/latest/orm/examples.html#versioning-objects
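The SQLAlchemy example kevinconway links above boils down to writing a history row alongside every update. A minimal sketch of that idea follows, with hypothetical table and model names (this is not Trove code):

    import datetime

    from sqlalchemy import (Column, DateTime, Integer, String, create_engine,
                            event)
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import Session

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        task_status = Column(String(64))

    class InstanceHistory(Base):
        __tablename__ = 'instances_history'
        id = Column(Integer, primary_key=True)
        instance_id = Column(Integer)
        task_status = Column(String(64))
        recorded_at = Column(DateTime, default=datetime.datetime.utcnow)

    @event.listens_for(Session, 'before_flush')
    def record_history(session, flush_context, new_objects):
        # Write an audit row for every modified Instance, capturing the
        # state it is being updated to plus a timestamp.
        for obj in session.dirty:
            if isinstance(obj, Instance):
                session.add(InstanceHistory(instance_id=obj.id,
                                            task_status=obj.task_status))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

A generic audit trail for resizes, upgrades, and other long-running actions could hang off the same mechanism, which is roughly what the discussion above is after.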
18:11:20 <esp> which I'm not saying is right or wrong but it lives there now :)
18:11:27 <grapex> SlickNik: Exactly, it's unfortunate that was never finished
18:12:06 <cp16net> grapex: this sounds strikingly similar to the tasks that hub_cap did
18:12:08 <grapex> Part of my own view on this is I'm not sure why an upgrade routine should have this feature more than several other existing routines in Trove. It adds a lot of complexity to add it for just this
18:12:19 <grapex> cp16net: Yeah, SlickNik was referencing hub_cap's work
18:12:36 <amcrn> so the summary is to amend the blueprint to have a single row vs. a row per update, and to start fleshing out the details of action-events/history-audit in parallel
18:12:49 <esp> yep, would be nice to talk to hub_cap, maybe he ran into some issues with this
18:12:59 <grapex> Maybe we should all agree to accepting a simple v1 implementation. If we don't change the API there won't be too much danger with it.
18:13:07 <SlickNik> I'm all for building something for auditing that's extensible, and backward compatible. Just need to be careful to not try and make it encompass _everything_. I fear down that path lies timeline and feature bloat.
18:13:12 <grapex> Then agent upgrades could simply use it
18:13:22 <grapex> SlickNik: Agreed
18:13:46 <kevinconway> SlickNik: auditing shouldn't extend much past the models
18:13:59 <grapex> So I think as a first pass let's just try naming the "audit trail's" table something else.
18:14:22 <esp> grapex: yeah I'm good with that
18:14:22 <grapex> One other note on schema is IIRC we're adding a single new table called "upgrades"
18:14:34 <esp> and try to make it generic
18:14:46 <denis_makogon> sound good
18:15:07 <denis_makogon> one question
18:15:20 <denis_makogon> how do we plan to extract agents code from trove ?
18:15:23 <grapex> Maybe we could just put the version of the guest agent into an existing table, such as heartbeats?
18:15:40 <SlickNik> kevinconway: As an example (perhaps not a very good one, though), something mentioned earlier was resize. We don't have any of that historical data in the models so it would have to be a trail on the action (API call).
18:15:41 <grapex> denis_makogon: I think hub_cap wants us to wait for Juno to do that
18:15:57 <amcrn> SlickNik: good example
18:16:19 <esp> grapex: yep, I think that would work ok.  I was looking at that last night
18:16:25 <SlickNik> I see it as a good idea, I don't think we need to hash out the design during this meeting though :)
18:16:34 <kevinconway> SlickNik: yeah, i understand the concern. Those are simply things we have to create models for. Create a ResizeAction record and keep a table that tracks the state of that record over time
18:16:39 <denis_makogon> SlickNik, agreed
18:16:45 <grapex> SlickNik: Ok, as long as those concerns are noted I don't think we should hash out everything either.
18:17:00 <kevinconway> the "versioning" of the record should happen behind the scenes
18:17:06 <grapex> kevinconway: I guess the issue there, though, is having an explosion of models.
18:17:30 <denis_makogon> how do we plan to version the guest API ?
18:17:32 <SlickNik> esp: You good with updating/modifying the bp with these concerns? Can you action it, please?
18:17:34 <esp> grapex: I have to look at agent_heartbeats a little more but I was hoping not to lose the state of a particular upgrade
18:17:34 <grapex> kevinconway: I think I see what you're saying...
18:17:52 <grapex> kevinconway esp SlickNik: Let's start a thread about an "audit" trail or whatever on the mailing list
18:18:00 <kevinconway> as an analyst i should be able to see the state of a record at any given point in time
18:18:03 <kevinconway> grapex: ok
18:18:07 <grapex> And we *will* actually try to talk about it this time. :)
18:18:14 <esp> grapex and amcrn also suggested moving guest_agent_version to the instance table
18:18:27 <grapex> esp: I think I like that idea better.
18:18:29 <grapex> amcrn: ^
18:18:30 <esp> SlickNik: yep, give me a sec
18:18:48 <SlickNik> esp: np, thanks!
18:19:06 <esp> #action esp to update trove-guest-upgrades wiki with schema changes
18:19:24 <amcrn> grapex: i think it makes sense to have it in the instance table; using the heartbeat table as the source of truth for a guestagent version seems overreaching
18:19:40 <amcrn> but i haven't spent many mental cycles putting it through the paces
18:19:59 <esp> grapex: you meant that only status goes in agent_heart_beats right?
18:20:10 <esp> version can live in the instance table
18:20:16 <denis_makogon> amcrn, agreed instance table looks like the most appropriate place
18:20:45 <esp> and 'upgrades' --> 'events'  or 'history' or 'audit trail'
18:21:10 <grapex> esp: Yeah I think version should live in the instance table, I like that more
18:21:29 <esp> grapex: cool
18:21:37 <SlickNik> esp grapex amcrn +1 to "version" in the instance table
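For reference, moving the version onto the instance table is a one-column schema change. A minimal sketch, assuming Trove's sqlalchemy-migrate based migrate_repo conventions; the column name comes from this discussion, while the type and nullability are illustrative guesses:

    import migrate.changeset  # noqa: adds create_column/drop_column to Table
    from sqlalchemy import Column, MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        instances.create_column(Column('guest_agent_version', String(255),
                                       nullable=True))

    def downgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        instances.drop_column('guest_agent_version')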
18:21:48 <SlickNik> Okay so we've got some clear next actions here.
18:22:02 <amcrn> so going into the guestagent packaging, do we have a gut feel on what the default packaging strategy should be?
18:22:08 <grapex> One last question, then I'll shut up I swear
18:22:25 <grapex> The MGMT API shows very specific info on updating the guest agent, such as where the swift file is
18:22:38 <denis_makogon> amcrn, looks like Swift could be the best option for any clouds
18:22:44 <grapex> I think long term it would be good to simplify this somehow, so that Trove would simply know where to look for the upgrade
18:23:03 <amcrn> denis_makogon: wasn't referring to the endpoint, was referring to the packaging itself; i.e. deb vs. tar vs. <whatever>
18:23:12 <grapex> So Trove could for instance query Swift's metadata and see what version it said it had, and see if that was more recent and if so make the RPC call to update the agent
18:23:21 <esp> denis_makogon: yep swift seems like it would fit for a lot of folks
18:23:32 <denis_makogon> amcrn, ah, i see, then a DEB looks like the production variant
18:23:33 <grapex> That way operators wouldn't have to figure it out on their own when they made the mgmt call- they'd just say "update it!"
18:23:55 <esp> grapex: I like this idea but was hoping to get to it in a follow up blue print
18:24:02 <grapex> esp: Ok, so long as it's a follow up
18:24:07 <grapex> I'm good. Thanks esp!
18:24:12 * notmyname is available for questions on this topic
18:24:19 <notmyname> https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ may be interesting to you too
18:24:27 <esp> grapex: auto updates would simplify things a lot
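grapex's "just say update it!" idea amounts to a version check against Swift object metadata before casting the upgrade RPC. A hedged sketch using python-swiftclient; the container name, object name, and metadata key are invented for illustration:

    from distutils.version import LooseVersion

    from swiftclient import client as swift_client

    def agent_needs_upgrade(swift_url, token, running_version):
        conn = swift_client.Connection(preauthurl=swift_url,
                                       preauthtoken=token)
        # Swift returns custom object metadata as x-object-meta-* headers
        # on a HEAD request.
        headers = conn.head_object('trove-agents', 'guestagent.tar.gz')
        available = headers.get('x-object-meta-agent-version', '0')
        return LooseVersion(available) > LooseVersion(running_version)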
18:24:40 <SlickNik> Okay, let's move on to the next topic in the interest of time.
18:24:47 <esp> thx notmyname!
18:25:12 <SlickNik> #topic [openstack-dev] [Trove] MySQL 5.6 disk-image-builder element
18:25:36 <amcrn> #link http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19036.html
18:26:07 <amcrn> now that multiple datastores and versions are supported, we're coming to a point in which we need to think about a holistic strategy
18:26:24 <amcrn> on how to handle variances in versions; whether it be handled in the diskimage, the guestagent, or a combination thereof
18:26:37 <denis_makogon> i guess there's one question related to the agent - do we plan to refactor the mysql manager and make it able to skip installing packages?
18:26:45 <amcrn> some in-flight examples include MongoDB 2.4.9 (https://review.openstack.org/#/c/77461/)
18:26:58 <amcrn> and MySQL 5.6 (https://review.openstack.org/#/c/79413/)
18:27:06 <juice> denis_makogon: is your question part of the current topic or the previous one?
18:27:13 <denis_makogon> current
18:27:16 <kevinconway> amcrn: haven't we already solved this in the agent with the packages portion of a datastore?
18:27:31 <amcrn> kevinconway: no, because the later versions of datastores don't have prebuilt packages
18:27:34 <denis_makogon> as said on the ML, there's no official 5.6 mysql package
18:27:52 <grapex> Well could we generalize how the guest agent pulls down the things it installs?
18:28:07 <grapex> Or have it call something else that already exists and is general enough to do that?
18:28:25 <amcrn> well, to quickly summarize: if you take a look at that MySQL 5.6 review, it becomes clear that the installation for bleeding edge versions is fairly involved
18:28:33 <grapex> Right now we're having the guest agent call a package system like apt-get. If we let it call some other smart thing as a strategy it could get us there.
18:28:37 <kevinconway> is there something wrong with rolling a package for custom code and using that?
18:28:38 <amcrn> so, if one provider wants it on cloud-init, but another wants it in the diskimage-builder
18:28:45 <amcrn> you've effectively duplicated code in two different places
18:29:03 <esp> I think we are gonna use pip
18:29:20 <esp> I was looking at pip wheels but not sure it's mature
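grapex's "some other smart thing as a strategy" and esp's pip mention both fit a pluggable install strategy in the guest agent. A toy sketch (class and function names invented, not an agreed design):

    import subprocess

    class AptInstall(object):
        def install(self, packages):
            subprocess.check_call(['apt-get', 'install', '-y'] + list(packages))

    class PipInstall(object):
        def install(self, packages):
            subprocess.check_call(['pip', 'install'] + list(packages))

    def get_install_strategy(name):
        # The strategy name could come from the datastore version config.
        return {'apt': AptInstall, 'pip': PipInstall}[name]()

    # e.g. get_install_strategy('apt').install(['mysql-server-5.5'])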
18:29:29 <grapex> amcrn: I think there's a divide here though between operators who will want to customize Trove to run images that they've totally perfected. Since we allow datastores to be associated with images we already have that ability.
18:29:42 <amcrn> grapex: correct
18:29:53 <amcrn> what's unclear right now, is how we're going to accept patches
18:29:57 <grapex> However Trove should also be able to build a single image thats vanilla and spin up a variety of stuff so its easier to dev on and for the less initiated to play with
18:30:09 <grapex> So I wonder if maybe mySQL 5.6 is just too bleeding edge
18:30:15 <kevinconway> amcrn: does every datastore need to go in kickstart?
18:30:20 <amcrn> kevinconway: no
18:30:42 <amcrn> grapex: i'm not sure whether we support a version should be predicated on whether the install is complicated
18:31:00 <grapex> Maybe it's ok to say that if people want to deploy a datastore version like 5.6 they can, but they need to build their own images or packages rather than have it happen in trove-integration.
18:31:09 <SlickNik> grapex: I'm wondering the same thing.
18:31:22 <SlickNik> Do we only accept patches to the elements once datastores have proper package versions in appropriate upstream repos?
18:31:34 <amcrn> that's the underlying question here
18:31:38 <kevinconway> is the question about patches to trove or redstack:kick-start?
18:31:41 <ViswaV> trove-integration provides the framework/ability to automatically test things in an easy way.
18:31:43 <grapex> amcrn: I think we should support what we can
18:31:45 <denis_makogon> grapex, it's totally ok, so i would suggest looking at heat-jeos
18:31:57 <grapex> but in terms of plugging into trove-integration CI, there should be a limit on how complex it is
18:31:57 <amcrn> let's forget redstack kickstart
18:32:13 <kevinconway> amcrn: so what's the question then?
18:32:27 <imsplitbit> so I have a question
18:32:31 <kevinconway> i'm confused about the problem
18:32:33 <SlickNik> kevinconway: patches to trove image elements, I think - which currently live in trove-integration
18:32:52 <imsplitbit> is it too difficult to build a mysql .deb package or is it that we don't want to build one since oracle isn't making one for us?
18:33:09 <imsplitbit> it's certainly not bleeding edge
18:33:27 <imsplitbit> I just want to be sure I understand why we're not including that
18:33:53 <amcrn> so, let me re-explain the problem statement: we have 3 different ways to support a datastore install, the question is: what is the accepted set of guidelines to determine whether that gets merged publicly?
18:34:04 <amcrn> is a diskimage only install patch-set acceptable?
18:34:14 <amcrn> if yes, how are we going to handle versioning
18:34:18 <grapex> amcrn: Good summary
18:34:48 <amcrn> will we have dozens and dozens of /elements/ubuntu-mysql-X.X/ folders as time continues
18:34:50 <kevinconway> amcrn: so is the question what gets merged into Trove or the dev environment setup in redstack?
18:35:21 <amcrn> kevinconway: nothing to do with redstack, it's about what can get merged in here https://github.com/openstack/trove-integration/tree/master/scripts/files/elements
18:35:32 <grapex> kevinconway: And another question is should what goes into Trove differ from what can work in redstack? Because if it won't work in the later we won't be able to test it at which point who knows if it works
18:35:35 <SlickNik> So I think we should accept upstream only those datastores that we can test upstream.
18:36:03 <grapex> So in CI, how many images will we need to make for Trove?
18:36:15 <denis_makogon> grapex, a lot
18:36:18 <amcrn> so are we saying that unless it's redstack kick-startable, it shouldn't be merged into https://github.com/openstack/trove-integration/tree/master/scripts/files/elements ?
18:36:32 <denis_makogon> grapex, one per each datastore and N for each version
18:36:41 <grapex> Heh
18:36:50 <kevinconway> is there a facility in redstack for pointing at an image and using it instead of kick-starting one?
18:37:16 <kevinconway> so those with needs beyond the normal kick-start can prepare an image any way and attach it to the datastore
18:37:24 <juice> kevinconway: there is nothing truly special about the image
18:37:35 <SlickNik> amcrn: I think it's stronger than that. Unless it's redstack startable, and has int-tests actually running against the image / datastore.
18:37:45 <juice> it's just that there are assumptions made by trove about what is included in the image
18:37:56 <amcrn> SlickNik: i'm fine with that. so now what's the criteria for when a manager should be forked vs. shared?
18:38:35 <amcrn> ex: you could modify the mongodb manager to deal with setParameter because that's in 2.4.9, but not in 2.0.X, and still share that manager across both versions
18:38:35 <juice> amcrn: are we talking about images or managers
18:38:41 <amcrn> it's all interconnected
18:39:05 <denis_makogon> amcrn, i guess if a datastore has a public package that can be pulled by apt-get/yum, the manager could be shared
18:39:12 <denis_makogon> amcrn, if not - forked
18:39:14 <juice> amcrn: true but as you pointed out a manager could handle a wide range of images (point releases of the underlying datastore)
18:39:41 <amcrn> juice: right, but if you were clever (and insane) enough you could create a god mysql manager class that handles 5.1, 5.5, and 5.6
18:39:56 <amcrn> so for example: should the manager always be force forked on a major version? etc.
18:40:15 <juice> amcrn: I am :)  I think having the manager restrict the version is a reasonable approach
18:40:32 <SlickNik> amcrn: That's an interesting point. I'd anticipate you'd have logic in your manager to deal with nuances like those. And when it starts to get unwieldy, you'd probably have to fork.
18:40:55 <SlickNik> amcrn: Haven't thought this through fully, so all seat of the pants here.
18:40:58 <grapex> What's "force forked?"
18:41:09 <juice> amcrn: if a manager was proven to work with a given point release, it would be added to the acceptable range
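juice's suggestion of the manager restricting versions could look like the following sketch (purely illustrative, not Trove's actual manager API): the manager declares the versions it has been vetted against and refuses anything outside that set.

    class MySqlManager(object):
        # Extended as point releases are proven to work with this manager.
        SUPPORTED_VERSIONS = ('5.1', '5.5')

        def prepare(self, datastore_version, *args, **kwargs):
            major_minor = '.'.join(datastore_version.split('.')[:2])
            if major_minor not in self.SUPPORTED_VERSIONS:
                raise RuntimeError(
                    'MySQL %s is not handled by this manager; fork or '
                    'extend the manager instead.' % datastore_version)
            # ...continue with the normal prepare flow...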
18:41:13 <kevinconway> git fork --force?
18:41:24 <amcrn> grapex: turn of phrase to mean force them to create a new manager
18:41:53 <SlickNik> kevinconway: Knowing git, that's probably already a thing that exists :P
18:41:56 <grapex> amcrn: In general I think different major versions will necessitate new managers, but let's not make that a rule.
18:42:03 <denis_makogon> amcrn, at least each new manager for a new version could inherit from a base manager
18:42:15 <grapex> So here's a question- much of this image builder stuff just seems to be bash scripts that run to initialize an image
18:42:19 <denis_makogon> example: mysql-5.5 and 5.6
18:42:21 <kevinconway> denis_makogon: i don't like that idea
18:42:32 <kevinconway> object oriented programming means we need to program with more objects
18:42:42 <grapex> So we have two routes- the bash scripts initialize the image while it's being built, so their results are baked in, or the guest agent installs a package later
18:42:55 <SlickNik> Okay, 2 more minutes on this, and we need to move on in the interest of time :)
18:42:59 <grapex> what if we enabled the guest agent to run these scripts to help with dev efforts?
18:43:13 <grapex> Then it becomes a choice you make later, whether to bake the image for a specific type or not
18:43:14 <SlickNik> But there's clearly some more thinking that needs to be done around this.
18:43:40 <amcrn> grapex: i'd need to see more specifics on how you'd accomplish that, but that sounds promising
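A rough sketch of grapex's idea (an assumption, not an agreed design): the guest agent executes the same diskimage-builder element scripts at runtime, so the install can happen either at image-bake time or on a live guest. The element path here is hypothetical.

    import os
    import subprocess

    def run_element_scripts(element_dir='/opt/trove/elements/ubuntu-mysql'):
        # diskimage-builder elements are ordered bash scripts; run them in
        # lexical order, as diskimage-builder itself would during the build.
        for script in sorted(os.listdir(element_dir)):
            path = os.path.join(element_dir, script)
            if os.access(path, os.X_OK):
                subprocess.check_call(['bash', path])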
18:44:08 <grapex> amcrn: It seems like we've made bash our deployment technology of choice. :p
18:44:17 <amcrn> anyway, please review the existing patch-sets plus the mailing list thread. i can easily see, without governance, multiple versions getting tacked onto a single manager; that becomes unwieldy and brittle to change, etc.
18:44:18 <grapex> Which is ok for some use cases I guess
18:44:36 <grapex> amcrn: Ok. I'll try to review this soon
18:44:49 <amcrn> it's a bit difficult to convey the concerns via this medium
18:45:24 <SlickNik> amcrn: Agreed. Let's discuss this outside this meeting, perhaps offline.
18:45:45 <denis_makogon> so, moving on
18:45:46 <denis_makogon> ?
18:45:48 <kevinconway> SlickNik: as in we all disconnect and type into empty terminals?
18:46:16 <grapex> kevinconway: That sounds like a metaphor for some kind of philosophical journey into the self.
18:46:42 <SlickNik> #action SlickNik set up something to continue "supported datastore versions" discussion.
18:47:20 <SlickNik> #topic Open Discussion
18:47:24 <denis_makogon> wow
18:47:28 <kevinconway> i have a topic
18:47:29 <denis_makogon> another topic
18:47:39 <denis_makogon> SlickNik, Point in time recovery [denis_makogon]
18:47:42 <kevinconway> if you were part of the key signing party then you need to actually go sign all the keys
18:47:49 <grapex> kevinconway: LOL!
18:47:54 <kevinconway> i'm looking at you EVERYONE
18:48:02 <grapex> I was just there to commit identity theft.
18:48:04 <amcrn> kevinconway: i only looked at your id to steal your identity
18:48:16 <grapex> amcrn: Lol! Beat you to the joke. :)
18:48:24 <denis_makogon> guys, two topics were skipped
18:48:25 * amcrn shakes his fist @ grapex
18:48:28 <amcrn> ;)
18:49:00 <SlickNik> denis_makogon: Sorry I didn't refresh the page since last night.
18:49:14 <denis_makogon> SlickNik, ok
18:49:22 <SlickNik> #topic Point in time recovery
18:49:29 <denis_makogon> it's mine
18:49:32 <denis_makogon> Trove is able to perform instance restoration (a whole new instance, from scratch) from a previously stored backup in remote storage (OpenStack Swift, Amazon AWS S3, etc). From an administrator/regular-user perspective, Trove should be able to perform point-in-time recovery. Basically it's almost the same as restoring a new instance, but the difference between restore (in terms of Trove) and recovery is huge.
18:49:32 <denis_makogon> Restore gives the ability to spin up a new instance from a backup (as mentioned earlier), but recovery gives the ability to restore an already running instance from a backup. For a start, Trove would be able to recover/restore a running instance from a full backup.
18:49:50 <denis_makogon> i've sent the ML  about that
18:49:55 <denis_makogon> #link https://wiki.openstack.org/wiki/Trove/PointInTimeRecovery
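To make the restore-vs-recovery distinction concrete: today's API creates a brand-new instance from a backup, while the proposal adds in-place recovery. The first call below follows python-troveclient's restorePoint pattern (client construction details are approximate); the second is the hypothetical addition and does not exist.

    from troveclient.v1 import client

    # Client construction is approximate; real deployments pass auth details.
    trove = client.Client('user', 'password', project_id='tenant',
                          auth_url='http://keystone:5000/v2.0')

    # Existing behavior: restore -> a whole new instance from a backup.
    restored = trove.instances.create(
        'restored-instance', flavor_id='7', volume={'size': 2},
        restorePoint={'backupRef': 'BACKUP-UUID'})

    # Proposed (hypothetical, does not exist yet): recover in place.
    # trove.instances.recover('INSTANCE-UUID', backup_ref='BACKUP-UUID')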
18:50:15 <kevinconway> so i was confused. you seem to be combining two ideas denis_makogon
18:50:29 <denis_makogon> so the main idea is to be able to restore you instance from given backup at any time
18:50:30 <kevinconway> one is point-in-time recover and the other is recovering into a live instance
18:51:12 <SlickNik> kevinconway: +1
18:51:26 <denis_makogon> googling says that restoring an instance from a backup (taken at some point in time) at any time is called point-in-time recovery
18:51:47 <kevinconway> yes but google runs a cloud service. they are competitors!
18:51:53 <kevinconway> you cannot trust  them
18:51:53 <SlickNik> denis_makogon: I don't think people have had time to review this since it was just added this morning.
18:52:12 <SnowDust> kevinconway :  what what what
18:52:21 <amcrn> +1 SlickNik
18:53:07 <SlickNik> I just want to re-iterate that we should add items to discuss at the Wednesday Meeting on or before Monday that week.
18:53:28 <grapex> SlickNik: +1
18:53:32 <cweid> SlickNik: +1
18:53:34 <denis_makogon> ok
18:53:41 <SlickNik> That way folks have some time to read the related bps.
18:54:11 <denis_makogon> but the ML post with the link to the BP and wiki page was sent like 2 weeks ago or less
18:54:12 <SnowDust> denis_makogon: i support the idea of restoring the same running instance, restoring to new instances is what we have right now
18:54:32 <SlickNik> denis_makogon: I think the next item was added a bit late, too. Let's get to it next meeting, as we won't be able to give it the full discussion time otherwise.
18:54:40 <denis_makogon> i see
18:54:54 <denis_makogon> then lets skip them all
18:55:19 <amcrn> denis_makogon: right, but you updated the content significantly since the monday discussion, and then added it to the meeting on tuesday night. we'll make sure to give it a look-see this week hopefully, appreciate the updates.
18:55:21 <denis_makogon> can we jump to the open discussion
18:55:37 <SlickNik> #topic Open Discussion
18:55:40 <denis_makogon> https://bugs.launchpad.net/trove/+bug/1291516
18:55:43 <denis_makogon> #link https://bugs.launchpad.net/trove/+bug/1291516
18:55:43 <kevinconway> so yeah key signing party
18:55:48 <kevinconway> sign my keys
18:55:50 <cp16net> what will be allowed to go into icehouse now? just bugs?
18:56:02 <grapex> kevinconway: This is about that federal investigation isn't it?
18:56:03 <denis_makogon> i think the broken tests are a more significant problem
18:56:10 <cweid> i like pi.
18:56:13 <juice> kevinconway: do you see my signature on your key
18:56:15 <esp> kevinconway: lol
18:56:18 <SlickNik> kevinconway: I've been signing on the key slacking :) (or was it the other way around?)
18:56:25 <juice> kevinconway: you may have to refresh your key from the keyserver
18:57:02 <SlickNik> cp16net: Only bugs unless you ask for a Feature Freeze exception for your bp.
18:57:02 <kevinconway> juice: i see yours, but there were quite a few of us there
18:57:08 <kevinconway> hub_cap hasn't even signed yet
18:57:25 <SlickNik> denis_makogon: that bug is a dupe
18:57:32 <cp16net> SlickNik: ok
18:57:32 <denis_makogon> SlickNik, it's new
18:57:34 <grapex> Pretty awesome their sandbox can detect when Python code is trying to execute commands
18:57:35 <SlickNik> denis_makogon: I recently submitted a fix for the issue.
18:57:38 <denis_makogon> SlickNik, gate failing again
18:57:56 <juice> kevinconway: I am getting the feeling that most folks are unclear about the process
18:57:59 <cp16net> i think hub_cap told me i might be able to get the config params in the db with a ffe
18:58:08 <grapex> cp16net: ffe?
18:58:10 <denis_makogon> SlickNik, take a look at date and submission date
18:58:14 <kevinconway> juice: i'll send out a bash script
18:58:16 <amytron> feature freeze exception?
18:58:19 <cp16net> but he gave me a deadline of tuesday and i missed it by a day
18:58:20 <cp16net> :-/
18:58:21 <grapex> Ah
18:58:22 <kevinconway> you just enter your password and it will sign my key for you
18:58:24 <cp16net> yes
18:58:29 <SlickNik> denis_makogon: Where are you seeing it fail?
18:58:34 <kevinconway> don't read the source though
18:58:36 <grapex> kevinconway: Ok. It's "1234567890"
18:58:44 <denis_makogon> SlickNik, i added links at bug description
18:58:45 <cp16net> i'll ping him and see what the dealio is
18:58:48 <SlickNik> denis_makogon: You might have to rebase your patch to make sure the fix is in.
18:59:08 <denis_makogon> SlickNik, == Agenda for Mar. 12 ==
18:59:08 <denis_makogon> * Trove Guest Agent Upgrades bp follow up
18:59:08 <denis_makogon> ** https://blueprints.launchpad.net/trove/+spec/upgrade-guestagent
18:59:08 <denis_makogon> * "[openstack-dev] [Trove] MySQL 5.6 disk-image-builder element" [http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19036.html]
18:59:08 <denis_makogon> ** Discuss how to strategically handle multiple versions of a datastore in terms of diskimage-builder elements and/or guestagent managers.
18:59:11 <denis_makogon> ** https://review.openstack.org/#/c/77461/ (MongoDB 2.4.9)
18:59:13 <denis_makogon> ** https://review.openstack.org/#/c/79413/ (MySQL 5.6)
18:59:15 <grapex> Thanks SlickNik!
18:59:15 <denis_makogon> ** https://review.openstack.org/#/c/72804/ (trove-integration changes to allow building different version of a datastore by setting ENV vars appropriately before kick-starting)
18:59:18 <denis_makogon> * Point in time recovery [denis_makogon]
18:59:22 <denis_makogon> ** https://wiki.openstack.org/wiki/Trove/PointInTimeRecovery
18:59:24 <denis_makogon> * Data volume snapshot [denis_makogon]
18:59:26 <denis_makogon> ** https://wiki.openstack.org/wiki/Trove/volume-data-snapshot-design
18:59:28 <denis_makogon> oh, sorry
18:59:29 <SlickNik> denis_makogon: Let's take this offline.
18:59:30 <denis_makogon> the patch is already up-to-date
18:59:31 <cp16net> wow
18:59:34 <denis_makogon> ok
18:59:38 <SlickNik> in #openstack-trove
18:59:53 <denis_makogon> done
18:59:55 <SlickNik> Thanks , that's all folks!
18:59:59 <SlickNik> #endmeeting