21:05:04 <hub_cap> #startmeeting reddwarf
21:05:05 <openstack> Meeting started Tue May  7 21:05:04 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:05:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:05:08 <openstack> The meeting name has been set to 'reddwarf'
21:05:11 <hub_cap> lets put it on the agenda
21:05:17 <cp16net> sure
21:05:19 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting
21:05:22 <hub_cap> someone edit plz
21:05:33 <SlickNik> I'm on it
21:05:34 <cp16net> got it
21:05:47 <hub_cap> #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-30-21.03.html
21:05:51 <hub_cap> so lets start with
21:05:57 <datsun180b> perfect timing
21:05:59 <hub_cap> #topic update to action items
21:06:12 <hub_cap> wow there are virtually none
21:06:29 <hub_cap> the only real tangible one is SlickNik
21:06:30 <vipul> we didn't record many last time
21:06:42 <hub_cap> did u reset the executors on jenkins
21:06:46 <SlickNik> yes
21:07:07 <SlickNik> We now have 4 executors, and they seem to be doing well.
21:07:11 <SlickNik> So we're good with that.
21:07:38 <hub_cap> sweet
21:07:47 <hub_cap> ok well done w/ Action items
21:07:50 <hub_cap> HA
21:07:56 <cp16net> i've still seen some random failures but running again seems to be fine
21:08:06 <hub_cap> #topic TC
21:08:12 <hub_cap> well we are incubated
21:08:17 <hub_cap> WOO
21:08:18 <cp16net> +1
21:08:20 <SlickNik> good job all!
21:08:48 <vipul> nice work hub_cap fielding
21:08:58 <vipul> so what did we have to agree to?
21:09:06 <hub_cap> a lifetime of solitude
21:09:08 <hub_cap> answering emails
21:09:20 <hub_cap> and fighting about flags
21:09:25 <vipul> figured
21:09:34 <hub_cap> :P
21:09:43 <hub_cap> #topic OpenVZ
21:09:47 <hub_cap> imsplitbit: wanna talk about the ovz status?
21:09:57 <SlickNik> No listening to the sound of nails on chalkboard, at least...
21:10:31 <hub_cap> hah
21:11:07 <hub_cap> imsplitbit: is on his way
21:11:07 <imsplitbit> ovz status
21:11:09 <imsplitbit> ...
21:11:14 <hub_cap> nice
21:11:16 <imsplitbit> oh the packages?
21:11:17 <hub_cap> its in braille?
21:11:22 <SlickNik> S?
21:11:31 <SlickNik> morse code :)
21:11:50 <imsplitbit> theres a ppa up, I'm still trying to figure out how to get it to build for multiple archs and distros
21:12:01 <datsun180b> No, "..." is JRPG for "Saying nothing"
21:12:34 <vipul> jrpg?
21:12:45 <datsun180b> video games
21:12:47 <imsplitbit> anyone with extensive debian packaging background is more than welcome to send me a chat and help cause this is my first time
21:12:48 <hub_cap> ha SlickNik
21:12:49 <esp1> lol
21:12:52 <SlickNik> Japanese role playing game.
21:13:16 <datsun180b> sorry to distract
21:13:17 <vipul> ugh didn't know there were categories of rpg's
21:13:20 <hub_cap> imsplitbit: ...
21:13:24 <imsplitbit> ppa is here
21:13:28 <hub_cap> link it
21:13:29 <imsplitbit> #link https://launchpad.net/~imsplitbit/+archive/openvz-nova-driver
21:13:35 <hub_cap> woot
21:13:36 <imsplitbit> I was dang...
21:14:02 <imsplitbit> but yeah, if anyone has packaging exp please hit me up tomorrow
21:14:05 <hub_cap> ok nice work. cant wait to integrate it into our workflow @rax
21:14:05 <vipul> 19 seconds ago nice
21:14:09 <imsplitbit> I'd love some expertise on this
21:14:10 <hub_cap> imsplitbit: do u think i can help?
21:14:19 <hub_cap> i can try at least
21:14:26 <imsplitbit> sure we can look at it
21:14:36 <hub_cap> we can also ask zigo, who has mad debian packaging skills
21:14:38 <imsplitbit> more than one set of eyes is always good
21:14:44 <imsplitbit> kk
21:14:47 <hub_cap> ill use my 3rd eye
21:14:55 <imsplitbit> I don't even want to know
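[editor's note: a minimal sketch of one common way to publish a PPA package for several distro series, added here for context; it was not discussed in the meeting. For architectures, a pure-Python driver package normally declares "Architecture: all" in debian/control so Launchpad builds it once for every arch. The series names, base version, and source package name below are assumptions; only the PPA name comes from the link above.]

    # Hypothetical sketch: upload one source package per Ubuntu series so the
    # PPA builders publish it for each distro. Series, versions, and the
    # source package name are assumptions. Run inside the package source tree.
    import subprocess

    SERIES = ["precise", "quantal", "raring"]      # assumed target series
    PPA = "ppa:imsplitbit/openvz-nova-driver"      # from the link above
    BASE_VERSION = "1.0-0ubuntu1"                  # assumed base version

    for series in SERIES:
        version = "%s~%s1" % (BASE_VERSION, series)
        # Add a changelog entry targeting this series.
        subprocess.check_call(["dch", "-b", "-v", version, "-D", series,
                               "--force-distribution", "Rebuild for %s" % series])
        # Build a signed source package; Launchpad compiles the binaries.
        subprocess.check_call(["debuild", "-S", "-sa"])
        subprocess.check_call(["dput", PPA,
                               "../openvz-nova-driver_%s_source.changes" % version])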
21:14:55 <hub_cap> #topic Jenkins woes
21:14:57 <hub_cap> heh
21:15:03 <hub_cap> SlickNik: brief update plz sir
21:15:19 <SlickNik> Most of the woes should be fixed by now.
21:15:44 <vipul> maybe not jenkins related, but anyone looking into the int-tests?
21:15:56 <vipul> concerned that nearly every patch is failing
21:15:59 <SlickNik> This includes the issue with the stderr logging in the plugin and the reg-exp for success.
21:16:08 <hub_cap> i tried to run them last night and it failed like 10+ tests
21:16:17 <hub_cap> SlickNik: nice on the regex
21:16:18 <SlickNik> There are still issues with some tests failing on and off.
21:16:40 <vipul> So if we were to move to openstack ci tomorrow.. we still wouldn't be much better off without good tests
21:16:40 <datsun180b> Before I left yesterday the only failure example I could see was, from what I can tell, in the client
21:16:41 <esp1> ^ I got a failure on the resize tests
21:16:42 <SlickNik> The two primary ones being "Restart Mysql"
21:16:53 <hub_cap> vipul: #agreed
21:17:01 <SlickNik> and "Resize down"
21:17:08 <hub_cap> we need to make sure those are stupid solid before moving to official tests
21:17:10 <vipul> datsun180b: that one was because of a rebase issue
21:17:34 <SlickNik> datsun180b: vipul and I figured out that one last evening.
21:17:34 <datsun180b> vipul: oh good, i was worried i couldn't see more of what was running to diagnose further
21:18:35 <SlickNik> So we really need to either: 1. solidify those 2 tests, or 2. take them out / replace them with equivalent tests that are not as unstable.
21:18:40 <cp16net> the public tests worked for me yesterday
21:18:47 <datsun180b> I haven't had any problems recently with the tests
21:18:47 <cp16net> is there something else that i am missing?
21:18:48 <hub_cap> cp16net: sweet
21:19:01 <hub_cap> i tried them last night w/ a completely fresh install and first run failed w/ like 20 tests failing
21:19:06 <hub_cap> 2nd run instance never came online
21:19:13 <hub_cap> oh but good news
21:19:13 <cp16net> because the jenkins job ran for my code to be merged
21:19:14 <vipul> stability i think... if you look at most of the patches they have a -1
21:19:18 <robertmyers> would it be good to move those tests into the main repo?
21:19:18 <hub_cap> 12.04.2 networking works again
21:19:20 <grapex> datsun180b: Could you look at them?
21:19:36 <hub_cap> robertmyers: the int-tests?
21:19:40 <robertmyers> yes
21:19:43 <hub_cap> im gonna have a conversation about the tests w/ ttx too
21:19:47 <esp1> hub_cap: nice.  so should we all be using 12.04.2?
21:19:57 <hub_cap> esp1: dont update to it yet i had failing tests
21:20:02 <robertmyers> that way we could roll back bad patches
21:20:04 <hub_cap> not sure if it was just the flakiness of the env or what
21:20:09 <esp1> hub_cap: k
21:20:13 <SlickNik> Here's a run of the tests where the restart Mysql test failed: https://rdjenkins.dyndns.org/job/Reddwarf-Gate/128/console
21:20:15 <datsun180b> i'll blow away my vm and pull everything down tabula rasa
21:20:19 <hub_cap> yup robertmyers
21:20:25 <hub_cap> one thing i really want to see tho
21:20:33 <hub_cap> is that if a test suite needs, say, keystone users, it adds them
21:20:42 <hub_cap> rather than redstack
21:20:46 <SlickNik> And then passed again on a subsequent run: https://rdjenkins.dyndns.org/job/Reddwarf-Gate/133/
21:20:48 <hub_cap> thats just _an_ example
21:21:03 <vipul> hub_cap: yes agree
21:21:09 <SlickNik> hub_cap ++
21:21:12 <cp16net> btw in these tests is there a way we can archive the logs so that we can diagnose a little better?
21:21:33 <hub_cap> ^ ^ SlickNik
21:21:49 <cp16net> that would be a HUGE help
21:21:57 <SlickNik> cp16net: that's something that I can work with Matty on.
21:22:14 <cp16net> awesome should there be a blueprint for that stuff?
21:22:23 <cp16net> i guess it doesnt really apply
21:22:25 <SlickNik> #action SlickNik look into archiving logs for rdjenkins test runs.
21:22:25 <hub_cap> naw
21:22:29 <hub_cap> ya exactly cp16net
21:22:38 <vipul> another thing is a lot of these pass in fusion but do not in cloud
21:22:43 <cp16net> thx SlickNik
21:22:52 <SlickNik> cp16net / others: what logs would be helpful to archive...
21:22:59 <cp16net> the guest log
21:23:03 <cp16net> nova logs
21:23:08 <SlickNik> rd logs
21:23:08 <cp16net> reddwarf logs
21:23:12 <SlickNik> and rdtest.log?
21:23:14 <SlickNik> anything else?
21:23:31 <hub_cap> all of the nova logs too
21:23:47 <SlickNik> yup, nova logs too
21:23:52 <hub_cap> basically anything that's running, and has a log, and is an openstack project, archive it
21:23:59 <hub_cap> (that includes us now :P)
21:24:02 <cp16net> yes sirs
21:24:15 <cp16net> <3
21:24:17 <robertmyers> so there is a config for devstack to do this
21:24:19 <hub_cap> awesome
21:24:35 <cp16net> robertmyers: oh yeah there is
21:24:36 <hub_cap> i think we already use that config robertmyers
21:24:40 <hub_cap> in redstack
21:24:53 <cp16net> yeah that just writes the screens to a file
21:24:57 <SlickNik> We set the logdir in devstack, if that's what you mean robertmyers...
21:24:59 <cp16net> we need to make sure those are archived
21:25:01 <robertmyers> SCREEN_LOGDIR
21:25:02 <hub_cap> basically everything is in ./scripts/../report/logs
21:25:03 <robertmyers> ?
21:25:27 <SlickNik> We just need to copy them to a location that we can access even after the machine goes *poof*...
21:25:32 <hub_cap> naw its a different param to get them to log somewhere robertmyers
21:25:33 <cp16net> yeah we had that setup in our ci
21:25:36 <hub_cap> we are using _something else_
21:25:44 <vipul> we could just scp them on to the jenkins?
21:25:47 <hub_cap> right SlickNik, jenkins archive works well
21:25:53 <hub_cap> archive artifacts
21:26:07 <hub_cap> yes vipul, move them to jenkins, archive, keep last, ~10 or whatever
21:26:17 <hub_cap> jclouds does all that too ;)
21:26:21 <vipul> :p
21:26:37 <vipul> this will be somewhat temporary
21:26:41 <SlickNik> Sounds good. Will look into it.
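[editor's note: a minimal sketch of the log-collection step discussed above, assuming the Jenkins job archives everything under a report/logs directory as build artifacts; the source paths (devstack SCREEN_LOGDIR output, reddwarf logs, rdtest.log) are assumptions, not confirmed in the meeting.]

    # Hypothetical sketch: copy service logs into one directory that the
    # Jenkins job archives before the test VM goes away. Paths are assumptions.
    import glob
    import os
    import shutil

    LOG_SOURCES = [
        "/opt/stack/logs/*.log",     # devstack SCREEN_LOGDIR output (assumed)
        "/var/log/reddwarf/*.log",   # reddwarf api/taskmanager logs (assumed)
        "/tmp/rdtest.log",           # int-test log (assumed)
    ]
    ARCHIVE_DIR = "report/logs"      # directory Jenkins archives (assumed)

    if not os.path.isdir(ARCHIVE_DIR):
        os.makedirs(ARCHIVE_DIR)
    for pattern in LOG_SOURCES:
        for path in glob.glob(pattern):
            shutil.copy(path, ARCHIVE_DIR)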
21:26:41 <hub_cap> hehe ya
21:26:47 <hub_cap> well vipul not really sure
21:26:47 <vipul> since we'll be moving this to openstack soon
21:26:53 <cp16net> ok moving on?
21:26:53 <SlickNik> Yeah, eventually we need to move to devstack-vm-gate.
21:26:55 <hub_cap> this is a 1-off from EVERY other project
21:27:03 <hub_cap> so this might stay as is for now
21:27:12 <vipul> hmm fair nuff
21:27:19 <SlickNik> ++ to movin on...
21:27:27 <hub_cap> kk, grapex did u have anything to add?
21:27:58 <grapex> hub_cap: Not really, though locally at Rackspace I'd like datsun180b to look at these tests and try to make them more solid
21:28:06 <datsun180b> Spinning up now
21:28:13 <vipul> awesome thanks
21:28:20 <grapex> #action datsun180b to look at failing resize and other tests
21:28:37 <hub_cap> sweet just making sure u got that action in there :)
21:28:42 <hub_cap> #topic Backup status
21:28:51 <SlickNik> thanks grapex / datsun180b. That will be sweet...
21:29:03 <datsun180b> I want to see logs of your failures to compare
21:29:27 <hub_cap> whos?
21:29:30 <SlickNik> I'm close to being done with the int-tests for the restore scenario and updating my patch.
21:29:31 <esp1> datsun180b: I can send you one
21:29:44 <SlickNik> for Backup and Restore.
21:29:48 <datsun180b> ed.cranford@rackspace.com
21:30:00 <vipul> datsun180b you could also look at previous failed jenkins run in rdjenkins
21:30:07 <hub_cap> ewwwww dont put your email addy on irc
21:30:18 <SlickNik> Should be able to get it out for a complete review in the next day or so.
21:30:18 <datsun180b> oh no, people know my work email
21:30:19 <hub_cap> thats now logged forever on eavesdrop
21:30:31 <hub_cap> hah just wait for all that spam to roll in
21:30:35 <vipul> this channel is not eavesdropped
21:30:40 <hub_cap> SlickNik: very cool
21:30:45 <datsun180b> i'm no stranger to looking at the logs from rdjenkins
21:30:53 <datsun180b> i was doing that last night, recall
21:30:59 <hub_cap> vipul: http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-30-21.03.log.txt
21:31:03 <hub_cap> u sure bout that ;)
21:31:31 <hub_cap> ok so backups is almost ready to go in 1 pr?
21:31:35 <hub_cap> err, review
21:31:47 <SlickNik> Also need to get eyeballs on:
21:31:48 <SlickNik> https://review.openstack.org/#/c/27291/
21:31:54 <SlickNik> #link https://review.openstack.org/#/c/27291/
21:32:02 <SlickNik> #link https://review.openstack.org/#/c/26288/
21:32:15 <SlickNik> #link https://review.openstack.org/#/c/27299/
21:32:32 <cp16net> how about EVERY review? :-P
21:32:43 <SlickNik> that too.
21:32:44 <esp1> hehe
21:32:47 <hub_cap> HA
21:32:54 <hub_cap> ya lets all do some reviewing tomorrow / today
21:32:55 <esp1> yeah we have a backlog of reviews
21:33:02 <hub_cap> yup
21:33:06 <hub_cap> looks bad on us
21:33:07 <grapex> SlickNik: Could you run recheck on this one? https://review.openstack.org/#/c/26288/
21:33:21 <SlickNik> will do grapex.
21:33:26 <hub_cap> grapex: u mean thru jenkins?
21:33:34 <datsun180b> #link https://review.openstack.org/#/q/is:watched+status:open,n,z
21:33:45 <datsun180b> Bookmark it, pin it to your homepage
21:33:50 <hub_cap> https://rdjenkins.dyndns.org/gerrit_manual_trigger/?
21:33:56 <SlickNik> Mark any ones that are work in progress.
21:34:05 <grapex> Sorry if there's frustration on the fact I haven't looked at some reviews, but there have been some confusing issues: 1) so many builds have failed CI it's hard to tell if these PRs are working, and 2) I wasn't certain if the backup API was done or not and which one to start with
21:34:53 <SlickNik> retriggered https://review.openstack.org/#/c/26288/
21:35:17 <grapex> We talked last time about consolidating the backups pull request, since as I understood it it would have been possible to make a fresh PR from git since the code was finished, and I thought we were going to just do that this one time and then start making smaller ones. Sorry if I misunderstood.
21:35:18 <hub_cap> ya the jenkins thing caused us all setbacks i think
21:35:49 <hub_cap> grapex: we can review things as they are i think for now. u ok w/ that?
21:35:55 <hub_cap> also, did we get int-tests for backups?
21:36:04 <hub_cap> id like to see that they are merged when we merge the code itself
21:36:15 <SlickNik> sorry for the confusion grapex: https://review.openstack.org/#/c/26288/ should be ready to go. https://review.openstack.org/#/c/27299/ still has the int-tests in progress.
21:36:44 <grapex> hub_cap: I'll try, since I know there's a need for these things to get merged.
21:36:53 <cp16net> yeah
21:37:01 <SlickNik> hub_cap, https://review.openstack.org/#/c/27299/ is WIP since I'm working on the int-tests and that should be uploaded in the next couple of days.
21:37:18 <grapex> SlickNik: No problem. In the future PM me and let me know your plan behind the order of merging if things get this complicated on a new feature.
21:37:56 <SlickNik> grapex: Will definitely keep you up to date.
21:38:00 <hub_cap> word
21:38:14 <cp16net> maybe it would be good to outline the plan of attack in the blueprint
21:38:29 <cp16net> i've seen the graphs of deps in blueprints
21:38:33 <grapex> cp16net: Great idea!
21:38:37 <cp16net> maybe thats how we can mitigate this
21:38:44 <grapex> ^^+1
21:39:00 <hub_cap> +1
21:39:04 <cp16net> i dont know how to create it but it looks really nice
21:39:58 <hub_cap> ok so next topic?
21:40:19 <SlickNik> Agreed. It's been tricky staying on the same page, so this would be better called out in the blueprint.
21:40:51 <vipul> also, while we're in this unknown period, can we all pull down selected reviews
21:40:52 <grapex> Just to summarize, I think when there are any red marks on a pull request it keeps people from looking at it. In those cases we need to be explicit about what we do and don't want people to approve.
21:40:53 <cp16net> #link https://blueprints.launchpad.net/nova/+spec/baremetal-compute-takeover
21:40:54 <vipul> and run int-tests?
21:40:59 <cp16net> theres kinda an example
21:41:25 <hub_cap> grapex: i totally agree w/ u
21:41:33 <hub_cap> it moves my eyes away at first glance too
21:41:47 <grapex> vipul: We could, though in this case the feature's in a few pieces so it's hard to pull it down and just run int-tests.
21:42:17 <esp1> I kind of think we should all run int-tests independently anyways.
21:42:18 <vipul> if it helps, we can consolidate the backups patches
21:42:30 <vipul> because that might be the only big one spanning multiple
21:42:34 <hub_cap> esp1: for sure, but it's hard to know _if_ we did that :P
21:42:53 <robertmyers> the main problem i see is the two repos
21:42:55 <vipul> it's just that these tests may continue to fail in jenkins, since it's in-cloud
21:42:58 <SlickNik> The issue is that we can't consolidate
21:43:06 <SlickNik> Since changes are across repos
21:43:07 <robertmyers> why not?
21:43:11 <esp1> maybe if you see a red −1 from reddwarf you could run the int-tests manually and copy it to the review to show that it works.
21:43:13 <grapex> vipul: It would, though I already agreed no one has to go to the trouble and would feel bad if anyone had to do that.
21:43:34 <datsun180b> we should certainly be running int-tests on our own
21:44:00 <vipul> esp1: yea that will work for now.. until we get in-cloud tests stabilized
21:44:24 <hub_cap> lets also focus on getting them stabilized _now_ rather than later
21:44:31 <hub_cap> datsun180b: is our man for that :D
21:44:32 <esp1> vipul: maybe overkill and not perfect tho
21:44:48 <esp1> hub_cap: cool
21:45:20 <vipul> grapex: ok cool, i missed that part of the discussion
21:45:25 <SlickNik> hub_cap ++: we shouldn't let esp1's idea stop us from working on stabilizing the tests.
21:45:35 <hub_cap> sure
21:46:04 <SlickNik> so that they aren't iffy on an automated run.
21:47:25 <vipul> is this thing still on?
21:47:30 <hub_cap> ok so _now_ moving on?
21:47:36 <vipul> yup
21:47:37 <hub_cap> but to summarize
21:47:41 <hub_cap> 1) our tests are unstable
21:47:47 <hub_cap> 2) look @ code anyway
21:47:54 <hub_cap> 3) check to see if someone has manually run the code
21:48:00 <hub_cap> and if u have, put it in the review
21:48:19 <hub_cap> 1 caveat, _all_ the code, including all of openstack, should be up to date when doing this
21:48:22 <hub_cap> that's the only thing that scares me
21:48:39 <hub_cap> just cuz it works in the microcosm of commit 3ef7aa3 doesn't mean it works w/ master heh
21:48:49 <hub_cap> so keep that in mind
21:48:55 <hub_cap> #topic Notifications
21:49:11 <datsun180b> If we're going to share results of manually-run tests, I think we need to have some way to basically show a freeze of the components
21:49:16 <robertmyers> #link https://review.openstack.org/#/c/26884/
21:49:22 <hub_cap> datsun180b: i think thats fair to say
21:49:25 <datsun180b> We can't exactly use pip freeze, but we can share the commit hashes I bet
21:49:40 <hub_cap> ya datsun180b
21:49:57 <vipul> robertmyers: i've been trying to get that to pass reddwarf, but no luck with vm-gate
21:49:58 <SlickNik> honestly, in my case if I know it works (having run it locally) I just re-run rdjenkins if I hit an unstable test. It pretty much succeeds on the second run.
21:50:03 <robertmyers> can haz merge?
21:50:22 <datsun180b> "can haz". Son, people can see you!
21:50:28 <robertmyers> ha
21:50:33 <vipul> robertmyers: is the exists stuff coming in later patch?
21:50:36 <hub_cap> hah robertmyers, did u run int tests
21:50:37 <hub_cap> heh
21:50:38 <grapex> datsun180b: Like that's ever stopped us before.
21:50:40 <robertmyers> vipul: what erro?
21:50:45 <robertmyers> error?
21:50:58 <robertmyers> exists is part 2
21:51:11 <robertmyers> this is  the start of notifications
21:51:20 <SlickNik> robertmyers: https://rdjenkins.dyndns.org/job/Reddwarf-Gate/129/console
21:51:30 <robertmyers> basically this is our internal code.
21:51:40 <datsun180b> Looks like timeout during mysql stop
21:51:42 <hub_cap> hehe the test failed
21:51:44 <robertmyers> the exists stuff doesn't exist yet
21:51:49 <hub_cap> no can not has merge robertmyers
21:51:58 <robertmyers> boo
21:52:14 <datsun180b> oh that's 129, I'm still on 128
21:52:32 <SlickNik> robertmyers: looks like "test_instance_delete_event_sent" and some of the resize_instance_usage event tests are failing.
21:52:55 <robertmyers> yeah, it *should* be using the fake verifier
21:53:28 <grapex> robertmyers: Refresh my memory - that fake verifier actually works in real mode, right?
21:53:29 <hub_cap> crap im double booked
21:53:58 <grapex> Because I remember you had some test doubles (fakes) that worked in fake mode with tox. I'm guessing that actually sends a rabbit event or something you check using rabbit in the VM?
21:54:03 <datsun180b> "the fake verifier works in real mode" We are wizards, aren't we
21:54:08 <robertmyers> grapex: yes
21:54:31 <grapex> Ironically this may mean the notifications code is working. :( We've hit issues where they catch bugs for async actions before.
21:54:56 <hub_cap> ok im 1/2 paying attn cuz im chatting in #openstack-meeting too
21:55:00 <robertmyers> it also needs to have a notification_driver setting in the config
21:55:00 <hub_cap> so someone ping me to change topic
21:55:22 <esp1> datsun180b: I'm more like harry potter in 3rd year
21:55:41 <robertmyers> so it could be an out of date reddwarf-integration
21:56:22 <datsun180b> clean bill of health from my int-tests
21:56:31 <grapex> Is there any way to determine the shas of the git projects that were used in a CI run?
21:56:39 <grapex> Maybe we should add that to redstack install
21:56:55 <datsun180b> https://gist.github.com/ed-/e2947f91ba624595e4d0
21:57:01 <grapex> just iterate all the /opt/stack directories and save a chunk of the git logs to a file we can archive
21:57:19 <esp1> grapex: seems reasonable to me
21:57:36 <grapex> It would solve the "I think this works but I'm not sure if you tested what I tested" problem
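[editor's note: a minimal sketch of grapex's suggestion to iterate the /opt/stack checkouts and record their git SHAs in a file that can be archived with the other logs; the stack directory and output file name are assumptions.]

    # Hypothetical sketch: record the HEAD sha of every repo under /opt/stack
    # so a CI run can later be compared against a local run.
    import os
    import subprocess

    STACK_DIR = "/opt/stack"          # devstack checkout root (assumed)
    OUTPUT = "report/git-shas.txt"    # assumed output location

    with open(OUTPUT, "w") as out:
        for name in sorted(os.listdir(STACK_DIR)):
            repo = os.path.join(STACK_DIR, name)
            if not os.path.isdir(os.path.join(repo, ".git")):
                continue
            sha = subprocess.check_output(["git", "rev-parse", "HEAD"],
                                          cwd=repo).strip()
            out.write("%s %s\n" % (name, sha))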
21:57:48 <grapex> Who's with me? Should we make it an action item?
21:57:59 <esp1> +1
21:58:05 <cp16net> datsun180b: that means its good now
21:58:11 <grapex> Also an action item: redstack should include more ascii art like datsun180b's VM
21:58:20 <datsun180b> that's just the motd of my vm
21:58:37 <grapex> Starbug is also what a little girl would name a pony.
21:58:37 <datsun180b> HEY SlickNik tell them the relevance of the name "Starbug"
21:58:40 <grapex> Just saying.
21:59:19 <hub_cap> woa
21:59:23 <SlickNik> starbug is one of the ships in RedDwarf…in case anyone was wondering.. :)
21:59:29 <hub_cap> i go to the other meeting room and look what happens
21:59:32 <datsun180b> it's Red Dwarf's shuttlecraft
21:59:37 <hub_cap> #action keep the crew on topic
21:59:47 <robertmyers> #action robertmyers look into failing notifications tests
21:59:53 <cp16net> lol
22:00:29 <SlickNik> #action SlickNik to see if there's an easy way to save off the git hashes for a rdjenkins run in case we need to look it up.
22:00:33 <grapex> Wait, what is this Reddwarf you speak of? Are you talking about our project, the only thing I know of which ever has used that name outside of astronauts?
22:00:39 <datsun180b> protip: it'll involve "git log" and some --flags and some awk i bet
22:00:48 <grapex> hub_cap: What about the idea to save git histories
22:00:58 <grapex> so we know if the Jenkins test is testing what we're running locally
22:00:59 <grapex> ?
22:01:03 <grapex> vipul: ^^
22:01:09 <SlickNik> I hear you datsun180b, pretty much what I was thinking.
22:01:12 <hub_cap> i mean
22:01:17 <hub_cap> i dont know if we need to go that far
22:01:21 <hub_cap> whos gonna verify it?
22:01:25 <hub_cap> just be sure you're up to date
22:01:28 <grapex> hub_cap: That's not the point
22:01:34 <hub_cap> then what is the point?
22:01:40 <datsun180b> record keeping
22:01:41 <esp1> vipul ain't at his desk atm
22:01:44 <grapex> The point is to be a sanity check so if Jenkins is failing, and you're positive it's working locally, you can know if the code is the same.
22:02:03 <datsun180b> #agree
22:02:07 <grapex> Because maybe you make a mistake and are on a different version of RDI, or the client, or Nova, or some other thing than Jenkins is using
22:02:17 <cp16net> you know that could be some "change log" which we are lacking in the public now
22:02:24 <datsun180b> you hush
22:02:32 <grapex> cp16net: We need that too but this would be simpler
22:02:40 <hub_cap> ok my Q is this
22:02:41 <cp16net> sure but it could be one in the same
22:02:43 <hub_cap> whos gonna use that info
22:02:45 <datsun180b> i think we need releases to have changelogs
22:02:47 <cp16net> or at least a first step
22:02:58 <hub_cap> are u gonna look @ the SHA's if someone's stuff doesn't run on CI?
22:03:03 <hub_cap> or if they claim it does run locally
22:03:13 <hub_cap> are u gonna validate all 9 of the SHA's against what's in master for all of openstack?
22:03:21 <grapex> hub_cap: That doesn't matter
22:03:32 <datsun180b> we'd look at them in the case of a discrepancy
22:03:38 <grapex> all you do is say "Hey, it works locally! What are my shas?" Check them, then "ok, what are the Jenkins shas", check them
22:03:39 <hub_cap> ok lets take this off to the regular room, cuz we are already at time
22:03:42 <grapex> And move on with your life
22:03:52 <SlickNik> hub_cap: I'm more interested across our 3 projects than across openstack.
22:03:58 <SlickNik> But I see your point.
22:04:38 <grapex> Right now what I'm reading is robertmyers has tested his code in the VM and thinks Jenkins isn't running the same code. If we had the shas archived it would be trivial to verify if that was the case.
22:04:45 <cp16net> #agreed hub_cap
22:04:55 <hub_cap> ok so lets quickly do open discussion
22:05:07 <hub_cap> grapex: sure but jenkins is always pulling latest
22:05:09 <hub_cap> we _know_ that
22:05:12 <grapex> hub_cap: Exactly
22:05:19 <hub_cap> it does a fresh VM every pull
22:05:24 <hub_cap> ok so u mean only jenkins recording the sha's
22:05:42 <robertmyers> grapex: it could be a true error. I thought it was a resize down failure
22:05:43 <hub_cap> i thought this was if i ran it locally, to prove that im up to date, if jenkins fails, ill post my shas
22:05:53 <robertmyers> which I get locally
22:05:55 <hub_cap> if its for a user to verify against jenkins sure
22:06:05 <hub_cap> if its for a user to verify another user, then we wont do it, guaranteed
22:06:13 <robertmyers> so i'll look into my code
22:06:19 <grapex> hub_cap: It's for the first case.
22:06:28 <hub_cap> then its fine by me
22:06:29 <grapex> I don't fully get what you mean by the second...
22:06:34 <hub_cap> if jenkins fails
22:06:48 <hub_cap> and u say, look i ran int-tests, they passed on sha 34eef7
22:07:01 <hub_cap> i would say ok they passed, i believe u, who cares bout the sha
22:08:03 <datsun180b> if that sha is found to be ancient or not anywhere grounded in reality wrt the repo, that's another story
22:08:52 <grapex> Ok- I think we have some more stuff to talk about but we can take it to the #reddwarf room
22:08:58 <hub_cap> yes grapex +1
22:08:58 <cp16net> ok
22:09:02 <hub_cap> can we call this meeting?
22:09:06 <SlickNik> yes...
22:09:06 <hub_cap> and just go to #reddwarf
22:09:09 <hub_cap> #endmeeting