Thursday, 2012-02-23

*** hggdh has quit IRC 00:03
*** dolphm has quit IRC 00:07
*** hggdh has joined #openstack-meeting 00:15
*** ohnoimdead has quit IRC 00:22
*** nikhil__ has quit IRC 00:22
*** mikal has quit IRC 00:22
*** carlp has quit IRC 00:22
*** comstud has quit IRC 00:22
*** yehudasa has quit IRC 00:22
*** dwalleck_nova has quit IRC 00:23
*** deshantm_ is now known as deshantm 00:24
*** sleepsonthefloo has quit IRC 00:24
*** ohnoimdead has joined #openstack-meeting 00:28
*** nikhil__ has joined #openstack-meeting 00:28
*** mikal has joined #openstack-meeting 00:28
*** carlp has joined #openstack-meeting 00:28
*** comstud has joined #openstack-meeting 00:28
*** yehudasa has joined #openstack-meeting 00:28
*** mikal has quit IRC 00:28
*** mikal has joined #openstack-meeting 00:31
*** deshantm has quit IRC 00:31
*** mikal has quit IRC 00:31
*** jakedahn has joined #openstack-meeting 00:33
*** mikal has joined #openstack-meeting 00:33
*** deshantm has joined #openstack-meeting 00:34
*** ohnoimdead has quit IRC 00:44
*** jog0 has quit IRC 00:47
*** jakedahn has quit IRC 00:48
*** dendro-afk is now known as dendrobates 00:49
*** patelna has quit IRC 00:57
*** joearnold has joined #openstack-meeting 00:59
*** dtroyer has joined #openstack-meeting 01:01
*** mahmoh has quit IRC 01:02
*** dolphm has joined #openstack-meeting 01:06
*** ravi has joined #openstack-meeting 01:07
*** ravi has left #openstack-meeting 01:07
*** ohnoimdead has joined #openstack-meeting 01:15
*** dolphm has quit IRC 01:16
*** mahmoh has joined #openstack-meeting 01:17
*** mahmoh has quit IRC 01:22
*** donaldngo_hp has quit IRC 01:25
*** mahmoh has joined #openstack-meeting 01:37
*** dendrobates is now known as dendro-afk 01:39
*** dolphm has joined #openstack-meeting 01:44
*** donaldngo_hp has joined #openstack-meeting 01:45
*** nyov_ is now known as nyov 01:53
*** joearnold has quit IRC 01:57
*** gyee has quit IRC 02:04
*** ravi has joined #openstack-meeting 02:20
*** jeremyb_ has quit IRC 02:29
*** jeremyb_ has joined #openstack-meeting 02:29
*** jeremyb_ is now known as jeremyb 02:30
*** jakedahn has joined #openstack-meeting 02:32
*** jakedahn has quit IRC 02:32
*** jakedahn has joined #openstack-meeting 02:32
*** sandywalsh has quit IRC 02:32
*** ravi has quit IRC 02:46
*** sandywalsh has joined #openstack-meeting 02:47
*** anotherjesse has quit IRC 02:56
*** jakedahn has quit IRC 02:56
*** bengrue has quit IRC 03:00
*** andrewsben has quit IRC 03:02
*** dolphm has quit IRC 03:02
*** novas0x2a|laptop has quit IRC 03:06
*** jdurgin has quit IRC 03:07
*** danwent has quit IRC 03:12
*** dolphm has joined #openstack-meeting 03:14
*** mahmoh has left #openstack-meeting 03:19
*** dendro-afk is now known as dendrobates 03:35
*** ohnoimdead has quit IRC 03:47
*** dtroyer has quit IRC 04:04
*** anotherjesse has joined #openstack-meeting 04:07
*** dtroyer has joined #openstack-meeting 04:10
*** martine has joined #openstack-meeting 04:15
*** reed has quit IRC 04:37
*** danwent has joined #openstack-meeting 04:44
*** martine has quit IRC 04:46
*** sleepsonthefloo has joined #openstack-meeting 05:03
*** davlap has quit IRC 05:06
*** zigo has joined #openstack-meeting 05:14
*** andycjw has joined #openstack-meeting 05:42
*** mdomsch has quit IRC 05:48
*** mjfork has quit IRC 06:16
*** littleidea has quit IRC 06:19
*** danwent has quit IRC 06:27
*** zigo has quit IRC 06:31
*** anotherjesse1 has joined #openstack-meeting 06:46
*** deshantm has quit IRC 06:47
*** dtroyer has quit IRC 06:50
*** ghe_ is now known as GheRivero 07:11
*** donaldngo_hp has quit IRC 07:23
*** donaldngo_hp has joined #openstack-meeting 07:24
*** donaldngo_hp has quit IRC 07:31
*** donaldngo_hp has joined #openstack-meeting 07:34
*** anotherjesse1 has quit IRC 07:38
*** shang_ has quit IRC 07:51
*** darraghb has joined #openstack-meeting 08:34
*** journeeman has joined #openstack-meeting 08:58
*** adjohn has quit IRC 09:02
*** adjohn has joined #openstack-meeting 09:02
*** adjohn has quit IRC 09:06
*** derekh has joined #openstack-meeting 09:08
*** zigo has joined #openstack-meeting 09:11
*** shang has joined #openstack-meeting 09:16
*** justinsb has quit IRC 09:18
*** shang has quit IRC 09:22
*** justinsb has joined #openstack-meeting 09:25
*** shang has joined #openstack-meeting 09:36
*** zigo has quit IRC 09:46
*** shang has quit IRC 09:49
*** zigo has joined #openstack-meeting 09:50
*** sleepsonthefloo has quit IRC 10:02
*** zigo has quit IRC 10:14
*** shang has joined #openstack-meeting 10:33
*** andycjw has quit IRC 10:46
*** soren_ is now known as soren 11:01
*** shang has quit IRC 11:01
*** soren has quit IRC 11:01
*** soren has joined #openstack-meeting 11:01
*** dayou has quit IRC 11:09
*** dayou has joined #openstack-meeting 11:10
*** shang has joined #openstack-meeting 11:12
*** dayou has quit IRC 11:25
*** zigo has joined #openstack-meeting 11:48
*** dayou has joined #openstack-meeting 12:20
*** markvoelker has joined #openstack-meeting 12:33
*** zigo has quit IRC 12:41
*** zigo has joined #openstack-meeting 12:44
*** mdomsch has joined #openstack-meeting 12:50
*** dprince has joined #openstack-meeting 13:06
*** mjfork has joined #openstack-meeting 13:07
*** journeeman has quit IRC 13:09
*** littleidea has joined #openstack-meeting 13:11
*** mdomsch has quit IRC 13:19
*** littleidea has quit IRC 13:31
*** littleidea has joined #openstack-meeting 13:31
*** littleidea has quit IRC 13:32
*** zigo has quit IRC 13:45
*** mattray has joined #openstack-meeting 14:07
*** littleidea has joined #openstack-meeting 14:11
*** dolphm has quit IRC 14:37
*** dtroyer has joined #openstack-meeting 14:39
*** donaldngo_hp has quit IRC 14:42
*** mancdaz has quit IRC 14:46
*** mancdaz1203 has joined #openstack-meeting 14:47
*** GheRivero_ has joined #openstack-meeting 14:50
*** martine has joined #openstack-meeting 14:51
*** dtroyer has quit IRC 14:53
*** donaldngo_hp has joined #openstack-meeting 14:58
*** sandywalsh has quit IRC 15:01
*** dtroyer has joined #openstack-meeting 15:06
*** ravi has joined #openstack-meeting 15:09
*** sandywalsh has joined #openstack-meeting 15:15
*** dendrobates is now known as dendro-afk 15:17
*** dolphm has joined #openstack-meeting 15:21
*** zigo has joined #openstack-meeting 15:29
*** ravi has quit IRC 15:39
*** cdub has quit IRC 15:54
*** thrawn01 has joined #openstack-meeting 16:01
*** deshantm has joined #openstack-meeting 16:01
*** mattray has quit IRC 16:04
*** oubiwann has joined #openstack-meeting 16:04
*** danwent has joined #openstack-meeting 16:14
*** DuncanT has quit IRC 16:17
*** DuncanT has joined #openstack-meeting 16:21
*** Yak-n-Yeti has joined #openstack-meeting 16:21
*** aweiss has joined #openstack-meeting 16:23
*** mattray has joined #openstack-meeting 16:23
*** ravi has joined #openstack-meeting 16:25
*** ravi has quit IRC 16:26
*** ravi has joined #openstack-meeting 16:26
*** littleidea has quit IRC 16:27
*** dendro-afk is now known as dendrobates 16:28
*** jastr has joined #openstack-meeting 16:53
*** davidkranz_ has joined #openstack-meeting 17:00
<jaypipes> QA meeting starting now... 17:00
<openstack> Meeting started Thu Feb 23 17:00:37 2012 UTC.  The chair is jaypipes. Information about MeetBot at
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic. 17:00
*** ravi has left #openstack-meeting 17:00
<jaypipes> gah.. so stressed I'm forgetting the friggin bot commands... 17:00
<jaypipes> #topic status update 17:01
*** openstack changes topic to "status update" 17:01
<jaypipes> donaldngo_hp: noticed a bunch of patches from you, thank you! I'll try to get reviews on them ASAP. 17:02
<jaypipes> We still need to eval davidkranz_'s stress test work as well. If anyone has some spare cycles, would be great to get reviews on:
*** deshantm_ has joined #openstack-meeting 17:03
<jaypipes> In fact, reviews are piling up at ,status:open+project:openstack/tempest,n,z 17:04
<jaypipes> Would be great to get some help in reviews from those in the QA team 17:04
*** mdomsch has joined #openstack-meeting 17:04
<davidkranz_> I will take a look 17:05
<jaypipes> davidkranz_: thx David :) 17:05
<jaypipes> I got a note from Daryl offline that he and his team are finalizing some work to make the Tempest test suite easier to use. Looking forward to that work. 17:06
*** dubsquared has joined #openstack-meeting 17:06
<jaypipes> Unless anyone has anything else to bring up, I'm happy to close this meeting and get on to doing reviews on Tempest commits... ? 17:06
<davidkranz_> How do SmokeStack and Torpedo relate to Tempest? There was an email about them this morning. 17:06
<jastr> I'll look at some as well. 17:06
*** deshantm has quit IRC 17:07
*** dendrobates is now known as dendro-afk 17:08
*** sleepsonthefloo has joined #openstack-meeting 17:12
*** deshantm_ is now known as deshantm 17:13
*** CarlosMartinez has joined #openstack-meeting 17:13
<jaypipes> davidkranz_: SmokeStack is a framework that stands up OpenStack on both bare-metal and virtual server clusters. Torpedo is a functional test suite (more a set of exercises) that is executed against a SmokeStack environment. 17:13
<jaypipes> davidkranz_: SmokeStack/Torpedo is executed against every commit to all core project trunks. 17:13
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)" 17:18
<openstack> Meeting ended Thu Feb 23 17:18:30 2012 UTC.  Information about MeetBot at . (v 0.1.4) 17:18
<openstack> Minutes (text):
*** donaldngo_hp has quit IRC 17:18
*** donaldngo_hp has joined #openstack-meeting 17:18
*** JoseSwiftQA has joined #openstack-meeting 17:19
*** adjohn has joined #openstack-meeting 17:19
*** gregburek has joined #openstack-meeting 17:22
*** zigo has quit IRC 17:22
*** anotherjesse1 has joined #openstack-meeting 17:23
*** JoseSwiftQA has quit IRC 17:24
*** CarlosMartinez has quit IRC 17:25
*** zigo has joined #openstack-meeting 17:25
*** cdub has joined #openstack-meeting 17:26
*** darraghb has quit IRC 17:28
*** ohnoimdead has joined #openstack-meeting 17:31
*** reed has joined #openstack-meeting 17:33
*** ohnoimdead has joined #openstack-meeting 17:37
*** andrewbogott has joined #openstack-meeting 17:39
*** jdurgin has joined #openstack-meeting 17:41
*** davidkranz_ has quit IRC 17:44
*** derekh has quit IRC 17:47
*** clayg has joined #openstack-meeting 17:49
*** shang_ has joined #openstack-meeting 17:54
*** shang_ has quit IRC 17:54
*** shang has quit IRC 17:56
*** bvanzant has joined #openstack-meeting 17:57
*** jdg has joined #openstack-meeting 17:58
*** andrewsben has joined #openstack-meeting 17:59
*** YorikSar has joined #openstack-meeting 17:59
*** heckj has joined #openstack-meeting 18:00
<jdg> Folks here for the volume meeting? 18:01
<jdg> Alrighty then... 18:01
<openstack> Meeting started Thu Feb 23 18:01:34 2012 UTC.  The chair is jdg. Information about MeetBot at
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic. 18:01
<clayg> I don't see vlad, or renuka, or vish :( 18:01
<jdg> We should give them a minute or two... I know Renuka was planning to attend 18:02
<jdg> Perhaps we can start talking about current work until they get here? 18:02
<jdg> I put an agenda up: 18:02
*** dricco has joined #openstack-meeting 18:02
*** joearnold has joined #openstack-meeting 18:03
<clayg> has anyone started using/testing the new volumes api endpoint? 18:03
*** renuka has joined #openstack-meeting 18:03
<jdg> #topic new/current work for Essex 18:03
*** openstack changes topic to "new/current work for Essex" 18:03
<bcwaldon> vishy and I were going to add support for using it through novaclient 18:03
<bcwaldon> but not yet :) 18:03
<DuncanT> We only started looking at it today 18:04
<bcwaldon> one major pain point I've seen is when the volumes compute extensions get out of sync with the volumes endpoint 18:04
<bcwaldon> I would love to see that code merged somehow 18:04
<YorikSar> bcwaldon: I still could not find time to look into it 18:05
*** gyee has joined #openstack-meeting 18:05
<bcwaldon> ok, no worries 18:05
<jdg> Ok, anybody have anything specific they want to talk about regarding Essex, or do we need to spend some time discussing this issue? 18:06
<YorikSar> As I saw, it should not take a lot of time 18:06
<YorikSar> I think this issue should be solved before Essex 18:06
<jdg> Sounds good 18:06
<clayg> bcwaldon: regarding api/openstack/compute/contrib/volumes getting out of sync with "volumes endpoint" - don't they currently both use the same db? 18:07
<bcwaldon> the code gets out of sync, not the data 18:07
<bcwaldon> they're the same code, but copied to two separate places 18:07
<YorikSar> The extension is not covered with tests at all now 18:07
<bcwaldon> so it's easy to fix a bug in only one place 18:07
<clayg> right, yes 18:07
<clayg> right - i saw the volumes_type fix that you did fixed it in both - you're attentive like that - most of us aren't 18:08
<clayg> YorikSar: which extension? rly? 18:08
*** sleepsonthefloo_ has joined #openstack-meeting 18:08
<bcwaldon> not necessarily attentive, I've just already felt the pain of not fixing it in both places ;) 18:08
<YorikSar> clayg: The os_volume extension... I bumped into a 500 error using novaclient and spent a lot of time wondering how it could pass all the tests. 18:09
<clayg> jdg: ok, you're running the show here - what's next! ;) 18:09
<jdg> :) alright.... 18:10
<YorikSar> clayg: And then I realized that the tests cover only the endpoint 18:10
<jdg> Sounds like there's some more discussion needed on this, maybe at the end of the meeting or after 18:10
<jdg> I wanted to see if anyone has anything specific about work they've done for Essex that should be shared with a wider audience 18:10
*** sleepsonthefloo has quit IRC 18:11
*** sleepsonthefloo_ is now known as sleepsonthefloo 18:11
<jdg> i.e. blueprints 18:11
<clayg> Vlad had that bit on the multi-type driver, don't think it ever got written 18:11
<clayg> but like when he talked about it - I never understood how he was planning on implementing it... so I think I was missing some understanding of his use case. 18:12
<YorikSar> Blueprints that were approved for FFe are all merged or postponed now 18:12
<jdg> Ok... I was more or less trying to sync folks up but maybe not applicable here. 18:12
<jdg> Moving on... 18:12
<jdg> #topic outstanding work that folks need help with 18:13
*** openstack changes topic to "outstanding work that folks need help with" 18:13
*** donaldngo_hp has quit IRC 18:13
<jdg> Any specific bugs/issues that folks are working on and could use some help on? 18:13
<clayg> bug #897075 18:13
<clayg> volume int is id not uuid 18:13
*** renuka has quit IRC 18:14
*** renuka has joined #openstack-meeting 18:14
*** oubiwann has quit IRC 18:15
<jdg> Ok, anybody looked at this one? 18:15
<YorikSar> clayg: Is it a real issue? 18:15
<bcwaldon> I have looked into it, and gave up when I realized how much work it was going to be 18:16
<YorikSar> I mean, isn't it just aesthetic? 18:16
<renuka> no, I think it matters even wrt security 18:17
<clayg> YorikSar: there's some risk with the ids being auto-incrementing that can be a collision problem in staging... 18:17
<renuka> you should not be able to predict volume ids for a particular user 18:17
<bcwaldon> yes, so I think it does need to happen 18:17
<YorikSar> Got it. 18:18
<YorikSar> And, maybe, it should also happen before Essex 18:18
<jdg> bcwaldon: How far did you get into finding where the changes need to be made? 18:18
<jdg> Or anybody else that's familiar with the issue 18:19
<YorikSar> Since people heard the word LTS in one sentence with Essex 18:19
<bcwaldon> oh, it's basically a sweep of the entire volume-related codebase 18:19
*** x86brandon has joined #openstack-meeting 18:19
<bcwaldon> I did some of the work for instance uuids 18:19
<jdg> Oh... that's all?  :) 18:19
<bcwaldon> so I know the depth of the changes 18:19
<bcwaldon> yes...that's the problem! 18:19
<jdg> Well, I'll volunteer to take a piece of it and see how it goes.  Anyone else? 18:20
<bcwaldon> the thing is you can't really break it up into pieces, since as soon as you make the change *all* the tests break 18:21
<renuka> why is it a sweep of the code base? i would assume that once a volume id is assigned, it just gets used without interpretation 18:21
<YorikSar> There can also be a problem with drivers there... 18:21
*** bengrue has joined #openstack-meeting 18:21
<jdg> bcwaldon:  Yeah, my hope was somebody else would step up and we could divide the effort 18:21
<bcwaldon> jdg: I'm saying I don't think that would work very well ;) 18:21
<jdg> bcwaldon:  Ah, ok. 18:22
<jdg> well my offer still stands to work on this if somebody wants to bring me up to speed on the issue later? 18:22
<renuka> jdg can help out with the creation bit.. after that point, everyone ensures their drivers keep working? 18:22
<jdg> renuka: I'm willing to go that route if others agree 18:23
<YorikSar> renuka: Well, the thing with drivers is that there can be some that are not supported too actively 18:23
<jdg> Remember I'm still relatively new so I may need a little guidance to start. 18:23
<clayg> jdg: yeah I mean if you get a branch up I'll definitely check it out and deploy/test/review 18:23
*** novas0x2a|laptop has joined #openstack-meeting 18:24
<jdg> Ok, I have time to devote so I can work on it if folks are in agreement.  bcwaldon, sound reasonable?  Or are we dreaming here? 18:24
<clayg> YorikSar: if Thierry can rip out Hyper-V we can rip out sheepdog? 18:24
<bcwaldon> jdg: no, I was just too lazy to do it 18:25
<renuka> By their drivers, I meant either those you wrote or are interested in... if you are not familiar, all you need to do is file a bug and bring it to people's notice.. :) 18:25
<bcwaldon> jdg: have fun with it 18:25
<jdg> LOL.. that's always ominous 18:25
<YorikSar> clayg: It can be too late for this 18:25
<renuka> yea, we ought to check with vish if they will have this for essex 18:26
*** joearnold has quit IRC 18:26
<YorikSar> Actually we can be more optimistic and hope that drivers can accept a long ugly string as id 18:26
*** js42 has joined #openstack-meeting 18:26
<renuka> I don't expect drivers to manipulate the ids so I am optimistic, yes 18:26
<YorikSar> For example, the Nexenta one can handle this (I hope so) 18:27
<DuncanT> We make use of the ID, but it is easy enough to work with the change as long as the length is well defined 18:27
*** dtroyer has quit IRC 18:28
<clayg> I think we mostly see the 'vol-0000001'-looking "id" 18:29
<clayg> jdg: DuncanT: YorikSar: renuka: are SolidFire, HP, Nexenta, or Citrix using "volume_types"? 18:30
<jdg> Negative for SolidFire 18:31
<DuncanT> Not yet in production but we've plans around it 18:31
<renuka> DuncanT: Since compute has already converted to use uuids, there is already code in there doing what we need. I don't think we need to worry about how long the resulting string is 18:31
*** mdomsch has quit IRC 18:31
<YorikSar> clayg: I've been looking for it, only Zadara used them 18:31
<renuka> clayg: Citrix is not using it yet... I haven't gotten around to changing the SM driver to start 18:32
<DuncanT> renuka: We use the volume id as a key into our own databases, so we need a spec for it. As long as there is a spec, we don't care too much what it is 18:32
<renuka> but we do need it 18:32
*** mdomsch has joined #openstack-meeting 18:32
<ogelbukh> I thought this was tied to the volume scheduler 18:32
<clayg> yeah I was hoping to get a "state of the union" update on Xen Storage Manager support :) 18:32
<renuka> clayg: heh I had switched to devstack work for the last couple of months :) 18:33
<clayg> ogelbukh: the type is just an attribute on the volume model - in theory it could be used by the scheduler - or in our case (maybe hp too) passed along unmodified 18:33
<DuncanT> clayg: Yeah, we want it unmodified too 18:33
<renuka> so give me about a week.. there are some bugs that are fixed on our internal branch that need to be rebased 18:33
*** oubiwann has joined #openstack-meeting 18:33
<ogelbukh> I see 18:33
<clayg> renuka: so you're just here being nosy - you don't really _care_ about volumes any more :D 18:33
<clayg> oh... wow nm, looking forward to it! 18:34
<renuka> clayg: haha, no! 18:34
<renuka> clayg: I am supposed to be working on everything :D 18:34
*** danwent has quit IRC 18:35
<clayg> jdg: what do you mean by BSaaS? 18:35
<jdg> Ok, so WRT bug #897075 it sounds like the initial thought is to move forward with putting in a "real" UUID for the ID, correct? 18:35
<uvirtbot`> Launchpad bug 897075 in nova/essex "volume int is id not uuid" [Medium,Triaged]
<YorikSar> I looked through id usage in drivers. Looks like it will go well if iSCSI supports long IDs 18:35
<jdg> clayg:  Block Storage as a Service (sorry, should've added that) 18:36
<ogelbukh> this was Lunr once 18:36
<renuka> #agreed fix bug #897075 for essex 18:36
<jdg> I'll explain more next 18:36
<clayg> ... sry, I sort of assumed that... I meant to say tell me what you think "Block Storage as a Service" means? 18:36
<jdg> I'll get there next... 18:36
<jdg> Ok, so I'll create a branch and try to get started on this.  I may need a quick run-down from folks more familiar. 18:37
* clayg bubbles with excitement 18:37
<jdg> If bcwaldon or somebody else wants to give me a quick overview later that would be great. 18:37
<jdg> Ok clayg  :) 18:37
<bcwaldon> jdg: I'm free later 18:37
<jdg> Great thanks! 18:37
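[Editor's note] The id-to-UUID change agreed on above (bug #897075) can be sketched as follows. This is an illustrative sketch only, not the actual Nova patch: `generate_volume_id` and `ec2_style_name` are hypothetical helper names, and the real work also involved a schema migration and an int/uuid mapping table for EC2 API compatibility.

```python
import uuid

def generate_volume_id():
    """Return a new volume id as a random UUID string.

    Replaces the auto-incremented integer primary key, which is both
    predictable per-user and prone to collisions across environments.
    """
    return str(uuid.uuid4())

def ec2_style_name(volume_id):
    """Hypothetical helper: derive a legacy 'vol-xxxxxxxx' display name
    from the UUID so existing tooling keeps something short to show."""
    return 'vol-' + volume_id.replace('-', '')[:8]

vid = generate_volume_id()
print(vid, '->', ec2_style_name(vid))
```

As DuncanT notes below, backends keying their own databases on the volume id mainly need the new format's length (36 characters for a hyphenated UUID) to be well defined.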
<jdg> #topic BSaaS 18:38
*** openstack changes topic to "BSaaS" 18:38
<jdg> So I think this has come up before with mixed feelings from folks, but... 18:38
*** dtroyer has joined #openstack-meeting 18:38
<YorikSar> We (with ogelbukh) did some drawing and writing on this one 18:38
<jdg> There's been some more thought about spinning Block Storage out into its own project separate from Nova 18:38
*** littleidea has joined #openstack-meeting 18:38
<ogelbukh> probably reinvented the wheel here 18:39
<YorikSar> We believe that can be done the Quantum way 18:39
*** aweiss has quit IRC 18:39
<jdg> YorikSar: thanks for the link 18:40
<YorikSar> So that it can become easier to add more backends and protocols etc 18:40
<DuncanT> So what would be left in nova-volume? 18:40
<ogelbukh> jdg: I think it's the fate of Lunr that has caused this confusion 18:40
<YorikSar> Nothing :) 18:40
<jdg> ogelbukh: exactly 18:40
<ogelbukh> DuncanT: we actually thought of making a VolumeManager 18:40
<jdg> So one thing I've run into is there seem to be pockets of work/ideas around this 18:41
<ogelbukh> that can replace nova-volume 18:41
<renuka> i worry about complicating things ... we don't have enough contributors, so I tend to feel the simpler the code, the better... 18:41
<ogelbukh> like Quantum replaces nova-network 18:41
<YorikSar> I think we can propose a way to switch between nova-volume and (let's say) Lunr 18:41
<YorikSar> And then cut nova-volume out entirely 18:41
<jdg> renuka:  ultimately wouldn't it make the code "easier" as you suggest to separate it? 18:41
* clayg has no idea how quantum "works" 18:41
<YorikSar> It will be easier 18:42
<YorikSar> I believe in this one :) 18:42
<jdg> The growing pains are tough, but the end result would be better I believe 18:42
<renuka> jdg: if someone can dedicate enough time to it to make it work well... people are already familiar with nova-volume, there is a knowledge base. 18:42
<YorikSar> We can do a lot of abstraction and reuse if we rearchitect things a bit 18:42
<renuka> ... for the record, I know this is shortsighted, but given the size of the community contributing to volumes at the moment, it seems like a big task 18:43
<jdg> renuka: I agree, but I think there's a growing interest here 18:43
<renuka> YorikSar: what is the most painful thing about nova volume right now 18:43
<ogelbukh> we have at least 6 months in incubation 18:43
<ogelbukh> and probably more 18:43
<renuka> can it be fixed by Vlad's suggestion for the volume scheduler 18:43
<jdg> I also believe that if it was a first-class citizen on its own it would gain even more attention/interest 18:43
<YorikSar> renuka: For example, there is no way to add another protocol, there is only iSCSI 18:44
<ogelbukh> btw, I heard that Lunr is still in development 18:44
<jdg> ogelbukh: I think Lunr has morphed into something different 18:44
<renuka> you can add drivers for any type of backend 18:44
<DuncanT> Writing a driver that doesn't use iSCSI is easy enough? 18:44
<clayg> YorikSar: rbd? 18:45
<bvanzant> iSCSI is a limitation of the hypervisor, right? 18:45
<ogelbukh> couldn't find anything more specific on this morph, alas 18:45
<clayg> if we're still talking about the host connecting to storage and exposing it to the guest - then any BSaaS would be limited by what is supported in the virt driver 18:45
<renuka> SM on xenserver can connect to a large number of backends, including netapp, nfs, iscsi, etc. 18:45
<YorikSar> clayg: I mean, you can add FibreChannel as an option to the standard driver, but it will be painful to use it in other drivers (e.g. Nexenta) 18:46
<jdg> ogelbukh: Sorry... I believe it's turned more into actually creating an iSCSI target from commodity hardware, i.e. Swift but for block 18:46
<ogelbukh> oh, I see 18:46
<ogelbukh> sounds like Ceph 18:47
<renuka> I think this is a long discussion which ought to happen on the mailing list with more visibility 18:47
<clayg> ogelbukh: not really like ceph, more like the iscsidriver now - but with backups to cold storage (swift) 18:47
<jdg> renuka: Yes, but I wanted to try to get the ball rolling (even if folks throw it at me) 18:47
<YorikSar> Am I wrong in thinking that if we can present a block device in the host system, we can attach it to a VM in any hypervisor? 18:47
<renuka> we should deal with current essex issues like adding tests, finding/fixing bugs 18:47
<ogelbukh> clayg: the idea is to get a client-agent that can connect devices to compute hosts 18:47
<ogelbukh> via any storage protocol 18:47
* clayg 's mind is blown 18:48
<ogelbukh> and make it into the VM like local storage 18:48
<jdg> renuka: Agreed, but the summit is coming up and we should have some sort of plan/goal don't you think? 18:48
<jdg> clayg: sorry, you're right. 18:49
<YorikSar> jdg: I don't really see a difference between "BS for commodity hardware" and nova-volume... It gives block storage spread over some Linux hosts too... 18:49
<renuka> again, I think it's a topic that needs more visibility. And this has been tried before. Folks tend to say they are interested, but it doesn't really translate to contributions.. so perhaps it isn't pinching them enough yet 18:49
<jdg> YorikSar: probably a topic for another discussion 18:49
<vishy> I'm not sure of the point of rewriting a new BSaaS as opposed to breaking out nova-volume 18:49
<DuncanT> If what is being proposed is to rename nova-volume, make it a first-class citizen and then grow it organically, that seems quite reasonable 18:50
<jdg> vishy: So maybe that's what it ends up being, I'm not proposing a specific "plan" or "design" 18:50
<YorikSar> vishy: As I said, the wiring protocols are a good example where just separating nova-volume is not enough 18:50
<jdg> Just the concept of separating it 18:50
<clayg> idk, it sounds like the idea of a client-agent is dramatically different than what nova-volume is now 18:50
*** dhellmann_ has joined #openstack-meeting 18:50
*** renuka_ has joined #openstack-meeting 18:50
<vishy> YorikSar: wiring protocols? 18:51
<YorikSar> vishy: iSCSI, FibreChannel, etc 18:51
<ogelbukh> clayg: it's about removing volume code from the virt driver actually 18:51
<jdg> Once you have an API into the "volume" code, however, doesn't it make life easier to do things like add protocols etc? 18:51
<ogelbukh> making it more lightweight 18:51
*** dhellmann_ has joined #openstack-meeting 18:51
<ogelbukh> I believe I've seen a couple of lines on it just today 18:51
<clayg> with an agent that has to run in a guest environment that I don't really need/want to log into?  It's one way to do it... 18:52
<vishy> YorikSar: we already support different protocols 18:52
<vishy> YorikSar: I don't see why we would need to rearchitect for that 18:52
<DuncanT> YorikSar: Anything that presents as a block device is supported now 18:52
<clayg> DuncanT: only as far as the virt layer supports that connection type 18:52
<DuncanT> Are there any that don't support raw devices? I wasn't aware of any 18:53
<YorikSar> I don't see a way to mix protocols in one driver... 18:53
*** renuka has quit IRC 18:53
*** dhellmann has quit IRC 18:54
*** dhellmann_ is now known as dhellmann 18:54
<vishy> YorikSar: you can pass back whatever you want in initialize_connection 18:54
<clayg> DuncanT: "raw devices" oh erm... I'm not sure... like currently libvirt uses iscsiadm to connect to a remote target, and xen uses xapi to make the calls to set up the iscsi SR... 18:54
<vishy> and as long as there is corresponding logic on the compute side it will work 18:54
<vishy> so one driver could pass back iscsi/rbd/sheepdog/... 18:55
<YorikSar> vishy: But what if we want a volume to be accessible over any protocol? It can be helpful in any environment 18:55
<clayg> DuncanT: I hadn't really thought of the storage already existing as a "raw device" on the hypervisor?  What's the connection type for that? 18:55
<YorikSar> vishy: s/any/mixed/ 18:55
<DuncanT> clayg: We set up complex dm devices and pass them back via discover (diablo not essex, some of the driver names have changed a bit) 18:55
<vishy> YorikSar: then we just have to extend initialize_connection to allow you to specify a type of connection 18:56
<clayg> "we set up complex dm devices" during attach, or is this preconfigured on the hypervisor? 18:56
<DuncanT> clayg: on attach 18:56
<clayg> DuncanT: so... doesn't nova-compute need to know how to do all that? 18:57
<YorikSar> vishy: Then we need to somehow track information about the compute host (it can do one protocol, but not another) 18:57
<clayg> er... does nova-volume run on every compute node!? 18:57
<YorikSar> vishy: And I think that this should be controlled by a separate agent, not nova-compute 18:57
<vishy> it should explicitly try to make one type of connection 18:58
<DuncanT> clayg: We only have one instance of nova-volume, it passes all the work via rpc to our backend 18:58
<vishy> running another agent on the compute host seems excessive 18:58
<clayg> DuncanT: sigh... so but... then... who exactly is setting up the "complex dm devices" on the compute node during attach :D 18:58
<clayg> DuncanT: sorry, I just find this very interesting... 18:59
<YorikSar> Then we end up with volume logic spread over nova-compute and nova-volume 18:59
<vishy> YorikSar: you can't get around that 18:59
<clayg> vishy: ++ 18:59
<vishy> YorikSar: we tried initially 18:59
<vishy> YorikSar: you have too many potential backends on the compute side 18:59
<DuncanT> nova-compute calls the discover method in our driver. It sets up the device(s) 18:59
<vishy> and each backend needs its own logic to connect to volumes 19:00
<jdg> Ok, we're unfortunately running out of time 19:00
<clayg> DuncanT: yes got it!  I remember that in diablo nova-compute used to have an instance of the volume driver 19:00
<YorikSar> vishy: Well, then I have another card to draw. What about usability as a stand-alone service? 19:00
<clayg> YorikSar: this is actually an interesting use case 19:01
vishyYorikSar: the point of separating nova-volume is to turn it into BSaaS19:01
DuncanTclayg: I admit I haven't looked very carefully at essex recently19:01
vishyown repo, on rest endpoint, own api, own extensions19:01
DuncanTclayg: I'll get somebody here to look and check our approach still works :-)19:01
claygwhoa... DuncanT has "people" for that sort of thing.19:02
YorikSarvishy: if we keep logic in compute (not in agent), we can not reuse it for some other service19:02
vishyYorikSar: It could even be rearchitected in the way you suggest, I just don't think you need to start from scratch19:02
jdg+1 Start by separating19:02
vishyYorikSar: if there is some common code that could live in BS service and be imported by nova-compute I'm all for it19:02
ogelbukhvinayp: we definitely were not thinking of it in this way19:02
vishyI just don't think there is much reuse there19:02
vishylook for example at the iscsi code in libvirt vs xen19:03
vishy0 shared code19:03
claygvishy: and not a lot of oppertunity to share either, the hv's approach it differently19:03
*** GheRivero_ has quit IRC19:03
vishyyou could in nova compute have from bssass.hyepervisor.drivers import libvirt19:03
ogelbukhsorry, it was for you vishy19:03
YorikSarYes, but isn't iscsi driver ensure connection anyway?19:04
claygvishy: maybe better to leave that to the guys that know the hypervisors (i.e. nova.virt)19:04
vishybut then the authors of bsaas have to understand all potential hypervisors19:04
*** nati2 has joined #openstack-meeting19:04
claygahahahahhghghghgh no!19:05
vishyclayg: that is my thinking, define a common interface for what will be requested and returned19:05
vishya la initialize_connection19:05
claygyup it's pretty good IMHO19:05
vishythen the hypervisor maintainers in nova can figure out how to connect to the different potential exports19:05
claygYorikSar: what do you think?  Long term this is big limitation?19:05
claygseems like the most pragmatic approach to me19:06
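The interface vishy and clayg converge on above — the storage service only describes *how* to reach a volume, and each hypervisor driver in nova decides how to consume that description — could be sketched roughly like this. All class names and the shape of the connection info here are illustrative stand-ins, not the actual nova/bsaas code:

```python
# Hypothetical sketch of the common interface discussed above.  The storage
# service reports transport details; it never touches the hypervisor.

class VolumeService:
    """Stand-in for the block-storage side of the split."""

    def initialize_connection(self, volume_id, connector):
        # Returns a description of the export; the hypervisor driver
        # decides what to do with it.
        return {
            "driver_volume_type": "iscsi",
            "data": {
                "target_portal": "192.168.0.10:3260",
                "target_iqn": "iqn.2012-02.org.example:volume-%s" % volume_id,
                "target_lun": 1,
            },
        }


class LibvirtDriver:
    """Stand-in for a hypervisor driver that owns its own attach logic."""

    def attach_volume(self, connection_info, instance, mountpoint):
        if connection_info["driver_volume_type"] != "iscsi":
            raise NotImplementedError("only iscsi shown in this sketch")
        data = connection_info["data"]
        # A real driver would log in to the target here (iscsiadm etc.);
        # a xen driver would instead hand this off to a xenapi plugin.
        return "attached %s at %s" % (data["target_iqn"], mountpoint)


service = VolumeService()
info = service.initialize_connection("vol-1", connector={"ip": "10.0.0.5"})
print(LibvirtDriver().attach_volume(info, "i-1", "/dev/vdb"))
```

The point of the shape: the storage service never needs to know which hypervisor is on the other end, which is exactly the "let the hypervisor maintainers figure out how to connect" division of labor described above.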
YorikSarvishy: I think, we should be able to provide external interface like "make that volume attached to current host", so that someone who does not know anything about iSCSI could get a volume.19:06
claygI'm scared of the agent based attach, even just running a shared storage pool that has direct connection to running guests is scary (much easier to just have connectivity to the hv)19:06
vishyYorikSar: that is fine for libvirt19:07
vishyYorikSar: but xen doesn't work that way19:07
vishyYorikSar: everything has to be implemented as a xenapi plugin19:07
vishyYorikSar: because all extra code runs in a vm (nova-compute nova-network, etc.)19:07
vishyYorikSar: I initially tried to do it exactly that way, but you can't expect that every hypervisor is running the same code on the host19:08
YorikSarvishy: Hm... I was talking about the world out of Nova19:08
vishyso you really have to let the hypervisor control the BS connection19:08
YorikSarvishy: But I get the point19:08
YorikSarvishy: We can later add an option "attach to host" along with that agent19:08
vishyYorikSar: providing general client code to connect to volumes seems excellent19:09
vishyYorikSar: we could even put it in python-xxxxclient19:09
vishyand the hypervisors could use it where it makes sense19:09
vishyyou could even have guests connecting on their own19:10
clayghuh, that's interesting...19:10
*** dendro-afk is now known as dendrobates19:10
jdgIt sounds like we have a consensus to move forward with this, yes?19:10
YorikSarvishy: I still insist on an agent for this so that it could be used as a stand-alone service with persistent attachments etc. But make it optional, to let the user delegate the attachment burden to their own code (like a xenapi plugin)19:11
vishy+1 to having generic connection code, I'm just not convinced that it will work for all hypervisors, so I think you need to let the hypervisors optionally use it.19:11
jdgRenuka dropped off unfortunately19:11
vishyYorikSar: sure seems very useful19:11
renuka_no i am here19:11
vishyYorikSar: I would say that is priority 2 vs getting all of the other stuff working19:11
DuncanTSounds like we all basically want the same thing, just different priorities on the layers19:12
vishysolidifying api and extensions, getting the code separated, etc.19:12
jdgSo I have a proposal...19:12
YorikSarSounds like we found common ground to start with :)19:12
jdgCan we agree as DuncanT pointed out to start laying a plan19:12
*** joearnold has joined #openstack-meeting19:13
jdgWe can phase things over time, prioritize separation for Folsom19:13
vishyjdg: yes, I was going to propose a discussion at the summit19:13
jdgvishy: great19:13
YorikSarDo we want to force separation in Folsom?19:13
jdgI would also like to get a discussion going via email19:13
vishyjdg: I think we can separate into a new repo in the first couple of weeks19:13
jdgAs Renuka suggested to get more buy in from everybody19:13
YorikSarI mean, shouldn't we keep nova-volume deprecated for one release?19:14
vishyYorikSar: we can leave existing nova-volume in19:14
vishyYorikSar: but I really think we can complete the separation pretty quickly19:14
vishywe already have the api separated, need a few extensions, etc.19:14
vishywaldon and I are planning on improving the documentation of the api a bit so that people can start using it.19:15
DuncanTCan we make nova-volume a shim that just calls our new stuff?19:15
vishyDuncanT: My plan is to replace all of the volume_api calls in nova19:15
vishywith a little wrapper that imports python-xxxxclient19:16
*** anotherjesse1 has left #openstack-meeting19:16
vishyand makes the calls through the client19:16
DuncanTvishy: Clearly you've thought more about this than me :-)19:16
vishyI've been planning this out for about 6 months19:16
YorikSarI think, after the start of separation, user will have to run bsaas-api alongside the nova-api and bsaas-storage-agent instead of nova-volume - and that's it19:16
claygvishy: attach still goes to compute endpoint yes?19:16
vishyclayg: yes19:16
vishyso now it goes19:17
vishyattach -> volume_api -> initialize_connection19:17
vishyit will go19:17
vishyattach --> volume_shim -> python-xxxclient -> volume_api -> initialize_connection19:17
vishyonce that is done volume_api can be running from an external repo no problem19:18
vishyso basically it is making sure all of the calls that are just going over the queue are going over the rest api instead19:18
claygvishy: awesome19:19
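The shim flow vishy lays out above could look something like this in outline. `XXXClient` stands in for the placeholder python-xxxxclient from the discussion, and the actual HTTP/auth plumbing is omitted — this is a sketch of the call routing, not real OpenStack code:

```python
# Sketch of the "volume_shim" idea: nova keeps calling the same volume_api
# methods it always has, but the shim forwards them over the REST API via a
# python client instead of casting over the internal message queue.

class XXXClient:
    """Stand-in for python-xxxxclient talking to the external volume API."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def initialize_connection(self, volume_id, connector):
        # Real code would issue an authenticated HTTP request to
        # self.endpoint here; we just echo the arguments back.
        return {"volume_id": volume_id, "connector": connector}


class VolumeShim:
    """Drop-in replacement for nova's internal volume_api."""

    def __init__(self, client):
        self.client = client

    def initialize_connection(self, context, volume_id, connector):
        # Before: an RPC cast over the queue to nova-volume.
        # After: a REST call through the client to the external service.
        return self.client.initialize_connection(volume_id, connector)


shim = VolumeShim(XXXClient("http://volume-api.example:8776"))
info = shim.initialize_connection(None, "vol-1", {"ip": "10.0.0.5"})
print(info["volume_id"])  # → vol-1
```

Once every volume_api call in nova goes through a wrapper like this, the service behind the REST endpoint can live in its own repo without nova noticing — which is the decoupling point made just above.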
vishythen xxx-core can rearchitect components if they feel it is necessary19:19
claygso the canonical representation of "guest xyz is attached to volume xyz" is in the volumes service or the nova database?19:19
vishyas long as they maintain consistent api and extensions it is totally decoupled19:19
clayglike on a migration, when the guest is coming up on the new host, where does it look for the list of volumes to make initialize connection calls for?19:20
vishyclayg: that is a good question.  I think based on the current implementation you can reserve a volume19:20
vishyclayg: it is on both sides19:20
jdgapi call in to the volume service?19:20
vishyclayg: compute has a list of block_device_mapping19:20
*** joearnold has quit IRC19:21
vishyclayg: the volumes on the other end should know that they have an active connection from <something>19:21
vishyclayg: and I think the reservation idea allows us to specify a uuid for what is connecting to it19:21
YorikSarI think, there will be some client_migration call in volume code too...19:21
claygvishy: yeah right, but generally it's to the host, so you don't know which guest except for metadata19:21
*** renuka has joined #openstack-meeting19:22
vishyclayg: it probably has to be metadata in the reserve19:22
claygyes, makes sense, thanks19:22
vishyclayg: these are things that need to be hammered out, so hopefully we have a core volume team that owns all of this stuff19:22
vishyin prep for the summit we will document what exists19:22
vishyI will lay out my plan for getting volume into its own repo19:23
vishywe will come up with a code name (< most important part)19:23
jdgvishy: Do you want to do that before the summit or during the summit?19:23
*** joearnold has joined #openstack-meeting19:23
vishyjdg: which?19:23
claygwhat?  cinder already has momentum!19:23
jdgvishy: Lay out the plan19:23
YorikSarShouldn't we consider Lunr vacant now?19:23
jdgYour plan19:23
*** renuka_ has quit IRC19:24
*** jastr has quit IRC19:24
vishyLunr already is in use19:24
*** dhellmann has quit IRC19:24
vishyclayg: i love cinder, apparently there is another opensource project by that name so there is concern19:25
vishyjdg: I can lay out the plan in advance of the summit19:25
jdgvishy: Got it thanks19:25
claygawww man!19:25
YorikSarHm... I still don't get it, what will Lunr do that nova-volume (or xxx) does not19:25
*** letterj has joined #openstack-meeting19:25
vishyjdg: I don't think much will get done on it in advance19:25
claygoh ummm... can someone core review:19:25
vishyLunr is just a backend19:25
clayg^ for python-novaclient19:25
uvirtbot`clayg: Error: "for" is not a valid command.19:25
vishyvolumes on commodity hardware19:25
YorikSarBut nova-volume installed on that hw gives us volumes on it too19:26
*** ayoung-afk has quit IRC19:26
claygYorikSar: lunr is very much like what you currently get from nova-volume19:26
js42vishy: are there any docs or information about lunr or is it proprietary?19:26
*** dhellmann has joined #openstack-meeting19:26
*** Mike656 has joined #openstack-meeting19:26
vishyYorikSar: you could say that it is just a better version of the existing iscsi backend for nova-volume19:27
claygjs42: currently being developed internally at rackspace19:27
vishyYorikSar: if lunr gets opensourced we could just tear out the existing one19:27
Mike656Can nova work without keystone?19:27
claygMike656: noauth works a treat!19:27
vishyor, perhaps the lunr team will pull code in gradually19:27
YorikSarSo we'll get a good version and a better version?..19:28
Mike656clayg: how do they interact?19:28
jdgUnfortunately I have to drop off and end the meeting.19:28
*** sleepsonthefloo has quit IRC19:28
js42clayg: Rackspace proprietary? or are there public design docs?19:28
claygjdg: thanks for putting this together!19:28
claygjs42: there are not19:28
ogelbukhthank you gentlemen19:28
jdgclayg: Thank you .. and everyone else for that matter.19:29
*** sleepsonthefloo has joined #openstack-meeting19:29
jdgThis was good.  So I'll plan on meeting next week as well.19:29
YorikSarWe should do it again19:29
YorikSarjdg: Yes, very good idea19:29
claygjdg: well _yeah_ we can't wait to get a status update on the id -> uuid branch19:29
Mike656How should I arrange nova and keystone to work together?19:29
jdgIt's scheduled as weekly so jump in19:29
DuncanTjdg: I'd like to get boot from volume firmly on the agenda for next week if possible?19:30
jdgclayg: :)  I'll keep you posted, maybe even before next week19:30
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"19:30
openstackMeeting ended Thu Feb 23 19:30:17 2012 UTC.  Information about MeetBot at . (v 0.1.4)19:30
openstackMinutes (text):
* andrewbogott has questions19:30
renukai am most definitely working on boot from volume next week19:30
claygMike656: try asking in #openstack instead of #openstack-meeting?19:30
renukai had a change which was based on diablo19:30
Mike656ok, thanks19:30
*** Mike656 has left #openstack-meeting19:31
jdgIf folks feel there are pressing issues we should discuss before next week perhaps they can propose another meeting between now and then.  Or utilize email?19:31
DuncanTrenuka: I'm interested in hearing about what you're up to. I'm only just getting my head around what is there.19:31
DuncanTrenuka: Some obvious use-cases seem to be missing19:31
renukaDuncanT: are you using it with libvirt or xen?19:31
renukaDuncanT: what i am working on now is to get it working for xen19:32
renukaDuncanT: but i agree it needs more work..19:33
DuncanTrenuka: I'll put some thoughts down in an email, and you can tell me if I appear to be missing the point :-)19:33
ogelbukhI believe there is a specific nova-volume ML we can utilize for questions and thoughts19:33
renukaok i have another meeting now.. we should continue this on email or at the next meeting19:33
DuncanTI have to head off too... bye all19:34
*** renuka has quit IRC19:35
*** novas0x2a|laptop has quit IRC19:44
*** novas0x2a|laptop has joined #openstack-meeting19:45
*** littleidea has quit IRC19:46
*** jastr has joined #openstack-meeting19:48
*** mattray1 has joined #openstack-meeting19:51
*** jakedahn has joined #openstack-meeting19:52
*** mattray has quit IRC19:52
*** dendrobates is now known as dendro-afk19:54
*** mattray1 has quit IRC19:55
*** jdg has quit IRC19:56
*** mattray has joined #openstack-meeting19:56
*** YorikSar has left #openstack-meeting19:59
*** n0ano has joined #openstack-meeting20:01
*** joearnold has quit IRC20:05
n0anoanyone here for the orchestration meeting?20:06
*** jastr has quit IRC20:07
*** thrawn01 has left #openstack-meeting20:07
*** danwent has joined #openstack-meeting20:08
*** mdomsch has quit IRC20:09
*** sriramhere has joined #openstack-meeting20:10
sriramhereany orchestration meeting attendees around?20:11
n0anojust us chickens20:12
n0anosriramhere, did you have anything you wanted to discuss today?20:13
sriramhereDon Dugger had a proposal of having an Orchestration session during the summit - to kind of revive20:14
sriramherei would like to follow up on that20:14
openstackMeeting started Thu Feb 23 20:15:24 2012 UTC.  The chair is n0ano. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic.20:15
n0ano#topic orchestration session at Folsom summit20:15
*** openstack changes topic to "orchestration session at Folsom summit"20:15
n0anoWell, Don Dugger would be me :-)20:15
sriramhereOh Sorry Don, this is my first time, and didn't get your irc id20:16
n0anoNP, I like to confuse people (clearly it's working :-)20:16
n0anoI think we need to setup a blueprint and make a proposal for the summit20:16
sriramheredo we have any drafts yet?20:17
sriramhereor previous bps?20:17
sriramherei am looking at
n0anoI haven't done anything yet, I believe that Sandy Walsh had one for the Essex summit, we could redo that for Folsom20:18
sriramhereShall i take a first crack at it?20:18
sriramhereand u can do a review?20:18
sriramheredisclaimer - i haven't done any bps yet. i am getting involved only now, but would like to take it on, if that's ok with u/ the team20:19
n0anoThat would certainly work for me.20:19
n0anowe seem to work on the swim lesson plan, throw people in the deep end and see if they swim20:19
*** mikeyp has joined #openstack-meeting20:20
sriramherelet me get a version by Monday, so we go through review iterations a couple of times before next thurs - is that a reasonable time line?20:20
n0anoThere seem to be 2 separate areas for orchestration, scheduler enhancements (what I'm interested in) and message/process serialization.20:20
n0anosure, will you be talking specifically about scheduler or also address the serialziation issues?20:21
sriramherei will start with scheduler. my initial gut tells me it might require two. i might be wrong20:21
sriramherei will start with that, then may be combine both if thats correct20:22
n0anolet's start from there and see where we go, I think that'll be fine.20:22
mikeyp(sorry, I'm late)20:22
n0anoNP, we're talking about creating a blueprint for an orchestration session at the Folsom summit20:22
n0anosriramhere, has agreed to start one by Monday that we can then review20:23
*** mdomsch has joined #openstack-meeting20:23
mikeypThe Essex blueprint is still valid for the most part.20:23
n0anothat's what I was thinking, just update that a little bit for Folsom20:24
n0anodo you have a direct link to the Essex blueprint?20:24
sriramhereanything specific that is missing?20:24
sriramhereno i don't - can u please point me to that?20:24
mikeypThe essex presentation is also archived #link
n0anoso, shall we have sriramhere update the Orch. BP and see if we can use that as a basis for a talk at the Folsom summit?20:28
*** jastr has joined #openstack-meeting20:29
mikeypsounds like a plan.  maoy and beekhof both had proposals on implementation, so they will likely have some feedback20:30
sriramheregr8, what more do i need to watch out for / look for?20:31
mikeypI think it will mostly involve the implications of the tactical scheduler enhancements20:32
mikeypdon't have much real world experience with those yet20:32
sriramherei ll check out existing systems like yagi. and try to include the enhancements20:33
sriramheredo we need to worry abt scoping now?20:34
n0anowhat was your concern?20:34
*** sandywalsh has quit IRC20:35
sriramheren0ano - is that question for me or mikeyp?20:36
n0anosriramhere, sorry, yes was to you, what is your scoping concern?20:36
sriramherescoping - i wanted to know if the BP should include any scoping - for instance features only for Folsom, or can start big, and can be contained/ scoped as Folsom, Folsom+ features20:38
*** sandywalsh has joined #openstack-meeting20:38
sriramherescheduler improvements - one can keep doing this forever. my assumption is the BP should contain what we are proposing for Folsom alone20:39
sriramhereand maybe call out some as Folsom+ if they don't appear to fit20:40
n0anoas long as we don't restrict ourselves too much, focusing on Folsom would be good, there will be a G summit for follow-on work20:40
sriramhereok - thats good to know, think big then:)20:41
n0anoin that case:20:41
n0ano#action sriramhere to update Orchestration blueprint for a Folsom session20:42
n0anoanything else we want to discuss20:42
sriramhererequest to give appropriate permissions20:43
sriramherei am good otherwise20:43
n0anoI'm not sure about the permissions issue, we might need someone more knowledgeable about Launchpad than I if there's an issue there20:44
sriramherethanks. cu all20:45
n0anoIn that case I'll thank everyone and I'll be here next week.20:45
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"20:46
openstackMeeting ended Thu Feb 23 20:46:02 2012 UTC.  Information about MeetBot at . (v 0.1.4)20:46
openstackMinutes (text):
*** mikeyp has quit IRC20:46
*** sriramhere has quit IRC20:47
*** Yak-n-Yeti1 has joined #openstack-meeting20:59
*** Yak-n-Yeti has quit IRC21:01
*** anotherjesse1 has joined #openstack-meeting21:05
*** dprince has quit IRC21:07
*** reed has quit IRC21:07
*** jastr has quit IRC21:09
*** zigo has quit IRC21:13
*** jastr has joined #openstack-meeting21:22
*** reed has joined #openstack-meeting21:23
*** adjohn has quit IRC21:23
*** dubsquared has quit IRC21:24
*** dubsquared has joined #openstack-meeting21:24
*** joearnold has joined #openstack-meeting21:29
*** Yak-n-Yeti1 has quit IRC21:29
*** GheRivero_ has joined #openstack-meeting21:29
*** dolphm_ has joined #openstack-meeting21:32
*** Yak-n-Yeti has joined #openstack-meeting21:34
*** dolphm has quit IRC21:35
*** letterj has left #openstack-meeting21:37
*** reed_ has joined #openstack-meeting21:41
*** dolphm_ has quit IRC21:43
*** bvanzant has quit IRC21:43
*** reed has quit IRC21:45
*** anotherjesse1 has quit IRC21:48
*** anotherjesse1 has joined #openstack-meeting21:50
*** heckj has quit IRC22:07
*** GheRivero_ has quit IRC22:12
*** clayg has left #openstack-meeting22:14
*** deshantm has quit IRC22:17
*** mdomsch has quit IRC22:19
*** sandywalsh has quit IRC22:19
*** dolphm has joined #openstack-meeting22:19
*** deshantm has joined #openstack-meeting22:21
*** dolphm has quit IRC22:24
*** oubiwann has quit IRC22:24
*** deshantm has quit IRC22:27
*** martine has quit IRC22:32
*** dubsquared1 has joined #openstack-meeting22:35
*** sandywalsh has joined #openstack-meeting22:35
*** dubsquared has quit IRC22:39
*** adjohn has joined #openstack-meeting22:41
*** n0ano has left #openstack-meeting22:42
*** dolphm has joined #openstack-meeting22:48
*** Yak-n-Yeti has quit IRC22:56
*** markvoelker has quit IRC22:57
*** joearnol_ has joined #openstack-meeting23:28
*** joearnold has quit IRC23:28
*** shang has joined #openstack-meeting23:34
*** x86brandon has quit IRC23:41
*** mattray has quit IRC23:45
*** jastr has quit IRC23:49
*** joearnol_ has quit IRC23:58

Generated by 2.14.0 by Marius Gedminas - find it at!