Friday, 2011-05-06

*** clauden_ has joined #openstack00:02
*** dendrobates is now known as dendro-afk00:04
*** jamiec has quit IRC00:07
*** jwilcox has joined #openstack00:08
*** jwilcox has joined #openstack00:10
*** jwilcox is now known as japanjeff00:11
*** adjohn has joined #openstack00:11
*** monaDeveloper has quit IRC00:11
*** andy-hk has joined #openstack00:12
*** guynaor has joined #openstack00:14
*** japanjeff is now known as jeffjapan_00:17
*** stewart has quit IRC00:21
*** neuro_damage has joined #openstack00:22
*** pothos has quit IRC00:22
*** guynaor has left #openstack00:24
*** Ryan_Lane has quit IRC00:27
*** MarkAtwood has joined #openstack00:31
*** kashyap has quit IRC00:34
*** odyi has quit IRC00:36
*** Ryan_Lane has joined #openstack00:37
*** aliguori has joined #openstack00:38
*** adjohn has quit IRC00:41
*** johnpur has quit IRC00:42
*** pguth66 has quit IRC00:43
*** aliguori has quit IRC00:43
*** kashyap has joined #openstack00:50
*** odyi has joined #openstack00:51
*** odyi has joined #openstack00:51
*** jdurgin has quit IRC00:52
*** vernhart has quit IRC00:52
*** mahadev has quit IRC01:00
*** rchavik has quit IRC01:02
*** rchavik has joined #openstack01:03
*** ChameleonSys has quit IRC01:04
<Ryan_Lane> This blueprint could use some love, if others want to make it better:
*** jbryce has quit IRC01:14
*** woleium has quit IRC01:14
*** MarcMorata has joined #openstack01:16
*** larry__ has joined #openstack01:17
*** MotoMilind has quit IRC01:19
*** taihen_ has joined #openstack01:20
*** carlp__ has joined #openstack01:20
*** larzy has quit IRC01:20
*** syah has quit IRC01:20
*** carlp_ has quit IRC01:20
*** Daviey has quit IRC01:20
*** taihen has quit IRC01:20
*** jpuchala has quit IRC01:20
*** syah has joined #openstack01:21
*** jpuchala has joined #openstack01:22
*** Daviey has joined #openstack01:24
*** mgoldmann has quit IRC01:24
*** jmckind has quit IRC01:24
*** yamahata_lt has joined #openstack01:37
*** ChameleonSys has joined #openstack01:37
*** obino has quit IRC01:38
*** Ryan_Lane has quit IRC01:40
*** BK_man has quit IRC01:41
*** obino has joined #openstack01:41
*** mray has joined #openstack01:42
*** clauden_ has quit IRC01:45
*** jmckind has joined #openstack01:47
*** mahadev has joined #openstack01:52
*** stewart has joined #openstack01:54
*** NelsonN has quit IRC01:54
*** stewart has quit IRC01:54
*** stewart has joined #openstack01:55
*** mray has quit IRC01:55
*** jeffjapan has joined #openstack01:58
*** mahadev has quit IRC01:59
*** jeffjapan_ has quit IRC02:00
*** santhosh has joined #openstack02:00
*** mahadev has joined #openstack02:06
*** woleium has joined #openstack02:12
*** jeffjapan has quit IRC02:15
*** jeffjapan has joined #openstack02:18
*** woleium has quit IRC02:23
*** mahadev has quit IRC02:24
*** mahadev has joined #openstack02:28
*** masudo has quit IRC02:30
*** masudo has joined #openstack02:31
*** obino has quit IRC02:31
*** hadrian has quit IRC02:35
*** kashyap has quit IRC02:38
*** dmi_ has quit IRC02:38
*** Zangetsue has joined #openstack02:40
*** dmi_ has joined #openstack02:42
*** miclorb_ has quit IRC02:43
*** brobergj has joined #openstack02:43
*** miclorb__ has joined #openstack02:43
*** dmi_ has quit IRC02:47
*** alekibango has quit IRC02:48
*** santhosh has quit IRC02:48
*** mahadev has quit IRC02:59
*** mahadev has joined #openstack03:05
*** mdomsch has joined #openstack03:06
*** larry__ has quit IRC03:12
*** mahadev has quit IRC03:15
*** mray has joined #openstack03:31
*** mray has quit IRC03:31
*** MarkAtwood has left #openstack03:55
*** santhosh has joined #openstack04:05
<uvirtbot> New bug: #778269 in nova "ImageNotFound exception refers to non existant variable 'image_id'" [Undecided,New]
*** santhosh has quit IRC04:27
*** santhosh has joined #openstack04:28
*** santhosh has joined #openstack04:28
*** yamahata_lt has quit IRC04:29
<uvirtbot> New bug: #778271 in nova "Cant install virtualenv in natty/python2.7" [Undecided,New]
*** andy-hk has quit IRC04:43
*** KnuckleSangwich has quit IRC04:45
*** f4m8_ is now known as f4m804:47
*** mahadev has joined #openstack04:47
*** mahadev has quit IRC04:49
*** gregp76 has joined #openstack04:51
*** hagarth has joined #openstack04:53
*** mahadev has joined #openstack04:55
*** kashyap has joined #openstack04:56
*** mahadev has quit IRC05:00
*** crescendo has quit IRC05:05
*** mahadev has joined #openstack05:06
*** adjohn has joined #openstack05:11
*** vernhart has joined #openstack05:17
*** obino has joined #openstack05:26
*** crescendo has joined #openstack05:29
<uvirtbot> New bug: #778282 in nova "nova-manage doesn't report problem information in case of error with creation network" [Undecided,In progress]
*** jamiec has joined #openstack05:45
*** omidhdl has joined #openstack05:46
*** naehring has joined #openstack05:52
*** jmckind has quit IRC06:04
*** crescendo has quit IRC06:09
*** thickskin has quit IRC06:19
*** crescendo has joined #openstack06:23
*** thickskin has joined #openstack06:23
*** naehring has quit IRC06:26
*** naehring has joined #openstack06:26
*** allsystemsarego has joined #openstack06:29
*** allsystemsarego has joined #openstack06:29
*** omidhdl has quit IRC06:32
*** Ryan_Lane has joined #openstack06:32
*** omidhdl has joined #openstack06:33
*** nerens has joined #openstack06:42
*** mahadev has quit IRC06:43
*** MarkAtwood has joined #openstack06:43
*** gregp76 has quit IRC06:44
*** brobergj has quit IRC06:45
*** dendro-afk is now known as dendrobates06:48
*** omidhdl has quit IRC06:48
*** omidhdl has joined #openstack06:49
*** RickB17 has quit IRC06:57
*** RickB17 has joined #openstack06:58
*** smoser has quit IRC06:58
*** smoser has joined #openstack07:00
*** omidhdl has quit IRC07:01
*** omidhdl has joined #openstack07:01
<fabiand> pgregory: ah nice.07:02
*** fabiand__ has joined #openstack07:06
*** jtimberman has quit IRC07:07
*** jtimberman has joined #openstack07:09
*** allsystemsarego has quit IRC07:16
*** hagarth has quit IRC07:16
*** dendrobates is now known as dendro-afk07:17
*** miclorb__ has quit IRC07:25
*** hagarth has joined #openstack07:26
<fabiand__> pgregory: so did I understand it correctly that you just installed nova and glance? or just nova?07:26
*** freeflyi1g has joined #openstack07:26
*** Ryan_Lane has quit IRC07:26
*** freeflying has quit IRC07:29
*** MarkAtwood has quit IRC07:31
*** nacx has joined #openstack07:40
*** toluene has joined #openstack07:43
*** jdurgin has joined #openstack07:44
<toluene> hi openstack guys! I have installed the openstack by following the instruction in I'm now going through the However, I have a problem in registering the image. The system returns "Unable to run euca-describe-imges, Is euca2ools env set up ?". Can somebody help me?07:46
*** heden has joined #openstack07:50
<heden> Anyone know why my uploaded AMI file from uec-publish-tarball gets registered with Glance but the state of the image is QUEUED?07:51
*** e1mer has joined #openstack07:54
*** e1mer has joined #openstack07:54
*** viirya has quit IRC07:55
*** jdurgin has quit IRC07:56
*** jeffjapan has quit IRC07:56
*** koolhead11 has joined #openstack07:57
*** jeffjapan has joined #openstack07:58
*** naehring has quit IRC07:58
*** omidhdl has quit IRC07:59
*** omidhdl has joined #openstack08:00
*** keds has joined #openstack08:06
<zykes-> spectorclan: around?08:06
*** allsystemsarego has joined #openstack08:09
*** viirya has joined #openstack08:13
*** naehring has joined #openstack08:13
*** santhosh has quit IRC08:16
<pgregory> fabiand__: sorry for the slow reply, I installed nova first, got that all working, then installed glance.08:19
<RichiH> btw, is anyone from jenkins in here, too?08:20
<zykes-> does swift have built in CDNs ?08:21
<fabiand__> pgregory: no problem. and cheers .. :)08:21
*** SwiftestGuy has joined #openstack08:31
<SwiftestGuy> Good morning people =)08:32
*** zenmatt has quit IRC08:33
*** jeffjapan has quit IRC08:34
*** toluene has quit IRC08:35
<fabiand__> pgregory: I also got it up and running - that went smooth :)08:39
<fabiand__> I struggled to get ostack on fedora up and running ... on ubuntu it worked within an hour ..08:40
<pgregory> fabiand__: cool, it's not too bad, and the more familiar you get with the way things work, the easier it becomes.08:40
<pgregory> I also installed the Dashboard from source, and that was painless too.08:40
<pgregory> haven't got volumes working yet, or multiple compute nodes.08:41
<fabiand__> Yep, I've also installed the dashboard - also working. very neat.08:46
<fabiand__> So much going on ..08:46
<fabiand__> Multiple nodes is also an outstanding item for me.08:46
* pgregory just needs 1/some decent machines to test it on properly, still running instances in qemu, which is slow.08:47
*** cromartie-x182 has joined #openstack08:57
*** cromartie-x182 has left #openstack08:57
*** daveiw has joined #openstack09:00
*** xavicampa has joined #openstack09:04
*** kaz_ has quit IRC09:04
*** CloudChris has joined #openstack09:23
*** CloudChris has left #openstack09:24
*** bkkrw has joined #openstack09:25
*** Zangetsue has quit IRC09:28
*** kashyap has quit IRC09:29
<SwiftestGuy> any swift developers in da hood? :D09:29
<SwiftestGuy> i might need some information09:31
*** kashyap has joined #openstack09:32
*** carlp__ has quit IRC09:34
<SwiftestGuy> is anybody implementing new api compliance on swift?09:35
*** carlp__ has joined #openstack09:36
*** SwiftestGuy has left #openstack09:42
*** Zangetsue has joined #openstack09:45
*** Vek has quit IRC09:45
*** dh has joined #openstack09:58
*** xavicampa has quit IRC10:03
*** holoway has quit IRC10:03
*** mattrobinson has quit IRC10:03
*** pquerna has quit IRC10:04
*** Guest74692 has quit IRC10:04
*** RoAkSoAx has quit IRC10:04
*** fabiand has quit IRC10:11
*** krish|wired-in has joined #openstack10:15
*** xavicampa has joined #openstack10:16
*** colinnich_ has quit IRC10:24
*** colinnich has joined #openstack10:24
*** zul has quit IRC10:39
*** zul has joined #openstack10:40
*** krish|wired-in has quit IRC10:56
*** krish|wired-in has joined #openstack10:56
*** heden has quit IRC11:18
*** miclorb has joined #openstack11:19
*** naehring has quit IRC11:32
*** santhosh has joined #openstack11:34
*** koolhead11 has quit IRC11:35
*** kashyap has quit IRC11:36
*** pquerna has joined #openstack11:38
*** adjohn has quit IRC11:39
*** ctennis has quit IRC11:41
*** Vek has joined #openstack11:42
*** kashyap has joined #openstack11:46
*** markvoelker has joined #openstack11:47
*** ctennis has joined #openstack11:56
*** ctennis has joined #openstack11:56
*** antenagora has joined #openstack12:01
*** bkkrw has quit IRC12:03
*** kashyap has quit IRC12:04
*** bkkrw has joined #openstack12:18
*** guynaor has joined #openstack12:20
<uvirtbot> New bug: #778463 in glance "'No module named paste' error when installing glance (apt-get glance)" [Undecided,New]
*** kashyap has joined #openstack12:21
<Joelio> Hi! I've inherited a bunch of Dell 2950s and I'd like to put them towards an openstack cluster. What's the recommended deployment option for object storage now? I'd love to use distributed hashed storage like Ceph or Sheepdog but I'm unsure as to their maturity. This is a test cluster but I'd still like it to be stable (it's part of my masterplan to introduce openstack to our core services.. muahahahaaa!)12:23
*** bkkrw has quit IRC12:23
<Joelio> I'm happy to use shared storage, but iSCSI, AoE etc..? Recommendations on a hard-disk-shaped postcard please :)12:24
<notmyname> Joelio: for object storage, look at swift ( It's the code that runs Rackspace's Cloud Files and a few other companies' public cloud storage systems12:25
<notmyname> for block storage, check out the nova volume manager or help jump in to projects that are just getting started (like lunr)12:25
<Joelio> +notmyname: Sure, that's the plan, but presumably that needs some backend storage?12:26
<notmyname> swift is the storage system. all you need are drives with a filesystem that supports xattrs. we use/recommend xfs12:26
* Joelio admits to being a newb on this12:26
*** zns has joined #openstack12:26
<Joelio> +notmyname: Is that effectively hashed across the swift nodes then?12:27
<notmyname> (we in this case == Rackspace)12:27
<notmyname> yes it is12:27
<Joelio> ah ok, cool12:27
<notmyname> and you can grow/shrink a cluster easily12:27
<Joelio> I take it I can run the compute and storage nodes on the same system then too? And it's redundant too?12:28
<notmyname> there is no good integration of running nova and swift on the same cluster. we (openstack people) all want that, but for now it doesn't do anything special12:29
<notmyname> that is, you don't gain anything currently except dual use of your hardware12:29
<notmyname> there isn't yet anything like computing based on the locality of your data in swift12:30
<notmyname> for example12:30
<Joelio> .. plus what about different disk size geometry. Most have 146GB SAS in RAID1, but some are 500GB SATA etc.. do they show up as just one big, well, LUN effectively? Cheers for the answers, most appreciated12:30
<notmyname> but yes, swift is redundant. swift stores, by default, 3 copies of each piece of data12:30
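The "3 copies" behaviour notmyname describes comes from swift's ring, which deterministically maps each object path to multiple devices. A toy illustration of the idea (this is not the real ring code, which works over partitions with zone-aware placement; device names here are made up):

```python
import hashlib

def place_replicas(account, container, obj, devices, replicas=3):
    """Toy stand-in for swift's ring: hash the object path and pick
    `replicas` distinct devices, in a deterministic order."""
    path = "/%s/%s/%s" % (account, container, obj)
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    start = digest % len(devices)
    # walk the device list from the hashed start point
    return [devices[(start + i) % len(devices)] for i in range(replicas)]

nodes = place_replicas("AUTH_test", "photos", "cat.jpg",
                       ["dev1", "dev2", "dev3", "dev4"])
```

Because the mapping is a pure function of the path and device list, any proxy can recompute where an object's copies live without a central lookup.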
*** larry__ has joined #openstack12:30
* Joelio goes off to PXE boot some Ubuntu :)12:30
<notmyname> I think you are getting a little confused about nova storage and swift storage. swift is not a block storage system12:31
<Joelio> Yes, probably!12:31
<Joelio> swift == S3 right12:32
<Joelio> Nova == Block storage for VMs12:32
<notmyname> nova == compute (similar to EC2)12:32
<Joelio> Glance is the block IO?12:33
*** jmckind has joined #openstack12:33
<notmyname> the closest to EBS is currently either the nascent project lunr (not yet officially part of openstack but being written by some of the original swift devs) or the existing nova volume manager stuff12:34
*** mdomsch has quit IRC12:34
<notmyname> glance is a system for managing the VM images in nova (and optionally storing them in swift)12:34
*** allsystemsarego_ has joined #openstack12:35
<notmyname> and to go back to a previous question (maybe I misread it earlier), different size storage volumes are supported in swift12:36
<notmyname> but they aren't exposed as a block device. swift has a REST API and the total cluster size is not exposed to the client. from the client perspective, it's supposed to be unlimited12:36
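The REST API notmyname mentions addresses every object by an account/container/object path under the storage URL returned by auth. A minimal sketch of building such a request (the host, account, and token values here are hypothetical placeholders):

```python
from urllib.request import Request

def object_request(storage_url, token, container, obj, method="GET"):
    """Build a swift-style object request: the /v1/<account> prefix comes
    from the storage URL handed back by auth; the token goes in the
    X-Auth-Token header."""
    url = "%s/%s/%s" % (storage_url.rstrip("/"), container, obj)
    return Request(url, method=method, headers={"X-Auth-Token": token})

req = object_request("http://swift.example.com/v1/AUTH_test",
                     "tok123", "photos", "cat.jpg")
```

The same shape with `method="PUT"` and a body uploads an object; nothing in the URL reflects cluster size, which is why the client just sees "unlimited" storage.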
*** miclorb has quit IRC12:36
*** allsystemsarego has quit IRC12:36
<zul> is there like a canonical list of projects affiliated with openstack?12:43
<notmyname> zul: currently, it's what's listed on
<zul> notmyname: thanks12:45
*** hadrian has joined #openstack12:46
*** kashyap has quit IRC12:47
*** dprince has joined #openstack12:51
*** dmi_ has joined #openstack12:51
<dprince> soren: you around?12:52
*** kakella has joined #openstack12:52
*** kakella has left #openstack12:52
<soren> dprince: I am indeed.12:53
<soren> dprince: Fresh glance available in the ppa.12:53
<dprince> dprince: great. So a more general question. Does lp:~openstack-ubuntu-packagers/glance/ubuntu follow glance trunk?12:54
<dprince> soren: Sorry. That was for you...12:55
<soren> dprince: Depends on what you mean by "follow"?12:55
<soren> If you mean "works with", then yes, that's the idea.12:55
<soren> If you mean something else, please elaborate :)12:56
<dprince> soren: Exactly. I want to use the branch that is meant to work with glance trunk the best.12:56
<dprince> soren: likewise. lp:~openstack-ubuntu-packagers/nova/ubuntu works with nova trunk?12:56
<soren> Those are the branches we use for the ppa builds.12:57
<soren> I sent out a stack of e-mails on the subject yesterday.12:57
<soren> I'm sure some of them must have been addressed to the list.12:57
<dprince> soren: Okay. Great. Just saw that. If we move code hosting elsewhere do you plan on moving these as well?12:58
*** aliguori has joined #openstack12:59
<dprince> soren: Excellent. One less thing to move around.12:59
<soren> Hang on, phone call. I'll elaborate in a bit.12:59
<dprince> soren: sure. NP.13:00
<soren> Yay. That was quick.13:00
<soren> Ok, so no, I don't expect to move them.13:00
<soren> They're a cooperative effort between us and Ubuntu, and Ubuntu has a lot of infrastructure (and more is coming) tied into bzr.13:01
<soren> This is at least as much a decision for them.13:01
<soren> Hm... That wasn't exactly accurately phrased.13:02
*** antenagora has quit IRC13:02
<soren> Working on Ubuntu does not require using bzr at all. It's just that the processes we've chosen to use for packaging stuff in Ubuntu happen to be rather entangled with bzr, and I'd rather not have to untangle that.13:03
<soren> Especially because there are more benefits on the way with that sort of setup.13:03
<soren> They are also completely separate repositories, so I don't think it should be much of a problem.13:04
<dprince> soren: Sure. I'm fine w/ that. Thanks for the explanation. Couple more things for you now that I'm mucking in PPA things. Can you check out lp:~dan-prince/glance/nocheck_nodoc?13:04
<soren> dprince: Certainly.13:05
*** citral has joined #openstack13:05
<dprince> soren: Also. When I push branches for review on the PPA stuff should it be to lp:~dan-prince/ubuntu/glance or lp:~dan-prince/glance/ubuntu?13:05
*** citral has quit IRC13:05
<dprince> soren: kind of confuses me...13:05
<soren> dprince: Funny. We actually moved everything *away* from the ubuntu namespace yesterday to reduce confusion :)13:06
<soren> lp:~dan-prince/glance/ubuntu is the answer.13:06
<dprince> soren: well. I just find it odd that my glance code (a different project) goes into the same namespace as the glance PPA code.13:07
*** hagarth has quit IRC13:07
<soren> dprince: That's *exactly* why I didn't put it there to begin with :)13:07
<dprince> soren: Sure. Either way. I'll follow protocol.13:08
<soren> dprince: But the last 9 months have revealed that people have a hard time finding the code if it's outside of glance's (or nova's or swift's) namespace.13:08
<soren> dprince: ...and they end up thinking it's just black box stuff applied out of nowhere.13:08
<dprince> soren: Yeah. My vote would have been keep it under ubuntu and just educate people.13:09
<dprince> soren: but it's fine. really.13:09
<soren> dprince: You've identified a pattern.13:12
<dprince> soren: which one? The sqlite slashes thing?13:13
<soren> dprince: My giving up arguing for what's reasonable.13:13
<dprince> soren: oh. branches. Yep.13:14
<soren> Yes, that too.13:14
<soren> But nevermind that.13:14
*** alex-meade has joined #openstack13:17
*** jmckind has quit IRC13:17
<Joelio> Ok guys,13:17
<Joelio> Could someone recommend a layout for a 4 node cluster please so I can get a better understanding?13:20
<Joelio> I've got an openfiler too to test if I need shared storage13:20
<notmyname> Joelio: describes 1 proxy + 5 storage nodes. in your case you could run either 1 proxy and 3 storage nodes or 1 proxy and 4 storage nodes (the proxy also running the storage servers)13:21
*** dprince has quit IRC13:21
*** krish|wired-in has quit IRC13:22
<soren> Joelio: Are you wanting to set up a swift or a nova cluster? (compute or object storage)13:22
<Joelio> +soren: A compute cluster13:23
<notmyname> Joelio: ah. my mistake then. sorry :-)13:23
<Joelio> n/p :)13:23
<Joelio> Basically I just want to be able to spawn VMs for testing puppet manifests as well as other sysadmin type stuff13:24
<Joelio> .. but I want resiliency where possible13:25
<soren> Nova won't give you much in terms of resiliency (on its own).13:25
<soren> Designing for the cloud means designing for failure.13:26
<soren> ...when we're talking compute, that is.13:26
<soren> You should be able to rely pretty well on Swift.13:27
*** mahadev has joined #openstack13:27
<Joelio> Ok, I appreciate that.. what I'm after is distributed storage for VM images and the possibility to reinstantiate the VM on other nodes in the event of a node failure (doesn't need to be automatic)13:28
<Joelio> with ideally vm storage and compute running on the same system13:28
<soren> I'm not sure why you want to complicate things by adding Nova to the mix.13:28
<soren> Sounds like you just want SAN backed regular VM's. Or sheepdog backed or whatever.13:28
<Joelio> Yea, pretty much!13:29
*** zns has quit IRC13:29
<soren> Joelio: Ok... So do that :)13:29
*** santhosh has quit IRC13:30
*** santhosh has joined #openstack13:30
<Joelio> Can I use openstack in this way or am I completely missing the point13:31
*** zenmatt has joined #openstack13:31
*** jpuchala has quit IRC13:31
*** mahadev has quit IRC13:31
<soren> Joelio: Well, you can, but as I said: I'm not sure why you want to complicate things by adding Nova to the mix. Just use regular VM's?13:32
*** Zangetsue has quit IRC13:34
*** mahadev has joined #openstack13:34
*** omidhdl has quit IRC13:34
*** Zangetsue has joined #openstack13:36
*** j05h has quit IRC13:38
*** dendro-afk is now known as dendrobates13:41
*** arun_ has quit IRC13:46
*** zns has joined #openstack13:48
*** f4m8 is now known as f4m8_13:49
*** jmckind has joined #openstack13:51
*** Zangetsue has quit IRC13:52
*** santhosh has quit IRC13:52
*** j05h has joined #openstack13:53
*** yamahata_lt has joined #openstack13:55
*** amccabe has joined #openstack13:58
*** dprince has joined #openstack13:59
*** zns has quit IRC14:00
*** guynaor has left #openstack14:01
*** jamesurquhart has joined #openstack14:02
*** j05h has quit IRC14:03
*** j05h has joined #openstack14:04
*** arun_ has joined #openstack14:05
<dprince> soren: When patching nova with the PPA patches I get the following hunk offsets:
<dprince> soren: want me to push a branch to fix those?14:06
*** zns has joined #openstack14:06
*** kakoni has quit IRC14:09
<creiht> zykes-: It doesn't have a built in CDN service, but incorporating a CDN service on top of swift isn't that difficult14:13
*** pquerna has quit IRC14:13
*** pquerna has joined #openstack14:13
<soren> dprince: Sure.14:14
<creiht> zykes-: There are public containers that allow public access, but you don't get geographic distribution14:14
*** mahadev has quit IRC14:16
*** shentonfreude has joined #openstack14:19
*** imsplitbit has joined #openstack14:20
*** mray has joined #openstack14:25
*** jkoelker has joined #openstack14:28
*** jmckind has quit IRC14:30
<ccooke> Anyone worked on the XenServer support?14:30
*** mray has quit IRC14:31
<ccooke> I'm trying to work out how I can tell whether the openstack components installed on the Xen dom0 are actually doing anything14:31
*** zns has quit IRC14:32
*** jmckind has joined #openstack14:32
<dabo> ccooke: what specifically are you looking for?14:32
<ccooke> dabo: documentation would be nice :-)14:33
<dabo> ccooke: it always is :) But what do you mean by "doing anything"?14:33
<ccooke> well, says to copy a number of files from the openstack tree onto the Xen host14:34
<ccooke> I'd like some way of finding out if those files are, in fact, doing anything14:34
<dabo> ccooke: Those would be copied to the domU instance, not dom0.14:35
*** bkkrw has joined #openstack14:35
<zykes-> creiht: is it planned to integrate anything like it into swift?14:35
<ccooke> dabo: ah, terminology. Yes.14:35
<ccooke> they have been copied to domU.14:35
<creiht> zykes-: Are you talking about CDN like services, or integration with CDNs?14:36
<dabo> ok, so your domU is configured as described on the wiki page. Now start up the services14:36
*** mray has joined #openstack14:36
<ccooke> dabo: Start which services, where?14:37
<ccooke> (nova services are already started. I am trying to debug things)14:39
<dabo> I usually run a screen session, with each service in its own window. But to run, say, compute, do the following: 1) cd ~/openstack/nova 2) . novarc 3) sudo ./bin/nova-compute --flagfile ../nova.conf14:39
<ccooke> I'm using the packaged versions, which start via upstart14:39
<ccooke> but they are configured and started already14:39
<zykes-> creiht: service.14:39
<dabo> ok; didn't realize that14:39
<creiht> zykes-: not that I know of14:40
<ccooke> (and yes, I'm using the latest PPA packages)14:40
<ccooke> (which match the latest branch in bzr)14:40
<dabo> what are you trying to debug, then?14:40
<ccooke> dabo: why nothing works :-)14:40
<ccooke> I'm just nearing the end of a complete wipe and reinstall just to make sure I can get some hopefully-clean error reports14:41
<dabo> I don't mean to sound abrupt, but if you were to spell out what you tried to do, and what, if any, output you got, I might be able to help more14:41
<ccooke> Ah, okay14:41
<ccooke> I've been in here several times with these errors; sorry14:42
<ccooke> basically, I create an instance and it remains in "scheduling" forever14:42
<ccooke> can't see any sign that it tried to contact the Xen host14:42
<creiht> zykes-: though a simple cdn could be created by using swift as the backing storage, and putting some caching servers at the edges (like squid or varnish) that have some extra smarts that know how to talk to swift14:42
<dabo> when you do 'xe vm-list', does the instance show up? is it marked as running?14:43
<ccooke> no and no14:43
<dabo> ccooke: ok, this is where I start messing with the code. Since I explicitly start the services, I can easily stop them, add some debugging output to the code, and then restart the service14:44
<dabo> I don't know the best way to do that in your setup14:44
<ccooke> Finally got through the reinstall14:46
<ccooke> and with the latest packages... it's now stuck at networking14:46
<dabo> ccooke: I generally stick a bunch of log output messages into the code so I can see where the code gets stuck.14:49
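Going back to creiht's note above about putting caching servers (squid/varnish) in front of swift: that is the classic cache-aside pattern. A toy sketch of the idea, where the origin function is a stand-in rather than a real swift client:

```python
def make_edge_cache(fetch_from_origin):
    """Toy cache-aside edge: serve from the local cache, and fall back
    to the origin (e.g. a swift public container) only on a miss."""
    cache = {}
    def get(path):
        if path not in cache:
            cache[path] = fetch_from_origin(path)  # miss: one origin round-trip
        return cache[path]
    return get, cache

calls = []
def origin(path):
    """Stand-in for an HTTP GET against the swift cluster."""
    calls.append(path)
    return b"data-for-" + path.encode()

edge_get, cache = make_edge_cache(origin)
edge_get("/v1/AUTH_test/c/o")
edge_get("/v1/AUTH_test/c/o")  # second request served from the edge cache
```

A real deployment would add TTLs and invalidation, but the "extra smarts" creiht mentions are essentially this lookup plus swift's auth/URL conventions.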
ccookeI thought this was released code?14:49
daboccooke: it is, and is working for most people. This is to determine what's different about your installation that's messing it up.14:50
*** nelson has quit IRC14:50
*** zns has joined #openstack14:50
*** nelson has joined #openstack14:50
*** jmckind has quit IRC14:51
*** jmckind has joined #openstack14:53
*** j05h has quit IRC14:53
<ccooke> Hmm. Well, looks like I didn't add a network this time through.14:53
<ccooke> The VM is now "building"14:53
<ccooke> and has been for a few minutes. Can't tell if it's stuck or not :-/14:54
<ccooke> ah. Looks like a python error14:55
<ccooke> (nova): TRACE: Error: local variable 'instance_obj' referenced before assignment14:55
<ccooke> from nova-compute14:55
* ccooke will check the code and see what's up14:55
<ccooke> right, I see14:56
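The traceback ccooke pasted is Python's UnboundLocalError. A hypothetical reconstruction of the shape of the bug (not the actual nova code): a variable bound only on one branch, then referenced unconditionally.

```python
def lookup_instance(found):
    # bug: instance_obj is only bound on the happy path
    if found:
        instance_obj = {"id": 1}
    return instance_obj  # UnboundLocalError when found is False

def lookup_instance_fixed(found):
    instance_obj = None  # fix: bind before any branch
    if found:
        instance_obj = {"id": 1}
    return instance_obj

try:
    lookup_instance(False)
    raised = False
except UnboundLocalError:
    raised = True
```

This is why the error only surfaced on ccooke's failing path: any run where the lookup succeeded would never hit the unbound reference.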
<ccooke> dabo: do you assume that nova-compute will be running *on* the Xen domU?14:56
<dabo> ccooke: Yes. That is required; the other services can be running anywhere.14:57
<ccooke> then you need to fix your documentation, because it doesn't say that and *implies* that it is not the case14:57
<dabo> ccooke: that's only true for XenServer. Other hypervisors do not have that restriction14:58
<ccooke> dabo: I'm referring specifically to the XenServer documentation14:58
*** mahadev has joined #openstack14:58
*** zaitcev has joined #openstack14:59
<dabo> that page is a guide to setting up the domU. It isn't a description of everything involved with using XenServer14:59
<ccooke> that is the only documentation you have for this feature, so I've been told and so I've been able to find15:00
*** zenmatt has quit IRC15:00
<ccooke> I mean, if someone can help me get a working installation I have no problem *creating* the documentation15:01
<dabo> ccooke: perhaps annegentle can help you locate the docs you need15:02
<ccooke> that would be excellent15:02
<ccooke> and as I say, I have no problem creating the docs... I just need a working system in order to do so15:03
*** j05h has joined #openstack15:03
*** jmckind has quit IRC15:04
* soren heads to dinner15:04
<pgregory> yay, I found a way of ssh'ing into an instance from a remote machine without the need for public IP's!15:04
<pgregory> not 'really' easy, but it works.15:05
*** j05h has quit IRC15:05
*** dragondm has joined #openstack15:06
<ccooke> So... annegentle or anyone else... where *is* the documentation for setting up openstack to use a XenServer hypervisor?15:06
<ccooke> or, if such documentation does not exist, how can I actually get it working?15:07
<termie> ccooke: i think it is in the wiki, will look15:07
*** zenmatt has joined #openstack15:07
<termie> ccooke (it is still a little early for developers to be online)15:07
<termie> ccooke: the majority being in gmt-6 through gmt-815:08
* ccooke is at work, in London. Not convenient for help...15:09
<termie> there are definitely more xen guys around soonly, i am not one of them but i am looking in the wiki15:09
*** xavicampa has quit IRC15:09
<termie> ccooke: here is some stuff:
<ccooke> termie: Sorry... that's the documentation I've just been abusing as incorrect and misleading :-)15:10
<termie> well, hopefully there is more :)15:10
<termie> i want the info too as at some point i am expected to work with xen things as well15:10
<termie> pvo, dragondm: ping?15:11
*** daveiw has quit IRC15:12
<dprince> ccooke: I recently struggled through some of the XenServer setup myself.15:13
<ccooke> dprince: oh yes?15:13
<dprince> ccooke: let me catch up on the IRC stuff to see where you are at.15:13
<pvo> termie: pong15:13
<termie> pvo: ccooke is looking for some xenserver setup docs15:14
<dprince> ccooke: do you have dom0 setup with the xenapi plugins?15:14
*** j05h has joined #openstack15:14
pvowe have some on the wiki... one sec.15:14
dprinceccooke: and basically configured dom0 according to the wiki termie mentioned?15:14
antonymccooke: those are the main docs, it's just missing the part where you have to install the environment into a VM on the machine15:14
pvohmm, had some...looks like they've moved. looking.15:15
ccookeokay. To be clear...15:15
ccookeI've tried following this:
antonymccooke: yeah, those are the primary instructions15:15
ccookeI've reached the understanding that this document is incorrect and misleading.15:15
antonymwhat problem did you run into?15:15
ccookewell, let's take this slightly slowly and avoid misunderstandings, please?15:16
termie(thanks for helping out folks)15:16
ccookeI think the main problem is there's no context in the document and some information is missing. I'd be happy to update the document once I know *what* is wrong15:17
*** krish|wired-in has joined #openstack15:17
<ccooke> there's also been a little confusion here about domU/dom0 - I've had one person say that document is for domU, one for dom0. The only actual interaction in the *document* only makes sense on dom015:17
<dprince> ccooke: Can you check this link out:
<ccooke> it's also clear that nova-compute has to run on the XenServer host, but the documentation does not indicate this at all15:18
<antonym> for the xenserver hypervisor, you have to run an instance on the xenserver in order to provision instances15:18
<dprince> ccooke: That thread (w/ Ant and Ewan) might explain the state of XenServer and your options in running nova-compute on dom0 vs. domU.15:18
<antonym> originally that was not the case but changes were made that required that15:19
<ccooke> so, there's no actual clear answer, as yet?15:19
*** RickB17 has quit IRC15:19
<dprince> ccooke: and yes. That information should be more clear on the wiki. So having you add it would be great.15:19
<antonym> lemmie look at the wiki and i'll update a few things on it15:19
<ccooke> that explains why there's no real documentation15:19
<antonym> it's accurate for the most part except for a few core pieces :)15:19
<ccooke> antonym: I'm sorry. When you say "you have to run an instance", do you mean an instance of nova-compute or a VM with nova-compute in it? The context is unclear15:20
<ccooke> antonym: that's what I thought :-)15:20
<termie> ccooke: vm with nova-compute, we usually refer to instance as a guest vm15:20
<antonym> yeah, so you run an instance of nova-compute within a vm15:20
<dprince> ccooke: anyway. You are talking to the experts now (pvo / antonym) so you should be in good hands. (I bow out as well)15:20
<ccooke> dprince: Thanks a lot15:21
<antonym> and inside the instance, you configure nova-compute to point to the hypervisor it resides on15:21
<ccooke> hmm. So you can't run it in dom0 at all?15:21
<ccooke> that sounds... complicated to manage15:21
<ccooke> although I guess it's more robust for a cluster.15:21
<pvo> ccooke: you *could* I suppose, but it isn't recommended by citrix... that and the ancient version of python.15:21
<antonym> ccooke: unfortunately not, it would involve installing a lot of packages to the hypervisor itself which could potentially break xenserver15:21
<termie> ccooke: from what i understand it is the preferred way, xenserver has many specific compatibility requirements15:22
<antonym> plus by default xenserver partitions only have 4gb of space15:22
<antonym> and they don't really give you the flexibility to change that15:22
<antonym> we're kind of irked about running it in a domU :P15:22
<ccooke> I can imagine15:22
<antonym> but it seems to work well once it's up and running15:22
<antonym> disadvantage is that it's just more things to maintain, advantage is, that it's portable and very easy to deploy out and doesn't eat up resources in dom015:23
ccookeWhat is it that makes it necessary to site the nova-compute on a VM?15:23
ccookeaccess to /sys/hypervisor ?15:23
ccookecan't you bridge that?15:23
ccookeI mean, you've already added plugins to Xen15:24
ccookethen nova-compute could live wherever the rest of nova does15:24
antonymalthough we're doing a different route.  citrix injects data into an image, whereas we drop the VHD directly to the filesystem ready to go15:24
antonymccooke: nova-compute is a worker tho, you need one of those per hypervisor15:24
ccookeantonym: ... argh.15:25
*** vernhart has quit IRC15:25
termiein general i actually rather like the idea of standardizing on running the instances within jails/VMs, i think in the future it might not be crazy to run nova-compute in an lxc container15:25
antonymso having it live per server makes more sense than centralizing it15:25
ccookeantonym: I'm sure something could be done, though15:25
antonymwell that's a big piece of the scalability :), if all the hypervisors pull jobs, it scales better than running a ton of compute workers on one box15:26
*** stewart has quit IRC15:27
*** stewart has joined #openstack15:29
ccookeso, do you need one nova-compute per hypervisor, or can you use a redundant pair, say, to command an entire Xen pool?15:29
antonymat this point it's one per hypervisor15:29
ccookeno support for pools, then?15:30
antonymas each nova.conf for compute has settings to point to the specific xenapi15:30
antonymccooke: not at this point15:30
ccookeI see15:30
ccookeokay, then15:30
ccookeBuild a natty image on the Xen box, install nova-compute into that, I guess?15:31
*** troytoman-away is now known as troytoman15:31
ccooke... hmm. This makes the networking rather more complex, doesn't it?15:31
antonymmaverick might be better since it supports python 2.6 out of the box15:31
antonymi think natty uses 2.7 now15:31
ccookeantonym: it does, yes15:31
ccookeI'm currently running the rest of nova on natty15:31
antonymshould be backwards compatible15:32
antonymwe've been running everything on squeeze without an issue15:32
antonymccooke: made a few notes on the wiki page to clarify15:33
antonymyou can technically run all of nova in the domU if you want, but if you want to separate it out, just make sure the compute node points out to the rabbit, glance, and db of the rest of the nova env15:34
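[editor's note: the flag-file layout antonym describes can be sketched as below. Every hostname, address, and password is a hypothetical placeholder, and the flag names follow the cactus-era nova.conf style; check them against your release before use.]

```shell
# Write an example flag file for a domU nova-compute node. All values are
# hypothetical placeholders: the xenapi_* flags point at the hypervisor the
# VM sits on, while rabbit/glance/db point back at the rest of the nova env.
cat > /tmp/nova-compute.conf.example <<'EOF'
--connection_type=xenapi
--xenapi_connection_url=https://192.168.0.10
--xenapi_connection_username=root
--xenapi_connection_password=secret
--rabbit_host=10.0.0.5
--glance_host=10.0.0.5
--sql_connection=mysql://nova:secret@10.0.0.5/nova
EOF
grep -c 'xenapi' /tmp/nova-compute.conf.example   # count xenapi-related lines
```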
*** dendrobates is now known as dendro-afk15:35
ccookeantonym: Right15:35
*** dendro-afk is now known as dendrobates15:35
antonymccooke: does that help out a bit then?15:36
ccookeI'll probably move that to a natty VM in the Xen host15:36
ccookeMassively :-)15:36
ccookeIt's great to get replies :-)15:36
ccookeSorry for being a little aggressive over the docs, I've been beating my head against them for days15:36
antonymcool, ping me if you have other questions, i'm extremely familiar with xenserver :D15:36
antonymno problem15:36
*** rnirmal has joined #openstack15:37
ccookeheh. I've spent a lot of time adminning ESX and XenServer installations, but this is my first serious look at openstack.15:37
ccookeHopefully I'll be doing a PoC here to see if we can use it internally15:37
antonymcool, it's got a few things to work out still but i think you'll be happy with it once you get it working15:38
ccookewe're talking here about targeting the release after next, from the looks of things15:38
ccookepersonally, I'm rather wanting to build something I had a few years ago - hypervisors on demand :-)15:40
ccooke(well, back then it was webservers on demand, but hey)15:40
termieccooke: not a bad idea for a service, we've actually been doing some talk around bare-metal provisioning so that we can do OpenStackAsAService15:41
termieccooke: which would amount to in most cases, hypervisor on demand15:42
ccookesaw that15:42
ccookewhat I had a few years ago is a policy engine15:42
*** Ryan_Lane has joined #openstack15:43
ccookeFeed it stats and monitoring, and it responded by shutting down and starting servers15:43
termieoh, you mean more of an auto-scaling system15:43
ccookesort of15:43
ccookesimple power use, really15:43
termiecomes to the same approach, just depends which metrics you care about15:44
*** zenmatt has quit IRC15:44
ccookewe have quite heavy power bills, and being able to have 80% of the servers turned off for half the day would make a *big* difference15:44
ccookeyes, quite15:44
termiei haven't looked into it too much but scalr has done a lot of work in that area15:44
ccookethere are only a few issues, really.15:44
termieand have mentioned that specific use case15:44
ccookeone of the biggest issues is that any system like that *has* to have a dryrun mode :-)15:45
ccooke(properly simulated, too)15:45
ccookethe simplest case, though, is just looking at aggregated memory/cpu/io load and making simple decisions about how much headroom you need.15:46
ccookeOh, and always starting *two* servers any time you need *one* :-)15:46
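[editor's note: the headroom rule ccooke describes reduces to a toy calculation. The load figure and the 20% threshold below are made up for illustration only.]

```shell
# Toy version of the decision: if aggregate utilisation eats into the desired
# headroom, start more servers -- and per ccooke's rule, start two, not one.
load=85          # hypothetical aggregate utilisation (%)
threshold=80     # i.e. keep 20% headroom
if [ "$load" -gt "$threshold" ]; then
    needed=2     # always start *two* servers any time you need *one*
else
    needed=0
fi
echo "servers to start: $needed"
```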
ccookeThe thing I'd love to do with openstack, though, is make that idea more flexible. Look at predicted traffic load and shunt capacity between datacentres, or to Amazon and any other cloud provider when necessary15:47
ccookeShut down expensive sites in preference15:47
ccookethat sort of thing15:47
termieccooke: i think there are a lot of people interested in that, so you may want to figure out which posse to be part of15:48
termielots of people refer to part of that as 'hybrid' cloud as well, in reference to it using multiple providers15:48
*** rchavik has quit IRC15:49
ccooketermie: it'll depend heavily on the PoC I'll be doing here15:49
ccookeif that goes well, I should be able to commit some dev time to it15:49
termiecool, hope it does then15:49
ccookeme too :-)15:50
*** Ryan_Lane has quit IRC15:53
*** maplebed has joined #openstack15:58
*** MotoMilind has joined #openstack15:58
*** krish|wired-in has left #openstack15:59
*** obino has quit IRC16:00
*** bkkrw has quit IRC16:00
*** zenmatt has joined #openstack16:02
*** photron_ has joined #openstack16:03
*** dprince has quit IRC16:06
*** purpaboo is now known as lurkaboo16:06
*** jakedahn has joined #openstack16:12
*** KnuckleSangwich has joined #openstack16:17
*** nacx has quit IRC16:23
*** jmeredit has joined #openstack16:26
*** h0cin has joined #openstack16:29
*** fabiand__ has quit IRC16:34
*** obino has joined #openstack16:36
*** jakedahn has quit IRC16:38
*** Ryan_Lane has joined #openstack16:43
*** jtran has joined #openstack16:45
*** zenmatt has quit IRC16:55
*** zenmatt has joined #openstack16:59
*** nopzor- has joined #openstack17:00
*** jdurgin has joined #openstack17:04
*** shentonfreude has quit IRC17:04
*** jkoelker has quit IRC17:06
*** obino has quit IRC17:07
*** jmeredit has quit IRC17:08
*** obino has joined #openstack17:09
*** NelsonN has joined #openstack17:09
*** shentonfreude has joined #openstack17:10
*** nerens has quit IRC17:13
*** pguth66 has joined #openstack17:16
*** dmi_ has quit IRC17:22
*** obino has quit IRC17:23
*** obino has joined #openstack17:23
vishysoren: ping17:25
*** dmi_ has joined #openstack17:27
*** jakedahn has joined #openstack17:30
*** jtran has left #openstack17:32
*** obino has quit IRC17:34
*** obino has joined #openstack17:36
*** KnuckleSangwich has quit IRC17:37
*** dmi_ has quit IRC17:37
*** e1mer has quit IRC17:40
*** jmeredit has joined #openstack17:40
*** dmi_ has joined #openstack17:42
*** koolhead17 has joined #openstack17:43
*** yamahata_lt has quit IRC17:46
*** ccooke has quit IRC17:49
creihthas anyone else noticed that the openstack mailing list archive is a bit behind?17:49
*** rostik has joined #openstack17:50
termiecreiht: yeah, we emailed thierry about it this morning17:52
*** ccooke has joined #openstack17:56
*** clauden_ has joined #openstack17:58
*** nopzor- has quit IRC17:59
*** clauden_ has quit IRC17:59
*** clauden_ has joined #openstack17:59
*** rnirmal has quit IRC18:00
openstackjenkinsProject nova build #882: SUCCESS in 2 min 39 sec:
openstackjenkinsTarmac: Sanitize get_console_output results. See bug #75805418:04
uvirtbotLaunchpad bug 758054 in nova "If the console.log contains control characters, get console output fails with UnknownError" [Medium,In progress]
*** mahadev has quit IRC18:08
*** mahadev has joined #openstack18:08
*** patcoll has joined #openstack18:09
*** AlexNeef has joined #openstack18:17
*** rnirmal has joined #openstack18:22
*** tblamer has joined #openstack18:24
*** rnirmal_ has joined #openstack18:24
*** rnirmal has quit IRC18:28
*** rnirmal_ has quit IRC18:29
*** tblamer has quit IRC18:30
*** dubsquared has joined #openstack18:31
vishyjaypipes: ping18:32
*** rnirmal has joined #openstack18:35
*** bcwaldon has joined #openstack18:42
*** jmckind has joined #openstack18:44
*** mahadev has quit IRC18:45
uvirtbotNew bug: #778678 in nova "nova.virt.xenapi.vmops _run_ssl() should write directly to stdin instead of file" [High,Triaged]
*** mray is now known as mattray18:46
termiejaypipes: also ping18:51
*** mattray is now known as mray18:52
*** fabiand__ has joined #openstack18:52
*** mray has left #openstack18:53
*** mdomsch has joined #openstack19:00
*** agarwalla has joined #openstack19:00
*** mattray has joined #openstack19:06
*** koolhead17 has quit IRC19:13
*** koolhead17 has joined #openstack19:13
*** ctennis has quit IRC19:18
*** mgoldmann has joined #openstack19:18
*** mdomsch_ has joined #openstack19:22
*** brd_from_italy has joined #openstack19:23
*** mdomsch has quit IRC19:24
*** ctennis has joined #openstack19:31
*** ctennis has joined #openstack19:31
*** agarwalla has quit IRC19:31
*** markvoelker has quit IRC19:36
*** zul has quit IRC19:43
*** hggdh has quit IRC19:50
*** hggdh has joined #openstack19:50
*** guynaor has joined #openstack19:52
*** Dumfries has joined #openstack19:53
Dumfrieskpepple: about?19:53
*** msivanes has quit IRC19:57
*** jmckind has quit IRC19:59
*** jmckind has joined #openstack19:59
*** holoway has joined #openstack20:01
*** icarus901 has quit IRC20:01
*** bcwaldon has quit IRC20:07
*** fabiand__ has quit IRC20:08
*** NelsonN has quit IRC20:08
*** NelsonN has joined #openstack20:10
jaypipesvishy, termie: pong (but I'm taking a personal day, so might not be here for long..)20:10
*** mdomsch_ has quit IRC20:11
termiejaypipes: just heard that you were talking about separating out a service library20:11
termiejaypipes: so wanted to be pointed at any docs on what you were thinking if they exist20:12
vishyjaypipes: no worries, I think I figured it out.  I was trying to find the milestones for nova but I figured out that I had to create them separately20:12
jaypipesvishy: gotcha20:13
jaypipestermie: yeah, well kind of :) we talked about standardizing the way openstack server daemons are a) spun up (the wsgi/paste stuff) and b) controlled with a daemon script that can especially be used in testing.20:14
*** bcwaldon has joined #openstack20:14
jaypipestermie: I haven't gotten around to writing up that doc yet :(20:15
jaypipestermie: I should be able to get to it this weekend, though.20:15
*** rnirmal has quit IRC20:17
*** imsplitbit has quit IRC20:18
jaypipestermie: when I create the etherpad for it, I'll shoot you an email, ok?20:18
vishypvo: ping20:19
termiejaypipes: sure20:19
pvovishy: pong20:19
vishypvo: shared ip groups?  Did we decide we are not implementing?20:19
*** AlexNeef has quit IRC20:20
vishy(I'm going through all of the old blueprints and retargeting as necessary20:20
pvofrom what I understand, the concept is fairly complicated with the context... and how its going to work with the natting20:20
pvoit isn't our focus this next sprint.20:20
vishypvo: but it will be implemented (at some point)?20:21
pvoat some point, yes.20:21
vishyok I'm going to put it in diablo with no milestone for now20:21
vishyshould i assign it to your team or leave it unassigned?20:22
pvoyea, that would work for now.20:22
pvoassign it back to us.20:22
pvoI know tr3buchet was talking to you about some of it yesterday, no?20:22
*** jmeredit has quit IRC20:28
*** jfluhmann has quit IRC20:32
vishypvo: we were talking about multinic mostly20:32
vishypvo: is cory wright on your team?20:32
vishypvo: it seems like this is superseded by NaaS stuff? or do you still need this for something?
pvovishy: I think that one isn't exactly superseded. That work is for getting support into xenserver itself for ovs which is independent of naas.20:35
pvonaas will provide info to ovs, but you don't have to run the ovs controller.20:35
vishypvo: it seems like the NaaS implementation is supposed to have hypervisor integration20:36
vishypvo: so I would expect that part of it to be implemented there20:36
vishypvo: is this blocking your launch somehow though?20:36
vishypvo: because NaaS may not be available by your move-over date20:36
pvoare you in SAT next week?20:37
vishypvo: yes20:38
pvolets talk more then. I think there are lots of timelines to go through20:38
vishyi think we're down on tuesday?20:38
vishypvo: sounds good.  I'm just trying to clean up all the old hanging blueprints20:38
dubsvishy: that xs-ovs work is close to being complete, btw.20:38
vishyi'll leave this one as deferred for now20:38
comstudvishy- i need to update the guest agent BP to reflect the new agent and its location it looks like20:40
comstudi see it references the old agent20:40
comstudnot sure when it should be considered complete, either20:40
comstudit works for xenserver20:41
comstudthe BP title _is_ prefixed with 'xs'20:42
vishycomstud: I think it should really be supported in all hypervisors before we consider it complete, but perhaps we should create separate blueprints for other hypervisors20:42
vishyshould we mark it complete and create new blueprints?20:42
comstudthat would sound good to me20:43
comstudI notice there's an ESX agent in tools/ in nova20:43
comstudnot sure what to do with that20:43
comstudie, fold it in to or keep it separate20:44
comstudor move our agents into the nova code base20:44
comstudatm, i probably have more important things to work on, so i'm not going to worry about it20:45
comstudmakes sense to close xs-guest-agent and create new BPs for whatever work needs done now20:45
*** omidhdl has joined #openstack20:45
vishyok, marking implemented20:46
*** brd_from_italy has quit IRC20:46
comstudcools, thnx20:47
comstudoops, race condition on the update20:47
*** h0cin has quit IRC20:52
*** bcwaldon has quit IRC20:56
*** rnirmal has joined #openstack20:58
*** rnirmal has quit IRC20:58
*** rnirmal has joined #openstack20:59
openstackjenkinsProject nova build #883: SUCCESS in 2 min 41 sec:
openstackjenkinsTarmac: Simple fix for this issue.  Tries to raise an exception passing in a variable that doesn't exist, which causes an error.21:04
*** dragondm has quit IRC21:05
_vinayI am running21:06
_vinaynova-manage network delete
_vinay2011-05-06 07:15:08,011 CRITICAL nova [-] Network must be disassociated from project admin before delete21:07
*** alex-meade has quit IRC21:07
_vinaywhere is the association b/w project and network21:07
_vinayand how do I delete it?21:07
_vinayis it done by nova-manage?21:10
vishyis anyone here that was at the watch/notification discussion during the summit?21:10
vishy_vinay: nova-manage project scrub admin21:11
_vinaycool .. it worked.. thanks vishy21:12
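[editor's note: the ordering vishy gives -- scrub the project first, then delete the network -- can be sketched as below. `nova-manage` is mocked as a shell function so the sequence is runnable without a live nova install, and the CIDR is hypothetical.]

```shell
# Mocked sketch of the fix: disassociate the network from the project
# before deleting it. Swap the mock out for the real nova-manage binary.
nova-manage() { echo "would run: nova-manage $*"; }

nova-manage project scrub admin          # drops the project<->network link
nova-manage network delete 10.0.0.0/24   # hypothetical CIDR; delete now succeeds
```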
comstudvishy: i was there21:13
*** omidhdl1 has joined #openstack21:14
comstudin some form21:14
*** guynaor has left #openstack21:15
*** omidhdl has quit IRC21:15
comstudvishy: matt dietz is working on that for RAX21:16
_cerberus_I was there, yeah21:16
_cerberus_glenc was the one giving the talk at the time21:17
comstudsomeone from ntt and glen21:17
comstuddisney guy in the back had lots of comments21:17
comstudsorry i'm horrible with names21:17
_vinayvishy   so how do I associate my new network with project admin ?21:17
*** allsystemsarego_ has quit IRC21:17
*** lborda has quit IRC21:17
*** lborda has joined #openstack21:19
*** dendrobates is now known as dendro-afk21:20
vishyit happens automatically when you launch an instance21:23
*** lborda has quit IRC21:25
*** lborda has joined #openstack21:26
_vinayyep it does :) thanks vishy21:30
*** keds has quit IRC21:35
*** dendro-afk is now known as dendrobates21:36
*** aa_driancole has joined #openstack21:38
*** aa_driancole has left #openstack21:38
*** mgoldmann has quit IRC21:46
*** dragondm has joined #openstack21:51
*** nphase has quit IRC21:51
*** shentonfreude has quit IRC21:52
*** omidhdl1 has left #openstack21:54
*** patcoll has quit IRC22:00
*** photron_ has quit IRC22:01
vishy_cerberus_: ping22:04
_cerberus_Hey man22:04
_cerberus_vishy: ^^22:05
vishy_cerberus_: were you in the notifications/watch discussion at the summit?22:06
vishyah i see comstud also responded22:06
vishyso I'm wondering what the result was22:06
_cerberus_There was an etherpad created with the feedback. Hold on22:06
vishythere is a blueprint about storing data in the db as well22:06
*** rostik has left #openstack22:07
vishyok cool22:09
vishyso did everyone just decide to keep working on their own version?22:09
_cerberus_Unfortunately that wasn't made entirely clear.22:09
*** ctennis has quit IRC22:09
*** jakedahn has quit IRC22:09
_cerberus_I was hoping that by simply pushing messages to a queue, people could do whatever the hell they wanted to22:09
vishyis the suggested queue going to be burrow?22:10
*** jakedahn has joined #openstack22:10
_cerberus_Well, spoke to eday about that. My current implementation uses rabbit22:10
_cerberus_But I tried to write it in a modular way so you can dump into whatever queue you felt like22:10
_cerberus_From there, what *we* want to do is implement PubSubHubBub. I've got a worker that consumes the queue and presents the ATOM feed
_cerberus_Very incomplete atm22:11
vishyso it sounds like this is still undefined22:11
pgregoryhey all22:11
_cerberus_vishy: yes and no22:11
_cerberus_I think we settled on a generic message format, along with JSON blob for the rest of the pertinent data22:12
pgregoryI've spent the last few hours scouring the interweb looking for any clues as to how I might implement a poor man's version of Amazon's IP-less ssh access.22:12
_cerberus_From there, the delivery mechanism is up in the air22:12
vishy_cerberus_: I'm going to approve yours for the moment, but think there needs to be a little more communication between the groups22:12
pgregoryIf anyone has any ideas, I'd really appreciate it.22:12
_cerberus_vishy: TBH, I haven't seen anything other than what NTT had mentioned at the time.22:12
_cerberus_vishy: My plan was to next week aggregate everything in that pad with everything I've got going and send an email to the list asking for feedback22:13
vishy_cerberus_: which milestone should i target it to?22:13
vishymilestone 2 seem reasonable?22:13
_cerberus_What's the date on that one?22:13
vishy(That is 2 months from now)22:13
_cerberus_Yeah, we can target that22:14
vishyok good.  I think the approach of aggregating and emailing is good22:14
*** jakedahn_ has joined #openstack22:15
*** posulliv has quit IRC22:15
*** dubsquared has quit IRC22:17
*** troytoman is now known as troytoman-away22:17
*** dysinger has joined #openstack22:18
*** jakedahn has quit IRC22:18
*** jakedahn_ is now known as jakedahn22:18
*** dendrobates is now known as dendro-afk22:19
pgregoryseems what's needed is some sort of mod_rewrite equivalent for ssh, anyone know of such a beast?22:28
*** dmi_ has quit IRC22:29
*** _vinay has quit IRC22:29
dysingerpgregory: like sshuttle?22:30
dysingersorry I am late to the convo22:30
pgregorydysinger: sshuttle?22:30
*** lborda has quit IRC22:32
*** dragondm has quit IRC22:32
*** dragondm has joined #openstack22:33
*** j05h has quit IRC22:35
*** clauden_ has quit IRC22:36
pgregorydysinger: hmm, I can't work out from that description if it does what I need or not.22:38
dysingersorry - what was the question ? All I saw was "we need mod_rewrite for ssh"22:38
pgregorydysinger: I need a way of allowing remote ssh access to instances on the cloud without the need for public IP's.22:39
pgregorydysinger: Amazon does this by encoding the local IP into a URL, like this...22:40
pgregoryssh -i mykey.priv ec2-192-168-1-1-eu-west.compute.amazonaws.com22:40
pgregoryand I presume they have some clever proxy server that redirects (mod_rewrite style) to the correct instance IP.22:41
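[editor's note: pgregory's guess is easy to check mechanically -- the private address is recoverable from the name with pure string munging, no DNS lookup involved. A sketch, assuming the ec2-A-B-C-D prefix form quoted above:]

```shell
# Pull the dashed IP back out of an Amazon-style hostname; this is the
# hostname quoted in the conversation above.
host="ec2-192-168-1-1-eu-west.compute.amazonaws.com"
ip=$(echo "$host" | sed -E 's/^ec2-([0-9]+)-([0-9]+)-([0-9]+)-([0-9]+).*/\1.\2.\3.\4/')
echo "$ip"   # prints 192.168.1.1
```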
*** mattray has quit IRC22:43
*** rnirmal has quit IRC22:44
pgregoryany thoughts, or is it just not doable?22:45
dysingerpgregory: setup a VPC @ ec2 and use sshuttle to get to the "gateway" node22:52
dysingerthen you can ssh to the rest of them22:52
dysingerI don't know of a ssh proxy server-like thing that you are looking for22:53
*** jmckind has quit IRC22:53
pgregorydysinger: thanks, I'll take a look at what sshuttle does.22:55
dysingerif you got a mac & homebrew - it's easy "brew install sshuttle"22:56
dysingersshuttle -r <remote-ip-addr>
*** dmi has joined #openstack22:58
*** dmi is now known as Guest7931222:58
*** amccabe has quit IRC23:02
vishypgregory: just set up a route and ssh to private ip?23:14
vishyseems a little easier than trying to set up a proxy on the server...23:15
vishypgregory: or are you worried about it being too complicated for clients?23:15
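[editor's note: vishy's route-plus-ssh suggestion is often paired with an OpenSSH ProxyCommand so clients need no route at all. The gateway name, user, and subnet below are hypothetical; `-W` needs OpenSSH >= 5.4.]

```shell
# Example ssh_config stanza: hop through one public gateway node to reach
# any instance on the (hypothetical) 10.0.0.0/24 private range.
cat > /tmp/ssh_config.example <<'EOF'
Host 10.0.0.*
    User ubuntu
    ProxyCommand ssh -W %h:%p gateway.example.com
EOF
grep -q 'ProxyCommand' /tmp/ssh_config.example && echo "config written"
```

With that in place, something like `ssh -F /tmp/ssh_config.example 10.0.0.12` would bounce through the gateway transparently.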
*** zns has quit IRC23:20
*** ctennis has joined #openstack23:31
*** ctennis has joined #openstack23:32
*** ctennis has joined #openstack23:32
*** dragondm has quit IRC23:37
*** foxtrotgulf has joined #openstack23:45
*** nelson has quit IRC23:45
*** nelson has joined #openstack23:46
*** Dumfries has quit IRC23:49
*** dragondm has joined #openstack23:52
*** koolhead17 has quit IRC23:54

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at!