*** clauden_ has joined #openstack | 00:02 | |
*** dendrobates is now known as dendro-afk | 00:04 | |
*** jamiec has quit IRC | 00:07 | |
*** jwilcox has joined #openstack | 00:08 | |
*** jwilcox has joined #openstack | 00:10 | |
*** jwilcox is now known as japanjeff | 00:11 | |
*** adjohn has joined #openstack | 00:11 | |
*** monaDeveloper has quit IRC | 00:11 | |
*** andy-hk has joined #openstack | 00:12 | |
*** guynaor has joined #openstack | 00:14 | |
*** japanjeff is now known as jeffjapan_ | 00:17 | |
*** stewart has quit IRC | 00:21 | |
*** neuro_damage has joined #openstack | 00:22 | |
*** pothos has quit IRC | 00:22 | |
*** guynaor has left #openstack | 00:24 | |
*** Ryan_Lane has quit IRC | 00:27 | |
*** MarkAtwood has joined #openstack | 00:31 | |
*** kashyap has quit IRC | 00:34 | |
*** odyi has quit IRC | 00:36 | |
*** Ryan_Lane has joined #openstack | 00:37 | |
*** aliguori has joined #openstack | 00:38 | |
*** adjohn has quit IRC | 00:41 | |
*** johnpur has quit IRC | 00:42 | |
*** pguth66 has quit IRC | 00:43 | |
*** aliguori has quit IRC | 00:43 | |
thickskin | . | 00:49 |
*** kashyap has joined #openstack | 00:50 | |
*** odyi has joined #openstack | 00:51 | |
*** odyi has joined #openstack | 00:51 | |
*** jdurgin has quit IRC | 00:52 | |
*** vernhart has quit IRC | 00:52 | |
*** mahadev has quit IRC | 01:00 | |
*** rchavik has quit IRC | 01:02 | |
*** rchavik has joined #openstack | 01:03 | |
*** ChameleonSys has quit IRC | 01:04 | |
Ryan_Lane | This blueprint could use some love, if others want to make it better: http://wiki.openstack.org/PublicAndPrivateDNSForNova | 01:07 |
*** jbryce has quit IRC | 01:14 | |
*** woleium has quit IRC | 01:14 | |
*** MarcMorata has joined #openstack | 01:16 | |
*** larry__ has joined #openstack | 01:17 | |
*** MotoMilind has quit IRC | 01:19 | |
*** taihen_ has joined #openstack | 01:20 | |
*** carlp__ has joined #openstack | 01:20 | |
*** larzy has quit IRC | 01:20 | |
*** syah has quit IRC | 01:20 | |
*** carlp_ has quit IRC | 01:20 | |
*** Daviey has quit IRC | 01:20 | |
*** taihen has quit IRC | 01:20 | |
*** jpuchala has quit IRC | 01:20 | |
*** syah has joined #openstack | 01:21 | |
*** jpuchala has joined #openstack | 01:22 | |
*** Daviey has joined #openstack | 01:24 | |
*** mgoldmann has quit IRC | 01:24 | |
*** jmckind has quit IRC | 01:24 | |
*** yamahata_lt has joined #openstack | 01:37 | |
*** ChameleonSys has joined #openstack | 01:37 | |
*** obino has quit IRC | 01:38 | |
*** Ryan_Lane has quit IRC | 01:40 | |
*** BK_man has quit IRC | 01:41 | |
*** obino has joined #openstack | 01:41 | |
*** mray has joined #openstack | 01:42 | |
*** clauden_ has quit IRC | 01:45 | |
*** jmckind has joined #openstack | 01:47 | |
*** mahadev has joined #openstack | 01:52 | |
*** stewart has joined #openstack | 01:54 | |
*** NelsonN has quit IRC | 01:54 | |
*** stewart has quit IRC | 01:54 | |
*** stewart has joined #openstack | 01:55 | |
*** mray has quit IRC | 01:55 | |
*** jeffjapan has joined #openstack | 01:58 | |
*** mahadev has quit IRC | 01:59 | |
*** jeffjapan_ has quit IRC | 02:00 | |
*** santhosh has joined #openstack | 02:00 | |
*** mahadev has joined #openstack | 02:06 | |
*** woleium has joined #openstack | 02:12 | |
*** jeffjapan has quit IRC | 02:15 | |
*** jeffjapan has joined #openstack | 02:18 | |
*** woleium has quit IRC | 02:23 | |
*** mahadev has quit IRC | 02:24 | |
*** mahadev has joined #openstack | 02:28 | |
*** masudo has quit IRC | 02:30 | |
*** masudo has joined #openstack | 02:31 | |
*** obino has quit IRC | 02:31 | |
*** hadrian has quit IRC | 02:35 | |
*** kashyap has quit IRC | 02:38 | |
*** dmi_ has quit IRC | 02:38 | |
*** Zangetsue has joined #openstack | 02:40 | |
*** dmi_ has joined #openstack | 02:42 | |
*** miclorb_ has quit IRC | 02:43 | |
*** brobergj has joined #openstack | 02:43 | |
*** miclorb__ has joined #openstack | 02:43 | |
*** dmi_ has quit IRC | 02:47 | |
*** alekibango has quit IRC | 02:48 | |
*** santhosh has quit IRC | 02:48 | |
*** mahadev has quit IRC | 02:59 | |
*** mahadev has joined #openstack | 03:05 | |
*** mdomsch has joined #openstack | 03:06 | |
*** larry__ has quit IRC | 03:12 | |
*** mahadev has quit IRC | 03:15 | |
*** mray has joined #openstack | 03:31 | |
*** mray has quit IRC | 03:31 | |
*** MarkAtwood has left #openstack | 03:55 | |
*** santhosh has joined #openstack | 04:05 | |
uvirtbot | New bug: #778269 in nova "ImageNotFound exception refers to non existant variable 'image_id'" [Undecided,New] https://launchpad.net/bugs/778269 | 04:11 |
*** santhosh has quit IRC | 04:27 | |
*** santhosh has joined #openstack | 04:28 | |
*** santhosh has joined #openstack | 04:28 | |
*** yamahata_lt has quit IRC | 04:29 | |
uvirtbot | New bug: #778271 in nova "Cant install virtualenv in natty/python2.7" [Undecided,New] https://launchpad.net/bugs/778271 | 04:31 |
*** andy-hk has quit IRC | 04:43 | |
*** KnuckleSangwich has quit IRC | 04:45 | |
*** f4m8_ is now known as f4m8 | 04:47 | |
*** mahadev has joined #openstack | 04:47 | |
*** mahadev has quit IRC | 04:49 | |
*** gregp76 has joined #openstack | 04:51 | |
*** hagarth has joined #openstack | 04:53 | |
*** mahadev has joined #openstack | 04:55 | |
*** kashyap has joined #openstack | 04:56 | |
*** mahadev has quit IRC | 05:00 | |
*** crescendo has quit IRC | 05:05 | |
*** mahadev has joined #openstack | 05:06 | |
*** adjohn has joined #openstack | 05:11 | |
*** vernhart has joined #openstack | 05:17 | |
*** obino has joined #openstack | 05:26 | |
*** crescendo has joined #openstack | 05:29 | |
uvirtbot | New bug: #778282 in nova "nova-manage doesn't report problem information in case of error with creation network" [Undecided,In progress] https://launchpad.net/bugs/778282 | 05:36 |
*** jamiec has joined #openstack | 05:45 | |
*** omidhdl has joined #openstack | 05:46 | |
*** naehring has joined #openstack | 05:52 | |
*** jmckind has quit IRC | 06:04 | |
*** crescendo has quit IRC | 06:09 | |
*** thickskin has quit IRC | 06:19 | |
*** crescendo has joined #openstack | 06:23 | |
*** thickskin has joined #openstack | 06:23 | |
*** naehring has quit IRC | 06:26 | |
*** naehring has joined #openstack | 06:26 | |
*** allsystemsarego has joined #openstack | 06:29 | |
*** allsystemsarego has joined #openstack | 06:29 | |
*** omidhdl has quit IRC | 06:32 | |
*** Ryan_Lane has joined #openstack | 06:32 | |
*** omidhdl has joined #openstack | 06:33 | |
*** nerens has joined #openstack | 06:42 | |
*** mahadev has quit IRC | 06:43 | |
*** MarkAtwood has joined #openstack | 06:43 | |
*** gregp76 has quit IRC | 06:44 | |
*** brobergj has quit IRC | 06:45 | |
*** dendro-afk is now known as dendrobates | 06:48 | |
*** omidhdl has quit IRC | 06:48 | |
*** omidhdl has joined #openstack | 06:49 | |
*** RickB17 has quit IRC | 06:57 | |
*** RickB17 has joined #openstack | 06:58 | |
*** smoser has quit IRC | 06:58 | |
*** smoser has joined #openstack | 07:00 | |
*** omidhdl has quit IRC | 07:01 | |
*** omidhdl has joined #openstack | 07:01 | |
fabiand | pgregory: ah nice. | 07:02 |
*** fabiand__ has joined #openstack | 07:06 | |
*** jtimberman has quit IRC | 07:07 | |
*** jtimberman has joined #openstack | 07:09 | |
*** allsystemsarego has quit IRC | 07:16 | |
*** hagarth has quit IRC | 07:16 | |
*** dendrobates is now known as dendro-afk | 07:17 | |
*** miclorb__ has quit IRC | 07:25 | |
*** hagarth has joined #openstack | 07:26 | |
fabiand__ | pgregory: so did I understand it correct that you just installed nova and glance? or just nova? | 07:26 |
*** freeflyi1g has joined #openstack | 07:26 | |
*** Ryan_Lane has quit IRC | 07:26 | |
*** freeflying has quit IRC | 07:29 | |
*** MarkAtwood has quit IRC | 07:31 | |
*** nacx has joined #openstack | 07:40 | |
*** toluene has joined #openstack | 07:43 | |
*** jdurgin has joined #openstack | 07:44 | |
toluene | hi openstack guys! I have installed openstack by following the instructions in http://wiki.openstack.org/Novaintstall/DevPkgInstall. I'm now going through http://wiki.openstack.org/RunningNova. However, I have a problem registering the image; the system returns "Unable to run euca-describe-imges, Is euca2ools env set up ?". Can somebody help me? | 07:46
*** heden has joined #openstack | 07:50 | |
heden | Anyone know why my uploaded AMI file from uec-publish-tarball gets registered with Glance but the state of the image is QUEUED? | 07:51 |
*** e1mer has joined #openstack | 07:54 | |
*** e1mer has joined #openstack | 07:54 | |
*** viirya has quit IRC | 07:55 | |
*** jdurgin has quit IRC | 07:56 | |
*** jeffjapan has quit IRC | 07:56 | |
*** koolhead11 has joined #openstack | 07:57 | |
*** jeffjapan has joined #openstack | 07:58 | |
*** naehring has quit IRC | 07:58 | |
*** omidhdl has quit IRC | 07:59 | |
*** omidhdl has joined #openstack | 08:00 | |
*** keds has joined #openstack | 08:06 | |
zykes- | spectorclan: around ? | 08:06 |
*** allsystemsarego has joined #openstack | 08:09 | |
*** viirya has joined #openstack | 08:13 | |
*** naehring has joined #openstack | 08:13 | |
*** santhosh has quit IRC | 08:16 | |
pgregory | fabiand__: sorry for the slow reply, I installed nova first, got that all working, then installed glance. | 08:19 |
RichiH | btw, is anyone from jenkins in here, too? | 08:20 |
zykes- | does swift have built in CDNs ? | 08:21 |
fabiand__ | pgregory: no problem. and cheers .. :) | 08:21 |
*** SwiftestGuy has joined #openstack | 08:31 | |
SwiftestGuy | Good morning people =) | 08:32 |
*** zenmatt has quit IRC | 08:33 | |
*** jeffjapan has quit IRC | 08:34 | |
*** toluene has quit IRC | 08:35 | |
fabiand__ | pgregory: I also got it up and running - that went smooth :) | 08:39 |
fabiand__ | I struggled to get ostack on fedora up and running ... on ubuntu it worked within an hour .. | 08:40 |
pgregory | fabiand__: cool, it's not too bad, and the more familiar you get with the way things work, the easier it becomes. | 08:40
pgregory | I also installed the Dashboard from source, and that was painless too. | 08:40 |
pgregory | haven't got volumes working yet, or multiple compute nodes. | 08:41 |
fabiand__ | Yep, I've also installed the dashboard - also working. very neat. | 08:46
fabiand__ | So much going on .. | 08:46 |
fabiand__ | Multiple nodes is also an outstanding item for me. | 08:46 |
* pgregory just needs 1/some decent machines to test it on properly, still running instances in qemu, which is slow. | 08:47 | |
*** cromartie-x182 has joined #openstack | 08:57 | |
*** cromartie-x182 has left #openstack | 08:57 | |
*** daveiw has joined #openstack | 09:00 | |
*** xavicampa has joined #openstack | 09:04 | |
*** kaz_ has quit IRC | 09:04 | |
*** CloudChris has joined #openstack | 09:23 | |
*** CloudChris has left #openstack | 09:24 | |
*** bkkrw has joined #openstack | 09:25 | |
*** Zangetsue has quit IRC | 09:28 | |
*** kashyap has quit IRC | 09:29 | |
SwiftestGuy | any swift developers in da hood? :D | 09:29 |
SwiftestGuy | i might need some information | 09:31 |
*** kashyap has joined #openstack | 09:32 | |
*** carlp__ has quit IRC | 09:34 | |
SwiftestGuy | is anybody implementing new api compliance on swift? | 09:35
*** carlp__ has joined #openstack | 09:36 | |
*** SwiftestGuy has left #openstack | 09:42 | |
*** Zangetsue has joined #openstack | 09:45 | |
*** Vek has quit IRC | 09:45 | |
*** dh has joined #openstack | 09:58 | |
*** xavicampa has quit IRC | 10:03 | |
*** holoway has quit IRC | 10:03 | |
*** mattrobinson has quit IRC | 10:03 | |
*** pquerna has quit IRC | 10:04 | |
*** Guest74692 has quit IRC | 10:04 | |
*** RoAkSoAx has quit IRC | 10:04 | |
*** fabiand has quit IRC | 10:11 | |
*** krish|wired-in has joined #openstack | 10:15 | |
*** xavicampa has joined #openstack | 10:16 | |
*** colinnich_ has quit IRC | 10:24 | |
*** colinnich has joined #openstack | 10:24 | |
*** zul has quit IRC | 10:39 | |
*** zul has joined #openstack | 10:40 | |
*** krish|wired-in has quit IRC | 10:56 | |
*** krish|wired-in has joined #openstack | 10:56 | |
*** heden has quit IRC | 11:18 | |
*** miclorb has joined #openstack | 11:19 | |
*** naehring has quit IRC | 11:32 | |
*** santhosh has joined #openstack | 11:34 | |
*** koolhead11 has quit IRC | 11:35 | |
*** kashyap has quit IRC | 11:36 | |
*** pquerna has joined #openstack | 11:38 | |
*** adjohn has quit IRC | 11:39 | |
*** ctennis has quit IRC | 11:41 | |
*** Vek has joined #openstack | 11:42 | |
*** kashyap has joined #openstack | 11:46 | |
*** markvoelker has joined #openstack | 11:47 | |
*** ctennis has joined #openstack | 11:56 | |
*** ctennis has joined #openstack | 11:56 | |
*** antenagora has joined #openstack | 12:01 | |
*** bkkrw has quit IRC | 12:03 | |
*** kashyap has quit IRC | 12:04 | |
*** bkkrw has joined #openstack | 12:18 | |
*** guynaor has joined #openstack | 12:20 | |
uvirtbot | New bug: #778463 in glance "'No module named paste' error when installing glance (apt-get glance)" [Undecided,New] https://launchpad.net/bugs/778463 | 12:21 |
*** kashyap has joined #openstack | 12:21 | |
Joelio | Hi! I've inherited a bunch of Dell 2950's and I'd like to put them towards an openstack cluster. What are the recommended deployment options for object storage now? I'd love to use distributed hashed storage like Ceph or Sheepdog but I'm unsure as to their maturity. This is a test cluster but I'd still like it to be stable (it's part of my masterplan to introduce openstack to our core services.. muahahahaaa!) | 12:23
*** bkkrw has quit IRC | 12:23 | |
Joelio | I'm happy to use shared storage, but iSCSI, AOE etc..? Recommendations on a hard disk shaped post-card please :) | 12:24 |
notmyname | Joelio: for object storage, look at swift (swift.openstack.org). It's the code that runs Rackspace's Cloud Files and a few other companies' public cloud storage systems | 12:25 |
notmyname | for block storage, check out the nova volume manager or help jump in to projects that are just getting started (like lunr) | 12:25 |
Joelio | +notmyname: Sure, that's the plan, but presumably that needs some backend storage? | 12:26
notmyname | swift is the storage system. all you need are drives with a filesystem that supports xattrs. we use/recommend xfs | 12:26
* Joelio admits to being a newb on this | 12:26 | |
*** zns has joined #openstack | 12:26 | |
Joelio | +notmyname: Is that effectively hashed across the swift nodes then? | 12:27 |
notmyname | (we in this case == Rackspace) | 12:27 |
notmyname | yes it is | 12:27 |
Joelio | ah ok, cool | 12:27 |
notmyname | and you can grow/shrink a cluster easily | 12:27 |
Joelio | I take it I can run the compute and storage nodes on the same system then too? And it's redundant too? | 12:28 |
notmyname | there is no good integration of running nova and swift on the same cluster. we (openstack people) all want that, but for now it doesn't do anything special | 12:29 |
notmyname | that is, you don't gain anything currently except dual use of your hardware | 12:29 |
notmyname | there isn't yet anything like computing based on the locality of your data in swift | 12:30
notmyname | for example | 12:30 |
Joelio | .. plus what about different disk size geometry? Most have 146GB SAS in RAID1, but some are 500GB SATA etc.. do they show up as just one big, well, LUN effectively? Cheers for the answers, most appreciated | 12:30
notmyname | but yes, swift is redundant. swift stores, by default, 3 copies of each piece of data | 12:30 |
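An editorial aside on the hashing discussed above: Swift maps each object name to a partition via an MD5 hash and assigns partitions to storage devices (its "ring"). A toy sketch of the idea, with hypothetical device names; the real ring also balances by device weight, zones, and places the 3 replicas on distinct devices:

```python
import hashlib

# Toy Swift-style ring: 2**PART_POWER partitions, each mapped to a device.
PART_POWER = 4                      # real clusters use a much larger power, e.g. 18
DEVICES = ["sda", "sdb", "sdc"]     # hypothetical device names

# Partition -> device table (round-robin here; real Swift balances by
# weight/zone and stores one table per replica).
part2dev = [DEVICES[p % len(DEVICES)] for p in range(2 ** PART_POWER)]

def device_for(obj_path: str) -> str:
    """Map an /account/container/object path to a device via the top hash bits."""
    digest = hashlib.md5(obj_path.encode()).digest()
    part = int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)
    return part2dev[part]

print(device_for("/AUTH_test/photos/cat.jpg"))
```

Because the mapping depends only on the object name and the table, any proxy can locate an object without a central index, which is what lets the cluster grow and shrink by reassigning partitions.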
*** larry__ has joined #openstack | 12:30 | |
Joelio | sweeet | 12:30 |
* Joelio goes off to PXE boot some Ubuntu :) | 12:30 | |
notmyname | I think you are getting a little confused about nova storage and swift storage. swift is not a block storage system | 12:31 |
Joelio | Yes, probably! | 12:31 |
Joelio | swift == S3 right | 12:32 |
Joelio | Nova == Block storage for VMs | 12:32 |
notmyname | nova == compute (similar to EC2) | 12:32 |
Joelio | gotcha | 12:33
Joelio | Glance is the block IO? | 12:33 |
*** jmckind has joined #openstack | 12:33 | |
notmyname | the closest to EBS is currently either the nascent project lunr (not yet officially part of openstack but being written by some of the original swift devs) or the existing nova volume manager stuff | 12:34 |
*** mdomsch has quit IRC | 12:34 | |
notmyname | glance is a system for managing the VM images in nova (and optionally storing them in swift) | 12:34 |
*** allsystemsarego_ has joined #openstack | 12:35 | |
notmyname | and to go back to a previous question (maybe I misread it earlier), different size storage volumes are supported in swift | 12:36 |
notmyname | but they aren't exposed as a block device. swift has a REST API and the total cluster size is not exposed to the client. from the client perspective, it's supposed to be unlimited | 12:36 |
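Another aside: the REST API notmyname mentions addresses everything under a flat version/account/container/object path, so clients never see devices or cluster size. A sketch of that path convention (the endpoint host is hypothetical):

```python
from urllib.parse import quote

def swift_object_url(endpoint: str, account: str, container: str, obj: str) -> str:
    """Build a Swift-style object URL: <endpoint>/v1/<account>/<container>/<object>."""
    return "/".join([endpoint.rstrip("/"), "v1",
                     quote(account), quote(container), quote(obj, safe="")])

url = swift_object_url("http://proxy.example.com:8080", "AUTH_test", "photos", "cat.jpg")
print(url)  # http://proxy.example.com:8080/v1/AUTH_test/photos/cat.jpg
```

GET/PUT/DELETE on such a URL (with an auth token header) is the whole client-facing surface; there is no block-device or mount-point view of the cluster.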
*** miclorb has quit IRC | 12:36 | |
*** allsystemsarego has quit IRC | 12:36 | |
zul | is there like a canonical list of projects affiliated with openstack? | 12:43
notmyname | zul: currently, it's what's listed on http://www.openstack.org/ | 12:43 |
zul | notmyname: thanks | 12:45 |
*** hadrian has joined #openstack | 12:46 | |
*** kashyap has quit IRC | 12:47 | |
*** dprince has joined #openstack | 12:51 | |
*** dmi_ has joined #openstack | 12:51 | |
dprince | soren: you around? | 12:52 |
*** kakella has joined #openstack | 12:52 | |
*** kakella has left #openstack | 12:52 | |
soren | dprince: I am indeed. | 12:53 |
soren | dprince: Fresh glance available in the ppa. | 12:53 |
soren | fwiw | 12:53 |
dprince | dprince: great. So a more general question. Does lp:~openstack-ubuntu-packagers/glance/ubuntu follow glance trunk? | 12:54 |
dprince | soren: Sorry. That was for you... | 12:55 |
soren | dprince: Depends on what you mean by "follow"? | 12:55 |
soren | If you mean "works with", then yes, that's the idea. | 12:55 |
soren | If you mean something else, please elaborate :) | 12:56 |
dprince | soren: Exactly. I want to use the branch that is meant to work with glance trunk the best. | 12:56 |
dprince | soren: likewise. lp:~openstack-ubuntu-packagers/nova/ubuntu works with nova trunk? | 12:56 |
soren | Those are the branches we use for the ppa builds. | 12:57 |
soren | I sent out a stack of e-mails on the subject yesterday. | 12:57 |
soren | I'm sure some of them must have been addressed to the list. | 12:57 |
dprince | soren: Okay. Great. Just saw that. If we move code hosting elsewhere do you plan on moving these as well? | 12:58 |
*** aliguori has joined #openstack | 12:59 | |
soren | No. | 12:59 |
dprince | soren: Excellent. One less thing to move around. | 12:59 |
soren | Hang on, phone call. I'll elaborate in a bit. | 12:59 |
dprince | soren: sure. NP. | 13:00 |
soren | Yay. That was quick. | 13:00 |
soren | Ok, so no, I don't expect to move them. | 13:00 |
soren | They're a cooperative effort between us and Ubuntu, and Ubuntu has a lot of infrastructure (and more is coming) tied into bzr. | 13:01
soren | This is at least as much a decision for them. | 13:01
soren | Hm... That wasn't exactly accurately phrased. | 13:02 |
*** antenagora has quit IRC | 13:02 | |
soren | Working on Ubuntu does not require using bzr at all. It's just that the processes we've chosen to use for packaging stuff in Ubuntu happens to be rather entangled with bzr, and I'd rather not have to untangle that. | 13:03 |
soren | Especially because there are more benefits on the way with that sort of setup. | 13:03 |
soren | They are also completely separate repositories, so I don't think it should be much of a problem. | 13:04 |
dprince | soren: Sure. I'm fine w/ that. Thanks for the explanation. Couple more things for you now that I'm mucking in PPA things. Can you check out lp:~dan-prince/glance/nocheck_nodoc? | 13:04
soren | dprince: Certainly. | 13:05 |
*** citral has joined #openstack | 13:05 | |
dprince | soren: Also. When I push branches for review on the PPA stuff should it be to lp:~dan-prince/ubuntu/glance or lp:~dan-prince/glance/ubuntu? | 13:05 |
*** citral has quit IRC | 13:05 | |
dprince | soren: kind of confuses me... | 13:05 |
soren | dprince: Funny. We actually moved everything *away* from the ubuntu namespace yesterday to reduce confusion :) | 13:06 |
soren | lp:~dan-prince/glance/ubuntu is the answer. | 13:06 |
dprince | soren: well. I just find it odd that my glance code (a different project) goes into the same namespace as the glance PPA code. | 13:07 |
*** hagarth has quit IRC | 13:07 | |
soren | dprince: That's *exactly* why I didn't put it there to begin with :) | 13:07 |
dprince | soren: Sure. Either way. I'll follow protocol. | 13:08 |
soren | dprince: But the last 9 months have revealed that people have a hard time finding the code if it's outside of glance's (or nova's or swift's) namespace. | 13:08 |
soren | dprince: ...and they end up thinking it's just black box stuff applied out of nowhere. | 13:08 |
dprince | soren: Yeah. My vote would have been keep it under ubuntu and just educate people. | 13:09 |
dprince | soren: but its fine. really. | 13:09 |
soren | dprince: You've identified a pattern. | 13:12 |
dprince | soren: which one? The sqlite slashes thing? | 13:13 |
soren | dprince: My giving up arguing for what's reasonable. | 13:13 |
dprince | soren: oh. branches. Yep. | 13:14 |
soren | Yes, that too. | 13:14 |
soren | But nevermind that. | 13:14 |
*** alex-meade has joined #openstack | 13:17 | |
*** jmckind has quit IRC | 13:17 | |
Joelio | Ok guys, | 13:17
Joelio | Could someome recommend a layout for a 4 node cluster please so I can get a better understanding? | 13:20 |
Joelio | I've got an openfiler too to test if I need shared storage | 13:20 |
notmyname | Joelio: http://swift.openstack.org/howto_installmultinode.html describes 1 proxy + 5 storage nodes. in your case you could run either 1 proxy and 3 storage nodes or 1 proxy and 4 storage nodes (the proxy also running the storage servers) | 13:21 |
*** dprince has quit IRC | 13:21 | |
*** krish|wired-in has quit IRC | 13:22 | |
soren | Joelio: Are you wanting to set up a swift or a nova cluster? (compute or object storage) | 13:22 |
Joelio | +soren: A compute cluster | 13:23 |
notmyname | Joelio: ah. my mistake then. sorry :-) | 13:23 |
Joelio | n/p :) | 13:23 |
Joelio | Basically I just want to be able to spawn VMs for testing puppet manifests as well as other sysadmin type stuff | 13:24 |
Joelio | .. but I want resiliency where possible | 13:25 |
soren | Nova won't give you much in terms of resiliency (on its own). | 13:25 |
soren | Designing for the cloud means designing for failure. | 13:26 |
soren | ...when we're talking compute, that is. | 13:26 |
soren | You should be able to rely pretty well on Swift. | 13:27 |
*** mahadev has joined #openstack | 13:27 | |
Joelio | Ok, I appreciate that.. what I'm after is distributed storage for VM images and the possibility to reinstantiate the VM on other nodes in the event of a node failure (doesn't need to be automatic) | 13:28 |
Joelio | with ideally VM storage and compute running on the same system | 13:28
soren | I'm not sure why you want to complicate things by adding Nova to the mix. | 13:28 |
soren | Sounds like you just want SAN backed regular VM's. Or sheepdog backed or whatever. | 13:28 |
Joelio | Yea, pretty much! | 13:29 |
*** zns has quit IRC | 13:29 | |
soren | Joelio: Ok... So do that :) | 13:29 |
*** santhosh has quit IRC | 13:30 | |
*** santhosh has joined #openstack | 13:30 | |
Joelio | Can I use openstack in this way or am I completely missing the point | 13:31 |
*** zenmatt has joined #openstack | 13:31 | |
*** jpuchala has quit IRC | 13:31 | |
*** mahadev has quit IRC | 13:31 | |
soren | Joelio: Well, you can, but as I said: I'm not sure why you want to complicate things by adding Nova to the mix. Just use regular VM's? | 13:32
*** Zangetsue has quit IRC | 13:34 | |
*** mahadev has joined #openstack | 13:34 | |
*** omidhdl has quit IRC | 13:34 | |
*** Zangetsue has joined #openstack | 13:36 | |
*** j05h has quit IRC | 13:38 | |
*** dendro-afk is now known as dendrobates | 13:41 | |
*** arun_ has quit IRC | 13:46 | |
*** zns has joined #openstack | 13:48 | |
*** f4m8 is now known as f4m8_ | 13:49 | |
*** jmckind has joined #openstack | 13:51 | |
*** Zangetsue has quit IRC | 13:52 | |
*** santhosh has quit IRC | 13:52 | |
*** j05h has joined #openstack | 13:53 | |
*** yamahata_lt has joined #openstack | 13:55 | |
*** amccabe has joined #openstack | 13:58 | |
*** dprince has joined #openstack | 13:59 | |
*** zns has quit IRC | 14:00 | |
*** guynaor has left #openstack | 14:01 | |
*** jamesurquhart has joined #openstack | 14:02 | |
*** j05h has quit IRC | 14:03 | |
*** j05h has joined #openstack | 14:04 | |
*** arun_ has joined #openstack | 14:05 | |
dprince | soren: When patching nova with the PPA patches I get the following hunk offsets: http://paste.openstack.org/show/1301/ | 14:06
dprince | soren: want me to push a branch to fix those? | 14:06 |
*** zns has joined #openstack | 14:06 | |
*** kakoni has quit IRC | 14:09 | |
creiht | zykes-: It doesn't have a built in CDN service, but incorporating a CDN service on top of swift isn't that difficult | 14:13 |
*** pquerna has quit IRC | 14:13 | |
*** pquerna has joined #openstack | 14:13 | |
soren | dprince: Sure. | 14:14 |
creiht | zykes-: There are public containers that allow public access, but you don't get geographic distribution | 14:14
*** mahadev has quit IRC | 14:16 | |
*** shentonfreude has joined #openstack | 14:19 | |
*** imsplitbit has joined #openstack | 14:20 | |
*** mray has joined #openstack | 14:25 | |
*** jkoelker has joined #openstack | 14:28 | |
*** jmckind has quit IRC | 14:30 | |
ccooke | Anyone worked on the Xenserver support? | 14:30 |
*** mray has quit IRC | 14:31 | |
ccooke | I'm trying to work out how I can tell whether the openstack components installed on the Xen dom0 are actually doing anything | 14:31
*** zns has quit IRC | 14:32 | |
*** jmckind has joined #openstack | 14:32 | |
dabo | ccooke: what specifically are you looking for? | 14:32 |
ccooke | dabo: documentation would be nice :-) | 14:33 |
dabo | ccooke: it always is :) But what do you mean by "doing anything"? | 14:33
ccooke | well, http://wiki.openstack.org/XenServerDevelopment says to copy a number of files from the openstack tree onto the Xen host | 14:34 |
ccooke | I'd like some way of finding out if those files are, in fact, doing anything | 14:34 |
dabo | ccooke: Those would be copied to the domU instance, not dom0. | 14:35 |
*** bkkrw has joined #openstack | 14:35 | |
zykes- | creiht: is it planned to integrate anything like it into swift? | 14:35 |
ccooke | dabo: ah, terminology. Yes. | 14:35 |
ccooke | they have been copied to domU. | 14:35 |
creiht | zykes-: Are you talking about CDN like services, or integration with CDNs? | 14:36 |
dabo | ok, so your domU is configured as described on the wiki page. Now start up the services | 14:36 |
*** mray has joined #openstack | 14:36 | |
ccooke | dabo: Start which services, where? | 14:37 |
ccooke | (nova services are already started. I am trying to debug things) | 14:39 |
dabo | I usually run a screen session, with each service in its own window. But to run, say, compute, do the following: 1) cd ~/openstack/nova 2) . novarc 3) sudo ./bin/nova-compute --flagfile ../nova.conf | 14:39 |
ccooke | I'm using the packaged versions, which start via upstart | 14:39 |
ccooke | but they are configured and started already | 14:39 |
zykes- | creiht: service. | 14:39 |
dabo | ok; didn't realize that | 14:39 |
creiht | zykes-: not that I know of | 14:40 |
ccooke | (and yes, I'm using the latest PPA packages) | 14:40 |
ccooke | (which match the latest branch in bzr) | 14:40 |
zykes- | aok | 14:40 |
dabo | what are you trying to debug, then? | 14:40 |
ccooke | dabo: why nothing works :-) | 14:40 |
ccooke | I'm just nearing the end of a complete wipe and reinstall just to make sure I can get some hopefully-clean error reports | 14:41
dabo | I don't mean to sound abrupt, but if you were to spell out what you tried to do, and what, if any, output you got, I might be able to help more | 14:41 |
ccooke | Ah, okay | 14:41 |
ccooke | I've been in here several times with these errors; sorry | 14:42 |
ccooke | basically, I create an instance and it remains in "scheduling" forever | 14:42 |
ccooke | can't see any sign that it tried to contact the Xen host | 14:42 |
creiht | zykes-: though a simple cdn could be created by using swift as the backing storage, and putting some caching servers at the edges (like squid or varnish) that have some extra smarts that know how to talk to swift | 14:42 |
dabo | when you do 'xe vm-list', does the instance show up? is it marked as running? | 14:43 |
ccooke | no and no | 14:43 |
dabo | ccooke: ok, this is where I start messing with the code. Since I explicitly start the services, I can easily stop them, add some debugging output to the code, and then restart the service | 14:44 |
dabo | I don't know the best way to do that in your setup | 14:44 |
ccooke | huh. | 14:46 |
ccooke | Finally got through the reinstall | 14:46 |
ccooke | and with the latest packages... it's now stuck at networking | 14:46 |
ccooke | hmm | 14:47 |
dabo | ccooke: I generally stick a bunch of log output messages into the code so I can see where the code gets stuck. | 14:49 |
ccooke | right | 14:49 |
ccooke | I thought this was released code? | 14:49 |
dabo | ccooke: it is, and is working for most people. This is to determine what's different about your installation that's messing it up. | 14:50 |
*** nelson has quit IRC | 14:50 | |
*** zns has joined #openstack | 14:50 | |
*** nelson has joined #openstack | 14:50 | |
*** jmckind has quit IRC | 14:51 | |
*** jmckind has joined #openstack | 14:53 | |
*** j05h has quit IRC | 14:53 | |
ccooke | Hmm. Well, looks like I didn't add a network this time through. | 14:53 |
ccooke | The VM is now "building" | 14:53 |
ccooke | and has been for a few minutes. Can't tell if it's stuck or not :-/ | 14:54 |
ccooke | ah. Looks like a python error | 14:55 |
ccooke | (nova): TRACE: Error: local variable 'instance_obj' referenced before assignment | 14:55 |
ccooke | from nova-compute | 14:55 |
* ccooke will check the code and see what's up | 14:55 | |
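An aside on the traceback ccooke quotes: "local variable 'instance_obj' referenced before assignment" is the classic Python bug where a name is bound on only one code path. A minimal reproduction with hypothetical names, not the actual nova code:

```python
def spawn(succeed: bool):
    if succeed:
        instance_obj = {"state": "building"}   # name bound only on this branch
    # on the failure path instance_obj was never assigned...
    return instance_obj["state"]               # ...so this raises UnboundLocalError

try:
    spawn(False)
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError

# The usual fix: bind a default before the branch so every path defines the name.
def spawn_fixed(succeed: bool):
    instance_obj = None
    if succeed:
        instance_obj = {"state": "building"}
    return instance_obj["state"] if instance_obj else "error"
```

This is why the error surfaces only on failure paths: the happy path binds the name, so testing success cases never trips it.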
ccooke | right, I see | 14:56 |
ccooke | dabo: do you assume that nova-compute will be running *on* the Xen domU? | 14:56 |
dabo | ccooke: Yes. That is required; the other services can be running anywhere. | 14:57 |
ccooke | right | 14:57 |
ccooke | then you need to fix your documentation, because it doesn't say that and *implies* that it is not the case | 14:57 |
dabo | ccooke: that's only true for XenServer. Other hypervisors do not have that restriction | 14:58 |
ccooke | dabo: I'm referring specifically to the XenServer documentation | 14:58 |
ccooke | http://wiki.openstack.org/XenServerDevelopment | 14:58 |
*** mahadev has joined #openstack | 14:58 | |
*** zaitcev has joined #openstack | 14:59 | |
dabo | that page is a guide to setting up the domU. It isn't a description of everything involved with using XenServer | 14:59 |
ccooke | that is the only documentation you have for this feature, so I've been told and so I've been able to find | 15:00 |
*** zenmatt has quit IRC | 15:00 | |
ccooke | I mean, if someone can help me get a working installation I have no problem *creating* the documentation | 15:01 |
dabo | ccooke: perhaps annegentle can help you locate the docs you need | 15:02 |
ccooke | that would be excellent | 15:02 |
ccooke | and as I say, I have no problem creating the docs... I just need a working system in order to do so | 15:03 |
*** j05h has joined #openstack | 15:03 | |
*** jmckind has quit IRC | 15:04 | |
* soren heads to dinner | 15:04 | |
pgregory | yay, I found a way of ssh'ing into an instance from a remote machine without the need for public IP's! | 15:04 |
pgregory | not 'really' easy, but it works. | 15:05 |
*** j05h has quit IRC | 15:05 | |
*** dragondm has joined #openstack | 15:06 | |
ccooke | So... annegentle or anyone else... where *is* the documentation for setting up openstack to use a XenServer hypervisor? | 15:06 |
ccooke | or, if such documentation does not exist, how can I actually get it working? | 15:07 |
termie | ccooke: i think it is in the wiki, will look | 15:07 |
*** zenmatt has joined #openstack | 15:07 | |
termie | ccooke (it is still a little early for developers to be online) | 15:07 |
termie | ccooke: the majority being in gmt-6 through gmt-8 | 15:08 |
ccooke | Damn | 15:08 |
* ccooke is at work, in London. Not convenient for help... | 15:09 | |
termie | there are definitely more xen guys around soonly, i am not one of them but i am looking in the wiki | 15:09 |
*** xavicampa has quit IRC | 15:09 | |
termie | ccooke: here is some stuff: http://wiki.openstack.org/XenServerDevelopment | 15:09 |
ccooke | termie: Sorry... that's the documentation I've just been abusing as incorrect and misleading :-) | 15:10 |
termie | ah | 15:10 |
termie | well, hopefully there is more :) | 15:10 |
termie | i want the info too as at some point i am expected to work with xen things as well | 15:10 |
termie | pvo, dragondm: ping? | 15:11 |
*** daveiw has quit IRC | 15:12 | |
dprince | ccooke: I recently struggled through some of the Xenserver setup myself. | 15:13 |
ccooke | dprince: oh yes? | 15:13 |
dprince | ccooke: let me catch up on the IRC stuff to see where you are at. | 15:13 |
pvo | termie: pong | 15:13 |
termie | pvo: ccooke is looking for some xenserver setup docs | 15:14 |
dprince | ccooke: do you have dom0 setup with the xenapi plugins? | 15:14 |
*** j05h has joined #openstack | 15:14 | |
pvo | we have some on the wiki... one sec. | 15:14 |
dprince | ccooke: and basically configured dom0 according to the wiki termie mentioned? | 15:14 |
antonym | ccooke: those are the main docs, it's just missing the part where you have to install the environment into a VM on the machine | 15:14 |
pvo | hmm, had some...looks like they've moved. looking. | 15:15 |
ccooke | okay. To be clear... | 15:15 |
ccooke | I've tried following this: http://wiki.openstack.org/XenServerDevelopment | 15:15 |
antonym | ccooke: yeah, those are the primary instructions | 15:15 |
ccooke | I've reached the understanding that this document is incorrect and misleading. | 15:15 |
pvo | yep | 15:15 |
antonym | what problem did you run into? | 15:15 |
ccooke | well, let's take this slightly slowly and avoid misunderstandings, please? | 15:16 |
termie | (thanks for helping out folks) | 15:16 |
termie | s/out/out,/ | 15:16 |
ccooke | I think the main problem is there's no context in the document and some information is missing. I'd be happy to update the document once I know *what* is wrong | 15:17 |
*** krish|wired-in has joined #openstack | 15:17 | |
ccooke | there's also been a little confusion here about domU/dom0 - I've had one person say that document is for domU, one for dom0. The only actual interaction in the *document* makes sense only on dom0 | 15:17 |
dprince | ccooke: Can you check this link out: http://www.mail-archive.com/openstack-xenapi@lists.launchpad.net/msg00021.html | 15:18 |
ccooke | it's also clear that nova-compute has to run on the XenServer host, but the documentation does not indicate this at all | 15:18 |
antonym | for the xenserver hypervisor, you have to run an instance on the xenserver in order to provision instances | 15:18 |
dprince | ccooke: That thread (w/ Ant and Ewan) might explain the state of XenServer and your options in running nova-compute on dom0 vs. domu. | 15:18 |
antonym | originally that was not the case but changes were made that required that | 15:19 |
ccooke | so, there's no actual clear answer, as yet? | 15:19 |
*** RickB17 has quit IRC | 15:19 | |
dprince | ccooke: and yes. That information should be more clear on the wiki. So having you add it would be great. | 15:19 |
antonym | lemmie look at the wiki and i'll update a few things on it | 15:19 |
ccooke | that explains why there's no real documentation | 15:19 |
antonym | it's accurate for the most part except for a few core pieces :) | 15:19 |
ccooke | antonym: I'm sorry. When you say "you have to run an instance", do you mean an instance of nova-compute or a VM with nova-compute in it? The context is unclear | 15:20 |
ccooke | antonym: that's what I thought :-) | 15:20 |
termie | ccooke: vm with nova-compute, we usually refer to instance as a guest vm | 15:20 |
antonym | yeah, so you run an instance of nova-compute within a vm | 15:20 |
dprince | ccooke: anyway. You are talking to the experts now (pvo/ antonym) so you should be in good hands now. (I bow out as well) | 15:20 |
ccooke | dprince: Thanks a lot | 15:21 |
antonym | and inside the instance, you configure nova-compute to point to the hypervisor it resides on | 15:21 |
ccooke | hmm. So you can't run it in dom0 at all? | 15:21 |
ccooke | that sounds... complicated to manage | 15:21 |
ccooke | although I guess it's more robust for a cluster. | 15:21 |
pvo | ccooke: you *could* I suppose, but it isn't recommended by citrix... that and the ancient version of python. | 15:21 |
antonym | ccooke: unfortunately not, it would involve installing a lot of packages to the hypervisor itself which could potentially break xenserver | 15:21 |
termie | ccooke: from what i understand it is the preferred way, xenserver has many specific compatibility requirements | 15:22 |
antonym | plus by default xenserver partitions only have 4gb of space | 15:22 |
ccooke | Okay. | 15:22 |
antonym | and they don't really give you the flexibility to change that | 15:22 |
antonym | we're kind of irked about running it in a domU :P | 15:22 |
ccooke | I can imagine | 15:22 |
antonym | but it seems to work well once it's up and running | 15:22 |
ccooke | hmm | 15:23 |
antonym | disadvantage is that it's just more things to maintain, advantage is, that it's portable and very easy to deploy out and doesn't eat up resources in dom0 | 15:23 |
ccooke | What is it that makes it necessary to site the nova-compute on a VM? | 15:23 |
ccooke | access to /sys/hypervisor ? | 15:23 |
antonym | yep | 15:23 |
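(For the curious: /sys/hypervisor is a sysfs tree that only a Xen-aware kernel exposes, which is why a probe like the sketch below — path and semantics assumed from the exchange above — returns None anywhere else:)

```python
import os

def hypervisor_uuid(sysfs_root="/sys/hypervisor"):
    # Read the domain's UUID from the Xen sysfs tree; absence of the
    # file means we are not running under a Xen-aware kernel.
    path = os.path.join(sysfs_root, "uuid")
    try:
        with open(path) as f:
            return f.read().strip()
    except (IOError, OSError):
        return None
```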
ccooke | can't you bridge that? | 15:23 |
ccooke | I mean, you've already added plugins to Xen | 15:24 |
ccooke | then nova-compute could live wherever the rest of nova does | 15:24 |
antonym | although we're going a different route. citrix injects data into an image, whereas we drop the VHD directly to the filesystem ready to go | 15:24 |
antonym | ccooke: nova-compute is a worker tho, you need one of those per hypervisor | 15:24 |
ccooke | antonym: ... argh. | 15:25 |
*** vernhart has quit IRC | 15:25 | |
termie | in general i actually rather like the idea of standardizing on running the instances within jails/VMs, i think in the future it might not be crazy to run nova-compute in an lxc container | 15:25 |
antonym | so having it live per server makes more sense than centralizing it | 15:25 |
ccooke | antonym: I'm sure something could be done, though | 15:25 |
antonym | well that's a big piece of the scalability :), if all the hypervisors pull jobs, it scales better than running a ton of compute workers on one box | 15:26 |
*** stewart has quit IRC | 15:27 | |
*** stewart has joined #openstack | 15:29 | |
ccooke | so, do you need one nova-compute per hypervisor, or can you use a redundant pair, say, to command an entire Xen pool? | 15:29 |
antonym | at this point it's one per hypervisor | 15:29 |
ccooke | no support for pools, then? | 15:30 |
antonym | as each nova.conf for compute has settings to point to the specific xenapi | 15:30 |
antonym | ccooke: not at this point | 15:30 |
ccooke | I see | 15:30 |
ccooke | okay, then | 15:30 |
ccooke | Build a natty image on the Xen box, install nova-compute into that, I guess? | 15:31 |
*** troytoman-away is now known as troytoman | 15:31 | |
ccooke | ... hmm. This makes the networking rather more complex, doesn't it? | 15:31 |
antonym | maverick might be better since it supports python 2.6 out of the box | 15:31 |
antonym | i think natty uses 2.7 now | 15:31 |
ccooke | antonym: it does, yes | 15:31 |
ccooke | I'm currently running the rest of nova on natty | 15:31 |
antonym | cool | 15:32 |
antonym | should be backwards compatible | 15:32 |
antonym | we've been running everything on squeeze without an issue | 15:32 |
antonym | ccooke: made a few notes on the wiki page to clarify | 15:33 |
antonym | you can technically run all of nova in the domU if you want, but if you want to separate it out, just make sure the compute node points to the rabbit, glance, and db of the rest of the nova env | 15:34 |
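(Putting antonym's advice together, a domU compute node's flag file might look roughly like this — flag names are illustrative, from Cactus-era nova, so check your release's documentation:)

```
# nova.conf for the compute domU (illustrative Cactus-era flag names)
--connection_type=xenapi
--xenapi_connection_url=https://<dom0-address>
--xenapi_connection_username=root
--xenapi_connection_password=<password>
# point at the shared services of the rest of the nova environment
--rabbit_host=<rabbit-address>
--sql_connection=mysql://nova:<password>@<db-host>/nova
--glance_host=<glance-address>
```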
*** dendrobates is now known as dendro-afk | 15:35 | |
ccooke | antonym: Right | 15:35 |
*** dendro-afk is now known as dendrobates | 15:35 | |
antonym | ccooke: does that help out a bit then? | 15:36 |
ccooke | I'll probably move that to a natty VM in the Xen host | 15:36 |
ccooke | Massively :-) | 15:36 |
ccooke | It's great to get replies :-) | 15:36 |
ccooke | Sorry for being a little aggressive over the docs, I've been beating my head against them for days | 15:36 |
antonym | cool, ping me if you have other questions, i'm extremely familiar with xenserver :D | 15:36 |
antonym | no problem | 15:36 |
*** rnirmal has joined #openstack | 15:37 | |
ccooke | heh. I've spent a lot of time adminning ESX and XenServer installations, but this is my first serious look at openstack. | 15:37 |
ccooke | Hopefully I'll be doing a PoC here to see if we can use it internally | 15:37 |
antonym | cool, it's got a few things to work out still but i think you'll be happy with it once you get it working | 15:38 |
ccooke | *nod* | 15:38 |
ccooke | we're talking here about targeting the release after next, from the looks of things | 15:38 |
ccooke | personally, I'm rather wanting to build something I had a few years ago - hypervisors on demand :-) | 15:40 |
ccooke | (well, back then it was webservers on demand, but hey) | 15:40 |
termie | ccooke: not a bad idea for a service, we've actually been doing some talk around bare-metal provisioning so that we can do OpenStackAsAService | 15:41 |
termie | ccooke: which would amount to in most cases, hypervisor on demand | 15:42 |
ccooke | saw that | 15:42 |
ccooke | what I had a few years ago is a policy engine | 15:42 |
*** Ryan_Lane has joined #openstack | 15:43 | |
ccooke | Feed it stats and monitoring, and it responded by shutting down and starting servers | 15:43 |
termie | oh, you mean more of an auto-scaling system | 15:43 |
ccooke | sort of | 15:43 |
ccooke | simple power use, really | 15:43 |
termie | comes to the same approach, just depends which metrics you care about | 15:44 |
*** zenmatt has quit IRC | 15:44 | |
ccooke | we have quite heavy power bills, and being able to have 80% of the servers turned off for half the day would make a *big* difference | 15:44 |
ccooke | yes, quite | 15:44 |
termie | i haven't looked into it too much but scalr has done a lot of work in that area | 15:44 |
ccooke | there are only a few issues, really. | 15:44 |
termie | and have mentioned that specific use case | 15:44 |
ccooke | one of the biggest issues is that any system like that *has* to have a dryrun mode :-) | 15:45 |
ccooke | (properly simulated, too) | 15:45 |
ccooke | the simplest case, though, is just looking at aggregated memory/cpu/io load and making simple decisions about how much headroom you need. | 15:46 |
ccooke | Oh, and always starting *two* servers any time you need *one* :-) | 15:46 |
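(The headroom policy described above, the "always start *two* when you need *one*" rule, and the dry-run mode ccooke insists on can be sketched like this — names and thresholds are invented for illustration:)

```python
def scale_decision(load, capacity, headroom=0.25):
    # Compare aggregate load plus desired headroom against current
    # capacity; return servers to start (+) or stop (-).  Starts come
    # in pairs so a single boot failure can't leave you short.
    needed = load * (1.0 + headroom)
    if needed > capacity:
        return 2
    if needed < capacity - 2:   # comfortably over-provisioned
        return -1
    return 0

def act(decision, start, stop, dry_run=True):
    # Dry-run by default: a system that powers servers off must be
    # able to simulate its decisions before acting on them.
    if dry_run:
        return decision
    if decision > 0:
        for _ in range(decision):
            start()
    elif decision < 0:
        for _ in range(-decision):
            stop()
    return decision
```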
ccooke | The thing I'd love to do with openstack, though, is make that idea more flexible. Look at predicted traffic load and shunt capacity between datacentres, or to Amazon and any other cloud provider when necessary | 15:47 |
ccooke | Shut down expensive sites in preference | 15:47 |
ccooke | that sort of thing | 15:47 |
termie | ccooke: i think there are a lot of people interested in that, so you may want to figure out which posse to be part of | 15:48 |
termie | lots of people refer to part of that as 'hybrid' cloud as well, in reference to it using multiple providers | 15:48 |
*** rchavik has quit IRC | 15:49 | |
ccooke | termie: it'll depend heavily on the PoC I'll be doing here | 15:49 |
ccooke | if that goes well, I should be able to commit some dev time to it | 15:49 |
termie | cool, hope it does then | 15:49 |
ccooke | me too :-) | 15:50 |
*** Ryan_Lane has quit IRC | 15:53 | |
*** maplebed has joined #openstack | 15:58 | |
*** MotoMilind has joined #openstack | 15:58 | |
*** krish|wired-in has left #openstack | 15:59 | |
*** obino has quit IRC | 16:00 | |
*** bkkrw has quit IRC | 16:00 | |
*** zenmatt has joined #openstack | 16:02 | |
*** photron_ has joined #openstack | 16:03 | |
*** dprince has quit IRC | 16:06 | |
*** purpaboo is now known as lurkaboo | 16:06 | |
*** jakedahn has joined #openstack | 16:12 | |
*** KnuckleSangwich has joined #openstack | 16:17 | |
*** nacx has quit IRC | 16:23 | |
*** jmeredit has joined #openstack | 16:26 | |
*** h0cin has joined #openstack | 16:29 | |
*** fabiand__ has quit IRC | 16:34 | |
*** obino has joined #openstack | 16:36 | |
*** jakedahn has quit IRC | 16:38 | |
*** Ryan_Lane has joined #openstack | 16:43 | |
*** jtran has joined #openstack | 16:45 | |
*** zenmatt has quit IRC | 16:55 | |
*** zenmatt has joined #openstack | 16:59 | |
*** nopzor- has joined #openstack | 17:00 | |
*** jdurgin has joined #openstack | 17:04 | |
*** shentonfreude has quit IRC | 17:04 | |
*** jkoelker has quit IRC | 17:06 | |
*** obino has quit IRC | 17:07 | |
*** jmeredit has quit IRC | 17:08 | |
*** obino has joined #openstack | 17:09 | |
*** NelsonN has joined #openstack | 17:09 | |
*** shentonfreude has joined #openstack | 17:10 | |
*** nerens has quit IRC | 17:13 | |
*** pguth66 has joined #openstack | 17:16 | |
*** dmi_ has quit IRC | 17:22 | |
*** obino has quit IRC | 17:23 | |
*** obino has joined #openstack | 17:23 | |
vishy | soren: ping | 17:25 |
*** dmi_ has joined #openstack | 17:27 | |
*** jakedahn has joined #openstack | 17:30 | |
*** jtran has left #openstack | 17:32 | |
*** obino has quit IRC | 17:34 | |
*** obino has joined #openstack | 17:36 | |
*** KnuckleSangwich has quit IRC | 17:37 | |
*** dmi_ has quit IRC | 17:37 | |
*** e1mer has quit IRC | 17:40 | |
*** jmeredit has joined #openstack | 17:40 | |
*** dmi_ has joined #openstack | 17:42 | |
*** koolhead17 has joined #openstack | 17:43 | |
*** yamahata_lt has quit IRC | 17:46 | |
*** ccooke has quit IRC | 17:49 | |
creiht | has anyone else noticed that the openstack mailing list archive is a bit behind? | 17:49 |
creiht | https://lists.launchpad.net/openstack/ | 17:49 |
*** rostik has joined #openstack | 17:50 | |
termie | creiht: yeah, we emailed thierry about it this morning | 17:52 |
*** ccooke has joined #openstack | 17:56 | |
creiht | k | 17:57 |
*** clauden_ has joined #openstack | 17:58 | |
*** nopzor- has quit IRC | 17:59 | |
*** clauden_ has quit IRC | 17:59 | |
*** clauden_ has joined #openstack | 17:59 | |
*** rnirmal has quit IRC | 18:00 | |
openstackjenkins | Project nova build #882: SUCCESS in 2 min 39 sec: http://jenkins.openstack.org/job/nova/882/ | 18:04 |
openstackjenkins | Tarmac: Sanitize get_console_output results. See bug #758054 | 18:04 |
uvirtbot | Launchpad bug 758054 in nova "If the console.log contains control characters, get console output fails with UnknownError" [Medium,In progress] https://launchpad.net/bugs/758054 | 18:04 |
*** mahadev has quit IRC | 18:08 | |
*** mahadev has joined #openstack | 18:08 | |
*** patcoll has joined #openstack | 18:09 | |
*** AlexNeef has joined #openstack | 18:17 | |
*** rnirmal has joined #openstack | 18:22 | |
*** tblamer has joined #openstack | 18:24 | |
*** rnirmal_ has joined #openstack | 18:24 | |
*** rnirmal has quit IRC | 18:28 | |
*** rnirmal_ has quit IRC | 18:29 | |
*** tblamer has quit IRC | 18:30 | |
*** dubsquared has joined #openstack | 18:31 | |
vishy | jaypipes: ping | 18:32 |
*** rnirmal has joined #openstack | 18:35 | |
*** bcwaldon has joined #openstack | 18:42 | |
*** jmckind has joined #openstack | 18:44 | |
*** mahadev has quit IRC | 18:45 | |
uvirtbot | New bug: #778678 in nova "nova.virt.xenapi.vmops _run_ssl() should write directly to stdin instead of file" [High,Triaged] https://launchpad.net/bugs/778678 | 18:46 |
*** mray is now known as mattray | 18:46 | |
termie | jaypipes: also ping | 18:51 |
*** mattray is now known as mray | 18:52 | |
*** fabiand__ has joined #openstack | 18:52 | |
*** mray has left #openstack | 18:53 | |
*** mdomsch has joined #openstack | 19:00 | |
*** agarwalla has joined #openstack | 19:00 | |
*** mattray has joined #openstack | 19:06 | |
*** koolhead17 has quit IRC | 19:13 | |
*** koolhead17 has joined #openstack | 19:13 | |
*** ctennis has quit IRC | 19:18 | |
*** mgoldmann has joined #openstack | 19:18 | |
*** mdomsch_ has joined #openstack | 19:22 | |
*** brd_from_italy has joined #openstack | 19:23 | |
*** mdomsch has quit IRC | 19:24 | |
*** ctennis has joined #openstack | 19:31 | |
*** ctennis has joined #openstack | 19:31 | |
*** agarwalla has quit IRC | 19:31 | |
*** markvoelker has quit IRC | 19:36 | |
*** zul has quit IRC | 19:43 | |
*** hggdh has quit IRC | 19:50 | |
*** hggdh has joined #openstack | 19:50 | |
*** guynaor has joined #openstack | 19:52 | |
*** Dumfries has joined #openstack | 19:53 | |
Dumfries | kpepple: about? | 19:53 |
*** msivanes has quit IRC | 19:57 | |
*** jmckind has quit IRC | 19:59 | |
*** jmckind has joined #openstack | 19:59 | |
*** holoway has joined #openstack | 20:01 | |
*** icarus901 has quit IRC | 20:01 | |
*** bcwaldon has quit IRC | 20:07 | |
*** fabiand__ has quit IRC | 20:08 | |
*** NelsonN has quit IRC | 20:08 | |
*** NelsonN has joined #openstack | 20:10 | |
jaypipes | vishy, termie: pong (but I'm taking a personal day, so might not be here for long..) | 20:10 |
*** mdomsch_ has quit IRC | 20:11 | |
termie | jaypipes: just heard that you were talking about separating out a service library | 20:11 |
termie | jaypipes: so wanted to be pointed at any docs on what you were thinking if they exist | 20:12 |
vishy | jaypipes: no worries, I think I figured it out. I was trying to find the milestones for nova but I figured out that I had to create them separately | 20:12 |
jaypipes | vishy: gotcha | 20:13 |
jaypipes | termie: yeah, well kind of :) we talked about standardizing the way openstack server daemons are a) spin up (the wsgi/paste stuff) and b) controlled with a daemon script that can especially be used in testing. | 20:14 |
*** bcwaldon has joined #openstack | 20:14 | |
jaypipes | termie: I haven't gotten around to writing up that doc yet :( | 20:15 |
jaypipes | termie: I should be able to get to it this weekend, though. | 20:15 |
*** rnirmal has quit IRC | 20:17 | |
*** imsplitbit has quit IRC | 20:18 | |
jaypipes | termie: when I create the etherpad for it, I'll shoot you an email, ok? | 20:18 |
vishy | pvo: ping | 20:19 |
termie | jaypipes: sure | 20:19 |
pvo | vishy: pong | 20:19 |
vishy | pvo: shared ip groups? Did we decide we are not implementing? | 20:19 |
*** AlexNeef has quit IRC | 20:20 | |
vishy | (I'm going through all of the old blueprints and retargeting as necessary) | 20:20 |
pvo | from what I understand, the concept is fairly complicated with the context... and how its going to work with the natting | 20:20 |
pvo | it isn't our focus this next sprint. | 20:20 |
vishy | pvo: but it will be implemented (at some point)? | 20:21 |
pvo | at some point, yes. | 20:21 |
vishy | ok I'm going to put it in diablo with no milestone for now | 20:21 |
vishy | should i assign it to your team or leave it unassigned? | 20:22 |
pvo | yea, that would work for now. | 20:22 |
pvo | assign it back to us. | 20:22 |
pvo | I know tr3buchet was talking to you about some of it yesterday, no? | 20:22 |
*** jmeredit has quit IRC | 20:28 | |
*** jfluhmann has quit IRC | 20:32 | |
vishy | pvo: we were talking about multinic mostly | 20:32 |
vishy | pvo: is cory wright on your team? | 20:32 |
pvo | yes | 20:33 |
vishy | pvo: it seems like this is superseded by NaaS stuff? or do you still need this for something? https://blueprints.launchpad.net/nova/+spec/xs-ovs | 20:33 |
pvo | vishy: I think that one isn't exactly superseded. That work is for getting support into xenserver itself for ovs which is independent of naas. | 20:35 |
pvo | naas will provide info to ovs, but you don't have to run the ovs controller. | 20:35 |
vishy | pvo: it seems like the NaaS implementation is supposed to have hypervisor integration | 20:36 |
vishy | pvo: so I would expect that part of it to be implemented there | 20:36 |
vishy | pvo: is this blocking your launch somehow though? | 20:36 |
vishy | pvo: because NaaS may not be available by your move-over date | 20:36 |
pvo | are you in SAT next week? | 20:37 |
vishy | pvo: yes | 20:38 |
pvo | lets talk more then. I think there are lots of timelines to go through | 20:38 |
vishy | i think we're down on tuesday? | 20:38 |
vishy | pvo: sounds good. I'm just trying to clean up all the old hanging blueprints | 20:38 |
dubs | vishy: that xs-ovs work is close to being complete, btw. | 20:38 |
vishy | i'll leave this one as deferred for now | 20:38 |
comstud | vishy- i need to update the guest agent BP to reflect the new agent and its location it looks like | 20:40 |
comstud | i see it references the old agent | 20:40 |
comstud | not sure when it should be considered complete, either | 20:40 |
comstud | it works for xenserver | 20:41 |
comstud | the BP title _is_ prefixed with 'xs' | 20:42 |
vishy | comstud: I think it should really be supported in all hypervisors before we consider it complete, but perhaps we should create separate blueprints for other hypervisors | 20:42 |
vishy | should we mark it complete and create new blueprints? | 20:42 |
comstud | that would sound good to me | 20:43 |
comstud | now | 20:43 |
comstud | I notice there's an ESX agent in tools/ in nova | 20:43 |
comstud | not sure what to do with that | 20:43 |
comstud | ie, fold it in to https://launchpad.net/openstack-guest-agents or keep it separate | 20:44 |
comstud | or move our agents into the nova code base | 20:44 |
comstud | etc | 20:44 |
comstud | atm, i probably have more important things to work on, so i'm not going to worry about it | 20:45 |
comstud | makes sense to close xs-guest-agent and create new BPs for whatever work needs done now | 20:45 |
comstud | IMO | 20:45 |
*** omidhdl has joined #openstack | 20:45 | |
vishy | ok, marking implemented | 20:46 |
*** brd_from_italy has quit IRC | 20:46 | |
comstud | cools, thnx | 20:47 |
comstud | oops, race condition on the update | 20:47 |
comstud | heh | 20:48 |
*** h0cin has quit IRC | 20:52 | |
*** bcwaldon has quit IRC | 20:56 | |
*** rnirmal has joined #openstack | 20:58 | |
*** rnirmal has quit IRC | 20:58 | |
*** rnirmal has joined #openstack | 20:59 | |
openstackjenkins | Project nova build #883: SUCCESS in 2 min 41 sec: http://jenkins.openstack.org/job/nova/883/ | 21:04 |
openstackjenkins | Tarmac: Simple fix for this issue. Tries to raise an exception passing in a variable that doesn't exist, which causes an error. | 21:04 |
*** dragondm has quit IRC | 21:05 | |
_vinay | Hi | 21:06 |
_vinay | I am running | 21:06 |
_vinay | nova-manage network delete 10.0.0.0/27 | 21:07 |
_vinay | 2011-05-06 07:15:08,011 CRITICAL nova [-] Network must be disassociated from project admin before delete | 21:07 |
*** alex-meade has quit IRC | 21:07 | |
_vinay | where is the association b/w project and network | 21:07 |
_vinay | and how do I delete it? | 21:07 |
_vinay | is it done by nova-manage? | 21:10 |
vishy | is anyone here that was at the watch/notification discussion during the summit? | 21:10 |
vishy | _vinay: nova-manage project scrub admin | 21:11 |
_vinay | cool .. it worked.. thanks vishy | 21:12 |
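(For the record, the working sequence from the exchange above — a session sketch reconstructed from the commands quoted in this conversation:)

```
$ nova-manage network delete 10.0.0.0/27
CRITICAL nova [-] Network must be disassociated from project admin before delete
$ nova-manage project scrub admin     # drop the project/network association
$ nova-manage network delete 10.0.0.0/27
```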
comstud | vishy: i was there | 21:13 |
*** omidhdl1 has joined #openstack | 21:14 | |
comstud | in some form | 21:14 |
*** guynaor has left #openstack | 21:15 | |
*** omidhdl has quit IRC | 21:15 | |
comstud | vishy: matt dietz is working on that for RAX | 21:16 |
_cerberus_ | I was there, yeah | 21:16 |
_cerberus_ | glenc was the one giving the talk at the time | 21:17 |
comstud | someone from ntt and glen | 21:17 |
comstud | yea | 21:17 |
comstud | disney guy in the back had lots of comments | 21:17 |
comstud | sorry i'm horrible with names | 21:17 |
_vinay | vishy so how do I associate my new network with project admin ? | 21:17 |
*** allsystemsarego_ has quit IRC | 21:17 | |
*** lborda has quit IRC | 21:17 | |
*** lborda has joined #openstack | 21:19 | |
*** dendrobates is now known as dendro-afk | 21:20 | |
vishy | it happens automatically when you launch an instance | 21:23 |
*** lborda has quit IRC | 21:25 | |
*** lborda has joined #openstack | 21:26 | |
_vinay | yep it does :) thanks vishy | 21:30 |
*** keds has quit IRC | 21:35 | |
*** dendro-afk is now known as dendrobates | 21:36 | |
*** aa_driancole has joined #openstack | 21:38 | |
*** aa_driancole has left #openstack | 21:38 | |
*** mgoldmann has quit IRC | 21:46 | |
*** dragondm has joined #openstack | 21:51 | |
*** nphase has quit IRC | 21:51 | |
*** shentonfreude has quit IRC | 21:52 | |
*** omidhdl1 has left #openstack | 21:54 | |
*** patcoll has quit IRC | 22:00 | |
*** photron_ has quit IRC | 22:01 | |
vishy | _cerberus_: ping | 22:04 |
_cerberus_ | Hey man | 22:04 |
_cerberus_ | vishy: ^^ | 22:05 |
vishy | _cerberus_: were you in the notifications/watch discussion at the summit? | 22:06 |
_cerberus_ | Yep | 22:06 |
vishy | ah i see comstud also responded | 22:06 |
_cerberus_ | Right | 22:06 |
vishy | so I'm wondering what the result was | 22:06 |
_cerberus_ | There was an etherpad created with the feedback. Hold on | 22:06 |
vishy | there is a blueprint about storing data in the db as well | 22:06 |
_cerberus_ | http://etherpad.openstack.org/notifications | 22:07 |
*** rostik has left #openstack | 22:07 | |
vishy | ok cool | 22:09 |
vishy | so did everyone just decide to keep working on their own version? | 22:09 |
_cerberus_ | Unfortunately that wasn't made entirely clear. | 22:09 |
*** ctennis has quit IRC | 22:09 | |
*** jakedahn has quit IRC | 22:09 | |
_cerberus_ | I was hoping that by simply pushing messages to a queue, people could do whatever the hell they wanted to | 22:09 |
vishy | is the suggested queue going to be burrow? | 22:10 |
*** jakedahn has joined #openstack | 22:10 | |
_cerberus_ | Well, spoke to eday about that. My current implementation uses rabbit | 22:10 |
_cerberus_ | But I tried to write it in a modular way so you can dump into whatever queue you felt like | 22:10 |
_cerberus_ | From there, what *we* want to do is implement PubSubHubBub. I've got a worker that consumes the queue and presents the ATOM feed https://github.com/Cerberus98/yagi | 22:11 |
_cerberus_ | Very incomplete atm | 22:11 |
vishy | so it sounds like this is still undefined | 22:11 |
pgregory | hey all | 22:11 |
_cerberus_ | vishy: yes and no | 22:11 |
_cerberus_ | I think we settled on a generic message format, along with JSON blob for the rest of the pertinent data | 22:12 |
pgregory | I've spent the last few hours scouring the interweb looking for any clues as to how I might implement a poor man's version of Amazon's IP-less ssh access. | 22:12 |
_cerberus_ | From there, the delivery mechanism is up in the air | 22:12 |
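(A minimal sketch of the queue-agnostic envelope idea described above — field names are illustrative, not the agreed format:)

```python
import json
import queue

def notify(driver, event_type, publisher_id, payload):
    # Generic envelope with a free-form JSON payload; the delivery
    # driver is pluggable, so any object with a .put() will do
    # (rabbit, burrow, or a plain in-process queue).
    message = {
        "event_type": event_type,
        "publisher_id": publisher_id,
        "payload": payload,
    }
    driver.put(json.dumps(message))

q = queue.Queue()  # stand-in for a real message queue
notify(q, "compute.instance.create", "compute.host1", {"instance_id": 42})
```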
vishy | _cerberus_: I'm going to approve yours for the moment, but think there needs to be a little more communication between the groups | 22:12 |
pgregory | If anyone has any ideas, I'd really appreciate it. | 22:12 |
_cerberus_ | vishy: TBH, I haven't see anything other than what NTT had mentioned at the time. | 22:12 |
_cerberus_ | vishy: My plan was to next week aggregate everything in that pad with everything I've got going and send an email to the list asking for feedback | 22:13 |
vishy | _cerberus_: which milestone should i target it to? | 22:13 |
vishy | milestone 2 seem reasonable? | 22:13 |
_cerberus_ | What's the date on that one? | 22:13 |
vishy | (That is 2 months from now) | 22:13 |
_cerberus_ | Yeah, we can target that | 22:14 |
vishy | ok good. I think the approach of aggregating and emailing is good | 22:14 |
_cerberus_ | Cool | 22:15 |
*** jakedahn_ has joined #openstack | 22:15 | |
*** posulliv has quit IRC | 22:15 | |
*** dubsquared has quit IRC | 22:17 | |
*** troytoman is now known as troytoman-away | 22:17 | |
*** dysinger has joined #openstack | 22:18 | |
*** jakedahn has quit IRC | 22:18 | |
*** jakedahn_ is now known as jakedahn | 22:18 | |
*** dendrobates is now known as dendro-afk | 22:19 | |
pgregory | seems what's needed is some sort of mod_rewrite equivalent for ssh, anyone know of such a beast? | 22:28 |
*** dmi_ has quit IRC | 22:29 | |
*** _vinay has quit IRC | 22:29 | |
dysinger | pgregory: like sshuttle? | 22:30 |
dysinger | sorry I am late to the convo | 22:30 |
pgregory | dysinger: sshuttle? | 22:30 |
dysinger | https://github.com/apenwarr/sshuttle | 22:30 |
*** lborda has quit IRC | 22:32 | |
*** dragondm has quit IRC | 22:32 | |
*** dragondm has joined #openstack | 22:33 | |
*** j05h has quit IRC | 22:35 | |
*** clauden_ has quit IRC | 22:36 | |
pgregory | dysinger: hmm, I can't work out from that description if it does what I need or not. | 22:38 |
dysinger | sorry - what was the question ? All I saw was "we need mod_rewrite for ssh" | 22:38 |
pgregory | dysinger: I need a way of allowing remote ssh access to instances on the cloud without the need for public IP's. | 22:39 |
pgregory | dysinger: Amazon does this by encoding the local IP into a URL, like this... | 22:40 |
pgregory | ssh -i mykey.priv ec2-192-168-1-1-eu-west.compute.amazonaws.com | 22:40 |
pgregory | and I presume they have some clever proxy server that redirects (mod_rewrite style) to the correct instance IP. | 22:41 |
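(Decoding that naming scheme is mechanical; a proxy could use something like the sketch below — a guess at the convention from the example above, not Amazon's documented behaviour — to pick the backend address:)

```python
import re

def ip_from_hostname(hostname):
    # Pull the dotted IP out of an ec2-W-X-Y-Z-... style hostname,
    # e.g. ec2-192-168-1-1-eu-west.compute.amazonaws.com -> 192.168.1.1
    m = re.match(r"ec2-(\d{1,3})-(\d{1,3})-(\d{1,3})-(\d{1,3})-", hostname)
    if m is None:
        return None
    return ".".join(m.groups())
```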
*** mattray has quit IRC | 22:43 | |
*** rnirmal has quit IRC | 22:44 | |
pgregory | any thoughts, or is it just not doable? | 22:45 |
dysinger | pgregory: setup a VPC @ ec2 and use sshuttle to get to the "gateway" node | 22:52 |
dysinger | then you can ssh to the rest of them | 22:52 |
dysinger | (shrug) | 22:52 |
dysinger | I don't know of a ssh proxy server-like thing that you are looking for | 22:53 |
*** jmckind has quit IRC | 22:53 | |
pgregory | dysinger: thanks, I'll take a look at what sshuttle does. | 22:55 |
dysinger | if you got a mac & homebrew - it's easy "brew install sshuttle" | 22:56 |
dysinger | sshuttle -r <remote-ip-addr> 0.0.0.0/0 | 22:56 |
*** dmi has joined #openstack | 22:58 | |
*** dmi is now known as Guest79312 | 22:58 | |
*** amccabe has quit IRC | 23:02 | |
vishy | pgregory: just set up a route and ssh to private ip? | 23:14 |
vishy | seems a little easier than trying to set up a proxy on the server... | 23:15 |
vishy | pgregory: or are you worried about it being too complicated for clients? | 23:15 |
*** zns has quit IRC | 23:20 | |
*** ctennis has joined #openstack | 23:31 | |
*** ctennis has joined #openstack | 23:32 | |
*** ctennis has joined #openstack | 23:32 | |
*** dragondm has quit IRC | 23:37 | |
*** foxtrotgulf has joined #openstack | 23:45 | |
*** nelson has quit IRC | 23:45 | |
*** nelson has joined #openstack | 23:46 | |
*** Dumfries has quit IRC | 23:49 | |
*** dragondm has joined #openstack | 23:52 | |
*** koolhead17 has quit IRC | 23:54 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!