*** nati2 has joined #openstack | 00:00 | |
*** kbringard has quit IRC | 00:04 | |
*** nati2 has quit IRC | 00:07 | |
*** RicardoSSP has quit IRC | 00:07 | |
*** nati2 has joined #openstack | 00:09 | |
*** rods has quit IRC | 00:15 | |
vidd | bwong, dashboard gets that info from keystone | 00:16 |
*** jdg has quit IRC | 00:18 | |
bwong | vidd ok | 00:19 |
bwong | vidd i don't know what happened but all my mysql data is missing after I installed it and it worked just fine. | 00:19 |
bwong | vidd im going to reinstall it. | 00:20 |
vidd | bwong, your dashboard creates a token when you log in and that gets stored into dashboard's database | 00:23 |
vidd | but if the token is expired in dashboard, then it verifies against keystone and makes another token | 00:23 |
*** morfeas has joined #openstack | 00:24 | |
*** negronjl has quit IRC | 00:25 | |
*** jog0 has quit IRC | 00:29 | |
*** localhost has joined #openstack | 00:29 | |
*** jakedahn has joined #openstack | 00:31 | |
*** dwcramer has joined #openstack | 00:34 | |
*** obino has quit IRC | 00:38 | |
bwong | vidd hmm couldn't figure out what happened | 00:41 |
bwong | vidd good thing I have a extra copy of openstack installed! | 00:42 |
*** nati2 has quit IRC | 00:42 | |
bwong | vidd I just finished installing openstack, I created an instance | 00:42 |
bwong | just for testing | 00:42 |
bwong | but i can't ping it. | 00:42 |
bwong | according to dashboard it is up. | 00:42 |
*** nati2 has joined #openstack | 00:43 | |
*** bcwaldon has quit IRC | 00:44 | |
*** nati2 has quit IRC | 00:45 | |
uvirtbot | New bug: #892404 in devstack "keystone connection / authentication problems" [Undecided,New] https://launchpad.net/bugs/892404 | 00:46 |
*** nati2 has joined #openstack | 00:46 | |
vidd | bwong, what security group rules are active on that instance? | 00:53 |
*** bwong has quit IRC | 00:54 | |
*** ksteward1 has joined #openstack | 00:55 | |
*** Pr0toc0l has joined #openstack | 00:56 | |
uvirtbot | New bug: #891936 in horizon "Missing pycrypto package from pip-requires list of openstack-dashboard" [Undecided,New] https://launchpad.net/bugs/891936 | 00:58 |
*** bengrue has quit IRC | 01:00 | |
*** oubiwann1 has quit IRC | 01:01 | |
*** dpippenger has quit IRC | 01:02 | |
*** rnorwood has quit IRC | 01:03 | |
*** baffle has quit IRC | 01:04 | |
*** chrism_ has quit IRC | 01:05 | |
*** aliguori has quit IRC | 01:05 | |
*** CaptTofu has quit IRC | 01:11 | |
*** baffle has joined #openstack | 01:16 | |
*** aliguori has joined #openstack | 01:23 | |
*** baffle has quit IRC | 01:24 | |
*** rnorwood has joined #openstack | 01:26 | |
*** aliguori has quit IRC | 01:30 | |
*** sloop has quit IRC | 01:30 | |
*** sloop has joined #openstack | 01:31 | |
*** nati2 has quit IRC | 01:35 | |
*** hezekiah_ has quit IRC | 01:36 | |
*** nati2 has joined #openstack | 01:38 | |
*** chomping has quit IRC | 01:40 | |
*** FallenPegasus has joined #openstack | 01:42 | |
*** jdurgin has quit IRC | 01:43 | |
*** MarkAtwood has quit IRC | 01:45 | |
*** haji has quit IRC | 01:47 | |
*** nati2 has quit IRC | 01:53 | |
*** jakedahn has quit IRC | 01:53 | |
*** nati2 has joined #openstack | 01:54 | |
*** ohnoimdead has quit IRC | 01:56 | |
*** nati2 has quit IRC | 01:56 | |
*** dpippenger has joined #openstack | 01:57 | |
*** FallenPegasus has quit IRC | 01:57 | |
*** nati2 has joined #openstack | 01:58 | |
*** Pr0toc0l has quit IRC | 01:59 | |
*** nati2 has quit IRC | 02:00 | |
*** nati2 has joined #openstack | 02:00 | |
*** dpippenger has quit IRC | 02:02 | |
*** nati2 has quit IRC | 02:02 | |
*** nati2 has joined #openstack | 02:03 | |
*** nati2 has quit IRC | 02:07 | |
*** sdake has quit IRC | 02:10 | |
jasona | hmm | 02:17 |
jasona | is anyone using a hypervisor other than KVM under openstack ? | 02:17 |
*** maplebed has quit IRC | 02:18 | |
*** po has quit IRC | 02:20 | |
jasona | hmm | 02:21 |
jasona | anyone awake at all ? | 02:21 |
vidd | yes | 02:21 |
jasona | so.. do people put anything under kvm under os, in reality ? | 02:22 |
*** nati2 has joined #openstack | 02:22 | |
jasona | other than kvm i mean | 02:22 |
vidd | i dont understand | 02:22 |
jasona | ok. you can pick different hypervisors | 02:22 |
vidd | right | 02:22 |
jasona | people i've talked to are using kvm. | 02:22 |
jasona | is anyone using hypervisrs other than kvm ? | 02:22 |
*** nati2 has quit IRC | 02:23 | |
jasona | for example, esxi ? | 02:23 |
vidd | yes...there are ppl using other hypervisors.... | 02:23 |
vidd | not me personally =] | 02:23 |
jasona | and is it safe to say that kvm is the predominant hypervisor being used by openstack compared to the others ? | 02:23 |
vidd | kvm is usually the lowest common denominator | 02:24 |
jasona | ok. | 02:24 |
jasona | so another question, if you are buying kit for an openstack implementation i have the following | 02:24 |
jasona | o compute boxes for nova | 02:24 |
jasona | o jbods for swift | 02:24 |
jasona | what are the pieces you buy for nova-volume and glance | 02:24 |
vidd | i dont understand your question here either | 02:25 |
vidd | these are open source programs...you dont "buy" them | 02:25 |
jasona | no i know | 02:27 |
jasona | but you buy hardware to run them on | 02:27 |
jasona | i wasn't referring to the software layer | 02:27 |
jasona | but to the infrastructure under them | 02:27 |
jasona | so starting again, | 02:27 |
jasona | for nova you'd buy servers with lots of ram and cpus right ? | 02:27 |
vidd | you put them on whatever you feel like putting them on | 02:27 |
jasona | for swift you'd buy jbods attached to 'basic' servers that don't need lots of ram or cpu | 02:27 |
jasona | so far am i roughly on track ? | 02:28 |
vidd | im not sure what you mean by "jbods" | 02:28 |
jasona | just a bunch of disks. a carrier which has disks | 02:29 |
jasona | without raid etc | 02:29 |
vidd | i suppose you could do that | 02:30 |
jasona | which leads me to the last two parts. where do you run nova-volume from.. and what does glance need. | 02:30 |
vidd | glance just needs a home | 02:30 |
vidd | if you are going to implement swift, you might just as well have glance store its stuff there | 02:31 |
vidd | and for nova-volume, attach some sans to your nova-controller node | 02:31 |
vidd | *nas | 02:32 |
vidd | or, you can call your dell rep up on the phone and say "come set us up" and get everything....probably preconfigured | 02:34 |
vidd | =] | 02:34 |
vidd | dell has some awesome preconfigured servers | 02:34 |
*** obino has joined #openstack | 02:35 | |
*** woleium has quit IRC | 02:37 | |
*** gyee has quit IRC | 02:45 | |
cloudfly | wait... | 02:46 |
cloudfly | jbod? | 02:46 |
cloudfly | also the dell c series are crap | 02:46 |
cloudfly | i'd not buy them | 02:46 |
cloudfly | jbod is ancient btw | 02:46 |
cloudfly | that's like 1999 hardware | 02:46 |
vidd | cloudfly, he obviously doesnt want to do his homework =] | 02:47 |
cloudfly | i am kind of amazed he knew what a jbod was. | 02:47 |
*** nyeates has quit IRC | 02:47 | |
cloudfly | i mean that's pretty esoteric these days | 02:47 |
vidd | cloudfly, perhaps thats why i never heard of it =] | 02:48 |
cloudfly | http://en.wikipedia.org/wiki/JBOD#JBOD | 02:48 |
cloudfly | just a bunch of drives | 02:48 |
cloudfly | it's a relevant term | 02:48 |
jasona | hmm | 02:48 |
cloudfly | but not quite applicable | 02:48 |
cloudfly | see before there was decent san / nas... you'd buy jbod stacks | 02:48 |
cloudfly | which would allow you to plug more disks into your servers | 02:48 |
cloudfly | usually via scsi interfaces | 02:49 |
WormMan | and now with things like swift, you'll get JBOD again | 02:49 |
jasona | maybe i'm missing something but my understanding from swift is you definitely _dont_ want to put it on a san | 02:49 |
cloudfly | well... jbod wasn't a server | 02:49 |
jasona | so i don't know that i'd say 'doesnt want to do his homework' but maybe more 'checking his homework is in fact right?' | 02:49 |
cloudfly | it was literally a stack of disks | 02:49 |
cloudfly | that would plug into a scsi controller | 02:49 |
jasona | literally a stack of disks ? | 02:49 |
cloudfly | yes | 02:49 |
jasona | that are plugged into something | 02:49 |
cloudfly | like an enclosure as a machine | 02:49 |
cloudfly | yeah | 02:49 |
WormMan | sort of like buying a jbod from dell | 02:49 |
cloudfly | like the storage blade on a blade system | 02:50 |
jasona | and when i looked a t a literal stack of disks plugged into controller it still has to attach to a server in some way, whether inside a server or in an expansion chassis that is plugged into a server | 02:50 |
cloudfly | sure | 02:50 |
cloudfly | but a jbod alone would probably not run swift | 02:50 |
cloudfly | or it wouldn't be a jbod | 02:50 |
vidd | is there a reason you would NOT want to put swift on sans? | 02:50 |
jasona | sigh. and a literal stack of disks won't do it either | 02:50 |
jasona | so i understand. sorry if my terminology wasn't precise. i will fix in future. | 02:50 |
WormMan | and a SAN won't run swift either if you're being picky about it | 02:51 |
cloudfly | vidd it would in theory be conflicting | 02:51 |
jasona | vidd: because the openstack documentation says explicitly to NOT put it on a san ? | 02:51 |
jasona | so why would i do something opposite to what it recommends ? | 02:51 |
cloudfly | you are massively parallelizing across spindles for speed | 02:51 |
cloudfly | involving SAN would just gum up the works | 02:51 |
jasona | what cloudfly said | 02:51 |
* vidd has not had any time with swift...still beating keystone into a functional enterprise =] | 02:51 | |
jasona | so assuming i do understand nova and swift requirements, i'm still trying to make sure i understand nova-volume and glance | 02:52 |
cloudfly | swift is kind of like raid at OSI layer 7 | 02:52 |
cloudfly | =P | 02:52 |
jasona | and nova-volume looks like something that can live on a san.. or iscsi volumes or gpfs | 02:52 |
jasona | so then.. glance lives on what ? a separate 'basic' server ? | 02:52 |
cloudfly | glance can plug into swift | 02:52 |
cloudfly | or it can reside on a server | 02:52 |
cloudfly | usually the head node | 02:53 |
cloudfly | volumes reside on the compute hosts | 02:53 |
jasona | does glance need its own storage ? hmm | 02:53 |
cloudfly | inside of a specified volume group | 02:53 |
jasona | just trying to wrap my head around how it does what it does, sorry for the basic question there.. | 02:53 |
WormMan | glance just sort of sits there, even with swift you're almost certainly going to run its DB on the same place as the nova DB | 02:53 |
vidd | jasona, if you have swift, glance does not need anything except a home....it would be more of a traffic cop | 02:53 |
vidd | like keystone | 02:53 |
cloudfly | yeah | 02:53 |
jasona | right. so where do i give it a home.. do i buy another small box for it to live there ? | 02:53 |
cloudfly | without swift... it needs a butt load of storage | 02:54 |
vidd | but keystone is the traffic cop that likes to give out tickets =] | 02:54 |
cloudfly | jasona depends on your deployment size | 02:54 |
cloudfly | we've been putting glance onto the same box as nova schedulers / api | 02:54 |
jasona | to start with ? really small. maybe 100 cores and 200T disk. | 02:54 |
cloudfly | yeah | 02:54 |
WormMan | the only possible reason you might want glance on another node is if you're launching hundreds of instances at once, as it does do a bit of network traffic | 02:55 |
jasona | the next iteration in about 8 months will be about 4000 cores and 1-5P disk. | 02:55 |
cloudfly | you could house em all on the same server | 02:55 |
cloudfly | no problem | 02:55 |
jasona | hmm ok. | 02:55 |
jasona | so what i wanted to check i am saying is correct is | 02:55 |
jasona | when you buy stuff you will buy: | 02:55 |
jasona | o compute servers with a lot of ram and cpus for nova | 02:55 |
jasona | o JBOD disk bays with a 'cheap' server it is attached to for swift | 02:55 |
jasona | o 'something else' for nova-volume, e.g it could be a san array, a box with raid.. gpfs.. | 02:55 |
jasona | o glance needs a management type box. | 02:55 |
jasona | and i wasn't sure about my last line. | 02:55 |
cloudfly | nah | 02:55 |
cloudfly | not jbods | 02:55 |
cloudfly | storage servers | 02:55 |
cloudfly | basically light on cpu / ram... BIG on disk enclosures | 02:55 |
cloudfly | swift runs in software | 02:56 |
WormMan | or JBOD attached to a storage server, since tier 1 vendors don't do them | 02:56 |
cloudfly | it's better to spread it out | 02:56 |
cloudfly | than run it through a single point of failure | 02:56 |
cloudfly | the idea is healing network | 02:56 |
cloudfly | look at how the dell c2100 is configured | 02:56 |
jasona | i think i'm fine with the jbod part. i'm just making sure i'm not saying something stupid by saying you can put your glance onto a separate small box ? | 02:56 |
cloudfly | it's intended to be a storage server | 02:56 |
cloudfly | but personally i prefer blades | 02:57 |
jasona | and yes cloud, i will be looking at storage servers but.. | 02:57 |
cloudfly | where you can attach storage blades | 02:57 |
jasona | if i am not in a position to buy them, then i end up with an expansion box with disks, attached to 'a server' which is all really basic stuff. | 02:57 |
cloudfly | you can | 02:57 |
cloudfly | put glance on a separate box | 02:57 |
cloudfly | but i don't think it would get you anything | 02:57 |
WormMan | the C2100, as a storage server, 12 whole drives, wow, I'm gonna stick to 1u server and 24 bays(minimum) | 02:57 |
WormMan | luckily, I also have 0 object storage needs in the next 6 months | 02:58 |
cloudfly | heh | 02:58 |
cloudfly | yeah the dells suck | 02:58 |
cloudfly | i said that earlier | 02:58 |
jasona | you can get 'storage servers' with 48 or 60 disks in them now | 02:58 |
jasona | too big for swift ? | 02:58 |
cloudfly | nah | 02:58 |
WormMan | jasona: not from Dell or HP :) | 02:58 |
cloudfly | just... remember | 02:58 |
cloudfly | the idea is swift is distributed across a lot of nodes | 02:58 |
cloudfly | less bottle necks, more redundancy | 02:58 |
cloudfly | massive parallelization is the goal | 02:59 |
jasona | ok. so a rack of 2RU boxes, with 12 drives in each (or 24 2.5" drives) | 02:59 |
jasona | gives you 40 servers and 400+ spindles | 02:59 |
cloudfly | i mean it's up to you to tune as you see fit | 02:59 |
WormMan | jasona: the cost overhead for that few drives will be horrible | 02:59 |
jasona | er 20 servers i mean and 200+ | 02:59 |
cloudfly | HPC needs are different from regular LAMP stack needs | 02:59 |
cloudfly | you have to crunch some numbers yourself | 02:59 |
jasona | wrm: cost overhead for 12 drives in 2ru you mean ? | 03:00 |
WormMan | jasona: yea | 03:00 |
jasona | compared to what for example worm ? a disk expansion bay ? | 03:00 |
WormMan | I ran the numbers, 48 seems to be about right for most things | 03:00 |
WormMan | (for me) | 03:00 |
jasona | it is interesting to get conflicting information from different people here. obviously a bit of difference in how people do things | 03:00 |
jasona | or see things. | 03:01 |
cloudfly | jasona i'd look at what you can top the storage block out at in terms of network speed / disk read / write sustained | 03:01 |
cloudfly | then think about how your users are going to be polling data | 03:01 |
uvirtbot | New bug: #892415 in nova "Hung reboots periodic task should only act on it's own instances" [Undecided,New] https://launchpad.net/bugs/892415 | 03:01 |
cloudfly | more servers more network ports... more network i/o | 03:01 |
cloudfly | but that feeds into switch backplanes | 03:01 |
WormMan | in reality, 2 drives will happily saturate a 1GB port :) | 03:01 |
cloudfly | yeah | 03:01 |
cloudfly | but i feel like with this stuff you go 10G | 03:02 |
cloudfly | it just makes sense with the volumes of data being moved around. | 03:02 |
WormMan | time for a cheap swift optimized array, maybe some cute little ARM | 03:02 |
cloudfly | again that backplane matters | 03:02 |
jasona | so, i am not using any 1G ports at all in this | 03:02 |
jasona | except for mgmt interfaces. | 03:02 |
cloudfly | WormMan you find me an arm that doesn't get saturated by a 10G feed | 03:02 |
jasona | it's 10G to everything. | 03:03 |
WormMan | nah, 1 little arm per drive, 12 drives in an array with an internal 12 port to 10G switch, it will be cute :) | 03:03 |
*** woleium has joined #openstack | 03:03 | |
cloudfly | that would be ... bizarre | 03:03 |
jasona | an ARMy of ARMs wormman ? | 03:03 |
cloudfly | it'd be like swifts within swifts | 03:03 |
cloudfly | yo dawg, etc etc | 03:03 |
jasona | cloudfly it would be an inception of swifts. | 03:04 |
WormMan | I don't get paid to be realistic, we do video games after all :) | 03:04 |
jasona | who is 'we' ? | 03:04 |
WormMan | (I do get paid to curse at multicast though) | 03:04 |
cloudfly | oof | 03:04 |
cloudfly | sorry to hear that | 03:04 |
WormMan | SCEA aka Sony Playstation | 03:04 |
cloudfly | multicast is a nightmare | 03:04 |
jasona | oic. let me think, yes i have one of them. or three even. | 03:04 |
cloudfly | WormMan you guys still making fun of your infosec guys? | 03:04 |
jasona | anyway, time to go do housework. sigh. | 03:05 |
cloudfly | cause you should be =P | 03:05 |
WormMan | it's even more of a nightmare when the Openstack NAT rule translates multicast packets to the external IP on the node and sends them over the internal network :) | 03:05 |
jasona | i bought a ps3 and then the ps3 network died on me for some weeks. so i have no idea what psn is about yet. since i've never connected ;) | 03:05 |
*** sdake has joined #openstack | 03:05 | |
WormMan | luckily that's not even my company :) | 03:05 |
cloudfly | wormman that's okay try playing with SRIOV some day | 03:05 |
WormMan | we're just a consumer of their services | 03:05 |
cloudfly | holy crap was that a mistake | 03:05 |
cloudfly | "why doesn't the card understand 802.1q anymore... what did you do!?!" | 03:06 |
cloudfly | imma grab a quick bite then head to hackerdojo in mtv for a beer. | 03:06 |
cloudfly | cheers all | 03:07 |
uvirtbot | New bug: #891940 in nova "not supporting windows vms of vmware and virtual box" [Undecided,New] https://launchpad.net/bugs/891940 | 03:26 |
*** rustam has quit IRC | 03:29 | |
*** rnirmal has joined #openstack | 03:32 | |
*** rnirmal has quit IRC | 03:32 | |
_rfz | anyone know if openstack supports having the public IP's on the VM interface - and not use NAT? | 03:33 |
errr | wkelly_: ping | 03:35 |
cloudfly | _rfz in theory it's possible' | 03:35 |
cloudfly | but not in any way you'd like it | 03:36 |
*** slop has joined #openstack | 03:37 | |
_rfz | cloudfly - how? and why wouldnt I like it :) | 03:38 |
vidd | i dont understand why glance refuses to work with keystone | 03:54 |
vidd | it would appear that euca and keystone are playing together | 03:55 |
_rfz | vidd, is keystone working correctly? | 03:56 |
vidd | it would apear to be | 03:57 |
vidd | glance -A <token> index fails with error: [Errno 111] ECONNREFUSED | 03:57 |
_rfz | have you updated glance-api.conf and registry.conf? | 03:58 |
*** obino has quit IRC | 03:58 | |
vidd | yes | 03:58 |
_rfz | I had the same problem today | 03:58 |
vidd | you working now? | 03:58 |
_rfz | glance is, I can't ping or axs the vm's via the public address | 03:59 |
vidd | what did you do to fix your issue? | 03:59 |
_rfz | are you still using kial's packages? | 03:59 |
vidd | yes | 03:59 |
_rfz | I added the keystone directives to both glance config files, restarted the services and it worked | 04:00 |
errr | anyone know if with d5 the vnc console works with ie 8 or not? | 04:01 |
errr | wkelly_: if you see this tonight I need yer halp :) | 04:02 |
_rfz | I also made sure keystone was working with: curl -d '{"auth":{"passwordCredentials":{"username": "joeuser", "password": "secrete"}}}' -H "Content-type: application/json" http://localhost:35357/v2.0/tokens | 04:03 |
*** dragondm has joined #openstack | 04:09 | |
vidd | well....i found ONE issue | 04:10 |
vidd | old service port was being called in -api and -registry | 04:10 |
vidd | still doesnt work...but there was that issue | 04:10 |
_rfz | did you download kial's install scripts? | 04:12 |
vidd | ok...i THINK i got it | 04:12 |
vidd | his install scripts do not work for my situation | 04:12 |
vidd | but i use them as a guide to write my own | 04:13 |
_rfz | Yep I'm trying to do the same | 04:13 |
_rfz | but it's one problem after another :) | 04:14 |
vidd | glance-control all restart != service glance-api restart ; service glance-registry restart | 04:14 |
_rfz | working? | 04:14 |
vidd | THERE is my issue | 04:14 |
errr | so do any of yall know if in diablo d5 if the vnc console should work in ie 8 or not? I know in chrome it does not.. | 04:14 |
vidd | errr, i was not able to get vnc console to work on anything yet ... so IDK | 04:15 |
vidd | but....i've been beating on keystone for the past 2 weeks so i have not had any time to look at other stuff | 04:16 |
errr | vidd: ah ok, I have seen it in action on firefox but I only have ie 8 on this box and its locked to corp policy so I cant get ff on there to test with or not. | 04:16 |
_rfz | errr - I have no idea, I try and connect and it says connection time out : ) | 04:16 |
vidd | my windows machines run ie 9 (if they run ie anything) | 04:17 |
errr | ok thanks | 04:17 |
vidd | errr, do you have vnc software? | 04:17 |
*** miclorb_ has joined #openstack | 04:18 | |
errr | vidd: not on the machine Im trying to do the web vnc from | 04:19 |
vidd | errr, i only ask cuzz you might try vnc'ing to an outside comp with some decent apps on it =] | 04:21 |
errr | vidd: the openstack box is only accessable from this locked down corp box, or Id be using my own laptop | 04:22 |
_rfz | errr - what error do you get? | 04:23 |
errr | _rfz: a blank page that says canvas not supported | 04:24 |
errr | I guess its not blank if it says that :) | 04:25 |
*** krow has quit IRC | 04:25 | |
errr | there is also a button on the page that says send ctrl+alt+del | 04:25 |
vidd | errr, you have java on that thing? | 04:25 |
errr | vidd: not that Im seeing | 04:26 |
errr | it does actually have vnc viewer on it now that I have poked around looking for stuff | 04:26 |
vidd | id tell your NETSEC ppl that your getting rope burns on your wrists | 04:27 |
errr | I know right.. | 04:27 |
vidd | but...with both hands tied behind your back you can't slit your wrists =] | 04:28 |
*** duckhand has quit IRC | 04:29 | |
vidd | do you at least have telnet client on that thing? | 04:29 |
vidd | open command prompt, type telnet [ip address] [port] | 04:30 |
errr | vidd: yeah and it has putty thankfully cause i had to log on to restart its nova-vnc but now I only have ie to test with and its failing | 04:31 |
vidd | your NETSEC ppl might have your box locked down so the port isnt open | 04:31 |
* vidd is happy....glance and keystone are talking... | 04:32 | |
uvirtbot | New bug: #891718 in nova "nova components always log to standard error, even when syslog is turned on" [Undecided,New] https://launchpad.net/bugs/891718 | 04:32 |
vidd | now lets see if they will play nice together =] | 04:32 |
*** miclorb_ has quit IRC | 04:33 | |
*** helfrez has joined #openstack | 04:36 | |
zaitcev | I was trying to find out how to create an account since yesterday. | 04:41 |
vidd | an account on....? | 04:42 |
zaitcev | Sorry, on a Swift server. | 04:42 |
*** dragondm has quit IRC | 04:42 | |
vidd | sorry...cant help with that [yet] | 04:42 |
*** jakedahn has joined #openstack | 04:51 | |
uvirtbot | New bug: #892429 in keystone "is_global == 1 don't work with postgresql" [Undecided,New] https://launchpad.net/bugs/892429 | 04:51 |
*** jakedahn has quit IRC | 04:53 | |
*** vladimir3p has joined #openstack | 04:56 | |
vidd | so...horizon can only see keystone | 04:58 |
vidd | i guess those endpoints RE needed | 04:59 |
uvirtbot | New bug: #891555 in keystone "Keystone version response incorrect" [Undecided,New] https://launchpad.net/bugs/891555 | 04:59 |
vidd | _rfz, you still here? | 05:04 |
_rfz | yep | 05:08 |
_rfz | you got it working! | 05:08 |
vidd | yeah | 05:09 |
vidd | im ready to launch an image | 05:09 |
_rfz | did you get eucatools working? | 05:09 |
vidd | euca2ools have always worked for me | 05:10 |
vidd | never had any issues with that | 05:10 |
vidd | it was glance that hated me | 05:10 |
vidd | no glance, no images | 05:10 |
_rfz | i keep getting error: Warning: failed to parse error message from AWS: <unknown>:1:0: not well-formed (invalid token) | 05:11 |
vidd | thats cuzz your keystone and env are not talking the same language | 05:11 |
vidd | compare your EC2_ACCESS_KEY and EC2_SECRET_KEY from your env [env |grep EC2] with your EC2 creds in keystone | 05:13 |
vidd | if they dont match, euca wont work | 05:14 |
_rfz | Oh | 05:16 |
_rfz | I've put the right ones in | 05:17 |
_rfz | still getting the same error | 05:18 |
vidd | the right ones what in where? | 05:18 |
vidd | did you change env or keystone? | 05:18 |
*** vladimir3p has quit IRC | 05:18 | |
_rfz | i checked the keystone table credentials | 05:18 |
vidd | ok...did they match? | 05:19 |
vidd | or did you have to change one? | 05:20 |
_rfz | they were completely different, I updated them in the novarc file | 05:20 |
_rfz | & sourced it | 05:20 |
vidd | NO!!!!!!!! | 05:20 |
_rfz | :/ | 05:21 |
vidd | now nova and keystone dont match | 05:21 |
vidd | its better to change keystone | 05:22 |
_rfz | Okay let me try that | 05:22 |
vidd | now your source file DID have something like 7858h-ytrtr97-yrtr68-876g86 right? | 05:23 |
_rfz | yep | 05:23 |
vidd | you didnt have the messed up username:projectname | 05:24 |
_rfz | na, I remember you told me that last time | 05:25 |
_rfz | it's 222223-232232312-2222-2222:projectname | 05:25 |
vidd | hehe | 05:26 |
vidd | does your nova.conf have "--keystone_ec2_url=http://192.168.15.200:5000/v2.0/ec2tokens" | 05:26 |
_rfz | yep | 05:27 |
_rfz | Okay added, lets see if it works :D | 05:27 |
_rfz | no :) | 05:28 |
vidd | i test it with "euca-describe-availability-zones verbose" | 05:29 |
*** miclorb_ has joined #openstack | 05:29 | |
vidd | what are you testing with? | 05:29 |
_rfz | yep with that | 05:29 |
vidd | =\ | 05:29 |
_rfz | euca-describe-availability-zones verbose | 05:29 |
_rfz | Warning: failed to parse error message from AWS: <unknown>:1:0: syntax error | 05:29 |
_rfz | nova-api.log says POST /services/Cloud/ None:None 400 | 05:30 |
vidd | add --debug | 05:30 |
vidd | and then pastebin the error | 05:30 |
*** Razique_ has joined #openstack | 05:35 | |
*** Razique has quit IRC | 05:36 | |
*** Razique_ is now known as Razique | 05:36 | |
vidd | hello razique | 05:36 |
*** MarkAtwood has joined #openstack | 05:43 | |
*** CaptTofu has joined #openstack | 05:46 | |
*** miclorb_ has quit IRC | 05:47 | |
*** zaitcev has quit IRC | 06:02 | |
*** jakedahn has joined #openstack | 06:20 | |
*** jakedahn has quit IRC | 06:21 | |
*** vidd is now known as vidd-away | 06:22 | |
*** jiva has quit IRC | 06:23 | |
*** ksteward1 has joined #openstack | 06:24 | |
*** jiva has joined #openstack | 06:24 | |
*** ksteward1 has quit IRC | 06:31 | |
*** hadrian has quit IRC | 06:39 | |
*** MarkAtwood has left #openstack | 06:46 | |
*** nati2 has joined #openstack | 06:57 | |
*** vladimir3p has joined #openstack | 07:16 | |
*** vladimir3p has quit IRC | 07:21 | |
*** nati2_ has joined #openstack | 07:22 | |
*** nati2 has quit IRC | 07:22 | |
*** bencherian has joined #openstack | 07:23 | |
*** livemoon has joined #openstack | 07:37 | |
*** ldlework has quit IRC | 07:40 | |
*** bencherian has quit IRC | 07:42 | |
*** ejat has joined #openstack | 07:43 | |
*** CaptTofu has quit IRC | 07:46 | |
*** nerens has joined #openstack | 08:01 | |
*** Pommi has quit IRC | 08:06 | |
Razique | hi all | 08:07 |
Razique | hi vidd-away :) | 08:07 |
*** vidd-away is now known as vidd | 08:08 | |
vidd | hey Razique | 08:08 |
vidd | finally got my scripts to work right | 08:08 |
vidd | im testing now to make sure everything is working | 08:09 |
Razique | great | 08:09 |
Razique | what were u on ? | 08:09 |
vidd | then im going to format, reinstall and go with just the scripts | 08:09 |
vidd | on? like operating system? | 08:10 |
*** nati2_ has quit IRC | 08:13 | |
*** nati2 has joined #openstack | 08:14 | |
*** Pommi has joined #openstack | 08:14 | |
*** nati2 has quit IRC | 08:17 | |
*** nati2 has joined #openstack | 08:17 | |
*** duckhand has joined #openstack | 08:18 | |
*** ejat has quit IRC | 08:19 | |
*** nati2 has quit IRC | 08:20 | |
*** nati2 has joined #openstack | 08:21 | |
*** nati2 has quit IRC | 08:21 | |
*** nati2 has joined #openstack | 08:21 | |
livemoon | hi | 08:22 |
*** scottjg has joined #openstack | 08:22 | |
*** duckhand has quit IRC | 08:25 | |
*** ejat- has joined #openstack | 08:26 | |
*** woleium has quit IRC | 08:27 | |
*** duckhand has joined #openstack | 08:46 | |
*** livemoon has quit IRC | 08:52 | |
*** duckhand_ has joined #openstack | 08:58 | |
*** duckhand has quit IRC | 09:01 | |
*** nati2_ has joined #openstack | 09:02 | |
*** nati2 has quit IRC | 09:05 | |
*** ejat- is now known as ejat | 09:12 | |
*** ejat has joined #openstack | 09:12 | |
*** sannes1 has quit IRC | 09:17 | |
*** colemanc has quit IRC | 09:18 | |
*** scottjg has quit IRC | 09:29 | |
*** helfrez has quit IRC | 09:34 | |
*** helfrez has joined #openstack | 09:35 | |
*** TheOsprey has joined #openstack | 09:37 | |
*** dnjaramba has joined #openstack | 09:38 | |
*** helfrez has quit IRC | 09:41 | |
*** helfrez has joined #openstack | 09:41 | |
*** ejat has quit IRC | 09:50 | |
*** slop has quit IRC | 09:50 | |
*** slop has joined #openstack | 09:51 | |
*** jedi4ever has quit IRC | 09:58 | |
*** ejat has joined #openstack | 10:00 | |
*** ejat has joined #openstack | 10:00 | |
*** helfrez has quit IRC | 10:02 | |
*** helfrez has joined #openstack | 10:03 | |
*** helfrez has quit IRC | 10:07 | |
*** helfrez has joined #openstack | 10:08 | |
Razique | hey lionel | 10:15 |
*** tdi_ has joined #openstack | 10:18 | |
*** tdi has quit IRC | 10:20 | |
*** perlstein has quit IRC | 10:20 | |
*** duckhand_ has quit IRC | 10:20 | |
*** perlstein has joined #openstack | 10:21 | |
*** duckhand has joined #openstack | 10:21 | |
*** rbergeron has quit IRC | 10:23 | |
*** rbergeron has joined #openstack | 10:23 | |
*** nati2 has joined #openstack | 10:24 | |
*** nati2_ has quit IRC | 10:26 | |
*** ejat has quit IRC | 10:27 | |
*** kaigan_ has joined #openstack | 10:32 | |
*** Razique has quit IRC | 10:37 | |
*** nati2_ has joined #openstack | 10:39 | |
*** nati2 has quit IRC | 10:40 | |
*** nati2_ has quit IRC | 10:43 | |
*** nati2 has joined #openstack | 10:44 | |
*** rustam has joined #openstack | 11:16 | |
*** po has joined #openstack | 11:18 | |
*** fabiand__ has joined #openstack | 11:24 | |
*** jedi4ever has joined #openstack | 11:28 | |
*** pothos has quit IRC | 11:31 | |
*** pothos has joined #openstack | 11:32 | |
*** rods has joined #openstack | 11:40 | |
*** pixelbeat has joined #openstack | 12:02 | |
*** TheOsprey has quit IRC | 12:34 | |
*** pixelbeat has quit IRC | 12:37 | |
*** TheOsprey has joined #openstack | 12:42 | |
*** TheOsprey has quit IRC | 12:43 | |
*** coli has joined #openstack | 12:43 | |
coli | Hi, I'm trying to install openstack (nova, glance, swift) and decided to use keystone as a main identity source, however I'm reading in the documentation "Keystone currently allows any valid token to do anything with any account." | 12:46 |
coli | does it mean that Keystone cannot be used in production environment for identity purposes ? | 12:46 |
coli | what do you use in your production implementation for authorisation ? | 12:46 |
*** CaptTofu has joined #openstack | 12:50 | |
*** vernhart has joined #openstack | 12:55 | |
*** po has quit IRC | 12:56 | |
*** kaigan_ has quit IRC | 13:01 | |
*** CaptTofu has quit IRC | 13:11 | |
*** po has joined #openstack | 13:24 | |
*** fabiand__ has left #openstack | 13:28 | |
*** CaptTofu has joined #openstack | 13:29 | |
*** jedi4ever has quit IRC | 13:31 | |
_rfz | morning all! | 13:44 |
*** nickon has joined #openstack | 13:53 | |
*** jiva has left #openstack | 13:53 | |
coli | morning | 13:58 |
coli | by any chance do you have a working production version of openstack? | 13:59
*** TheOsprey has joined #openstack | 13:59 | |
_rfz | coli - I'm currently setting it up in production | 14:13 |
coli | what did you choose as a central authorisation platform ? keystone? | 14:14
_rfz | yep keystone | 14:16 |
coli | did you use it just for nova or swift as well ? | 14:16 |
vidd | coli, your end users will be using the dashboard as a user interface...right? | 14:17 |
_rfz | right now I'm not using swift | 14:18 |
_rfz | heya vidd | 14:18 |
coli | are you aware that documentation for keystone states "Keystone currently allows any valid token to do anything with any account." ? | 14:18 |
coli | however I think this just applies to swift and not nova, do you have any news regarding this ? | 14:18 |
coli | vidd, I was planning to use dashboard which needs keystone, however this little note in keystone documentation which I have quoted above has made me think twice. | 14:19
*** sannes1 has joined #openstack | 14:21 | |
vidd | if the users are only going to use the dashboard to access your system, they are only going to see "their stuff" | 14:21
*** marrusl has quit IRC | 14:21 | |
*** CaptTofu has quit IRC | 14:22 | |
coli | vidd, I was hoping to provide API access as well, not _only_ dashboard. providing API access allows users currently using AWS to migrate their own management applications easily. | 14:23
vidd | coli, all i can say is set up a test environment and see if you can cross-contaminate | 14:24 |
coli | vidd, I hoped that someone has that answer already ;-) | 14:25 |
vidd | this may have already been addressed and the documentation not updated =] | 14:26 |
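vidd's suggestion to "see if you can cross-contaminate" amounts to taking a token scoped to one tenant and using it against another tenant's resources. A minimal sketch of that check follows; the nova endpoint, token, and tenant ids are placeholders, and the actual HTTP call is left to any client.

```python
# Sketch of the cross-tenant isolation check vidd suggests: take a token
# scoped to tenant A and try to list tenant B's servers. A deployment that
# enforces isolation should answer 401 or 403.
# Endpoint and ids below are placeholders, not from the log.
NOVA_URL = "http://127.0.0.1:8774/v1.1"

def cross_tenant_request(token_a, tenant_b_id):
    """Build the request a tenant-A token would make against tenant B."""
    return {
        "url": "%s/%s/servers" % (NOVA_URL, tenant_b_id),
        "headers": {"X-Auth-Token": token_a},
    }

# Status codes that indicate isolation is enforced; a 200 here would
# reproduce the documented "any valid token" concern.
ISOLATED = {401, 403}

def isolation_ok(status_code):
    return status_code in ISOLATED

req = cross_tenant_request("token-for-tenant-a", "tenant-b")
# Issue `req` with any HTTP client; isolation_ok(resp.status) should be True.
```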
coli | vidd, what do you use for nova and swift central identity/authorisation management? | 14:26
*** dwcramer has quit IRC | 14:26 | |
vidd | coli, well...in CLI, on nova, euca seems to have token isolation | 14:27 |
vidd | but im still in testing and do not have multiple accounts | 14:27 |
vidd | dont currently have swift | 14:28 |
*** bencherian has joined #openstack | 14:28 | |
vidd | _rfz, speaking of euca did you ever get your euca to work with keystone? | 14:29 |
coli | vidd, do I understand rightly that you use keystone ? | 14:30 |
vidd | yes | 14:30 |
*** kashyap_ has joined #openstack | 14:31 | |
vidd | but i use flat dhcp networking so only one project per controller node | 14:31
*** sannes1 has quit IRC | 14:31 | |
*** sannes1 has joined #openstack | 14:31 | |
* vidd is still in the proof-of-concept stage | 14:31 | |
coli | vidd, another question then if I may. did you install nova-network on each nova-compute host and assign a public pool to each, or do you have a centralised nova-network? if the latter, do you allow all traffic to flow through nova-network or do you use a hardware gateway? (I was thinking about a hardware gateway or nova-network on each nova-compute host) | 14:32
_rfz | vidd - no I didn't :) I'm working on it again | 14:32 |
_rfz | I went to bed needed some sleep | 14:32 |
coli | I have tested all of the mentioned solutions, however it appears to me that having a single nova-network through which all traffic flows for a number of nova-compute hosts could cause major problems in future. | 14:33
vidd | coli, i would recommend the network-on-each-compute-node if you intend to have more than 4 compute nodes....or if you are going to have multiple physical locations | 14:35
*** sannes1 has quit IRC | 14:35 | |
vidd | but as i said...i am on a one-machine-proof-of-concept setup | 14:36 |
coli | vidd, I'm planning for tens or low hundreds of nodes and a few physical locations ;-) | 14:36
*** rbergeron has quit IRC | 14:36 | |
*** rbergeron has joined #openstack | 14:36 | |
vidd | yeah...multi-node network for sure....or quantum when its ready | 14:36 |
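The network-on-each-compute-node layout vidd recommends is nova's multi-host FlatDHCP mode, where every compute host runs its own nova-network and gateways its own instances. A sketch of the relevant Diablo-era nova.conf flags, with purely illustrative interface names and values:

```ini
# nova.conf (Diablo-era flagfile style) -- illustrative values only
--network_manager=nova.network.manager.FlatDHCPManager
--multi_host=True
--flat_interface=eth1
--public_interface=eth0
```

The fixed-range network itself would also need to be created as multi-host (e.g. via `nova-manage network create` with its multi-host option), and nova-network must be running on every compute host; exact flag names varied between releases, so treat the above as a starting point against the docs for your release.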
coli | vidd, as proof of concept I'm using a setup with 5 identical servers and one extra as management server | 14:36 |
coli | "when its ready" ;-) | 14:36 |
*** TheOsprey_ has joined #openstack | 14:37 | |
coli | I need to get it working and selling by christmas ;-) | 14:37 |
vidd | coli, as a proof of concept im using a 1-core emachine desktop with one eth and 2GB RAM | 14:37
coli | vidd, by any chance do you know anybody who did implement it on a larger scale in production env ? | 14:38 |
*** TheOsprey has quit IRC | 14:39 | |
vidd | i personally dont know anyone with keystone in production | 14:39 |
vidd | Kiall uses keystone, but i dont know if his implementation is in production | 14:39
vidd | WormMan is in production, but i dont know if he uses keystone | 14:40 |
guaqua | coli: i'm looking at the essex release, which will be in may 2012. you should go through the blueprints at launchpad. that's the best way to get acquainted with the plans | 14:41
coli | vidd, what do they use if not keystone (any ideas) ? | 14:42 |
vidd | _rfz, you have any luck with vnc? | 14:42 |
vidd | before there was keystone, there was swauth for swift and euca for nova | 14:43 |
coli | guaqua, tried to read the blueprints however essex is too far away in time, I need something in production by christmas. | 14:43 |
vidd | i know euca plays nice with keystone | 14:43
coli | anybody knows if rackspace is using openstack on the main cloud platform yet ? | 14:44 |
vidd | ok...i have go for a bit...my truck is finally ready to come home from the shop | 14:46 |
*** vidd is now known as vidd-away | 14:46 | |
_rfz | vidd - not yet, mine times out | 14:47
_rfz | but if I launch vncproxy on its own it works | 14:48
_rfz | but right now I'm trying to figure out euca2ools | 14:48 |
vidd-away | when i get back from picking up my truck...i'll give you a hand.... | 14:50 |
_rfz | cheers, I'm going to pastebin all the info so you can check it out | 14:50
vidd-away | _rfz, but you might want to look at my scripts they might help too | 14:51 |
_rfz | where can I grab your scripts? | 14:55 |
*** CaptTofu has joined #openstack | 14:55 | |
vidd-away | https://github.com/vidd/vidd | 14:55 |
*** duckhand has quit IRC | 14:56 | |
_rfz | will do thanks | 14:56 |
*** CaptTofu has quit IRC | 14:59 | |
*** bencherian has quit IRC | 15:00 | |
*** helfrez has quit IRC | 15:06 | |
_rfz | vidd - I wasn't putting :$user in the key! stupid mistake. it's working now thanks | 15:07
*** jingizu_ has quit IRC | 15:09 | |
*** dwcramer has joined #openstack | 15:21 | |
*** kashyap_ has quit IRC | 15:24 | |
*** hack_ has joined #openstack | 15:25 | |
*** hack_ has quit IRC | 15:28 | |
*** sannes1 has joined #openstack | 15:28 | |
*** nati2_ has joined #openstack | 15:34 | |
*** nati2 has quit IRC | 15:37 | |
*** sannes1 has quit IRC | 15:41 | |
*** dwcramer has quit IRC | 15:47 | |
*** stevegjacobs has quit IRC | 15:48 | |
*** hingo has joined #openstack | 15:59 | |
*** dwcramer has joined #openstack | 16:01 | |
*** sannes1 has joined #openstack | 16:05 | |
*** freeflying has joined #openstack | 16:11 | |
uvirtbot | New bug: #892529 in openstack-manuals "nova-direct-api and stack are not documented" [Undecided,New] https://launchpad.net/bugs/892529 | 16:11 |
*** freeflyi1g has quit IRC | 16:13 | |
*** freeflying has quit IRC | 16:20 | |
*** freeflying has joined #openstack | 16:21 | |
*** duckhand has joined #openstack | 16:22 | |
*** dwcramer has quit IRC | 16:25 | |
*** sannes1 has quit IRC | 16:26 | |
*** dwcramer has joined #openstack | 16:26 | |
*** dendro-afk is now known as dendrobates | 16:28 | |
*** chomping has joined #openstack | 16:32 | |
*** dwcramer has quit IRC | 16:38 | |
*** praefect_ has quit IRC | 16:46 | |
*** duckhand has quit IRC | 16:50 | |
*** dendrobates is now known as dendro-afk | 16:52 | |
*** jog0 has joined #openstack | 16:56 | |
*** ejat has joined #openstack | 16:56 | |
*** ejat has joined #openstack | 16:56 | |
*** katkee has joined #openstack | 16:58 | |
*** hadrian has joined #openstack | 17:04 | |
*** dhimaz has joined #openstack | 17:05 | |
*** rnorwood has quit IRC | 17:11 | |
*** rnorwood has joined #openstack | 17:12 | |
*** jog0 has quit IRC | 17:14 | |
*** katkee has quit IRC | 17:15 | |
*** hingo has quit IRC | 17:16 | |
*** nati2_ has quit IRC | 17:17 | |
*** hingo has joined #openstack | 17:17 | |
*** helfrez has joined #openstack | 17:18 | |
*** hggdh has quit IRC | 17:22 | |
*** hggdh has joined #openstack | 17:32 | |
*** rnorwood has quit IRC | 17:33 | |
*** nerens has quit IRC | 17:45 | |
*** jakedahn has joined #openstack | 17:54 | |
*** stuntmachine has joined #openstack | 17:55 | |
*** Moltar has joined #openstack | 17:56 | |
*** nerens has joined #openstack | 18:01 | |
*** pixelbeat has joined #openstack | 18:10 | |
*** bryguy has quit IRC | 18:10 | |
*** bryguy has joined #openstack | 18:11 | |
*** nRy has quit IRC | 18:15 | |
*** nRy has joined #openstack | 18:16 | |
*** vidd-away is now known as vidd | 18:16 | |
vidd | _rfz, so you good now? | 18:17 |
_rfz | vidd - yep thanks mate | 18:18 |
*** nickon has quit IRC | 18:19 | |
vidd | awesome | 18:19 |
vidd | _rfz, do you have vnc working? | 18:20
vidd | any kind of vnc =] | 18:20 |
*** nerens has joined #openstack | 18:21 | |
*** michael__ has joined #openstack | 18:22 | |
_rfz | not yet, I've been documenting everything to this point | 18:26 |
vidd | hehe | 18:26 |
vidd | what do you think of my scripts? | 18:27 |
_rfz | great | 18:29 |
_rfz | I learned quite a bit from them | 18:30 |
uvirtbot | New bug: #892555 in nova "409 Conflict returned when attempting to create an image after failed snapshot" [Undecided,New] https://launchpad.net/bugs/892555 | 18:31 |
*** vernhart has quit IRC | 18:32 | |
vidd | _rfz, im about to blow away my current set-up and verify the scripts are fully functional as is =] | 18:33 |
vidd | make sure there are no holes in them =]] | 18:33 |
*** vernhart has joined #openstack | 18:38 | |
*** rsampaio has joined #openstack | 18:43 | |
*** stuntmachine has quit IRC | 18:53 | |
*** hezekiah_ has joined #openstack | 18:57 | |
*** ejat has quit IRC | 19:01 | |
*** cdub has joined #openstack | 19:02 | |
*** j3ll3 has joined #openstack | 19:03 | |
*** hezekiah_ has quit IRC | 19:04 | |
*** rustam_ has joined #openstack | 19:13 | |
*** rustam has quit IRC | 19:13 | |
*** abecc has joined #openstack | 19:14 | |
*** fysa has quit IRC | 19:16 | |
*** rustam has joined #openstack | 19:18 | |
*** rustam_ has quit IRC | 19:18 | |
*** wilmoore has joined #openstack | 19:23 | |
*** nerens has quit IRC | 19:24 | |
*** fysa has joined #openstack | 19:26 | |
*** j3ll3 has quit IRC | 19:30 | |
*** wilmoore has quit IRC | 19:30 | |
*** rustam_ has joined #openstack | 19:41 | |
*** rustam has quit IRC | 19:41 | |
*** bencherian has joined #openstack | 19:47 | |
*** nerens has joined #openstack | 19:50 | |
*** rustam has joined #openstack | 19:57 | |
*** rustam_ has quit IRC | 19:57 | |
*** perlstein has quit IRC | 20:01 | |
*** bencherian has quit IRC | 20:11 | |
*** rustam has quit IRC | 20:11 | |
*** rustam has joined #openstack | 20:12 | |
*** scottjg has joined #openstack | 20:13 | |
*** bencherian has joined #openstack | 20:15 | |
*** bencherian has quit IRC | 20:15 | |
*** bencherian has joined #openstack | 20:16 | |
*** bencherian has left #openstack | 20:18 | |
*** nerdstein has joined #openstack | 20:19 | |
*** llang629 has joined #openstack | 20:20 | |
*** llang629 has left #openstack | 20:20 | |
*** j^2 has quit IRC | 20:23 | |
*** nerdstein has quit IRC | 20:25 | |
*** bencherian has joined #openstack | 20:25 | |
*** bencherian has quit IRC | 20:25 | |
*** j^2 has joined #openstack | 20:33 | |
*** abecc has quit IRC | 20:37 | |
*** abecc has joined #openstack | 20:40 | |
*** nerdstein has joined #openstack | 20:44 | |
*** edolnx has quit IRC | 20:45 | |
*** nerdstein has quit IRC | 20:45 | |
*** nerens has quit IRC | 20:48 | |
*** jakedahn has quit IRC | 20:56 | |
*** CaptTofu has joined #openstack | 21:00 | |
*** arBmind__ has joined #openstack | 21:14 | |
*** rsampaio has quit IRC | 21:15 | |
*** nerdstein has joined #openstack | 21:16 | |
*** Moltar has quit IRC | 21:20 | |
*** nerdstein has quit IRC | 21:22 | |
*** redconnection has joined #openstack | 21:23 | |
*** agoddard has quit IRC | 21:35 | |
*** dwcramer has joined #openstack | 21:46 | |
*** rsampaio has joined #openstack | 21:51 | |
*** perlstein has joined #openstack | 22:00 | |
*** jakedahn has joined #openstack | 22:06 | |
*** rsampaio has quit IRC | 22:14 | |
*** jakedahn has quit IRC | 22:17 | |
*** rsampaio has joined #openstack | 22:18 | |
cloudfly | ? | 22:22 |
*** nerdstein has joined #openstack | 22:24 | |
*** jkyle has joined #openstack | 22:33 | |
jkyle | heya | 22:33 |
*** ldlework has joined #openstack | 22:44 | |
*** nerdstein has quit IRC | 22:56 | |
*** jkyle has quit IRC | 23:05 | |
*** woleium has joined #openstack | 23:08 | |
*** sidarta has joined #openstack | 23:14 | |
*** TheOsprey_ has quit IRC | 23:21 | |
*** arBmind__ has quit IRC | 23:21 | |
*** WormMan has quit IRC | 23:27 | |
*** WormMan has joined #openstack | 23:28 | |
*** julian_c has joined #openstack | 23:34 | |
*** bryguy has quit IRC | 23:39 | |
*** bryguy has joined #openstack | 23:44 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!