*** rods has quit IRC | 00:00 | |
*** mnaser has quit IRC | 00:01 | |
*** stewart has quit IRC | 00:03 | |
*** dragondm has quit IRC | 00:04 | |
*** Lynelle has quit IRC | 00:05 | |
*** littleidea has joined #openstack | 00:08 | |
*** stewart has joined #openstack | 00:12 | |
*** mszilagyi has quit IRC | 00:12 | |
*** FallenPegasus has quit IRC | 00:19 | |
*** FallenPegasus has joined #openstack | 00:19 | |
*** FallenPegasus is now known as MarkAtwood | 00:19 | |
*** rchavik has joined #openstack | 00:19 | |
notmyname | RickB17: the best example is to use the swift tool that comes with the swift code | 00:26 |
notmyname | RickB17: I've got some other, not so full-featured, examples in my github account (eg https://github.com/notmyname/python_scripts/tree/master/cf_speed) | 00:27 |
RickB17 | thanks, i'll check it out | 00:29 |
*** huslage has quit IRC | 00:32 | |
*** ton_katsu has joined #openstack | 00:34 | |
*** worstadmin has joined #openstack | 00:37 | |
*** netmarkjp has joined #openstack | 00:37 | |
*** po has quit IRC | 00:38 | |
*** littleidea has quit IRC | 00:38 | |
*** cole has quit IRC | 00:42 | |
*** netmarkjp has left #openstack | 00:44 | |
*** stewart has quit IRC | 00:45 | |
*** huslage has joined #openstack | 00:50 | |
*** ncode has joined #openstack | 00:53 | |
*** ncode has quit IRC | 00:53 | |
*** worstadmin has quit IRC | 00:55 | |
*** worstadmin has joined #openstack | 00:56 | |
*** ncode has joined #openstack | 00:56 | |
*** jdurgin has quit IRC | 00:58 | |
*** maplebed is now known as maplebed|afk | 01:02 | |
winston-d | notmyname : hi, may i ask you a question about swift? | 01:03 |
notmyname | winston-d: sure. I'll try to help | 01:03 |
winston-d | notmyname : thanks. i am wondering what would happen if i doubled the number of partitions while the zones/devices remain the same. Will that affect the availability of the service (there is existing data on Swift)? | 01:05 |
notmyname | winston-d: you're wandering into uncharted territory :-) | 01:08 |
*** mgius has quit IRC | 01:08 | |
notmyname | so as a general rule, changing the partition power of an existing cluster doesn't lose data, it just makes it unavailable until the cluster settles down. replication pretty much has to move everything somewhere else. more data == longer time | 01:09 |
notmyname | it should be bad enough to strongly reconsider doing it | 01:09 |
notmyname | however | 01:10 |
*** mgius has joined #openstack | 01:10 | |
*** littleidea has joined #openstack | 01:11 | |
notmyname | err..scratch that "however". I was considering if there might be a corner case. I don't think there is (in the 30 seconds I've spent thinking of it) | 01:12 |
notmyname | winston-d: so consider your cluster unavailable until all of the data can be moved all around | 01:13 |
winston-d | notmyname : actually someone did that here in their setup, and found that after doing that, the swift service became very unstable. The intention was to test the behavior when expanding swift with more storage nodes & devices. They then suspected that Swift doesn't scale well. I strongly doubted that, and I challenged their way of doing the test. | 01:14 |
notmyname | if you keep the same partition count (ring partition power), adding zones and nodes is very easy. that's why we say choose a good partition power up front when you are building your cluster | 01:16 |
winston-d | notmyname : so can I say that the rule of thumb for extending/scaling swift is to make sure the partition size remains the same as the original one? | 01:17 |
notmyname | in this person's tests, did the swift cluster eventually settle back down? | 01:17 |
notmyname | winston-d: yes. the partition power should never change for the life of the whole cluster (as a general rule--ie don't do it unless you know exactly what you're doing) | 01:18 |
*** ccc11 has joined #openstack | 01:19 | |
winston-d | notmyname : they did it last Saturday and said the service has not fully recovered. I will check if it's fully settled down now. | 01:19 |
notmyname | it should depend on the amount of data and the size of the internal network between the storage nodes | 01:20 |
winston-d | notmyname : what's the exact definition of 'good partition power'? Let's say, I have only 5 storage nodes and each with 24 disks for now. And I intend to extend it late this year or early next year to total 7 nodes with 24 disks each. what's the 'good partition number' for this case? | 01:21 |
Xenith | So I understand that RAID isn't really recommended with swift. But has anyone tried to build a swift deployment on top of FreeBSD and ZFS? | 01:21 |
notmyname | good partition power == smallest power of 2, N, such that 2**N > number_of_drives * 100 when your cluster is at maximum size | 01:23 |
notmyname | winston-d: so figure out how big your cluster can be (budget, DC space, etc) and go from there | 01:23 |
notmyname | winston-d: so with your numbers there, a power of 15 would be minimum (2**14 is 16384 which is < 7*24*100) | 01:25 |
notmyname | basically the target is roughly 100 partitions on each storage volume. that gives each partition about 1% of the space | 01:26 |
winston-d | notmyname : so you mean even if i have only 5 nodes for now, but in future, if it will grow to 50 nodes, then i should set partition power to 17? (2^16 < 50*24*100 < 2^17) | 01:26 |
notmyname | winston-d: exactly | 01:27 |
notmyname | winston-d: the disadvantage of setting the power too high is that ring management operations take longer and more space will be used by the fs for directory inodes | 01:28 |
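A minimal sketch of the sizing rule notmyname describes (~100 partitions per storage volume, rounded up to the next power of 2); min_partition_power is a hypothetical helper, not part of swift:

```python
import math

def min_partition_power(nodes, disks_per_node, parts_per_disk=100):
    # smallest N such that 2**N >= nodes * disks_per_node * parts_per_disk
    return math.ceil(math.log2(nodes * disks_per_node * parts_per_disk))

print(min_partition_power(7, 24))   # 15, matching the 7-node example above
print(min_partition_power(50, 24))  # 17, matching the 50-node example
```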
*** kidrock has joined #openstack | 01:28 | |
winston-d | notmyname : i'm confused. if the partition number is set according to future size, then before i actually reach that size, each disk will have way more partitions than 100. right? | 01:29 |
notmyname | yes | 01:29 |
notmyname | Xenith: no one has done that and reported on it, that I'm aware of. I suspect that RAIDZ will have the same issues as RAID 5 or 6 in that small, random read/write loads (which swift does almost exclusively) will be bad for performance | 01:30 |
winston-d | notmyname : what if some day i find 50 nodes is too small and it needs to be extended to 100? the partition power will have to be altered eventually. | 01:30 |
notmyname | winston-d: and there is the confounding factor of differing volumes having differing weights in the ring :-) | 01:30 |
notmyname | winston-d: build a 2nd cluster and migrate the data. or have _long_ periods of downtime while the cluster rebalances all the data | 01:31 |
winston-d | notmyname : hmm. i have more questions now. :) | 01:33 |
notmyname | that's why I'm here :-) | 01:34 |
winston-d | notmyname : in the previous case, what if one keeps the size of each partition the same? for 50 nodes w/ 24 disks each, the partition power is set to 17, and for 100 nodes, partition power set to 18. Will swift still suffer service downtime while it rebalances data? | 01:37 |
notmyname | winston-d: the default partition power (ie what's in the docs) is 18. there probably isn't a problem with using that, even if you don't plan on getting that big. yes, it will be slightly less efficient, but probably not a problem | 01:37 |
*** osier has joined #openstack | 01:38 | |
notmyname | "keep the size of partition remains the same"?? I don't follow. I think you may have accidentally a word | 01:39 |
winston-d | notmyname : 17 for 50 nodes with 24 disks each will roughly keep one partition to 1% of disk size. and 18 for 100 nodes will do the same. i am asking because i thought the change in partition size is the root cause of the downtime we saw here. | 01:41 |
notmyname | ah | 01:41 |
notmyname | the downtime has nothing to do with the percent of disk it uses | 01:41 |
notmyname | the partition power is used like this (python pseudocode): | 01:42 |
notmyname | node_list = md5_hash(obj_url)[:ring.partition_power] | 01:42 |
notmyname | err | 01:42 |
notmyname | partition = md5_hash(obj_url)[:ring.partition_power] | 01:43 |
notmyname | and then use the partition to find the 3 nodes it goes on | 01:43 |
*** miclorb_ has quit IRC | 01:43 | |
notmyname | so if the partition power changes, everything is remapped in the cluster | 01:43 |
notmyname | and the proxy will be looking in the new location but replication hadn't moved the data to the new location yet | 01:44 |
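A runnable sketch of the pseudocode above, assuming the usual "top bits of the md5" construction (real Swift also mixes a per-cluster hash suffix into the path before hashing, so exact values differ):

```python
from hashlib import md5
import struct

def get_part(part_power, obj_path):
    # keep the top `part_power` bits of the md5 of the object path
    digest = md5(obj_path.encode('utf-8')).digest()
    return struct.unpack_from('>I', digest)[0] >> (32 - part_power)

# bumping the partition power maps the same object to a different
# partition, which replication hasn't populated yet
print(get_part(16, '/account/container/object'))
print(get_part(17, '/account/container/object'))
```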
*** jakedahn has joined #openstack | 01:44 | |
winston-d | notmyname : i see. | 01:44 |
winston-d | notmyname : now questions for 2nd cluster scaling. | 01:45 |
winston-d | notmyname : my understanding is 'cluster' here means a swift setup, right? if i have multiple swift clusters, then there should be some sort of cluster-level proxy to determine which cluster the data resides in? | 01:47 |
notmyname | yes. by cluster I mean one logical, autonomous swift install | 01:48 |
notmyname | any multi-cluster stuff you have (for HA or whatever) should be implemented on top of it. the auth system would be a great place, for example | 01:48 |
notmyname | (well, HA is a little more complicated since you probably want synchronization in that case) | 01:49 |
winston-d | notmyname : yes, things will be way more complicated. | 01:49 |
notmyname | distributed systems always are :-) | 01:49 |
winston-d | notmyname : how can i migrate data between clusters? is there existing code in swift to do this? | 01:50 |
notmyname | winston-d: there is the container sync feature which could be used as part of a migration. other than that, it's GETs and PUTs with a fat pipe between them | 01:51 |
notmyname | winston-d: for multi-cluser, if you (wikimedia) had one cluster for A-M and another for N-Z, the auth system could return both endpoints and then your code could choose the right cluster based on the request | 01:53 |
notmyname | (^ silly non-HA example) | 01:53 |
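A sketch of the "GETs and PUTs with a fat pipe" migration mentioned above, written against the modern python-swiftclient API (an assumption; at the time the equivalent helpers lived in swift.common.client):

```python
import swiftclient

def migrate_container(src_url, src_token, dst_url, dst_token, container):
    # list everything in the source container, then GET/PUT each object
    _, objects = swiftclient.get_container(src_url, src_token, container,
                                           full_listing=True)
    for obj in objects:
        hdrs, body = swiftclient.get_object(src_url, src_token, container,
                                            obj['name'])
        swiftclient.put_object(dst_url, dst_token, container, obj['name'],
                               contents=body,
                               content_type=hdrs.get('content-type'))
```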
winston-d | notmyname : I also have some concern that a large partition power would affect performance. in this case, 18 for 5 nodes x 24 disks? | 01:54 |
*** nati has joined #openstack | 01:55 | |
notmyname | ring management operations would be affected (building, rebalancing, etc) but not standard, live ring operations. ring management is done off-line | 01:55 |
winston-d | notmyname : that means once the ring is settled down, everything should work as fast as a smaller partition power, say 16? | 01:57 |
notmyname | yes, I think so | 01:57 |
winston-d | notmyname : that's good. we may design a test case to verify that. :) | 01:58 |
*** kaz_ has quit IRC | 01:59 | |
*** Cyns has joined #openstack | 02:00 | |
*** miclorb_ has joined #openstack | 02:01 | |
winston-d | notmyname: by the way, do you have some insights about how many operations per second one storage node can support (1GbE to proxy, 20 or 30 something disks)? and how big is the Cloud Files cluster in rackspace? 50/100 nodes? | 02:01 |
*** huslage has quit IRC | 02:01 | |
*** clauden has quit IRC | 02:03 | |
notmyname | winston-d: I can't (not allowed to) say anything about Rackspace's size other than "billions of objects, petabytes of data". ops/sec depends a lot on your deployment (hardware, SSL, etc) | 02:03 |
*** kaz has joined #openstack | 02:03 | |
catarrhine | I would like to do a lab of swift and compute, what's a decent amount of nodes to be able to test all the features adequately? 8? | 02:04 |
winston-d | notmyname : i understand. :) thanks all the same. | 02:04 |
notmyname | winston-d: sorry I can't share more | 02:04 |
winston-d | notmyname : it's ok. totally understand. | 02:05 |
notmyname | catarrhine: 8 for both or 8 each? | 02:05 |
catarrhine | well I'm thinking auth node, proxy node, 3 nodes for storage and 3 for compute? | 02:06 |
*** Dboy has joined #openstack | 02:06 | |
catarrhine | I'm not sure how the two projects work together | 02:06 |
notmyname | not quite like that, unfortunately | 02:06 |
HugoKuo_ | morning XD | 02:07 |
Dboy | Have a nice day everyone | 02:07 |
Dboy | I want to ask something about the Glance and Swift | 02:07 |
notmyname | catarrhine: it all depends on what you are trying to show. for a multi-server swift setup, I'd say one proxy/auth and 4 storage servers | 02:08 |
catarrhine | ok | 02:08 |
*** cole has joined #openstack | 02:08 | |
Dboy | is there any tutorial about installing and configuring Glance and Swift? | 02:08 |
Dboy | I have 1 cloud controller where I installed the Glance API | 02:10 |
notmyname | catarrhine: but again, it depends on what you want to show. for example, you could run multiple storage servers (to simulate more than you really have) on one box. this is how we do dev work (all services running on one VM) | 02:10 |
Dboy | and I have 1 host where I installed Swift Proxy | 02:11 |
Dboy | How can I connect them to store my data | 02:11 |
*** mrrk has quit IRC | 02:12 | |
catarrhine | what about for Nova notmyname? | 02:12 |
notmyname | no idea | 02:12 |
* notmyname passes the "swift answer guy" torch to someone else for the evening :-) | 02:13 | |
*** GeoDud has quit IRC | 02:16 | |
*** mattray has joined #openstack | 02:17 | |
Dboy | Hi everyone | 02:17 |
Dboy | I want to ask something about the Glance and Swift | 02:17 |
Dboy | is there any tutorial about installing and configuring Glance and Swift? | 02:17 |
Dboy | I have 1 cloud controller where I installed the Glance API | 02:17 |
Dboy | and I have 1 host where I installed Swift Proxy | 02:18 |
Dboy | How can I connect them to store my data | 02:18 |
Dboy | For installing and configuring the swift proxy, I used this link http://swift.openstack.org/development_saio.html#optional-setting-up-rsyslog-for-individual-logging , and the result of testing was successful | 02:20 |
*** johnmark has left #openstack | 02:21 | |
Dboy | It is configured on a single host, and now I want to connect the swift server host to the glance host | 02:21 |
Dboy | like this link http://glance.openstack.org/architecture.html | 02:22 |
Dboy | So how can I do it? any suggestions? | 02:23 |
Dboy | thank you in advance! | 02:23 |
*** primeministerp|h has left #openstack | 02:31 | |
creiht | winston-d: There isn't a good solution for changing the ring size of a real cluster, so I wouldn't even consider that a possibility (other than building a new cluster and possibly migrating) | 02:33 |
creiht | It is better to overshoot by a power or so on the ring size | 02:33 |
winston-d | creiht : hi~~ that's good to know. :) | 02:34 |
creiht | It has been a while since I have done swift stuff, but I'm not even certain that replication would fix a ring size change | 02:34 |
creiht | since replication works on partitions, not the objects in the partition | 02:34 |
notmyname | (I'm happy to be corrected by creiht) | 02:35 |
creiht | when expanding a cluster, when you add new nodes, the ring builder ensures that a) only one replica of a partition gets moved, and b) it moves as few partitions as possible | 02:35 |
creiht | or something... it is late :) | 02:36 |
notmyname | ya, I remember that now. all the existing stuff would have to be rehashed. replication doesn't do that | 02:36 |
creiht | right | 02:36 |
*** littleidea has quit IRC | 02:36 | |
creiht | and the way the mapping works, it is likely that a large majority of the objects would have to be moved as they likely map to different partitions | 02:36 |
creiht | so short story, don't change the partition power :) | 02:37 |
*** lborda has quit IRC | 02:37 | |
winston-d | creiht : so that means altering the partition number of an existing swift setup means killing data, because it won't eventually settle down? | 02:37 |
creiht | At one point I had a good chart that showed the max size of clusters for the different partition powers | 02:37 |
creiht | winston-d: correct | 02:38 |
winston-d | creiht : can you share the chart? | 02:38 |
creiht | unfortunately I can't remember where it is | 02:38 |
creiht | but I could re-create it pretty easily | 02:38 |
creiht | hrm... might be a good thing for a google spreadsheet | 02:39 |
*** mattray has quit IRC | 02:39 | |
creiht | the nice thing about this situation is that certain aspects of the cluster actually get more efficient as it gets bigger | 02:39 |
*** littleidea has joined #openstack | 02:40 | |
creiht | winston-d: I'll try to whip something together for the ring stuff | 02:40 |
winston-d | creiht : thanks. | 02:40 |
creiht | As to expected performance, I was doing some benchmarking a while back in the lab with 1 proxy and 5 storage nodes | 02:41 |
creiht | of course this was a while ago, so a lot might have changed | 02:41 |
winston-d | creiht : 'get more efficient', can you explain more? which parts of swift work more efficiently as the cluster gets bigger (closer to the upper limit of the partition power, i guess?)? | 02:41 |
creiht | going for transactions, I was able to get around 2K puts/s | 02:42 |
*** mattray has joined #openstack | 02:42 | |
creiht | for very small objects | 02:42 |
*** mattray has quit IRC | 02:42 | |
winston-d | creiht : and 2k ops/s, that's for 1 proxy with only 1GbE? for how many disks? | 02:43 |
creiht | yeah | 02:43 |
*** andyandy_ has quit IRC | 02:43 | |
creiht | 24 disks per storage node | 02:43 |
creiht | and that stays pretty consistent until the objects are large enough to saturate the network | 02:43 |
creiht | I think GETs for those same objects were around 10k GET/s | 02:44 |
creiht | that scales pretty linearly as you add proxies/storage nodes | 02:44 |
creiht | of course it may vary depending on hardware/network/etc. | 02:45 |
winston-d | creiht : i see. 2k/s for PUTs, 10k/s for GETs on the same objects. we'll try to match that in our setup. | 02:45 |
creiht | that was also with a lot of tuning | 02:45 |
winston-d | creiht : is this data from swift-bench or some other tool? | 02:45 |
creiht | yeah | 02:45 |
winston-d | creiht : tuning for # of threads for different services and also TCP stuff? | 02:46 |
creiht | ok looking back at old notes, GETs were around 8k/s | 02:46 |
creiht | winston-d: yeah, I tried to capture a lot of that type of stuff here: http://swift.openstack.org/deployment_guide.html | 02:47 |
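For reference, swift-bench runs like the one creiht describes are driven by a small config file; a sketch from memory, so option names may differ between versions:

```ini
[bench]
auth = http://proxy:8080/auth/v1.0
user = test:tester
key = testing
concurrency = 10
object_size = 1
num_objects = 1000
num_gets = 10000
delete = yes
```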
* winston-d writes down swift perf numbers. | 02:47 | |
winston-d | creiht : btw, my colleague confirmed that he did hit the CloudFiles java binding thread-safety issue. he'll contact Lowell directly. | 02:48 |
creiht | towards the end, that doc has some system tuning stuff that we do | 02:48 |
creiht | winston-d: cool, and thanks | 02:48 |
winston-d | creiht : and he was using CloudFiles java bindings with Swift. not with CloudFiles. :) | 02:49 |
creiht | :) | 02:49 |
creiht | will be the same issue either way :) | 02:50 |
winston-d | right | 02:51 |
winston-d | creiht : could you be more specific on the 'more efficient' part for 'the nice thing about this situation, is that certain aspects of the cluster actually get more efficient, as it gets bigger'? | 02:52 |
creiht | oh yeah, sorry | 02:52 |
creiht | for example spreading reads/writes across the cluster | 02:52 |
creiht | also replication gets more efficient | 02:52 |
winston-d | creiht : our partner would like to use swift for production, so they are very concerned about performance, as well as reliability and scalability. | 02:53 |
creiht | understood | 02:53 |
creiht | winston-d: Joe Arnold from cloud scaling also has given some great presentations on their experiences so far | 02:55 |
creiht | and blog posts | 02:55 |
winston-d | creiht : so, a large partition power for a relatively small swift setup will impact the performance somehow. | 02:55 |
creiht | http://cloudscaling.com/blog/author/joe | 02:55 |
winston-d | creiht : their project with KT? let me dig up some useful information. :) thanks | 02:56 |
creiht | that and internap, and others | 02:56 |
creiht | Fortunately they can share a lot more than we can | 02:56 |
creiht | His presentation from the last dev summit is also very good | 02:56 |
creiht | he lays out a pretty good plan for a small 1PB cluster | 02:57 |
creiht | iirc | 02:57 |
winston-d | creiht : :) yes | 02:57 |
winston-d | creiht : thanks for your help, again. | 02:59 |
*** lborda has joined #openstack | 03:00 | |
winston-d | creiht : i actually sent an inquiry to Dr. Hwang from KT for their swift benchmark, but no response (yet). | 03:03 |
*** mrrk has joined #openstack | 03:05 | |
*** mrjazzcat has quit IRC | 03:06 | |
*** kashyap has joined #openstack | 03:08 | |
*** mnaser has joined #openstack | 03:12 | |
kidrock | Hi all! | 03:12 |
kidrock | I deployed the newest dashboard, which I checked out from GitHub. | 03:12 |
kidrock | I logged in to the dashboard, and then selected the 'instances' tab, and the following error occurred: | 03:12 |
kidrock | error: Unable to get instance list: This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required. | 03:13 |
*** mnaser has quit IRC | 03:13 | |
kidrock | does anyone use this version of the dashboard, and how can i config keystone to authenticate against openstack? | 03:13 |
creiht | winston-d: https://spreadsheets.google.com/spreadsheet/ccc?key=0At4SyQdtaLTjdGp6R0ZiNnhYQV9WUDJFcllHODJJZFE&hl=en_US | 03:14 |
creiht | should give you a rough estimate | 03:14 |
creiht | though it is late, while it looks right at the moment, I will have my coworkers take a look at it tomorrow :) | 03:15 |
creiht | winston-d: you can change the numbers at the bottom (like drive size) and everything else should auto update | 03:16 |
creiht | well.. hrm, not sure if you can change it, does it let you make a local copy of it? | 03:17 |
creiht | I haven't done a lot in google docs yet | 03:17 |
winston-d | creiht : thanks! it seems I can't change it. | 03:18 |
creiht | hrm | 03:18 |
*** Cyns has quit IRC | 03:19 | |
creiht | winston-d: as to disk failure, if you read the recent backblaze article about their storage pod, we have noticed very similar types of disk failure rates | 03:24 |
winston-d | creiht : great. | 03:24 |
*** miclorb_ has quit IRC | 03:24 | |
* winston-d googling | 03:25 | |
creiht | http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/ | 03:25 |
winston-d | creiht : you're so considerate. :) | 03:25 |
*** DrHouseMD is now known as HouseAway | 03:26 | |
creiht | hrm... I seem to have messed up the spreadsheet :/ | 03:27 |
winston-d | 5% disk failure per year. hmm. | 03:31 |
winston-d | i've been seeing more than that with drives of certain brand(s). | 03:31 |
*** Cyns has joined #openstack | 03:31 | |
*** stewart has joined #openstack | 03:31 | |
creiht | yeah, it will vary a bit by brand | 03:32 |
creiht | It is also a good idea to do some burn-in of the drives before using them, as most drives, if they are going to fail, will fail pretty early | 03:32 |
winston-d | exactly! yesterday, one drive failed right after i installed it, on a RAID 0. | 03:33 |
*** rchavik has quit IRC | 03:34 | |
*** mrrk has quit IRC | 03:37 | |
*** shentonfreude has joined #openstack | 03:37 | |
*** osier has quit IRC | 03:45 | |
*** jiboumans has quit IRC | 03:47 | |
*** jiboumans has joined #openstack | 03:47 | |
*** obino1 has quit IRC | 03:54 | |
*** HowardRoark has quit IRC | 03:55 | |
creiht | winston-d: updated the spreadsheet btw, I thought the numbers looked a little low :) | 03:58 |
creiht | I forgot that there are 3 (replicas) * the number of partitions to spread among the drives | 03:58 |
*** MarkAtwood has left #openstack | 03:59 | |
creiht | winston-d: I should also note that it is really a soft limit | 04:00 |
creiht | you can go fewer than say 100 parts per drive, it just means that there will be a bit more variance between drives | 04:00 |
creiht | we chose 100, because that would mean that there would be a +/- 1% difference on the number of partitions on each drive | 04:00 |
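A rough re-creation of the kind of chart creiht mentions: for each partition power, the approximate drive count at which a 3-replica cluster drops below ~100 partitions per drive (illustrative numbers only):

```python
REPLICAS, PARTS_PER_DRIVE = 3, 100

for power in range(14, 23):
    # REPLICAS * 2**power partition-replicas are spread across the drives
    max_drives = (2 ** power) * REPLICAS // PARTS_PER_DRIVE
    print(f"partition power {power}: ~{max_drives} drives max")
```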
*** mfer has quit IRC | 04:02 | |
*** ejat has joined #openstack | 04:05 | |
*** Dboy has left #openstack | 04:06 | |
*** Dboy has joined #openstack | 04:07 | |
Dboy | hi everyone | 04:08 |
Dboy | please explain this error for me | 04:08 |
Dboy | root@CNServerCC:/var/log/glance# glance add name="TestSwift-1" distro="Test1" is_public=True < /root/ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz Failed to add image. Got error: 400 Bad Request The server could not comply with the request since it is either malformed or otherwise incorrect. Error uploading image: No module named swift.common Note: Your image metadata may still be in the registry, but the image's status will likely | 04:08 |
Dboy | http://paste.openstack.org/show/2036/ | 04:09 |
*** mrrk has joined #openstack | 04:12 | |
*** RickB17 has quit IRC | 04:15 | |
*** cole has quit IRC | 04:17 | |
*** obino has joined #openstack | 04:17 | |
*** j05h has joined #openstack | 04:18 | |
*** miclorb_ has joined #openstack | 04:25 | |
*** undertree has quit IRC | 04:29 | |
*** jtanner has quit IRC | 04:29 | |
*** TimR has quit IRC | 04:30 | |
*** TimR has joined #openstack | 04:31 | |
*** Dboy has left #openstack | 04:33 | |
*** deepest has joined #openstack | 04:34 | |
deepest | Hi all you guys | 04:35 |
deepest | I'd highly appreciate your comments on my error | 04:35 |
deepest | please check it for me and correct me | 04:35 |
deepest | http://paste.openstack.org/show/2036/ | 04:35 |
deepest | I got that error when I tried to add a new image for testing Glance | 04:36 |
*** jakedahn has quit IRC | 04:37 | |
*** shentonfreude has quit IRC | 04:39 | |
winston-d | creiht : still there? | 04:43 |
deepest | yep | 04:43 |
deepest | sorry | 04:43 |
*** wariola has joined #openstack | 04:43 | |
*** miclorb_ has quit IRC | 04:43 | |
*** shentonfreude has joined #openstack | 04:43 | |
*** miclorb_ has joined #openstack | 04:53 | |
*** _mage_ has joined #openstack | 04:58 | |
*** everett has joined #openstack | 04:58 | |
everett | hi all. regarding vnc proxy... where did http://github.com/openstack/noVNC.git go? | 04:59 |
*** ejat has quit IRC | 05:00 | |
*** osier has joined #openstack | 05:01 | |
*** ejat has joined #openstack | 05:01 | |
*** jakedahn has joined #openstack | 05:01 | |
*** Cyns has quit IRC | 05:03 | |
*** littleidea has quit IRC | 05:08 | |
everett | ah. i see it's at https://github.com/openstack/noVNC.git | 05:18 |
everett | the docs at http://nova.openstack.org/runnova/vncconsole.html need to be updated. | 05:18 |
*** everett has left #openstack | 05:19 | |
*** f4m8_ is now known as f4m8 | 05:20 | |
*** hingo has joined #openstack | 05:28 | |
*** msinhore has quit IRC | 05:33 | |
*** stewart has quit IRC | 05:38 | |
*** reed has quit IRC | 05:41 | |
*** cp16net has quit IRC | 05:52 | |
*** cp16net has joined #openstack | 05:52 | |
*** wariola has quit IRC | 06:01 | |
*** kashyap has quit IRC | 06:01 | |
*** nati has quit IRC | 06:03 | |
*** Alowishus has quit IRC | 06:07 | |
*** sdadh011 has quit IRC | 06:09 | |
*** Alowishus has joined #openstack | 06:10 | |
*** sdadh01 has joined #openstack | 06:12 | |
*** cp16net has quit IRC | 06:16 | |
*** ejat has quit IRC | 06:16 | |
*** stewart has joined #openstack | 06:23 | |
*** javiF has joined #openstack | 06:23 | |
*** kashyap has joined #openstack | 06:25 | |
*** kidrock has quit IRC | 06:28 | |
*** kidrock has joined #openstack | 06:28 | |
*** RMS-ict has quit IRC | 06:34 | |
*** Tribaal has joined #openstack | 06:34 | |
*** mgoldmann has joined #openstack | 06:36 | |
*** deepest has quit IRC | 06:40 | |
*** YorikSar has joined #openstack | 06:41 | |
*** kidrock_ has joined #openstack | 06:41 | |
*** ejat has joined #openstack | 06:41 | |
*** kidrock has quit IRC | 06:42 | |
*** jedi4ever has joined #openstack | 06:45 | |
*** Pjack has joined #openstack | 06:45 | |
*** HugoKuo__ has joined #openstack | 06:45 | |
*** miclorb_ has quit IRC | 06:47 | |
*** Pjack has quit IRC | 06:48 | |
*** HugoKuo_ has quit IRC | 06:48 | |
*** kidrock_ has quit IRC | 06:50 | |
YorikSar | jeblair: Hi | 06:50 |
YorikSar | jeblair: I've written to the mailing list about the Gerrit problem. | 06:50 |
*** ejat has quit IRC | 06:56 | |
*** rchavik has joined #openstack | 06:56 | |
*** ejat has joined #openstack | 07:01 | |
*** tudamp has joined #openstack | 07:01 | |
*** reidrac has joined #openstack | 07:12 | |
*** cole has joined #openstack | 07:14 | |
*** rchavik has quit IRC | 07:14 | |
*** guigui has joined #openstack | 07:15 | |
*** cole has quit IRC | 07:16 | |
*** ejat has quit IRC | 07:18 | |
*** kidrock has joined #openstack | 07:24 | |
*** mnour has quit IRC | 07:25 | |
*** ejat has joined #openstack | 07:26 | |
*** stewart has quit IRC | 07:28 | |
*** mrrk has quit IRC | 07:34 | |
*** deepest has joined #openstack | 07:34 | |
*** ejat has quit IRC | 07:37 | |
*** nickon has joined #openstack | 07:50 | |
*** rchavik has joined #openstack | 07:57 | |
*** CloudAche84 has joined #openstack | 07:59 | |
*** willaerk has joined #openstack | 08:02 | |
*** ccc11 has quit IRC | 08:03 | |
*** jeffjapan has quit IRC | 08:04 | |
*** 45PAAEYQV has joined #openstack | 08:04 | |
*** ccc11 has joined #openstack | 08:04 | |
*** AhmedSoliman has joined #openstack | 08:06 | |
*** guigui has quit IRC | 08:07 | |
*** 45PAAEYQV has quit IRC | 08:09 | |
*** guigui has joined #openstack | 08:11 | |
*** irahgel has joined #openstack | 08:15 | |
*** darraghb has joined #openstack | 08:17 | |
*** MarcMorata has joined #openstack | 08:29 | |
*** sandywalsh has quit IRC | 08:34 | |
*** dirkx has joined #openstack | 08:36 | |
*** daysmen has joined #openstack | 08:49 | |
*** deepest has quit IRC | 08:54 | |
*** ejat has joined #openstack | 08:54 | |
*** TimR has quit IRC | 08:56 | |
*** TimR has joined #openstack | 08:57 | |
*** npmapn has joined #openstack | 09:01 | |
TimR | Hi - anybody out there running multiple glance-api servers behind a load balancer ? | 09:03 |
CloudAche84 | no but I'd like to know how to do that too! | 09:05 |
CloudAche84 | we are planning to | 09:05 |
CloudAche84 | I can't imagine there being too much of an issue with it. Does glance use sessions? | 09:07 |
*** mnour has joined #openstack | 09:08 | |
*** mnour has quit IRC | 09:12 | |
*** mnour has joined #openstack | 09:12 | |
*** TeTeT has joined #openstack | 09:15 | |
*** argonius has joined #openstack | 09:15 | |
argonius | hi | 09:15 |
CloudAche84 | hi | 09:15 |
argonius | i was trying to install openstack nova on centos 5 using the instructions on http://wiki.openstack.org/NovaInstall/CentOSNotes | 09:16 |
TeTeT | hello, I have a Ubuntu 11.10 Oneiric system where I tried to test openstack / nova with lxc. I have plenty of shutdown instances, but I cannot terminate them completely - compute log contains info 'Found instance ... in DB but no VM. State=5, so setting state to shutoff' | 09:16 |
argonius | on step "cp /opt/nova/etc/nova-api.conf /etc/nova" i ran into the problem that there is no nova-api.conf ?? | 09:16 |
TeTeT | now I cannot launch new instances any longer, instanceLimitExceeded - any way to remove the dead instances completely w/o tearing down the cloud completely? | 09:16 |
CloudAche84 | TeTeT: I expect you need to update some records in the DB | 09:17 |
argonius | there is only a file called api-paste.ini but this is not the same as nova-api.conf, is it? | 09:17 |
TeTeT | CloudAche84: ouch, any pointers / guides on how to do that? | 09:19 |
*** ejat has quit IRC | 09:19 | |
CloudAche84 | are you familiar with MySQL at all? | 09:19 |
CloudAche84 | actually hold on | 09:20 |
TeTeT | CloudAche84: rusty at it, but I guess I can cope with it | 09:20 |
CloudAche84 | argonius: bear with me I might be able to help you in a minute | 09:20 |
argonius | CloudAche84: ah fine, i will wait for you ;) | 09:20 |
CloudAche84 | TeTeT: try this script first: http://pastebin.com/0mr64Atu | 09:22 |
CloudAche84 | should terminate all running instances cleanly | 09:22 |
CloudAche84 | if not then we may need to hack the DB | 09:22 |
TimR | Cloudach84: I have set up a dual API config one node running API-server, Registry and MySQL; 2nd node running just the API-Server. Have not config'd the LoadBalancer yet | 09:22 |
TeTeT | CloudAche84: already tried euca-terminate-instances $( euca-describe-instances | grep INST | awk '{ print $2 }' | xargs ) | 09:23 |
TeTeT | CloudAche84: multiple times, they just never get out of shutdown :( | 09:23 |
CloudAche84 | ah ok | 09:23 |
CloudAche84 | in that case lets try something else | 09:24 |
TeTeT | CloudAche84: I can't see a mysql instance running , ps aux | grep -i mysql, is it possible that nova @ ubuntu uses a different db backend? | 09:25 |
*** zz_bonzay|away is now known as bonzay | 09:25 | |
CloudAche84 | doubt it | 09:26 |
CloudAche84 | unless it uses sqlite or something | 09:26 |
CloudAche84 | but I wouldnt have thought so | 09:26 |
CloudAche84 | what does your --sql_connection= say in /etc/nova/nova.conf? | 09:27 |
TeTeT | CloudAche84: not specified | 09:27 |
*** mnour has quit IRC | 09:27 | |
*** mnour has joined #openstack | 09:27 | |
TeTeT | CloudAche84: content of nova.conf is http://pastebin.ubuntu.com/658539 | 09:28 |
CloudAche84 | argonius: on my Ubuntu system nova-api.conf is the init script and therefore lives in /etc/init/nova-api.conf | 09:28 |
CloudAche84 | oh.. | 09:28 |
CloudAche84 | argonius: all nova service config lives in /etc/nova/nova.conf | 09:29 |
CloudAche84 | TeTeT: there seems to be a lot of stuff missing there.. | 09:29 |
CloudAche84 | how did you install? | 09:30 |
TeTeT | CloudAche84: it's the most basic setup I've got from zul's blogpost on getting nova / lxc started | 09:30 |
CloudAche84 | which os? | 09:30 |
TeTeT | CloudAche84: I started on ubuntu 11.04, then updated to 11.10 | 09:31 |
TeTeT | CloudAche84: just digging up the article from zul that presents details | 09:31 |
TeTeT | CloudAche84: and thanks for your time and help! Much appreciated | 09:32 |
CloudAche84 | no prob. | 09:32 |
CloudAche84 | Happy to share my (limited!) knowledge | 09:32 |
TeTeT | CloudAche84: http://zulcss.wordpress.com/2011/03/31/running-nova-openstack-on-amazon-ec2/ | 09:32 |
TeTeT | sudo apt-get install python-nova nova-common nova-api nova-compute nova-network nova-objectstore nova-scheduler nova-volume python-nova.adminclient unzip euca2ools | 09:33 |
CloudAche84 | this article: http://zulcss.wordpress.com/2011/03/31/running-nova-openstack-on-amazon-ec2/? | 09:33 |
CloudAche84 | ah | 09:33 |
*** mnour has quit IRC | 09:33 | |
CloudAche84 | lol | 09:33 |
TeTeT | he he | 09:33 |
*** mnour has joined #openstack | 09:34 | |
CloudAche84 | what does this produce: nova-manage conf li|grep sql | 09:36 |
*** mnour has quit IRC | 09:36 | |
TeTeT | CloudAche84: using sqlite, --db_backend=sqlalchemy | 09:37 |
argonius | re | 09:37 |
argonius | sry customer calling .... | 09:37 |
argonius | ;) | 09:37 |
*** dirkx has quit IRC | 09:38 | |
argonius | CloudAche84: as i said, i use centos and installing from branch (bzr) | 09:38 |
argonius | CloudAche84: i have to copy nova-api.conf to /etc/nova/ | 09:38 |
argonius | but in the branch, there is no nova-api.conf file :( | 09:39 |
*** bonzay has left #openstack | 09:39 | |
argonius | CloudAche84: maybe a mistake in the current branch of nova? | 09:39 |
CloudAche84 | possible | 09:39 |
argonius | CloudAche84: can you send me your nova-api.conf | 09:39 |
mattt | morning morning | 09:39 |
argonius | 'cause i do not really know, what should be in this file | 09:39 |
mattt | nova-api.conf? | 09:39 |
argonius | it's the first time of trying openstack | 09:39 |
TeTeT | CloudAche84: found the db in /var/lib/nova/nova.sqlite - I'm not sure how to see the tables in the db though | 09:40 |
argonius | mattt: yes | 09:40 |
mattt | argonius: newer versions use a single conf file, nova.conf | 09:40 |
argonius | mattt: ahh and where can i find it | 09:40 |
mattt | argonius: how did you install nova? if you use a package manager, then /etc/nova | 09:40 |
argonius | mattt: i was using http://wiki.openstack.org/NovaInstall/CentOSNotes | 09:40 |
mattt | argonius: which version of nova did you use? | 09:41 |
argonius | mattt: i am not really sure how to find this out? | 09:41 |
mattt | argonius: rpm -qa | grep nova maybe? | 09:42 |
mattt | (not sure how the packages are named) | 09:42 |
argonius | nothing | 09:42 |
argonius | i take it via bzr | 09:42 |
argonius | not sure about this but maybe this will get an actual branch | 09:42 |
argonius | ? | 09:42 |
mattt | argonius: yeah, you're right ... you're not pulling from a package manager | 09:42 |
CloudAche84 | TeTeT: apt-get install sqlite | 09:42 |
*** deepest has joined #openstack | 09:42 | |
argonius | nova --version does not really work | 09:42 |
CloudAche84 | then sqlite <dbname> | 09:43 |
TeTeT | CloudAche84: already past that, I found the instance table, shall I just delete all of them? | 09:43 |
mattt | argonius: sorry, don't have _any_ experience w/ nova on centos -- i will tell you i used the ppas on ubuntu (http://wiki.openstack.org/PPAs) and everything just works out of the box | 09:43 |
CloudAche84 | but I thiknk you should consider setting it up to use a proper DB | 09:43 |
argonius | hmm | 09:43 |
TeTeT | CloudAche84: yeah, once this showcase vm works ;) | 09:43 |
argonius | mattt: but i do not want to use ubuntu .... :( | 09:43 |
mattt | argonius: i may try it when i get some time, but at work at the minute :( | 09:43 |
argonius | mattt: i am at work 2 :p | 09:43 |
argonius | but i just take the time | 09:44 |
argonius | ;) | 09:44 |
mattt | haha | 09:44 |
*** deepest has left #openstack | 09:44 | |
CloudAche84 | no | 09:44 |
argonius | can i just build the nova.conf file by myself? | 09:44 |
*** deepest has joined #openstack | 09:44 | |
mattt | argonius: let me see if i can get you my default conf | 09:45 |
deepest | Hi you guys | 09:45 |
CloudAche84 | TeteT:paste me the first record | 09:45 |
mattt | what paste servers do you guys use? | 09:45 |
argonius | mattt: this would be nicccceee | 09:45 |
argonius | :) | 09:45 |
CloudAche84 | I used pastebin | 09:45 |
argonius | yeah me 2 | 09:45 |
CloudAche84 | but it seems there is a pastebin.ubuntu.com | 09:45 |
mattt | argonius: http://pastebin.com/KWe7wPBc | 09:46 |
argonius | mattt: thx a lot | 09:46 |
deepest | I want to ask you about my swift error | 09:46 |
mattt | np, that's stock on the ubuntu packages tho, so i'm not entirely sure if it'll work on centos :( | 09:46 |
mattt | brb, got some stuff to do | 09:46 |
TeTeT | CloudAche84: oops, too late, already issued 'delete from instances', sorry. Instances seem gone, but I guess not cleanly | 09:46 |
deepest | can anyone help me? | 09:46 |
argonius | mattt: this is your nova.conf?? | 09:46 |
argonius | which references self to nova.conf?? | 09:46 |
CloudAche84 | ouch | 09:46 |
CloudAche84 | restart the services and see if a new instances table is created | 09:47 |
TeTeT | CloudAche84: thanks for your help, the table is there, I only deleted the rows | 09:47 |
CloudAche84 | nice hatchet job tho :P | 09:47 |
CloudAche84 | ah ok | 09:47 |
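What that sqlite surgery might look like from Python instead of the sqlite shell; a sketch only, since the instances schema varies by nova release (column names here are cactus-era guesses) and the DB path comes from the log above:

```python
import sqlite3

conn = sqlite3.connect('/var/lib/nova/nova.sqlite')
# inspect before deleting, as suggested above
for row in conn.execute('SELECT id, hostname, state_description FROM instances'):
    print(row)
conn.execute('DELETE FROM instances')  # the blunt fix TeTeT used
conn.commit()
```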
argonius | damn | 09:47 |
argonius | next failure | 09:47 |
argonius | i think for first tests i should use ubuntu and .deb packages | 09:48 |
argonius | :( | 09:48 |
mattt | argonius: yeah man, that's the default conf! | 09:48 |
mattt | then i add to it | 09:48 |
CloudAche84 | It seems Ubuntu is best supported at the moment argonius | 09:48 |
mattt | i'd have to agree | 09:48 |
TeTeT | CloudAche84: I started a new instance and terminated it right away, seems to work now | 09:49 |
mattt | once you get familiar w/ openstack, you can start hacking with it on centos | 09:49 |
CloudAche84 | yep | 09:49 |
argonius | CloudAche84: that's realy crazy | 09:49 |
argonius | CloudAche84: cause this software comes from rackspace | 09:49 |
CloudAche84 | not really considering the amount of Canonical people on it :) | 09:49 |
argonius | which really often uses rhel | 09:49 |
argonius | i am confused | 09:50 |
argonius | :D | 09:50 |
argonius | maybe someone has 2-3 minutes to answer some additional questions about the basics of openstack? | 09:51 |
CloudAche84 | will try | 09:51 |
*** RMS-ict has joined #openstack | 09:51 | |
argonius | i was reading the introduction but i am not really sure about the coherence between the different pieces of software | 09:51 |
argonius | i was just watching a video about openstack | 09:52 |
argonius | in which i can just click and click | 09:52 |
argonius | to deploy many virtual instances | 09:52 |
CloudAche84 | yep | 09:52 |
argonius | so | 09:52 |
argonius | in different groups | 09:52 |
argonius | the webinterface can be running on my server? | 09:52 |
argonius | or is this just running on installer.openstack.org? | 09:52 |
argonius | this was not really clear in the vid | 09:53 |
argonius | ;) | 09:53 |
CloudAche84 | web interface runs on your server | 09:53 |
argonius | ok | 09:53 |
argonius | and the webinterface is openstack nova? | 09:53 |
CloudAche84 | it isnt part of the main nova packages yet though | 09:53 |
CloudAche84 | no | 09:53 |
argonius | oh ok | 09:53 |
argonius | nova is just the software for deploying the virtual instances, isn't it? | 09:53 |
argonius | like a xml parser or rpc daemon | 09:54 |
CloudAche84 | Openstack nova is nova-compute, nova-api, nova-network, nova-scheduler | 09:54 |
CloudAche84 | nova is the compute platform which distributes VMs around hypervisors | 09:54 |
argonius | ok | 09:55 |
CloudAche84 | Glance is an image server which organises and retrieves images from low cost file storage (such as S3) | 09:55 |
argonius | i read something about these nova-* apps | 09:55 |
CloudAche84 | Swift is a scale-out storage solution like S3 designed to be resilient on commodity hardware | 09:55 |
argonius | ah ok | 09:56 |
CloudAche84 | Openstack-Dashboard is a first go at a UI for Openstack Nova | 09:56 |
CloudAche84 | which uses the API presented by Nova & Glance | 09:56 |
CloudAche84 | does that help? | 09:56 |
argonius | is it possible to firstly use just one server for everything? | 09:56 |
argonius | yes a bit ;) | 09:56 |
CloudAche84 | yes | 09:57 |
argonius | so local storage is supported? | 09:57 |
CloudAche84 | although it may make sense to run swift on a different server | 09:57 |
argonius | yes of course | 09:57 |
CloudAche84 | but you dont HAVE to use Swift (I'm not) | 09:57 |
argonius | but for beginning i just have one server | 09:57 |
argonius | ah ok | 09:57 |
*** miclorb_ has joined #openstack | 09:57 | |
*** ccc111 has joined #openstack | 09:57 | |
deepest | what is the relationship between Glance and Swift? | 09:57 |
argonius | so the main parts for me are | 09:57 |
CloudAche84 | you can tell Glance to just use a folder on the server to store images | 09:58 |
argonius | the nova-suite | 09:58 |
argonius | glance | 09:58 |
argonius | and openstack dashboard? | 09:58 |
CloudAche84 | yes | 09:58 |
CloudAche84 | although you dont need to use Glance either | 09:58 |
argonius | ah ok | 09:58 |
argonius | and hypervisor? | 09:58 |
CloudAche84 | as you can set --image_service=nova.image.local.LocalImageService in nova.conf | 09:59 |
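Putting the flags from this discussion together, a minimal illustrative nova.conf fragment in the flag-file style of the era (values are placeholders):

```
--sql_connection=mysql://nova:password@localhost/nova
--image_service=nova.image.local.LocalImageService
```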
argonius | kvm is supported afaik, isn't it | 09:59 |
CloudAche84 | yep | 09:59 |
argonius | hmm | 09:59 |
CloudAche84 | KVM is enabled as part of the default setup | 09:59 |
CloudAche84 | but there are a number of "supported" hypervisors | 09:59 |
argonius | hmm but with cpu virtualization support | 09:59 |
*** miclorb_ has quit IRC | 09:59 | |
CloudAche84 | inc Xen, ESX, Lxc etc | 10:00 |
argonius | kvm is the best at the moment i think | 10:00 |
argonius | isn't it? | 10:00 |
CloudAche84 | Horses for courses I think but most development seems to be done against KVM | 10:00 |
argonius | do u have some experience | 10:00 |
argonius | hmmhh | 10:00 |
*** ccc11 has quit IRC | 10:01 | |
argonius | ok so i will just try to setup ubuntu with nova suite and openstack-dashboard ;) | 10:01 |
CloudAche84 | although Citrix are developing a commercial Openstack solution which will use Xen | 10:01 |
CloudAche84 | so I think Xen may be the stronger candidate in the future | 10:01 |
argonius | hmm maybe | 10:01 |
argonius | we will c ;) | 10:01 |
CloudAche84 | yep | 10:01 |
argonius | are you often here? | 10:01 |
CloudAche84 | try to be | 10:01 |
argonius | fine fine | 10:01 |
argonius | i will try it to in the future | 10:02 |
CloudAche84 | the more people in here the better I think. Will help drive the development quicker | 10:02 |
argonius | i think openstack will be a great project | 10:02 |
CloudAche84 | me too | 10:02 |
argonius | CloudAche84: where r you from, if i may ask | 10:02 |
CloudAche84 | I hope so anyway | 10:02 |
CloudAche84 | because we are deploying a lot of it :) | 10:02 |
argonius | hrhr | 10:03 |
deepest | Hi CloudAche84, are U still there? | 10:04 |
deepest | can I ask U a question | 10:04 |
CloudAche84 | yep | 10:04 |
deepest | I have some problem about Swift | 10:04 |
deepest | I want U check for me about the error | 10:04 |
CloudAche84 | I havent deployed swift yet ( we arent planning to use it) | 10:05 |
deepest | I dont know where it is | 10:05 |
CloudAche84 | where what is? | 10:05 |
deepest | the error | 10:05 |
argonius | so ok, i will now try to deploy it on ubuntu, thanks a lot for your help CloudAche84 | 10:05 |
CloudAche84 | no prob | 10:05 |
*** argonius is now known as argonius_afk | 10:05 | |
argonius_afk | <--- afk | 10:05 |
deepest | here is my glance-api.conf | 10:05 |
deepest | http://paste.openstack.org/show/2039/ | 10:05 |
*** RMS-ict has quit IRC | 10:05 | |
*** miclorb_ has joined #openstack | 10:06 | |
*** RMS-ict has joined #openstack | 10:06 | |
deepest | and here http://paste.openstack.org/show/2038/, this is an error what I got | 10:06 |
deepest | How can I solve this error? | 10:07 |
CloudAche84 | like I say I havent deployed swift but to me your store_key looks wrong | 10:08 |
*** miclorb_ has quit IRC | 10:09 | |
deepest | I think so, but I dont know how to find the correct store_key | 10:09 |
deepest | because the tutorial didn't say anything about the store_key http://swift.openstack.org/development_saio.html#optional-setting-up-rsyslog-for-individual-logging | 10:10 |
deepest | do U have any suggestion? | 10:10 |
*** cian has joined #openstack | 10:11 | |
CloudAche84 | not really, apart from checking this out. Maybe it will help generate a proper key? http://docs.openstack.org/cactus/openstack-object-storage/admin/content/verify-swift-installation.html | 10:16 |
deepest | Ok | 10:18 |
deepest | thanks U | 10:18 |
deepest | I will check it | 10:18 |
*** RMS-ict has quit IRC | 10:19 | |
*** RMS-ict has joined #openstack | 10:20 | |
*** vernhart has quit IRC | 10:24 | |
*** RMS-ict has quit IRC | 10:24 | |
*** vernhart has joined #openstack | 10:24 | |
*** miclorb_ has joined #openstack | 10:27 | |
*** miclorb_ has quit IRC | 10:28 | |
*** TeTeT has quit IRC | 10:31 | |
*** deepest has quit IRC | 10:38 | |
*** ccc111 has quit IRC | 10:45 | |
*** TimR has quit IRC | 10:51 | |
*** TimR has joined #openstack | 10:51 | |
*** guigui has quit IRC | 10:55 | |
*** zigo has joined #openstack | 10:58 | |
*** ton_katsu has quit IRC | 10:59 | |
*** kidrock has quit IRC | 10:59 | |
*** guigui has joined #openstack | 11:01 | |
*** Ryan_Lane has joined #openstack | 11:05 | |
*** ctennis has quit IRC | 11:16 | |
*** TimR has quit IRC | 11:17 | |
*** TimR has joined #openstack | 11:18 | |
*** markvoelker has joined #openstack | 11:18 | |
*** shang has quit IRC | 11:19 | |
*** truijllo has joined #openstack | 11:22 | |
*** dirkx has joined #openstack | 11:22 | |
truijllo | hi guys! | 11:23 |
CloudAche84 | hi | 11:25 |
truijllo | I'm working on the dashboard, using trunk versions (for nova and dashboard). launching an instance, the user_id field in the db gets a dump of the user object (similar thing in project_id) | 11:27 |
truijllo | in nova-api.log I see something like | 11:27 |
truijllo | 2011-08-04 13:21:26,696 nova.compute.api: Casting to scheduler for Project('1234', '1234', 'joeuser', '1234', ['admin', 'joeadmin', 'joetest', 'joeuser', 'muriel', 'truijllo'])/User('joeuser', 'joeuser', '13988189-dc63-4088-9f1b-eb14af725679', '4e4c9ea6-31d5-4f39-b33e-7b114cc8f0da', False)'s instance 24 (single-shot) 2011-08-04 13:21:26,697 nova.rpc: Making asynchronous cast on scheduler... 2011-08-04 13:21:26,697 nova.rpc: Crea | 11:27 |
truijllo | ( ops ) | 11:27 |
truijllo | so...I obtain an error | 11:27 |
truijllo | BUT .... If I try to run an instance using eucatools everything works and in the user_id field there's the old-fashoned username | 11:29 |
*** ctennis has joined #openstack | 11:30 | |
YorikSar | truijllo: What is that error? | 11:30 |
*** PeteDaGuru has joined #openstack | 11:30 | |
truijllo | ( the readable log is here http://pastie.org/2319268 ) | 11:30 |
YorikSar | What version of nova do you use? | 11:32 |
truijllo | in the dashboard I obtain this message: Unable to launch instance: 400 Bad Request The server could not comply with the request since it is either malformed or otherwise incorrect. | 11:33 |
truijllo | nova is in the current trunk version | 11:33 |
*** shang has joined #openstack | 11:35 | |
*** littleidea has joined #openstack | 11:36 | |
YorikSar | Do you get any signs of this request in comput log? | 11:37 |
*** littleidea has quit IRC | 11:37 | |
truijllo | nope | 11:39 |
*** viraptor has joined #openstack | 11:40 | |
truijllo | nothing in compute.log or schedule.log | 11:40 |
*** vcaron has joined #openstack | 11:41 | |
YorikSar | It looks like I understand where this problem came from. | 11:42 |
*** littleidea has joined #openstack | 11:42 | |
YorikSar | You should try the version before commit #1348 | 11:43 |
truijllo | of nova ? | 11:43 |
YorikSar | Yes | 11:43 |
truijllo | so... 1348 is the first revision with this problem, am I right? | 11:44 |
YorikSar | It's my guess. There was a lot of work related to users and projects in that commit. | 11:45 |
truijllo | thank you so much, I'll try 1347 asap. I've looked for blueprints or something about it but nothing appears for this issue | 11:46 |
YorikSar | And it looks like the source of your problem is sending a User object instead of a user id in the cast context | 11:46 |
truijllo | yes, you're right | 11:47 |
*** mfer has joined #openstack | 11:49 | |
*** CloudAche84 has quit IRC | 11:55 | |
*** _mage_ has quit IRC | 11:56 | |
*** lborda has quit IRC | 11:56 | |
*** dirkx has quit IRC | 11:58 | |
*** ncode has quit IRC | 12:00 | |
*** ccc11 has joined #openstack | 12:04 | |
*** jtanner has joined #openstack | 12:04 | |
*** yosh has quit IRC | 12:06 | |
*** yosh has joined #openstack | 12:08 | |
*** CloudAche84 has joined #openstack | 12:14 | |
*** RMS-ict has joined #openstack | 12:15 | |
*** AhmedSoliman has quit IRC | 12:16 | |
*** Kiall has quit IRC | 12:16 | |
*** Kiall has joined #openstack | 12:17 | |
*** AhmedSoliman has joined #openstack | 12:17 | |
*** KAM has joined #openstack | 12:20 | |
*** Ryan_Lane has quit IRC | 12:23 | |
*** yosh has quit IRC | 12:26 | |
*** blamar has quit IRC | 12:27 | |
*** blamar has joined #openstack | 12:27 | |
*** dprince has joined #openstack | 12:28 | |
*** ncode has joined #openstack | 12:31 | |
*** yosh has joined #openstack | 12:35 | |
*** javiF has quit IRC | 12:35 | |
*** tomh__ has quit IRC | 12:42 | |
*** marrusl has joined #openstack | 12:43 | |
*** Tribaal is now known as Tribaalatchu | 12:48 | |
*** Ryan_Lane has joined #openstack | 12:49 | |
*** msivanes has joined #openstack | 12:51 | |
*** truijllo has quit IRC | 12:53 | |
*** osier has quit IRC | 12:53 | |
*** alfred_mancx has joined #openstack | 12:54 | |
*** tomh_ has joined #openstack | 12:56 | |
*** freeflying has quit IRC | 12:58 | |
*** freeflying has joined #openstack | 12:58 | |
*** mnaser has joined #openstack | 12:59 | |
*** mnaser has quit IRC | 13:02 | |
*** mnaser has joined #openstack | 13:02 | |
*** alfred_mancx has quit IRC | 13:03 | |
*** sandywalsh has joined #openstack | 13:05 | |
*** Ryan_Lane has quit IRC | 13:05 | |
*** RMS-ict has quit IRC | 13:09 | |
*** mnaser has quit IRC | 13:10 | |
*** RMS-ict has joined #openstack | 13:11 | |
*** truijllo has joined #openstack | 13:11 | |
*** hadrian has joined #openstack | 13:12 | |
*** BK_man has joined #openstack | 13:12 | |
*** jtanner has quit IRC | 13:13 | |
*** MarcMorata has quit IRC | 13:13 | |
*** MarcMorata has joined #openstack | 13:14 | |
*** vcaron has quit IRC | 13:15 | |
*** Shentonfreude has joined #openstack | 13:15 | |
*** imsplitbit has joined #openstack | 13:17 | |
*** adambergstein_ has joined #openstack | 13:19 | |
*** adambergstein has quit IRC | 13:19 | |
*** adambergstein_ is now known as adambergstein | 13:19 | |
*** HowardRoark has joined #openstack | 13:24 | |
*** cereal_bars has joined #openstack | 13:25 | |
*** aaron_husng has joined #openstack | 13:26 | |
*** dolphm has joined #openstack | 13:26 | |
*** TimR has quit IRC | 13:27 | |
*** TimR has joined #openstack | 13:28 | |
*** jkoelker has joined #openstack | 13:29 | |
adambergstein | hi folks, i have an issue. euca-describe-instances has shown my instance as 'building' for several hours, my nova-compute log shows this 'Found instance 'instance-0000000d' in DB but no VM. State=9, so assuming spawn is in progress.' | 13:31 |
adambergstein | does anyone know what that means? | 13:31 |
*** pimpministerp has joined #openstack | 13:33 | |
*** bcwaldon has joined #openstack | 13:34 | |
*** primeministerp has quit IRC | 13:34 | |
*** pimpministerp is now known as primeminsterp | 13:34 | |
*** kashyap has quit IRC | 13:35 | |
*** undertree has joined #openstack | 13:35 | |
*** jatsrt has joined #openstack | 13:37 | |
*** bsza has joined #openstack | 13:37 | |
*** worstadmin has quit IRC | 13:39 | |
*** mitchless has quit IRC | 13:40 | |
*** reidrac has quit IRC | 13:40 | |
*** dolphm has quit IRC | 13:40 | |
jatsrt | anyone around this morning | 13:41 |
jatsrt | or evening or afternoon depending on your part of the world | 13:42 |
*** lborda has joined #openstack | 13:43 | |
*** vishnu has joined #openstack | 13:44 | |
*** jj0hns0n has joined #openstack | 13:44 | |
ttx | chmouel: any progress on the London meetup thing ? At this point, that would make a very late announcement (cc: Daviey) | 13:45 |
*** netmarkjp has joined #openstack | 13:47 | |
*** lborda_ has joined #openstack | 13:47 | |
*** jtanner has joined #openstack | 13:47 | |
jatsrt | so I asked the other day, but had to run, so I'll ask again | 13:48 |
jatsrt | any reason with the latest code that my instances no longer get a hostname | 13:48 |
jatsrt | was there a change to the way they are generated/assigned? | 13:48 |
*** lborda has quit IRC | 13:48 | |
*** kbringard has joined #openstack | 13:51 | |
*** worstadmin has joined #openstack | 13:51 | |
truijllo | YorikSar: yeah! it works! revision 1347 worked like a charm | 13:52 |
*** vcaron has joined #openstack | 13:52 | |
*** vcaron has left #openstack | 13:53 | |
*** huslage has joined #openstack | 13:53 | |
*** KAM has quit IRC | 13:54 | |
*** aaron_husng has quit IRC | 13:54 | |
*** f4m8 is now known as f4m8_ | 13:55 | |
*** javiF has joined #openstack | 13:57 | |
YorikSar | truijllo: You should try revision 1348 and if it does not work, file a bug. | 13:58 |
*** dolphm has joined #openstack | 13:58 | |
*** littleidea has quit IRC | 13:59 | |
YorikSar | adambergstein: you should check the log before those 'Found instance' messages appeared. | 14:00 |
*** mdomsch has joined #openstack | 14:01 | |
*** LiamMac has joined #openstack | 14:01 | |
*** ejat has joined #openstack | 14:04 | |
*** nmistry has joined #openstack | 14:04 | |
*** whitt has joined #openstack | 14:05 | |
*** dendro-afk is now known as dendrobates | 14:06 | |
jatsrt | hmmm, going through the code and not finding anything that looks like the hostname functionality would have changed | 14:07 |
*** AhmedSoliman has quit IRC | 14:09 | |
*** cloudvirt has joined #openstack | 14:10 | |
tudamp | hi all, can anyone help me with the nova scheduler? | 14:11 |
*** cmdrsizzlorr has joined #openstack | 14:11 | |
*** cloudvirt has quit IRC | 14:11 | |
*** dolphm has quit IRC | 14:12 | |
*** dolphm has joined #openstack | 14:13 | |
*** cloudvirt has joined #openstack | 14:14 | |
adambergstein | YorikSar: which log? nova-compute? | 14:14 |
jatsrt | OK: with the hostname problem | 14:15 |
jatsrt | instead of the instance_id it is now sending "server_x" | 14:15 |
jatsrt | which appears to come from display name? | 14:15 |
jatsrt | _ is invalid to the "hostname" command | 14:16 |
*** reidrac has joined #openstack | 14:16 | |
jatsrt | so no hostname is being set by cloud-init | 14:16 |
*** nmistry has quit IRC | 14:16 | |
TREllis | jatsrt: btw, I'm also seeing this behaviour | 14:17 |
TREllis | yeah, meta-data shows server_x instead of i-xxxxxxxx | 14:17 |
jatsrt | so this is "hostname" in the db | 14:17 |
jatsrt | for the instance | 14:17 |
jatsrt | just not too sure where that is coming from yet | 14:18 |
TREllis | is there a bug open? | 14:18 |
jatsrt | not too sure yet | 14:18 |
*** ldlework has joined #openstack | 14:19 | |
*** worstadmin has quit IRC | 14:20 | |
jatsrt | hard bug to search for but doesn't look like it | 14:20 |
jatsrt | searching for server_, hostname, too generic | 14:20 |
TREllis | ok, want me to create one? or will you? | 14:21 |
TREllis | I planned to do that yesterday, but got sidetracked | 14:21 |
YorikSar | adambergstein: Yes. I guess the compute node failed to finish provisioning and hit a silent error | 14:21 |
adambergstein | YorikSar: http://paste.openstack.org/show/2040/ | 14:22 |
adambergstein | that was nova-compute | 14:23 |
*** reed has joined #openstack | 14:23 | |
adambergstein | YorikSar: ill check the other logs | 14:23 |
adambergstein | under the nova dir | 14:23 |
YorikSar | adambergstein: Yeah, I thought so. I don't know why this happens, but sometimes those lock files are left lying around during a nova reboot. | 14:25 |
YorikSar | adambergstein: Delete all of them and this should do it. | 14:25 |
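A minimal sketch of that cleanup, assuming an Ubuntu upstart setup; the lock directory is whatever the lock_path flag in nova.conf points at (the path and file pattern below are guesses, so verify them first):

    sudo stop nova-compute
    # hypothetical lock_path; check the lock_path flag in your nova.conf
    sudo rm -f /var/lib/nova/tmp/nova-*.lock
    sudo start nova-compute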
adambergstein | YorikSar: how do i do that? | 14:25 |
jatsrt | TREllis: still diggin | 14:25 |
jatsrt | this change: | 14:25 |
jatsrt | http://bazaar.launchpad.net/~hudson-openstack/nova/trunk/revision/1188.4.1 | 14:25 |
*** kidrock has joined #openstack | 14:25 | |
adambergstein | go into my lock directory and clear the files? | 14:25 |
adambergstein | might there be a permissions issue? | 14:25 |
jatsrt | is what made it work differently, but it seems somewhere that it changed more | 14:26 |
YorikSar | adambergstein: Yeah, just clear it. Or if it's empty anyway, there could be a permissions issue | 14:26 |
jatsrt | otherwise you need to have a proper display name set for every instance started | 14:26 |
adambergstein | YorikSar: Thanks, i'll give this a shot! | 14:26 |
jatsrt | still seems like a bug that the default behavior is to set it to server_%d | 14:27 |
jatsrt | which would always be invalid | 14:27 |
*** jj0hns0n has quit IRC | 14:29 | |
adambergstein | YorikSar: should i clear out my instances and/or images and rebuild them? | 14:29 |
*** HowardRoark has quit IRC | 14:29 | |
jatsrt | TREllis: looking at it more, that whole checkin was just broken, it takes display name and replaces " " with "_" which is invalid even if it worked right | 14:31 |
jatsrt | wondering why it was thought that we needed to not use the instance_id? | 14:31 |
*** kidrock has quit IRC | 14:32 | |
adambergstein | YorikSar: Thank you!!! | 14:32 |
adambergstein | that worked | 14:33 |
*** aliguori has quit IRC | 14:33 | |
adambergstein | i have a new issue now but i will research it | 14:33 |
vishnu | I just installed cactus on RHEL6. When I start my instances they go from launching to shutdown. Where should I look for errors? | 14:33 |
jatsrt | "Made hostname independent from ec2 id. Add generation of hostnames based on display name." | 14:33 |
*** jfluhmann has joined #openstack | 14:33 | |
jatsrt | So, the problem with this logic is that if you are using basic euca tools to run instances, there is no way to set a display name, so this would never work | 14:34 |
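To make the failure concrete (the hostnames below are hypothetical; per the discussion above, the hostname command rejects underscores):

    hostname server_1      # fails: '_' is not a valid hostname character
    hostname i-0000000d    # the old ec2-id-based name, which is always safe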
jatsrt | TREllis: would you like to enter the bug | 14:34 |
*** dgags has joined #openstack | 14:35 | |
*** mgoldmann has quit IRC | 14:36 | |
chmouel | ttx: I didn't get any answers on the budget side either, so I have to say there is not going to be anything organised | 14:36 |
*** ejat has quit IRC | 14:37 | |
ttx | chmouel: right, let's drop the idea, then | 14:37 |
*** jj0hns0n has joined #openstack | 14:40 | |
*** dirkx has joined #openstack | 14:40 | |
*** MarcMorata has quit IRC | 14:40 | |
jatsrt | TREllis: Sorry, I entered the bug https://bugs.launchpad.net/nova/+bug/820962 my first one :-) | 14:41 |
adambergstein | YorikSar: I don't know if you have time to look at this, but here is my recent issue: http://paste.openstack.org/show/2041/ and i found this bug report: https://bugs.launchpad.net/nova/+bug/807764. it looks like there is a patch, how would i go about installing that? | 14:41 |
*** TimR has quit IRC | 14:42 | |
*** mattray has joined #openstack | 14:42 | |
TREllis | jatsrt: cool thanks | 14:42 |
jatsrt | np | 14:42 |
*** imsplitbit has quit IRC | 14:42 | |
*** mattray has quit IRC | 14:46 | |
*** rnirmal has joined #openstack | 14:47 | |
*** dendrobates is now known as dendro-afk | 14:47 | |
*** cp16net has joined #openstack | 14:49 | |
*** mattray has joined #openstack | 14:53 | |
*** javiF has quit IRC | 14:54 | |
*** cp16net has quit IRC | 14:54 | |
*** cp16net has joined #openstack | 14:54 | |
*** tudamp has left #openstack | 14:55 | |
*** cp16net_ has joined #openstack | 14:56 | |
*** willaerk has quit IRC | 14:57 | |
*** cp16net has quit IRC | 14:59 | |
*** cp16net_ is now known as cp16net | 14:59 | |
cmdrsizzlorr | hi, I'm having some trouble with a "waiting for metadata service" entry in console output. Using the 11.04 uec image and FlatNetworking. Would anyone have any suggestions as to what to check? | 14:59 |
cmdrsizzlorr | According to some pages, I've also added suitable iptables prerouting entries. | 15:00 |
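For comparison, the prerouting rule commonly used to reach the nova metadata service looks like this (a sketch; $API_IP stands for the host running nova-api, and 8773 assumes the default EC2 API port):

    sudo iptables -t nat -A PREROUTING -d 169.254.169.254/32 \
        -p tcp -m tcp --dport 80 -j DNAT --to-destination $API_IP:8773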
*** deshantm_laptop has joined #openstack | 15:01 | |
*** aliguori has joined #openstack | 15:03 | |
*** jj0hns0n_ has joined #openstack | 15:03 | |
*** amccabe has joined #openstack | 15:04 | |
*** jj0hns0n has quit IRC | 15:05 | |
*** jj0hns0n_ is now known as jj0hns0n | 15:05 | |
*** dragondm has joined #openstack | 15:06 | |
*** reidrac has quit IRC | 15:08 | |
*** llang629 has joined #openstack | 15:09 | |
*** llang629 has left #openstack | 15:09 | |
*** ccc11 has quit IRC | 15:10 | |
*** Ryan_Lane has joined #openstack | 15:12 | |
*** truijllo has quit IRC | 15:14 | |
*** RMS-ict has quit IRC | 15:16 | |
*** RMS-ict has joined #openstack | 15:16 | |
jatsrt | anyone using python-novaclient | 15:18 |
jatsrt | and know why everything I do is a 404 or a 500 | 15:18 |
adambergstein | has anyone seen this issue before? http://paste.openstack.org/show/2042/ | 15:19 |
*** HowardRoark has joined #openstack | 15:20 | |
*** mdomsch has quit IRC | 15:21 | |
*** msivanes1 has joined #openstack | 15:21 | |
*** herve06 has joined #openstack | 15:22 | |
*** cmdrsizzlorr has quit IRC | 15:22 | |
*** jj0hns0n_ has joined #openstack | 15:22 | |
*** msivanes has quit IRC | 15:23 | |
*** Ryan_Lane has quit IRC | 15:23 | |
*** obino has quit IRC | 15:24 | |
*** Tribaalatchu has quit IRC | 15:25 | |
*** jj0hns0n has quit IRC | 15:26 | |
*** jj0hns0n_ is now known as jj0hns0n | 15:26 | |
*** alandman has joined #openstack | 15:26 | |
jatsrt | adambergstein: yes | 15:27 |
jatsrt | a few things I have noticed with that | 15:28 |
*** nickon has quit IRC | 15:28 | |
jatsrt | do you have virtualization enabled for your processors? | 15:28 |
jatsrt | not just capable, but enabled in the BIOS if need be | 15:28 |
kbringard | that error seems to mask other issues a lot of the time as well | 15:28 |
jatsrt | are you sharing the instances directory for live migration | 15:29 |
*** aliguori has quit IRC | 15:29 | |
jatsrt | if so, are all your machines the same proc type | 15:29 |
jatsrt | kbringard: yep, a very generic something went wrong error | 15:29 |
creiht | winston-d: I'm in now, if you are still up | 15:30 |
jatsrt | since this one came from "create_new_domain" seems like it would be virtualization being enabled or not | 15:30 |
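Quick ways to verify that on the compute host (kvm-ok comes from Ubuntu's cpu-checker package, if installed):

    egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no VT-x/AMD-V exposed to the OS
    ls -l /dev/kvm                       # should exist once the kvm module is loaded
    kvm-ok                               # reports whether KVM acceleration can be used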
*** dendro-afk is now known as dendrobates | 15:30 | |
*** dolphm has quit IRC | 15:33 | |
*** guigui has quit IRC | 15:37 | |
*** msinhore has joined #openstack | 15:37 | |
*** dolphm has joined #openstack | 15:38 | |
*** CatKiller has joined #openstack | 15:39 | |
*** Cyns has joined #openstack | 15:43 | |
*** aliguori has joined #openstack | 15:43 | |
*** MarcMorata has joined #openstack | 15:45 | |
*** jj0hns0n has quit IRC | 15:46 | |
*** dolphm has quit IRC | 15:51 | |
*** obino has joined #openstack | 15:52 | |
*** imsplitbit has joined #openstack | 15:56 | |
*** maplebed|afk has quit IRC | 15:58 | |
*** javiF has joined #openstack | 15:59 | |
*** stewart has joined #openstack | 16:00 | |
*** negronjl has quit IRC | 16:00 | |
*** negronjl has joined #openstack | 16:00 | |
*** jtanner has quit IRC | 16:02 | |
*** mrrk has joined #openstack | 16:03 | |
*** joar has quit IRC | 16:08 | |
*** jtanner has joined #openstack | 16:12 | |
*** herve06 has quit IRC | 16:12 | |
*** dirkx has quit IRC | 16:14 | |
*** maplebed has joined #openstack | 16:16 | |
*** mrrk has quit IRC | 16:19 | |
*** javiF has quit IRC | 16:20 | |
*** galstrom has joined #openstack | 16:21 | |
*** huslage has quit IRC | 16:22 | |
*** cole has joined #openstack | 16:25 | |
jatsrt | OK, so it seems almost everything I try to do with the nova command line is getting 501 errors, anyone else experience this? | 16:27 |
*** Nagaraju has joined #openstack | 16:27 | |
Nagaraju | hi | 16:27 |
jatsrt | is it a possible version mismatch, though both are current head | 16:27 |
kbringard | by nova command line, do you mean nova-manage? | 16:28 |
kbringard | the openstack tools? | 16:28 |
kbringard | the euca2ools? | 16:28 |
Nagaraju | how about nova scheduler framework | 16:28 |
Nagaraju | can we write our own nova scheduler by inheriting or using scheduler framework | 16:28 |
jatsrt | no, I mean "nova" from python-novaclient | 16:28 |
jatsrt | nova list | 16:28 |
jatsrt | ok | 16:29 |
kbringard | ah, OK | 16:29 |
jatsrt | nova boot | 16:29 |
jatsrt | not ok | 16:29 |
kbringard | nova list | 16:29 |
kbringard | 'x-server-management-url' | 16:29 |
kbringard | that's what mine is doing | 16:29 |
jatsrt | I had that too, it was pointing to 8773 and not 8774 | 16:29 |
viraptor | jatsrt: logs of nova-api should tell you what's failing (provided you actually get to the right server) | 16:29 |
WormMan | kbringard: I've seen that when using the wrong URL | 16:29 |
kbringard | ah, I don't use the openstack api as much as the ec2 | 16:30 |
jatsrt | viraptor: getting generic 501, not implemented, no errors | 16:30 |
*** primeminsterp has quit IRC | 16:30 | |
kbringard | do I need to recreate my rc file? | 16:30 |
jatsrt | is it just the case that it isn't actually working yet | 16:30 |
* kbringard thinks | 16:30 | |
jatsrt | kbringard: no, it's an environment variable | 16:30 |
jatsrt | check for your NOVA_ exports | 16:31 |
jatsrt | should be in your novarc | 16:31 |
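The exports in question look roughly like this (a sketch with placeholder values; the exact variable set depends on your python-novaclient version):

    export NOVA_USERNAME=myuser
    export NOVA_API_KEY=mykey
    export NOVA_PROJECT_ID=myproject
    export NOVA_URL=http://172.16.0.1:8774/v1.1/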
*** lborda_ has quit IRC | 16:31 | |
kbringard | right, I meant, I wonder if I'm getting the wrong url because the endpoint changed and I hadn't regenned new rc files | 16:31 |
*** lborda has joined #openstack | 16:31 | |
kbringard | but I regrabbed the zip file and the URL is the same, and I'm getting the same thing now | 16:32 |
kbringard | no error in the api log though, weird | 16:32 |
*** undertree has left #openstack | 16:32 | |
*** marrusl has quit IRC | 16:32 | |
WormMan | I usually just try random endpoints until it works :) | 16:32 |
kbringard | haha, yea, I was going to use its internal IP and see if that fixes it | 16:32 |
WormMan | or run lsof on the control node to see where it's listening | 16:32 |
*** JStoker has quit IRC | 16:33 | |
*** Nagaraju has quit IRC | 16:33 | |
kbringard | yea, I checked netstat and it's on 0.0.0.0:8774 | 16:33 |
kbringard | nova-api 1864 nova 5u IPv4 62461843 0t0 TCP *:8774 (LISTEN) | 16:33 |
jatsrt | maybe that is my problem, should there be something "more" in the nova.conf | 16:34 |
*** JStoker has joined #openstack | 16:34 | |
*** JStoker has quit IRC | 16:34 | |
*** JStoker has joined #openstack | 16:34 | |
*** marrusl has joined #openstack | 16:34 | |
WormMan | kbringard: try connecting with a web client and see what you get?(curl/wget) | 16:34 |
kbringard | yea, I was just about to do that | 16:35 |
WormMan | when I hit my NOVA_URL with curl I get an unauthorized message | 16:35 |
jatsrt | these are the defaults: | 16:35 |
jatsrt | DEFINE_string('osapi_host', '$my_ip', 'ip of api server') | 16:35 |
jatsrt | DEFINE_string('osapi_scheme', 'http', 'prefix for openstack') | 16:35 |
jatsrt | DEFINE_integer('osapi_port', 8774, 'OpenStack API port') | 16:35 |
jatsrt | could try to explicitly set them | 16:35 |
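In nova.conf flag syntax, setting them explicitly would look like this (the address is a placeholder for your API host):

    --osapi_host=192.168.0.1
    --osapi_port=8774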
kbringard | oh, snap, mine looks to be a keystone thing | 16:35 |
*** obino has quit IRC | 16:35 | |
kbringard | The resource must be accessed through a proxy located at <a href="http://127.0.0.1:5001">http://127.0.0.1:5001</a>; | 16:36 |
kbringard | but I don't have the keystone daemon up atm | 16:36 |
kbringard | that would certainly be the issue :-) | 16:36 |
*** irahgel has left #openstack | 16:37 | |
*** netmarkjp has left #openstack | 16:37 | |
*** netmarkjp has joined #openstack | 16:37 | |
*** jfluhmann_ has joined #openstack | 16:37 | |
*** jc_smith has quit IRC | 16:38 | |
*** mwhooker has quit IRC | 16:38 | |
*** jfluhmann has quit IRC | 16:38 | |
*** CloudAche84 has quit IRC | 16:39 | |
*** obino has joined #openstack | 16:39 | |
*** murkk has quit IRC | 16:40 | |
*** RMS-ict has quit IRC | 16:40 | |
*** Ephur has joined #openstack | 16:41 | |
*** heckj has joined #openstack | 16:41 | |
jatsrt | I think I'm going to chalk up the behavior of nova client to "not done yet" | 16:42 |
*** vishnu has quit IRC | 16:44 | |
kbringard | seems reasonable to me | 16:46 |
*** jdurgin has joined #openstack | 16:47 | |
*** dendrobates is now known as dendro-afk | 16:50 | |
*** pguth66 has joined #openstack | 16:50 | |
*** clauden has joined #openstack | 16:54 | |
*** primeministerp has joined #openstack | 16:57 | |
*** dirkx has joined #openstack | 16:57 | |
*** cole has quit IRC | 16:57 | |
*** primeministerp has quit IRC | 16:58 | |
*** primeministerp has joined #openstack | 16:59 | |
*** mrrk has joined #openstack | 16:59 | |
*** mwhooker has joined #openstack | 17:04 | |
*** jc_smith has joined #openstack | 17:07 | |
*** kashyap has joined #openstack | 17:07 | |
*** dendro-afk is now known as dendrobates | 17:09 | |
*** jedi4ever has quit IRC | 17:10 | |
*** rchavik has quit IRC | 17:15 | |
*** dirkx has quit IRC | 17:15 | |
*** imsplitbit has quit IRC | 17:17 | |
*** imsplitbit has joined #openstack | 17:18 | |
*** RickB17 has joined #openstack | 17:23 | |
WormMan | ooo, this looks fun, transparent_hugepages... too bad my Ubuntu kernel doesn't have it enabled. | 17:25 |
RickB17 | does storageblaze run Swift? or do they have their own platform? | 17:25 |
*** adjohn has joined #openstack | 17:26 | |
*** jtanner has quit IRC | 17:32 | |
*** kashyap_ has joined #openstack | 17:39 | |
*** kashyap has quit IRC | 17:40 | |
*** dirkx has joined #openstack | 17:44 | |
*** littleidea has joined #openstack | 17:49 | |
*** daysmen has quit IRC | 17:49 | |
*** zigo has quit IRC | 17:50 | |
*** mnaser has joined #openstack | 17:59 | |
*** adjohn has quit IRC | 17:59 | |
*** adjohn has joined #openstack | 17:59 | |
*** dprince has quit IRC | 18:00 | |
*** tomh_ has quit IRC | 18:00 | |
*** cloudvirt has quit IRC | 18:01 | |
*** alandman has quit IRC | 18:02 | |
creiht | RickB17: you mean backblaze? If so, they have their own software | 18:03 |
*** dirkx has quit IRC | 18:05 | |
*** aliguori has quit IRC | 18:05 | |
*** MarcMorata has quit IRC | 18:07 | |
*** dirkx has joined #openstack | 18:07 | |
*** hggdh has quit IRC | 18:10 | |
*** joearnold has joined #openstack | 18:11 | |
*** jtanner has joined #openstack | 18:12 | |
*** adiantum has joined #openstack | 18:12 | |
*** hggdh has joined #openstack | 18:13 | |
*** stewart has quit IRC | 18:13 | |
*** huslage has joined #openstack | 18:15 | |
adambergstein | jatsrt: kbringard: sorry i was away from the computer for a little while | 18:15 |
*** nickethier has quit IRC | 18:15 | |
adambergstein | i am fairly certain the machine is set up for virtualization | 18:15 |
*** dirkx has quit IRC | 18:16 | |
adambergstein | i ran some of the checks before i installed KVM | 18:16 |
*** huslage has quit IRC | 18:17 | |
*** iRTermite has quit IRC | 18:17 | |
*** huslage has joined #openstack | 18:17 | |
*** Cyns has quit IRC | 18:17 | |
*** huslage has quit IRC | 18:19 | |
*** iRTermite has joined #openstack | 18:20 | |
adambergstein | kbringard: i have made it a few more steps since yesterday, but still not there | 18:20 |
*** ejat has joined #openstack | 18:20 | |
*** ejat has joined #openstack | 18:20 | |
*** dolphm has joined #openstack | 18:21 | |
adambergstein | i tried flatdhcp but it made my nova-network go bad, so i reverted back to flat network | 18:21 |
*** huslage has joined #openstack | 18:21 | |
*** aliguori has joined #openstack | 18:22 | |
kbringard | hmmm, I know the least about flat, so I'm probably not going to be much help | 18:23 |
*** GeoDud has joined #openstack | 18:26 | |
adambergstein | kbringard: did you see the issue i posted earlier? i don't think it was related to flat | 18:27 |
kbringard | oh, sorry, I didn't | 18:27 |
*** huslage has quit IRC | 18:27 | |
kbringard | oh wait | 18:27 |
kbringard | the qemu thing | 18:27 |
adambergstein | http://paste.openstack.org/show/2042/ | 18:27 |
adambergstein | yes | 18:27 |
*** huslage has joined #openstack | 18:28 | |
adambergstein | kbringard: i didn't find a whole lot of info with that error | 18:29 |
*** huslage has quit IRC | 18:29 | |
kbringard | I've seen that error for a lot of things… let me think for a moment | 18:29 |
adambergstein | ok | 18:29 |
*** huslage has joined #openstack | 18:29 | |
*** cole has joined #openstack | 18:30 | |
jatsrt | adambergstein: do you have /dev/kvm ? | 18:34 |
adambergstein | let me check | 18:35 |
adambergstein | yes | 18:35 |
*** maplebed has quit IRC | 18:35 | |
jatsrt | ok, single machine or multiple machines | 18:36 |
RickB17 | creiht: yup, thanks. | 18:36 |
jatsrt | also, assuming nothing is running, clear out /var/lib/nova/instances | 18:36 |
adambergstein | single machine | 18:36 |
adambergstein | ok.. | 18:36 |
adambergstein | there are three directories in there | 18:37 |
adambergstein | _base instance-00000010 instance-00000011 | 18:37 |
*** huslage has quit IRC | 18:37 | |
adambergstein | clear them all out? | 18:37 |
*** huslage has joined #openstack | 18:37 | |
jatsrt | are you actually running any instances | 18:37 |
adambergstein | i have attempted to | 18:38 |
adambergstein | i believe all are 'shutdown' right now because of the issue | 18:38 |
jatsrt | if not then yes clear it all out, that will recreate them, watch all your logs for errors when you run again | 18:38 |
adambergstein | http://paste.openstack.org/show/2056/ | 18:38 |
jatsrt | terminate them all | 18:39 |
jatsrt | then clear out the dirs | 18:39 |
adambergstein | ok | 18:39 |
*** maplebed_ has joined #openstack | 18:39 | |
adambergstein | the only thing left now is '_base' | 18:40 |
*** FallenPegasus has joined #openstack | 18:40 | |
*** stewart has joined #openstack | 18:40 | |
adambergstein | do you want that removed too? | 18:40 |
*** darraghb has quit IRC | 18:40 | |
*** FallenPegasus is now known as MarkAtwood | 18:40 | |
jatsrt | yeah, you can clear that too | 18:41 |
*** Ephur has quit IRC | 18:41 | |
jatsrt | not that I expect this to fix anything, just make sure we are starting clean :-) | 18:41 |
adambergstein | ok | 18:41 |
adambergstein | cool deal | 18:41 |
adambergstein | i am a total noob, so forgive me | 18:41 |
*** huslage has quit IRC | 18:41 | |
jatsrt | not a problem | 18:42 |
*** ejat has quit IRC | 18:42 | |
adambergstein | ok its all clear | 18:42 |
*** iRTermite has quit IRC | 18:42 | |
*** npmapn has quit IRC | 18:42 | |
*** npmapn has joined #openstack | 18:43 | |
*** iRTermite has joined #openstack | 18:45 | |
adambergstein | jatsrt: should i try pushing another instance? | 18:46 |
jatsrt | what are you using for your image, stock ubuntu cloud image? | 18:46 |
adambergstein | or should we fix something first? | 18:46 |
adambergstein | let me check | 18:47 |
jatsrt | euca-describe-images | 18:47 |
*** jedi4ever has joined #openstack | 18:47 | |
adambergstein | EasyCrawler2/lucid-server-uec-amd64.img.manifest.xml | 18:47 |
adambergstein | yea | 18:47 |
jatsrt | ok, and you have a 64-bit machine? | 18:47 |
adambergstein | i have ran that a whole bunch :) got it down now | 18:47 |
adambergstein | yes | 18:47 |
jatsrt | ok, run it and watch all of your logs | 18:47 |
jatsrt | see if you get the same exception | 18:47 |
adambergstein | ok | 18:47 |
adambergstein | euca-run-instances? | 18:48 |
adambergstein | right? | 18:48 |
jatsrt | yep | 18:48 |
jatsrt | -t m1.tiny -z nova -k yourkey ami-000000?? | 18:48 |
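Spelled out with placeholder values (the key name and image id here are hypothetical; pick a real ami id from euca-describe-images):

    euca-run-instances -t m1.tiny -z nova -k mykey ami-00000012
    # in another terminal, watch for the exception:
    tail -f /var/log/nova/nova-compute.log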
adambergstein | yea | 18:48 |
adambergstein | check nova-compute log? | 18:49 |
jatsrt | yes | 18:49 |
adambergstein | same error | 18:49 |
kbringard | you may be missing some dependency somewhere… did you install from packages on Ubuntu? | 18:49 |
adambergstein | http://paste.openstack.org/show/2057/ | 18:49 |
adambergstein | i did, yes | 18:50 |
*** argv0 has joined #openstack | 18:50 | |
adambergstein | i also set up KVM from packages | 18:50 |
*** adjohn has quit IRC | 18:51 | |
jatsrt | so, I believe when I saw this it was the instances not getting made right, check the _base folder and the instance folder | 18:51 |
jatsrt | see what is in them | 18:51 |
jatsrt | are you using glance or just objectstore | 18:51 |
kbringard | jatsrt: yes | 18:52 |
jatsrt | also, is the instances directory local, nfs, etc? | 18:52 |
kbringard | that ^^ | 18:52 |
kbringard | the last time I saw this it was because the cached image on the compute node was 0 bytes | 18:52 |
jatsrt | did you see any errors in the glance log | 18:52 |
jatsrt | kbringard: right | 18:52 |
kbringard | good catch dude | 18:53 |
jatsrt | we just cleared out the cache and it did it again, so something is not copying right or it is 0 bytes in glance | 18:53 |
jatsrt | can check that in /var/lib/glance | 18:53 |
kbringard | sorry, I have much else going on so I'm only sort of paying attention :-/ | 18:53 |
adambergstein | i am using glance | 18:53 |
jatsrt | might not have registered properly | 18:53 |
*** HouseAway is now known as DrHouseMD | 18:53 | |
jatsrt | can you paste the euca-describe-images for me | 18:53 |
adambergstein | yes | 18:53 |
jatsrt | also ls -lah of /var/lib/glance | 18:53 |
*** galstrom is now known as jshepher | 18:54 | |
adambergstein | I'm getting it right now | 18:54 |
adambergstein | http://paste.openstack.org/show/2058/ | 18:54 |
*** jshepher has joined #openstack | 18:54 | |
jatsrt | sorry ls the images directory | 18:55 |
adambergstein | ok | 18:55 |
*** jshepher has left #openstack | 18:55 | |
adambergstein | http://paste.openstack.org/show/2059/ | 18:55 |
jatsrt | ok, your images are 0 bytes | 18:56 |
jatsrt | that's the problem | 18:56 |
jatsrt | your uec-publish-tarball did not work right | 18:56 |
adambergstein | hmmmm | 18:56 |
adambergstein | ok | 18:56 |
adambergstein | want me to clear them and try new ones? | 18:56 |
adambergstein | i don't get any errors when i run that | 18:56 |
*** bcwaldon has quit IRC | 18:57 | |
jatsrt | I believe I had that issue at some point, not sure why though | 18:57 |
jatsrt | you can just try it again into a new bucket | 18:57 |
adambergstein | i will do a euca-deregister | 18:57 |
adambergstein | clear out some | 18:57 |
jatsrt | can you paste your nova.conf too | 18:58 |
adambergstein | yep | 18:58 |
adambergstein | http://paste.openstack.org/show/2060/ | 18:58 |
*** npmapn has quit IRC | 18:59 | |
adambergstein | images reregistered | 18:59 |
adambergstein | can you step me through this from scratch to verify i am doing this right? | 18:59 |
adambergstein | i was following these instructions previously: http://wiki.openstack.org/RunningNova | 18:59 |
*** npmapn has joined #openstack | 19:00 | |
*** johnmark has joined #openstack | 19:00 | |
*** jakedahn has quit IRC | 19:01 | |
jatsrt | so what point are you at? | 19:01 |
*** jakedahn has joined #openstack | 19:01 | |
adambergstein | i have all images deregistered | 19:01 |
jatsrt | did you rerun uec-publish-tarball? | 19:01 |
adambergstein | no | 19:01 |
*** med_out is now known as med | 19:01 | |
adambergstein | want me to do that? | 19:01 |
jatsrt | one sec | 19:02 |
adambergstein | should i create a new bucket? | 19:02 |
adambergstein | ok | 19:02 |
jatsrt | so if you look at /var/lib/nova/buckets/<bucket name> | 19:02 |
jatsrt | do you see a bunch of files of non 0 size | 19:02 |
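A quick way to check both the bucket parts and the assembled images for truncation (paths per the defaults discussed above; &lt;bucket&gt; is a placeholder):

    find /var/lib/nova/buckets/<bucket> -size 0   # any hits are empty part files
    find /var/lib/glance/images -size 0           # assembled images should never be 0 bytes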
*** sugar_punkin has joined #openstack | 19:03 | |
adambergstein | let me look | 19:03 |
*** alfred_mancx has joined #openstack | 19:03 | |
*** sugar_punkin has quit IRC | 19:04 | |
adambergstein | jatsrt: http://paste.openstack.org/show/2062/ | 19:04 |
*** sugar_punkin has joined #openstack | 19:04 | |
jatsrt | looks ok there, you could just try to reregister the image | 19:04 |
adambergstein | ok, uec-publish-tarball? | 19:04 |
jatsrt | euca-register EasyCrawler2/lucid-server-uec-amd64-vmlinuz-virtual.manifest.xml | 19:05 |
jatsrt | first to register the kernel | 19:05 |
jatsrt | see what that generates in the glance/images dir | 19:05 |
*** rnirmal has quit IRC | 19:05 | |
adambergstein | UnknownError: An unknown error has occurred. Please try your request again. | 19:06 |
jatsrt | hmmm | 19:06 |
*** imsplitbit has quit IRC | 19:06 | |
adambergstein | ill check the log | 19:06 |
jatsrt | hmm | 19:07 |
*** cp16net_ has joined #openstack | 19:07 | |
jatsrt | make sure you source your novarc | 19:07 |
adambergstein | ok | 19:07 |
jatsrt | and you could just use uec-publish-tarball to a new bucket | 19:07 |
jatsrt | trying to save you some space though :-) | 19:07 |
kbringard | you can try my glance-uploader script | 19:07 |
kbringard | https://github.com/kevinbringard/OpenStack-tools | 19:08 |
kbringard | there is a bash script which relies on the glance-upload stuff that come with glance | 19:08 |
*** jakedahn_ has joined #openstack | 19:08 | |
adambergstein | oh, cool | 19:08 |
kbringard | and there is a ruby script, that uses ogle | 19:08 |
kbringard | which is a gem that interfaces directly with the glance API | 19:08 |
adambergstein | i created a new bucket | 19:08 |
jatsrt | ok euca-describe-images quickly | 19:09 |
jatsrt | if you can | 19:09 |
*** brd_from_italy has joined #openstack | 19:09 | |
*** sugar_punkin has quit IRC | 19:09 | |
adambergstein | kbringard: ill keep working with this for now | 19:09 |
adambergstein | btu i will try it later | 19:09 |
kbringard | okie | 19:09 |
kbringard | no worries | 19:09 |
kbringard | just trying to help :-) | 19:09 |
*** imsplitbit has joined #openstack | 19:09 | |
adambergstein | i appreciate it, you guys rock | 19:09 |
jatsrt | ok thought | 19:10 |
adambergstein | jatsrt: i have not ran uec-publish yet | 19:10 |
adambergstein | should i try that again? | 19:10 |
jatsrt | your bucket is all owned by root? | 19:10 |
adambergstein | let me look | 19:10 |
adambergstein | i ran sudo when making the bucket | 19:10 |
adambergstein | so i think so | 19:10 |
jatsrt | yes run uec-publish-tarball <blah>.tar.gz newbucket | 19:10 |
adambergstein | ok | 19:10 |
jatsrt | don't pre make the bucket | 19:10 |
*** cp16net has quit IRC | 19:10 | |
adambergstein | sudo or not? | 19:10 |
jatsrt | nope | 19:11 |
adambergstein | ok | 19:11 |
adambergstein | its going... | 19:11 |
*** jakedahn has quit IRC | 19:11 | |
*** jakedahn_ is now known as jakedahn | 19:11 | |
jatsrt | do an ls on the bucket directory when it's done | 19:11 |
*** cp16net_ has quit IRC | 19:11 | |
adambergstein | ok | 19:11 |
jatsrt | wondering if it was just nova not being able to read the file contents | 19:12 |
jatsrt | shouldn't be though | 19:12 |
jatsrt | also as soon as the publish finishes | 19:12 |
adambergstein | http://paste.openstack.org/show/2063/ | 19:12 |
*** MarcMorata has joined #openstack | 19:12 | |
jatsrt | run euca-describe-images | 19:12 |
adambergstein | http://paste.openstack.org/show/2064/ | 19:13 |
jatsrt | I'm surprised they are all owned by root | 19:13 |
jatsrt | ok, ls the glance images dir | 19:13 |
jatsrt | look for 12 and 13 | 19:13 |
jatsrt | should be > 0 | 19:13 |
jatsrt | sorry 11 and 12 | 19:13 |
adambergstein | http://paste.openstack.org/show/2065/ | 19:14 |
adambergstein | not greater than 0 | 19:14 |
jatsrt | still 0 | 19:14 |
adambergstein | :( | 19:14 |
*** cp16net has joined #openstack | 19:14 | |
adambergstein | yes | 19:14 |
jatsrt | might just be a permissions issue, not too sure why your buckets are owned by root | 19:14 |
jatsrt | though it shouldn't make a difference | 19:15 |
jatsrt | nothing in the glance log? | 19:15 |
adambergstein | is there any specific configuration needed for glance? | 19:15 |
adambergstein | is it in /var/log/glance? | 19:15 |
jatsrt | so what happens is glance reads the bucket and assembles the image in the /var/lib/glance/images | 19:15 |
adambergstein | api or registry? | 19:15 |
jatsrt | that is what gets copied to the host for KVM to manipulate | 19:15 |
adambergstein | oh ok | 19:15 |
jatsrt | not too sure | 19:15 |
adambergstein | ill check both | 19:16 |
jatsrt | should look something like: | 19:16 |
jatsrt | http://paste.openstack.org/show/2067/ | 19:16 |
*** bcwaldon has joined #openstack | 19:17 | |
adambergstein | hmmm | 19:17 |
adambergstein | weird. | 19:17 |
jatsrt | you aren't out of FS space, right? | 19:17 |
adambergstein | glance api log | 19:17 |
adambergstein | http://paste.openstack.org/show/2068/ | 19:17 |
adambergstein | no i have tons | 19:17 |
adambergstein | prob over 200 gigs | 19:17 |
jatsrt | so, maybe just to be clean.... | 19:19 |
jatsrt | apt-get purge glance | 19:19 |
jatsrt | rm -rf /var/lib/glance | 19:19 |
jatsrt | rm -rf /var/log/glance | 19:19 |
jatsrt | apt-get install glance | 19:19 |
adambergstein | ok, done | 19:20 |
jatsrt | did you change any /etc/glance/ files | 19:20 |
adambergstein | no | 19:20 |
jatsrt | hmmm, should work | 19:20 |
adambergstein | ok, want me to try again? | 19:21 |
adambergstein | since i redid glance? | 19:21 |
jatsrt | yeah, register the tar ball again | 19:21 |
jatsrt | see what you get | 19:21 |
*** adiantum has quit IRC | 19:21 | |
jatsrt | tail /var/log/nova/nova-objectstore.log too | 19:21 |
adambergstein | ok | 19:21 |
adambergstein | its running | 19:21 |
adambergstein | http://paste.openstack.org/show/2070/ | 19:24 |
adambergstein | want me to check glance images? | 19:24 |
jatsrt | yep | 19:24 |
*** adjohn has joined #openstack | 19:24 | |
adambergstein | still 0 | 19:24 |
adambergstein | :( | 19:24 |
jatsrt | well crap | 19:24 |
*** mablebed__ has joined #openstack | 19:24 | |
adambergstein | i concur | 19:24 |
jatsrt | no other errors anywhere? | 19:25 |
adambergstein | no other configuration needed for glance/compute | 19:25 |
jatsrt | sudo restart nova-objectstore | 19:25 |
jatsrt | ? | 19:25 |
jatsrt | No I'm reaching | 19:25 |
adambergstein | compute | 19:26 |
adambergstein | http://paste.openstack.org/show/2072/ | 19:26 |
*** Ephur has joined #openstack | 19:26 | |
jatsrt | are all of your nova services running | 19:26 |
jatsrt | you can manually restart them, or just reboot | 19:27 |
jatsrt | looks temporary though | 19:27 |
adambergstein | http://paste.openstack.org/show/2074/ | 19:27 |
adambergstein | want me to uec-publish again? | 19:27 |
jatsrt | not sure it will help, but yes | 19:27 |
jatsrt | not too sure what would keep glance from getting the data from the bucket | 19:28 |
adambergstein | failed to upload because it was there | 19:28 |
adambergstein | let me euca-rereg | 19:28 |
*** maplebed_ has quit IRC | 19:28 | |
adambergstein | and ill try again | 19:28 |
jatsrt | I think I had this and it was a permissions issue, but I had an error | 19:28 |
adambergstein | want me to check objectstore log? | 19:28 |
*** argonius_afk has quit IRC | 19:28 | |
jatsrt | look for any clue you can | 19:29 |
*** mablebed__ is now known as maplebed | 19:29 | |
*** daysmen has joined #openstack | 19:30 | |
adambergstein | I'm trying uec-publish again | 19:30 |
adambergstein | and ill dig around the logs | 19:30 |
adambergstein | images dir still 0 | 19:30 |
jatsrt | got me, I'm stumped | 19:30 |
adambergstein | nothing in object store | 19:31 |
*** dendrobates is now known as dendro-afk | 19:31 | |
*** nphase_ has joined #openstack | 19:31 | |
*** nphase_ has joined #openstack | 19:31 | |
adambergstein | network log looks clean | 19:31 |
*** nphase has quit IRC | 19:32 | |
*** nphase_ is now known as nphase | 19:32 | |
adambergstein | hmm i dont know | 19:32 |
adambergstein | jatsrt: i can blow this away and start over, but i'd like to make sure i build it right | 19:32 |
adambergstein | would you be willing to help me through it? | 19:33 |
jatsrt | well, my day is winding down soon, but yes, and there are others here | 19:35 |
jatsrt | another option is to try to clear pieces out | 19:35 |
jatsrt | openstack is pretty good with not leaving garbage behind | 19:36 |
jatsrt | so maybe purge out nova-objectstore | 19:36 |
jatsrt | rm the dirs | 19:36 |
*** Ephur has quit IRC | 19:36 | |
jatsrt | then reinstall and try again | 19:36 |
*** Ephur has joined #openstack | 19:39 | |
adambergstein | ok | 19:39 |
adambergstein | i will try that | 19:40 |
adambergstein | thanks again :) | 19:40 |
*** mitchless has joined #openstack | 19:42 | |
*** Ephur has quit IRC | 19:43 | |
*** alfred_mancx has quit IRC | 19:44 | |
*** daysmen has quit IRC | 19:45 | |
*** hggdh has quit IRC | 19:49 | |
*** hggdh_ has joined #openstack | 19:49 | |
*** dendro-afk is now known as dendrobates | 19:51 | |
*** hggdh_ is now known as hggdh | 19:52 | |
*** daysmen has joined #openstack | 19:53 | |
*** imsplitbit has quit IRC | 19:55 | |
*** bcwaldon has quit IRC | 19:55 | |
*** brd_from_italy has quit IRC | 19:56 | |
*** mgius has quit IRC | 19:56 | |
*** bcwaldon has joined #openstack | 19:56 | |
*** imsplitbit has joined #openstack | 19:57 | |
*** daysmen has quit IRC | 19:57 | |
*** pguth66 has quit IRC | 19:57 | |
*** rnirmal has joined #openstack | 19:58 | |
*** cp16net_ has joined #openstack | 20:00 | |
*** bsza has quit IRC | 20:01 | |
*** HowardRoark has quit IRC | 20:01 | |
*** cp16net has quit IRC | 20:04 | |
*** cp16net_ is now known as cp16net | 20:04 | |
*** HowardRoark has joined #openstack | 20:05 | |
*** ejat has joined #openstack | 20:15 | |
*** jatsrt has left #openstack | 20:15 | |
*** cereal_bars has quit IRC | 20:18 | |
*** stewart has quit IRC | 20:18 | |
*** imsplitbit has quit IRC | 20:19 | |
*** imsplitbit has joined #openstack | 20:20 | |
*** nphase has quit IRC | 20:20 | |
*** nphase has joined #openstack | 20:21 | |
*** nphase has joined #openstack | 20:21 | |
*** adjohn has quit IRC | 20:23 | |
*** jedi4ever has quit IRC | 20:27 | |
*** mdomsch has joined #openstack | 20:27 | |
*** stewart has joined #openstack | 20:30 | |
*** ctennis has quit IRC | 20:32 | |
*** mgoldmann has joined #openstack | 20:39 | |
*** HowardRoark has quit IRC | 20:43 | |
*** npmapn has quit IRC | 20:44 | |
*** HowardRoark has joined #openstack | 20:46 | |
*** shaon has joined #openstack | 20:47 | |
*** dgags has quit IRC | 20:47 | |
shaon | I am trying to register an image, but it gets stuck every time when untarring the test.img.manifest.xml | 20:49 |
*** clauden has quit IRC | 20:50 | |
*** GeoDud has quit IRC | 20:51 | |
*** GeoDud has joined #openstack | 20:54 | |
*** ejat has quit IRC | 20:56 | |
*** cp16net has quit IRC | 21:04 | |
*** adjohn has joined #openstack | 21:05 | |
*** ctennis has joined #openstack | 21:08 | |
*** dolphm has quit IRC | 21:08 | |
*** mgoldmann has quit IRC | 21:09 | |
*** rnirmal has quit IRC | 21:09 | |
*** deshantm_laptop has quit IRC | 21:09 | |
*** dendrobates is now known as dendro-afk | 21:10 | |
*** daysmen has joined #openstack | 21:13 | |
*** ejat has joined #openstack | 21:14 | |
*** marrusl has quit IRC | 21:16 | |
*** javiF has joined #openstack | 21:16 | |
*** adjohn has quit IRC | 21:17 | |
*** bcwaldon has quit IRC | 21:18 | |
*** adjohn has joined #openstack | 21:24 | |
*** liemmn has joined #openstack | 21:24 | |
*** KnuckleSangwich has quit IRC | 21:24 | |
*** stewart has quit IRC | 21:26 | |
*** stewart has joined #openstack | 21:27 | |
*** _adjohn has joined #openstack | 21:29 | |
*** adjohn has quit IRC | 21:29 | |
*** _adjohn is now known as adjohn | 21:29 | |
*** daysmen has quit IRC | 21:30 | |
*** daysmen has joined #openstack | 21:32 | |
*** MarcMorata has quit IRC | 21:33 | |
*** imsplitbit has quit IRC | 21:34 | |
*** sloop has joined #openstack | 21:45 | |
*** jj0hns0n has joined #openstack | 21:47 | |
*** jtanner has quit IRC | 21:48 | |
*** jj0hns0n_ has joined #openstack | 21:49 | |
*** jj0hns0n has quit IRC | 21:49 | |
*** jj0hns0n_ is now known as jj0hns0n | 21:49 | |
*** pothos_ has joined #openstack | 21:50 | |
*** pothos has quit IRC | 21:52 | |
*** pothos_ is now known as pothos | 21:53 | |
argv0 | whats the best single-machine install script for ubuntu 11 ? | 21:53 |
argv0 | (for nova) | 21:54 |
*** msinhore has quit IRC | 21:59 | |
*** msinhore1 has joined #openstack | 21:59 | |
*** mrrk has quit IRC | 22:03 | |
*** jfluhmann__ has joined #openstack | 22:05 | |
*** LiamMac has quit IRC | 22:05 | |
*** markvoelker has quit IRC | 22:07 | |
*** daysmen has quit IRC | 22:07 | |
*** jfluhmann_ has quit IRC | 22:07 | |
*** adjohn has quit IRC | 22:12 | |
*** mdomsch has quit IRC | 22:14 | |
*** ldlework has quit IRC | 22:14 | |
*** adjohn has joined #openstack | 22:17 | |
*** kbringard has quit IRC | 22:20 | |
*** jj0hns0n has quit IRC | 22:21 | |
*** msivanes1 has quit IRC | 22:22 | |
*** ncode has quit IRC | 22:23 | |
heckj | argv0: check out the cloudbuilders or nebula's "auto.sh" script - | 22:23 |
heckj | argv0: https://github.com/4P/deployscripts or https://github.com/cloudbuilders/deploy.sh | 22:23 |
argv0 | thanks! | 22:24 |
*** HowardRoark has quit IRC | 22:26 | |
*** MarkAtwood has quit IRC | 22:29 | |
*** cole has quit IRC | 22:30 | |
*** shaon has quit IRC | 22:31 | |
*** Shentonfreude has quit IRC | 22:35 | |
*** miclorb_ has joined #openstack | 22:36 | |
*** mitchless has quit IRC | 22:38 | |
*** jkoelker has quit IRC | 22:38 | |
*** mattray has quit IRC | 22:41 | |
*** aliguori has quit IRC | 22:44 | |
*** netmarkjp has left #openstack | 22:51 | |
*** amccabe has quit IRC | 23:03 | |
*** FallenPegasus has joined #openstack | 23:07 | |
*** FallenPegasus is now known as MarkAtwood | 23:08 | |
*** mfer has quit IRC | 23:09 | |
*** ckmason has joined #openstack | 23:09 | |
*** mattray has joined #openstack | 23:10 | |
*** mattray has quit IRC | 23:10 | |
*** HowardRoark has joined #openstack | 23:10 | |
*** javiF has quit IRC | 23:12 | |
*** huslage has joined #openstack | 23:13 | |
*** MarkAtwood has quit IRC | 23:13 | |
*** FallenPegasus has joined #openstack | 23:13 | |
*** ckmason has quit IRC | 23:17 | |
*** joearnold has quit IRC | 23:19 | |
*** nphase has quit IRC | 23:21 | |
*** littleidea has quit IRC | 23:21 | |
*** nphase has joined #openstack | 23:21 | |
*** nphase has joined #openstack | 23:21 | |
*** RickB17 has quit IRC | 23:22 | |
*** FallenPegasus has quit IRC | 23:23 | |
*** CatKiller has quit IRC | 23:26 | |
*** CatKiller has joined #openstack | 23:26 | |
*** ncode has joined #openstack | 23:28 | |
*** RickB17 has joined #openstack | 23:33 | |
*** RobertLaptop has quit IRC | 23:35 | |
*** RobertLaptop has joined #openstack | 23:35 | |
*** marrusl has joined #openstack | 23:36 | |
*** erichagedorn has joined #openstack | 23:39 | |
*** FallenPegasus has joined #openstack | 23:40 | |
*** jeffjapan has joined #openstack | 23:44 | |
*** martin has quit IRC | 23:45 | |
*** ejat has quit IRC | 23:47 | |
*** martin has joined #openstack | 23:51 | |
*** mfer has joined #openstack | 23:52 | |
*** mrrk has joined #openstack | 23:55 | |
*** adjohn has quit IRC | 23:59 |