*** sdake_ has joined #openstack-containers | 00:00 | |
*** adrian_otto has quit IRC | 00:00 | |
*** yuanying has quit IRC | 00:03 | |
*** Marga_ has joined #openstack-containers | 00:04 | |
*** yuanying has joined #openstack-containers | 00:05 | |
*** madhuri has joined #openstack-containers | 00:05 | |
*** yuanying_ has joined #openstack-containers | 00:10 | |
*** fredlhsu has quit IRC | 00:10 | |
*** yuanying has quit IRC | 00:10 | |
*** sdake__ has joined #openstack-containers | 00:10 | |
*** sdake_ has quit IRC | 00:14 | |
*** thomasem has quit IRC | 00:23 | |
*** Marga_ has quit IRC | 00:23 | |
*** Marga_ has joined #openstack-containers | 00:24 | |
*** thomasem has joined #openstack-containers | 00:28 | |
*** achanda has quit IRC | 00:30 | |
*** Tango has quit IRC | 00:43 | |
*** yuanying-alt has joined #openstack-containers | 00:44 | |
*** yuanying-alt has quit IRC | 00:49 | |
*** yuanying_ has quit IRC | 00:59 | |
*** yuanying has joined #openstack-containers | 01:01 | |
*** suro_ has quit IRC | 01:14 | |
*** vilobhmm1 has quit IRC | 01:20 | |
*** yuanying-alt has joined #openstack-containers | 01:45 | |
*** yuanying-alt has quit IRC | 01:49 | |
*** suro_ has joined #openstack-containers | 02:07 | |
*** kebray has joined #openstack-containers | 02:07 | |
*** unicell has quit IRC | 02:12 | |
*** erkules_ has joined #openstack-containers | 02:30 | |
*** jay-lau-513 has joined #openstack-containers | 02:31 | |
*** coolsvap has joined #openstack-containers | 02:33 | |
*** erkules has quit IRC | 02:33 | |
*** suro_ has quit IRC | 02:33 | |
*** coolsvap has quit IRC | 02:45 | |
*** coolsvap has joined #openstack-containers | 02:46 | |
*** coolsvap has quit IRC | 02:46 | |
*** coolsvap has joined #openstack-containers | 02:46 | |
*** yuanying has quit IRC | 02:46 | |
*** achanda has joined #openstack-containers | 02:47 | |
*** madhuri has quit IRC | 02:50 | |
*** unicell has joined #openstack-containers | 02:51 | |
*** jay-lau-513 has quit IRC | 02:53 | |
*** jay-lau-513 has joined #openstack-containers | 02:53 | |
*** yuanying has joined #openstack-containers | 03:01 | |
*** yuanying has quit IRC | 03:02 | |
*** daneyon has joined #openstack-containers | 03:03 | |
*** yuanying has joined #openstack-containers | 03:06 | |
*** sdake__ has quit IRC | 03:23 | |
*** dims_ has quit IRC | 03:32 | |
*** yuanying-alt has joined #openstack-containers | 03:33 | |
*** yuanying has quit IRC | 03:38 | |
*** yuanying-alt has quit IRC | 03:38 | |
*** diga has quit IRC | 03:38 | |
*** diga has joined #openstack-containers | 03:38 | |
*** achanda has quit IRC | 03:40 | |
*** madhuri has joined #openstack-containers | 03:40 | |
*** yuanying has joined #openstack-containers | 03:41 | |
*** suro_ has joined #openstack-containers | 03:41 | |
*** sdake_ has joined #openstack-containers | 03:47 | |
*** dboik_ has quit IRC | 03:51 | |
*** adrian_otto has joined #openstack-containers | 03:52 | |
*** vilobhmm has joined #openstack-containers | 04:00 | |
*** achanda has joined #openstack-containers | 04:06 | |
*** adrian_otto has quit IRC | 04:25 | |
*** adrian_otto has joined #openstack-containers | 04:29 | |
*** jay-lau-513 has quit IRC | 04:32 | |
*** dims has joined #openstack-containers | 04:32 | |
*** adrian_otto has quit IRC | 04:33 | |
*** jay-lau-513 has joined #openstack-containers | 04:34 | |
*** dims has quit IRC | 04:38 | |
*** dims has joined #openstack-containers | 04:40 | |
*** dims has quit IRC | 04:45 | |
*** suro_ has quit IRC | 04:47 | |
*** yuanying-alt has joined #openstack-containers | 04:49 | |
*** yuanying-alt has quit IRC | 04:54 | |
*** kebray has quit IRC | 05:17 | |
*** unicell1 has joined #openstack-containers | 05:18 | |
*** unicell has quit IRC | 05:18 | |
*** juggler has quit IRC | 05:19 | |
*** adrian_otto has joined #openstack-containers | 05:19 | |
*** juggler_ has quit IRC | 05:19 | |
*** kebray has joined #openstack-containers | 05:28 | |
*** adrian_otto has quit IRC | 05:35 | |
*** adrian_otto has joined #openstack-containers | 05:36 | |
*** adrian_otto has quit IRC | 05:45 | |
*** achanda has quit IRC | 05:54 | |
*** achanda has joined #openstack-containers | 05:55 | |
*** harlowja_ is now known as harlowja_away | 05:57 | |
*** suro_ has joined #openstack-containers | 06:00 | |
*** suro_ has quit IRC | 06:01 | |
openstackgerrit | Madhuri Kumari proposed stackforge/magnum: Make resource creation return 400 with empty manifest https://review.openstack.org/162878 | 06:06 |
*** juggler has joined #openstack-containers | 06:08 | |
*** achanda has quit IRC | 06:11 | |
*** achanda has joined #openstack-containers | 06:11 | |
*** dims has joined #openstack-containers | 06:21 | |
*** vilobhmm1 has joined #openstack-containers | 06:22 | |
*** vilobhmm has quit IRC | 06:23 | |
*** dims has quit IRC | 06:26 | |
*** achanda has quit IRC | 06:28 | |
*** suro_ has joined #openstack-containers | 06:30 | |
*** yuanying-alt has joined #openstack-containers | 06:38 | |
*** yuanying-alt has quit IRC | 06:43 | |
*** coolsvap is now known as coolsvap|afk | 06:47 | |
*** vilobhmm1 has quit IRC | 06:53 | |
*** vilobhmm has joined #openstack-containers | 06:54 | |
*** coolsvap|afk is now known as coolsvap | 06:58 | |
*** suro_ has quit IRC | 06:59 | |
Kennan_ | :yuanying-alt ping | 07:02 |
yuanying | pong | 07:02 |
Kennan_ | :yuanying | 07:07 |
yuanying | I'm here | 07:07 |
Kennan_ | have you set up devstack in a VM and then tried to install magnum? | 07:07 |
Kennan_ | I hit a stack-creation failure in that case | 07:08 |
Kennan_ | so I am wondering if you use a vm for devstack or a physical machine | 07:08 |
yuanying | last week, I succeeded | 07:08 |
yuanying | I use devstack on vm | 07:08 |
yuanying | does your vm on vm connect to public internet? | 07:09 |
Kennan_ | could you share your vm configuration? like hypervisor (virtualbox or ), cpu, memory, network? | 07:10 |
yuanying | I use parallels | 07:10 |
Kennan_ | I want to make sure if it is ok for my vm | 07:10 |
yuanying | assign 8GB memory | 07:10 |
Kennan_ | my devstack is OK, since I checked network(external and internal flow) | 07:10 |
yuanying | virtualbox doesn't support nested virtualization | 07:10 |
yuanying | so it may cause fail of stack-creation | 07:11 |
Kennan_ | but I booted an instance in devstack (virtualbox vm), and it was OK | 07:11 |
Kennan_ | it used qemu | 07:11 |
Kennan_ | I think it can work | 07:11 |
Kennan_ | I just don't understand the heat k8s template | 07:12 |
yuanying | qemu vm is very slow | 07:12 |
Kennan_ | it is the most difficult part of making the env work | 07:12 |
yuanying | hmm | 07:13 |
yuanying | what error is raised? | 07:13 |
yuanying | I think the heat k8s template will work with virtualbox if there is no problem with qemu. | 07:14 |
yuanying | there is no specific configuration of virtualization type. | 07:15 |
Kennan_ | I followed the dev guide with the minion count set to 1, and I checked | 07:16 |
Kennan_ | nova have two instances up | 07:16 |
Kennan_ | cinder have one volume attach | 07:16 |
Kennan_ | so it means some work | 07:16 |
Kennan_ | but at last the stack-create failed | 07:16 |
Kennan_ | did you know what else did the heat do for k8s ? | 07:16 |
yuanying | did you use `heat resource-list` and `heat resource-show` ? | 07:17 |
Kennan_ | did the 2 instances (k8s master and minion) connect to internet ? | 07:17 |
Kennan_ | did it download k8s code from github ? | 07:17 |
yuanying | no | 07:18 |
yuanying | sorry network is not related | 07:19 |
yuanying | please check resource-list and login to vm and check /var/log/cloud-config.log | 07:19 |
Kennan_ | thanks, heat resource-list needs parameters. I am not familiar with those, do you know them? | 07:20 |
yuanying | heat resource-list <NAME of stack> --n 3 | 07:22 |
yuanying | heat resource-list <NAME of stack> -n 3 | 07:22 |
yuanying | and check failing resource | 07:22 |
Kennan_ | (heat resource-list output) master_wait_condition | AWS::CloudFormation::WaitCondition | CREATE_FAILED | 2015-03-09T10:00:02Z (the other resources are CREATE_COMPLETE, e.g. 2015-03-09T09:57:13Z) | 07:23 |
yuanying | hmm | 07:24 |
yuanying | it seems that k8s cluster is created | 07:24 |
yuanying | master_wait_condition is a notification from VM to heat api service | 07:25 |
yuanying | can VM access to heat api endpoint? | 07:25 |
yuanying | or please `heat resource-show <NAME> master_wait_condition` | 07:25 |
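The triage flow yuanying describes above can be sketched in shell. `testbay` is an illustrative stack name, and the heat CLI invocations (commented, since they need a live devstack) use the syntax of that CLI's era; the live part is just a small filter over the resource-list table:

```shell
# Heat triage flow (commented: requires a running devstack):
#   heat resource-list testbay -n 3            # -n 3 recurses into nested stacks
#   heat resource-show testbay master_wait_condition
#
# A small filter to pull only the failed resource names out of the
# resource-list table (splits on '|'; field 2 is the resource name):
failed_resources() {
    grep 'CREATE_FAILED' | awk -F'|' '{ gsub(/^ +| +$/, "", $2); print $2 }'
}
```

Piping a saved `heat resource-list` table through `failed_resources` narrows the stack down to the resource whose `resource_status_reason` is worth inspecting with `resource-show`.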
*** nshaikh has joined #openstack-containers | 07:26 | |
Kennan_ | (heat resource-show output) links: http://10.10.10.96:8004/v1/b94bafdf9d3e49f687024bd25674e8ea/stacks/testbay-kvfwud4nbyu7/865d61af-5d0e-4463-8be8-8fa12999610c/resources/master_wait_condition (self) | 07:26 |
Kennan_ | resource_name: master_wait_condition | resource_status: CREATE_FAILED | resource_status_reason: CREATE abo | 07:27 |
Kennan_ | sorry, bad format | 07:27 |
yuanying | is there any logs on vm? | 07:30 |
Kennan_ | you mean console-log for nova? | 07:32 |
yuanying | no k8s master or minion vm | 07:32 |
Kennan_ | sorry, what's the location of those logs? in the VM? | 07:32 |
yuanying | maybe /var/log/cloud-init.log | 07:34 |
*** openstackgerrit has quit IRC | 07:35 | |
yuanying | these software configs are applied by cloud-init | 07:36 |
*** openstackgerrit has joined #openstack-containers | 07:36 | |
Kennan_ | OK. Thanks :yuanying, let me check | 07:38 |
*** yuanying-alt has joined #openstack-containers | 07:39 | |
*** yuanying-alt has quit IRC | 07:43 | |
*** kebray has quit IRC | 07:48 | |
*** coolsvap is now known as coolsvap|afk | 07:53 | |
*** coolsvap|afk is now known as coolsvap | 08:00 | |
*** jay-lau-513 has quit IRC | 08:07 | |
*** jay-lau-513 has joined #openstack-containers | 08:08 | |
*** erkules_ is now known as erkules | 08:09 | |
*** erkules has joined #openstack-containers | 08:09 | |
*** oro_ has joined #openstack-containers | 08:37 | |
*** oro has joined #openstack-containers | 08:38 | |
*** jay-lau-513 has quit IRC | 08:51 | |
*** jay-lau-513 has joined #openstack-containers | 08:52 | |
*** dims has joined #openstack-containers | 09:11 | |
*** dims has quit IRC | 09:18 | |
*** nshaikh has quit IRC | 09:20 | |
*** yuanying-alt has joined #openstack-containers | 09:24 | |
*** yuanying-alt has quit IRC | 09:29 | |
*** yuanying has quit IRC | 09:39 | |
*** Kennan_ has quit IRC | 09:40 | |
*** jay-lau-513 has quit IRC | 09:46 | |
openstackgerrit | Digambar proposed stackforge/python-magnumclient: Allow specification of ssh_authorized_key. https://review.openstack.org/159909 | 09:46 |
*** Kennan has joined #openstack-containers | 09:47 | |
*** vilobhmm has quit IRC | 09:48 | |
*** dims has joined #openstack-containers | 09:56 | |
*** sdake_ has quit IRC | 10:09 | |
*** sdake_ has joined #openstack-containers | 10:14 | |
*** oro has quit IRC | 10:23 | |
*** oro_ has quit IRC | 10:24 | |
*** coolsvap is now known as coolsvap|afk | 10:33 | |
*** sdake_ has quit IRC | 10:51 | |
*** yuanying-alt has joined #openstack-containers | 11:10 | |
*** madhuri has quit IRC | 11:13 | |
*** yuanying-alt has quit IRC | 11:14 | |
*** coolsvap|afk is now known as coolsvap | 11:54 | |
*** EricGonczer_ has joined #openstack-containers | 11:54 | |
*** oro_ has joined #openstack-containers | 12:09 | |
*** yuanying-alt has joined #openstack-containers | 12:11 | |
*** yuanying-alt has quit IRC | 12:15 | |
*** thomasem has quit IRC | 12:18 | |
*** thomasem has joined #openstack-containers | 12:18 | |
*** dims has quit IRC | 12:25 | |
*** dims has joined #openstack-containers | 12:25 | |
*** oro has joined #openstack-containers | 12:39 | |
*** EricGonczer_ has quit IRC | 12:54 | |
*** yuanying-alt has joined #openstack-containers | 12:54 | |
*** oro has quit IRC | 12:55 | |
*** oro_ has quit IRC | 12:55 | |
*** dboik has joined #openstack-containers | 12:59 | |
*** oro_ has joined #openstack-containers | 13:02 | |
*** oro has joined #openstack-containers | 13:02 | |
*** zul has quit IRC | 13:04 | |
*** coolsvap is now known as coolsvap|afk | 13:06 | |
*** zul has joined #openstack-containers | 13:09 | |
*** yuanying-alt has quit IRC | 13:13 | |
*** yuanying-alt has joined #openstack-containers | 13:13 | |
*** yuanying-alt has quit IRC | 13:18 | |
dims | some container discussion at the ops meetup - https://etherpad.openstack.org/p/PHL-ops-security | 13:38 |
*** sdake_ has joined #openstack-containers | 13:49 | |
*** oro_ has quit IRC | 13:52 | |
*** oro_ has joined #openstack-containers | 13:53 | |
*** kaufer has joined #openstack-containers | 13:54 | |
*** nshaikh has joined #openstack-containers | 14:03 | |
*** yuanying-alt has joined #openstack-containers | 14:13 | |
*** yuanying-alt has quit IRC | 14:18 | |
*** prad has joined #openstack-containers | 14:23 | |
*** achanda has joined #openstack-containers | 14:34 | |
*** adrian_otto has joined #openstack-containers | 14:34 | |
*** hongbin has joined #openstack-containers | 14:35 | |
*** adrian_otto has quit IRC | 14:36 | |
*** kebray has joined #openstack-containers | 14:39 | |
*** achanda has quit IRC | 14:43 | |
*** nshaikh has quit IRC | 14:44 | |
*** dims has quit IRC | 14:58 | |
*** dimsum__ has joined #openstack-containers | 15:01 | |
*** fredlhsu has joined #openstack-containers | 15:01 | |
*** kebray has quit IRC | 15:03 | |
*** achanda has joined #openstack-containers | 15:03 | |
*** achanda has quit IRC | 15:05 | |
*** vilobhmm has joined #openstack-containers | 15:08 | |
*** suro_ has joined #openstack-containers | 15:09 | |
*** vilobhmm has left #openstack-containers | 15:12 | |
*** unicell has joined #openstack-containers | 15:17 | |
*** unicell1 has quit IRC | 15:18 | |
*** hongbin has quit IRC | 15:27 | |
*** diga_ has joined #openstack-containers | 15:29 | |
*** diga_ has joined #openstack-containers | 15:30 | |
*** sdake__ has joined #openstack-containers | 15:30 | |
*** sdake_ has quit IRC | 15:34 | |
*** kebray has joined #openstack-containers | 15:36 | |
mfalatic | How much disk space does Magnum require to run on devstack for the purpose of the devstack-based example on stackforge/magnum? | 15:41 |
mfalatic | (I'm getting a CREATE_FAILED from Heat after waiting for bay-create to complete. The reason for the error is unclear. Latest Kilo code.) | 15:44 |
*** hongbin has joined #openstack-containers | 15:45 | |
*** adrian_otto has joined #openstack-containers | 15:48 | |
apmelton | mfalatic: is this the first bay you've created on your devstack? | 15:53 |
*** unicell has quit IRC | 15:53 | |
mfalatic | yes | 15:53 |
*** unicell has joined #openstack-containers | 15:53 | |
mfalatic | Ran through the example freshly: new and updated 14.04 install + devstack + all the magnum and kubernetes bits. | 15:53 |
apmelton | mfalatic: by default the heat template is going to try to create multiple 25 gig volumes in cinder, and cinder only has 10 gigs total to give out | 15:54 |
apmelton | you'll need to shrink the docker volume size | 15:54 |
mfalatic | Ah.... my devstack VM is around 20 or 32 GB. | 15:54 |
*** yuanying-alt has joined #openstack-containers | 15:54 | |
*** sdake__ has quit IRC | 15:54 | |
*** sdake_ has joined #openstack-containers | 15:55 | |
apmelton | mfalatic: let me find the proper arg to override that in the baymodel | 15:55 |
mfalatic | By default, how many volumes will it try to create? | 15:55 |
apmelton | we usually set it to 5 | 15:55 |
apmelton | mfalatic: one for each kubernetes minion | 15:55 |
mfalatic | == the node-count? | 15:56 |
apmelton | mfalatic: can you do the baymodel-create call again and provide "--docker-volume-size 5" | 15:56 |
apmelton | mfalatic: yes | 15:56 |
*** suro_ has quit IRC | 15:56 | |
mfalatic | Good, just making sure. | 15:56 |
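The arithmetic behind apmelton's advice can be sketched in shell. The 25 GB default volume size and ~10 GB cinder backing store come from the conversation above; the quota value is an assumption about a default devstack, and the `magnum baymodel-create` flag is shown only in a comment:

```shell
# Each kubernetes minion gets its own cinder docker volume. Against
# devstack's ~10 GB cinder backing store, the 25 GB default can never
# be satisfied, while 5 GB per node fits.
fits_quota() {  # usage: fits_quota <volume_gb> <node_count> <quota_gb>
    [ $(( $1 * $2 )) -le "$3" ] && echo yes || echo no
}

fits_quota 25 1 10   # default docker volume size: no
fits_quota 5 1 10    # shrunk size: yes

# The workaround, as suggested (other baymodel-create flags elided):
#   magnum baymodel-create ... --docker-volume-size 5
```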
*** yuanying-alt has quit IRC | 15:59 | |
*** fredlhsu has quit IRC | 16:02 | |
mfalatic | It's in progress | 16:03 |
mfalatic | apmelton: I am taking notes and will look into submitting changes to the docs to clarify this (I'm probably not the first one to run into this).. | 16:04 |
apmelton | mfalatic: sounds great | 16:04 |
mfalatic | apmelton: Hmm it failed again (more quickly though thanks to the smaller space) | 16:05 |
*** unicell has quit IRC | 16:05 | |
apmelton | hmmmm | 16:05 |
apmelton | mfalatic: can you do a heat resource-list | 16:06 |
apmelton | and look for which resource failed | 16:06 |
mfalatic | apmelton: arg for resource-list? Nothing I'm trying is working. | 16:08 |
apmelton | mfalatic: you may have to grab the stack id from 'heat stack-list' | 16:08 |
mfalatic | wait got it | 16:08 |
mfalatic | kube_minions failed | 16:09 |
mfalatic | the rest are CREATE_COMPLETE and a few are INIT_COMPLETE after the fail (in the big table of output) | 16:09 |
mfalatic | apmelton: I'm using the fedora-21-atomic image as described in the doc, if that helps. | 16:10 |
mfalatic | apmelton: (I aim to debug this sort of problem myself, but first I need to get through the example) | 16:12 |
apmelton | mfalatic: heat debugging is definitely not my strong suit, so the only thing showing as failed/errored is kube_minions? | 16:12 |
apmelton | what's the resource_type by that | 16:13 |
mfalatic | Yep. I'm just not sure where to look for more info. OS::Heat::ResourceGroup | 16:13 |
apmelton | mfalatic: you didn't touch iptables at all did you? | 16:15 |
mfalatic | I did, yes. Can't route in or out of devstack without that. | 16:15 |
apmelton | alright, that's good | 16:15 |
mfalatic | But nothing dramatic - basic things I've always done to ensure devstack works properly, nothing fancy. | 16:16 |
apmelton | did you set up NAT with something like sudo iptables -t nat -A POSTROUTING -o bond0.101 -j MASQUERADE | 16:16 |
mfalatic | The error in heat engine is "Went to status due to "Unknown"" | 16:16 |
mfalatic | hang on, let me get the exact thing I did. | 16:17 |
mfalatic | The extra iptables bit is just sudo iptables -t nat -A POSTROUTING -o p2p1 -j MASQUERADE | 16:18 |
mfalatic | wait | 16:18 |
apmelton | and p2p1 is your public interface? | 16:18 |
mfalatic | sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE | 16:18 |
apmelton | alright | 16:19 |
mfalatic | and yes, eth0 is what routes out. eth1 = management (on a local host-only adapter) | 16:19 |
mfalatic | Otherwise it'd be a mess of port redirections. | 16:19 |
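The NAT setup being compared here can be sketched as a rule parameterized by the host's public interface (eth0 and p2p1 for mfalatic, bond0.101 for apmelton). Applying it needs root, so this sketch only builds the command string:

```shell
# Build the POSTROUTING masquerade rule that lets traffic from
# devstack's instances NAT out through the host's external NIC.
nat_rule() {
    echo "iptables -t nat -A POSTROUTING -o $1 -j MASQUERADE"
}

nat_rule eth0
# apply as root:  sudo $(nat_rule eth0)
```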
apmelton | mfalatic: can you do a nova list and a cinder list to see if theres anything in errored | 16:20 |
mfalatic | But if that's a big problem here then I can redo without. | 16:20 |
mfalatic | let me look | 16:20 |
apmelton | mfalatic: nope, I was just making sure those steps were done | 16:20 |
mfalatic | nova list has nothing, cinder list has two minions in the error state | 16:21 |
apmelton | ah.... | 16:21 |
apmelton | in magnum have you deleted the old bay? | 16:21 |
mfalatic | Hmm. I had to delete the baymodel | 16:23 |
mfalatic | I didn't delete anything else but perhaps I ought to. I can roll back to just before I installed magnum and retry with the new args. | 16:24 |
mfalatic | I'm not sure if it'll help but I'm also not sure just how much needs to be unrolled if bay-create fails. | 16:24 |
mfalatic | Marga_: Did you run into this problem as well? | 16:25 |
apmelton | usually I just delete the bay which ends up deleting the stack in heat | 16:25 |
apmelton | which has seemed to clean up after itself alright | 16:25 |
mfalatic | let me give it a try | 16:25 |
mfalatic | So I'm not sure if it was present before but I definitely deleted the bay now. Re-running bay-create. | 16:27 |
mfalatic | A little concerned that this may be really sensitive to a particular network config. | 16:28 |
mfalatic | Hopefully not. | 16:28 |
mfalatic | That's new - I got a timeout waiting for a reply to a message during bay-create. | 16:29 |
*** daneyon_ has joined #openstack-containers | 16:29 | |
mfalatic | retrying. | 16:29 |
mfalatic | and again. Hmm. Will delete the model and try again from that point. | 16:29 |
apmelton | hmmm | 16:29 |
*** dboik_ has joined #openstack-containers | 16:30 | |
*** daneyon has quit IRC | 16:31 | |
mfalatic | I'm gonna roll back to the snapshot I took right before I tried to create bays | 16:32 |
apmelton | mfalatic: sounds good | 16:32 |
*** dboik has quit IRC | 16:33 | |
apmelton | mfalatic: I've gotta run and grab some lunch, I'll be back in a little bit | 16:34 |
*** unicell has joined #openstack-containers | 16:34 | |
*** suro_ has joined #openstack-containers | 16:36 | |
mfalatic | Ok, it's in progress here | 16:37 |
mfalatic | thank you! | 16:37 |
*** harlowja_away is now known as harlowja_ | 16:44 | |
mfalatic | apmelton: Well, it was in progress for a fair amount of time before failing out again. | 16:44 |
*** oro_ has quit IRC | 16:45 | |
*** oro has quit IRC | 16:45 | |
mfalatic | apmelton: We'll talk more when you return, but in the end I just wonder what the actual error is and where to find it? "Unknown" is a bit vague... | 16:45 |
*** dimsum__ has quit IRC | 16:46 | |
*** dimsum__ has joined #openstack-containers | 16:47 | |
mfalatic | Introspection helps! (Or it breaks Rabbit - not really sure which yet)... Minions seem to fail on AWS::CloudFormation::WaitCondition (which has no physical resource ID to introspect) | 16:50 |
mfalatic | Interestingly after doing that last resource-list messaging appears to break down completely. What's up with that, and how do I restart messaging? | 16:51 |
*** kebray has quit IRC | 16:56 | |
*** kebray has joined #openstack-containers | 16:57 | |
apmelton | mfalatic: so, you mention that the wait condition failed | 16:59 |
apmelton | generally that means the instance took too long to set itself up | 16:59 |
apmelton | mfalatic: so, you mentioned a snapshot earlier, is your devstack running in a VM? | 16:59 |
mfalatic | Yes | 17:00 |
apmelton | mfalatic: we've found that virt-in-virt offers too poor performance to run magnum | 17:01 |
mfalatic | The fact that I can't list resources now is weird... it's like simply listing them earlier broke messaging. | 17:01 |
apmelton | mfalatic: that's really odd | 17:01 |
mfalatic | Well, bare metal will take forever restacking and reconfiguring every time I want to roll back. | 17:01 |
mfalatic | But that said, I'm not looking for high performance anyway... and it ought to work, albeit slowly (possibly slowly; this is backed by SSD) | 17:02 |
mfalatic | Actually, things are pretty snappy despite being in a VM. | 17:03 |
mfalatic | Have people not been able to run magnum in a VM this way? | 17:03 |
apmelton | well, the problem is that the kubernetes cluster that's being spawned by magnum is being run in the instances | 17:03 |
apmelton | mfalatic: I've given it a try multiple times | 17:03 |
apmelton | about the only thing I haven't been able to try is running it on a HVM guest that exposes the underlying hardware accelerated virt | 17:04 |
apmelton | mfalatic: I think pretty much all of the devs here are running devstack directly on their workstation/laptop or a dedicated host somewhere | 17:05 |
mfalatic | Has anyone successfully run this in a VM? The alternative means having to wipe and reinstall from the OS on up every time you want to roll back. That's not optimal. | 17:05 |
mfalatic | And how do they roll back? | 17:06 |
mfalatic | How do you snapshot on bare metal? | 17:06 |
apmelton | mfalatic: I haven't been using snapshots/roll back | 17:07 |
mfalatic | (Because that would certainly make bare metal more reasonable.) | 17:07 |
apmelton | most of the time magnum or heat have cleaned up after themselves reasonably well | 17:08 |
mfalatic | Hmm. Openstack has a nasty habit of changing things that only a reinstall will fully undo. | 17:08 |
mfalatic | but ok, will try that. It'll take longer (bare metal in my environment is a PITA) | 17:09 |
mfalatic | hopefully it'll work. It'll be something else if I end up with the same errors. | 17:10 |
mfalatic | Will let you know. | 17:10 |
*** yuanying-alt has joined #openstack-containers | 17:10 | |
apmelton | mfalatic: sounds good | 17:11 |
*** hblixt has joined #openstack-containers | 17:14 | |
*** yuanying-alt has quit IRC | 17:14 | |
*** unicell has quit IRC | 17:15 | |
*** unicell has joined #openstack-containers | 17:15 | |
suro_ | mfalatic: apmelton: I am running magnum with devstack on VM - after I resolved the cinder issue, I also found the heat stack deployment stuck at "AWS::CloudFormation::WaitCondition" | 17:18 |
suro_ | I had removed that condition and wait and got it proceeding | 17:18 |
mfalatic | How did you do that? | 17:18 |
*** achanda has joined #openstack-containers | 17:19 | |
*** achanda has quit IRC | 17:19 | |
mfalatic | Because anything that avoids multiple levels of VPN redirection and obfuscation just to get to a bare metal server would be a boon today. | 17:19 |
*** achanda has joined #openstack-containers | 17:19 | |
apmelton | suro_: do you know how long it takes after the bay create for the nodes to set themselves up? | 17:20 |
suro_ | Here are the changes from my sandbox - http://paste.openstack.org/show/191353/ | 17:20 |
*** sdake_ has quit IRC | 17:21 | |
suro_ | I had removed the wait condition | 17:21 |
apmelton | since you don't have the wait condition, I believe the stack goes active basically as soon as the instances go active, which isn't necessarily when cloud init in the instances has finished | 17:21 |
suro_ | apmelton: without removing that wait it was waiting forever and not converging; I had posted the question here yesterday, and finally I resorted to removing it to get going | 17:22 |
mfalatic | Interesting. | 17:22 |
apmelton | suro_: did cloud init actually finish in each of the nodes? | 17:23 |
mfalatic | (Right now it's not even clear if I'll have any external connectivity if I attempt to log into the bare metal I have available to me - watching this with keen interest.) | 17:23 |
suro_ | apmelton: I will check that and get back | 17:24 |
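What the failing WaitCondition actually waits for, per the discussion: cloud-init inside the master VM must run the software config to completion and signal heat's CFN API. A rough check from inside the k8s master or minion VM could look like this; the log path is cloud-init's usual default, and the "finished" marker text is an assumption about its log format:

```shell
# Returns success if cloud-init's log shows it ran to completion;
# if it never finishes, the heat wait condition times out.
cloud_init_done() {
    grep -qi 'finished' "${1:-/var/log/cloud-init.log}"
}

# Usage inside the VM:
#   cloud_init_done && echo "wait condition should have been signalled"
```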
mfalatic | Actually, it might be easier to do this in an external cloud - it's openstack, not proprietary. | 17:27 |
mfalatic | apmelton: You're Rackspace, right? Any caveats to how this would work on a Rackspace Cloud instance? | 17:27 |
apmelton | mfalatic: I'm using our 'on metal' service | 17:27 |
mfalatic | Ah... ok. | 17:28 |
mfalatic | Hmm. So it probably would be the same issue if this was on RS Cloud (or Digitalocean or whatever) | 17:29 |
mfalatic | If in fact that's the issue. I can see a long delay but a forever timeout seems odd. | 17:29 |
apmelton | mfalatic: yup, I've tried a couple of different times | 17:29 |
mfalatic | Tried what? RS Cloud or a local VBox VM or...? | 17:30 |
apmelton | RS Cloud VMs | 17:30 |
mfalatic | Gotcha | 17:30 |
mfalatic | Good to know. Not sure how much OnMetal will cost if I try that. | 17:30 |
apmelton | it's pretty expensive | 17:31 |
mfalatic | Ugh. | 17:31 |
mfalatic | Yeah, I think solving this problem will be a high priority of mine then, as time permits. | 17:31 |
apmelton | mfalatic: if it's just spawn time for the bay that's slow, that might not be too bad | 17:32 |
apmelton | but since we have to interact with the kubernetes cluster inside those instances, I just can't see it performing | 17:32 |
*** coolsvap|afk is now known as coolsvap | 17:32 | |
mfalatic | I don't know where that timeout is configured. Also not clear why the messaging service is dying. | 17:32 |
apmelton | mfalatic: that kinda surprises me, I haven't had any issues with rabbit | 17:33 |
apmelton | mfalatic: how much memory are you allocating for your vm? | 17:33 |
mfalatic | Understood. Not sure how well a bunch of virtual cluster machines will perform anyway. | 17:33 |
mfalatic | 3072 MB currently. Maybe too little? | 17:33 |
apmelton | oh yea | 17:33 |
apmelton | let me see what my usage is currently | 17:33 |
apmelton | https://gist.github.com/ramielrowe/c16ebe89eed1ca818528 | 17:34 |
mfalatic | I don't think it's crashing out on memory starvation but it's possible. | 17:34 |
*** unicell has quit IRC | 17:34 | |
apmelton | perhaps 8 or so gigs | 17:34 |
*** unicell has joined #openstack-containers | 17:34 | |
mfalatic | I see 16 GB in use in what you posted. | 17:35 |
*** achanda has quit IRC | 17:35 | |
mfalatic | (or allocated at least) | 17:35 |
apmelton | but 7~ of that is cache | 17:35 |
mfalatic | ok | 17:35 |
apmelton | mfalatic: if you've got ram to spare, might start high then lower as necessary | 17:35 |
mfalatic | Well, I'll give bare metal a try. The network configs here are... weird. | 17:36 |
mfalatic | I have 16 GB to work with if I'm trying VM, but on bare metal not sure. | 17:36 |
mfalatic | Will see. | 17:36 |
*** achanda has joined #openstack-containers | 17:37 | |
mfalatic | The network weirdness complicates it unfortunately... it can never just work. | 17:37 |
mfalatic | Just for comparison, if I need to compare later, how many NICs have you got configured on your OnMetal instance and are you using port forwarding to ssh/web connect to your devstack? | 17:39 |
apmelton | mfalatic: essentially two NICs | 17:39 |
mfalatic | Ok | 17:39 |
apmelton | one public, one private service net | 17:39 |
mfalatic | Given port blocks and other goodness, it's hard to say what's going to work where I am. Fingers crossed. | 17:40 |
*** Tango has joined #openstack-containers | 17:55 | |
*** diga_ has quit IRC | 17:57 | |
*** oro_ has joined #openstack-containers | 18:08 | |
*** oro has joined #openstack-containers | 18:08 | |
*** adrian_otto1 has joined #openstack-containers | 18:20 | |
*** adrian_otto has quit IRC | 18:22 | |
*** dimsum__ is now known as dims | 18:27 | |
*** sdake_ has joined #openstack-containers | 18:28 | |
*** yuanying-alt has joined #openstack-containers | 18:59 | |
*** Marga_ has quit IRC | 19:03 | |
*** yuanying-alt has quit IRC | 19:03 | |
*** wshao has joined #openstack-containers | 19:03 | |
*** dboik_ has quit IRC | 19:04 | |
*** hongbin has quit IRC | 19:04 | |
*** dboik has joined #openstack-containers | 19:04 | |
*** adrian_otto1 has quit IRC | 19:09 | |
*** coolsvap is now known as coolsvap|afk | 19:20 | |
*** adrian_otto has joined #openstack-containers | 19:24 | |
apmelton | hey, anyone around that knows about our use of alembic? | 19:26 |
apmelton | when I do autogenerate, it seems like it's adding columns and constraints I didn't touch | 19:27 |
apmelton | so I'm wondering if our migrations aren't complete | 19:27 |
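One way to check apmelton's suspicion: upgrade a scratch database to head, run autogenerate, and see whether the trial revision is empty. The alembic commands are shown as comments (they need magnum's alembic config, whose path is not given here); the live part is a crude grep over the generated migration file:

```shell
# Drift check (commented: requires magnum's alembic environment):
#   alembic upgrade head
#   alembic revision --autogenerate -m "drift check"
#
# If models and migrations agree, the generated upgrade() contains
# only `pass`. A crude check of the generated revision file:
drift_free() {
    grep -A1 'def upgrade' "$1" | grep -q 'pass'
}
```

A non-empty autogenerated `upgrade()` means some column or constraint in the models was never captured by an existing migration, which would explain the spurious diffs.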
*** unicell has left #openstack-containers | 19:27 | |
*** wshao has quit IRC | 19:35 | |
*** wshao has joined #openstack-containers | 19:37 | |
*** kebray has quit IRC | 19:38 | |
*** suro_1 has joined #openstack-containers | 19:40 | |
*** suro_ has quit IRC | 19:41 | |
*** wshao has quit IRC | 19:41 | |
*** suro_ has joined #openstack-containers | 19:42 | |
*** suro_ has joined #openstack-containers | 19:42 | |
*** suro_1 has quit IRC | 19:44 | |
*** daneyon has joined #openstack-containers | 19:50 | |
*** dims has quit IRC | 19:51 | |
*** daneyon_ has quit IRC | 19:53 | |
*** kebray has joined #openstack-containers | 19:55 | |
*** dims has joined #openstack-containers | 20:01 | |
*** hongbin has joined #openstack-containers | 20:06 | |
*** unicell has joined #openstack-containers | 20:11 | |
*** jogo has joined #openstack-containers | 20:24 | |
jogo | adrian_otto: I think I called it correctly | 20:24 |
*** tcammann_ has joined #openstack-containers | 20:26 | |
*** yuanying-alt has joined #openstack-containers | 20:29 | |
adrian_otto | jogo: ye, sir! | 20:33 |
*** yuanying-alt has quit IRC | 20:34 | |
*** Marga_ has joined #openstack-containers | 20:35 | |
*** oro has quit IRC | 20:42 | |
*** oro_ has quit IRC | 20:42 | |
*** dboik_ has joined #openstack-containers | 20:45 | |
*** dboik has quit IRC | 20:49 | |
*** fredlhsu has joined #openstack-containers | 20:56 | |
*** suro_ has quit IRC | 20:56 | |
*** suro_ has joined #openstack-containers | 20:57 | |
openstackgerrit | Tom Cammann proposed stackforge/magnum: Add devstack module to contrib https://review.openstack.org/160328 | 20:57 |
*** muralia has joined #openstack-containers | 21:04 | |
*** dboik_ has quit IRC | 21:07 | |
sdake_ | tom - would devstack not take that directly in their repo? | 21:08 |
*** dboik has joined #openstack-containers | 21:08 | |
*** tcammann_ has quit IRC | 21:34 | |
*** sdake_ has quit IRC | 21:37 | |
*** sdake_ has joined #openstack-containers | 21:40 | |
*** suro_ has quit IRC | 21:48 | |
*** kaufer has quit IRC | 21:54 | |
*** yuanying-alt has joined #openstack-containers | 21:58 | |
adrian_otto | Our team meeting will begin in a moment in #openstack-meeting-alt | 21:59 |
*** coolsvap|afk has quit IRC | 22:03 | |
*** suro_ has joined #openstack-containers | 22:07 | |
*** kebray has quit IRC | 22:10 | |
*** sdake__ has joined #openstack-containers | 22:33 | |
*** sdake_ has quit IRC | 22:37 | |
*** fredlhsu has quit IRC | 22:41 | |
*** kebray has joined #openstack-containers | 22:44 | |
*** dboik_ has joined #openstack-containers | 22:52 | |
*** fredlhsu has joined #openstack-containers | 22:55 | |
*** dboik has quit IRC | 22:55 | |
*** Marga_ has quit IRC | 22:57 | |
*** Marga_ has joined #openstack-containers | 22:57 | |
*** dboik_ has quit IRC | 22:57 | |
*** madhuri has joined #openstack-containers | 23:02 | |
*** adrian_otto has quit IRC | 23:03 | |
*** mfalatic has quit IRC | 23:05 | |
*** Marga_ has quit IRC | 23:06 | |
*** Marga_ has joined #openstack-containers | 23:06 | |
*** kebray has quit IRC | 23:10 | |
*** Marga_ has quit IRC | 23:11 | |
*** sdake__ has quit IRC | 23:11 | |
*** prad has quit IRC | 23:14 | |
*** yuanying-alt has quit IRC | 23:14 | |
*** Marga_ has joined #openstack-containers | 23:17 | |
*** fredlhsu has quit IRC | 23:24 | |
*** madhuri has quit IRC | 23:28 | |
*** sdake_ has joined #openstack-containers | 23:39 | |
*** sdake_ has quit IRC | 23:44 | |
Kennan | :apmelton, you can follow our migration guide to do that | 23:45 |
*** fredlhsu has joined #openstack-containers | 23:49 | |
*** kebray has joined #openstack-containers | 23:51 | |
*** zul has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!