*** david-lyle is now known as david-lyle_afk | 00:14 | |
*** hj-hp has joined #openstack-operators | 00:15 | |
*** hj-hp has quit IRC | 00:16 | |
*** hj-hp has joined #openstack-operators | 00:17 | |
*** j05h has quit IRC | 00:18 | |
*** Piet has joined #openstack-operators | 00:20 | |
*** jaypipes has quit IRC | 00:40 | |
*** Marga__ has quit IRC | 00:41 | |
*** mdorman has quit IRC | 00:44 | |
*** blair has joined #openstack-operators | 00:52 | |
*** hj-hp has quit IRC | 01:02 | |
*** j05h has joined #openstack-operators | 01:04 | |
*** VW_ has quit IRC | 01:11 | |
*** blair has quit IRC | 01:19 | |
*** blair has joined #openstack-operators | 01:20 | |
*** hj-hp has joined #openstack-operators | 01:31 | |
*** hj-hp has quit IRC | 01:33 | |
*** georgem1 has joined #openstack-operators | 01:54 | |
*** andrewbogott is now known as andrewbogott_afk | 02:19 | |
*** georgem1 has quit IRC | 02:36 | |
*** georgem1 has joined #openstack-operators | 02:38 | |
*** markvoelker has quit IRC | 02:47 | |
*** markvoelker has joined #openstack-operators | 02:47 | |
*** markvoelker has quit IRC | 02:51 | |
*** georgem1 has quit IRC | 02:56 | |
*** andrewbogott_afk is now known as andrewbogott | 02:59 | |
*** georgem1 has joined #openstack-operators | 03:02 | |
*** zerda has joined #openstack-operators | 03:38 | |
*** markvoelker has joined #openstack-operators | 03:43 | |
*** jlk has quit IRC | 04:05 | |
*** jlk has joined #openstack-operators | 04:06 | |
*** jlk has quit IRC | 04:11 | |
*** jlk has joined #openstack-operators | 04:11 | |
*** signed8bit_ZZZzz has quit IRC | 04:21 | |
*** georgem1 has quit IRC | 04:31 | |
*** markvoelker has quit IRC | 04:44 | |
*** markvoelker has joined #openstack-operators | 04:45 | |
*** markvoelker has quit IRC | 04:49 | |
*** alop has quit IRC | 05:00 | |
*** blairo has joined #openstack-operators | 05:06 | |
*** blair has quit IRC | 05:08 | |
*** blairo has quit IRC | 05:15 | |
*** andrewbogott is now known as andrewbogott_afk | 05:25 | |
*** zerda has quit IRC | 06:14 | |
*** fifieldt has joined #openstack-operators | 06:26 | |
*** zerda has joined #openstack-operators | 06:36 | |
*** Marga_ has joined #openstack-operators | 06:49 | |
*** Marga_ has quit IRC | 06:49 | |
*** Marga_ has joined #openstack-operators | 06:50 | |
*** belmoreira has joined #openstack-operators | 07:03 | |
*** subscope has quit IRC | 07:14 | |
*** Marga_ has quit IRC | 07:35 | |
*** Miouge has joined #openstack-operators | 07:43 | |
*** markvoelker has joined #openstack-operators | 07:44 | |
*** sanjayu has joined #openstack-operators | 07:46 | |
*** markvoelker has quit IRC | 07:50 | |
*** racedo has quit IRC | 08:12 | |
*** matrohon has joined #openstack-operators | 08:14 | |
*** derekh has joined #openstack-operators | 09:17 | |
*** bvandenh has joined #openstack-operators | 09:18 | |
*** jemangs_ has quit IRC | 09:32 | |
*** jemangs has joined #openstack-operators | 09:35 | |
*** markvoelker has joined #openstack-operators | 09:47 | |
*** markvoelker has quit IRC | 09:52 | |
*** VW_ has joined #openstack-operators | 10:07 | |
*** VW_ has quit IRC | 11:03 | |
*** pcaruana has joined #openstack-operators | 11:19 | |
*** pboros has joined #openstack-operators | 11:28 | |
*** VW_ has joined #openstack-operators | 11:30 | |
*** VW__ has joined #openstack-operators | 11:38 | |
*** VW__ has quit IRC | 11:39 | |
*** VW__ has joined #openstack-operators | 11:39 | |
*** VW_ has quit IRC | 11:41 | |
*** markvoelker has joined #openstack-operators | 12:50 | |
*** markvoelker has quit IRC | 12:56 | |
*** georgem1 has joined #openstack-operators | 13:02 | |
*** bvandenh has quit IRC | 13:03 | |
*** jkraj has joined #openstack-operators | 13:05 | |
*** esker has joined #openstack-operators | 13:07 | |
*** sanjayu has quit IRC | 13:26 | |
*** zerda has quit IRC | 13:26 | |
*** signed8bit has joined #openstack-operators | 13:45 | |
*** markvoelker has joined #openstack-operators | 13:53 | |
*** markvoelker has quit IRC | 13:59 | |
*** cpschult has joined #openstack-operators | 14:00 | |
*** priteau has joined #openstack-operators | 14:08 | |
*** markvoelker has joined #openstack-operators | 14:10 | |
*** georgem1 has quit IRC | 14:13 | |
*** bvandenh has joined #openstack-operators | 14:21 | |
*** VW__ has quit IRC | 14:30 | |
*** VW_ has joined #openstack-operators | 14:31 | |
*** radez_g0n3 is now known as radez | 14:34 | |
*** georgem1 has joined #openstack-operators | 14:35 | |
*** csoukup has joined #openstack-operators | 14:55 | |
*** signed8b_ has joined #openstack-operators | 15:03 | |
*** signed8bit has quit IRC | 15:06 | |
*** dmsimard_away is now known as dmsimard | 15:10 | |
*** georgem1 has quit IRC | 15:17 | |
*** markvoelker has quit IRC | 15:21 | |
*** markvoelker has joined #openstack-operators | 15:21 | |
*** matrohon has quit IRC | 15:22 | |
*** markvoelker has quit IRC | 15:26 | |
*** VW_ has quit IRC | 15:33 | |
*** VW_ has joined #openstack-operators | 15:34 | |
*** georgem1 has joined #openstack-operators | 15:38 | |
*** reed has joined #openstack-operators | 15:48 | |
*** mdorman has joined #openstack-operators | 15:50 | |
*** markvoelker has joined #openstack-operators | 15:51 | |
*** hj-hp has joined #openstack-operators | 16:06 | |
*** Marga_ has joined #openstack-operators | 16:08 | |
*** VW_ has quit IRC | 16:12 | |
*** VW_ has joined #openstack-operators | 16:17 | |
georgem1 | do you have any recommendations for the partitioning scheme of the compute nodes? (no shared storage, KVM, Ubuntu) | 16:17 |
georgem1 | I'm thinking /boot 500 MB, swap 128 GB (50% of RAM), / 50 GB and the rest for /var; there will be no memory overcommit so I don't want to waste more space for swap | 16:17 |
*** VW_ has quit IRC | 16:17 | |
*** VW_ has joined #openstack-operators | 16:18 | |
jlk | georgem1: do you have a reason for splitting off /boot ? | 16:24 |
klindgren | VW_, Got some good news on the Neutron errors that we have been seeing. I configured the memcached support in neutron that was added in Juno and the metadata errors have dropped off completely. Now the only errors that I see are _heal_instance_info_cache timing out connecting to neutron. | 16:24 |
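The fix klindgren describes is the metadata-agent response caching added in Juno. A minimal sketch of metadata_agent.ini, assuming the Juno-era `cache_url` option; the URL and TTL shown are illustrative, and the exact syntax for a memcached backend should be checked against your release's sample config:

```ini
[DEFAULT]
# Cache metadata responses instead of hitting neutron-server on every request.
# 5-second TTL shown; tune to taste.
cache_url = memory://?default_ttl=5
```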
jlk | klindgren: do you get any problems with instance boot and neutron vif plug timeout? | 16:25 |
klindgren | We have always had a separate boot partition. However ours used to be 200 MB - which now with new kernels and larger ram disks is problematic | 16:25 |
klindgren | jlk, not really | 16:25 |
jlk | yeah, tradition is a separate boot, but that was due to lilo and grub issues with large disks or with LVM set ups | 16:26 |
klindgren | I did see 1-2 errors about vif creation failing due to neutron timeouts | 16:26 |
georgem1 | jlk: /boot has to be ext2 usually and I want to use xfs for everything else, so I carve a partition for it | 16:26 |
jlk | which largely isn't a problem these days, so unless you're using a huge (more than 2TB disk) or a filesystem grub can't understand, a separate partition may be a wasted effort | 16:26 |
klindgren | however 99% of the errors are either metadata related, or instance_info_cache | 16:26 |
jlk | klindgren: grub supports XFS | 16:27 |
georgem1 | jlk: we are going with 6x2 TB drives, so the RAID container will be quite large | 16:27 |
klindgren | jlk - I get that - its just that by default our build system already partitions it that way so *shrug* | 16:27 |
VW_ | nice klindgren | 16:27 |
jlk | georgem1: well that's a good reason to split off /boot then :) | 16:28 |
klindgren | I thought grub2 could do gpt partition tables as well | 16:28 |
jlk | grub2 perhaps. | 16:30 |
jlk | lots has changed there for secure boot too | 16:30 |
jlk | and UEFI | 16:30 |
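georgem1's proposed layout above can be sketched as a dry run. The disk name and RAID-virtual-disk size are assumptions, and the commands are only printed, not executed:

```shell
# Dry-run sketch of the proposed compute-node layout:
# /boot 500 MB (ext2), swap 128 GB (50% of RAM), / 50 GB, rest /var (xfs).
DISK=/dev/sda                              # assumption: the RAID virtual disk
BOOT_END=501                               # MiB: 1 MiB alignment + 500 MiB /boot
SWAP_END=$((BOOT_END + 128 * 1024))        # + 128 GiB swap
ROOT_END=$((SWAP_END + 50 * 1024))         # + 50 GiB /

CMDS="parted -s $DISK mklabel gpt
parted -s $DISK mkpart boot ext2 1MiB ${BOOT_END}MiB
parted -s $DISK mkpart swap linux-swap ${BOOT_END}MiB ${SWAP_END}MiB
parted -s $DISK mkpart root xfs ${SWAP_END}MiB ${ROOT_END}MiB
parted -s $DISK mkpart var xfs ${ROOT_END}MiB 100%"
echo "$CMDS"                               # review, then pipe to sh to apply
```

GPT is assumed here since a >2 TB RAID container (per the 6x2 TB discussion) rules out an MBR label anyway.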
klindgren | VW_, still not fixed though. Thinking this points to some issue with how neutron does database stuff with multiple servers/workers - some sort of lock contention. | 16:31 |
VW_ | I would vote yes | 16:32 |
VW_ | :) | 16:32 |
VW_ | we've had some db contention problems too | 16:33 |
*** david-lyle_afk is now known as david-lyle | 16:36 | |
georgem1 | anybody running 2 x 10 Gb bonded and trunked with openvswitch and Ubuntu 14.04? I need some feedback about performance | 16:40 |
*** Marga_ has quit IRC | 16:42 | |
klindgren | georgem1, active-active or active-passive? | 16:46 |
georgem1 | active/active | 16:49 |
*** pcaruana has quit IRC | 16:50 | |
georgem1 | the plan is to have 2x10 Gb going to a ToR, LACP, trunk and create openvswitch bridges for management, GRE, storage, monitoring, etc over different VLANS | 16:51 |
georgem1 | but I heard there were performance issues with openvswitch and bond links, I'm not sure about the newest version | 16:52 |
klindgren | eh - the majority of the performance issues I have had with OVS were the result of the flow stuff being single-threaded | 16:55 |
*** Marga_ has joined #openstack-operators | 16:55 | |
*** Marga_ has quit IRC | 16:56 | |
klindgren | however, I dont have 10gig links in active/active | 16:56 |
klindgren | and I use real vlans instead of gre | 16:56 |
*** Marga_ has joined #openstack-operators | 16:56 | |
klindgren | I can tell you that we are looking at removing OVS altogether and going to linux bridging. Mainly because we are using shared networks with real vlans, so OVS buys us exactly nothing. In fact all packets have to be filtered through both linux bridge and OVS (due to security groups), so we figured it would be simpler/faster to just have linuxbridge. | 16:58 |
georgem1 | ok, but what's the performance like now on a 10 Gb link? | 17:00 |
georgem1 | I don't have control of the number of neutron networks that will be created by my tenants, so the idea of having 3-4000 VLANs across all the switches is not ideal for me, hence I have to go with GRE | 17:02 |
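The bonded setup georgem1 describes would look roughly like this in ovs-vsctl terms. Bridge and NIC names are assumptions, and the commands are echoed rather than executed since they need a live OVS host:

```shell
RUN="echo"            # drop the echo to run for real on a host with OVS
out=$(
  $RUN ovs-vsctl add-br br-ex
  # active/active LACP bond over the two 10 Gb NICs
  $RUN ovs-vsctl add-bond br-ex bond0 eth0 eth1 lacp=active
  # balance-tcp hashes on L4 so both links carry traffic (requires LACP)
  $RUN ovs-vsctl set port bond0 bond_mode=balance-tcp
)
echo "$out"
```

The ToR side would need a matching LACP port-channel carrying the management/GRE/storage/monitoring VLANs as a trunk.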
*** belmoreira has quit IRC | 17:23 | |
*** bvandenh has quit IRC | 17:27 | |
*** Marga_ has quit IRC | 17:38 | |
*** Marga_ has joined #openstack-operators | 17:38 | |
*** derekh has quit IRC | 17:40 | |
*** georgem1 has quit IRC | 17:44 | |
*** alop has joined #openstack-operators | 17:44 | |
*** Marga_ has quit IRC | 17:56 | |
*** emagana has joined #openstack-operators | 18:03 | |
*** Ctina has joined #openstack-operators | 18:06 | |
Ctina | anyone in here using ceilometer? I'm configuring it for the first time and had some dumb questions.. | 18:06 |
*** emagana has quit IRC | 18:07 | |
klindgren | We used to use it - stopped due to terrible performance and overhead | 18:07 |
Ctina | i booted an instance and then did a ceilometer meter-list and i'm not seeing cpu meters like i would expect based on some of the quick guides i've seen | 18:08 |
Ctina | i see instance:<flavor>, instance.scheduled, etc along with disk.root.size meters, just not cpu or cpu_util | 18:09 |
klindgren | some of them are based upon calculations | 18:10 |
klindgren | I think cpu_util was based upon the average cpu usage over 10 minutes or something like that | 18:10 |
klindgren | it depended on when the sample was taken | 18:10 |
Ctina | oh okay, that makes sense | 18:10 |
klindgren | so we updated some of them to put data in faster | 18:11 |
klindgren | I forgot the file that we did that on | 18:11 |
Ctina | ceilometer/pipeline.yaml probably | 18:11 |
Ctina | I'll mess with some of those intervals and see what i can get, thank you :) | 18:13 |
klindgren | yea pretty sure that was it | 18:13 |
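The interval tweak klindgren and Ctina are discussing lives in ceilometer's pipeline.yaml. A sketch assuming the Juno-era format (source/sink names follow the shipped sample; treat the scale expression as illustrative), dropping the cpu polling interval from the default 600 s to 60 s:

```yaml
sources:
    - name: cpu_source
      interval: 60          # default is 600; poll every minute instead
      meters:
          - "cpu"
      sinks:
          - cpu_sink
sinks:
    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - notifier://
```

This also explains why cpu_util did not appear immediately after boot: it is derived from at least two cpu samples, so nothing shows until the second polling interval has elapsed.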
*** andrewbogott_afk is now known as andrewbogott | 18:16 | |
*** georgem1 has joined #openstack-operators | 18:19 | |
*** VW_ has quit IRC | 18:20 | |
*** VW_ has joined #openstack-operators | 18:20 | |
jlk | any of you have an easy way for end users to see what cinder volume capacity there is? | 18:26 |
georgem1 | jlk: what do you mean by "cinder volume capacity"? | 18:32 |
jlk | as in how much space is left to make volumes | 18:32 |
jlk | a customer tried to make 20 volumes at 20g each and eventually ran out of space, but there wasn't a way to know that ahead of time | 18:32 |
georgem1 | no, the available capacity is only available to the cloud admin, unless you change the policy.json | 18:33 |
georgem1 | the user can see what his quota is | 18:33 |
jlk | how can the cloud admin see it? (because in this case, the user /was/ an admin) | 18:37 |
*** Miouge has quit IRC | 18:39 | |
*** Miouge has joined #openstack-operators | 18:45 | |
*** Marga_ has joined #openstack-operators | 18:46 | |
*** Marga_ has quit IRC | 18:47 | |
*** Marga_ has joined #openstack-operators | 18:47 | |
*** Marga_ has quit IRC | 18:48 | |
*** Marga_ has joined #openstack-operators | 18:48 | |
*** Marga_ has quit IRC | 18:49 | |
*** Marga_ has joined #openstack-operators | 18:49 | |
*** andrewbogott is now known as andrewbogott_afk | 19:08 | |
*** priteau has quit IRC | 19:15 | |
*** matrohon has joined #openstack-operators | 19:31 | |
*** hj-hp has quit IRC | 19:38 | |
georgem1 | jlk: I'm afraid the information is not available to the admin either :( | 19:45 |
jlk | that was the conclusion I came to as well. Seems like a missing bit of usefulness there | 19:45 |
jlk | as a provider, I'd love to be able to monitor available capacity, and pre-plan capacity increases | 19:46 |
klindgren | jlk, but the cloud is infinite | 19:46 |
jlk | I can cobble something together by looking at all the backend storage devices, but that requires touching operating system or NAS/SAN bits, rather than poking openstack APIs | 19:46 |
georgem1 | jlk: https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabilities | 19:47 |
jlk | registered in 2013, not touched since :( | 19:48 |
georgem1 | jlk: as admin, you could run "cinder list --all-tenants" and add up the volumes | 19:48 |
georgem1 | jlk: as admin, you should also know the total capacity of your storage (ceph pool, NAS, local filesystem, etc) | 19:49 |
jlk | "should", but I'm dealing with 10s of clouds | 19:49 |
jlk | someday 100s | 19:50 |
jlk | programming that into an alert or metric takes... effort | 19:50 |
*** jkraj has quit IRC | 19:55 | |
georgem1 | jlk: query the cinder db maybe | 19:57 |
georgem1 | https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/zabbix/files/scripts/monitoring.conf | 19:58 |
*** Ctina_ has joined #openstack-operators | 20:01 | |
*** VW_ has quit IRC | 20:02 | |
*** Ctina has quit IRC | 20:05 | |
*** Ctina_ has quit IRC | 20:06 | |
*** hj-hp has joined #openstack-operators | 20:18 | |
*** andrewbogott_afk is now known as andrewbogott | 20:21 | |
*** Marga_ has quit IRC | 20:28 | |
*** hj-hp has quit IRC | 20:44 | |
*** georgem1 has quit IRC | 20:52 | |
*** hj-hp has joined #openstack-operators | 20:56 | |
*** VW_ has joined #openstack-operators | 20:56 | |
*** georgem1 has joined #openstack-operators | 20:58 | |
*** georgem1 has quit IRC | 21:16 | |
*** Marga_ has joined #openstack-operators | 21:29 | |
*** Marga_ has quit IRC | 21:33 | |
jlk | Can anybody tell me what the difference is between a server "image" create and a server "snapshot" create? | 21:37 |
*** andrewbogott is now known as andrewbogott_afk | 21:37 | |
*** andrewbogott_afk is now known as andrewbogott | 21:37 | |
*** Marga_ has joined #openstack-operators | 21:40 | |
*** georgem1 has joined #openstack-operators | 21:40 | |
*** georgem1 has quit IRC | 21:41 | |
*** georgem1 has joined #openstack-operators | 21:41 | |
*** Miouge has quit IRC | 21:46 | |
*** hj-hp has quit IRC | 21:48 | |
*** markvoelker has quit IRC | 21:51 | |
*** markvoelker has joined #openstack-operators | 21:51 | |
*** hj-hp has joined #openstack-operators | 21:53 | |
*** radez is now known as radez_g0n3 | 21:54 | |
*** hj-hp has quit IRC | 21:54 | |
*** markvoelker has quit IRC | 21:56 | |
*** markvoelker has joined #openstack-operators | 21:57 | |
*** turnerg has joined #openstack-operators | 22:10 | |
*** VW_ has quit IRC | 22:16 | |
*** VW_ has joined #openstack-operators | 22:17 | |
*** georgem1 has quit IRC | 22:33 | |
*** matrohon has quit IRC | 22:51 | |
*** cpschult has quit IRC | 22:55 | |
*** alop has quit IRC | 23:09 | |
*** alop has joined #openstack-operators | 23:12 | |
*** signed8b_ has quit IRC | 23:15 | |
*** pboros has quit IRC | 23:18 | |
klindgren | jlk - aside from glance needing special flags to see them, I can't | 23:39 |
jlk | huh. | 23:39 |
klindgren | one thing I did notice is that snapshots have a serverid component - not really sure what that means though | 23:39 |
klindgren | [14:36] <jlk> Can anybody tell me what the difference is between a server "image" create and a server "snapshot" create? | 23:39 |
klindgren | iirc when doing a nova image-list you get different results than glance image-list | 23:40 |
klindgren | you have to pass another flag to glance to see snapshots | 23:40 |
jlk | yeah, the huh was in "yeah that's kind of what I thought too." | 23:40 |
*** dmsimard is now known as dmsimard_away | 23:48 | |
*** csoukup has quit IRC | 23:53 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!