Sunday, 2012-05-20

00:01 *** stimble has joined #openstack
00:07 *** ryanpetrello has quit IRC
00:12 *** rendar has quit IRC
00:14 *** koolhead17 has quit IRC
00:37 *** koolhead17 has joined #openstack
00:38 *** mestery has joined #openstack
00:42 <trapni> zykes-: we use servers from Thomas Krenn AG (German server provider)
00:42 <trapni> zykes-: mostly Intel CPUs
00:44 *** pmezard has quit IRC
00:56 <WormMan> trapni: either central or nova-network on every compute node is fine, you just have to choose one. I use nova-network on every host so there's no central point of failure
00:57 <WormMan> but in return I waste an external IP for every node
00:57 *** edygarcia has quit IRC
00:59 *** albert23 has left #openstack
<uvirtbot> New bug: #1001828 in nova "EC2 volumes - attach_time is null" [Undecided,New]
01:02 *** epim has joined #openstack
01:02 <trapni> WormMan: well, but I have only one switch for public IPs
01:03 <trapni> WormMan: exactly. and I am fine with a central nova-network node (well, 2 tbh, as an active/passive HA cluster... at least that's the plan)
01:03 <zykes-> what storage do you use for instances, trapni?
01:03 <WormMan> hopefully people have gotten that working better, but we last looked at it 6 months ago
01:03 <WormMan> (HA for nova-network)
01:04 <trapni> WormMan: but how does nova know what I use? either nova-network on *every* compute node or just a central one? (I am for the latter)
01:04 <WormMan> trapni: there's a flag: multi_host
01:04 <trapni> zykes-: currently hardware RAID 1
01:04 <WormMan> I don't remember if that's global or per network
01:05 <trapni> WormMan: yeah, I never understood it. maybe I misinterpreted it. just recently set it to T
01:05 <WormMan> multi_host means every compute node runs nova-network
01:05 <zykes-> how much mem, trapni?
01:05 <trapni> zykes-: hardware RAID 1 with rotating media per compute node; VM disks are stored on the hypervisor's local storage
01:05 <trapni> zykes-: as said above, 48GB :)
01:06 <zykes-> so 2 GB per core then?
01:06 <trapni> WormMan: oh, okay, in that case I need to leave it at --multi_host=F then
01:06 <zykes-> do you run mostly linux, or?
01:06 <WormMan> trapni: exactly
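[editor's note] The multi_host behaviour discussed above is controlled through nova's flag file. A minimal sketch, assuming Essex-era flagfile syntax (exact spelling and defaults vary by release, so treat this as illustrative rather than authoritative):

```
# /etc/nova/nova.conf (flagfile style)
# central nova-network node (trapni's plan):
--multi_host=False
# ...or run nova-network on every compute node (WormMan's setup):
# --multi_host=True
```

The trade-off is exactly the one stated in the conversation: multi_host=True removes the single point of failure, but costs one external IP per compute node.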
01:07 <trapni> zykes-: it highly depends on the application. we have a few VMs that are fine with 2 VCPUs but need up to 20GB RAM (memcache, redis for example), others that need lots of compute as well as a considerable amount of RAM (Ruby web application nodes), and a few that are kind of t1.micro (in EC2 terms) for serving tiny things
01:07 <trapni> WormMan: many thanks! :)
01:08 <zykes-> which hypervisor do you have?
01:09 <trapni> you mean how many?
01:10 <zykes-> which type
01:10 <trapni> (via libvirt)
01:10 <zykes-> I wonder if people use blades for public clouds :p
01:10 <trapni> I love the KSM feature (kernel same-page merging)
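[editor's note] KSM is a kernel feature, so on a KVM host you can check whether it is on and how much it is merging via sysfs. A quick sketch (the paths are the mainline kernel's KSM interface; they only exist if the kernel was built with KSM):

```shell
# Is KSM running? (1 = on, 0 = off)
cat /sys/kernel/mm/ksm/run 2>/dev/null || echo "KSM not available"
# How many pages are actually shared/merged right now?
cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing 2>/dev/null || true
```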
01:10 <zykes-> trapni: ubuntu?
01:11 <trapni> zykes-: dude, I hate blade centers.... please enlighten me if you think they are still bleeding-edge tech
01:11 <trapni> zykes-: in the DC? then yes - privately? I'm more a fan of source distros :-D
01:12 <zykes-> I don't think blades are bleeding edge
01:12 <zykes-> why should they be?
01:12 <WormMan> I prefer the idea of cheap hardware for my cloud
01:12 <trapni> WormMan: a colleague of mine still wants them, and I am having a hard time arguing why they suck (he still wants them)
01:13 <trapni> -WormMan +zykes
01:13 *** leetrout has joined #openstack
01:13 <zykes-> what sucks, trapni? blades?
01:14 <trapni> zykes-: yes, blades. you have a big black blob with lots of small-sized hosts (so-called blades) of fixed size.
01:14 <trapni> this is (in my point of view) the old age of virtualization.
01:15 <zykes-> why is that old age compared to 2U boxes?
01:15 <WormMan> (although I think the 4 servers in 2U with shared power also looks nice) I'm probably sticking with 1U
01:15 <zykes-> WormMan: 10 gig?
01:15 <WormMan> 1G so far
01:16 <zykes-> i'm looking at the R710
01:16 <trapni> WormMan: we're using 1U nodes - why would you prefer 2U units?
01:16 <WormMan> I'd like 10, but it doesn't buy us much for our workload
01:16 <zykes-> we've got windows hosts
01:16 <trapni> sorry for that
01:16 <zykes-> so 10 gig is almost a necessary thing to have (windows VMs, sorry)
01:16 <trapni> 10GiB RAM or disk size?
01:17 <zykes-> 10 GbE
01:17 <WormMan> we only boot from local disk, and the only thing with remote storage right now is databases, and that's maybe 1 in 20 instances
01:17 <zykes-> deploying a windows image on gigabit is slow?
01:17 <WormMan> everything else is limited to our Internet feed, which is only 10GB
01:17 <WormMan> er, Gb
01:18 <WormMan> (I wish we had 10GB...)
01:18 *** zul has quit IRC
01:19 <trapni> WormMan: remote storage via iSCSI or what?
01:19 <WormMan> trapni: NFS for now
01:20 <trapni> WormMan: mhm, but why is the db remote-storage backed? I mean, how big is the db, in GB?
01:21 <WormMan> it's remote backed for node failure protection, and our storage can do snapshots and similar
01:21 <trapni> I'd always try to make use of local block device caching and avoid unnecessary network I/O (caused by disk I/O)
01:21 <WormMan> and we've always done our databases that way; openstack is the same as our pure-hardware DB
01:22 *** mjfork has quit IRC
01:22 <trapni> we're using a dual-master mysql setup (both nodes on local storage). but we didn't have OpenStack back in those days anyway :)
01:22 <zykes-> I landed at roughly $8k per node with 600 GB local disk for instance storage, 96 GB mem, 12 cores and a dual 10 GbE connector
01:22 <trapni> WormMan: is there any reason for *not* putting big MySQL DBs into a VM?
01:25 <WormMan> trapni: not that we've found yet
01:25 <zykes-> trapni: use red-dwarf instead
01:26 <trapni> WormMan: we're in the middle of migrating from an old plain OpenVZ env to OpenStack/KVM/VLAN and I'd ideally put everything into KVM, including mysql, but there I am unsure.... yet :)
01:27 <zykes-> trapni: > red-dwarf :p
01:28 <WormMan> my coworker who attended the red-dwarf stuff at the conference said it should be nice once it's actually usable :)
01:28 <zykes-> doesn't RS use it internally?
01:28 <zykes-> or Cloud Databases
01:29 *** cryptk|offline is now known as cryptk
01:29 <zykes-> trapni: the crappy thing about 1U is that it's really not too expandable
01:29 <WormMan> it's a cloud, you don't expand them, you add more of them :)
01:31 <zykes-> i mean for NICs or similar, WormMan
01:32 <zykes-> I guess you use like 5 NICs in a public cloud, no?
01:32 <WormMan> nope, just 2
01:32 <WormMan> 1 primary, one backup
01:32 *** nRy has joined #openstack
01:32 <WormMan> (or LACP bonded, depending on your layout)
01:32 <zykes-> in my case it's 2 inet, 2-4 mgmt / vm?
01:33 <WormMan> ours are all VLAN
01:33 <WormMan> mgmt vlan, public vlan and 20 customer VLANs
01:33 <zykes-> all on the same network switches?
01:34 <trapni> WormMan: I hope you can help me then! (I'm having some vlan troubles).. do you have some minutes? (:
01:34 <WormMan> we're sized for 100 customer VLANs, I'm hoping it doesn't come to that :)
01:34 <WormMan> I can pretend to be helpful
01:35 <trapni> well, you can have up to somewhere above 4k VLAN IDs; I wonder how Amazon is handling that limit :)
01:35 <WormMan> I'm pretty sure amazon (like rackspace, I think) does more filtering at the VM level rather than using vlans to segregate traffic
01:35 <trapni> WormMan: okay, thanks. pretend you're about to introduce OpenStack in VLAN mode on 4 racks, with one switch per rack, and all connected in line.
01:35 <WormMan> (or newer tech like netflow)
01:36 *** nRy_ has quit IRC
01:36 <zykes-> trapni: Nicira-alike products (Amazon) maybe?
01:36 <zykes-> super secret sauce
01:36 <trapni> my problem is, I was now barely able to enable VLAN communication on one rack's switch (HP switch) but failed to get that VLAN traffic to communicate with another rack's switch (Cisco switch)
01:36 *** zynzel has quit IRC
01:37 *** zynzel has joined #openstack
01:37 <WormMan> the links between switches need to be set to trunks with the same enabled protocol; I don't remember what the common one is, but cisco likes their non-standard one
01:37 <trapni> I (now) know I have to explicitly allow all VIDs on every port I have compute nodes running on. that's all working fine as long as I am on the same switch
01:38 <zykes-> any of you using HP servers?
01:38 <WormMan> we've got about 80 HPs
01:38 <trapni> zykes-: yes
01:38 <WormMan> (among other hardware)
01:38 <trapni> I find them easier to configure than those Cisco thingies
01:38 <zykes-> trapni: which kinds?
01:38 *** cryptk is now known as cryptk|offline
01:39 <WormMan> trapni: I've always liked the cisco CLI, but not how they default to protocols that no one else uses
01:39 <trapni> zykes-: I'm pretty happy with the HP 1910 switch
01:40 *** arBmind has quit IRC
01:40 <trapni> WormMan: but I can use *any* port on either switch as long as I've set both to trunk and the same protocol?
01:40 <WormMan> trapni: yes, that should work
01:41 <trapni> zykes-: HP 1910-48G, to be exact.
01:41 <zykes-> that's the switch I've got in my lab
01:41 <zykes-> damned HP, why can't they deliver a 12*3.5" server
01:42 <zykes-> their current servers suck for swift usage compared to Dell's
01:42 <WormMan> as far as I can tell, they all suck unless you slap a JBOD on them (or go with whitebox)
01:43 <zykes-> dell supports JBODs
01:43 <trapni> WormMan: do I need to set "Link Aggregation" on the trunk ports?
01:43 <zykes-> with 12*3.5"
01:44 <WormMan> trapni: once you get a single link working, then you can worry about bonding them together :)
01:44 <trapni> WormMan: ah, lol. so Link Aggregation is for bonding
01:44 <trapni> I did not know that :)
01:46 <zykes-> WormMan: the thing with 1U from HP is that they have too little ram
01:46 <WormMan> zykes-: 192GB should be enough for anyone :)
01:47 <zykes-> in 1U?
01:47 <zykes-> from HP?
01:47 <WormMan> (we're actually running 8x cores so we only need 128)
01:47 <WormMan> I know I've gotten 144 in an HP, not sure about 192 actually
01:48 <zykes-> what servers are you running on, if you can say?
01:48 <WormMan> supposedly the new E5 series will do 768GB, that really should be enough for anyone
01:48 <WormMan> (of course, that's only 16 cores)
01:48 <zykes-> WormMan: dell already has a box doing 1TB
01:48 <WormMan> we're on DL360 G7 12-core (2x6) and 96GB for our HPs
01:48 <trapni> WormMan: on Cisco, it seems that all ports are set to Trunk by *default*. do you know why?
01:48 <trapni> (not that it is working anyway :D)
01:49 <WormMan> trapni: no idea, I've not seen a cisco out of the box in years, I only get them after our network guys have screwed with them..
01:49 <trapni> hehe :D
<uvirtbot> New bug: #1001832 in nova "Tests failing on trunk (after test_long_vs_short_flags)" [Undecided,New]
01:54 *** emenope has joined #openstack
01:57 *** rnorwood has quit IRC
02:00 *** koolhead17 has quit IRC
02:00 <trapni> WormMan: got it working; however, you need to set both endpoint ports to trunk *and* add the VIDs there :)
02:00 <trapni> so just setting a port to trunk is not enough
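[editor's note] What trapni just described looks roughly like this on the Cisco side (IOS syntax; the VLAN range is a made-up example, and some older Catalyst models also need the encapsulation line):

```
! inter-switch uplink port, illustrative VLAN IDs
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100-120
```

On the HP 1910 side, the equivalent is marking the uplink port as a tagged member of the same VIDs via its management UI.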
02:01 <trapni> WormMan: what scheduler are you using? simple, filter, ...?
02:01 <WormMan> trapni: we wrote our own :)
02:01 <trapni> WormMan: what does it do better?
02:01 <WormMan> compared to Diablo, everything
02:02 <WormMan> (not really)
02:02 <WormMan> but it mostly just tries to fill a physical host before going to the next one
02:02 <WormMan> so it looks for the lowest-spec system with the highest usage, and then schedules the instance there if it fits
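[editor's note] WormMan's "fill a host before spilling to the next" policy can be sketched in a few lines of Python. This is a toy model only, not nova's actual scheduler API; the host dicts are made up for illustration:

```python
# Toy "fill before spill" placement: pick the host with the least
# free RAM that can still fit the requested instance.

def pick_host(hosts, ram_needed_mb):
    """hosts: list of dicts with 'name' and 'ram_free_mb'.
    Returns the chosen host name, or None if nothing fits."""
    candidates = [h for h in hosts if h["ram_free_mb"] >= ram_needed_mb]
    if not candidates:
        return None
    # highest usage == least free RAM among the hosts that still fit
    best = min(candidates, key=lambda h: h["ram_free_mb"])
    best["ram_free_mb"] -= ram_needed_mb  # account for the placement
    return best["name"]

hosts = [
    {"name": "compute1", "ram_free_mb": 2048},
    {"name": "compute2", "ram_free_mb": 8192},
]
print(pick_host(hosts, 1024))  # compute1: the fullest host that still fits
print(pick_host(hosts, 4096))  # compute2: the only host with enough room
```

The point of the policy is consolidation: small instances pack onto already-busy hosts, keeping whole machines free for large instances.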
02:02 <trapni> WormMan: I am looking for a scheduler (filter looked the best) which allows me to say that one instance type (e.g. varnish) is not spawned more than once on the same hardware node. (to be more fault tolerant)
02:02 <WormMan> yea, we'll be looking at that in our coming upgrade
02:03 <trapni> WormMan: interesting.
02:03 *** ryanpetrello has joined #openstack
02:03 <WormMan> we want the same, to make sure it's spread over nodes (at least) or maybe even different racks
02:03 <trapni> it seems that the filter scheduler is the way to go; however, I wonder how to specify the filters, because the filter might look different from VM type to VM type
02:03 <trapni> WormMan: exactly, that's the point! :)
02:04 <trapni> I can't believe we're the only two users with that use case, however.
02:05 <WormMan> nah, plenty out there I'm sure, I've just not written mine yet
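[editor's note] What trapni wants is essentially an anti-affinity filter: discard every host that already runs an instance of the same group. A toy sketch of just the filtering step (data structures invented for illustration; nova's real filter scheduler classes are not shown):

```python
# Toy anti-affinity filter: reject hosts that already run an
# instance of the same group (e.g. "varnish").

def filter_hosts(hosts, placements, group):
    """hosts: list of host names.
    placements: dict mapping host name -> set of instance groups on it.
    Returns the hosts where 'group' may still be placed."""
    return [h for h in hosts if group not in placements.get(h, set())]

placements = {"node1": {"varnish"}, "node2": {"redis"}, "node3": set()}
print(filter_hosts(["node1", "node2", "node3"], placements, "varnish"))
# node1 is excluded; node2 and node3 remain candidates
```

Per-VM-type behaviour, the part trapni is unsure about, then just means choosing a different `group` label per request.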
02:05 <trapni> seems like I have to learn yet another language I've tried hard to avoid (like Python) :)
02:05 *** stimble has quit IRC
02:08 *** rnorwood has joined #openstack
02:10 *** ryanpetrello has quit IRC
02:12 <trapni> bed time. g'n8 all, and thanks WormMan for the helpful tips ;)
02:12 <WormMan> good luck :)
02:12 *** ryanpetrello has joined #openstack
08:01 <matt1_> is anyone here using any infiniband in the swift or nova networks?
08:04 <matt1_> trying to see if I can deploy openstack to connect to the proxy over standard TCP/IP over the two LAG ethernet ports while connecting to the other swift boxes for replication over infiniband
11:40 <zykes-> what Cisco Nexus switches can one get for openstack?
11:40 <zykes-> or should one get
12:47 <Madkiss> err. where do I find a template to create the sql-based endpoints for keystone?
<uvirtbot> New bug: #1001941 in quantum "Linux bridge print error" [Undecided,In progress]
13:38 *** dwcramer has joined #openstack
13:38 <zykes-> willaerk: ping
13:45 <myz_> does anyone have a successful windows instance on openstack essex??
13:47 <Trixboxer> myz_: I'm going to try today :)
13:48 <myz_> Trixboxer: I've been trying for a couple of days now
13:49 <myz_> Trixboxer: it always gives an error in the spawning stage
13:49 <Trixboxer> I'm referring to this
13:49 <myz_> Trixboxer: and nothing is showing in the log files
13:49 <Trixboxer> myz_: I suppose you need the vt-d flag
13:49 <myz_> Trixboxer: I already followed this document
13:49 <Trixboxer> I mean, have you enabled the virtualization feature of your CPU in the BIOS?
13:50 *** sstent has quit IRC
13:50 <myz_> Trixboxer: virtualization is enabled on the processor
13:50 *** sstent has joined #openstack
13:51 <myz_> Trixboxer: but when trying to run the windows image directly on KVM, it runs successfully
13:51 <myz_> Trixboxer: but it fails to run through openstack
13:51 <Trixboxer> will give it a try now
13:51 <Trixboxer> ISO is being copied
13:52 <myz_> ok, let's keep updating each other
13:53 *** RicardoSSP has joined #openstack
13:54 *** chasing`Sol has quit IRC
13:56 <Trixboxer> my base hypervisor is xen
13:56 <Trixboxer> so it's giving an error
13:56 <Trixboxer> do you have any idea how to run it on xen?
13:58 *** joearnold has joined #openstack
13:58 *** gongys has quit IRC
14:01 *** melmoth has joined #openstack
14:04 *** melmoth has quit IRC
14:04 *** melmoth has joined #openstack
14:05 <zykes-> anyone here now running Nexus switches?
14:07 *** eglynn__ has joined #openstack
14:08 <myz_> Trixboxer: .
14:08 <Trixboxer> myz_: I have KVM but somehow it's giving me an error
14:08 <Trixboxer> rishi@ubuntu:~/images$ sudo kvm -m 1024 -cdrom Windows\ server\ 2008R2\ 64bit.ISO -drive file=windowsserver.img,if=virtio,boot=on -drive file=virtio-win-0.1-22.iso,media=cdrom -boot d -nographic -vnc :0
14:08 <Trixboxer> Could not access KVM kernel module: No such file or directory
14:08 <Trixboxer> failed to initialize KVM: No such file or directory
14:08 <Trixboxer> Back to tcg accelerator.
14:08 <Trixboxer> qemu-kvm: boot=on|off is deprecated and will be ignored. Future versions will reject this parameter. Please update your scripts
14:09 <myz_> you got two errors so early :)
14:09 *** eglynn has joined #openstack
14:09 <Trixboxer> lsmod has kvm
14:09 <Trixboxer> but /dev/kvm is absent
14:09 <myz_> how many modules??
14:09 <zykes-> tcg accelerator?
14:09 <Trixboxer> rishi@ubuntu:~$ lsmod|grep kvm
14:09 <Trixboxer> kvm                   415459  0
14:09 <myz_> no, there should be two modules
14:10 <Trixboxer> how many have you got, myz_?
14:10 <myz_> are you using intel or amd??
14:10 *** eglynn_ has quit IRC
14:10 <myz_> there is another module called kvm-intel.ko
14:10 *** freeflyi1g has quit IRC
14:11 <myz_> it will be loaded automatically if intel vt-x is enabled in your bios
14:11 <Trixboxer> rishi@ubuntu:~$ sudo modprobe kvm-intel
14:11 <Trixboxer> FATAL: Error inserting kvm_intel (/lib/modules/3.2.0-23-generic/kernel/arch/x86/kvm/kvm-intel.ko): Operation not supported
14:11 <Trixboxer> rishi@ubuntu:~$ ll /lib/modules/3.2.0-23-generic/kernel/arch/x86/kvm/kvm-intel.ko
14:11 <Trixboxer> -rw-r--r-- 1 root root 205808 Apr 11 05:57 /lib/modules/3.2.0-23-generic/kernel/arch/x86/kvm/kvm-intel.ko
14:11 *** jackh has quit IRC
14:11 *** freeflying has joined #openstack
14:12 <myz_> well, then virtualization is not enabled on your hw
14:12 *** eglynn__ has quit IRC
14:13 <Trixboxer> going into the BIOS now
14:13 <Trixboxer> though /proc/cpuinfo shows vmx
14:13 <Trixboxer> vme seems absent
14:13 <myz_> it should be enabled from the bios
14:14 <myz_> i will be back in 15 min, you solve the bios issue
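[editor's note] The BIOS check being discussed can be pre-checked from a running Linux system. A quick sketch (vmx and svm are the standard x86 cpuinfo flags for Intel VT-x and AMD-V):

```shell
# Count hardware-virtualization flags: vmx (Intel VT-x) or svm (AMD-V).
# A zero means the CPU/BIOS is not exposing virtualization.
grep -Ec 'vmx|svm' /proc/cpuinfo || true
# Does the KVM device actually exist?
[ -e /dev/kvm ] && echo "/dev/kvm present" || echo "/dev/kvm missing"
```

Note that vmx showing in /proc/cpuinfo while `modprobe kvm-intel` fails with "Operation not supported" usually means VT-x is disabled in the BIOS, exactly as myz_ concludes.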
14:28 <zykes-> zynzel: ping
14:30 *** joearnold has quit IRC
14:30 <WilliamHerry> does glance use rabbit too? I find rabbit config in the glance-api.conf file
14:35 <zykes-> it can, WilliamHerry, for notifications
14:37 <WilliamHerry> it can? does glance use it for notifications by default?
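[editor's note] In Essex-era glance, the rabbit settings in glance-api.conf belong to the notification system, and notifications are a no-op unless a strategy is selected. A sketch (option names as used in that era; verify against the sample config shipped with your installed glance version):

```
# glance-api.conf (illustrative)
# default is typically no notifications:
notifier_strategy = noop
# switch to rabbit to emit image events (image.upload, image.delete, ...):
# notifier_strategy = rabbit
rabbit_host = localhost
rabbit_port = 5672
```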
14:39 *** dnaori has quit IRC
14:40 <Trixboxer> myz_: windows booting now
14:41 *** adalbas has quit IRC
14:42 *** BenC has joined #openstack
14:44 *** eglynn has quit IRC
14:45 *** gasbakid has quit IRC
14:47 *** nati_ueno has joined #openstack
14:53 <myz_> Trixboxer: nice,
14:54 <myz_> Trixboxer: once finished, upload the image to the image service and try to fire up an instance
14:54 <myz_> Trixboxer: and tell me what you see :)
14:54 <Trixboxer> seems to be an issue with my glance service
15:23 <willaerk> zykes-: you rang?
15:36 *** eglynn has joined #openstack
15:37 <Trixboxer> myz_: sorry, have to go, but I'll let you know tomorrow
15:39 *** Trixboxer has quit IRC
15:41 <WilliamHerry> I reset rabbitmq, now nova-* can't connect to rabbitmq, help
16:26 <matt1_> is anyone using infiniband with swift?
16:27 *** gasbakid__ has joined #openstack
16:28 <jeremyb> matt1_: swift should be used with local disks. why would you do otherwise?
16:28 <jeremyb> matt1_: (also no RAID)
16:28 *** gasbakid has quit IRC
16:29 *** eglynn has joined #openstack
16:29 <zykes-> anyone here know the cisco nexuses and fabric extenders?
16:29 *** mhzarei has joined #openstack
16:29 <matt1_> infiniband is a network type; storage will be onboard with no raid
16:31 <matt1_> zykes-: what about them?
16:32 <zykes-> matt1_: wanting to know what the diff is between the 5010 / 5020 and the 55xx
16:34 <zykes-> I'm wondering if for a deployment
16:34 <zykes-> a Cisco 5xxx with fabric extenders
16:34 <zykes-> is nice
16:34 *** krow has joined #openstack
16:35 *** eglynn has quit IRC
16:36 <jeremyb> matt1_: again, why would you use network-backed spindles to store swift data?
16:36 <jeremyb> matt1_: use local disks
16:36 <jeremyb> (also use hot-swappable disks!)
16:37 <matt1_> just wondering, but why the nexus over a ToR/core topology with Catalysts?
16:37 <matt1_> zykes-: using FCoE or something wacky?
16:38 <zykes-> no matt1_
16:39 <zykes-> more because of the 10 gig
16:39 <zykes-> and openflow
16:40 <matt1_> jeremyb: you aren't understanding, I AM using local hot-swappable disks with no raid! again, infiniband is a network interface/type and I am wondering if people are using it
16:40 <zykes-> basically matt1_ is wondering whether to use a Nexus as a core switch
16:41 <zykes-> and fabric extenders out to nodes
16:41 <jeremyb> matt1_: you mean as a transport between swift nodes?
16:41 <zykes-> 10 GbE extenders to compute and 1 gigabit to swift storage
16:41 <jeremyb> you could have just said that to begin with ;)
16:41 *** Kiall has quit IRC
16:42 <matt1_> I did! but sorry, I clearly didn't make it clear enough
16:43 *** stimble has joined #openstack
16:43 <jeremyb> answer: probably not using infiniband for the network layer? that's harder to guess. what's the use case?
16:43 <matt1_> zykes-: how big is the network, roughly?
16:43 <jeremyb> there are already >10PB clusters
16:45 <zykes-> matt1_: not big
16:45 <zykes-> but I want 10 GbE for compute / core
16:47 *** Karmaon has joined #openstack
16:47 <matt1_> basically looking at moving from 2U dell boxes which have 6 GbE connectivity to 4U boxes; would need about 10G for replication and 2G internet facing. doing 2*10G ethernet for each node is expensive, so I was thinking 2*GE for internet traffic and 10G infiniband for replication on each node
16:47 <zykes-> matt1_: for swift?
16:47 <zykes-> what's so expensive about 10 Gb?
16:47 <matt1_> zykes-: yep
16:48 *** littleidea has quit IRC
16:49 <zykes-> what's expensive about it?
16:50 <matt1_> zykes-: looked at the juniper EX series? big 10G aggregation switches and GE ToR switches too
16:51 *** jgriffith has joined #openstack
16:51 <matt1_> infiniband is about a quarter of the price of ethernet at 10G and 40G
16:52 <zykes-> matt1_: we're a Cisco / HP shop straight through
16:57 <zykes-> we burned ourselves the last time buying BNT
17:02 <zykes-> matt1_: have you looked at openflow-capable switches yet?
17:07 <matt1_> zykes-: sorta. Cisco want silly money, while a Taiwanese no-name vendor can sell us fantastic-value openflow switches but needs 1000+ volume of any given model, so at least 48,000 ports, which is not even close to our size at the moment; one 48-port switch covers 2 racks, so we would need at least 2000 racks
17:08 <zykes-> matt1_: what kind of switches are you on?
17:15 <matt1_> EX3300 ToR, EX4500 core
17:15 *** eglynn has quit IRC
17:17 <matt1_> zykes-: I assume you're on some other type of Cisco/HP stuff?
17:22 <zykes-> is that juniper?
17:22 <zykes-> Cisco / HP we use, yes
17:24 *** rmt has joined #openstack
17:25 <matt1_> EX is a juniper switch range, yea
17:26 <zykes-> links?
17:29 <matt1_> the 3300 is in the left-hand bar
18:20 <zykes-> matt1_: what networking would you get for 10G?
<uvirtbot> New bug: #992918 in keystone "ValueError: rounds too low (sha512_crypt requires >= 1000 rounds)" [Undecided,Fix committed]
18:27 <matt1_> if we wanted 10G ethernet for each server then probably a few EX8216s; they do 640 10G ports per chassis
18:29 <zykes-> you think nexus is ok?
18:37 <matt1_> zykes-: I haven't used one personally, but the people who use them seem to have no problems; I would definitely check out Arista and HP
18:45 <zykes-> don't think they would want arista here
18:48 <matt1_> ok then! glad I don't have to deal with that kind of stuff
matt1_arista and juniper kit is definitely what I tend to prefer, but Cisco/HP stuff I hear is good too. it's just personal/technical preference; certainly I feel juniper routers are better technically than Cisco ASRs18:52
*** clopez has joined #openstack18:57
*** dtroyer_zzz is now known as dtroyer18:59
*** dendro-afk is now known as dendrobates19:03
*** dtroyer is now known as dtroyer_zzz19:05
matt1_so I know I have asked this before but now America has logged in I'll ask it again19:06
*** ttrifonov is now known as ttrifonov_zZzz19:07
matt1_ Is anyone using or thinking about using InfiniBand for their internal node-to-node network19:08
zykes-for the management network19:09
zykes-shouldn't one do 2*gigabit for that ?19:09
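[editor's note: zykes-'s "2*gigabit" for the management network would typically be done with Linux NIC bonding. A minimal sketch for a Debian/Ubuntu-era /etc/network/interfaces; the interface names, address, and 802.3ad (LACP) mode are illustrative assumptions, not from the log, and 802.3ad requires matching switch-side configuration:]

```
# /etc/network/interfaces -- bond two GigE NICs for the management net
auto bond0
iface bond0 inet static
    address 192.168.0.11        # assumed management-network IP
    netmask 255.255.255.0
    bond-slaves eth0 eth1       # the two physical GigE ports
    bond-mode 802.3ad           # LACP; balance-alb also works without switch support
    bond-miimon 100             # link-monitoring interval in ms
```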
*** dtroyer_zzz is now known as dtroyer19:11
matt1_zykes-, for the 'replication' network19:12
zykes-matt1_: I'm on nova now19:12
*** SmoothSage has quit IRC19:12
matt1_zykes-, oh right yea I was talking about swift19:14
*** pakolinux has joined #openstack19:16
*** dachary has joined #openstack19:16
matt1_I was thinking an InfiniBand network for internal node-to-node replication and 2*GE for internet access19:16
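[editor's note: a split like matt1_ describes maps onto Swift's rsync-based object replication: bind the rsync daemon to the private replication interface (e.g. an IPoIB address) and register devices in the ring with their replication-network IPs. A minimal sketch; the addresses, port, and device name are illustrative assumptions, not from the log:]

```
# /etc/rsyncd.conf on each storage node -- bind rsync to the
# replication NIC so replication traffic stays off the public net
uid = swift
gid = swift
address = 10.1.0.11            # assumed replication-network IP

[object]
path = /srv/node
read only = false
lock file = /var/lock/object.lock

# Build the object ring using the replication-network IPs:
#   swift-ring-builder object.builder add z1-10.1.0.11:6000/sdb1 100
```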
*** azbarcea_ has joined #openstack19:17
*** azbarcea has quit IRC19:17
zykes-should a management network / communications network for nova be stacked ?19:24
*** epim has joined #openstack19:28
*** asavu has joined #openstack19:28
*** pakolinux has quit IRC19:29
*** epim has quit IRC19:32
*** eglynn has joined #openstack19:32
*** dachary has quit IRC19:37
*** dendrobates is now known as dendro-afk19:45
*** c0by has joined #openstack19:46
*** andresambrois has joined #openstack19:52
*** andresambrois has quit IRC19:57
*** Guest8412 has quit IRC20:01
*** hggdh has quit IRC20:02
*** littleidea has joined #openstack20:02
*** andresambrois has joined #openstack20:03
*** ryanpetrello has joined #openstack20:07
*** Neptu has joined #openstack20:08
*** rgoodwin is now known as rgoodwin_away20:12
*** matt1_ has quit IRC20:12
*** ryanpetrello has quit IRC20:12
*** reed has joined #openstack20:20
*** esm has joined #openstack20:20
*** esm is now known as Guest1112720:20
*** ryanpetrello has joined #openstack20:26
*** reed has quit IRC20:28
*** arBmind has quit IRC20:30
*** matt1_ has joined #openstack20:31
zykes-matt1_: here ?20:38
*** dendro-afk is now known as dendrobates20:39
*** dtroyer is now known as dtroyer_zzz20:42
*** Guest11127 has quit IRC20:43
*** matt1_ has quit IRC20:50
*** esm_ has joined #openstack20:56
*** dendrobates is now known as dendro-afk21:01
*** c0by has quit IRC21:03
*** c0by has joined #openstack21:03
*** natea_ has quit IRC21:06
*** CristianDM has joined #openstack21:06
*** esm_ has quit IRC21:15
*** fukushim_ has joined #openstack21:16
*** fukushima has quit IRC21:18
*** pmezard has quit IRC21:21
*** issackelly has quit IRC21:21
*** matt1_ has joined #openstack21:24
*** dendro-afk is now known as dendrobates21:27
*** matt1_ has quit IRC21:29
*** todon has quit IRC21:29
*** rmt has quit IRC21:30
*** davidha has quit IRC21:35
*** matt1_ has joined #openstack21:36
*** davidha has joined #openstack21:36
*** azbarcea_ has quit IRC21:51
*** fukushim_ has quit IRC22:01
*** fukushima has joined #openstack22:03
*** rnorwood has joined #openstack22:03
*** natea has joined #openstack22:08
*** ttrifonov_zZzz is now known as ttrifonov22:12
*** ttrifonov is now known as ttrifonov_zZzz22:14
*** rendar has quit IRC22:16
*** matt1_ has quit IRC22:19
*** asavu has quit IRC22:19
*** Entonian has joined #openstack22:21
*** stimble has quit IRC22:24
*** stimble has joined #openstack22:29
*** Entonian has quit IRC22:30
*** tmichael has joined #openstack22:32
*** nerdalert has joined #openstack22:34
*** nerdalert_ has joined #openstack22:36
*** pixelbeat has quit IRC22:37
*** nerdalert has quit IRC22:39
*** nerdalert_ is now known as nerdalert22:39
*** miclorb has joined #openstack22:40
*** joearnold has joined #openstack22:45
*** c0by has quit IRC22:45
*** c0by has joined #openstack22:45
*** natea has quit IRC22:52
*** gongys has joined #openstack22:59
*** joearnold has quit IRC23:04
*** andresambrois has quit IRC23:16
*** joearnold has joined #openstack23:17
*** andresambrois has joined #openstack23:30
*** albert23 has left #openstack23:30
*** nati_ueno has joined #openstack23:33
*** littleidea has quit IRC23:41
*** littleidea has joined #openstack23:43
*** joearnold has quit IRC23:45
*** koolhead17 has quit IRC23:46
uvirtbotNew bug: #1002093 in anvil "git dependencies in component pip-requiers" [Undecided,New]
*** willaerk has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at!