*** zodbot_ is now known as zodbot | 00:00 | |
*** fcoelho has quit IRC | 00:02 | |
*** rook has quit IRC | 00:07 | |
*** rook has joined #rdo | 00:07 | |
*** gildub has joined #rdo | 00:26 | |
*** jubapa has joined #rdo | 00:34 | |
*** jubapa has quit IRC | 00:39 | |
*** limao has joined #rdo | 00:39 | |
*** saneax is now known as saneax_AFK | 00:42 | |
*** mbound has joined #rdo | 00:47 | |
*** leanderthal|afk has quit IRC | 00:51 | |
*** mbound has quit IRC | 00:52 | |
*** chlong has quit IRC | 00:54 | |
*** rain has joined #rdo | 01:03 | |
*** rain is now known as Guest3563 | 01:04 | |
*** chlong has joined #rdo | 01:11 | |
*** chlong is now known as chlong_POffice | 01:21 | |
*** akshai has joined #rdo | 01:33 | |
*** akshai has quit IRC | 01:44 | |
*** coolsvap has joined #rdo | 01:59 | |
*** crossbuilder has joined #rdo | 02:35 | |
*** crossbuilder_ has quit IRC | 02:35 | |
*** ashw has joined #rdo | 02:53 | |
*** kambiz has quit IRC | 03:02 | |
*** paragan has joined #rdo | 03:04 | |
*** pilasguru has quit IRC | 03:08 | |
*** gbraad has joined #rdo | 03:08 | |
*** gbraad has quit IRC | 03:08 | |
*** gbraad has joined #rdo | 03:08 | |
*** ashw has quit IRC | 03:12 | |
*** rdas has joined #rdo | 03:14 | |
*** gbraad has quit IRC | 03:21 | |
*** gbraad has joined #rdo | 03:21 | |
*** imcleod has joined #rdo | 03:22 | |
*** morazi has quit IRC | 03:24 | |
*** Amita has joined #rdo | 03:30 | |
*** abregman has quit IRC | 03:38 | |
*** vimal has joined #rdo | 03:41 | |
*** pilasguru has joined #rdo | 04:05 | |
*** imcleod has quit IRC | 04:16 | |
*** nehar has joined #rdo | 04:23 | |
*** pilasguru has quit IRC | 04:29 | |
*** jubapa has joined #rdo | 04:34 | |
*** vimal has quit IRC | 04:38 | |
*** jubapa has quit IRC | 04:39 | |
*** saneax_AFK is now known as saneax | 04:41 | |
*** Amita has quit IRC | 04:44 | |
*** oshvartz has quit IRC | 04:52 | |
*** jhershbe__ has joined #rdo | 04:54 | |
*** abregman has joined #rdo | 04:57 | |
*** vimal has joined #rdo | 04:58 | |
*** Amita has joined #rdo | 04:59 | |
*** chandankumar has joined #rdo | 04:59 | |
*** Amita has quit IRC | 05:01 | |
*** Amita has joined #rdo | 05:12 | |
*** Alex_Stef has joined #rdo | 05:18 | |
*** Poornima has joined #rdo | 05:19 | |
*** ekuris has joined #rdo | 05:32 | |
*** satya4ever has joined #rdo | 05:41 | |
*** mosulica has joined #rdo | 05:51 | |
*** anilvenkata has joined #rdo | 05:57 | |
*** vaneldik has quit IRC | 05:57 | |
*** pradiprwt has joined #rdo | 05:59 | |
pradiprwt | Hi everyone, I want to make some post-deployment changes in RDO using director as a plugin. Can anyone please guide me on how to start developing a plugin for that? | 06:03 |
*** mosulica has quit IRC | 06:03 | |
*** pgadiya has joined #rdo | 06:04 | |
*** rcernin has joined #rdo | 06:05 | |
pradiprwt | How do I develop a plugin for RHEL-OSP? Is there any documentation? | 06:08 |
*** dgurtner has joined #rdo | 06:09 | |
*** dgurtner has joined #rdo | 06:09 | |
*** ganesh has joined #rdo | 06:13 | |
*** ganesh is now known as Guest57450 | 06:14 | |
*** Guest57450 is now known as gkadam | 06:19 | |
*** oshvartz has joined #rdo | 06:21 | |
*** Amita has quit IRC | 06:21 | |
*** Amita has joined #rdo | 06:22 | |
*** rasca has joined #rdo | 06:28 | |
*** edannon has joined #rdo | 06:29 | |
*** jprovazn has joined #rdo | 06:32 | |
*** Amita has quit IRC | 06:34 | |
*** Amita has joined #rdo | 06:36 | |
*** pcaruana has joined #rdo | 06:37 | |
*** tesseract- has joined #rdo | 06:41 | |
*** Amita has quit IRC | 06:41 | |
*** bandini has joined #rdo | 06:41 | |
*** smeyer has joined #rdo | 06:42 | |
*** florianf has joined #rdo | 06:44 | |
*** Amita has joined #rdo | 06:44 | |
*** mosulica has joined #rdo | 06:46 | |
*** milan has joined #rdo | 06:50 | |
*** jtomasek has joined #rdo | 06:55 | |
*** rdas has quit IRC | 06:58 | |
*** nmagnezi has joined #rdo | 06:59 | |
*** tshefi has joined #rdo | 07:00 | |
*** yfried has joined #rdo | 07:02 | |
*** hynekm has joined #rdo | 07:02 | |
*** vaneldik has joined #rdo | 07:03 | |
*** Alex_Stef has quit IRC | 07:08 | |
*** eliska has joined #rdo | 07:08 | |
*** zoli_gone-proxy is now known as zoliXXL | 07:14 | |
*** rdas has joined #rdo | 07:14 | |
*** apevec has joined #rdo | 07:14 | |
zoliXXL | good morning | 07:16 |
*** ccamacho has joined #rdo | 07:18 | |
*** jtomasek has quit IRC | 07:18 | |
*** jtomasek has joined #rdo | 07:19 | |
*** paramite has joined #rdo | 07:25 | |
*** tshefi has quit IRC | 07:27 | |
*** tshefi has joined #rdo | 07:28 | |
*** abregman_ has joined #rdo | 07:28 | |
*** garrett has joined #rdo | 07:31 | |
*** abregman has quit IRC | 07:31 | |
*** ihrachys has joined #rdo | 07:32 | |
*** kaminohana has quit IRC | 07:40 | |
*** gildub has quit IRC | 07:42 | |
*** gildub has joined #rdo | 07:43 | |
*** gildub has quit IRC | 07:48 | |
*** jpich has joined #rdo | 07:52 | |
*** Guest3563 is now known as leanderthal | 07:55 | |
*** leanderthal is now known as leanderthal|afk | 07:55 | |
*** pblaho has joined #rdo | 07:55 | |
*** abregman_ is now known as abregman_|mtg | 07:56 | |
number80 | o/ | 07:56 |
number80 | damn, I just saw the tripleo/swift ticket | 07:57 |
number80 | I'm glad that I didn't read it during the w-e :) | 07:57 |
*** jlibosva has joined #rdo | 07:57 | |
number80 | apevec, slagle: bottom line is to never test from *unreleased* packages | 07:58 |
*** jtomasek has quit IRC | 07:58 | |
number80 | *upgrades | 07:58 |
*** jtomasek has joined #rdo | 07:58 | |
apevec | well, slagle is testing it | 07:58 |
*** fzdarsky has joined #rdo | 07:59 | |
number80 | test from N releases, current milestones | 07:59 |
number80 | well, if they do, we can't support it | 07:59 |
number80 | that's just not possible unless we ship everything in monolithic packages | 07:59 |
*** tumble has joined #rdo | 08:00 | |
number80 | or that'd mean we'd have to review changes in DLRN carefully, accepting that CI may be broken for one or two days | 08:01 |
*** vaneldik has quit IRC | 08:01 | |
number80 | or not ninja introducing packages | 08:01 |
apevec | I'm not sure what you mean? If we want to CD, we need to support upgrades with trunk packages | 08:02 |
*** fragatina has joined #rdo | 08:02 | |
apevec | upgrade issue w/ swift rpm was only that obsoletes was too restrictive | 08:02 |
number80 | apevec: then no more ninja merges, no more changes accepted without careful review | 08:02 |
apevec | sure, that should be always the case, unless there's promotion breakage | 08:03 |
flepied1 | number80: apevec: what is the problem? | 08:03 |
number80 | well, that's the current case | 08:03 |
number80 | https://bugzilla.redhat.com/show_bug.cgi?id=1359377 | 08:03 |
openstack | bugzilla.redhat.com bug 1359377 in openstack-swift "can't yum update to new swift packages" [Unspecified,On_qa] - Assigned to apevec | 08:03 |
apevec | I see no big deal really | 08:04 |
number80 | apevec: technical debt will grow quickly, if we don't have flexibility | 08:05 |
*** abregman_|mtg has quit IRC | 08:05 | |
*** mcornea has joined #rdo | 08:05 | |
apevec | but what is the tech debt in this particular case? | 08:05 |
*** jtomasek has quit IRC | 08:06 | |
apevec | do you think unversioned Obsoletes ? | 08:06 |
number80 | Yes, but I can think of other non-corner cases | 08:06 |
number80 | new package with incorrect splitting that has to be kept forever | 08:06 |
apevec | Only use-case for Obsoletes <= is if we want to reintroduce obsoleted subpackage, which I don't think we ever should | 08:07 |
misc | error happen | 08:07 |
number80 | There could be good example | 08:07 |
*** mbound has joined #rdo | 08:07 | |
misc | but obsoletes should have a time where they are being removed, they also make computation of deps a bit more difficult, no ? | 08:07 |
*** flepied1 is now known as flepied | 08:07 | |
number80 | Yeah, but nobody ever risked to do that | 08:08 |
number80 | especially with DLRN, you can't know for sure, which snapshots people are using | 08:08 |
*** shardy has joined #rdo | 08:08 | |
misc | just put a policy to remove the obsoletes after X releases or something | 08:08 |
misc | unless you want upgrade to be supported on package level from all possible snapshot of the past | 08:09 |
misc | and if we want that, it has to be tested | 08:09 |
number80 | misc: except that we can have people out there upgrading after X+1 releases and it'll break too. | 08:09 |
misc | number80: sure, but did you promise to make it work ? | 08:10 |
*** fragatina has quit IRC | 08:10 | |
number80 | misc: no, that's the point | 08:10 |
number80 | but well, in the end, I'm just saying bad idea to do it. | 08:10 |
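For context, the Obsoletes trade-off being argued here can be sketched in a hypothetical spec fragment (illustrative only, not the actual openstack-swift packaging):

```spec
# Hypothetical fragment -- not the real openstack-swift spec.
# A versioned Obsoletes only replaces packages up to the stated EVR;
# anyone running a newer unreleased DLRN snapshot falls outside it
# and "yum update" breaks:
Obsoletes: openstack-swift-doc <= 2.9.0-1

# An unversioned Obsoletes catches every snapshot, but the obsoleted
# subpackage name can then never be safely reintroduced:
Obsoletes: openstack-swift-doc
```

As apevec argues above, the unversioned form is only a problem if the obsoleted subpackage ever needs to come back.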
flepied | apevec: number80: do you have time to discuss the way to solve the issue regarding tag building of some components this morning? | 08:14 |
apevec | flepied, what is your free slot? in 1h ? | 08:15 |
number80 | flepied: yes, but jpena was working on it though in a different context (he comes back next week) | 08:15 |
* apevec missed breakfast | 08:15 | |
number80 | oops, then, I remember what I forgot | 08:15 |
misc | mhh breakfast, good idea | 08:15 |
apevec | number80, he did the dlrn change required | 08:15 |
number80 | \o/ | 08:15 |
apevec | so we just need rdoinfo change, which I'll send review then we can discuss in gerrit | 08:16 |
number80 | I need to pick up this change in my virtualenv | 08:16 |
apevec | but call is also fine | 08:16 |
apevec | just not right now :0 | 08:16 |
flepied | apevec: number80: at 11h? we'll see if what jpena did is enough | 08:16 |
number80 | ack | 08:16 |
apevec | ack | 08:16 |
*** abregman has joined #rdo | 08:18 | |
*** vaneldik has joined #rdo | 08:18 | |
*** ushkalim has joined #rdo | 08:19 | |
*** vaneldik has quit IRC | 08:23 | |
*** vaneldik has joined #rdo | 08:25 | |
*** dcotton has joined #rdo | 08:26 | |
*** snecklifter_ has joined #rdo | 08:27 | |
*** Alex_Stef has joined #rdo | 08:27 | |
*** iranzo has joined #rdo | 08:34 | |
*** pilasguru has joined #rdo | 08:37 | |
*** devvesa has joined #rdo | 08:38 | |
*** hewbrocca-afk is now known as hewbrocca | 08:39 | |
*** pilasguru has quit IRC | 08:42 | |
*** iranzo has quit IRC | 08:43 | |
*** derekh has joined #rdo | 08:45 | |
*** gildub has joined #rdo | 08:47 | |
*** artem_panchenko_ has joined #rdo | 08:48 | |
*** milan has quit IRC | 08:55 | |
*** beagles has quit IRC | 08:56 | |
*** milan has joined #rdo | 08:59 | |
*** limao has quit IRC | 08:59 | |
*** egallen has joined #rdo | 09:01 | |
*** limao has joined #rdo | 09:03 | |
*** jtomasek has joined #rdo | 09:05 | |
*** hynekm has quit IRC | 09:05 | |
*** steveg_afk has quit IRC | 09:06 | |
*** hynekm has joined #rdo | 09:10 | |
*** Goneri has joined #rdo | 09:10 | |
*** milan has quit IRC | 09:16 | |
*** gszasz has joined #rdo | 09:16 | |
*** steveg_afk has joined #rdo | 09:18 | |
*** gfidente has joined #rdo | 09:22 | |
*** mvk has quit IRC | 09:27 | |
*** jubapa has joined #rdo | 09:32 | |
*** limao_ has joined #rdo | 09:36 | |
*** Son_Goku has quit IRC | 09:36 | |
*** limao has quit IRC | 09:36 | |
*** Son_Goku has joined #rdo | 09:37 | |
*** jubapa has quit IRC | 09:37 | |
*** Son_Goku has quit IRC | 09:38 | |
*** Son_Goku has joined #rdo | 09:39 | |
*** akrivoka has joined #rdo | 09:39 | |
*** Son_Goku has quit IRC | 09:40 | |
*** Son_Goku has joined #rdo | 09:41 | |
*** pgadiya has quit IRC | 09:43 | |
*** pgadiya has joined #rdo | 09:43 | |
*** satya4ever has quit IRC | 09:43 | |
*** paragan has quit IRC | 09:45 | |
*** fragatina has joined #rdo | 09:47 | |
*** satya4ever has joined #rdo | 09:52 | |
*** fragatina has quit IRC | 09:54 | |
*** tosky has joined #rdo | 09:55 | |
*** chem has joined #rdo | 09:57 | |
*** mvk has joined #rdo | 10:00 | |
*** limao_ has quit IRC | 10:01 | |
*** jubapa has joined #rdo | 10:02 | |
*** zoliXXL is now known as zoli|lunch | 10:03 | |
*** gildub has quit IRC | 10:04 | |
*** egallen has quit IRC | 10:06 | |
*** jubapa has quit IRC | 10:07 | |
*** _degorenko|afk is now known as degorenko | 10:08 | |
*** egallen has joined #rdo | 10:15 | |
*** Amita has quit IRC | 10:20 | |
*** Amita has joined #rdo | 10:22 | |
*** panda|sick is now known as panda | 10:22 | |
*** gchamoul is now known as gchamoul|afk | 10:26 | |
*** gchamoul|afk is now known as gchamoul | 10:27 | |
*** egallen has quit IRC | 10:29 | |
*** imcleod has joined #rdo | 10:43 | |
*** fragatina has joined #rdo | 10:45 | |
*** KarlchenK has joined #rdo | 10:45 | |
rdogerrit | hguemar proposed openstack/glanceclient-distgit: Added py2 and py3 subpackage http://review.rdoproject.org/r/1620 | 10:50 |
*** anshul has joined #rdo | 10:52 | |
apevec | weshay_afk, dmsimard - actually our probability to pass is 1/16 :( Both oooq jobs must hit the dusty chassis | 10:53 |
*** paragan has joined #rdo | 10:53 | |
apevec | in current run ha ended up on dusty and should succeed, but minimal is on gusty: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/531/ | 10:54 |
hewbrocca | heh | 10:55 |
hewbrocca | that would be funny if it wasn't so sad | 10:56 |
apevec | I really don't know what makes those AMDs so slow, but in the current run both oooq jobs started at the same time, | 10:56 |
apevec | 10:24:34 TASK [tripleo/overcloud : Deploy the overcloud] on AMD | 10:56 |
apevec | 10:08:37 TASK [tripleo/overcloud : Deploy the overcloud] on Intel | 10:57 |
hewbrocca | weird. So is that the difference, dusty is intel hardware and the rest are AMD? | 10:57 |
apevec | kbsingh, ^ AMDs must have some serious bottleneck, not just CPU? | 10:57 |
apevec | hewbrocca, yes | 10:58 |
hewbrocca | really odd | 10:58 |
hewbrocca | heh | 10:58 |
kbsingh | apevec: to be fair, I have asked about this a few times over the last few weeks - what is the real problem you guys have ? | 10:58 |
hewbrocca | we do have virt turned on in the bios, right? | 10:58 |
apevec | that's what I can tell from https://wiki.centos.org/QaWiki/PubHardware | 10:58 |
kbsingh | apevec: I am fairly sure your tests are busted | 10:58 |
kbsingh | apevec: typically, a slower machine means test take a bit longer, not just fail and top themselves | 10:58 |
apevec | kbsingh, sure they are :) | 10:58 |
kbsingh | has anyone looked then at the tests to see why it fails ? | 10:59 |
apevec | Intel works in upstream OpenStack to make them pass only on their machines :) | 10:59 |
kbsingh | apevec: can you quantify this ? if so, can i publicly state that RDO only aims to work on intel hardware ? | 10:59 |
apevec | kbsingh, I have provided some numbers in https://bugs.centos.org/view.php?id=11200 | 10:59 |
apevec | kbsingh, that was j/k | 10:59 |
kbsingh | apevec: so that helps, but what is the actual bottleneck ? | 11:00 |
apevec | that I'm not sure, dmsimard ^ can we monitor more closely machines while the oooq job is running? | 11:01 |
apevec | I think we have sar output, just a sec | 11:01 |
*** ade_b has joined #rdo | 11:02 | |
apevec | https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/403/ passed in ~1h on dusty, next 404 failed after ~2h on gusty | 11:04 |
hewbrocca | kbsingh: It wouldn't surprise me if the deployment is simply timing out | 11:05 |
hewbrocca | there are a bunch of different things that can cause that to happen | 11:05 |
apevec | hm, we don't have sar, that was in upstream jobs | 11:05 |
apevec | timeout on API requests was already increased to improve the odds, we could bump it some more I guess | 11:06 |
apevec | EmilienM, ^ double it? https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb | 11:07 |
kbsingh | someone must own these tests right ? | 11:07 |
kbsingh | to me the fundamental problem is how they are setup - it looks like you guys are trying to replicate a developer laptop rather than use the infra :/ | 11:07 |
apevec | yes,it's all in one | 11:08 |
apevec | in VMs | 11:08 |
kbsingh | for example - why are you embedding the whole stack on one host - if i am reading it right, you have nested, and then nesting inside the nested setup | 11:08 |
kbsingh | apevec: yeah :/ but it looks like you just need 3 machines to spread this out | 11:08 |
apevec | weshay_afk, ^ can we spread oooq to multiple nodes? | 11:09 |
apevec | kbsingh, but we might be hitting inter-node networking issues then | 11:09 |
kbsingh | the second thing to maybe look at is - rather than increas the timeout's - can you maybe just use the VMs in the rdo-cloud.cico ? | 11:09 |
apevec | that was biggest issue previously with the upstream multinode jobs | 11:09 |
hewbrocca | kbsingh: so, the reason we test it all on a single machine using virt is that we have better control of the environment | 11:10 |
kbsingh | those are E5 nodes, per core is about 40% faster than the E3's in the cico pool. | 11:10 |
kbsingh | well | 11:10 |
hewbrocca | we know exactly how the networking is configured, how the VMs are set up, etc. | 11:10 |
hewbrocca | I'd love to spread it out but it will require some additional work | 11:10 |
kbsingh | you have 7 network ports to play with in cico... there is some code in there that locks the ports 1-7 for the session, so you can do whatever you want really | 11:10 |
hewbrocca | which is fine, if that's what it takes | 11:10 |
kbsingh | just eth0 needs to stay where it is | 11:11 |
kbsingh | ( its not that simple, but if you only need another 3 network ports, from eth1 to eth3, it can be that simple ) | 11:11 |
*** pmyers has joined #rdo | 11:12 | |
kbsingh | so, i think there are a few options - dont disregard the rdo-cloud.cico as well, if that can be used. | 11:12 |
*** nyechiel has joined #rdo | 11:13 | |
kbsingh | the problem with having duffy only hand out specific nodes from specific pools is that you can never be sure about availability - and its very hard to have a warm-standby-cache that way | 11:15 |
*** imcleod has quit IRC | 11:15 | |
kbsingh | eg. what happens when allocation in that hardware pool gets to 80% - do we still keep handing them out ? if so, what happens when there are no machines - you might be waiting up to 24 hrs (hopefully not, but that's the reaper timeout) before a machine comes available | 11:16 |
*** gildub has joined #rdo | 11:17 | |
kbsingh | we can however try and run some hardware profiles and see if there is a huge difference, hopefully its not something like a numa/cpu screwup | 11:17 |
*** fragatina has quit IRC | 11:19 | |
*** spr2 has joined #rdo | 11:19 | |
apevec | kbsingh, thanks - I'll ask dmsimard and weshay_afk when they're online to look at adding some perf monitoring to job logs | 11:22 |
apevec | for now I don't have visibility where is the bottleneck | 11:22 |
kbsingh | i just tried to trawl through some of this - and can't work it out either :/ but then I've never looked at this code side of things before, and there is a lot of it! | 11:23 |
kbsingh | btw, does nova pin cpu/cores by default ? | 11:23 |
kbsingh | just noticed that on the E5 machines, i can get 20 - 22% more compute capacity from a single core, compared with a nova run VM on the same host using the same core | 11:24 |
hewbrocca | kbsingh: it definitely does not do that | 11:25 |
hewbrocca | pin cpu/cores | 11:25 |
hewbrocca | I believe you can ask it to try to do some things | 11:25 |
*** egafford has quit IRC | 11:26 | |
hewbrocca | But it's a bit strange because in our case Nova isn't actually creating the VMs | 11:26 |
hewbrocca | we pre-create them with oooq, and then Ironic "boots" them | 11:26 |
hewbrocca | which simulates a bare-metal deployment | 11:26 |
hewbrocca | So... IIUC... it might actually be possible to tell oooq to do nova pinning, if we thought that would help? | 11:27 |
kbsingh | let me play with this a bit more | 11:27 |
kbsingh | one thing we noticed in our devcloud was that having libvirt do a cpuset=auto made a noticeable difference; and we didnt need to faff around with numa and pinning and all that | 11:27 |
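The libvirt knob kbsingh mentions maps, if I read it right, to automatic NUMA placement in the guest XML (it relies on numad running on the host); a minimal, hypothetical fragment:

```xml
<!-- Hypothetical guest snippet; names are placeholders. -->
<domain type='kvm'>
  <name>oooq-virthost-guest</name>
  <!-- placement='auto' asks libvirt to query numad for a sensible
       cpuset, instead of manual NUMA pinning -->
  <vcpu placement='auto'>4</vcpu>
</domain>
```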
number80 | dmsimard: shouldn't the result of the voting gates be pushed in the review? | 11:28 |
*** aortega has joined #rdo | 11:28 | |
number80 | and not waiting non-voting jobs to finish? -> http://review.rdoproject.org/r/1620 | 11:28 |
kbsingh | hewbrocca: apevec: let me figure out some metrics on the hardware side and get back in a day or so | 11:29 |
apevec | number80, it's the same job | 11:29 |
*** shardy is now known as shardy_lunch | 11:30 | |
apevec | number80, so zuul will report when all is finished | 11:30 |
*** danielbruno has joined #rdo | 11:31 | |
*** danielbruno has quit IRC | 11:31 | |
*** danielbruno has joined #rdo | 11:31 | |
*** zoli|lunch is now known as zoli | 11:31 | |
*** zoli is now known as zoliXXL | 11:32 | |
hewbrocca | kbsingh: many thanks for all your help | 11:32 |
hewbrocca | I'll raise the "libvirt cpuset=auto" question with weshay_afk or trown|outtypewww when one of them comes on line | 11:33 |
number80 | apevec: then, we need separate jobs beyond the PoC | 11:34 |
number80 | it can be very long for trivial changes | 11:34 |
apevec | I think we want this to be voting actually | 11:36 |
apevec | you don't know change is trivial until it passes full CI | 11:36 |
number80 | at some point yes | 11:36 |
apevec | but there might be some zuul config magic to report parent job early? | 11:37 |
number80 | well fixing typo in description or SourceURL are :) | 11:37 |
kbsingh | hewbrocca: absolutely, hopefully we can get this fixed soon | 11:37 |
apevec | number80, what if you insert invalid unicode? :) | 11:37 |
apevec | anyway, let's check w/ David later | 11:37 |
number80 | ack | 11:38 |
apevec | and now, just for fun, oooq HA job failed on Intel, in overcloud deploy after 2h | 11:38 |
*** flepied1 has joined #rdo | 11:40 | |
hewbrocca | oof | 11:40 |
*** rhallisey has joined #rdo | 11:42 | |
apevec | InstanceDeployFailure: Failed to provision instance e0ed7164-28b8-4cac-8cf5-7911808e0e8d: Timeout reached while waiting for callback for node 100bd23b-a188-46b9-abcf-d75ccebe3280 | 11:42 |
*** flepied has quit IRC | 11:43 | |
apevec | that might be some nova/ironic race that trown mentioned | 11:43 |
*** dpeacock has joined #rdo | 11:45 | |
*** rbrady_ has quit IRC | 11:46 | |
*** rodrigods has quit IRC | 11:47 | |
*** weshay_afk is now known as weshay | 11:47 | |
apevec | so that's 25. TBD ironic/nova timeout | 11:47 |
*** thrash has quit IRC | 11:47 | |
*** rodrigods has joined #rdo | 11:47 | |
*** jdob has joined #rdo | 11:48 | |
apevec | I'll try to find LP# that trown mentioned | 11:48 |
*** pkovar has joined #rdo | 11:48 | |
apevec | weshay, ^ do you know about that nova/ironic timeout? | 11:48 |
hewbrocca | apevec: hold on, no, I've seen that before | 11:49 |
hewbrocca | I think that is an ipxe failure | 11:49 |
*** rdas has quit IRC | 11:50 | |
hewbrocca | Do *all* the virthosts we are operating on definitely have the correct repo setup that get the ipxe rpm we are shipping in RDO instead of the one on the baremetal machine? | 11:51 |
*** flepied has joined #rdo | 11:51 | |
weshay | sorry.. reading through | 11:52 |
*** flepied2 has joined #rdo | 11:52 | |
*** flepied1 has quit IRC | 11:52 | |
*** sdake has joined #rdo | 11:54 | |
*** flepied has quit IRC | 11:56 | |
*** tosky has quit IRC | 11:56 | |
*** fbo has quit IRC | 11:56 | |
*** tosky has joined #rdo | 11:57 | |
weshay | apevec, we see that timeout from time to time.. in the deploy log it manifests as Message: No valid host was found. There are not enough hosts available., | 11:57 |
*** milan has joined #rdo | 11:59 | |
*** gkadam has quit IRC | 12:00 | |
*** morazi has joined #rdo | 12:01 | |
weshay | kbsingh, btw.. running virt based tests on multiple hosts would be a cool thing for us to pull off, but there is nothing wrong w/ running the entire stack on one virthost | 12:01 |
*** ushkalim has quit IRC | 12:01 | |
hewbrocca | eurrgh, weshay that does sound vaguely like the nova/ironic race | 12:02 |
*** amuller has joined #rdo | 12:02 | |
*** mvk has quit IRC | 12:02 | |
weshay | hewbrocca, trying the latest test image on my minidell | 12:06 |
hewbrocca | weshay: great | 12:06 |
weshay | if we're hitting a race.. theoretically.. I could check the deploy log or nova logs for that error.. and retry in CI if we hit it | 12:08 |
weshay | if it works eventually after a few retries that would prove it | 12:08 |
hewbrocca | let's pass it by lucasagomes and see if I'm diagnosing it correctly | 12:08 |
hewbrocca | another solution might be to provision one extra virt host that isn't going to get booted | 12:09 |
*** beagles has joined #rdo | 12:09 | |
hewbrocca | (you just don't know which one it's going to be) | 12:09 |
weshay | hewbrocca, we did get 2+ full passes and promotes on liberty and mitaka over the weekend | 12:09 |
hewbrocca | Yeah I saw that! | 12:10 |
hewbrocca | that is excellent | 12:10 |
*** Guest32906 is now known as flaper87 | 12:10 | |
*** flaper87 has quit IRC | 12:10 | |
*** flaper87 has joined #rdo | 12:10 | |
weshay | hewbrocca, an extra virthost.. for like the compute virt guests? | 12:10 |
hewbrocca | yeah | 12:10 |
*** imcleod has joined #rdo | 12:10 | |
hewbrocca | If Ironic is going to deploy to 5 guests | 12:11 |
hewbrocca | then put 6 vms in its inventory | 12:11 |
hewbrocca | but -- check it with lucasagomes | 12:11 |
weshay | then we get into a bunch of networking issues | 12:11 |
weshay | using multiple virthosts is not viable atm.. | 12:12 |
hewbrocca | ahh... maybe not worth the trouble then | 12:12 |
hewbrocca | no, you could put all the guests on the same host | 12:12 |
hewbrocca | the same virthost | 12:12 |
*** kgiusti has joined #rdo | 12:12 | |
*** milan has quit IRC | 12:12 | |
weshay | because of the neutron network bridge etc.. we have to ensure somehow we're not bringing up tests w/ dupe ips | 12:12 |
weshay | etc | 12:12 |
hewbrocca | one of them won't ever get booted, so it won't consume any resources | 12:12 |
*** trown|outtypewww is now known as trown | 12:13 | |
weshay | sorry.. still early for me.. what advantage do we get w/ a guest that is not booted? | 12:13 |
hewbrocca | It avoids this Nova race | 12:13 |
weshay | trown, morning | 12:13 |
hewbrocca | well, lessens the impact of it | 12:14 |
weshay | oh i c | 12:14 |
hewbrocca | There's a time lag between the time Nova requests a host and the time Ironic delivers it | 12:14 |
hewbrocca | and in that time Nova can request the same host again | 12:14 |
*** gkadam has joined #rdo | 12:15 | |
*** gkadam is now known as Guest18250 | 12:15 | |
*** ushkalim has joined #rdo | 12:16 | |
kbsingh | weshay: just trying to see how best we can unblock on the perf issue, if it really is a perf issue | 12:16 |
*** Guest18250 is now known as gkadam | 12:16 | |
*** d0ugal_ is now known as d0ugal | 12:17 | |
hewbrocca | It retries a whole bunch of times, which is why this doesn't come up as often as it used to | 12:17 |
*** d0ugal has quit IRC | 12:17 | |
*** d0ugal has joined #rdo | 12:17 | |
kbsingh | weshay: but i dont know how much work is needed to use multihost v/s other options | 12:17 |
*** mvk has joined #rdo | 12:17 | |
*** rbrady has joined #rdo | 12:17 | |
hewbrocca | Well, we *shouldn't* have to do that -- I do like the idea of doing that because it'll let us test more/better configurations, but I don't think it is the first thing we should address | 12:18 |
trown | weshay: morning | 12:18 |
hewbrocca | trown: feeling better?? | 12:18 |
trown | not much actually, but its ok | 12:19 |
weshay | kbsingh, the networking there gets difficult. At any given moment we would have to know exactly what ips, even the bridge ips our systems are using. | 12:19 |
weshay | so we don't need this to merge? apevec | 12:19 |
weshay | https://review.openstack.org/#/c/346469/ | 12:19 |
*** jlibosva has quit IRC | 12:19 | |
*** jlibosva has joined #rdo | 12:20 | |
kbsingh | hewbrocca: ack | 12:21 |
*** thrash has joined #rdo | 12:22 | |
*** thrash has quit IRC | 12:22 | |
*** thrash has joined #rdo | 12:22 | |
hewbrocca | blearrgh | 12:23 |
hewbrocca | trown: if you need to go away and rest please do so | 12:23 |
weshay | trown, fyi.. I have a ha overcloud deploying in progress atm w/ http://trunk.rdoproject.org/centos7/f0/0e/f00ed98048a1a24e55dfea64171771ff73216335_969c6c49 | 12:23 |
*** nmagnezi_ has joined #rdo | 12:25 | |
*** nmagnezi has quit IRC | 12:26 | |
weshay | hewbrocca, looks like I did not hit the race on my deployment.. | 12:26 |
*** hrw has quit IRC | 12:26 | |
hewbrocca | weshay: excellent | 12:26 |
*** hrw has joined #rdo | 12:27 | |
*** eliska has quit IRC | 12:27 | |
number80 | need reviewer for this rdoinfo change: https://review.rdoproject.org/r/#/c/1685/ | 12:30 |
*** kaminohana has joined #rdo | 12:31 | |
number80 | (it's dummy release to allow usage of rdopkg to build common deps builds | 12:31 |
*** jprovazn has quit IRC | 12:35 | |
*** hynekm has quit IRC | 12:36 | |
apevec | weshay, 346469 should still be merged, I just took it temporary into RPM until upstream gate is fixed | 12:37 |
apevec | (unrelated setuptools/devstack thing) | 12:37 |
weshay | nice.. /me loves hot wiring things | 12:37 |
rdogerrit | Merged openstack/glanceclient-distgit: Added py2 and py3 subpackage http://review.rdoproject.org/r/1620 | 12:38 |
apevec | weshay, nice thing is that we'll get notification (FTBFS) as soon as it is merged upstream | 12:39 |
apevec | so it's managed hot wiring | 12:40 |
weshay | cool man +1 | 12:40 |
*** rlandy has joined #rdo | 12:41 | |
*** eliska has joined #rdo | 12:41 | |
lucasagomes | hewbrocca, reading (/me was having lunch) | 12:43 |
hewbrocca | lucasagomes: thank you sir | 12:43 |
*** Son_Goku has quit IRC | 12:44 | |
*** mosulica has quit IRC | 12:46 | |
*** sasha2 has joined #rdo | 12:47 | |
*** vimal has quit IRC | 12:47 | |
*** shardy_lunch is now known as shardy | 12:48 | |
*** morazi has quit IRC | 12:50 | |
*** ohochman has joined #rdo | 12:50 | |
*** hynekm has joined #rdo | 12:50 | |
*** mosulica has joined #rdo | 12:51 | |
*** fragatina has joined #rdo | 12:52 | |
*** nehar has quit IRC | 12:52 | |
*** egafford has joined #rdo | 12:52 | |
*** abregman is now known as abregman|mtg | 12:52 | |
lucasagomes | weshay, yeah so, this race is very annoying. I was checking the work in nova that addresses the problem but it's not completely merged yet https://blueprints.launchpad.net/nova/+spec/host-state-level-locking | 12:52 |
*** shaunm has joined #rdo | 12:54 | |
hewbrocca | lucasagomes: I'm glad they've finally admitted it's a problem at least | 12:54 |
lucasagomes | weshay, problem is that all the "workarounds" to mitigate the problem are not very good. One is like hewbrocca proposed, to have more hosts available so when the nova scheduler picks one node twice for deployment the retry filter can be activated and fall back to a node that is idle | 12:55 |
lucasagomes | hewbrocca, yeah | 12:55 |
lucasagomes | weshay, another way would be to not try to deploy all the nodes at the same time | 12:55 |
weshay | lucasagomes, is that a deployment setting? | 12:55 |
lucasagomes | if we could do it in batches... e.g. deploy 3 and 3 more | 12:56 |
weshay | to stagger it? | 12:56 |
lucasagomes | weshay, the problem is that there's no lock between the nova scheduler and the resource tracker | 12:56 |
lucasagomes | weshay, so nova can pick the same node for 2 different instances | 12:56 |
*** imcleod has quit IRC | 12:56 | |
lucasagomes | weshay, it becomes more apparent in our deployment scenario because we usually deploy all the nodes at the same time | 12:57 |
*** dneary has joined #rdo | 12:57 | |
*** zaneb has joined #rdo | 12:57 | |
lucasagomes | where in normal nova usage you always have spare nodes, so the retry filter ends up covering this problem | 12:57 |
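The race lucasagomes describes (no lock between the nova scheduler and the resource tracker, rescued only by the retry filter when spare nodes exist) can be sketched roughly like this; a toy model, not nova code:

```python
# Toy model of the scheduler race: two requests schedule from the same
# stale view of free nodes, and only the retry filter rescues the
# loser -- provided a spare candidate remains.

def schedule(stale_view, claimed, max_attempts=3):
    """Claim a node, starting from a stale snapshot of free nodes."""
    view = list(stale_view)
    for _ in range(max_attempts):
        if not view:
            return None  # surfaces as "No valid host was found"
        pick = view.pop(0)
        if pick in claimed:  # lost the race: node already taken
            continue         # retry filter moves to the next candidate
        claimed.add(pick)
        return pick
    return None

claimed = set()
snapshot = ["node-1", "node-2"]          # both requests see this view

first = schedule(snapshot, claimed)      # claims node-1
second = schedule(snapshot, claimed)     # picks node-1, retries, gets node-2
no_spare = schedule(["node-1"], {"node-1"}, max_attempts=1)  # retries exhausted
```

Deploying every node at once, as the oooq jobs do, is exactly the case where the snapshot covers all capacity and a retry can run out of candidates.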
weshay | lucasagomes, k | 12:57 |
lucasagomes | weshay, that blueprint I pasted addresses this problem, the code seems to be up but not merged yet | 12:58 |
weshay | so if you run w/ HA.. could we deploy one controller at a time? | 12:58 |
*** jprovazn has joined #rdo | 12:58 | |
*** _elmiko is now known as elmiko | 12:59 | |
*** imcleod has joined #rdo | 12:59 | |
lucasagomes | not sure, I haven't tried. Maybe we can consult the guys who worked on the HA modules | 12:59 |
weshay | lucasagomes, the other option I was considering was to check the deployment log.. and if we get "not enough hosts".. delete the stack and retry | 13:00 |
*** puzzled has joined #rdo | 13:01 | |
*** eliska has quit IRC | 13:01 | |
rbowen | Good morning #rdo | 13:01 |
lucasagomes | weshay, "no valid host" you mean? But that's the thing, this shouldn't be a problem | 13:02 |
weshay | ya | 13:02 |
lucasagomes | because of the retry filter it should try again and get another node :-/ | 13:02 |
lucasagomes | you can try to increase the number of attempts as well | 13:02 |
weshay | hewbrocca, apevec good details here | 13:02 |
lucasagomes | but again, all that just mitigates the problem, it does not solve it | 13:02 |
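[Editor's note: the knob lucasagomes mentions for "increase the number of attempts" is the scheduler retry budget. A hedged sketch of what that looks like in the undercloud's nova.conf for this era of nova (the default was 3; the value here is illustrative):]

```ini
# /etc/nova/nova.conf on the undercloud -- give the RetryFilter more
# attempts when the scheduler picks an already-claimed Ironic node.
# This only papers over the race: more retries, not a real lock.
[DEFAULT]
scheduler_max_attempts = 10
```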
*** Alex_Stef has quit IRC | 13:03 | |
weshay | lucasagomes, k.. just to make sure I'm looking at the right setting.. can you paste it | 13:03 |
hewbrocca | lucasagomes: I suspect we don't hit it all that often | 13:03 |
hewbrocca | and increasing the retry number would help | 13:03 |
lucasagomes | hewbrocca, yeah, I think we have some code in the nova ironic driver also trying to help with this | 13:03 |
lucasagomes | but yeah :-/ | 13:03 |
apevec | weshay, lucasagomes - thanks, copy/pasted to etherpad :) Is there an ironic LP# for this? | 13:05 |
*** unclemarc has joined #rdo | 13:05 | |
*** snecklifter has quit IRC | 13:06 | |
*** eliska has joined #rdo | 13:06 | |
*** jhershbe__ has quit IRC | 13:07 | |
*** ushkalim has quit IRC | 13:07 | |
lucasagomes | apevec, not in Ironic, because the problem is in nova | 13:07 |
lucasagomes | apevec, it's just more apparent in Ironic because of the usage of it | 13:07 |
lucasagomes | in our case, where we try to deploy all nodes at the same time | 13:08 |
*** ashw has joined #rdo | 13:08 | |
apevec | lucasagomes, do you know if there is a nova LP# for this? | 13:13 |
*** manous has joined #rdo | 13:13 | |
*** Amita has quit IRC | 13:13 | |
apevec | or is it just that spec? | 13:13 |
lucasagomes | apevec, it's a spec: http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/host-state-level-locking.html | 13:13 |
*** jcoufal has joined #rdo | 13:13 | |
lucasagomes | apevec, and the patches proposed are linked here in the blueprint: https://blueprints.launchpad.net/nova/+spec/host-state-level-locking | 13:13 |
apevec | and: This looks more or less stalled/abandoned and we're nearly at non-priority feature freeze (6/30) so I'm going to defer this from Newton. -- mriedem 20160629 | 13:14 |
hewbrocca | apevec: yeah do not expect a fix in Newton | 13:14 |
*** koko has joined #rdo | 13:15 | |
*** koko is now known as Guest98361 | 13:16 | |
*** dustins has joined #rdo | 13:17 | |
*** akshai has joined #rdo | 13:17 | |
*** manous has quit IRC | 13:17 | |
*** manousd has joined #rdo | 13:18 | |
EmilienM | it seems like python-mistralclient is broken in stable/mitaka | 13:18 |
*** JuanDRay has joined #rdo | 13:18 | |
EmilienM | http://logs.openstack.org/95/346195/1/check/gate-puppet-mistral-puppet-beaker-rspec-centos-7/82b59d4/console.html#_2016-07-24_02_25_02_614462 | 13:18 |
EmilienM | or it's a virtual package? | 13:18 |
*** jpich_ has joined #rdo | 13:18 | |
*** manousd has quit IRC | 13:18 | |
*** Guest98361 has quit IRC | 13:18 | |
*** Alex_Stef has joined #rdo | 13:18 | |
*** ushkalim has joined #rdo | 13:19 | |
rdogerrit | Fabien Boucher created config: Initial commit to activate the stable branch build on CBS http://review.rdoproject.org/r/1727 | 13:19 |
*** jmelvin has joined #rdo | 13:19 | |
chem | EmilienM: it lacks the openstack tag in the manifests/clients, I'll try adding it | 13:20 |
*** dyasny has joined #rdo | 13:20 | |
rdogerrit | Fabien Boucher proposed config: Initial commit to activate the stable branch build on CBS http://review.rdoproject.org/r/1727 | 13:20 |
EmilienM | chem: yeah I was looking at this | 13:21 |
EmilienM | chem: was it backported? | 13:21 |
*** jpich has quit IRC | 13:21 | |
chem | EmilienM: arghh ... no | 13:21 |
EmilienM | should we do it? | 13:22 |
chem | EmilienM: we could give it a try | 13:22 |
*** snecklifter has joined #rdo | 13:22 | |
chem | EmilienM: so the problem doesn't exist in master ? | 13:23 |
EmilienM | chem: I don't understand, we have https://github.com/openstack/puppet-openstack-integration/blob/stable/mitaka/manifests/init.pp#L18-L20 | 13:23 |
EmilienM | so it should not fail | 13:23 |
chem | EmilienM: no tag https://github.com/openstack/puppet-mistral/blob/master/manifests/client.pp#L18 | 13:24 |
EmilienM | nice catch | 13:24 |
EmilienM | I'm adding it | 13:24 |
chem | EmilienM: I'm adding it right now and backporting | 13:24 |
EmilienM | chem: okk thanks ! | 13:24 |
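[Editor's note: the fix chem and EmilienM settle on is adding the `openstack` tag to the client package resource. A hedged sketch of what the patched puppet-mistral `manifests/client.pp` would look like, following the usual puppet-openstack client-class pattern (parameter and package names here are illustrative, not copied from the actual repo):]

```puppet
class mistral::client (
  $package_ensure = 'present',
) {
  package { 'python-mistralclient':
    ensure => $package_ensure,
    # The missing piece: the 'openstack' tag is what lets
    # puppet-openstack-integration's Package<| tag == 'openstack' |>
    # collector find (and version-pin) this package.
    tag    => ['openstack', 'mistral-package'],
  }
}
```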
*** pgadiya has quit IRC | 13:26 | |
*** jhershbe__ has joined #rdo | 13:26 | |
*** mlammon has joined #rdo | 13:27 | |
*** anshul has quit IRC | 13:29 | |
*** anshul has joined #rdo | 13:30 | |
*** julim has joined #rdo | 13:31 | |
*** jpich_ is now known as jpich | 13:31 | |
*** vaneldik has quit IRC | 13:31 | |
*** weshay is now known as weshay_brb | 13:32 | |
*** ayoung has joined #rdo | 13:32 | |
*** jhershbe__ has quit IRC | 13:33 | |
*** trown is now known as trown|brb | 13:34 | |
*** pilasguru has joined #rdo | 13:34 | |
*** thrash has quit IRC | 13:34 | |
*** hynekm has quit IRC | 13:35 | |
*** hynekm has joined #rdo | 13:35 | |
*** mgarciam has joined #rdo | 13:36 | |
*** saneax is now known as saneax_AFK | 13:38 | |
*** snecklifter has quit IRC | 13:38 | |
*** zaneb has quit IRC | 13:39 | |
*** zaneb has joined #rdo | 13:39 | |
*** READ10 has joined #rdo | 13:40 | |
*** Alex_Stef has quit IRC | 13:40 | |
*** ekuris has quit IRC | 13:41 | |
*** thrash has joined #rdo | 13:42 | |
*** thrash has quit IRC | 13:42 | |
*** thrash has joined #rdo | 13:42 | |
rdogerrit | hguemar proposed openstack/aodhclient-distgit: Fixed py2 and py3 subpackage http://review.rdoproject.org/r/1625 | 13:42 |
*** jeckersb_gone is now known as jeckersb | 13:43 | |
*** weshay_brb is now known as weshay | 13:44 | |
*** morazi has joined #rdo | 13:44 | |
*** sdake_ has joined #rdo | 13:44 | |
*** sdake has quit IRC | 13:45 | |
*** jhershbe__ has joined #rdo | 13:46 | |
*** vaneldik has joined #rdo | 13:49 | |
*** Poornima has quit IRC | 13:49 | |
*** chandankumar has left #rdo | 13:50 | |
number80 | https://bugzilla.redhat.com/show_bug.cgi?id=1359820 | 13:50 |
openstack | bugzilla.redhat.com bug 1359820 in Package Review "Review Request: python-cloudkittyclient - Client library for CloudKitty" [Medium,New] - Assigned to nobody | 13:50 |
number80 | this is an existing package that was never reviewed so should be quick | 13:50 |
*** hynekm has quit IRC | 13:51 | |
*** milan has joined #rdo | 13:52 | |
*** READ10 has quit IRC | 13:53 | |
weshay | apevec, the yum.log on the overcloud is empty because the overcloud nodes are images w/ everything pre-installed. We're going to make sure we have a rpm_list.txt in /var/log | 13:54 |
apevec | weshay, thanks, I was going to ask for that as I couldn't find it | 13:55 |
apevec | weshay, also sar or something like that during test execution, to see where would be bottleneck | 13:55 |
*** Pharaoh_Atem has quit IRC | 13:56 | |
trown|brb | weshay: I failed to reproduce the Ironic issue, but the undercloud is very slammed during that step. LA ~9 | 13:57 |
*** trown|brb is now known as trown | 13:57 | |
trown | which I think means we have no chance outside of dusty | 13:57 |
*** richm has joined #rdo | 13:58 | |
*** zoliXXL is now known as zoli|mtg | 13:58 | |
*** vimal has joined #rdo | 13:58 | |
apevec | yeah | 13:58 |
weshay | apetrich, heh.. sorry I didn't realize where it was https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/overcloud-controller-0/var/log/extra/rpm-list.txt.gz | 13:59 |
apevec | trown, funnily enough, https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/405/ did pass today on n10.gusty | 13:59 |
*** Amita has joined #rdo | 13:59 | |
weshay | /var/log/extra | 13:59 |
apevec | ahh | 13:59 |
trown | apevec: ya the minimal job should pass there, but it might be a bit more racy | 13:59 |
*** ushkalim has quit IRC | 14:00 | |
trown | apevec: but ha almost never passes outside of dusty | 14:00 |
apevec | trown, ok, so odds a bit better than 1/16 :) | 14:00 |
*** JuanDRay has quit IRC | 14:00 | |
*** fultonj has joined #rdo | 14:00 | |
*** ohochman has left #rdo | 14:00 | |
*** Son_Goku has joined #rdo | 14:00 | |
trown | ya, but not much, because there are still odd failures on the weirdo jobs too, and those all compound since we have to have 100% pass at the same time | 14:00 |
apevec | would be nice to have all those job facts in the db, so we could query for stats | 14:01 |
trown | I think we need to fix that part | 14:01 |
trown | being able to retry a single job would save a lot of pain | 14:01 |
*** jhershbe__ has quit IRC | 14:01 | |
apevec | trown, yes, can we do that ? | 14:01 |
*** pilasguru has quit IRC | 14:02 | |
trown | not with multi-job how we have it now | 14:03 |
*** Son_Goku has quit IRC | 14:03 | |
dmsimard | trown, apevec: mitaka just passed twice recently | 14:03 |
dmsimard | also, hello #rdo | 14:04 |
trown | morning dmsimard | 14:04 |
apevec | dmsimard, hey | 14:04 |
apevec | dmsimard, yeah, pure luck? :) | 14:04 |
dmsimard | I don't know. Were the quickstart jobs always *that* flappy ? | 14:04 |
dmsimard | It seems much more of a problem recently | 14:05 |
*** nehar has joined #rdo | 14:05 | |
rbowen | Hi, dmsimard | 14:05 |
trown | dmsimard: ya something changed in the last month (and was backported to mitaka) that increased resource usage | 14:06 |
*** nehar has quit IRC | 14:07 | |
trown | from my unscientific observation, mostly CPU | 14:07 |
dmsimard | trown: I noticed somewhere we were running quickstart with 1 worker of everything everywhere | 14:07 |
trown | but liberty jobs dont have the issue | 14:07 |
dmsimard | Are we RAM constrained ? Could we increase the workers ? | 14:07 |
trown | we are definitely RAM constrained too, and increasing workers if we have a CPU bottleneck wont help either | 14:08 |
*** fragatina has quit IRC | 14:08 | |
*** Pharaoh_Atem has joined #rdo | 14:08 | |
*** fragatina has joined #rdo | 14:08 | |
dmsimard | Well what I mean is that one worker and its threads can only process so many requests simultaneously and could be generating a lot of queueing | 14:09 |
dmsimard | especially if not behind apache | 14:09 |
dmsimard | I understand the CPUs aren't super awesome, but the load can also be artificially generated by lack of workers and processes just waiting and retrying all the time. | 14:11 |
trown | well overcloud nodes are only given 1 cpu | 14:12 |
trown | and undercloud does not have any specific configuration, so should have workers equal to the number of cpus | 14:13 |
dmsimard | Also, do we load kvm_amd and nested if the node happens to be an AMD machine ? | 14:13 |
trown | which is 4 for minimal and 2 for ha | 14:13 |
trown | dmsimard: ya, https://github.com/openstack/tripleo-quickstart/blob/master/roles/parts/kvm/tasks/main.yml#L61-L64 | 14:14 |
*** ushkalim has joined #rdo | 14:14 | |
dmsimard | cool | 14:14 |
jjoyce | number80++ | 14:16 |
zodbot | jjoyce: Karma for hguemar changed to 12 (for the f24 release cycle): https://badges.fedoraproject.org/tags/cookie/any | 14:16 |
trown | dmsimard: and looking at 09:49:12 on https://ci.centos.org/view/rdo/view/promotion-pipeline/job/tripleo-quickstart-promote-master-delorean-minimal/405/consoleFull we are successfully determining AMD and loading appropriate module | 14:17 |
*** pilasguru has joined #rdo | 14:17 | |
apevec | trown, isn't that loaded automatically? | 14:18 |
apevec | but modprobe doesn't hurt | 14:18 |
*** yfried has quit IRC | 14:19 | |
weshay | trown, https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/undercloud/var/log/extra/lsmod.txt.gz | 14:19 |
trown | hmm dont know, ansible tasks all have "changed=true", but not sure if that is meaningful | 14:19 |
*** anshul has quit IRC | 14:19 | |
trown | weshay: that is from the undercloud | 14:20 |
*** abregman|mtg has quit IRC | 14:20 | |
trown | weshay: the undercloud is not a virthost :) | 14:20 |
weshay | oh sec | 14:20 |
weshay | ha | 14:20 |
*** abregman has joined #rdo | 14:20 | |
social | hmm do we have some master overcloud deployment jenkins jobs? | 14:20 |
weshay | https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-minimal-405/172.19.2.138/var/log/extra/lsmod.txt.gz | 14:20 |
trown | social: https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-master/ | 14:21 |
weshay | kvm_amd 65072 12 | 14:21 |
weshay | dmsimard, ^ | 14:21 |
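[Editor's note: what the quickstart task linked above does, and what weshay's lsmod output confirms, amounts to loading the vendor KVM module with nested virtualization enabled. A hedged equivalent for making that persistent on an AMD virthost (this is a standard modprobe.d fragment, not the quickstart's own mechanism):]

```ini
# /etc/modprobe.d/kvm_amd.conf -- enable nested virt so the overcloud
# VMs can themselves run KVM guests. Use kvm_intel on Intel hosts.
# Verify after loading with: cat /sys/module/kvm_amd/parameters/nested
options kvm_amd nested=1
```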
trown | I just got a successful local ha run with the last CI image, so we only have infra issues | 14:22 |
apevec | trown, nested kvm might actually help in overcloud-novacompute | 14:22 |
apevec | trown, yep | 14:22 |
apevec | trown, and that nova/ironic race, sometimes | 14:22 |
*** dmsimard sets mode: +v rdogerrit | 14:22 | |
trown | apevec: ya logs in the etherpad dont look like nova/ironic race to me | 14:22 |
trown | apevec: looks like slow undercloud failed to get callback from i-p-a ramdisk | 14:23 |
apevec | trown, 25. ? | 14:23 |
trown | apevec: but we do hit the nova/ironic race too sometimes, it just looks different than that | 14:23 |
apevec | please update | 14:23 |
*** gildub has quit IRC | 14:23 | |
trown | j | 14:24 |
trown | k | 14:24 |
apevec | where can we see more i-p-a logs? | 14:24 |
social | trown: https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-ha-401/undercloud/var/log/ironic/ironic-conductor.log.gz | 14:24 |
social | trown: Stderr: u"/bin/dd: error writing '/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2008-10.org.openstack:a328351a-bdde-4625-82d7-490151848031-lun-1-part2': Input/output error\n4608+0 records in\n4607+0 records out\n4830789632 bytes (4.8 GB) copied, 628.016 s, 7.7 MB/s\n" | 14:24 |
social | trown: this is an issue with neutron using ovs native instead of vsctl | 13:24 |
*** mvk has quit IRC | 14:25 | |
social | trown: it causes ovs to timeout and reconnect which should not disrupt anything but it drops all the connections eg dd fails for ironic | 14:25 |
social | trown: so we have it also on CI | 14:25 |
social | jlibosva: ^^ | 14:25 |
apevec | social, is there LP# ? | 14:25 |
social | apevec: I found it today | 14:25 |
social | still getting data | 14:25 |
trown | nice one | 14:26 |
*** mvk has joined #rdo | 14:26 | |
social | we have workaround | 14:26 |
*** fultonj_ has joined #rdo | 14:28 | |
*** fultonj has quit IRC | 14:28 | |
*** eliska has quit IRC | 14:29 | |
*** jhershbe has joined #rdo | 14:30 | |
rdogerrit | Jon Schlueter created openstack/glance-distgit: Conditionalize -doc building http://review.rdoproject.org/r/1728 | 14:30 |
jschlueter | apevec: ^^ | 14:30 |
*** ohochman has joined #rdo | 14:33 | |
dmsimard | number80: https://review.rdoproject.org/r/#/c/1620/ lol... the one project out of the 1000 that has integration jobs | 14:34 |
dmsimard | but hey, it passed too ! | 14:34 |
*** vaneldik has quit IRC | 14:34 | |
*** ohochman has left #rdo | 14:34 | |
number80 | dmsimard: yeah, just wondering if we can separate the votes | 14:35 |
*** Ryjedo has joined #rdo | 14:35 | |
number80 | (in this case, it's ok, but in some cases, it can get very long) | 14:35 |
*** pradiprwt has quit IRC | 14:35 | |
rdogerrit | Merged openstack/aodhclient-distgit: Fixed py2 and py3 subpackage http://review.rdoproject.org/r/1625 | 14:38 |
*** jcoufal_ has joined #rdo | 14:39 | |
*** beekneemech is now known as bnemec | 14:40 | |
*** paramite is now known as paramite|afk | 14:41 | |
*** jcoufal has quit IRC | 14:42 | |
*** ushkalim has quit IRC | 14:44 | |
dmsimard | number80: hm, not sure we can separate the votes, it's a set of jobs for a commit | 14:44 |
dmsimard | number80: I can see how the integration jobs can take some time to build, but at least they're just in check and non-voting right now | 14:44 |
dmsimard | number80: I can check if we can bump the specs of the VMs we're running tests on | 14:45 |
dmsimard | but if we bump the specs, it'll result in less overall capacity | 14:45 |
dmsimard | i.e, we currently run on 4 cores/8GB RAM VMs at a quota of 10 VMs we can relatively safely bump to 20, If we bump specs to 8 cores, I'm not so sure we could as easily bump to 20. | 14:46 |
dmsimard | *but* -- here, we could try switching to 8 cores and perhaps reconsider if we happen to have longer queues that could be alleviated by more concurrent jobs | 14:46 |
*** nmagnezi_ is now known as nmagnezi | 14:49 | |
*** kaminohana has quit IRC | 14:49 | |
*** Alex_Stef has joined #rdo | 14:49 | |
weshay | trown, so re: the master etherpad.. #25 is really caused by the above issue w/ the ipa image? | 14:49 |
trown | weshay: looks like it | 14:50 |
trown | social: is that issue racy? | 14:50 |
rdogerrit | Jon Schlueter proposed openstack/glance-distgit: Conditionalize -doc building http://review.rdoproject.org/r/1728 | 14:50 |
*** vimal has quit IRC | 14:51 | |
number80 | dmsimard: well, the thing is, we can run more jobs in parallel but since they take much longer, it could still result in jamming the queue | 14:51 |
number80 | not sure how we can fix that | 14:51 |
*** mburned has joined #rdo | 14:52 | |
flepied2 | apevec: I added a card in the backlog regarding our discussion this morning: https://trello.com/c/guK9Ag12/157-this-column-represents-work-that-the-team-has-taken-a-first-pass-through-and-added-some-details-no-commitments-to-completing-the# | 14:52 |
dmsimard | number80: yeah. I'll ask for bumping the specs within our current quota and we can reconsider later. | 14:54 |
dmsimard | number80: upstream jobs in openstack-infra gate finish in around 30 minutes which I feel is completely reasonable | 14:55 |
*** vimal has joined #rdo | 14:55 | |
*** mosulica has quit IRC | 14:56 | |
*** fbo has joined #rdo | 14:56 | |
number80 | dmsimard: it was more than 1 hour here | 14:56 |
number80 | 1h30/2h | 14:56 |
*** ushkalim has joined #rdo | 14:57 | |
dmsimard | number80: for that particular glanceclient job I see 45 mins + dlrn-rpmbuild | 14:57 |
*** tshefi has quit IRC | 14:57 | |
number80 | zuul displayed 82 minutes and it was still running | 14:57 |
*** dtrainor_ has joined #rdo | 14:57 | |
*** fragatina has quit IRC | 14:58 | |
*** fragatin_ has joined #rdo | 14:58 | |
*** Guest98668 is now known as melwitt | 14:58 | |
*** dtrainor has joined #rdo | 14:59 | |
social | trown: so far I always reproduced it | 15:00 |
*** rcernin has quit IRC | 15:00 | |
* trown retries | 15:00 | |
*** JuanDRay has joined #rdo | 15:01 | |
*** dtrainor_ has quit IRC | 15:03 | |
*** Amita has quit IRC | 15:03 | |
*** milan has quit IRC | 15:04 | |
*** linuxgeek_ has joined #rdo | 15:05 | |
*** Alex_Stef has quit IRC | 15:05 | |
*** linuxaddicts has quit IRC | 15:09 | |
*** KarlchenK has quit IRC | 15:13 | |
*** aortega has quit IRC | 15:16 | |
dmsimard | apevec: could you create a CNAME status.rdoproject.org -> master.monitoring.rdoproject.org ? | 15:17 |
apevec | I could, once I find email from rbowen how to edit DNS zone, just a sec | 15:18 |
rbowen | You should still have git clone from last time, right? | 15:19 |
rbowen | I can send again if you need - and if I can find what I sent. :-) | 15:19 |
apevec | nope, I've changed laptop :) | 15:19 |
apevec | rbowen, np, found email | 15:19 |
rbowen | ok, good. | 15:19 |
*** tosky has quit IRC | 15:19 | |
*** ade_b has quit IRC | 15:20 | |
*** tosky has joined #rdo | 15:20 | |
*** KarlchenK has joined #rdo | 15:20 | |
*** iberezovskiy|off is now known as iberezovskiy | 15:22 | |
apevec | dmsimard, master.monitoring is already CNAME, so entry is: | 15:25 |
apevec | status IN CNAME monitoring | 15:25 |
dmsimard | apevec: sure | 15:25 |
*** jhershbe has quit IRC | 15:25 | |
*** pcaruana has quit IRC | 15:26 | |
apevec | dmsimard, pushed, serial 2016072501 | 15:27 |
dmsimard | apevec: ty | 15:28 |
*** milan has joined #rdo | 15:31 | |
*** READ10 has joined #rdo | 15:31 | |
*** zoli|mtg is now known as zoli | 15:33 | |
*** zoli is now known as zoliXXL | 15:33 | |
*** satya4ever has quit IRC | 15:34 | |
social | trown: /etc/neutron/plugins/ml2/openvswitch_agent.ini ovsdb_interface = vsctl | 15:35 |
social | trown: for workaround | 15:35 |
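[Editor's note: social's workaround, spelled out as the config fragment it implies. The section name follows the ML2 OVS agent's usual layout; this is hedged, since only the file and key/value were given in channel:]

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
# Fall back from the native OVSDB connection to shelling out to
# ovs-vsctl, avoiding the timeout/reconnect that drops connections
# (and makes ironic's iSCSI dd fail mid-deploy).
[ovs]
ovsdb_interface = vsctl
```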
*** dtrainor has quit IRC | 15:35 | |
*** choirboy|afk is now known as choirboy | 15:35 | |
*** fragatin_ has quit IRC | 15:36 | |
*** milan has quit IRC | 15:36 | |
*** fragatina has joined #rdo | 15:37 | |
*** vimal has quit IRC | 15:37 | |
*** dtrainor has joined #rdo | 15:39 | |
*** ade_b has joined #rdo | 15:39 | |
*** zodbot_ has joined #rdo | 15:40 | |
*** zodbot has quit IRC | 15:40 | |
trown | social: hmm not seeing that in CI or local run any more | 15:40 |
trown | unless it is racy | 15:40 |
trown | CI failed with openstack client command timeout Error: Could not prefetch keystone_tenant provider 'openstack': Command: 'openstack [\"project\", \"list\", \"--quiet\", \"--format\", \"csv\", \"--long\"]' has been running for more than 20 seconds (tried 7, for a total of 170 seconds) | 15:41 |
*** zodbot_ is now known as zodbot | 15:41 | |
*** rbrady has quit IRC | 15:41 | |
social | trown: link on CI to check? | 15:41 |
trown | social: https://ci.centos.org/artifacts/rdo/jenkins-tripleo-quickstart-promote-master-delorean-ha-402 | 15:41 |
trown | social: but it failed well after Ironic was done | 15:42 |
*** spr2 has quit IRC | 15:42 | |
social | trown: yeah, that one passed well | 15:43 |
*** rpioso has joined #rdo | 15:43 | |
* social not happy | 15:43 | |
*** edannon has quit IRC | 15:44 | |
*** sdake has joined #rdo | 15:46 | |
apevec | trown, so we need to double timeouts again? https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb | 15:46 |
apevec | but really, 3 mins no reply is bad | 15:46 |
trown | apevec: well I think we patched the wrong thing originally | 15:46 |
trown | apevec: we patched retries, not timeout | 15:47 |
apevec | above is request_timeout | 15:47 |
trown | apevec: so if a command fails in 20 seconds that would succeed in 30, we retry when in fact we should just wait longer | 15:47 |
trown | apevec: look on 17 | 15:47 |
trown | request_timeout is the max time | 15:48 |
apevec | ah many timeouts | 15:48 |
*** nyechiel has quit IRC | 15:48 | |
*** abregman has quit IRC | 15:49 | |
*** dtrainor has quit IRC | 15:49 | |
apevec | ok, yeah, 3x 20s doesn't help | 15:49 |
*** sdake_ has quit IRC | 15:49 | |
trown | it actually did help some... but it is just racing more times to win rather than just extending the time so we dont have a race | 15:49 |
*** rbrady has joined #rdo | 15:50 | |
apevec | right | 15:50 |
*** dtrainor has joined #rdo | 15:51 | |
dmsimard | apevec: on second thought, I'm not going to host status.rdo on the same server as the monitoring so we can have somewhere to post if we have issues with rcip-dev that hosts both monitoring and review.rdo | 15:54 |
dmsimard | apevec: so could you do status.rdoproject.org an A record on 209.132.178.96 | 15:54 |
*** nmagnezi has quit IRC | 15:54 | |
dmsimard | It's an OS1 instance, it'll have to do for now | 15:54 |
apevec | ha, good point | 15:54 |
*** anilvenkata has quit IRC | 15:55 | |
*** Liuqing has joined #rdo | 15:56 | |
*** florianf has quit IRC | 15:57 | |
*** Liuqing has quit IRC | 15:57 | |
*** Liuqing has joined #rdo | 15:58 | |
*** fragatina has quit IRC | 15:58 | |
weshay | panda, you avail? | 16:00 |
*** fragatina has joined #rdo | 16:00 | |
apevec | dmsimard, 2016072502 | 16:00 |
panda | weshay: ye | 16:01 |
apevec | trown, so you're going to sed-1liner this or propose in puppet-openstacklib ? | 16:01 |
apevec | and if the former ^ dmsimard can we add this hack in weirdo too? | 16:02 |
trown | apevec: I will propose to puppet-openstacklib | 16:02 |
*** milan has joined #rdo | 16:02 | |
dmsimard | apevec: what ? | 16:02 |
dmsimard | I'm not following | 16:02 |
trown | apevec: given they accepted the patch that was intended to fix it, seems likely a patch that fixes the fix would also be ok | 16:02 |
dmsimard | weirdo jobs don't need a bump in timeouts | 16:02 |
apevec | dmsimard, https://review.openstack.org/#/c/334996/4/lib/puppet/provider/openstack.rb@17 | 16:02 |
apevec | dmsimard, they do, see etherpad | 16:03 |
apevec | I've seen timeouts | 16:03 |
dmsimard | looking .. | 16:03 |
apevec | hm, no that was timeout nova->neutron | 16:03 |
apevec | and other one was cirros d/l failure | 16:04 |
*** Goneri has quit IRC | 16:04 | |
apevec | re. cirros could we mirror it in ci.centos ? | 16:04 |
dmsimard | the 60s timeout is already very generous | 16:04 |
dmsimard | and we shouldn't be running into it | 16:04 |
dmsimard | 60s is a *long* time | 16:04 |
apevec | dmsimard, it was only 20s per attempt actuallyt | 16:04 |
dmsimard | ah | 16:04 |
apevec | that's puppet, not sure what timeouts apply for inter-service comms | 16:05 |
dmsimard | apevec: for cirros the challenge is that the location is provided from upstream | 16:06 |
*** ushkalim has quit IRC | 16:06 | |
dmsimard | apevec: we *could* change the location of the cirros image | 16:06 |
dmsimard | but weirdo tries to modify as little things as possible | 16:06 |
apevec | isn't that puppet-tempest parameter? | 16:07 |
dmsimard | p-o-i and packstack retrieve it differently | 16:08 |
dmsimard | p-o-i: https://github.com/openstack/puppet-openstack-integration/blob/master/run_tests.sh#L159 | 16:08 |
*** devvesa has quit IRC | 16:09 | |
apevec | ah so we could prepare it like upstream in ~/cache/files/ | 16:10 |
dmsimard | packstack: https://github.com/openstack/packstack/blob/e192b6f202fd1624b48e2cbfc56c7c9ec7842c8e/packstack/puppet/modules/packstack/manifests/provision/glance.pp (which refers back to https://github.com/openstack/packstack/blob/e192b6f202fd1624b48e2cbfc56c7c9ec7842c8e/packstack/plugins/provision_700.py#L32 ( | 16:10 |
*** Liuqing has quit IRC | 16:10 | |
pabelanger | apevec: dmsimard: Ya, we cache the images during our DIB process | 16:11 |
apevec | dmsimard, it would help if packstack gate could learn about ~/cache | 16:11 |
pabelanger | apevec: dmsimard: cache-devstack element handles that | 16:12 |
dmsimard | apevec: so what I would do is align packstack to be able to deploy the image from a file location instead of a url location | 16:12 |
apevec | yeah, we don't use DIB in ci.centos ? | 16:12 |
dmsimard | apevec: and then we can pre-provision that image | 16:12 |
apevec | dmsimard, ack | 16:12 |
dmsimard | apevec: no, ci.centos are centos-minimal virgin installs | 16:12 |
dmsimard | apevec: but we could leverage caching in review.rdo nodepool | 16:12 |
*** oshvartz has quit IRC | 16:13 | |
pabelanger | Ya, I'd recommend it. | 16:13 |
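[Editor's note: the caching idea discussed above -- fetch the cirros image once and serve it from a local path, like upstream's ~/cache/files -- is simple to sketch. This helper is illustrative only; it is not the actual cache-devstack element or any existing quickstart code:]

```python
import os
import urllib.request


def cached_fetch(url, cache_dir):
    """Download `url` into `cache_dir` only if it is not already there,
    then return the local path. Jobs would point glance/tempest at the
    returned file instead of re-downloading from upstream every run."""
    os.makedirs(cache_dir, exist_ok=True)
    local = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(local):
        urllib.request.urlretrieve(url, local)
    return local
```

packstack/p-o-i would then consume the returned local path as the image location, which is the "deploy from a file location" change dmsimard describes.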
*** nyechiel has joined #rdo | 16:14 | |
trown | apevec: EmilienM https://review.openstack.org/346915 | 16:14 |
apevec | trown, thanks! | 16:14 |
*** dsneddon_ has quit IRC | 16:14 | |
*** dsneddon has quit IRC | 16:15 | |
*** dsneddon has joined #rdo | 16:15 | |
trown | ah crap forgot to fix spec | 16:15 |
EmilienM | again? | 16:15 |
trown | EmilienM: first attempt did not really address the issue, I tried to explain in commit message | 16:16 |
EmilienM | ok | 16:17 |
*** hrw has quit IRC | 16:17 | |
EmilienM | chem: ^ fyi | 16:17 |
trown | EmilienM: first attempt did help, but only by racing more | 16:17 |
*** garrett has quit IRC | 16:17 | |
kbsingh | hewbrocca: apevec: dmsimard: trown: weshay: hi guys, bstinson's been doing some perf metrics, and i/o rates on both amd and intel machines are identical; however per core compute on amd is about 50% that of the intel ones; but since amd's have higher core count, the overall compute capacity of the chassis is identical | 16:18 |
*** Goneri has joined #rdo | 16:18 | |
kbsingh | will collect as much data as we can and try to put it all into that bug report on bugs.c.o | 16:18 |
apevec | thanks! | 16:18 |
dmsimard | kbsingh: thanks for your time | 16:19 |
*** gkadam has quit IRC | 16:19 | |
*** mosulica has joined #rdo | 16:19 | |
apevec | so yeah, we're doing single vcpu undercloud right? | 16:20 |
*** smeyer has quit IRC | 16:20 | |
trown | apevec: no undercloud has 4 in the minimal case, and 2 in the ha case | 16:21 |
*** tesseract- has quit IRC | 16:21 | |
*** ekuris has joined #rdo | 16:21 | |
trown | apevec: we could bump to 4 in the ha case as well to see if that helps, but we would probably need to explicitly make some services use fewer than 4 workers as RAM is very tight there | 16:22 |
kbsingh | 32gb is tight ? | 16:22 |
*** ade_b has quit IRC | 16:22 | |
*** KarlchenK has quit IRC | 16:22 | |
apevec | meet tripleo :) | 16:22 |
trown | kbsingh: ya to run tripleo allinone it is | 16:22 |
*** hrw has joined #rdo | 16:23 | |
trown | it is 5 vms, one of which is deploying a massive nested heat stack | 16:23 |
kbsingh | aha | 16:23 |
chem | trown: not sure I get it, if I remember well, when we had the issue the process was just stuck in a strange way (we could connect to the socket, but got no answer), so killing it and retrying seemed the way to go. | 16:23 |
kbsingh | next year we are going to try and canvas for some more hardware, the aim is going to be beefier, but fewer, machines. | 16:24 |
kbsingh | hopefully, we can unblock some of this stuff then, but its all a bit in the air and budgets etc | 16:24 |
*** hewbrocca is now known as hewbrocca-afk | 16:25 | |
*** tumble has quit IRC | 16:25 | |
trown | chem: hmm maybe I did not understand the specific previous issue in that case... but for RDO CI we hit the actual command timeout quite a bit too | 16:25 |
chem | trown: making the test fail or just delaying it ? | 16:26 |
trown | kbsingh: thanks a ton for looking into it... I have some ideas around bridging multiple machines together as well... even 2 virthosts would help a ton | 16:26 |
kbsingh | lets keep exploring options | 16:27 |
*** KarlchenK has joined #rdo | 16:27 | |
kbsingh | we also have the cloud setup, if that helps - its much faster machines | 16:27 |
trown | chem: well, say a command is just slow because we have a CPU bottleneck, and it runs in 21 seconds on average. Sometimes we will get lucky and it will run in 19 seconds and win the race, but often it will run in >20 seconds 8 times in a row and fail | 16:27 |
trown | an average of 21 seconds there is not the best example, as that will win the race within 8 tries a lot, but at even a 30 second average we are unlikely to have one run come in under 20 seconds | 16:28 |
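[Editor's note: trown's point -- retries only help you win a race, while a longer per-command timeout removes it -- can be sketched numerically. All numbers here are illustrative; the duration spread `sigma` is an assumption, not anything measured in CI:]

```python
import random


def p_success(mean, timeout=20.0, attempts=8, sigma=2.0,
              trials=100_000, seed=42):
    """Estimate the chance that at least one of `attempts` runs of a
    command whose duration is roughly Gaussian(mean, sigma) finishes
    within `timeout` seconds -- i.e. the chance the retry loop 'wins'."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if any(rng.gauss(mean, sigma) <= timeout for _ in range(attempts)):
            wins += 1
    return wins / trials


if __name__ == "__main__":
    # A 21 s average often sneaks under the 20 s cap within 8 tries,
    # but a 30 s average essentially never does: past a point, more
    # retries cannot substitute for a bigger command_timeout.
    print(p_success(21.0))
    print(p_success(30.0))
```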
*** ohochman has joined #rdo | 16:32 | |
chem | trown: oki, then, let's see how this goes. By the way the command_timeout was untouched by the previous fix, only the total timeout (request_timeout) | 16:34 |
*** jlibosva has quit IRC | 16:34 | |
trown | chem: ya, for current issues in RDO we need to increase command_timeout | 16:35 |
*** trown is now known as trown|lunch | 16:36 | |
*** chandankumar has joined #rdo | 16:37 | |
*** lucasagomes is now known as lucas-afk | 16:39 | |
*** jubapa has joined #rdo | 16:40 | |
*** derekh has quit IRC | 16:43 | |
*** jubapa has quit IRC | 16:45 | |
*** zoliXXL is now known as zoli|gone | 16:46 | |
*** zoli|gone is now known as zoli_gone-proxy | 16:47 | |
*** ekuris has quit IRC | 16:49 | |
*** jpich has quit IRC | 16:49 | |
*** fzdarsky is now known as fzdarsky|afk | 16:51 | |
*** paragan has quit IRC | 16:54 | |
*** mbound has quit IRC | 16:55 | |
*** lucas-afk is now known as lucasagomes | 17:00 | |
*** linuxgeek_ has quit IRC | 17:00 | |
*** pkovar has quit IRC | 17:03 | |
*** jlabocki has joined #rdo | 17:06 | |
*** ifarkas is now known as ifarkas_afk | 17:07 | |
*** linuxgeek_ has joined #rdo | 17:08 | |
*** KarlchenK has quit IRC | 17:08 | |
*** dmsimard is now known as dmsimard|afk | 17:09 | |
*** mosulica has quit IRC | 17:10 | |
*** mvk has quit IRC | 17:12 | |
*** anilvenkata has joined #rdo | 17:13 | |
*** apevec has quit IRC | 17:15 | |
*** jhershbe has joined #rdo | 17:16 | |
*** linuxgeek_ has quit IRC | 17:22 | |
*** pcaruana has joined #rdo | 17:23 | |
*** ihrachys has quit IRC | 17:23 | |
*** fbo has quit IRC | 17:26 | |
*** apevec has joined #rdo | 17:27 | |
rdogerrit | Merged rdoinfo: rdopkg: map rdo-common branch to the right CBS build target http://review.rdoproject.org/r/1685 | 17:27 |
*** abregman has joined #rdo | 17:28 | |
*** mbound has joined #rdo | 17:33 | |
*** fragatina has quit IRC | 17:35 | |
*** trown|lunch is now known as trown | 17:40 | |
*** gfidente has quit IRC | 17:40 | |
*** imcsk8_ has joined #rdo | 17:41 | |
*** sdake has quit IRC | 17:41 | |
*** nmagnezi has joined #rdo | 17:41 | |
*** iberezovskiy is now known as iberezovskiy|off | 17:43 | |
*** imcsk8 has quit IRC | 17:44 | |
*** degorenko is now known as _degorenko|afk | 17:44 | |
*** imcsk8_ is now known as imcsk8 | 17:44 | |
*** mcornea has quit IRC | 17:54 | |
*** dtrainor has quit IRC | 17:58 | |
*** fragatina has joined #rdo | 17:59 | |
*** lucasagomes is now known as lucas-dinner | 18:01 | |
*** nmagnezi has quit IRC | 18:03 | |
*** dneary has quit IRC | 18:03 | |
*** fragatina has quit IRC | 18:04 | |
*** aortega has joined #rdo | 18:04 | |
*** tosky has quit IRC | 18:04 | |
*** vaneldik has joined #rdo | 18:07 | |
*** rbowen has quit IRC | 18:10 | |
*** rbowen has joined #rdo | 18:10 | |
*** ChanServ sets mode: +o rbowen | 18:10 | |
*** dtrainor has joined #rdo | 18:10 | |
*** dtrainor has quit IRC | 18:11 | |
*** dtrainor has joined #rdo | 18:11 | |
*** dtrainor has quit IRC | 18:13 | |
*** danielbruno has quit IRC | 18:13 | |
*** dtrainor has joined #rdo | 18:13 | |
dhill_ | I found a bug in python-tripleoclient in liberty... but it seems to affect only liberty | 18:14 |
dhill_ | https://bugzilla.redhat.com/show_bug.cgi?id=1359911 | 18:14 |
openstack | bugzilla.redhat.com bug 1359911 in openstack-tripleo "Cannot generate overcloud images with liberty" [Low,New] - Assigned to jslagle | 18:14 |
rdogerrit | Alan Pevec created rdoinfo: Normalize rdo.yml http://review.rdoproject.org/r/1729 | 18:19 |
*** jubapa has joined #rdo | 18:21 | |
*** mbound has quit IRC | 18:23 | |
*** mbound has joined #rdo | 18:24 | |
*** mbound has quit IRC | 18:25 | |
apevec | dhill_, which rpm version-release is that? | 18:25 |
*** jubapa has quit IRC | 18:25 | |
trown | ya thought we had a patch for that | 18:26 |
dhill_ | 0.3.5_20160724 | 18:26 |
weshay | rlandy, comments on https://review.gerrithub.io/#/c/272349/11 | 18:27 |
trown | dhill_: in general though it is better to use images from CI http://artifacts.ci.centos.org/rdo/images/liberty/delorean/stable/ | 18:27 |
dhill_ | trown: do we have RHEL images ? | 18:27 |
apevec | dhill_, this is fixed in http://cbs.centos.org/koji/buildinfo?buildID=7087 | 18:27 |
apevec | ah you have dlrn build | 18:27 |
trown | dhill_: oh, no we cant publish RHEL images | 18:27 |
weshay | dhill_, not RHEL public images | 18:27 |
dhill_ | trown: so this is why I'm building my images ;) | 18:27 |
rlandy | weshay: looking | 18:28 |
trown | dhill_: curious, why RDO on RHEL? | 18:28 |
apevec | trown, looks like that's not fixed on stable/liberty ? | 18:28 |
dhill_ | trown: because I'm not the only one who'll try it ... | 18:28 |
trown | fair enough, was just curious if there was a specific use case | 18:29 |
dhill_ | trown : nah... it's curiosity... | 18:29 |
dhill_ | trown: I'm used to building my own images | 18:29 |
*** ayoung has quit IRC | 18:30 | |
trown | dhill_: you are probably better off using the newer image building method at least then... you will need newer tripleo-common than liberty but only that package | 18:30 |
trown | dhill_: old tripleoclient code around image building is a mess of hacks | 18:30 |
weshay | rlandy, can we call this bug resolved? https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028 | 18:31 |
openstack | Launchpad bug 1571028 in tripleo-quickstart "[RFE] Baremetal support" [Wishlist,In progress] - Assigned to John Trowbridge (trown) | 18:31 |
dhill_ | trown: Isn't that `openstack overcloud image build`? | 18:31 |
apevec | dhill_, so for dlrn, this should be fixed upstream on stable/liberty branch | 18:31 |
trown | dhill_: that is the tripleoclient method | 18:31 |
apevec | but what trown said | 18:31 |
trown | dhill_: that will be patched soon to run https://github.com/openstack/tripleo-common/blob/master/scripts/tripleo-build-images instead | 18:32 |
trown | dhill_: ^ takes a yaml file that describes the images and is much cleaner | 18:32 |
trown | dhill_: example yamls are in https://github.com/openstack/tripleo-common/tree/master/image-yaml | 18:32 |
rlandy | weshay: https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028 will be resolved when there is sufficient documentation for bm and quickstart | 18:32 |
openstack | Launchpad bug 1571028 in tripleo-quickstart "[RFE] Baremetal support" [Wishlist,In progress] - Assigned to John Trowbridge (trown) | 18:32 |
weshay | rlandy, k.. thanks | 18:32 |
dhill_ | trown: when will that be merged? | 18:33 |
trown | dhill_: ?, that is merged | 18:33 |
trown | there is not a patch up to switch `openstack image build` yet, but hopefully that will make newton | 18:34 |
trown | err `openstack overcloud image build` | 18:34 |
trown | dhill_: but the script is usable as is | 18:34 |
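The yaml-driven build trown describes centres on a file listing the images to produce. As a rough sketch of consuming such a spec once parsed, the dict below stands in for a parsed YAML file; the real schema lives in the image-yaml examples linked above, and the field names here are assumptions for illustration only.

```python
# Hypothetical parsed form of an image-yaml file. The real schema is in
# tripleo-common's image-yaml directory; these field names are guesses.
spec = {
    "disk_images": [
        {"imagename": "overcloud-full", "type": "qcow2",
         "elements": ["base", "overcloud-agent"]},
    ]
}

def build_commands(spec):
    """Turn each image entry into a disk-image-create style argv (sketch)."""
    cmds = []
    for img in spec.get("disk_images", []):
        cmds.append(["disk-image-create", "-o", img["imagename"],
                     "-t", img.get("type", "qcow2")] + img["elements"])
    return cmds

print(build_commands(spec))
```

The appeal over the old tripleoclient hacks is exactly this: the images become declarative data, and the build script is a thin loop over them.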
apevec | damn, 2 greens in a row https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/ but HA wasn't lucky | 18:35 |
dhill_ | trown: cool, great to know ! Thanks for the info | 18:35 |
trown | never lucky | 18:35 |
*** dtrainor has quit IRC | 18:36 | |
trown | current ha run is on gusty too, so not too likely | 18:36 |
trown | maybe timeout patch will hit dlrn before the next run | 18:36 |
*** oshvartz has joined #rdo | 18:37 | |
*** dtrainor has joined #rdo | 18:43 | |
*** julim has quit IRC | 18:43 | |
*** sdake has joined #rdo | 18:44 | |
*** julim has joined #rdo | 18:45 | |
*** fzdarsky|afk has quit IRC | 18:45 | |
*** mosulica has joined #rdo | 18:52 | |
*** mosulica has quit IRC | 18:53 | |
weshay | hrybacki, https://review.gerrithub.io/#/c/280691/ | 18:55 |
weshay | myoung, https://review.gerrithub.io/#/c/280684/ | 18:56 |
weshay | trown, is ansible-role-ci-centos owned by dmsimard|afk ? | 18:56 |
weshay | https://review.gerrithub.io/#/c/280692/ | 18:56 |
*** Son_Goku has joined #rdo | 18:59 | |
trown | weshay: ya | 19:00 |
trown | though I am not sure anything even uses that now | 19:01 |
weshay | k | 19:01 |
*** mgarciam has quit IRC | 19:01 | |
trown | for quickstart we just use cico client directly | 19:01 |
weshay | ya.. thought that may have been cico.. thought wrong | 19:02 |
weshay | can we triage the role bugs? I'd like to move forward | 19:02 |
weshay | https://bugs.launchpad.net/tripleo-quickstart/+bug/1604517 | 19:02 |
openstack | Launchpad bug 1604517 in tripleo-quickstart "replace the native quickstart inventory with github.com/redhat-openstack/ansible-role-tripleo-inventory" [Undecided,New] | 19:02 |
weshay | https://bugs.launchpad.net/tripleo-quickstart/+bug/1604518 | 19:02 |
openstack | Launchpad bug 1604518 in tripleo-quickstart "Remove the tripleo-quickstart overcloud role and replace it w/ github.com/redhat-openstack/ansible-role-tripleo-overcloud" [Undecided,New] | 19:02 |
weshay | https://bugs.launchpad.net/tripleo-quickstart/+bug/1604520 | 19:02 |
openstack | Launchpad bug 1604520 in tripleo-quickstart "replace the native quickstart overcloud validation w/ github.com/redhat-openstack/ansible-role-tripleo-overcloud-validate" [Undecided,New] | 19:02 |
myoung | weshay, merged | 19:03 |
weshay | myoung, thanks | 19:03 |
*** laron has joined #rdo | 19:04 | |
hrybacki | weshay: merged | 19:04 |
trown | weshay: the bugs about moving stuff out of quickstart (rather than the one about moving stuff in) are tricky given the thread on openstack-dev | 19:05 |
weshay | which.. making oooq easier to consume? | 19:06 |
trown | lol, making tripleo-ci easier to consume, freudian slip, but ya | 19:06 |
weshay | trown, huh.. it seems like a pretty open reception to do what we thought was required | 19:07 |
*** READ10 has quit IRC | 19:07 | |
weshay | trown, want me to just hold off for now? | 19:07 |
*** dpeacock has quit IRC | 19:08 | |
*** metabsd has joined #rdo | 19:08 | |
trown | weshay: I dunno, seemed like we got feedback on not having "tripleo" code on redhat-openstack github, but it is a pretty grey area | 19:09 |
*** dtrainor has quit IRC | 19:10 | |
metabsd | Hi, I want to test OpenStack. I found the page about how to install OpenStack (packstack). I want to know what the prereqs are for filesystems and partitioning. Thank you! | 19:11 |
mrunge | metabsd, for a small install, about 50 gigs of disk space is enough | 19:13 |
mrunge | metabsd, there's no specific requirement for partitioning | 19:14 |
mrunge | more space is better (obviously) | 19:14 |
*** laron has quit IRC | 19:15 | |
metabsd | I have 40G for the moment; I plan to add more storage when I deploy my first VM. | 19:15 |
*** laron has joined #rdo | 19:15 | |
*** jcoufal_ has quit IRC | 19:15 | |
metabsd | Can I use advanced LVM and split everything up, or can I also put all the stuff in /? For example, where are the VMs stored? I want to make sure the partitions or filesystems are separated for flexibility | 19:16 |
mrunge | metabsd, packstack can create a storage for cinder for you | 19:16 |
mrunge | metabsd, there is no choice for vm placement | 19:17 |
metabsd | mrunge: If I want to add more storage to that cinder thing, do I have to increase something? | 19:17 |
mrunge | metabsd, you can add more storage later | 19:17 |
mrunge | but | 19:17 |
mrunge | I would try a deployment first, and then go big with tripleo | 19:18 |
metabsd | mrunge: I will put everything in / | 19:18 |
mrunge | you want HA for infrastructure, no? | 19:18 |
metabsd | mrunge: I have only one server for my test. | 19:18 |
mrunge | so, no go big then | 19:18 |
mrunge | metabsd, don't worry about your / dir. it won't get polluted | 19:19 |
metabsd | Yes, I have 3 nodes with 256G RAM and 6 CPUs with multiple cores :) I want to play with OpenStack and migrate all our VMware stuff to a Linux hypervisor. I don't know whether the better solution is RHEV or OpenStack at the moment. | 19:19 |
*** gszasz has quit IRC | 19:19 | |
mrunge | depends, I'd say :D | 19:20 |
metabsd | mrunge: xfs or ext4? I can't find that information in the RDO documentation | 19:20 |
mrunge | doesn't really matter | 19:20 |
*** nyechiel has quit IRC | 19:21 | |
mrunge | if you want speed, you probably won't deploy block storage over a loopback mounted file | 19:21 |
metabsd | I just read that RDO doesn't want to work with NetworkManager. I have to disable it. | 19:21 |
mrunge | that is being deployed by packstack for demo reasons | 19:21 |
mrunge | not necessarily | 19:21 |
mrunge | for a single machine, you can use network manager, iirc | 19:22 |
mrunge | but: safe bet is to disable it | 19:22 |
metabsd | https://www.rdoproject.org/install/quickstart/ -- Network section. Disable Network Manager and Enable Network | 19:22 |
mrunge | yes | 19:22 |
metabsd | mrunge: Like SeLinux safe to disable it ... | 19:22 |
metabsd | :) | 19:22 |
mrunge | what? | 19:22 |
mrunge | do not disable selinux | 19:23 |
metabsd | joke ... I run my fedora 24 with Selinux right now :) | 19:23 |
*** dtrainor has joined #rdo | 19:23 | |
mrunge | there's no need to. if you get a denied, that's a bug | 19:23 |
mrunge | ;-) | 19:23 |
*** mbound has joined #rdo | 19:25 | |
*** Son_Goku has quit IRC | 19:26 | |
*** dmsimard|afk is now known as dmsimard | 19:26 | |
*** ihrachys has joined #rdo | 19:27 | |
metabsd | Does RDO have something like VDS (VMware) for switching? | 19:29 |
*** chlong_POffice has quit IRC | 19:30 | |
*** chlong_POffice has joined #rdo | 19:31 | |
*** mbound has quit IRC | 19:31 | |
rdogerrit | Jon Schlueter proposed openstack/glance-distgit: Conditionalize -doc building http://review.rdoproject.org/r/1728 | 19:32 |
mrunge | what is vds? | 19:35 |
*** ohochman has quit IRC | 19:37 | |
metabsd | mrunge: it's a virtual switch. It can map a physical network to multiple VMs, and the switch can tag VLANs etc... | 19:40 |
*** jprovazn has quit IRC | 19:44 | |
*** danielbruno has joined #rdo | 19:47 | |
*** laron has quit IRC | 19:50 | |
*** dtrainor has quit IRC | 19:51 | |
*** laron has joined #rdo | 19:51 | |
*** ohochman has joined #rdo | 19:55 | |
*** laron has quit IRC | 19:55 | |
*** laron has joined #rdo | 19:56 | |
weshay | hrm.. | 19:57 |
weshay | Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146'\nexception: connect failed\nCould not retrieve fact='mongodb_is_master', resolution='<anonymous>': 757: unexpected token at '2016-07-25T19:22:17.325+0000 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused\n2016-07-25T19:22:17.326+0000 Error: couldn't connect to se | 19:57 |
weshay | rver 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146'\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::cpu_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::ram_allocation_ratio'; cl | 19:57 |
weshay | ass ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Nova]): Could not look up qualified variable '::nova::scheduler::filter::disk_allocation_ratio'; class ::nova::scheduler::filter has not been evaluated\u001b[0m\n\u001b[1;31mWarning: Scope(Class[Aodh::Api]): | 19:57 |
weshay | ugh.. sry | 19:57 |
*** paramite|afk is now known as paramite | 19:58 | |
*** abregman has quit IRC | 19:58 | |
weshay | lastest master ha ^ | 19:58 |
EmilienM | weird | 19:59 |
EmilienM | weshay: did you investigate what's diff since last successful job? | 20:00 |
EmilienM | rpm -qa diff | 20:00 |
weshay | just getting into it now | 20:00 |
EmilienM | looks like tht change | 20:00 |
EmilienM | wonder why OOO CI is not broken | 20:00 |
EmilienM | I guess it's OOOQ again? | 20:00 |
trown | it is just gusty chassis | 20:01 |
trown | I did local ha with that image and it passed | 20:01 |
trown | disabled the promote while we wait for puppet-openstacklib change to hit dlrn | 20:02 |
weshay | https://www.diffchecker.com/u0gw4lvm | 20:02 |
trown | weshay: the relevant error is Error: Could not prefetch keystone_tenant provider 'openstack': Command: 'openstack [\"domain\", \"list\", \"--quiet\", \"--format\", \"csv\", []]' has been running for more than 20 seconds (tried 7, for a total of 170 seconds) | 20:03 |
weshay | trown, ah k | 20:03 |
trown | which will hopefully be resolved by the puppet-openstacklib patch that just merged | 20:03 |
trown | just waiting on dlrn to build it | 20:03 |
trown | s/resolved/worked around/ | 20:04 |
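The failure mode behind that prefetch error is a per-attempt command_timeout with a bounded retry count. A minimal sketch of the pattern follows; this is illustrative Python, not the actual Ruby provider code in puppet-openstacklib, with parameter names mirroring the discussion.

```python
import subprocess
import sys
import time

def run_with_retries(cmd, command_timeout=20, tries=7):
    """Run cmd, killing each attempt after command_timeout seconds;
    give up after `tries` attempts, like the provider error above."""
    total = 0.0
    for _ in range(tries):
        start = time.monotonic()
        try:
            return subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=command_timeout)
        except subprocess.TimeoutExpired:
            total += time.monotonic() - start
    raise RuntimeError(
        "Command %r has been running for more than %s seconds "
        "(tried %d, for a total of %d seconds)"
        % (cmd, command_timeout, tries, int(total)))

# A fast command succeeds on the first try:
print(run_with_retries([sys.executable, "-c", "print('ok')"]).stdout.strip())
```

In the failing job above, `openstack domain list` never beat the 20-second limit, so all 7 attempts were killed and the provider surfaced the "tried 7, for a total of 170 seconds" error.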
*** shardy has quit IRC | 20:04 | |
EmilienM | trown: ah :) | 20:05 |
EmilienM | so yeah it's timeout | 20:05 |
trown | just landed in dlrn, kicking promote | 20:05 |
*** chandankumar has quit IRC | 20:05 | |
*** ihrachys has quit IRC | 20:06 | |
*** dtrainor has joined #rdo | 20:07 | |
*** dtrainor has quit IRC | 20:08 | |
*** dtrainor has joined #rdo | 20:08 | |
*** spr1 has joined #rdo | 20:09 | |
*** zodbot has quit IRC | 20:10 | |
*** zodbot_ has joined #rdo | 20:10 | |
*** shardy has joined #rdo | 20:10 | |
*** zodbot_ is now known as zodbot | 20:11 | |
*** ihrachys has joined #rdo | 20:12 | |
*** sdake has quit IRC | 20:16 | |
dhill_ | why are we using mysql as a backend for ceilometer on the undercloud? It's really awful | 20:19 |
dmsimard | dhill_: ceilometer no longer collects/aggregates data afaik so maybe mysql makes sense since gnocchi is the one collecting data now | 20:20 |
dmsimard | dhill_: i.e, I wouldn't put mysql as the gnocchi backend (if that is even possible) | 20:21 |
dhill_ | dmsimard: I have an undercloud with a 30GB ceilometer database | 20:21 |
dhill_ | dmsimard: in a VM | 20:21 |
dhill_ | dmsimard: that VM is awfully slow and I was wondering | 20:21 |
dmsimard | dhill_: what version is that ? gnocchi was implemented in mitaka for tripleo I think | 20:22 |
dhill_ | 2015.1.3 | 20:22 |
dmsimard | so... kilo ? | 20:22 |
dmsimard | yeah back in kilo gnocchi still didn't exist and the default backend was mysql -- I think mongodb was the preferred backend back then | 20:23 |
dmsimard | if my memory is correct, gnocchi appeared in liberty and was implemented in mitaka | 20:24 |
dhill_ | ok | 20:24 |
dhill_ | I'm hungry now | 20:25 |
dhill_ | talking about gnoccis | 20:25 |
dmsimard | dhill_: https://review.openstack.org/#/c/252032/ april | 20:25 |
dmsimard | or maybe that didn't even make the mitaka release ? | 20:25 |
dmsimard | pradk would know .. | 20:26 |
dmsimard | dhill_: also, re-reading myself I wrote that wrong.. ceilometer is still the one collecting data but gnocchi is the one storing it and gnocchi has different backends | 20:26 |
dmsimard | excuse my french :) | 20:27 |
*** dtrainor has quit IRC | 20:31 | |
*** unclemarc has quit IRC | 20:31 | |
*** shardy has quit IRC | 20:34 | |
*** JuanDRay has quit IRC | 20:35 | |
*** akrivoka has quit IRC | 20:39 | |
*** akshai has quit IRC | 20:40 | |
*** milan has quit IRC | 20:43 | |
dmsimard | larsks: just came across http://blog.oddbit.com/2015/10/13/ansible-20-the-docker-connection-driver/, that's pretty cool :D | 20:43 |
larsks | Yeah, it's nifty! | 20:43 |
dmsimard | larsks: I was just looking for a way to do a docker exec in a specific container, it doesn't seem like there's a way !? | 20:44 |
dmsimard | I guess I could just do a command | 20:44 |
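The "just do a command" fallback dmsimard mentions is shelling out to `docker exec` rather than going through a dedicated module. A small sketch, with a hypothetical container name; only the argv construction is exercised, so it runs without a Docker daemon:

```python
import subprocess

def docker_exec_argv(container, command):
    """Build the argv for running `command` inside `container`
    via `docker exec` (sketch; no -i/-t handling)."""
    return ["docker", "exec", container] + list(command)

def docker_exec(container, command):
    """Actually invoke docker exec; requires a running daemon
    and an existing container."""
    return subprocess.run(docker_exec_argv(container, command),
                          capture_output=True, text=True)

# e.g. docker_exec("web1", ["ls", "/etc"]) against a real container
print(docker_exec_argv("web1", ["ls", "/etc"]))
```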
*** aortega has quit IRC | 20:50 | |
metabsd | Can I specify a network card when I use packstack? | 20:51 |
imcsk8 | metabsd: yes, the device | 20:53 |
*** anilvenkata has quit IRC | 20:53 | |
*** ihrachys has quit IRC | 20:54 | |
metabsd | packstack --allinone --device=ens3f0 ? | 20:54 |
*** ihrachys has joined #rdo | 20:54 | |
*** fultonj_ has quit IRC | 20:55 | |
metabsd | imcsk8: I've never used RDO. It's my first time :) | 20:55 |
*** pilasguru has quit IRC | 20:55 | |
metabsd | ok, just found --help :) | 20:55 |
*** chlong_POffice has quit IRC | 20:56 | |
*** spr1 has quit IRC | 20:56 | |
imcsk8 | metabsd: np, we can help you | 20:57 |
imcsk8 | metabsd: you don't specify the network card, you specify hosts | 20:58 |
metabsd | I decided to test it following https://www.rdoproject.org/install/quickstart/ allinone | 20:59 |
*** ashw has quit IRC | 21:00 | |
*** ohochman has quit IRC | 21:01 | |
imcsk8 | that's ok | 21:01 |
metabsd | Puppet doing something :P | 21:01 |
*** morazi has quit IRC | 21:02 | |
*** jeckersb is now known as jeckersb_gone | 21:03 | |
openstackgerrit | Merged openstack/packstack: add wsgi threads for gnocchi::api https://review.openstack.org/341287 | 21:07 |
*** chlong_POffice has joined #rdo | 21:09 | |
*** trown is now known as trown|outtypewww | 21:09 | |
*** fragatina has joined #rdo | 21:15 | |
*** mvk has joined #rdo | 21:16 | |
*** rlandy has quit IRC | 21:16 | |
*** Goneri has quit IRC | 21:16 | |
*** ihrachys has quit IRC | 21:18 | |
*** Son_Goku has joined #rdo | 21:19 | |
*** egafford has quit IRC | 21:19 | |
*** ihrachys has joined #rdo | 21:19 | |
*** julim has quit IRC | 21:20 | |
*** Son_Goku has quit IRC | 21:20 | |
*** Son_Goku has joined #rdo | 21:21 | |
*** rhallisey has quit IRC | 21:30 | |
*** pilasguru has joined #rdo | 21:30 | |
*** ihrachys has quit IRC | 21:33 | |
*** ihrachys has joined #rdo | 21:34 | |
*** laron has quit IRC | 21:36 | |
dmsimard | number80: I saw you update https://github.com/redhat-openstack/openstack-utils recently... is that packaged somewhere or something? | 21:37 |
number80 | dmsimard: it is in CBS | 21:37 |
dmsimard | number80: as openstack-utils ? | 21:37 |
* dmsimard looks | 21:38 | |
number80 | http://cbs.centos.org/koji/buildinfo?buildID=11443 | 21:38 |
number80 | yes, in buildlogs repo | 21:38 |
number80 | (all releases) | 21:38 |
*** egafford has joined #rdo | 21:38 | |
dmsimard | number80: ah ok, was looking for a spec here.. https://github.com/rdo-packages?utf8=✓&query=utils | 21:38 |
dmsimard | I guess it's not built by dlrn | 21:39 |
*** laron has joined #rdo | 21:39 | |
number80 | dmsimard: openstack-utils is a downstream-only project | 21:40 |
number80 | https://github.com/redhat-openstack/openstack-utils | 21:40 |
*** snecklifter_ has quit IRC | 21:40 | |
number80 | spec file is in el7-rpm branch | 21:40 |
*** bnemec has quit IRC | 21:40 | |
dmsimard | oh, didn't notice the branch, ty | 21:40 |
number80 | wait | 21:41 |
dmsimard | number80: context was https://review.openstack.org/#/c/346551/ | 21:41 |
weshay | trown|outtypewww, EmilienM not sure why yet.. but the overcloud nodes in this test.. https://review.openstack.org/#/c/341616/ | 21:41 |
weshay | never get out of spawning | 21:41 |
weshay | | 91051124-3ae4-4bce-971b-504050aa0c85 | overcloud-controller-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.14 | | 21:41 |
weshay | | 8cb03d65-f6f5-4b26-9450-a5100800d316 | overcloud-novacompute-0 | BUILD | spawning | NOSTATE | ctlplane=192.0.2.10 | 21:41 |
number80 | dmsimard: it's in there now : https://github.com/rdo-common/openstack-utils | 21:43 |
dmsimard | number80: rdo-common, is that new ? | 21:43 |
dmsimard | first time I see it .. | 21:43 |
number80 | dmsimard: we use this branch for common deps | 21:44 |
number80 | openstack-utils had no official dist-git so I moved it there | 21:44 |
*** laron has quit IRC | 21:44 | |
*** jeckersb_gone is now known as jeckersb | 21:47 | |
*** fragatin_ has joined #rdo | 21:49 | |
*** ihrachys_ has joined #rdo | 21:50 | |
*** ihrachys has quit IRC | 21:50 | |
*** bnemec has joined #rdo | 21:50 | |
apevec | dmsimard, openstack-status has no idea about HA/pacemaker and we don't want to teach it about that | 21:51 |
*** dustins has quit IRC | 21:51 | |
*** fragatin_ has quit IRC | 21:51 | |
apevec | openstack-utils is on its way out, we already stripped unsupported parts of it | 21:51 |
apevec | like openstack-db | 21:52 |
*** fragatin_ has joined #rdo | 21:52 | |
dmsimard | ah, okay | 21:53 |
*** fragatina has quit IRC | 21:53 | |
* dmsimard shrugs | 21:53 | |
*** amuller has quit IRC | 21:53 | |
apevec | that said, I'm not exactly a fan of that big fat scripts/tripleo.sh | 21:55 |
apevec | I'd expect sanity check to be in https://github.com/openstack/tripleo-validations | 21:56 |
apevec | hmm, is that empty project? | 21:57 |
*** apevec has quit IRC | 22:00 | |
*** coolsvap has quit IRC | 22:01 | |
*** rwsu has quit IRC | 22:04 | |
*** pilasguru has quit IRC | 22:04 | |
*** jmelvin has quit IRC | 22:08 | |
*** paramite has quit IRC | 22:08 | |
*** jhershbe has quit IRC | 22:09 | |
*** jhershbe has joined #rdo | 22:09 | |
*** rlandy has joined #rdo | 22:11 | |
*** thrash is now known as thrash|g0ne | 22:17 | |
*** jubapa has joined #rdo | 22:17 | |
*** akshai has joined #rdo | 22:20 | |
*** fragatina has joined #rdo | 22:23 | |
*** fragatin_ has quit IRC | 22:25 | |
*** akshai has quit IRC | 22:29 | |
*** jubapa has quit IRC | 22:30 | |
*** jubapa has joined #rdo | 22:31 | |
*** chem has quit IRC | 22:36 | |
*** chem has joined #rdo | 22:36 | |
*** danielbruno has quit IRC | 22:39 | |
*** kgiusti has left #rdo | 22:45 | |
*** danielbruno has joined #rdo | 22:45 | |
*** iranzo has joined #rdo | 22:46 | |
*** rhallisey has joined #rdo | 22:46 | |
*** elmiko is now known as _elmiko | 22:50 | |
*** lucas-dinner has quit IRC | 22:51 | |
*** pilasguru has joined #rdo | 22:53 | |
*** lucasagomes has joined #rdo | 22:57 | |
*** egafford has quit IRC | 23:06 | |
*** dgurtner has quit IRC | 23:11 | |
*** mlammon has quit IRC | 23:13 | |
*** ihrachys_ has quit IRC | 23:15 | |
*** ohochman has joined #rdo | 23:16 | |
*** dtrainor has joined #rdo | 23:19 | |
*** rpioso has quit IRC | 23:23 | |
*** rpioso has joined #rdo | 23:23 | |
*** rpioso has quit IRC | 23:23 | |
*** fragatina has quit IRC | 23:23 | |
*** fragatina has joined #rdo | 23:24 | |
*** chlong_POffice has quit IRC | 23:25 | |
*** danielbruno has quit IRC | 23:27 | |
*** gildub has joined #rdo | 23:41 | |
*** chlong_POffice has joined #rdo | 23:42 | |
*** kaminohana has joined #rdo | 23:49 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!