openstackgerrit | Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Fetch list of AZs from nova if it's not configured https://review.openstack.org/450345 | 00:14 |
mordred | Shrews: I rebased on your docs patch. Also, I think I got your review comment, but I may not have | 00:15 |
Shrews | mordred: you did not. need me to correct the placement? | 00:28 |
Shrews | mordred: basically, the first 'if' in that 'for' is for pre-existing nodes, and the 2nd 'if' is for new nodes. we need to choose the az outside the for so it's the same for both types | 00:30 |
Shrews | actually, that's confusing since there are many 'fors' and 'ifs'. i'll put up a patch set tomorrow for you. | 00:31 |
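The fix Shrews is describing — pick the AZ once, outside the loop, so pre-existing and newly launched nodes end up in the same zone — could be sketched roughly like this (class and attribute names such as `chosen_az` and `pool_azs` follow the discussion but this is an illustrative sketch, not the actual nodepool patch):

```python
import random

class LaunchSketch:
    """Illustrative sketch of choosing one AZ up front (hypothetical names)."""

    def __init__(self, configured_azs=None):
        self.pool_azs = configured_azs  # AZs from config; may be None
        self.chosen_az = None

    def choose_az(self, cloud_azs):
        # Pick the AZ once, outside any per-node loop, so both reused
        # (pre-existing) and newly launched nodes share the same zone.
        # The guard matters because this code can run more than once.
        if self.chosen_az is None:
            candidates = self.pool_azs or cloud_azs
            self.chosen_az = random.choice(candidates)
        return self.chosen_az
```

Called repeatedly, `choose_az` returns the same zone, which is the `if not self.chosen_az` guard Shrews brings up again later in the conversation.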
*** jamielennox is now known as jamielennox|away | 00:54 | |
*** jamielennox|away is now known as jamielennox | 00:59 | |
*** bstinson has quit IRC | 01:30 | |
*** bstinson has joined #zuul | 01:32 | |
*** jamielennox is now known as jamielennox|away | 04:09 | |
*** jamielennox|away is now known as jamielennox | 04:30 | |
*** MarkMielke has quit IRC | 04:35 | |
*** bhavik1 has joined #zuul | 04:49 | |
*** isaacb has joined #zuul | 06:26 | |
*** jamielennox is now known as jamielennox|away | 06:36 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Change mutex to counting semaphore https://review.openstack.org/442563 | 07:12 |
*** jamielennox|away is now known as jamielennox | 07:21 | |
tobiash_ | jeblair: ^^ reworked regarding your comments but it looks like I have to rebase this again (merge failed) once your secrets work is merged or rebased | 07:31 |
*** hashar has joined #zuul | 07:35 | |
*** bhavik1 has quit IRC | 08:00 | |
*** rcarrillocruz has joined #zuul | 08:01 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/zuul feature/zuulv3: Use unicode for change number extraction https://review.openstack.org/450704 | 10:24 |
*** openstackgerrit has quit IRC | 11:33 | |
*** hashar is now known as hasharLunch | 11:47 | |
*** hasharLunch is now known as hashar | 12:32 | |
mordred | Shrews: ok. I see where you're talking about | 12:36 |
Shrews | mordred: did you see the import failure for that? | 12:42 |
mordred | nope - but will look at that next | 12:42 |
Shrews | maybe we need to add concurrent lib to shade's requirements.txt | 12:43 |
Shrews | hrm, no. that shouldn't be necessary | 12:44 |
mordred | Shrews: https://review.openstack.org/450765 | 12:49 |
mordred | Shrews: yes - "futures" is a backport from 3.2 to 2.7 ... | 12:49 |
mordred | Shrews: and we had it in our test-requirements - so we didn't notice it was missing from our requirements | 12:49 |
Shrews | ah | 12:49 |
mordred | Shrews: and I'm guessing we were saved in nodepool accidentally due to a transitive depend from one of the things we recently removed - like zmq or something | 12:50 |
mordred | anyway - we should land that requirements patch to shade and release asap - shade is just broken now | 12:50 |
mordred | actually - let me add a release note real quick | 12:50 |
Shrews | i +3'd. but can +3 again | 12:51 |
mordred | Shrews: updated | 12:52 |
mordred | Shrews: btw - it was heatclient | 12:54 |
Shrews | lol | 12:54 |
mordred | Shrews: heatclient pulled in futures as a transitive - so it really was just the 1.18.0 release that was problematic | 12:54 |
mordred | *phew* I was worried that we had more breakage out there than that | 12:55 |
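For reference on the underlying issue: `concurrent.futures` is in the standard library from Python 3.2 onward, and the `futures` package on PyPI backports it to 2.7. A requirements.txt entry guarded by an environment marker (a generic sketch, not necessarily the actual shade patch) installs the backport only where the stdlib lacks it:

```
# requirements.txt: install the backport only on Python 2
futures; python_version < '3.0'
```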
*** openstackgerrit has joined #zuul | 12:56 | |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Fetch list of AZs from nova if it's not configured https://review.openstack.org/450345 | 12:56 |
mordred | Shrews: that once again won't work, as I bumped the requirement to 1.18.1 | 12:56 |
mordred | but hopefully the logic is correct this time | 12:56 |
* Shrews looks | 12:57 | |
pabelanger | danke! | 12:57 |
mordred | Shrews: it's worth noting that it may result in not reusing nodes in the case where there are more than one az and there are nodes available in one az but the random choice picks one from another az | 12:58 |
pabelanger | just hit that issue | 12:58 |
Shrews | mordred: close. we need to have an 'if not self.chosen_az' surrounding that since that code can be executed more than once | 12:58 |
mordred | Shrews: ah - k, cool | 12:58 |
mordred | Shrews: I don't think we necessarily need to worry too much about skipping node reuse in a multi-az world just yet - but it's worth noting | 12:58 |
Shrews | mordred: well... wait a sec | 13:00 |
Shrews | mordred: oh, geez. i TOTALLY told you the wrong thing | 13:01 |
mordred | awesome | 13:01 |
Shrews | because i suck | 13:01 |
mordred | yah - I was just about to say I think we want the code back where it was - because if it is there we'll only reset it if it hasn't been set and if we haven't grabbed a node from existing nodes | 13:02 |
mordred | although I do think a trap to make sure that node.az is in self.pool.azs if there is one would be a good idea - just in case someone changes the config and there are existing nodes floating out there | 13:03 |
Shrews | mordred: your PS2 was correct | 13:03 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Remove unused timing constants https://review.openstack.org/450770 | 13:03 |
Shrews | mordred: and that trap sounds like a good idea too | 13:04 |
mordred | k. patch coming | 13:05 |
Shrews | though then those nodes would never be used | 13:05 |
mordred | yah ... I think that's overthinking it :) | 13:06 |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Fetch list of AZs from nova if it's not configured https://review.openstack.org/450345 | 13:06 |
mordred | Shrews: I believe at this point we REALLY understand that patch | 13:07 |
Shrews | yes | 13:07 |
Shrews | sorry i confused you earlier | 13:07 |
mordred | Shrews: no - it was good - I grok the code in question better as a result :) | 13:09 |
Shrews | good. explain it to me now :) | 13:10 |
Shrews | mordred: btw, it seems our ansible friends are getting closer to gerrit style functionality via their bot: https://github.com/ansible/ansibullbot/pull/445 | 13:18 |
Shrews | X number of 'shipit' comments merge the PR | 13:19 |
mordred | Shrews: heh | 13:24 |
mordred | Shrews: well, we should maybe explain to them why we have both a +2 and a +A vote | 13:24 |
mordred | but also neat | 13:24 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Log return code on failed dib build https://review.openstack.org/450782 | 13:30 |
pabelanger | mordred: I'm going to forward port 358730 to feature/zuulv3 for nodepool. It is our server console patch | 13:30 |
pabelanger | to avoid the large changes in nodepool yesterday | 13:31 |
*** isaacb has quit IRC | 13:32 | |
mordred | pabelanger: oh - wait a sec | 13:33 |
mordred | pabelanger: that patch needs to be updated now that we have a new shade out which does not need the task_manager changes | 13:34 |
pabelanger | ack | 13:34 |
mordred | pabelanger: which is to say - yes, please do - but skip the part that patches task_manager | 13:35 |
mordred | pabelanger: and probably rebase it on top of https://review.openstack.org/450345 so that you pick up the requirements bump to 1.18.1 (which we'll release in just a few minutes) | 13:35 |
pabelanger | will do | 13:36 |
openstackgerrit | Dirk Mueller proposed openstack-infra/nodepool master: [WIP]: Add opensuse 42.2 DIB testing https://review.openstack.org/450045 | 14:01 |
tobiash_ | mordred: regarding https://review.openstack.org/450345, is the double '_' intended? In provider manager self.__azs = None | 14:03 |
mordred | tobiash_: you know - I should fix that : | 14:06 |
mordred | :) | 14:06 |
tobiash_ | didn't know if there's some python magic behind that ;) | 14:07 |
jeblair | there is some python magic -- "__foo" becomes _ClassName__foo, so you can use it to automatically avoid collisions with subclasses. | 14:26 |
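jeblair's point can be seen directly: a double-underscore attribute is rewritten at class-definition time to include the class name, so subclasses can't accidentally collide with it:

```python
class Provider:
    def __init__(self):
        self.__azs = None  # mangled to _Provider__azs

class Child(Provider):
    def __init__(self):
        super().__init__()
        self.__azs = ["az1"]  # mangled to _Child__azs: no collision

c = Child()
print(sorted(vars(c)))  # ['_Child__azs', '_Provider__azs']
```

Both attributes coexist on the instance, which is exactly the collision-avoidance jeblair describes; the cost is that the attribute is awkward to reach from outside the class, which is why a single underscore is the usual choice when mangling isn't wanted.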
Shrews | that's a good catch. easily missed | 14:33 |
mordred | remote: https://review.openstack.org/450345 Fetch list of AZs from nova if it's not configured | 14:55 |
mordred | updated | 14:55 |
mordred | you want to re +2 https://review.openstack.org/#/c/449140 ? | 14:56 |
Shrews | mordred: maybe we need to have the providermanager reconfigure() method reset self.__az ? | 14:59 |
Shrews | to your earlier point on the AZ list changing in the config | 14:59 |
mordred | Shrews: ++ | 15:06 |
Shrews | pabelanger: care to +2 https://review.openstack.org/449140 again? had to fix merge conflicts | 15:31 |
pabelanger | Shrews: +3 | 15:33 |
Shrews | thx | 15:33 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Docs: Remove cron references https://review.openstack.org/449140 | 15:41 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Docs: Correct availability-zones documentation. https://review.openstack.org/449147 | 15:42 |
openstackgerrit | Merged openstack-infra/nodepool feature/zuulv3: Docs: Remove refs to removed nodepool commands https://review.openstack.org/449152 | 15:43 |
mordred | Shrews: wait - so on the reconfigure thing ... | 15:52 |
Shrews | mostly thinking out loud on that | 15:53 |
mordred | self.__az is the cached list of az's from the cloud - I'm not sure we need to reconfigure cached clouds things on reconfigure - unless we want to blank the flavor cache too | 15:53 |
Shrews | mordred: i was thinking in the case the AZs were listed explicitly | 15:54 |
mordred | yah - that I think is going to get covered already, no? | 15:54 |
Shrews | self.__az won't change in that case, will it? | 15:54 |
Shrews | getAZs() would return the outdated list | 15:56 |
mordred | getAZs isn't tied to the configured list | 15:57 |
mordred | getAZs only returns the azs detected from the cloud | 15:57 |
mordred | self.pool.azs is the configured list | 15:57 |
Shrews | gah | 15:57 |
Shrews | ignore me | 15:58 |
* mordred hands Shrews a pie | 15:58 | |
Shrews | my brain is just totally off today | 15:58 |
* Shrews goes for lunch to attempt to reset his brain | 15:59 | |
pabelanger | so, question / bikeshed, if we are calling it nodepool-launcher, and our service is nodepool-launcher, would the binary we call also be called nodepool-launcher? | 16:08 |
SpamapS | I think we should call the binary Roger | 16:11 |
SpamapS | it's a lovely name | 16:11 |
jeblair | pabelanger: yes, i think we should call it nodepool-launcher (or Gerald) | 16:12 |
pabelanger | :_ | 16:13 |
pabelanger | :) | 16:13 |
mordred | I vote henry | 16:14 |
SpamapS | Roger Henry Gerald Nodepool Launcher III, Lord of High SSHia. | 16:16 |
pabelanger | I lol'd | 16:17 |
jeblair | i really want to manipulate the proc table so it shows up as that, but fungi would kill me | 16:18 |
clarkb | if you really want to get fun like that you could do what elasticsearch and other clusters do | 16:18 |
fungi | also not very portable across platforms | 16:18 |
clarkb | they randomly pull from a predefined list of themed names for naming their cluster nodes within the cluster | 16:18 |
clarkb | so each launcher could register with zk as gerald and ford and theodore and george | 16:19 |
jeblair | clarkb: oh nice | 16:19 |
fungi | (zeddmore, venkman, stantz, spengler...) | 16:21 |
clarkb | jeblair: I've configured our ES servers to use their hostnames as their names but by default its comic book characters? | 16:23 |
clarkb | something like that | 16:23 |
*** hashar has quit IRC | 16:25 | |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 16:29 |
*** rcarrillocruz has quit IRC | 16:33 | |
openstackgerrit | K Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Catch and log url pattern formatting errors https://review.openstack.org/450468 | 16:35 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 16:38 |
pabelanger | I'll update puppet-nodepool shortly | 16:46 |
SpamapS | hm | 16:48 |
SpamapS | so I'm messing with test_timer_sshkey | 16:48 |
SpamapS | it seems nothing writes .ssh_wrapper_$connection anymore | 16:48 |
jeblair | SpamapS: hrm, i thought the merger still did that? | 16:49 |
clarkb | jeblair: didn't we remove it because it was causing problems? | 16:49 |
clarkb | I think that happened at ptg | 16:49 |
SpamapS | jeblair: I see it being read. | 16:49 |
SpamapS | I don't see it being written. | 16:49 |
pabelanger | clarkb: I thought so too | 16:50 |
jeblair | http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/merger/merger.py?h=feature/zuulv3#n221 | 16:50 |
SpamapS | https://github.com/openstack-infra/zuul/blob/feature/zuulv3/zuul/merger/merger.py#L257-L260 | 16:51 |
SpamapS | that's never called | 16:51 |
SpamapS | oh no that's the method above | 16:51 |
jeblair | SpamapS: what seems more likely is this: | 16:51 |
jeblair | SpamapS: that because the merger isn't running pre-run merge checks on all changes yet, that doesn't occur during the test | 16:52 |
SpamapS | so the wrapper file isn't written anymore, but the ssh command is still calculated? | 16:52 |
jeblair | SpamapS: (however, the executor's merger will do that, but the test isn't looking for that) | 16:52 |
jeblair | if that's correct, then the two solutions are: 1) depend on the pre-merge check being added again, or 2) switch to checking for the executor merger's ssh wrapper | 16:53 |
SpamapS | So should we enable this test as part of implementing pre-run merge checks? | 16:53 |
jeblair | SpamapS: i'd stack it on top as a separate change | 16:55 |
jeblair | now is probably a good time for me to look at the test failures in 446275 | 16:56 |
SpamapS | where did that test even come from? it's not in master at all | 16:56 |
* SpamapS asks git | 16:56 | |
SpamapS | http://git.openstack.org/cgit/openstack-infra/zuul/commit/zuul/merger/merger.py?h=feature/zuulv3&id=da90a50b794f18f74de0e2c7ec3210abf79dda24 | 16:57 |
jeblair | SpamapS: https://review.openstack.org/430872 was to master | 16:59 |
SpamapS | yeah, the test isn't in master anymore | 16:59 |
jlk | SpamapS: I rebased my patch set on top of the Changish refactor and NullChange removal. There wasn't much interaction with it, a single patch. I haven't pushed it up yet tho because I'm coordinating with jesusaur on some other patches to rebase on top. | 16:59 |
SpamapS | jlk: Well that's better than "it blew the whole thing up". Anyway, they're in feature/zuulv3 now so your rebase is a safe one. :) | 17:00 |
jlk | yeah that's what I meant. I rebased on tip of feature/zuulv3 which had your patches in it | 17:01 |
jlk | mine were in merge conflict because of that. | 17:01 |
jeblair | SpamapS: http://git.openstack.org/cgit/openstack-infra/zuul/tree/tests/test_scheduler.py#n3080 | 17:01 |
SpamapS | zomg | 17:02 |
SpamapS | ignore me | 17:02 |
SpamapS | wow | 17:02 |
SpamapS | my master clone was a local clone | 17:02 |
jlk | geez clint. :D | 17:03 |
SpamapS | jeblair: I don't see an obvious story/task for adding pre-run merge checks to v3. Is it buried in something else? | 17:03 |
SpamapS | because would like to just move this one to blocked and mention what it's blocked on in the task comments. | 17:04 |
jeblair | SpamapS: https://review.openstack.org/446275 says 3468 | 17:05 |
SpamapS | oh already in flight, cool | 17:05 |
jesusaur | SpamapS: it's part of re-enabling test_build_configuration_conflict | 17:05 |
SpamapS | I can just stack on that.. :) | 17:05 |
SpamapS | still nothing writing that file | 17:10 |
jeblair | SpamapS: i'm not sure that change is sufficiently functional to do that yet | 17:10 |
SpamapS | Yeah now I see it's still a little explodey in its own tests. K | 17:11 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Re-enable test_timer_sshkey https://review.openstack.org/450897 | 17:15 |
SpamapS | kk, there's the test stacked on top. Does not work, but I'll just move the task to blocked and rebase once the parent starts passing | 17:16 |
jesusaur | SpamapS: I've been having trouble tracking down the failures in test_v3.py in the parent change, if you have some spare cycles I would appreciate some help on that | 17:17 |
SpamapS | jesusaur: sure! I'll switch gears into that now. It needs a rebase on feature/zuulv3, can we start from there? | 17:18 |
jesusaur | oh, yeah, I'll rebase it now | 17:18 |
SpamapS | Cool, need to walk my dog then I'll jump in to the failures | 17:19 |
openstackgerrit | K Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Perform pre-launch merge checks https://review.openstack.org/446275 | 17:20 |
jeblair | jesusaur, SpamapS: left comments on PS9 of 446275 | 17:24 |
jeblair | jesusaur, SpamapS: and also PS8 | 17:25 |
jeblair | because i was apparently moving around during a rebase :) | 17:25 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 17:36 |
jeblair | jesusaur: let me know if those make sense or not | 17:47 |
jesusaur | jeblair: thanks for the reviews, but I'm a little confused with the comment about wanting to schedule the merge in _processOneItem | 17:48 |
jesusaur | jeblair: don't we want the merge to have already been performed at that point so that didMergerFail would be accurate? | 17:48 |
jeblair | jesusaur: processOneItem runs frequently and iteratively -- it looks at every item and makes sure that it makes progress if needed. so it used to call prepareRef (which is the thing that triggered the merger), and then, the next time it got called for that item, if the merger was done, it would call launchJobs | 17:51 |
jesusaur | oh, I see, so we need to add a function similar to prepareRef that will return a boolean based on whether or not the merge has finished | 17:56 |
jesusaur | right now that's kind of implicitly coming from prepareLayout | 17:57 |
jeblair | jesusaur: right; likely some combo of the old prepareRef (which has the state machine in it) and the new scheduleMerge (which has the new stuff about files). they already have some overlap. | 17:58 |
jeblair | jesusaur: and yes -- most of the state machine stuff is in prepareLayout now. but should probably be moved into the prepareRef/scheduleMerge function. | 17:59 |
jesusaur | ok | 17:59 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 18:26 |
openstackgerrit | David Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Fix dev doc link in README.rst https://review.openstack.org/450922 | 18:29 |
openstackgerrit | K Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Perform pre-launch merge checks https://review.openstack.org/446275 | 18:49 |
jesusaur | SpamapS: I've pushed a new patchset that works a bit better :) | 18:51 |
jesusaur | except for pep8 :( | 18:52 |
SpamapS | jesusaur: cool, was just looking at jeblair's comments | 18:54 |
openstackgerrit | K Jonathan Harker proposed openstack-infra/zuul feature/zuulv3: Perform pre-launch merge checks https://review.openstack.org/446275 | 18:55 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: Fix dev doc link in README.rst https://review.openstack.org/450922 | 18:56 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 18:58 |
jesusaur | hrm, still getting a bunch of failures | 19:09 |
*** hashar has joined #zuul | 19:23 | |
*** hashar_ has joined #zuul | 19:24 | |
*** hashar_ has quit IRC | 19:24 | |
*** hashar has quit IRC | 19:24 | |
*** hashar has joined #zuul | 19:24 | |
pabelanger | mordred: jeblair: Shrews: here is something new, we are defaulting to ipv6 for dsvm jobs now: http://logs.openstack.org/70/450770/1/check/gate-dsvm-nodepool/03f6229/logs/screen-nodepool.txt.gz#_2017-03-28_16_17_35_899 | 19:47 |
pabelanger | and for some reason, we are not able to boot it properly | 19:48 |
pabelanger | also: http://logs.openstack.org/52/449152/3/check/gate-dsvm-nodepool/598f3e3/logs/screen-nodepool.txt.gz#_2017-03-27_21_24_13_734 | 19:54 |
Shrews | pabelanger: for the second thing, https://review.openstack.org/449737 | 19:56 |
Shrews | pabelanger: for the first thing, https://review.openstack.org/#/c/449705/ | 20:04 |
pabelanger | Shrews: thanks. I think the fix is to force ipv4 in clouds.yaml | 20:05 |
Shrews | pabelanger: yeah. also curious that the gate-dsvm-nodepool-src-nv jobs seem to be cut short now | 20:06 |
Shrews | they're finishing in under 20m | 20:06 |
pabelanger | Shrews: I have fixed that | 20:07 |
pabelanger | was a bug in JJB | 20:07 |
Shrews | oh good | 20:07 |
Shrews | pabelanger: do we need to update nl01 nodepool.yaml to force ipv4? | 20:07 |
pabelanger | Shrews: maybe? infracloud is ipv4 only | 20:08 |
pabelanger | but rax we will | 20:08 |
Shrews | so yeah then | 20:08 |
Shrews | oh, i see. v6 not available in infracloud. i guess we're good for now then | 20:09 |
pabelanger | ya | 20:10 |
pabelanger | RAX does support ipv6, however we still need to patch glean to support centos /fedora | 20:11 |
clarkb | Shrews: pabelanger you can test the ipv6 stuff using devstack since its enabled there by default | 20:16 |
clarkb | (and why we tripped over the ipv6_dhcp thing with glean recently) | 20:17 |
pabelanger | k | 20:18 |
pabelanger | I'm not sure why we fail to SSH right now | 20:18 |
pabelanger | trying to figure it out | 20:18 |
clarkb | pabelanger: oh! is it because neutron + nova say use ipv6 but then glean doesn't configure ipv6? | 20:47 |
pabelanger | yes, I think that is the issue | 20:48 |
pabelanger | it works in osic because dhcp | 20:48 |
pabelanger | but I don't think we have tested static | 20:48 |
pabelanger | clarkb: Wait, is it type=ipv6? | 20:49 |
pabelanger | or something else | 20:49 |
clarkb | pabelanger: the type is ipv6_dhcp (whcih is wrong) so my fix to glean was to skip it | 20:50 |
pabelanger | ya | 20:50 |
clarkb | pabelanger: but if shade interprets what nova/neutron are saying to mean "use ipv6" then we'd break | 20:50 |
pabelanger | we won't setup ipv6 then | 20:50 |
clarkb | and I think we recently merged a change to nodepool to stop setting that directly and rely on shade/occ | 20:50 |
pabelanger | because we expect ipv6 only | 20:50 |
pabelanger | yup | 20:50 |
clarkb | (guessing no one checked the integration job on that) | 20:50 |
clarkb | if its not too terrible I would revert for now | 20:51 |
pabelanger | ha | 20:51 |
clarkb | other option is to tell occ/shade in devstack to prefer ipv4 | 20:51 |
pabelanger | yes | 20:51 |
pabelanger | https://review.openstack.org/#/c/449705/ | 20:51 |
clarkb | but revert bceause its broken | 20:51 |
pabelanger | let me get clouds.yaml change in place | 20:51 |
clarkb | there is currently no way for nova + neutron + glean + nodepool to operate properly with ipv6 without modifying your configs | 20:52 |
pabelanger | OS_FORCE_IPV4 is a thing in os-client-config | 20:54 |
pabelanger | mordred: ^does that sound right? | 20:54 |
clarkb | worth noting this will also be a problem with rax and centos/fedora images | 20:55 |
pabelanger | Ya, its been on my list to add ipv6 to centos for a while | 20:55 |
pabelanger | I should break down and do that | 20:56 |
clarkb | oh and osic | 20:56 |
clarkb | thoughts on making that job voting as soon as its fixed? | 20:57 |
clarkb | the major churn should be over at this point aiui | 20:57 |
pabelanger | looks like we can set force_ipv4 or routes_ipv6_externally: false | 20:58 |
clarkb | I would set force_ipv4 as thats more direct | 20:59 |
clarkb | there is no ipv6 there despite what nova/neutron say (due to glean) | 20:59 |
mordred | clarkb, pabelanger: the change we merged I thought was only to the zuulv3 branch - but I think we should do force_ipv4 | 21:00 |
clarkb | mordred: yes I think that is correct | 21:00 |
pabelanger | k, I'll somehow sed clouds.yaml | 21:01 |
clarkb | pabelanger: you can use pyyaml to import it, add your key then spit it back out again | 21:01 |
clarkb | might be simplest to get correct that way | 21:01 |
pabelanger | k | 21:01 |
pabelanger | clarkb: actually, I can patch devstack for this | 21:08 |
pabelanger | it has an update_clouds_yaml.py already | 21:08 |
clarkb | pabelanger: but do we want to disable it in devstack globally? | 21:08 |
clarkb | its a glean issue end of the day | 21:08 |
pabelanger | clarkb: right, not change devstack but update the script and have us call it directly | 21:09 |
clarkb | ah gotcha | 21:09 |
pabelanger | Oh | 21:13 |
pabelanger | actually | 21:13 |
pabelanger | this is easier | 21:13 |
*** hashar has quit IRC | 21:15 | |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Force os-client-config to use ipv4 https://review.openstack.org/450983 | 21:16 |
pabelanger | clarkb: mordred: ^I think that is what we need | 21:16 |
mordred | pabelanger: that works? | 21:17 |
pabelanger | not sure | 21:17 |
pabelanger | I was reading os-client-config | 21:17 |
clarkb | there are like 3 clouds in the clouds.yaml devstack writes now | 21:18 |
clarkb | so it will only affect the last one I think | 21:18 |
* clarkb looks | 21:18 | |
pabelanger | I think client is global | 21:18 |
mordred | yah - the docs say that works | 21:18 |
mordred | although I had totally forgotten doing any such thing - let me test real quick | 21:18 |
pabelanger | otherwise we need routes_ipv6_externally: false | 21:19 |
pabelanger | for each cloud | 21:19 |
clarkb | oh thats its own top level thing gotcha | 21:19 |
clarkb | well its self testing | 21:19 |
clarkb | so we just gotta wait | 21:19 |
pabelanger | however, we could totally boot an ipv4 and ipv6 image for testing | 21:19 |
pabelanger | some day | 21:19 |
mordred | ah - neat | 21:20 |
mordred | yes - I remember writing this code now | 21:20 |
mordred | pabelanger: fwiw, force-ipv4 can also be set on a per-cloud basis | 21:20 |
mordred | pabelanger: setting it in the client stanza like that sets it as a global default so that any cloud stanzas that do not contain force-ipv4 will default to the value in the client stanza | 21:21 |
mordred | pabelanger: but thank you for reminding me that we'd made that global setting! :) | 21:21 |
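The global-default behavior mordred describes looks roughly like this in a clouds.yaml (cloud names and auth URLs here are made up for illustration):

```yaml
clouds:
  devstack:                 # inherits force-ipv4 from the client stanza
    auth:
      auth_url: http://localhost/identity
  othercloud:
    force-ipv4: false       # per-cloud setting overrides the global default
    auth:
      auth_url: http://example.com/identity
client:
  force-ipv4: true          # global default for all clouds above
```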
clarkb | as for fixing glean we need to watch what frickler's change to nova does and then update once people like the solution proposed | 21:21 |
mordred | turns out to have made this task much easier | 21:21 |
mordred | ++ | 21:22 |
mordred | and then I guess actually add support for centos yeah? | 21:22 |
clarkb | but looks like we'd change type=ipv6 to also match type=ipv6_slaac and separately add dhcp support | 21:22 |
clarkb | mordred: ya | 21:22 |
pabelanger | clarkb: which review is that, want to monitor | 21:23 |
clarkb | pabelanger: let me find it again | 21:23 |
clarkb | https://review.openstack.org/#/c/450297/1 that stack there | 21:24 |
pabelanger | maybe we should run a 3rdparty CI system :) | 21:25 |
pabelanger | and report back failures | 21:25 |
mordred | pabelanger: :) | 21:29 |
clarkb | you can use Depends-On with a glean patch and it should all be tested properly | 21:30 |
clarkb | but looks like things may still be in flux a little | 21:30 |
clarkb | they are trying to understand how to not break cloud init according to the bug | 21:30 |
pabelanger | if network['type'].startswith('ipv6') | 21:31 |
pabelanger | ya, we'll need to do that now too | 21:31 |
clarkb | no you have to be even more specific | 21:42 |
clarkb | bceause slaac and dhcpv6 are different /etc/network/interfaces.d/ configs | 21:43 |
pabelanger | neat | 21:48 |
pabelanger | ya, that is going to be fun /s | 21:48 |
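clarkb's point is that a blanket `startswith('ipv6')` is too coarse: slaac and stateful DHCPv6 need different interface configuration, so the handler has to match exact types. A hypothetical sketch of such a dispatcher (the type strings follow the nova metadata discussion above; glean's real code differs):

```python
def interface_mode(network):
    """Map a (hypothetical) network dict to an interface config mode."""
    t = network['type']
    if t == 'ipv6':          # static ipv6 addressing from metadata
        return 'static'
    if t == 'ipv6_slaac':    # router advertisements: minimal config needed
        return 'auto'
    if t == 'ipv6_dhcp':     # stateful DHCPv6: a different stanza again
        return 'dhcp'
    raise ValueError('unknown network type: %s' % t)
```

Each branch would emit a different `/etc/network/interfaces.d/` stanza, which is why the coarse prefix match isn't enough.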
jeblair | mordred, fungi, clarkb, jhesketh, pabelanger: i need to sign up cores to review the secrets stack. fungi has volunteered to at least review the 2 crypto-critical changes; not sure about the rest of the stack. i need at least one more for the whole stack. who's interested? | 21:48 |
jeblair | (it's really not bad; i just don't want to skimp on it) | 21:49 |
pabelanger | jeblair: I can give a once over again, but my crypto isn't the best | 21:49 |
jeblair | pabelanger: that's okay, there's a lot of zuul internal stuff too | 21:50 |
mordred | jeblair: yah - I will aso review it - but I'm not fungi, so I'd love as many of us as is reasonable to | 21:50 |
jeblair | jlk, SpamapS: i'd also be more comfortable if at least one of you +1d the stack | 21:50 |
jeblair | mordred: thanks -- and you have a head start since you reviewed the first one already (thanks!) :) | 21:50 |
jeblair | (i haven't forgotten the key size conversation, but i consider that something that's open to change until release, at least) | 21:51 |
jeblair | pabelanger, mordred, fungi, SpamapS, jlk, (anyone else): the stack starts here: https://review.openstack.org/406382 and should be ready to go. | 21:52 |
* jlk looks | 21:53 | |
pabelanger | I'll look this evening | 21:53 |
jeblair | pabelanger, mordred, fungi, SpamapS, jlk, (anyone else): note https://review.openstack.org/447087 is a refactor in the middle of the stack, so if you're going to leave a review comment of "it would be nice to put all these methods in one file..." be aware of that. :) | 21:54 |
jlk | haha | 21:55 |
jlk | good to know | 21:55 |
mordred | jeblair: what if I want you to just rename all of the methods? | 21:57 |
SpamapS | jeblair: thanks for the heads up :-D | 21:57 |
SpamapS | mordred: rename all the methods to variations of "theLobster()" ? | 21:57 |
SpamapS | like instead of prepareLayout() it should now be boilLobster() | 21:58 |
jeblair | do not boil gerald | 21:58 |
SpamapS | and instead of submitMerge() it should be heatWaterToBoiling() | 21:58 |
SpamapS | In all seriousness.. my high school friends and I wrote a C++ obfuscator that replaced all function and variable names with random adjectives before the words 'lobster', 'crab', or 'fork' | 21:59 |
mordred | jeblair: that sounds like a good bool - _should_boil_gerald = False | 21:59 |
mordred | SpamapS: ahahahaha | 21:59 |
SpamapS | #GetThoseNerds | 22:00 |
pabelanger | can people keep an eye on https://review.openstack.org/#/c/450983/ if all tests are green, plz merge! It will fix our nodepool dsvm jobs | 22:00 |
pabelanger | must eat now | 22:00 |
jeblair | clarkb, pabelanger: so the idea is do that until nova decides how to spell their ipv6 option, then fix glean, then revert that? | 22:02 |
jeblair | (also, is this having a production impact in osic?) | 22:02 |
clarkb | jeblair: ya | 22:02 |
clarkb | jeblair: no prod impact | 22:03 |
clarkb | jeblair: whatever change in nova is much newer than any of our ipv6 clouds | 22:03 |
jeblair | gotcha. so as long as they don't update.... :) | 22:03 |
clarkb | ya | 22:03 |
jeblair | someone send mudpuppy a cruise ticket | 22:03 |
clarkb | oh actually I know what it was | 22:06 |
clarkb | the nova change I linked to initially caused nova to start writing the info to config drive/metadata, before that it was explicitly opt-in | 22:07 |
clarkb | so in osic we had been relying on glean not configuring anything to do with ipv6 and likely just getting default use slaac and be happy from distro defaults there | 22:07 |
clarkb | so ya if they upgrade to pike we'd be in a world of hurt | 22:07 |
SpamapS | has anyone bothered to check if we're using the libyaml path in our test suite runs btw? | 22:10 |
SpamapS | getting up over 10 minutes for a unit test run, might be worth it as the pure python one is sometimes 400x slower | 22:11 |
SpamapS | s/400x/40x/ ... oops ;) | 22:11 |
clarkb | SpamapS: just add libyaml-dev to bindep | 22:12 |
clarkb | oh except if we grab a wheel not built against it then we may still not use it | 22:12 |
clarkb | but I would start there and then inspect the tox log | 22:12 |
SpamapS | clarkb: my past experience was that you also had to force in CSafeLoader in some cases. | 22:12 |
jlk | jeblair: general question on the stack, is there any work/thought done to prevent logging of the secret values anywhere? As a service provider, I would not like to accidentally leak my consumer's secrets. | 22:12 |
SpamapS | but maybe they fixed some of that | 22:13 |
jeblair | jlk: i agree. i believe by the end of the stack there is a model.Secret object with a __repr__ method which intentionally only includes the name, not the decrypted form. typically logging in zuul relies on either str or repr methods of data structure objects (like "something happened with %s" % (secret)) which should make it easy for a dev to do the right thing | 22:17 |
jlk | cool. | 22:17 |
SpamapS | jlk: I recall discussion in OpenStack about being able to hand oslo.log a list of strings to XXX out. | 22:17 |
SpamapS | don't know that it ever went anywhere | 22:17 |
jlk | It looks like secrets are global to the tenant? I'm wondering how that would work if, say, "github.com" is one tenant. | 22:18 |
jeblair | jlk: but you know what, the decrypted form is in a ".data" attribute, which is a bit... generic. if we renamed that to ".decrypted_secret_data" we could easily grep for all direct references to it. that's maybe worth doing? | 22:18 |
SpamapS | jlk: hah, I don't think that's how we're going to do things. | 22:18 |
jlk | jeblair: that might be worth doing, giving it some extra flavor might make future devs careful about what they do with that data. | 22:18 |
jlk | SpamapS: well, I didn't think we'd be writing layouts for every consumer, that'd be a TON of repeated code | 22:19 |
jeblair | yeah, that way we can seek out anything that touches the decrypted form, whether for logging or just "why is that using the decrypted form there?" kinds of things. | 22:19 |
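A minimal sketch of the pattern discussed above: a secret object whose `__repr__` exposes only the name, with the decrypted form kept under a loud, greppable attribute. The class shape and attribute names here are illustrative, not Zuul's actual `model.Secret` implementation.

```python
class Secret:
    """Holds a secret; repr intentionally never exposes the decrypted form."""

    def __init__(self, name, decrypted_secret_data):
        self.name = name
        # A deliberately verbose attribute name makes every direct use of
        # the decrypted form easy to find with grep.
        self.decrypted_secret_data = decrypted_secret_data

    def __repr__(self):
        # Only the name appears, so "%s" % secret in a log line is safe.
        return '<Secret %s>' % self.name


s = Secret('deploy-key', 'hunter2')
print('something happened with %s' % s)  # no secret material in the log
```

Because Zuul's logging typically formats objects via str/repr, this makes the safe behavior the default; leaking then requires explicitly reaching for `.decrypted_secret_data`.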
SpamapS | But even if secrets were global to all of github.com ... only zuul can decrypt | 22:19 |
jlk | SpamapS: sure, but that means everybody has to play nice with names, who gets to own which secret name? | 22:19 |
SpamapS | yeah you'd almost have to just enforce 1:1 with repo | 22:20 |
SpamapS | or at least namespace them | 22:20 |
jeblair | jlk: i think that's true for job names too, which is why i think you may want tenants. | 22:20 |
jeblair | basically, tenancy is the namespace operator | 22:20 |
SpamapS | jeblair: can jobs be shared across tenant though? | 22:21 |
SpamapS | Because like, we'll have a base-nodejs job that does all the pre/post node stuff.. | 22:21 |
jlk | okay this is a distraction that is not relevant to the patch review. | 22:21 |
SpamapS | well it does kind of speak to the use case of tenants and secrets | 22:22 |
jeblair | SpamapS: not internally, but via adding the repo to multiple tenants (which is a thing). | 22:22 |
jeblair | SpamapS: so a "common-project-config" repo can be added to every tenant. | 22:22 |
jlk | each tenant could point back to the same config repo | 22:22 |
jeblair | right | 22:22 |
jlk | makes sense. | 22:23 |
SpamapS | That seems like the opposite of a lot of duplicate code then. | 22:23 |
jlk | We're going to want to enforce some consistency across the whole of say github, like the pipeline names, behaviors. | 22:23 |
jeblair | SpamapS: yep, it's the *same* code over and over :) so i think it's good for the bonnyci setup. | 22:23 |
SpamapS | Would it make sense to have a 'domain' level that encapsulates multiple 'tenants' ? | 22:24 |
SpamapS | I haven't thought about what would be repeated though. | 22:24 |
jlk | Is there also a way to prevent a repo from overriding core behaviors? | 22:25 |
SpamapS | jlk: yeah, final | 22:25 |
jeblair | (it's possible once we're past the threshold of premature optimization, we may want to make sure that zuul's internal data structures are optimized for heavy repetition like that... but later) | 22:25 |
jeblair | SpamapS: re domain -- maybe, and probably not too hard, but my feeling is that we'll be better poised to examine that once we run into some problems we have trouble solving with tenants | 22:25 |
SpamapS | I'm also a fan of having a single namespace level, and then optimizing for composition within a namespace. | 22:27 |
SpamapS | I don't mind if every github.com tenant has a few lines of config repo references. | 22:28 |
SpamapS | and said config repos can set up the pipelines, base jobs, etc. | 22:28 |
jlk | yeah that's pretty reasonable. | 22:31 |
SpamapS | jlk: as long as the 'final' attribute is used properly so that we can keep users to a reasonable amount of conformity. | 22:33 |
jlk | Wonder how costly it'll be to iterate through all the tenants to find a place to deal with an event. | 22:34 |
jlk | or how big the in-memory config will be after reading in the same yaml X times, where X == number of tenants | 22:34 |
jeblair | jlk: yeah, that's the sort of thing i was alluding to earlier where we may want to optimize some stuff later | 22:36 |
jlk | yup, shelved until we have data. | 22:36 |
jeblair | (an emergency stop-gap solution if we hit performance problems before we can fix that would be to shard tenants across multiple zuul installations. ultimately, in a very broad installation, that may be necessary anyway, at least until zuulv4 [fully distributed schedulers]) | 22:39 |
SpamapS | jlk: the events would arrive at ${HOST}/${TENANT}/stuff | 22:40 |
jlk | SpamapS: well... that's not how it's set up right now | 22:40 |
SpamapS | jlk: I know. :-/ | 22:40 |
jlk | github is the first thing adding a webhook, and we've made it global | 22:40 |
SpamapS | but some day maybe we'll have a working GH integration and we can get those things done without asking everybody to update. :) | 22:40 |
jlk | mostly because we can't really do custom web hook URLs for every github consumer | 22:40 |
jlk | you get one URL I think. | 22:40 |
SpamapS | jlk: /tenant isn't much different than a header with a repo, but your point is definitely made: indexes will be needed at some level. | 22:41 |
SpamapS | jlk: I -1'd the first patch, pretty minor stuff tho... digging deeper into the stack now | 22:41 |
jlk | SpamapS: from the github side, we get 1 URL we can put in our Integration. It does not appear to be dynamic at all, it's a static URL. | 22:43 |
jlk | if Zuul is going to want things to be sorted into tenant buckets, we'd have to put something in front of zuul to ingest the github hook, to re-write the URL to land it in an appropriate tenant bucket. | 22:43 |
SpamapS | jlk: I think we can make the GH driver build and maintain a repo<->tenant index. To your original point: it will not be free. | 22:45 |
jlk | ah right, I see what you mean by doing it there. | 22:45 |
jlk | _that_ I think is kind of happening already? | 22:46 |
jlk | maybe not. I don't have any tests written that do multiple tenants and github. | 22:46 |
SpamapS | Yeah, sort of :) | 22:46 |
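The repo-to-tenant index SpamapS suggests could be as simple as a reverse map built when tenant configs load, so a single global webhook URL can route events without scanning every tenant. A hypothetical sketch (class and method names are illustrative, not Zuul's GitHub driver API):

```python
from collections import defaultdict


class RepoTenantIndex:
    """Maps a repository's full name to the set of tenants that include it."""

    def __init__(self):
        self._index = defaultdict(set)

    def add(self, repo, tenant):
        # Called while loading each tenant's project list.
        self._index[repo].add(tenant)

    def tenants_for(self, repo):
        # One webhook URL means every GitHub event lands here first;
        # this lookup replaces iterating over all tenants per event.
        return self._index.get(repo, set())


idx = RepoTenantIndex()
idx.add('openstack-infra/zuul', 'openstack')
idx.add('openstack-infra/zuul', 'bonnyci')
print(idx.tenants_for('openstack-infra/zuul'))
```

As noted in the discussion, maintaining this index is not free: it must be rebuilt or updated on every tenant reconfiguration.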
SpamapS | clarkb: looping back to libyaml ... I'm having a hard time getting a fast yaml on my system at all. | 22:51 |
clarkb | SpamapS: have a test yaml file I can use to check? | 22:51 |
SpamapS | one would think 'pip install pyyaml' on a box with libyaml-dev would suffice | 22:52 |
clarkb | I can try locally | 22:52 |
clarkb | SpamapS: ya thats what I expect | 22:52 |
SpamapS | for speed? I may | 22:52 |
SpamapS | just to see if you have libyaml support, just look for yaml.CSafeLoader | 22:52 |
clarkb | SpamapS: oh support at all you mean I can check that in a moment | 22:52 |
mordred | SpamapS: I do not get yaml.CSafeLoader locally | 22:53 |
SpamapS | also look in your yaml install... the .so would be there | 22:54 |
clarkb | neither do I | 22:54 |
SpamapS | /usr/lib/python2.7/dist-packages/_yaml.so | 22:55 |
*** dkranz has quit IRC | 22:55 | |
SpamapS | Ubuntu's python-yaml has that | 22:55 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Fix constructor arguments to source https://review.openstack.org/451102 | 22:55 |
SpamapS | >>> yaml.CSafeLoader | 22:55 |
SpamapS | <class 'yaml.cyaml.CSafeLoader'> | 22:55 |
SpamapS | and as a result ^^ | 22:55 |
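The check being done interactively above can be wrapped in a one-line helper; a small sketch, assuming only that PyYAML may or may not be importable:

```python
def has_libyaml():
    """Return True if PyYAML was built against libyaml (C loaders present)."""
    try:
        # CSafeLoader only exists when the _yaml C extension was compiled.
        from yaml import CSafeLoader  # noqa: F401
    except ImportError:
        return False
    return True


print('libyaml bindings available:', has_libyaml())
```

This is the same signal as looking for `_yaml.so` in the installed package, just from Python.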
mordred | yup | 22:55 |
clarkb | python setup.py --with-libyaml install | 22:56 |
mordred | SpamapS: so - I just got it from python setup.py | 22:56 |
clarkb | apparently necessary to get it | 22:56 |
mordred | install | 22:56 |
mordred | without --with-libyaml | 22:56 |
clarkb | mordred: but not pip install? | 22:56 |
SpamapS | IIRC there's a pypi module that is just yaml that enforces libyaml | 22:56 |
mordred | but pip install didn't do that for me - I _think_ because I had a wheel cached locally from before I had libyaml-dev | 22:56 |
mordred | testing now | 22:56 |
SpamapS | I thought you could get it just by building the extension when libyaml is linkable | 22:56 |
jeblair | let's all just use mordred's machine | 22:56 |
SpamapS | pip install I think is grabbing a wheel | 22:56 |
SpamapS | and the wheel is being overly generic for wheel reasons | 22:57 |
SpamapS | I'm sorry for this bikeshed-ish thing.. I am hoping it saves us a couple of minutes per full tox run | 22:57 |
mordred | SpamapS: nope - I think it's local wheel cache | 22:57 |
clarkb | error: option --with-libyaml not recognized | 22:57 |
mordred | SpamapS: I just deleted my pip cache and pip installed again | 22:57 |
clarkb | mordred: aha! | 22:57 |
clarkb | ya I bet thats it | 22:57 |
mordred | and it built against libyaml-dev | 22:58 |
SpamapS | ah so it gets cached wrong | 22:58 |
SpamapS | and not invalidated | 22:58 |
mordred | well - if it gets cached before you installed libyaml-dev - yeah | 22:58 |
clarkb | so adding libyaml-dev to bindep won't be enough you also have to bypass our wheel mirror | 22:58 |
mordred | ah - yah - the wheel mirror in the gate makes this fun | 22:59 |
mordred | /home/mordred/.cache/pip/wheels/2c/f7/79/13f3a12cd723892437c0cfbde1230ab4d82947ff7b3839a4fc/PyYAML-3.12-cp27-cp27mu-linux_x86_64.whl is the wheel that just built for me, fwiw | 22:59 |
mordred | clarkb: this seems like an interestingly difficult problem to solve generally for the gate | 23:00 |
jeblair | so, if the gate wheel mirror ran with libyaml-dev installed, we'd get a benefit? | 23:00 |
clarkb | is libyaml support used by default? could we get away with putting libyaml-dev on the mirror builders? | 23:00 |
clarkb | and not break everyone without libyaml installed? | 23:00 |
jeblair | (wheel mirror *build* i mean) | 23:00 |
SpamapS | I believe there's a way to do this by wrapping PyYAML in a pypi package that enforces libyaml-dev | 23:00 |
mordred | jeblair: we could - but it would mean any project installing pyyaml without libyaml in their bindep would get screwed I think | 23:00 |
SpamapS | I think somebody may have done it | 23:00 |
* mordred checks | 23:00 | |
SpamapS | but I can't seem to find it | 23:00 |
clarkb | mordred: thats the question | 23:00 |
clarkb | it's possible pyyaml gracefully degrades | 23:01 |
mordred | ow. it's VERY hard to get ubuntu without libyaml | 23:01 |
jeblair | mordred: want to apt-get remove pyyaml and see if your existing venv.... | 23:01 |
jeblair | ok :) | 23:01 |
mordred | uninstalling it uninstalls ubuntu-standard | 23:01 |
jlk | lolol | 23:01 |
clarkb | so maybe its fine then :) | 23:02 |
mordred | so - I think for the most part in the gate we're completely safe to build the ubuntu wheel mirror with libyaml-dev installed | 23:02 |
SpamapS | https://pypi.python.org/pypi/rtyaml | 23:02 |
mordred | and that we should - since it'll make the gate better | 23:02 |
jlk | "See, if you were just using a container and only installing what you wanted".... | 23:02 |
clarkb | is it in centos standard and fedora standard? >_< | 23:02 |
mordred | and us doing that in our wheel mirror should not impact anyone locally | 23:02 |
clarkb | jlk: its actually what we want | 23:02 |
clarkb | so we're good :) | 23:03 |
jlk | this is one of those fun things where you start to wonder if one use gets tested, and the other doesn't | 23:03 |
jlk | and one way has bugs, the other doesn't. | 23:03 |
jlk | and then we all go drink. | 23:03 |
clarkb | mordred: we'll also likely need to remove pyyaml from the existing wheel cache on the builder | 23:03 |
clarkb | but then we'll be good | 23:03 |
mordred | clarkb: yah | 23:03 |
SpamapS | There are things the C one can't do | 23:03 |
SpamapS | they're all things that you almost never want to do with YAML | 23:03 |
mordred | SpamapS: :) | 23:03 |
SpamapS | I wonder if this rtyaml is a better option long term. | 23:04 |
SpamapS | It would most likely break some things | 23:04 |
pabelanger | keep in mind, we disable wheel mirrors on pep8 jobs today | 23:06 |
clarkb | pabelanger: which is fine it will just install pyyaml without libyaml bindings | 23:07 |
clarkb | or you can list libyaml-dev and get them there too | 23:07 |
pabelanger | clarkb: right, moving forward we could add a playbook to opt-in / opt-out for jobs | 23:07 |
pabelanger | we could do that today actually | 23:07 |
SpamapS | Also the native python yaml loader creates a ton of redundant objects and leaks tons and tons of RAM | 23:07 |
SpamapS | I seem to have lost my benchmark scripts.. | 23:10 |
SpamapS | but I do have my "make giant YAML files" scripts and my big yaml files | 23:10 |
jlk | Forever tagging SpamapS as "Big YAML" in my mind. | 23:11 |
pabelanger | clarkb: jeblair: should I add a TODO about removing this once we've updated glean? https://review.openstack.org/#/c/450983/ | 23:15 |
clarkb | pabelanger: sure | 23:17 |
jeblair | ++ | 23:17 |
SpamapS | "I love it when you call me Big YA-ML Throw white space in the air, if you a true player" | 23:17 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 23:17 |
jlk | is status.json and its ilk all now tenant scoped? Is there no longer a global status? | 23:18 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Rename nodepoold to nodepool-launcher https://review.openstack.org/450877 | 23:19 |
openstackgerrit | Paul Belanger proposed openstack-infra/nodepool feature/zuulv3: Force os-client-config to use ipv4 https://review.openstack.org/450983 | 23:19 |
pabelanger | jlk: I believe so | 23:19 |
jeblair | jlk: there are some loose ends there, but that's the intent | 23:22 |
SpamapS | I think I have a patch that will make us use libyaml | 23:25 |
openstackgerrit | James E. Blair proposed openstack-infra/zuul feature/zuulv3: Add canonical hostname to source object https://review.openstack.org/451110 | 23:26 |
jeblair | jlk: ^ start of work we talked about last week. i think/hope i'm going to be able to do this in a bunch of small patches. | 23:27 |
jeblair | mordred: fyi ^ | 23:27 |
SpamapS | oy | 23:33 |
SpamapS | so far, configloader is unhappy with CSafeLoader | 23:33 |
jeblair | SpamapS: it's worth doing this work on the end of the secrets stack, where i made changes to configloader and yaml loading | 23:33 |
SpamapS | jeblair: Ah I hadn't got that far down. Good idea. | 23:34 |
SpamapS | http://paste.openstack.org/show/604586/ | 23:34 |
SpamapS | in case you're curious.. haven't debugged it yet, but all tests are basically exploding on that | 23:35 |
Shrews | always so much conversation after my EOD | 23:35 |
openstackgerrit | Clint 'SpamapS' Byrum proposed openstack-infra/zuul feature/zuulv3: Use libyaml if possible https://review.openstack.org/451113 | 23:36 |
SpamapS | Shrews: it's so much easier to talk without your superior intellect and education around. | 23:37 |
SpamapS | ^^ that's my (fairly broken still) patch to try and use libyaml if it's available. | 23:37 |
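The usual shape of such a patch is a guarded import that prefers the libyaml-backed C loader and falls back to pure Python. A sketch of that pattern (not the actual content of change 451113):

```python
# Prefer the libyaml-backed C loader when PyYAML was built against it;
# fall back to the pure-Python loader otherwise.
try:
    from yaml import CSafeLoader as SafeLoader
except ImportError:
    from yaml import SafeLoader

import yaml


def safe_load(stream):
    # All callers go through this helper so the loader choice is made once.
    return yaml.load(stream, Loader=SafeLoader)


print(safe_load('a: 1'))  # {'a': 1}
```

As the configloader traceback above suggests, the swap is not necessarily drop-in: code that subclasses or extends the pure-Python loader has to be taught about whichever base class was selected here.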
Shrews | SpamapS: that's blatantly false on so many levels | 23:39 |
SpamapS | mmmmm failing fast | 23:39 |
SpamapS | Ran 228 (+35) tests in 399.195s (-507.783s) | 23:39 |
SpamapS | FAILED (id=35, failures=142 (+133), skips=31) | 23:39 |
jlk | Is this semaphore change really part of the secrets stack? | 23:42 |
clarkb | SpamapS: I think thats voluptuous schema failing to apply conf because conf is None? | 23:46 |
clarkb | SpamapS: the getSchema is returning the schema object which is then called against with (conf) and thats what raises | 23:47 |
clarkb | oh "None", line 1 column 1 is the _start_mark value | 23:48 |
clarkb | so its breaking right away maybe? | 23:48 |
SpamapS | clarkb: dunno, I'm just now running with a debugger in line | 23:49 |
* jlk out. | 23:49 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!