*** tosky has quit IRC | 00:28 | |
openstackgerrit | James E. Blair proposed zuul/nodepool master: Convert NodeLaunchRecord into NodeLauncher https://review.opendev.org/c/zuul/nodepool/+/779407 | 00:36 |
corvus | that gets us stats (and, interestingly, slots even better into the existing driver framework); main todo left now is quota handling | 00:37 |
corvus | actually, looks like that's done; i think that framework may be ready to port the azure driver over | 00:59 |
*** jamesmcarthur has joined #zuul | 01:05 | |
*** jamesmcarthur has quit IRC | 01:14 | |
*** jamesmcarthur has joined #zuul | 01:15 | |
*** jamesmcarthur has quit IRC | 01:17 | |
*** jamesmcarthur has joined #zuul | 01:17 | |
*** hamalq has quit IRC | 01:24 | |
*** jamesmcarthur has quit IRC | 01:25 | |
*** jamesmcarthur has joined #zuul | 01:28 | |
*** jamesmcarthur has quit IRC | 01:33 | |
*** jamesmcarthur has joined #zuul | 01:39 | |
*** jamesmcarthur has quit IRC | 02:00 | |
*** jamesmcarthur has joined #zuul | 02:01 | |
*** jamesmcarthur has quit IRC | 02:02 | |
*** jamesmcarthur has joined #zuul | 02:02 | |
*** jamesmcarthur has quit IRC | 02:04 | |
*** jamesmcarthur has joined #zuul | 02:05 | |
*** ikhan has quit IRC | 02:05 | |
*** jamesmcarthur has quit IRC | 02:09 | |
*** jamesmcarthur has joined #zuul | 02:25 | |
*** jamesmcarthur has quit IRC | 02:37 | |
*** jamesmcarthur has joined #zuul | 02:41 | |
*** jamesmcarthur has quit IRC | 02:45 | |
corvus | the error with those tests in the zk stack is the lack of a time database directory | 02:48 |
corvus | we should keep the testonly argument to scheduler for that | 02:49 |
openstackgerrit | James E. Blair proposed zuul/zuul master: Make ConnectionRegistry mandatory for Scheduler https://review.opendev.org/c/zuul/zuul/+/779086 | 02:52 |
openstackgerrit | James E. Blair proposed zuul/zuul master: Instantiate executor client, merger, nodepool and app within Scheduler https://review.opendev.org/c/zuul/zuul/+/779087 | 02:52 |
corvus | tobiash, swest, felixedel: the alternate stack that i pushed under "hashtag:sos" should be ready now; can you take a look on tuesday? | 02:54 |
*** jamesmcarthur has joined #zuul | 02:59 | |
*** ajitha has joined #zuul | 03:14 | |
*** jamesmcarthur has quit IRC | 03:22 | |
*** jamesmcarthur has joined #zuul | 03:27 | |
*** jamesmcarthur has quit IRC | 03:27 | |
*** jamesmcarthur has joined #zuul | 03:37 | |
*** jamesmcarthur has quit IRC | 03:49 | |
corvus | remote: https://review.opendev.org/c/zuul/nodepool/+/779420 WIP: add azure state machine driver [NEW] | 03:50 |
corvus | that's still early -- but that does create, delete, and cleanup leaks for real | 03:51 |
*** dpawlik6 has joined #zuul | 03:51 | |
*** jamesmcarthur has joined #zuul | 03:53 | |
*** Tahvok_ has joined #zuul | 03:53 | |
*** raukadah has joined #zuul | 03:54 | |
*** avass_ has joined #zuul | 03:55 | |
*** freefood has joined #zuul | 03:56 | |
*** icey_ has joined #zuul | 03:56 | |
*** paulalbertella has joined #zuul | 03:56 | |
*** openstackgerrit has quit IRC | 03:59 | |
*** reiterative has quit IRC | 03:59 | |
*** Tahvok has quit IRC | 03:59 | |
*** icey has quit IRC | 03:59 | |
*** avass has quit IRC | 03:59 | |
*** chandankumar has quit IRC | 03:59 | |
*** jkt has quit IRC | 03:59 | |
*** mhu has quit IRC | 03:59 | |
*** freefood_ has quit IRC | 03:59 | |
*** dpawlik has quit IRC | 03:59 | |
*** fbo has quit IRC | 03:59 | |
*** Tahvok_ is now known as Tahvok | 03:59 | |
*** dpawlik6 is now known as dpawlik | 03:59 | |
*** jkt has joined #zuul | 04:00 | |
*** jamesmcarthur has quit IRC | 04:03 | |
*** jamesmcarthur has joined #zuul | 04:14 | |
*** vishalmanchanda has joined #zuul | 04:16 | |
*** vishalmanchanda has quit IRC | 04:21 | |
*** vishalmanchanda has joined #zuul | 04:21 | |
*** ykarel has joined #zuul | 04:23 | |
*** saneax has joined #zuul | 04:48 | |
*** wuchunyang has joined #zuul | 04:49 | |
*** raukadah is now known as chandankumar | 04:51 | |
*** jamesmcarthur has quit IRC | 04:55 | |
*** jamesmcarthur has joined #zuul | 05:11 | |
*** jangutter has quit IRC | 05:15 | |
*** jangutter has joined #zuul | 05:16 | |
*** wuchunyang has quit IRC | 05:24 | |
*** iurygregory has quit IRC | 05:26 | |
*** jamesmcarthur has quit IRC | 05:32 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #zuul | 05:33 | |
*** jfoufas1 has joined #zuul | 05:34 | |
*** jamesmcarthur has joined #zuul | 05:35 | |
*** jamesmcarthur has quit IRC | 05:40 | |
*** jamesmcarthur has joined #zuul | 06:03 | |
*** wuchunyang has joined #zuul | 06:27 | |
*** wuchunyang has quit IRC | 06:53 | |
*** hashar has joined #zuul | 07:09 | |
*** vishalmanchanda has quit IRC | 07:36 | |
*** piotrowskim has joined #zuul | 07:59 | |
*** jamesmcarthur has quit IRC | 08:08 | |
*** jcapitao has joined #zuul | 08:12 | |
*** okamis has joined #zuul | 08:15 | |
okamis | Hello, what is the purpose of gearman in zuul? | 08:16 |
*** hashar has quit IRC | 08:19 | |
tobiash | okamis: it's the rpc protocol zuul-scheduler uses to talk to zuul-executors | 08:20 |
okamis | okay, do i understand it correctly that gearman also partially decides on the scheduling because it has a queue itself? | 08:22 |
*** rpittau|afk is now known as rpittau | 08:24 | |
*** jamesmcarthur has joined #zuul | 08:36 | |
*** jamesmcarthur has quit IRC | 08:43 | |
swest | corvus: lgtm, small comment on 779087 that should fix the failing test | 08:51 |
*** jpena|off is now known as jpena | 08:55 | |
*** paulalbertella is now known as reiterative | 08:57 | |
*** tosky has joined #zuul | 09:02 | |
*** hashar has joined #zuul | 09:25 | |
*** ajitha has quit IRC | 10:08 | |
*** jamesmcarthur has joined #zuul | 10:39 | |
*** jangutter has quit IRC | 10:41 | |
*** jangutter has joined #zuul | 10:42 | |
*** jangutter has quit IRC | 10:43 | |
*** jangutter has joined #zuul | 10:44 | |
*** iurygregory_ has joined #zuul | 10:45 | |
*** jamesmcarthur has quit IRC | 10:46 | |
*** iurygregory_ is now known as iurygregory | 10:46 | |
okamis | is this channel more active at other hours? | 10:53 |
okamis | I'm in UTC+1 | 10:53 |
*** icey_ is now known as icey | 11:02 | |
*** jangutter has quit IRC | 11:07 | |
*** jangutter has joined #zuul | 11:07 | |
*** hashar has quit IRC | 11:08 | |
*** nils has joined #zuul | 11:14 | |
avass_ | okamis: yeah it's usually active later | 11:17 |
*** avass_ is now known as avass | 11:17 | |
avass | okamis: most people are in america. exceptions are me, tobiash and zbr I believe | 11:18 |
okamis | guess they be sleeping. | 11:19 |
tobiash | okamis: yes, gearman decides on its own how the jobs are distributed | 11:19 |
tobiash | what's the background of your question? | 11:19 |
okamis | I'm curious about the scheduler; in my past experience we had issues with gearman either starving jobs, or we went round-robin, which is also not optimal | 11:19 |
okamis | I'm hoping it would be possible to write a custom job prioritizer, so I can use scarce resources effectively and also prioritize jobs belonging to the same change together to get good throughput | 11:21 |
tobiash | okamis: are you asking about zuul jobs or using gearman for your own software? | 11:22 |
okamis | we are using zuulv2, but I'm curious what zuul (v4?) can do for us | 11:22 |
tobiash | this is how the zuul executors distribute their load: https://opendev.org/zuul/zuul/src/branch/master/zuul/executor/server.py#L2538 | 11:23 |
tobiash | this makes the scheduling quite equally distributed among all executors | 11:24 |
okamis | How does delaying the request spread it among executors? | 11:27 |
*** jangutter has quit IRC | 11:30 | |
*** jangutter has joined #zuul | 11:30 | |
okamis | An even spread is nice, but I would like scarce resources to be used only for specific jobs when those are in the queue, and if there are none, to have them pick up arbitrary jobs | 11:31 |
tobiash | okamis: each executor delays the noop request with a backoff depending on its current running jobs | 11:33 |
okamis | Aaah, okay, that makes sense, so nodes that aren't running anything respond faster to signal they are available | 11:34 |
tobiash | yes | 11:34 |
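The mechanism tobiash describes can be sketched roughly as follows. This is a minimal illustration of the idea rather than Zuul's actual executor code; the function name and the per-build delay value are assumptions made up for the example.

```python
import time

def delay_before_accepting(running_builds, delay_per_build=0.5):
    """Illustrative sketch only: before asking for the next job, an
    executor sleeps proportionally to the number of builds it is already
    running.  Idle executors therefore answer first and pick up the work,
    which spreads load without a central scheduler making the choice."""
    time.sleep(running_builds * delay_per_build)
```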
okamis | Another topic I'm interested in is upgrading the worker hosts/executors through this queue system as well, so I don't have to remove worker hosts from CI. Is that something you have thought about? | 11:35 |
okamis | If I have a pool of 100 hosts, I want to say: after you finish job X, run this host-upgrade job. If I can prioritize jobs in the scheduler I could send those to the top of the queue | 11:36 |
*** jangutter has quit IRC | 11:42 | |
*** jangutter has joined #zuul | 11:42 | |
tobiash | okamis: is this still a zuulv2 question? Remember zuulv2 is eol long ago and zuulv3 works completely different | 11:44 |
tobiash | with zuulv5 we'll even get rid of gearman in the system | 11:44 |
okamis | not a zuulv2 question at least; latest and greatest makes sense :) | 11:45 |
tobiash | this question doesn't really fit into the zuulv3+ world since there are usually no queued jobs on the executors | 11:46 |
tobiash | since then the zuul-scheduler talks with nodepool and asks for nodes (this is where the waiting/queuing mainly happens), and once it has its nodes it schedules the job on them (which is most of the time without queuing) | 11:48 |
okamis | https://zuul-ci.org/docs/zuul/discussion/components.html#overview | 11:49 |
okamis | it talks through gearman when requesting nodes? | 11:50 |
tobiash | no, it talks through zookeeper to nodepool when requesting nodes | 11:50 |
*** jcapitao is now known as jcapitao_lunch | 11:52 | |
okamis | ah, so I don't know what the zookeeper API supports, but if the scheduler has the queue, would it be possible to write a custom queue to implement the features I mentioned? | 11:53 |
tobiash | zookeeper is just a distributed datastore where nodepool implements its own queue mechanism on top | 11:54 |
tobiash | this already supports priorities so it might already support what you need | 11:55 |
tobiash | the priority can be defined in the pipeline in zuul | 11:55 |
tobiash | https://zuul-ci.org/docs/zuul/reference/pipeline_def.html#attr-pipeline.precedence | 11:56 |
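A rough sketch of the kind of queue tobiash describes, i.e. priorities layered on top of ZooKeeper sequence nodes. This is only an illustration using the kazoo client, not nodepool's actual request handling; the paths, priority prefixes and payload are assumptions for the example.

```python
from kazoo.client import KazooClient

# Illustrative only: requests become sequence znodes whose names start with
# a numeric priority, so a consumer that sorts the children processes
# lower-numbered (higher precedence) requests first.
zk = KazooClient(hosts='localhost:2181')
zk.start()
zk.create('/example/requests/100-',              # "100" = high precedence
          b'{"node_types": ["ubuntu-focal"]}',
          sequence=True, makepath=True)
zk.create('/example/requests/300-',              # "300" = low precedence
          b'{"node_types": ["ubuntu-focal"]}',
          sequence=True, makepath=True)
for name in sorted(zk.get_children('/example/requests')):
    print(name)   # e.g. 100-0000000000 before 300-0000000001
zk.stop()
```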
okamis | It has happened that we need an emergency patch to go through quickly; I guess having a second check pipeline with high precedence would help | 12:01 |
avass | okamis: you can also promote changes in a queue I believe | 12:09 |
avass | okamis: https://zuul-ci.org/docs/zuul/reference/client.html?highlight=promote#configuration | 12:09 |
avass | okamis: https://zuul-ci.org/docs/zuul/reference/client.html?highlight=promote#promote | 12:09 |
*** ykarel_ has joined #zuul | 12:17 | |
*** ykarel has quit IRC | 12:20 | |
*** ykarel_ is now known as ykarel | 12:21 | |
okamis | Ah cheers, forgot about it as the client doesn't work on our zuul v2 version | 12:28 |
okamis | Another topic is the dependent gate. I read that testing with the assumption that changes might also fail is more efficient than just being optimistic and assuming all will pass. Is that something you have evaluated? | 12:30 |
*** jpena is now known as jpena|lunch | 12:34 | |
*** jamesmcarthur has joined #zuul | 12:42 | |
*** mhu has joined #zuul | 12:46 | |
*** jamesmcarthur has quit IRC | 12:50 | |
avass | okamis: I'm not sure I understand what you mean :) | 12:51 |
avass | the reason we test is that we expect that the changes might fail, no? | 12:54 |
okamis | Many changes are tested with the assumption that the changes ahead will pass. When a change fails, the following changes have to be shuffled around and rerun, which can be costly | 12:58 |
okamis | I realized I spoke too soon, without understanding in which scenarios it is too costly. | 12:58 |
avass | that's why it's often recommended to run the same tests in check and gate | 12:58 |
tobiash | okamis: did you watch the video on https://zuul-ci.org/ ? | 12:59 |
tobiash | that is a short but very good explanation of the gating | 12:59 |
avass | and the changes aren't shuffled around. the change that fails is removed from the queue and the ones behind it are re-enqueued in the same order (the logic is a bit more involved than that, but it gives you the right idea) | 12:59 |
okamis | https://eng.uber.com/research/keeping-master-green-at-scale/ I got it from this paper, there is a pdf on that page | 13:00 |
okamis | Seen the video. I didn't read the paper super in depth, so it might have scaling issues when adding 500 worker nodes or under some condition. Just wanted to mention it because it's very close to what zuul does | 13:02 |
avass | if you're running the same jobs in both check & gate then the only reason something will fail in gate is when two changes are incompatible. | 13:02 |
*** jcapitao_lunch is now known as jcapitao | 13:03 | |
*** hashar has joined #zuul | 13:04 | |
okamis | There are definitely flaky tests and intermittency issues in our CI sadly :( | 13:07 |
okamis | I will come back with some data later | 13:08 |
avass | oh yeah there's that too, which is a bit harder to avoid | 13:09 |
avass | okamis: a way to avoid that is to make sure your setup steps are in a 'pre-run' so the job gets retried if the pre-run fails. however if something fails in the tests themselves (in a 'run' playbook) the change will fail | 13:11 |
*** sduthil has quit IRC | 13:11 | |
*** sduthil has joined #zuul | 13:12 | |
avass | okamis: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.attempts | 13:13 |
*** vishalmanchanda has joined #zuul | 13:16 | |
avass | okamis: you could also tell ansible to retry specific tasks if they're prone to intermittent errors | 13:19 |
*** jangutter has quit IRC | 13:34 | |
*** jangutter has joined #zuul | 13:34 | |
*** jpena|lunch is now known as jpena | 13:36 | |
*** jangutter has quit IRC | 13:44 | |
*** jangutter has joined #zuul | 13:44 | |
*** ikhan has joined #zuul | 13:59 | |
okamis | okay. | 14:06 |
okamis | I do hope you guys will peek a bit at that paper, because they have a speculative approach to see if it will pass or not :) | 14:08 |
*** jangutter has quit IRC | 14:16 | |
*** jangutter has joined #zuul | 14:17 | |
avass | okamis: I might take a look later :) | 14:28 |
mordred | I like that they say the zuul approach doesn't scale when we're already operating at higher throughput rates than they are. :) | 14:32 |
*** okamis has quit IRC | 14:32 | |
mordred | ok - so the magic beans as to how they "predict" which speculative paths are more or less likely to fail is related to SubmitQueue being integrated with a particular build system | 14:33 |
mordred | they then analyze the sub-tasks of the build system as part of their analysis (in their case Buck, but they mention Bazel as well) | 14:33 |
mordred | that's hard to generalize for a system designed to do integration testing of arbitrary workloads and languages | 14:34 |
mordred | the conflictanalyzer seems like an interesting idea if it's feasible given an environment | 14:37 |
*** okamis has joined #zuul | 14:39 | |
okamis | What throughput have you got? I'm assuming Uber talks about a single project and not the sum of many | 14:39 |
mordred | yeah - because they chose a monorepo organization. comparing their results against a single repo in opendev wouldn't be an apples to apples comparison. the more appropriate comparison would be to compare their single repo vs, say, all of the interrelated repos of openstack. my throughput comment was more anecdotal than analysis - they said "thousands of changes per day" and I normally think of our load in terms of "thousands of jobs per hour". :) ... it's | 14:46 |
mordred | an interesting paper | 14:46 |
mordred | (although I'm totally still on first-caffeine of the day) | 14:46 |
corvus | opendev's zuul has peaked at about 2000 jobs per hour at the limit of its donated cloud resources; there are even larger private zuul installations. | 14:47 |
*** jamesmcarthur has joined #zuul | 14:47 | |
mordred | yah | 14:48 |
*** jamesmcarthur has quit IRC | 14:52 | |
*** jamesmcarthur has joined #zuul | 14:52 | |
okamis | No doubt you guys win there in total jobs per hour, but they surely improved the efficiency with dependent gate. | 14:55 |
corvus | okamis: perhaps, i don't have jobs-per-change-merged handy. but as mordred pointed out, that approach is dependent on a ci system narrowly tailored to the software under test. that approach can not be applied to the general case. | 14:57 |
avass | I suppose to compare the systems you really need to check something like: (changes/resources)*confidence | 14:58 |
mordred | they definitely do some interesting things - thanks for sharing the paper. I think the main win is in what they call the conflict analyzer. there are also some other things they're able to do by being closely aligned with the underlying build system. as corvus mentions, I don't think those are as readily applicable to the general case. that said - I've long been a proponent of standardized tooling, and I think the uber paper here is a great explanation | 14:59 |
mordred | of the power of having all of your teams use the same thing :) | 14:59 |
avass | mordred: I agree that having a standardized system can be really nice when you need to optimize the system, but the dev in me says it wants to use the latest tooling for everything :) | 15:02 |
okamis | Page 11 should be of interest to you; it mentions that speculate-all performs better than zuul's optimistic approach, which is generic | 15:02 |
okamis | I retract my previous statement, in some conditions it is better | 15:03 |
corvus | avass: latest version of zuul for everyone :) zuul's job content is continuously deployed, so devs do get the latest everything :) | 15:03 |
okamis | any chance you guys can implement (changes/resources)*custom_confidence? Then you can hardcode custom_confidence=1 | 15:07 |
corvus | it's worth noting that zuul currently has three queue manager implementations, each describing a slightly different way of handling dependent changes; we can add more if we find a use for them. | 15:17 |
okamis | Wow, that's nice to hear, so I guess the code architecture makes it easy to add more cases :starstruck: | 15:18 |
corvus | i'm not sure i'd use the word "easy", but we do try to avoid backing ourselves into a corner :) | 15:19 |
avass | one that some people I know think is missing is queueing up a bunch of changes and testing the last one, then merging all of them if that succeeds, or doing a sort of binary search with tests | 15:20 |
okamis | Just want to mention also that we had a lot of intermittency, so we modified zuul to not rerun already-passed jobs so rechecks would be much faster in check. Something you have considered? | 15:20 |
okamis | avass that also sounds very nice :) | 15:21 |
*** hashar has quit IRC | 15:28 | |
tobiash | avass: I've also thought about adding a batch parameter to the dependent pipeline manager such that it saves some builds in between | 15:37 |
clarkb | one thing I've wondered about batching is how do you decide which groups of changes to batch. Do you use a count, a timeout, both? I think some other systems rely on humans to manually construct the batch | 15:41 |
clarkb | if you accept the downsides with bisectability and breaking rolling release models batching would likely be a good way to save resources. Just have to sort out the mechanism for collecting changes | 15:42 |
*** ykarel has quit IRC | 15:43 | |
clarkb | also looking at their chart for probability of conflicting changes makes me wonder if they also have communications problems between teams | 15:45 |
mordred | clarkb: of course they do! they have teams :) | 15:45 |
clarkb | (what level that manifests in I don't know could be in person, api stability/docs, etc) | 15:45 |
clarkb | heh ya | 15:45 |
clarkb | the graph says 15 concurrent android changes have a 50% chance of conflict | 15:46 |
clarkb | that seems really really high (remember the CI system can discard trivial conflicts so they must only be looking at conflicts in functionality) | 15:46 |
*** ikhan has quit IRC | 15:48 | |
avass | clarkb: I think it's either on a timer or number of changes or a combination of both yeah | 15:48 |
okamis | A conflict is when changes touch the same function, I think; it's not necessarily bad, but there is a high risk after merging that it's not working as intended. | 15:48 |
clarkb | okamis: right, but let's say you have 15 people working together at the same company on the same software and even the same function. They should be communicating | 15:48 |
clarkb | 50% probability of a conflict there says to me that there isn't enough communication | 15:49 |
clarkb | and that doesn't necessarily mean meetings all day, but stronger api contracts, stability assertions in libraries, etc can go a long way ime | 15:50 |
avass | the problem is that what they "should" do is often not what they will do :) | 15:50 |
okamis | clarkb: If you and I both need to update function FOO, then we should both be able to do it in different changes. The uber thing will just lower the confidence that it will merge correctly. And I think that is very reasonable | 15:51 |
*** jangutter_ has joined #zuul | 15:51 | |
clarkb | okamis: yes, but the zuul approach is also very reasonable imo. We communicate and stack the changes appropriately | 15:52 |
clarkb | essentially we try to drive practices that drive the confidence up rather than accepting it will be terrible. | 15:52 |
*** jangutte_ has joined #zuul | 15:53 | |
okamis | clarkb: Uber doesn't necessarily conflict with that. | 15:54 |
*** jangutter has quit IRC | 15:54 | |
okamis | Say I'm doing feature A and modifying function Foo, and my colleague does feature B and modifies function Foo too. We can communicate that. But the conflict analyzer will regardless see that there is an increased risk of mistakes | 15:55 |
clarkb | yup they are not exclusive, it just feels weird to optimize for the suboptimal. But maybe they are more efficient that way | 15:55 |
okamis | why is it suboptimal? | 15:56 |
clarkb | okamis: because the developers could collaborate and stack the work explicitly to avoid the conflicts | 15:56 |
*** jangutter_ has quit IRC | 15:56 | |
okamis | I don't understand that stacking thing to avoid conflicts; can you make up a scenario? | 15:57 |
*** hashar has joined #zuul | 15:57 | |
clarkb | our tools tell us when we are conflicting and we talk about it. I did this just yesterday with the ansible shell type work and corvus' zk work | 15:57 |
okamis | what tools informs you? | 15:57 |
clarkb | okamis: gerrit | 15:57 |
okamis | of merge conflicts? | 15:58 |
clarkb | yes | 15:58 |
okamis | I don't know what uber is using, but a function can be modified by 2 parties without having merge conflicts | 15:58 |
clarkb | this is true, also zuul won't enqueue subsequent changes that merge conflict | 15:59 |
clarkb | (which avoids the reset cost entirely) | 15:59 |
clarkb | okamis: but in theory you could run the conflict analyzer ahead of CI and force discussions to happen rather than waiting for when we want to merge stuff | 16:00 |
okamis | I think the discussion is moving to improving something else now | 16:01 |
clarkb | yes communication :) | 16:01 |
okamis | sure | 16:01 |
clarkb | I was just thinking that a conflict rate of 50% for 15 in flight changes seems really really high | 16:02 |
clarkb | because 15 people working on 15 changes that touch the same code should be able to coordinate | 16:02 |
clarkb | I do think some conflict is likely unavoidable, we are human afterall. Just that specific published number seems high | 16:03 |
okamis | the same logic can also mean (I'm a python developer) touching the same module | 16:03 |
okamis | I don't know if it's high; I belonged to a team of 7, but we didn't develop one single thing, we were doing CI/CD, so we touched many things | 16:05 |
clarkb | note in my evaluation I'm not considering a coordinated stack of changes to the same function as a conflict | 16:05 |
clarkb | they can still operate on the same code, just in an explicit manner that acknowledges some sort of order (to resolve the conflict) | 16:06 |
okamis | Yeah, I don't know if it's high; if it's one product then it maybe is reasonable, because it scales fast per developer. It's like the birthday paradox, right | 16:07 |
okamis | but how do you guys resolve it if it doesn't warn of merge conflicts? I would rather just press a button and get an answer than speculate | 16:08 |
clarkb | yup, I'm suggesting that the conflict detection might be of better use pushed earlier in the change lifecycle | 16:09 |
clarkb | I think catching conflicts that are more complex than a text merge conflict would be great, but I'd want that in code review early on ? | 16:09 |
clarkb | (and then the CI system can still query its status, similar to how merge conflicts are handled) | 16:09 |
okamis | Yeah, so you want gerrit to have a new feature right? | 16:10 |
okamis | If it was in the gerrit interface it would be cool: showing changes that modify the same files or functions as yours | 16:10 |
clarkb | right | 16:11 |
okamis | yeah, that I think I would agree on, probably very possible as you can query the changes through gerrit api | 16:12 |
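A minimal sketch of the kind of Gerrit API query being discussed, assuming Gerrit's change query endpoint and its file: search operator; the helper function is hypothetical and this is not an existing Zuul or Gerrit feature.

```python
import json
import requests

def open_changes_touching(gerrit_url, project, path):
    """Illustrative only: list open changes in a project that touch a
    given file, so reviewers could be warned about potential conflicts."""
    query = 'status:open project:%s file:"%s"' % (project, path)
    resp = requests.get('%s/changes/' % gerrit_url, params={'q': query})
    # Gerrit prefixes JSON responses with ")]}'" to guard against XSSI,
    # so strip the first line before parsing.
    return json.loads(resp.text.split('\n', 1)[1])

# e.g. open_changes_touching('https://review.opendev.org',
#                            'zuul/zuul', 'zuul/scheduler.py')
```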
fungi | there could even be an external solution, for example a periodic zuul job which runs an analysis of open changes for a project and then updates some information somewhere (via a change comment, findings tab, separate interface, whatever) to inform reviewers when different changes under development are touching the same areas | 16:18 |
fungi | that could serve as a reminder for them to coordinate better with one another on those particular changes | 16:18 |
clarkb | aha I think they explain it in another portion of the paper. "This is due to the fact that the build graph on the iOS monorepo is very deep (i.e., only a handful of leaf-level nodes) resulting in a large number of conflicts among changes. Consequently, the speculation graph has few independent changes that can execute and commit in parallel. Therefore, we expect substantially better improvements when | 16:19 |
clarkb | using the conflict analyzer for repositories that have a wider build graph." | 16:19 |
clarkb | sounds like the repo itself induces conflicting changes | 16:19 |
clarkb | whereas something like openstack is probably comparatively wide considering we've explicitly split it up into multiple repos and so on | 16:19 |
fungi | "multiple" being hundreds | 16:20 |
clarkb | oh yup and they talk about proactively communicating the conflicts to devs as well. | 16:21 |
clarkb | and suggest that monorepo change counts make that difficult | 16:22 |
*** openstackgerrit has joined #zuul | 16:22 | |
openstackgerrit | Merged zuul/zuul master: Fix possible race in _getChange https://review.opendev.org/c/zuul/zuul/+/758424 | 16:22 |
clarkb | I disagree where they suggest it encourages developers to rush their code to avoid conflicts. I think what we more frequently see is explicit stacking to resolve the conflicts, then working to land them from the bottom up | 16:23 |
okamis | im heading out, thx all | 16:28 |
fungi | perhaps a similar model is the linux kernel, where there are several layers of commit aggregation which happens before branches are pulled to the main tree. people with familiarity of or working in a particular area of the repository are forced to coordinate their work, and the owners of those parts of the tree then have to coordinate with one another | 16:28 |
*** jfoufas1 has quit IRC | 16:29 | |
*** okamis has quit IRC | 16:29 | |
*** jamesmcarthur has quit IRC | 16:40 | |
*** jamesmcarthur has joined #zuul | 16:41 | |
*** jcapitao has quit IRC | 16:47 | |
*** jamesmcarthur has quit IRC | 16:47 | |
*** rpittau is now known as rpittau|afk | 17:05 | |
*** hashar has quit IRC | 17:07 | |
*** jamesmcarthur has joined #zuul | 17:11 | |
*** saneax has quit IRC | 17:12 | |
*** jamesmcarthur has quit IRC | 17:15 | |
*** jamesmcarthur has joined #zuul | 17:15 | |
*** bhavikdbavishi has joined #zuul | 17:23 | |
*** jangutter has joined #zuul | 17:26 | |
*** jangutte_ has quit IRC | 17:30 | |
tobiash | zuul-maint: it would be great if you could put the spec for enhancing regional executors onto your review list: https://review.opendev.org/c/zuul/zuul/+/663413 | 17:51 |
*** jpena is now known as jpena|off | 18:02 | |
*** nils has quit IRC | 18:06 | |
*** bhavikdbavishi has quit IRC | 18:08 | |
*** bhavikdbavishi has joined #zuul | 18:14 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Move fingergw config to fingergw https://review.opendev.org/c/zuul/zuul/+/664949 | 18:15 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Route streams to different zones via finger gateway https://review.opendev.org/c/zuul/zuul/+/664965 | 18:15 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Support ssl encrypted fingergw https://review.opendev.org/c/zuul/zuul/+/664950 | 18:15 |
openstackgerrit | Merged zuul/zuul master: Make repo state buildset global https://review.opendev.org/c/zuul/zuul/+/738603 | 18:22 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Support ssl encrypted fingergw https://review.opendev.org/c/zuul/zuul/+/664950 | 18:29 |
*** hamalq has joined #zuul | 18:30 | |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Make reporting asynchronous https://review.opendev.org/c/zuul/zuul/+/691253 | 18:34 |
tristanC | tobiash: does the tls finger protocol work with the finger client? i wonder if dropping the finger protocol support would make things easier | 18:40 |
tobiash | tristanC: you mean gnu finger? | 18:43 |
tristanC | tobiash: yes, when the user connects to the stream with gnu finger | 18:44 |
tobiash | I don't think gnu finger works with tls, but anyway this protocol is as easy as it can get | 18:45 |
tobiash | Connect, send build id, stream data | 18:45 |
clarkb | correct, finger doesn't do ssl/tls at all aiui | 18:45 |
tobiash | tristanC: if you want a tls capable terminal streaming client we'd probably need to add that to zuul-client | 18:46 |
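The protocol tobiash summarizes ("connect, send build id, stream data") is simple enough to sketch in a few lines. This is an illustration rather than Zuul's client code; the CRLF terminator and the default port are assumptions for the example.

```python
import socket

def stream_console(host, build_uuid, port=7900):
    """Illustrative only: connect, send the build UUID terminated by a
    newline, then print the console stream until the remote side closes."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(('%s\r\n' % build_uuid).encode('utf8'))
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            print(chunk.decode('utf8', 'replace'), end='')
```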
tristanC | tobiash: thus i wonder if we shouldn't drop the fingergw and only use the web service? | 18:48 |
tobiash | websocket is way more complicated for streaming between executors and zuul-web | 18:48 |
tristanC | i also noticed we don't actually need websocket and we could use plain http with EventSource | 18:49 |
tristanC | (which would work using `curl -N`) | 18:51 |
tobiash | getting rid of websocket would be compelling for some users since that creates problems with some reverse proxies | 18:52 |
tristanC | i think so too, it would be easier to manage | 18:53 |
*** hashar has joined #zuul | 18:53 | |
tristanC | we can keep (tls) finger between zuul-web and executor, and only support plain http between user and zuul-web | 18:54 |
corvus | we can also keep non-tls finger for end-users. that costs nothing. | 18:55 |
*** jamesmcarthur has quit IRC | 18:56 | |
corvus | though if the "curl -N" experience is just as good or better, then i could see us dropping it for simplicity and improved UX | 18:57 |
corvus | but maybe let's see it in action first | 18:57 |
avass | had to look up evensource and it seems that it only allows 6 open connections to the same url across all tabs for some reason | 19:03 |
fungi | eventsource? i'm not finding much about anything called evensource | 19:06 |
tristanC | fungi: it's https://developer.mozilla.org/en-US/docs/Web/API/EventSource | 19:06 |
avass | fungi: yeah eventsource, typo :) | 19:07 |
fungi | yep, okay i did find that. thanks | 19:07 |
tristanC | avass: 6 sounds acceptable? | 19:07 |
avass | tristanC: some people don't close tabs | 19:08 |
tobiash | and some people likely want to open 10 streams at once and cycle through them | 19:08 |
corvus | "You are about to close 7 windows with 344 tabs." actual message from this weekend. | 19:08 |
tobiash | wow | 19:08 |
corvus | i declared tab bankruptcy and started over | 19:09 |
*** hamalq has quit IRC | 19:09 | |
avass | what do you even do with that many tabs? :) | 19:09 |
corvus | nothing | 19:09 |
*** hamalq has joined #zuul | 19:09 | |
corvus | just open new ones | 19:09 |
tristanC | tobiash: avass: we could use the same trick as in the status page, and stop the stream when the tab doesn't have the focus | 19:10 |
avass | maybe that's good enough | 19:12 |
clarkb | corvus: I suffer this ailment as well | 19:12 |
fungi | i've been able to brutally close out tabs i don't strictly need for the past year now. it's a challenge though | 19:13 |
fungi | i realized i'd been using open browser tabs as a very lazy to do list, and started to get better about putting them on my actual to do list instead | 19:13 |
clarkb | I found a plugin that allowed me to set a tab limit. I hit the limit and discovered I couldn't open a new tab to adjust the plugin settings up more and rage uninstalled it once | 19:13 |
*** bhavikdbavishi has quit IRC | 19:24 | |
corvus | tristanC: i think 6 would be ok. there are 2 use cases i can think of that would change a little bit. one is a user opens a console window for a long running job, switches away for a while, then comes back to check on it. as long as the job is still running, there's no behavior change. if the job has finished, then there will be a behavior change since the result won't be visible. that could be | 19:44 |
corvus | mitigated by supplying a nice link to the current log or build location. | 19:44 |
corvus | tristanC: the other case is a little more specialized: sometimes i open ~10 builds at once to try to catch some random behavior in the act (say, some post-log upload failure). that would be limited to 6 now, and i'd have to make sure all 6 windows are open and visible. that's still probably sufficient for that use case, but it's something to be aware of. | 19:45 |
corvus | i don't think we need to design for the second case, that's esoteric, and as long as we have a procedure for an expert user to follow, i think it's fine. the first case is probably more typical and we should design a good ux for it. | 19:46 |
clarkb | keeping 6 windows open and visible doesn't play nice with my tiling window manager workflow. Not the end of the world though | 19:48 |
clarkb | (it would clutter up the window space) | 19:48 |
avass | corvus: you could also work around it by running 6 firefox windows and 6 chrome windows :) | 19:49 |
clarkb | avass: you'd need 6 firefox profiles I think | 19:49 |
clarkb | as they all operate on the same set of resource limits within a profile iirc | 19:49 |
avass | yeah but one browser would be limited to 6 connections, so 6 connections per browser | 19:50 |
avass | you can also increase that value in some settings and configure it to be tab local as well from what I understood | 19:51 |
avass | you can set 'network.http.max-persistent-connections-per-server' for firefox. | 19:54 |
fungi | for streaming 10 different build consoles at the same time, i think i'd use terminals and finger (or the mentioned curl stream) anyway | 20:04 |
fungi | which would not be subject to any javascript limitations | 20:05 |
avass | yeah I'm more concerned about users being confused why their tab doesn't show any log output because they've maxed out the number of connections they can have at once | 20:11 |
avass | speaking of the build console, can we reduce the number of "Waiting on logger" messages that get sent? https://review.opendev.org/c/zuul/zuul/+/777887 :) | 20:14 |
openstackgerrit | Albin Vass proposed zuul/zuul master: Reduce amount of 'Waiting on logger' messages sent https://review.opendev.org/c/zuul/zuul/+/777887 | 20:16 |
avass | I got the wrong scope on that variable | 20:16 |
fungi | clarkb: ^ i seem to remember you had an opinion on that too | 20:17 |
clarkb | oh yes I should review that one, thanks. I've got a number of changes to review after lunch today | 20:25 |
*** irclogbot_3 has quit IRC | 20:31 | |
*** irclogbot_2 has joined #zuul | 20:34 | |
*** sassyn has joined #zuul | 20:37 | |
sassyn | hi all | 20:37 |
sassyn | good evening. | 20:37 |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Log openstack requests https://review.opendev.org/c/zuul/nodepool/+/775797 | 20:43 |
mordred | corvus, tristanC: the limit of six is only when not using http/2 | 20:43 |
avass | zbr: I've gotten my rust compilation down to 2min and the entire build from 30min to below 6min with the zuul-cache. So I'd call that a successful experiment | 20:43 |
sassyn | consider this: I have a repo named RepoX on my gerrit server. I commit a patch to RepoX. RepoX is an untrusted repo configured in Zuul and has a .zuul.yaml file configured. The .zuul.yaml runs JobX, and then JobY, JobZ and JobL. JobY, JobZ and JobL all depend on JobX. My question is as follows: I want JobY, for example, to only run if the | 20:43 |
sassyn | patch touched the file foo in RepoX, while JobZ will only run if the patch touched the file bar in RepoX. I saw there is an option in the job called files, but I'm not sure how I should configure it? If I set JobY with the setting files: foo, will this work? | 20:43 |
avass | sassyn: hi! | 20:43 |
sassyn | avass: Hi dear friend how are you? | 20:44 |
openstackgerrit | Tobias Henkel proposed zuul/nodepool master: Log openstack requests https://review.opendev.org/c/zuul/nodepool/+/775797 | 20:44 |
mordred | so perhaps eventsource over http v1 for easy support of curl (curl does have an --http2 option, but now things are getting maybe more complex) - and maybe eventsource over http/2 for clients (like browsers) that support it? | 20:44 |
clarkb | sassyn: if you set the files config on each of the jobs that should work | 20:45 |
sassyn | OK. Thank u | 20:47 |
tristanC | mordred: is there an http2 server library available for python yet? | 20:48 |
tristanC | from what i understand you need a solid runtime to manage the different channels, and it seems like the python asyncio implementation is not so popular | 20:52 |
tobiash | cherrypy has an open issue for http2: https://github.com/cherrypy/cherrypy/issues/1276 | 20:53 |
mordred | I read a thing about just having your apache/nginx do the http/2 upgrade for you | 20:55 |
mordred | I don't know if that would be an improvement over the websocket module in apache/nginx - but maybe since it's all the same port the firewall issues would be lower | 20:56 |
mordred | and - considering that the eventsource does work over http v1 - the http/2 proxy upgrade could just be an option for deployers that should otherwise be transparent? | 20:57 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: WIP: replace console-stream websocket with event stream https://review.opendev.org/c/zuul/zuul/+/779581 | 20:59 |
mordred | tristanC: +54,-255 | 21:01 |
tristanC | mordred: i haven't tested how cherrypy handles a long running generator, but if it does it efficiently, then we can drop that stream manager logic | 21:03 |
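A minimal sketch of the kind of long-running generator endpoint under discussion, assuming CherryPy's response.stream option; the follow_build_log() stub stands in for the real console log source, and this is not the change tristanC pushed.

```python
import time
import cherrypy

def follow_build_log(uuid):
    """Stand-in for the real log source; yields a few demo lines."""
    for i in range(5):
        yield '%s: line %d' % (uuid, i)
        time.sleep(1)

class ConsoleStream:
    @cherrypy.expose
    def stream(self, uuid):
        # With response.stream enabled, CherryPy sends each yielded chunk
        # to the client as it is produced instead of buffering the body.
        cherrypy.response.headers['Content-Type'] = 'text/event-stream'

        def emit():
            for line in follow_build_log(uuid):
                # Server-Sent Events: each event is "data: ..." plus a
                # blank line, which plain `curl -N` prints as it arrives.
                yield ('data: %s\n\n' % line).encode('utf8')
        return emit()
    stream._cp_config = {'response.stream': True}

if __name__ == '__main__':
    cherrypy.quickstart(ConsoleStream())
```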
tristanC | mordred: using apache/nginx sounds good to me if we are ok with the extra dependency | 21:04 |
corvus | i'm not sure i'm okay with that | 21:05 |
corvus | i'd rather us make it work with http1 | 21:05 |
corvus | i think it's worth considering implementing the auto-shut-off on the client side over http1, and removing that restriction on http2. i'm assuming the client can tell. | 21:06 |
tobiash | it should work with http1, but as I understand it, it's easy to set the reverse proxy into http2 mode to have both worlds | 21:06 |
mordred | but you'd still want to have the javascript client support the auto-shut-off so that it worked well if an admin did not deploy an http/2 proxy | 21:08 |
tobiash | yea | 21:08 |
corvus | yeah, i'd rather not add an extra deployment burden if we don't need to. i mean, part of the rationale here is to make deployment easier. so plain-old-http1 proxy should be a use case we support. also supporting http2 and additionally eating more data seems fine. | 21:08 |
mordred | also - the upgrade notes of "you need to add this new proxy layer while you stop caring about the websocket proxy" are less friendly than "you can just stop caring about the websocket proxy" | 21:09 |
mordred | corvus: ++ | 21:09 |
*** jamesmcarthur has joined #zuul | 21:13 | |
tristanC | it may not be possible to detect http2 on the client side, so perhaps we could keep both endpoints and have a toggle to enable one or the other | 21:13 |
*** tflink_ has joined #zuul | 21:13 | |
*** tflink has quit IRC | 21:14 | |
corvus | tristanC: admin toggle exposed in api/info? | 21:14 |
clarkb | will apache/nginx have similar connection count problems between the backend and themselves? | 21:14 |
clarkb | or is that purely a browser thing? | 21:14 |
corvus | i think that would be okay, but if it is very complicated, we might want to consider the idea that auto-limiting the number of simultaneous streams in both cases may be universally a good idea and friendly to the server operator :) | 21:15 |
avass | clarkb: it's a browser thing | 21:15 |
tristanC | https://developer.mozilla.org/en-US/docs/Web/API/PerformanceResourceTiming/nextHopProtocol is the javascript thing that is still under draft | 21:16 |
tristanC | corvus: we don't even need it in api/info, the endpoint url is embedded in the status page content, so if the admin picks `event-stream`, then the status page would have the correct links | 21:17 |
openstackgerrit | James E. Blair proposed zuul/nodepool master: Add python-logstash-async to container images https://review.opendev.org/c/zuul/nodepool/+/778793 | 21:17 |
corvus | tristanC: keep websocket and event-stream? | 21:17 |
corvus | i assumed you were suggesting having the admin toggle between limiting the number of connections or leaving it unlimited. | 21:18 |
tristanC | corvus: well if you want http1 compatibility, then it seems like websocket is more appropriate | 21:18 |
*** tflink_ has quit IRC | 21:18 | |
corvus | tristanC: oh, so you're not in favor of auto-shutoff? | 21:18 |
tobiash | I'd be in favor of having event-stream xor websocket actually | 21:20 |
tristanC | corvus: not sure that is actually possible, e.g. a tab can't tell how many other tabs are opened | 21:20 |
corvus | tobiash: ++ i agree i don't think we should have 2 implementations | 21:20 |
corvus | tristanC: the suggestion was to have it stop when the user leaves the tab, like the status page. | 21:20 |
corvus | so there would be a max of 1 console stream from a browser | 21:21 |
tristanC | corvus: and i don't know what is the actual limitation, so perhaps we have to close and re-open, resulting in a brand new stream | 21:21 |
corvus | (but if the user switched back, we could resume, so it could be semi-transparent) | 21:21 |
corvus | tristanC: yeah, but if we re-open, we can still ask the remote side to skip the first (x) bytes | 21:21 |
corvus | the only downside i see is if the user switches back after the job is finished, then we can't resume; we have to just send them to either the log url or the build page. | 21:22 |
tristanC | corvus: well if we are comfortable with having only one stream active at a time, then that sounds possible | 21:22 |
*** tflink has joined #zuul | 21:23 | |
corvus | tristanC: yeah, i at least think it's worth considering. i'm not 100% sure, but based on thinking about it a little over lunch :), i think it's a trade-off i'd be willing to make. | 21:23 |
tobiash | zuul-maint: this is a tiny executor lifecycle bugfix: https://review.opendev.org/c/zuul/zuul/+/777694/ | 21:34 |
tobiash | currently the executor doesn't exit on graceful shutdown if it has been paused or governed already | 21:35 |
tobiash | and this is a small but I think important doc fix since sql reporters are deprecated now: https://review.opendev.org/c/zuul/zuul/+/777638/ | 21:37 |
clarkb | I'm looking at replacing our nodepool launchers and in the process have wondered if I can start a new launcher on a different host with the same hostname using the same provider config (but set max-servers on the old one to 0 and max-servers to valid number on the new one) | 21:37 |
clarkb | it appears this won't work because the launcher id uses the hostname as part of the launcher id value | 21:37 |
clarkb | it also uses the pid, but our pids are fairly static because we launch everything in containers | 21:38 |
clarkb | I'm wondering if I can make that value random (uuid4) and if so do I need to keep it around like the builder does? | 21:38 |
clarkb | another option may be to try and use the fqdn? | 21:38 |
clarkb | reading the code I think it is ok for different launchers to operate on the same provider as long as they have different launcher ids. Which would imply it is ok for a restarted launcher to register with a new id | 21:39 |
tobiash | clarkb: we always do a rolling restart with an overlap with two launchers | 21:39 |
clarkb | tobiash: using the same provider name but different hostnames? | 21:40 |
tobiash | but I guess ours get unique launcher ids due to pods | 21:40 |
clarkb | ya, but that gives more reinforcement to my reading it should be safe to just use a random value or at least a more unique value | 21:40 |
tobiash | yes, same provider name | 21:40 |
clarkb | since that should be what you are getting out of your pods | 21:40 |
clarkb | (I expect that it doesn't reuse names anyway) | 21:40 |
openstackgerrit | Tobias Henkel proposed zuul/zuul master: Gracefully handle non-existent label on unlabel https://review.opendev.org/c/zuul/zuul/+/775329 | 21:44 |
clarkb | looks like we don't recreate pool workers unless you delete the provider from config entirely (this isn't something that gets recreated if you change a provider setting) | 21:45 |
clarkb | which means that if we set a uuid4 as part of the name instead of hostname-pid, that should be stable for the entirety of the process life. Which is also all we can guarantee using a pid | 21:45 |
clarkb | the fact that containers give us a fairly stable pid doesn't mean that that value would be stable across process restarts | 21:46 |
clarkb | all that to say I think using uuid4 instead would be fine. maybe hostname-uuid so that it is easier to identify them but we'd get away from stable pids | 21:46 |
clarkb | or as an alternative switch socket.gethostname with socket.getfqdn. Though I think this still suffers issues from containers because the containers could all have the same hostname and also the same pid due to pid namespacing | 21:49 |
clarkb | (though in opendev's case we run a container per host so that would work for us) | 21:49 |
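A sketch of the id formats being weighed here, purely as an illustration of the trade-off and not the patch clarkb pushes below; the exact format string is an assumption.

```python
import os
import socket
import uuid

# hostname-pid can repeat across container restarts, since both the
# hostname and the namespaced pid tend to come back the same.
pid_based_id = '%s-%s' % (socket.gethostname(), os.getpid())

# hostname plus a per-process random component is unique for the life of
# the process and stays unique across restarts, while keeping the hostname
# visible for identification.
uuid_based_id = '%s-%s' % (socket.gethostname(), uuid.uuid4().hex)

print(pid_based_id, uuid_based_id)
```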
tobiash | tristanC: added a question to https://review.opendev.org/c/zuul/zuul/+/776287/ | 21:56 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Zuul Cache role with s3 implementation. https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 21:56 |
avass | added example in the docs ^ | 21:57 |
openstackgerrit | Clark Boylan proposed zuul/nodepool master: Uniquely identify launchers https://review.opendev.org/c/zuul/nodepool/+/779616 | 22:01 |
clarkb | I'll WIP that but pushed it up so others can see what I'm talking about more concretely (I don't think I need to update the tests to match the new format but I should and haven't done that yet) | 22:02 |
corvus | clarkb: lgtm; while you're in there, it also might be nice to put the uuid first or last (if we do it last, then sorting them becomes useful) | 22:18 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Zuul Cache role with s3 implementation. https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 22:18 |
corvus | i think the "random" part of that was in the middle just because we added the pool name onto the end later iirc | 22:18 |
clarkb | corvus: good idea | 22:19 |
openstackgerrit | Albin Vass proposed zuul/zuul-jobs master: Zuul Cache role with s3 implementation. https://review.opendev.org/c/zuul/zuul-jobs/+/764808 | 22:24 |
openstackgerrit | Merged zuul/nodepool master: Add python-logstash-async to container images https://review.opendev.org/c/zuul/nodepool/+/778793 | 22:24 |
avass | there we go. I've added some better documentation and I think I've worked out all the quirks I've encountered while using the zuul-cache :) | 22:25 |
openstackgerrit | Clark Boylan proposed zuul/nodepool master: Uniquely identify launchers https://review.opendev.org/c/zuul/nodepool/+/779616 | 22:29 |
clarkb | corvus: ^ I'll pull the WIP now I guess | 22:29 |
corvus | clarkb: typo see comment | 22:33 |
clarkb | too much copy pasta, thanks | 22:34 |
openstackgerrit | Clark Boylan proposed zuul/nodepool master: Uniquely identify launchers https://review.opendev.org/c/zuul/nodepool/+/779616 | 22:35 |
corvus | gotta admit, there was a moment there where i was like "is this a new python3.42 dict assignment syntax?" | 22:36 |
clarkb | I just got lost in the difference between object to dict and dict to objcet :) | 22:37 |
clarkb | and yy p was easay | 22:37 |
clarkb | also I can't type ^ see above for evidence | 22:37 |
clarkb | I think we can restart our launchers with that landed, ensure everything is happy, then try the easy mode rollout of new launchers | 22:37 |
corvus | again, i just assumed i was missing out on the lingo. kk. ymmv. yy. | 22:38 |
clarkb | corvus: yy is vi(m) for yank the line and p for put the yank buffer | 22:38 |
clarkb | I copied the object version assignment into the dict then got all sideways making pep8 line lengths happy | 22:38 |
corvus | i love that the terms are backwards from emacs (you kill the line (into the kill ring), then yank it back into the buffer) | 22:39 |
clarkb | also I had a really early start today (for me) and my brain is probably not working so well as a result | 22:39 |
openstackgerrit | Tristan Cacqueray proposed zuul/zuul master: WIP: replace console-stream websocket with event stream https://review.opendev.org/c/zuul/zuul/+/779581 | 22:41 |
*** hashar has quit IRC | 22:53 | |
openstackgerrit | Merged zuul/zuul master: Catch exception when double unregistering merge jobs https://review.opendev.org/c/zuul/zuul/+/777694 | 23:16 |
*** jamesmcarthur has quit IRC | 23:17 | |
*** jamesmcarthur has joined #zuul | 23:17 | |
openstackgerrit | Merged zuul/zuul master: Include database requirements by default https://review.opendev.org/c/zuul/zuul/+/777245 | 23:24 |
openstackgerrit | Merged zuul/zuul master: ansible: ensure we can delete ansible files https://review.opendev.org/c/zuul/zuul/+/775943 | 23:24 |
mordred | corvus: I find, for that reason, that it's best to not try to associate vim commands with concepts. that said ... | 23:27 |
mordred | clarkb: yy does not yank anything in vi for me - I use dd for that purpose? | 23:27 |
clarkb | mordred: oh maybe yy is a vim ism then | 23:27 |
corvus | i've done the dd before | 23:27 |
clarkb | yy is like dd without removing the line | 23:27 |
mordred | OH! | 23:27 |
mordred | I didn't know that | 23:27 |
mordred | lookie there | 23:27 |
* mordred learned a new vi today | 23:28 | |
fungi | yy is copy, dd is cut, essentially | 23:28 |
mordred | to do copy, I always just cut, then paste, then paste | 23:29 |
mordred | but clearly that's silly | 23:29 |
mordred | ooh - and a single y works in a visual block | 23:29 |
corvus | the good news is you already know how to use ed, the standard unix text editor! | 23:31 |
corvus | y is also yank in ed | 23:32 |
corvus | though x is put, not p | 23:32 |