kata-irc-bot | <christophe> I noticed yesterday that if I run `go mod tidy` on the runtime on `main`, it removes a large number of entries from `go.mod`, some of which were added recently: | 08:53 |
```diff
 github.com/containerd/cgroups v1.0.1
 github.com/containerd/console v1.0.2
 github.com/containerd/containerd v1.5.2
- github.com/containerd/cri v1.11.1 // indirect
 github.com/containerd/cri-containerd v1.11.1-0.20190125013620-4dd6735020f5
 github.com/containerd/fifo v1.0.0
 github.com/containerd/ttrpc v1.0.2
 github.com/containerd/typeurl v1.0.2
 github.com/containernetworking/plugins v0.9.1
 github.com/cri-o/cri-o v1.0.0-rc2.0.20170928185954-3394b3b2d6af
- github.com/docker/distribution v2.7.1+incompatible // indirect
- github.com/docker/docker v1.13.1 // indirect
- github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
 github.com/go-ini/ini v1.28.2
 github.com/go-openapi/errors v0.18.0
 github.com/go-openapi/runtime v0.18.0
 github.com/go-openapi/strfmt v0.18.0
 github.com/go-openapi/swag v0.19.5
 github.com/go-openapi/validate v0.18.0
- github.com/gogo/googleapis v1.4.0 // indirect
 github.com/gogo/protobuf v1.3.2
 github.com/hashicorp/go-multierror v1.0.0
 github.com/intel-go/cpuid v0.0.0-20210602155658-5747e5cec0d9
- github.com/juju/errors v0.0.0-20180806074554-22422dad46e1 // indirect
- github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8 // indirect
- github.com/juju/testing v0.0.0-20190613124551-e81189438503 // indirect
 github.com/kata-containers/govmm v0.0.0-20210622075516-263136e69ac8
 github.com/mdlayher/vsock v0.0.0-20191108225356-d9c65923cb8f
- github.com/opencontainers/image-spec v1.0.1 // indirect
 github.com/opencontainers/runc v1.0.0-rc93
 github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d
 github.com/opencontainers/selinux v1.8.0
@@ -44,7 +35,6 @@ require (
 github.com/prometheus/common v0.10.0
 github.com/prometheus/procfs v0.6.0
 github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8
- github.com/seccomp/libseccomp-golang v0.9.1 // indirect
 github.com/sirupsen/logrus v1.7.0
 github.com/smartystreets/goconvey v1.6.4 // indirect
 github.com/stretchr/testify v1.6.1
@@ -58,11 +48,7 @@ require (
 golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43
 golang.org/x/sys v0.0.0-20210324051608-47abb6519492
 google.golang.org/grpc v1.33.2
- gopkg.in/mgo.v2 v2.0.0-20180705113604-9856a29383ce // indirect
- gotest.tools v2.2.0+incompatible // indirect
 k8s.io/apimachinery v0.20.6
- k8s.io/klog v1.0.0 // indirect
- sigs.k8s.io/structured-merge-diff/v3 v3.0.0 // indirect
)
```
kata-irc-bot | <christophe> Can someone explain to me why? | 08:53 |
kata-irc-bot | <christophe> In particular, if I make a change to `go.mod`, should I have a commit in there for the `tidy` first, or do we really need these entries? | 08:54 |
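(Context for the question above; the module paths in the example are taken from the diff purely as illustrations.) `go mod tidy` keeps only the `require` entries needed to build the packages and tests of the main module, so entries added by hand, or left over from an older dependency graph, get dropped on the next tidy. `go mod why -m` shows whether a given module is still reachable from the build graph:

```console
# Ask why the main module needs a dependency; for entries that tidy
# removed, this is expected to report that the module is not needed.
$ go mod why -m github.com/docker/docker
# github.com/docker/docker
(main module does not need module github.com/docker/docker)

$ go mod why -m gopkg.in/mgo.v2
# gopkg.in/mgo.v2
(main module does not need module gopkg.in/mgo.v2)
```

If the entries really are unused, committing the `tidy` separately from the functional `go.mod` change keeps the review diff readable.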
kata-irc-bot | <christophe> @fidencio I think that you are familiar with our usage of Go modules? ^ | 08:55 |
kata-irc-bot | <fidencio> Folks, I need a review ASAP on https://github.com/kata-containers/kata-containers/pull/2197, in order to unblock our pipeline. And huge thanks to @fupan for nailing it down. | 10:55 |
kata-irc-bot | <fidencio> @samuel.ortiz, @bergwolf, @jakob.naucke, @james.o.hunt: ^^ | 10:55 |
kata-irc-bot | <christophe> I see this was already merged. I put a comment there; I'm a bit concerned about the root cause. | 12:21 |
kata-irc-bot | <eric.ernst> Missed the fun; thanks. | 14:26 |
kata-irc-bot | <eric.ernst> When I wrote the PodOverhead KEP, we included a section on updating the CRI API to pass overhead/pod resource details downward… :thread: | 15:24 |
kata-irc-bot | <eric.ernst> I didn’t implement this optional part of it, as it wasn’t really necessary for Kata. ie, we didn’t care about the overhead value, and we didn’t *need* the pod resources struct. | 15:25 |
kata-irc-bot | <eric.ernst> However… I think this could be convenient and simplify our sandbox handling. ie, we wouldn’t need to size the default memory to accommodate hotplugging a “potential” maximum container memory size. | 15:26 |
kata-irc-bot | <eric.ernst> We can still support updateContainer, and in place vertical pod autoscaling, but this could be useful. | 15:27 |
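(A hedged illustration, not the KEP text or the real CRI proto: the field and type names below are assumptions, written as Go structs for readability.) The idea is that if the sandbox-creation request carried the pod's overhead and aggregate container resources, a VM-based runtime could size the guest up front instead of defaulting and hotplugging later:

```go
// Hypothetical sketch only: these fields do not necessarily match the
// actual CRI message definitions.
package cri

// LinuxContainerResources mirrors, in simplified form, the per-container
// resource limits the CRI already carries.
type LinuxContainerResources struct {
	CPUQuota           int64 // CPU CFS quota in microseconds per period
	CPUPeriod          int64 // CPU CFS period in microseconds
	MemoryLimitInBytes int64
}

// LinuxPodSandboxConfig with two assumed additions.
type LinuxPodSandboxConfig struct {
	CgroupParent string

	// Overhead is the resource cost of the sandbox itself (the
	// PodOverhead value) -- assumed field.
	Overhead *LinuxContainerResources

	// Resources is the aggregate of the resource requirements of all
	// containers expected in the pod -- assumed field.
	Resources *LinuxContainerResources
}

// SandboxMemory sketches how a runtime like Kata could use this: boot
// the VM sized to pod resources plus overhead, falling back to today's
// fixed default when the information is absent.
func SandboxMemory(cfg *LinuxPodSandboxConfig, defaultBytes int64) int64 {
	if cfg == nil || cfg.Resources == nil {
		return defaultBytes
	}
	mem := cfg.Resources.MemoryLimitInBytes
	if cfg.Overhead != nil {
		mem += cfg.Overhead.MemoryLimitInBytes
	}
	if mem <= 0 {
		return defaultBytes
	}
	return mem
}
```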
kata-irc-bot | <eric.ernst> @bergwolf - curious if you have feedback on this? I’m looking at submitting a PR, and asking for feedback from sig-node/containerd folks at this point. | 15:28 |
kata-irc-bot | <fidencio> I'm trying to understand what problem you're trying to solve with that. | 15:46 |
kata-irc-bot | <eric.ernst> Not a functional gap; more of an optimization. | 15:49 |
kata-irc-bot | <jakob.naucke> A week ago, I set up an s390x Jenkins job on http://jenkins.katacontainers.io. I did this by cloning the ARM configuration. Don't do this if you ever want to set up a similar thing ,:) The build triggers (which are a bit hidden) were copied too, which led to GitHub sometimes showing the s390x result for the ARM CI -- and that result always failed due to some outstanding PRs. My apologies :pensive: to anyone who got a PR tested and merged over the last week -- at least it seems to me that you never got the :x: _solely_ because of that. I would especially like to apologise to @jianyong.wu because "your" CI looked worse than it should have. | 16:17 |
kata-irc-bot | <fidencio> :slightly_smiling_face: Learning it the hard way! | 17:33 |
kata-irc-bot | <fidencio> @eric.ernst, @archana.m.shinde, @bergwolf, @samuel.ortiz: folks, how do we feel about having the planned release tomorrow? | 17:44 |
kata-irc-bot | <fidencio> My honest opinion is that, even with CI back on track, we'd need a week or so to stabilise what's already in before we cut alpha1. Of course, we could cut alpha1 anyway and just make sure we don't cut -rc0 without things in order ... | 17:47 |
kata-irc-bot | <fidencio> But I'd like to hear your take on this. | 17:47 |
kata-irc-bot | <fidencio> cc @pradipta.banerjee @salvador.fuentes @christophe | 17:48 |
kata-irc-bot | <wmoschet> IMHO the problem with leaking processes should be properly fixed before the release (even if it's only an alpha1). | 17:52 |
kata-irc-bot | <eric.ernst> Do we have outstanding features we’re looking at still? | 17:54 |
kata-irc-bot | <fidencio> I don't think so | 17:56 |
kata-irc-bot | <fidencio> And even if we do, there are two weeks (according to the original plan) until we cut -rc0 | 17:56 |
kata-irc-bot | <eric.ernst> we can still take bug fixes post rc0, etc. | 18:00 |
kata-irc-bot | <eric.ernst> The leaking processes - that’s the QEMU cleanup? | 18:00 |
kata-irc-bot | <eric.ernst> That was seen in soak testing? | 18:01 |
kata-irc-bot | <fidencio> Yep, that's QEMU taking longer to shut down due to some recent change (I'm not entirely convinced it's the q35 switch, though) | 18:01 |
kata-irc-bot | <fidencio> I'm starting to think we should revert the patch, increase the timeout, and work on the QEMU side to better understand what may be causing it | 18:02 |
kata-irc-bot | <fidencio> killing QEMU and then having to worry about other processes (virtiofsd) is not optimal | 18:02 |
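(A minimal sketch of the approach being discussed, assuming hypothetical process handles and helper names; this is not the actual Kata runtime code.) Give QEMU a window to exit cleanly, SIGKILL only on timeout, and then reap virtiofsd explicitly rather than trusting it to notice the dead vhost-user socket:

```go
// Hedged sketch, not the real shutdown path in the Kata runtime.
package vmstop

import (
	"os"
	"syscall"
	"time"
)

// stopVM asks QEMU to quit, waits up to timeout, then SIGKILLs it.
// virtiofsd is cleaned up afterwards in either case, since the chat
// above reports it can outlive a SIGKILLed QEMU.
func stopVM(qemu, virtiofsd *os.Process, timeout time.Duration) error {
	// Graceful path: SIGTERM here; a real runtime would likely use
	// the QMP "quit" command instead.
	_ = qemu.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() {
		_, err := qemu.Wait()
		done <- err
	}()

	select {
	case <-done:
		// QEMU exited on its own within the timeout.
	case <-time.After(timeout):
		// Last resort: SIGKILL, which skips QEMU's cleanup paths.
		_ = qemu.Kill()
		<-done
	}

	// Reap virtiofsd explicitly instead of relying on it noticing
	// the closed socket.
	_ = virtiofsd.Kill()
	_, err := virtiofsd.Wait()
	return err
}
```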
kata-irc-bot | <fidencio> Wait ... | 18:08 |
kata-irc-bot | <fidencio> Having the virtiofsd processes leak when we SIGKILL QEMU is not exactly ... unexpected. | 18:18 |
kata-irc-bot | <fidencio> But let me move this to GitHub ... | 18:18 |
kata-irc-bot | <fidencio> Still ... @eric.ernst, so you're in favour of rolling the alpha1 regardless of the state we have in `main` now? | 18:19 |
kata-irc-bot | <eric.ernst> I think it needs to be sorted out. | 18:22 |
kata-irc-bot | <eric.ernst> that’s most important. | 18:22 |
kata-irc-bot | <eric.ernst> I wouldn’t waste time on a release otherwise, since that should be second priority. | 18:22 |
kata-irc-bot | <fidencio> I will wait for the input from the others ... | 18:30 |
kata-irc-bot | <fidencio> So, just to be completely fair ... https://github.com/kata-containers/kata-containers/issues/2198#issuecomment-876681283 I don't think that the "leak" should be a blocker for the alpha1 release | 19:15 |
kata-irc-bot | <fidencio> But I'd appreciate opinions there | 19:15 |
kata-irc-bot | <eric.ernst> :thinking: | 21:22 |
kata-irc-bot | <eric.ernst> …shouldn’t virtiofsd terminate once its socket connection closes? Ie, if QEMU terminates, I thought we’d expect virtiofsd to terminate as well. | 21:31 |
kata-irc-bot | <eric.ernst> When I saw leaks before, I _thought_ it was when QEMU failed to even start. | 21:31 |
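(For reference, a hedged illustration of the lifecycle being described; the flags match the C virtiofsd shipped with QEMU around this time, and paths, sizes, and $QEMU_PID are illustrative.) virtiofsd serves a vhost-user socket, QEMU connects to it, and virtiofsd is expected to exit once that connection goes away, even if QEMU dies with SIGKILL:

```console
# Start virtiofsd on a vhost-user socket.
$ virtiofsd --socket-path=/tmp/vhostqemu -o source=/srv/share &

# QEMU attaches to the socket via a vhost-user-fs device; shared
# memory is required for vhost-user.
$ qemu-system-x86_64 ... \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,chardev=char0,tag=share \
    -object memory-backend-memfd,id=mem,size=2G,share=on \
    -numa node,memdev=mem

# Killing QEMU (even with SIGKILL) closes its end of the socket, so
# virtiofsd should see EOF and exit -- the leak under discussion is a
# deviation from that expectation.
$ kill -9 "$QEMU_PID"
```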
kata-irc-bot | <eric.ernst> :thinking_face: | 21:32 |
kata-irc-bot | <eric.ernst> Curious how easy/hard it is to reproduce the soak issue, @wmoschet | 21:32 |