kata-irc-bot | <meng.mobile> Why not ship the kernel, initrd, and other guest assets as container images? As I understand it, for each Kata deployment the hypervisor uses the fixed kernel and initrd versions shipped with the release. I am wondering why they are not shipped as container images, allowing users to switch between kernel versions easily; e.g. a user who needs a kernel with a GPU driver could pack the kernel with that driver. When | 13:43 |
---|---|---|
kata-irc-bot | running a container, the pod/container would be annotated to say that a particular kernel image is needed. The Kata runtime would then pull down that kernel image, mount it, and pass the kernel/initrd files as parameters to the hypervisor. Does it already work this way today? If yes, where can I find the docs for configuring it? | 13:43 |
kata-irc-bot | <fidencio> Users can use the kernel image they want, @meng.mobile, we just don't ship specialised kernels with kata-deploy. There are two ways you can use your own kernel image, for instance. 1. Change the configuration file: https://github.com/kata-containers/kata-containers/blob/5b6e45ed6c3896f69adfe688fe858c0c69b0b42d/src/runtime/config/configuration-qemu.toml.in#L16 2. Enable annotations in | 14:07 |
kata-irc-bot | https://github.com/kata-containers/kata-containers/blob/5b6e45ed6c3896f69adfe688fe858c0c69b0b42d/src/runtime/config/configuration-qemu.toml.in#L44 and then set the kernel path via annotations: https://github.com/kata-containers/kata-containers/blob/5b6e45ed6c3896f69adfe688fe[…]0b42d/src/runtime/virtcontainers/pkg/annotations/annotations.go | 14:07 |
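The two options above can be sketched against the QEMU configuration file; only the `kernel` and `enable_annotations` keys come from the linked configuration-qemu.toml.in, and the kernel path is illustrative:

```toml
# Sketch of src/runtime/config/configuration-qemu.toml (path is illustrative).
[hypervisor.qemu]
# Option 1: point the runtime at a custom kernel image directly.
kernel = "/opt/kata/custom/vmlinux-gpu"

# Option 2: allow the kernel path to be overridden per pod via annotations.
enable_annotations = ["kernel"]
```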
kata-irc-bot | <fidencio> err, I mean: set the kernel path using the annotation described in the annotations.go file | 14:08 |
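Assuming `kernel` has been added to `enable_annotations`, a pod could then request a custom kernel via the annotation key defined in annotations.go; the pod name, kernel path, and `kata-qemu` runtime class name below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload            # illustrative name
  annotations:
    # Annotation key from src/runtime/virtcontainers/pkg/annotations/annotations.go
    io.katacontainers.config.hypervisor.kernel: "/opt/kata/custom/vmlinux-gpu"
spec:
  runtimeClassName: kata-qemu   # assumes a kata-qemu RuntimeClass exists
  containers:
    - name: app
      image: ubuntu
```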
kata-irc-bot | <fidencio> I am not against distributing the assets in a container image, and I know some folks do that downstream (hey @eric.ernst!). But I am not sure Kata itself should be the component that downloads and replaces those assets ... it's something we need to think about. | 14:09 |
kata-irc-bot | <fidencio> I'm trying to think about how we'd do that, though (and that's the reason I mentioned we need to think about it) | 14:36 |
kata-irc-bot | <fidencio> kata-deploy, as provided by the community, yes. | 14:40 |
kata-irc-bot | <fidencio> Mind filing an RFE for that? Otherwise it'll get lost in Slack. | 14:42 |
kata-irc-bot | <eric.ernst> To me this seems more like a downstream potential deployment option. | 14:50 |
kata-irc-bot | <eric.ernst> Some folks will install all this via a package. Some via a daemonset on top of kubernetes. | 14:50 |
kata-irc-bot | <eric.ernst> There are examples of each potential deployment model. kata-deploy is doing just what you describe -- deploying it via a container image. | 14:51 |
kata-irc-bot | <eric.ernst> Disk space is cheap, and latency is something you'd experience with each container. Why not just have your kernel versions A, B, and C available on the node already, and the user can specify one either by an annotation or, better imo, via a specific runtimeclass? | 14:53 |
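The runtimeclass approach above could be sketched as one RuntimeClass per pre-installed kernel; the handler names are illustrative, and each handler must be mapped in the CRI runtime (containerd/CRI-O) to a Kata configuration file whose `kernel =` entry points at the corresponding kernel:

```yaml
# Illustrative sketch: two RuntimeClasses, each assumed to be backed by a
# Kata configuration that selects a different pre-installed kernel.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-kernel-default
handler: kata-kernel-default
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-kernel-gpu
handler: kata-kernel-gpu
```

A pod would then pick a kernel by setting `runtimeClassName: kata-kernel-gpu` rather than using an annotation.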
kata-irc-bot | <meng.mobile> One of the advantages is ease of use and automatic deployment. I believe kata-deploy is loved by k8s folks: a single command rolls Kata out to all nodes. But kata-deploy is limited to a single version; if more than one is needed, you have to customize the scripts ..... | 15:50 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!