*** sameo has joined #kata-general | 09:29 | |
*** gwhaley has joined #kata-general | 09:38 | |
*** LinuxMe has joined #kata-general | 13:58 | |
*** irclogbot_0 has quit IRC | 14:14 | |
*** irclogbot_0 has joined #kata-general | 14:39 | |
*** irclogbot_0 has quit IRC | 15:06 | |
*** irclogbot_0 has joined #kata-general | 15:14 | |
*** sameo has quit IRC | 16:54 | |
*** stackedsax has joined #kata-general | 17:58 | |
*** gwhaley has quit IRC | 18:00 | |
kata-irc-bot | <torque_wrexer> hey rico, it's dvergurinn. i had previously messaged from my weechat before i knew about the slack channel. | 20:55 |
kata-irc-bot | <torque_wrexer> @raravena80, i know that they are made using co-functional runtimes to make them mesh flawlessly. although, my question is, with MAAS, kata, k8s, what could you suggest as a good model to pursue? i've read that some planning should go into k8s, especially when provisioning a gpu computing lab, and i'm pretty new to the cloud and the different frameworks, but after diligent research i've settled on the trinity, even though i'm | 21:02 |
kata-irc-bot | not set on it if there is a better way with less overhead, but good usability. | 21:02 |
kata-irc-bot | <raravena80> @torque_wrexer I'm not familiar with Trinity, is that the MVC? By MAAS, do you mean monitoring as a service? | 21:05 |
kata-irc-bot | <torque_wrexer> @raravena80, lol sorry. i just meant the maas/k8s/kata as a trinity, since they seem to be complementary to each other. | 21:09 |
kata-irc-bot | <torque_wrexer> i shouldn't use imprecise language | 21:10 |
kata-irc-bot | <raravena80> Kata supports GPUs, so if that's your requirement then it would be a fit. You can also run regular K8s pods with Kata. But it would really depend on your specific application. | 21:12 |
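To make "run regular K8s pods with Kata" concrete, here is a minimal sketch, not taken from the log, assuming the official `kubernetes` Python client and a containerd/CRI-O runtime handler already installed and named `kata` by the Kata Containers setup; the names are illustrative assumptions.

```python
# Sketch only: register a RuntimeClass so ordinary pods can opt into Kata.
# Assumes the `kubernetes` Python client is installed and the nodes' CRI
# already exposes a handler named "kata" (an assumption, not confirmed here).
from kubernetes import client, config

config.load_kube_config()

runtime_class = client.V1RuntimeClass(
    api_version="node.k8s.io/v1",
    kind="RuntimeClass",
    metadata=client.V1ObjectMeta(name="kata"),
    handler="kata",  # must match the handler name in the CRI configuration
)
client.NodeV1Api().create_runtime_class(body=runtime_class)
```

Pods that set `runtimeClassName: kata` then run inside Kata VMs, while pods that omit it keep using the default runc runtime, so both kinds of workloads can share one cluster.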
kata-irc-bot | <torque_wrexer> would you find it useful to have this all on a Canonical system by any chance? | 22:22 |
kata-irc-bot | <torque_wrexer> well, i have a server 1) which is the main storage, a 9TB RAID 6, and i want a container on it for a web scraper. all the scraped data (mostly numeric) will be prepared on the dual xeon (lga775) server 1 to be sent over a data link to server 2) (single xeon lga1366 with IOMMU) where the GPU(s) will be housed. here i would like a container for each gpu, s.t. i can balance load across them if necessary. i still need to set up a RAID of | 22:31 |
kata-irc-bot | SAS on 2) soon, but i have it running ubuntu 18.04 like 1) is. 2) will be used for data crunching and the necessary storage, and in addition (if feasible) to organize and combine/link congruent data sets. | 22:31 |
kata-irc-bot | <torque_wrexer> ergo, i was thinking maas for metal provisioning, and kata for deploying k8s pods. | 22:34 |
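A hedged sketch of the "one container per GPU" part of that plan, again with the Kubernetes Python client: one Kata pod per GPU, each pinned to a single `nvidia.com/gpu` device so work can be spread across them. The image, pod names, and GPU count are illustrative assumptions, and GPU passthrough into Kata guests additionally needs the IOMMU/VFIO setup on server 2 plus the NVIDIA device plugin; none of that is shown in the log.

```python
# Illustrative only: one Kata pod per GPU on the GPU server. Assumes the
# RuntimeClass "kata" from the earlier sketch, the NVIDIA device plugin
# exposing nvidia.com/gpu, and IOMMU/VFIO passthrough configured for Kata.
from kubernetes import client, config

config.load_kube_config()


def gpu_worker_pod(name: str) -> client.V1Pod:
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name=name, labels={"app": "gpu-worker"}),
        spec=client.V1PodSpec(
            runtime_class_name="kata",  # run this pod inside a Kata VM
            containers=[
                client.V1Container(
                    name="worker",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # placeholder image
                    command=["sleep", "infinity"],
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # exactly one GPU per pod
                    ),
                )
            ],
        ),
    )


api = client.CoreV1Api()
for i in range(2):  # e.g. two GPUs in server 2; adjust to the real count
    api.create_namespaced_pod(namespace="default", body=gpu_worker_pod(f"gpu-worker-{i}"))
```

With one pod per device, load balancing reduces to scheduling work across the `gpu-worker-*` pods, while MAAS stays responsible only for provisioning the bare-metal hosts underneath the cluster.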
*** sameo has joined #kata-general | 23:26 | |
*** sameo has quit IRC | 23:26 | |
*** sameo has joined #kata-general | 23:27 |