03:07:58 #startmeeting openstack-cyborg 03:07:59 Meeting started Thu Aug 27 03:07:58 2020 UTC and is due to finish in 60 minutes. The chair is Yumeng. Information about MeetBot at http://wiki.debian.org/MeetBot. 03:08:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 03:08:03 The meeting name has been set to 'openstack_cyborg' 03:08:12 #topic Roll call 03:08:16 #info Yumeng 03:08:37 #info swp20 03:08:37 #info s_shogo 03:08:48 #info xinranwang__ 03:09:16 #info brinzhang_ 03:10:03 hi shaohe_feng 03:10:11 hi all 03:10:20 hi Yumeng, morning 03:11:35 cool, we have so many people today 03:11:40 Hi all, we should have a aggrement of the tempary report result of the 3rd-party driver 03:11:47 #topic Agenda 03:12:12 brinzhang_: yes, that's one of the topic I want to mention 03:12:12 where to save the tempart results 03:12:21 yeah ^^ 03:14:05 as discussed in Intel QAT driver patch: https://review.opendev.org/#/c/725821/7//COMMIT_MSG 03:14:10 I am ok with the qat driver patch, and the inspur fpga driver patch, just with the temparory reulst 03:14:53 do we have a template wiki to show test result? 03:15:52 xinranwant_: may we should have to create a tempate for this 03:16:42 the etherpad is not save, because it can be modified by everyone 03:16:50 IMO, the format of test result is not a big deal. Just to show the driver works. 03:16:56 s/save/safe 03:17:42 #info chenke 03:18:22 xinranwang__: your format of test result is good enough. 03:18:48 xinranwang_: not exactly all, we shoul keep the readable of the result, so the format shuold be beautiful ASAP 03:19:11 xinranwang__: about the template, I'm fine with the structure of your report. I think that make sense to showt the report results and provision results 03:19:33 I remember it was a website call paste.openstack.org, things like that. 03:19:44 xinranwang__: just one small question, we can add openstack accelerator device list to show the report result in cyborg side. 03:19:47 does anyone know that website> 03:20:11 xinranwang_: paste.openstack.org is also a temparory place too 03:21:02 I am trend to use wiki 03:21:09 brinzhang_: does everyone can modify it? 03:21:19 brinzhang_, xinranwang__: I also have the concern that etherpad and paste.openstack.org are temparory. anyobe can modify it 03:21:19 Yumeng: ok, sure 03:22:15 xinranwang_: no, but it will be lost with a long time 03:22:31 brinzhang_: ok, got it. 03:22:59 we can create a wiki page to record the driver report result and then add the link to the driver-support table on doc page https://docs.openstack.org/cyborg/latest/reference/support-matrix.html#driver-support 03:23:00 It seems a wiki page is the better one 03:23:05 what do you think? 03:23:05 we can use wiki, but how do we let the user or operator know the result exist in wiki? 03:23:37 We can mention this wiki page in cyborg doc 03:24:50 agree, and we can add a column as "temparory result", marked when it merged 03:24:58 the https://docs.openstack.org/cyborg/latest/reference/support-matrix.html#driver-support should add item to show the link 03:25:05 yes 03:25:11 yes, agree 03:25:44 such as, Added it in Victoria (Auguest, 2020) 03:26:06 yes, we can do that. 03:26:11 and past the wiki test result link 03:26:16 lol, like a biography for the driver 03:27:13 yep =_< 03:27:23 ok, let do it by this way, after meeting, I will send an email to the ML, that everyone should do, if they are not have the third-party CI 03:27:33 aggree 03:28:00 brinzhang_: cool. 03:28:46 about the driver implementation, I have another quesion to discuss 03:29:02 VENDOR_MAPS format of GPU,FPGA are different, should we keep consistent? 03:29:30 FPGA is like this https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/fpga/base.py#L23 03:29:45 while GPU is like https://github.com/openstack/cyborg/blob/master/cyborg/accelerator/drivers/gpu/base.py#L26 03:30:38 we can merge them in one dict. 03:31:04 move this to a common file? 03:32:18 Yumeng: what do you mean by difference 03:32:22 VENDER_MAPS = ['GPU': {'0x8086': 'intel', ..}, 'FPGA': {'','', ...}] 03:32:46 how about like this? that we can maintain this in one place 03:34:06 Just wondering why we should do this. Is there any gap in current implementation? 03:34:40 not array, a dict. VENDER_MAPS = {'GPU': {'0x8086': 'intel', ..}, 'FPGA': {'','', ...}} 03:36:27 xinranwang_: I am not sure what Yumeng want todo, but from the maintain accelerator devices type and vendor, that we can merge this, because we can easy to know what devices that we can support 03:37:28 but as you said, is it make sense? It make sense to me. ^ 03:38:00 xinranwang__: initially, this is a downstream problem I found in baremetal support. ironic report Nvidia GPU vendor_id as hexadecimal int, while we accept vendor_id as string 03:38:40 brinzhang_: https://github.com/openstack/cyborg/blob/cde5c3421d03722696269eb45ac148afd9838042/cyborg/common/constants.py#L90 we can this to show all supported resources. 03:38:49 here by difference I mean '0x8086' and '10de' 03:38:55 Yumeng: ok, that's the problem. 03:39:42 It comes from the discovery method. Intel drivers all discover devices from sysfs which contains the hex number. 03:39:59 seems different, I not yet look into the details. 03:40:29 but should we keep consistent, or just let this go and let any driver do their own way 03:40:49 AFAIK, the gpu driver's result comes from lspci's output. 03:41:06 Inspur FPGA is '1db4' like GPU. 03:41:09 yes 03:41:20 yeah also use lscpi command. 03:42:07 So the problem is that if driver use lspci, the output is now in hex format. We may need to translate it. 03:42:47 sounds good. 03:43:28 we can rethink it and make decision later. not in a hurry. 03:43:49 Yes, we should leave some time to program api discussion. 03:43:56 ;) 03:44:34 do we need to maintain a list for the supported device type? we know every product maybe need a driver 03:44:50 YES, We don't have much time left, I wanna leave some time for haibin-huang s_shogo, and shaohe_feng to discuss Program API 03:45:11 yes, that's welcome >> Program API discussion 03:45:35 hello 03:45:43 ok 03:46:03 hello shogo 03:46:12 hello haibin-huang 03:46:58 Is ok to begin discussion about the program API? 03:47:22 yes 03:47:27 Thanks, 03:48:04 so, what is your question? 03:48:07 haibin-huang , shaohe, xinran : Let me confirm, what type FPGA do you use for the test of programming API?(N3000?) 03:48:42 IMHO, in my concern, to use N3000 with Cyborg, that needs suited driver for N3000 and async program algorithm for that, before trying programming API. 03:49:30 yes, N3000 03:49:43 we need aysnc 03:50:04 s/aysnc/async 03:51:23 OK,t I got it. 03:52:26 As we discussed in previous IRC, I thinks that needs new driver and async, and that seems to be out of scope of this Program API patch.. 03:52:42 I think we should consider two points, one is reboot fpga, one is get program status 03:52:48 Are there any plan for modifying driver and cyborg mechanism for N3000? 03:55:36 we have implentation draft for it 03:55:56 but, we don't know when we can push is upstream 03:57:03 the Draft is not ready to push it upstream 03:58:23 Ok, I got it. 03:58:43 So that is better to be reviewed as the target. 03:58:53 Of course, the programming method for N3000 is important,that should be discussed as another patch and IRC. 04:00:04 I would like to settle out the program patch work as current scope, once. 04:01:27 OK, but can you add a new API for program status check, just add a UnimplemenationExcetpion(" unsport") ? 04:01:43 just define the API 04:02:03 we can add a the implementation later. 04:03:16 umm, to add a new API for program check is agree, 04:06:51 OK, thanks 04:08:38 Thanks, shaohe-feng :) 04:09:12 s_shogo,shaohe_feng:cool. since we don't have much time left for victoria, I think we should keep the scope of program API patch focused. and add other features later. 04:09:45 Yumeng Thanks, I agree that. 04:10:17 Thanks haibin-huang for coming to join the discussion. 04:10:37 Thanks haibin-huang, shaohe-feng 04:11:04 Thanks s_shogo and shaohe_feng, program API is not easy work. 04:12:18 time is up for today. I will bring up the left topics(PTG things) on wechat where necessary 04:13:01 Thank you all for coming, and I'll see you all again next week 04:13:19 bye 04:13:24 bye 04:13:35 bye 04:13:39 #endmeeting