Monday, 2017-01-16

*** sdake has quit IRC00:00
*** sdake has joined #openstack-meeting-400:01
*** sdake has quit IRC00:01
*** sdake has joined #openstack-meeting-400:02
*** sdake has quit IRC00:03
*** sdake has joined #openstack-meeting-400:04
*** julim has joined #openstack-meeting-400:04
*** dave-mcc_ has joined #openstack-meeting-400:08
*** dave-mccowan has quit IRC00:10
*** dave-mcc_ has quit IRC00:13
*** rainya has quit IRC00:15
*** neiljerram has quit IRC00:25
*** salv-orl_ has quit IRC00:27
*** sdake has quit IRC00:28
*** dave-mccowan has joined #openstack-meeting-400:40
*** thorst has joined #openstack-meeting-400:41
*** thorst has quit IRC00:45
*** thorst has joined #openstack-meeting-400:46
*** thorst has quit IRC00:46
*** dave-mccowan has quit IRC00:51
*** SerenaFeng has joined #openstack-meeting-400:58
*** limao has joined #openstack-meeting-400:58
*** tovin07 has joined #openstack-meeting-401:27
*** salv-orlando has joined #openstack-meeting-401:28
*** sacharya has joined #openstack-meeting-401:31
*** salv-orlando has quit IRC01:35
*** amotoki has joined #openstack-meeting-401:42
*** l4yerffeJ_ has joined #openstack-meeting-401:53
*** l4yerffeJ has quit IRC01:53
*** armax has joined #openstack-meeting-402:01
*** armax has quit IRC02:01
*** zhihui has joined #openstack-meeting-402:10
*** nkrinner_afk has quit IRC02:12
*** nkrinner_afk has joined #openstack-meeting-402:13
*** hongbin has joined #openstack-meeting-402:17
*** Dmitrii-Sh has quit IRC02:19
*** stevemar has quit IRC02:21
*** stevemar has joined #openstack-meeting-402:21
*** baoli has joined #openstack-meeting-402:24
*** links has joined #openstack-meeting-402:25
*** salv-orlando has joined #openstack-meeting-402:31
*** salv-orlando has quit IRC02:36
*** links has quit IRC02:38
*** yifei has joined #openstack-meeting-402:43
*** links has joined #openstack-meeting-402:46
*** thorst has joined #openstack-meeting-402:47
*** chandanc_ has joined #openstack-meeting-402:50
*** bobh has quit IRC02:52
*** thorst has quit IRC02:52
*** chandanc_ has quit IRC02:53
*** rainya has joined #openstack-meeting-403:01
*** dave-mccowan has joined #openstack-meeting-403:02
*** dave-mccowan has quit IRC03:06
*** severion has joined #openstack-meeting-403:10
*** thorst has joined #openstack-meeting-403:14
*** thorst has quit IRC03:14
*** Jeffrey4l_ has quit IRC03:21
*** Jeffrey4l has joined #openstack-meeting-403:21
*** dave-mccowan has joined #openstack-meeting-403:32
*** salv-orlando has joined #openstack-meeting-403:32
*** salv-orlando has quit IRC03:36
*** galstrom_zzz is now known as galstrom03:40
*** galstrom is now known as galstrom_zzz03:42
*** dave-mccowan has quit IRC03:45
*** janki has joined #openstack-meeting-403:51
*** julim has quit IRC03:53
*** julim has joined #openstack-meeting-403:54
*** julim has quit IRC03:58
*** psachin has joined #openstack-meeting-404:01
*** SerenaFeng has quit IRC04:01
*** hongbin has quit IRC04:07
*** hongbin has joined #openstack-meeting-404:07
*** cathrich_ has quit IRC04:11
*** hongbin has quit IRC04:13
*** salv-orlando has joined #openstack-meeting-404:33
*** baoli has quit IRC04:36
*** salv-orlando has quit IRC04:38
*** nick-ma has joined #openstack-meeting-404:52
*** sp__ has joined #openstack-meeting-404:57
*** nick-ma has quit IRC04:57
*** Sukhdev has joined #openstack-meeting-404:59
*** rainya_ has joined #openstack-meeting-405:08
*** amotoki_ has joined #openstack-meeting-405:09
*** rainya has quit IRC05:09
*** unicell has joined #openstack-meeting-405:11
*** amotoki has quit IRC05:12
*** rainya_ has quit IRC05:13
*** thorst has joined #openstack-meeting-405:15
*** severion has quit IRC05:18
*** v1k0d3n has quit IRC05:18
*** thorst has quit IRC05:20
*** sgordon has quit IRC05:31
*** salv-orlando has joined #openstack-meeting-405:34
*** v1k0d3n has joined #openstack-meeting-405:35
*** sacharya has quit IRC05:38
*** salv-orlando has quit IRC05:38
*** sacharya has joined #openstack-meeting-405:39
*** sgordon has joined #openstack-meeting-405:39
*** SerenaFeng has joined #openstack-meeting-405:45
*** v1k0d3n has quit IRC05:48
*** v1k0d3n has joined #openstack-meeting-405:49
*** Sukhdev has quit IRC05:50
*** bobh has joined #openstack-meeting-405:55
*** bobh has quit IRC05:59
*** rainya has joined #openstack-meeting-406:10
*** rainya has quit IRC06:15
*** unicell has quit IRC06:22
*** madhuri has joined #openstack-meeting-406:25
*** madhuri has quit IRC06:25
*** yfauser has joined #openstack-meeting-406:32
*** yfauser has joined #openstack-meeting-406:33
*** salv-orlando has joined #openstack-meeting-406:35
*** nick-ma has joined #openstack-meeting-406:36
*** salv-orlando has quit IRC06:39
*** karthiks has joined #openstack-meeting-406:47
*** sp__ has quit IRC06:48
*** greghaynes has quit IRC06:52
*** mordred has quit IRC06:53
*** greghaynes has joined #openstack-meeting-406:59
*** mordred has joined #openstack-meeting-407:03
*** greghaynes has quit IRC07:04
*** yamamoto has quit IRC07:04
*** greghaynes has joined #openstack-meeting-407:15
*** thorst has joined #openstack-meeting-407:16
*** thorst has quit IRC07:21
*** hogepodge_ has joined #openstack-meeting-407:24
*** nkrinner_afk is now known as nkrinner07:24
*** pcaruana has joined #openstack-meeting-407:34
*** salv-orlando has joined #openstack-meeting-407:36
*** hogepodge_ has quit IRC07:37
*** marst has quit IRC07:40
*** marst has joined #openstack-meeting-407:40
*** salv-orlando has quit IRC07:40
*** ralonsoh has joined #openstack-meeting-407:43
*** ricolin has joined #openstack-meeting-407:44
*** Dmitrii-Sh has joined #openstack-meeting-407:44
*** amirv has joined #openstack-meeting-407:47
*** barmaley has joined #openstack-meeting-407:50
*** nick-ma has quit IRC08:00
*** zhurong has joined #openstack-meeting-408:05
*** SerenaFeng has quit IRC08:05
*** _degorenko|afk is now known as degorenko08:07
*** amotoki has joined #openstack-meeting-408:07
*** nick-ma has joined #openstack-meeting-408:09
*** amotoki_ has quit IRC08:11
*** rainya has joined #openstack-meeting-408:12
*** rainya has quit IRC08:16
*** idan_hefetz has joined #openstack-meeting-408:29
*** nick-ma has quit IRC08:32
*** alexchadin has joined #openstack-meeting-408:34
*** adisky_ has joined #openstack-meeting-408:35
*** nick-ma has joined #openstack-meeting-408:36
*** salv-orlando has joined #openstack-meeting-408:37
*** salv-orlando has quit IRC08:41
*** matrohon has joined #openstack-meeting-408:42
*** sdake has joined #openstack-meeting-408:52
*** sdake_ has joined #openstack-meeting-409:00
yuli_shi09:00
*** hshan has joined #openstack-meeting-409:00
*** rajivk has joined #openstack-meeting-409:00
oanson#startmeeting Dragonflow09:00
openstackMeeting started Mon Jan 16 09:00:52 2017 UTC and is due to finish in 60 minutes.  The chair is oanson. Information about MeetBot at http://wiki.debian.org/MeetBot.09:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.09:00
*** openstack changes topic to " (Meeting topic: Dragonflow)"09:00
openstackThe meeting name has been set to 'dragonflow'09:00
dimakGood morning09:01
lihiHi09:01
*** SerenaFeng has joined #openstack-meeting-409:01
hshanhi~09:01
rajivkHi09:01
*** ishafran has joined #openstack-meeting-409:01
*** qwebirc27203 has joined #openstack-meeting-409:01
oansonAll right. Let's begin09:01
irenabhi09:01
oansonActually, before we begin.09:01
*** qwebirc27203 is now known as itamaro09:01
*** amirv has quit IRC09:01
itamarohi09:01
oansonPlease note that our gate is broken again :)09:01
*** sdake has quit IRC09:02
oansonLooks like Neutron removed the tenant_id from their objects.09:02
oansonI think it's from patch https://review.openstack.org/#/c/38265909:03
oansonI am looking at it, but if anyone has any info, please share :)09:03
oansonNow we can begin.09:03
dimakI'll look into it :)09:03
oanson#info dimak lihi hshan rajivk irenab itamaro in meeting09:03
ishafranme too09:04
oansondimak, note I uploaded a test here: https://review.openstack.org/420587 but it's just a test09:04
oanson#info ishafran is also in meeting09:04
oanson#topic roadmap09:04
*** openstack changes topic to "roadmap (Meeting topic: Dragonflow)"09:04
oansonIPv6. lihi, the floor is yours09:04
lihiI've stopped working on the Router Discovery for now. I'm having issues detecting the solicitations when there are multiple routers, and advertising the same response as the periodic router advertisement messages.09:04
lihiSo I've started to work on the DHCPv609:04
yuli_soanson, i think they use project_id instead of tenant_id09:05
irenablihi: can you please elaborate on the issues you see09:05
oansonyuli_s, yes. I gathered that too. That's what I put in the test09:05
yuli_sok09:05
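[Editor's note] The breakage discussed above — Neutron dropping tenant_id in favour of project_id — is commonly handled with a small compatibility shim. A hypothetical helper (illustrative only, not actual Dragonflow code):

```python
# Hypothetical helper (not actual Dragonflow code): cope with Neutron's
# rename of 'tenant_id' to 'project_id' by reading the new name first
# and falling back to the legacy one.
def get_project_id(obj):
    if isinstance(obj, dict):
        return obj.get('project_id') or obj.get('tenant_id')
    return getattr(obj, 'project_id', None) or getattr(obj, 'tenant_id', None)
```

Either an old-style or new-style object resolves to the same value, so callers stay insulated from the rename.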
nick-mahi09:06
oansonnick-ma, hi09:06
oanson#info nick-ma also in meeting09:06
lihiThe requests are sent to the broadcast link-local address, and I can't build the response in the flows the same way I did before.09:07
lihiI still need to think how to do it properly09:07
irenabhow does it work with the ref. implementation?09:08
oansonirenab, I don't think ref implementation uses responders09:08
*** matrohon has quit IRC09:09
oansonlihi, I am not sure I understand the issues you're running into09:09
lihiAll the requests are send to the broadcast linklocal address. The address is the same for all routers09:09
lihiUsually, each router that receives the messages response.09:10
oansonYou can detect your network using the 'metadata' field in OVS09:10
oansonThis way you know which router interfaces need to respond09:10
lihiOK, I will look into it09:11
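[Editor's note] oanson's suggestion above can be sketched as a flow builder. This is a hypothetical helper, not actual Dragonflow code, and the exact OVS match syntax may differ: match Router Solicitations (ICMPv6 type 133, sent to the all-routers address) together with the per-network 'metadata' field, so only the router interface belonging to that network answers.

```python
# Sketch: build an ovs-ofctl style flow string that punts IPv6 Router
# Solicitations to the controller, disambiguating networks with the
# OVS 'metadata' field so the right router interface responds.
def ra_responder_flow(table, network_metadata):
    return ('table=%d,priority=100,icmp6,icmp_type=133,'
            'ipv6_dst=ff02::2,metadata=%s,'
            'actions=CONTROLLER:65535' % (table, hex(network_metadata)))
```

The controller application would then craft the Router Advertisement for that one network only.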
lihiI wasn't sure what to do, so I've started working on the DHCPv6 in the meanwhile. But I think this might help09:12
oansonBoth directions are important09:12
lihiyeah I know :)09:12
oansonAll right09:13
oansonIf you have any issues, feel free to ask in the channel09:13
oansonNB refactor09:13
oansondimak, you want to update?09:13
dimakYeah09:13
dimakI think we are starting to have a better picture of how everything should look and be used09:14
oansonAll right.09:14
dimakI've talked to Irena and Omer and we decided to try dropping the CRUD layer09:14
dimakAnd add custom functionality to NB api operations with hooks09:15
dimakwe still have to update the spec (if we see that it works well for us)09:15
dimakOther than that, jsonmodels requirements is in!09:15
lihiWhat was the issue with CRUD?09:16
dimakI've asked library maintainer to roll out a version with my changes into PyPI09:16
dimakWe tried to add CRUD logic to each model09:16
oansonAll right. I'll update the spec probably tomorrow. I'd ask that you'd all vote so we can get it in09:17
dimakAnd we wanted to add shared model functionality with mixins09:17
*** thorst has joined #openstack-meeting-409:17
dimak(e.g. Mixin that adds unique key or version fields)09:17
dimakAnd some fields might require special treatment in CRUD layer09:17
oansonAnything else for NB refactor?09:18
dimakBut if we add a CRUD layer to each mixin that requires it, and then use several mixins, deciding on the order in which the CRUDs are called is not that simple or readable09:18
dimakBe patient :P09:18
oansonSorry. Didn't know you had such a big buffer09:18
oansonYou should limit your mtu :)09:19
dimakWe thought of exploring hooks because they sit on the models/mixins themselves and obey super() rules09:19
dimaklihi, I can draw up a more elaborate code example of why we wanted to avoid the CRUD layer when using mixins09:20
*** amirv has joined #openstack-meeting-409:20
itamaroI will be glad to sit in too09:20
itamaro:)09:20
nick-mait may be better to update the spec along with these pictures.09:21
dimakOther than that, I want to see if the chassis refactor works (fullstack-wise)09:21
lihiyeah, that would be nice09:21
irenablihi: in short it required to count on the model inheritance order, which is really bad practice09:21
lihiok, makes sense09:21
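[Editor's note] The hook approach dimak and irenab describe can be illustrated minimally (invented class names, not the actual NB-refactor code): each mixin overrides an on_create() hook and chains via super(), so Python's MRO fixes the ordering instead of a hand-maintained per-model CRUD layer.

```python
class ModelBase:
    def on_create(self):
        pass  # end of the hook chain

class UniqueKeyMixin(ModelBase):
    # e.g. the mixin that adds a unique-key field
    _next_key = 0
    def on_create(self):
        super().on_create()
        UniqueKeyMixin._next_key += 1
        self.unique_key = UniqueKeyMixin._next_key

class VersionMixin(ModelBase):
    # e.g. the mixin that adds a version field
    def on_create(self):
        super().on_create()
        self.version = 1

class LogicalPort(UniqueKeyMixin, VersionMixin):
    """Hooks chain along the MRO; no explicit ordering logic needed."""
```

Adding or reordering mixins never requires touching a central CRUD dispatcher, which is the "bad practice" inheritance-order dependency irenab refers to.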
*** thorst has quit IRC09:22
oansondimak, could you prepare these diagrams, I'll add them to the spec?09:22
dimakSure09:22
dimakI'll post them in #dragonflow too to facilitate some discussion :)09:22
dimakOh09:23
dimakjsonmodels maintainer just pushed a new version :)09:23
oansonGreat!09:23
lihi:)09:23
irenablooks like you are their super user :-)09:23
nick-magood news.09:24
nick-ma~09:24
oansonAll right. dimak, is there any more?09:24
dimakI think thats all09:24
oansonChassis09:24
oansonChassis health09:24
oansonrajivk, anything to update?09:25
rajivkI have put up a patch for it.09:25
oansonThis one: https://review.openstack.org/#/c/415997/ ?09:25
rajivkyes09:25
rajivkOnce I get enough comments and it gets frozen,09:26
rajivkI will add UT and df-db commands to enable and disable services.09:26
rajivkDo you think, current patch is ok or does it require major changes?09:26
oansonFrom what I saw, there are a few minor things09:27
dimakrajivk, I think df-db is getting a bit too overloaded09:27
oansondimak, on the other hand, your work should fix that :)09:27
rajivkI think there is a spec for dragonflow-api.09:27
oansonAnd that is the only CLI utility we have for the moment09:27
rajivkmay be i should add API directly in that specs.09:28
oansonYes. This should be a model-API09:28
dimakoanson, we can just rename it to df-client or something 😉09:28
irenabso ‘service’ will be a model object?09:28
rajivkyes09:28
oansonIIRC, that's how it is in the spec09:28
rajivkdimak, +109:29
rajivkI think, df-db will become too complex soon.09:29
rajivkIf we keep on adding functionality.09:29
irenabdimak: df-client will be confusing, we will need it once we add a proper API09:30
nick-mai suggest to add a subproject named python-dfclient for the api spec.09:30
oansonThe CLI requirement is in the API spec. I think df-db will become a troubleshooting tool. And we will have a CLI client for the northbound stuff09:30
oansonNot a bad idea09:30
irenabnick-ma: please check the spec and post comments for what you think is missing09:30
nick-mabut it is just cli implementation.09:31
nick-mathe API belongs to the df project as the REST layer.09:31
nick-mairenab: sure.09:31
oansonIn general, maybe we should start splitting things off into smaller subprojects. e.g. specific database drivers, external applications (once the NB refactor is complete)09:31
rajivkwe will have to integrate with keystone as well.09:31
nick-maproject splitting is another big topic to discuss.09:31
rajivkIf we provide support for APIs.09:31
nick-mamaybe in another spec.09:31
oansonnick-ma, that's something I wanted cleared up - is the API REST, or a python library with REST on top?09:32
irenaboanson: by subprojects you mean different repos?09:32
oansonirenab, ^^^^09:32
irenab09:32
oansonLike Neutron have a stadium, we'll have a... errr.. theatre...09:32
nick-maoanson: in my opinion, it should be the standard api framework as nova, cinder, etc.09:32
nick-marest api with python library.09:32
oansonYou mean REST, and the CLI client will send REST requests to the API?09:32
irenaboanson: we need to have both, REST API and python client09:33
oansonnick-ma, beat me to it :)09:33
nick-mairenab: yes09:33
oansonirenab, no argument. I wanted to verify where everything connects09:33
irenaboanson: will upload the updated spec version soon, hope it will get more clear09:34
nick-maok.09:34
oansonirenab, great, thanks!09:34
oansonrajivk, one more question: How do you plan to do the UT?09:34
*** sshnaidm|off is now known as sshnaidm09:35
oansonAnd do you plan to add fullstack tests?09:35
rajivkNo idea. Give me ideas :)09:36
rajivkWhich is the best for the project?09:36
oansonI guess you could set up something inheriting Service, and see if the NB database is updated (fullstack test)09:36
rajivkok, i will start looking at it.09:37
oansonThanks. This can be done as a separate patch if you'd like, since the main patch already seems to be very advanced09:37
rajivkok, i will do it in other patch.09:37
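[Editor's note] The fullstack test oanson proposes above could take roughly this shape (all class and method names invented for illustration; the real Service base class and NB API differ): subclass the service base, report once, and assert that the NB database row appeared.

```python
import time

class FakeNbApi:
    # Stands in for the real NB database driver in this sketch.
    def __init__(self):
        self.services = {}

    def update_service_last_seen(self, name):
        self.services[name] = time.time()

class Service:
    # Sketch of a service base class that reports its health.
    def __init__(self, nb_api, name):
        self.nb_api = nb_api
        self.name = name

    def refresh(self):
        self.nb_api.update_service_last_seen(self.name)

def test_service_health_reported():
    nb = FakeNbApi()
    Service(nb, 'df-local-controller').refresh()
    assert 'df-local-controller' in nb.services
```

A real fullstack variant would point at the actual NB driver instead of the fake and poll for the row.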
*** salv-orlando has joined #openstack-meeting-409:37
*** zhurong has quit IRC09:37
oansonGreat. Thanks!09:38
oansonAnything else on this topic?09:38
rajivkno, that's all.09:38
oansonTAPaaS - yuli_s, any updates?09:38
yuli_sYes09:38
yuli_stoday I submitted first patch09:38
yuli_sto make ids more sparse09:38
oansonLooks promising.09:39
yuli_safter that I will start on the other parts09:39
yuli_s;)09:39
oansonGreat. Only downside is that I now need to re-memorise all the table numbers :)09:40
yuli_swe use constants in code, so, I guess it will not be that hard09:40
rajivkI saw the constants, can we make them enums?09:41
oansonYes. If we behaved well, this patch should go very smoothly.09:41
*** sgordon has quit IRC09:41
yuli_sI am also working on submitting the rally project I was working on too09:41
*** sdake_ has quit IRC09:41
oansonrajivk, sure. What are the benefits?09:42
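[Editor's note] One answer to oanson's question, sketched with invented table numbers (not Dragonflow's real table layout): an IntEnum keeps the sparse ids grouped and named, rejects accidental duplicates at import time, and still compares equal to the raw integers used inside flows.

```python
from enum import IntEnum, unique

@unique  # raises ValueError if two names share a table number
class Table(IntEnum):
    INGRESS_CLASSIFICATION = 0
    PORT_SECURITY = 5
    L2_LOOKUP = 55
    L3_LOOKUP = 60

# Drop-in for plain integer constants, plus reverse lookup for debugging.
assert Table.L2_LOOKUP == 55
assert Table(60).name == 'L3_LOOKUP'
```

Log lines printing Table(n).name instead of a bare number would also ease the re-memorisation oanson jokes about.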
oansonyuli_s, great!09:42
itamaroWould you like to discuss how we can do connectivity tests using naitive DF api only (openstack less)?09:42
*** salv-orlando has quit IRC09:42
yuli_srajivk, sent you comments in the patch, I think we will continue from there09:42
rajivkok09:43
yuli_srajivk, https://review.openstack.org/#/c/420602/09:43
irenabitamaro: native API is only speced now09:43
oansonitamaro, yes, but let's wait for the open discussion09:43
irenab# link https://review.openstack.org/#/c/418842/09:43
itamarotests are not even there09:43
itamarook09:44
oansonsNAT application09:44
*** yifei has quit IRC09:44
oansonishafran, any updates?09:44
oansonPlease note that the patch is here: https://review.openstack.org/#/c/417799/09:44
ishafranI posted first implementation + UT on review09:44
ishafranSince then no update09:45
oansonAll right. I left some comments there. Please review.09:45
ishafranok09:46
*** asettle has joined #openstack-meeting-409:46
oansonDid you get a chance to try what I suggested regarding passing the zone as an immediate value (and not via the regs)?09:46
oansonI'm curious to know if that worked and could be done09:46
*** asettle is now known as Guest5850609:46
oansonPlease also rebase the spec, so that we can vote on it09:46
ishafranmy environment is broken now due to rebase to DF master, so still not tried it09:47
oansonAll right. Please keep me posted - as I said, I am very curious :)09:47
ishafranok09:47
oansonAnything else on sNAT?09:47
oansonAnything else for roadmap?09:47
irenaboanson: any update on LBaaS or vlan-aware VMs?09:48
oansonNo update on LBaaS. It's looking for a carrier09:48
rajivkI am going through specs09:48
oansonrajivk, that's for VLAN aware VMs, right?09:48
rajivkyes. I am checking how it can be done in dragonflow.09:48
rajivkCurrently, how vlans etc work and then map implementation from neutron to dragonflow09:49
*** sgordon has joined #openstack-meeting-409:49
rajivkI will discuss on IRC, if i need information.09:49
irenabrajivk: thanks for update09:49
oansonAnything else?09:50
rajivknot from my side.09:50
oanson#topic Bugs09:50
*** openstack changes topic to "Bugs (Meeting topic: Dragonflow)"09:50
oansonJust a quick update here: nick-ma posted some workarounds for the critical bug 165164309:51
openstackbug 1651643 in DragonFlow "metadata service cannot start due to zmq binding conflict" [High,In progress] https://launchpad.net/bugs/1651643 - Assigned to Li Ma (nick-ma-z)09:51
oansonIt is now reduced to High.09:51
oansonnick-ma, thanks!09:51
oansonAlso thanks to xiaohhui for quickly fixing the broken gate last time. The gate was stable for a whole day before Neutron broke it again :D09:51
nick-mayou are welcome. the bug has not been closed yet.09:51
irenaboanson: its good to have gate watching09:52
oansonYes, but it is no longer Critical. That's not to be ignored.09:52
nick-mayes.09:53
oansonirenab, not sure what to do on this front. Maybe I'll ask Neutron to add a Dragonflow check, but I don't know if they'll approve09:53
*** sambetts|afk is now known as sambetts09:53
*** Guest58506 has quit IRC09:53
irenabnot sure if this change was communicated well enough, we should not have been taken by surprise09:54
oansonI think I'll free up some time and start watching the Neutron changes.09:55
oansonAt least we'll know what could have broken the gate when it happens, rather than go looking for it in retrospect09:55
irenaboanson: bot is less expensive :-)09:55
oansonNot sure a bot can do that.09:55
oansonNo matter. Let's move on.09:56
nick-mayes.09:56
oanson#topic Open Discussion09:56
*** openstack changes topic to "Open Discussion (Meeting topic: Dragonflow)"09:56
oansonFloor is for the taking.09:56
nick-mado you guys go to project gathering?09:56
oansonI plan on attending, yes.09:56
oanson(I think I have to :D )09:56
nick-macool, I don't have opportunity to attend, :-(09:57
oansonThat's a shame. We'll miss you09:57
nick-ma:-)09:57
oansonAnyone else coming to the PTG?09:58
oansonNot all at once :(09:58
rajivkno09:58
oansonI have to admit, this isn't very surprising when it's split from the summit09:59
oansonAll right. That's our time.10:00
oansonThanks everyone for coming. Thanks for the great work!10:00
oanson#endmeeting10:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"10:00
openstackMeeting ended Mon Jan 16 10:00:20 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)10:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/dragonflow/2017/dragonflow.2017-01-16-09.00.html10:00
yuli_sthanks !10:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/dragonflow/2017/dragonflow.2017-01-16-09.00.txt10:00
openstackLog:            http://eavesdrop.openstack.org/meetings/dragonflow/2017/dragonflow.2017-01-16-09.00.log.html10:00
*** ishafran has quit IRC10:01
*** amotoki has quit IRC10:01
*** nick-ma has quit IRC10:02
*** anilvenkata has joined #openstack-meeting-410:03
*** limao has quit IRC10:07
*** amotoki has joined #openstack-meeting-410:08
*** zhurong has joined #openstack-meeting-410:08
*** dtardivel has joined #openstack-meeting-410:10
*** rainya has joined #openstack-meeting-410:14
*** rainya has quit IRC10:18
*** amotoki has quit IRC10:24
*** yifei has joined #openstack-meeting-410:32
*** yifei has quit IRC10:36
*** salv-orlando has joined #openstack-meeting-410:38
*** salv-orlando has quit IRC10:43
*** pbourke has quit IRC10:47
*** pbourke has joined #openstack-meeting-410:48
*** neiljerram has joined #openstack-meeting-410:51
*** SerenaFeng has quit IRC10:52
*** SerenaFeng has joined #openstack-meeting-410:52
*** asettle has joined #openstack-meeting-410:53
*** asettle has quit IRC10:54
*** asettle has joined #openstack-meeting-410:59
*** asettle__ has joined #openstack-meeting-411:01
*** asettle has quit IRC11:01
*** rfolco has joined #openstack-meeting-411:05
*** asettle__ has quit IRC11:10
*** ricolin has quit IRC11:11
*** asettle has joined #openstack-meeting-411:11
*** asettle is now known as Guest7429711:11
*** SerenaFeng has quit IRC11:14
*** SerenaFeng has joined #openstack-meeting-411:14
*** Guest74297 has quit IRC11:17
*** thorst has joined #openstack-meeting-411:18
*** rajivk has left #openstack-meeting-411:18
*** SerenaFeng has quit IRC11:18
*** thorst has quit IRC11:23
*** iyamahat has joined #openstack-meeting-411:23
*** sshnaidm is now known as sshnaidm|afk11:24
*** iyamahat has quit IRC11:29
*** amotoki has joined #openstack-meeting-411:37
*** salv-orlando has joined #openstack-meeting-411:39
*** salv-orlando has quit IRC11:43
*** yfauser has quit IRC11:47
*** asettle_ has joined #openstack-meeting-411:50
*** asettle_ is now known as asettle11:52
*** jamespage has joined #openstack-meeting-412:04
*** amirv has quit IRC12:07
*** rtheis has joined #openstack-meeting-412:07
*** rainya has joined #openstack-meeting-412:15
*** janki has quit IRC12:18
*** rainya has quit IRC12:20
*** janki has joined #openstack-meeting-412:20
*** sdague has joined #openstack-meeting-412:22
*** khushbu_ has joined #openstack-meeting-412:37
*** salv-orlando has joined #openstack-meeting-412:40
*** klamath has joined #openstack-meeting-412:44
*** klamath has quit IRC12:44
*** salv-orlando has quit IRC12:44
*** thorst has joined #openstack-meeting-412:45
*** klamath has joined #openstack-meeting-412:45
*** salv-orlando has joined #openstack-meeting-412:46
*** janki has quit IRC12:58
*** julim has joined #openstack-meeting-412:59
*** bobh has joined #openstack-meeting-413:00
*** zhurong has quit IRC13:03
*** bobh has quit IRC13:04
*** janki has joined #openstack-meeting-413:06
*** matrohon has joined #openstack-meeting-413:07
*** beagles_afk is now known as beagles13:08
*** SerenaFeng has joined #openstack-meeting-413:11
*** khushbu_ has quit IRC13:16
*** l4yerffeJ_ has quit IRC13:19
*** l4yerffeJ_ has joined #openstack-meeting-413:19
*** sshnaidm|afk is now known as sshnaidm13:30
*** gnuoy has quit IRC13:32
*** psachin has quit IRC13:34
*** janki has quit IRC13:38
*** sdake has joined #openstack-meeting-413:40
*** baoli has joined #openstack-meeting-413:45
*** salv-orlando has quit IRC13:46
*** limao has joined #openstack-meeting-413:49
*** sacharya has quit IRC13:52
*** amirv has joined #openstack-meeting-413:52
*** julim has quit IRC13:55
*** yedongcan has joined #openstack-meeting-413:55
*** mchiappero has joined #openstack-meeting-413:55
*** dougbtv has joined #openstack-meeting-413:55
*** l4yerffeJ_ has quit IRC13:58
*** l4yerffeJ_ has joined #openstack-meeting-413:59
*** baoli has quit IRC13:59
*** limao has quit IRC14:00
*** l4yerffeJ_ has quit IRC14:00
apuimedo#startmeeting kuryr14:00
openstackMeeting started Mon Jan 16 14:00:50 2017 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: kuryr)"14:00
openstackThe meeting name has been set to 'kuryr'14:00
*** l4yerffeJ_ has joined #openstack-meeting-414:00
*** garyloug has joined #openstack-meeting-414:01
*** ivc_ has joined #openstack-meeting-414:01
apuimedoHello and welcome everybody to another Kuryr weekly IRC meeting14:01
garylougo/14:01
ivc_o/14:01
yedongcano/14:01
apuimedoToday vikasc and irenab have excused themselves and won't be able to join14:01
mchiapperoo/14:01
apuimedo#topic kuryr-lib14:02
*** openstack changes topic to "kuryr-lib (Meeting topic: kuryr)"14:02
*** limao has joined #openstack-meeting-414:02
*** alraddarla has joined #openstack-meeting-414:03
alraddarlao/14:03
apuimedoThere's basically no news on kuryr-lib :-)14:03
apuimedoNo news, good news14:03
apuimedoas they say14:03
apuimedoonly this patch14:03
apuimedo#link https://review.openstack.org/#/c/418792/14:03
apuimedowhich pisses me off a bit that came so late14:04
apuimedobut... What can we do, we merge it now and we'll consider if we do a point release to get the requirement upped14:04
ltomasboo/14:05
*** cathrichardson has joined #openstack-meeting-414:05
apuimedoAnybody's got anything on kuryr-lib?14:05
apuimedovery well!14:06
apuimedoMoving on14:06
apuimedo#topic kuryr-libnetwokr14:06
*** openstack changes topic to "kuryr-libnetwokr (Meeting topic: kuryr)"14:06
apuimedodarn!!!14:06
apuimedotypo14:06
mchiapperoeheh14:06
apuimedo#topic kuryr-libnetwork14:06
*** openstack changes topic to "kuryr-libnetwork (Meeting topic: kuryr)"14:06
apuimedoltomasbo: can you update us on the race?14:07
ltomasboHi14:07
ltomasbothe problem is that, after calling to attach the (sub)port to the trunk14:07
ltomasbowe (kuryr-libnetwork) calls update_port14:07
ltomasboand it seems attach_subport also calls update_port internally14:07
ltomasbotherefore, there is a race there14:08
ltomasboand sometimes the device_owner is not properly set14:08
ltomasbowhich makes kuryr-libnetwork not able to remove the port after removing the container14:08
*** cathrich_ has joined #openstack-meeting-414:08
ltomasbosince there is a filter by device_owner before removing the port14:08
apuimedo#link https://review.openstack.org/#/c/419028/14:09
*** idan_hefetz has quit IRC14:09
ltomasboI've been discussing with armax about the possibility of reverting the patch that set device_owner for trunk ports14:09
*** baoli has joined #openstack-meeting-414:09
ltomasboanother easy fix could be to remove the device_owner filter we do on ipam_release before calling remove_port14:10
*** cathrich_ has left #openstack-meeting-414:10
apuimedo#info there is a race in container-in-VM flow due to subport addition usage of device owner14:10
mchiapperoI'm sorry, but I'm not sure I've understood the root cause of the race14:11
apuimedoltomasbo: it is kind of a nice service we provide our users to mark which ports we manage for them automatically14:11
ltomasboapuimedo, yes, I agree14:11
apuimedoso we should try to preserve it as much as possible14:11
*** mattmceuen has joined #openstack-meeting-414:11
apuimedomchiappero: basically we update the device_owner field to kuryr:container14:11
ltomasbothe problem is that, the patch I proposed to revert is setting a different device_owner to the subports14:11
apuimedoand the trunk_subport_add does the same behind the scene14:11
*** cathrichardson has quit IRC14:12
ltomasbotherefore we call attach_subport and then update_port14:12
ltomasbobut attach_subport internally calls to update_port too14:12
apuimedoso IIRC, if the trunk_subport_add op finds it changed before it finishes, it goes boom. Is that right ltomasbo?14:12
ltomasboand both update_ports set a different device_owner14:12
*** aheczko-mirantis has joined #openstack-meeting-414:12
apuimedoor is it only that it breaks our removal?14:12
ltomasbowe need to call first trunk_subport_add and then update_port14:13
ltomasbosince trunk_subport_add will fail if device_id is already set in the port14:13
ltomasbobut trunk_subport_add also calls update_port to set the device_owner to trunk:subport14:13
*** dave-mccowan has joined #openstack-meeting-414:14
ltomasbowhile we call update_port from kuryr-libnetwork to set device_owner to kuryr:container14:14
ivc_ltomasbo wait a sec, if trunk_subport_add fails, does that mean that it does not support device_id? or is it a bug in neutron?14:14
ltomasbowith the current implementation, there is no problem with device_id, as we call first trunk_subport_add14:14
ltomasbothe problem is that, as device_owner is not set to kuryr:container14:14
ltomasbothe port will not be deleted after the container is removed14:15
ltomasboand, the fact that turnk_subport_add fails if device_id is already set is not a bug14:15
ivc_what is the device_owner then? if not 'kuryr:container'?14:15
ltomasbois the way it should work14:15
ltomasboif the port is already in use, it should not be made part of a trunk14:15
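[Editor's note] A minimal sketch of the ordering problem ltomasbo describes (all names hypothetical, not the actual kuryr-libnetwork or Neutron code): trunk_subport_add must run first because it rejects ports whose device_id is set, but it also writes device_owner='trunk:subport' internally, racing with kuryr's own update to 'kuryr:container'; whichever write lands last decides whether the later owner-filtered delete finds the port.

```python
def attach_and_tag(port, trunk):
    # Step 1 must come first: trunk attach refuses an in-use port.
    if port.get('device_id'):
        raise ValueError('port already in use')
    trunk['sub_ports'].append(port['id'])
    port['device_owner'] = 'trunk:subport'   # written inside neutron

    # Step 2: kuryr tags the port so it can find and delete it later.
    # In the real flow these two writes race; here they are sequential,
    # which is the ordering kuryr needs to win.
    port['device_owner'] = 'kuryr:container'

def release_port(port):
    # Deletion is filtered by owner: if neutron's internal write landed
    # last, the port is silently left behind.
    if port['device_owner'] == 'kuryr:container':
        port['deleted'] = True

port = {'id': 'p1', 'device_id': '', 'device_owner': '', 'deleted': False}
trunk = {'sub_ports': []}
attach_and_tag(port, trunk)
release_port(port)
```

Letting trunk_subport_add accept a caller-supplied device_owner, as proposed later in the discussion, would collapse the two writes into one and remove the race entirely.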
*** janki has joined #openstack-meeting-414:16
apuimedoivc_: no, it is trunk:subport14:16
ltomasbodevice_owner is set to trunk:subport14:16
apuimedosometimes, depending on the race14:16
ivc_apuimedo thats what i see as the problem. neutron trunk code set it to 'trunk:subport' and we reset it to 'kuryr:container', right?14:16
ivc_i think we are breaking some neutron contract here14:16
apuimedoivc_: I think so14:16
apuimedono, no contract14:17
ltomasboyes14:17
mchiapperoyes, no? :D14:17
apuimedothey only set it to trunk:subport for no specific reason14:17
ltomasbothey just tagged subports to kuryr:container for simplicity14:17
ltomasbothere is no real need for that on neutron14:17
*** rainya has joined #openstack-meeting-414:17
ivc_neutron might expect it to be 'trunk:subport'14:17
ivc_maybe it's not used now, but we should not rely on it14:17
apuimedoivc_: mchiappero: "Altough this is not currently required by any of the business logic, it is handy to have this set to help users quickly identify ports used in trunks"14:17
apuimedoThis is the justification for setting trunk:subport in the original patch14:18
apuimedonothing uses this fact14:18
ivc_'currently'14:18
apuimedoivc_: this 'currently' has not changed14:18
apuimedoand there's no good reason it should14:18
ivc_at some point neutron could add the business logic that would rely on it14:18
mchiapperoI agree with using kuryr as device owner, but still, I don't fully understand whether it's a timing issue or what14:18
ltomasbowe can modify (I'm waiting for Armax answer) the way trunk_subport_owner set the device_owner14:18
apuimedoivc_: I agree with you on that14:19
apuimedoI find it misguided14:19
ltomasboand make it possible to not set it to anything (for the kuryr case)14:19
apuimedoivc_: I honestly just don't see the point of setting it to trunk:subport14:19
ltomasboneither do I14:20
apuimedowhen it is something that can be checked in the API, that it belongs to a trunk14:20
ltomasboI think it is just to easily find the subports14:20
ivc_apuimedo i understand that, but thats the current api14:20
apuimedobut now the damage is done14:20
apuimedoexactly14:20
ivc_imo best course of action is to update neutron trunk api in a way that would allow us to legitimately set device_owner14:21
ltomasboyes, it seems reverting that could affect already existing deployments14:21
ivc_excluding the potential conflict between trunk code and kuryr14:21
apuimedoit is quite annoying, but we'll probably have to consider whether we do not mark subports or use tags or something else14:21
apuimedoivc_: that was my first thought14:21
apuimedoto extend the api of trunk_subport_add14:21
apuimedoso that you can pass it a device owner14:21
*** rainya has quit IRC14:21
apuimedo(which by the way saves us one neutron call :P )14:22
ivc_yup14:22
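The extension being proposed could look roughly like this (a hypothetical signature sketched for illustration, not the actual neutron API): trunk_add_subport takes an optional device_owner, so the caller's tag is applied in the same operation and the follow-up update_port call disappears.

```python
# Hypothetical extended signature; 'trunk:subport' stays the default
# so existing deployments keep their current behaviour.
def trunk_add_subport(port, device_owner='trunk:subport'):
    # the caller may supply its own owner tag, making
    # 'kuryr:container' legitimate and saving the extra
    # update_port round trip
    port['device_owner'] = device_owner
    return port

port = {'id': 'p1', 'device_owner': ''}
trunk_add_subport(port, device_owner='kuryr:container')
```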
apuimedoltomasbo: did you propose that to armax?14:22
ltomasboan easy modification could be to just change the config option, and allow not setting any device_owner14:22
apuimedoIs there a mailing list thread for that?14:22
ltomasbowithout needing to modify the API14:22
ltomasbowhich is always trickier14:22
apuimedoivc_: the problem with extending the API is that we're probably too close to freeze or already frozen14:23
ltomasboit is being discussed on the revert patch:14:23
ltomasbohttps://review.openstack.org/#/c/41902814:23
apuimedoltomasbo: gotcha14:23
ivc_-2 ...14:23
ltomasbo:D14:23
ltomasboI know! But still discussing with armax14:24
apuimedoltomasbo got the hammer of justice!14:24
ltomasboand I agree reverting cannot be done, but we are using that patch to discuss14:24
ltomasboand then (I suppose) another solution will be proposed14:24
ltomasboI see the other option, not setting TRUNK_SUBPORT_OWNER14:24
apuimedoltomasbo: from the commit message discussion I see an interesting option14:25
ltomasboand making the code not set the device owner on trunk_add_subport14:25
apuimedowe could probably argue for leaving the device_owner unchanged if it is not None14:25
ltomasboit will be just a couple of lines14:25
ivc_so how tricky would it be to keep the 'trunk:subport' owner? do we have some sort of workaround?14:25
apuimedoand move the update before the trunk_subport_add14:25
mchiapperosorry but I guess there is no guarantee on the serialization of the operations in neutron14:25
mchiapperobut shouldn't it be that way for the same port?14:25
mchiapperoshouldn't actions to the same port be serialized?14:26
mchiapperowouldn't this solve the issue?14:26
apuimedomchiappero: not sure14:26
ivc_mchiappero afaik they are. as soon as you got confirmation for your request, it is committed14:26
ltomasboyes, but calls are async, so, they can be executed in different orders14:27
mchiapperoso in this case our update port could get confirmed first, right?14:27
apuimedoivc_: re keeping trunk:subport would break our contract of marking our resources14:27
ivc_the problem is not the race as i understand it but just the conflict between kuryr's port update and trunk logic14:27
ltomasboyes and no14:27
apuimedoivc_: For me the race is just a symptom14:27
ltomasboif it ensures that our call happens after trunk_add_subport has fully finished14:27
*** hongbin has joined #openstack-meeting-414:27
mchiapperoapuimedo: right14:28
ltomasbothen from kuryr point of view, that will work14:28
ltomasbobut I agree with apuimedo14:28
hongbino/14:28
ltomasbothe problem is the use of device_owner14:28
apuimedoexactly14:28
ivc_yup14:28
mchiapperoI would expect neutron to perform some ordering or take some port specific lock14:28
apuimedoI would really like to have the extra parameter in the trunk_subport_add14:28
ltomasbobut that will not solve the problem14:29
ltomasboif we set device_owner to whatever we want14:29
ltomasbowe can still do that14:29
ltomasbobut the problem will be the same14:29
ltomasbonot a unified view of what device_owner should be about14:29
apuimedoltomasbo: recognizing that it can be set to something else kind of forces Neutron to acknowledge that this field is informative for them and not for logic14:29
ivc_ltomasbo, toni's point is if we get 'device_owner' as part of 'trunk_subport_add' api, it would make 'kuryr:container' device owner legit14:30
ltomasbogot it14:30
apuimedoivc_: or a "use at your own risk"14:30
apuimedodepends on the wording in the method doc14:30
*** links has quit IRC14:30
apuimedothat gets merged14:30
ltomasboof course, for kuryr deployment we can state that TRUNK_SUBPORT_OWNER should be set to kuryr:container14:31
mchiapperoI still don't fully understand: does setting the owner after a fully finished and successful trunk_subport_add work?14:31
ltomasboand that will solve the problem, at the expense of flexibility14:31
apuimedoivc_: I didn't want to say it. But the right fix, things being what they are now, would be to allow multiple owners14:31
apuimedobut that is even more API breaking :/14:31
*** SerenaFeng has quit IRC14:31
apuimedomchiappero: it does14:31
mchiapperook, so the problem is neutron14:31
apuimedobut it makes us tread in unsafe waters14:32
mchiapperothat's something for them to fix14:32
apuimedothat if neutron subport related code started relying on this (like for upgrades)14:32
apuimedoit could render our subports useless14:32
apuimedothe fix I'd like to avoid, but that would work right away14:33
ivc_apuimedo i think multiple owners would only add confusion. the 'owner' should be unique, but the device_owner field should not be used for storing 'informative' data as it is the case with 'trunk:subport'14:33
mchiapperoivc_: agree14:33
apuimedois to use a new tag instead of the device_owner14:33
ivc_imo kuryr is the real owner in this case14:34
apuimedoivc_: I agree with you; if Neutron wanted this informative tag so badly, to avoid checking in the DB whether it was a subport, it could have added a separate field for that14:34
apuimedoivc_: no question about that14:34
apuimedobut we need a pragmatic solution14:34
apuimedolet's wait to hear what ltomasbo gets from armax in proposing a new parameter for the trunk operations14:35
ltomasbowould you agree/like the other solution? just disabling trunk_add_subport to write on device:owner?14:35
ivc_the problem is HCF14:35
apuimedo#action ltomasbo to continue discussion with armax, proposing trunk_subport_add to receive optionally an API owner name14:35
apuimedoltomasbo: disabling it how?14:36
apuimedoivc_: HCF?14:36
ivc_hard code freeze14:36
ltomasboas they just set whatever value is in TRUNK_SUBPORT_OWNER in the config.py file14:36
ltomasbojust setting that to none14:36
ltomasboand if it is set to none, then just don't call update_port14:37
ltomasboand if it is set to whatever it is, keep working as it is14:37
ltomasboso that it does not break current deployments14:37
apuimedoltomasbo: that would apply to all the ports, not just those used by kuryr14:37
ltomasboonly to subports14:37
apuimedoright, but a user may use subports for other purposes14:38
ivc_apuimedo ltomasbo if we look at it from different perspective, do we need 'device_owner' for cleanup only?14:38
ltomasbowe don't even really need it14:38
ltomasboit's just a filter to speed up the search14:38
apuimedoivc_: we use it for cleanup. But its main purpose is to notify that it is automatically handled by kuryr to users14:38
apuimedoit was sort of... Since it is already there, let's use it14:39
ltomasboto me, it makes sense, as it is kuryr service creating/managing them14:39
mchiapperoI used it a lot while working on ipvlan14:39
ivc_as much as i dislike special-casing, maybe then we could have a special case for 'trunk:subport' that would fetch the ports for kuryr-managed nodes somehow14:39
mchiapperoi often had leftovers14:39
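The cleanup use ltomasbo and mchiappero describe can be sketched like this (hypothetical data and function, not the real client call): device_owner acts purely as a filter to find kuryr-managed ports quickly, so a port that lost the tag becomes an invisible leftover.

```python
def find_kuryr_ports(ports, owner='kuryr:container'):
    # device_owner is only a search filter here; losing the tag means
    # leftover ports that cleanup can no longer find
    return [p for p in ports if p['device_owner'] == owner]

ports = [
    {'id': 'a', 'device_owner': 'kuryr:container'},
    {'id': 'b', 'device_owner': 'trunk:subport'},  # leftover, invisible
]
```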
ltomasbowhile tagging that it is a subport should not go on device_owner, but on device_type or something like that if they need it14:40
ivc_ofc until we get a proper api update on neutron side14:40
*** julim has joined #openstack-meeting-414:40
apuimedoivc_: let's wait a week to see what Neutron people say14:41
apuimedoand then we can decide on contingency measures14:41
ivc_sure14:41
*** janki has quit IRC14:41
apuimedoanything else about kuryr-libnetwork?14:42
*** v1k0d3n has quit IRC14:43
apuimedovery well14:43
apuimedomoving on14:43
apuimedo#topic fuxi14:43
*** openstack changes topic to "fuxi (Meeting topic: kuryr)"14:43
apuimedo#chair hongbin14:43
openstackCurrent chairs: apuimedo hongbin14:43
hongbinhi14:43
hongbinin last week, there are several proposed fixes14:44
hongbin#link https://review.openstack.org/#/c/419767/14:44
apuimedohongbin: today I was asked about fuxi on magnum. Do we have some docs on that? Or it only targets bare metal?14:44
hongbinapuimedo: i am happy to explore fuxi on magnum14:44
hongbinapuimedo: it is definitely one of the targets14:44
hongbinapuimedo: there are several things that need to be done14:45
hongbinapuimedo: 1. containerized fuxi14:45
hongbinapuimedo: 2. trust support14:45
*** lrensing has joined #openstack-meeting-414:45
hongbinapuimedo: then, we are ready to propose it to magnum14:45
apuimedocool14:45
apuimedosorry for the interruption14:46
hongbinapuimedo: np14:46
apuimedo:-)14:46
hongbinyes, to continue,14:46
hongbini was trying to move fuxi to py3514:46
hongbin#link https://review.openstack.org/#/c/419683/14:46
*** salv-orlando has joined #openstack-meeting-414:47
hongbinthe last one, i have a pov for making multi-tenancy support14:47
*** baoli has quit IRC14:47
hongbin#link https://review.openstack.org/#/c/420386/14:47
hongbinall of those are under review, feedback is appreciated14:47
hongbinapuimedo: that is all from my side14:47
*** baoli has joined #openstack-meeting-414:47
apuimedohongbin: I suppose swift's failure with py3 is reported as a bug, right?14:47
apuimedothanks hongbin14:47
*** galstrom_zzz is now known as galstrom14:48
apuimedo#topic kuryr-kubernetes14:48
*** openstack changes topic to "kuryr-kubernetes (Meeting topic: kuryr)"14:48
hongbinapuimedo: yes, we worked around swift14:48
*** galstrom is now known as galstrom_zzz14:48
apuimedo#info ivc_ is a new core for Kuryr-kubernetes! Congratulations!14:49
ivc_thanks! :)14:49
apuimedoivc_: and now that you are congratulated, pls review https://review.openstack.org/#/c/419933/ for merge :P14:49
apuimedo#info vikasc reported that he is finishing addressing ltomasbo and ivc_'s comments to https://review.openstack.org/#/c/410578/14:50
*** galstrom_zzz is now known as galstrom14:50
apuimedohongbin: we could appreciate help into drafting a plan to integrate kuryr-kubernetes with Magnum once that patch is merged14:50
apuimedoa TODO list or something like that14:50
hongbinapuimedo: i can try14:50
apuimedos/could/would/14:51
ltomasbonice, I'm trying to follow the instructions to set it up, but so far no luck with it14:51
apuimedo:-)14:51
apuimedothanks hongbin14:51
apuimedoltomasbo: you mean vikasc's patch?14:51
ltomasboyep14:51
ltomasbousing the devstack templates14:51
apuimedoltomasbo: ping him in the morning then :P14:51
*** Michael-zte has joined #openstack-meeting-414:51
*** salv-orlando has quit IRC14:51
ltomasboI'll do14:51
apuimedocool14:51
*** Michael-zte has quit IRC14:51
apuimedoivc_: any news on the services front?14:51
ivc_nope14:52
apuimedovery well14:52
ltomasboservices == lbaas/octavia?14:52
apuimedoltomasbo: neutron-lbaasv214:52
*** Michael-zte has joined #openstack-meeting-414:52
apuimedowe should add octavia after that14:52
ltomasbois there a patch on that already? I would like to take a look14:53
ivc_ltomasbo i think toni is referring to the split of https://review.openstack.org/#/c/376045/ :)14:53
apuimedoit should be a matter of changing the driver14:53
ltomasbogreat!14:53
apuimedoltomasbo: there is a patch, the one ivc_ links to. However, it needs a bit of splitting and UT14:53
ltomasboshould that include the floating ip support too (in a follow up patch)?14:53
apuimedolet me check14:54
ltomasbo(maybe it already does...)14:54
ivc_ltomasbo it does not14:54
ltomasbojust asking...14:54
ltomasbook14:54
apuimedoI was checking if you could define externalIP for pod14:55
apuimedoapparently you can't14:55
apuimedoso yeah, it should be a follow-up patch14:55
apuimedoanybody's got anything about kuryr-kubernetes?14:55
ivc_apuimedo ltomasbo thats a rather trivial change technically, but i'm not yet certain if floating ip is the right fit for external IP14:56
apuimedoivc_: why?14:57
ltomasboI think it is something you can add to the VIP in lbaas, so that the loadbalancer can get reached from outside14:57
*** beagles is now known as beagles_brb14:57
ivc_ltomasbo apuimedo because there's also 'loadbalancer' type service14:58
*** alraddarla has left #openstack-meeting-414:58
ivc_my understanding was that external ip (https://kubernetes.io/docs/user-guide/services/#external-ips) from k8s point of view is an IP configured on the node's interface14:58
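The distinction ivc_ draws can be sketched with the two service spec shapes (field names are from the k8s services docs linked above; the address and selector are made up for illustration):

```python
# externalIPs: addresses the cluster nodes are already configured to
# answer on; kubernetes does not allocate or manage them, which is why
# a neutron floating IP may not be the right fit
svc_external_ip = {
    'kind': 'Service',
    'spec': {'selector': {'app': 'demo'},
             'ports': [{'port': 80}],
             'externalIPs': ['198.51.100.10']},
}

# type=LoadBalancer instead asks the cloud provider to allocate the
# external address (the natural place for lbaas/floating IPs)
svc_loadbalancer = {
    'kind': 'Service',
    'spec': {'selector': {'app': 'demo'},
             'ports': [{'port': 80}],
             'type': 'LoadBalancer'},
}
```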
sigmavirusapuimedo: y'all wrapping up?14:59
apuimedosigmavirus: we are14:59
apuimedosorry about that14:59
sigmavirusNo problem :)14:59
apuimedolet's move to the channel14:59
apuimedothank you all for joining14:59
apuimedo#endmeeting14:59
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"14:59
openstackMeeting ended Mon Jan 16 14:59:36 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:59
openstackMinutes:        http://eavesdrop.openstack.org/meetings/kuryr/2017/kuryr.2017-01-16-14.00.html14:59
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/kuryr/2017/kuryr.2017-01-16-14.00.txt14:59
openstackLog:            http://eavesdrop.openstack.org/meetings/kuryr/2017/kuryr.2017-01-16-14.00.log.html14:59
sigmavirus#startmeeting craton14:59
openstackMeeting started Mon Jan 16 14:59:42 2017 UTC and is due to finish in 60 minutes.  The chair is sigmavirus. Information about MeetBot at http://wiki.debian.org/MeetBot.14:59
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:59
*** openstack changes topic to " (Meeting topic: craton)"14:59
openstackThe meeting name has been set to 'craton'14:59
*** yedongcan has left #openstack-meeting-414:59
sigmavirus#chair sigmavirus sulo jimbaker14:59
openstackCurrent chairs: jimbaker sigmavirus sulo14:59
sigmavirus    #link https://etherpad.openstack.org/p/craton-meetings14:59
sigmavirus#link https://etherpad.openstack.org/p/craton-meetings15:00
*** garyloug has left #openstack-meeting-415:00
sigmavirus#topic Roll Call15:00
*** openstack changes topic to "Roll Call (Meeting topic: craton)"15:00
suloo/15:00
sigmaviruso/15:00
*** ivc_ has left #openstack-meeting-415:00
sigmaviruspalendae: jimbaker reminder, we have our meeting15:01
*** spotz_zzz is now known as spotz15:01
sigmavirussulo: I'll give them a few more minutes but if no one else shows up, want to cancel it?15:02
sigmavirusSeems a bit ... hard to have a meeting with just two people15:02
sulook15:02
sigmavirusI mean, we can discuss the agenda15:02
sigmavirus:P15:02
suloso was there some discussion on secrets mgt last time ?15:03
sulodid we decide to start with barbican ?15:03
sigmavirussulo: well let's go in order :P15:04
sigmavirus#topic Action Items from Last Meeting15:04
*** openstack changes topic to "Action Items from Last Meeting (Meeting topic: craton)"15:04
suloheh ok15:04
sigmavirus    #link http://eavesdrop.openstack.org/meetings/craton/2017/craton.2017-01-09-14.59.html15:04
sigmavirusdangit15:04
sigmavirus#link http://eavesdrop.openstack.org/meetings/craton/2017/craton.2017-01-09-14.59.html15:04
sigmavirus#info there were no action items last week15:04
sigmavirus#topic Storing secrets in Craton15:04
*** openstack changes topic to "Storing secrets in Craton (Meeting topic: craton)"15:04
sigmavirusSo there's been frequent discussion of storing secrets in craton, sulo (to answer your question)15:05
sigmavirusThere seems to be an unqualified resistance to using Barbican (either by having a soft or hard requirement on it) that I have yet to get any reasoning about15:05
sigmavirusBeyond the reasoning that operators don't want to have to deploy things (like Keystone) to deploy Craton15:05
*** git-harry has joined #openstack-meeting-415:05
suloright .. but can we do secrets without going that route ?15:06
palendaeNot against using Barbican; I think I'm past being able to use Craton without Keystone, as everyone else seems to think that's the preferred method15:06
sigmavirussulo: well the design that jimbaker has in mind hasn't made any sense to me personally15:06
sigmavirusI think he wants some way of storing the private keys that encrypt the secrets in Craton15:06
sigmavirusSo that Craton can decrypt the secrets itself15:07
sigmavirusI don't think Craton's in the position to do that right now though15:07
palendaeI'd be -1 to Craton doing secret management itself15:07
sigmavirusAnd I'd rather have the user encrypt the secrets, ship them to craton, and then be in charge of decrypting them15:07
sigmaviruspalendae: me too15:07
*** antonym has joined #openstack-meeting-415:08
sigmavirusThat leads me to15:08
sigmavirus#info Barbican already provides an HA way of storing secrets15:08
sigmavirusBarbican also has a really good access control mechanism for secrets which might not match what we're brewing up for Craton15:08
*** hongbin has quit IRC15:08
sigmavirusFurther15:08
sigmavirus#info If Craton absolutely must store its own secrets, it should investigate Castellan15:08
sigmavirusCastellan is a project made by the Barbican team for people who need to access secrets storage devices, e.g., TPMs without using Barbican15:09
*** bobh has joined #openstack-meeting-415:09
sigmavirus#link http://docs.openstack.org/developer/castellan/15:09
*** bobh has quit IRC15:09
sigmavirus#info Topic on mailing list about projects avoiding Barbican, please contribute15:09
sigmavirus    #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110192.html15:09
sigmavirus#link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110192.html15:09
sigmavirusSo if y'all have reasons why not to use Barbican in Craton, I'd like y'all to contribute to that thread15:10
sigmavirusWe should be contributing (at the very least) information back to that project so they know what is holding it back from wider spread adoption15:10
*** anilvenkata has quit IRC15:10
sigmavirusI really don't want us to have to come up with ways of storing secrets securely15:10
sigmavirusAnd I really think Barbican has done all of the heavy lifting w/r/t security and cryptography15:11
sulosigmavirus: is there a bp on secrets mgt ?15:11
sigmavirussulo: jimbaker is the owner so I'm not sure but I haven't seen one15:11
sulook15:11
*** l4yerffeJ__ has joined #openstack-meeting-415:12
*** alexchad_ has joined #openstack-meeting-415:12
*** l4yerffeJ_ has quit IRC15:12
*** alexchadin has quit IRC15:12
sigmavirusIf we had more of the team here, I'd start a vote about craton storing its own secrets, but there's only 3 of us, so I won't15:13
palendaeIMO it should be a bp/spec and voting happens there15:14
sigmavirussulo, palendae do you have any topics you want to cover?15:14
*** l4yerffeJ__ has quit IRC15:14
palendaeWith explanation of why Barbican doesn't fit and how exactly Craton will manage things differently15:14
*** l4yerffeJ__ has joined #openstack-meeting-415:14
palendaesigmavirus: Not really; been focused on internal stuff this past week15:14
jimbakero/ - tech problems here, but online now15:14
sulosigmavirus: well, few topics but mostly for my own catchup really15:15
sulolike cli work15:15
suloand where we are with the url structure discussion and pagination support15:15
jimbakersulo, in reqs meeting that we had with toan and dusty, we did bring up CLI15:15
sigmavirussulo: pagination spec was merged, and I'm hacking on it15:15
jimbaker(as well as pagination)15:15
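The pagination work mentioned here is typically marker/limit based in OpenStack APIs; as a rough generic sketch (an illustration only, the merged craton spec may differ in details):

```python
def paginate(items, marker=None, limit=30):
    # return the page of items after `marker`, plus the marker
    # the client should pass to fetch the next page
    ids = [item['id'] for item in items]
    start = ids.index(marker) + 1 if marker in ids else 0
    page = items[start:start + limit]
    next_marker = page[-1]['id'] if len(page) == limit else None
    return page, next_marker
```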
sulothats where we were before i went on a break15:15
jimbakerwill read the log before commenting much more :)15:16
sulojimbaker: sigmavirus: ok cool15:16
jimbakersulo, but in a nutshell, toan asked for a demo of inventory against the requirements dusty put together. end of january15:16
jimbakerthis is not all of the reqs mind you :)15:17
sulojimbaker: ok15:17
sulojimbaker: this is from last meeting ?15:17
sulolast week i mean ?15:17
jimbakerjust doing stuff like getting/setting variables against a host for its hardware/software inventory15:17
jimbakersulo, correct - this is the friday meeting you missed because of leave15:18
sulook15:18
*** rainya has joined #openstack-meeting-415:18
sulosigmavirus: jimbaker: another topic that was in the middle of discussion was access control15:18
jimbakerwe need an interface that's not just the python client. so the CLI will satisfy15:19
jimbakersulo, right, i made good progress on rbac15:19
sulojimbaker: nice15:19
sulois there a bp ? we are going with oslo policy ?15:19
sigmavirusjimbaker: ah, I never got that invite from dusty15:19
jimbakerso in addition to scoped role assignments that's discussed in the rbac blueprint15:20
sigmavirussulo: our rbac seems to be becoming quite involved beyond oslo.policy15:20
jimbakeras sigmavirus points out, there's stuff beyond just mere oslo.policy15:20
jimbakerlast week i discussed and showed a gist that lets us connect scoped role assignments to oslo.policy15:21
sulook .. is there a bp ?15:21
jimbakersulo, it's going in a spec15:22
suloits not merged i guess .. ill check reviews15:22
suloi only see pagination and url specs15:22
*** rainya has quit IRC15:22
jimbakersulo, no, not merged, or even in gerrit15:22
jimbakerjust still in the writing stage15:22
suloah gotcha15:23
jimbakerbut i think it's very much worthwhile to discuss now :)15:23
jimbakeranyway, the key to the whole work here is15:23
jimbaker1. scoped role assignments are managed in the database. they are implemented as triples connecting principals (users, workflows) with other mixed in resources on some role15:24
jimbaker2. rest api to actually manage15:24
jimbaker3. usage with oslo.policy as attributed assertions that can then be used as part of standard backwards chaining inference to a given goal as part of the enforce method15:25
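jimbaker's points 1 and 3 can be sketched as follows (a toy stand-in for both the database and for oslo.policy's enforce step, not the actual craton design):

```python
# point 1: scoped role assignments stored as
# (principal, role, resource) triples
ASSIGNMENTS = {
    ('user:alice', 'admin', 'project:cloud1'),
    ('workflow:audit', 'reader', 'region:dfw'),
}

def enforce(principal, role, resource):
    # point 3 in miniature: the policy check passes only if a matching
    # triple exists; oslo.policy would express this as a rule evaluated
    # by its Enforcer against the request context
    return (principal, role, resource) in ASSIGNMENTS
```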
sigmavirusjimbaker: so the way I see this is that there will be actually two layers of policy15:25
sigmavirusoslo.policy and whatever policy enforcement craton does with those scoped assignments15:25
*** beekneemech is now known as bnemec15:26
sigmavirusThe layer of granularity that you're talking about isn't presently done by anyone with oslo.policy15:26
jimbakersigmavirus, it is two level, but more like how it's two level in keystone15:26
sigmavirusjimbaker: care to elaborate on what you mean by "it's two level in keystone"?15:26
jimbakersigmavirus, what i mean by that is keystone captures a similar idea in terms of how users and domains are managed in a db; then pulled together using attributes15:28
jimbakeranyway, probably best to be discussed in the context of the spec itself15:29
*** alexchad_ has quit IRC15:29
*** aheczko-mirantis has quit IRC15:30
*** marst has quit IRC15:31
jimbakergoing through the log: i think there's a difference between managing HSM for master keys themselves; and any encrypted secrets with respect to those master keys15:32
jimbakerso castellan/barbican could be good options; so too amazon cloudhsm15:33
sulojimbaker: so just encrypt and store the info on how to access the real secrets in barbican etc15:33
jimbakerthen there are tools for managing secrets. so hashicorp vault is a good example here. i don't believe integrations have been done with hsm and hashicorp vault15:34
sigmavirusjimbaker: vault would be integrated at the level of barbican15:34
sigmavirusbarbican is meant to abstract all of that15:34
sigmavirusno one has done the work yet though15:34
jimbakerthe dev work for vault seems to be more focused on rolling secrets15:34
sigmavirusjimbaker: like the secrets that sit on top of spaghetti?15:35
*** nkrinner is now known as nkrinner_afk15:35
jimbakersigmavirus, yeah, and i want to avoid us going down that path if possible. first, implement hsm integration...15:35
*** l4yerffeJ__ has quit IRC15:35
sigmavirusjimbaker: in craton?15:35
sigmavirusWhy reimplement what barbican has already painstakingly done?15:35
jimbakersigmavirus, as in a dev path to avoid15:36
sigmavirushuh?15:36
jimbakeryes, if we re-implement stuff that has been painstakingly done... not a good idea15:36
*** l4yerffeJ__ has joined #openstack-meeting-415:36
*** sshnaidm has quit IRC15:36
jimbakersigmavirus, i think we are in agreement here15:36
*** sshnaidm has joined #openstack-meeting-415:36
jimbakerat least at a high level. details maybe need to be worked out about our agreement? ;)15:36
*** l4yerffeJ__ has quit IRC15:37
sigmavirusfair15:37
*** l4yerffeJ__ has joined #openstack-meeting-415:37
jimbakerrelated to secrets is, why do we need them anyway? so there are alternatives like trusts15:38
jimbakerbut apparently the feature, while implemented, is not yet widely adopted/distributed15:38
*** alexchadin has joined #openstack-meeting-415:39
jimbakerso we need secrets for the time being. anyway... best discussed i think over a spec15:39
sigmavirusjimbaker: I think secrets are deployment secrets and trusts are related to identity15:39
sigmavirusso a user can use a trust to authenticate to craton and let it reuse that token with other services that the user has scoped it to (iiuc)15:40
sigmavirussecrets, in the context of an inventory system, are secrets you might use when doing automatic remediation15:40
sigmavirusor in the OSA context, passwords that services use when authenticating to mariadb15:40
jimbakersigmavirus, trusts and similar tooling like opengrid's myproxy can replace the need for tooling like craton to store secrets to complete the next credentialling hop15:40
*** beagles_brb is now known as beagles15:41
jimbakerconsider for example that myproxy can remove the need to use ssh keys by an intermediary15:41
sigmavirusjimbaker: not familiar with myproxy but it seems we're getting a little off into the weeds too15:42
palendaejimbaker: Definitely agreed with needing a spec. I'm skeptical that craton needs to add secrets15:42
jimbakerall it takes is modified ssh servers. or maybe ssh servers that can talk to kerberos. i don't know. as i said, adoption/distribution means this is not really relevant. eg weeds15:42
jimbakerinteresting weeds. perspective weeds ;)15:42
jimbakerso we still need to manage secrets somewhere. that's the conclusion15:43
palendaeA deployer does; not craton15:43
sigmavirus^15:43
*** marst has joined #openstack-meeting-415:43
sigmavirusI think we should start with barbican and castellan and let people develop integrations into the services they need15:44
sigmaviruswe'll have done our dilligence in creating a driver API for that15:44
sigmavirusand then people can either hook into barbican or craton or wherever makes most sense to them15:44
palendaeOr, at the absolute simplest, use TLS and they'll encrypt/decrypt on their end15:44
sigmaviruscastellan will be for PoC deployments15:44
sigmaviruspalendae: yeah, I think we'd just keep references to the secret stored in the driver15:45
sigmavirusI don't think we should handle the secret at all if at all possible15:45
palendaesigmavirus: Well, I'm talking about the scenario of storing an encrypted secret without a 3rd service. But, for me, that's preferable to Craton adding yet more stuff15:45
sulosigmavirus: jimbaker: i thought that was always the plan ? driver support for one or two backends by default ..15:45
suloso all we have to do is be able to handle .. where the secret is and how to get it15:46
sigmavirussulo: I don't know if that was always the plan15:46
sigmavirussulo: right15:46
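The "handle where the secret is and how to get it" idea could look roughly like this (a hypothetical sketch; craton would keep only opaque references such as barbican secret hrefs and never touch the secret material itself):

```python
class SecretRefStore:
    """Store secret *references*, never the secrets themselves."""

    def __init__(self):
        self._refs = {}

    def attach(self, resource_id, backend, ref):
        # e.g. backend='barbican', ref=the secret's href/uuid
        self._refs[resource_id] = {'backend': backend, 'ref': ref}

    def resolve(self, resource_id):
        # the caller takes this reference to the backend and decrypts
        # client-side; the store never sees the plaintext
        return self._refs[resource_id]
```

This keeps the driver API open ended: a deployment can point the reference at barbican, castellan, or whatever backend makes sense for it.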
sigmaviruswe have 12 min left15:46
sigmavirusdo we have other topics?15:46
jimbakersulo, yes, for master keys. whether for secrets as a whole like ssh access keys, clearly we didn't go there15:46
*** limao has quit IRC15:47
suloi dont have anything .. i have a few things to catch up on15:47
*** mattmceuen has left #openstack-meeting-415:47
*** salv-orlando has joined #openstack-meeting-415:48
jimbakersulo, we should just sync up with the reqs meeting15:48
sulojimbaker: ok15:48
jimbakerso no changes there on that doc that dusty is putting together15:49
sulojimbaker: sigmavirus: i think we have things on our priority list15:49
jimbakerwhich should be published soon15:49
*** hongbin has joined #openstack-meeting-415:49
sulobut maybe its worth putting down the workitems and goals for this cycle etc15:49
jimbakersulo, i assume by this you mean15:49
*** dave-mccowan has quit IRC15:50
jimbakersulo, i assume by this you mean current inventory model stability/production scale15:50
jimbakerplus CLI15:50
sulojimbaker: i mean, access control, secrets mgt, workflow and cli work15:50
sulojimbaker: yes15:50
sulopretty much15:50
jimbakersulo, yes, we can do that in parallel for access, secrets, auditing, and remote inventory integration ("inventory fabric")15:50
jimbakersulo, with these last aspects, we have a comprehensive inventory system15:51
*** rtheis has quit IRC15:51
jimbakerthe other piece is workflows, but that can be on the back burner15:51
jimbakerinventory seems to be far more important to get complete first15:52
* sigmavirus thinks auditing is higher prio than secrets15:52
jimbakersigmavirus, likely is15:52
*** salv-orlando has quit IRC15:52
suloright .. i think inventory is taking shape .. for next phase of work though15:52
jimbakersecrets are for sure harder to get right15:52
sulolike auditing15:52
sulowe need secrets and access control15:52
suloso we can hit machines15:53
jimbakerexactly, it all works together15:53
jimbakerand i think secrets is harder with respect to daemon usage15:53
sigmavirussulo: last time I was in a meeting with Rackspace's support team, automated remediation is a very low prio item given they don't want Craton doing things for them15:53
jimbakerlike workflows15:53
sulosigmavirus: ok .. yeah i guess we are kinda far from that also15:54
jimbakerhence my "back burner" prioritization15:54
sulobut auditing is the same process as remediation15:54
jimbakeryeah15:55
sulothe inner working is the same15:55
jimbakerand they will want it15:55
jimbakerotherwise we lose the efficiency gains we want to see here15:55
jimbakerand agreed about effectively the same15:55
*** spotz is now known as spotz_zzz15:55
jimbakersulo, so nothing is changed as we look at 201715:56
jimbaker(and btw, welcome back!)15:56
sulothanks :)15:56
jimbakerinventory first, get that deep15:56
jimbakerand build out workflows around it15:56
jimbakerjason's original priorities for us15:56
jimbakerwe are just hearing it from toan and dusty as well15:57
sigmavirusAnyway, let's continue in #craton so we don't step on the next meeting's time15:57
jimbakersigmavirus, agreed15:57
*** amotoki has quit IRC15:57
sigmavirusIn the future, I think I'll run meetings and force us to stick to the posted agenda15:57
*** spotz_zzz is now known as spotz15:57
sigmavirusBecause I can15:57
sigmavirus#endmeeting15:57
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings"15:57
<openstack> Meeting ended Mon Jan 16 15:57:36 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  15:57
<sulo> sigmavirus: +1  15:57
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/craton/2017/craton.2017-01-16-14.59.html  15:57
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/craton/2017/craton.2017-01-16-14.59.txt  15:57
*** apuimedo is now known as apuimedo|away15:57
<openstack> Log:            http://eavesdrop.openstack.org/meetings/craton/2017/craton.2017-01-16-14.59.log.html  15:57
<Michael-zte> Is there a meeting of "Ironic-neutron" today?  16:04
*** git-harry has left #openstack-meeting-416:05
*** dave-mccowan has joined #openstack-meeting-416:05
*** dave-mcc_ has joined #openstack-meeting-416:08
*** dave-mccowan has quit IRC16:10
*** benj_ has joined #openstack-meeting-416:10
*** dandruta has joined #openstack-meeting-416:12
*** itamaro has quit IRC16:14
*** jose-phillips has joined #openstack-meeting-416:16
*** jose-phillips has quit IRC16:16
*** armax has joined #openstack-meeting-416:26
*** cwolferh has left #openstack-meeting-416:29
*** salv-orlando has joined #openstack-meeting-416:30
*** sdake has quit IRC16:33
*** Michael-zte2 has joined #openstack-meeting-416:38
*** Michael-zte has quit IRC16:40
*** jose-phillips has joined #openstack-meeting-416:52
*** jose-phi_ has joined #openstack-meeting-416:54
*** jose-phillips has quit IRC16:54
*** vsaienk0 has joined #openstack-meeting-416:55
*** vmorris has joined #openstack-meeting-416:57
*** amotoki has joined #openstack-meeting-416:57
*** markmcclain has quit IRC17:02
*** vsaienk0 has left #openstack-meeting-417:03
*** armax has quit IRC17:05
*** dave-mcc_ has quit IRC17:06
*** erikmwilson has quit IRC17:08
*** erikwilson has joined #openstack-meeting-417:08
*** alexchadin has quit IRC17:11
*** matrohon has quit IRC17:17
*** markmcclain has joined #openstack-meeting-417:19
*** acabot has quit IRC17:24
*** jose-phi_ has quit IRC17:24
*** jose-phillips has joined #openstack-meeting-417:30
*** johnsom has quit IRC17:30
*** johnsom has joined #openstack-meeting-417:30
*** julim_ has joined #openstack-meeting-417:31
*** julim has quit IRC17:32
*** jose-phillips has quit IRC17:35
*** jose-phillips has joined #openstack-meeting-417:36
*** tonytan4ever has joined #openstack-meeting-417:39
*** jose-phi_ has joined #openstack-meeting-417:40
*** neiljerram has quit IRC17:41
*** armax has joined #openstack-meeting-417:42
*** jose-phillips has quit IRC17:43
*** tonytan4ever has quit IRC17:44
*** tonytan_brb has joined #openstack-meeting-417:44
*** armax has quit IRC17:47
*** spzala has joined #openstack-meeting-417:59
*** sambetts is now known as sambetts|afk18:00
*** amirv has quit IRC18:00
*** ralonsoh has quit IRC18:01
*** Michael-zte2 has quit IRC18:01
*** amirv has joined #openstack-meeting-418:01
*** ivc_ has joined #openstack-meeting-418:05
*** rainya has joined #openstack-meeting-418:21
*** rainya has quit IRC18:25
*** dandruta has quit IRC18:26
*** salv-orlando has quit IRC18:35
*** salv-orlando has joined #openstack-meeting-418:36
*** dave-mccowan has joined #openstack-meeting-418:44
*** tonytan_brb has quit IRC18:47
*** barmaley has quit IRC18:47
*** tonytan4ever has joined #openstack-meeting-418:48
*** tonytan_brb has joined #openstack-meeting-418:51
*** tonytan4ever has quit IRC18:52
*** sacharya has joined #openstack-meeting-418:54
*** tonytan_brb has quit IRC19:09
*** woodard has joined #openstack-meeting-419:09
*** tonytan4ever has joined #openstack-meeting-419:09
*** amirv has quit IRC19:12
*** matrohon has joined #openstack-meeting-419:14
*** joanna has joined #openstack-meeting-419:19
*** bobmel has joined #openstack-meeting-419:26
*** bobmel has quit IRC19:31
*** vishnoianil has joined #openstack-meeting-419:56
*** sdake has joined #openstack-meeting-419:57
*** cfarquhar has joined #openstack-meeting-419:58
*** bobh has joined #openstack-meeting-420:07
*** bobh has quit IRC20:07
*** bobh has joined #openstack-meeting-420:07
*** bobh has quit IRC20:09
*** spzala has quit IRC20:14
*** dtardivel has quit IRC20:14
*** tonytan4ever has quit IRC20:17
*** tonytan4ever has joined #openstack-meeting-420:18
*** adisky_ has quit IRC20:19
*** rainya has joined #openstack-meeting-420:22
*** salv-orlando has quit IRC20:23
*** bobh has joined #openstack-meeting-420:23
*** bobh has quit IRC20:24
*** rainya has quit IRC20:27
*** Syed__ has joined #openstack-meeting-420:29
*** revon has joined #openstack-meeting-420:29
*** woodster_ has joined #openstack-meeting-420:32
*** jose-phi_ has quit IRC20:33
*** dave-mccowan has quit IRC20:34
*** rtheis has joined #openstack-meeting-420:34
*** jose-phillips has joined #openstack-meeting-420:36
*** rtheis has quit IRC20:48
*** spzala has joined #openstack-meeting-420:49
*** MeganR has joined #openstack-meeting-420:58
*** salv-orlando has joined #openstack-meeting-421:09
*** jose-phillips has quit IRC21:12
*** jose-phillips has joined #openstack-meeting-421:14
*** rfolco has quit IRC21:15
*** rainya has joined #openstack-meeting-421:23
*** sdake_ has joined #openstack-meeting-421:24
*** sdake has quit IRC21:24
*** rainya has quit IRC21:27
*** sdake_ has quit IRC21:27
*** sdake has joined #openstack-meeting-421:28
*** dave-mccowan has joined #openstack-meeting-421:32
*** l4yerffeJ has joined #openstack-meeting-421:34
*** l4yerffeJ__ has quit IRC21:34
*** Jeffrey4l has quit IRC21:34
*** Jeffrey4l has joined #openstack-meeting-421:35
*** sdake has quit IRC21:41
*** tonytan4ever has quit IRC21:44
*** salv-orlando has quit IRC21:47
*** v1k0d3n has joined #openstack-meeting-421:50
*** bobmel has joined #openstack-meeting-421:54
*** dave-mccowan has quit IRC21:54
*** thorst has quit IRC21:55
*** sdake has joined #openstack-meeting-421:57
*** bobmel has quit IRC21:58
*** bobh has joined #openstack-meeting-422:01
*** MeganR has quit IRC22:01
*** lrensing has quit IRC22:04
*** sdake_ has joined #openstack-meeting-422:12
*** salv-orlando has joined #openstack-meeting-422:12
*** spzala has quit IRC22:13
*** sdake has quit IRC22:14
*** marst has quit IRC22:15
*** yamamoto has joined #openstack-meeting-422:15
*** Dmitrii-Sh has quit IRC22:22
*** salv-orl_ has joined #openstack-meeting-422:24
*** thorst has joined #openstack-meeting-422:24
*** salv-orlando has quit IRC22:26
*** rbak has joined #openstack-meeting-422:27
*** thorst has quit IRC22:28
*** woodard has quit IRC22:28
*** woodard has joined #openstack-meeting-422:29
*** woodard has quit IRC22:33
*** matrohon has quit IRC22:35
*** julim_ has quit IRC22:38
*** rbak has quit IRC22:53
*** medberry is now known as med_23:05
*** rbak has joined #openstack-meeting-423:08
*** v1k0d3n has quit IRC23:08
*** bobh has quit IRC23:09
*** spzala has joined #openstack-meeting-423:14
*** spzala has quit IRC23:18
*** jose-phillips has quit IRC23:23
*** rainya has joined #openstack-meeting-423:25
*** klamath has quit IRC23:25
*** rainya has quit IRC23:30
*** jose-phillips has joined #openstack-meeting-423:30
*** spzala has joined #openstack-meeting-423:36
*** galstrom is now known as galstrom_zzz23:38
*** salv-orl_ has quit IRC23:50
*** vmorris has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!