16:01:42 <kozhukalov_> #startmeeting Fuel
16:01:42 <openstack> Meeting started Thu Jun 25 16:01:42 2015 UTC and is due to finish in 60 minutes.  The chair is kozhukalov_. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:46 <openstack> The meeting name has been set to 'fuel'
16:01:57 <kozhukalov_> #chair kozhukalov_
16:01:58 <openstack> Current chairs: kozhukalov_
16:02:02 <kozhukalov_> hey guys
16:02:04 <ikalnitsky> o/
16:02:05 <agordeev> o/
16:02:07 <docaedo> hi
16:02:11 <akislitsky> hi
16:02:12 <kozhukalov_> agenda is here
16:02:13 <angdraug> o/
16:02:13 <alex_didenko> hi
16:02:15 <mihgen> hi guys
16:02:19 <barthalion> sup
16:02:21 <kozhukalov_> #link https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
16:02:31 <rmoe> yo
16:02:38 <kozhukalov_> xarses is absent today
16:02:46 <kozhukalov_> is he on vacation?
16:02:51 <kozhukalov_> does anybody know?
16:02:55 <xarses_> sorry, internet issue
16:03:17 <kozhukalov_> xarses_: would you like to be a chair?
16:03:27 <xarses_> sure
16:03:34 <kozhukalov_> #chair xarses_
16:03:36 <openstack> Current chairs: kozhukalov_ xarses_
16:03:44 <xarses_> #topic new role as plugin (ikalnitsky)
16:03:53 <ikalnitsky> hi folks!
16:03:58 <mattymo> hi!
16:04:00 <ikalnitsky> the basic spec was merged -
16:04:05 <ikalnitsky> #link https://review.openstack.org/#/c/185267/
16:04:18 <ikalnitsky> but some details will be added next week.
16:04:32 <ikalnitsky> the team has started working on implementation, and here's the first patch -
16:04:36 <ikalnitsky> #link https://review.openstack.org/#/c/195079
16:05:04 <mihgen> great!
16:05:06 <ikalnitsky> i'm working on refactoring the role representation, and i think it will be sent to review at the beginning of next week.
16:05:27 <mihgen> very good progress. We just need to ensure that the other guys are definitely OK with our approach here…
16:05:34 <ikalnitsky> unfortunately, i ran into more trouble with the refactoring, and a lot of time was spent on meetings for other features
16:05:57 <ikalnitsky> yeah, guys, please feel free to review the spec
16:06:00 <ikalnitsky> even if it's merged
16:06:05 <mihgen> maybe we need to attract some attention on openstack-dev, send it to partners, etc. - like, guys, last call to make adjustments if you have any
16:06:48 <ikalnitsky> you mean refactoring? i don't think so.. refactoring is fully internal, interfaces are kept
16:07:01 <mihgen> nope, I was talking about design approach
16:07:05 <ikalnitsky> Oh.
16:07:08 <mihgen> in short, and attach link to the design
16:07:14 <ikalnitsky> ok, i'll send an email
16:07:31 <mihgen> ikalnitsky: thanks. are we on schedule?
16:07:41 <ikalnitsky> yes
16:07:42 <xarses_> #action ikalnitsky to bring up role as a plugin on the ML
16:07:54 <mattymo> xarses_ has appeared!
16:07:56 <mihgen> ok cool. thanks for pushing it thru difficulties
16:08:29 <mihgen> ikalnitsky: did we get QA guys involved?
16:08:43 <ikalnitsky> yep, Max Strukov is assigned to the feature
16:08:50 <ikalnitsky> and he +1 the spec
16:08:53 <mihgen> ok excellent
16:08:54 <ikalnitsky> everything's ok
16:08:57 <mihgen> thanks for status
16:09:14 <xarses_> moving on
16:09:25 <xarses_> #topic flexible networking (akasatkin)
16:09:33 <akasatkin> Hi guys!
16:09:35 <akasatkin> Tickets for networking are updated.
16:09:35 <akasatkin> Here is the spec for 7.0 part:
16:09:35 <akasatkin> https://review.openstack.org/#/c/195109/
16:09:35 <akasatkin> Please review the spec!
16:09:43 <akasatkin> A template solution is proposed for 7.0 that should allow
16:09:43 <akasatkin> flexible mapping of network roles to networks, network roles that depend
16:09:43 <akasatkin> on node roles, and flexible low-level topology (e.g. subinterface bonding).
16:09:43 <akasatkin> Here is the example template:
16:09:43 <akasatkin> http://paste.openstack.org/show/320988/
16:09:44 <akasatkin> Implementation is in progress now, but some details still need to be clarified.
16:09:52 <akasatkin> There are some questions here:
16:09:52 <akasatkin> 1. How to do network verification with templates. I thought about two options:
16:09:52 <akasatkin> a) write a template parser that will build input data for the net-checker;
16:09:52 <akasatkin> b) add some metainfo into the template that will be passed to the net-checker.
16:09:52 <akasatkin> 2. Some nailgun checks should be disabled, as the info in the DB about the networks-to-
16:09:53 <akasatkin> interfaces mapping is not consistent.
16:09:53 <akasatkin> 3. Provide networks' L3 settings in the template or via the networks API.
16:10:00 <akasatkin> It is not clear how the multi-rack tasks for 7.0 may influence this work, as
16:10:00 <akasatkin> the multi-rack scope for 7.0 is not clear.
16:10:47 <xarses_> akasatkin: it sounds like multi-rack will be a requirement
16:11:09 <akasatkin> yes, but "amount" of it is not clear
16:11:13 <mihgen> xarses_: akasatkin please do multirack as separate blueprint
16:11:19 <xarses_> I was talking to rmoe yesterday about maybe shortening the template some
16:11:23 <xenolog> I started a survey blueprint for the multi-rack solution
16:11:23 <xenolog> https://blueprints.launchpad.net/fuel/+spec/multi-rack-templated-network
16:12:01 <rmoe> I think making the template as concise as possible should be a high priority
16:12:18 <xarses_> there is a lot of data that we can infer or generate (we don't need to use the full transform scheme) and it will confuse people
16:12:19 <akasatkin> agree
16:12:25 <rmoe> editing yaml is error-prone, as anybody who has attempted to deploy an env solely through editing the deployment/network yaml knows
16:13:28 <xarses_> sounds good
16:13:30 <mihgen> yeah we don't want to make it overcomplicated… thanks for paying attention to UX
16:13:55 <xarses_> moving on
16:13:58 <mihgen> wait
16:14:02 <xarses_> yep
16:14:12 <mihgen> xarses_: are you participating in bp shared by xenolog ?
16:14:29 <mihgen> multi-rack one? I think it's good to have this as separate bp
16:14:45 <xarses_> no, it's the first time I've seen it. I will work on it
16:15:11 <mihgen> I just want to ensure we still keep focus on original template bp for 7.0
16:15:49 <mihgen> akasatkin: thanks for status. Questions are good, I might be able to provide my opinion.. but need to think about those
16:16:09 <mihgen> so you might want to have some of those in openstack-dev ML so we can get more people to think about it
16:16:28 <akasatkin> yeah, i'll send an email
16:16:35 <mihgen> I would say that we need this templating even if we don't find resources for the net-checker
16:16:40 <xenolog> For multi-rack I propose making one survey blueprint describing the scope of work and a set of additional blueprints for each part of the work.
16:16:52 <mihgen> it's gonna be sad, but still way better than having nothing )
16:17:12 <xarses_> #action akasatkin to raise question to ML about how to deal with flexible-networking's impact on network checker
16:17:25 <mihgen> xenolog: good idea. xarses_ - please work together on this guys
16:18:09 <xarses_> moving along
16:18:13 <mihgen> akasatkin: for 2 - disable checks - yeah we want flexibility and some of our hardcoded checks drive people crazy
16:18:22 <mihgen> yep let's move on, thanks guys for status
16:18:28 <xarses_> #topic granular deployment (mattymo)
16:18:34 <mattymo> Hi everyone
16:18:41 <mattymo> Over the last week we tried to get code merged for enabling services separated from the controller (basically from the management VIP).
16:19:01 <mattymo> This all has backward compatibility to standard deployment and passes CI, which is awesome
16:19:07 <mattymo> We also discovered lots of python-side feature gaps.
16:19:13 <mattymo> We can still meet the feature goals, but UX is going to suffer considerably due to lack of Fuel python resources and other items getting descoped.
16:19:25 <mattymo> The biggest gaps are related to making API calls from a plugin and OSTF support.
16:19:34 <mattymo> Our plugin framework can't handle a plugin talking to nailgun and making any changes.
16:19:42 <mihgen> > making API calls from a plugin -  what do you mean by this?
16:19:43 <mattymo> It can only add tasks and settings metadata (confined to its own tree). Additionally, OSTF only knows about the public VIP of a deployed environment.
16:19:53 <xarses_> mattymo: do we have a good list of these issues?
16:20:00 <mattymo> xarses_, absolutely. I have a full list
16:20:41 <mihgen> mattymo: you might want to share this one in openstack-dev.. I know I have an email from you in my mailbox, but it would be good to have a broad tech audience for those issues
16:20:56 <mihgen> even if we don't find resources now, at least we will start thinking about those..
16:20:59 <mattymo> the first is environment metadata: such as setting the database_server IP address to use an external DB, or using pre-created credentials
16:21:18 <mattymo> the second is making modifications to the controller role to unset the hardcoded minimum count of 1
16:21:41 <mihgen> mattymo: these are actually good candidates to think over carefully. ikalnitsky - please be in the loop here
16:21:43 <mattymo> A plugin can't override any metadata generated by the deployment serializer. All it can do is create a task and deploy it
16:21:53 <mattymo> mihgen, ikalnitsky and I just got off a meeting where they said it's not possible for 7.0
16:22:05 <mihgen> if there are simple things, we will want to make those, if we find free moments..
16:22:31 <mihgen> this is sad. is nothing possible, or could we do some API calls to nailgun?
16:22:54 <mihgen> because the API is still available, right? It's gonna be bad UX for plugin devs, but still
16:22:55 <mattymo> mihgen, API calls would be an excellent workaround, but python team strongly objected to adding hooks where a plugin could possibly run them
16:23:04 <xarses_> I was thinking that we should extend the plugin metadata with info about APIs to call and have nailgun run them when the plugin is installed
16:23:09 <mihgen> ikalnitsky: why so? ^^^
16:23:17 <mihgen> I think it's reasonable to allow flexibility here.
16:23:31 <ikalnitsky> sorry, give me a sec to read it
16:23:45 <mihgen> while framework is not ideal, we want to allow almost any type of hacking for a plugin developer, including API calls
16:24:18 <ikalnitsky> ok. what are we talking about?
16:24:22 <ikalnitsky> about api calls from plugins?
16:24:25 <mattymo> ikalnitsky, yes
16:24:27 <alex_didenko> yep
16:24:35 <ikalnitsky> ok, which ones?
16:24:38 <ikalnitsky> during installation
16:24:41 <ikalnitsky> during deployment?
16:24:43 <mattymo> deployment
16:24:51 <ikalnitsky> it's up to you.
16:24:56 <xarses_> install too
16:24:58 <ikalnitsky> plugins know nothing about it
16:25:04 <mattymo> xarses_, I already have it written and it's tentatively accepted
16:25:13 <ikalnitsky> if you write a deployment task that performs an api call - that's up to you
16:25:14 <mattymo> xarses_, waiting on v3 plugin format
16:25:15 <ikalnitsky> but..
16:25:16 <mihgen> if you call during deployment, this is gonna be VERY hacky, I would avoid this
16:25:29 <ikalnitsky> mihgen, yep,
16:25:30 <mihgen> during installation of plugin, I think it's fine
16:25:36 <xarses_> mattymo: I'd like to make it better than running random code during rpm post-inst
16:25:50 <mattymo> xarses_, again it's already in the pipeline. We need on-deployment
16:26:12 <xarses_> mihgen: we are already making calls during deployment, as the design for the plugin defines a vip
16:26:21 <ikalnitsky> moreover, if someone wants to call the api from plugins during deployment and expects it to affect the deployment - that's wrong. i mean, we serialize the whole deployment info at once
16:26:24 <alex_didenko> what exactly do we need from API calls in the scope of service separation? new roles? but that's already covered by another feature
16:26:27 <mihgen> mattymo: why do we need api calls during deployment? I think we need to discuss this in ML...
16:26:28 <mattymo> xarses_, where? is there code for that?
16:26:29 <rmoe> why can't plugins "hook" into the serialization? like we run the serializer and pass that data to plugins where they can modify it and pass it back to nailgun
16:26:35 <xarses_> because calling the API is difficult
16:26:51 <mihgen> xarses_: for VIPs we are developing a solution which won't use the API
16:26:52 <xarses_> mattymo: I'm not sure there is code, but they are planning to add a puppet provider
16:26:52 <ikalnitsky> rmoe, because currently plugins don't have any python code
16:27:05 <ikalnitsky> and there are many questions about how it should be done
16:27:10 <mihgen> +1 to rmoe
16:27:21 <ikalnitsky> i just think we don't have resources for this
16:27:33 <ikalnitsky> but that's a road to go
16:27:45 <mattymo> +1 to rmoe, but in the absence of that, we get a clunky UX
16:27:48 <xarses_> and then add another task to update the value in hiera on all the nodes
16:28:10 <mattymo> xarses_, can I finish my update on separating services from controller?
16:28:25 <xarses_> please
16:28:33 <mihgen> this is all not good… mattymo please start an ML thread on this. We want to think carefully and figure out the best solution for 7.0 and beyond…
16:28:53 <mattymo> We've descoped OSTF from 7.0 for this role. No resources available to handle evaluation of new endpoints.
16:29:00 <mattymo> for this blueprint, rather
16:29:57 <mattymo> and on the Puppet side we've gained a lot of traction getting code merged for detached services. The next big tasks are related to client-driven keystone/mysql endpoint and db creation
16:30:28 <mattymo> </done>
16:30:36 <mattymo> xarses_, can you add an action item for me for what mihgen said?
16:30:52 <xenolog> Matthew, did you see the new network roles?
16:31:02 <xarses_> for the plugin - api topic?
16:31:12 <mattymo> xenolog, yes but I was told it might not reach 7.0
16:31:22 <mihgen> xarses_: this one: this is all not good… mattymo please start an ML thread on this. We want to think carefully and figure out the best solution for 7.0 and beyond…
16:31:49 <mihgen> that's about all plugin complexity, yes
16:31:51 <xenolog> It should be already merged.
16:31:52 <xarses_> yes, but the context is around plugin and API handling?
16:32:01 <mihgen> xarses_: yes
16:32:18 <xarses_> #action mattymo will start thread in ML about needs for plugins to call API's to get their things done
16:32:24 <xarses_> moving on
16:32:43 <xarses_> #topic SSL status (sbog)
16:33:48 <kozhukalov_> sbog: around?
16:34:19 <kozhukalov_> let's postpone this topic and move on
16:34:31 <mattymo> xarses_, kozhukalov_ it appears he is indisposed
16:34:32 <xarses_> #topic nailgun volume manager (prmtl, kozhukalov)
16:35:04 <kozhukalov_> here is the spec
16:35:04 <kozhukalov_> #link https://review.openstack.org/#/c/194201
16:35:04 <kozhukalov_> not merged yet
16:35:04 <kozhukalov_> review is welcome, there are comments I need to address
16:35:04 <kozhukalov_> we've decided to split it into two separate tasks
16:35:04 <kozhukalov_> 1) re-work the volume manager as a nailgun extension
16:35:04 <kozhukalov_> 2) implement the dynamic volume allocation stuff in terms of Fuel Agent
16:35:05 <kozhukalov_> and then use this stuff in the nailgun extension, importing the necessary modules from Fuel Agent
16:35:05 <kozhukalov_> I've synced this approach with the partition preservation feature (Artur Svechnikov)
16:35:06 <kozhukalov_> AFAIK, Sebastian is still learning the current code
16:35:53 <kozhukalov_> ideally, the new volume manager should take into account plenty of parameters when allocating volumes
16:36:05 <kozhukalov_> including disk types and disk performance
16:36:31 <kozhukalov_> but i suggest limiting this feature for 7.0 to size-driven allocation
16:36:52 <kozhukalov_> our current volume manager is only size-driven
16:36:54 <xarses_> Seems fantastic, how will this impact the allocation DSL for the roles?
16:37:21 <kozhukalov_> yes, good point
16:37:30 <kozhukalov_> i need to add an example
16:37:30 <mihgen> kozhukalov_: it seems overly risky for 7.0, so please be very careful here. We don't really want to take huge pieces of code which are not production-tested.
16:37:40 <mihgen> let's think how we can make it experimental first
16:37:48 <kozhukalov_> but it is not going to change this significantly
16:38:12 <mihgen> is sebastian here?
16:38:14 <kozhukalov_> mihgen: ok, i will think
16:38:48 <kozhukalov_> mihgen: no, he was not going to be here today
16:39:22 <kozhukalov_> extension related stuff is going to be ready during next 2 weeks
16:39:36 <kozhukalov_> evgeniyl___: is working on it
16:39:41 <mihgen> ok. do you cover pluggability here?
16:39:56 <mihgen> so that when you define a new role/task in a plugin, you can specify the disk layout needed?
16:40:06 <evgeniyl___> mihgen: it's not about plugins at all.
16:40:18 <evgeniyl___> mihgen: it's about splitting our current architecture into the modules
16:40:34 <mihgen> evgeniyl___: that's ok
16:40:35 <evgeniyl___> mihgen: so each group of developers will be able to work on their own part
16:40:50 <mihgen> but that's different
16:40:53 <kozhukalov_> we've discussed this (pluggability), and it looks like when you have a flat data structure it is easy for plugins to append and remove volumes if they need to
16:41:23 <mihgen> append/remove volumes per plugin, or per role defined by plugin, or per task defined by plugin?
16:41:49 <mihgen> I just want to ensure we make it plugin-aware from the day #1
16:42:15 <kozhukalov_> if a plugin needs to introduce a new volume it will use the volume manager api
16:42:37 <kozhukalov_> calling the append and remove methods
16:42:53 <mihgen> during plugin install time?
16:43:30 <mihgen> do i assume right that the plugin dev UX would be just defining a YAML with a piece similar to the one we have in the release yaml
16:43:39 <mihgen> for a particular role
16:43:57 <mihgen> and that's it. API calls will be done by either fbp or something else
16:44:00 <evgeniyl___> kozhukalov_: mihgen plugins do not call anything, a plugin will provide a role, and in this role there is a description of role-specific volumes
16:44:15 <mihgen> evgeniyl___: that's what I meant, thank you
16:45:10 <mihgen> ok thanks guys, no more questions from my side here
16:46:13 <xarses_> #topic removing classic provisioning (agordeev)
16:46:18 <agordeev> hi
16:46:29 <agordeev> #link https://blueprints.launchpad.net/fuel/+spec/remove-classic-provisioning
16:46:31 <agordeev> Here is the updated version of specs
16:46:33 <agordeev> #link https://review.openstack.org/#/c/193588
16:46:42 <agordeev> The scoping isn't finished. I hope that all agreements will be reached on Monday, Jun 29.
16:46:44 <agordeev> Among devs we've come to the conclusion that IBP will be used after upgrade to provision new nodes for existing old envs for 5.0, 5.1, 6.0, 6.1.
16:47:05 <agordeev> In order to get this done, fuel-agent will be improved to be able to build ubuntu precise images. Also, images with centos for 5.0, 5.1 will be built.
16:47:07 <agordeev> Combined, those steps should allow provisioning a node via IBP for all envs which we want to support.
16:47:16 <agordeev> Additionally, those steps will bring a lot of work for QA. It's still questionable, as we want to reduce the QA scope by disabling classic provisioning.
16:48:12 <mihgen> agordeev: why 5.0, 5.1, 6.0 under consideration?
16:48:29 <mihgen> we might want to support only 6.0 after upgrading to 6.1
16:48:47 <agordeev> mihgen: confirmation of scope should be aligned with top management
16:48:51 <mihgen> and restrict the scale-up story for envs if there are any running <6.0 code
16:49:12 <mihgen> agordeev: with product management you mean, and that's ok. let's do this.
16:49:15 <agordeev> it's the most preferred option for us to implement
16:49:39 <xarses_> are we going to wait until near the end of the cycle to merge it since we don't know how well IBP works for many deployments yet?
16:50:12 <mihgen> agordeev: what would be the ideal solution from engineering perspective here, not taking into account anything else?
16:50:21 <agordeev> xarses_: sorry, i have no information about that.
16:50:49 <agordeev> mihgen: it's exactly what was said. To support IBP for 5.0, 5.1, 6.0, 6.1
16:51:02 <mihgen> if we want to support 6.1 -> 7.0 upgrade for instance, and want to scale up env with 6.1
16:51:09 <mihgen> nope, I mean what is the easiest
16:51:18 <agordeev> mihgen: otherwise, additional improvements need to be made to allow disabling it for 5.0 or 5.1 envs, for example
16:51:22 <mihgen> I don't mean to support everything back to 1.0
16:51:37 <agordeev> mihgen: yep, it's the easiest.
16:51:55 <mihgen> to confirm - easiest is to support only 6.1?
16:52:29 <mihgen> will scale-up be IBP-based if it was provisioned with classic before? is that easier than using classic for old envs?
16:52:53 <mihgen> agordeev: a limitation to not scale up envs with <6.1 can be an easy hack in nailgun
16:53:17 <agordeev> mihgen: it's easier to support building images for all the other envs too. There's no available method yet to disable only a particular version of envs
16:53:37 <xarses_> "not scale up" what does this mean?
16:53:45 <mihgen> xarses_: add nodes to old envs
16:53:55 <mihgen> and provision&deploy them
16:54:09 <xarses_> so then old deployments can not upgrade past 6.1?
16:54:22 <xarses_> that sounds bad
16:54:31 <mihgen> ok I got you, agordeev. Let's ask product management's opinion on this. I'll be defending that we want to drop support of scale up story for envs <6.1
16:54:46 <agordeev> mihgen: it might not be easier to introduce limitations for different versions. That's the point
16:55:02 <mihgen> agordeev: how come? it's a simple hack in the deploy handler
16:55:12 <mihgen> you click deploy and there is a hack which verifies the version
16:55:18 <xarses_> mihgen: then, like i said, the max a Fuel env prior to 6.1 can be upgraded to is 6.1
16:55:30 <mihgen> xarses_: use case is: you had 6.0 env, upgraded master to 6.1, then to 7.0
16:55:41 <mihgen> and now you want to add couple nodes to old 6.0 env
16:56:00 <mihgen> so my suggestion would be to upgrade your openstack to 7.0
16:56:23 <mihgen> ok guys do we have time for sbog ?
16:56:32 <mihgen> I'd like to get status of SSL ..
16:56:36 <xarses_> sbog, are you here?
16:56:57 <mihgen> agordeev: let me know if you don't get feedback from product management pls
16:57:17 <agordeev> mihgen: ok
16:58:37 <xarses_> seems like sbog is still not here
16:58:45 <xarses_> #topic open discussion
16:59:10 <xarses_> we have 2 min, any other topics anyone wants to raise?
16:59:35 <agordeev> is there any plan or discussion about supporting multipath disks?
16:59:38 <mihgen> kilo… and sync of upstream manifests
16:59:43 <mihgen> not for 2min though
16:59:52 <mihgen> let's have it for the next meeting ..
16:59:57 <xarses_> mihgen: lets add it to next week
17:00:08 <xarses_> #endmeeting