*** kei_ has joined #openstack-defcore | 01:36 | |
openstackgerrit | Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json https://review.openstack.org/351339 | 01:37 |
*** kei has quit IRC | 01:38 | |
*** kei_ has quit IRC | 01:40 | |
openstackgerrit | Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json https://review.openstack.org/351339 | 01:47 |
*** woodster_ has quit IRC | 03:09 | |
*** rarcea has quit IRC | 04:34 | |
*** rarcea has joined #openstack-defcore | 04:47 | |
*** pcaruana has quit IRC | 05:01 | |
-openstackstatus- NOTICE: zuul is being restarted to reload configuration. Jobs should be re-enqueued but if you're missing anything (and it's not on http://status.openstack.org/zuul/) please issue a recheck in 30min. | 05:24 | |
openstackgerrit | Catherine Diep proposed openstack/defcore: Remove tests that require second set of credentials from next. https://review.openstack.org/338609 | 05:55 |
*** pcaruana has joined #openstack-defcore | 07:24 | |
openstackgerrit | Catherine Diep proposed openstack/defcore: Flag advisory tests in 2016.01 due to requirement of admin credential. https://review.openstack.org/353287 | 07:33 |
*** openstackgerrit has quit IRC | 08:18 | |
*** openstackgerrit has joined #openstack-defcore | 08:19 | |
*** xiangfeiz has joined #openstack-defcore | 09:40 | |
*** xiangfeiz has quit IRC | 10:21 | |
*** woodster_ has joined #openstack-defcore | 12:36 | |
*** edmondsw has joined #openstack-defcore | 13:00 | |
*** skazi has joined #openstack-defcore | 13:05 | |
openstackgerrit | Mark T. Voelker proposed openstack/defcore: Create 2016.08 Guideline from next.json https://review.openstack.org/351339 | 13:24 |
*** ametts has joined #openstack-defcore | 13:36 | |
*** Tetsuo has joined #openstack-defcore | 13:46 | |
*** woodburn has joined #openstack-defcore | 13:47 | |
*** hjanssen-hpe has joined #openstack-defcore | 13:51 | |
*** hj-hpe has joined #openstack-defcore | 13:51 | |
*** tkfjt has joined #openstack-defcore | 13:56 | |
*** lizdurst has joined #openstack-defcore | 13:58 | |
*** tkfjt_ has joined #openstack-defcore | 13:59 | |
*** tongli has joined #openstack-defcore | 13:59 | |
*** tkfjt_ has left #openstack-defcore | 13:59 | |
*** leong has joined #openstack-defcore | 13:59 | |
*** kei_ has joined #openstack-defcore | 13:59 | |
*** kei_ is now known as kei | 14:00 | |
*** shamail has joined #openstack-defcore | 14:00 | |
shamail | #startmeeting interop_challenge | 14:00 |
openstack | Meeting started Wed Aug 10 14:00:51 2016 UTC and is due to finish in 60 minutes. The chair is shamail. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
openstack | The meeting name has been set to 'interop_challenge' | 14:00 |
gema | o/ | 14:01 |
catherineD | o/ | 14:01 |
shamail | Hi everyone, who is here for the interop challenge meeting today? | 14:01 |
leong | o/ | 14:01 |
*** rohit404 has joined #openstack-defcore | 14:01 | |
kei | o/ | 14:01 |
woodburn | o/ | 14:01 |
rohit404 | 0/ | 14:01 |
*** tkfjt__ has joined #openstack-defcore | 14:01 | |
skazi | o/ | 14:01 |
hjanssen-hpe | o/ | 14:01 |
shamail | Thanks gema, catherineD, leong, kei, woodburn, rohit404, skazi and hjanssen-hpe | 14:01 |
lizdurst | o/ | 14:01 |
tongli | o/ | 14:02 |
shamail | hey lizdurst and tongli | 14:02 |
markvoelker | o/ | 14:02 |
eeiden | o/ | 14:02 |
shamail | The agenda for today can be found at: | 14:02 |
shamail | #link https://wiki.openstack.org/wiki/Interop_Challenge#Meeting_Information | 14:02 |
*** tkfjt has quit IRC | 14:02 | |
*** jkomg has joined #openstack-defcore | 14:02 | |
shamail | hi markvoelker and eeiden | 14:02 |
shamail | #topic review action items from previous meeting | 14:02 |
jkomg | morning | 14:02 |
shamail | #link http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-03-14.00.html | 14:02 |
*** Jokoester has joined #openstack-defcore | 14:02 | |
shamail | The bottom of the linked log has action items that were taken in the last meeting | 14:03 |
shamail | Document how to submit requests for new tools (markvoelker, shamail) | 14:03 |
tongli | please add your name to the attendance list at the bottom. thanks. | 14:03 |
gema | that document is automatically generated | 14:03 |
gema | that list gets created at the end of the meeting for today | 14:03 |
*** MartinST_ has joined #openstack-defcore | 14:04 | |
shamail | #link https://etherpad.openstack.org/p/interop-challenge-meeting-2016-08-10 | 14:04 |
shamail | in case we need to take notes | 14:04 |
gema | +1 | 14:04 |
shamail | So on the first topic on documenting how to submit… | 14:04 |
shamail | #link https://wiki.openstack.org/wiki/Interop_Challenge#How_to_Propose.2FSubmit_new_tools.2Fworkloads | 14:04 |
shamail | Added this section to the wiki as a starting point | 14:05 |
shamail | markvoelker: Do you have anything to add on this action item? | 14:05 |
markvoelker | shamail: we've been kicking around ideas on how to submit results | 14:05 |
shamail | Ah, yes. I would like to hold off on that discussion since its an agenda item for today | 14:06 |
markvoelker | 'k | 14:06 |
shamail | Thanks | 14:06 |
shamail | Determine where to share results (markvoelker, topol, tongli) | 14:06 |
shamail | As mentioned, I want to pass on this action item review for now since we will discuss in depth today | 14:06 |
shamail | as part of the current agenda | 14:06 |
shamail | AI: Please add which OpenStack versions you plan to run interop challenge tests against (everyone) | 14:07 |
gema | #action Please add which OpenStack versions you plan to run interop | 14:07 |
gema | #action Please add which OpenStack versions you plan to run interop challenge tests against (everyone) | 14:07 |
shamail | This was an action from our previous meeting but I don’t think we discussed where to put this information | 14:07 |
gema | (sorry, copy & paste) | 14:07 |
shamail | This might be another one that intersects with the “sharing results” conversation or I have some thoughts I can share in the open discussion section | 14:08 |
shamail | np gema, thanks. | 14:08 |
shamail | Onwards! | 14:08 |
leong | +1 shamail.. i think the OpenStack version can be part of the "results" | 14:08 |
*** xiangfei1 has joined #openstack-defcore | 14:08 | |
shamail | leong: I agree | 14:08 |
shamail | #topic Test candidates review | 14:08 |
tongli | @leong, +1 | 14:09 |
shamail | The test candidates can be found in two sources: | 14:09 |
shamail | https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process | 14:09 |
shamail | Or line 28 in the etherpad for today | 14:09 |
shamail | tongli: can you lead this topic? | 14:09 |
tongli | @shamail, sure. | 14:09 |
tongli | I think we are open on what tools to use, either terraform, ansible, heat, your choice. | 14:10 |
tongli | probably we need to figure out the content of the app. lampstack has been one, dockerswarm is another. | 14:10 |
gema | tongli: do you mean that the same workload deployed with different tools in different clouds means the workload is interoperable? | 14:10 |
tongli | we currently already have dockerswarm in terraform. | 14:11 |
tongli | @gema, I think we should talk about that. not exactly sure if different tools for the same workload will accomplish something more than just one tool. | 14:11 |
gema | tongli: I think different deploying tools will use different apis | 14:11 |
jkomg | gema: the same workload deployed with the same tools in multiple clouds. Just multiple tools, I'd think. | 14:11 |
gema | what we should do is have different deploying tools as different use cases? | 14:12 |
tongli | for example, if we have lampstack in terraform and ansible, will that be better vs lampstack in terraform only? | 14:12 |
gema | maybe same workload? | 14:12 |
gema | or multiple ones | 14:12 |
shamail | jkomg: +1 | 14:12 |
gema | tongli: it will be different | 14:12 |
jkomg | I don't see the effectiveness of employing lampstack with terraform and ansible; you're deploying the same thing with different tools. We're not testing the tools. | 14:13 |
jkomg | s/employing/deploying/g | 14:13 |
tongli | komg, +1. | 14:13 |
gema | jkomg: if the tools are using the openstack apis they are part of getting the workload running | 14:13 |
gema | jkomg: agree with you we are not trying to test the tools | 14:13 |
jkomg | True, but I don't think anyone thinks there's a difference between lamp via ansible and lamp via heat, or terraform. | 14:13 |
gema | jkomg: I think it is different | 14:14 |
shamail | The question I see is that we need to agree whether we are building a catalog of workloads to test or tools… If workloads then once a workload (e.g. LAMPStack) is available using any tool then we should switch gears to the next workload. The tool leveraged is open but I am curious about the value of multiple tools for the same workloads. | 14:14 |
tongli | ok, so for one workload, we just have one script (either in terraform, ansible, or heat). we will just run that against multiple clouds. | 14:14 |
tongli | can we all agree on that? | 14:14 |
jkomg | shamail: +1 | 14:14 |
jkomg | tongli: +1 | 14:14 |
gema | shamail: it depends on the api coverage we are after demonstrating | 14:14 |
leong | shamail +1 | 14:14 |
gema | shamail: we could do each workload with a different tool | 14:14 |
gema | that'd work | 14:15 |
shamail | gema: I get your point too. | 14:15 |
skazi | shamail: +1 | 14:15 |
leong | +1 gema | 14:15 |
shamail | Essentially with multiple tools doing the same workload, we might cover a broader API set. | 14:15 |
gema | yep | 14:15 |
tongli | so it looks like we want to use multiple tools for the same workload? I am ok with that as well. | 14:16 |
shamail | I think based on that maybe we shoot for one tool per workload as a starting point and we can revisit running the same workload with additional tool if A) we have time remaining in the challenge and B) we have multiple tools in the repo for a workload | 14:16 |
hjanssen-hpe | hjanssen-hpe: +1 | 14:16 |
hjanssen-hpe | I mean | 14:16 |
gema | shamail: +1 | 14:16 |
hjanssen-hpe | shamail: +1 | 14:17 |
catherineD | shamail: +1 one tool per workload | 14:17 |
shamail | tongli: I am proposing we start with 1 per workload and come back to running the workload with additional tools after we have run at least each workload once | 14:17 |
markvoelker | +1. We have a limited amount of time before Barcelona, so I think for this first go-around we just want to use what we can get going quickly. | 14:17 |
skazi | shamail: +1 | 14:17 |
jkomg | +1 | 14:17 |
leong | let's work on what we have today and we can refine it later | 14:17 |
tongli | @shamail, that is fine. we already have lampstack in heat, terraform and ansible (I am working on it now) | 14:17 |
gema | shamail: there is no need to run all the tools per workload, if we use them independently of what we are deploying, it almost doesn't matter | 14:17 |
tongli | that will work. | 14:17 |
gema | it demonstrates interop | 14:17 |
*** DaisukeB has joined #openstack-defcore | 14:17 | |
shamail | #agree The team will start with one tool per workload for tests, we will be open to running the same workload with additional tools but only after the first go-around has been completed for each workload | 14:18 |
tongli | so at the top level directory, we will have ansible, terraform and heat. | 14:18 |
shamail | gema: +1 | 14:18 |
tongli | workload goes into one of the directory. | 14:18 |
leong | we can just let the user to decide which tool they want to use for workload | 14:18 |
tongli | leong, +1 | 14:18 |
tongli | I think that is settled. | 14:19 |
gema | if you want to be able to compare results | 14:19 |
gema | we should all use the same tool for the same workload | 14:19 |
gema | else they are not comparable | 14:19 |
jkomg | exactly | 14:19 |
gema | so we agree which tool goes with which workload and do it all that way? | 14:19 |
hjanssen-hpe | Interoperability means doing the same thing everywhere and you are interoperable if the results are the same | 14:19 |
gema | hjanssen-hpe: yep | 14:20 |
shamail | gema and hjanssen-hpe: +1 | 14:20 |
tongli | @gema, that is the thing, people may not agree on which tool to use? | 14:20 |
gema | tongli: if we have all the tools represented, one per workload, they'll agree | 14:20 |
catherineD | so we have one workload (LAMP stack) with 3 tools (Heat, Terraform and Ansible); should we choose one combination of workload + tool? | 14:20 |
gema | everybody gets a bit of benefit from this exercise | 14:20 |
tongli | I would rather have the options for developers. and we can define the content of the app. | 14:20 |
shamail | If the results are not based on the same test framework (which includes tool + workload) then it makes them hard to be seen as the same | 14:20 |
gema | shamail: +1 | 14:21 |
*** catherine_d|1 has joined #openstack-defcore | 14:21 | |
hjanssen-hpe | shamail: =1 | 14:21 |
hjanssen-hpe | shamail: +1 (Sorry, I hate my new keyboard) | 14:21 |
tongli | we currently already see terraform, heat being used. | 14:21 |
shamail | all good hjanssen-hpe | 14:21 |
tongli | seems to me ansible is a way better tool for this. | 14:21 |
gema | as long as the tool works for one of us it should work for all | 14:22 |
tongli | can we just start with these three and see how people use them? | 14:22 |
gema | tool+workload combination, I mean | 14:23 |
shamail | I think we should have a common starting point (pick one: heat, terraform, or ansible) and then revisit later if there is time remaining | 14:23 |
luzC | shamail +1 | 14:23 |
nikhil | hogepodge: gentle reminder, I have you scheduled for a Q&A with the glance team at tomorrow's mtg https://etherpad.openstack.org/p/glance-team-meeting-agenda . Please feel free to update/remove depending on your availability. | 14:23 |
tongli | @shamail, we already have terraform and heat in there. | 14:23 |
shamail | #startvote Which tool should we run initially for LAMPStack tests? terraform heat ansible | 14:23 |
openstack | Begin voting on: Which tool should we run initially for LAMPStack tests? Valid vote options are terraform, heat, ansible. | 14:23 |
catherine_d|1 | shamail: pick one workload+tool to start with | 14:23 |
openstack | Vote using '#vote OPTION'. Only your last vote counts. | 14:23 |
gema | #vote ansible | 14:24 |
shamail | Voting is open, please select your preferred tool and we can see the results in a second | 14:24 |
hjanssen-hpe | #vote ansible | 14:24 |
tongli | #vote ansible | 14:24 |
* dhellmann arrives late | 14:24 | |
* nikhil apologizes for interrupting the meeting. the channel topic confusingly indicated no meeting :/ | 14:24 | |
shamail | hi dhellmann | 14:24 |
shamail | np nikhil | 14:24 |
dhellmann | #vote ansible | 14:24 |
jkomg | #vote ansible | 14:24 |
luzC | #vote heat | 14:25 |
leong | #vote heat | 14:25 |
MartinST_ | #vote ansible | 14:25 |
* markvoelker is fairly agnostic and thinks we should use whatever tools we actually have in the repo today | 14:25 | |
shamail | Thanks markvoelker | 14:25 |
shamail | that would be heat or terraform | 14:25 |
gema | then heat got more votes :D | 14:26 |
shamail | lizdurst, catherine_d|1: closing out the voting… | 14:26 |
tongli | @shamail, right, doing this , heat hurts. | 14:26 |
tongli | head hurts | 14:26 |
shamail | #endvote | 14:26 |
openstack | Voted on "Which tool should we run initially for LAMPStack tests?" Results are | 14:26 |
openstack | heat (2): leong, luzC | 14:26 |
catherine_d|1 | of the two workload submitted heat+LAMP and terraform+LAMP stack | 14:26 |
openstack | ansible (6): MartinST_, dhellmann, hjanssen-hpe, tongli, gema, jkomg | 14:26 |
catherine_d|1 | I would like to see whether the workloads are similar ... | 14:26 |
shamail | markvoelker: I agree with your position but it seems that ansible won by a lot (even when people are aware that it isn’t in the repo yet) | 14:27 |
jkomg | which we can do after we have our initial data sets if there's time | 14:27 |
tongli | I think that as long as we define the content of the workload, tools do not matter that much. | 14:27 |
tongli | same workload should test same things such as install, function calls. | 14:27 |
markvoelker | shamail: Ansible's fine as long as someone has a playbook to contribute that we can all run. =) | 14:28 |
shamail | markvoelker: +1 | 14:28 |
jkomg | +1 | 14:28 |
rohit404 | IMO, the three tools are not directly comparable so not sure what criteria I need to use to pick one of the tools...i'm actually ok with what we have in the repo | 14:28 |
shamail | Does anyone have an ansible playbook for LAMPstack in the works or available already? | 14:28 |
luzC | markvoelker: +1 | 14:28 |
catherine_d|1 | markvoelker: +1 | 14:28 |
jkomg | I think tongli said he's working on ansible | 14:28 |
shamail | ah, okay.. thanks jkomg | 14:29 |
tongli | @shamail, I am working on it now. should put up a patch later today or tomorrow. | 14:29 |
jkomg | word | 14:29 |
shamail | Okay to summarize this topic as “team will use ansible and LAMPstack as the first tool+workload combination”? | 14:29 |
gema | +1 | 14:29 |
hjanssen-hpe | +1 | 14:30 |
shamail | #agree Team will use ansible and LAMPstack as the first tool+workload combination to generate results. | 14:30 |
shamail | #action tongli will post ansible playbook for LAMPstack | 14:30 |
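For context, a minimal sketch of what such an Ansible playbook might look like. All variable names and values here are illustrative assumptions, not tongli's actual patch; it only boots one web-tier instance using the `os_server` module available in Ansible 2.x:

```yaml
# Hypothetical sketch only -- not the playbook from the #action above.
# Boots one instance for the web tier against whatever cloud is
# configured (e.g. via clouds.yaml).
- hosts: localhost
  connection: local
  tasks:
    - name: Launch the LAMP web server instance
      os_server:
        cloud: "{{ cloud_name }}"        # assumed variable
        name: lamp-web
        image: "{{ image_name }}"        # assumed variable
        flavor: "{{ flavor_name }}"      # assumed variable
        key_name: "{{ keypair_name }}"   # assumed variable
        network: "{{ network_name }}"    # assumed variable
        wait: yes
      register: web_server

    - name: Show the address of the new instance
      debug:
        var: web_server.server.accessIPv4
```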
catherine_d|1 | shamail: LAMP stack to me is the middleware ... do we have application on top of the LAMP stack submitted? | 14:30 |
tongli | right, let's talk about the content of the app on top of lamp stack. | 14:30 |
shamail | catherine_d|1: We don’t yet… | 14:31 |
tongli | wordpress has been mentioned few times. | 14:31 |
markvoelker | IMHO I'm not much concerned with the actual app, but rather what it needs from OpenStack. | 14:32 |
gema | and wordpress wouldn't need much | 14:33 |
markvoelker | E.g. it's going to want an instance for a web server that has external connectivity, a separate network that's not external w/ another instance + volume for a database, that sort of thing | 14:33 |
hjanssen-hpe | The app should use Openstack features/facilities and not just test a VM | 14:33 |
catherine_d|1 | Is Wordpress included in the current 2 LAMP stack submissions? | 14:33 |
tongli | @markvoelker, I thought that the workload is to test that a deployed app runs correctly on openstack. | 14:33 |
catherine_d|1 | markvoelker: the app is to demonstrate that the LAMP stack works | 14:33 |
hjanssen-hpe | markvoelker: +1 | 14:33 |
shamail | markvoelker: +1 | 14:34 |
leong | the app on the top doesn't really matter unless that involves testing the OpenStack API | 14:34 |
markvoelker | tongli: Sure, but if all OpenStack is doing is spinning up a single instance running a generic x86 linux os, we haven't really proved much in terms of interoperability | 14:34 |
hjanssen-hpe | Deploying an app without using Openstack only shows that the hypervisor works | 14:34 |
leong | catherine_d|1 the wordpress is included in the current heat LAMP | 14:34 |
gema | markvoelker: should be able to run on an AArch64 linux too | 14:34 |
markvoelker | Modern apps need more than that from the IaaS layer, so we want to exercise more OpenStack capabilities | 14:34 |
catherine_d|1 | leong: thx | 14:34 |
gema | as in, the image shouldn't matter | 14:35 |
tongli | so the terraform lampstack I put up there does the following: | 14:35 |
tongli | 1. provision 3 nodes (can be more configurable) | 14:35 |
tongli | 2. install mysql | 14:35 |
tongli | 3. install lamp components | 14:36 |
hogepodge | o/ | 14:36 |
tongli | 4. add a one page web app | 14:36 |
shamail | hi hogepodge | 14:36 |
tongli | 5. 3 nodes working together serve the lamp stack app. | 14:36 |
tongli | just my first shot at it. | 14:36 |
markvoelker | tongli: sounds totally reasonable. Bonus points if we could get it to exercise a few more capabilities (like if the mysql box had a persistent cinder volume attached or was on an isolated network). | 14:36 |
dhellmann | tongli : seems like a good demo | 14:37 |
dhellmann | markvoelker : ++ | 14:37 |
gema | tongli: +1 | 14:37 |
leong | markvoelker.. i think the heat template covered that | 14:37 |
hjanssen-hpe | tongli: An excellent start! | 14:37 |
shamail | nice leong | 14:37 |
catherine_d|1 | tongli: so the verification that the deployed LAMP stack works is for a user to hit the webpage? | 14:37 |
tongli | ok, I will add the cinder volume for database. | 14:38 |
leong | in the heat template, the db layer using cinder volume as the persistent storage | 14:38 |
dhellmann | tongli : or trove? :-) | 14:38 |
*** tkfjt__ has quit IRC | 14:38 | |
tongli | are we all ok that the actual app can be just something simple to prove that the database was connected and everything else is working? | 14:38 |
catherine_d|1 | leong: could you please describe the workload that you submitted (Heat+LAMPstack) | 14:38 |
shamail | tongli: +1 | 14:39 |
tongli | do not need to be a very complex app? | 14:39 |
leong | it's a 3-tier lamp stack as well, with wordpress installed | 14:39 |
dhellmann | tongli : +1 | 14:39 |
leong | it provisions 3 networks, one for each tier | 14:39 |
tongli | @dhellmann, hmmm, we are doing lamp, trove , interesting idea though. | 14:39 |
hjanssen-hpe | tongli: +1 | 14:39 |
leong | the db tier also utilises a Cinder volume for persistent storage | 14:39 |
skazi | tongli: +1, imo the networking itself is already showing some openstack features | 14:40 |
leong | there is also a separate heat template if you want to test AutoScaling (but that one is ceilometer-dependent) | 14:40 |
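As a rough illustration of the pattern leong describes (a db tier backed by a Cinder volume), a minimal Heat fragment might look like the following. Resource and parameter names are assumptions for illustration, not the submitted template:

```yaml
heat_template_version: 2015-10-15

parameters:
  image:  {type: string}
  flavor: {type: string}

resources:
  db_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                           # GB; illustrative value
  db_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
  db_volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: db_server}
      volume_id: {get_resource: db_volume}
```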
leong | question: are we testing defcore-specific api in this interop challenge? or beyond? | 14:41 |
catherine_d|1 | leong: how does a user verify that this workload is functional? | 14:41 |
jkomg | initial testing should stick to defcore standards; we don't want to be too in the weeds | 14:41 |
tongli | ok, I think we are all saying the similar thing. application should be simple. | 14:41 |
gema | leong: beyond | 14:41 |
jkomg | really? alright | 14:41 |
gema | we could step it up next cycle, when networking is in full in the program | 14:41 |
shamail | jkomg: +1 | 14:42 |
hogepodge | leong: catherine_d|1: I'll reiterate that one of the initial tasks should be to run the tests that are part of DefCore, that would cover 100% of the DefCore required capabilities | 14:42 |
skazi | I think it's better to have more apps than just one with many features | 14:42 |
leong | catherine_d|1 the Heat engine will output the results if the deployment is successful... and the user can validate by viewing the wordpress | 14:42 |
skazi | at least from the presentation pov | 14:42 |
tongli | the test should check that all the setup and installation steps are successful (or fail), then access the application (via restful api maybe) | 14:42 |
shamail | I think initially the thought was to use capabilities covered by the OpenStack-Powered program… We could definitely expand beyond that but that might be after the Barcelona summit. | 14:42 |
jkomg | shamail: +1 | 14:43 |
dhellmann | shamail : that makes sense; start with what we're already testing | 14:43 |
leong | hogepodge.. i understand that.. that's why the heat template that we submitted has two different files.. one is only test defcore api, another is to test beyond | 14:43 |
shamail | We wanted to use OpenStack-Powered Clouds/Services to showcase API interoperability as a foundation and then show workloads being deployed as the next layer | 14:43 |
shamail | hogepodge: +1 | 14:44 |
tongli | the patch I put up also create security groups, rules, keypairs etc. | 14:44 |
shamail | hogepodge: the process outlined in the etherpad/wiki (draft) today starts with the first task being to run RefStack against the cloud | 14:44 |
shamail | https://wiki.openstack.org/wiki/Interop_Challenge#Test_Candidate_Process | 14:44 |
tongli | so that running it will be easy. | 14:45 |
tongli | @shamail, we are running out of time. | 14:45 |
tongli | there are so many other things we need to address from the agenda. | 14:45 |
rohit404 | so, ansible + LAMP + wordpress ? | 14:45 |
jkomg | +1 | 14:45 |
shamail | thanks tongli | 14:45 |
shamail | Okay, so please review the testing candidate workloads/tools and we can discuss them again next week. I am going to change topics now. | 14:46 |
hogepodge | shamail: sweet thanks | 14:46 |
leong | +1 shamail | 14:46 |
catherine_d|1 | hogepodge: RefStack is one of the test categories for testing ... | 14:47 |
shamail | On this topic, I’d like to state that we selected a starting point but please continue to add tools/workloads because I am sure once we start testing that people might be able to get multiple test runs in after becoming comfortable with the submission process. I think having multiple tools/workloads in the repo is beneficial and I hope we get to run more than just one tool per workload (but have to prioritize) | 14:47 |
shamail | #link https://review.openstack.org/#/q/status:open+project:openstack/osops-tools-contrib | 14:47 |
shamail | open reviews for scripts as well. | 14:47 |
shamail | please go review later :) | 14:47 |
shamail | #topic Discuss how/where to share results (results repository) | 14:47 |
leong | agree shamail! :) | 14:47 |
shamail | markvoelker, tongli: can you lead this topic since you two have been thinking about this? | 14:47 |
markvoelker | sure | 14:48 |
shamail | thanks | 14:48 |
markvoelker | So basically once we pick workloads and people run them, we need a way to collect results | 14:48 |
markvoelker | And what's probably most valuable isn't binary "pass/fail" | 14:48 |
markvoelker | What's probably valuable is two things: | 14:48 |
markvoelker | 1.) Some light analysis of failures (e.g. "it didn't work because it assumes floating IP's will be used; we use provider networks instead") | 14:49 |
leong | looks like we need to define a 'result template' such as pass/fail, if fail, what fail | 14:50 |
markvoelker | 2.) For things that did actually run for everyone, we can do some analysis later to glean best practices (E.g. "this checked for available image API's rather than assuming Glance v2 was available") | 14:50 |
markvoelker | So to that end we started a basic template for reporting results | 14:50 |
markvoelker | This isn't final, but gives you sort of a feel for what we'd like to collect | 14:50 |
tongli | @markvoelker, right, define a template (hint: yaml), then at the end of the run, replace the variables in the template. | 14:50 |
markvoelker | #link http://paste.openstack.org/show/553587/ Skeleton of results template | 14:50 |
shamail | markvoelker: Can you please post it in the etherpad as well? | 14:51 |
markvoelker | shamail: sure | 14:51 |
tongli | @markvoelker, yaml ,please. | 14:51 |
shamail | We can add it to the wiki later in the week | 14:51 |
shamail | thanks | 14:51 |
markvoelker | As far as the means of collecting that info, there's a couple of methods we could use | 14:51 |
tongli | eventually we feed these results to some charting tools to plot nice charts. | 14:51 |
markvoelker | E.g. for the initial run for Barcelona, we could just set up a SurveyMonkey (or similar tool) for folks to report to | 14:52 |
leong | tongli +1 yaml+++1 | 14:52 |
markvoelker | Or we could just email. Or...etc etc etc | 14:52 |
tongli | @markvoelker, you are not suggesting manually doing this, right? | 14:52 |
tongli | I would rather produce a yaml file then http post to somewhere. | 14:52 |
markvoelker | tongli: I am. Because I'm not sure we've got time to build a client wrapper and server before Barcelona. =) | 14:52 |
gema | yeah, manually testing this is not going to scale | 14:53 |
markvoelker | Longer term I'm all in favor of automation | 14:53 |
shamail | tongli: some of the things markvoelker highlighted (e.g. light analysis of failures) will have to be manual | 14:53 |
jkomg | It'd have to be at least partially manual | 14:53 |
jkomg | right | 14:53 |
gema | plus some of those questions are open to interpretation | 14:53 |
shamail | jkomg: +1 | 14:53 |
markvoelker | But if we want to have something to show before Barcelona, I think we need to move fast. | 14:53 |
leong | the result template can be in yaml, how to submit can be manual at this stage | 14:53 |
shamail | I agree that a simple pass/fail for each task doesn’t give us the data to learn from | 14:54 |
shamail | leong: +1 | 14:54 |
leong | the template that markvoelker showed can be defined in yaml | 14:54 |
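To make the idea concrete, a hedged sketch of what a YAML rendering of such a results template might look like. The field names below are invented for illustration; the actual skeleton is in the paste markvoelker linked:

```yaml
# Illustrative field names only -- the real skeleton is at
# http://paste.openstack.org/show/553587/
cloud: example-public-cloud        # assumed identifier
openstack_version: mitaka
workload: lampstack
tool: ansible
result: fail                       # pass / fail
failure_analysis: >
  Play assumed floating IPs for external connectivity;
  this cloud uses routable provider networks instead.
related_bugs: []                   # bug links, if any (per gema's suggestion)
```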
tongli | for the location of the results, I would just suggest we have something like swift or an http-post-capable site. | 14:54 |
shamail | I think we need to separate out this conversation between: what are we collecting, what format are we collecting it in | 14:54 |
leong | then people can either http post or email the yaml result back.. | 14:54 |
gema | or git commit it | 14:54 |
shamail | Does the proposal for what type of information we should capture after tests look good as proposed by markvoelker? | 14:54 |
markvoelker | shamail: +1. I'm far less concerned about the method of collection than figuring out what useful data we can collect. | 14:55 |
leong | and of course a http post or something can built to validate the 'result format' | 14:55 |
shamail | markvoelker: ditto :) | 14:55 |
gema | shamail: +1 | 14:55 |
jkomg | shamail: does to me, +1 markvoelker | 14:55 |
skazi | markvoelker: +1 | 14:55 |
tongli | another github project to contain the results? | 14:55 |
jkomg | also +1 that the data is more important than how we collect it | 14:55 |
shamail | It seems that we all like this notion of capturing results and including a light analysis/brief description of where a workload failed | 14:56 |
markvoelker | So, for today: is there any important data that we should be collecting that isn't in http://paste.openstack.org/show/553587/ already? | 14:56 |
shamail | markvoelker: The summary sounded good to me, I will review the questions later in the day again | 14:56 |
gema | markvoelker: I would add bug number if any, that you raised from this testing | 14:56 |
gema | that makes all the errors we uncover traceable | 14:56 |
shamail | #action please look at the results template at http://paste.openstack.org/show/553587/ and share thoughts on ML on whether it captures everything we’d want (everyone) | 14:57 |
markvoelker | gema: bug against what? OpenStack? | 14:57 |
gema | markvoelker: against any project | 14:57 |
tongli | @gema, are you saying we create bugs against the cloud ? | 14:57 |
gema | openstack, linux, lamp | 14:57 |
markvoelker | gema: I'm thinking a lot of failures we're going to see won't be the result of OpenStack bugs | 14:57 |
shamail | #agree We will discuss format/process for submitting results next week | 14:57 |
shamail | We are almost out of time and I think that will be a good conversation as well | 14:57 |
gema | markvoelker: ok | 14:57 |
shamail | #topic Open Discussion | 14:58 |
markvoelker | E.g. it'll be things like "this Ansible play assumed I need floating IP's for external connectivity, but the cloud I'm testing uses provider networks that are routable" | 14:58 |
shamail | We have two minutes remaining | 14:58 |
*** lizdurst has quit IRC | 14:58 | |
shamail | markvoelker: +1 | 14:58 |
tongli | I would think that the test will run automatically on a daily basis. | 14:59 |
tongli | which will produce a lot of results, then we can chart over time | 14:59 |
shamail | Do we expect change over time? | 14:59 |
shamail | besides capacity issues | 14:59 |
jkomg | hopefully not :P | 14:59 |
tongli | hmmm, after errors get fixed, you would like to run again, right? | 15:00 |
luzC | what would be the timeframe, once we have the playbooks we are expected to test the cloud for how many days? | 15:00 |
shamail | Alright, thanks everyone for making this a great meeting! See you next week. | 15:00 |
shamail | #endmeeting | 15:00 |
openstack | Meeting ended Wed Aug 10 15:00:18 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.html | 15:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.txt | 15:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-08-10-14.00.log.html | 15:00 |
shamail | I have to run but this is DefCore channel so conversation can continue :) | 15:00 |
markvoelker | tongli: well, honestly a big part of what I'd hope to get out of this is a set of best practices. I'm not sure daily runs really help with that. | 15:00 |
* shamail sneaks out | 15:01 | |
markvoelker | E.g. I'm more interested in figuring out that if I want to deploy a 3-tier web app with Ansible, when I go write my plays I should do things like: | 15:01 |
tongli | @markvoelker, so you expect just one set of result from each cloud? | 15:01 |
gema | tongli: yeah, me too | 15:01 |
jkomg | one set from each version | 15:01 |
gema | not sure what we would gain by automating this | 15:02 |
jkomg | the expectation is that x test will run the same on x cloud at x version, no matter the # of iterations, right? | 15:02 |
gema | the value of this testing is how wide the run is | 15:02 |
gema | not how many of them we do | 15:02 |
*** DaisukeB has quit IRC | 15:02 | |
gema | jkomg: right | 15:02 |
markvoelker | 1.) Check for supported image APIs rather than assuming Glance v1 is available | 15:02 |
*** shamail has quit IRC | 15:02 | |
markvoelker | 2.) Don't assume floating IPs are necessary for external connectivity | 15:03 |
markvoelker | 3.) etc etc etc | 15:03 |
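[Editor's note: the version-discovery check in point (1) above can be sketched as below. This is a hypothetical illustration, not code from the interop workloads: Glance's unversioned endpoint returns a version-discovery document, and a play (or script) can pick a usable API version from it instead of hard-coding v1. The sample document's values are made up for the example.]

```python
def pick_image_api_version(discovery_doc):
    """Return the newest usable API version id from an image-service
    version-discovery document (the response to a GET on the
    unversioned Glance endpoint)."""
    usable = [
        v["id"]
        for v in discovery_doc.get("versions", [])
        if v.get("status") in ("CURRENT", "SUPPORTED")
    ]
    # Version ids look like "v2.3"; compare numerically, not lexically,
    # so "v2.10" would beat "v2.9".
    def as_tuple(vid):
        return tuple(int(part) for part in vid.lstrip("v").split("."))
    return max(usable, key=as_tuple) if usable else None

# Illustrative discovery document shaped like Glance's /versions response:
doc = {
    "versions": [
        {"id": "v2.3", "status": "CURRENT"},
        {"id": "v2.0", "status": "SUPPORTED"},
        {"id": "v1.1", "status": "DEPRECATED"},
    ]
}
print(pick_image_api_version(doc))  # -> v2.3
```

A workload written this way degrades gracefully: it uses the newest supported image API the cloud advertises and can fail with a clear message (rather than a v1-specific error) when nothing usable is found.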
markvoelker | tongli: Pretty much. Get results from a bunch of clouds, see what we can learn from the failures, then repeat at some later point with the workloads adjusted based on what we learned the last time. | 15:03 |
markvoelker | tongli: at the end of the day, if we can provide app devs with some best practices to ensure their workloads run across more clouds, that's a win. | 15:03 |
tongli | markvoelker: if we define the content of each workload well, then these things will be tested on each run. | 15:03 |
jkomg | I think if we see different results with the same configuration sets on the same clouds on the same version, interop is the least of our worries, right? :P | 15:04 |
gema | haha | 15:04 |
gema | jkomg: for our own peace of mind we should run them on each upgrade | 15:04 |
gema | even if not shared , but for ourselves | 15:05 |
tongli | if the results will just be posted a couple of times, then manual is fine. | 15:05 |
tongli | I thought we would run this on a daily basis. | 15:05 |
* jkomg is just steering away from having to hang this framework on jenkins or something for a daily run | 15:05 | |
gema | tongli: manual has the problem that it is open for interpretation | 15:05 |
jkomg | gema: oh for sure, we want version testing | 15:05 |
jkomg | gema: I think if we're careful we can minimize that, like the framework markvoelker setup | 15:05 |
markvoelker | tongli: Sure, my point was just that running a daily test doesn't necessarily do much for us. E.g. VIO version 2.5 is a shipped distribution...it's not changing daily. The Heat template or Ansible playbook we run might, but it's not. | 15:05 |
jkomg | did it work? no? what was the error? did the error help? or similar | 15:06 |
jkomg | markvoelker: +1, that's what I'm saying too | 15:06 |
tongli | we did not get to talk about the clouds we will run against. | 15:06 |
tongli | I really want to know what you guys think. | 15:07 |
jkomg | We should be able to get some of this done via the list, right? | 15:07 |
markvoelker | jkomg: yep, I think so | 15:07 |
tongli | the clouds we run against are kind of a big deal, since the results will eventually mean something. | 15:07 |
markvoelker | tongli: Well, speaking for myself: I'd like to see the workloads run against as many products as we can that are actually available to consumers right now. | 15:08 |
jkomg | markvoelker: +1 | 15:08 |
jkomg | that's why we have cast a wide net for involvement | 15:08 |
markvoelker | E.g. public clouds (RAX, OVH, etc), distributions (RHOSP, VIO, MOS, etc), etc | 15:08 |
tongli | so you mean public OpenStack clouds? | 15:08 |
*** kbaikov has quit IRC | 15:08 | |
jkomg | all the clouds, as many as we can get | 15:09 |
jkomg | we want this not only to succeed, but to showcase that yep, throw x workload on any stack, it'll run like you expect | 15:09 |
jkomg | with expected results | 15:09 |
markvoelker | At the end of the day if we come up with best practices for creating interoperable workloads, then as an app dev I know if I follow that guidance then I can choose from a whole wealth of products. Which is way cool. | 15:09 |
jkomg | right | 15:09 |
jkomg | and best practices aside, we're proving without a doubt that refstack testing works to show interop, with functional testing of as many of those tests as possible. | 15:11 |
jkomg | a la here's my refstack results, and here's a number of functioning workload tests proving said results. | 15:11 |
jkomg | interop? check. | 15:11 |
jkomg | anything else is gravy | 15:12 |
luzC | I have to run, talk to you soon :) | 15:13 |
jkomg | peace out | 15:13 |
*** galstrom_zzz is now known as galstrom | 15:13 | |
*** leong has quit IRC | 15:14 | |
hogepodge | Meeting agenda for today, a bit later than normal. Please add topics as you see fit. | 15:20 |
hogepodge | https://etherpad.openstack.org/p/DefCoreLunar.13 | 15:20 |
eglute | thanks hogepodge! | 15:28 |
*** MartinST_ has quit IRC | 15:50 | |
*** pcaruana has quit IRC | 16:29 | |
*** ametts has quit IRC | 16:38 | |
openstackgerrit | Chris Hoge proposed openstack/defcore: Update schema to 1.6 to disallow additional properties https://review.openstack.org/351363 | 16:43 |
*** rohit404 has quit IRC | 16:51 | |
*** catherine_d|1 has quit IRC | 16:55 | |
*** galstrom is now known as galstrom_zzz | 16:58 | |
*** tongli has quit IRC | 17:19 | |
*** Jokoester has quit IRC | 17:21 | |
*** ametts has joined #openstack-defcore | 17:28 | |
*** woodster_ has quit IRC | 17:39 | |
*** galstrom_zzz is now known as galstrom | 18:35 | |
*** ametts has quit IRC | 18:42 | |
*** ametts has joined #openstack-defcore | 18:43 | |
*** woodster_ has joined #openstack-defcore | 19:13 | |
openstackgerrit | Merged openstack/defcore: Flag advisory tests in 2016.01 due to requirement of admin credential. https://review.openstack.org/353287 | 19:52 |
*** ametts has quit IRC | 20:34 | |
*** jkomg has quit IRC | 21:30 | |
*** jkomg has joined #openstack-defcore | 21:35 | |
*** jkomg has quit IRC | 21:40 | |
*** galstrom is now known as galstrom_zzz | 22:15 | |
*** edmondsw has quit IRC | 22:36 | |
*** woodster_ has quit IRC | 23:49 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!