| *** rosmaita1 is now known as rosmaita | 00:08 | |
| opendevreview | Goutham Pacha Ravi proposed openstack/ossa master: Add OSSA-2026-009 (CVE-2026-pending) https://review.opendev.org/c/openstack/ossa/+/986480 | 05:00 |
|---|---|---|
| sean-k-mooney | so i was reviewing some notes on the vmt discussion at the ptg and i have a question, or rather a comment, on a potential logical inconsistency with regards to ai usage | 10:44 |
| sean-k-mooney | effectively there was a comment to the effect that for any embargoed issue you cannot use a remote hosted llm in the patch creation or review of the same | 10:45 |
| sean-k-mooney | effectively saying that would potentially disclose the embargoed content or issue to the llm provider | 10:45 |
| sean-k-mooney | but taking that to its logical conclusion, that would also mean we could never have an embargoed security bug that was found while using an llm | 10:46 |
| sean-k-mooney | i.e. if i suspect there is something odd and investigate it with an llm, or i am just reviewing code with an llm and it notes a potential security issue | 10:47 |
| sean-k-mooney | that could never be a private security bug | 10:47 |
| sean-k-mooney | so im concerned that actually adopting that stance would be counterproductive, as it would prevent us using these tools in what is already a much harder way of developing than our standard workflow | 10:48 |
| sean-k-mooney | im wondering if there is a middle ground where we say: if the issue was found without llm tools, these tools should not be applied? | 10:49 |
| sean-k-mooney | i.e. if you found the issue with claude or similar through your natural usage of the tool or a directed audit of the code, its ok to continue to use them. but if a human reports an issue to you and you're assessing it, then you should not? | 10:50 |
| sean-k-mooney | we are all, i think, expecting that we will get a bunch of potentially false-positive vmt reports because of these tools, and it feels like we would be shooting ourselves in the foot if we could not use the same tools to triage or reproduce them once reported | 10:52 |
| sean-k-mooney | for non-ai reports i think there is sense in keeping it mostly or entirely human driven, but i have a lot of concern with adjusting the vmt policy to say ai cannot be used on ai-discovered issues | 10:53 |
| sean-k-mooney | it would be good, however, to update the vmt process to require disclosure of whether ai was used in the discovery process, so that we can then make that nuanced determination on how to proceed | 10:54 |
| sean-k-mooney | JayF: gouthamr thoughts^ | 10:54 |
| JayF | sean-k-mooney: It's not a black and white issue. Each time you send information to a remote service like an llm, you run a risk. That makes me want to encourage people to not take that risk once we realize there's a vulnerability. We can't control the actions of people who report issues, but we can be very careful once embargoed. | 13:02 |
| fungi | sean-k-mooney: i would go so far as to say that any vulnerability found with a tool, whether a canned security scanner, static analyzer, or llm, should not be embargoed because anyone can run the same tools and find the same vulnerabilities, so if one person has we need to assume others also have and may be keeping it to themselves for their own profit/criminality | 13:05 |
| sean-k-mooney | fungi: ya so if you point an llm at a part of the code and say "hey this looks weird, what happens if i do x" | 13:09 |
| sean-k-mooney | its very very good at trying x and telling you | 13:10 |
| sean-k-mooney | i definitely understand the risk of disclosure | 13:10 |
| sean-k-mooney | but there is a real tradeoff in time-to-fix and the quality of the fix (better or worse) by using or not using llms | 13:11 |
| fungi | it's almost like nobody can remember that only a few months ago they were writing bug fixes without an llm | 13:24 |
| sean-k-mooney | well months vs years (i have been using ai for upstream since at least 2023 in some capacity) | 13:26 |
| sean-k-mooney | but also yes they are not required | 13:27 |
| sean-k-mooney | im just pointing out that as the use of llms to find bugs of any kind increases, if we dont use them to help fix them we risk being overwhelmed with bugs to triage and/or fix, especially as our contributor base contracts | 13:28 |
| sean-k-mooney | so im just playing devils advocate here and saying we should fight fire with fire when appropriate to do so | 13:29 |
| fungi | yep, i get it. also my sense of time is pretty much shot, as i'm personally exhausted by llm everything everywhere all the time now | 13:29 |
| sean-k-mooney | well yes and no, i would very much consider myself an early adopter | 13:30 |
| sean-k-mooney | my use of llms predates the ai policy by a couple of releases, and predates claude code for example | 13:30 |
| sean-k-mooney | i started experimenting with them before copilot even came out of beta | 13:31 |
| sean-k-mooney | using terrible local models | 13:31 |
| sean-k-mooney | then i used github copilot mainly to write comments, tests or release notes for a few years | 13:31 |
| fungi | i'm just hoping it all calms down/slows down soon, because this pase is unsustainable | 13:31 |
| fungi | er, pace | 13:32 |
| sean-k-mooney | ya its fatiguing | 13:32 |
| sean-k-mooney | i also think its funny that "normal users" are now feeling the core reviewer fatigue of doing diff/code review on a large volume of code in different contexts | 13:33 |
| sean-k-mooney | i.e. a lot of people who originally reported that llms made them more productive started reporting they were burning out for the exact reasons core reviewers do | 13:33 |
| sean-k-mooney | well that assumes they are reading the output... | 13:34 |
| sean-k-mooney | anyway, thats off topic | 13:36 |
| sean-k-mooney | i would be interested in reviewing what any vmt process update in this regard ends up looking like | 13:37 |
| fungi | yeah, i'm thinking i'll ask the broader open source community on the oss-security ml | 14:07 |
| JayF | sean-k-mooney: I've said more than once that my review-heavy workflow in upstream OpenStack (I've always spent more time reviewing than authoring code) was good preparation for the current llm tools | 14:46 |
| sean-k-mooney | JayF: yep it very much is | 14:56 |
| sean-k-mooney | for anyone not used to it, its like trying to run a marathon | 14:56 |
| sean-k-mooney | as your first race | 14:57 |
| gouthamr | devils advocate for a bit: most commonly, someone that's discovering/triaging/fixing vulnerabilities is using an enterprise account with these remote LLMs | 16:45 |
| gouthamr | i wonder if the presence of a strict/transparent data privacy policy would help.. enterprise agreements should cover something regarding processing embargoed/security-sensitive data already.... so maybe we can ask companies to share that "commitment" publicly before they allow their employees to work on embargoed issues with LLM help | 16:45 |
| gouthamr | obligatory NAL, but this one: https://www.anthropic.com/legal/commercial-terms says: "Anthropic may not train models on Customer Content from Services" | 16:47 |
| gouthamr | https://openai.com/enterprise-privacy/ says: "We do not train our models on your data by default" | 16:47 |
| JayF | gouthamr: as I noted in the TC session, in many of these cases it's a more general "don't spam details of this change across a thousand cloud systems" as much as it is a worry of them training on it | 16:50 |
| JayF | gouthamr: for instance, let's say I used a private github issues board to track downstream work (as some gr-oss teams do), would it be OK for me to put [work on OSSA-2026-TBD issue with omghax via method] | 16:51 |
| gouthamr | wishful thinking that there can be something specific for vulnerabilities in open source communities.. but, i can reach out to my company's security team to get their bottomline clarification: "Can I use Claude to fix an embargoed issue upstream; do you have recommendations on what i should do?" | 16:51 |
| gouthamr | i think the risk that gtema alluded to in the etherpad extended to "misses" as well... we have a number of instances where embargo was broken because the AI tool pushed code to gerrit or elsewhere happily in violation of the VMT's process.. | 16:51 |
| gouthamr | JayF: yeah, that would be a big no with our embargo policy.. but the hope is that other humans aren't seeing your work with these machines.. | 16:53 |
| gouthamr | my reasoning for the devil's advocacy here is, people may just do things anyway? :( | 16:55 |
| JayF | Flexibility when requested is good. Flexibility because people can't comply with the basic professional requirements of their job is negligence. | 16:56 |
| JayF | Especially when there are pressures outside of the OSS community that may be pushing for speed over quality and validation. | 16:57 |
| gouthamr | yeah, agreed. i like that we're thinking of basic ground rules.. | 16:58 |
| fungi | the sudden surge in volume is also leading to more accidents | 16:58 |
| gouthamr | ++ | 16:59 |
| fungi | understandably, i mean, our processes are bursting at the seams | 17:00 |
| fungi | related, i also posted https://www.openwall.com/lists/oss-security/2026/04/28/15 to find out how other communities are handling this | 17:01 |
| JayF | I pretty much strongly disagree with this take and would be -1 to OpenStack setting such a policy, fungi | 17:04 |
| JayF | LLM tokens are a proxy for money and time. You're basically saying that with enough money and time a vulnerability can be found. This is always true. This also matches your mental model of usually being against coordinated disclosure in most cases :D | 17:05 |
| fungi | that's fair, but other projects take a similar stance wrt bugs from fuzzers and similar tools | 17:05 |
| sean-k-mooney | the one thing i will say is its much easier to nail an llm to the wall and ask it to review your code than a colleague or contributor at a different company | 17:44 |
| sean-k-mooney | i will say that redhat has a contractual relationship with a cloud provider for inference, partly because of data privacy and confidentiality as well as data residency reasons | 17:46 |
| sean-k-mooney | we obviously have strict regulatory and other restrictions on how customer data can be used etc | 17:47 |
| sean-k-mooney | so legal reviewing the data handling and processing stipulations in those inference contracts was very much a part of allowing those to be used in redhat | 17:48 |
| sean-k-mooney | fungi: for what its worth, most llm releases disclose their knowledge cutoffs for new training data, and thats frequently 6+ months old | 17:52 |
| sean-k-mooney | fungi: but i also wondered if llm-found bugs should just be public security bugs. i think no, but it is an approach | 17:53 |
| JayF | > <sean-k-mooney> the one thing i will say is its much easier to nail an llm to the wall and ask it to review your code than a colleague or contributor at a different company | 17:59 |
| JayF | This is a scary, scary, scary comment. | 17:59 |
| gouthamr | sean-k-mooney is a scary man :D /jk | 17:59 |
| JayF | The LLM is the ultimate "avoid talking to other people about your code" machine | 17:59 |
| JayF | And for an upstream community, that dialog *is all we've got* | 17:59 |
| JayF | if we stop talking to each other, we might as well just all fork and go home | 17:59 |
| sean-k-mooney | JayF: its all fair until we give ai robots, and then we have to be sure not to hurt its feelings | 18:12 |
| sean-k-mooney | cause movies tell us that will go very well for us if we do | 18:12 |
| JayF | I know you're somewhat joking, but I'm serious about all an OSS community has is reputation and collaboration | 18:13 |
| JayF | the software is an output of those ingredients | 18:13 |
| JayF | and AI threatens both seriously which is why I want us to proceed at an OpenStack-velocity pace :D | 18:13 |
| JayF | (we are not the fastest moving community; that's a strength in this case!) | 18:14 |
| sean-k-mooney | oh i know, but for security issues its not always easy to collaborate, partly because we dont have the same tooling | 18:14 |
| sean-k-mooney | i know the kernel is equally frustrated with that | 18:14 |
| gouthamr | this was a good read: https://daverupert.com/2026/04/more-talk-less-grok/ | 18:14 |
| gouthamr | (sorta aligns with what JayF's saying and i stole that from the ironic ptg notes) | 18:14 |
| JayF | gouthamr: thanks for showing the source I stole my quote from :D | 18:15 |
| JayF | now they know I'm plagiarising :P | 18:15 |
| gouthamr | roflmao :D sorry.. it read like a great discussion.. you guys had other things there that helped me through the rest of the week's discourse | 18:17 |
| JayF | Believe me, that article came across and I was like "I have finally found the AI blogger who sees these things like I do" | 18:18 |
| JayF | Lots of useful things are also dangerous. I have a giant 48A@240V cable plugged into my truck. Super useful tool. It also could burn down my house if misused or misdesigned. | 18:19 |
| JayF | LLMs are dangerous and powerful in the same ways, except we don't have codes yet or electricians or really anyone who fully understands the power and weaknesses entirely yet. | 18:19 |
| JayF | meanwhile AI companies/pundits/VCs are trying to build https://en.wikipedia.org/wiki/Wardenclyffe_Tower :) | 18:20 |
| fungi | or tesla's "death ray" ;) | 20:29 |
| JayF | the death ray is the interviews the CEOs give saying like "we've got the AGI locked up in the vault! It almost escaped! Spooooooooky!" | 20:32 |
Generated by irclog2html.py 4.1.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!