Tuesday, 2026-04-28

*** rosmaita1 is now known as rosmaita00:08
opendevreviewGoutham Pacha Ravi proposed openstack/ossa master: Add OSSA-2026-009 (CVE-2026-pending)  https://review.opendev.org/c/openstack/ossa/+/98648005:00
sean-k-mooneyso i was reviewing some notes on the vmt discussion at the ptg and i have a question, or rather a comment, on a potential logical inconsistency with regards to ai usage10:44
sean-k-mooneyeffectively there was a comment to the effect that no remote hosted llm can be used in creating or reviewing the patch for any embargoed issue10:45
sean-k-mooneyeffectively saying that doing so would potentially disclose the embargoed content or issue to the llm provider10:45
sean-k-mooneybut taking that to its logical conclusion, that would also mean we could never have an embargoed security bug that was found while using an llm10:46
sean-k-mooneyi.e. if i suspect there is something odd and investigate it with an llm, or i am just reviewing code with an llm and it notes a potential security issue10:47
sean-k-mooneythat could never be a private security bug10:47
sean-k-mooneyso i'm concerned that actually adopting that stance would be counterproductive, as it would prevent us from using these tools in what is already a much harder way of developing than our standard workflow10:48
sean-k-mooneyi'm wondering if there is a middle ground where we say: if the issue was found without llm tools, these tools should not be applied?10:49
sean-k-mooneyi.e. if you found the issue with claude or similar through your natural usage of the tool or a directed audit of the code, it's ok to continue using them. but if a human reports an issue to you and you're assessing it, then you should not?10:50
sean-k-mooneywe are all, i think, expecting that we will get a bunch of potentially false positive vmt reports because of these tools, and it feels like we would be shooting ourselves in the foot if we could not use the same tools to triage or reproduce them once reported10:52
sean-k-mooneyfor non-ai reports i think there is sense in keeping it mostly or entirely human driven, but i have a lot of concern with adjusting the vmt policy to say ai cannot be used with ai-discovered issues10:53
sean-k-mooneyit would be good, however, to update the vmt process to require disclosure of whether ai was used in the discovery process, so that we can then make that nuanced determination on how to proceed10:54
sean-k-mooneyJayF: gouthamr thoughts^10:54
JayFsean-k-mooney: It's not a black and white issue. Each time you send information to a remote service like an llm, you run a risk. That makes me want to encourage people to not take that risk once we realize there's a vulnerability. We can't control the actions of people who report issues, but we can be very careful once embargoed.13:02
fungisean-k-mooney: i would go so far as to say that any vulnerability found with a tool, whether a canned security scanner, static analyzer, or llm, should not be embargoed because anyone can run the same tools and find the same vulnerabilities, so if one person has we need to assume others also have and may be keeping it to themselves for their own profit/criminality13:05
sean-k-mooneyfungi: ya so if you point an llm at a part of the code and say "hey this looks weird, what happens if i do x"13:09
sean-k-mooneyit's very very good at trying x and telling you13:10
sean-k-mooneyi definitely understand the risk of disclosure13:10
sean-k-mooneybut there is a real tradeoff in time to fix and the quality of the fix (better or worse) by using or not using llms13:11
fungiit's almost like nobody can remember that only a few months ago they were writing bug fixes without an llm13:24
sean-k-mooneywell months vs years (i have been using ai for upstream since at least 2023 in some capacity)13:26
sean-k-mooneybut also yes they are not required13:27
sean-k-mooneyi'm just pointing out that as the use of llms to find bugs of any kind increases, if we don't use them to help fix them, we risk being overwhelmed with bugs to triage and/or fix, especially as our contributor base contracts13:28
sean-k-mooneyso i'm just playing devil's advocate here and saying we should fight fire with fire when appropriate to do so13:29
fungiyep, i get it. also my sense of time is pretty much shot, as i'm personally exhausted by llm everything everywhere all the time now13:29
sean-k-mooneywell yes and no, i would very much consider myself an early adopter13:30
sean-k-mooneymy use of llms predates the ai policy by a couple of releases, and predates claude code for example13:30
sean-k-mooneyi started experimenting with them before copilot even came out of beta13:31
sean-k-mooneyusing terrible local models13:31
sean-k-mooneythen i used github copilot mainly to write comments, tests, or release notes for a few years13:31
fungii'm just hoping it all calms down/slows down soon, because this pase is unsustainable13:31
fungier, pace13:32
sean-k-mooneyya it's fatiguing13:32
sean-k-mooneyi also think it's funny that "normal users" are now feeling the core reviewer fatigue of doing diff/code review on a large volume of code in different contexts13:33
sean-k-mooneyi.e. a lot of people that originally reported llms made them more productive started reporting they were burning out for the exact reasons core reviewers do13:33
sean-k-mooneywell that assumes they are reading the output...13:34
sean-k-mooneyanyway that's off topic13:36
sean-k-mooneyi would be interested in reviewing what any vmt process updates in this regard end up looking like13:37
fungiyeah, i'm thinking i'll ask the broader open source community on the oss-security ml14:07
JayFsean-k-mooney: I've said more than once that my review-heavy workflow in upstream OpenStack (I've always spent more time reviewing than authoring code) was good preparation for the current llm tools14:46
sean-k-mooneyJayF: yep it very much is14:56
sean-k-mooneyfor anyone not used to it, it's like trying to run a marathon14:56
sean-k-mooneyas your first race14:57
gouthamrdevil's advocate for a bit: most commonly, someone that's discovering/triaging/fixing vulnerabilities is using an enterprise account with these remote LLMs16:45
gouthamri wonder if the presence of a strict/transparent data privacy policy would help.. enterprise agreements should cover something regarding processing embargoed/security-sensitive data already.... so maybe we can ask companies to share that "commitment" publicly before they allow their employees to work on embargoed issues with LLM help16:45
gouthamrobligatory NAL, but this one: https://www.anthropic.com/legal/commercial-terms says: "Anthropic may not train models on Customer Content from Services"16:47
gouthamrhttps://openai.com/enterprise-privacy/ says: "We do not train our models on your data by default"16:47
JayFgouthamr: as I noted in the TC session, in many of these cases it's a more general "don't spam details of this change across a thousand cloud systems" as much as it is a worry of them training on it16:50
JayFgouthamr: for instance, let's say I used a private github issues board to track downstream work (as some gr-oss teams do), would it be OK for me to put [work on OSSA-2026-TBD issue with omghax via method]16:51
gouthamrwishful thinking that there can be something specific for vulnerabilities in open source communities.. but, i can reach out to my company's security team to get their bottomline clarification: "Can I use Claude to fix an embargoed issue upstream; do you have recommendations on what i should do?"16:51
gouthamri think the risk that gtema alluded to in the etherpad extended to "misses" as well... we have a number of instances where embargo was broken because the AI tool pushed code to gerrit or elsewhere happily in violation of the VMT's process.. 16:51
gouthamrJayF: yeah, that would be a big no with our embargo policy.. but the hope is that other humans aren't seeing your work with these machines.. 16:53
gouthamrmy reasoning for the devil's advocacy here is, people may just do things anyway? :( 16:55
JayFFlexibility when requested is good. Flexibility because people can't comply with the basic professional requirements of their job is negligence.16:56
JayFEspecially when there are pressures outside of the OSS community that may be pushing for speed over quality and validation.16:57
gouthamryeah, agreed. i like that we're thinking of basic ground rules.. 16:58
fungithe sudden surge in volume is also leading to more accidents16:58
gouthamr++16:59
fungiunderstandably, i mean, our processes are bursting at the seams17:00
fungirelated, i also posted https://www.openwall.com/lists/oss-security/2026/04/28/15 to find out how other communities are handling this17:01
JayFI pretty much strongly disagree with this take and would be -1 to OpenStack setting such a policy, fungi 17:04
JayFLLM tokens are a proxy for money and time. You're basically saying that with enough money and time a vulnerability can be found. This is always true. This also matches your mental model of usually being against coordinated disclosure in most cases :D 17:05
fungithat's fair, but other projects take a similar stance wrt bugs from fuzzers and similar tools17:05
sean-k-mooneythe one thing i will say is it's much easier to nail an llm to the wall and ask it to review your code than a colleague or contributor at a different company17:44
sean-k-mooneyi will say that redhat has a contractual relationship with a cloud provider for inference, partly because of data privacy and confidentiality as well as data residency reasons17:46
sean-k-mooneywe obviously have strict regulatory and other restrictions on how customer data can be used, etc.17:47
sean-k-mooneyso legal reviewing the data handling and processing stipulations in those inference contracts was very much a part of allowing those to be used in redhat17:48
sean-k-mooneyfungi: for what it's worth, most llm releases disclose their knowledge cutoffs for new training data, and that's frequently 6+ months old17:52
sean-k-mooneyfungi: but i also wondered if llm-found bugs should just be public security bugs. i think no, but it is an approach17:53
JayF> <sean-k-mooney> the one thing i will say is it's much easier to nail an llm to the wall and ask it to review your code than a colleague or contributor at a different company17:59
JayFThis is a scary, scary, scary comment.17:59
gouthamrsean-k-mooney is a scary man :D /jk17:59
JayFThe LLM is the ultimate "avoid talking to other people about your code" machine17:59
JayFAnd for an upstream community, that dialog *is all we've got*17:59
JayFif we stop talking to each other, we might as well just all fork and go home17:59
sean-k-mooneyJayF: it's all fair until we give ai robots, and then we have to be sure not to hurt its feelings18:12
sean-k-mooneycause movies tell us that will go very well for us if we do18:12
JayFI know you're somewhat joking, but I'm serious that all an OSS community has is reputation and collaboration18:13
JayFthe software is an output of those ingredients18:13
JayFand AI threatens both seriously which is why I want us to proceed at an OpenStack-velocity pace :D 18:13
JayF(we are not the fastest moving community; that's a strength in this case!)18:14
sean-k-mooneyoh i know, but for security issues it's not always easy to collaborate, partly because we don't have the same tooling18:14
sean-k-mooneyi know the kernel is equally frustrated with that18:14
gouthamrthis was a good read: https://daverupert.com/2026/04/more-talk-less-grok/18:14
gouthamr(sorta aligns with what JayF's saying and i stole that from the ironic ptg notes)18:14
JayFgouthamr: thanks for showing the source I stole my quote from :D 18:15
JayFnow they know I'm plagiarising :P 18:15
gouthamrroflmao :D sorry.. it read like a great discussion.. you guys had other things there that helped me through the rest of the week's discourse 18:17
JayFBelieve me, that article came across and I was like "I have finally found the AI blogger who sees these things like I do"18:18
JayFLots of useful things are also dangerous. I have a giant 48A@240V cable plugged into my truck. Super useful tool. It also could burn down my house if misused or misdesigned.18:19
JayFLLMs are dangerous and powerful in the same ways, except we don't have codes yet or electricians or really anyone who fully understands the power and weaknesses entirely yet.18:19
JayFmeanwhile AI companies/pundits/VCs are trying to build https://en.wikipedia.org/wiki/Wardenclyffe_Tower :)18:20
fungior tesla's "death ray" ;)20:29
JayFthe death ray is the interviews the CEOs give saying like "we've got the AGI locked up in the vault! It almost escaped! Spooooooooky!"20:32
