I don’t care if the solution is AI based or not, indeed.
I guess I thought of it like that because AI is quite fit for the task of understanding what might be the purpose of code in a few seconds/minutes, without you having to review it yourself. I don’t know how some non-AI tool could be better for such a task.

Edit: so many people are against the idea. Have you guys used GitHub Copilot? It understands the context of your repo to help you write the next thing… right? Well, what if you apply the same idea to simply reviewing for malicious/unexpected behaviour in third-party repos? Doesn’t seem too weird to me.
EXTREMELY LOUD INCORRECT BUZZER
> AI is quite fit for the task of understanding what might be the purpose of code

Disagree.

> I don’t know how some non-AI tool could be better for such a task.

ClamAV has been filling a somewhat similar use case for a long time, and I don’t think I’ve ever heard anyone call it “AI”.

I guess Bayesian filters, like the ones email providers use to filter spam, could be considered “AI” (though old-school AI, not the kind of stuff that’s such a bubble now), and they might be applicable to your use case.
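A minimal sketch of what a Bayesian-filter approach applied to code might look like, assuming naive Bayes over raw tokens (the training snippets, labels, and class names below are made up for illustration; a real system would need far richer features):

    # Naive-Bayes "suspicious code" filter, in the spirit of Bayesian spam
    # filtering. All training data below is hypothetical.
    import math
    import re
    from collections import Counter

    def tokenize(code):
        # Crude tokenizer: split on non-identifier characters.
        return [t for t in re.split(r"\W+", code) if t]

    class NaiveBayesCodeFilter:
        def __init__(self):
            self.counts = {"benign": Counter(), "suspicious": Counter()}
            self.totals = {"benign": 0, "suspicious": 0}

        def train(self, code, label):
            for tok in tokenize(code):
                self.counts[label][tok] += 1
                self.totals[label] += 1

        def score(self, code):
            # Log-odds that a snippet is suspicious, with add-one smoothing.
            logodds = 0.0
            for tok in tokenize(code):
                p_sus = (self.counts["suspicious"][tok] + 1) / (self.totals["suspicious"] + 2)
                p_ben = (self.counts["benign"][tok] + 1) / (self.totals["benign"] + 2)
                logodds += math.log(p_sus / p_ben)
            return logodds

    f = NaiveBayesCodeFilter()
    f.train("for i in range(10): total += i", "benign")
    f.train("exec(base64.b64decode(payload))", "suspicious")
    print(f.score("exec(base64.b64decode(data))"))  # positive => leans suspicious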
Sure, and parrots are amazing at spotting fallacies like cherry picking…
Don’t listen to the idiots downvoting you. This is absolutely a good task for AI. I suspect current AI isn’t quite clever enough to detect this sort of thing reliably unless it is very blatant malicious code, but a lot of malicious code is fairly blatant if you have the time to actually read an entire codebase in detail, which of course AI can do and humans can’t.
For example, the extra “.” that disabled a test in xz? I think current AI would easily be capable of highlighting it as wrong. It probably wouldn’t be able to figure out that it was malicious rather than a mistake yet, though.
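For the curious: as I understand it, that “.” was slipped into a build-time compile check (the CMake probe for Landlock sandbox support), so the probe always failed and the sandbox was quietly disabled. A simplified Python sketch of that failure mode, not the actual xz build code (assumes a C compiler named cc on PATH):

    # A configure-style feature probe: compile a tiny program, and enable the
    # feature only if compilation succeeds. The stray "." on the first line of
    # the probe is a syntax error, so the check always fails and the feature
    # is silently disabled instead of the build failing loudly.
    import os
    import subprocess
    import tempfile

    PROBE_SOURCE = """\
    .
    #include <stdio.h>
    int main(void) { return 0; }
    """

    def feature_compiles(source):
        # True only if the probe compiles; any compiler error just means "no".
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, "probe.c")
            with open(src, "w") as f:
                f.write(source)
            result = subprocess.run(["cc", "-o", os.path.join(d, "probe"), src],
                                    capture_output=True)
            return result.returncode == 0

    HAVE_SANDBOX = feature_compiles(PROBE_SOURCE)
    print("sandbox enabled:", HAVE_SANDBOX)  # False: the "." broke the probe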
I mean, anything is a good fit for future, science-fiction AI if we imagine hard enough.

What you describe as “blatant malicious code” is probably only things like very specific C&C domains or instruction sets. We already have very efficient string matching tools for those, though, and they don’t burn power at an atrocious rate.
You’ve given us an example so PoC||GTFO. Major code AI tools like Copilot struggle to explain test files with a variety of styles, skips, and comments, so I think you have your work cut out for you.
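To be concrete about “very efficient string matching tools”: I mean ClamAV-style signature scanning against known indicators. A toy sketch, with a purely hypothetical indicator list (real scanners use multi-pattern automata such as Aho-Corasick rather than a regex alternation):

    # Scan files for known-bad byte patterns (C&C domains, instruction
    # sequences). The indicators below are made up for illustration.
    import re
    import sys

    INDICATORS = [
        rb"evil-c2\.example\.com",      # hypothetical C&C domain
        rb"\xde\xad\xbe\xef\x90\x90",   # hypothetical instruction pattern
    ]
    PATTERN = re.compile(b"|".join(INDICATORS))

    def scan(path):
        with open(path, "rb") as f:
            data = f.read()
        hit = PATTERN.search(data)
        return hit.group(0) if hit else None

    for path in sys.argv[1:]:
        match = scan(path)
        if match is not None:
            print(f"{path}: matched indicator {match!r}")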
How is a string matching tool going to find a single “.”? 🙄
A single character, per your definition, is not blatant malicious code. Stop moving the goalposts.
It’s clear you don’t understand the space, and based on your other comments you don’t seem to have any interest in acting in good faith, so good luck.
I’m not moving any goalposts. The addition of the “.” was very blatant. They literally just added a syntax error. It went undetected because humans don’t have the stamina to exhaustively do code review down to that level. Computers (even AI) don’t have that issue.

You are clearly out of your depth here.