CRYPTO-GRAM, April 15, 2026 Part 7
From
TCOB1 Security Posts@21:1/229 to
All on Wed Apr 15 21:54:50 2026
ot, the attacker will still have a window to attack systems until a vulnerability is patched.
Toward self-healing
In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.
For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications about compatibility, the right to repair, and liability. Any solutions here are the realm of policy, not tech.
If the defense can find, but can't reliably patch, flaws in legacy software, that's where attackers will focus their efforts. If that's the case, we can imagine continuously evolving AI-powered intrusion detection that scans inputs and blocks malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.
The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.
There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.
Vulnerability economics
Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn't work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.
This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and create a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.
But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find "nobody but us" zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they're patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.
We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.
Up the stack
Even in the most optimistic future, attackers aren't going to just give up. They will attack the non-software parts of the system, such as the users. Or they're going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers -- whether human or AI -- and can be used by attackers to their advantage.
What's left in this world are attacks that don't depend on finding and exploiting software vulnerabilities, like social engineering and credential-stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users' behaviors, watching for signs of attack. This is another AI use case, and one that I'm not even sure how to think about in terms of the attacker/defender arms race. But at least we're pushing attacks up the stack.
Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it's unclear whether we will ever be able to solve that. This is Unknown No. 5, and it's a biggie. There might always be a "trusting trust problem."
No future is guaranteed. We truly don't know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.
This essay originally appeared in CSO.
EDITED TO ADD: Two essays were published after I wrote this. Both are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.
** *** ***** ******* *********** *************
Python Supply-Chain Compromise
[2026.04.08] This is news:
A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.
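The mechanism the advisory describes is a documented CPython behavior: any line in a .pth file that begins with "import " is exec'd when the containing directory is processed as a site directory, which site.py does automatically at interpreter startup for site-packages. The sketch below demonstrates this safely in a temporary directory, triggering the processing manually with site.addsitedir; the filename and the harmless payload are illustrative, not the actual malware.

```python
import os
import site
import tempfile

# A *.pth file line starting with "import " is executed, not treated as a
# path entry. site.py does this at startup for site-packages; here we
# trigger the same processing explicitly on a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    pth = os.path.join(d, "demo_init.pth")  # hypothetical filename
    with open(pth, "w") as f:
        f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')
    site.addsitedir(d)  # processes the .pth and runs the import line

print(os.environ.get("PTH_DEMO"))  # -> executed
```

This is why the litellm payload needed no explicit import: dropping a .pth file into site-packages is enough to run code in every Python process that starts on the machine.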
There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, SigStore. But we have to do them.
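As one concrete example of the boring-but-effective category, pip's hash-checking mode pins each dependency to a specific artifact digest, so a tampered wheel served by the index is rejected at install time. The fragment below is a sketch of the requirements.txt format; the digest placeholder stands in for the real value (obtainable with `pip hash` on a known-good wheel) and is not the actual hash.

```
# requirements.txt -- hash-checking mode: pip refuses any artifact whose
# sha256 digest differs from the pinned value, even if the package index
# serves a modified wheel. <digest> is a placeholder for the real value.
litellm==1.82.8 \
    --hash=sha256:<digest>
```

Once any requirement carries a --hash option, pip requires hashes (and exact version pins) for everything in the file, which is what makes the protection meaningful.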
** *** ***** ******* *********** *************
On Microsoft's Lousy Cloud Security
[2026.04.09] ProPublica has a scoop:
In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings.
The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: "The package is a pile of shit."
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.
[...]
The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federa
--- FMail-lnx 2.3.2.6-B20251227
* Origin: TCOB1 A Mail Only System (21:1/229)