• CRYPTO-GRAM, April 15, 2026, Part 8

    From TCOB1 Security Posts@21:1/229 to All on Wed Apr 15 21:54:50 2026
    The Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars.

    ** *** ***** ******* *********** *************
    Sen. Sanders Talks to Claude About AI and Privacy

    [2026.04.10] Claude is actually pretty good on the issues.

    ** *** ***** ******* *********** *************
    AI Chatbots and Trust

    [2026.04.13] All the leading AI chatbots are sycophantic, and that's a problem:

    Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn't tell the difference between sycophantic and objective responses. Both felt equally "neutral" to them.

    One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: "Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship." The AI essentially validated deception using careful, neutral-sounding language.

    Here?s the conclusion from the research study:

    AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users' capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people's self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI's impacts is critical to protecting users' long-term well-being.

    This is bad in a bunch of ways:

    Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

    When thinking about the characteristics of generative AI, both benefits and harms, it's critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it's a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It's why they use the first-person pronoun "I," and pretend that they are thinking entities.

    I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:

    The biggest mistake we made with social media was leaving it as an unregulated space. Even now -- after all the studies and revelations of social media's negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else -- social media in the US remains largely an unregulated "weapon of mass destruction." Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

    We can't afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech's trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

    ** *** ***** ******* *********** *************
    On Anthropic's Mythos Preview and Project Glasswing

    [2026.04.13] The cybersecurity industry is obsessing over Anthropic's new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public-domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.

    There's a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations.

    One: This is very much a PR play by Anthropic -- and it worked. Lots of reporters are breathlessly repeating Anthropic's talking points without engaging with them critically. OpenAI, presumably pissed that Anthropic's new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced that its model is just as scary and won't be released to the general public, either.

    Two: These models do demonstrate an increased sophistication in their cyberattack capabilities. They write effective exploits -- taking the vulnerabilities they find and operationalizing them -- without human involvement. They can find more complex vulnerabilities: chaining together several memory corruption bugs, for example. And they can do more with one-shot prompting, without requiring orchestration and agent configuration infrastructure.
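    To make "one-shot prompting" concrete, here is a minimal sketch using the Anthropic Python SDK. It is an illustration of the idea, not Anthropic's actual tooling: a single API call asks a model to triage a code snippet, with no planner, no tools, and no agent loop around it. The model ID is hypothetical, standing in for whatever capable model is available.

        import anthropic  # pip install anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        SNIPPET = """
        char buf[64];
        strcpy(buf, user_input);  /* unchecked copy of attacker-controlled data */
        """

        # One-shot: a single request, no orchestration or agent scaffolding.
        response = client.messages.create(
            model="claude-mythos-preview",  # hypothetical model ID, not a real one
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Identify any memory-safety vulnerabilities in this C "
                           "snippet and suggest a fix:\n" + SNIPPET,
            }],
        )

        print(response.content[0].text)

    The contrast is with agentic setups, where a controller iterates -- plan, run tools, feed results back, repeat. The point is that the newer models need less of that scaffolding to be effective.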

    Three: Anthropic might have a good PR team, but the problem isn't unique to Mythos Preview. The security company Aisle was able to find the same vulnerabilities that Anthropic found, using older, cheaper, public models. But there is a difference between finding a vulnerability and turning it into an attack. This points to a current advantage for the defender: finding for the purposes of fixing is easier for an AI than finding plus exploiting. That advantage is likely to shrink as ever more powerful models become available to the general public.

    Four: Everyone who is panicking about the ramifications of this is correct about the problem, even if we can't predict the exact timeline. Maybe the sea change just happened, with the new models from Anthropic and OpenAI. Maybe it happened six months ago. Maybe it'll happen in six months. It will happen.
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)