InfoSec News 14APR2026

General

  • OpenAI was hit in the Axios supply-chain attacks, potentially compromising its macOS application signing certificates.
The company said that on March 31, 2026, the legitimate workflow downloaded and executed a compromised Axios package (version 1.14.1) that was used in attacks to deploy malware on devices.
That workflow had access to code-signing certificates used to sign OpenAI's macOS apps, including ChatGPT Desktop, Codex, Codex CLI, and Atlas.
...
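A common defense against this class of supply-chain attack is pinning dependencies to exact versions and cryptographic integrity hashes (as npm's package-lock.json does), so a swapped-out package fails verification before it runs. The following is a minimal sketch of that check, assuming an npm-style "sha512-&lt;base64&gt;" pin; the function name is illustrative, not any real tool's API.

```python
import base64
import hashlib

def verify_integrity(artifact_bytes: bytes, pinned_integrity: str) -> bool:
    """Check a downloaded package against an npm-style integrity pin.

    `pinned_integrity` has the form "sha512-<base64 digest>", as found in
    package-lock.json. Returns True only on an exact digest match.
    """
    algo, _, expected_b64 = pinned_integrity.partition("-")
    if algo != "sha512":          # accept only the strongest pinned algorithm
        return False
    digest = hashlib.sha512(artifact_bytes).digest()
    return base64.b64encode(digest).decode("ascii") == expected_b64

# Example: pin computed from known-good bytes, then tamper with the artifact.
good = b"legitimate package tarball"
pin = "sha512-" + base64.b64encode(hashlib.sha512(good).digest()).decode("ascii")
assert verify_integrity(good, pin)
assert not verify_integrity(good + b"backdoor", pin)
```

In practice this is what `npm ci` does automatically when the lockfile is committed; the point is that a version number alone (like Axios 1.14.1 here) is not enough, since a compromised registry copy can carry the same version string.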
  • A vulnerability in wolfSSL, discovered by Nicholas Carlini of Anthropic and tracked as CVE-2026-5194, is a cryptographic validation flaw affecting multiple signature algorithms: improperly weak digests can be accepted during certificate verification.
...
According to Lukasz Olejnik, independent security researcher and consultant, exploiting CVE-2026-5194 could trick applications or devices using a vulnerable wolfSSL version to "accept a forged digital identity as genuine, trusting a malicious server, file, or connection it should have rejected."
An attacker can exploit this weakness by supplying a forged certificate with a smaller digest than cryptographically appropriate, so the system accepts a signature that is easier to falsify or reproduce.
Missing hash/digest size and OID checks allow digests smaller than allowed when verifying ECDSA certificates, or smaller than is appropriate for the relevant key type, to be accepted by signature verification functions. This could lead to reduced security of ECDSA certificate-based authentication if the public CA key used is also known. This affects ECDSA/ECC verification when EdDSA or ML-DSA is also enabled.
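The check the advisory describes as missing can be sketched as follows. This is an illustrative stand-in, not wolfSSL's actual API: the curve table and function name are hypothetical, but the idea is the same, i.e. a verifier must refuse a digest shorter than the curve's security level demands before it ever reaches signature verification.

```python
# Illustrative sketch of the missing digest-size check; the table and
# function name are hypothetical, not wolfSSL's actual API.
EXPECTED_DIGEST_BYTES = {
    "P-256": 32,  # SHA-256
    "P-384": 48,  # SHA-384
    "P-521": 64,  # SHA-512
}

def digest_size_acceptable(curve: str, digest: bytes) -> bool:
    """Reject digests shorter than the curve's security level demands.

    CVE-2026-5194-style flaws arise when a verifier accepts any digest
    length, letting an attacker substitute a much shorter hash whose
    signature is easier to forge or reproduce.
    """
    expected = EXPECTED_DIGEST_BYTES.get(curve)
    if expected is None:
        return False                # unknown curve: fail closed
    return len(digest) >= expected  # too-short digests must be refused

assert digest_size_acceptable("P-256", b"\x00" * 32)
assert not digest_size_acceptable("P-256", b"\x00" * 20)  # SHA-1-sized digest
assert not digest_size_acceptable("P-999", b"\x00" * 64)
```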
  • The W3ll Store was a phishing kit and online marketplace that enabled cybercriminals to steal thousands of credentials and attempt more than $20 million in fraud.
...
The W3LL phishing kit sold for $500 and allowed attackers to create convincing replicas of corporate login portals to harvest credentials. The kit also captured authentication session tokens, enabling attackers to bypass multi-factor authentication and take over accounts.
  • After posing as a trusted Linux Foundation community leader in Slack, an attacker tried to trick developers into clicking a phishing link hosted on Google Sites: https://sites[.]google[.]com/view/workspace-business/join.
The link imitates a legitimate Google Workspace sign-in flow but leads users into a fraudulent authentication process, prompting them to enter their credentials and then install a fake root certificate masquerading as a Google certificate.
  • Sixty-one percent of Australian children aged 12 to 15 told researchers from a prominent UK foundation and an Australian youth research agency that they can still access accounts on major platforms just as they did before the ban was put in place.
...
The survey found that TikTok and YouTube retained 53% of previous youth users and Instagram 52%. In most cases, children can still access social media because the platforms “failed to identify and remove their accounts in the first place,” according to the research report.

Getting Techy

  • Trail of Bits is continuing its push to document security testing in an open handbook. The latest additions are sections on C and C++.
The chapter covers five areas: general bug classes, Linux usermode and kernel, Windows usermode and kernel, and seccomp/BPF sandboxes. It starts with language-level issues in the bug classes section—memory safety, integer errors, type confusion, compiler-introduced bugs—and gets progressively more environment-specific.
The Linux usermode section focuses on libc gotchas. This section is also applicable to most POSIX systems. It ranges from well-known problems with string methods, to somewhat less known caveats around privilege dropping and environment variable handling. The Linux kernel is a complicated beast, and no checklist could cover even a part of its intricacies. However, our new Testing Handbook chapter can give you a starting point to bootstrap manual reviews of drivers and modules.
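One of the privilege-dropping caveats alluded to above is ordering: supplementary groups and the group ID must be dropped before the user ID, because after setuid() the process no longer has the privilege to change its groups. A minimal sketch (in Python for brevity; the C calls are the same) with injectable syscalls so the ordering can be demonstrated without running as root:

```python
import os

def drop_privileges(uid, gid,
                    setgroups=os.setgroups, setgid=os.setgid, setuid=os.setuid):
    """Drop root privileges in the one safe order.

    Supplementary groups and the GID must go *before* the UID: once
    setuid() succeeds, the process can no longer change its groups, so
    reversing the order silently leaves root's groups attached. Real
    code should also re-check os.getuid()/os.getgid() afterwards to
    confirm the drop took effect. The syscalls are injectable here only
    so the ordering can be shown without root.
    """
    setgroups([])  # 1. shed supplementary groups while still privileged
    setgid(gid)    # 2. drop the group ID
    setuid(uid)    # 3. drop the user ID last; irreversible for non-root

# Record the call order with stand-in syscalls instead of real ones.
calls = []
drop_privileges(1000, 1000,
                setgroups=lambda groups: calls.append("setgroups"),
                setgid=lambda g: calls.append("setgid"),
                setuid=lambda u: calls.append("setuid"))
assert calls == ["setgroups", "setgid", "setuid"]
```

The equivalent C bug, i.e. calling setuid() first or ignoring the return values of these calls, is exactly the kind of checklist item the handbook's Linux usermode section targets.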
The Windows sections cover DLL planting, unquoted path vulnerabilities in CreateProcess, and path traversal issues. This last bug class includes concerns like WorstFit Unicode bugs, where characters outside the basic ANSI set can be reinterpreted in ways that bypass path checks entirely. The kernel section addresses driver-specific concerns such as device access controls, denial of service through improper spinlock usage, security issues arising from passing handles from usermode to kernelmode, and various sharp edges in Windows kernel APIs.
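The unquoted-path bug class comes from how Windows resolves an unquoted command line containing spaces: each space is a possible end of the executable name, and CreateProcess tries each prefix in turn until one exists on disk. A hypothetical helper (not a real Windows API) enumerating those candidates makes the attack surface concrete; as a simplification, this sketch stops at the token ending in ".exe" rather than probing the filesystem:

```python
def unquoted_path_candidates(command_line: str) -> list[str]:
    """Enumerate the executables Windows may try for an unquoted command line.

    For an unquoted path with spaces, CreateProcess treats each space as
    a possible end of the executable name and tries each prefix (with
    .exe appended) in turn, stopping at the first that exists on disk.
    An attacker who can write e.g. C:\\Program.exe gets code execution.
    Helper name and the ".exe" stopping rule are illustrative.
    """
    parts = command_line.split(" ")
    candidates = []
    for i in range(1, len(parts) + 1):
        prefix = " ".join(parts[:i])
        ends_exe = prefix.lower().endswith(".exe")
        candidates.append(prefix if ends_exe else prefix + ".exe")
        if ends_exe:
            break  # stand-in for "first candidate that exists on disk"
    return candidates

cands = unquoted_path_candidates(r"C:\Program Files\My App\app.exe --flag")
assert cands[0] == r"C:\Program.exe"  # attacker-writable location wins first
assert cands[1] == r"C:\Program Files\My.exe"
assert cands[-1] == r"C:\Program Files\My App\app.exe"
```

The fix is simply to quote the path (or pass it via lpApplicationName), which collapses the candidate list to the intended executable.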

AI

  • [CN] China is pushing ahead with its aggressive AI strategy.
China’s National Data Administration last Friday published its action plan for AI in education which calls for upskilling of the nation’s citizens to ensure they can put the technology to work.
The plan calls for classes on AI to become part of the curriculum at all levels of the education system, including vocational education.
Beijing also wants teachers trained in using AI, and envisions the technology supporting them in the classroom by helping prepare lessons and material for students.
...
China’s publications of this sort always mention the need for secure implementation, and this one is no different as it calls for development of “security evaluation standards for AI applications in education” and “ensuring that the application of technology conforms to educational principles.”
