InfoSec News 17FEB2026

General

  • Trying to extort the Dutch police can backfire, especially when they already know who you are.
The incident began when the man contacted police on February 12 about images he had that may be relevant to an ongoing investigation. An officer responded to his inquiry but, instead of sending a link to upload the images, mistakenly shared a download link to confidential police documents.
...
the man downloaded the files despite the obvious error. When the police instructed him to stop downloading and delete the materials, he allegedly refused unless he was given "something in return."
He did, in the end, get something in return – a trip in the back of a police car.
Officers arrested the man Thursday evening, searched his home, and seized data storage devices to recover the documents and prevent them from being shared further. Police say they have also reported the incident as a data breach and are continuing their investigation.
  • [Linux kernel patch] Add support for verifying ML-DSA signatures.
ML-DSA (Module-Lattice-Based Digital Signature Algorithm) is a
recently-standardized post-quantum (quantum-resistant) signature
algorithm. It was known as Dilithium pre-standardization.
The first use case in the kernel will be module signing. But there
are also other users of RSA and ECDSA signatures in the kernel that
might want to upgrade to ML-DSA eventually.
  • We have to stop using unsupported technologies that pose unacceptable security risks; it can no longer be ignored. OpenEoX provides an automated framework for standardized, transparent product end-of-life and lifecycle management. As cyber defenders, we need to adopt new practices to secure our networks and counter the ever-increasing exploitation speed of threat actors. By embracing OpenEoX, we can proactively eliminate vulnerabilities and safeguard the digital ecosystem at scale.

Getting Techy

Geo-Politics

  • [CN] China's answer to Pwn2Own - the Tianfu Cup - is back, now being run by the Ministry of Public Security.
Launched in 2018 after Chinese authorities barred domestic researchers from participating in international exploit competitions, such as Canada’s Pwn2Own, the Tianfu Cup emerged as a domestic alternative for high-end vulnerability research and exploitation.
What has changed with the Tianfu Cup is the shift from a commercially led competition to one organized almost entirely by the MPS, specifically the Sichuan Provincial Public Security Bureau. What remains the most consequential and unresolved question is where vulnerabilities discovered at the Tianfu Cup are likely to end up.
  • [UA] Ukrainian citizens began receiving unexpected text messages this month from the country’s security service, warning that Russia was trying to recruit locals to help restore access to blocked Starlink satellite internet terminals.
...
The warning follows Ukraine’s rollout of a new national verification system for Starlink terminals earlier this month. Under the new rules, only registered and verified devices can operate in Ukrainian-controlled territory, with all others automatically disconnected.
...
Russian troops had reduced the number of kamikaze drone attacks in the southeastern Zaporizhzhia region after the shutdown.
...
Russian military bloggers also reported losing access to Starlink connections, warning that the outages could weaken Moscow’s drone warfare capabilities and hinder coordination between units.

AI

  • The creator of OpenClaw (Peter Steinberger) is joining OpenAI. To ensure OpenClaw continues, he's turning it into a foundation.
When I started exploring AI, my goal was to have fun and inspire people. And here we are, the lobster is taking over the world. My next mission is to build an agent that even my mum can use.
...
The community around OpenClaw is something magical and OpenAI has made strong commitments to enable me to dedicate my time to it and already sponsors the project. To get this into a proper structure I’m working on making it a foundation. It will stay a place for thinkers, hackers and people that want a way to own their data, with the goal of supporting even more models and companies.
  • Attacks against modern generative AI large language models (LLMs) pose a real threat, yet discussions of these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding instructions into an LLM's inputs to trigger malicious activity. That term suggests a simple, singular vulnerability, and the framing obscures a more complex and dangerous reality: attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to give policymakers and security practitioners the vocabulary and framework needed to address the escalating AI threat landscape.
