Feed aggregator

On Hacking Back

Schneier on Security - Wed, 11/12/2025 - 7:01am

Former DoJ attorney John Carlin writes about hackback, which he defines thus: “A hack back is a type of cyber response that incorporates a counterattack designed to proactively engage with, disable, or collect evidence about an attacker. Although hack backs can take on various forms, they are — by definition — not passive defensive measures.”

His conclusion:

As the law currently stands, specific forms of purely defensive measures are authorized so long as they affect only the victim’s system or data.

At the other end of the spectrum, offensive measures that involve accessing or otherwise causing damage or loss to the hacker’s systems are likely prohibited, absent government oversight or authorization. And even then parties should proceed with caution in light of the heightened risks of misattribution, collateral damage, and retaliation...

Meet the Republicans who killed solar subsidies — after using them

ClimateWire News - Wed, 11/12/2025 - 6:39am
POLITICO’s E&E News examined satellite imagery of more than 100 homes owned by Republican lawmakers to see if they have solar panels. Seven had rooftop arrays.

Lots of studies show warming affected Hurricane Melissa. Is that confusing?

ClimateWire News - Wed, 11/12/2025 - 6:38am
Scientists say "many lines of evidence" convey the dangers of extreme weather. But too much information risks muddling the public's perception about the effects of climate change, some researchers say.

Protesters and UN security clash at climate summit in Brazil

ClimateWire News - Wed, 11/12/2025 - 6:37am
The demonstrators waved yellow flags protesting oil drilling in the Amazon.

‘We’re at peak influence’: Gavin Newsom struts at UN climate summit

ClimateWire News - Wed, 11/12/2025 - 6:36am
If the world wants an American climate leader, the California governor is happy to play the part, even if his country isn’t quite ready to follow.

Camp Mystic asked FEMA to change flood maps years before tragedy

ClimateWire News - Wed, 11/12/2025 - 6:36am
The owners of the central Texas girls camp are being accused in two lawsuits of trying to save money on insurance.

IEA: China’s control of critical minerals threatens energy transition

ClimateWire News - Wed, 11/12/2025 - 6:35am
The International Energy Agency warns that the world will exceed the 1.5-degree warming threshold in all scenarios.

Report warns about EU using climate credits to meet emission goals

ClimateWire News - Wed, 11/12/2025 - 6:33am
The climate will suffer under a proposal to let nations avoid some emissions cuts by instead funding climate projects elsewhere, experts say.

Açaí berry dishes surprise visitors to Brazil climate summit

ClimateWire News - Wed, 11/12/2025 - 6:32am
This traditional preparation has been a tough sell for visitors accustomed to the frozen and sweetened açaí cream sold in other countries.

UN shipping regulator advocates for emissions fee at COP30

ClimateWire News - Wed, 11/12/2025 - 6:32am
The move comes despite the United States and Saudi Arabia blocking new rules last month.

Governments are flying blind on climate costs, study says

ClimateWire News - Wed, 11/12/2025 - 6:31am
The study found that nine in 10 countries don’t know their climate spending, while seven in 10 lack adequate medium- and long-term strategies to deal with climate impacts.

Melissa shows how climate change is outstripping defenses

ClimateWire News - Wed, 11/12/2025 - 6:24am
The hurricane's Caribbean rampage spotlights a contentious issue of how much industrialized nations should pay to help developing countries adapt to climate change.

Teaching large language models how to absorb new knowledge

MIT Latest News - Wed, 11/12/2025 - 12:00am

In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.

Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.

This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.

Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.

The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.

The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.

While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.   

“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.

He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.

Teaching the model to learn

LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.

But once it is deployed, the weights are static and can’t be permanently updated anymore.

However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.

The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.

The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.

In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.

The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.

Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.
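The loop described above — propose several candidate study sheets, try updating on each, and permanently keep the one that most improves downstream question answering — can be sketched as toy Python. Everything here (the dictionary "model," the `generate_self_edits`, `finetune`, and `evaluate` stand-ins) is an illustrative simplification, not the researchers' implementation, which fine-tunes real LLM weights.

```python
# Toy sketch of a SEAL-style outer loop: generate several "self-edits"
# (synthetic restatements of a passage), apply each as a candidate update,
# and commit the one that best improves downstream QA accuracy.
# The "weights" here are just a dict of remembered facts.

def generate_self_edits(passage):
    """Stand-in for the LLM rewriting a passage into candidate study sheets."""
    facts = [s.strip() for s in passage.split(".") if s.strip()]
    mid = max(1, len(facts) // 2)
    # Candidates: the full sheet, the first half, the second half.
    return [facts, facts[:mid], facts[mid:]]

def finetune(weights, self_edit):
    """Stand-in for a gradient update: absorb each fact into the model."""
    updated = dict(weights)
    for fact in self_edit:
        subject, _, rest = fact.partition(" is ")
        if rest:
            updated[subject.lower()] = rest
    return updated

def evaluate(weights, qa_pairs):
    """Downstream task: fraction of questions the updated model answers."""
    correct = sum(1 for q, a in qa_pairs if weights.get(q) == a)
    return correct / len(qa_pairs)

def seal_update(weights, passage, qa_pairs):
    """Try every self-edit, reward the best one, and commit that update."""
    candidates = generate_self_edits(passage)
    scored = [(evaluate(finetune(weights, c), qa_pairs), c) for c in candidates]
    best_score, best_edit = max(scored, key=lambda t: t[0])
    return finetune(weights, best_edit), best_score

weights = {}
passage = "Paris is the capital of France. Osmium is the densest metal"
qa = [("paris", "the capital of France"), ("osmium", "the densest metal")]
weights, score = seal_update(weights, passage, qa)
print(score)  # 1.0: the full study sheet covers both questions
```

In the real system the reward from this comparison also trains the generator itself, so the model gets better at writing study sheets over time, not just at picking among them.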

“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.

Choosing the best method

Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train on.

In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.
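That extra degree of freedom — each candidate self-edit carrying its own optimization settings — might look like the following toy sketch, where a candidate bundles its training signal with a learning rate and epoch count, and the "model" is a single scalar. All names and numbers are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Toy sketch: a self-edit specifies not just synthetic data but also how
# to apply it (learning rate, number of update steps). The model is one
# scalar weight fit toward a target value by gradient descent.

@dataclass
class SelfEdit:
    target: float  # synthetic training signal the model chose
    lr: float      # learning rate the model selected for this edit
    epochs: int    # how many update steps to take

def apply_edit(weight, edit):
    """Gradient descent on (weight - target)^2 using the edit's own config."""
    for _ in range(edit.epochs):
        grad = 2.0 * (weight - edit.target)
        weight -= edit.lr * grad
    return weight

def pick_best(weight, candidates, true_value):
    """Commit the edit whose configuration leaves the model closest to truth."""
    scored = [(abs(apply_edit(weight, c) - true_value), c) for c in candidates]
    _, best = min(scored, key=lambda t: t[0])
    return apply_edit(weight, best)

candidates = [
    SelfEdit(target=3.0, lr=0.1, epochs=5),   # too cautious to converge
    SelfEdit(target=3.0, lr=0.4, epochs=20),  # converges to the target
    SelfEdit(target=2.0, lr=0.4, epochs=20),  # wrong synthetic target
]
new_weight = pick_best(0.0, candidates, true_value=3.0)
print(round(new_weight, 3))  # 3.0: the well-configured edit wins
```

The point of the design is that a self-edit with good data but bad hyperparameters (or vice versa) loses the comparison, so the model is pushed to get both right at once.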

“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.

SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent and on some skill-learning tasks, it boosted the success rate by more than 50 percent.

But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.

The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.

“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.

This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab. 

Artificial light reduces ecosystem carbon sinks

Nature Climate Change - Wed, 11/12/2025 - 12:00am

Nature Climate Change, Published online: 12 November 2025; doi:10.1038/s41558-025-02499-4

As artificial light encroaches upon cities and countryside, natural darkness recedes and circadian rhythms shift in regions worldwide. Now, a study reveals that bright nights are negatively impacting the carbon sinks of ecosystems.

Widespread influence of artificial light at night on ecosystem metabolism

Nature Climate Change - Wed, 11/12/2025 - 12:00am

Nature Climate Change, Published online: 12 November 2025; doi:10.1038/s41558-025-02481-0

The authors combine light intensity data with eddy covariance observations from 86 sites to show that artificial light at night increases ecosystem respiration and alters carbon exchange, with impacts shaped by diel cycles and seasonal dynamics.

Prompt Injection in AI Browsers

Schneier on Security - Tue, 11/11/2025 - 7:08am

This is why AIs are not ready to be personal assistants:

A new attack called ‘CometJacking’ exploits URL parameters to pass to Perplexity’s Comet AI browser hidden instructions that allow access to sensitive data from connected services, like email and calendar.

In a realistic scenario, no credentials or user interaction are required and a threat actor can leverage the attack by simply exposing a maliciously crafted URL to targeted users.

[…]

CometJacking is a prompt-injection attack where the query string processed by the Comet AI browser contains malicious instructions added using the ‘collection’ parameter of the URL...

Retreat or recast? Democrats debate future of climate politics.

ClimateWire News - Tue, 11/11/2025 - 6:21am
Democratic election wins last week reignited arguments on how — or if — candidates should discuss climate change on the campaign trail.

Colorado seeks to extend life of major coal plant

ClimateWire News - Tue, 11/11/2025 - 6:20am
The move comes amid speculation that DOE is preparing to issue emergency orders directing some retiring coal plants to stay open.

Solar maker cuts 1,000 workers in Georgia

ClimateWire News - Tue, 11/11/2025 - 6:20am
The move by Qcells came as U.S. authorities hold imported solar components to determine if they violate a slave labor law.

Boulder tells Supreme Court to stay out of its climate fight with Exxon

ClimateWire News - Tue, 11/11/2025 - 6:18am
Colorado communities say local governments have a right to sue on behalf of residents, citing opioid and asbestos litigation.
