Feed aggregator
China and Japan join forces on typhoon research
England to keep most hosepipe bans as drought persists
A faster problem-solving tool that guarantees feasibility
Managing a power grid is like trying to solve an enormous puzzle.
Grid operators must ensure the proper amount of power is flowing to the right areas exactly when it is needed, and they must do this in a way that minimizes costs without overloading physical infrastructure. What's more, they must solve this complicated problem repeatedly, as rapidly as possible, to meet constantly changing demand.
To help crack this recurring problem, MIT researchers developed a problem-solving tool that finds the optimal solution much faster than traditional approaches while ensuring the solution doesn’t violate any of the system’s constraints. In a power grid, constraints could be things like generator and line capacity.
This new tool incorporates a feasibility-seeking step into a powerful machine-learning model trained to solve the problem. The feasibility-seeking step uses the model’s prediction as a starting point, iteratively refining the solution until it finds the best achievable answer.
The MIT system can solve complex problems several times faster than traditional solvers while providing strong feasibility guarantees. For some extremely complex problems, it could find better solutions than tried-and-true tools. The technique also outperformed pure machine-learning approaches, which are fast but can’t always find feasible solutions.
In addition to helping schedule power production in an electric grid, this new tool could be applied to many types of complicated problems, such as designing new products, managing investment portfolios, or planning production to meet consumer demand.
“Solving these especially thorny problems well requires us to combine tools from machine learning, optimization, and electrical engineering to develop methods that hit the right tradeoffs in terms of providing value to the domain, while also meeting its requirements. You have to look at the needs of the application and design methods in a way that actually fulfills those needs,” says Priya Donti, the Silverman Family Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).
Donti, senior author of an open-access paper on this new tool, called FSNet, is joined by lead author Hoang Nguyen, an EECS graduate student. The paper will be presented at the Conference on Neural Information Processing Systems.
Combining approaches
Ensuring optimal power flow in an electric grid is an extremely hard problem that is becoming more difficult for operators to solve quickly.
“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment. At the same time, there are many more distributed devices to coordinate,” Donti explains.
Grid operators often rely on traditional solvers, which provide mathematical guarantees that the optimal solution doesn’t violate any problem constraints. But these tools can take hours or even days to arrive at that solution if the problem is especially convoluted.
On the other hand, deep-learning models can solve even very hard problems in a fraction of the time, but the solution might ignore some important constraints. For a power grid operator, this could result in issues like unsafe voltage levels or even grid outages.
“Machine-learning models struggle to satisfy all the constraints due to the many errors that occur during the training process,” Nguyen explains.
For FSNet, the researchers combined the best of both approaches into a two-step problem-solving framework.
Focusing on feasibility
In the first step, a neural network predicts a solution to the optimization problem. Very loosely inspired by neurons in the human brain, neural networks are deep learning models that excel at recognizing patterns in data.
Next, a traditional solver that has been incorporated into FSNet performs a feasibility-seeking step. This optimization algorithm iteratively refines the initial prediction while ensuring the solution does not violate any constraints.
Because the feasibility-seeking step is based on a mathematical model of the problem, it can guarantee the solution is deployable.
“This step is very important. In FSNet, we can have the rigorous guarantees that we need in practice,” Nguyen says.
The researchers designed FSNet to address both main types of constraints (equality and inequality) at the same time. This makes it easier to use than other approaches that may require customizing the neural network or solving for each type of constraint separately.
“Here, you can just plug and play with different optimization solvers,” Donti says.
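To make the two-step pattern concrete, here is a minimal, hypothetical sketch of predict-then-refine on a toy problem. It is not FSNet's actual architecture or solver: the network stand-in, the problem functions, the penalty form, and the step size are all invented for illustration, and the real system unrolls a more sophisticated feasibility-seeking procedure that it trains through end to end.

```python
import numpy as np

# Toy problem (hypothetical, for illustration only):
#   minimize   f(x) = ||x - c||^2
#   subject to h(x) = sum(x) - 1 = 0     (equality constraint)
#              g(x) = -x <= 0            (inequality: x >= 0)

c = np.array([0.8, 0.5, -0.2])

def equality(x):             # h(x), must equal zero
    return np.array([x.sum() - 1.0])

def inequality(x):           # g(x), must be <= 0
    return -x

def violation(x):
    # Squared equality residuals plus squared inequality violations
    # (only the parts above zero count), folded into one penalty.
    h = equality(x)
    g = np.maximum(inequality(x), 0.0)
    return h @ h + g @ g

def feasibility_seek(x0, lr=0.1, steps=500, tol=1e-10):
    """Step 2: iteratively refine the prediction until the
    constraint violation is (numerically) zero."""
    x = x0.copy()
    eps = 1e-6
    for _ in range(steps):
        if violation(x) < tol:
            break
        # Finite-difference gradient of the violation penalty;
        # a real implementation would use automatic differentiation.
        grad = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = eps
            grad[i] = (violation(x + e) - violation(x - e)) / (2 * eps)
        x -= lr * grad
    return x

# Step 1: stand-in for the trained neural network's prediction.
# (Here just a noisy guess near the optimum; in FSNet this would be
# the output of a model trained on many problem instances.)
x_pred = c + 0.05 * np.random.default_rng(0).normal(size=c.shape)

x_feasible = feasibility_seek(x_pred)
print("violation before:", violation(x_pred))
print("violation after: ", violation(x_feasible))
```

Because the penalty sums equality residuals and clipped inequality violations into a single objective, both constraint types are driven down in the same loop, which mirrors the plug-and-play property Donti describes above.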
By thinking differently about how the neural network solves complex optimization problems, the researchers were able to unlock a new technique that works better, she adds.
They compared FSNet to traditional solvers and pure machine-learning approaches on a range of challenging problems, including power grid optimization. Their system cut solving times by orders of magnitude compared to the baseline approaches, while respecting all problem constraints.
FSNet also found better solutions to some of the trickiest problems.
“While this was surprising to us, it does make sense. Our neural network can figure out by itself some additional structure in the data that the original optimization solver was not designed to exploit,” Donti explains.
In the future, the researchers want to make FSNet less memory-intensive, incorporate more efficient optimization algorithms, and scale it up to tackle more realistic problems.
“Finding solutions to challenging optimization problems that are feasible is paramount to finding ones that are close to optimal. Especially for physical systems like power grids, close to optimal means nothing without feasibility. This work provides an important step toward ensuring that deep-learning models can produce predictions that satisfy constraints, with explicit guarantees on constraint enforcement,” says Kyri Baker, an associate professor at the University of Colorado Boulder, who was not involved with this work.
"A persistent challenge for machine learning-based optimization is feasibility. This work elegantly couples end-to-end learning with an unrolled feasibility-seeking procedure that minimizes equality and inequality violations. The results are very promising and I look forward to see where this research will head," adds Ferdinando Fioretto, an assistant professor at the University of Virginia, who was not involved with this work.
Study: Good management of aid projects reduces local violence
Good management of aid projects in developing countries reduces violence in those areas — but poorly managed projects increase the chances of local violence, according to a new study by an MIT economist.
The research, examining World Bank projects in Africa, illuminates a major question surrounding international aid. Observers have long wondered if aid projects, by bringing new resources into developing countries, lead to conflict over those goods as an unintended consequence. Previously, some scholars have identified an increase in violence attached to aid, while others have found a decrease.
The new study shows those prior results are not necessarily wrong, but not entirely right, either. Instead, aid oversight matters. World Bank programs earning the highest evaluation scores for their implementation reduce the likelihood of conflict by up to 12 percent, compared to the worst-managed programs.
“I find that the management quality of these projects has a really strong effect on whether that project leads to conflict or not,” says MIT economist Jacob Moscona, who conducted the research. “Well-managed aid projects can actually reduce conflict, and poorly managed projects increase conflict, relative to no project. So, the way aid programs are organized is very important.”
The findings also suggest aid projects can work well almost anywhere. At times, observers have suggested the political conditions in some countries prevent aid from being effective. But the new study finds otherwise.
“There are ways these programs can have their positive effects without the negative consequences,” Moscona says. “And it’s not the result of what politics looks like on the receiving end; it’s about the organization itself.”
Moscona’s paper detailing the study, “The Management of Aid and Conflict in Africa,” is published in the November issue of the American Economic Journal: Economic Policy. Moscona, the paper’s sole author, is the 3M Career Development Assistant Professor in MIT’s Department of Economics.
Decisions on the ground
To conduct the study, Moscona examined World Bank data covering 1997 through 2014, using information compiled by AidData, a nonprofit group that also studies World Bank programs. Importantly, the World Bank conducts extensive evaluations of its projects and includes the identities of project leaders as part of those reviews.
“There are a lot of decisions on the ground made by managers of aid, and aid organizations themselves, that can have a huge impact on whether or not aid leads to conflict, and how aid resources are used and whether they are misappropriated or captured and get into the wrong hands,” Moscona says.
For instance, diligent daily checks on food distribution programs can, and have, substantially reduced the amount of food that is stolen or “leaks” out of a program. Other projects have devised innovative ways of tagging small devices to ensure those objects are used by program participants, reducing appropriation by others.
Moscona combined the World Bank data with statistics from the Armed Conflict Location and Event Data Project (ACLED), a nonprofit that monitors political violence. That enabled him to evaluate how the quality of aid project implementation — and even the quality of the project leadership — influenced local outcomes.
For instance, by looking at the ratings of World Bank project leaders, Moscona found that shifting from a project leader at the 25th percentile of how frequently their projects are linked with conflict to one at the 75th percentile increases the chances of local conflict by 15 percent.
“The magnitudes are pretty large, in terms of the probability that a conflict starts in the vicinity of a project,” Moscona observes.
Moscona’s research identified several other aspects of the interaction between aid and conflict that hold across the region and time period studied. The establishment of aid programs does not seem to lead to long-term strategic activity by non-government forces, such as land acquisition or the establishment of rebel bases. The effects are also larger in areas that have had recent political violence. And armed conflict is greater when the resources at stake can be expropriated, such as food or medical devices.
“It matters most if you have more divertable resources, like food and medical devices that can be captured, as opposed to infrastructure projects,” Moscona says.
Reconciling the previous results
Moscona also found a clear trend in the data about the timing of violence in relation to aid. Government and other armed groups do not engage in much armed conflict when aid programs are being established; it is the appearance of desired goods themselves that sets off violent activity.
“You don’t see much conflict when the projects are getting off the ground,” Moscona says. “You really see the conflict start when the money is coming in or when the resources start to flow, which is consistent with the idea that the relevant mechanism is about aid resources and their misappropriation, rather than groups trying to delegitimize a project.”
All told, Moscona’s study finds a logical mechanism explaining the varying results other scholars have found with regard to aid and conflict. If aid programs are not equally well-administered, it stands to reason that their outcomes will not be identical, either.
“There wasn’t much work trying to make those two sets of results speak to each other,” says Moscona. “I see it less as overturning existing results than providing a way to reconcile different results and experiences.”
Moscona’s findings may also speak to the value of aid in general — and provide actionable ideas for institutions such as the World Bank. If better management makes such a difference, then the potential effectiveness of aid programs may increase.
“One goal is to change the conversation about aid,” Moscona says. The data, he suggests, shows that the public discourse about aid can be “less defeatist about the potential negative consequences of aid, and the idea that it’s out of the control of the people who administer it.”
Once Again, Chat Control Flails After Strong Public Pressure
The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.
EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it’s been shot down because there’s no public support. The fight is delayed, but not over.
It’s time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don’t violate the human rights of people around the world.
As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it’s an attack on fundamental human rights.
The coming EU presidencies should abandon these attempts and work on finding a solution that protects people’s privacy and security.
Friday Squid Blogging: Giant Squid at the Smithsonian
I can’t believe that I haven’t yet posted this picture of a giant squid at the Smithsonian.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
The Department of Defense Wants Less Proof its Software Works
When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it.
As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”
The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway,” a special part of the U.S. code that, if this version of the NDAA passes, will give the Secretary of Defense powers to streamline the buying process and get new technology, or updates to existing technology, operational “in a period of not more than one year from the time the process is initiated…” It also ensures the new technology “shall not be subjected to” some of the traditional levers of oversight.
All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights.
The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not elicit confidence that the technologically-focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent.
Will AI Strengthen or Undermine Democracy?
Listen to the Audio on NextBigIdeaClub.com
Below, co-authors Bruce Schneier and Nathan E. Sanders share five key insights from their new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.
What’s the big idea?
AI can be used both for and against the public interest within democracies. It is already being used in the governing of nations around the world, and there is no escaping its continued use in the future by leaders, policy makers, and legal enforcers. How we wire AI into democracy today will determine if it becomes a tool of oppression or empowerment...
Documentary explores missed chance for US climate policy
Michigan coal plant to stay open ‘long term’ on Trump’s orders
Judge scolds Oregon lawyer for ‘gobsmacking failure’ in climate lawsuit
In a hurricane season of ‘mixed signals,’ Melissa stands out
UN’s Green Climate Fund delivers record $3B
Conservative groups rebuff Whitehouse climate probe
Swiss village still digging out after deadly spring landslide
Climate change is putting Day of the Dead orange flower at risk
Families of Spain’s flood victims voice sorrow and rage at memorial
New nanoparticles stimulate the immune system to attack ovarian tumors
Cancer immunotherapy, which uses drugs that stimulate the body’s immune cells to attack tumors, is a promising approach to treating many types of cancer. However, it doesn’t work well for some tumors, including ovarian cancer.
To elicit a better response, MIT researchers have designed new nanoparticles that can deliver an immune-stimulating molecule called IL-12 directly to ovarian tumors. When given along with immunotherapy drugs called checkpoint inhibitors, IL-12 helps the immune system launch an attack on cancer cells.
Studying a mouse model of ovarian cancer, the researchers showed that this combination treatment could eliminate metastatic tumors in more than 80 percent of the mice. When the mice were later injected with more cancer cells, to simulate tumor recurrence, their immune cells remembered the tumor proteins and cleared the cancer cells again.
“What’s really exciting is that we’re able to deliver IL-12 directly in the tumor space. And because of the way that this nanomaterial is designed to allow IL-12 to be borne on the surfaces of the cancer cells, we have essentially tricked the cancer into stimulating immune cells to arm themselves against that cancer,” says Paula Hammond, an MIT Institute Professor, MIT’s vice provost for faculty, and a member of the Koch Institute for Integrative Cancer Research.
Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Nature Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital, is the lead author of the paper.
“Hitting the gas”
Most tumors express and secrete proteins that suppress immune cells, creating a microenvironment in which the immune response is weakened. T cells are among the main players that can kill tumor cells, but they get sidelined or blocked by the cancer cells and are unable to attack the tumor. Checkpoint inhibitors are an FDA-approved treatment designed to take those brakes off the immune system by removing the immune-suppressing proteins so that T cells can mount an attack on tumor cells.
For some cancers, including some types of melanoma and lung cancer, removing the brakes is enough to provoke the immune system into attacking cancer cells. However, ovarian tumors have many ways to suppress the immune system, so checkpoint inhibitors alone usually aren’t enough to launch an immune response.
“The problem with ovarian cancer is no one is hitting the gas. So, even if you take off the brakes, nothing happens,” Pires says.
IL-12 offers one way to “hit the gas,” by supercharging T cells and other immune cells. However, the large doses of IL-12 required to get a strong response can produce side effects from generalized inflammation, such as flu-like symptoms (fever, fatigue, GI issues, and headaches), as well as more severe complications such as liver toxicity and cytokine release syndrome, which can even be fatal.
In a 2022 study, Hammond’s lab developed nanoparticles that could deliver IL-12 directly to tumor cells, which allows larger doses to be given while avoiding the side effects seen when the drug is injected. However, these particles tended to release their payload all at once after reaching the tumor, which hindered their ability to generate a strong T cell response.
In the new study, the researchers modified the particles so that IL-12 would be released more gradually, over about a week. They achieved this by using a different chemical linker to attach IL-12 to the particles.
“With our current technology, we optimize that chemistry such that there’s a more controlled release rate, and that allowed us to have better efficacy,” Pires says.
The particles consist of tiny, fatty droplets known as liposomes, with IL-12 molecules tethered to the surface. For this study, the researchers used a linker called maleimide to attach IL-12 to the liposomes. This linker is more stable than the one they used in the previous generation of particles, which was susceptible to being cleaved by proteins in the body, leading to premature release.
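For a rough sense of what burst versus week-long release means numerically, a simple first-order release curve can be sketched. Both the model choice and the rate constants below are illustrative assumptions, not values reported in the study.

```python
import numpy as np

# Illustrative first-order release model: the fraction of IL-12
# released by time t is 1 - exp(-k * t). The rate constants are
# hypothetical, chosen only to contrast fast (burst) release with
# release spread over about a week.

def fraction_released(t_hours, k_per_hour):
    return 1.0 - np.exp(-k_per_hour * t_hours)

k_burst = 0.5     # easily cleaved linker: most payload gone in hours
k_slow = 0.014    # stable maleimide-style linker: ~90% over ~1 week

for label, k in [("burst", k_burst), ("gradual", k_slow)]:
    for t in (6, 24, 72, 168):   # hours (168 h = 1 week)
        print(f"{label:7s} t={t:4d} h  released={fraction_released(t, k):.0%}")
```

In this toy model, tuning the rate constant is the knob that trades release speed for duration, which is the role the more stable linker chemistry plays in the particles described here.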
To make sure that the particles get to the right place, the researchers coat them with a layer of a polymer called poly-L-glutamate (PLE), which helps them directly target ovarian tumor cells. Once they reach the tumors, the particles bind to the cancer cell surfaces, where they gradually release their payload and activate nearby T cells.
Disappearing tumors
In tests in mice, the researchers showed that the IL-12-carrying particles could effectively recruit and stimulate T cells that attack tumors. The cancer models used for these studies are metastatic, so tumors developed not only in the ovaries but throughout the peritoneal cavity, which includes the surface of the intestines, liver, pancreas, and other organs. Tumors could even be seen in the lung tissues.
First, the researchers tested the IL-12 nanoparticles on their own and showed that this treatment eliminated tumors in about 30 percent of the mice. They also found a significant increase in the number of T cells that accumulated in the tumor environment.
Then, the researchers gave the particles to mice along with checkpoint inhibitors. More than 80 percent of the mice that received this dual treatment were cured. This happened even when the researchers used models of ovarian cancer that are highly resistant to immunotherapy or to the chemotherapy drugs usually used for ovarian cancer.
Patients with ovarian cancer are usually treated with surgery followed by chemotherapy. While this may be initially effective, cancer cells that remain after surgery are often able to grow into new tumors. Establishing an immune memory of the tumor proteins could help to prevent that kind of recurrence.
In this study, when the researchers injected tumor cells into the cured mice five months after the initial treatment, the immune system was still able to recognize and kill the cells.
“We don’t see the cancer cells being able to develop again in that same mouse, meaning that we do have an immune memory developed in those animals,” Pires says.
The researchers are now working with MIT’s Deshpande Center for Technological Innovation to spin out a company that they hope could further develop the nanoparticle technology. In a study published earlier this year, Hammond’s lab reported a new manufacturing approach that should enable large-scale production of this type of nanoparticle.
The research was funded by the National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, the Ragon Institute of MGH, MIT, and Harvard, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Fracturing of Antarctic ice shelves depends on future climate warming rate
Nature Climate Change, Published online: 31 October 2025; doi:10.1038/s41558-025-02479-8
Antarctic ice shelves affect the mass loss of the Antarctic ice sheet and are vulnerable to damage from crevasses and rifts. Decades of satellite observations link this damage to past thinning and retreat of ice shelves. Damage is projected to intensify under future high-emission climate scenarios, further weakening ice shelves and accelerating ice loss.
Reorienting climate litigation in a time of backlash
Nature Climate Change, Published online: 31 October 2025; doi:10.1038/s41558-025-02475-y
Restrictions on civil society may drive climate activists to shift from protest to litigation. However, challenges to judicial independence, deregulation and anti-climate litigation mean that activists need to consider the conditions under which litigation leads to strengthened climate ambition and implementation.