Authors: Erling W. Eriksen, Magnus M. Nygård, Niklas Erdmann, Heine N. Riise
We investigate three distinct methods of incorporating all-sky imager (ASI) images into deep learning (DL) irradiance nowcasting. The first method relies on a convolutional neural network (CNN) to extract features directly from raw RGB images. The second method uses state-of-the-art algorithms to engineer 2D feature maps informed by domain knowledge, e.g., cloud segmentation, the cloud motion vector, solar position, and cloud base height. These feature maps are then passed to a CNN to extract compound features. The final method relies on aggregating the engineered 2D feature maps into time-series input. Each of the three methods was then used as part of a DL model trained on a high-frequency, 29-day dataset to generate multi-horizon forecasts of global horizontal irradiance up to 15 minutes ahead. The models were then evaluated using root mean squared error and skill score on 7 selected days of data. Aggregated engineered ASI features as model input yielded superior forecasting performance, demonstrating that ASI images can be integrated into DL nowcasting models without complex spatially ordered DL architectures and inputs, and underscoring opportunities for alternative image processing methods as well as for improved spatial DL feature processing.
Authors: Yujia Li, Alexandre Moreira, Miguel Heleno
Mechanisms to coordinate transmission and distribution planning should be regulatory compliant and keep the spheres of DSO and TSO decisions separate, without requiring disclosure of proprietary data or unrealistically computationally expensive T&D co-simulations. The concept of Netload Range Cost Curves (NRCC) has recently been proposed as a simple, non-invasive form of coordinating T&D investments under distribution netload uncertainty. This paper extends the NRCC concept to accommodate the temporal dimension of the T&D planning process. We propose to compute a hierarchy of certified temporal interface products that represent the different levels of flexibility that distribution networks can provide to transmission grids at the planning stage. The first product (P1) maps distribution investment into scenario-robust, per-window service envelopes within which any TSO service call (to modify load within specified bounds) is guaranteed distribution-network-feasible. The second product (P2) adds lexicographic rebound minimization, preserving P1-optimal service capacity while certifying post-service recovery under three governance variants with qualitatively distinct rebound-budget responses. In our numerical results, based on a real distribution feeder, we compare the performance of our proposed time-window-based flexibility products to an atemporal product (P0) that offers a static bound on the aggregate distribution grid netload across all time periods. Our results demonstrate the superiority of our proposed products in properly valuing the benefits of incremental investments in storage to allow for temporal flexibility.
Authors: Hanghang Cui, Arash Khalatbarisoltani, Jie Han, Wenxue Liu, Muhammad Saeed, Xiaosong Hu
Effective co-optimization of the energy management strategy (EMS) and thermal management (TM) is crucial for optimizing fuel efficiency in hybrid electric vehicles (HEVs). Driving conditions significantly influence the performance of both EMS and TM in HEVs. This study presents a novel driving condition-aware integrated thermal and energy management (ITEM) framework. In this context, after analyzing and segmenting driving data into micro-trips, two primary features (average speed and maximum acceleration) are extracted. Using the K-means approach, the micro-trips are clustered into three main groups. Finally, a deep neural network is employed to develop a real-time driving condition recognition model. The ITEM strategy is then developed based on multi-agent deep reinforcement learning (DRL), leveraging the proposed real-time recognition model. The primary objectives are to improve fuel economy and reduce TM power consumption while maintaining a pleasant cabin temperature for passengers. Our simulation results illustrate the effectiveness of the suggested framework and the positive impact of recognizing driving conditions on ITEM, improving fuel economy by 16.14% and reducing TM power consumption by 8.22% compared to the benchmark strategy.
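The clustering step described above can be illustrated with a minimal, dependency-free sketch: Lloyd's K-means on two micro-trip features. The micro-trip values and the deterministic initialization below are hypothetical stand-ins; the paper's pipeline additionally trains a deep neural network on top of the cluster labels for real-time recognition.

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 2-D feature vectors (deterministic init)."""
    centers = list(points[:k])  # simple deterministic initialization
    for _ in range(iters):
        # Assignment step: each micro-trip joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Update step: recompute each center as its cluster mean.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Hypothetical micro-trip features: (average speed [km/h], max acceleration [m/s^2])
micro_trips = [(18, 0.9), (22, 1.1), (45, 1.6), (50, 1.4), (95, 0.7), (102, 0.8)]
centers, clusters = kmeans(micro_trips, k=3)
```

On well-separated data like this, the three clusters recover the low-, medium-, and high-speed driving regimes that the recognition model would then classify in real time.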
Authors: John Slane, Adam Mate
In recent years, the contribution of renewable energy resources to the electrical grid has increased drastically; the most common of these are photovoltaic solar panels and wind turbines. These resources rely on inverters to interface with the grid, which do not inherently exhibit the same fault characteristics as synchronous generators. Consequently, they can strain grid reliability and security, cause an increased number of blackouts, and, in some cases, allow relatively minor faults to turn into cascading failures. Solar and wind energy provide benefits and can support grid stability; however, several challenges and gaps in understanding must be explored and addressed before this can be realized. This paper provides a comprehensive literature review of grid codes, modeling techniques, and tools, as well as current methods for responding to various faults. It also presents an overview of the industry's state as it relates to grid fault response in the presence of inverter-based resources.
Authors: Yiru Ji, Constance Crozier, Matthew Liska
The rapid growth of GPU-heavy data centers has significantly increased electricity demand, creating challenges for grid stability. Our paper investigates the extent to which an energy-aware job scheduling algorithm can provide flexibility in GPU-heavy data centers. Compared with the traditional first-in first-out (FIFO) baseline, we show that more efficient job scheduling not only increases profit, but also provides latent power flexibility during peak price periods. This flexibility is achieved by preferentially executing lower-energy jobs, i.e., jobs with lower GPU utilization and smaller node requirements, when the electricity price is high. We demonstrate that data centers with lower queue lengths and higher variance in job characteristics, such as job GPU utilization and job size, offer the greatest flexibility potential. Finally, we show that data center flexibility is highly price sensitive: a 7% demand reduction is achieved with a small incentive, but unrealistically high prices are required to achieve a 33% reduction.
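The scheduling idea, running FIFO at normal prices but preferring low-utilization, small jobs when prices spike, can be sketched as below. The job fields, price threshold, and queue contents are hypothetical; the paper's scheduler also accounts for profit and queue dynamics.

```python
def schedule_step(queue, free_gpus, price, price_threshold=100.0):
    """Pick the next jobs to launch from the queue for one scheduling step.

    At normal prices, behave like FIFO. When the electricity price is
    high, prefer jobs with low GPU utilization and small node needs,
    deferring the power-hungry ones -- the source of latent flexibility.
    """
    if price > price_threshold:
        # Cheapest-to-run first: sort by (GPU utilization, requested GPUs).
        candidates = sorted(queue, key=lambda j: (j["gpu_util"], j["gpus"]))
    else:
        candidates = list(queue)  # FIFO order
    launched = []
    for job in candidates:
        if job["gpus"] <= free_gpus:
            launched.append(job)
            free_gpus -= job["gpus"]
    return launched

# Hypothetical queue: job id, requested GPUs, average GPU utilization
queue = [
    {"id": "a", "gpus": 8, "gpu_util": 0.95},
    {"id": "b", "gpus": 2, "gpu_util": 0.30},
    {"id": "c", "gpus": 4, "gpu_util": 0.60},
]
print([j["id"] for j in schedule_step(queue, free_gpus=8, price=50.0)])   # FIFO
print([j["id"] for j in schedule_step(queue, free_gpus=8, price=150.0)])  # high price
```

At the low price the FIFO pass fills the machine with the power-hungry job "a"; at the high price the same capacity runs "b" and "c" instead, lowering power draw while keeping GPUs busy.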
Authors: Yubo Zhang, Zhiguo Hao, Songhao Yang, Baohui Zhang
Additional active power control (AAPC) of wind turbines (WTs) is essential to improve the transient frequency stability of low-inertia power systems. Most of the existing research has focused on imitating the frequency response of the synchronous generator (SG), known as virtual inertia control (VIC), but are such control laws optimal for the power system? Inspired by this question, this paper proposes an optimal AAPC of WTs to maximize the frequency nadir following a major power deficit. By decoupling the WT response and the frequency dynamics, the optimal frequency trajectory is solved based on the trajectory model, and its universality is strictly proven. The optimal AAPC of WTs is then constructed in reverse from the average system frequency (ASF) model, with the optimal frequency trajectory as the desired control result. The proposed method can significantly improve the system frequency nadir. Meanwhile, its event insensitivity allows it to be deployed via on-line rolling updates under a hypothetical disturbance, avoiding a heavy post-event computational burden. Finally, simulation results in a two-machine power system and the IEEE 39-bus power system verify the effectiveness of the optimal AAPC of WTs.
Authors: Songhao Yang, Zhiguo Hao, Baohui Zhang, Masahide Hojo
With the development of PMUs in power systems, response-based real-time emergency control has become a promising way to prevent power outages when power systems are subjected to large disturbances. The first step in emergency control is to start up accurately and quickly when needed. To this end, this paper proposes a well-qualified start-up scheme for power system real-time emergency control. Three key technologies are proposed to ensure the effectiveness of the scheme: an instability index, a Critical Machines (CMs) identification algorithm, and a two-layer Single Machine Infinite Bus (SMIB) equivalence framework. The concave-convex-area-based instability index, used to identify transient instability of the system, shows good accuracy and high reliability. The CMs identification algorithm can track changes in the CMs and form the proper SMIB system at each moment. The new two-layer SMIB equivalence framework, compared with conventional ones, can significantly reduce the communication burden and improve computational efficiency. Simulations in two test power systems show that the scheme identifies transient instability quickly and accurately, allowing the emergency control to restore the system to stability. Moreover, the proposed method is robust to measurement errors, which enhances its practicality.
Authors: Omid Mokhtari, Samuel Chevalier, Mads Almassalkhi
This paper presents a novel structure-preserving, Kron-based reduction framework for unbalanced distribution feeders. The method aggregates electrically similar nodes within a mixed-integer programming (MIP) problem to produce reduced networks that optimally reproduce the voltage profiles of the original full network. To overcome computational bottlenecks of MIP formulations, we propose an exhaustive-search formulation to identify optimal aggregation decisions while enforcing voltage margin limits. The proposed exhaustive network reduction algorithm is parallelizable on GPUs, which enables scalable network reduction. The resulting reduced networks approximate the full system's voltage profiles with low errors and are suitable for steady-state analysis and optimal power flow studies. The framework is validated on two real utility distribution feeders with 5,991 and 8,381 nodes. The reduced models achieve up to 90% and 80% network reduction, respectively, while the maximum voltage-magnitude error remains below 0.003 p.u. Furthermore, on a 1000-node version of the network, the GPU-accelerated reduction algorithm runs up to 15x faster than its CPU-based counterpart.
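The Kron step underlying such frameworks reduces to a simple per-node elimination on the bus admittance matrix, Y'[i][j] = Y[i][j] - Y[i][e]*Y[e][j]/Y[e][e] for an eliminated node e. A minimal sketch on a hypothetical three-node network follows; the paper's contribution lies in optimally choosing which nodes to aggregate, not in this elementary step.

```python
def kron_eliminate(Y, e):
    """Eliminate node e from admittance matrix Y (list of lists of complex)
    via the classic Kron reduction formula."""
    keep = [i for i in range(len(Y)) if i != e]
    return [[Y[i][j] - Y[i][e] * Y[e][j] / Y[e][e] for j in keep]
            for i in keep]

# Toy 3-node network; node 2 is a zero-injection node to be eliminated.
# Branch admittances below are hypothetical series values (no shunts).
y01, y02, y12 = 1 - 4j, 2 - 8j, 1.5 - 6j
Y = [
    [y01 + y02, -y01,      -y02],
    [-y01,      y01 + y12, -y12],
    [-y02,      -y12,      y02 + y12],
]
Yr = kron_eliminate(Y, 2)
```

Two sanity checks hold by construction: the reduced matrix stays symmetric, and (with no shunt elements) each row still sums to zero, so the reduced network is again a valid admittance description of the remaining nodes.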
Authors: Kailong Wang, Athina Petropulu
Orthogonal Time Frequency Space (OTFS) modulation has recently garnered attention for its robustness in high-mobility wireless communication environments. In OTFS, the data symbols are mapped to the Doppler-Delay (DD) domain. In this paper, we address low-overhead, scalable pilot-aided estimation of channel state information (CSI) for MIMO OTFS systems. Existing channel estimation techniques either require non-overlapping DD domain pilots with guard regions across multiple antennas, thus sacrificing significant communication rate as the number of transmit antennas increases, or allow pilots to overlap between antennas and rely on high-complexity methods to mitigate pilot pollution. We propose a novel pilot placement approach that embeds pilots within the time-frequency (TF) frame of each OTFS burst, along with a new use of TF and DD guard bins to preserve waveform orthogonality on the TF pilot bins and data integrity in the DD domain, respectively. The proposed pilot placement enables low-complexity coarse estimation of the channel parameters. Moreover, the pilot orthogonality allows the construction of a virtual array (VA), enabling the formulation of a sparse signal recovery (SSR) problem in which the coarse estimates are used to build a low-dimensional dictionary matrix. The SSR solution then yields high-resolution estimates of the channel parameters. Simulation results show that the proposed approach achieves good performance with very low overhead and is robust to pilot pollution. Importantly, the required overhead is independent of the number of transmit antennas, ensuring scalability to large MIMO arrays. The proposed approach accounts for practical rectangular transmit pulse shaping and receiver matched filtering, as well as fractional Doppler effects.
Authors: Vincenzo Marcianò, Hava Chaptoukaev, Virginia Fernandez, M. Jorge Cardoso, Sébastien Ourselin, Michela Antonelli, Maria A. Zuluaga
Medical image segmentation using deep learning (DL) has enabled the development of automated analysis pipelines for large-scale population studies. However, state-of-the-art DL methods are prone to hallucinations, which can result in anatomically implausible segmentations. With manual correction impractical at scale, automated quality control (QC) techniques are needed to address this challenge. While promising, existing QC methods are organ-specific, limiting their generalizability and usability beyond their originally intended task. To overcome this limitation, we propose no-new Quality Control (nnQC), a robust QC framework based on a diffusion-generative paradigm that self-adapts to any input organ dataset. Central to nnQC is a novel Team of Experts (ToE) architecture, where two specialized experts independently encode 3D spatial awareness, represented by the relative spatial position of an axial slice, and anatomical information derived from visual features of the original image. A weighted conditional module dynamically combines the pair of independent embeddings, or opinions, to condition the sampling mechanism within a diffusion process, enabling the generation of a spatially aware pseudo-ground truth for predicting QC scores. Within its framework, nnQC integrates fingerprint adaptation to ensure adaptability across organs, datasets, and imaging modalities. We evaluated nnQC on seven organs using twelve publicly available datasets. Our results demonstrate that nnQC consistently outperforms state-of-the-art methods across all experiments, including cases where segmentation masks are highly degraded or completely missing, confirming its versatility and effectiveness across different organs.
Authors: Xiaoke Yang, Long Gao, Haoyu He, Hanyuan Hang, Qi Liu, Shuai Zhao, Qiantu Tuo, Rui Li
Arc-fault circuit interrupters (AFCIs) are essential for mitigating fire hazards in residential photovoltaic (PV) systems, yet achieving reliable DC arc-fault detection under real-world conditions remains challenging. Spectral interference from inverter switching, hardware heterogeneity, operating-condition drift, and environmental noise collectively compromise conventional AFCI solutions. This paper proposes a lightweight, transferable, and self-adaptive learning-driven framework (LD-framework) for intelligent DC arc-fault detection. At the device level, LD-Spec learns compact spectral representations enabling efficient on-device inference and near-perfect arc discrimination. Across heterogeneous inverter platforms, LD-Align performs cross-hardware representation alignment to ensure robust detection despite hardware-induced distribution shifts. To address long-term evolution, LD-Adapt introduces a cloud-edge collaborative self-adaptive updating mechanism that detects unseen operating regimes and performs controlled model evolution. Extensive experiments involving over 53,000 labeled samples demonstrate near-perfect detection, achieving 0.9999 accuracy and 0.9996 F1-score. Across diverse nuisance-trip-prone conditions, including inverter start-up, grid transitions, load switching, and harmonic disturbances, the method achieves a 0% false-trip rate. Cross-hardware transfer shows reliable adaptation using only 0.5%-1% labeled target data while preserving source performance. Field adaptation experiments demonstrate recovery of detection precision from 21% to 95% under previously unseen conditions. These results indicate that the LD-framework enables a scalable, deployment-oriented AFCI solution maintaining highly reliable detection across heterogeneous devices and long-term operation.
Authors: Dianyu Zhong, Tian Xing, Kailai Sun, Xu Yang, Heye Huang, Irfan Qaisar, Tinggang Jia, Shaobo Wang, Qianchuan Zhao
Heating, ventilation, and air conditioning (HVAC) systems account for a substantial share of building energy consumption. Environmental uncertainty and dynamic occupancy behavior bring challenges in decarbonized HVAC control. Reinforcement learning (RL) can optimize long-horizon comfort-energy trade-offs but suffers from exponential action-space growth and inefficient exploration in multi-zone buildings. Large language models (LLMs) can encode semantic context and operational knowledge, yet when used alone they lack reliable closed-loop numerical optimization and may result in less reliable comfort-energy trade-offs. To address these limitations, we propose a hierarchical control framework in which a fine-tuned LLM, trained on historical building operation data, generates state-dependent feasible action masks that prune the combinatorial joint action space into operationally plausible subsets. A masked value-based RL agent then performs constrained optimization within this reduced space, improving exploration efficiency and training stability. Evaluated in a high-fidelity simulator calibrated with real-world sensor and occupancy data from a 7-zone office building, the proposed method achieves a mean PPD of 7.30%, corresponding to reductions of 39.1% relative to DQN, the best vanilla RL baseline in comfort, and 53.1% relative to the best vanilla LLM baseline, while reducing daily HVAC energy use to 140.90 kWh, lower than all vanilla RL baselines. The results suggest that LLM-guided action masking is a promising pathway toward efficient multi-zone HVAC control.
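The core mechanism, restricting a value-based agent's greedy choice to the LLM-approved subset of actions, can be sketched in a few lines. The Q-values and mask below are hypothetical placeholders for the agent's estimates and the fine-tuned LLM's output.

```python
import math

def masked_argmax(q_values, mask):
    """Greedy action selection restricted to actions allowed by the mask.

    q_values: one Q-estimate per joint HVAC action.
    mask: booleans from the (hypothetical) LLM-generated feasibility mask;
          disallowed actions are excluded from the argmax outright.
    """
    best, best_q = None, -math.inf
    for a, (q, ok) in enumerate(zip(q_values, mask)):
        if ok and q > best_q:
            best, best_q = a, q
    return best

q = [0.9, 0.2, 0.7, 0.4]          # action 0 looks best to the agent...
mask = [False, True, True, True]  # ...but the LLM mask rules it out
print(masked_argmax(q, mask))     # -> 2
```

During training the same masking is typically applied by setting disallowed Q-values to negative infinity before the max, so both exploration and the bootstrapped targets stay inside the operationally plausible subset.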
Authors: Bingfang Li, Songhao Yang, Pu Cheng, Zhiguo Hao
Integrating grid-forming converters (GFMCs) into grid-following converter (GFLC)-dominated power systems enhances the grid strength, but GFMCs' current-limiting characteristic triggers dynamic mode switching between constant voltage control (CVC) and current limit control (CLC). This switching feature poses critical transient stability risks to GFLCs, requiring urgent investigation. This paper first develops a mathematical model for this switched system. Then, it derives mode switching conditions for droop-controlled GFMCs, which are separately GFMC angle-dependent and GFLC angle-dependent. On this basis, the stability boundaries of GFLC within each subsystem are analyzed, and the impact of GFMC mode switching arising from GFLC angle oscillation is investigated. The findings reveal that the switched system's stability boundary coincides with that of the CLC subsystem. To enhance GFLC's transient stability and ensure GFMC converges to the CVC mode, this paper introduces a virtual fixed d-axis control (VFDC) strategy. Compared with existing methods, this method achieves decoupling and self-stabilization using only local state variables from individual converters. The conclusions are validated through simulations and Controller Hardware-in-the-Loop tests.
Authors: Shuyi Gao, Stavros Orfanoudakis, Shengren Hou, Peter Palensky, Pedro P. Vergara
Optimal dispatch of energy storage systems (ESSs) in distribution networks involves jointly improving operating economy and voltage security under time-varying conditions and possible topology changes. To support fast online decision making, we develop a topology-aware Reinforcement Learning architecture based on Twin Delayed Deep Deterministic Policy Gradient (TD3), which integrates graph neural networks (GNNs) as graph feature encoders for ESS dispatch. We conduct a systematic investigation of three GNN variants: graph convolutional networks (GCNs), topology adaptive graph convolutional networks (TAGConv), and graph attention networks (GATs) on the 34-bus and 69-bus systems, and evaluate robustness under multiple topology reconfiguration cases as well as cross-system transfer between networks with different system sizes. Results show that GNN-based controllers consistently reduce the number and magnitude of voltage violations, with clearer benefits on the 69-bus system and under reconfiguration; on the 69-bus system, TD3-GCN and TD3-TAGConv also achieve lower saved cost relative to the NLP benchmark than the NN baseline. We also highlight that transfer gains are case-dependent, and zero-shot transfer between fundamentally different systems results in notable performance degradation and increased voltage magnitude violations. This work is available at: this https URL and this https URL.
Authors: Dong Liu, Sander Timmerman, Yu Xiang, Ensieh Hosseini, Peter Palensky, Pedro P. Vergara
Most existing phase balancing and topology reconfiguration problems are formulated as mixed-integer optimization problems that depend on network topologies~\cite{10098964,11017695,10571996}. However, these topologies are often inaccurate and outdated for distribution system operators~(DSOs) due to missing recordings and to topology maintenance and reconfiguration, e.g., for congestion management~\cite{vanin2024phase}. Thus, the topology of the low-voltage distribution network (LVDN) needs to be checked and corrected when it is outdated. The increasing uncertainty of distributed energy resources (DERs), including household photovoltaics (PV), heat pumps, etc., affects the frequency of topology reconfiguration and challenges the correction of the LVDN topology~\cite{10026490, 10347462, 10475702}. Moreover, the available smart meter (SM) datasets are often limited due to privacy concerns and random communication channel failures, further challenging topology correction~\cite{9696306, costa2022identification, dande2025consumer}. The synthetic European networks and benchmark models presented in~\cite{birchfield2016grid,2020Non} serve as research benchmarks but are insufficient to represent the diversity of European LVDNs for practical use by DSOs (e.g., state estimation). Thus, practical topology identification and correction approaches are required for real-time topology updating in the active management of LVDNs.
Authors: Haoxiang Wan, Linhan Fang, Xingpeng Li
Data centers are facilities housing computing infrastructure for processing and storing digital information. The rapid expansion of artificial intelligence is driving unprecedented growth in data center capacity, with global electricity demand from data centers projected to double by 2026. This growth creates substantial challenges for power transmission networks, as large concentrated loads can cause congestion and threaten grid reliability. Meanwhile, the intermittent nature of solar and wind generation requires flexible resources to maintain grid reliability and minimize curtailment. This paper assesses whether data center spatial flexibility, i.e., the ability to migrate computational workloads geographically, can serve as a grid resource to address these challenges. An optimal power flow model is developed to co-optimize generation dispatch, security reserves, and flexible data center loads. Case studies on a modified IEEE 73-bus system show that inflexible data center placement can lead to severe transmission violations, with line overloads reaching 30.1%. Enabling spatial flexibility mitigates these violations in the studied scenarios and restores system feasibility. This flexibility also reduces solar curtailment by up to 61.0% by strategically reallocating load to solar-rich areas. The results suggest that spatial flexibility offers a viable approach to defer transmission upgrades and enhance renewable utilization.
Authors: Wen-Xuan Long, Shengyu Ye, Marco Moretti, Michele Morelli, Luca Sanguinetti, Rui Chen, Cheng-Xiang Wang
The sixth-generation (6G) wireless systems are expected to adopt extremely large aperture arrays (ELAAs), novel antenna architectures, and operate in extremely high-frequency bands to meet growing data demands. ELAAs significantly increase the number of antennas, enabling finer spatial resolution and improved beamforming. At high frequencies, ELAAs shift communication from the conventional far-field to near-field regime, where spherical wavefronts dominate and the channel response depends on both angle and distance, increasing channel dimensionality. Conventional far-field channel estimation methods, which rely on angular information, struggle in near-field scenarios due to increased pilot overhead and computational complexity. This paper presents a comprehensive survey of recent advances in near-field channel estimation. It first defines the near- and far-field boundary from an electromagnetic perspective and discusses key propagation differences, alongside a brief review of ELAA developments. Then, it introduces mainstream near-field channel models and compares them with far-field models. Major estimation techniques are reviewed under different configurations (single/multi-user, single/multi-carrier), including both direct estimation and RIS-assisted cascaded estimation. These techniques reveal trade-offs among estimation accuracy, complexity, and overhead. This survey aims to provide insights and foundations for efficient and scalable near-field channel estimation in 6G systems, while identifying key challenges and future research directions.
Authors: Jingwei Dong, Mahdieh S. Sadabadi, Per Mattsson, André Teixeira
This paper proposes a distributed diagnosis scheme to detect and estimate actuator and power line faults in DC microgrids (e.g., electric-vehicle charging microgrids) subject to unknown power loads and stochastic noise. To address actuator faults, we develop an optimization-based filter design approach within the differential-algebraic equation (DAE) framework, which achieves fault estimation, decoupling from power line faults, and robustness against noise. In contrast, the estimation of power line faults poses greater challenges due to the inherent coupling between fault currents and unknown power loads, especially under insufficient system excitation, where their effects become difficult to distinguish from measurements. To the best of our knowledge, this is the first study to address this critical yet underexplored issue. Our solution introduces a novel differentiate-before-estimate strategy. A set of diagnosis rules based on the temporal characteristics (i.e., duration of threshold violation) of a constructed residual is developed to distinguish step load changes from line faults. Once a power line fault is detected, a regularized least-squares (LS) method is activated to estimate the fault currents, for which we further derive an upper bound on the estimation error. Finally, comprehensive simulations validate the effectiveness of the proposed scheme in terms of estimation accuracy and robustness against disturbances and noise under different fault scenarios.
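The differentiate-before-estimate idea, classifying an event by how long a residual stays above its threshold, can be illustrated with a toy rule. The threshold, duration parameter, and residual traces below are hypothetical; the paper's actual diagnosis rules are more elaborate.

```python
def classify_residual(residual, threshold, min_fault_duration):
    """Toy diagnosis rule: a step load change pushes the residual above the
    threshold only briefly, while a line fault keeps it there. Measure the
    longest run of samples above the threshold and compare it with a
    duration threshold."""
    run = longest = 0
    for r in residual:
        run = run + 1 if abs(r) > threshold else 0
        longest = max(longest, run)
    return "line fault" if longest >= min_fault_duration else "load step"

# Hypothetical residual traces (arbitrary units, one sample per time step)
load_step = [0.1, 2.5, 1.8, 0.4, 0.2, 0.1, 0.1]   # brief threshold violation
line_fault = [0.1, 2.5, 2.6, 2.4, 2.7, 2.5, 2.6]  # sustained violation
print(classify_residual(load_step, 1.0, 4))
print(classify_residual(line_fault, 1.0, 4))
```

Only after the rule flags a line fault would the regularized least-squares estimator be activated, which is what keeps load changes from polluting the fault-current estimate.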
Authors: Songhao Yang, Bingfang Li, Zhiguo Hao, Yiwen Hu, Huan Xie, Tianqi Zhao, Baohui Zhang
In traditional views, the build-up of accelerating energy during faults can cause the well-known first-swing angle instability in synchronous generators (SGs). Interestingly, this letter presents a new insight: the accumulation of decelerating energy due to the low voltage ride-through (LVRT) and recovery control of grid-following inverter-based resources (GFL-IBRs) might also result in transient angle instability in SGs. The transient energy accumulated during the angle-decreasing swing transforms into accelerating energy in the subsequent swing; hence, such phenomena often manifest as multi-swing instability. Both theoretical analysis and simulation support these findings.
Authors: Bingfang Li, Songhao Yang, Qinglan Wang, Xu Zhang, Huan Xie, Chuan Qin, Zhiguo Hao
Deploying synchronous condensers (SynCons) near grid-following renewable energy sources (GFLRs) is an effective and increasingly adopted strategy for grid support. However, the potential transient instability risks in such configurations remain an open research question. This study investigates the mechanism of dominant synchronization instability source transition upon SynCon integration and proposes a straightforward approach to enhance system stability by leveraging their interactive characteristics. Firstly, a dual-timescale decoupling model is established, partitioning the system into a fast subsystem representing phase-locked loop (PLL) dynamics and a slow subsystem characterizing SynCon rotor dynamics. The study then examines the influence of SynCons on the transient stability of nearby PLLs and their own inherent stability. The study shows that SynCon's voltage-source characteristics and its time-scale separation from PLL dynamics can significantly enhance the PLL's stability boundary and mitigate non-coherent coupling effects among multiple GFLRs. However, the dominant instability source shifts from the fast-time-scale PLL to the slow-time-scale SynCon after SynCon integration. Crucially, this paper demonstrates that the damping effect of PLL control can also be transferred from the fast to the slow time scale, allowing well-tuned PLL damping to suppress SynCon rotor acceleration. Consequently, by utilizing SynCon's inherent support capability and a simple PLL damping loop, the transient stability of the co-located system can be significantly enhanced. These conclusions are validated using a converter controller-based Hardware-in-the-Loop (CHIL) platform.
Authors: Georg Kordowich, Julian Oelhaf, Siming Bayer, Andreas Maier, Matthias Kereit, Johann Jaeger
While conventional power system protection isolates faulty components only after a fault has occurred, fault prediction approaches try to detect faults before they can cause significant damage. Although initial studies have demonstrated successful proofs of concept, development is hindered by scarce field data and ineffective feature selection. To address these limitations, this paper proposes a surrogate task that uses simulation data for feature selection. This task exhibits a strong correlation (r = 0.92) with real-world fault prediction performance. We generate a large dataset containing 20000 simulations with 34 event classes and diverse grid configurations. From 1556 candidate features, we identify 374 optimal features. A case study on three substations demonstrates the effectiveness of the selected features, achieving an F1-score of 0.80 and outperforming baseline approaches that use frequency-domain and wavelet-based features.
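A generic version of surrogate-task feature selection can be sketched as greedy forward selection driven by a surrogate score. The feature names and the scoring function below are hypothetical stand-ins for training and evaluating a classifier on simulation data; the paper's actual selection procedure may differ in detail.

```python
def greedy_select(candidates, score, max_features):
    """Greedy forward selection: repeatedly add the candidate feature that
    most improves the surrogate-task score, stopping when nothing helps.
    `score` stands in for training/evaluating a model on simulation data."""
    selected, best = [], score([])
    while len(selected) < max_features:
        gains = {f: score(selected + [f]) for f in candidates if f not in selected}
        f_best = max(gains, key=gains.get)
        if gains[f_best] <= best:
            break  # no remaining feature improves the surrogate score
        selected.append(f_best)
        best = gains[f_best]
    return selected

# Hypothetical surrogate: three useful features, a small per-feature penalty.
useful = {"rms_voltage": 0.4, "thd_current": 0.3, "wavelet_e3": 0.2}
score = lambda feats: sum(useful.get(f, 0.0) for f in feats) - 0.05 * len(feats)
print(greedy_select(["rms_voltage", "thd_current", "wavelet_e3", "noise_1"],
                    score, max_features=4))
```

The key property motivating the paper is that the surrogate score computed on simulations must correlate strongly with real-world prediction performance (r = 0.92 in their study); without that, the selected subset would not transfer to field data.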
Authors: Ruike Lyu, Anna Li, Jianxiao Wang, Hongxi Luo, Yan Shen, Hongye Guo, Ershun Du, Chongqing Kang, Jesse Jenkins
In many countries, declining demand in energy-intensive industries such as cement, steel, and aluminum is leading to industrial overcapacity. Although industrial overcapacity is traditionally viewed as problematic and resource-wasteful, it could unlock energy-intensive industries' flexibility in electricity use. Here, using China's aluminum smelting industry as a case study, we evaluate the system-level costs and benefits of retaining energy-intensive industries' overcapacity for flexible electricity use in decarbonized energy systems. We find that overcapacity can enable aluminum smelters to adopt a seasonal operation paradigm, ceasing production during winter load peaks that are exacerbated by heating electrification and renewable seasonality. This seasonal operation paradigm could reduce the investment and operational costs of China's decarbonized electricity system by 23-32 billion CNY/year (11-15% of the aluminum smelting industry's product value), sufficient to offset the increased smelter maintenance and product storage costs associated with overcapacity. It may also provide an opportunity for seasonally complementary labor deployment across the aluminum smelting and thermal power generation sectors, offering a potential pathway for mitigating socio-economic disruptions caused by industrial restructuring and energy decarbonization.
Authors: Sungjoo Chung, Ying Zhang
Adversarial training is a defense method that trains machine learning models on intentionally perturbed attack inputs, so they learn to be robust against adversarial examples. This paper develops a robust voltage control framework for distribution networks with high penetration of distributed energy resources (DERs). Conventional voltage control methods are vulnerable to strategic cyber attacks, as they typically consider only random or black-box perturbations. To address this, we formulate white-box adversarial attacks using Projected Gradient Descent (PGD) and train a deep reinforcement learning (DRL) agent adversarially. The resulting policy adapts in real time to high-impact, strategically optimized perturbations. Simulations on DER-rich networks show that the approach maintains voltage stability and operational efficiency under realistic attack scenarios, highlighting the effectiveness of gradient-based adversarial DRL in enhancing robustness and adaptability in modern distribution system control.
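The PGD attack referenced above iteratively ascends the loss gradient and projects the perturbation back into a small ball around the clean input. A minimal NumPy sketch, assuming an L-infinity threat model and a caller-supplied gradient function (both illustrative choices, not details from the paper):

```python
import numpy as np

def pgd_attack(grad_fn, x0, epsilon=0.1, alpha=0.02, steps=20):
    """Projected Gradient Descent: take signed gradient-ascent steps on the
    loss, projecting the perturbation back into an L-infinity ball of
    radius epsilon around the clean input x0."""
    x = x0.copy()
    for _ in range(steps):
        g = grad_fn(x)                                # loss gradient w.r.t. input
        x = x + alpha * np.sign(g)                    # FGSM-style ascent step
        x = x0 + np.clip(x - x0, -epsilon, epsilon)   # project into the ball
    return x

# Toy quadratic loss L(x) = ||x - t||^2, gradient 2(x - t); ascent pushes
# x away from t until the perturbation budget saturates.
target = np.array([1.0, -1.0])
x_adv = pgd_attack(lambda x: 2.0 * (x - target), np.zeros(2))
```

In adversarial training, such perturbed inputs would be fed back into the learning loop so the DRL policy sees worst-case rather than random disturbances.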
Authors: Xinyi Yi, Ioannis Lestas
District heating systems (DHSs) require coordinated economic dispatch and temperature regulation under uncertain operating conditions. Existing DHS operation strategies often rely on disturbance forecasts and nominal models, so their economic and thermal performance may degrade when predictive information or model knowledge is inaccurate. This paper develops a data-driven online control framework for DHS operation by embedding steady-state economic optimality conditions into the temperature dynamics, so that the closed-loop system converges to the economically optimal operating point without relying on disturbance forecasts. Based on this formulation, we develop a Data-Enabled Policy Optimization (DeePO)-based online learning controller and incorporate Adaptive Moment Estimation (ADAM) to improve closed-loop performance. We further establish convergence and performance guarantees for the resulting closed-loop system. Simulations on an industrial-park DHS in Northern China show that the proposed method achieves stable near-optimal operation and strong empirical robustness to both static and time-varying model mismatch under practical disturbance conditions.
Authors: Tao Tan, Rui Xie, Meng Yang, Yue Chen
The rapid deployment of distributed energy resources (DERs) is one of the essential efforts to mitigate global climate change. However, a vast number of small-scale DERs are difficult to manage individually, motivating the introduction of virtual power plants (VPPs). A VPP operator coordinates a group of DERs by setting suitable prices, and aggregates them for interaction with the power grid. In this context, optimal pricing plays a critical role in VPP operation. This paper proposes a robust optimal operation model for VPPs that considers uncertainty in the price elasticity of demand. Specifically, the demand elasticity is found to be influenced by the pricing decision, giving rise to decision-dependent uncertainty (DDU). An improved column-and-constraint generation (C&CG) algorithm, together with tailored transformation and reformulation techniques, is developed to solve the robust model with DDU efficiently. Case studies based on actual electricity consumption data of London households demonstrate the effectiveness of the proposed model and algorithm.
Authors: Andrej Stankovski, Blazhe Gjorgiev, James Ciyu Qin, Giovanni Sansavini
Power system security assessments, e.g. via cascading outage models, often use operational set-points based on optimal power flow (OPF) dispatch. However, driven by cost minimization, OPF provides an ideal, albeit unrealistic, clearing of the generating units that disregards the complex interactions among market participants. In addition, existing market modeling tools often utilize economic dispatch and unit commitment to minimize total system costs, often disregarding the profit-driven behavior of market participants. The security of the system, therefore, may be overestimated. To address this gap, we introduce a social-welfare-based day-ahead market-clearing model. The security implications are analyzed using Cascades, a model for cascading failure analysis. We apply this model to the IEEE-118 bus system with three independent control zones. The results show that market dispatch leads to demand not served (DNS) up to 80% higher than under OPF, highlighting a significant security overestimation. This is especially pronounced in large-scale cascading events with DNS above 100MW. A key driver is the increased dispatch of storage and gas units, which can place the system in critical operating conditions. Operators can use this information to properly estimate the impact of the market on system security and plan efficient expansion strategies.
Authors: Sunki Hong, Jisoo Lee
Selecting the right deep learning model for power grid forecasting is challenging, as performance heavily depends on the data available to the operator. This paper presents a comprehensive benchmark of five modern neural architectures: two state space models (PowerMamba, S-Mamba), two Transformers (iTransformer, PatchTST), and a traditional LSTM. We evaluate these models on hourly electricity demand across six diverse US power grids for forecast windows between 24 and 168 hours. To ensure a fair comparison, we adapt each model with specialized temporal processing and a modular layer that cleanly integrates weather covariates. Our results reveal that there is no single best model for all situations. When forecasting using only historical load, PatchTST and the state space models provide the highest accuracy. However, when explicit weather data is added to the inputs, the rankings reverse: iTransformer improves its accuracy three times more efficiently than PatchTST. By controlling for model size, we confirm that this advantage stems from the architecture's inherent ability to mix information across different variables. Extending our evaluation to solar generation, wind power, and wholesale prices further demonstrates that model rankings depend on the forecast task: PatchTST excels on highly rhythmic signals like solar, while state space models are better suited for the chaotic fluctuations of wind and price. Ultimately, this benchmark provides grid operators with actionable guidelines for selecting the optimal forecasting architecture based on their specific data environments.
Authors: Shaked Regev, Eve Tsybina, Slaven Peles
We expand our novel computational method for unit commitment (UC) to include long-horizon planning. We introduce a fast novel algorithm that commits hydro-generators with provable accuracy. We solve problems with thousands of generators at 5-minute market intervals. We show that our method can solve interconnection-scale UC problems in approximately 1 minute on commodity hardware and that an increased planning horizon leads to sizable operational cost savings (our objective). This scale is infeasible for current state-of-the-art tools. We attain this runtime improvement by introducing a heuristic tailored to UC problems. Our method can be implemented using existing continuous optimization solvers and adapted for different applications. Combined, the two algorithms allow an operator of large systems with hydro units to make horizon-aware economic decisions.
Authors: Kaiyuan Yang, Xupeng Chen, Jiangpeng He
Video-based gait analysis has become a promising approach for assessing motor impairment in children with cerebral palsy (CP). However, existing methods usually rely on either pose sequences or handcrafted gait features alone, making it difficult to simultaneously capture spatiotemporal motion patterns and clinically meaningful biomechanical information. To address this gap, we propose a multimodal fusion framework that integrates skeleton dynamics with contribution-guided clinically meaningful gait features. First, Grad-CAM analysis on a pre-trained ST-GCN backbone identified the most discriminative body keypoints, providing an interpretable basis for subsequent gait feature extraction. We then build a dual-stream architecture, with one stream modeling skeleton dynamics using ST-GCN and the other encoding gait features derived from the identified keypoints. Fusing the two streams through feature cross-attention improved four-level CP motor severity classification accuracy to 70.86%, outperforming the baseline by 5.6 percentage points. Overall, this work suggests that integrating skeleton dynamics with clinically meaningful gait descriptors can improve both prediction performance and biomechanical interpretability for video-based CP severity assessment.
Authors: Anna Stuhlmacher, Panupong Srisuthankul, Johanna L. Mathieu, Peter Seiler
Agrivoltaic systems--photovoltaic (PV) panels installed above agricultural land--have emerged as a promising dual-use solution to address competing land demands for food and energy production. In this paper, we propose a model predictive control (MPC) approach to dual-axis agrivoltaic panel tracking control that dynamically adjusts panel positions in real time to maximize power production and crop yield given solar irradiance and ambient temperature measurements. We apply convex relaxations and shading factor approximations to reformulate the MPC optimization problem as a convex second-order cone program that determines the PV panel position adjustments away from the sun-tracking trajectory. Through case studies, we demonstrate our approach, exploring the Pareto front between i) maximizing power production without considering crop needs and ii) maximizing crop yield without agrivoltaics. We also conduct a case study exploring the impact of forecast error on MPC performance. We find that dynamically adjusting agrivoltaic panel position helps us actively manage the trade-offs between power production and crop yield, and that active panel control enables the agrivoltaic system to achieve land equivalent ratio values of up to 1.897.
Authors: E. D. Gomez Anccas, E. A. MacPherson, J. Tegeler, K. Röbert, M. Fischer, C. A. Hans, D. Schulz
Synchronisation of parallel grid-forming inverters is crucial for stable operation of future power systems. This includes accurate and robust reactive power sharing under realistic operating conditions such as impedance mismatch and communication constraints. In this work, reactive power sharing by virtue of a distributed control law is investigated under line impedance mismatch. Furthermore, robustness and transient behaviour of the proposed approach are experimentally evaluated under communication-induced stressors including a fixed 3% packet loss and communication delays ranging from 50 ms to 100 ms, artificially introduced through a software-defined overlay. The study is conducted in a low-voltage laboratory-scale microgrid comprising two parallel grid-forming inverters, an AC load, and a grid-following battery system acting as a reactive power injector. The results show reactive power sharing convergence up to 90 ms communication delay, with a stability boundary between 90 ms and 100 ms, which decreases with increasing integral gain.
Authors: Juan A. Martinez-Velasco, Pau Casals-Torrens, Ricard Bosch-Tous, Alexandre Serrano-Fontova
The use of open-access software is an option that can be considered by those interested in power system studies. In addition, the combination of two or more of these tools can expand the capabilities and the fields of application of each tool. This paper proposes the implementation of a flexible and powerful simulation environment based on R/RStudio for carrying out power system studies. Several simple case studies are presented aimed at showing how the combination of either EMTP/ATP or OpenDSS with R/RStudio can expand the capabilities of each of these tools for performing either steady-state or transient power system studies. Basically, the proposed environment uses RStudio as a control center from which each simulation tool (e.g., R, ATP, OpenDSS) can be run. Some procedures for generating information that must be exchanged between RStudio and ATP or RStudio and OpenDSS have been implemented. Such exchanges are bidirectional: ATP and OpenDSS produce simulation results that can be read by RStudio (text files in the case of ATP, comma-separated value (CSV) and text files in the case of OpenDSS), while RStudio capabilities are used to generate files that are embedded into the input file to be read by either ATP or OpenDSS. This latter option can be used to change either the configuration or some parameters of the test system under study. Finally, one very interesting option illustrated in this paper is the possibility of using machine learning algorithms to predict the performance of the test system.
Authors: Arbel Yaniv, Kilian Golinski, Christoph Goebel
This study analyzes Graph Neural Networks (GNNs) for distribution system state estimation (DSSE) by employing an interpretable Graph Neural Additive Network (GNAN) and by utilizing an edge-conditioned message-passing mechanism. The architectures are benchmarked against the standard Graph Attention Network (GAT) architecture. Multiple SimBench grids with topology changes and various measurement penetration rates were used to evaluate performance. Empirically, GNAN trails GAT in accuracy but serves as a useful probe for graph learning when accompanied by the proposed edge attention mechanism. Together, they demonstrate that incorporating information from distant nodes could improve learning depending on the grid topology and available data. This study advances the state-of-the-art understanding of learning on graphs for the state estimation task and contributes toward reliable GNN-based DSSE prediction technologies.
Authors: Ann Mary Toms, Xingpeng Li
The global transition towards renewable energy has accelerated the deployment of utility-scale wind farms, increasing the need for accurate performance and economic assessments. Although wind energy offers substantial potential for carbon emission reduction, investment decisions are highly sensitive to predicted annual energy production and economic profitability. Conventional wind farm analyses often estimate turbine power output based solely on incoming wind conditions, neglecting wake interactions between turbines. These wake effects can significantly reduce downstream turbine performance, leading to overestimation of energy yield and financial returns. This study proposes WAKE-NET, a wake-aware optimization framework that incorporates both turbine layout optimization and hub height diversification across turbines of varying capacities. Unlike traditional approaches that assume a uniform hub height or ignore wake dynamics, the proposed methodology accounts for wake-induced power losses in its framework. Results indicate that the benchmark model that neglects wake effects can overestimate annual profits, while the use of multiple hub heights reduces wake overlap and associated power losses. Overall, the findings demonstrate that wake-aware design and hub height diversity improve energy yield accuracy and economic viability, offering valuable guidance for wind farm developers and investors seeking to invest in renewable energy systems.
Authors: Thibaud Cambronne, Samuel Bobick, Wente Zeng, Scott Moura
Demand charge, a utility fee based on an electricity customer's peak power consumption, often constitutes a significant portion of costs for commercial electric vehicle (EV) charging station operators. This paper explores control methods to reduce peak power consumption at workplace EV charging stations in a joint price and power optimization framework. We optimize a menu of price options to incentivize users to select controllable charging service. Using this framework, we propose a model predictive control approach to reduce both demand charge and overall operator costs. Through a Monte Carlo simulation, we find that our algorithm outperforms a state-of-the-art benchmark optimization strategy and can significantly reduce station operator costs.
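The paper's MPC co-optimizes prices and charging power; as a much simpler illustration of the peak-shaving idea behind demand-charge reduction, a water-filling schedule spreads a fixed charging energy into the lowest-base-load time slots so the peak total load stays as flat as possible. All names and numbers here are illustrative, not from the paper:

```python
def peak_shaving_schedule(base_load, energy, p_max):
    """Spread a fixed charging energy across time slots to minimize peak
    total load (base load + charging), a proxy for the demand charge.
    Water-filling: bisect on the fill level L and charge each slot up to
    min(p_max, L - base). Assumes the energy fits under the caps."""
    lo, hi = 0.0, max(base_load) + energy
    for _ in range(60):                       # bisection on the fill level
        mid = (lo + hi) / 2
        deliverable = sum(min(p_max, max(0.0, mid - b)) for b in base_load)
        if deliverable >= energy:
            hi = mid
        else:
            lo = mid
    sched = [min(p_max, max(0.0, hi - b)) for b in base_load]
    total = sum(sched)
    if total > 0:                             # absorb tiny bisection slack
        sched = [p * energy / total for p in sched]
    return sched

# 4 slots of base load, 4 units of EV energy, per-slot cap of 10:
sched = peak_shaving_schedule([5, 3, 1, 4], energy=4, p_max=10)
```

Here the charge lands in the low-load slots (base 1 and 3), leaving the existing peak of 5 untouched; a real MPC would re-solve this kind of problem each step as arrivals and departures are revealed.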
Authors: Alireza Zabihi, Luis Badesa, Araceli Hernandez
Finding clear economic signals for distribution-network operation and expansion is increasingly important as single-phase loads and distributed energy resources proliferate. These devices create phase-to-phase imbalances that manifest as voltage unbalance, a power quality issue that accelerates insulation aging in machines and increases network losses, thereby raising costs for operators and consumers. Traditional grid codes address unbalance via disparate hard limits on various indices, with thresholds that differ across standards; such limits offer no dynamic economic incentive and undermine optimality. This paper proposes instead to treat voltage unbalance as a `soft limit' by adding penalty terms to grid operation costs within a three-phase optimal power flow, reflecting the reduced asset lifetime caused by exposure to voltage unbalance. This unified approach yields dynamic economic signals, unbalance-aware Distribution Locational Marginal Prices (DLMPs), that reflect the cost of power quality deviations. A novel mathematical decomposition of the DLMP is developed, isolating the energy, loss, congestion, and unbalance components. Case studies conducted on two benchmark networks demonstrate the effectiveness and practical value of the proposed method. The results indicate that unbalance penalties reshape nodal prices, produce unexpected phase-level effects, and even allow scenarios where added load reduces unbalance and lowers costs, while providing planners and market designers with actionable insights to balance investment, operation, and power quality in modern distribution systems.
Authors: Saurabh Vaishampayan, Maryam Kamgarpour
Local energy markets empower prosumers to form coalitions for energy trading. However, the optimal partitioning of the distribution grid into such coalitions remains unclear, especially in constrained grids with stochastic production and consumption. This analysis must take into account the interests of both the grid operator and the constituent prosumers. In this work, we present a cooperative game theoretic framework to study distribution grid partitioning into local energy market coalitions under uncertain prosumption and grid constraints. We formulate the optimal stable partitioning problem to balance the interests of the grid operator with those of the prosumers. Under deterministic load and generation, we show that the largest market coalition is the optimal stable partition. For the case of stochastic loads and generation, we provide an algorithm to evaluate the optimal stable partition. Numerical experiments are performed on benchmark and real world distribution grids. Our results help in understanding how uncertainty affects local energy market partitioning decisions in constrained distribution grids.
Authors: Yiwei Dong, Wenqi Cui, Han Xu, Adam Wierman, Steven Low
Power distribution systems are increasingly exposed to large voltage fluctuations driven by intermittent renewable generation and time varying loads (e.g., electric vehicles and storage). To address this challenge, a number of advanced controllers have been proposed for voltage regulation. However, these controllers typically rely on fixed linear approximations of voltage dynamics. As a result, the solutions may become infeasible when applied to the actual voltage behavior governed by nonlinear power flow equations, particularly under heavy power injection from distributed energy resources. This paper proposes a data-driven successive linearization approach for voltage control under nonlinear power flow constraints. By leveraging the fact that the deviation between the nonlinear power flow solution and its linearization is bounded by the distance from the operating point, we perform data-driven linearization around the most recent operating point. Convergence of the proposed method to a neighborhood of KKT points is established by exploiting the convexity of the objective function and structural properties of the nonlinear constraints. Case studies show that the proposed approach achieves fast convergence and adapts quickly to changes in net load.
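The core idea above, re-fitting a linear voltage model from data near the most recent operating point rather than keeping one fixed linearization, can be sketched with an ordinary least-squares fit. Here `f` stands in for the true nonlinear power flow map, and the random perturbation sampling is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def local_linear_model(f, x0, delta=1e-3, n_samples=20, seed=0):
    """Fit v ~ v0 + A (x - x0) by least squares from samples taken in a
    small neighborhood of the current operating point x0. Re-fitting this
    model as x0 moves is the successive-linearization idea: the local fit
    tracks the nonlinear map where it is actually being used."""
    rng = np.random.default_rng(seed)
    dX = delta * rng.standard_normal((n_samples, x0.size))
    v0 = f(x0)
    dV = np.array([f(x0 + dx) - v0 for dx in dX])
    M, *_ = np.linalg.lstsq(dX, dV, rcond=None)   # dX @ M ~ dV, so M ~ J^T
    return v0, M.T                                # M.T approximates the Jacobian

# Toy nonlinear map standing in for power flow:
def f(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[1])])

v0, A = local_linear_model(f, np.array([1.0, 0.5]))
```

Because the linearization error is bounded by the distance from the operating point, a controller using `A` stays accurate as long as each step remains in the neighborhood where the fit was made.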
Authors: Liangshun Wu, Jianbo Du, Junsuo Qu
Efficient computation offloading in multi-UAV edge networks becomes particularly challenging in dense urban areas, where line-of-sight (LoS) links are frequently blocked and user demand varies rapidly. Reconfigurable intelligent surfaces (RISs) can mitigate blockage by creating controllable reflected links, but realizing their potential requires tightly coupled decisions on UAV trajectories, offloading schedules, and RIS phase configurations. This joint optimization is hard to solve in practice because multiple UAVs must coordinate under limited information exchange, and purely model-free multi-agent reinforcement learning (MARL) often learns too slowly in highly dynamic environments. To address these challenges, we propose a decentralized model-based MARL framework. Each UAV optimizes mobility and offloading using observations from several hop neighbors, and submits an RIS phase proposal that is aggregated by a lightweight RIS controller. To boost sample efficiency and stability, agents learn local dynamics models and perform short horizon branched rollouts for proximal policy optimization (PPO) updates. Simulations show near centralized performance with improved throughput and energy efficiency at scale.
Authors: Le Fang, Wangkun Xu, Fei Teng
The large-scale integration of renewable energy sources introduces significant operational uncertainty into power systems. Although Polynomial Chaos Expansion (PCE) provides an efficient tool for uncertainty quantification (UQ) in power system dynamics, its accuracy depends critically on the faithful representation of input uncertainty, an assumption that is often violated in practice due to correlated, non-Gaussian, and otherwise complex data distributions. In contrast to purely data-driven surrogates that often overlook rigorous input distribution modelling, this paper introduces flow-based PCE, a unified framework that couples expressive input modelling with efficient uncertainty propagation. Specifically, normalising flows are employed to learn an invertible transport map from a simple base distribution to the empirical joint distribution of uncertain inputs, and this map is then integrated directly into the PCE construction. In addition, the Map Smoothness Index (MSI) is introduced as a new metric to quantify the quality of the learned map, and smoother transformations are shown to yield more accurate PCE surrogates. The proposed Flow-based PCE framework is validated on benchmark dynamic models, including the IEEE 14-bus system and the Great Britain transmission system, under a range of uncertainty scenarios.
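In such a framework, the PCE stage fits a polynomial surrogate in the base-distribution variable; in flow-based PCE the physical inputs would first be pulled back through the learned flow to the Gaussian base. The sketch below covers only the PCE regression stage, in one Gaussian dimension with the probabilists' Hermite basis, and is illustrative rather than the paper's construction:

```python
import numpy as np

def hermite_pce(samples_x, samples_y, order=3):
    """Least-squares PCE: fit y ~ sum_k c_k He_k(x), where He_k are the
    probabilists' Hermite polynomials (orthogonal w.r.t. the standard
    normal), via the recurrence He_k = x He_{k-1} - (k-1) He_{k-2}."""
    def he(k, x):
        p0, p1 = np.ones_like(x), x
        if k == 0:
            return p0
        for j in range(2, k + 1):
            p0, p1 = p1, x * p1 - (j - 1) * p0
        return p1
    # Design matrix of basis evaluations, then ordinary least squares:
    Phi = np.column_stack([he(k, samples_x) for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(Phi, samples_y, rcond=None)
    return coef

# y = 1*He_0 + 2*He_1 + 3*He_2 exactly, so the fit recovers [1, 2, 3, 0]:
x = np.linspace(-3, 3, 50)
coef = hermite_pce(x, 1 + 2 * x + 3 * (x ** 2 - 1))
```

Once the coefficients are known, output statistics follow cheaply: by orthonormality-style arguments the mean is the He_0 coefficient and the variance is a weighted sum of squares of the higher coefficients, which is what makes PCE attractive for UQ.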
Authors: Jose A. Solano-Castellanos, Hassan Haes Alhelou, Ali T. Al-Awami, Mohannad Alkhraijah, Anuradha M. Annaswamy
This paper addresses frequency regulation under operational constraints in interconnected power systems with high penetration of inverter-based renewable generation. A two-layer control architecture is proposed that combines optimized droop and Virtual Synchronous Machine (VSM) primary control with a Model Predictive Control (MPC) secondary layer operating at realistic control-room update rates. Unlike recently proposed approaches, our framework integrates MPC within existing grid control structures, enabling constraint-aware coordination. A reduced-order frequency response model is systematically derived from a high-fidelity grid model using Hankel singular values, and a reduced-order Kalman-Bucy observer enables state and disturbance estimation using only measurable outputs. Validation using representative data from the Kingdom of Saudi Arabia demonstrates effective frequency regulation under realistic operating conditions.
Authors: Birva Sevak, Shrenik Jadhav, Van-Hai Bui
Cascading failures in power grids pose severe risks to infrastructure reliability, yet real-time prediction of their progression remains an open challenge. Physics-based simulators require minutes to hours per scenario, while existing graph neural network approaches treat cascading failures as static classification tasks, ignoring temporal evolution and physical laws. This paper proposes Physics-Informed Graph Neural Jump ODEs (PI-GN-JODE), combining an edge-conditioned graph neural network encoder, a Neural ODE for continuous power redistribution, a jump process handler for discrete relay trips, and Kirchhoff-based physics regularization. The model simultaneously predicts edge and node failure probabilities, severity classification, and demand not served, while an autoregressive extension enables round-by-round temporal cascade prediction. Evaluated on the IEEE 24-bus and 118-bus systems with 20,000 scenarios each, PI-GN-JODE achieves a Precision--Recall Area Under the Curve of 0.991 for edge failure detection, 0.973 for node failure detection, and a coefficient of determination of 0.951 for demand-not-served regression on the 118-bus system, outperforming a standard graph convolutional network baseline (0.948, 0.925, and 0.912, respectively). Ablation studies reveal that the four components function synergistically, with the physics-informed loss alone contributing +9.2 points to demand-not-served regression. Performance improves when scaling to larger grids, and the architecture achieves the highest balanced accuracy (0.996) on the PowerGraph benchmark using data from a different simulation framework.
Authors: Régulo E. Ávila-Martínez, Luis Rouco, Javier Renedo, Lukas Sigrist, Aurelio Garcia-Cerrada
Grid-forming voltage source converters (GFM-VSCs) play a crucial role in the stability of power systems with large amounts of converter-based generation. Transient stability (angle stability under large disturbances) is a critical limiting factor in stressed power systems. Previous studies have proposed control strategies in GFM-VSCs to improve transient stability. These approaches typically rely on suitable current-limiting algorithms, voltage/reactive-power and active-power supplementary control strategies. This paper investigates and compares the effectiveness of three active-power control strategies in GFM-VSCs to enhance transient stability in power systems with 100 % converter-based generation: (i) a wide-area control strategy (TSP-WACS) using the centre of inertia (COI) frequency, (ii) a local transient damping method (TSP-TDM), and (iii) a novel local control strategy (TSP-L) proposed in this work. All strategies were implemented and assessed using short-circuit simulations on the Kundur two-area test system with 100 % GFM-VSC generators, demonstrating critical clearing time (CCT) improvement. The TSP-WACS strategy achieves the best performance but requires a communication infrastructure, while the TSP-L strategy offers a simple but robust alternative that uses only local measurements.
Authors: Qirui Zheng, Dan Wu, Franz-Erich Wolter, Sijia Geng
The widespread adoption of renewable energy poses a challenge in maintaining a feasible operating point in highly variable scenarios. This paper demonstrates that, within a feasible region of a power system that meets practical stability requirements, the power flow equations define a smooth bijection between nodal voltage phasors (angle and magnitude) and nodal active/reactive power injections. Based on this theoretical foundation, this paper proposes a data-based power flow evaluation method that can infer the associated power flow manifold from a limited number of data points around a single operating point. Using techniques from differential geometry and analytic functions, we represent geodesic curves in the associated power flow manifold as analytic functions at the initial point. Then, a special algebraic structure of the power flow problem is revealed and applied to reduce the computation of all higher-order partial derivatives to that of the first-order ones. Integrating these techniques yields the proposed data-based evaluation method, suggesting that a small number of local measurements around a single operating point is sufficient to infer the entire associated power flow manifold. Numerical cases with arbitrary directional variations are tested, confirming the efficacy of the proposed method.
Authors: Daniyaer Paizulamu, Lin Cheng, Ning Qi, Zhengmao Li, Nikos D. Hatziargyriou
Uncertainties in balancing generation and load in low-carbon industrial microgrids (IMGs) make hybrid energy storage systems (HESS) crucial for their stable and economic operation. Existing model predictive control (MPC) techniques typically enforce periodic state of charge (SOC) constraints to maintain long-term stability. However, these hard constraints compromise dispatch flexibility near the end of the prediction horizon, preventing sufficient energy release during critical peaks and leading to optimization infeasibility. This paper eliminates the periodic SOC constraints of individual storage units and proposes a novel full-timescale hierarchical MPC scheduling framework. Specifically, comprehensive physical and cost models are established for the HESS composed of flywheel, battery, compressed-air, and hydrogen-methanol energy storage. The control problem is decoupled into a hierarchical MPC architecture. Furthermore, a novel adaptive feedback mechanism based on micro trajectory inverse projection (MTIP) is embedded into the scheduling process, accurately mapping the high-frequency dynamic buffering capabilities of lower-tier storages into the upper decision space to generate dynamic boundaries. Experiments using 14 consecutive months of second-level data from a real-world IMG validate the effectiveness of the proposed method, demonstrating its significant superiority over existing approaches. By effectively preventing limit violations and deadlocks in lower-tier storages under extreme fluctuations, it achieves a 97.4% net load smoothing rate and a 62.2% comprehensive cycle efficiency.
Authors: Seyed Amir Mansouri, Kenneth Bruninx
The vision of electrolytic hydrogen as a clean energy vector prompts the emergence of hydrogen-centric companies that must simultaneously engage in electricity, hydrogen, and green certificate markets while operating complex, geographically distributed asset portfolios. This paper proposes a portfolio-level optimization framework tailored for the integrated operational scheduling and market participation of such companies. The model co-optimizes asset scheduling and market decisions across multiple sites, incorporating spatial distribution, technical constraints, and company-level policy requirements. It supports participation in the electricity market, physical and virtual Power Purchase Agreements (PPAs), bundled and unbundled hydrogen markets, and green certificate transactions. The model is applied to three operational scenarios to evaluate the economic and operational impacts of different compliance strategies. Results show that centralized, portfolio-level control unlocks the full flexibility of geographically distributed assets, enabling a 2.42-fold increase in hydrogen production and a 9.4% reduction in daily operational costs, while satisfying all company policy constraints.
Authors: Lila Perkins, Baosen Zhang
In electricity markets, customers are increasingly constrained by their budgets. A budget constraint for a user is an upper bound on the price multiplied by the quantity. However, since prices are determined by the market equilibrium, the budget constrained welfare maximization problem is difficult to define rigorously and to work with. In this letter, we show that a natural dual-ascent algorithm converges to a unique competitive equilibrium under budget constraints. Further, this budget-constrained equilibrium is exactly the solution of a convex welfare maximization problem in which each user's utility is replaced by a modified utility that splices the original utility with a logarithmic function where the budget binds. We also provide an explicit piecewise construction of this modified utility and demonstrate the results on examples with quadratic and square root utility functions.
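The dual-ascent scheme described in the letter adjusts the price against excess demand, with each user best-responding subject to their budget. A toy one-good sketch with quadratic utilities u_i(q) = a_i q - q^2/2 (the utility form, step size, and numbers are illustrative assumptions, not the letter's model):

```python
def budget_equilibrium(a, B, supply, p0=1.0, step=0.05, iters=5000):
    """Dual ascent on the price: given p, user i demands the maximizer of
    a_i q - q^2 / 2 subject to the budget p q <= B_i, i.e.
    q_i = min(max(a_i - p, 0), B_i / p); the price then moves with
    excess demand until the market clears."""
    p = p0
    for _ in range(iters):
        demand = sum(min(max(ai - p, 0.0), Bi / p) for ai, Bi in zip(a, B))
        p = max(1e-6, p + step * (demand - supply))   # keep the price positive
    return p

# Two identical users; user 2's budget of 1 binds at equilibrium.
# Clearing (4 - p) + 1/p = 3 gives p^2 - p - 1 = 0, i.e. p = (1 + sqrt 5)/2.
p = budget_equilibrium([4.0, 4.0], [100.0, 1.0], supply=3.0)
```

At the limit price the budget-constrained user's demand B_i / p is exactly what the spliced logarithmic utility in the letter's modified welfare problem would produce, which is why the dual ascent and the convex reformulation agree.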
Authors: Annisaa Fitri Nurfidausi, Eleonora Mancini, Paolo Torroni
Depression is a widespread mental health disorder, yet its automatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representations and modelling strategies across EEG, together with speech and text. We evaluate handcrafted features versus pre-trained embeddings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with attention to the role of EEG. Consistent subject-independent splits are applied to ensure robust, reproducible benchmarking. Our results show that (i) the combination of EEG, speech and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully designed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depression detection.
Authors: Danilo Saccani, Haoming Shen, Luca Furieri, Giancarlo Ferrari-Trecate
We study a control architecture for nonlinear constrained systems that integrates a performance-boosting (PB) controller with a scheduled Predictive Safety Filter (PSF). The PSF acts as a pre-stabilizing base controller that enforces state and input constraints. The PB controller, parameterized as a causal operator, influences the PSF in two ways: it proposes a performance input to be filtered, and it provides a scheduling signal to adjust the filter's Lyapunov-decrease rate. We prove two main results: (i) Stability by design: any controller adhering to this parametrization maintains closed-loop stability of the pre-stabilized system and inherits PSF safety. (ii) Trajectory-set expansion: the architecture strictly expands the set of safe, stable trajectories achievable by controllers combined with conventional PSFs, which rely on a pre-defined Lyapunov decrease rate to ensure stability. This scheduling allows the PB controller to safely execute complex behaviors, such as transient detours, that are provably unattainable by standard PSF formulations. We demonstrate this expanded capability on a constrained inverted pendulum task with a moving obstacle.
Authors: Lucas Souza e Silva, Luis Rodrigues
This paper formulates an optimal control framework for computing cruise airspeeds in predecessor-follower platoons of all-electric aircraft that balance operational cost and airspace complexity. To quantify controller workload and coordination effort, a novel pairwise dynamic workload (PDW) function is developed. Within this framework, the optimal airspeed solution is derived for all-electric aircraft under longitudinal wind disturbances. Moreover, an analytical suboptimal solution for heterogeneous platoons with nonlinear aircraft dynamics is determined, for which a general sufficient condition for string stability is formally established. The methodology is validated through case studies of all-electric aircraft operating in air corridors that are suitable for low-altitude advanced/urban air mobility (AAM/UAM) applications. Results show that the suboptimal solution closely approximates the optimal, while ensuring safe separations, maintaining string stability, and reducing operational cost and airspace complexity. These findings support the development of sustainable and more autonomous air traffic procedures that will enable the implementation of emerging air transportation technologies, such as AAM/UAM, and their integration into the air traffic system environment.
Authors: Qiping Lai, Yi Shen, Chen Shen
In high-penetration renewable power systems with complex and highly variable operating scenarios, grid-connected inverters (GCIs) may transition between different control modes to adapt to diverse grid conditions. Among these, the switching between grid-following (GFL) and grid-forming (GFM) control modes is particularly critical. Nevertheless, safe and robust GFL-GFM switching control strategies for GCIs remain largely unexplored. To overcome this challenge, this paper establishes a full-order small-signal state-space model for the GFL-GFM switched system, precisely reflecting all internal circuit and control dynamics. Subsequently, the small-signal security region (SSSR) of the switched system is defined and characterized, followed by an in-depth investigation into the multi-parameter impacts on the SSSRs and internal stability margin distributions (ISMDs). Furthermore, a novel comprehensive stability index (CSI) is proposed by integrating the stability margin, parameter sensitivity, and boundary distance. Based on this CSI, a multi-objective adaptive GFL-GFM switching control strategy is designed to guarantee the dynamic security and robustness of the system. Finally, the proposed SSSR analysis method for the GFL-GFM switched system and the designed CSI-based switching control mechanism are validated through electromagnetic transient (EMT) simulations.
Authors: Ondrej Zeleny, Radek Zavorka, Ales Prokes, Tomas Fryza, Jaroslaw Wojtun, Jan M. Kelner, Cezary Ziolkowski, Aniruddha Chandra
Power Delay Profile (PDP) plays a crucial role in wireless communications, providing information on multipath propagation and signal strength variations over time. Accurate detection of peaks within the PDP is essential to identify dominant signal paths, which are critical for tasks such as channel estimation, localization, and interference management. Traditional approaches to PDP analysis often struggle with noise, low resolution, and the inherent complexity of wireless environments. In this paper, we evaluate traditional and modern deep learning neural networks for reconstruction-based anomaly detection of multipath components within the PDP. To further refine detection and robustness, a framework is proposed that combines autoencoders with Density-Based Spatial Clustering of Applications with Noise (DBSCAN). To compare the performance of individual models, a relaxed F1 score strategy is defined. The experimental results show that the proposed framework with a transformer-based autoencoder achieves superior performance in both reconstruction and anomaly detection.
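A numpy-only sketch of the detection stage described above (the autoencoder is replaced here by a given reconstruction, and the simple 1-D grouping of flagged taps stands in for DBSCAN; thresholds and parameter names are our own assumptions):

```python
import numpy as np

def detect_multipath_taps(pdp, recon, thresh_sigma=3.0, min_gap=2):
    """Flag taps whose reconstruction error is anomalous, then group
    nearby flagged taps into clusters (a 1-D stand-in for DBSCAN) and
    report the strongest tap of each cluster as a multipath peak."""
    err = np.abs(pdp - recon)                       # reconstruction error
    thresh = err.mean() + thresh_sigma * err.std()  # anomaly threshold
    idx = np.flatnonzero(err > thresh)
    clusters, cur = [], []
    for i in idx:
        if cur and i - cur[-1] > min_gap:           # gap ends the cluster
            clusters.append(cur)
            cur = []
        cur.append(i)
    if cur:
        clusters.append(cur)
    return [int(np.argmax(err[c[0]:c[-1] + 1]) + c[0]) for c in clusters]
```

In a real pipeline `recon` would come from the trained autoencoder; here any smooth model of the noise floor plays that role.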
Authors: Rajeev Shukla, Atharva Verma, Aniruddha Chandra, Ondrej Zeleny, Radek Zavorka, Jiri Blumenstein, Ales Prokes, Jaroslaw Wojtun, Jan M. Kelner, Cezary Ziolkowski, Domenico Ciuonzo
Long short-term memory (LSTM) is a deep learning model that can capture long-term dependencies of wireless channel models and is highly adaptable to short-term changes in a wireless environment. This paper proposes a simple LSTM model to predict the channel transfer function (CTF) for a given transmitter-receiver location inside a bus for the 60 GHz millimetre wave band. The average error of the derived power delay profile (PDP) taps, obtained from the predicted CTFs, was less than 10% compared to the ground truth.
Authors: Friedemann Laue, Sebastian Lotter, Nikita Shani, Robert Schober
This paper studies the codebook-based configuration of a reconfigurable intelligent surface (RIS) that extends the coverage of a base station (BS) while utilizing energy harvesting to facilitate self-sustainable operation. For a given coverage area, we design a RIS codebook and propose a mathematical framework for analyzing the efficiency of three common energy harvesting schemes: power splitting (PS), element splitting (ES), and time splitting (TS). Thereby, we use a tile-based architecture at the RIS to exploit the advantages of both radio-frequency (RF) combining and direct-current (DC) combining. Moreover, we account for deterministic and random transmit signals for beam training and data transmission, respectively, and show their impact on the RF-DC conversion efficiencies at the rectifiers. Our main objective is to minimize the average transmit power at the BS by jointly optimizing the splitting ratio for the incident signal at the RIS and the power allocated to each RIS codeword. While the optimal power allocation is derived analytically, we show that the optimal splitting ratio can be determined by performing a grid search over a single optimization variable. Our performance evaluation reveals that the efficiency of the optimized splitting schemes depends on the adopted power consumption model and the number of tiles at the RIS. In particular, our results show that, depending on the system parameters, a different splitting scheme will achieve the lowest transmit power at the BS.
Authors: Ipek Kuvvetli, Christofer Sundström, Sogol Kharrazi, Erik Frisk
This paper presents a comparative optimization framework for smart charging of electrified vehicle fleets. Using heuristic sequential dynamic programming (SeqDP), the framework minimizes electricity costs while adhering to constraints related to the power grid, charging infrastructure, vehicle availability, and simple considerations of battery aging. Based on real-world operational data, the model incorporates discrete energy states, time-varying tariffs, and state-of-charge (SoC) targets to deliver a scalable and cost-effective solution. The classical DP approach suffers from exponential computational complexity as the problem size increases. This becomes particularly problematic when conducting monthly-scale analyses aimed at minimizing peak power demand across all vehicles. The extended time horizon, coupled with multi-state decision-making, renders exact optimization impractical at larger scales. To address this, a heuristic method is employed to enable systematic aggregation and tractable computation for the Non-Linear Programming (NLP) problem. Rather than seeking a globally optimal solution, this study focuses on a time-efficient smart charging strategy that aims to minimize energy cost while flattening the overall power profile. In this context, a sequential heuristic DP approach is proposed. Its performance is evaluated against a full-fleet solver using Gurobi, a widely used commercial solver in both academia and industry. The proposed algorithm achieves a reduction of the overall cost and peak power by more than 90% compared to uncontrolled schedules. Its relative cost remains within 9% of the optimal values obtained from the full-fleet solver, and its relative peak-power deviation stays below 15% for larger fleets.
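The per-vehicle DP at the core of such a scheme can be sketched as a toy single-vehicle version (our own discretization; grid limits, availability windows, and aging terms from the paper are omitted):

```python
def charge_schedule(prices, soc0, soc_target, max_step, soc_max):
    """Toy DP over discrete SoC levels: at each time step, charge
    0..max_step energy units at the current tariff; minimize the total
    cost of reaching soc_target by the end of the horizon."""
    INF = float("inf")
    # cost[s] = cheapest way to hold SoC s after the steps processed so far
    cost = [INF] * (soc_max + 1)
    cost[soc0] = 0.0
    for p in prices:
        new = [INF] * (soc_max + 1)
        for s, c in enumerate(cost):
            if c == INF:
                continue
            for d in range(min(max_step, soc_max - s) + 1):
                new[s + d] = min(new[s + d], c + p * d)
        cost = new
    return min(c for s, c in enumerate(cost) if s >= soc_target)
```

With tariffs `[5, 1, 3, 1]`, an empty battery, a target of 4 units and at most 2 units per step, the DP charges only in the two cheap slots. The full-fleet problem layers coupling constraints (shared peak power) on top of many such per-vehicle subproblems, which is what the sequential heuristic exploits.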
Authors: Xingyu Feng, Chang Sun, Yuzhu Wang, Zhangbing Zhou, Chengwen Luo, Zhuangzhuang Chen, Xiaomin Ouyang, Huanqi Yang
Battery life remains a critical challenge for mobile devices, yet existing power management mechanisms rely on static rules or coarse-grained heuristics that ignore user activities and personal preferences. We present PowerLens, a system that tames the reasoning power of Large Language Models (LLMs) for safe and personalized mobile power management on Android devices. The key idea is that LLMs' commonsense reasoning can bridge the semantic gap between user activities and system parameters, enabling zero-shot, context-aware policy generation that adapts to individual preferences through implicit feedback. PowerLens employs a multi-agent architecture that recognizes user context from UI semantics and generates holistic power policies across 18 device parameters. A PDL-based constraint framework verifies every action before execution, while a two-tier memory system learns individualized preferences from implicit user overrides through confidence-based distillation, requiring no explicit configuration and converging within 3--5 days. Extensive experiments on a rooted Android device show that PowerLens achieves 81.7% action accuracy and 38.8% energy saving over stock Android, outperforming rule-based and LLM-based baselines, with high user satisfaction, fast preference convergence, and strong safety guarantees, with the system itself consuming only 0.5% of daily battery capacity.
Authors: Emmanuel O. Badmus, Amritanshu Pandey
This paper introduces PowerDAG, an agentic AI system for automating complex distribution-grid analysis. We address the reliability challenges of state-of-the-art agentic systems in automating complex engineering workflows by introducing two innovative active mechanisms: adaptive retrieval, which uses a similarity-decay cutoff algorithm to dynamically select the most relevant annotated exemplars as context, and just-in-time (JIT) supervision, which actively intercepts and corrects tool-usage violations during execution. On a benchmark of unseen distribution grid analysis queries, PowerDAG achieves a 100% success rate with GPT-5.2 and 94.4--96.7% with smaller open-source models, outperforming base ReAct (41-88%), LangChain (30-90%), and CrewAI (9-41%) baselines by margins of 6-50 percentage points.
Authors: Steven Li, Luis Rodrigues
Electrified propulsion is expected to play an important role in the sustainable development of Advanced Air Mobility (AAM). However, the limited energy density of batteries motivates the need to minimize energy consumption during flight. This paper studies the minimum total energy problem for an all-electric aircraft in steady cruise flight. The problem is formulated as an optimal control problem in which the cruise airspeed and final cruise time are optimization variables. The battery supply voltage is modeled as an affine function of the battery charge. Pontryagin's Minimum Principle is used to derive the necessary and sufficient conditions for optimality, from which closed-form expressions for the optimal cruise airspeed and optimal final cruise time are obtained. Additional analytical conditions are derived that determine when all-electric operation is feasible, one of which is that sufficient electric charge must be available. Numerical simulations based on the BETA Technologies CX300 all-electric aircraft and a representative AAM scenario illustrate how the aircraft weight, cruising altitude, electrical system efficiency, and initial battery charge influence the optimal airspeed and the feasibility of all-electric cruise.
Authors: Muhammad Hamza Ali, Amritanshu Pandey
The growing use of inverter-based resources in modern power systems has made grid-following inverters a central topic in power-system modeling, control, and simulation. Despite their widespread deployment, introductory material that explains grid-following inverter operation from first principles and connects control design to time-domain simulation remains limited. To address this need, this tutorial presents a circuit-theoretic introduction to the modeling and simulation of a grid-following inverter connected to an electrical power grid. We describe the inverter synchronization with the grid (PLL), power control, and current control structure and show how these elements can be represented within an electromagnetic transient (EMT) simulation framework using companion model-based formulations similar to those used in circuit simulators such as SPICE and Cadence. In this tutorial, we use the grid-following inverter as the primary example to illustrate how its governing equations, control loops, and network interface can be formulated and simulated from first principles. By the end of the document, readers should gain a clear introductory understanding of how to model and simulate a grid-following inverter in an EMT platform.
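The companion-model idea the tutorial builds on can be shown on the simplest possible case, a series RL circuit (backward Euler here for brevity; SPICE-class EMT tools typically offer trapezoidal integration as well):

```python
def simulate_rl_backward_euler(V, R, L, h, steps):
    """EMT-style simulation of a series RL circuit using the SPICE-like
    backward-Euler companion model of the inductor: at each step the
    inductor becomes a conductance G = h/L in parallel with a history
    current source I_hist = i_n, so only a resistive network is solved."""
    G = h / L
    i = 0.0                        # inductor current, initially at rest
    for _ in range(steps):
        # Nodal equation at the R-L node: (V - v)/R = G*v + I_hist
        v = (V / R - i) / (1.0 / R + G)
        i = i + G * v              # companion-model current update
    return i
```

For V = 10 V, R = 2 Ohm, L = 0.1 H, the current settles to V/R = 5 A after a few time constants, matching the analytic RL step response; the same companion-model pattern scales to full inverter networks with their control loops.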
Authors: Kaan T. Gun, Xiaozhe Wang, Danial Jafarigiv
Electric vehicles (EVs) in Vehicle-to-Grid (V2G) systems act as distributed energy resources that support grid stability. Centralized coordination such as the extended State Space Model (eSSM) enhances scalability and estimation efficiency but may introduce new cyber-attack surfaces. This paper presents a stealthy False Data Injection Attack (FDIA) targeting eSSM-based V2G coordination. Unlike prior studies that assume attackers can disrupt physical charging or discharging processes, we consider an adversary who compromises only a subset of EVs, limiting their influence to the manipulation of reported State of Charge (SoC) and power measurements. By doing so, the attacker can deceive the operator's perception of fleet flexibility while remaining consistent with model-based expectations, thus evading anomaly detection. Numerical simulations show that the proposed stealthy FDIA can deteriorate grid frequency stability even without direct access to control infrastructure. These findings highlight the need for enhanced detection and mitigation mechanisms tailored to aggregated V2G frameworks.
Authors: Young-ho Cho, Min-Seung Ko, Hao Zhu
Achieving a sustainable electricity infrastructure requires the explicit integration of carbon emissions into power system modeling and optimization. However, existing open-source test cases for power system research lack generator-level carbon profiling, preventing the benchmark of carbon-aware operational strategies. To address this gap, this work introduces PGLib-CO2, an open-source extension to the PGLib-OPF test case library. The proposed PGLib-CO2 enriches standard grid test cases with CO2 and CO2-equivalent emission intensity factors to achieve realistic, generator-level carbon profiling with an expanded list of fuel types. Using the standardized data, PGLib-CO2 allows us to enhance the algorithms for computing key carbon emission metrics. We first utilize the differentiable programming paradigm for computing locational marginal carbon emissions (LMCE) by treating the OPF-based grid dispatch as a differentiable layer. This method provides rigorous marginal sensitivities for general convex cost functions, eliminating the need for numerical perturbation with small incremental changes. Moreover, to accelerate real-time LMCE computation, we develop a multi-parametric programming (MPP) based approach that shifts the optimization burden to an offline phase of identifying the OPF critical regions. Since each critical region is characterized by a pre-computed affine dispatch function, the online phase reduces to identifying the region followed by efficiently evaluating the region-specific LMCE values. Numerical evaluations on IEEE test systems demonstrate that the differentiable LMCE computation attains the precise sensitivity information, and the MPP-based approach retrieves the LMCE signals faster than the direct optimization approach. By bridging high-fidelity data with advanced parametric computation, PGLib-CO2 provides a reproducible and computationally efficient foundation for future research in sustainable power system operations.
Authors: Meysam Masoudi, Milad Ganjalizadeh, Tahar Zanouda, Pal Frenger
Energy consumption is a significant concern for mobile network operators, and enabling further network energy improvements is an important target in the development of the emerging 6G standard. In this paper we show that, despite the existence of many energy-saving features in 5G new radio (NR) networks, activating them in isolation yields only suboptimal savings and often compromises other network key performance indicators (KPIs) such as coverage or latency. We first introduce a compact taxonomy that distinguishes hardware capabilities from higher-layer features. Features fall into two classes: (i) signaling and scheduling mechanisms that create idle windows, and (ii) features that utilize those windows to save energy. We then present a feature orchestrator as a logical node to coordinate between features to maximize the gain. Using a 3GPP-aligned simulator with product-realistic parameters, we show that coordinating lean NR, scheduling, and advanced sleep modes significantly reduces gNodeB (gNB) energy consumption with negligible throughput loss, compared to the uncoordinated scenario. We conclude by outlining open issues in observability, system dynamics, coordination, and intelligent automation for energy performance management.
Authors: G. Svistunov (1), A. Akhtarshenas (1), D. López-Pérez (1), M. Giordani (2), G. Geraci (3), H. Yanikomeroglu (4) ((1) Universitat Politècnica de València, (2) University of Padova, (3) Universitat Pompeu Fabra, (4) Carleton University)
High-altitude platform stations (HAPS) are emerging as key enablers in the evolution of 6G wireless networks, bridging terrestrial and non-terrestrial infrastructures. Operating in the stratosphere, HAPS can provide wide-area coverage and low-latency, energy-efficient broadband communications with flexible deployment options for diverse applications. This survey delivers a comprehensive overview of HAPS use cases, technologies, and integration strategies within the 6G ecosystem. The roles of HAPS in extending connectivity to underserved regions, supporting dynamic backhauling, enabling massive IoT, and delivering reliable low-latency communications for autonomous and immersive services are discussed. The paper reviews state-of-the-art architectures for terrestrial and non-terrestrial network integration and highlights recent field trials. Furthermore, key enabling technologies such as channel modeling, AI-driven resource allocation, interference control, mobility management, and energy-efficient communications are examined. The paper also outlines open research challenges. By addressing existing gaps in the literature, this survey positions HAPS as a foundational component of globally integrated, resilient, and sustainable 6G networks.
Authors: Harish K. Dureppagari, Harikumar Krishnamurthy, Chiranjib Saha, Xiaofeng Wang, Alberto Rico-Alvariño, R. Michael Buehrer, Harpreet S. Dhillon
The integration of non-terrestrial networks (NTN) into 5G new radio (NR) enables a new class of positioning capabilities based on cellular signals transmitted by Low-Earth Orbit (LEO) satellites. In this paper, we investigate joint delay-and-carrier-phase positioning for LEO-based NR-NTN systems and provide a convergence-centric comparison with Global Navigation Satellite Systems (GNSS). We show that the rapid orbital motion of LEO satellites induces strong temporal and geometric diversity across observation epochs, thereby improving the conditioning of multi-epoch carrier-phase models and enabling significantly faster integer-ambiguity convergence. To enable robust carrier-phase tracking under intermittent positioning reference signal (PRS) transmissions, we propose a dual-waveform design that combines wideband PRS for delay estimation with a continuous narrowband carrier for phase tracking. Using a realistic simulation framework incorporating LEO orbit dynamics, we demonstrate that LEO-based joint delay-and-carrier-phase positioning achieves cm-level accuracy with convergence times on the order of a few seconds, whereas GNSS remains limited to meter-level accuracy over comparable short observation windows. These results establish LEO-based cellular positioning as a strong complement and potential alternative to GNSS for high-accuracy positioning, navigation, and timing (PNT) services in future wireless networks.
Authors: Faicel Khennoufa, Khelil Abdellatif, Metin Ozturk, Halim Yanikomeroglu, Safwan Alfattani
Uncrewed aerial vehicles (UAVs) are expected to enhance connectivity, extend network coverage, and support advanced communication services in sixth-generation (6G) cellular networks, particularly in public and civil domains. Although multi-UAV systems enhance connectivity for IoT networks more than single-UAV systems, energy-efficient communication systems and the integration of energy harvesting (EH) are crucial for their widespread adoption and effectiveness. In this regard, this paper proposes a hierarchical ad hoc UAV network with non-linear EH and non-orthogonal multiple access (NOMA) to enhance both energy and cost efficiency. The proposed system consists of two UAV layers: a cluster head UAV (CHU), which acts as the source, and cluster member UAVs (CMUs), which serve as relays and are capable of harvesting energy from a terrestrial power beacon. For the considered IoT network architecture, the outage probability expressions of ground Internet of Things (IoT) devices, each CMU, and the overall outage probability of the proposed system are derived over Nakagami-m fading channels with practical constraints such as hardware impairments and non-linear EH. We compare the proposed system against a non-EH system, and our findings indicate that the proposed system outperforms the benchmark in terms of outage probability.
Authors: Irched Chafaa, Giacomo Bacci, Luca Sanguinetti
Optimal AP clustering and power allocation are critical in user-centric cell-free massive MIMO systems. Existing deep learning models lack the flexibility to handle dynamic network configurations. Furthermore, many approaches overlook pilot contamination and suffer from high computational complexity. In this paper, we propose a lightweight transformer model that overcomes these limitations by jointly predicting AP clusters and power levels solely from the spatial coordinates of user devices and APs. Our model is agnostic to the number of users, handles both clustering and power allocation without channel estimation overhead, and eliminates pilot contamination by assigning users to APs within a pilot reuse constraint. We also incorporate a customized linear attention mechanism to capture user-AP interactions efficiently and enable linear scalability with respect to the number of users. Numerical results confirm the model's effectiveness in maximizing the minimum spectral efficiency and providing near-optimal performance while ensuring adaptability and scalability in dynamic scenarios.
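The paper's customized linear attention is not spelled out in the abstract, but the generic kernel trick behind linear attention looks like this (the `elu(x)+1` feature map is our assumption; the point is that the key-value summary is aggregated once, giving O(N) cost in the number of users):

```python
import numpy as np

def elu1(x):
    """Positive feature map phi(x) = elu(x) + 1."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Kernelized attention in O(N): softmax is replaced by the positive
    feature map phi, so phi(K)^T V can be pre-aggregated once instead of
    forming the full N x N attention matrix."""
    q, k = elu1(Q), elu1(K)
    kv = k.T @ V                     # (d, d_v) summary, independent of N
    z = q @ k.sum(axis=0)            # per-query normalizer
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), rng.normal(size=(6, 3))
out = linear_attention(Q, K, V)
# mathematically identical to the quadratic form with row normalization:
attn = elu1(Q) @ elu1(K).T
direct = (attn / attn.sum(axis=1, keepdims=True)) @ V
```

The two computations agree exactly; the linear form simply reorders the matrix products, which is what makes the model scale to large user populations.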
Authors: Josip Kir Hromatko, Šandor Ileš, Branimir Škugor, Joško Deur
Electric vehicles with multiple motors provide flexibility in meeting the driver torque demand, which calls for minimizing the battery energy consumption through torque allocation. In this paper, we present an approach to this problem based on approximating electric motor losses using higher-order polynomials with specific properties. To ensure a well-behaved optimization landscape, monotonicity and positivity constraints are imposed on the polynomial models using sum of squares programming. This methodology provides robustness against noisy or sparse data, while retaining the computational efficiency of a polynomial function approximation. The torque allocation problem based on such polynomials is formulated as a constrained nonlinear optimization problem and solved efficiently using readily available solvers. In the nominal case, the first-order necessary conditions for optimality can also be used to obtain a global solution. The performance of the proposed method is evaluated on several certification driving cycles against a grid search-based benchmark. Results show a modest influence on electric energy consumption, while enabling real-time optimization and integration with other vehicle control systems.
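The paper fits higher-order, SOS-constrained polynomials; the mechanics of the first-order optimality condition can be seen in the simplest instance, a two-motor split with convex quadratic loss models (our own toy coefficients):

```python
def allocate_torque(a1, b1, a2, b2, T):
    """Closed-form split for two motors with convex quadratic losses
    P_i(t) = a_i*t^2 + b_i*t subject to t1 + t2 = T. The first-order
    condition equates the marginal losses: 2*a1*t1 + b1 = 2*a2*t2 + b2
    (interior optimum; torque bounds are ignored in this sketch)."""
    t1 = (2.0 * a2 * T + b2 - b1) / (2.0 * (a1 + a2))
    return t1, T - t1
```

For identical motors the split is even; for unequal loss curvatures the less lossy motor carries more torque, with equal marginal losses at the optimum. The higher-order polynomial case replaces this closed form with a small constrained nonlinear program.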
Authors: José Pulido, Francesc Wilhelmi, Sergio Fortes, Alfonso Fernández-Durán, Lorenzo Galati Giordano, Raquel Barco
Synthetic data generation is an appealing tool for augmenting and enriching datasets, playing a crucial role in advancing artificial intelligence (AI) and machine learning (ML). Not only does synthetic data help build robust AI/ML datasets cost-effectively, but it also offers privacy-friendly solutions and bypasses the complexities of storing large data volumes. This paper proposes a novel method to generate synthetic data, based on first-order auto-regressive noise statistics, for large-scale Wi-Fi deployments. The approach operates with minimal real data requirements while producing statistically rich traffic patterns that effectively mimic real Access Point (AP) behavior. Experimental results show that ML models trained on synthetic data achieve Mean Absolute Error (MAE) values within 10 to 15 percent of those obtained using real data when trained on the same APs, while requiring significantly less training data. Moreover, when generalization is required, synthetic-data-trained models improve prediction accuracy by up to 50 percent compared to real-data-trained baselines, thanks to the enhanced variability and diversity of the generated traces. Overall, the proposed method bridges the gap between synthetic data generation and practical Wi-Fi traffic forecasting, providing a scalable, efficient, and real-time solution for modern wireless networks.
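A minimal stand-in for a first-order auto-regressive trace generator (the paper's fitting procedure and per-AP statistics are not reproduced; parameters here are illustrative):

```python
import numpy as np

def synth_traffic(mean, rho, sigma, n, seed=0):
    """AR(1) trace: x_t = mean + rho*(x_{t-1} - mean) + eps_t, with
    eps_t ~ N(0, sigma^2). The persistence parameter rho controls the
    burstiness of the synthetic traffic; the clip enforces that a
    traffic load cannot be negative."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mean
    for t in range(1, n):
        x[t] = mean + rho * (x[t - 1] - mean) + rng.normal(0.0, sigma)
    return np.clip(x, 0.0, None)
```

Fitting `mean`, `rho`, and `sigma` per AP from a short window of real measurements is what lets such a generator operate with minimal real data.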
Authors: Hiroshi Okajima
This paper presents an LMI-based design framework for multirate steady-state Kalman filters in systems with sensors operating at different sampling rates. The multirate system is formulated as a periodic time-varying system, where the Kalman gains converge to periodic steady-state values that repeat every frame period. Cyclic reformulation transforms this into a time-invariant problem; however, the resulting measurement noise covariance becomes semidefinite rather than positive definite, preventing direct application of standard Riccati equation methods. I address this through a dual LQR formulation with LMI optimization that naturally handles semidefinite covariances. The framework enables multi-objective design, supporting pole placement for guaranteed convergence rates and $l_2$-induced norm constraints for balancing average and worst-case performance. Numerical validation using an automotive navigation system with GPS and wheel speed sensors, including Monte Carlo simulation with 500 independent noise realizations, demonstrates that the proposed filter achieves a position RMSE well below the GPS noise level through effective multirate sensor fusion, and that the LMI solution provides valid upper bounds on the estimation error covariance.
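The periodic steady-state behavior that motivates the cyclic reformulation can be reproduced on a scalar toy example (a random-walk state with one fast and one slow sensor; parameters are our own, not the paper's automotive model):

```python
def multirate_kalman_gains(q, r_fast, r_slow, period, frames):
    """Scalar random-walk state x_{t+1} = x_t + w, Var(w) = q. A fast
    sensor (noise variance r_fast) measures every step; a slow sensor
    (noise variance r_slow) also measures when t % period == 0. The
    recorded fast-sensor gains settle into a pattern repeating every
    `period` steps: the periodic steady state that a cyclic
    reformulation turns into a time-invariant problem."""
    P = 1.0                            # error covariance
    gains = []
    for t in range(frames * period):
        P = P + q                      # predict
        K = P / (P + r_fast)           # fast-sensor update
        P = (1 - K) * P
        if t % period == 0:            # slow-sensor update in this slot
            K2 = P / (P + r_slow)
            P = (1 - K2) * P
        gains.append(round(K, 6))
    return gains
```

After enough frames, the gain sequence in one frame period exactly repeats the previous one, while still varying within the period.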
Authors: Arslan Ahmad, Ian Dobson
The impact of routine smaller outages on distribution system customers in terms of customer minutes interrupted can be tracked using conventional reliability indices. However, the customer minutes interrupted in large blackout events are extremely variable, and this makes it difficult to quantify the customer impact of these extreme events with resilience metrics. We solve this problem with the System Average Large Event Duration Index (SALEDI), which logarithmically transforms the customer minutes interrupted. We explain how this new resilience metric works, compare it with alternatives, quantify its statistical accuracy, and illustrate its practical use with standard outage data from five utilities.
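The effect of the logarithmic transform can be illustrated numerically (this shows only the idea; the paper's exact SALEDI normalization is not reproduced):

```python
import math

def mean_raw_and_log(cmi_events):
    """Compare the plain mean of customer minutes interrupted (CMI) per
    large event with the mean of log10(CMI): the log transform keeps a
    single extreme blackout from dominating the index, which is what
    makes a log-based duration index statistically stable."""
    raw = sum(cmi_events) / len(cmi_events)
    logm = sum(math.log10(c) for c in cmi_events) / len(cmi_events)
    return raw, logm

# three routine large events plus one extreme blackout (illustrative CMI)
events = [1e5, 2e5, 1.5e5, 5e8]
raw, logm = mean_raw_and_log(events)
```

The raw mean is driven almost entirely by the single extreme event, while the log-transformed mean reflects all four events comparably.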
Authors: Ran Tao, Pan Zhao, Ilya Kolmanovsky, Naira Hovakimyan
This paper introduces an uncertainty compensation-based robust adaptive model predictive control (MPC) framework for linear systems with nonlinear time-varying uncertainties. The framework integrates an L1 adaptive controller to compensate for the matched uncertainty and a robust feedback controller, designed using linear matrix inequalities, to mitigate the effect of unmatched uncertainty on target output channels. Uniform bounds on the errors between the system's states and control inputs and those of a nominal (i.e., uncertainty-free) system are derived. These error bounds are then used to tighten the actual system's state and input constraints, enabling the design of an MPC for the nominal system under these tightened constraints. Referred to as uncertainty compensation-based MPC (UC-MPC), this approach ensures constraint satisfaction while delivering enhanced performance compared to existing methods. Simulation results for a flight control example and a spacecraft landing on an asteroid demonstrate the effectiveness of the proposed framework.
Authors: Ali Eslami, Jiangbo Yu
This paper develops a control-theoretic framework for analyzing agentic systems embedded within feedback control loops, where an AI agent may adapt controller parameters, select among control strategies, invoke external tools, reconfigure decision architectures, and modify control objectives during operation. These capabilities are formalized by interpreting agency as hierarchical runtime decision authority over elements of the control architecture, leading to an augmented closed-loop representation in which physical states, internal memory, tool outputs, interaction signals, and design variables evolve as a coupled dynamical system. A five-level hierarchy of agency is defined, ranging from fixed control laws to runtime synthesis of control architectures and objectives. The analysis shows that increasing agency introduces interacting dynamical mechanisms such as time-varying adaptation, endogenous switching, decision-induced delays, and structural reconfiguration. The framework is developed in both nonlinear and linear settings, providing explicit design constraints for AI-enabled control systems in safety-critical applications.
Authors: Simon Pistrosch, Kleanthis Avramidis, Zhao Ren, Tiantian Feng, Jihwan Lee, Monica Gonzalez-Machorro, Anton Batliner, Tanja Schultz, Shrikanth Narayanan, Björn W. Schuller
The expression of affect is integral to spoken communication, yet its link to underlying articulatory execution remains unclear. Measures of articulatory muscle activity such as EMG could complement acoustic speech analyses by revealing how speech production is modulated by emotion. We investigate affect decoding from facial and neck surface electromyography (sEMG) during phonated and silent speech production. For this purpose, we introduce a dataset comprising 2,780 utterances from 12 participants across 3 tasks, on which we evaluate both intra- and inter-subject decoding using a range of features and model embeddings. Our results reveal that EMG representations reliably discriminate frustration with up to 0.845 AUC, and generalize well across articulation modes. Our ablation study further demonstrates that affective signatures are embedded in facial motor activity and persist in the absence of phonation, highlighting the potential of EMG sensing for affect-aware silent speech interfaces.
Authors: Mingyuan Yan, Trager Joswig-Jones, Baosen Zhang, Yize Chen, Wenqi Cui
Large-scale AI training workloads in modern data centers exhibit rapid and periodic power fluctuations, which may induce significant voltage deviations in power distribution systems. Existing voltage regulation methods, such as droop control, are primarily designed for slowly varying loads and may therefore be ineffective in mitigating these fast fluctuations. In addition, repeated control actions can incur substantial cost. To address this challenge, this paper proposes a decentralized switching-reference voltage control framework that exploits the structured behavior of AI training workloads. We establish conditions for voltage convergence and characterize an effective reference design that aligns with the two dominant operating levels of the AI training workload. The switching rule for voltage references is implemented solely using local voltage measurements, enabling simple local implementation while significantly reducing control effort. Simulation studies demonstrate that the proposed method substantially reduces both voltage deviations and reactive control effort, while remaining compatible with internal data center control strategies without requiring extensive coordination.
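A minimal sketch of a decentralized switching rule of this kind, assuming a simple two-level hysteresis on the locally measured voltage; the thresholds, band width, and reference levels are hypothetical illustrations, not the paper's certified design:

```python
def switch_reference(v_meas, v_ref, v_ref_low, v_ref_high, band=0.01):
    """Hypothetical local switching rule: toggle between two voltage
    references aligned with the workload's two dominant power levels,
    using only the locally measured voltage (all thresholds here are
    illustrative, not the paper's convergence-certified design)."""
    if v_meas < v_ref - band:      # heavy training phase pulls voltage down
        return v_ref_high          # raise reference to counteract the sag
    if v_meas > v_ref + band:      # idle/communication phase lets voltage rise
        return v_ref_low
    return v_ref                   # within the band: keep current reference

print(switch_reference(0.97, 1.00, 0.99, 1.01))  # sag -> high reference
```

Because the rule reads only the local bus voltage, no coordination channel with the data center's internal controls is needed, which matches the decentralized implementation the abstract describes.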
Authors: Ming Zeng, Ji Wang, Wanming Hao, Zheng Chu, Wenwu Xie, Quoc-Viet Pham
Extremely large-scale multiple-input multiple-output (XL-MIMO) is a key technology for next-generation wireless communication systems. By deploying significantly more antennas than conventional massive MIMO systems, XL-MIMO promises substantial improvements in spectral efficiency. However, due to the drastically increased array size, the conventional planar wave channel model is no longer accurate, necessitating a transition to a near-field spherical wave model. This shift challenges traditional beam training and channel estimation methods, which were designed for planar wave propagation. In this article, we present a comprehensive review of state-of-the-art beam training and channel estimation techniques for XL-MIMO systems. We analyze the fundamental principles, key methodologies, and recent advancements in this area, highlighting their respective strengths and limitations in addressing the challenges posed by the near-field propagation environment. Furthermore, we explore open research challenges that remain unresolved to provide valuable insights for researchers and engineers working toward the development of next-generation XL-MIMO communication systems.
Authors: Asutay Ozmen, João P. Hespanha, Katie Byl
Accurately modeling friction in robotics remains a core challenge: simulators such as MuJoCo and PyBullet use simplified friction models or heuristics to balance computational efficiency with accuracy, and these approximations can lead to substantial differences between simulated and physical performance. In this paper, we present a physics-informed friction estimation framework that enables the integration of well-established friction models with learnable components, requiring only minimal, generic measurement data. Our approach enforces physical consistency yet retains the flexibility to capture complex friction phenomena. We demonstrate, on an underactuated and nonlinear system, that the learned friction models, trained solely on small and noisy datasets, accurately reproduce dynamic friction properties with significantly higher fidelity than the simplified models commonly used in robotics simulators. Crucially, we show that the learned models transfer to systems they were not trained on. This ability to generalize across multiple systems streamlines friction modeling for complex, underactuated tasks, offering a scalable and interpretable path toward improving friction model accuracy in robotics and control.
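As a hedged illustration of combining an established friction model with learnable coefficients, the sketch below fits Coulomb, viscous, and Stribeck terms to noisy synthetic measurements by least squares; the basis, the Stribeck velocity `v_s`, and the synthetic data are assumptions for illustration, not the paper's framework:

```python
import numpy as np

def friction_basis(v, v_s=0.1):
    # Coulomb, viscous, and Stribeck terms; v_s is an assumed
    # Stribeck velocity, not a value from the paper.
    return np.stack([np.sign(v), v, np.sign(v) * np.exp(-(v / v_s) ** 2)], axis=1)

# Synthetic "measurements": true friction with known coefficients plus noise.
rng = np.random.default_rng(0)
v = rng.uniform(-1, 1, 200)
tau = 0.5 * np.sign(v) + 0.3 * v + 0.2 * np.sign(v) * np.exp(-(v / 0.1) ** 2)
tau += 0.01 * rng.standard_normal(200)

# Physical consistency here simply means restricting the model to this
# dissipative basis; the coefficients are recovered by least squares.
coef, *_ = np.linalg.lstsq(friction_basis(v), tau, rcond=None)
print(coef)  # close to [0.5, 0.3, 0.2]
```

Restricting the learnable part to a dissipative basis is one simple way to keep a learned friction model physically plausible even when the training data are small and noisy.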
Authors: Wenchao Liu, Xuhui Zhang, Jinke Ren, Weijie Yuan, Changsheng You, Shuangyang Li
Unmanned aerial vehicle (UAV)-enabled integrated sensing and communication (ISAC) is regarded as a key enabler for next-generation wireless systems. However, conventional fixed-position antennas limit the ability of UAVs to fully exploit their inherent potential. To overcome this limitation, we propose a UAV-enabled ISAC framework equipped with fluid antennas (FAs), where the mobility of antenna elements introduces additional spatial degrees of freedom to simultaneously enhance communication and sensing performance. A multi-objective optimization problem is formulated to maximize the communication rates of multiple users while minimizing the Cramér-Rao bound (CRB) for the angle estimation of a single target. Since excessively frequent updates of the FA positions may lead to response delays, a three-timescale optimization framework is developed to jointly optimize transmit beamforming, FA positions, and UAV trajectory based on their respective characteristics. To handle the non-convexity of the problem, an alternating optimization-based algorithm is developed to obtain a sub-optimal solution. Numerical results show that the proposed scheme significantly outperforms various benchmark schemes, validating the effectiveness of integrating FA technology into UAV-enabled ISAC systems.
Authors: Pooneh Mousavi, Lovenya Jain, Mirco Ravanelli, Cem Subakan
Large Audio Language Models (LALMs) integrate audio encoders with pretrained Large Language Models to perform complex multimodal reasoning tasks. While these models can generate Chain-of-Thought (CoT) explanations, the faithfulness of these reasoning chains remains unclear. In this work, we propose a systematic framework to evaluate CoT faithfulness in LALMs with respect to both the input audio and the final model prediction. We define three criteria for audio faithfulness: hallucination-free, holistic, and attentive listening. We also introduce a benchmark based on both audio and CoT interventions to assess faithfulness. Experiments on Audio Flamingo 3 and Qwen2.5-Omni suggest a potential multimodal disconnect: reasoning often aligns with the final prediction but is not always strongly grounded in the audio and can be vulnerable to hallucinations or adversarial perturbations.
Authors: Addie McCurdy, Andrew Gusty, Emily Jensen
The optimal controller design problem for a linear, first-order spatially-invariant distributed parameter system is considered. Through a case study of the Linear Quadratic Regulator (LQR) problem for the diffusion equation over the torus, it is illustrated that the optimal controller design problem can be equivalently formulated as an optimization problem over the system's closed-loop mappings, analogous to the System Level Synthesis framework. This reformulation is solved analytically to recover the LQR for the diffusion equation, and an internally stable implementation of this controller is recovered from the optimal closed-loop mappings. It is further demonstrated that a class of spatio-temporal constraints on the closed-loop maps can be imposed on this closed-loop formulation while preserving convexity.
Authors: Hansol Park, Hoseong Ahn, Junwon Moon, Yejin Lee, Kyuhong Shim
Hallucinations in multimodal models have been extensively studied using benchmarks that probe reliability in image-text query settings. However, the effect of spoken queries on multimodal hallucinations remains largely unexplored, despite the growing role of voice interfaces. In this paper, we introduce a systematic pipeline that converts existing multimodal hallucination benchmarks into spoken-query versions while preserving the original tasks and labels. We instantiate this pipeline on RePOPE and release RePOPE-Spk, where all queries are provided as spoken audio under diverse input conditions. Experimental results show that hallucinations escalate when queries are spoken rather than written: error rates increase by 3-6% with clean speech and by up to 30% under environmental noise. Furthermore, many-shot prompting and chain-of-thought reasoning provide only partial mitigation. Our findings motivate new directions for building reliable voice interface systems and evaluations.
Authors: Cheng Ouyang, Moeen Ul Islam, Dong Chen, Kaixiang Zhang, Zhaojian Li, Xiaobo Tan
Soft robots offer significant advantages in safety and adaptability, yet achieving precise and dynamic control remains a major challenge due to their inherently complex and nonlinear dynamics. Recently, Data-enabled Predictive Control (DeePC) has emerged as a promising model-free approach that bypasses explicit system identification by directly leveraging input-output data. While DeePC has shown success in other domains, its application to soft robots remains underexplored, particularly for three-dimensional (3D) soft robotic systems. This paper addresses this gap by developing and experimentally validating an effective DeePC framework on a 3D, cable-driven soft arm. Specifically, we design and fabricate a soft robotic arm with a thick tubing backbone for stability, a dense silicone body with large cavities for strength and flexibility, and rigid endcaps for secure termination. Using this platform, we implement DeePC with singular value decomposition (SVD)-based dimension reduction for two key control tasks: fixed-point regulation and trajectory tracking in 3D space. Comparative experiments with a baseline model-based controller demonstrate DeePC's superior accuracy, robustness, and adaptability, highlighting its potential as a practical solution for dynamic control of soft robots.
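The data-driven core of DeePC can be sketched with a block-Hankel matrix built from recorded input-output data, plus an SVD-based truncation of the kind the abstract mentions; the depth `L`, target rank `r`, and the toy trajectory are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def hankel(w, L):
    """Block-Hankel matrix of depth L from a data sequence w (T x m).
    In DeePC, its columns parameterize all length-L trajectories of
    the (unknown) system under a persistency-of-excitation condition."""
    T, m = w.shape
    return np.column_stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)])

def reduce_rank(H, r):
    """SVD-based dimension reduction to compress the DeePC data matrix;
    r is an assumed target rank, not the paper's choice."""
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :r] * s[:r]  # r columns spanning the dominant subspace

# Toy single-channel trajectory: a sinusoid, whose Hankel matrix has rank 2,
# so the truncation loses essentially nothing.
w = np.sin(0.3 * np.arange(40)).reshape(-1, 1)
H = hankel(w, L=8)
print(H.shape, reduce_rank(H, r=2).shape)  # (8, 33) (8, 2)
```

The reduced matrix replaces the raw Hankel matrix in the DeePC optimization, shrinking the decision-variable dimension, which is what makes real-time control of the soft arm tractable.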
Authors: Jiquan Wang, Sha Zhao, Yangxuan Zhou, Yiming Kang, Shijian Li, Gang Pan
Electroencephalography (EEG) foundation models hold significant promise for universal Brain-Computer Interfaces (BCIs). However, existing approaches often rely on end-to-end fine-tuning and exhibit limited efficacy under frozen-probing protocols, lacking the intrinsic universality required for broad generalization. This limitation stems from adapting general-purpose sequence architectures that overlook the biophysical and dynamical principles of neural activity. To bridge this gap, we propose DeeperBrain, a neuro-grounded foundation model integrating domain-specific inductive biases into its model design and learning objectives. Architecturally, DeeperBrain incorporates a volume conduction-aware channel encoding to model spatial mixing via 3D geometry, and a neurodynamics-aware temporal encoding capturing slow adaptations using oscillatory and exponential bases. For pretraining, we introduce a dual-objective strategy combining Masked EEG Reconstruction (MER) for local fidelity and Neurodynamics Statistics Prediction (NSP). NSP enforces alignment with macroscopic brain states by predicting interpretable order parameters, including spectral power, functional connectivity, cross-frequency coupling, and dynamic complexity. Extensive experiments demonstrate that DeeperBrain achieves state-of-the-art or highly competitive performance under end-to-end fine-tuning. Crucially, it maintains superior efficacy under a rigorous frozen-probing protocol, verifying that embedding neuroscientific first principles endows learned representations with the intrinsic universality essential for universal BCI. The code will be publicly available.
Authors: Ahmed M. Elshazly, Ahmed Arafa
We study federated learning (FL) over wireless fading channels where multiple devices simultaneously send their model updates. We propose an efficient age-aware edge-blind over-the-air FL approach that does not require channel state information (CSI) at the devices. Instead, the parameter server (PS) uses multiple antennas and applies maximum-ratio combining (MRC) based on its estimated sum of the channel gains to detect the parameter updates. A key challenge is that the number of orthogonal subcarriers is limited; thus, transmitting many parameters requires multiple Orthogonal Frequency Division Multiplexing (OFDM) symbols, which increases latency. To address this, the PS selects only a small subset of model coordinates each round using AgeTop-k, which first picks the largest-magnitude entries and then chooses the k coordinates with the longest waiting times since they were last selected. This ensures that all selected parameters fit into a single OFDM symbol, reducing latency. We provide a convergence bound that highlights the advantages of using a higher number of antenna array elements and demonstrates a key trade-off: increasing k decreases compression error at the cost of increasing the effect of channel noise. Experimental results show that (i) more PS antennas greatly improve accuracy and convergence speed; (ii) AgeTop-k outperforms random selection under relatively good channel conditions; and (iii) the optimum k depends on the channel, with smaller k being better in noisy settings.
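A sketch of the AgeTop-k selection as described above, assuming a magnitude shortlist of size `pool`; the shortlist size and tie-breaking are assumptions for illustration, and the paper defines the exact rule:

```python
import numpy as np

def age_top_k(grad, age, k, pool=4):
    """Sketch of AgeTop-k: shortlist the `pool` largest-magnitude
    coordinates, then keep the k of them that have waited longest
    since last being selected. `pool` > k is an assumed shortlist size."""
    shortlist = np.argsort(-np.abs(grad))[:pool]          # largest magnitudes
    chosen = shortlist[np.argsort(-age[shortlist])[:k]]   # oldest among them
    age += 1                  # every coordinate waits one more round...
    age[chosen] = 0           # ...except the ones just transmitted
    return np.sort(chosen)

grad = np.array([0.9, -0.8, 0.1, 0.7, -0.05, 0.6])
age = np.array([0, 5, 2, 1, 9, 3])
print(age_top_k(grad, age, k=2))  # [1 5]: large magnitude and long-waiting
```

Within the shortlist, the age term guarantees that coordinates passed over in earlier rounds are eventually transmitted, which is the mechanism behind the latency/freshness trade-off in k that the convergence bound exposes.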
Authors: Hiu Yung Wong
The superconducting qubit quantum computer is one of the most promising quantum computing architectures for large-scale integration due to its maturity and close proximity to the well-established semiconductor manufacturing infrastructure. From an education perspective, it also bridges classical microwave electronics and quantum electrodynamics. In this paper, we will review the basics of quantum computers, superconductivity, and Josephson junctions. We then introduce important technologies and concepts related to DiVincenzo's criteria, which are the necessary conditions for the superconducting qubits to work as a useful quantum computer. Firstly, we will discuss various types of superconducting qubits formed with Josephson junctions, from which we will understand the trade-off across multiple design parameters, including their noise immunity. Secondly, we will discuss different schemes to achieve entanglement gate operations, which are a major bottleneck in achieving more efficient fault-tolerant quantum computing. Thirdly, we will review readout engineering, including the implementations of the Purcell filters and quantum-limited amplifiers. Finally, we will discuss the nature and review the studies of two-level system defects, which are currently the limiting factor of qubit coherence time. DiVincenzo's criteria are only the necessary conditions for a technology to be eligible for quantum computing. To have a useful quantum computer, large-scale integration is required. We will review proposals and developments for the large-scale integration of superconducting qubit devices. By comparing with the application of electronic design automation (EDA) in semiconductors, we will also review the use of EDA in superconducting qubit quantum computer design, which is necessary for its large-scale integration.
Authors: Yuhan Chen, Tao Liu, Jie Huang
The leader-following consensus problem for general linear multi-agent systems over jointly connected switching networks has long been challenging, and its solvability has been limited to the class of linear multi-agent systems whose system matrix is marginally stable. This condition is restrictive, since it excludes even the most commonly used double-integrator system. This paper presents a breakthrough by demonstrating that leader-following exponential consensus is achievable for general linear multi-agent systems over jointly connected switching networks, even when the system matrix is exponentially unstable. The degree of instability can be explicitly characterized by two key quantities that arise from the jointly connected condition on a switching graph. By exploiting duality, we further show that the output-based distributed observer design problem for a general leader system is solvable over jointly connected switching networks, even when the system matrix is exponentially unstable. This is in sharp contrast to existing distributed observers, which rely on the assumption that the leader system is marginally stable.
Authors: Sunki Hong, Jisoo Lee, Yuanyuan Shi
Selecting the right deep learning model for power grid forecasting is challenging, as performance heavily depends on the data available to the operator. This paper presents a comprehensive benchmark of five modern neural architectures: two state space models (PowerMamba, S-Mamba), two Transformers (iTransformer, PatchTST), and a traditional LSTM. We evaluate these models on hourly electricity demand across six diverse US power grids for forecast windows between 24 and 168 hours. To ensure a fair comparison, we adapt each model with specialized temporal processing and a modular layer that cleanly integrates weather covariates. Our results reveal that there is no single best model for all situations. When forecasting using only historical load, PatchTST and the state space models provide the highest accuracy. However, when explicit weather data is added to the inputs, the rankings reverse: iTransformer improves its accuracy three times more efficiently than PatchTST. By controlling for model size, we confirm that this advantage stems from the architecture's inherent ability to mix information across different variables. Extending our evaluation to solar generation, wind power, and wholesale prices further demonstrates that model rankings depend on the forecast task: PatchTST excels on highly rhythmic signals like solar, while state space models are better suited for the chaotic fluctuations of wind and price. Ultimately, this benchmark provides grid operators with actionable guidelines for selecting the optimal forecasting architecture based on their specific data environments.
Authors: Xuanhao Mu, Jakob Geiges, Nan Liu, Thorsten Schlachter, Veit Hagenmeyer
In energy system analysis, coupling models with mismatched spatial resolutions is a significant challenge. A common solution is assigning weights to high-resolution geographic units for aggregation, but traditional models are limited to a single geospatial attribute. This paper presents an innovative method employing a self-supervised Heterogeneous Graph Neural Network to address this issue. This method models high-resolution geographic units as graph nodes, integrating various geographical features to generate physically meaningful weights for each grid point. These weights enhance the conventional Voronoi-based allocation method, allowing it to go beyond simple geographic proximity by incorporating essential geographic features. In addition, the self-supervised learning paradigm overcomes the lack of accurate ground-truth data. Experimental results demonstrate that applying weights generated by this method to cluster-based Voronoi Diagrams significantly enhances scalability, accuracy, and physical plausibility, while increasing precision compared to traditional methods.
Authors: Yangyang Qu, Todisco Massimiliano, Galdi Chiara, Evans Nicholas
Voice biometric systems can exhibit sex-related performance gaps even when overall verification accuracy is strong. We attribute these gaps to two practical mechanisms: (i) demographic shortcut learning, where speaker classification training exploits spurious correlations between sex and speaker identity, and (ii) feature entanglement, where sex-linked acoustic variation overlaps with identity cues and cannot be removed without degrading speaker discrimination. We propose Fair-Gate, a fairness-aware and interpretable risk-gating framework that addresses both mechanisms in a single pipeline. Fair-Gate applies risk extrapolation to reduce variation in speaker-classification risk across proxy sex groups, and introduces a local complementary gate that routes intermediate features into an identity branch and a sex branch. The gate provides interpretability by producing an explicit routing mask that can be inspected to understand which features are allocated to identity versus sex-related pathways. Experiments on VoxCeleb1 show that Fair-Gate improves the utility-fairness trade-off, yielding more sex-fair ASV performance under challenging evaluation conditions.
Authors: Shuai Zeng
Large-scale MIMO detection remains challenging because exact or near-maximum-likelihood search is difficult to scale, while available quantum resources are insufficient for directly solving full-size detection instances by QAOA. This paper therefore proposes a Block-QAOA-Aware MIMO Detector (BQA-MD), whose primary purpose is to reorganize the detection chain so that it becomes compatible with limited-qubit local quantum subproblems. Specifically, BQA-MD combines block-QAOA-aware preprocessing in the QR domain, a standards-consistent blockwise 5G NR Gray-HUBO interface, an MMSE-induced dynamic regularized blockwise objective, and K-best candidate propagation. Within this framework, fixed-size block construction gives every local subproblem a uniform circuit width and parameter dimension, which in turn enables parameter-transfer QAOA as a practical realization strategy for structurally matched local subproblems. Experiments are conducted on a 16x16 Rayleigh MIMO system with 16QAM using classical simulation of the quantum subroutine. The results show that the regularized blockwise detector improves upon its unregularized counterpart, validating the adopted blockwise objective and the block-QAOA-aware design rationale. They also show that the parameter-transfer QAOA detector nearly matches the regularized blockwise exhaustive reference and clearly outperforms direct-training QAOA in BER, thereby supporting parameter reuse as the preferred QAOA realization strategy within the proposed framework. In the tested setting, MMSE remains slightly better in the low-SNR region, whereas the parameter-transfer QAOA detector becomes highly competitive from the medium-SNR regime onward.
Authors: Yujun Huang, Gioele Zardini
Complex engineered systems require coordinated design choices across heterogeneous components under multiple conflicting objectives and uncertain specifications. Monotone co-design provides a compositional framework for such problems by modeling each subsystem as a design problem: a feasible relation between provided functionalities and required resources in partially ordered sets. Existing uncertain co-design models rely on interval bounds, which support worst-case reasoning but cannot represent probabilistic risk or multi-stage adaptive decisions. We develop a distributional extension of co-design that models uncertain design outcomes as distributions over design problems and supports adaptive decision processes through Markov-kernel re-parameterizations. Using quasi-measurable and quasi-universal spaces, we show that the standard co-design interconnection operations remain compositional under this richer notion of uncertainty. We further introduce queries and observations that extract probabilistic design trade-offs, including feasibility probabilities, confidence bounds, and distributions of minimal required resources. A task-driven unmanned aerial vehicle case study illustrates how the framework captures risk-sensitive and information-dependent design choices that interval-based models cannot express.
Authors: Jianghong Dong, Chunying Yang, Mengchi Cai, Chaoyi Chen, Qing Xu, Jianqiang Wang, Keqiang Li
Sufficient testing under corner cases is critical for the long-term operation of vehicle-infrastructure cooperation systems (VICS). However, existing corner-case generation methods are primarily AI-driven, and VICS testing under corner cases is typically limited to simulation. In this paper, we introduce an L5 ''Interactable'' level to the VICS digital twin (VICS-DT) taxonomy, extending beyond the conventional L4 ''Optimizable'' level. We further propose an L5-level VICS testing framework, IMPACT (Interactive Mixed-digital-twin Paradigm for Advanced Cooperative vehicle-infrastructure Testing). By enabling direct human interactions with VICS entities, IMPACT incorporates highly uncertain and unpredictable human behaviors into the testing loop, naturally generating high-quality corner cases that complement AI-based methods. Furthermore, the mixedDT-enabled ''Physical-Virtual Action Interaction'' facilitates safe VICS testing under corner cases, incorporating real-world environments and entities rather than purely in simulation. Finally, we implement IMPACT on the I-VIT (Interactive Vehicle-Infrastructure Testbed), and experiments demonstrate its effectiveness. The experimental videos are available at our project website: this https URL.
Authors: Chidozie Ezeakunne, Jose E. Tabarez, Reeju Pokharel, Anup Pandey
Solving the alternating current power flow equations in real time is essential for secure grid operation, yet classical Newton-Raphson solvers can be slow under stressed conditions. Existing graph neural networks for power flow are typically trained on a single system and often degrade on different systems. We present PowerModelsGAT-AI, a physics-informed graph attention network that predicts bus voltages and generator injections. The model uses bus-type-aware masking to handle different bus types and balances multiple loss terms, including a power-mismatch penalty, using learned weights. We evaluate the model on 14 benchmark systems (4 to 6,470 buses) and train a unified model on 13 of these under N-2 (two-branch outage) conditions, achieving an average normalized mean absolute error of 0.89% for voltage magnitudes and R^2 > 0.99 for voltage angles. We also show continual learning: when adapting a base model to a new 1,354-bus system, standard fine-tuning causes severe forgetting with error increases exceeding 1000% on base systems, while our experience replay and elastic weight consolidation strategy keeps error increases below 2% and in some cases improves base-system performance. Interpretability analysis shows that learned attention weights correlate with physical branch parameters (susceptance: r = 0.38; thermal limits: r = 0.22), and feature importance analysis supports that the model captures established power flow relationships.
Authors: Elsa Bou Gebrael (1), Majd Olleik (1), Sebastian Zwickl-Bernhard (2) ((1) American University of Beirut, Maroun Semaan Faculty of Engineering and Architecture, Industrial Engineering and Management Department Beirut, Lebanon, (2) Vienna University of Technology, Institute of Energy Systems and Electrical Drives, Energy Economics Group (EEG), Austria, Norwegian University of Science and Technology, Norway)
In many low-income countries, neighborhood diesel generators are widely used to compensate for unreliable or unavailable national electricity grids. These diesel-based microgrids are typically characterized by market power, significant pollution, and weak regulatory oversight. In parallel, households increasingly deploy off-grid solar photovoltaic (PV) systems to gain control over electricity supply. However, these systems suffer from curtailed excess generation during peak solar hours and unreliable access at other times. While prior studies have optimized microgrids in developing contexts from a techno-economic perspective, they largely neglect the market power exerted by monopolistic private generators. This paper addresses this gap by developing a bi-level game-theoretic model that enables household-generated electricity to be fed into the microgrid while explicitly accounting for the market power of a neighborhood diesel generator company (DGC). The regulator sets price and feed-in-tariff caps to maximize household economic surplus (HES), while the DGC acts as a profit-maximizing agent controlling access and supply. The model is applied to a Lebanese case study using high-resolution empirical data collected via logging devices. Results show that: (i) price and feed-in-tariff caps substantially increase HES and consistently induce significant household PV feed-in to the microgrid; (ii) higher DGC budgets or greater PV-owner penetration lead to pronounced gains in HES; and (iii) the renewable energy share reaches 60% under base conditions and approaches 100% at sufficiently high budgets or PV-owner penetration levels, compared to 0% under the status quo.
Authors: Puskar Neupane, Bai Cui
The solvability condition of the power flow equation is important in operational planning and control as it guarantees the existence and uniqueness of a solution for a given set of power injections. As renewable generation becomes more prevalent, the steady-state operating point of the system changes more frequently, making it increasingly challenging to verify power flow solvability by running the AC power flow solver after each change in power injections. This process can be computationally intensive, and numerical solvers do not always converge reliably to an operational solution. In this paper, we propose a sufficient condition for the solvability of the lossless real power flow equation based on the cycle space of a meshed network. The proposed condition yields a less conservative solvability certificate than existing sufficient conditions on the tested systems and can serve as a useful foundation for developing solvability conditions for the fully coupled power flow equations.
Authors: Jinyu Miao, Pu Zhang, Rujun Yan, Yifei He, Bowei Zhang, Zheng Fu, Ke Wang, Qi Song, Kun Jiang, Mengmeng Yang, Diange Yang
Advanced autonomous driving systems require accurate vehicle dynamics modeling. However, identifying a precise dynamics model remains challenging due to strong nonlinearities and coupled longitudinal and lateral dynamic characteristics. Previous research has employed physics-based analytical models or neural networks to construct vehicle dynamics representations. Nevertheless, these approaches often struggle to simultaneously achieve satisfactory system identification efficiency, modeling accuracy, and compatibility with linear control strategies. In this paper, we propose a fully data-driven dynamics modeling method tailored for complex distributed electric-drive trucks (DETs), leveraging Koopman operator theory to represent highly nonlinear dynamics in a lifted linear embedding space. To achieve high-precision modeling, we first propose a novel dual-branch encoder that encodes dynamic states and provides a powerful basis for the proposed Koopman-based method, termed KODE. A physics-informed supervision mechanism, grounded in the geometric consistency of temporal vehicle motion, is incorporated into the training process to facilitate effective learning of both the encoder and the Koopman operator. Furthermore, to accommodate the diverse driving patterns of DETs, we extend the vanilla Koopman operator to a mixture-of-Koopman operator framework, enhancing modeling capability. Simulations conducted in a high-fidelity TruckSim environment and real-world experiments demonstrate that the proposed approach achieves state-of-the-art performance in long-term dynamics state estimation.
Authors: Seungtaek Kim, Jonghyup Lee, Kyoungseok Han, Seibum B. Choi
To address the computational challenges of Model Predictive Control (MPC), recent research has studied using imitation learning to approximate MPC with a computationally efficient Deep Neural Network (DNN). However, this introduces a common issue in learning-based control, the simulation-to-reality (sim-to-real) gap. Inspired by Robust Tube MPC, this study proposes a new control framework that addresses this issue from a control perspective. The framework ensures the DNN operates in the same environment as the source domain, addressing the sim-to-real gap with high data-collection efficiency. Moreover, an input refinement governor is introduced to address the DNN's inability to adapt to variations in model parameters, enabling the system to satisfy MPC constraints more robustly under parameter-changing conditions. The proposed framework was validated through two case studies, cart-pole control and vehicle collision avoidance control, which analyze the principles of the proposed framework in detail and demonstrate its application to a vehicle control case.
Authors: Dhruv S. Kushwaha, Zoleikha A. Biron
Reinforcement Learning (RL) has achieved remarkable success in solving complex sequential decision-making problems. However, its application to safety-critical physical systems remains constrained by the lack of stability guarantees. Standard RL algorithms prioritize reward maximization, often yielding policies that may induce oscillations or unbounded state divergence. There has been significant work on incorporating Lyapunov-based stability guarantees into RL algorithms, with key challenges being the selection of a candidate Lyapunov function, the computational complexity of using additional function approximators, and the conservative policies that result from embedding stability criteria in the learning process. In this work we propose a novel Lyapunov-constrained Soft Actor-Critic (LC-SAC) algorithm based on Koopman operator theory. We use extended dynamic mode decomposition (EDMD) to produce a linear approximation of the system and derive from this approximation a closed-form candidate Lyapunov function. The derived Lyapunov function is incorporated into the SAC algorithm to provide guarantees for a policy that stabilizes the nonlinear system. The approach is evaluated on trajectory tracking in a 2D quadrotor environment based on safe-control-gym. The proposed algorithm shows training convergence and decaying violations of the Lyapunov stability criterion compared to the baseline vanilla SAC algorithm. GitHub Repository: this https URL
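As a hedged illustration of the EDMD-to-Lyapunov step described in the abstract (the toy system, identity dictionary, and variable names are ours, not the paper's), one can identify a linear operator by least squares and then solve a discrete Lyapunov equation for a closed-form quadratic candidate:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def edmd(X, Y, lift):
    """Least-squares EDMD: find K with lift(Y) ~= K @ lift(X)."""
    PhiX, PhiY = lift(X), lift(Y)
    return PhiY @ np.linalg.pinv(PhiX)

# Toy stable discrete-time system x+ = A x; an identity dictionary suffices here,
# whereas a nonlinear system would need a richer (e.g. learned) basis.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 200))
Y = A_true @ X

K = edmd(X, Y, lift=lambda Z: Z)

# Closed-form Lyapunov candidate: P solves P - K^T P K = Q in the lifted space,
# giving V(x) = x^T P x with V strictly decreasing along the identified dynamics.
P = solve_discrete_lyapunov(K.T, np.eye(2))
V = lambda x: x @ P @ x

x = np.array([1.0, -1.0])
assert V(K @ x) < V(x)
```

In the paper's setting this decrease condition would be imposed as a constraint on the SAC policy update; the sketch only shows where the closed-form candidate comes from.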
Authors: Alma Lago
Mechanistic interpretability has transformed the analysis of transformer circuits by decomposing model behavior into competing algorithms, identifying phase transitions during training, and deriving closed-form predictions for when and why strategies shift. However, this program has remained largely confined to sequence-prediction architectures, leaving embodied control systems without comparable mechanistic accounts. Here we extend this framework to sensorimotor-cognitive development, using infant motor learning as a model system. We show that foundational inductive biases give rise to causal control circuits, with learned gating mechanisms converging toward theoretically motivated uncertainty thresholds. The resulting dynamics reveal a clean phase transition in the arbitration gate whose commitment behavior is well described by a closed-form exponential moving-average surrogate. We identify the context window $k$ as the critical parameter governing circuit formation: below a minimum threshold ($k \leq 4$) the arbitration mechanism cannot form; above it ($k \geq 8$), gate confidence scales asymptotically as $\log k$. A two-dimensional phase diagram further reveals task-demand-dependent route arbitration consistent with the prediction that prospective execution becomes advantageous only when prediction error remains within the task tolerance window. Together, these results provide a mechanistic account of how reactive and prospective control strategies emerge and compete during learning. More broadly, this work sharpens mechanistic accounts of cognitive development and provides principled guidance for the design of interpretable embodied agents.
Authors: Francesco Ceccanti, Aldo Bischi, Marco Antonelli, Andrea Baccioli
Vertical farming is a controlled-environment agriculture (CEA) approach in which crops are grown in stacked layers under regulated climate and lighting, enabling predictable production but requiring high electricity input. This study quantifies the techno-economic impact of roof-mounted daylighting in a three-tier container vertical farm using a light-pipe (LP) system that delivers sunlight to the upper tier. The optical chain, comprising a straight duct and a tilting aluminum-coated mirror within a rotating dome, was modelled in Tonatiuh to estimate crop-level photon delivery and solar gains. These outputs were coupled with a transient AGRI-Energy model to perform year-round simulations for Dubai. Tier-3 strategies were compared against a fully LED benchmark, including daylight-only operation, on/off supplementation, PWM dimming, UV-IR filtering, variable-transmittance control, and simple glazing. Ray-tracing predicted an overall LP optical efficiency of 45%-75%, depending on solar position, quantifying the fraction of incident daylight at the collector aperture delivered to the target growing zone. Daylight-only operation reduced the total three-tier yield by 17% and was not economically viable despite 27-29% electricity savings. Hybrid daylight-LED strategies preserved benchmark yield while reducing electricity use. PWM dimming combined with UV-IR filtering achieved the lowest specific electricity consumption (6.32 kWh/kg), 14% below the benchmark. Overall, viability remains CAPEX-limited, as the achievable electricity savings are insufficient to offset the added investment; it improves mainly under high electricity- and carbon-price contexts, although the LP system delivers a 15-38% lower light cost than an optical-fiber reference under identical incident daylight.
Authors: Taulant Kerci, Angel Vaca, Andrew Groom, Julia Matevosyan, Federico Milano
Frequency control in power systems is implemented in a hierarchical structure traditionally known as primary frequency control (PFC), secondary frequency control (SFC), and tertiary control reserve (TCR); some jurisdictions include time error control (TEC) as well. This hierarchical structure was designed around a century ago based on timescale separation, that is, approximately an order of magnitude difference between each control layer. This paper argues, based on real-world observations as well as detailed dynamic simulations on a model of the All-Island power system (AIPS) of Ireland, that this frequency control structure is not necessary in current and future converter-dominated power grids. The paper proposes to redesign the structure by removing the SFC and TCR and relying on PFC and a real-time energy market. The PFC is responsible for addressing fast power imbalances on timescales of tens of milliseconds to a few minutes (e.g., 100 ms to 5 minutes), while the real-time energy market addresses longer imbalances on timescales of minutes to hours (e.g., 5 minutes to 1 hour). TEC, in turn, is considered optional.
Authors: Guido Cavraro, Andrey Bernstein, Emiliano Dall'Anese
This paper focuses on price-based residential demand response implemented through dynamic adjustments of electricity prices during DR events. It extends existing DR models to a stochastic framework in which customer response is represented by price-dependent random variables, leveraging models and tools from the theory of stochastic optimization with decision-dependent distributions. The inherent epistemic uncertainty in the customers' responses renders open-loop, model-based DR strategies impractical. To address this challenge, the paper proposes to employ stochastic, feedback-based pricing strategies to compensate for estimation errors and uncertainty in customer response. The paper then establishes theoretical results demonstrating the stability and near-optimality of the proposed approach and validates its effectiveness through numerical simulations.
Authors: Xiaoxuan Jiang, Yijie Mao
Integrated sensing, communication, and powering (ISCAP) has emerged as a promising solution for enabling multi-functionality in 6G networks. However, it poses a significant challenge in the design of multi-functional waveforms that must jointly consider communication, sensing, and powering performance. In this paper, we propose a novel rate-splitting multiple access (RSMA)-enabled multi-functional ISCAP network, where RSMA facilitates the use of communication signals to simultaneously achieve all three functionalities. Based on the proposed system model, we investigate the beamforming optimization problem to explore the performance trade-offs among communication, sensing, and power transfer. To efficiently solve this problem, we develop a novel ISCAP-extragradient (ISCAP-EG) algorithm, which transforms the original problem into a sequence of convex subproblems, reformulates the dual problem as a variational inequality, and solves it using the EG method. Numerical results show that the proposed ISCAP-EG algorithm achieves performance equivalent to that of the conventional successive convex approximation (SCA)-based method, while significantly reducing simulation time. Moreover, the RSMA-enabled multi-functional ISCAP network enhances the performance trade-off compared with the conventional space-division multiple access (SDMA)-based scheme, highlighting RSMA as a promising technique for advancing multi-functional ISCAP development in 6G.
Authors: Yomali Lokugama, Charith Dissanayake, Saman Atapattu, Kandeepan Sithamparanathan
This paper investigates energy-efficient inter-satellite communication in Low Earth Orbit (LEO) networks, where satellites exchange both buffered and newly generated data through half-duplex inter-satellite links (ISLs). Due to orbital motion and interference-prone directional asymmetry, the achievable ISL capacities in opposite directions vary dynamically, leading to inefficient utilization under conventional fixed or alternating duplex modes. To address this, we propose a Flexible Duplex (FlexD) scheme that adaptively selects the ISL transmission direction in each slot to maximize instantaneous end-to-end sky-to-ground throughput, jointly accounting for ISL quality, downlink conditions, and queue backlogs. A unified analytical framework is developed that transforms the bottleneck rate structure into an equivalent SINR domain, enabling closed-form derivations of throughput outage probability and energy efficiency under deterministic ISLs and Rician satellite-to-ground fading. The analysis reveals distinct operating regions governed by ISL and backlog constraints and provides tractable bounds for ergodic rate and energy efficiency. Numerical results confirm that FlexD achieves higher reliability and up to 30% improvement in energy efficiency compared with conventional half- and full-duplex schemes under realistic inter-satellite interference conditions.
Authors: Luis F. Abanto-Leon, Setareh Maghsudi
This work investigates the radio resource management (RRM) design for downlink integrated sensing and communications (ISAC) systems, jointly optimizing timeslot allocation, beam adaptation, functionality selection, and user-target pairing, with the goal of economizing resource consumption under imperfect information. Timeslot allocation assigns a number of discrete channel uses to targets and users, while beam adaptation selects transmit and receive beams with suitable directions, power levels, and beamwidths. Functionality selection determines whether each timeslot is used for sensing, communication, or their simultaneous operation, while user-target pairing specifies which users and targets are jointly served within the same timeslot. To ensure reliable operation, information imperfections arising from motion, quantization, feedback delays, and hardware limitations are considered. Resource economization is achieved by minimizing energy and time consumption through a multi-objective function, with strict prioritization of time savings. The resulting RRM problem is formulated as a semi-infinite, nonconvex mixed-integer nonlinear program (MINLP). Given the lack of generic methods for solving such problems, we propose a tailor-made approach that exploits the underlying structure of the problem to uncover hidden convexities. This enables an exact reformulation as a mixed-integer semidefinite program (MISDP), which can be solved to global optimality. Simulations reveal important interdependencies among the considered RRM components and show that the proposed approach achieves substantial performance improvements over baseline schemes, with gains up to 88%.
Authors: Arslan Ahmad, Ian Dobson
We develop LENORI, a Large Event Number of Outages Resilience Index measuring distribution system resilience with the number of forced line outages observed in large extreme events. LENORI is calculated from standard utility outage data. The statistical accuracy of LENORI is ensured by taking the logarithm of the outage data. A related Average Large Event Number of Outages metric ALENO is also developed, and both metrics are applied to a distribution system to quantify the power grid strength relative to the extreme events stressing the grid. The metrics can be used to track resilience and quantify the contributions of various types of hazards to the overall resilience.
Authors: Arslan Ahmad, Ian Dobson
Accurate probabilistic modeling of the power system restoration process is essential for resilience planning, operational decision-making, and realistic simulation of resilience events. In this work, we develop data-driven probabilistic models of the restoration process using outage data from four distribution utilities. We decompose restoration into three components: normalized restore time progression, total restoration duration, and the time to first restore. The Beta distribution provides the best-pooled fit for restore time progression, and the Uniform distribution is a defensible, parsimonious approximation for many events. Total duration is modeled as a heteroskedastic Lognormal process that scales superlinearly with event size. The time to first restore is well described by a Gamma model for moderate and large events. Together, these models provide an end-to-end stochastic model for Monte Carlo simulation, probabilistic duration forecasting, and resilience planning that moves beyond summary statistics, enabling uncertainty-aware decision support grounded in utility data.
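A minimal sketch of fitting the three reported distribution families with SciPy; since the utility outage records are not public, synthetic stand-in data with invented parameter values is used here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for the three restoration components (illustrative only).
progression = rng.beta(2.0, 3.0, size=500)      # normalized restore times in [0, 1]
durations = rng.lognormal(3.0, 0.6, size=500)   # total restoration duration (hours)
first_fix = rng.gamma(2.5, 1.2, size=500)       # time to first restore (hours)

# Fit the distribution families named in the abstract.
a, b, _, _ = stats.beta.fit(progression, floc=0, fscale=1)
s, _, scale = stats.lognorm.fit(durations, floc=0)
k, _, theta = stats.gamma.fit(first_fix, floc=0)

# End-to-end Monte Carlo event: draw a total duration, then restore times in it.
T = stats.lognorm.rvs(s, scale=scale, random_state=rng)
restore_times = np.sort(T * stats.beta.rvs(a, b, size=50, random_state=rng))
assert restore_times[-1] <= T
```

The last two lines mirror the paper's end-to-end use: composing the fitted duration and progression models yields a full stochastic restoration trajectory for simulation.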
Authors: Tianyang Yi, D. Adrian Maldonado, Anirudh Subramanyam
Chance-constrained optimization has emerged as a promising framework for managing uncertainties in power systems. This work advances its application to the DC Optimal Power Flow (DC-OPF) model, developing a novel approach to uncertainty modeling and estimation. Current methods typically tackle these problems by first modeling random nodal injections using high-dimensional statistical distributions that scale with the number of buses, followed by deriving deterministic reformulations of the probabilistic constraints. We propose an alternative methodology that exploits the constraint structure to inform the uncertainties to be estimated, enabling significant dimensionality reduction. Rather than learning joint distributions of net-load forecast errors across units, we instead directly model the one-dimensional aggregate system forecast error and two-dimensional line errors weighted by power transfer distribution factors. We evaluate our approach under both Gaussian and non-Gaussian distributions on synthetic and real-world datasets, demonstrating significant improvements in statistical accuracy and optimization performance compared to existing methods.
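The dimensionality-reduction idea can be sketched as follows (PTDF rows, error covariance, and system size are invented for illustration; the sketch models one scalar error per monitored line rather than the paper's exact two-dimensional line products):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bus, n_line, n_samples = 30, 4, 1000

# Hypothetical PTDF rows for monitored lines and correlated nodal forecast errors.
ptdf = rng.uniform(-0.5, 0.5, size=(n_line, n_bus))
cov = np.eye(n_bus) + 0.2 * np.ones((n_bus, n_bus))
eps = rng.multivariate_normal(np.zeros(n_bus), cov, size=n_samples)

# Rather than a 30-dimensional joint model of nodal errors, estimate only
# the low-dimensional quantities the constraints actually depend on:
agg_err = eps.sum(axis=1)   # 1-D aggregate system forecast error
line_err = eps @ ptdf.T     # per-line PTDF-weighted errors

# Empirical quantiles feed deterministic reformulations of chance constraints.
q95_agg = np.quantile(agg_err, 0.95)
q95_line = np.quantile(np.abs(line_err), 0.95, axis=0)
```

Only a handful of one- and two-dimensional distributions must be estimated regardless of the number of buses, which is the source of the dimensionality reduction.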
Authors: Hongchao Zhang, Mohamad H. Kazma, Meiyi Ma, Taylor T. Johnson, Ahmad F. Taha
Differential-algebraic equations (DAEs) arise in power networks, chemical processes, and multibody systems, where algebraic constraints encode physical conservation laws. The safety of such systems is critical, yet safe control is challenging because algebraic constraints restrict allowable state trajectories. Control barrier functions (CBFs) provide computationally efficient safety filters for ordinary differential equation (ODE) systems. However, existing CBF methods are not directly applicable to DAEs due to potential conflicts between the CBF condition and the constraint manifold. This paper introduces DAE-aware CBFs that incorporate the differential-algebraic structure through projected vector fields. We derive conditions that ensure forward invariance of safe sets while preserving algebraic constraints and extend the framework to higher-index DAEs. A systematic verification framework is developed, establishing necessary and sufficient conditions for geometric correctness and feasibility of DAE-aware CBFs. For polynomial systems, sum-of-squares certificates are provided, while for nonpolynomial and neural network candidates, satisfiability modulo theories are used for falsification. The approach is validated on wind turbine and flexible-link manipulator systems.
Authors: Daniel Shen, Marija Ilic, John Parsons
Energy storage shifts energy from off-peak periods to on-peak periods. Unlike conventional generation, storage is duration-limited: the stored energy capacity constrains the duration over which it can supply power. To understand how these constraints affect optimal pricing and investment decisions, we extend the classic two-period peak-load pricing model to include duration-limited storage. By adopting assumptions typical of solar-dominated systems, we link on- and off-peak prices to storage investment costs, round-trip efficiency, and the duration of the peak period. The bulk of the scarcity premium from on-peak prices is associated with the fixed costs of storage as opposed to variable costs stemming from round-trip efficiency losses. Unlike conventional generators, the binding duration constraints lead storage to recover energy capacity costs on a per-peak-event basis instead of amortizing these costs over total peak hours. A numerical example illustrates the implications for equilibrium prices and capacity investment.
Authors: Arnab Pal, Suman Singha Roy, Asim Kumar Naskar
This research presents a novel approach to solving the economic load dispatch (ELD) problem in smart grid systems by leveraging a multi-agent distributed consensus strategy. The core idea revolves around achieving agreement among generators on their incremental cost values, thereby enabling an optimal allocation of power generation. To enhance convergence and robustness, the study introduces an adaptive coupling weight mechanism within a fully decentralized consensus framework, carefully designed with appropriate initial settings for incremental costs. The proposed distributed control protocol is versatile: it functions effectively in both constrained and unconstrained generator capacity scenarios. Importantly, the methodology ensures that total power generation continuously matches dynamic load demands throughout the dispatch process, maintaining system-wide balance. To accommodate fluctuating and time-varying load profiles, a dummy node is incorporated into the network architecture, acting as a flexible proxy for real-time demand changes. The resilience of the method is further evaluated under communication disruptions, specifically by analyzing generator link failures through a switching network topology. Stability of the system is rigorously established using a Lyapunov-based analysis, assuming an undirected and connected communication graph among agents. To validate the practical efficacy of the proposed technique, comprehensive simulations are conducted on the IEEE 30-bus test system within the MATLAB environment, confirming its accuracy, adaptability, and computational efficiency in realistic smart grid conditions.
Authors: Eder Baron-Prada, Adolfo Anta, Florian Dörfler
This paper presents a decentralized frequency-domain framework to characterize the influence of the operating point on the small-signal stability of converter-dominated power systems. The approach builds on Scaled Relative Graph (SRG) analysis, extended here to address Linear Parameter-Varying (LPV) systems. By exploiting the affine dependence of converter admittances on their steady-state operating points, the centralized small-signal stability assessment of the grid is decomposed into decentralized, frequency-wise geometric tests. Each converter can independently evaluate its feasible stability region, expressed as a set of linear inequalities in its parameter space. The framework provides closed-form geometric characterizations applicable to both grid-following (GFL) and grid-forming (GFM) converters, and validation results confirm its effectiveness.
Authors: Zongyan Zhang, Chao Shen, Xu Wan, Jie Song, Mingyang Sun
The increasing penetration of renewable generation and the growing variability of electrified demand introduce substantial operational uncertainty to modern power systems. Topology reconfiguration is widely recognized as an effective and economical means to enhance grid resilience. Due to the coexistence of AC power-flow constraints and discrete switching decisions, topology reconfiguration in large-scale systems leads to a highly nonlinear and nonconvex optimization problem, making traditional methods computationally prohibitive. Consequently, several studies have explored reinforcement learning-based approaches to improve scalability and operational efficiency. However, their practical implementation is challenged by the high-dimensional combinatorial action space and the need to ensure safety during learning-based decision-making. To address these challenges, this paper presents a safe and intelligent topology control framework that integrates Large Language Models (LLMs) with a Safety Soft Actor-Critic (Safety-SAC) architecture. Operational voltage and thermal limits are reformulated into smooth safety-cost signals, enabling risk-aware policy optimization within a constrained Markov decision process. A knowledge-based Safety-LLM module is further introduced to refine unsafe or suboptimal transitions through domain knowledge and state-informed reasoning, thus guiding the learning agent toward safer and more effective switching actions. Experiments on the IEEE 36-bus and 118-bus Grid2Op benchmarks show that the proposed method consistently achieves higher reward, longer survival time, and lower safety cost than SAC, ACE, and their safety-enhanced variants. These results demonstrate the potential of combining LLM-based reasoning with safe reinforcement learning to achieve scalable and reliable grid topology control.
Authors: Meysam Masoudi, Tahar Zanouda, Milad Ganjalizadeh, Cicek Cavdar
Electricity consumption in mobile networks is increasing with the continued 5G expansion, rising data traffic, and more complex infrastructures. However, energy management is often handled independently by each mobile network operator (MNO), leading to limited coordination and missed opportunities for collective efficiency gains. To address this gap, we propose a privacy-preserving framework for automated energy infrastructure sharing among co-located MNOs. Our framework consists of three modules: (i) a federated learning-based privacy-preserving site energy consumption forecasting module, (ii) an orchestration module in which a mixed-integer linear program is solved to schedule energy purchases from the grid, utilization of renewable sources, and shared battery charging or discharging, based on real-time prices, forecasts, and battery state, and (iii) an energy source selection module which handles the selection of cost-effective power sources and storage actions based on predicted demand across MNOs for the next control window. Using data from operational networks, our experiments confirm that the proposed solution substantially reduces operational costs and outperforms non-sharing baselines, with gains that increase as network density rises in 5G-and-beyond deployments.
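A hedged, toy version of the orchestration step in module (ii), scheduling grid purchases against a shared battery as a linear program (prices, demands, battery limits, and the LP relaxation of the paper's MILP are all our simplifications):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 4-window schedule: grid purchase g_t plus battery discharge b_t meets demand.
price = np.array([0.30, 0.10, 0.35, 0.15])   # $/kWh, hypothetical real-time prices
demand = np.array([5.0, 3.0, 6.0, 4.0])      # kWh forecast per control window
E0, Emax = 4.0, 8.0                          # initial and maximum stored energy

# Decision vector x = [g_1..g_4, b_1..b_4]; b_t < 0 means charging the battery.
c = np.concatenate([price, np.zeros(4)])     # only grid energy is paid for
A_eq = np.hstack([np.eye(4), np.eye(4)])     # g_t + b_t = demand_t
L = np.tril(np.ones((4, 4)))                 # cumulative-discharge operator
Z = np.zeros((4, 4))
A_ub = np.vstack([np.hstack([Z, L]),         # stored energy stays >= 0
                  np.hstack([Z, -L])])       # stored energy stays <= Emax
b_ub = np.concatenate([E0 * np.ones(4), (Emax - E0) * np.ones(4)])
bounds = [(0, None)] * 4 + [(-3, 3)] * 4     # charge/discharge power limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand, bounds=bounds)
assert res.success and res.fun < float(price @ demand)  # cheaper than grid-only
```

The battery shifts purchases into the cheap windows, so the optimal cost is strictly below the grid-only cost; the paper's MILP additionally handles renewables, multiple MNOs, and integer storage decisions.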
Authors: Mingyuan Yan, Trager Joswig-Jones, Baosen Zhang, Yize Chen, Wenqi Cu
Large-scale AI training workloads in modern data centers exhibit rapid and periodic power fluctuations, which may induce significant voltage deviations in power distribution systems. Existing voltage regulation methods, such as droop control, are primarily designed for slowly varying loads and may therefore be ineffective in mitigating these fast fluctuations. In addition, repeated control actions can incur substantial cost. To address this challenge, this paper proposes a decentralized switching-reference voltage control framework that exploits the structured behavior of AI training workloads. We establish conditions for voltage convergence and characterize an effective reference design that aligns with the two dominant operating levels of the AI training workload. The switching rule for voltage references is implemented solely using local voltage measurements, enabling simple local implementation while significantly reducing control effort. Simulation studies demonstrate that the proposed method substantially reduces both voltage deviations and reactive control effort, while remaining compatible with internal data center control strategies without requiring extensive coordination.
Authors: Jason J. Choi, Donggun Lee, Boyang Li, Jonathan P. How, Koushil Sreenath, Sylvia L. Herbert, Claire J. Tomlin
Control invariant sets are crucial for various methods that aim to design safe control policies for systems whose state constraints must be satisfied over an indefinite time horizon. In this article, we explore the connections among reachability, control invariance, and Control Barrier Functions (CBFs). Unlike prior formulations based on backward reachability concepts, we establish a strong link between these three concepts by examining the inevitable Forward Reachable Tube (FRT), which is the set of states such that every trajectory reaching the FRT must have passed through a given initial set of states. First, our findings show that the inevitable FRT is a robust control invariant set if it has a continuously differentiable boundary. If the boundary is not differentiable, the FRT may lose invariance. We also show that any robust control invariant set including the initial set is a superset of the FRT if the boundary of the invariant set is differentiable. Next, we formulate a differential game between the control and disturbance, where the inevitable FRT is characterized by the zero-superlevel set of the value function. By incorporating a discount factor in the cost function of the game, the barrier constraint of the CBF naturally arises in the Hamilton-Jacobi (HJ) equation and determines the optimal policy. The resulting FRT value function serves as a CBF-like function, and conversely, any valid CBF is also a forward reachability value function. We further prove that any $C^1$ supersolution of the HJ equation for the FRT value functions is a valid CBF and characterizes a robust control invariant set that outer-approximates the FRT. Building on this property, we finally devise a novel method that learns neural control barrier functions characterizing a control invariant superset of the FRT of a given initial set.
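As a hedged notational sketch of the barrier constraint the abstract says emerges from the discounted game (symbols and sign conventions are ours, following common CBF usage, not necessarily the paper's):

```latex
% For a CBF candidate $h$, discount rate $\gamma > 0$, and dynamics
% $\dot{x} = f(x, u, d)$ with control $u$ and disturbance $d$:
\sup_{u \in \mathcal{U}} \, \inf_{d \in \mathcal{D}} \,
  \nabla h(x)^{\top} f(x, u, d) \;\ge\; -\gamma\, h(x)
% This inequality renders the zero-superlevel set
% $\{\, x : h(x) \ge 0 \,\}$ robustly control invariant; a $C^1$
% supersolution of the corresponding stationary HJ equation satisfies
% it pointwise, which is the sense in which it qualifies as a CBF.
```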
Authors: Milad Hoseinpour, Vladimir Dvorkin
The optimal power flow (OPF) is a multi-valued, non-convex mapping from loads to dispatch setpoints. The variability of system parameters (e.g., admittances, topology) further contributes to the multiplicity of dispatch setpoints for a given load. Existing deep learning OPF solvers are single-valued and thus fail to capture the variability of system parameters unless fully represented in the feature space, which is prohibitive. To solve this problem, we introduce a diffusion-based OPF solver, termed \textit{DiffOPF}, that treats OPF as a conditional sampling problem. The solver learns the joint distribution of loads and dispatch setpoints from operational history, and returns the marginal dispatch distributions conditioned on loads. Unlike single-valued solvers, DiffOPF enables sampling statistically credible warm starts with favorable cost and constraint satisfaction trade-offs. We explore the sample complexity of DiffOPF to ensure the OPF solution within a prescribed distance from the optimization-based solution, and verify this experimentally on power system benchmarks.
Authors: Bowen Li, Junting Chen, Nikolaos Pappas
Complete awareness of the wireless environment, crucial for future intelligent networks, requires sensing all transmitted signals, not just the strongest. A fundamental barrier is estimating the target signal when it is buried under strong co-channel interference from other transmitters, a failure of which renders the signal unusable. This work proposes a maximum likelihood (ML)-based cross-preamble estimation framework that exploits carrier frequency offset (CFO) constancy across beam-swept synchronization signals (SS), coherently aggregating information across multiple observations to reinforce the desired signal against overwhelming interference. Cramer-Rao lower bound (CRLB) analysis and simulation demonstrate reliable estimation even when the signal is over a thousand times weaker than the interference. A low-altitude radio-map case study further verifies the framework's practical effectiveness.
Authors: Duncan Eddy, Esen Yel, Emma Passmore, Niles Egan, Grayson Armour, Dylan M. Asmar, Mykel J. Kochenderfer
Managing announced task completion times is a fundamental control problem in project management. While extensive research exists on estimating task durations and task scheduling, the problem of when and how to update completion times communicated to stakeholders remains understudied. Organizations must balance announcement accuracy against the costs of frequent timeline updates, which can erode stakeholder trust and trigger costly replanning. Despite the prevalence of this problem, current approaches rely on static predictions or ad-hoc policies that fail to account for the sequential nature of announcement management. In this paper, we formulate the task announcement problem as a Partially Observable Markov Decision Process (POMDP) where the control policy must decide when to update announced completion times based on noisy observations of true task completion. Since most state variables (current time and previous announcements) are fully observable, we leverage the Mixed Observability MDP (MOMDP) framework to enable more efficient policy optimization. Our reward structure captures the dual costs of announcement errors and update frequency, enabling synthesis of optimal announcement control policies. Using off-the-shelf solvers, we generate policies that act as feedback controllers, adaptively managing announcements based on belief state evolution. Simulation results demonstrate significant improvements in both accuracy and announcement stability compared to baseline strategies, achieving up to 75\% reduction in unnecessary updates while maintaining or improving prediction accuracy.
Authors: Everest Bloomer, Irem Didin, Ching-Yi Lin, Sahil Shah
Computational workloads are growing exponentially, driving power consumption to unsustainable levels. Efficiently distributing large-scale networks is an NP-Complete problem equivalent to Boolean satisfiability (SAT), making it one of the core challenges in modern computation. To address this, physics- and device-inspired methods such as Ising systems have been explored for solving SAT more efficiently. In this work, we implement an Ising model equivalence of the 3-SAT problem using a ReRAM crossbar fabricated in the Skywater 130 nm CMOS process. Our ReRAM-based algorithm achieves $91.0\%$ accuracy in matrix representation across iterative reprogramming cycles. Additionally, we establish a foundational energy profile by measuring the energy costs of small sub-matrix structures within the problem space, demonstrating a sub-linear growth trajectory when combining sub-matrices into larger problems. These results demonstrate a promising platform for developing scalable architectures to accelerate NP-Complete problem solving.
Authors: Abid Afridi, Alexis A. Dowhuszko, Jevgenij Krivochiza, Risto Wichman, Jyri Hämäläinen
Non-terrestrial networks (NTNs) increasingly rely on non-geostationary (NGSO) constellations that combine radio frequency (RF) feeder links (FLs) with free space optical (FSO) inter-satellite links (ISLs). Downlink performance in such systems is often constrained by uneven satellite-gateway visibility, data traffic congestion, and rain-induced FL attenuation, leaving the downlink capacity of some satellites underutilized while others become bottlenecks. To prevent such non-uniform load distribution, this paper presents a fairness-driven load balancing strategy that treats the satellite constellation in space as an anycast multi-commodity flow problem. Then, by solving an equivalent linear programming optimization problem, the proposed algorithm dynamically selects the most convenient ground station (GS) to serve each satellite and, when needed, offloads data traffic to adjacent satellites through FSO ISLs. Using a realistic MEO satellite constellation with 1550 nm FSO ISLs and Ka-band feeder links, the method stabilizes the reverse link data service, maintaining the average data rate but notably improving the worst-case throughput. Our proposed algorithm enhances the minimum downlink data rate by more than 25% in the presence of rain and by over 10% under no-rain conditions. These results demonstrate that the use of an ISL-assisted load-balancing scheme mitigates FL bottlenecks and enhances fairness across the satellite constellation, offering a scalable basis for resource allocation in future NTN systems.
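The fairness objective, maximizing the worst-case satellite throughput over ground-station assignments, can be illustrated with a toy brute-force search. The paper solves a linear program over an anycast multi-commodity flow; the exhaustive enumeration and the equal-sharing capacity model below are stand-in assumptions for a two-satellite example.

```python
from itertools import product

def max_min_assignment(demand, capacity):
    """Brute-force the satellite-to-ground-station assignment that
    maximizes the minimum satisfied rate (toy stand-in for the LP)."""
    sats, gss = list(demand), list(capacity)
    best, best_min = None, -1.0
    for choice in product(gss, repeat=len(sats)):
        # Share each ground station's capacity equally among its satellites.
        load = {g: [] for g in gss}
        for s, g in zip(sats, choice):
            load[g].append(s)
        rates = {s: min(demand[s], capacity[g] / len(members))
                 for g, members in load.items() if members for s in members}
        worst = min(rates.values())
        if worst > best_min:
            best_min, best = worst, dict(zip(sats, choice))
    return best, best_min
```

With two satellites of demand 10 and ground stations of capacity 10 and 6, splitting the satellites across both stations yields a worst-case rate of 6, whereas piling both onto the larger station yields only 5, the qualitative effect the paper's load balancing exploits at constellation scale.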
Authors: Max Domagk, Jan Meyer, Marco Lindner
Large-scale power quality (PQ) measurement campaigns generate vast amounts of multivariate data, in which systematic dependencies are difficult to identify using conventional analysis techniques. This paper presents a methodology for the automated analysis and visualization of correlation structures in large PQ datasets. Building on an existing framework, the approach is adapted for shorter observation periods and enhanced with aggregation and distance-based visualization techniques. Daily Spearman correlation coefficients are averaged via Fisher's z-transformation and aggregated across phases, parameters, and sites. The resulting correlation structures are visualized using hierarchical clustering and multidimensional scaling to reveal consistent and recurring relationships. The methodology is demonstrated using data from 85 measurement sites within the German transmission system.
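The Fisher-z averaging step is a standard construction and can be sketched in a few lines (a minimal illustration, not the authors' code): transform each daily correlation with $z = \operatorname{atanh}(r)$, average in z-space, and back-transform with $\tanh$.

```python
import math

def fisher_average(daily_rhos):
    """Average daily Spearman correlations via Fisher's z-transform:
    z = atanh(r), arithmetic mean in z-space, back-transform with tanh."""
    zs = [math.atanh(r) for r in daily_rhos]
    return math.tanh(sum(zs) / len(zs))
```

Averaging in z-space rather than directly on the correlations reduces the bias that the bounded, skewed sampling distribution of $r$ introduces; for mixed values such as 0.3 and 0.7 the Fisher average lands slightly above the arithmetic mean.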
Authors: Ognjen Stanojev, Pol Jane Soneira, Gösta Stomberg, Mario Schweizer
Thyristor rectifiers are a well-established and cost-effective solution for controlled high-power rectification, commonly used for hydrogen electrolysis and HVDC transmission. However, small-signal modeling and analysis of thyristor rectifiers remain challenging due to their line-commutated operation and nonlinear switching dynamics. This paper first revisits conventional RMS-based modeling of thyristor rectifiers and subsequently proposes a novel nonlinear state-space EMT model in the dq domain that can be linearized for small-signal analysis. The proposed model accurately captures all the relevant dynamic phenomena, including PLL dynamics, the commutation process, and switching delays. It is derived in polar coordinates, offering novel insights into the impact of the PLL and commutation angle on the thyristor rectifier dynamics. We verify the RMS and EMT models against a detailed switching model and demonstrate their applicability through small-signal stability analysis of a modified IEEE 39-bus test system that incorporates thyristor rectifier-interfaced hydrogen electrolyzers, synchronous generators, and grid-forming converters.
Authors: Verena Häberle, Kehao Zhuang, Xiuqiang He, Linbin Huang, Gabriela Hug, Florian Dörfler
This paper introduces a conceptual foundation for Next Generation Grid Codes (NGGCs) based on stability and performance certificates, enabling the provision of dynamic ancillary services such as fast frequency and voltage regulation through decentralized frequency-domain criteria. The NGGC framework offers two key benefits: (i) rigorous closed-loop stability guarantees, and (ii) explicit performance guarantees for frequency and voltage dynamics in power systems. Regarding (i) stability, we employ loop-shifting and passivity-based techniques to derive local frequency-domain stability certificates for individual device dynamics. These certificates ensure the closed-loop stability of the entire interconnected power system through fully decentralized verification. Concerning (ii) performance, we establish quantitative bounds on critical time-domain indicators of system dynamics, including the average-mode frequency and voltage nadirs, the rate-of-change-of-frequency (RoCoF), steady-state deviations, and oscillation damping capabilities. The bounds are obtained by expressing the performance metrics as frequency-domain conditions on local device behavior. The NGGC framework is non-parametric, model-agnostic, and accommodates arbitrary device dynamics under mild assumptions. It thus provides a unified, decentralized approach to certifying both stability and performance without requiring explicit device-model parameterizations. Moreover, the NGGC framework can be directly used as a set of specifications for control design, offering a principled foundation for future stability- and performance-oriented grid codes in power systems.
Authors: Ruslan Zakirzyanov
Metaheuristic algorithms are currently widely used to solve a variety of optimization problems across various industries. This article discusses the application of a metaheuristic algorithm to optimize the hierarchical architecture of an industrial distributed control system. The success of the algorithm depends largely on the choice of starting conditions and algorithm parameters. We examine the impact of parameter selection on the convergence of a modified ant colony algorithm and provide recommendations for tuning the algorithm to achieve optimal results for a specific industrial problem. The findings presented in this article can also be applied to other combinatorial optimization problems.
Authors: Yiliu He, Haiwang Zhong, Grant Ruan, Yan Xu, Chongqing Kang
High-voltage direct current (HVDC) technology has played a crucial role for long-distance transmission of renewable power generation. However, the integration of large-capacity HVDC lines introduces significant frequency security challenges during HVDC fault emergencies. This paper proposes an emergency-aware and frequency-constrained HVDC planning method to optimize the capacity of inter-area HVDC tie-lines in a multi-area asynchronously interconnected grid. Firstly, a coordinated emergency frequency control scheme is proposed to allocate the emergency control resources during HVDC faults. Then, an enhanced system frequency response model integrating event-driven emergency frequency control is developed and a weighted oblique decision tree approach is employed to extract frequency nadir security constraints. The proposed planning model considers all potential HVDC fault emergencies while treating candidate HVDC capacities as decision variables. Simulation results demonstrate superior performance in balancing economic efficiency with frequency security requirements, providing a practical solution for inter-area HVDC planning.
Authors: Scott Angus, Jethro Browell, David Greenwood, Matthew Deakin
Low voltage (LV) distribution transformers face accelerating demand growth while replacement lead times and costs continue to rise, making improved utilisation of existing assets essential. Static and conservative protection devices (PDs) in distribution transformers are inflexible and limit the available headroom of the transformer. This paper presents a probabilistic framework for dynamically forecasting optimal thermal protection settings. The proposed approach directly predicts the day-ahead scale factor which maximises the dynamic thermal rating of the transformer from historical load, temperature, and metadata, using clustered quantile regression models trained on 644 UK LV transformers. Probabilistic forecasting quantifies overheating risk directly through the prediction percentile, enabling risk-informed operational decisions. Results show a 10--12\% additional capacity gain compared to static settings, with hotspot temperature risk matching the selected percentile, including under realistic temperature forecast errors. These results demonstrate a practical approach for distribution network operators to take advantage of PDs with adaptive settings to maximise capacity and manage risk on operational time scales.
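The mechanism by which a quantile regression model quantifies risk "through the prediction percentile" rests on the pinball (quantile) loss: minimizing it over constants recovers the empirical q-quantile. The sketch below illustrates that property; it is not the paper's clustered regression model, and the function names are placeholders.

```python
def pinball_loss(q, y_true, y_pred):
    """Pinball (quantile) loss for quantile level q in (0, 1)."""
    total = 0.0
    for y, f in zip(y_true, y_pred):
        e = y - f
        total += max(q * e, (q - 1) * e)
    return total / len(y_true)

def best_constant(q, y, candidates):
    """The constant minimizing pinball loss is the empirical q-quantile,
    which is why a q-level forecast overheats with probability ~1 - q."""
    return min(candidates, key=lambda c: pinball_loss(q, y, [c] * len(y)))
```

A model trained at, say, the 0.95 level therefore issues scale factors that are exceeded by the true safe value about 5% of the time, which is exactly the "risk matching the selected percentile" behaviour the abstract reports.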
Authors: Christian Doh Dinga, Francesco Lombardi, Roald Arkesteijn, Arjan van Voorden, Sander van Rijn, Laurens James de Vries, Milos Cvetkovic
District heating networks (DHNs) have significant potential to decarbonize residential heating and accelerate the energy transition. However, designing carbon-neutral DHNs requires balancing several objectives, including economic costs, social acceptance, long-term uncertainties, and grid-integration challenges from electrification. By combining modeling-to-generate-alternatives with power flow simulation techniques, we develop a decision-support method for designing carbon-neutral DHNs that are cost-effective, socially acceptable, robust to future risks, and impose minimal impacts on the electricity grid. Applying our method to a Dutch case, we find substantial diversity in how carbon-neutral DHNs can be designed. The flexibility in technology choice, sizing, and location enables accommodating different real-world needs and achieving high electrification levels without increasing grid loading. For instance, intelligently located heat pumps and thermal storage can limit grid stress even when renewable baseload heat sources and green-fuel boilers are scarce. Using our method, planners can explore diverse carbon-neutral DHN designs and identify the design that best balances stakeholders' preferences.
Authors: Qinghua Ma, Reetam Sen Biswas, Denis Osipov, Guannan Qu, Soummya Kar, Shimiao Li
Existing or planned power grids need to evaluate survivability under extreme events, such as a number of peak-load overloading conditions, which could possibly cause system collapses (i.e., blackouts). For realistic extreme events that are correlated or share similar patterns, it is reasonable to expect that the dominant vulnerability or failure sources behind them share the same locations but with different severity. Early warning diagnosis that proactively identifies the key vulnerabilities responsible for a number of system collapses of interest can significantly enhance resilience. This paper proposes a multi-period sparse optimization method, enabling the discovery of persistent failure sources across a sequence of collapsed systems with increasing system stress, such as rising demand or worsening contingencies. This work defines persistency and efficiently integrates persistency constraints to capture the ``hidden'' evolving vulnerabilities. Circuit-theory based power flow formulations and circuit-inspired optimization heuristics are used to facilitate the scalability of the method. Experiments on benchmark systems show that the method reliably tracks persistent vulnerability locations under increasing load stress, and scales to large systems (on average taking around 200 s per scenario on 2000+ bus systems).
Authors: Yiwei Dong, Wenqi Cui, Han Xu, Adam Wierman, Steven Low
Power distribution systems are increasingly exposed to large voltage fluctuations driven by intermittent solar photovoltaic generation and rapidly varying loads (e.g., electric vehicles and storage). To address this challenge, a number of advanced controllers have been proposed for voltage regulation. However, these controllers typically rely on fixed linear approximations of voltage dynamics. As a result, the solutions may become infeasible when applied to the actual voltage behavior governed by nonlinear power flow equations, particularly under heavy power injection from distributed energy resources. This paper proposes a data-driven successive linearization approach for voltage control under nonlinear power flow constraints. By leveraging the fact that the deviation between the nonlinear power flow solution and its linearization is bounded by the distance from the operating point, we perform data-driven linearization around the most recent operating point. Convergence of the proposed method to a neighborhood of KKT points is established by exploiting the convexity of the objective function and the structural properties of the nonlinear constraints. Case studies show that the proposed approach achieves fast convergence and adapts quickly to changes in net load.
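The relinearize-at-the-latest-operating-point idea can be shown on a scalar toy problem. The nonlinear map $v(p) = \sqrt{1+p}$ below is an assumed stand-in for the power flow, not the paper's feeder model; the loop simply rebuilds a local linear model each iteration and steps toward the target.

```python
def successive_linearization(v_ref, p0, steps=20):
    """Toy successive linearization: the nonlinear 'power flow'
    v(p) = sqrt(1 + p) is re-linearized around the latest operating
    point each iteration, mirroring the data-driven relinearization idea."""
    p = p0
    for _ in range(steps):
        v = (1 + p) ** 0.5
        dv_dp = 0.5 / v                 # local sensitivity at this point
        p = p + (v_ref - v) / dv_dp     # linear-model step toward v_ref
    return p
```

Because each linearization is valid only near the current operating point, stepping and relinearizing repeatedly avoids the infeasibility that a single fixed linear approximation can incur far from its expansion point, which is the failure mode the paper attributes to fixed-linearization controllers.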
Authors: Osasumwen Cedric Ogiesoba-Eguakun, Kaveh Ashenayi, Suman Rath
Public power-system datasets often lack electromagnetic transient (EMT) waveforms, inverter control dynamics, and diverse disturbance coverage, which limits their usefulness for training surrogate models and studying cyber-physical behavior in inverter-based microgrids. This paper presents a high-fidelity digital twin dataset generated from a MATLAB/Simulink EMT model of a low-voltage AC microgrid with ten inverter-based distributed generators. The dataset records synchronized three-phase PCC voltages and currents, per-DG active power, reactive power, and frequency, together with embedded scenario labels, producing 38 aligned channels sampled at $\Delta t = 2~\mu$s over $T = 1$~s ($N = 500{,}001$ samples) per scenario. Eleven operating and disturbance scenarios are included: normal operation, load step, voltage sag (temporary three-phase fault), load ramp, frequency ramp, DG trip, tie-line trip, reactive power step, single-line-to-ground faults, measurement noise injection, and communication delay. To ensure numerical stability without altering sequence length, invalid samples (NaN, Inf, and extreme outliers) are repaired using linear interpolation. Each scenario is further validated using system-level evidence from mean frequency, PCC voltage magnitude, total active power, voltage unbalance, and zero-sequence current to confirm physical observability and correct timing. The resulting dataset provides a consistent, labeled EMT benchmark for surrogate modeling, disturbance classification, robustness testing under noise and delay, and cyber-physical resilience analysis in inverter-dominated microgrids. The dataset and processing scripts will be released upon acceptance.
Authors: Clement Wong, Amalie Trewartha, Steven B. Torrisi, Alexandre L. S. Filipowicz
Vehicle-to-Grid (V2G) adoption is hindered by uncertainties regarding its effects on battery lifetime and vehicle usability. These uncertainties are compounded by limited insight into real-world vehicle usage. Here, we leverage real-world Californian BEV usage data to design and evaluate a user-centric V2G strategy. We identified four clustered driver profiles for V2G assessment, ranging from "Daily Chargers" to "Public Chargers". We show that V2G participation is most feasible for "Daily Chargers," and that the effects on battery lifetime depend on calendar aging sensitivity. For batteries with low sensitivity, V2G participation increases capacity loss for all drivers. However, for batteries with high sensitivity, V2G participation can lead to negligible changes in capacity or even improved capacity retention, particularly for drivers who tend to keep their batteries at high states of charge. Our findings enable stakeholders to better assess the potential and viability of V2G adoption.
Authors: Wasseem Al-Rousan, Caisheng Wang, Feng Lin
Cascading failures in power systems caused by sequential tripping of components are a serious concern as they can lead to complete or partial shutdowns, disrupting vital services and causing damage and inconvenience. In prior work, we developed a new approach for identifying and preventing cascading failures in power systems. The approach uses the supervisory control technique of discrete event systems (DES), incorporating both on-line lookahead control and forcible events. In this paper, we use modular supervisory control of DES to reduce computational complexity and increase the robustness and reliability of control. Modular supervisory control allows us to predict and mitigate cascading failures in power systems more effectively. We implemented the proposed control technique on a simulation platform developed in MATLAB. The calculations of modular supervisory control of DES are performed using an external tool and imported into the MATLAB platform. We conduct simulation studies for the IEEE 30-bus, 118-bus and 300-bus systems, and the results demonstrate the effectiveness of our proposed approach.
Authors: Ali Rajaei, Jochen L. Cremer
Substation reconfiguration via busbar splitting can mitigate transmission grid congestion and reduce operational costs. However, existing approaches neglect the security of substation topology, particularly for substations without busbar splitting (i.e., closed couplers), which can lead to severe consequences. Additionally, the computational complexity of optimizing substation topology remains a challenge. This paper introduces a MILP formulation for security-constrained substation reconfiguration (SC-SR), considering N-1 line, coupler and busbar contingencies to ensure secure substation topology. To efficiently solve this problem, we propose a heuristic approach with multiple master problems (HMMP). A central master problem optimizes dispatch, while independent substation master problems determine individual substation topologies in parallel. Linear AC power flow equations ensure PF accuracy, while feasibility and optimality sub-problems evaluate contingency cases. The proposed HMMP significantly reduces computational complexity and enables scalability to large-scale power systems. Case studies on the IEEE 14-bus, 118-bus, and PEGASE 1354-bus system show the effectiveness of the approach in mitigating the impact of coupler and busbar tripping, balancing system security and cost, and computational efficiency.
Authors: Juana M. Martínez-Heredia, José L. Mora
This preprint presents a neural network tuner for the finite state model predictive control of an induction motor. The tuner adjusts the parameters of the controllers in the speed loop and in the stator current loop. The results are assessed using a five-phase machine in an experimental setup. Data for the neural network training is obtained from the experiments using step tests.
Authors: Sobhan Badakhshan, Roshni Anna Jacob, Ali Mahboub Rad, Chao Pan, Yaoyu Li, Jie Zhang
The accelerating growth of computational demand in modern data centers has further heightened the need for power infrastructures that are highly reliable, environmentally sustainable, and capable of supporting grid stability. Small Modular Reactors (SMRs) as a clean source of energy are particularly attractive for next-generation hyperscale data centers with significant electrical and cooling demands. This paper presents a comprehensive dynamic modeling and stability analysis of a grid-connected Integrated Energy System (IES) designed for data center applications. The proposed IES integrates an SMR and a battery energy storage system to jointly supply electricity for computational and cooling load while providing stability support to the main grid. A coupled computational-thermal load model is developed to capture the real-time power demand of the data center, incorporating CPU utilization, cooling efficiency, and ambient temperature effects. The integrated SMR-powered data center model is implemented in PSSE and tested on the IEEE 118-bus system under various fault scenarios. Simulation results demonstrate that the IES substantially enhances voltage and frequency stability compared to a conventionally grid-connected data center, minimizing disturbance-induced deviations and improving post-fault recovery.
Authors: Rodrigo Bernal, Taulant Kerci, Federico Milano
In power networks based on Inverter-Based Resources (IBRs), fast controllers cause frequency and voltage dynamics to overlap. Thus, it becomes critical to assess the overall dynamic performance of such networks through a combined system-wide metric. This letter presents a unified metric designed to evaluate dynamic performance in such cases. The proposed metric consists of a weighted sum of local voltage phasor variations at each bus, where the weights are the complex powers injected at the buses. The proposed metric is further decomposed into device-driven and network-driven components, enabling a more comprehensive assessment of grid dynamics. A case study based on a modified version of the IEEE 39-bus system is presented, in which synchronous machines are replaced by inverter-based resources. A sensitivity analysis of the R/X ratio is utilized to evaluate the metric in conventional grids, as well as in those characterized by strong voltage-frequency coupling with complex power flows.
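One plausible reading of the metric, weighting each bus's voltage-phasor variation by the magnitude of the complex power injected there, can be written in a few lines. The exact weighting and decomposition used in the letter are not specified in the abstract, so this is an assumption for illustration only.

```python
def unified_metric(v_now, v_prev, s_inj):
    """Assumed form of the combined metric: sum over buses of |S_k|
    times the local voltage-phasor variation |dv_k|, so buses that
    exchange more complex power contribute more to the system metric."""
    return sum(abs(s) * abs(v1 - v0)
               for v1, v0, s in zip(v_now, v_prev, s_inj))
```

Working with complex phasors and complex powers is what lets a single scalar capture both frequency dynamics (phase-angle variation) and voltage dynamics (magnitude variation) at once.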
Authors: Yixuan Yu, Rajni K. Bansal, Yan Jiang, Pengcheng You
Frequency stability is fundamental to the secure operation of power systems. With growing uncertainty and volatility introduced by renewable generation, secondary frequency regulation must now deliver enhanced performance not only in the steady state but also during transients. This paper presents a systematic framework to embed learning in the design of a primal-dual controller that provides provable (potentially exponential) stability and steady-state optimality, while simultaneously improving key transient metrics, including frequency nadir and control effort, in a data-driven manner. In particular, we employ the primal-dual dynamics of an optimization problem that encodes steady-state objectives to realize secondary frequency control with asymptotic stability guarantee. To augment transient performance of the controller via learning, a change of variables on control inputs, which will be deployed by neural networks, is proposed such that under mild conditions, stability and steady-state optimality are preserved. It further allows us to define a learning goal that accounts for the exponential convergence rate, frequency nadir and accumulated control effort, and use sample trajectories to enhance these metrics. Simulation results validate the theories and demonstrate superior transient performance of the learning-augmented primal-dual controller.
Authors: Mihitha Maithripala, Chenyang Qiu, Zongli Lin
A privacy-preserving dynamic average consensus (DAC) algorithm is proposed that achieves consensus while preventing external eavesdroppers from inferring the reference signals and their derivatives. During the initialization phase, each agent generates a set of sinusoidal signals with randomly selected frequencies and exchanges them with its neighboring agents to construct a masking signal. Each agent masks its reference signals using this composite masking signal before executing the DAC update rule. It is shown that the developed scheme preserves the convergence properties of the conventional DAC framework while preventing information leakage to external eavesdroppers. Furthermore, the developed algorithm is applied to state-of-charge (SoC) balancing in a networked battery energy storage system to demonstrate its practical applicability. Simulation results validate the theoretical findings.
Authors: Ichiro Toyoshima, Pierre-Louis Poirion, Tomohide Yamazaki, Kota Yaguchi, Masayuki Kubota, Ryota Mizutani, Akiko Takeda
Power company operators make power generation plans one day in advance, in what is known as the Unit Commitment (UC) problem. UC is exposed to uncertainties, such as unknown electricity load and disturbances caused by renewable energy sources, especially PVs. In previous research, we proposed the Renewable Energy Robust Optimization Problem (RE-RP), which addresses these uncertainties by considering suppression. In this paper, we propose a new model called RE-RP with fairness (RE-RPfair), which aims to achieve fair allocation among PVs. This model is an expansion of the original RE-RP, and we prove its effectiveness through simulation. To measure the degree of fairness, we use the Gini Index, which is well-known in social science.
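The Gini Index used to score allocation fairness is a standard statistic and can be computed directly from the per-PV allocations (a generic implementation, not the authors' code): 0 means a perfectly equal allocation, values near 1 mean the burden is concentrated on few PVs.

```python
def gini(allocations):
    """Gini index of per-PV allocations via mean absolute difference:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(allocations)
    mean = sum(allocations) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(x - y) for x in allocations for y in allocations)
    return diff_sum / (2 * n * n * mean)
```

For example, suppressing a single PV out of four while sparing the rest gives a Gini index of 0.75, whereas spreading the same suppression equally gives 0.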
Authors: Mohammed Cherifi
Public EV charging infrastructure suffers from significant failure rates -- with field studies reporting up to 27.5% of DC fast chargers non-functional -- and multi-day mean time to resolution, imposing billions in annual economic burden. Cloud-centric architectures cannot achieve the latency, reliability, and bandwidth characteristics required for autonomous operation. We present Auralink SDC (Software-Defined Charging), an architecture deploying domain-specialized AI agents at the network edge for autonomous charging infrastructure management. Key contributions include: (1) Confidence-Calibrated Autonomous Resolution (CCAR), enabling autonomous remediation with formal false-positive bounds; (2) Adaptive Retrieval-Augmented Reasoning (ARA), combining dense and sparse retrieval with dynamic context allocation; (3) Auralink Edge Runtime, achieving sub-50ms TTFT on commodity hardware under PREEMPT_RT constraints; and (4) Hierarchical Multi-Agent Orchestration (HMAO). Implementation uses AuralinkLM models fine-tuned via QLoRA on a domain corpus spanning OCPP 1.6/2.0.1, ISO 15118, and operational incident histories. Evaluation on 18,000 labeled incidents in a controlled environment establishes 78% autonomous incident resolution, 87.6% diagnostic accuracy, and 28-48ms TTFT latency (P50). This work presents architecture and implementation patterns for edge-deployed industrial AI systems with safety-critical constraints.
Authors: Mohamad Alkadamani, Halim Yanikomeroglu, Amir Ghasemi
The surge in wireless connectivity demand, coupled with the finite nature of spectrum resources, compels the development of efficient spectrum management approaches. Spectrum sharing presents a promising avenue, although it demands precise characterization of spectrum demand for informed policy-making. This paper introduces HR-GAT, a hierarchical resolution graph attention network model, designed to predict spectrum demand using geospatial data. HR-GAT adeptly handles complex spatial demand patterns and resolves issues of spatial autocorrelation that usually challenge standard machine learning models, often resulting in poor generalization. Tested across five major Canadian cities, HR-GAT improves predictive accuracy of spectrum demand by 21% over eight baseline models, underscoring its superior performance and reliability.
Authors: Yen-Ju Lu, Yashesh Gaur, Wei Zhou, Benjamin Muller, Jesus Villalba, Najim Dehak, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Srinivasan Iyer, Duc Le
Auto-regressive speech-text models pre-trained on interleaved text tokens and discretized speech tokens demonstrate strong speech understanding and generation, yet remain substantially less compute-efficient than text LLMs, partly due to the much longer sequences of speech tokens relative to text. This modality imbalance disproportionately allocates pre-training and inference compute to speech, potentially hindering effective cross-modal alignment and slowing performance scaling by orders of magnitude. We introduce the Latent Speech-Text Transformer (LST), which aggregates speech tokens into latent speech patches that serve as higher-level autoregressive units. This design aligns the sequence-modeling granularity between speech and text while improving computational efficiency. The resulting patches can align with textual units to facilitate cross-modal knowledge transfer and compactly capture recurring acoustic patterns such as silence. Across story-completion benchmarks under both compute-controlled and data-controlled settings, LST consistently improves speech accuracy while also improving text performance, achieving up to +6.5% absolute gain on speech HellaSwag in compute-controlled training (+5.3% in data-controlled training). Under compute-controlled scaling from 420M to 1.8B parameters in a near compute-optimal regime, gains grow with scale, and improvements persist up to 7B parameters under fixed-token budgets. These benefits extend to downstream tasks: LST stabilizes ASR adaptation and reduces the effective autoregressive sequence length during ASR and TTS inference, lowering computational cost without degrading reconstruction quality. The code is available at this https URL.
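The sequence-shortening effect of patching can be shown with a minimal fixed-size grouping (LST's actual latent aggregation is learned; fixed-size tuples and a 0 pad token are illustrative assumptions): grouping speech tokens into patches divides the autoregressive length by roughly the patch size, bringing it closer to text-token granularity.

```python
def patchify(speech_tokens, patch_size):
    """Group discretized speech tokens into fixed-size patches,
    shortening the autoregressive sequence by ~patch_size.
    (Fixed-size grouping is a simplification of LST's latent patches.)"""
    pad = (-len(speech_tokens)) % patch_size      # pad to a multiple
    padded = speech_tokens + [0] * pad            # 0 = hypothetical pad token
    return [tuple(padded[i:i + patch_size])
            for i in range(0, len(padded), patch_size)]
```

Since autoregressive compute grows super-linearly with sequence length, even this crude factor-of-`patch_size` reduction illustrates why patching shifts the speech/text compute balance.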
Authors: Marjorie Hoegen, René Glebke, M. Sahnawaz Alam, Alessandro David, Juan Navarro Arenas, Nikolaus Wirtz, Mario Albanese, Daniele Carta, Felix Motzoi, Antonello Monti, Carsten Schuck, Andrea Benigni, Klaus Wehrle, Ferdinanda Ponci
In modern power systems, edge devices serve as local hubs that collect data, perform on-site computing, sense electrical parameters, execute control actions, and communicate with neighboring edge devices as part of the larger grid. However, as the number of monitored nodes and control loops grows, traditional edge devices face serious limits. They can become overloaded by complex signal processing and decision tasks, causing delays and higher energy use. Standard sensors hit a noise floor that prevents them from detecting minute changes, making it harder to spot early signs of faults or instability. Meanwhile, conventional communication links struggle with bandwidth limits, security risks, and rising encryption demands, which together slow down and weaken the transfer of critical grid information. Quantum technologies have the potential to overcome these challenges. Quantum computers can deliver exponential speed-ups for optimization and machine-learning tasks that ordinary processors cannot handle. Quantum sensors can measure signals with atomic precision, giving edge devices a more precise view of grid dynamics. Quantum communication techniques, including quantum key distribution, offer methods to achieve information-theoretic security and ensure that information arrives quickly and without tampering. We explore how quantum technologies can be integrated into edge devices, highlighting both opportunities and challenges.
Authors: Marco Iorio, Mohammad Golgol, Anamitra Pal
Uncoordinated electric vehicle (EV) charging is altering residential load patterns and pushing distribution transformers to operate beyond their limits. These outcomes can be offset by exploiting the flexibility in work schedules (hybrid, remote vs. in-person) of EV owners, particularly when combined with rooftop photovoltaic (PV) generation. However, this phenomenon has not been explored in-depth yet. This paper addresses this research gap by introducing weekly work schedule-aware robust and chance-constrained optimization formulations for EV charging coordination to determine a transformer's EV hosting capacity. The results obtained using data from a residential feeder in Arizona indicate that an intelligent combination of work schedule flexibility with PV generation can help power utilities effectively manage changing grid demands.
Authors: Nanhong Liu, Jingyi Yan, Mucun Sun, Jie Zhang
In practical data-driven applications on electrical equipment fault diagnosis, training data can be poisoned by sensor failures, which can severely degrade the performance of machine learning (ML) models. However, once the ML model has been trained, removing the influence of such harmful data is challenging, as full retraining is both computationally intensive and time-consuming. To address this challenge, this paper proposes a SISA (Sharded, Isolated, Sliced, and Aggregated)-based machine unlearning (MU) framework for power transformer inter-turn short-circuit fault (ITSCF) localization. The SISA method partitions the training data into shards and slices, ensuring that the influence of each data point is isolated within specific constituent models through independent training. When poisoned data are detected, only the affected shards are retrained, avoiding retraining the entire model from scratch. Experiments on simulated ITSCF conditions demonstrate that the proposed framework achieves almost identical diagnostic accuracy to full retraining, while reducing retraining time significantly.
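The shard-level retraining that makes SISA unlearning cheap can be shown with a minimal ensemble. The majority-label "learner" and the class below are illustrative placeholders, not the paper's transformer-fault models; the point is that removing a poisoned sample retrains exactly one constituent model.

```python
def majority_train(shard):
    """Hypothetical per-shard learner: predict the shard's majority label."""
    labels = [y for _, y in shard]
    maj = max(set(labels), key=labels.count) if labels else 0
    return lambda x: maj

class SISAEnsemble:
    """Minimal SISA sketch: shard the data, train one model per shard,
    aggregate by majority vote; unlearning retrains only the affected shard."""
    def __init__(self, shards, train_fn):
        self.shards = [list(s) for s in shards]   # lists of (x, y) samples
        self.train_fn = train_fn
        self.models = [train_fn(s) for s in self.shards]
        self.retrain_count = len(self.shards)     # initial trainings

    def predict(self, x):
        votes = [m(x) for m in self.models]
        return max(set(votes), key=votes.count)

    def unlearn(self, sample):
        for k, shard in enumerate(self.shards):
            if sample in shard:
                shard.remove(sample)
                self.models[k] = self.train_fn(shard)  # one shard retrained
                self.retrain_count += 1
                return k
        return None
```

Because each sample's influence is confined to its shard (and, in the full method, its slice), the cost of removing poisoned data scales with shard size rather than with the whole training set, which is the source of the reported retraining-time savings.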
Authors: Roshni Anna Jacob, Prithvi Poddar, Jaidev Goel, Souma Chowdhury, Yulia R. Gel, Jie Zhang
Extreme weather events and cyberattacks can cause component failures and disrupt the operation of power distribution networks (DNs), during which reconfiguration and load shedding are often adopted for resilience enhancement. This study introduces a topology-aware graph reinforcement learning (RL) framework for outage management that embeds higher-order topological features of the DN into a graph-based RL model, enabling reconfiguration and load shedding to maximize energy supply while maintaining operational stability. Results on the modified IEEE 123-bus feeder across 300 diverse outage scenarios demonstrate that incorporating the topological data analysis (TDA) tool, persistence homology (PH), yields 9-18% higher cumulative rewards, up to 6% increase in power delivery, and 6-8% fewer voltage violations compared to a baseline graph-RL model. These findings highlight the potential of integrating RL with TDA to enable self-healing in DNs, facilitating fast, adaptive, and automated restoration.
Authors: Jingbo Wang, Roshni Anna Jacob, Harshal D. Kaushik, Jie Zhang
This paper presents a Vehicle-to-Grid (V2G) coordination framework using reinforcement learning (RL). An intelligent control strategy based on the soft actor-critic algorithm is developed for voltage regulation through single- and multi-hub charging systems while respecting realistic fleet constraints. A two-phase training approach integrates stability-focused learning with battery-aware deployment to ensure practical feasibility. Simulation studies on the IEEE 34-bus system validate the framework against a standard Volt-Var/Volt-Watt droop controller. Results indicate that the RL agent achieves performance comparable to the baseline control strategy in nominal scenarios. Under aggressive overloading, it provides robust voltage recovery (within 10% of the baseline) while prioritizing fleet availability and state-of-charge preservation, demonstrating the viability of constraint-aware learning for critical grid services.
Authors: Alessandra Parisio
Integrated Energy Systems (IES) are systems of interconnected electricity, gas, heating, and cooling networks, where the carriers interact and depend on one another. Beyond these core vectors, IES may also incorporate additional infrastructures, such as hydrogen, transportation and water networks, whenever sector coupling or cross-vector exchanges are relevant. Although modern cities already function as multi-energy systems, these networks are still planned and operated in isolation, which leads to inefficiencies and unused flexibility. As distributed energy resources (DERs) grow, local coupling among electricity, heating, and gas networks becomes stronger, so coordinated operation across carriers and infrastructures is essential. IES can improve efficiency, flexibility, and renewable integration, yet operation is challenging because of complex interdependencies, non-convex behaviors, and multi-scale dynamics of the energy networks. A key point that the literature often overlooks is the explicit role of network constraints and topology, which shape feasible operating regions, affect scalability, and determine how uncertainty and formal guarantees can be addressed. This review provides a first comprehensive analysis of network-aware modeling, optimization, and control methods for IES. We identify methodological limitations related to tractability, feasibility guarantees, and scalability. Building on these insights, we outline research directions that include distributed optimization with theoretical guarantees and control approaches informed by operational data. The review offers a foundation for scalable, network-aware operational frameworks for future low-carbon energy systems.
Authors: Hang Nguyen, Koen Kok, Trung Thai Tran, Phuong H. Nguyen
Nowadays, the energy transition is under way in many countries, aiming to reduce dependence on fossil fuels and CO2 emissions. Besides its positive environmental impacts, this transition brings technical challenges to system operators, such as the intricacies of energy system integration, reducing uncertainty, and incentivizing customers with advanced transaction models. Coordination between the transmission system operator (TSO) and the distribution system operator (DSO) is one of the most important means of addressing these obstacles. This coordination enhances the utilization of flexibility from distributed energy resources (DERs) by incentivizing market parties with better willingness-to-pay schemes. This paper provides an overview of coordination schemes (CS), their classification, an assessment of the current situation, and the challenges associated with applying these schemes in a practical context. The main purpose is to investigate the most effective way for TSOs/DSOs to use flexibility resources to maintain the balance of the entire system while ensuring no congestion occurs in the network. A broad range of possible coordination schemes, along with the flexibility services they exploit, is presented, and their pros and cons are analyzed. Additionally, the study presents a general scenario describing the interaction between the operators and a third party providing services to the balancing market, considering cases with and without coordination.
Authors: Young-ho Cho, Mohamad Chehade, Fatima Al-Janahi, Sol Lim, Javad Mohammadi, Hao Zhu
Tackling climate change requires the rapid and deep decarbonization of electric power systems. While energy management systems (EMSs) play a central role in this transition, conventional EMSs focus mainly on economic efficiency and often overlook the environmental impact of operational decisions. To address this gap, this paper proposes a unified, real-time building-level carbon-aware EMS (CAEMS) capable of simultaneously co-optimizing grid imports, energy storage, and flexible demand within a single integrated framework. We formulate a mixed-integer linear program (MILP) model that directly integrates time-varying marginal carbon intensity signals into the EMS objective for coordinated participation in both day-ahead (DA) and real-time (RT) markets. To relax the unrealistic assumption of perfect foresight, we incorporate a model predictive control (MPC) extension driven by a Transformer-based forecaster that jointly predicts electricity prices and carbon intensity. The proposed CAEMS is validated using real-world data from the PJM electricity market. Simulation results demonstrate that modest carbon prices can achieve a significant 22.5% reduction in emissions with only a 1.7% increase in cost.
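To give a flavor of carbon-aware co-optimization, the sketch below solves a heavily simplified continuous relaxation of such a dispatch problem with `scipy.optimize.linprog`. All parameter values are invented, and the paper's CAEMS is a far richer MILP with DA/RT market coupling and MPC; this only shows how a marginal carbon intensity signal enters the objective alongside price.

```python
import numpy as np
from scipy.optimize import linprog

def carbon_aware_dispatch(price, mci, demand, carbon_price,
                          e_max=5.0, p_max=2.0, eta=0.95):
    """Toy LP: choose grid import g_t and battery charge/discharge
    (c_t, d_t) to meet demand at minimum (price + carbon_price * mci)
    cost. Continuous relaxation, illustrative parameters only."""
    T = len(price)
    n = 4 * T  # variables: [g_0..g_T-1, c_0.., d_0.., e_0..]
    cost = np.concatenate([price + carbon_price * mci, np.zeros(3 * T)])
    A_eq, b_eq = [], []
    # demand balance each hour: g_t - c_t + d_t = demand_t
    for t in range(T):
        row = np.zeros(n)
        row[t] = 1; row[T + t] = -1; row[2 * T + t] = 1
        A_eq.append(row); b_eq.append(demand[t])
    # storage dynamics: e_t = e_{t-1} + eta*c_t - d_t  (e_{-1} = 0)
    for t in range(T):
        row = np.zeros(n)
        row[3 * T + t] = 1
        if t > 0:
            row[3 * T + t - 1] = -1
        row[T + t] = -eta; row[2 * T + t] = 1
        A_eq.append(row); b_eq.append(0.0)
    bounds = ([(0, None)] * T           # grid import
              + [(0, p_max)] * (2 * T)  # charge / discharge power
              + [(0, e_max)] * T)       # stored energy
    return linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq,
                   bounds=bounds, method="highs")
```

With cheap, clean hours followed by expensive, dirty ones, the optimizer charges the battery early and serves the dirty hours entirely from storage, which is the qualitative behavior a carbon-aware EMS is meant to produce.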
Authors: Zeynab Kaseb, Matthias Moller, Pedro P. Vergara, Peter Palensky
This study further explores the reformulation of power flow (PF) analysis as a discrete combinatorial optimization problem, proposed in our earlier study on the Adiabatic Quantum Power Flow (AQPF) algorithm, which can be executed on Ising machines, including quantum and quantum-inspired hardware. This approach provides a new representation of the underlying equations, analogous to how neural networks approximate complex functions using simple operations. While the resulting combinatorial optimization problem is NP-hard, it is compatible with emerging quantum hardware designed to address such complexity. We introduce the Adiabatic Quantum Optimal Power Flow (AQOPF) algorithm, which transforms the classical optimal power flow (OPF) equations into quadratic unconstrained binary optimization (QUBO) models. Furthermore, the AQPF and AQOPF algorithms are evaluated on standard test cases ranging from 4- to 1354-bus systems using D-Wave's Advantage™ system (QA), its hybrid quantum-classical solver (HA), and Fujitsu's third-generation Digital Annealer (DAv3) and Quantum-Inspired Integrated Optimization (QIIO) platform. Both full and partitioned formulations are investigated, with particular attention to scalability and robustness in ill-conditioned scenarios. The results demonstrate that the algorithms can reproduce feasible PF and OPF solutions and exhibit promising computational scalability when supported by scalable hardware.
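The "PF as discrete optimization" idea can be made concrete with a toy: encode the unknown voltage magnitude in fixed-point binary and minimize a squared power-balance residual over the binary search space. Exhaustive enumeration below stands in for the annealer, and the single-bus balance is invented for illustration; the actual AQPF/AQOPF construction compiles such problems into QUBO matrices for real hardware.

```python
import numpy as np
from itertools import product

def solve_binary(residual, n_bits=8, lo=0.0, hi=2.0):
    """Brute-force the binary-encoded minimization an annealer would
    perform: v is a fixed-point binary expansion over [lo, hi) and we
    minimize the squared residual of a power-balance equation."""
    weights = (hi - lo) * 2.0 ** -np.arange(1, n_bits + 1)
    best_v, best_e = None, np.inf
    for bits in product([0, 1], repeat=n_bits):
        v = lo + np.dot(bits, weights)
        e = residual(v) ** 2
        if e < best_e:
            best_v, best_e = v, e
    return best_v

# toy single-bus balance: injected power v^2 * g must match 0.81, g = 1,
# so the discrete optimum should sit near v = 0.9
v_star = solve_binary(lambda v: v * v - 0.81)
```

The point of the exercise is that the continuous nonlinear equation becomes a search over bit strings, exactly the object an Ising machine samples; accuracy is then governed by the number of encoding bits.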
Authors: Yubo Zhang, Hassan ZivariFard, Xiaodong Wang
We aim to achieve keyless covert communication with a positive rate in Rayleigh block-fading channels. Specifically, the transmitter and the legitimate receiver are assumed to have either causal or non-causal knowledge of the channel state information (CSI) for both the legitimate and the warden channels, while the warden only knows the statistical distribution of the CSI. Two problem formulations are considered in this work: (a) power allocation: maximizing the sum covert rate subject to a maximum power constraint, and (b) rate allocation: minimizing the power consumption subject to a minimum covert rate constraint. Both problems are formulated based on recent information-theoretic results on covert communication over state-dependent channels. When the CSI of each fading block is known non-causally, we propose a novel three-step method to solve both the power and rate allocation problems. In the case where the CSI is known causally, the power allocation problem can be formulated as a Markov decision process (MDP) and solved using a double deep Q-network (DDQN) approach. Although the rate allocation problem under causal CSI does not directly conform to an MDP structure, it can be approximately solved using the DDQN trained for power allocation. Simulation results demonstrate the effectiveness of the proposed power and rate allocation methods and provide comprehensive performance comparisons across different allocation schemes.
Authors: Richard Campos, Erica Fischer, Eduardo Cotilla-Sanchez
The increasing intensity and frequency of wildfires are causing significant economic and societal impacts on communities through direct effects on the built environment, particularly critical infrastructure. Electrical systems can both initiate wildfires (grid-to-fire) and be damaged by wildfire exposure (fire-to-grid). Therefore, resilient electric systems can both limit ignitions and be hardened such that they are more robust to fire demands. Researchers have investigated wildfire mitigation strategies using traditional transmission and distribution electrical test-system models. However, these test cases may not accurately represent realistic electrical system configurations or fuel landscapes, nor capture community impacts, particularly the social and economic effects of mitigation strategies. A wildfire-aware modeling framework enables researchers to develop test cases that benchmark resilience and mitigation strategies while reducing reliance on overly simplistic assumptions about wildfire effects on electrical systems and communities. This study proposes a modeling framework for wildfire-electrical system research by analyzing recent literature and identifying key dimensions as well as gaps within these dimensions. In particular, the framework considers how fire in the wildland-urban interface propagates in space and time, how hazard-infrastructure interactions (e.g., wind and fire) cause system- and component-level damage, and how wildfire-related power outages affect communities.
Authors: Peng Wang, Luis Badesa
An important limitation of Inverter-Based Resources (IBR) is their reduced contribution to Short-Circuit Current (SCC) compared to that of Synchronous Generators (SGs). With increasing penetration of IBR in most power systems, declining SCC levels pose challenges to secure system operation, as line protections may not trip when required. To address this issue, the SCC ancillary service could be procured via an economic mechanism aimed at securing adequate SCC on all buses. However, the suitability of markets for SCC services is not well understood, given that they could be prone to market power issues: since the SCC contributions from various SGs to a certain bus are determined by the electrical topology of the grid, this is a highly local service. It is necessary to understand whether SGs at advantageous electrical locations could exert market power and, if so, how it could be mitigated. To fill this gap, this paper, for the first time, adopts an SCC-constrained bilevel model to investigate strategic behaviors of SGs. To address the non-convexity due to unit commitment variables, the model is restructured through a primal-dual formulation. Based on a modified IEEE 30-bus system, cases with strategic SGs placed at different buses are analyzed. These studies demonstrate that strategic agents exerting market power by manipulating service prices and extending operating periods could achieve up to triple their revenues from SCC provision, which reduces market efficiency and would increase the financial burden on consumers. These findings highlight the need for careful market design, for which potential measures to mitigate these market power issues are also discussed.
Authors: Shalini Tripathi, Ankur Bansal, Chinmoy Kundu
This paper explores secure communication in an underwater energy-harvesting (EH) relay network that supports hybrid optical-acoustic transmission. The optical hop is modeled using a Gamma-Gamma turbulence channel with pointing errors and may occasionally be blocked by underwater obstacles. At the same time, an eavesdropper is assumed to monitor the acoustic hop, creating a secrecy concern. To address this, we formulate the relay power allocation problem as an infinite-horizon Markov decision process (MDP). A model-based reinforcement learning (RL)-driven optimal power allocation (OPA) strategy is proposed to maximize long-term cumulative secrecy performance until the network stops functioning. To offer lower-complexity alternatives, we also develop a Greedy Algorithm (GA) and a Naive Algorithm (NA). Simulation results show that the RL-based OPA adapts effectively to battery dynamics, varying channel conditions, and optical link availability, achieving the highest secure data transmission, while GA performs reasonably and NA performs poorly due to its short-sighted decisions.
Authors: Jiahui Yang, Yuru Wu, Haozong Wang, Yu Liu, Biao Sun, Yilu Liu, Clifton Black
Phasor measurement units (PMUs) are widely used for sub-synchronous oscillation monitoring, yet the effect of windowed discrete Fourier transform (DFT)-based phasor estimation on oscillation observability is not fully characterized. This letter derives the complete complex-valued frequency response of the windowed DFT phasor estimator under both magnitude and phase modulation. The analysis shows that the estimation window introduces both frequency-dependent magnitude attenuation and phase shift to oscillation components, governed by the complex gain. A simple recovery method is also proposed to restore the true oscillation amplitude and phase from PMU data using the analytically known complex gain. The results are validated through time-domain simulations and provide guidance for industry practitioners on interpreting PMU-based oscillation measurements and selecting appropriate window lengths.
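The attenuation-and-recovery mechanism can be reproduced numerically. In the hedged sketch below (illustrative parameters, rectangular window), a sliding DFT phasor of an amplitude-modulated 50 Hz signal exhibits an attenuated modulation depth, which is then restored by dividing out the window's known gain at the modulation frequency (the sinc/Dirichlet factor for a rectangular window).

```python
import numpy as np

fs, f0, fm, m = 2400.0, 50.0, 10.0, 0.1   # sampling, nominal, modulation
N = int(4 * fs / f0)                      # 4-cycle rectangular window
Tw = N / fs
t = np.arange(int(0.6 * fs)) / fs
x = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * f0 * t)

# sliding DFT phasor magnitude, one window per sample step
mags = np.array([abs(2 * np.mean(x[k:k + N]
                 * np.exp(-1j * 2 * np.pi * f0 * t[k:k + N])))
                 for k in range(len(t) - N)])

# apparent modulation depth: project onto cos/sin at fm over exactly
# 4 modulation periods, timestamps at the window centers
K = int(0.4 * fs)
tc = t[:K] + 0.5 * (N - 1) / fs
d = mags[:K] - mags[:K].mean()
meas_m = 2 * np.hypot(np.mean(d * np.cos(2 * np.pi * fm * tc)),
                      np.mean(d * np.sin(2 * np.pi * fm * tc)))

gain = abs(np.sinc(fm * Tw))              # window magnitude gain at fm
m_hat = meas_m / gain                     # recovered modulation depth
```

With a 4-cycle window and a 10 Hz oscillation, the measured depth is attenuated to roughly a quarter of its true value, and the correction recovers it, mirroring the letter's guidance on interpreting PMU oscillation magnitudes.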
Authors: Xingyao Zhang, Haoran Yin, Yanqun Tang, Yao Ge, Yong Zeng, Miaowen Wen, Zilong Liu, Yong Liang Guan, Hüseyin Arslan, Giuseppe Caire
Next-generation wireless networks require enhanced flexibility, efficiency, and reliability in physical layer waveform design to address the challenges posed by heterogeneous channel conditions and stringent quality-of-service demands. To this end, this paper proposes a unified multicarrier waveform framework that provides a systematic characterization and practical implementation guidelines to facilitate waveform selection for the sixth-generation (6G) mobile networks and beyond. We commence by examining the design principles of the state-of-the-art waveforms, which are categorized into one-dimensional modulation waveforms (e.g., orthogonal frequency division multiplexing (OFDM) and affine frequency division multiplexing (AFDM)) and two-dimensional modulation waveforms (e.g., orthogonal time frequency space (OTFS)). Their inherent resilience against various channel-induced interference is further studied, revealing their distinct suitability in diverse channel conditions. Furthermore, an in-depth performance analysis is presented by comparing their key performance indicators (KPIs), followed by an extensive exploration of these advanced waveforms in various applications. Consequently, this work aims to serve as a pivotal reference for waveform adoption in future 6G standardization and network deployment.
Authors: Chandan Kumar Sheemar, Giovanni Iacovelli, Wali Ullah Khan, George C. Alexandropoulos, Stefano Tomasin, Symeon Chatzinotas
This paper develops a physically consistent signal model with hardware constraints for a simultaneous transmitting and reflecting beyond-diagonal RIS (STAR BD-RIS) endowed with per-element amplification and lossless power splitting. We explicitly decouple (i) amplification via a diagonal gain matrix, (ii) element-wise reflection/transmission splitting, and (iii) passive beyond-diagonal coupling on each branch, while enforcing practical feasibility through per-element emission caps and an aggregate RIS power budget under the operating covariance. Building on this model, we cast downlink sum-rate maximization as an equivalent weighted minimum mean-square error (WMMSE) problem and propose an alternating optimization framework with provable monotonic descent. The method admits closed-form updates for MMSE combiners and weights, waterfilling-like beamformer updates via a single dual variable, a per-element amplification update that satisfies emission constraints, and a STAR power-splitting update based on cyclic coordinate descent with a global acceptance test. For the beyond-diagonal coupling matrices, we derive Riemannian gradient steps on the complex Stiefel manifold with QR or polar retractions, preserving passivity at every iterate. Furthermore, the proposed approach decouples the optimization of the reflective and transmissive responses of the BD-RIS, enabling efficient distributed implementation. Numerical results demonstrate substantial sum-rate gains compared to the conventional passive BD-RIS.
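For readers unfamiliar with the manifold machinery, a Riemannian step with polar retraction on the complex Stiefel manifold can be written in a few lines of generic numpy. This is the standard textbook construction, not the paper's full coupling-matrix update:

```python
import numpy as np

def tangent_project(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    complex Stiefel manifold {X : X^H X = I} at the point X."""
    A = X.conj().T @ G
    return G - X @ (A + A.conj().T) / 2

def polar_retraction(X, xi):
    """Map X + xi back onto the manifold via its polar factor (SVD),
    so feasibility (here: passivity of the coupling) holds exactly."""
    U, _, Vh = np.linalg.svd(X + xi, full_matrices=False)
    return U @ Vh
```

Retracting after every gradient step is what lets such schemes preserve the unitary (passive) structure at every iterate rather than only at convergence.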
Authors: Stella Kombo, Masih Haseli, Skylar X. Wei, Joel W. Burdick
Autonomous systems often must predict the motions of nearby agents from partial and noisy data. This paper asks and answers the question: "can we learn, in real-time, a nonlinear predictive model of another agent's motions?" Our online framework denoises and forecasts such dynamics using a modified sliding-window Hankel Dynamic Mode Decomposition (Hankel-DMD). Partial noisy measurements are embedded into a Hankel matrix, while an associated Page matrix enables singular-value hard thresholding (SVHT) to estimate the effective rank. A Cadzow projection enforces structured low-rank consistency, yielding a denoised trajectory and local noise variance estimates. From this representation, a time-varying Hankel-DMD lifted linear predictor is constructed for multi-step forecasts. The residual analysis provides variance-tracking signals that can support downstream estimators and risk-aware planning. We validate the approach in simulation under Gaussian and heavy-tailed noise, and experimentally on a dynamic crane testbed. Results show that the method achieves stable variance-aware denoising and short-horizon prediction suitable for integration into real-time control frameworks.
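The denoising core of such a pipeline, minus the DMD predictor, can be sketched as a fixed-rank Cadzow iteration. The paper selects the rank via SVHT on a Page matrix; here the rank is hard-coded for brevity, so this is an illustration of the structured low-rank projection rather than the full method:

```python
import numpy as np

def hankel(x, L):
    """L-row Hankel embedding of a 1-D series."""
    K = len(x) - L + 1
    return np.array([x[i:i + K] for i in range(L)])

def cadzow_denoise(x, L, rank, n_iter=20):
    """Alternate a rank-`rank` SVD truncation of the Hankel matrix
    with anti-diagonal averaging (Cadzow projection), returning a
    denoised series consistent with both structures."""
    y = x.copy()
    for _ in range(n_iter):
        H = hankel(y, L)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # average back along anti-diagonals to recover a series
        y = np.zeros(len(x))
        cnt = np.zeros(len(x))
        for i in range(L):
            y[i:i + H.shape[1]] += H[i]
            cnt[i:i + H.shape[1]] += 1
        y /= cnt
    return y
```

A single sinusoid has Hankel rank 2, so projecting onto that rank strips most of the additive noise while keeping the oscillation, which is the representation the time-varying Hankel-DMD predictor is then built on.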
Authors: Jacob Moore, Phil Tokumaru, Ian Reid, Brandon Sutherland, Joseph Ritchie, Gabe Snow, Tim McLain
ROSflight is a lean, open-source autopilot ecosystem for unmanned aerial vehicles (UAVs). Designed by researchers for researchers, it is built to lower the barrier to entry to UAV research and accelerate the transition from simulation to hardware experiments by maintaining a lean (not full-featured), well-documented, and modular codebase. This publication builds on previous treatments and describes significant additions to the architecture that improve the modularity and usability of ROSflight, including the transition from ROS 1 to ROS 2, supported hardware, low-level actuator mixing, and the simulation environment. We believe that these changes improve the usability of ROSflight and enable ROSflight to accelerate research in areas like advanced-air mobility. Hardware results are provided, showing that ROSflight is able to control a multirotor over a serial connection at 400 Hz while closing all control loops on the companion computer.
Authors: Samuel Talkington, Cameron Khanpour, Rahul K. Gupta, Sergio A. Dorado-Rojas, Daniel Turizo, Hyeongon Park, Dmitrii M. Ostrovskii, Daniel K. Molzahn
This paper presents conservative probabilistic bounds for the spectrum of the admittance matrix and classical linear power flow models under uncertain network parameters; for example, probabilistic line contingencies. Our proposed approach imports tools from probability theory, such as concentration inequalities for random matrices. This provides a theoretical framework for understanding error bounds of common approximations of the AC power flow equations under parameter uncertainty, including the DC and LinDistFlow approximations. Additionally, we show that the upper bounds scale as functions of nodal criticality. This network-theoretic quantity captures how uncertainty concentrates at critical nodes for use in contingency analysis. We validate these bounds on IEEE test networks, demonstrating that they correctly capture the scaling behavior of spectral perturbations up to conservative constants.
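The flavor of such concentration bounds can be demonstrated on a toy graph. The sketch below uses a hypothetical 6-bus topology with unit susceptances and Bernoulli line outages (all invented for illustration), and compares the empirical spectral norm of the centered Laplacian perturbation against a matrix-Bernstein-style expectation bound; as the abstract notes, the bound holds with a conservative margin.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 0.1                       # buses, line-outage probability
lines = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3), (1, 4)]
b = np.ones(len(lines))             # line susceptances (illustrative)

def laplacian(weights):
    L = np.zeros((n, n))
    for w, (i, j) in zip(weights, lines):
        L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    return L

# matrix-Bernstein parameters for the centered perturbation
#   Delta = sum_l (p - xi_l) * b_l * a_l a_l^T,   xi_l ~ Bernoulli(p)
R = 2 * max(b) * max(p, 1 - p)      # per-term spectral-norm bound
sigma2 = 2 * p * (1 - p) * np.linalg.eigvalsh(laplacian(b ** 2))[-1]
bound = np.sqrt(2 * sigma2 * np.log(2 * n)) + R / 3 * np.log(2 * n)

# empirical mean spectral norm of the perturbation
norms = []
for _ in range(500):
    xi = rng.random(len(lines)) < p
    norms.append(np.linalg.norm(laplacian((p - xi) * b), 2))
mean_norm = np.mean(norms)
```

The variance proxy `sigma2` is driven by the largest eigenvalue of a weighted Laplacian, a simple instance of how spectral/criticality quantities of the network enter the scaling of such bounds.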
Authors: Vince Kurtz, Joel W. Burdick
Generative control policies have recently unlocked major progress in robotics. These methods produce action sequences via diffusion or flow matching, with training data provided by demonstrations. But existing methods come with two key limitations: they require expert demonstrations, which can be difficult to obtain, and they are limited to relatively slow, quasi-static tasks. In this paper, we leverage a tight connection between sampling-based predictive control and generative modeling to address each of these issues. In particular, we introduce generative predictive control, a supervised learning framework for tasks with fast dynamics that are easy to simulate but difficult to demonstrate. We then show how trained flow-matching policies can be warm-started at inference time, maintaining temporal consistency and enabling high-frequency feedback. We believe that generative predictive control offers a complementary approach to existing behavior cloning methods, and hope that it paves the way toward generalist policies that extend beyond quasi-static demonstration-oriented tasks.
Authors: Boxuan Xie, Yifan Zhang, Kalle Koskinen, Alexis A. Dowhuszko, Jiacheng Wang, Ruichen Zhang, Zehui Xiong, Dusit Niyato, Zhu Han, Riku Jäntti
The rapid growth of Internet of Things (IoT) devices in sixth-generation (6G) wireless networks raises significant generality and scalability challenges due to energy consumption, deployment complexity, and environmental impact. Ambient IoT (A-IoT), leveraging ambient energy harvesting (EH) for batteryless device operation, has emerged as a promising solution to address these challenges. Among various EH and communication techniques, visible light communication (VLC) integrated with ambient backscatter communication (AmBC) offers remarkable advantages, including energy neutrality, high reliability, and enhanced security. In this paper, we propose a joint VLC-AmBC architecture, emphasizing fundamental concepts, system designs, and practical implementations. We explore potential applications in environmental monitoring, healthcare, smart logistics, and secure communications. We present proof-of-concept demonstrations for three distinct types of ambient backscatter devices (AmBDs): EH-Only, VLC-Relay, and VLC-Control. Experimental results demonstrate the feasibility of implementing joint VLC-AmBC systems, highlighting their practical viability across various deployment scenarios. Finally, we outline future research directions, including integrated sensing and communication, as well as optimized energy-efficient deployment. Open issues, such as large-scale deployment challenges, are also discussed, thereby providing a clear roadmap for future developments in joint VLC-AmBC-enabled A-IoT ecosystems.
Authors: Yan Tong, Qin Wang, Sihao Chen, Xue Hu, Zhaoyuan Wu
As the penetration level of distributed energy resources (DERs) continues to rise, traditional frequency and voltage support from synchronous machines declines. This weakens grid stability and increases the need for fast, adaptive control, especially in weak grids. However, most virtual power plants (VPPs) rely on static aggregation and plan-based resource allocation strategies. These methods overlook differences in device response times and limit flexibility for ancillary services. To address this issue, we propose a dynamic virtual power plant (DVPP) that coordinates heterogeneous resources across multiple time scales using grid-forming control. We first contrast grid-following and grid-forming converters: grid-following designs rely on a phase-locked loop, which can undermine stability in weak grids, whereas our DVPP applies virtual synchronous generator control at the aggregate level to provide effective inertia and damping. Then, we introduce a dynamic participation factor framework that measures each device's contribution through the frequency/active-power and voltage/reactive-power loops. Exploiting device heterogeneity, we adopt a banded allocation strategy: slow resources manage steady-state and low-frequency regulation; intermediate resources smooth transitions; and fast resources deliver rapid response and high-frequency damping. Comparative simulations demonstrate that this coordinated, timescale-aware approach enhances stability and ancillary service performance compared to conventional VPPs.
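The banded allocation idea, slow resources taking the low-frequency content and fast resources the residual, can be illustrated with simple first-order filters. The time constants below are placeholders for illustration, not the paper's tuning:

```python
import numpy as np

def banded_allocation(request, dt, tau_slow=60.0, tau_mid=5.0):
    """Split an aggregate regulation request into slow / intermediate /
    fast bands with first-order low-pass filters; each band would be
    assigned to the resource class whose response time matches it.
    Illustrative filter choices only."""
    def lowpass(x, tau):
        a = dt / (tau + dt)
        y = np.empty_like(x)
        acc = x[0]
        for i, v in enumerate(x):
            acc += a * (v - acc)   # discrete first-order lag
            y[i] = acc
        return y
    slow = lowpass(request, tau_slow)
    mid = lowpass(request, tau_mid) - slow
    fast = request - slow - mid
    return slow, mid, fast
```

By construction the three bands sum exactly to the original request, so the aggregate service is preserved while each device class only sees dynamics inside its own bandwidth.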
Authors: Saurabh Vaishampayan, Maryam Kamgarpour
Local energy markets empower prosumers to form coalitions for energy trading. However, the optimal partitioning of the distribution grid into such coalitions remains unclear, especially in constrained grids with stochastic production and consumption. This analysis must take into account the interests of both the grid operator and the constituent prosumers. In this work, we present a cooperative game theoretic framework to study distribution grid partitioning into local energy market coalitions under uncertain prosumption and grid constraints. We formulate the optimal stable partitioning problem to balance the interests of the grid operator with that of prosumers. Under deterministic load and generation, we show that the largest market coalition is the optimal stable partition. For the case of stochastic loads and generation, we provide an algorithm to evaluate the optimal stable partition. Numerical experiments are performed on benchmark and real world distribution grids. Our results help in understanding how uncertainty affects local energy market partitioning decisions in constrained distribution grids.
Authors: Daniel Mastropietro, Vyacheslav Kungurtsev
High intermittent renewable penetration in the energy mix presents challenges in robustness for the management of power systems' operation. If a tail realization of the distribution of weather yields a prolonged period of time during which solar irradiation and wind speed are insufficient for satisfying energy demand, then it becomes critical to ramp up the generation of conventional power plants with adequate foresight. Triggering this event is costly, and inaccurate forecasting can either be wasteful or yield catastrophic undersupply. This encourages particular attention to accurate modeling of the noise and the resulting dynamics within the aforementioned scenario. In this work we present a method for rare event-aware control of power systems using multi-stage scenario-based optimization. A Fleming-Viot particle approach is used to bias the scenario generation towards rare realizations of very low wind power, in order to obtain a cost-effective control of conventional power plants that is robust under prolonged renewable energy shortfalls.
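A minimal illustration of the Fleming-Viot biasing idea on a made-up AR(1) wind-anomaly model (all parameters invented): particles that leave the low-wind region are resampled onto surviving particles, so the whole population keeps exploring the shortfall event, while the product of per-step survival fractions still estimates the probability that plain Monte Carlo would compute far less efficiently for truly rare events.

```python
import numpy as np

def survive_mc(rng, n_runs, T, a=0.8, s=0.6, thr=0.0):
    """Brute-force Monte Carlo estimate of
    P(wind anomaly stays below thr for T consecutive steps)."""
    w = np.zeros(n_runs)
    alive = np.ones(n_runs, bool)
    for _ in range(T):
        w = a * w + s * rng.standard_normal(n_runs)
        alive &= (w < thr)
    return alive.mean()

def survive_fv(rng, n_particles, T, a=0.8, s=0.6, thr=0.0):
    """Fleming-Viot estimate of the same probability: particles that
    exit the low-wind region are resampled from the survivors, and the
    product of per-step survival fractions is the estimator."""
    w = np.zeros(n_particles)
    prob = 1.0
    for _ in range(T):
        w = a * w + s * rng.standard_normal(n_particles)
        dead = w >= thr
        prob *= 1.0 - dead.mean()
        if dead.any():
            w[dead] = rng.choice(w[~dead], size=dead.sum())
    return prob
```

Because every particle always sits inside the shortfall region, the surviving population doubles as a biased scenario set for the downstream multi-stage optimization.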
Authors: Shuhao Qi, Zengjie Zhang, Zhiyong Sun, Sofie Haesaert
Human drivers naturally balance the risks of different concerns while driving, including traffic rule violations, minor accidents, and fatalities. However, achieving the same behavior in autonomous driving systems remains an open problem. This paper extends a risk metric that has been verified in human-like driving studies to encompass more complex driving scenarios specified by linear temporal logic (LTL) that go beyond just collision risks. This extension incorporates the timing and severity of events into LTL specifications, thereby reflecting a human-like risk awareness. Without sacrificing expressivity for traffic rules, we adopt LTL specifications composed of safety and co-safety formulas, allowing the control synthesis problem to be reformulated as a reachability problem. By leveraging occupation measures, we further formulate a linear programming (LP) problem for this LTL-based risk metric. Consequently, the synthesized policy balances different types of driving risks, including both collision risks and traffic rule violations. The effectiveness of the proposed approach is validated in three typical traffic scenarios in the CARLA simulator.
Authors: Yacob Medhin, Tushar Sial, Simone Servadio
Low-thrust electric propulsion missions are often designed under simplifying assumptions such as constant thrust or fixed specific impulse, neglecting the strong coupling between trajectory dynamics, spacecraft power availability, and propulsion performance. In deep-space environments with reduced solar irradiance, these assumptions can lead to suboptimal or infeasible designs, underscoring the need to simultaneously optimize the trajectory and power subsystem. This paper presents a multidisciplinary design optimization (MDO) framework for the simultaneous design of low-thrust trajectories and spacecraft power systems, with explicit coupling to electric propulsion performance. The framework incorporates a high-fidelity variable-specific impulse model of the SPT-140 Hall thruster, in which thrust and efficiency are directly constrained by time-varying solar power availability and solar array degradation, rather than treated as fixed parameters. The coupled problem is posed as a time-optimal control problem and addressed using a framework built on top of OpenMDAO and Dymos toolchains, where Dymos employs a collocation-based direct-transcription approach for trajectory optimization. OpenMDAO provides accurate analytic partial derivatives, enabling efficient gradient-based optimization. A Fast Fourier Series shape-based method is used to generate dynamically feasible initial guess trajectories, and the resulting nonlinear programming problem is solved using IPOPT. The proposed framework is demonstrated through a low-thrust orbit insertion scenario around asteroid 16-Psyche, a regime in which reduced solar irradiance makes power-aware trajectory design particularly critical. Simulation results demonstrate the framework's ability to capture key power-propulsion-trajectory trade-offs, highlighting the importance of integrated power optimization for realistic electric propulsion mission design.
Authors: Ginevra Larroux, Matthieu Jacobs, Keyu Jia, Fabrizio Sossan, Mario Paolone
This work presents a general framework for the operationally driven optimal siting and sizing of battery energy storage systems in power transmission networks, aimed at enhancing their resource adequacy. The approach considers multi-period planning horizons, enforces network constraints at high temporal resolution, and targets large-scale meshed systems. The resulting computationally complex mixed-integer non-linear programming problem is reformulated as a mixed-integer second-order cone programming problem and solved via Generalized Benders Decomposition, with feasibility cuts enabling congestion management and voltage regulation under binding network limits. A tailored heuristic recovers an alternating-current power-flow-feasible operating point from the relaxed solution. The proposed formulation is parallelizable, yielding excellent computational performance, while featuring rigorous guarantees of convergence.
Authors: Ali Rajaei, Jochen L. Cremer
Substation reconfiguration via busbar splitting can mitigate transmission grid congestion and reduce operational costs. However, existing approaches neglect the security of the substation topology, particularly for substations without busbar splitting (i.e., closed couplers), which can lead to severe consequences. Additionally, the computational complexity of optimizing substation topology remains a challenge. This paper introduces a MILP formulation for security-constrained substation reconfiguration (SC-SR), considering N-1 line, coupler, and busbar contingencies to ensure a secure substation topology. To efficiently solve this problem, we propose a heuristic approach with multiple master problems (HMMP). A central master problem optimizes dispatch, while independent substation master problems determine individual substation topologies in parallel. Linear AC power flow equations ensure power-flow accuracy, while feasibility and optimality sub-problems evaluate contingency cases. The proposed HMMP significantly reduces computational complexity and enables scalability to large-scale power systems. Case studies on the IEEE 14-bus, 118-bus, and PEGASE 1354-bus systems show the effectiveness of the approach in mitigating the impact of coupler and busbar tripping, balancing system security and cost, and maintaining computational efficiency.
Authors: Jie Zhu (1,2), Yiwei Qiu (1), Yangjun Zeng (1), Shahab Dehghan (2), Sheng Wang (2), Shi Chen (1), Buxiang Zhou (1) ((1) College of Electrical Engineering, Sichuan University, (2) School of Engineering, Newcastle University)
Renewable power-to-hydrogen (ReP2H) enables large-scale renewable energy utilization and supports the decarbonization of hard-to-abate sectors, such as chemicals and maritime transport, via hydrogen-based renewable ammonia and methanol fuels. As a result, utility-scale ReP2H projects are expanding worldwide. However, off-grid ReP2H systems exhibit low inertia due to their converter-dominated nature, making frequency security a critical concern. Although recent studies show that electrolyzers can contribute to frequency regulation (FR), their support capability depends on operating states and loading levels, creating a trade-off between hydrogen output and frequency security. To address this challenge, this work develops a unified co-optimization framework for frequency security-aware production scheduling of utility-scale off-grid ReP2H systems coordinating heterogeneous electrolyzers. A system-level frequency response model is established to capture multi-stage FR from alkaline water electrolyzers (AWEs), proton exchange membrane electrolyzers (PEMELs), and other resources, including ammonia-fueled generators (AFGs) retrofitted in co-located chemical plants, battery energy storage, and wind turbines (WTs). Stage-wise transient frequency security constraints are derived, reformulated into tractable forms, and embedded into production scheduling, enabling coordinated on/off switching and load allocation across electrolyzers to maximize hydrogen output under uncertain renewable power input while enforcing frequency security constraints. Case studies based on real-world systems demonstrate that the proposed approach allows the hydrogen production units (HPs) to replace 55.52% and 96.85% of the FR reserves from WTs and AFGs, respectively, while maintaining comparable hydrogen output. Year-long simulations show an average 28.96% increase in annual net profit resulting from reduced reliance on conventional reserves.
Authors: Farshad Amani, Amin Kargarian, Ramachandran Vaidyanathan
This paper proposes a learning-based approach to accelerate the interior-point method (IPM) for solving optimal power flow (OPF) problems by learning the structure of the IPM central path from its early stable iterations. Unlike traditional learning models that attempt to predict the OPF solution directly, our approach learns the structure of the IPM trajectory itself, since even accurate predictions may not reliably reduce IPM iterations. The IPM follows a central path that iteratively progresses toward the optimal solution. While this trajectory encodes critical information about the optimization landscape, the later iterations become increasingly expensive due to ill-conditioned linear systems. Our analysis of the IPM central path reveals that its initial segments contain the most informative features for guiding the trajectory toward optimality. Leveraging this insight, we model the central path as a time series and use a Long Short-Term Memory (LSTM) network to project the path using only the first few stable iterations. To ensure that the learned trajectory remains within the feasible region--especially near the optimal point--we introduce a grid-informed mechanism into the LSTM that enforces key operational constraints on generation, voltage magnitudes, and line flows. This framework, referred to as Learning-IPM (L-IPM), significantly reduces both the number of IPM iterations and overall solution time. To improve generalization, we use a sampling-based strategy to generate a diverse set of load conditions that effectively span the operational space. Simulation results across a range of test systems--including a 2869-bus European transmission network--demonstrate that L-IPM achieves up to a 94% reduction in solution time and an 85.5% reduction in iterations, without compromising feasibility or accuracy.
Authors: Liya Huang, Federico Milano, Georgios Tzounas
This paper focuses on power flow analysis through the lens of the Newton flow, a continuous-time formulation of Newton's method. Within this framework, we explore how quantized-state concepts, originally developed as an alternative to time discretization, can be incorporated to govern the evolution of the Newton flow toward the power flow solution. This approach provides a novel perspective on adaptive step-size control and shows how state quantization can enhance robustness in ill-conditioned cases. The performance of the proposed approach is illustrated using the ACTIVSg70k synthetic test system.
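The flavor of the approach can be sketched in a few lines: integrate the Newton flow dx/dt = -J(x)^{-1} f(x) with a step rule that caps the per-component state change at a quantum, so steps are small far from the solution and revert to full Newton steps near it. This is an illustrative sketch of the quantized-state idea, not the paper's QSS solver, and the two-equation system is a toy, not a power flow model.

```python
import numpy as np

def newton_flow_quantized(f, jac, x0, quantum=0.05, tol=1e-8, max_steps=500):
    """Integrate the Newton flow dx/dt = -J(x)^{-1} f(x) with a quantized-state
    style rule: each step changes every state by at most `quantum`, mimicking
    adaptive step-size control (illustrative sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            return x
        d = -np.linalg.solve(jac(x), fx)           # Newton direction
        h = min(1.0, quantum / np.max(np.abs(d)))  # cap per-state change at quantum
        x = x + h * d
    return x

# Toy system standing in for power flow equations: f(x) = 0 with
# f(x) = [x0^2 + x1 - 2, x0 - x1], which has a root at x0 = x1 = 1.
f = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
x = newton_flow_quantized(f, jac, np.array([2.0, 0.0]))
```

Far from the root the quantum throttles the step (improving robustness in ill-conditioned regions); once the Newton update itself is smaller than the quantum, h becomes 1 and the iteration recovers Newton's quadratic convergence.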
Authors: Mohamad Izdin Hlal (HIST), Hussien Elharati (UTPB), Ahmed Altaher (GIPSA-DA, GIPSA-SAIGA, systèmes d'algorithmes avancés et informatique industrielle)
This paper presents an optimized design of a Standalone Solar PV/Battery (SSPVB) system to address energy reliability and cost efficiency challenges in off-grid environments. The proposed system integrates a Multi-Objective Particle Swarm Optimization (MOPSO) approach and validates the results using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). The optimization process aims to minimize both the Cost of Energy (COE) and Loss of Load Probability (LLP), while examining the effects of Battery Depth of Discharge (DOD) on system reliability and lifecycle cost. Results indicate that an optimal DOD of approximately 70% yields a COE of 0.2059 USD/kWh with zero LLP, demonstrating strong reliability and cost-effectiveness. Comparative analysis shows that both MOPSO and NSGA-II methods achieve consistent outcomes, with MOPSO exhibiting faster convergence. The study provides valuable insights into optimal battery sizing for stand-alone systems, contributing to modern optimization practices in renewable energy applications.
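Both MOPSO and NSGA-II ultimately rank candidate sizings by Pareto dominance over the two objectives (COE, LLP). A minimal sketch of that dominance filter follows; the candidate (COE, LLP) pairs are hypothetical, not results from the paper.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points when minimizing all objectives,
    here (COE, LLP) pairs of candidate PV/battery sizings."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (COE in USD/kWh, LLP) candidates from a sizing sweep:
cands = [(0.30, 0.00), (0.21, 0.00), (0.25, 0.02), (0.18, 0.05)]
front = pareto_front(cands)
# (0.30, 0.00) and (0.25, 0.02) are both dominated by (0.21, 0.00);
# the remaining two candidates form the Pareto front.
```

The metaheuristics differ in how they explore the design space (particle velocities vs. genetic operators), but both report a front like this, from which a design such as the paper's 70% DOD point is selected.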
Authors: Max Domagk, Peter Feistel, Jan Meyer, Marco Lindner, Jako Kilter
The growing integration of power electronic-based technologies has increased the necessity of power quality (PQ) monitoring in transmission systems. Although large datasets are collected by operators, their use is typically limited to compliance assessment. Medium- to long-term forecasting can enhance the value of these datasets by enabling proactive asset management and trend detection, despite challenges related to data heterogeneity and seasonality. This paper systematically evaluates individual and ensemble forecasting approaches for PQ parameters in transmission systems. More than 700 weekly time series from measurement campaigns in Germany and Estonia are analysed to assess various models and aggregation strategies within a structured ensemble framework. The results show that ensemble forecasts consistently outperform individual models in terms of accuracy and robustness, achieving significant improvements over seasonal naive benchmarks and the best-performing single models. Ensemble forecasting is therefore confirmed as a robust and scalable approach for long-term PQ prediction in transmission systems.
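A toy version of the ensemble idea, assuming only two member models (the seasonal naive benchmark named in the paper plus a simple drift model) and a made-up 4-week seasonal PQ indicator; the paper's framework evaluates many more models and aggregation strategies.

```python
import numpy as np

def seasonal_naive(y, season, horizon):
    """Repeat the last observed season (the benchmark used in the paper)."""
    reps = int(np.ceil(horizon / season))
    return np.tile(y[-season:], reps)[:horizon]

def drift(y, horizon):
    """Linear drift extrapolation from the first to the last observation."""
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + slope * np.arange(1, horizon + 1)

def ensemble_mean(members):
    """Unweighted ensemble: average the member forecasts elementwise."""
    return np.mean(members, axis=0)

# Toy weekly PQ indicator with a 4-week seasonal pattern plus a slow trend:
t = np.arange(24)
y = 10.0 + 0.05 * t + np.array([0.5, -0.2, 0.1, -0.4])[t % 4]
members = [seasonal_naive(y, season=4, horizon=8), drift(y, horizon=8)]
fc = ensemble_mean(members)
# The mean ensemble always lies between its two member forecasts.
```

The robustness benefit the paper reports comes from exactly this hedging effect: the ensemble cannot be worse than the worst member at any point, and in heterogeneous series it tends to track whichever member is currently right.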
Authors: Bhathiya Rathnayake, Sijia Geng
This paper develops a nonlinear grid-forming (GFM) controller with provable voltage-formation guarantees, with over-current limiting enforced via a control-barrier-function (CBF)-based safety filter. The nominal controller follows a droop-based inner-outer architecture, in which voltage references and frequency are generated by droop laws, an outer-loop voltage controller produces current references using backstepping (BS), and an inner-loop current controller synthesizes the terminal voltage. The grid voltage is treated as an unknown bounded disturbance, without requiring knowledge of its bound, and the controller design does not rely on any network parameters beyond the point of common coupling (PCC). To robustify voltage formation against the grid voltage, a deadzone-adapted disturbance suppression (DADS) framework is incorporated, yielding practical voltage regulation characterized by asymptotic convergence of the PCC voltage errors to an assignably small and known residual set. Furthermore, the closed-loop system is proven to be globally well posed, with all physical and adaptive states bounded and voltage error transients (due to initial conditions) decaying exponentially at an assignable rate. On top of the nominal controller, hard over-current protection is achieved through a minimally invasive CBF-based safety filter that enforces strict current limits via a single-constraint quadratic program. The safety filter is compatible with any locally Lipschitz nominal controller. Rigorous analysis establishes forward invariance of the safe-current set and boundedness of all states under current limiting. Numerical results demonstrate improved transient performance and faster recovery during current-limiting events when the proposed DADS-BS controller is used as the nominal control law, compared with conventional PI-based GFM control.
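The single-constraint CBF quadratic program admits a closed-form solution, which makes the "minimally invasive" property concrete: the nominal control passes through untouched when safe, and is otherwise projected onto the boundary of the safe half-space. The specific constraint vector and nominal current below are illustrative, not from the paper's GFM design.

```python
import numpy as np

def cbf_safety_filter(u_nom, a, b):
    """Minimally invasive safety filter: solve
        min ||u - u_nom||^2  s.t.  a @ u <= b
    (a single affine CBF constraint) in closed form."""
    viol = a @ u_nom - b
    if viol <= 0.0:
        return u_nom                       # nominal control already safe
    return u_nom - (viol / (a @ a)) * a    # closest safe control in Euclidean norm

u_nom = np.array([3.0, 4.0])           # nominal current reference (illustrative)
a = np.array([1.0, 0.0]); b = 2.0      # hypothetical constraint: u[0] <= 2
u = cbf_safety_filter(u_nom, a, b)
# u = [2.0, 4.0]: only the component along the violated constraint is altered.
```

Because the filter only needs the nominal control value at each instant, it composes with any locally Lipschitz nominal law, which is the compatibility property the abstract emphasizes.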
Authors: Xinliang Dai, Yuning Jiang, Yi Guo, Colin N. Jones, Moritz Diehl, Veit Hagenmeyer
This paper introduces a novel distributed optimization framework for large-scale AC Optimal Power Flow (OPF) problems, offering both theoretical convergence guarantees and rapid convergence in practice. By integrating smoothing techniques and the Schur complement, the proposed approach addresses the scalability challenges and reduces communication overhead in distributed AC OPF. Additionally, optimal network decomposition enables efficient parallel processing under the single program multiple data (SPMD) paradigm. Extensive simulations on large-scale benchmarks across various operating scenarios indicate that the proposed framework outperforms the state-of-the-art centralized solver IPOPT on modest hardware. This paves the way for more scalable and efficient distributed optimization in future power system applications.
Authors: Edwyn Brient, Santiago Velasco-Forero (CMM), Rami Kassab
We revisit High-Resolution Range Profile (HRRP) classification with aspect-angle conditioning. While prior work often assumes that aspect-angle information is incomplete during training or unavailable at inference, we study a setting where angles are available for all training samples and explicitly provided to the classifier. Using three datasets and a broad range of conditioning strategies and model architectures, we show that both single-profile and sequential classifiers benefit consistently from aspect-angle awareness, with an average accuracy gain of about 7% and improvements of up to 10%, depending on the model and dataset. In practice, aspect angles are not directly measured and must be estimated. We show that a causal Kalman filter can estimate them online with a median error of 5°, and that training and inference with estimated angles preserves most of the gains, supporting the proposed approach in realistic conditions.
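A causal scalar Kalman filter of the kind used for online angle estimation can be written in a few lines. The random-walk state model and the noise variances below are illustrative assumptions, not tuned to any HRRP dataset.

```python
import numpy as np

def kalman_angle(z, q=0.5, r=4.0, x0=0.0, p0=10.0):
    """Causal scalar Kalman filter for an aspect angle modeled as a random
    walk. z: noisy angle measurements (deg); q, r: process and measurement
    noise variances (illustrative values)."""
    x, p = x0, p0
    est = []
    for zk in z:
        p = p + q                  # predict: random-walk process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # update with the new measurement
        p = (1.0 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true_angle = np.linspace(10.0, 40.0, 200)        # slowly varying true aspect angle
z = true_angle + rng.normal(0.0, 2.0, size=200)  # noisy observations
est = kalman_angle(z)
err = np.median(np.abs(est - true_angle))
# The filtered median error falls well below the raw measurement noise level.
```

Because the filter only uses past measurements, the estimate is available online at inference time, matching the deployment setting the abstract describes.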
Authors: Gabriel Mantegna, Emil Dimanchev, Filippo Pecci, Neha Patankar, Jesse Jenkins
Capacity expansion models are frequently used to inform multi-billion dollar grid infrastructure decisions, a context in which there is significant uncertainty surrounding the future need for and performance of such infrastructure. However, despite much academic literature on the topic, virtually no grid planning processes use capacity expansion models that endogenously consider uncertainty, an oversight which frequently leads to short-sighted infrastructure decisions. This is partially due to a technology transfer gap, but it is also due to a lack of methods that work at large scale. In this paper we introduce a method for endogenizing uncertainty into capacity expansion models, a variant of adaptive robust optimization, that addresses this gap. We apply the method to a real-world capacity expansion planning problem, that of the State of California, and compare its performance to that of traditional adaptive robust optimization. We find that both the traditional method and our method identify increased transmission investment as a key lever for increasing robustness and adaptability, while helping to avoid downside risks that current deterministic planning processes may be exposing ratepayers to. Our method performs similarly to the traditional method in terms of outcomes, while significantly reducing computational complexity, making it scalable to real-world planning problems.
Authors: Sina Mohammadi, Wayne Wang, Marcus Chen I Wada, Rouzbeh Haghighi, Ali Hassan, Hualong Liu, Archit Bhatnagar, Ang Chen, Wencong Su
Artificial intelligence (AI) is driving a rapid expansion of data centers (DCs). These facilities consume large amounts of electricity and introduce new challenges for power systems. AI workloads cause rapid power changes and high peak demand. These behaviors differ from those of traditional data centers (TDCs) and can affect grid stability and reliability. This paper reviews how energy storage systems (ESSs) can help integrate AI DCs with the electric grid. We examine storage solutions at multiple levels, including grid-scale batteries, UPS systems, rack-level storage, and chip-level buffering. Each layer operates at a different time scale and serves a different purpose. Grid-interactive UPS (GiUPS) systems can respond quickly to disturbances and assist with frequency regulation or voltage ride-through. Large battery energy storage systems (BESSs) can smooth power demand, support on-site renewable generation, and provide grid services. Rack-level and server-level storage help manage fast power fluctuations close to the computing hardware. We also discuss other technologies, such as fuel cells (FCs) and thermal energy storage (TES), that can support co-generation and reduce emissions. In addition, second-life battery energy storage (SLBESS) is reviewed as a lower-cost option for large installations, whether supporting UPS batteries or serving as backup generation. The paper compares the benefits, challenges, and coordination requirements of these solutions. Overall, the study provides a structured view of how energy storage can improve reliability, flexibility, and sustainability when connecting future AI data centers to the power grid.
Authors: Zhiyi Zhao, Ye Guo, Zhenjia Lin, Yinliang Xu
This paper considers the flexibility degradation problem caused by excessive flexible ramping product (FRP) requirements under high variable energy resource (VER) penetration. Based on a rolling-window co-optimization model of energy and FRP, the theoretical analysis in this paper reveals a unit dispatch transfer effect, in which high FRP requirements under forecast-based dispatch (FBD) constrain real-time flexibility and distort economic efficiency. To alleviate this effect, a regulated forecast-based dispatch (RFBD) approach is proposed, which moderately caps VER outputs and enhances system flexibility. Simulation results demonstrate that the proposed approach effectively lowers FRP requirements and reduces operating costs compared with FBD.
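The mechanism can be seen in a toy calculation: FRP requirements are driven by the largest inter-interval netload ramps, and capping a volatile VER forecast shrinks those ramps. The numbers below are made up for illustration and the one-line cap is only the simplest reading of the RFBD idea, not the paper's co-optimization model.

```python
import numpy as np

def frp_requirement(netload):
    """Upward/downward FRP needs as the largest inter-interval netload ramps."""
    ramps = np.diff(netload)
    return max(ramps.max(), 0.0), max((-ramps).max(), 0.0)

def regulated_dispatch(ver_forecast, cap):
    """RFBD idea in one line: moderately cap the VER output used for dispatch."""
    return np.minimum(ver_forecast, cap)

load = np.array([100.0, 102.0, 101.0, 103.0, 102.0])
ver = np.array([20.0, 45.0, 15.0, 50.0, 18.0])  # volatile VER forecast (illustrative)
up0, dn0 = frp_requirement(load - ver)
up1, dn1 = frp_requirement(load - regulated_dispatch(ver, cap=30.0))
# Capping the VER output shrinks both FRP requirements in this toy example.
```

The trade-off the paper analyzes is visible even here: the cap spills some VER energy, but the lower FRP requirement frees real-time flexibility that would otherwise be reserved.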
Authors: Andrea Espinosa del Pozo, Araceli Hernandez, Luis Badesa
The increasing penetration of photovoltaic (PV) generation in low-voltage distribution networks presents operational challenges, with overvoltages being among the most critical. This study introduces a tool based on Unbalanced Optimal Power Flow (UBOPF) to assess cost-effective local inverter control strategies specifically aimed at mitigating overvoltage issues. Two approaches are examined: dynamic active power curtailment and combined active and reactive power control. These strategies are tested on a residential low-voltage network with high PV penetration, where the UBOPF model with voltage-magnitude constraints was implemented in Julia using the JuMP optimization package. The results demonstrate that both methods are effective in maintaining voltage levels within regulatory limits, with the latter leading to lower PV curtailment. The analysis highlights the need to consider these control actions as ancillary services to the grid, which should be properly compensated given their effect on generator revenues.
Authors: Dewan Mahnaaz Mahmud, Vinu Thomas, Bogdan Marinescu, Mickaël Hilairet
With the increasing penetration of renewable energy sources, grid-forming (GFM) inverters are becoming essential for voltage and frequency regulation. However, the transient stability of GFM inverters is critically affected by the current limiters embedded in standard control schemes. This paper proposes a novel adaptive function to enhance the transient stability of droop-controlled GFM inverters. The proposed method autonomously adjusts the active power reference and the droop gain based on the terminal voltage of the inverter. It also prevents acceleration of the phase angle, maximizing the critical clearing time (CCT). The proposed method is benchmarked against two state-of-the-art GFM inverter CCT enhancement methods, and its effectiveness is validated through electromagnetic transient (EMT) simulations in MATLAB/Simulink®.
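One way to picture such an adaptive function: under a voltage sag the active-power reference is shed and the droop gain softened, both as functions of the terminal voltage. The specific law, parameter names, and thresholds below are hypothetical placeholders in the spirit of the abstract, not the paper's adaptive function.

```python
def adaptive_droop(v_pcc, p_ref0, m0, v_nom=1.0, v_th=0.8, alpha=2.0, beta=0.5):
    """Illustrative adaptive law: when the terminal voltage sags below v_th
    (a fault indicator), scale down the active-power reference and enlarge
    the droop gain to slow phase-angle acceleration. All parameter values
    are hypothetical."""
    if v_pcc >= v_th:
        return p_ref0, m0                        # normal operation: nominal settings
    sag = (v_th - v_pcc) / v_th                  # 0..1 severity of the voltage dip
    p_ref = p_ref0 * max(0.0, 1.0 - alpha * sag)  # shed the power reference
    m = m0 * (1.0 + beta * sag)                   # soften the droop response
    return p_ref, m

# Nominal conditions leave the settings untouched:
assert adaptive_droop(1.0, 0.8, 0.05) == (0.8, 0.05)
```

Reducing the power reference during the fault lowers the accelerating power on the swing dynamics, which is the mechanism by which such schemes extend the CCT.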
Authors: Yiwei Qiu (1), Jiahao Hu (1), Yi Zhou (1), Jie Zhu, (1), Li Jiang (1), Shi Chen (1), Buxiang Zhou (1) ((1) College of Electrical Engineering, Sichuan University)
This article proposes an energy storage-enhanced hydrogen electrolyzer (ESEHE) to provide grid-forming (GFM) services for off-grid renewable power to hydrogen (ReP2H) systems. Unlike conventional ReP2H systems that use a centralized energy storage (ES) plant, the proposed topology directly connects batteries to the DC buses of electrolysis rectifiers. A tailored virtual synchronous machine (VSM) control framework enables the electrolyzer to autonomously provide real and reactive power support. A coordinated frequency-splitting energy extraction strategy is designed to exploit both the battery and the electrolysis stack's electrical double-layer (EDL) effect on different timescales, maximizing active power support while mitigating battery and stack degradation. An adaptive equalization control strategy is further developed to balance the battery state of charge (SOC) among multiple ESEHEs operating in parallel, which optimizes energy distribution and extends battery life. Real-time simulations on StarSim validate the proposed topology and control strategies. Techno-economic analysis shows that, compared with conventional off-grid ReP2H systems based on a centralized ES plant, the ESEHE improves overall energy efficiency by 0.23% and reduces the initial total converter investment cost by roughly 6%, mainly due to the elimination of bidirectional AC/DC conversion and its associated losses in the centralized ES plant.
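The frequency-splitting idea can be sketched with a first-order low-pass filter: the slow component of the power command goes to the battery and the residual high-frequency component to the stack's double-layer capacitance. The filter pole and the test waveform are illustrative assumptions, not the paper's coordinated strategy.

```python
import numpy as np

def split_power(p_cmd, alpha=0.1):
    """Frequency-splitting allocation: an exponential-moving-average low-pass
    extracts the slow component (battery share); the residual high-frequency
    part is absorbed by the electrolysis stack's EDL capacitance.
    alpha is an illustrative filter coefficient."""
    p_low = np.empty_like(p_cmd)
    acc = p_cmd[0]
    for i, p in enumerate(p_cmd):
        acc = (1.0 - alpha) * acc + alpha * p   # first-order low-pass
        p_low[i] = acc
    return p_low, p_cmd - p_low                 # (battery share, EDL share)

t = np.arange(500)
p_cmd = 1.0 + 0.3 * np.sin(2 * np.pi * t / 250) + 0.05 * np.sin(2 * np.pi * t / 5)
p_batt, p_edl = split_power(p_cmd)
# The two shares always sum back to the commanded power, and the EDL share
# carries far less energy than the full command.
```

Keeping the fast, low-energy fluctuations off the battery is what mitigates cycling degradation, while the EDL, which tolerates fast charge/discharge but stores little energy, handles exactly that component.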
Authors: Lingyun Xu, Bowen Wang, Huiyong Li, Ziyang Cheng
In recent years, security has emerged as a critical aspect of integrated sensing and communication (ISAC) systems. While significant research has focused on secure communications, particularly in ensuring physical layer security, the issue of sensing security has received comparatively less attention. This paper addresses the sensing security problem in ISAC, particularly under the threat of multi-intercept adversaries. We consider a realistic scenario in which the sensing target is an advanced electronic reconnaissance aircraft capable of employing multiple signal interception techniques, such as power detection (PD) and cyclostationary analysis (CA). To evaluate sensing security under such sophisticated threats, we analyze two critical features of the transmitted signal: (i) power distribution and (ii) cyclic spectrum. Further, we introduce a novel ergodic cyclic spectrum metric which leverages the intrinsic mathematical structure of cyclostationary signals to more comprehensively characterize their behavior. Building on this analysis, we formulate a new ISAC design problem that explicitly considers sensing security, and we develop a low-complexity, efficient optimization approach to solve it. Simulation results demonstrate that the proposed metric is both effective and insightful, and that our ISAC design significantly enhances sensing security performance in the presence of multi-intercept threats.
Authors: Shrunal Pothagoni, Benjamin Schweinhart
Convolutional neural networks (CNNs) are a standard tool for computer vision tasks such as image classification. However, typical model architectures may result in the loss of topological information. In specific domains such as histopathology, topology is an important descriptor that can be used to distinguish between disease-indicating tissue by analyzing the shape characteristics of cells. Current literature suggests that reintroducing topological information using persistent homology can improve medical diagnostics; however, previous methods utilize global topological summaries which do not contain information about the locality of topological features. To address this gap, we present a novel method that generates local persistent homology-based data using a modified version of the convolution operator called Persistent Homology Convolutions. This method captures information about the locality and translation invariance of topological features. We perform a comparative study using various representations of histopathology slides and find that models trained with persistent homology convolutions outperform conventionally trained models and are less sensitive to hyperparameters. These results indicate that persistent homology convolutions extract meaningful geometric information from the histopathology slides.
Authors: Daxiang Li, Zhichao Zhang, Wei Yao
The one-dimensional (1D) fractional Fourier transform (FRFT) generalizes the Fourier transform, offering significant advantages in the time-frequency analysis of non-stationary signals. While various 2D extensions exist, such as the 2D separable FRFT (SFRFT), gyrator transform (GT), coupled FRFT (CFRFT), and earlier nonseparable definitions, they suffer from fragmented theoretical frameworks and a fundamental lack of geometric consistency with the 2D Wigner distribution (WD). Addressing these limitations, we propose a unified 2D nonseparable FRFT (NSFRFT) framework. Theoretically derived from the intersection of the symplectic and special orthogonal groups (isomorphic to the unitary group $\mathrm{U}(2)$), this transform inherently possesses four degrees of freedom and mathematically incorporates the 2D SFRFT, GT, and CFRFT as special cases. Unlike prior algebraic generalizations, it strictly preserves the rigid 4D rotational geometry of the 2D WD, ensuring geometric consistency and numerical stability. We derive its essential properties and develop efficient discrete algorithms with a computational complexity of $O(N^{2}\log N)$. Numerical simulations validate the superiority of the 2D NSFRFT in analyzing coupled chirp signals and demonstrate its robustness in filtering and image encryption and decryption applications.
Authors: Hanyoung Park, Ji-Woong Choi
Millimeter-wave (mmWave) communications have gained attention as a key technology for high-capacity wireless systems, owing to the wide available bandwidth. However, mmWave signals suffer from their inherent characteristics such as severe path loss, poor scattering, and limited diffraction, which necessitate the use of large antenna arrays and directional beamforming, typically implemented through massive MIMO architectures. Accurate channel estimation is critical in such systems, but its computational complexity increases proportionally with the number of antennas. This may become a significant burden in mmWave systems where channels exhibit rapid fluctuations and require frequent updates. In this paper, we propose a low-complexity channel denoiser based on Bayesian binary hypothesis testing and beamspace sparsity. By modeling each sparse beamspace component as a mixture of signal and noise under a Bernoulli-complex Gaussian prior, we formulate a likelihood ratio test to detect signal-relevant elements. Then, a hard-thresholding rule is applied to suppress noise-dominant components in the noisy channel vector. Despite its extremely low computational complexity, the proposed method achieves channel estimation accuracy that is comparable to that of complex iterative or learning-based approaches. This effectiveness is supported by both theoretical analysis and numerical evaluation, suggesting that the method can be a viable option for mmWave systems with strict resource constraints.
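Under the Bernoulli-complex-Gaussian prior, the likelihood ratio test reduces to an energy threshold on each beamspace entry, giving the hard-thresholding rule almost for free. The sketch below derives that threshold and applies it to a synthetic sparse beamspace channel; the prior parameters and problem sizes are illustrative, not taken from the paper's evaluation.

```python
import numpy as np

def lrt_threshold(eps, var_s, var_n):
    """Energy threshold from the Bernoulli-complex-Gaussian LRT: an entry is
    declared signal-bearing iff |h|^2 > tau, where tau balances the prior
    activity probability eps against the signal/noise variances."""
    inv_gap = 1.0 / var_n - 1.0 / (var_s + var_n)
    return np.log((1.0 - eps) / eps * (var_s + var_n) / var_n) / inv_gap

def denoise(h, eps=0.1, var_s=1.0, var_n=0.01):
    """Hard-threshold the noisy beamspace vector: zero noise-dominant bins."""
    tau = lrt_threshold(eps, var_s, var_n)
    return np.where(np.abs(h) ** 2 > tau, h, 0.0), tau

rng = np.random.default_rng(1)
n_beams, k = 64, 4
h_true = np.zeros(n_beams, dtype=complex)
support = rng.choice(n_beams, size=k, replace=False)      # k active beams
h_true[support] = (rng.normal(size=k) + 1j * rng.normal(size=k)) / np.sqrt(2)
noise = (rng.normal(size=n_beams) + 1j * rng.normal(size=n_beams)) * np.sqrt(0.01 / 2)
h_hat, tau = denoise(h_true + noise)
# Noise-only bins are overwhelmingly zeroed, so the estimate stays sparse.
```

The per-entry cost is one magnitude comparison, which is why the method's complexity is negligible next to iterative or learning-based estimators.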
Authors: Zhanhong He, Roberto Togneri, David Huang
MIDI velocity is crucial for capturing expressive dynamics in human performances. In practical scenarios, a music score with inaccurate velocities may be available alongside the performance audio (e.g., music education and free online archives), enabling the task of score-informed MIDI velocity estimation. In this work, we propose a modular, lightweight score-informed Transformer correction module that refines the velocity estimates of Automatic Music Transcription (AMT) systems. We integrate the proposed module into multiple AMT systems (HPT, HPPNet, and DynEst). Trained exclusively on the MAESTRO training split, our method consistently reduces velocity estimation errors on MAESTRO and improves cross-dataset generalization to the SMD and MAPS datasets. Under this training protocol, integrating our score-informed module with HPT (named Score-HPT) establishes new state-of-the-art performance, outperforming existing score-informed methods and velocity-enabled AMT systems while adding only 1M parameters.
Authors: Yupei Zhang, Xiaofei Wang, Anran Liu, Lequan Yu, Chao Li
Histopathology remains the gold standard for cancer diagnosis and prognosis. With the advent of transcriptome profiling, multi-modal learning combining transcriptomics with histology offers more comprehensive information. However, existing multi-modal approaches are challenged by intrinsic multi-modal heterogeneity, insufficient multi-scale integration, and reliance on paired data, restricting clinical applicability. To address these challenges, we propose a disentangled multi-modal framework with four contributions: 1) To mitigate multi-modal heterogeneity, we decompose WSIs and transcriptomes into tumor and microenvironment subspaces using a disentangled multi-modal fusion module, and introduce a confidence-guided gradient coordination strategy to balance subspace optimization. 2) To enhance multi-scale integration, we propose an inter-magnification gene-expression consistency strategy that aligns transcriptomic signals across WSI magnifications. 3) To reduce dependency on paired data, we propose a subspace knowledge distillation strategy enabling transcriptome-agnostic inference through a WSI-only student model. 4) To improve inference efficiency, we propose an informative token aggregation module that suppresses WSI redundancy while preserving subspace semantics. Extensive experiments on cancer diagnosis, prognosis, and survival prediction demonstrate our superiority over state-of-the-art methods across multiple settings. Code is available at this https URL.
Authors: Tianyi Zhang, Zheng-Peng Duan, Peng-Tao Jiang, Bo Li, Ming-Ming Cheng, Chun-Le Guo, Chongyi Li
Diffusion-based real-world image super-resolution (Real-ISR) methods have demonstrated impressive performance. To achieve efficient Real-ISR, many works employ Variational Score Distillation (VSD) to distill a pre-trained stable-diffusion (SD) model for one-step SR with a fixed timestep. However, since SD exhibits different generative priors at different timesteps, a fixed timestep makes it difficult for these methods to fully leverage the generative priors in SD, leading to suboptimal results. To address this, we propose a Time-Aware one-step Diffusion Network for Real-ISR (TADSR). We first introduce a Time-Aware VAE Encoder, which projects the same image into different latent features based on timesteps. Through the joint dynamic variation of timesteps and latent features, the student model can better align with the input pattern distribution of the pre-trained SD, thereby enabling more effective utilization of SD's generative priors. To better activate the generative prior of SD at different timesteps, we propose a Time-Aware VSD loss that bridges the timesteps of the student model and those of the teacher model, thereby producing more consistent generative prior guidance conditioned on timesteps. Additionally, by utilizing the generative prior in SD at different timesteps, our method can naturally achieve controllable trade-offs between fidelity and realism by changing the timestep. Experimental results demonstrate that our method achieves both state-of-the-art performance and controllable SR results with only a single step. The source codes are released at this https URL
Authors: Sebastian Lotter, Marco Seiter, Maryam Pirmoradi, Lukas Brand, Dagmar Fischer, Robert Schober
Recently, bacterial nanocellulose (BNC), a biological material produced by non-pathogenic bacteria that possesses excellent material properties for various medical applications, has received increased interest as a carrier system for drug delivery. However, the vast majority of existing studies on drug release from BNC are feasibility studies with modeling and design aspects remaining largely unexplored. To narrow this research gap, this paper proposes a novel model for the drug release from BNC. Specifically, the drug delivery system considered in this paper consists of a BNC fleece coated with a polymer. The polymer coating is used as an additional diffusion barrier, enabling the controlled release of an active pharmaceutical ingredient. The proposed physics-based model reflects the geometry of the BNC and incorporates the impact of the polymer coating on the drug release. Hence, it can be useful for designing BNC-based drug delivery systems in the future. The accuracy of the model is validated with experimental data obtained in wet lab experiments.
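A simple release model in the spirit of the coated-BNC system treats the polymer coating as a permeation barrier around a well-mixed reservoir, yielding first-order release kinetics. All parameter names and values below are hypothetical design inputs, not the paper's physics-based model or its experimental data.

```python
import numpy as np

def membrane_release(m0, d_coat, thickness, area, v_res, t):
    """Two-compartment sketch: a well-mixed BNC reservoir releasing through a
    polymer coating treated as a permeation barrier (sink condition outside).
    Gives first-order release M(t) = M0 * (1 - exp(-k t)) with
    k = D * A / (L * V). All values are illustrative."""
    k = d_coat * area / (thickness * v_res)
    return m0 * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 48 * 3600.0, 200)   # 48 h in seconds
released = membrane_release(
    m0=1.0,            # normalized drug load
    d_coat=1e-12,      # coating diffusivity (m^2/s), hypothetical
    thickness=50e-6,   # coating thickness (m)
    area=1e-4,         # fleece surface area (m^2)
    v_res=1e-6,        # reservoir volume (m^3)
    t=t,
)
# A thicker or less permeable coating lowers k and slows the release --
# the design lever the coated-BNC system provides.
```

The paper's model additionally resolves the BNC geometry and the diffusion field inside the fleece; this lumped sketch only shows how the coating parameters enter the release rate.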
Authors: Amir Ivry, Samuele Cornell, Shinji Watanabe
Objective assessment of audio source-separation systems still diverges from subjective human perception, especially when interference from competing talkers and distortion of the target signal interact. We introduce Perceptual Separation (PS) and Perceptual Match (PM), a complementary pair of measures that, by design, isolate these leakage and distortion factors. Our intrusive approach generates a set of fundamental distortions, e.g., clipping, notch filtering, and pitch shifting, from each reference waveform signal in the mixture. Distortions, references, and system outputs from all sources are independently encoded by a pre-trained self-supervised model, then aggregated and embedded with a manifold learning technique called diffusion maps, which aligns Euclidean distances on the manifold with dissimilarities of the encoded waveform representations. On this manifold, PM captures the self-distortion of a source by measuring distances from its output to its reference and associated distortions, while PS captures leakage by also accounting for distances from the output to non-attributed references and distortions. Both measures are differentiable and operate at a resolution as high as 75 frames per second, allowing granular optimization and analysis. We further derive, for both measures, frame-level deterministic error radii and non-asymptotic, high-probability confidence intervals. Experiments on English, Spanish, and music mixtures show that, against 18 widely used measures, PS and PM are almost always ranked first or second in linear and rank correlations with subjective human mean-opinion scores.
Authors: Minseok Kim, Masato Yomoda, Minghe Mao, Nobuaki Kuno, Koshiro Kitao, Satoshi Suyama
Sub-terahertz (sub-THz) frequencies (100--300 GHz) are expected to play a key role in beyond-5G and 6G mobile networks. However, their quasi-optical propagation characteristics require new channel models beyond sub-100 GHz extrapolations. This paper presents an extensive double-directional (D-D) channel measurement campaign conducted in an outdoor street-canyon environment at 154 GHz and 300 GHz under both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions using an in-house-developed multi-tone frequency-domain channel sounder. Based on these measurements, clustering with merged datasets across the two frequencies enables comparative analyses that identify both common and distinct multipath clusters, as well as the frequency dependence of cluster-level characteristics. A quasi-deterministic (Q-D) channel model is then proposed, combining deterministic components, such as LoS and single-bounce reflections from side walls, with random components. Large-scale parameters (path loss, delay spread, angular spread, and Rician $K$-factor) are also evaluated. These results provide valuable insights into sub-THz propagation in urban street canyons and contribute toward the development of accurate channel models for future 6G systems.
Authors: Ping Zhang, Xiaodong Xu, Mengying Sun, Haixiao Gao, Nan Ma, Xiaoyun Wang, Ruichen Zhang, Jiacheng Wang, Dusit Niyato
Semantic communication (SemCom) has emerged as a transformative paradigm for future 6G networks, offering task-oriented and meaning-aware transmission that fundamentally redefines traditional bit-centric design. Recognized by leading standardization bodies including the Institute of Electrical and Electronics Engineers (IEEE) and the International Telecommunication Union (ITU), and actively discussed within the 3rd Generation Partnership Project (3GPP) working groups, SemCom is rapidly gaining traction as a foundational enabler for native-AI 6G. This paper presents a comprehensive overview of recent progress in SemCom from both academic and industrial perspectives, with a focus on its ongoing and upcoming standardization activities. We systematically examine advances in representative application scenarios, architectural design, semantic-traditional system compatibility, unified evaluation metrics, and validation methodologies. Furthermore, we highlight several key enabling technologies, such as joint source-channel coding (JSCC), SemCom-based multiple access (MA) technologies such as model division MA (MDMA), and semantic knowledge base (KB), that support the practical implementation of SemCom in standard-compliant systems. Additionally, we present a case study for channel state information (CSI) feedback, illustrating the concrete performance gains of SemCom under 3GPP-compliant fading channels. Finally, we discuss emerging challenges and research opportunities for incorporating semantic-native mechanisms into the evolving 6G standardization landscape, and provide forward-looking insights into its development and global adoption.
Authors: Jie Zhou, Yongxiang Liu, Li Liu, Weijie Li, Bowen Peng, Yafei Song, Gangyao Kuang, Xiang Li
Synthetic Aperture Radar (SAR) imaging is capable of observing objects in nearly all weather and illumination conditions and has become an indispensable means of information acquisition for analysis and recognition of objects and scenes. SAR Automatic Target Recognition (SAR ATR) has been one of the most fundamental and challenging problems in remote sensing image analysis. Nowadays, AI technology, represented by large models and AI agents, has transformed the research paradigm, profoundly influenced various research fields, and continues to evolve at an unprecedented pace. However, the huge potential of AI for SAR image analysis remains locked. To unlock the potential of AI in SAR image understanding, the research community should rethink how to enable bidirectional empowerment between AI and SAR image understanding and strive to achieve substantial breakthroughs at critical bottlenecks. Given this period of remarkable evolution, this paper offers the first comprehensive review of SAR ATR, tracing its development and milestones over the past five decades and providing the research community with a clear roadmap. This survey includes approximately 250 research contributions, covering critical aspects of SAR ATR: pivotal challenges, important datasets, the merits and limitations of representative methods, evaluation metrics, and state-of-the-art performance. Finally, we conclude the survey by identifying promising directions for future research. Looking ahead, we call for significant attention on three fundamental pillars: the curation of high-quality large-scale datasets, the design of fair and comprehensive evaluation benchmarks, and the fostering of safe open-source ecosystems.
Authors: Matt Y. Cheung, Ashok Veeraraghavan, Guha Balakrishnan
In clinical applications, the utility of segmentation models is often judged by the accuracy of derived downstream metrics, such as organ size, rather than by the pixel-level accuracy of the segmentation masks themselves. Thus, uncertainty quantification for such metrics is crucial for decision-making. Conformal prediction (CP) is a popular framework to derive such principled uncertainty guarantees, but applying CP naively to the final scalar metric is inefficient because it treats the complex, non-linear segmentation-to-metric pipeline as a black box. We introduce COMPASS, a practical framework that generates efficient, metric-based CP intervals for image segmentation models by leveraging the inductive biases of their underlying deep neural networks. COMPASS performs calibration directly in the model's representation space by perturbing intermediate features along low-dimensional subspaces maximally sensitive to the target metric. We prove that COMPASS achieves valid marginal coverage under the assumption of exchangeability. Empirically, we demonstrate that COMPASS produces significantly tighter intervals than traditional CP baselines on four medical image segmentation tasks for area estimation of skin lesions and anatomical structures. Furthermore, we show that leveraging learned internal features to estimate importance weights allows COMPASS to also recover target coverage under covariate shifts. COMPASS paves the way for practical, metric-based uncertainty quantification for medical image segmentation.
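For readers unfamiliar with the baseline COMPASS improves upon, a minimal sketch of split conformal prediction applied naively to a scalar metric (standard textbook CP, not the COMPASS calibration in representation space; the synthetic data are placeholders):

```python
import numpy as np

def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal interval for a scalar downstream metric.

    Absolute residuals on a held-out calibration set give a quantile
    whose interval carries a marginal (1 - alpha) coverage guarantee
    under exchangeability.
    """
    scores = np.abs(np.asarray(cal_pred) - np.asarray(cal_true))
    n = scores.size
    # Finite-sample-corrected quantile level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
true_metric = rng.normal(size=500)                        # e.g., organ areas
pred_metric = true_metric + rng.normal(scale=0.3, size=500)
lo, hi = conformal_interval(pred_metric[:400], true_metric[:400],
                            pred_metric[400], alpha=0.1)
```

This black-box interval is valid but can be loose; COMPASS tightens it by calibrating over metric-sensitive feature perturbations instead of raw residuals.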
Authors: Hang Zhou, Yuxin Yang, Branislav Hredzak, John Edward Fletcher
Digital control has become increasingly widespread in modern power electronic converters. When acquiring feedback signals such as the inductor current, synchronizing the analog-to-digital converter (ADC) with the digital pulse-width modulator (DPWM) is commonly employed to accurately track their steady-state average. However, the small-signal implications of such synchronization have not been investigated. This paper presents an exact small-signal model for digitally controlled buck converters operating in forced continuous-conduction mode (FCCM) under constant-frequency current-mode control, explicitly accounting for DPWM-ADC synchronization. Using a sampled-data framework, the proposed model captures all sideband effects introduced by the sampling process, yielding precise predictions of both analog and digital loop gains, even at frequencies beyond the switching and sampling frequencies. Both asymmetrical and symmetrical carrier modulations are considered. Furthermore, the digital loop gain is derived in closed form using the modified z-transform, enabling low-complexity compensator design and stability assessment. Within this framework, the analog loop gain can be directly obtained from the digital loop gain, thereby eliminating the need for computationally intensive infinite series evaluations. The validity of the proposed model is confirmed through both simulation and experimental results.
Authors: Zizhe Zhang, Yicong Wang, Zhiquan Zhang, Tianyu Li, Nadia Figueroa
Conventional passivity-based torque controllers for manipulators are typically unconstrained, which can lead to safety violations under external perturbations. In this paper, we employ viability theory to pre-compute safe sets in the state-space of joint positions and velocities. These viable sets, constructed via data-driven and analytical methods for self-collision avoidance, external object collision avoidance and joint-position and joint-velocity limits, provide constraints on joint accelerations and thus joint torques via the robot dynamics. A quadratic programming-based control framework enforces these constraints on a passive controller tracking a dynamical system, ensuring the robot states remain within the safe set in an infinite time horizon. We validate the proposed approach through simulations and hardware experiments on a 7-DoF Franka Emika manipulator. In comparison to a baseline constrained passive controller, our method operates at higher control-loop rates and yields smoother trajectories.
Authors: Zhanle Zhao, Son Dinh-Van, Yuen Kwan Mo, Siddartha Khastgir, Matthew D. Higgins
Connected braking can reduce fatal collisions in connected and autonomous vehicles (CAVs) by using reliable, low-latency 5G New Radio (NR) links, especially NR Sidelink Vehicle-to-Everything (V2X). In rural areas, road side units are sparse and power-constrained, so energy efficiency must be considered alongside safety. This paper studies how three communication control factors, namely subcarrier spacing ($\mathrm{SCS}$), modulation and coding scheme ($\mathrm{MCS}$), and transmit power ($P_{\mathrm{t}}$), should be configured to balance safety and energy consumption in rural areas under both light and heavy traffic. Safety is quantified by the packet receive ratio ($\mathrm{PRR}$) at the minimum communication distance $D_{\mathrm{comm}}$, defined as the distance that the vehicle travels during the transmission of the safety message. Results show that, under heavy traffic, increasing $P_{\mathrm{t}}$ and selecting a low-rate $\mathrm{MCS}$ at $\mathrm{SCS} = 30$ kHz sustains high $\mathrm{PRR}$ at $D_{\mathrm{comm}}$, albeit with higher energy cost. In light traffic, maintaining lower $P_\mathrm{t}$ with low $\mathrm{MCS}$ levels achieves a favorable reliability-energy trade-off while preserving acceptable $\mathrm{PRR}$ at $D_{\mathrm{comm}}$. These findings demonstrate the necessity of adaptive, energy-aware strategies to guarantee both safety and energy efficiency in rural V2X systems.
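The two quantities traded off above can be sketched numerically. The slot-duration relation (SCS $= 15 \cdot 2^{\mu}$ kHz gives a slot of $1\,\mathrm{ms}/2^{\mu}$) is standard 5G NR numerology; the 4-slot message length, the energy model (power times airtime), and the numeric values are illustrative assumptions, not the paper's simulation setup:

```python
# NR numerology: SCS = 15 * 2^mu kHz corresponds to a slot of 1 ms / 2^mu.
SLOT_MS = {15: 1.0, 30: 0.5, 60: 0.25, 120: 0.125}

def d_comm(speed_mps, tx_slots, scs_khz):
    """Distance (m) the vehicle travels while the safety message is sent."""
    tx_time_s = tx_slots * SLOT_MS[scs_khz] * 1e-3
    return speed_mps * tx_time_s

def tx_energy_mj(p_t_mw, tx_slots, scs_khz):
    """Simplified per-message energy (mJ): transmit power times airtime."""
    return p_t_mw * tx_slots * SLOT_MS[scs_khz] * 1e-3

# 25 m/s (90 km/h) rural speed, hypothetical 4-slot message at SCS = 30 kHz,
# 200 mW transmit power: roughly 0.05 m travelled and 0.4 mJ per message.
print(d_comm(25.0, 4, 30), tx_energy_mj(200.0, 4, 30))
```

A larger SCS shortens the slot, shrinking both $D_{\mathrm{comm}}$ and the energy per message, which is why SCS appears alongside MCS and $P_{\mathrm{t}}$ as a control factor.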
Authors: Geon Roh, Jip Kim
Dynamic line rating (DLR) enables greater utilization of existing transmission lines by leveraging real-time weather data. However, the elevated temperature operation (ETO) of conductors under DLR is often overlooked, despite its long-term impact on conductor health. This paper addresses this issue by 1) quantifying risk-based depreciation costs associated with ETO and 2) proposing a Conductor Health-Aware Unit Commitment (CHA-UC) that internalizes these costs in operational decisions. CHA-UC incorporates a robust linear approximation of conductor temperature and integration of expected depreciation costs due to hourly ETO into the objective function. Case studies on the Texas 123-bus backbone test system using NOAA weather data demonstrate that the proposed CHA-UC model reduces the total cost by 0.74\% and renewable curtailment by 85\% compared to static line rating (SLR) and outperforms quantile regression forest-based methods, while conventional DLR operation without risk consideration results in higher costs due to excessive ETO. Further analysis of the commitment decisions and the line temperature statistics confirms that the CHA-UC achieves safer line flows by shifting generator commitments. Finally, we examine the correlation behaviors that emerge between wind generation and DLR forecast errors, and show that CHA-UC adaptively manages this effect by relaxing flows for risk-hedging conditions while tightening flows for risk-amplifying ones.
Authors: Jing-Xuan Zhang, Genshun Wan, Jin Li, Jianqing Gao, Duo Zhao, Zhen-Hua Ling
While speech foundation models (SFMs) have demonstrated remarkable performance in audio-only tasks, their adaptation to multimodal scenarios remains underexplored. This work presents UASR-LLM, a novel framework that adapts frozen SFMs to unified visual speech recognition (VSR), automatic speech recognition (ASR), and audio-visual speech recognition (AVSR) by leveraging large language models (LLMs) as text decoders. Visual representations are injected into multiple SFM layers via visual injection modules, enabling multimodal fusion and unified representation learning. The augmented SFMs are connected to decoder-only LLMs through a feed-forward adaptor, where concatenated representations and instruction prompts guide transcription. We propose a two-stage training strategy consisting of visual injection pretraining followed by speech recognition finetuning. The pretraining stage aligns audio, visual, and audio-visual representations within the frozen SFM backbone, while the finetuning stage integrates LLMs for unified optimization across speech recognition tasks. Experimental results demonstrate superior performance over state-of-the-art baselines across VSR, ASR, and AVSR under both clean and noisy conditions. Ablation studies further confirm generalization across various SFMs and LLMs, validating the effectiveness of the proposed training strategy.
Authors: Hisayoshi Muramatsu
In linear time-invariant systems, the sensitivity function to disturbances is designed under a sensitivity tradeoff known as the waterbed effect. To compensate for a quasiperiodic disturbance, a quasiperiodic disturbance observer using time delays was proposed. Its sensitivity function avoids the sensitivity tradeoff, achieving wideband harmonic suppression without amplifying aperiodic disturbances or shifting harmonic suppression frequencies. However, its open-loop transfer function is not rational and does not satisfy the assumptions of existing Bode sensitivity integrals due to its time delays. This paper provides Bode-like sensitivity integrals for the quasiperiodic disturbance observer in both continuous-time and discrete-time representations and clarifies the avoided sensitivity tradeoff with time delays.
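For context, the classical continuous-time Bode sensitivity integral whose assumptions the delay-based observer falls outside of (a standard result, stated here for a rational, strictly proper open loop $L(s)$ of relative degree at least two with unstable open-loop poles $p_k$; it is not the paper's generalization):

\[
\int_{0}^{\infty} \ln \lvert S(j\omega) \rvert \, d\omega \;=\; \pi \sum_{k} \mathrm{Re}(p_k),
\]

so for a stable open loop the integral is zero, and any attenuation ($\ln\lvert S \rvert < 0$) over one frequency band must be balanced by amplification elsewhere; this is the waterbed effect the abstract refers to.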
Authors: Dipanjan Ghose, S Sivaranjani, Junjie Qin
Dynamic Wireless Electric Vehicle Charging (DWC) on electrified roadways is an emerging technology that can significantly reduce battery sizes, eliminate charging downtime, and alleviate range anxiety, especially for long-haul transportation and fleet operations of electric vehicles (EVs). However, these systems introduce new challenges for power system planning due to their short-duration and high-power demands, which can strain the grid if not properly managed. As the energy demands from DWC depend on vehicle speed, density, dwell time in charging zones, and load profiles along road segments, there is a need for integrated planning of such systems, jointly considering both traffic behavior and EV energy consumption. In this paper, we propose a traffic-aware grid planning framework for DWC. We leverage a macroscopic Cell Transmission Model of traffic flow to estimate real-time, spatiotemporal EV charging demand from DWC corridors. The demand model is then integrated into an AC Optimal Power Flow based formulation to optimally size a microgrid that supports DWC under varying traffic conditions while minimizing the cost of operation. Our framework explicitly models how spatiotemporal traffic patterns affect the utilization of grid resources to obtain system designs that achieve lower costs and are easier to operationalize as compared to planning models that rely on worst-case traffic data. We demonstrate the framework on data from a 14-mile segment of the I-210W highway in California, USA, evaluating multiple traffic scenarios such as free-flow, severe congestion, accidents of varying severity, and natural disasters such as forest fires. Our results demonstrate that traffic-aware grid planning significantly reduces infrastructure costs as compared to worst-case scenario modeling, while ensuring reliability of service in terms of meeting charging demands under diverse traffic conditions.
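The Cell Transmission Model underlying the demand estimate can be sketched in a few lines. This is a minimal one-step version of Daganzo's CTM with uniform cells and a free downstream exit (simplifying assumptions for illustration, not the paper's calibrated corridor model):

```python
import numpy as np

def ctm_step(n, q_max, n_max, inflow=0.0):
    """One update of Daganzo's Cell Transmission Model.

    n: vehicles currently in each cell. The flow between consecutive
    cells is the minimum of what the upstream cell can send (demand)
    and what the downstream cell can receive (supply).
    """
    demand = np.minimum(n, q_max)            # sending capacity per cell
    supply = np.minimum(n_max - n, q_max)    # receiving capacity per cell
    y = np.minimum(demand[:-1], supply[1:])  # inter-cell flows
    flow_in = np.concatenate(([min(inflow, supply[0])], y))
    flow_out = np.concatenate((y, [demand[-1]]))  # free exit downstream
    return n + flow_in - flow_out

# Three cells, capacity 1 veh/step, jam density 4 veh: congestion spreads out.
print(ctm_step(np.array([2.0, 1.0, 0.0]), q_max=1.0, n_max=4.0))  # [1. 1. 1.]
```

In the paper's framework, the per-cell occupancies produced by such updates, together with dwell times in charging zones, drive the spatiotemporal DWC load fed into the AC-OPF sizing problem.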
Authors: Ali Saeizadeh, Miead Tehrani-Moayyed, Davide Villa, J. Gordon Beattie Jr., Pedram Johari, Stefano Basagni, Tommaso Melodia
Accurate, low-latency channel modeling is essential for real-time wireless network simulation and digital-twin applications. Traditional modeling methods such as ray tracing, however, are computationally demanding and ill-suited to dynamic conditions. In this paper, we propose AIRMap, a deep-learning framework for ultra-fast radio-map estimation, along with an automated pipeline for creating the largest radio-map dataset to date. AIRMap uses a single-input U-Net autoencoder that processes only a 2D elevation map of terrain and building heights. Trained on 1.2M Boston-area samples and validated across four distinct urban and rural environments with varying terrain and building density, AIRMap predicts path gain with under 4 dB RMSE in 4 ms per inference on an NVIDIA L40S, over 100x faster than GPU-accelerated ray-tracing-based radio maps. A lightweight calibration using just 20% of field measurements reduces the median error to approximately 5%, significantly outperforming traditional simulators, which exceed 50% error. Integration into the Colosseum emulator and the Sionna SYS platform demonstrates near-zero error in spectral efficiency and block-error rate compared to measurement-based channels. These findings validate AIRMap's potential for scalable, accurate, and real-time radio map estimation in wireless digital twins.
Authors: Chandrima Thakur, Priyanka Ghosh, Rashmita Badhai, Sumit Kundu
This paper analyzes a NOMA-enabled dual-Intelligent Reflecting Surface (IRS) relay network integrated with Ambient Backscatter (BS) communication. The system comprises a source, an energy-constrained relay with energy harvesting (EH) and BS capabilities, two NOMA users, and a BS node. The relay adopts a time-switching relaying (TSR) protocol to harvest energy and forward information, while simultaneously enabling BS-based communication. Two IRSs are deployed to enhance the source-to-relay and relay-to-user links under blockage conditions. Closed-form expressions for the Outage Probability (OP) and throughput of both the main communication links and the BS-assisted secondary links are derived. Furthermore, throughput is analyzed under varying system parameters, including power allocation factors, reflection efficiency, the number of IRS elements, and transmission rate. Monte Carlo simulations validate the analytical results. Numerical findings reveal critical trade-offs between the main and BS links. The proposed framework provides useful insights for designing reliable and energy-efficient NOMA-IRS-aided BS networks for future IoT applications.
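The validation pattern used above, checking Monte Carlo outage estimates against closed-form expressions, can be illustrated on a much simpler channel. This sketch uses a generic single-link Rayleigh fading model (not the paper's dual-IRS NOMA channel; the SNR values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob_mc(avg_snr, snr_th, n_trials=200_000):
    """Monte Carlo outage probability under Rayleigh fading, where the
    instantaneous SNR is exponentially distributed with mean avg_snr."""
    inst_snr = rng.exponential(avg_snr, n_trials)
    return np.mean(inst_snr < snr_th)

avg_snr, snr_th = 10.0, 2.0
p_mc = outage_prob_mc(avg_snr, snr_th)
p_closed = 1.0 - np.exp(-snr_th / avg_snr)  # Rayleigh closed form
```

For the paper's cascaded IRS links the fading distributions are more involved, but the methodology is the same: derive the closed form, then confirm it with simulated channel draws.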
Authors: Niccolò Paglierani, Francesco Linsalata, Vineeth Teeda, Davide Scazzoli, Maurizio Magarini
This paper presents an autonomous sensing framework for identifying and localizing multiple users in Fifth Generation (5G) cooperative networks using an Unmanned Aerial Vehicle (UAV) that is not part of the serving access network. Unlike conventional aerial serving nodes, the proposed UAV operates passively and is dedicated solely to sensing. Passively receiving Uplink (UL) Sounding Reference Signals (SRS), the UAV requires only minimal initial coordination with the network infrastructure during the mission. A complete signal processing chain is proposed and developed, encompassing synchronization, user identification, and localization, all executed onboard the UAV during flight. The system autonomously plans and adapts its mission workflow to estimate multiple user positions within a single deployment, integrating flight control with real-time sensing. The approach is validated through extensive simulations and a full-scale low-altitude experimental campaign. Urban simulation scenarios show localization errors below 8 m, while rural field tests achieve errors below 3 m, with reliable synchronization and user identification ensured in both cases. The results confirm the feasibility of infrastructure-independent sensing UAVs as a core element of the emerging Low Altitude Economy (LAE), supporting situational awareness and rapid deployment in emergency or connectivity-limited environments.
Authors: Shyam Kamal, Sunidhi Pandey, Thach Ngoc Dinh, Cao Thanh Tinh
This paper introduces a Ramanujan inner product and its corresponding norm, establishing a novel framework for the stability analysis of hybrid and discrete-time systems as an alternative to traditional Euclidean metrics. We establish new $\epsilon$-$\delta$ stability conditions that utilize the unique properties of Ramanujan summations and their relationship with number-theoretic concepts. The proposed approach provides enhanced robustness guarantees and reveals fundamental connections between system stability and arithmetic properties of the system dynamics. Theoretical results are rigorously proven, and simulation results on numerical examples are presented to validate the efficacy of the proposed approach.
Authors: Ziyuan Zheng, Qingqing Wu, Yanze Zhu, Honghao Wang, Ying Gao, Wen Chen, Jian Xiong
This paper investigates a low-altitude integrated sensing and communication (ISAC) system that leverages cooperative rotatable active and passive arrays. We consider a downlink scenario where a base station (BS) with an active rotatable array serves multiple communication users and senses low-altitude targets, assisted by a rotatable reconfigurable intelligent surface (RIS). A rotation-aware geometry-based multipath model is developed to capture the impact of three-dimensional (3D) array orientations on both steering vectors and direction-dependent element gains. On this basis, we formulate a new optimization problem that maximizes the downlink sum rate subject to a transmit power budget, RIS unit-modulus constraints, mechanical rotation limits, and a sensing beampattern mean-squared-error (MSE) constraint. To address the resulting highly non-convex problem, we propose a penalty-based alternating-optimization (AO) framework that alternately updates the BS precoder, RIS phase shifts, and BS/RIS array rotation angles. The three blocks are efficiently handled via a convex optimization method based on quadratic-transform (QT) and majorization-minimization (MM), Riemannian conjugate gradient (RCG) on the unit-modulus manifold, and projected gradient descent (PGD) with Barzilai-Borwein step sizes, respectively. Numerical results in low-altitude geometries demonstrate that the proposed jointly rotatable BS-RIS architecture achieves significant sum-rate gains over fixed or partially rotatable baselines while guaranteeing sensing requirements, especially with directional antennas and in interference-limited regimes.
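As background for the precoder block, the quadratic transform from fractional programming (the standard identity; the paper's exact surrogate may differ) decouples each SINR-type ratio by introducing an auxiliary variable: for complex $a$ and $B > 0$,

\[
\frac{\lvert a \rvert^{2}}{B} \;=\; \max_{y \in \mathbb{C}} \; \Big( 2\,\mathrm{Re}(y^{*}a) - \lvert y \rvert^{2} B \Big), \qquad y^{\star} = \frac{a}{B},
\]

so alternating between the closed-form $y$-update and the remaining variables turns each ratio maximization into the convex subproblems solved within the AO iterations.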
Authors: Si-Yu Xiao, Xin-Di Zhao, Xiang-Zhan Wang, Tian-Hao Mao, Ying-Kai Liao, Xing-Yu Liao, Yu-Qiao Chen, Jun-Jie Wang, Shuang Liu, Tu-Pei Chen, Yang Liu
Casing collar locator (CCL) measurements are widely used as reliable depth markers for positioning downhole instruments in cased-hole operations, enabling accurate depth control for operations such as perforation. However, autonomous collar recognition in downhole environments remains challenging because CCL signals are often corrupted by toolstring- or casing-induced magnetic interference, while stringent size and power budgets limit the use of computationally intensive algorithms and specific operations require real-time, in-situ processing. To address these constraints, we propose Collar Recognition Nets (CRNs), a family of domain-specific lightweight 1-D convolutional neural networks for collar signature recognition from streaming CCL waveforms. With depthwise separable convolutions and input pooling, CRNs optimize efficiency without sacrificing accuracy. Our most compact model achieves an F1-score of 0.972 on field data with only 1,985~parameters and 8,208~MACs. Deployed on an ARM Cortex-M7 based embedded system using the TensorFlow Lite for Microcontrollers (TFLM) library, the model demonstrates a throughput of 1,000 inferences per second with a latency of 343.2 {\mu}s, confirming the feasibility of robust, autonomous, and real-time collar recognition under stringent downhole constraints.
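The parameter savings from depthwise separable convolutions mentioned above follow from a simple count (generic formulas with bias terms omitted; the layer sizes are illustrative, not the CRN configuration):

```python
def conv1d_params(c_in, c_out, k):
    """Standard 1-D convolution: one k-tap filter per (in, out) channel pair."""
    return c_in * c_out * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise conv (k taps per input channel) plus pointwise 1x1 mix."""
    return c_in * k + c_in * c_out

# Illustrative layer: 16 -> 32 channels, kernel size 5.
std = conv1d_params(16, 32, 5)        # 16 * 32 * 5 = 2560 parameters
sep = dw_separable_params(16, 32, 5)  # 16 * 5 + 16 * 32 = 592 parameters
print(sep / std)                      # savings factor ~ 1/c_out + 1/k
```

Stacking such layers is what keeps the full model under 2,000 parameters while preserving the receptive field needed for collar signatures.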
Authors: Yuanchao Li
Speech Emotion Recognition (SER) plays a pivotal role in understanding human communication, enabling emotionally intelligent systems, and serving as a fundamental component in the development of Artificial General Intelligence (AGI). However, deploying SER in real-world, spontaneous, and low-resource scenarios remains a significant challenge due to the complexity of emotional expression and the limitations of current speech and language technologies. This thesis investigates the integration of Automatic Speech Recognition (ASR) into SER, with the goal of enhancing the robustness, scalability, and practical applicability of emotion recognition from spoken language.
Authors: Abdoulaye Diack, Perry Nelson, Kwaku Agbesi, Angela Nakalembe, MohamedElfatih MohamedKhair, Vusumuzi Dube, Tavonga Siyavora, Subhashini Venugopalan, Jason Hickey, Uche Okonkwo, Abhishek Bapna, Isaac Wiafe, Raynard Dodzi Helegah, Elikem Doe Atsakpo, Charles Nutrokpor, Fiifi Baffoe Payin Winful, Kafui Kwashie Solaga, Jamal-Deen Abdulai, Akon Obu Ekpezu, Audace Niyonkuru, Samuel Rutunda, Boris Ishimwe, Michael Melese, Engineer Bainomugisha, Joyce Nakatumba-Nabende, Andrew Katumba, Claire Babirye, Jonathan Mukiibi, Vincent Kimani, Samuel Kibacia, James Maina, Fridah Emmah, Ahmed Ibrahim Shekarau, Ibrahim Shehu Adamu, Yusuf Abdullahi, Howard Lakougna, Bob MacDonald, Hadar Shemtov, Aisha Walcott-Bryant, Moustapha Cisse, Avinatan Hassidim, Jeff Dean, Yossi Matias
The advancement of speech technology has predominantly favored high-resource languages, creating a significant digital divide for speakers of most Sub-Saharan African languages. To address this gap, we introduce WAXAL, a large-scale, openly accessible speech dataset for 24 languages representing over 100 million speakers. The collection consists of two main components: an Automated Speech Recognition (ASR) dataset containing approximately 1,250 hours of transcribed, natural speech from a diverse range of speakers, and a Text-to-Speech (TTS) dataset with around 235 hours of high-quality, single-speaker recordings reading phonetically balanced scripts. This paper details our methodology for data collection, annotation, and quality control, which involved partnerships with four African academic and community organizations. We provide a detailed statistical overview of the dataset and discuss its potential limitations and ethical considerations. The WAXAL datasets are released at this https URL under the permissive CC-BY-4.0 license to catalyze research, enable the development of inclusive technologies, and serve as a vital resource for the digital preservation of these languages.
Authors: Alexandre Barbosa de Lima
Accurate channel state information in wideband multiple-input multiple-output (MIMO) systems is fundamentally constrained by pilot overhead, a challenge that intensifies as antenna counts and bandwidths scale toward 6G. This paper proposes a structure-informed hybrid estimator that formulates pilot-limited MIMO channel estimation as low-rank tensor completion from sparse pilot observations -- a severely underdetermined inverse problem that prior tensor approaches avoid by assuming fully observed received signal tensors. Canonical polyadic~(CP) and Tucker decompositions are comparatively analyzed: CP excels for specular channels whose rank-one multipath structure matches the CP parameterization exactly, while Tucker provides greater numerical stability at extreme pilot scarcity where CP exhibits heavy-tail divergence. A lightweight 3D U-Net learns residual components beyond the dominant low-rank structure, compensating for diffuse scattering and hardware non-idealities that algebraic priors alone cannot capture. On synthetic specular channels, Tucker completion achieves $10.88$~dB NMSE improvement over least squares and $7.83$~dB over orthogonal matching pursuit at $\rho = 10\%$ pilot density; CP outperforms Tucker by $13.11$~dB at SNR\,=\,20~dB under the specular multipath model. On DeepMIMO ray-tracing channels, the hybrid estimator surpasses CP by $2.26$~dB and Tucker by $4.80$~dB at $\rho = 8\%$, while remaining stable at $\rho = 2\%$ where CP diverges; algebraic structure consistently outperforms unconstrained deep learning across the full pilot-density range, with a margin growing from $1.53$~dB at $\rho = 2\%$ to $5.67$~dB at $\rho = 20\%$. Empirical recovery threshold analysis confirms that sample complexity scales with intrinsic channel dimensionality -- governed by the number of dominant propagation paths -- rather than with the ambient tensor size.
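Two of the evaluation quantities above can be made concrete with minimal helpers (generic definitions; the random Bernoulli mask is one plausible pilot pattern and the tensor sizes are placeholders, not the paper's measurement setup):

```python
import numpy as np

def nmse_db(h_hat, h):
    """Normalized mean-squared error in dB (lower is better)."""
    err = np.sum(np.abs(h_hat - h) ** 2)
    return 10 * np.log10(err / np.sum(np.abs(h) ** 2))

def pilot_mask(shape, rho, rng):
    """Bernoulli observation mask at pilot density rho (fraction observed)."""
    return rng.random(shape) < rho

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 8, 16)) + 1j * rng.normal(size=(8, 8, 16))
mask = pilot_mask(h.shape, 0.10, rng)       # rho = 10% pilot density
print(round(nmse_db(1.1 * h, h), 1))        # -20.0 (10% amplitude error)
```

Tensor completion then amounts to recovering the full `h` from `h[mask]` alone, which is why the problem is severely underdetermined at small `rho`.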
Authors: Chien-Chun Wang, Hung-Shin Lee, Hsin-Min Wang, Berlin Chen
Pre-trained models for automatic speech recognition (ASR) and speech enhancement (SE) have exhibited remarkable capabilities under matched noise and channel conditions. However, these models often suffer from severe performance degradation when confronted with domain shifts, particularly in the presence of unseen noise and channel distortions. In view of this, we present URSA-GAN, a unified and domain-aware generative framework specifically designed to mitigate mismatches in both noise and channel conditions. URSA-GAN leverages a dual-embedding architecture that consists of a noise encoder and a channel encoder, each pre-trained with limited in-domain data to capture domain-relevant representations. These embeddings condition a GAN-based speech generator, facilitating the synthesis of speech that is acoustically aligned with the target domain while preserving phonetic content. To enhance generalization further, we propose dynamic stochastic perturbation, a novel regularization technique that introduces controlled variability into the embeddings during generation, promoting robustness to unseen domains. Empirical results demonstrate that URSA-GAN effectively reduces character error rates in ASR and improves perceptual metrics in SE across diverse noisy and mismatched channel scenarios. Notably, evaluations on compound test conditions with both channel and noise degradations confirm the generalization ability of URSA-GAN, yielding relative improvements of 16.16% in ASR performance and 15.58% in SE metrics.
Authors: Xingzhi Huang, Ji Wang
Motivated by active wing flutter suppression in high-Mach-number flight, this paper presents a rapid boundary stabilization strategy for a two-dimensional PDE-modeled elastic plate with in-domain instabilities, where exponential stability is achieved with a decay rate that can be arbitrarily assigned by the user. First, the aeroelastic system is modeled as two-dimensional coupled wave PDEs with internal anti-damping terms, derived by Piston theory and Hamilton's principle. Using Fourier series expansion, the 2-D problem is decomposed into a large-scale 1-D system, based on which full-state boundary feedback control is designed via PDE backstepping transformation. To enable output-feedback implementation, a state observer is further designed to estimate the distributed states over the two-dimensional spatial domain using the available boundary measurements. Through Lyapunov analysis, the exponential stability of the 2-D elastic plate PDE under the proposed boundary control is established with a designer-tunable decay rate. Numerical simulations verify the effectiveness of the control strategy in suppressing flow-induced vibrations in a 2-D elastic plate.
Authors: Daohong Shen, Wei Feng, Yunfei Chen, Yongxu Zhu, Jinxia Cheng, Dapeng Wang, Shi Jin
Direct-to-cell (D2C) satellite communications have emerged as a crucial alternative to terrestrial communications in the sixth generation (6G) mobile networks due to their wide-area coverage capability. Unlike human-oriented communications, future 6G robot-oriented D2C satellite communications in autonomous operations place greater emphasis on ultimate task completion than on the intermediate stage of data transmission. This difference makes it crucial to evaluate the performance of each stage in a systematic manner and to consider a multistage integrated optimization. Motivated by this, we model the system with a sensing-communication-computing-control (SC3) closed loop and analyze it from an entropy-based perspective, from which a task-oriented system design method is developed. Furthermore, to manage the complexity of the closed-loop network, we decompose it into fine-grained functional structures and investigate the key challenges of collaborative sensing, collaborative computing, and collaborative control. A case study is presented to compare the proposed task-oriented scheme with conventional communication-oriented schemes, showing that the proposed method achieves better performance in system-level control cost. Finally, several open issues are outlined for future research and practical implementation.
Authors: Qingshun She, Jing Peng, Yangui Fang, Yu Xi, Kai Yu
This work investigates bidirectional Mamba (BiMamba) for unified streaming and non-streaming automatic speech recognition (ASR). Dynamic chunk size training enables a single model to support both offline decoding and streaming decoding with various latency settings. In contrast, existing BiMamba-based streaming methods are limited to fixed chunk size decoding, and when dynamic chunk size training is applied, training overhead increases substantially. To tackle this issue, we propose Trans-Chunk BiMamba (TC-BiMamba) for dynamic chunk size training. The Trans-Chunk mechanism trains both bidirectional sequences in an offline style with dynamic chunk sizes. On the one hand, compared to traditional chunk-wise processing, TC-BiMamba simultaneously achieves a 1.3x training speedup, reduces training memory by 50%, and improves model performance, since it can capture bidirectional context. On the other hand, experimental results show that TC-BiMamba outperforms U2++ and matches LC-BiMamba with a smaller model size.
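The chunk-masking idea underlying dynamic chunk size training can be sketched as follows. This is a minimal NumPy illustration of the general recipe popularized by U2-style models (each position attends up to the end of its own chunk, with the chunk size resampled per batch); the function names and the sampling recipe are illustrative assumptions, not the TC-BiMamba implementation.

```python
import numpy as np

def chunk_attention_mask(size, chunk_size):
    """Boolean (size, size) mask: entry [i, j] is True iff position i
    may attend to position j, i.e. j lies before the end of i's chunk."""
    idx = np.arange(size)
    chunk_end = (idx // chunk_size + 1) * chunk_size  # exclusive bound per row
    return idx[None, :] < chunk_end[:, None]

def dynamic_chunk_size(rng, max_len):
    """One common recipe: half the time train full-context (offline mode),
    otherwise sample a small random chunk size for streaming mode."""
    return max_len if rng.random() < 0.5 else int(rng.integers(1, 26))
```

With `chunk_size=2`, position 0 can see position 1 (same chunk) but not position 2, while every position can see the full past.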
Authors: Shihong Tan, Haoyu Wang, Youran Ni, Yingzhao Hou, Jiayue Luo, Zipei Hu, Han Dou, Zerui Han, Ningning Pan, Yuzhu Wang, Gongping Huang
Music source restoration (MSR) aims to recover unprocessed stems from mixed and mastered recordings. The challenge lies in both separating overlapping sources and reconstructing signals degraded by production effects such as compression and reverberation. We therefore propose DTT-BSR, a hybrid generative adversarial network (GAN) that combines a rotary positional embedding (RoPE) transformer for long-term temporal modeling with a dual-path band-split recurrent neural network (RNN) for multi-resolution spectral processing. Our model achieved 3rd place on the objective leaderboard and 4th place on the subjective leaderboard of the ICASSP 2026 MSR Challenge, demonstrating exceptional generation fidelity and semantic alignment with a compact size of 7.1M parameters.
Authors: Pelin Sekercioglu, Angela Fontan, Dimos V. Dimarogonas
This work addresses the edge-based synchronization problem in first-order multi-agent systems containing both cooperative and antagonistic interactions with one or multiple leader groups. The presence of multiple leaders and antagonistic interactions means that the multi-agent system typically does not achieve consensus, unless specific conditions (on the number of leaders and on the signed graph) are met, in which case the agents reach a trivial form of consensus. In general, we show that the multi-agent system exhibits a more general form of synchronization, including bipartite consensus and containment. Our approach proposes a signed edge-based agreement protocol for signed networks described by signed edge-Laplacian matrices. In particular, in this work, we present new spectral properties of signed edge-Laplacian matrices containing multiple zero eigenvalues and establish global exponential stability of the synchronization errors. Moreover, we explicitly compute the equilibrium to which all edge states converge, thereby characterizing the resulting synchronization behavior. Numerical simulations validate our theoretical results.
Authors: Yahia Salaheldin Shaaban, Salem Lahlou, Abdelrahman Sayed Sayed
This paper proposes HyperKKL, a novel learning approach for designing Kazantzis-Kravaris/Luenberger (KKL) observers for non-autonomous nonlinear systems. While KKL observers offer a rigorous theoretical framework by immersing nonlinear dynamics into a stable linear latent space, their practical realization relies on solving partial differential equations (PDEs) that are analytically intractable. Existing learning-based approximations of the KKL observer are mostly designed for autonomous systems and fail to generalize to driven dynamics without expensive retraining or online gradient updates. HyperKKL addresses this by employing a hypernetwork architecture that encodes the exogenous input signal to instantaneously generate the parameters of the KKL observer, effectively learning a family of immersion maps parameterized by the external drive. We rigorously evaluate this approach against a curriculum learning strategy that attempts to generalize from autonomous regimes via training heuristics alone. The approach is illustrated through numerical simulations on four benchmark examples: the Duffing, Van der Pol, Lorenz, and Rössler systems.
Authors: Hoan My Tran, Xin Wang, Wanying Ge, Xuechen Liu, Junichi Yamagishi
Deepfake speech utterances can be forged by replacing one or more words in a bona fide utterance with semantically different words synthesized with speech-generative models. While a dedicated synthetic word detector could be developed, we developed a cost-effective method that fine-tunes a pre-trained Whisper model to detect synthetic words while transcribing the input utterance via next-token prediction. We further investigate using partially vocoded utterances as the fine-tuning data, thus reducing the cost of data collection. Our experiments demonstrate that, on in-domain test data, the fine-tuned Whisper yields low synthetic-word detection error rates and transcription error rates. On out-of-domain test data with synthetic words produced with unseen speech-generative models, the fine-tuned Whisper remains on par with a dedicated ResNet-based detection model; however, the overall performance degradation calls for strategies to improve its generalization capability.
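Framing synthetic-word detection as next-token prediction amounts to training the model on transcripts in which synthetic spans are wrapped in special tokens. A minimal sketch of such target construction is below; the `<fake>`/`</fake>` tag tokens are hypothetical placeholders, as the abstract does not specify the paper's exact target format.

```python
def build_target(words, synthetic_flags, open_tag="<fake>", close_tag="</fake>"):
    """Wrap maximal runs of synthetic words in tag tokens, so a
    sequence-to-sequence model can flag them while transcribing."""
    out, in_fake = [], False
    for w, is_fake in zip(words, synthetic_flags):
        if is_fake and not in_fake:
            out.append(open_tag)
            in_fake = True
        if not is_fake and in_fake:
            out.append(close_tag)
            in_fake = False
        out.append(w)
    if in_fake:  # close a run that extends to the end of the utterance
        out.append(close_tag)
    return " ".join(out)
```

For example, `build_target(["the", "cat", "sat"], [False, True, False])` yields `"the <fake> cat </fake> sat"`; fine-tuning on such targets lets the decoder both transcribe and localize synthetic words.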
Authors: Chun-Wei Kong, Sebastian Escobar, Ibon Gracia, Jay McMahon, Morteza Lahijanian
Due to their expressive power, neural networks (NNs) are promising templates for functional optimization problems, particularly for reach-avoid certificate generation for systems governed by stochastic differential equations (SDEs). However, ensuring hard-constraint satisfaction remains a major challenge. In this work, we propose two constraint-driven training frameworks with guarantees for supermartingale-based neural certificate construction and controller synthesis for SDEs. The first approach enforces certificate inequalities via domain discretization and a bound-based loss, guaranteeing global validity once the loss reaches zero. We show that this method also enables joint NN controller-certificate synthesis with hard guarantees. For high-dimensional systems where discretization becomes prohibitive, we introduce a partition-free, scenario-based training method that provides arbitrarily tight PAC guarantees for certificate constraint satisfaction. Benchmarks demonstrate scalability of the bound-based method up to 5D, outperforming the state of the art, and scalability of the scenario-based approach to at least 10D with high-confidence guarantees.
Authors: Bin Xu, Yufei Zhou, Boling Song, Jingwen Sun, Yang Bian, Cheng Lu, Ye Wu, Jianfei Tu, Xiangxue Wang
We propose a Hierarchical Multi-scale Knowledge-aware Graph Network (HMKGN) that models multi-scale interactions and spatially hierarchical relationships within whole-slide images (WSIs) for cancer prognostication. Unlike conventional attention-based MIL, which ignores spatial organization, or graph-based MIL, which relies on static handcrafted graphs, HMKGN enforces a hierarchical structure with spatial locality constraints, wherein local cellular-level dynamic graphs aggregate spatially proximate patches within each region of interest (ROI) and a global slide-level dynamic graph integrates ROI-level features into WSI-level representations. Moreover, multi-scale integration at the ROI level combines coarse contextual features from broader views with fine-grained structural representations from local patch-graph aggregation. We evaluate HMKGN on four TCGA cohorts (KIRC, LGG, PAAD, and STAD; N=513, 487, 138, and 370) for survival prediction. It consistently outperforms existing MIL-based models, yielding improved concordance indices (10.85% better) and statistically significant stratification of patient survival risk (log-rank p < 0.05).
Authors: Salome Kazeminia, Carsten Marr, Bastian Rieck
Multiple instance learning (MIL) is a framework for weakly supervised classification, where labels are assigned to sets of instances, i.e., bags, rather than to individual data points. This paradigm has proven effective in tasks where fine-grained annotations are unavailable or costly to obtain. However, the effectiveness of MIL drops sharply when training data are scarce, such as for rare disease classification. To address this challenge, we propose incorporating topological inductive biases into the data representation space within the MIL framework. This bias introduces a topology-preserving constraint that encourages the instance encoder to maintain the topological structure of the instance distribution within each bag when mapping them to the MIL latent space. As a result, our Topology Guided MIL (TG-MIL) method enhances the performance and generalizability of MIL classifiers across different aggregation functions, especially under scarce-data regimes. Our evaluations show average performance improvements of 15.3% for synthetic MIL datasets, 2.8% for MIL benchmarks, and 5.5% for rare anemia classification compared to current state-of-the-art MIL models, where only 17-120 samples per class are available. We make our code publicly available.
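The flavor of a topology-preserving constraint can be conveyed with a simplified surrogate: penalize distortion of a bag's normalized pairwise-distance structure between input and latent space. This is an illustrative stand-in, assuming the structure-preservation idea; the paper's actual constraint may instead rely on persistent-homology machinery.

```python
import numpy as np

def pairwise_dists(X):
    """Euclidean distance matrix for rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def topo_surrogate_loss(bag_input, bag_latent):
    """Penalize change in the (scale-normalized) pairwise-distance
    structure of a bag as its instances are mapped into latent space."""
    D_in = pairwise_dists(bag_input)
    D_z = pairwise_dists(bag_latent)
    D_in = D_in / (D_in.max() + 1e-12)  # normalize away global scale
    D_z = D_z / (D_z.max() + 1e-12)
    return ((D_in - D_z) ** 2).mean()
```

A uniformly scaled embedding incurs (near-)zero loss, while an encoder that collapses a bag's instances is penalized, which is the behavior such a regularizer is meant to discourage.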
Authors: Jinge Ma, Xiaoyan Zhang, Gautham Vinod, Siddeshwar Raghavan, Jiangpeng He, Fengqing Zhu
Food portion estimation is crucial for monitoring health and tracking dietary intake. Image-based dietary assessment, which involves analyzing eating occasion images using computer vision techniques, is increasingly replacing traditional methods such as 24-hour recalls. However, accurately estimating the nutritional content from images remains challenging due to the loss of 3D information when projecting to the 2D image plane. Existing portion estimation methods are difficult to deploy in real-world scenarios due to their reliance on specific requirements, such as physical reference objects, high-quality depth information, or multi-view images and videos. In this paper, we introduce MFP3D, a new framework for accurate food portion estimation using only a single monocular image. Specifically, MFP3D consists of three key modules: (1) a 3D Reconstruction Module that generates a 3D point cloud representation of the food from the 2D image, (2) a Feature Extraction Module that extracts and concatenates features from both the 3D point cloud and the 2D RGB image, and (3) a Portion Regression Module that employs a deep regression model to estimate the food's volume and energy content based on the extracted features. MFP3D is evaluated on the MetaFood3D dataset, demonstrating significant improvements in portion estimation accuracy over existing methods.
Authors: Litian Liu, Reza Pourreza, Sunny Panchal, Apratim Bhattacharyya, Yubing Jian, Yao Qin, Roland Memisevic
Large Language Models (LLMs) are prone to generating plausible yet incorrect responses, known as hallucinations. Effectively detecting hallucinations is therefore crucial for the safe deployment of LLMs. Recent research has linked hallucinations to model uncertainty, suggesting that hallucinations can be detected by measuring dispersion over answer distributions obtained from multiple samples drawn from a model. While drawing from the distribution over tokens defined by the model is a natural way to obtain samples, in this work, we argue that it is suboptimal for the purpose of detecting hallucinations. We show that detection can be improved significantly by taking into account model uncertainty in the Bayesian sense. To this end, we propose a very simple, training-free approach based on perturbing an appropriate subset of model parameters, or equivalently hidden unit activations, during sampling. We demonstrate that our approach significantly improves inference-time hallucination detection over standard sampling across diverse datasets, model architectures, and uncertainty metrics.
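The core mechanism (perturb parameters, resample, measure dispersion of the answer distribution) can be illustrated on a toy binary classifier standing in for an LLM; the real method perturbs a subset of an LLM's parameters or hidden activations, and everything below (model, noise scale, entropy metric) is an illustrative assumption.

```python
import numpy as np

def answer_entropy(w, x, n_samples=2000, sigma=1.0, seed=0):
    """Sample binary 'answers' under Gaussian perturbations of the
    parameters w and return the entropy (bits) of the answer distribution.
    High entropy = high dispersion = likely hallucination flag."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_samples):
        w_pert = w + sigma * rng.standard_normal(w.shape)
        votes.append(int(w_pert @ x > 0))
    p = float(np.mean(votes))
    p = min(max(p, 1e-12), 1 - 1e-12)  # avoid log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
```

An input with a large decision margin yields near-zero entropy under perturbation, while a marginal input yields near-maximal entropy, mirroring how parameter-perturbation dispersion separates confident from uncertain (potentially hallucinated) answers.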
Authors: Mohammad Moulaeifard, Peter H. Charlton, Nils Strodthoff
Photoplethysmography (PPG)-based blood pressure (BP) estimation represents a promising alternative to cuff-based BP measurements. Recently, an increasing number of deep learning models have been proposed to infer BP from the raw PPG waveform. However, these models have been predominantly evaluated on in-distribution test sets, which immediately raises the question of the generalizability of these models to external datasets. To investigate this question, we trained five deep learning models on the recently released PulseDB dataset, provided in-distribution benchmarking results on this dataset, and then assessed out-of-distribution performance on several external datasets. The best model (XResNet1d101) achieved in-distribution MAEs of 9.4 and 6.0 mmHg for systolic and diastolic BP respectively on PulseDB (with subject-specific calibration), and 14.0 and 8.5 mmHg respectively without calibration. Equivalent MAEs on external test datasets without calibration ranged from 15.0 to 25.1 mmHg (SBP) and 7.0 to 10.4 mmHg (DBP). Our results indicate that performance is strongly influenced by differences in BP distributions between datasets. We investigated a simple way of improving performance through sample-based domain adaptation and put forward recommendations for training models with good generalization properties. With this work, we hope to raise awareness among researchers of the importance and challenges of out-of-distribution generalization.
Authors: Mohammad Moulaeifard, Loic Coquelin, Mantas Rinkevičius, Andrius Sološenko, Oskar Pfeffer, Ciaran Bench, Nando Hegemann, Sara Vardanega, Manasi Nandi, Jordi Alastruey, Christian Heiss, Vaidotas Marozas, Andrew Thompson, Philip J. Aston, Peter H. Charlton, Nils Strodthoff
Photoplethysmography (PPG) is a widely used non-invasive physiological sensing technique, suitable for various clinical applications. Such clinical applications are increasingly supported by machine learning methods, raising the question of the most appropriate input representation and model choice. Comprehensive comparisons, in particular across different input representations, are scarce. We address this gap in the research landscape with a comprehensive benchmarking study covering three kinds of input representations (interpretable features, image representations, and raw waveforms) across prototypical regression and classification use cases: blood pressure and atrial fibrillation prediction. In both cases, the best results are achieved by deep neural networks operating on raw time series as input representations. Within this model class, the best results are achieved by modern convolutional neural networks (CNNs), but depending on the task setup, shallow CNNs are often also very competitive. We envision that these results will help guide researchers' choice of machine learning methods for PPG data, even beyond the use cases presented in this work.
Authors: Zhiyuan Tang, Dong Wang, Zhikai Zhou, Yong Liu, Shen Huang, Shidong Shang
Full-text error correction with Large Language Models (LLMs) for Automatic Speech Recognition (ASR) is attracting increased attention for its ability to address a wide range of error types, such as punctuation restoration and inverse text normalization, across long context. However, challenges remain regarding stability, controllability, completeness, and fluency. To mitigate these issues, this paper proposes the Chain of Correction (CoC), which uses a multi-turn chat format to correct errors segment by segment, guided by pre-recognized text and full-text context for better semantic understanding. Utilizing the open-sourced ChFT dataset, we fine-tune a pre-trained LLM to evaluate CoC's performance. Experiments show that CoC significantly outperforms baseline and benchmark systems in correcting full-text ASR outputs. We also analyze correction thresholds to balance under-correction and over-rephrasing, extrapolate CoC on extra-long ASR outputs, and explore using other types of information to guide error correction.
Authors: Xiaochen Wei, Weiwei Guo, Wenxian Yu, Feiming Wei, Dongying Li
Multimodal remote sensing image registration aligns images from different sensors for data fusion and analysis. However, existing methods often struggle to extract modality-invariant features when faced with large nonlinear radiometric differences, such as those between SAR and optical images. To address these challenges, we propose OSDM-MReg, a novel multimodal image registration framework that bridges the modality gap through image-to-image translation. Specifically, we introduce a one-step unaligned target-guided conditional diffusion model (UTGOS-CDM) to translate source and target images into a unified representation domain. Unlike traditional conditional DDPMs, which require hundreds of iterative steps for inference, our model incorporates a novel inverse translation objective during training to enable direct prediction of the translated image in a single step at test time, significantly accelerating the registration process. After translation, we design a multimodal multiscale registration network (MM-Reg) that extracts and fuses both unimodal and translated multimodal images using the proposed multimodal fusion strategy, enhancing the robustness and precision of alignment across scales and modalities. Extensive experiments on the OSdataset demonstrate that OSDM-MReg achieves superior registration accuracy compared to state-of-the-art methods.
Authors: Junbo Wang, Haofeng Tan, Bowen Liao, Albert Jiang, Teng Fei, Qixing Huang, Bing Zhou, Zhengzhong Tu, Shan Ye, Yuhao Kang
Recent audio-to-image models have shown impressive performance in generating images of specific objects conditioned on their corresponding sounds. However, these models fail to reconstruct real-world landscapes conditioned on environmental soundscapes. To address this gap, we present Geo-contextual Soundscape-to-Landscape (GeoS2L) generation, a novel and practically significant task that aims to synthesize geographically realistic landscape images from environmental soundscapes. To support this task, we construct two large-scale geo-contextual multi-modal datasets, SoundingSVI and SonicUrban, which pair diverse environmental soundscapes with real-world landscape images. We propose SounDiT, a diffusion transformer (DiT)-based model that incorporates environmental soundscapes and geo-contextual scene conditioning to synthesize geographically coherent landscape images. Furthermore, we propose the Place Similarity Score (PSS), a practically-informed geo-contextual evaluation framework to measure consistency between input soundscapes and generated landscape images. Extensive experiments demonstrate that SounDiT outperforms existing baselines on the GeoS2L task, while the PSS effectively captures multi-level generation consistency across element, scene, and human perception levels. Project page: this https URL
Authors: Zifan Peng, Yule Liu, Zhen Sun, Mingchen Li, Zeren Luo, Jingyi Zheng, Wenhan Dong, Xinlei He, Xuechao Wang, Yingjie Xue, Shengmin Xu, Xinyi Huang
Large Audio Language Models (LALMs) have made significant progress. While increasingly deployed in real-world applications, LALMs face growing safety risks from jailbreak attacks that bypass safety alignment. However, there remains a lack of an adversarial audio dataset and a unified framework specifically designed to evaluate and compare jailbreak attacks against them. To address this gap, we introduce JALMBench, a comprehensive benchmark that assesses LALM safety against jailbreak attacks, comprising 11,316 text samples and 245,355 audio samples (>1,000 hours). JALMBench supports 12 mainstream LALMs, 8 attack methods (4 text-transferred and 4 audio-originated), and 5 defenses. We conduct in-depth analysis on attack efficiency, topic sensitivity, voice diversity, and model architecture. Additionally, we explore mitigation strategies for the attacks at both the prompt and response levels. Our systematic evaluation reveals that LALMs' safety is strongly influenced by modality and architectural choices: text-based safety alignment can partially transfer to audio inputs, and interleaved audio-text strategies enable more robust cross-modal generalization. Existing general-purpose moderation methods only slightly improve security, highlighting the need for defense methods specifically designed for LALMs. We hope our work can shed light on the design principles for building more robust LALMs.
Authors: Nikhil Singh, Manuel Cherep, Pattie Maes
The fidelity with which neural networks can now generate content such as music presents a scientific opportunity: these systems appear to have learned implicit theories of such content's structure through statistical learning alone. This offers a potentially new lens on theories of human-generated media. When internal representations align with traditional constructs (e.g. chord progressions in music), they show how such categories can emerge from statistical regularities; when they diverge, they expose limits of existing frameworks and patterns we may have overlooked but that nonetheless carry explanatory power. In this paper, focusing on autoregressive music generators, we introduce a method for discovering interpretable concepts using sparse autoencoders (SAEs), extracting interpretable features from the residual stream of a transformer model. We make this approach scalable and evaluable using automated labeling and validation pipelines. Our results reveal both familiar musical concepts and coherent but uncodified patterns lacking clear counterparts in theory or language. As an extension, we show such concepts can be used to steer model generations. Beyond improving model transparency, our work provides an empirical tool for uncovering organizing principles that have eluded traditional methods of analysis and synthesis.
Authors: Cong Chen, Omer Karaduman, Xu Kuang
Problem definition: Accurately modeling consumer behavior in energy operations is challenging due to uncertainty, behavioral heterogeneity, and limited empirical data, particularly for low-frequency, high-impact events. While generative AI trained on large-scale human data offers new opportunities to study decision behavior, its role in operational applications remains unclear. We examine how generative agents can support customer behavior discovery in energy operations, complementing rather than replacing human-based experiments. Methodology/results: We introduce a novel approach leveraging generative agents, i.e., artificial agents powered by large language models, to simulate sequential customer decisions under dynamic electricity prices and outage risks. We find that these agents behave more optimally and rationally in simpler market scenarios, while their performance becomes more variable and suboptimal as task complexity rises. Furthermore, the agents exhibit heterogeneous customer preferences, consistently maintaining distinct, persona-driven reasoning patterns in both operational decisions and textual reasoning. Comparisons with dynamic programming and greedy policy benchmarks show alignment between specific personas and distinct heuristic decision policies. In low-frequency, high-impact events such as blackouts, agents prioritize energy reliability over cost or profit, demonstrating their ability to uncover behavioral patterns beyond the rigidity of traditional mathematical models. Managerial implications: Our findings suggest that behavioral generative agents can serve as scalable and flexible tools for studying consumer behavior in energy operations. By enabling controlled experiments across heterogeneous customer types and rare events, these agents can enhance the design of energy management systems and support more informed analysis of energy policies and incentive programs.
Authors: Christoph Minixhofer, Ondrej Klejch, Peter Bell
Evaluation of Text to Speech (TTS) systems is challenging and resource-intensive. Subjective metrics such as Mean Opinion Score (MOS) are not easily comparable between works. Objective metrics are frequently used, but rarely validated against subjective ones. Both kinds of metrics are challenged by recent TTS systems capable of producing synthetic speech indistinguishable from real speech. In this work, we introduce Text to Speech Distribution Score 2 (TTSDS2), a more robust and improved version of TTSDS. Across a range of domains and languages, it is the only one of 16 compared metrics to achieve a Spearman correlation above 0.50 for every domain and subjective score evaluated. We also release a range of resources for evaluating synthetic speech close to real speech: a dataset with over 11,000 subjective opinion score ratings; a pipeline for continually recreating a multilingual test dataset to avoid data leakage; and a continually updated benchmark for TTS in 14 languages.
Authors: Wenhai Lai, Kaiming Shen, Rui Zhang
This paper studies the optimal placement of ceiling-mounted metasurfaces (MTSs) to help focus the wireless signal beam onto the target receiver, as inspired by the theatre spotlight. We assume that a total of $M$ MTSs are deployed, and that there are $L$ possible positions for each MTS. The resulting signal-to-noise ratio (SNR) maximization problem is difficult to tackle directly because of the coupling between the placement decisions of the different MTSs. Mathematically, we are faced with a nonlinear discrete optimization problem with $L^M$ possible solutions. A remarkable result shown in this paper is that the above challenging problem can be efficiently solved within $O(ML^2\log(ML))$ time. There are two key steps in developing the proposed algorithm. First, we successfully decouple the placement variables of different MTSs by introducing a continuous auxiliary variable $\mu$; the discrete primal variables are now easy to optimize when $\mu$ is held fixed, but the optimization problem of $\mu$ is nonconvex. Second, we show that the optimization of continuous $\mu$ can be recast into a discrete optimization problem with only $LM$ possible solutions, so the optimal $\mu$ can now be readily obtained. Numerical results show that the proposed algorithm can not only guarantee a global optimum but also reach the optimal solution efficiently.
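The decoupling idea can be illustrated on a toy coherent-combining objective: choose one complex gain per MTS to maximize the magnitude of their sum. Since $|z| = \max_{\mu} \operatorname{Re}(z e^{-j\mu})$, the per-MTS choices decouple once the auxiliary variable $\mu$ is fixed. The sketch below sweeps $\mu$ over a dense grid rather than the paper's exact $LM$-point candidate set, and the simplified objective is an assumption for illustration, not the paper's SNR model.

```python
import itertools
import numpy as np

def place_decoupled(gains, n_grid=20000):
    """gains: (M, L) complex array; pick one entry per row to maximize
    |sum of chosen entries|. For fixed mu, each row independently
    maximizes Re(g * exp(-j*mu)); sweep mu and keep the best assignment."""
    M = gains.shape[0]
    best = 0.0
    for mu in np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False):
        rot = gains * np.exp(-1j * mu)
        choice = rot.real.argmax(axis=1)          # decoupled per-MTS decision
        val = abs(gains[np.arange(M), choice].sum())
        best = max(best, val)
    return best

def place_brute_force(gains):
    """Exhaustive search over all L^M placements, for comparison."""
    M, L = gains.shape
    return max(abs(sum(gains[m, c[m]] for m in range(M)))
               for c in itertools.product(range(L), repeat=M))
```

On small random instances the decoupled sweep recovers the brute-force optimum to within grid resolution, while scaling linearly rather than exponentially in $M$.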
Authors: Filippo A. Spinelli, Yifan Zhai, Fang Nan, Pascal Egli, Julian Nubert, Thilo Bleumer, Lukas Miller, Ferdinand Hofmann, Marco Hutter
Bulk material handling involves the efficient and precise moving of large quantities of materials, a core operation in many industries, including cargo ship unloading, waste sorting, construction, and demolition. These repetitive, labor-intensive, and safety-critical operations are typically performed using large hydraulic material handlers equipped with underactuated grippers. In this work, we present a comprehensive framework for the autonomous execution of large-scale material handling tasks. The system integrates specialized modules for environment perception, pile attack point selection, path planning, and motion control. The main contributions of this work are two reinforcement learning-based modules: an attack point planner that selects optimal grasping locations on the material pile to maximize removal efficiency and minimize the number of scoops, and a robust trajectory following controller that addresses the precision and safety challenges associated with underactuated grippers in movement, while utilizing their free-swinging nature to release material through dynamic throwing. We validate our framework through real-world experiments on a 40 t material handler in a representative worksite, focusing on two key tasks: high-throughput bulk pile management and high-precision truck loading. Comparative evaluations against human operators demonstrate the system's effectiveness in terms of precision, repeatability, and operational safety. To the best of our knowledge, this is the first complete automation of material handling tasks on a full scale.
Authors: Ziyan Wu, Ivan Korolija, Rui Tang
With the increasing penetration of renewable generation on the power grid, maintaining system balance requires coordinated demand flexibility from aggregations of buildings. Reinforcement learning has been widely explored for building controls because of its model-free nature. Open-source simulation testbeds are essential not only for training RL agents but also for fairly benchmarking control strategies. However, most building-sector testbeds target single buildings; multi-building platforms are relatively limited and typically rely on simplified models (e.g., Resistance-Capacitance) or data-driven approaches, which lack the ability to fully capture the physical intricacies and intermediate variables necessary for interpreting control performance. Moreover, these platforms often impose fixed inputs, outputs, and model formats, restricting their applicability as benchmarking tools across diverse control scenarios. To address these gaps, MuFlex, a scalable, open-source platform for multi-building flexibility coordination, was developed. MuFlex enables synchronous information exchange and co-simulation across multiple detailed building models programmed in EnergyPlus and Modelica, and adheres to the latest OpenAI Gym interface, providing a modular, standardized RL implementation. The platform's physics-based capabilities and workflow were demonstrated in a case study coordinating demand flexibility across four office buildings using the Soft Actor-Critic algorithm. The results show that, with four buildings coordinated, SAC reduced the aggregated peak demand by nearly 12% while maintaining indoor comfort and keeping power demand below the threshold. Additionally, the platform's scalability was investigated through computational benchmarking on building clusters with varying sizes, model types, and simulation programs.
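The Gym-style interface such a platform exposes can be sketched in a few lines. This toy environment only mimics the API shape (reset returning `(obs, info)`, step returning the five-tuple `(obs, reward, terminated, truncated, info)`); the dynamics, class name, and reward are invented placeholders, whereas MuFlex itself co-simulates detailed EnergyPlus and Modelica models.

```python
import numpy as np

class ToyBuildingClusterEnv:
    """Gym-style toy: coordinate per-building load adjustments so that
    aggregate demand stays below a peak threshold. Illustrative only."""
    def __init__(self, n_buildings=4, threshold=100.0, horizon=24, seed=0):
        self.n, self.threshold, self.horizon = n_buildings, threshold, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.base = 20.0 + 5.0 * self.rng.random(self.n)  # kW per building
        return self.base.copy(), {}

    def step(self, action):
        # action in [-1, 1]^n: fractional load shed/boost per building
        demand = self.base * (1.0 + 0.2 * np.clip(action, -1.0, 1.0))
        reward = -max(0.0, demand.sum() - self.threshold)  # penalize peak violation
        self.t += 1
        self.base = 20.0 + 5.0 * self.rng.random(self.n)   # next-step baseline
        return self.base.copy(), reward, self.t >= self.horizon, False, {}
```

Any agent written against this interface (e.g. Soft Actor-Critic from an off-the-shelf RL library) can then be swapped between the toy environment and a full co-simulation backend.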
Authors: Omkar Tupe, Max Hartman, Lav R. Varshney, Saurav Prakash
We consider federated learning of linearly-parameterized nonlinear systems. We establish theoretical guarantees on the effectiveness of federated nonlinear system identification compared to centralized approaches, demonstrating that the convergence rate improves as the number of clients increases. Although the convergence rates in the linear and nonlinear cases differ only by a constant, this constant depends on the feature map $\phi$, which can be carefully chosen in the nonlinear setting to increase excitation and improve performance. We experimentally validate our theory in physical settings where client devices are driven by i.i.d. control inputs and control policies exhibiting i.i.d. random perturbations, ensuring non-active exploration. Experiments use trajectories from nonlinear dynamical systems characterized by real-analytic feature functions, including polynomial and trigonometric components, representative of physical systems including pendulum and quadrotor dynamics. We analyze the convergence behavior of the proposed method under varying noise levels and data distributions. Results show that federated learning consistently improves convergence of any individual client as the number of participating clients increases.
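One simple realization of federated identification for a linear-in-features model $y = \theta^\top \phi(x, u)$ is for each client to share only its local sufficient statistics, from which the server solves the pooled normal equations; raw trajectories never leave the clients. This is a minimal sketch of the general idea under an assumed feature map, not the paper's specific algorithm or its convergence analysis.

```python
import numpy as np

def phi(x, u):
    """Example real-analytic feature map with polynomial and trig terms."""
    return np.array([x, x ** 2, np.sin(x), u])

def federated_sysid(clients, dim=4):
    """clients: list of (X, U, Y) local trajectories. Each client shares
    only (Phi^T Phi, Phi^T Y); the server solves the pooled least squares."""
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for X, U, Y in clients:  # raw data stays local; only statistics aggregate
        Phi = np.stack([phi(x, u) for x, u in zip(X, U)])
        A += Phi.T @ Phi
        b += Phi.T @ Y
    return np.linalg.solve(A, b)  # pooled least-squares estimate of theta
```

With noise-free data this recovers the true parameter exactly, and adding clients enriches the aggregated Gram matrix, which is the mechanism behind the improved convergence rate described above.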
Authors: Hong-Ye Hu, Abigail McClain Gomez, Liyuan Chen, Aaron Trowbridge, Andy J. Goldschmidt, Zachary Manchester, Frederic T. Chong, Arthur Jaffe, Susanne F. Yelin
Analog quantum simulators with global control fields have emerged as powerful platforms for exploring complex quantum phenomena. Despite these advances, a fundamental theoretical question remains unresolved: to what extent can such systems realize universal quantum dynamics under global control? Here we establish a necessary and sufficient condition for universal quantum computation using only global pulse control, proving that a broad class of analog quantum simulators is, in fact, universal. We further extend this framework to fermionic and bosonic systems, including modern platforms such as ultracold atoms in optical superlattices. Moreover, we observe that analog simulators driven by random global pulses exhibit information scrambling comparable to random unitary circuits. In a dual-species neutral-atom array setup, the measurement outcomes anti-concentrate on a $\log N$ timescale despite the presence of only temporal randomness, opening opportunities for efficient randomness generation. To bridge theoretical possibility with experimental reality, we introduce \emph{direct quantum optimal control}, a control framework that enables the synthesis of complex effective Hamiltonians while incorporating realistic hardware constraints. Using this approach, we experimentally engineer three-body interactions outside the blockade regime and demonstrate topological dynamics on a Rydberg-atom array. Experimental measurements reveal dynamical signatures of symmetry-protected-topological edge modes, confirming both the expressivity and feasibility of our method. Our work opens a new avenue for quantum simulation beyond native hardware Hamiltonians, enabling the engineering of effective multi-body interactions and advancing the frontier of quantum information processing with globally-controlled analog platforms.
Authors: Bowen Ye, Junyue Huang, Yang Liu, Xiaozhen Qiao, Xiang Yin
We investigate the task and motion planning problem for Signal Temporal Logic (STL) specifications in robotics. Existing STL methods rely on pre-defined maps or mobility representations, which are ineffective in unstructured real-world environments. We propose the \emph{Structured-MoE STL Planner} (\textbf{S-MSP}), a differentiable framework that maps synchronized multi-view camera observations and an STL specification directly to a feasible trajectory. S-MSP integrates STL constraints within a unified pipeline, trained with a composite loss that combines trajectory reconstruction and STL robustness. A \emph{structure-aware} Mixture-of-Experts (MoE) model enables horizon-aware specialization by projecting sub-tasks into temporally anchored embeddings. We evaluate S-MSP using a high-fidelity simulation of factory-logistics scenarios with temporally constrained tasks. Experiments show that S-MSP outperforms single-expert baselines in STL satisfaction and trajectory feasibility. A rule-based \emph{safety filter} at inference improves physical executability without compromising logical correctness, showcasing the practicality of the approach.
Authors: Efrayim Yanir, David Burshtein, Sharon Gannot
This paper introduces a novel speech enhancement (SE) approach based on a denoising diffusion probabilistic model (DDPM), termed Guided diffusion for speech enhancement (GDiffuSE). In contrast to conventional methods that directly map noisy speech to clean speech, our method employs a lightweight helper model to estimate the noise distribution, which is then incorporated into the diffusion denoising process via a guidance mechanism. This design improves robustness by enabling seamless adaptation to unseen noise types and by leveraging large-scale DDPMs originally trained for speech generation in the context of SE. We evaluate our approach on noisy signals obtained by adding noise samples from the BBC sound effects database to LibriSpeech utterances, showing consistent improvements over state-of-the-art baselines under mismatched noise conditions. Examples are available at our project webpage.
Authors: Yilong Li, Shuai Zhang, Yijing Zeng, Hao Zhang, Xinmiao Xiong, Jingyu Liu, Pan Hu, Suman Banerjee
Large Multimodal Models (LMMs) are inherently modular, consisting of vision and audio encoders, projectors, and large language models. Yet they are almost always executed monolithically, which underutilizes the heterogeneous accelerators (NPUs, GPUs, DSPs) in modern SoCs and leads to high end-to-end latency. In this paper, we present NANOMIND, a hardware--software co-design inference framework that breaks LMMs into modular ``bricks'' (vision, language, audio, etc.), maps each to its best-suited accelerator, and performs module-level dynamic offloading across accelerators on unified-memory SoCs. By combining customized hardware design, system-level scheduling, and optimized low-bit computation kernels, we demonstrate our framework with a compact, battery-powered device capable of running LMMs entirely on device. This prototype functions as a self-contained intelligent assistant that requires no network connectivity, while achieving higher throughput and superior power efficiency under strict resource constraints. The design further bypasses CPU bottlenecks and reduces redundant memory usage through token-aware buffer management and module-level coordination. Our system outperforms existing implementations in resource efficiency, cutting energy consumption by 42.3\% and GPU memory usage by 11.2\%. This enables a battery-powered device to run LLaVA-OneVision with a camera for nearly 20.8 hours.
Authors: Jianzhu Yao, Hongxu Su, Taobo Liao, Zerui Cheng, Huan Zhang, Xuechao Wang, Pramod Viswanath
Neural networks increasingly run on hardware outside the user's control (cloud GPUs, inference marketplaces). Yet ML-as-a-Service reveals little about what actually ran or whether returned outputs faithfully reflect the intended inputs. Users lack recourse against service downgrades (model swaps, quantization, graph rewrites, or discrepancies like altered ad embeddings). Verifying outputs is hard because floating-point (FP) execution on heterogeneous accelerators is inherently nondeterministic. Existing approaches are either impractical for real FP neural networks or reintroduce vendor trust. We present TAO, a Tolerance-Aware Optimistic verification protocol that accepts outputs within principled operator-level acceptance regions rather than requiring bitwise equality. TAO combines two error models: (i) sound per-operator IEEE-754 worst-case bounds and (ii) tight empirical percentile profiles calibrated across hardware. Discrepancies trigger a Merkle-anchored, threshold-guided dispute game that recursively partitions the computation graph until one operator remains, where adjudication reduces to a lightweight theoretical-bound check or a small honest-majority vote against empirical thresholds. Unchallenged results finalize after a challenge window, without requiring trusted hardware or deterministic kernels. We implement TAO as a PyTorch-compatible runtime and a contract layer currently deployed on the Ethereum Holesky testnet. The runtime instruments graphs, computes per-operator bounds, and runs unmodified vendor kernels in FP32 with negligible overhead (0.3% on Qwen3-8B). Across CNNs, Transformers, and diffusion models on A100, H100, RTX6000, and RTX4090 GPUs, empirical thresholds are $10^2$--$10^3$ times tighter than theoretical bounds, and bound-aware adversarial attacks achieve 0% success. Together, TAO reconciles scalability with verifiability for real-world heterogeneous ML compute.
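The idea of a sound per-operator acceptance region can be illustrated on a single dot product using the standard worst-case rounding-error factor for accumulated IEEE-754 operations. This is a hypothetical acceptance rule loosely modeled on the paper's description, not TAO's actual implementation: a claimed result is accepted if it lies within the worst-case error bound of any valid summation order.

```python
U = 2.0 ** -53  # unit roundoff for IEEE-754 double precision

def gamma(n):
    # standard worst-case factor for n accumulated rounded operations
    return n * U / (1.0 - n * U)

def dot(xs, ys):
    s = 0.0
    for x, y in zip(xs, ys):
        s += x * y
    return s

def within_tolerance(xs, ys, claimed):
    """Accept `claimed` if it is consistent with SOME rounding order of the
    dot product: both the local reference and the claimed value are each
    within gamma(n+1) * sum|x_i y_i| of the exact result, so their
    difference is bounded by twice that (sound, order-independent)."""
    n = len(xs)
    scale = sum(abs(x * y) for x, y in zip(xs, ys))
    tol = 2.0 * gamma(n + 1) * scale
    return abs(claimed - dot(xs, ys)) <= tol
```

Because the bound is sound, any honest execution (e.g., a different accumulation order on different hardware) is always accepted, while discrepancies far outside the region are provably not attributable to rounding.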
Authors: Ruiyang Jin, Yuke Zhou, Yujie Tang, Jie Song, Siyang Gao
Zeroth-order optimization (ZO) is a powerful framework for solving black-box problems: it estimates gradients from zeroth-order (function-value) data and uses them to update variables iteratively. The practical applicability of ZO critically depends on both the efficiency of single-step gradient estimation and the overall query complexity. However, existing constrained ZO algorithms cannot achieve efficiency on both simultaneously. In this work, we consider a general constrained optimization model with black-box objective and constraint functions. To solve it, we propose novel algorithms that achieve the best-known overall query complexity bound of $\mathcal{O}(d/\epsilon^4)$ to find an $\epsilon$-stationary solution ($d$ is the dimension of variables), while reducing the queries for estimating a single-step gradient from $\mathcal{O}(d)$ to $\mathcal{O}(1)$. Specifically, we integrate block gradient estimators with gradient descent ascent, yielding two algorithms, ZOB-GDA and ZOB-SGDA. Instead of constructing full gradients, they estimate only partial gradients along random blocks of dimensions, where the adjustable block sizes enable high single-step efficiency without sacrificing convergence guarantees. Our theoretical results establish the finite-sample convergence of the proposed algorithms for nonconvex optimization. Finally, numerical experiments demonstrate the superior performance of our algorithms compared to existing methods.
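The core ingredient, a block gradient estimator, is easy to sketch: per step, only the coordinates in a random block are probed with finite differences, and the update is rescaled by $d/|{\rm block}|$ so its expectation matches a full-gradient step. This unconstrained sketch omits the gradient-descent-ascent handling of black-box constraints that ZOB-GDA/ZOB-SGDA add; all names and constants are illustrative.

```python
import random

def block_grad_estimate(f, x, block, mu=1e-5):
    """Two-point finite-difference estimate of the partial gradient of f
    restricted to the coordinates in `block` (|block|+1 queries total)."""
    g = [0.0] * len(x)
    f0 = f(x)
    for i in block:
        xp = list(x)
        xp[i] += mu
        g[i] = (f(xp) - f0) / mu
    return g

def zo_block_descent(f, x0, block_size, steps, lr, seed=0):
    """Zeroth-order descent using random coordinate blocks."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for _ in range(steps):
        block = rng.sample(range(d), block_size)
        g = block_grad_estimate(f, x, block)
        # scale by d/block_size so the expected update equals a full step
        for i in block:
            x[i] -= lr * (d / block_size) * g[i]
    return x
```

Each iteration costs only $\mathcal{O}(1)$ queries when the block size is a constant, while the random block selection preserves convergence in expectation.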
Authors: Jinting Wang, Chenxing Li, Li Liu
Dance-to-music (D2M) generation aims to automatically compose music that is rhythmically and temporally aligned with dance movements. Existing methods typically rely on coarse rhythm embeddings, such as global motion features or binarized joint-based rhythm values, which discard fine-grained motion cues and result in weak rhythmic alignment. Moreover, temporal mismatches introduced by feature downsampling further hinder precise synchronization between dance and music. To address these problems, we propose \textbf{GACA-DiT}, a diffusion transformer-based framework with two novel modules for rhythmically consistent and temporally aligned music generation. First, a \textbf{genre-adaptive rhythm extraction} module combines multi-scale temporal wavelet analysis and spatial phase histograms with adaptive joint weighting to capture fine-grained, genre-specific rhythm patterns. Second, a \textbf{context-aware temporal alignment} module resolves temporal mismatches using learnable context queries to align music latents with relevant dance rhythm features. Extensive experiments on the AIST++ and TikTok datasets demonstrate that GACA-DiT outperforms state-of-the-art methods in both objective metrics and human evaluation. Project page: this https URL.
Authors: Si-Yu Xiao, Xin-Di Zhao, Tian-Hao Mao, Yi-Wei Wang, Yu-Qiao Chen, Hong-Yun Zhang, Jian Wang, Jun-Jie Wang, Shuang Liu, Tu-Pei Chen, Yang Liu
Accurate downhole depth measurement is essential for oil and gas well operations, directly influencing reservoir contact, production efficiency, and operational safety. Collar correlation using a casing collar locator (CCL) is fundamental for precise depth calibration. While neural networks have achieved significant progress in collar recognition, preprocessing methods for such applications remain underdeveloped. Moreover, the limited availability of real well data poses substantial challenges for training neural network models that require extensive datasets. This paper presents a system integrated into a downhole toolstring for CCL log acquisition to facilitate dataset construction. Comprehensive preprocessing methods for data augmentation are proposed, and their effectiveness is evaluated using baseline neural network models. Through systematic experimentation across diverse configurations, the contribution of each augmentation method is analyzed. Results demonstrate that standardization, label distribution smoothing, and random cropping are fundamental prerequisites for model training, while label smoothing regularization, time scaling, and multiple sampling significantly enhance model generalization capabilities. Incorporating the proposed augmentation methods into the two baseline models results in maximum F1 score improvements of 0.027 and 0.024 for the TAN and MAN models, respectively. Furthermore, applying these techniques yields F1 score gains of up to 0.045 for the TAN model and 0.057 for the MAN model compared to prior studies. Performance evaluation on real CCL waveforms confirms the effectiveness and practical applicability of our approach. This work addresses the existing gaps in data augmentation methodologies for training casing collar recognition models under CCL data-limited conditions, and provides a technical foundation for the future automation of downhole operations.
Authors: Vittorio Giammarino, Ahmed H. Qureshi
Goal-Conditioned Reinforcement Learning (GCRL) mitigates the difficulty of reward design by framing tasks as goal reaching rather than maximizing hand-crafted reward signals. In this setting, the optimal goal-conditioned value function naturally forms a quasimetric, motivating Quasimetric RL (QRL), which constrains value learning to quasimetric mappings and enforces local consistency through discrete, trajectory-based constraints. We propose Eikonal-Constrained Quasimetric RL (Eik-QRL), a continuous-time reformulation of QRL based on the Eikonal Partial Differential Equation (PDE). This PDE-based structure makes Eik-QRL trajectory-free, requiring only sampled states and goals, while improving out-of-distribution generalization. We provide theoretical guarantees for Eik-QRL and identify limitations that arise under complex dynamics. To address these challenges, we introduce Eik-Hierarchical QRL (Eik-HiQRL), which integrates Eik-QRL into a hierarchical decomposition. Empirically, Eik-HiQRL achieves state-of-the-art performance in offline goal-conditioned navigation and yields consistent gains over QRL in manipulation tasks, matching temporal-difference methods.
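The Eikonal constraint at the heart of the reformulation can be checked numerically: for unit-speed dynamics, a valid cost-to-go $d(x, g)$ should satisfy $|\nabla_x d(x, g)| = 1$ away from the goal, with $d(g, g) = 0$. Below is a small finite-difference residual of the kind such a PDE loss would penalize, evaluated on sampled states and goals rather than trajectories; function names and the unit-speed assumption are illustrative, not the paper's implementation.

```python
import math

def grad_norm(d, x, g, eps=1e-6):
    """Central finite-difference norm of the gradient of d(., g) at x."""
    s = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        s += ((d(xp, g) - d(xm, g)) / (2 * eps)) ** 2
    return math.sqrt(s)

def eikonal_residual(d, x, g, speed=1.0):
    # residual of the Eikonal PDE |grad_x d| = 1/speed; a PDE-based loss
    # would drive this to zero over sampled (state, goal) pairs
    return grad_norm(d, x, g) - 1.0 / speed

def euclid(x, g):
    """Euclidean distance: the exact Eikonal solution in free space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, g)))
```

In free space the Euclidean distance has zero residual, while a mis-scaled value function (e.g., twice the distance) is flagged by a nonzero residual, which is exactly the signal a trajectory-free training objective can exploit.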
Authors: Joonwon Seo
This monograph introduces a novel approach to polyphonic music generation by addressing the "Missing Middle" problem through structural inductive bias. Focusing on Beethoven's piano sonatas as a case study, we empirically verify the independence of pitch and hand attributes using normalized mutual information (NMI=0.167) and propose the Smart Embedding architecture, achieving a 48.30% reduction in parameters. We provide rigorous mathematical proofs using information theory (negligible loss bounded at 0.153 bits), Rademacher complexity (28.09% tighter generalization bound), and category theory to demonstrate improved stability and generalization. Empirical results show a 9.47% reduction in validation loss, confirmed by SVD analysis and an expert listening study (N=53). This dual theoretical and applied framework bridges gaps in AI music generation, offering verifiable insights for mathematically grounded deep learning.
Authors: Minhui Lu, Joshua D. Reiss
We present a physics-informed voiced backend renderer for singing-voice synthesis. Given synthetic single-channel audio and a fundamental-frequency trajectory, we train a time-domain Webster model as a physics-informed neural network to estimate an interpretable vocal-tract area function and an open-end radiation coefficient. Training enforces partial differential equation and boundary consistency; a lightweight DDSP path is used only to stabilize learning, while inference is purely physics-based. On sustained vowels (/a/, /i/, /u/), parameters rendered by an independent finite-difference time-domain Webster solver reproduce spectral envelopes competitively with a compact DDSP baseline and remain stable under changes in discretization, moderate source variations, and about ten percent pitch shifts. The in-graph waveform remains breathier than the reference, motivating periodicity-aware objectives and explicit glottal priors in future work.
Authors: Ivan Viakhirev, Kirill Borodin, Mikhail Gorodnichev, Grach Mkrtchian
Multi-branch deep neural networks like AASIST3 achieve performance comparable to the state of the art in audio anti-spoofing, yet their internal decision dynamics remain opaque compared to traditional input-level saliency methods. While existing interpretability efforts largely focus on visualizing input artifacts, the way individual architectural branches cooperate or compete under different spoofing attacks is not well characterized. This paper develops a framework for interpreting AASIST3 at the component level. Intermediate activations from fourteen branches and global attention modules are modeled with covariance operators whose leading eigenvalues form low-dimensional spectral signatures. These signatures train a CatBoost meta-classifier to generate TreeSHAP-based branch attributions, which we convert into normalized contribution shares and confidence scores (C) to quantify the model's operational strategy. By analyzing 13 spoofing attacks from the ASVspoof 2019 benchmark, we identify four operational archetypes, ranging from Effective Specialization (e.g., A09, Equal Error Rate (EER) 0.04%, C=1.56) to Ineffective Consensus (e.g., A08, EER 3.14%, C=0.33). Crucially, our analysis exposes a Flawed Specialization mode in which the model places high confidence in an incorrect branch, leading to severe performance degradation for attacks A17 and A18 (EER 14.26% and 28.63%, respectively). These quantitative findings link internal architectural strategy directly to empirical reliability, highlighting specific structural dependencies that standard performance metrics overlook.
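The signature-extraction step described above (activations to covariance operator to leading eigenvalues) can be sketched without any deep-learning machinery. The CatBoost meta-classifier and TreeSHAP attribution stages are omitted; the power-iteration solver and all sizes below are illustrative choices.

```python
import random

def covariance(acts):
    """Sample covariance of activation vectors (rows = samples)."""
    n, d = len(acts), len(acts[0])
    mean = [sum(row[j] for row in acts) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in acts:
        c = [row[j] - mean[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / (n - 1)
    return cov

def leading_eigenvalues(cov, k=2, iters=200, seed=0):
    """Top-k eigenvalues of a symmetric PSD matrix via power iteration
    with deflation -- these form the branch's spectral signature."""
    rng = random.Random(seed)
    d = len(cov)
    A = [row[:] for row in cov]
    eigs = []
    for _ in range(k):
        v = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(d)) for i in range(d))
        eigs.append(lam)
        for i in range(d):  # deflate: A -= lam * v v^T
            for j in range(d):
                A[i][j] -= lam * v[i] * v[j]
    return eigs
```

Concatenating such k-dimensional signatures across all branches yields the low-dimensional feature vector that the meta-classifier consumes.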
Authors: Hanning Guo, Farah Abdellatif, Hanwen Bi, Andrei Galbenus, Jon. N. Shah, Abigail Morrison, Jürgen Dammers
Brain foundation models have achieved remarkable advances across a wide range of neuroscience tasks. However, most existing models are limited to a single functional modality, restricting their ability to exploit complementary spatiotemporal dynamics and the collective data scale across imaging techniques. To address this limitation, we propose Brain-OF, the first omnifunctional brain foundation model jointly pretrained on fMRI, EEG and MEG, capable of handling both unimodal and multimodal inputs within a unified framework. To reconcile heterogeneous spatiotemporal resolutions, we introduce the Any-Resolution Neural Signal Sampler, which projects diverse brain signals into a shared semantic space. To further manage semantic shifts, the Brain-OF backbone integrates DINT attention with a Sparse Mixture of Experts, where shared experts capture modality-invariant representations and routed experts specialize in modality-specific semantics. Furthermore, we propose Masked Temporal-Frequency Modeling, a dual-domain pretraining objective that jointly reconstructs brain signals in both the time and frequency domains. Brain-OF is pretrained on a large-scale corpus comprising around 40 datasets and demonstrates superior performance across diverse downstream tasks, highlighting the benefits of joint multimodal integration and dual-domain pretraining.
Authors: Yifan Li, Mehrdad Salimitari, Taiyu Zhang, Guang Li, David Dreizin
Detection of rare lesions in whole-body CT is fundamentally limited by extreme class imbalance and low target-to-volume ratios, producing precision collapse despite high AUROC. Synthetic augmentation with diffusion models offers promise, yet pixel-space diffusion is computationally expensive, and existing mask-conditioned approaches lack controllable attribute-level regulation and paired supervision for accountable training. We introduce SALIENT, a mask-conditioned wavelet-domain diffusion framework that synthesizes paired lesion-masking volumes for controllable CT augmentation under long-tail regimes. Instead of denoising in pixel space, SALIENT performs structured diffusion over discrete wavelet coefficients, explicitly separating low-frequency brightness from high-frequency structural detail. Learnable frequency-aware objectives disentangle target and background attributes (structure, contrast, edge fidelity), enabling interpretable and stable optimization. A 3D VAE generates diverse volumetric lesion masks, and a semi-supervised teacher produces paired slice-level pseudo-labels for downstream mask-guided detection. SALIENT improves generative realism, as reflected by higher MS-SSIM (0.63 to 0.83) and lower FID (118.4 to 46.5). In a separate downstream evaluation, SALIENT-augmented training improves long-tail detection performance, yielding disproportionate AUPRC gains across low prevalences and target-to-volume ratios. Optimal synthetic ratios shift from 2x to 4x as labeled seed size decreases, indicating a seed-dependent augmentation regime under low-label conditions. SALIENT demonstrates that frequency-aware diffusion enables controllable, computationally efficient precision rescue in long-tail CT detection.
Authors: Domenico Ciuonzo, Alessio Zappone, Marco Di Renzo, Linlong Wu
This paper investigates channel-aware decision fusion empowered by massive MIMO systems and reconfigurable intelligent surfaces (RIS). By integrating both, we aim to improve goal-oriented (fusion) performance despite the unique propagation challenges introduced. Specifically, we investigate traditional favorable propagation properties in the context of RIS-aided Massive MIMO decision fusion. The above analysis is then leveraged (i) to design three sub-optimal simple fusion rules suited for the large-array regime and (ii) to devise an optimization criterion for RIS reflection coefficients based on long-term channel statistics. Simulation results confirm the appeal of the presented design.
Authors: Mihir Sinha, Kriti Thakur, Prasanta K. Panigrahi, Alivelu Manga Parimi, Mayukha Pal
High-impedance arc faults (HIAFs) in medium-voltage electrical distribution systems are difficult to detect due to their low fault current levels and nonlinear transient behavior. Traditional detection algorithms generally struggle with predictions under dynamic waveform scenarios. This research presents a data-driven linearization (DDL) framework for early prediction of HIAFs that offers both interpretability and scalability. The proposed method translates nonlinear current waveforms into a linearized space using coordinate embeddings and polynomial transformation, enabling precise modelling of fault dynamics. The total duration of the test waveform is 0.5 seconds, within which the arc fault occurs between 0.2 and 0.3 seconds. The DDL model, trained solely on the pre-fault healthy region (0.10 to 0.18 seconds), captures otherwise invisible fault precursors and predicts the fault onset at 0.189 seconds, approximately 11 milliseconds before the actual fault incidence at 0.200 seconds, demonstrating substantial early-warning capability. Performance evaluation comprises eigenvalue analysis, prediction error measures, error growth rate, and waveform regeneration fidelity. Such early prediction is especially helpful in preventing real-world faults and accidents, and confirms that the proposed approach reliably predicts arc faults in medium-voltage power distribution systems.
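The train-on-healthy, flag-on-residual pattern behind this kind of early prediction can be sketched with a plain linear one-step predictor standing in for the paper's coordinate embedding and polynomial transformation. A second-order autoregressive model is fitted on the pre-fault window; the onset is flagged when the one-step prediction error first exceeds a threshold calibrated on the healthy residuals. All model choices and thresholds here are simplified stand-ins.

```python
import math

def fit_ar2(xs):
    """Least-squares fit of x[t+1] = a*x[t] + b*x[t-1] on a healthy window
    (2x2 normal equations; stand-in for the DDL linearized model)."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    rhs = [0.0, 0.0]
    for t in range(1, len(xs) - 1):
        f = [xs[t], xs[t - 1]]
        for i in range(2):
            rhs[i] += f[i] * xs[t + 1]
            for j in range(2):
                A[i][j] += f[i] * f[j]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((A[1][1] * rhs[0] - A[0][1] * rhs[1]) / det,
            (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det)

def detect_onset(xs, train_end, k=6.0):
    """Return the first post-training sample index whose one-step prediction
    error exceeds mean + k*std of the healthy-region errors."""
    a, b = fit_ar2(xs[:train_end])
    errs = [0.0, 0.0] + [abs(xs[t] - (a * xs[t - 1] + b * xs[t - 2]))
                         for t in range(2, len(xs))]
    healthy = errs[2:train_end]
    mu = sum(healthy) / len(healthy)
    sd = (sum((e - mu) ** 2 for e in healthy) / len(healthy)) ** 0.5
    for t in range(train_end, len(errs)):
        if errs[t] > mu + k * sd:
            return t  # first sample the healthy model cannot explain
    return None
```

On a synthetic 0.5 s waveform sampled at 1 kHz with a high-frequency distortion injected from 0.2 s, the detector fires within a few samples of the fault onset, qualitatively mirroring the abstract's setup.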
Authors: Kriti Thakur, Alivelu Manga Parimi, Mayukha Pal
Accurate fault detection and localization in electrical distribution systems is crucial, especially with the increasing integration of distributed energy resources (DERs), which inject greater variability and complexity into grid operations. In this study, FaultXformer is proposed, a Transformer encoder-based architecture developed for automatic fault analysis using real-time current data obtained from phasor measurement units (PMUs). The approach utilizes time-series current data to first extract rich temporal information in Stage 1, which is crucial for identifying the fault type and precisely determining its location across multiple nodes. In Stage 2, these extracted features are processed to differentiate among distinct fault types and identify the respective fault location within the distribution system. This dual-stage Transformer encoder pipeline enables high-fidelity representation learning, considerably boosting overall performance. The model was validated on a dataset generated from the IEEE 13-node test feeder, simulated with 20 separate fault locations and several DER integration scenarios, utilizing current measurements from four strategically located PMUs. To demonstrate robust performance evaluation, stratified 10-fold cross-validation is performed. FaultXformer achieved average accuracies of 98.76% in fault type classification and 98.92% in fault location identification across cross-validation, consistently surpassing conventional deep learning baselines, namely a convolutional neural network (CNN), a recurrent neural network (RNN), and a long short-term memory (LSTM) network, by 1.70%, 34.95%, and 2.04% in classification accuracy and by 10.82%, 40.89%, and 6.27% in location accuracy, respectively. These results demonstrate the efficacy of the proposed model with significant DER penetration.
Authors: Arash Omidi, Tanmay Mishra, Mads R. Almassalkhi
This paper presents a hybrid energy system (HES) experimental testbed developed at the University of Vermont, featuring a dual-site architecture that integrates an on-campus laboratory facility with an off-campus solar and meteorological station. This supports the prototyping and validation of advanced HES control and optimization strategies. The platform integrates hardware-in-the-loop (HIL) simulations with a reconfigurable set of kVA-scale assets. A unified monitoring and communication architecture supports real-time data acquisition, model validation, and control implementation. The capabilities of the testbed are demonstrated through an HIL experiment in which a battery system participates in solar PV smoothing.
Authors: Xiaochong Dong, Jun Dan, Yingyun Sun, Yang Liu, Xuemin Zhang, Shengwei Mei
Driven by global climate change and the ongoing energy transition, the coupling between power supply capabilities and meteorological factors has become increasingly significant. Over the long term, accurately quantifying the power generation of renewable energy under the influence of climate change is essential for the development of sustainable power systems. However, due to interdisciplinary differences in data requirements, climate data often lacks the necessary hourly resolution to capture the short-term variability and uncertainties of renewable energy resources. To address this limitation, a super-resolution recurrent diffusion model (SRDM) has been developed to enhance the temporal resolution of climate data and model the short-term uncertainty. The SRDM incorporates a pre-trained decoder and a denoising network, that generates long-term, high-resolution climate data through a recurrent coupling mechanism. The high-resolution climate data is then converted into power value using the mechanism model, enabling the simulation of wind and photovoltaic (PV) power generation on future long-term scales. Case studies were conducted in the Ejina region of Inner Mongolia, China, using fifth-generation reanalysis (ERA5) and coupled model intercomparison project (CMIP6) data under two climate pathways: SSP126 and SSP585. The results demonstrate that the SRDM outperforms existing generative models in generating super-resolution climate data. Furthermore, the research highlights the estimation biases introduced when low-resolution climate data is used for power conversion.
Authors: Sebastian Kebrich, Felix Engelhardt, David Franzmann, Christina Büsing, Jochen Linßen, Heidi Heinrichs
Future greenhouse gas neutral energy systems will be dominated by renewable energy technologies providing variable supply subject to uncertain weather conditions. For this setting, we propose an algorithm for capacity expansion planning: we evaluate solutions optimised on a single year's data under different input weather years, and iteratively modify solutions whenever supply gaps are detected. These modifications lead to solutions with sufficient capacities to overcome periods of cold dark lulls and seasonal demand/supply fluctuations. A computational study on a German energy system model for 40 operating years shows that preventing supply gaps, i.e. finding a robust system, increases the total annual cost by 1.6-2.9%. In comparison, non-robust systems display loss of load close to 50% of total demand during some periods. The results underline the importance of assessing the feasibility of energy system models using atypical time series that combine dark-lull and cold-period effects.
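The iterate-and-repair loop can be sketched on a toy single-technology system: optimise capacity on the first weather year, scan the remaining years for supply gaps, and enlarge capacity until every hour of every year is covered. This deliberately omits storage, multiple technologies, and cost optimisation, and it assumes strictly positive capacity factors (in the paper, storage and backup cover zero-supply dark lulls); all data shapes are illustrative.

```python
def supply_gap(capacity, cf_series, demand_series):
    """Return (hour, shortfall) for the worst supply gap, or None if the
    capacity covers demand in every hour of this weather year."""
    worst = None
    for t, (cf, d) in enumerate(zip(cf_series, demand_series)):
        short = d - capacity * cf
        if short > 1e-9 and (worst is None or short > worst[1]):
            worst = (t, short)
    return worst

def robust_capacity(years, demand):
    """Optimise on the first weather year, then iteratively repair gaps
    found when re-evaluating under the other years."""
    capacity = max(d / cf for cf, d in zip(years[0], demand) if cf > 0)
    changed = True
    while changed:
        changed = False
        for cf_series in years[1:]:
            gap = supply_gap(capacity, cf_series, demand)
            if gap is not None:
                t, _ = gap
                if cf_series[t] <= 0:
                    raise ValueError("zero-supply hour: needs storage/backup")
                capacity = demand[t] / cf_series[t]  # enlarge to close gap
                changed = True
    return capacity
```

Each repair strictly increases capacity and makes the offending hour feasible, so the loop terminates with a capacity that is robust across all evaluated weather years.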
Authors: Arash J. Khabbazi, Kevin J. Kircher
How much energy, money, and emissions can advanced control of heating and cooling equipment save in real buildings? To address this question, researchers sometimes control a small number of thermal zones within a larger multi-zone building, then report savings for the controlled zones only. That approach can overestimate savings by neglecting heat transfer between controlled zones and adjacent zones. This paper mathematically characterizes the overestimation error when the dynamics are linear and the objectives are linear in the thermal load, as usually holds when optimizing energy efficiency, energy costs, or emissions. Overestimation errors can be large even in seemingly innocuous situations. For example, when controlling only interior zones that have no direct thermal contact with the outdoors, all perceived savings are fictitious. This paper provides an alternative estimation method based on the controlled and adjacent zones' temperature measurements. The new method does not require estimating how much energy the building would have used under baseline operations, so it removes the additional measurement and verification challenge of accurate baseline estimation.
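The interior-zone case can be reproduced with a two-zone steady-state energy balance. Zone 1 is interior (no outdoor surface) and zone 2 is a perimeter zone; the conductances and temperatures below are invented for illustration. Relaxing only the interior zone's setpoint reduces its own load, but the heat it sheds crosses into the adjacent zone, so the whole-building load is unchanged: the perceived savings are entirely fictitious, as the abstract states.

```python
def zone_loads(T1, T2, T_out, K12=50.0, K2_out=100.0):
    """Steady-state sensible cooling loads (W) for a toy 2-zone building.
    K12: inter-zone conductance (W/K); K2_out: zone-2-to-outdoor (W/K).
    Each load equals the heat entering the zone that HVAC must remove."""
    q1 = K12 * (T2 - T1)                           # interior zone
    q2 = K12 * (T1 - T2) + K2_out * (T_out - T2)   # perimeter zone
    return q1, q2

# baseline: both zones held at 24 C, outdoors at 34 C
q1_base, q2_base = zone_loads(24.0, 24.0, 34.0)
# "efficient" control of the interior zone only: relax it to 26 C
q1_ctrl, q2_ctrl = zone_loads(26.0, 24.0, 34.0)

perceived = q1_base - q1_ctrl                        # savings credited to zone 1
actual = (q1_base + q2_base) - (q1_ctrl + q2_ctrl)   # whole-building savings
```

Here the total load depends only on the perimeter zone's temperature and the outdoor conditions, so crediting the controlled interior zone with its local load reduction overstates the savings by exactly the heat exported to the neighbor.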
Authors: Alexander Bonora, Anna V. Guglielmi, Davide Scazzoli, Marco Giordani, Maurizio Magarini, Vineeth Teeda, Stefano Tomasin
Beamforming in multiple-input multiple-output (MIMO) systems should take interference mitigation into account. However, beamformer design requires accurate channel state information (CSI), which is often difficult to obtain due to channel variability, feedback overhead, or hardware constraints. For example, amplify-and-forward (AF) relays passively forward signals without measurement, precluding full CSI acquisition to and from the relay. To address these issues, this paper introduces a novel prediction-assisted optimization (PAO) framework for beamformer design in AF relay-assisted multiuser MIMO systems. The proposed solution at the AF relay aims at maximizing the signal-to-interference-plus-noise ratio (SINR). Unlike other methods, PAO relies solely on received power measurements, making it suitable for scenarios where CSI is unreliable or unavailable. PAO consists of two stages: a supervised-learning-based neural network (NN) that predicts the positions of transmitters using signal observations, and an optimization algorithm, guided by a digital twin (DT), that iteratively refines the beam direction of the relay in a simulated radio environment. As a key contribution, we validate the proposed framework using realistic measurements collected on a custom-built experimental millimeter wave (mmWave) platform, which enables training of the NN model under practical wireless conditions. The estimated information is then used to update the digital twin with knowledge of the surrounding environment, enabling online optimization. Numerical results show the trade-off between localization accuracy and beamforming performance and confirm that PAO remains robust even in the presence of localization errors while reducing the need for real-world measurements.
Authors: Xuanhao Mu, Jakob Geiges, Nan Liu, Thorsten Schlachter, Veit Hagenmeyer
In energy system analysis, coupling models with mismatched spatial resolutions is a significant challenge. A common solution is assigning weights to high-resolution geographic units for aggregation, but traditional models are limited by using only a single geospatial attribute. This paper presents an innovative method employing a self-supervised Heterogeneous Graph Neural Network to address this issue. This method models high-resolution geographic units as graph nodes, integrating various geographical features to generate physically meaningful weights for each grid point. These weights enhance the conventional Voronoi-based allocation method, allowing it to go beyond simple geographic proximity by incorporating essential geographic attributes. In addition, the self-supervised learning paradigm overcomes the lack of accurate ground-truth data. Experimental results demonstrate that applying weights generated by this method to cluster-based Voronoi Diagrams significantly enhances scalability, accuracy, and physical plausibility, while increasing precision compared to traditional methods.
Authors: Nicola Cibin, Bas Mulder, Herman Carstens, Peter Palensky, Alexandru Ştefanov
The digital transformation of power systems is accelerating the adoption of the IEC 61850 standard. However, its communication protocols, including Sampled Values (SV), lack built-in security features such as authentication and encryption, making them vulnerable to malicious packet injection. Such cyber attacks can delay fault clearance or trigger unintended circuit breaker operations. While most existing research focuses on detecting cyber attacks in digital substations, intrusion prevention systems have been disregarded because of the risk of potential communication network disruptions. This paper proposes a novel method using hybrid statistical-deep learning for the detection, prevention, and source localization of IEC 61850 SV injection attacks. The method uses exponentially modified Gaussian distributions to model communication network latency, and long short-term memory and Elman recurrent neural networks to detect anomalous variations in the estimated probability distributions. It effectively discards malicious SV frames with minimal processing overhead and latency, maintains robustness against communication network latency variation and time-synchronization issues, and guarantees a near-zero false positive rate in non-attack scenarios. Comprehensive validation is conducted on three testbeds involving industrial-grade devices, hardware-in-the-loop simulations, virtualized intelligent electronic devices and merging units, and high-fidelity emulated communication networks. Results demonstrate the method's suitability for practical deployment in IEC 61850-compliant digital substations.
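A minimal sketch of the latency-modeling idea (the parameter values and the likelihood threshold are illustrative assumptions; the paper's detector additionally uses recurrent networks to track changes in the fitted distribution): a frame whose inter-arrival latency is implausible under the fitted exponentially modified Gaussian is flagged.

```python
import math

def emg_pdf(x, mu, sigma, lam):
    """Density of an exponentially modified Gaussian: the sum of a
    Normal(mu, sigma^2) delay and an independent Exponential(lam) delay."""
    arg = (lam / 2.0) * (2.0 * mu + lam * sigma ** 2 - 2.0 * x)
    return (lam / 2.0) * math.exp(arg) * math.erfc(
        (mu + lam * sigma ** 2 - x) / (math.sqrt(2.0) * sigma))

def is_anomalous(latency_ms, mu=1.0, sigma=0.05, lam=10.0, threshold=1e-4):
    """Flag a frame whose latency has negligible likelihood under the model.
    Default parameters (milliseconds) are illustrative, not from the paper."""
    return emg_pdf(latency_ms, mu, sigma, lam) < threshold
```

A latency near the nominal region (e.g., 1.1 ms with these parameters) passes, while a grossly delayed or injected frame (e.g., 5 ms) is rejected.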
Authors: Noor Ul Ain, Lorenzo Miretti, Renato L. G. Cavalcante, Slawomir Stanczak
We study the impact of imperfect line-of-sight (LoS) phase tracking on the uplink performance of cell-free massive MIMO networks. Unlike prior works that assume perfectly known or completely unknown phases, we consider a realistic regime where LoS phases are estimated with residual uncertainty due to hardware impairments, mobility, and synchronization errors. To this end, we propose a Rician fading model where LoS components are rotated by imperfect phase estimates and attenuated by a deterministic \textit{phase-error penalty factor}. We derive a linear MMSE channel estimator that accounts for statistical phase errors and unifies prior results, reducing to the Bayesian MMSE estimator when phase is perfectly known and to a zero-mean model when no phase information is available. To address the non-Gaussian setting, we introduce a virtual uplink model that preserves second-order statistics of channel estimation, enabling the derivation of tractable virtual centralized and distributed MMSE beamformers. To ensure fair assessment of network performance, we apply these virtual beamformers to the operational uplink model that reflects the actual physical channel and compute the spectral efficiency bounds available in the literature. Numerical results show that our framework bridges idealized assumptions and practical tracking limitations, providing rigorous performance benchmarks and design insights for 6G cell-free networks.
Authors: Oscar Jed R. Chuy, Matthew T. Hale, Vignesh Sivaramakrishnan, Sean Phillips, Ricardo G. Sanfelice
As satellites have proliferated, interest has increased in autonomous rendezvous, proximity operations, and docking (ARPOD). A fundamental challenge in these tasks is the uncertainty inherent in operating in space, e.g., in measurements of satellites' states, which can make future states difficult to predict. Another challenge is that satellites' onboard processors are typically much slower than their terrestrial counterparts. Therefore, to address these challenges, we propose to solve an ARPOD problem with feedback optimization, which computes inputs to a system by measuring its outputs, feeding them into an optimization algorithm in the loop, and computing some number of iterations towards an optimal input. We focus on satellite rendezvous, and satellites' dynamics are modeled using the continuous-time Clohessy-Wiltshire equations, which are marginally stable. We develop an asymptotically stabilizing controller for them, and we use discrete-time gradient descent in the loop to compute inputs to them. Then, we analyze the hybrid feedback optimization system formed by the stabilized Clohessy-Wiltshire equations with gradient descent in the loop. We show that this model is well-posed and that maximal solutions are both complete and non-Zeno. Then, we show that solutions converge exponentially fast to a ball around a rendezvous point, and we bound the radius of that ball in terms of system parameters. Simulations show that this approach provides up to a 98.4\% reduction in the magnitude of disturbances across a range of simulations, which illustrates the viability of hybrid feedback optimization for autonomous satellite rendezvous.
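The feedback-optimization pattern (measure the output, take one gradient step on the objective, apply the updated input) can be sketched on a stable scalar plant; the plant coefficients, gain, and step size below are illustrative stand-ins, not the paper's stabilized Clohessy-Wiltshire dynamics.

```python
# Illustrative stable scalar plant x+ = a*x + b*u with output y = x.
a, b = 0.8, 0.5
g = b / (1.0 - a)     # steady-state input-to-output sensitivity dy*/du
eta = 0.2             # gradient step size
y_ref = 1.0           # target output (rendezvous-point analogue)

x, u, y = 0.0, 0.0, 0.0
for _ in range(200):
    y = x                           # 1) measure the plant output
    u -= eta * g * (y - y_ref)      # 2) one gradient step on 0.5*(y - y_ref)**2
    x = a * x + b * u               # 3) plant evolves under the new input
```

Because the optimizer runs in closed loop with the (stabilized) plant rather than on a model, the input converges to the value whose steady-state output is `y_ref`, which is the mechanism the paper analyzes for the hybrid system.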
Authors: Linhan Fang, Elias Raffoul, Xingpeng Li
While the rapid proliferation of electric vehicles (EVs) accelerates net-zero goals, uncoordinated charging activities impose severe operational challenges on distribution grids, including exacerbated peak loads, thermal overloading, and voltage violations. To overcome the computational intractability of jointly optimizing grid infrastructure reinforcements and battery energy storage system (BESS) installations, this paper proposes a novel three-stage diagnosis-driven co-planning (DDCP) framework. The methodology integrates a violation detection and quantification (VDQ) model to systematically identify system breaches, and a violation-mitigated BESS planning (VMBP) model for optimal BESS siting and sizing. Specifically, Stage I of the DDCP framework diagnoses critical bottleneck lines that render standalone BESS solutions infeasible. Stage II targets cable upgrades exclusively at the Top-N prioritized bottleneck lines, and Stage III then executes the optimal BESS deployment using a network-enhanced VMBP model. Furthermore, this study quantifies the EV hosting capacity thresholds before and after BESS integration across varying EV adoption rates and base voltages. Finally, a comprehensive comparative analysis evaluates four mitigation approaches: the VDQ-driven cable upgrade (VCU) model, the VMBP model, system-wide voltage uprating, and the proposed DDCP framework. The results demonstrate that the DDCP framework not only resolves the complex joint-optimization hurdle but also delivers superior techno-economic performance in addressing high-EV-penetration challenges.
Authors: Sunki Hong, Jisoo Lee, Yuanyuan Shi
Selecting the right deep learning model for power grid forecasting is challenging, as performance heavily depends on the data available to the operator. This paper presents a comprehensive benchmark of five modern neural architectures: two state space models (PowerMamba, S-Mamba), two Transformers (iTransformer, PatchTST), and a traditional LSTM. We evaluate these models on hourly electricity demand across six diverse US power grids for forecast windows between 24 and 168 hours. To ensure a fair comparison, we adapt each model with specialized temporal processing and a modular layer that cleanly integrates weather covariates. Our results reveal that there is no single best model for all situations. When forecasting using only historical load, PatchTST and the state space models provide the highest accuracy. However, when explicit weather data is added to the inputs, the rankings reverse: iTransformer improves its accuracy three times more efficiently than PatchTST. By controlling for model size, we confirm that this advantage stems from the architecture's inherent ability to mix information across different variables. Extending our evaluation to solar generation, wind power, and wholesale prices further demonstrates that model rankings depend on the forecast task: PatchTST excels on highly rhythmic signals like solar, while state space models are better suited for the chaotic fluctuations of wind and price. Ultimately, this benchmark provides grid operators with actionable guidelines for selecting the optimal forecasting architecture based on their specific data environments.
Authors: Hannes Homburger, Bastian Jäckl, Stefan Wirtensohn, Christian Stopp, Maximilian T. Fischer, Moritz Diehl, Daniel A. Keim, Johannes Reuter
The maritime sector is undergoing a disruptive technological change driven by three main factors: autonomy, decarbonization, and digital transformation. Addressing these factors necessitates a reassessment of inland vessel operations. This paper presents the design and development of a decision support system for ferry operations based on a shrinking-horizon optimal control framework. The problem formulation incorporates a mathematical model of the ferry's dynamics and environmental disturbances, specifically water currents and wind, which can significantly influence the dynamics. Real-world data and illustrative scenarios demonstrate the potential of the proposed system to effectively support ferry crews by providing real-time guidance. This enables enhanced operational efficiency while maintaining predefined maneuver durations. The findings suggest that optimal control applications hold substantial promise for advancing future ferry operations on inland waters. A video of the real-world ferry MS Insel Mainau operating on Lake Constance is available at: this https URL
Authors: Saurabh Prabhakar, Bijaya Ketan Panigrahi, Frede Blaabjerg, Subham Sahoo
Traditional protection systems for microgrids, which rely on high fault currents and continuous communication, struggle to keep up with the changing dynamics and cybersecurity concerns of decentralized networks. In this study, we introduce a novel biologically inspired protection system based on neuromorphic principles, where each distributed energy resource (DER) functions as a simple neuron. These neurons process local changes in voltage and current signals, converting them into spike patterns that represent the severity of disturbances. Just as neurons communicate via synapses in biological systems, we exploit transmission cables to coordinate between DERs, enabling them to share information and respond to faults collectively. Fault detection and circuit breaker activation are driven by a First-To-Spike (FTTS) mechanism, similar to the concept of traveling wave protection, but without needing GPS synchronization or communication links. A key innovation is the ability to use the timing of spikes to locally determine the nature of a fault, offering an intelligent, adaptive response to disturbances. Performance evaluation shows tripping latencies of 10-58 ms, surpassing conventional relays and even traveling-wave methods (60 ms), while maintaining detection accuracy above 98% and spatial selectivity over 97%, enabling real-time, communication-free, scalable protection for plug-and-play microgrids.
Authors: Damola Ajeyemi, Yiting Chen, Antonin Colot, Jorge Cortes, Emiliano Dall'Anese
This paper focuses on an AC optimal power flow (OPF) problem for distribution feeders equipped with controllable distributed energy resources (DERs). We consider a solution method that is based on a continuous approximation of the projected gradient flow - referred to as the safe gradient flow - that incorporates voltage and current information obtained either through real-time measurements or power flow computations. These two setups enable both online and offline implementations. The safe gradient flow involves the solution of convex quadratic programs (QPs). To enhance computational efficiency, we propose a novel framework that employs a neural network approximation of the optimal solution map of the QP. The resulting method has two key features: (a) it ensures that the DERs' setpoints are practically feasible, even for an online implementation or when an offline algorithm has an early termination; (b) it ensures convergence to a neighborhood of a strict local optimizer of the AC OPF. The proposed method is tested on a 93-node distribution system with realistic loads and renewable generation. The test shows that our method successfully regulates voltages within limits during periods with high renewable generation.
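For a single affine safety constraint, the quadratic program inside one safe-gradient-flow step has a closed-form solution, which makes the idea easy to sketch (the scalar objective, constraint, and step size are illustrative; the paper's QPs encode many voltage and current limits, and it is their solution map that the neural network approximates).

```python
def safe_gradient_step(x, grad_f, g, grad_g, alpha=1.0, lr=0.1):
    """One step of a (scalar) safe gradient flow. The direction d solves
        min ||d + grad_f||^2   s.t.   grad_g * d <= -alpha * g(x),
    which for a single affine constraint is a halfspace projection."""
    d = -grad_f
    viol = grad_g * d + alpha * g
    if viol > 0.0:                       # constraint active: project d
        d -= (viol / (grad_g * grad_g)) * grad_g
    return x + lr * d

# Minimize f(x) = x^2 while keeping g(x) = 1 - x <= 0 (i.e., x >= 1).
x = 2.0
for _ in range(300):
    x = safe_gradient_step(x, grad_f=2.0 * x, g=1.0 - x, grad_g=-1.0)
```

The iterate descends the objective while the constraint term brakes it at the boundary, so it settles at the constrained optimizer `x = 1` without ever leaving the feasible set, mirroring the "practically feasible setpoints even under early termination" property.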
Authors: Yongwei Yi, Xinping Yi, Wenjin Wang, Xiao Li, Shi Jin
In practical Multiuser Multiple-Input Multiple-Output (MU-MIMO) systems, symbol detection remains challenging due to severe inter-user interference and sensitivity to Channel State Information (CSI) uncertainty. In contrast to the mostly studied belief propagation-type model-driven methods, which incur high computational complexity, Soft Interference Cancellation (SIC) strikes a good balance between performance and complexity. To further address CSI mismatch and nonlinear effects, the recently proposed data-driven deep neural receivers, such as DeepSIC, leverage the advantages of deep neural networks for interference cancellation and symbol detection, demonstrating strong empirical performance. However, there is still a lack of theoretical underpinning for why and to what extent DeepSIC could generalize with the number of training samples. This paper proposes inspecting the fully data-driven DeepSIC detection within a Network-of-MLPs architecture, which is composed of multiple interconnected MLPs via outer and inner Directed Acyclic Graphs (DAGs). Within such an architecture, DeepSIC can be upgraded as a graph-based message-passing process using Graph Neural Networks (GNNs), termed GNNSIC, with shared model parameters across users and iterations. Notably, GNNSIC achieves excellent expressivity comparable to DeepSIC with substantially fewer trainable parameters, resulting in improved sample efficiency and enhanced user generalization. By conducting a norm-based generalization analysis using Rademacher complexity, we reveal that an exponential dependence on the number of iterations for DeepSIC can be eliminated in GNNSIC due to parameter sharing. Simulation results demonstrate that GNNSIC attains comparable or improved Symbol Error Rate (SER) performance to DeepSIC with significantly fewer parameters and training samples.
Authors: Haizum Hanim Ab Halim, Dalila Alias, Akmal Zaini Arsad, Lewis Tee Jen Looi, Rosdiadee Nordin, Denny Ng Kok Sum
The construction and operation of buildings is one of the major contributors to global greenhouse gas emissions. Inefficient energy usage due to human behavior and manual operation further increases the energy consumption of buildings. These challenges highlight the need for improved Building Energy Management Systems (BEMS) integrated with the Internet of Things (IoT) and data-driven intelligence to enhance energy efficiency in buildings and contribute to Net-Zero Energy Building (NZEB) targets. This paper offers four key contributions: i) a systematic review of IoT-enabled BEMS, including components, network architecture, and functional capabilities; ii) an evaluation of real-world BEMS datasets to support Artificial Intelligence (AI)-based predictive control; iii) an analysis of integration challenges related to interoperability, smart grids, and net-zero energy strategies; and iv) a case study highlighting global best practices, performance outcomes, and lessons learned for scaling advanced BEMS solutions.
Authors: Mohamed Shamseldein
Large language models (LLMs) have demonstrated remarkable tool-use capabilities, yet their application to power system operations remains largely unexplored. This paper presents Grid-Mind, a domain-specific LLM agent that interprets natural-language interconnection requests and autonomously orchestrates multi-fidelity power system simulations. The LLM-first architecture positions the language model as the central decision-making entity, employing an eleven-tool registry to execute Connection Impact Assessment (CIA) studies spanning steady-state power flow, N-1 contingency analysis, transient stability, and electromagnetic transient screening. A violation inspector grounds every decision in quantitative simulation outputs, while a three-layer anti-hallucination defence mitigates numerical fabrication risk through forced capacity-tool routing and post-response grounding validation. A prompt-level self-correction mechanism extracts distilled lessons from agent failures, yielding progressive accuracy improvements without model retraining. End-to-end evaluation on 50 IEEE 118-bus scenarios (DeepSeek-V3, 2026-02-23) achieved 84.0% tool-selection accuracy and 100% parsing accuracy. A separate 56-scenario self-correction suite passed 49 of 56 cases (87.5%) with a mean score of 89.3. These results establish a reproducible baseline for continued refinement while maintaining auditable, simulation-grounded decision support.
Authors: Marvin Dorn, Julian Hoffmann, André Weber, Veit Hagenmeyer
The energy transition requires flexible technologies to maintain grid stability, and electrolyzers are playing an increasingly important role in meeting this need. While previous studies often question the dynamic capabilities of large-scale alkaline electrolyzer systems, we assess their potential to provide balancing services using real manufacturer data. Unlike common approaches, we propose decoupling the total electrolyzer power from the smaller fraction of power actually offered on balancing markets. Adapting an existing methodology, we analyze alkaline electrolyzer systems and extend the assessment to Germany and Europe. Our results show that large-scale electrolyzers are technically capable of delivering fast-response balancing services, with significantly lower dynamic requirements than previously assumed. The planned electrolyzers in Germany could cover the entire balancing capacity market, potentially saving around 13 % of their electricity costs, excluding energy balancing revenues. The decoupling also resolves part of the trade-off for electrolyzer manufacturers, enabling the design of less dynamic but more stable systems.
Authors: Thien Duc Hua, Mohammadali Mohammadi, Hien Quoc Ngo, Michail Matthaiou
This study explores a next-generation multiple access (NGMA) framework for cell-free massive MIMO (CF-mMIMO) systems enhanced by stacked intelligent metasurfaces (SIMs), aiming to improve simultaneous wireless information and power transfer (SWIPT) performance. A fundamental challenge lies in optimally selecting the operating modes of access points (APs) to jointly maximize the received energy and satisfy spectral efficiency (SE) quality-of-service constraints. Practical system impairments, including a non-linear harvested energy model, pilot contamination (PC), channel estimation errors, and reliance on long-term statistical channel state information (CSI), are considered. We derive closed-form expressions for both the achievable SE and the average sum harvested energy (sum-HE). A mixed-integer non-convex optimization problem is formulated to jointly optimize the SIM phase shifts, AP mode selection, and power allocation to maximize average sum-HE under SE and average harvested energy constraints. To solve this problem, we propose a centralized training, decentralized execution (CTDE) framework based on deep reinforcement learning (DRL), which efficiently handles high-dimensional decision spaces. A Markovian environment and a normalized joint reward function are introduced to enhance the training stability across on-policy and off-policy DRL algorithms. Additionally, we provide a two-phase convex-based solution as a robust theoretical performance benchmark. Numerical results demonstrate that the proposed DRL-based CTDE framework achieves SWIPT performance comparable to the convexification-based solution, while significantly outperforming baselines.
Authors: Junbin Yu, Tianyu Lu, Mohammadali Mohammadi, Michail Matthaiou
This paper proposes a novel optimization framework for enhancing the security resilience of cell-free massive multiple-input multiple-output (CF-mMIMO) networks with multi-antenna access points (APs) and protective partial zero-forcing (PPZF) under active eavesdropping. Based on the main principles of absorption, adaptation, and recovery, we formulate a security-aware resilience metric to quantify the system performance during and after a security outage. A multi-user service priority-aware power allocation problem is formulated to minimize the mean squared error (MSE) between real-time and desired security efficiency, thereby enabling a trade-off between the target user's secrecy performance and multi-user quality of service (QoS). To solve this non-convex problem, a security-aware iterative algorithm based on the successive convex approximation (SCA) is employed. The proposed algorithm determines the optimal power allocation strategy by balancing solution quality against recovery time. At each iteration, it evaluates the overall resilience score and selects the strategy that achieves the highest value. Simulation results confirm that the proposed framework significantly improves the resilience of CF-mMIMO networks, allowing flexible adaptation between rapid recovery and high-quality recovery, depending on system requirements.
Authors: Geon Roh, Jip Kim
Dynamic line rating (DLR) enables greater utilization of existing transmission lines by leveraging real-time weather data. However, the elevated temperature operation (ETO) of conductors under DLR is often overlooked, despite its long-term impact on conductor health. This paper addresses this issue by 1) quantifying risk-based depreciation costs associated with ETO and 2) proposing a Conductor Health-Aware Unit Commitment (CHA-UC) that internalizes these costs in operational decisions. CHA-UC incorporates a robust linear approximation of conductor temperature and integration of expected depreciation costs due to hourly ETO into the objective function. Case studies on the Texas 123-bus backbone test system using NOAA weather data demonstrate that the proposed CHA-UC model reduces the total cost by 0.74\% and renewable curtailment by 85\% compared to static line rating (SLR) and outperforms quantile regression forest-based methods, while conventional DLR operation without risk consideration resulted in higher costs due to excessive ETO. Further analysis of the commitment decisions and the line temperature statistics confirms that the CHA-UC achieves safer line flows by shifting generator commitments. Finally, we examine the emergent correlation behaviors arising between wind generation and DLR forecast errors, and show that CHA-UC adaptively manages this effect by relaxing flows for risk-hedging conditions while tightening flows for risk-amplifying ones.
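The flavor of a linearized conductor heat balance can be sketched in closed form (the coefficients and the single lumped cooling term are illustrative assumptions; the paper's robust linear approximation and the underlying IEEE-style thermal model are more detailed):

```python
def steady_conductor_temp(I, Ta, qs, R25=7.3e-5, alpha=0.0039, k=20.0):
    """Solve the linearized steady-state heat balance
        I^2 * R25 * (1 + alpha*(T - 25)) + qs = k * (T - Ta)
    for the conductor temperature T (degC). I is current in A, Ta the
    ambient temperature, qs the solar gain (W/m), k a lumped cooling
    coefficient (W/(m*degC)). All coefficient values are illustrative."""
    num = I ** 2 * R25 * (1.0 - 25.0 * alpha) + qs + k * Ta
    den = k - I ** 2 * R25 * alpha
    return num / den
```

With zero current and no solar gain the conductor sits at ambient temperature, and temperature rises monotonically with current, which is the elevated-temperature-operation effect whose depreciation cost the CHA-UC model internalizes.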
Authors: Mobina Nankali, Michael W. Levin
This work addresses electric vehicle (EV) charging station placement through a bi-level optimization model, where the upper-level planner maximizes net revenue by selecting station locations under budget constraints, while EV users at the lower level choose routes and charging stations to minimize travel and charging costs. To account for range anxiety, we construct a battery-expanded network and apply a shortest path algorithm with Frank-Wolfe traffic assignment. Our primary contribution is developing the first exact solution algorithm for large-scale EV charging station placement problems. We propose a Branch-and-Price-and-Cut algorithm enhanced with value function cuts and column generation. While existing research relies on heuristic methods that provide no optimality guarantees or exact algorithms that require prohibitively long runtimes, our exact algorithm delivers globally optimal solutions with mathematical certainty within reasonable runtimes. Computational experiments on the Eastern Massachusetts network (74 nodes, 248 links), the Anaheim network (416 nodes, 914 links), and the Barcelona network (110 zones, 1,020 nodes, and 2,512 links) demonstrate exceptional performance. Our algorithm terminates within minutes rather than hours, while achieving optimality gaps below 1% across all instances. This result represents a computational speedup of over two orders of magnitude compared to existing methods. The algorithm successfully handles problems with over 300,000 feasible combinations, transforming EV charging infrastructure planning from a computationally prohibitive problem into a tractable optimization task suitable for practical decision making in real-world networks.
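The battery-expanded network idea can be sketched as a label-setting shortest path over (node, state-of-charge) states; the fixed recharge-time penalty, full-recharge policy, and integer charge units are simplifying assumptions (the paper further embeds such path computations inside Frank-Wolfe traffic assignment).

```python
import heapq

def ev_shortest_path(edges, stations, source, target, capacity):
    """Shortest travel time from source to target in a battery-expanded
    network. edges: {u: [(v, travel_time, energy)]}; stations: nodes where
    the EV may recharge to full for a fixed time penalty (an assumption).
    Returns None when no charge-feasible path exists."""
    CHARGE_TIME = 5
    settled = {}
    pq = [(0, source, capacity)]          # (time, node, remaining charge)
    while pq:
        t, u, soc = heapq.heappop(pq)
        if (u, soc) in settled:
            continue
        settled[(u, soc)] = t
        if u == target:
            return t
        if u in stations and soc < capacity:
            heapq.heappush(pq, (t + CHARGE_TIME, u, capacity))
        for v, tt, e in edges.get(u, []):
            if soc >= e:                  # edge feasible only with enough charge
                heapq.heappush(pq, (t + tt, v, soc - e))
    return None
```

On a toy chain A-B-C where each leg consumes 3 units and the battery holds 4, the route is feasible only if B hosts a station, illustrating how station placement reshapes the lower-level routing problem.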
Authors: Zeinab Salehi, Elizabeth L. Ratnam, Yijun Chen, Ian R. Petersen, Guodong Shi, Duncan S. Callaway
Electricity market design that accounts for grid constraints such as voltage and thermal limits at the distribution level can increase opportunities for the grid integration of Distributed Energy Resources (DERs). In this paper, we consider rooftop solar backed by battery storage connected to a distribution grid. We design an electricity market to support customers sharing rooftop generation in excess of their energy demand, where customers earn a profit through peer-to-peer (P2P) energy trading. Our proposed electricity market also incorporates P2P reactive power trading to improve the voltage profile across a distribution feeder. We formulate the electricity market as an optimization-based problem, where voltage and thermal limits across a feeder are managed through the assignment of customer-specific dynamic operating envelopes (DOEs). The electricity market equilibrium is referred to as a competitive equilibrium, which is equivalent to a Nash equilibrium in a standard game. Our proposed market design is benchmarked using the IEEE 13-node test feeder.
Authors: Yihsu Chen, Abel Souza, Fargol Nematkhah, Andrew L. Liu
The rapid adoption of AI has led to the growth of computational demand, with large language models (LLMs) at the forefront since ChatGPT's debut in 2022. Meanwhile, large amounts of renewable energy have been deployed but, ultimately, curtailed due to transmission congestion and inadequate demand. This work develops a power market model that allows hyperscalers to spatially migrate LLM inference workloads to geo-distributed modular datacenters (MDCs), which are co-located near renewable energy sources at the edge of the network. We introduce the optimization problems faced by the hyperscaler and MDCs in addition to consumers, producers, and the electric grid operator, where the hyperscaler enters an agreement to lease MDCs while ensuring that the required service level objectives (SLOs) are met. The overall market model is formulated as a complementarity problem, where the proof is provided showing the existence and uniqueness of the solutions. When applying the model to an IEEE RTS-24 bus case study, we show that even with a provision that requires MDCs to disclose the CO$_2$ emissions associated with their energy supply sources, renting less polluting MDCs is unlikely to yield meaningful emission reductions due to so-called contract-reshuffling. The situation can be mitigated when conventional loads are supplied by forward contracts through power purchase agreements. This also leads to a decline in system congestion when the hyperscaler becomes increasingly cost-aware.
Authors: Tomonari Kanazawa, Hikaru Hoshino, Eiko Furutani
Transmission expansion planning in electricity markets is tightly coupled with the strategic bidding behaviors of generation companies. This paper proposes a Reinforcement Learning (RL)-based co-optimization framework that simultaneously learns transmission investment decisions and generator bidding strategies within a unified training process. Based on a multiagent RL framework for market simulation, the proposed method newly introduces a design policy layer that jointly optimizes continuous/discrete transmission expansion decisions together with strategic bidding policies. Through iterative interaction between market clearing and investment design, the framework effectively captures their mutual influence and achieves consistent co-optimization of expansion and bidding decisions. Case studies on the IEEE 30-bus system are provided for proof-of-concept validation of the proposed co-optimization framework.
Authors: Xinyi Yi, Ioannis Lestas
This paper introduces a mixed H-infinity-passivity framework that enables district heating systems (DHSs) with heat pumps to support electric-grid frequency regulation. The analysis illustrates how the DHS regulator influences coupled electro-thermal frequency dynamics and provides LMI conditions for efficient controller design. We also present a disturbance-independent temperature regulator that ensures stability and robustness against heat-demand uncertainty. Simulations demonstrate improved frequency-control dynamics in the electrical power grid while maintaining good thermal performance in the DHS.
Authors: Junseon Park, Hyeongon Park, Rahul K. Gupta
Power system operators are increasingly deploying Grid Enhancing Technologies (GETs) to mitigate operational challenges such as line and transformer congestion, and voltage violations. These technologies, including Network Topology Optimization (NTO), Variable Impedance Devices (VIDs), and Dynamic Line Rating (DLR), enhance system flexibility and enable better utilization of existing network assets. However, as the deployment of multiple GETs grows, effective coordination among them becomes essential to fully realize their potential benefits. This paper presents a co-optimization framework that models and coordinates NTO, VID, and DLR within a unified optimization scheme to alleviate network congestion and minimize operational costs. The NTO formulation is developed using a node-breaker model, offering finer switching granularity and improved operational flexibility. The inclusion of VIDs introduces nonlinear and non-convex relationships in the optimization problem. DLR takes into account weather conditions, primarily wind speed and ambient temperature, enabling adaptive utilization of transmission capacity. The proposed framework is validated on standard IEEE benchmark test systems, demonstrating its effectiveness under varying numbers and placements of impedance controllers.
Authors: Timon Conrad, Changhun Kim, Johann Jäger, Andreas Maier, Siming Bayer
Efficient and accurate load flow calculations are a bedrock of modern power system operation. Classical numerical methods such as the Newton-Raphson algorithm provide highly precise results but are computationally demanding, which limits their applicability in large-scale scenario studies and optimization in time-critical contexts. Research has shown that machine learning approaches can approximate load flow results with high accuracy while substantially reducing computation time. Sample efficiency, i.e., the ability to achieve high accuracy with limited training dataset size, is still insufficiently researched, especially in grids with a fixed topology. This paper presents a systematic investigation of the sample efficiency of a Multilayer Perceptron and two Graph Neural Network variants on a dataset based on a modified IEEE 5-bus system. The results for this grid size show that Graph Neural Networks achieve the lowest losses. However, the availability of large training datasets remains the dominant factor influencing performance compared to architecture choice.
Authors: Carlo Karam, Matteo Tacchi, Mirko Fiacchini
This work presents a stochastic tube-based model predictive control framework that guarantees hard input constraint satisfaction for linear systems subject to unbounded additive disturbances. The approach relies on a structured design of probabilistic reachable sets that explicitly incorporates actuator saturation into the error dynamics and bounds the resulting nonlinearity within a convex embedding. The proposed controller retains the computational efficiency and structural advantages of stochastic tube-based approaches while ensuring state chance constraint satisfaction alongside hard input limits. Recursive feasibility and mean-square stability are established for our scheme, and a numerical example illustrates its effectiveness.
Authors: Manish Prajapat, Johannes Köhler, Melanie N. Zeilinger, Andreas Krause
Achieving both optimality and safety under unknown system dynamics is a central challenge in real-world deployment of agents. To address this, we introduce a notion of maximum safe dynamics learning, where sufficient exploration is performed within the space of safe policies. Our method executes $\textit{pessimistically}$ safe policies while $\textit{optimistically}$ exploring informative states and, even when model uncertainty prevents reaching them, ensures continuous online learning of the dynamics. The framework achieves first-of-its-kind results: learning the dynamics model sufficiently, up to an arbitrarily small tolerance (subject to noise), in finite time, while ensuring provably safe operation throughout with high probability and without requiring resets. Building on this, we propose an algorithm that maximizes rewards while learning the dynamics $\textit{only to the extent needed}$ to achieve close-to-optimal performance. Unlike typical reinforcement learning (RL) methods, our approach operates online in a non-episodic setting and ensures safety throughout the learning process. We demonstrate its effectiveness in challenging domains such as autonomous car racing and drone navigation under aerodynamic effects, scenarios where safety is critical and accurate modeling is difficult.
Authors: Soham Ghosh, Gaurav Mittal
Agentic AI systems have recently emerged as a critical and transformative approach in artificial intelligence, offering capabilities that extend far beyond traditional AI agents and contemporary generative AI models. This rapid evolution necessitates a clear conceptual and taxonomical understanding to differentiate this new paradigm. Our paper addresses this gap by providing a comprehensive review that establishes a precise definition and taxonomy for "agentic AI," with the aim of distinguishing it from previous AI paradigms. The concepts are gradually introduced, starting with a highlight of its diverse applications across the broader field of engineering. The paper then presents four detailed, state-of-the-art use case applications specifically within electrical engineering. These case studies demonstrate practical impact, ranging from an advanced agentic framework for streamlining complex power system studies and benchmarking to a novel system developed for survival analysis of dynamic pricing strategies in battery swapping stations. Finally, to ensure robust deployment, the paper provides detailed failure mode investigations. From these findings, we derive actionable recommendations for the design and implementation of safe, reliable, and accountable agentic AI systems, offering a critical resource for researchers and practitioners.
Authors: Savvas Papaioannou, Panayiotis Kolios, Christos G. Panayiotou, Marios M. Polycarpou
We consider the problem of adaptively monitoring a wildfire front using a mobile agent (e.g., a drone), whose trajectory determines where sensor data is collected and thus influences the accuracy of fire propagation estimation. This is a challenging problem, as the stochastic nature of wildfire evolution requires the seamless integration of sensing, estimation, and control, often treated separately in existing methods. State-of-the-art methods either impose linear-Gaussian assumptions to establish optimality or rely on approximations and heuristics, often without providing explicit performance guarantees. To address these limitations, we formulate the fire front monitoring task as a stochastic optimal control problem that integrates sensing, estimation, and control. We derive an optimal recursive Bayesian estimator for a class of stochastic nonlinear elliptical-growth fire front models. Subsequently, we transform the resulting nonlinear stochastic control problem into a finite-horizon Markov decision process and design an information-seeking predictive control law obtained via a lower confidence bound-based adaptive search algorithm with asymptotic convergence to the optimal policy.
Authors: J. Liu, F. Milano
Modal energy provides information that is complementary to, yet grounded in, conventional eigenvalues and participation factors for power system modal analysis. However, the definition of modal energy is not unique. This letter clarifies the definitions and applicability of mainstream modal energy approaches, focusing on their mappings to eigenvalues and to the total system energy. It is shown that these mappings hold only under restrictive conditions, notably system normality, which limits their applicability in inverter-dominated power systems.
Authors: Markus Heinrichs, Aydin Sezgin, Rainer Kronberger
Reconfigurable Intelligent Surfaces (RISs) enable active control of wireless propagation channels, which is crucial for future 5G and 6G networks. This work presents a scalable RIS design operating at 3.6 GHz with both 1-bit and 3-bit phase resolution, supporting wideband applications. The unit cells employ low-cost printed circuit board technology with an innovative spring-contact feeding structure, enabling efficient assembly and reduced manufacturing complexity for large-area arrays. The design achieves broadband phase control, low power consumption, and high scalability, with experimental results demonstrating phase tunability across the n78 frequency band and competitive reflection performance compared to existing solutions. This RIS architecture provides a practical platform for experimental studies of smart radio environments, beam steering, and sensing applications in next-generation wireless networks.
Authors: Junseon Park, Hyeongon Park, Rahul K. Gupta
Power system operators are increasingly deploying Variable Impedance Devices (VIDs), e.g., Smart Wires, and Network Topology Optimization (NTO) schemes to mitigate operational challenges such as line and transformer congestion, and voltage violations. This work aims to optimize and coordinate the operation of distributed VIDs considering both fixed and optimized topologies. The problem is inherently nonlinear due to the power flow equations as well as the bilinear terms introduced by the variable line impedances of the VIDs. Furthermore, the topology optimization scheme makes it a mixed-integer nonlinear problem. To tackle this, we employ a McCormick relaxation scheme, which, combined with the DC power flow equations, converts the bilinear constraints into a linear set of constraints. We propose an iterative correction of the McCormick relaxation to enhance its accuracy. The proposed framework is validated on standard IEEE benchmark test systems, and we compare the performance of the iterative McCormick method against the nonlinear formulation, an SOS2 piecewise-linear approximation, and the original McCormick relaxation.
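The McCormick envelopes invoked above are a standard construction and can be sketched compactly. The toy check below (not the paper's code; box bounds and evaluation point are illustrative) shows how the envelopes bracket a bilinear product $w = xy$ over a box, which is what makes the relaxed problem linear:

```python
# Hypothetical sketch: McCormick envelopes for a bilinear term w = x * y,
# the standard linear relaxation used for products such as
# (line impedance) * (angle difference) in the setting above.

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Lower/upper McCormick envelope values for w = x*y on [xL,xU]x[yL,yU]."""
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

# The envelopes bracket the true product everywhere in the box, and the gap
# shrinks as the box is tightened -- the idea behind the iterative correction.
lo, hi = mccormick_bounds(0.5, 0.4, 0.0, 1.0, 0.0, 1.0)
assert lo <= 0.5 * 0.4 <= hi
```

Tightening the bounds `xL, xU, yL, yU` around an incumbent solution and re-solving is one natural reading of the iterative correction described in the abstract.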
Authors: Philippe Jacquod, Laurent Pagnier, Daniel J. Gauthier
Accurate state estimation is a crucial requirement for the reliable operation and control of electric power systems. Here, we construct a data-driven, numerical method to infer missing power load values in large-scale power grids. Given partial observations of power demands, the method estimates the operational state using a linear regression algorithm, exploiting statistical correlations within synthetic training datasets. We evaluate the performance of the method on three synthetic transmission grid test systems. Numerical experiments demonstrate the high accuracy achieved by the method in reconstructing missing demand values under various operating conditions. We further apply the method to real data for the transmission power grid of Switzerland. Despite the restricted number of observations in this dataset, the method infers missing power loads rather accurately. Furthermore, Newton-Raphson power flow solutions show that deviations between true and inferred values for power loads result in smaller deviations between true and inferred values for flows on power lines. This ensures that the estimated operational state correctly captures potential line contingencies. Overall, our results indicate that simple data-based regression techniques can provide an efficient and reliable alternative for state estimation in modern power grids.
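The regression step described above can be illustrated on synthetic data. This is a minimal sketch under assumed correlations; the mixing weights, sample count, and noise level are hypothetical stand-ins, not the paper's training setup:

```python
import numpy as np

# Minimal sketch (not the paper's code): infer an unobserved power load from
# observed loads via linear regression trained on synthetic data whose
# statistical correlations stand in for grid-wide demand correlations.
rng = np.random.default_rng(0)
n_samples, n_obs = 500, 5

# Synthetic training set: the hidden load is a fixed linear mix of the
# observed loads plus small noise.
true_w = rng.uniform(0.5, 1.5, n_obs)
X_train = rng.normal(1.0, 0.2, (n_samples, n_obs))
y_train = X_train @ true_w + 0.01 * rng.normal(size=n_samples)

# Ordinary least squares fit of the observed-to-missing mapping.
w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Reconstruct the missing load for a new operating condition.
x_new = rng.normal(1.0, 0.2, n_obs)
y_pred = x_new @ w_hat
y_true = x_new @ true_w
assert abs(y_pred - y_true) < 0.1
```

With enough training samples relative to the noise, the fitted weights recover the underlying correlation structure, which is the mechanism the abstract exploits at grid scale.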
Authors: Robert Parker
This work formulates and solves optimization problems to generate input points that yield high errors between a neural network's predicted AC power flow solution and solutions to the AC power flow equations. We demonstrate this capability on an instance of the CANOS-PF graph neural network model, as implemented by the PF$\Delta$ benchmark library, operating on a 14-bus test grid. Generated adversarial points yield errors as large as 3.4 per-unit in reactive power and 0.08 per-unit in voltage magnitude. When minimizing the perturbation from a training point necessary to satisfy adversarial constraints, we find that the constraints can be met with as little as a 0.04 per-unit perturbation in voltage magnitude on a single bus. This work motivates the development of rigorous verification and robust training methods for neural network surrogate models of AC power flow.
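The adversarial search itself is conceptually simple: maximize the surrogate-vs-physics error over admissible inputs. The toy sketch below uses scalar stand-ins (not CANOS-PF or the PF$\Delta$ library) and finite-difference gradient ascent on a bounded domain:

```python
import math

# Toy illustration: find an input that maximizes the discrepancy between a
# surrogate model and the "true" physics map, mimicking adversarial input
# generation for a power flow surrogate. Both functions are stand-ins.
def true_map(x):           # stand-in for the AC power flow solution
    return math.sin(3.0 * x)

def surrogate(x):          # stand-in for the trained surrogate (cubic fit,
    return 3.0 * x - 4.5 * x ** 3   # accurate only near the origin)

def error(x):
    return abs(surrogate(x) - true_map(x))

# Finite-difference gradient ascent on the error, projected onto [-1, 1].
x, h, step = 0.1, 1e-5, 0.05
for _ in range(200):
    g = (error(x + h) - error(x - h)) / (2 * h)
    x = max(-1.0, min(1.0, x + step * (1.0 if g > 0 else -1.0)))

# The adversarial point exposes a far larger surrogate error than the
# (well-fitted) starting point near the origin.
assert error(x) > 10 * error(0.1)
```

The paper's second formulation (minimizing the perturbation subject to adversarial constraints) swaps the objective and constraint of this maximization.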
Authors: Tong Huang
This paper presents a non-intrusive, decentralized approach that stabilizes AC microgrids dominated by inverter-based resources (IBRs). By "non-intrusive" we mean that the approach does not require reprogramming the IBRs' controllers to stabilize the microgrids. "Decentralized" means that the approach stabilizes the microgrids without communication among IBRs. Implementing the approach requires only minimal information about IBR dynamics, namely the L2 gain of each IBR, and sharing this information with parties other than the IBR manufacturer raises no intellectual-property concerns. The approach allows for plug-and-play operation of IBRs while maintaining microgrid stability. The proposed approach is tested by simulating 2-IBR and 10-IBR microgrids where lines and IBRs are modeled in the electromagnetic transient time scale. Simulations show that oscillations with increasing amplitudes may occur when two stable microgrids are networked, and they also suggest that the proposed approach can mitigate this system-level symptom.
Authors: Andreas Bouterakos, Georgios Tzounas
The paper focuses on tracking eigenvalue trajectories in power system models with time delays. We formulate a continuation-based approach that employs numerical integration to follow eigenvalues as system parameters vary, in the presence of one or multiple delayed variables. The formulation is compatible with sparse delay differential-algebraic equation (DDAE) formulations of the system model and allows treating the delay magnitude itself as a varying parameter, with implementation aspects discussed in detail. The proposed approach is illustrated on a modified IEEE 39-bus system, as well as on a real-world-scale dynamic model of the Irish transmission network.
Authors: Jón Winkel, Tom Willems, Cillian O'Driscoll, Ignacio Fernandez-Hernandez
This paper investigates whether large-scale GNSS spoofing activity can be inferred from maritime Automatic Identification System (AIS) position reports. A data-processing framework called SeaSpoofFinder (available at this http URL) was developed to ingest and post-process global AIS streams and to detect candidate anomalies through a two-stage procedure. In Stage 1, implausible position jumps are identified using kinematic and data-quality filters; in Stage 2, events are retained only when multiple vessels exhibit spatially consistent source and target clustering, thereby reducing false positives from single-vessel artifacts. The resulting final potential spoofing events (FPSEs) reveal recurrent patterns in several regions, including the Baltic Sea, the Black Sea, Murmansk, Moscow, and the Haifa area, with affected footprints that can span large maritime areas. The analysis also highlights recurring non-spoofing artifacts (e.g., back-to-port jumps and data gaps) that can still pass heuristic filters in dense traffic regions. These results indicate that AIS-based monitoring can provide useful evidence for identifying and characterizing potential spoofing activity at scale, while emphasizing that AIS-only evidence does not provide definitive attribution.
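A Stage-1-style kinematic filter can be sketched as follows. The 50-knot threshold, function names, and coordinates below are illustrative assumptions, not SeaSpoofFinder's actual parameters:

```python
import math

# Illustrative kinematic plausibility filter: flag an AIS position jump as
# implausible when the implied speed between consecutive reports exceeds
# what any vessel could achieve.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def implausible_jump(fix_a, fix_b, max_speed_kn=50.0):
    """fix = (lat, lon, unix_seconds); True if the implied speed is absurd."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    hours = max((t2 - t1) / 3600.0, 1e-6)
    speed_kn = haversine_km(lat1, lon1, lat2, lon2) / 1.852 / hours
    return speed_kn > max_speed_kn

# A ~200 km jump in 60 s is far beyond any vessel's capability;
# a ~1.5 km move over 10 minutes is ordinary traffic.
assert implausible_jump((59.4, 24.7, 0), (60.0, 28.0, 60))
assert not implausible_jump((59.4, 24.7, 0), (59.41, 24.72, 600))
```

Stage 2 would then cluster the flagged jumps across vessels by source and target location before declaring a candidate spoofing event.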
Authors: Adam Lechowicz, Nicolas Christianson, Mohammad Hajiesmaili, Adam Wierman, Prashant Shenoy
We introduce and study a class of online problems called online smoothed demand management $(\texttt{OSDM})$, motivated by paradigm shifts in grid integration and energy storage for large energy consumers such as data centers. In $\texttt{OSDM}$, an operator makes two decisions at each time step: an amount of energy to be purchased, and an amount of energy to be delivered (i.e., used for computation). The difference between these decisions charges (or discharges) the operator's energy storage (e.g., a battery). Two types of demand arrive online: base demand, which must be covered at the current time, and flexible demand, which can be satisfied at any time before a demand-specific deadline $\Delta_t$. The operator's goal is to minimize a cost (subject to above constraints) that combines a cost of purchasing energy, a cost for delivering energy (if applicable), and smoothness penalties on the purchasing and delivery rates to discourage fluctuations and encourage ``grid healthy'' decisions. $\texttt{OSDM}$ generalizes several problems in the online algorithms literature while being the first to fully model applications of interest. We propose a competitive algorithm for $\texttt{OSDM}$ called $\texttt{PAAD}$ (partitioned accounting & aggregated decisions) and show it achieves the optimal competitive ratio. To overcome the pessimism typical of worst-case analysis, we also propose a novel learning framework that provides guarantees on the worst-case competitive ratio (i.e., to provide robustness against nonstationarity) while allowing end-to-end differentiable learning of the best algorithm on historical instances of the problem. We evaluate our algorithms in a case study of a grid-integrated data center with battery storage, showing that $\texttt{PAAD}$ effectively solves the problem and end-to-end learning achieves substantial performance improvements compared to $\texttt{PAAD}$.
Authors: Ginevra Larroux, Matthieu Jacobs, Mario Paolone
This letter investigates properties of the second-order cone relaxation of the optimal power flow (OPF) problem, with emphasis on relaxation tightness, nodal voltage angle recovery, and alternating-current-OPF feasibility in meshed networks. The theoretical discussion is supported by numerical experiments on standard IEEE test cases. Implications for power system planning are briefly outlined.
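For context, one standard form of this relaxation (in branch-flow/DistFlow notation, with $v_i$ the squared voltage magnitude at bus $i$, $\ell_{ij}$ the squared current magnitude on branch $ij$, and $P_{ij}, Q_{ij}$ the branch power flows; the notation is generic, not necessarily the letter's) replaces the nonconvex equality $\ell_{ij} v_i = P_{ij}^2 + Q_{ij}^2$ with a second-order cone inequality:

```latex
% Nonconvex branch-flow relation:  \ell_{ij} v_i = P_{ij}^2 + Q_{ij}^2
% Second-order cone relaxation (rotated-cone form):
\ell_{ij} v_i \;\ge\; P_{ij}^2 + Q_{ij}^2
\quad\Longleftrightarrow\quad
\left\lVert \begin{pmatrix} 2P_{ij} \\ 2Q_{ij} \\ \ell_{ij} - v_i \end{pmatrix} \right\rVert_2
\;\le\; \ell_{ij} + v_i .
```

The relaxation is exact when the inequality is active at the optimum; in meshed networks, recovering consistent nodal voltage angles from the relaxed solution imposes additional conditions, which is precisely the issue the letter examines.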
Authors: Ki-Hyun Kim, Yeongung Kim, Shenghui Cui, Jae-Jung Jung
As power systems accommodate higher shares of renewable generation, short-term power imbalances become more frequent and can manifest as pronounced voltage and frequency excursions under low-inertia conditions. E-STATCOMs (STATCOMs equipped with energy storage) offer a practical means to provide both voltage support and fast frequency assistance under grid-forming control. Among candidate implementations, double-star multilevel-converter (DS-MC)-based E-STATCOMs enable centralized energy-storage integration at the dc link, which improves thermal management and maintainability. Nevertheless, conventional dc-side power-based internal-energy regulation in DS-MCs can undesirably couple loss compensation to the energy-storage path, accelerating storage cycling and constraining operation when the storage is unavailable. This paper introduces a control strategy that assigns DS-MC total internal-energy regulation to the ac-side active-power path, while reserving dc-side storage power solely for frequency support. By decoupling internal-energy management from inertial-response provision, the proposed scheme enables flexible operation as either a STATCOM or an E-STATCOM according to storage availability and mitigates unnecessary storage cycling. The proposed strategy is verified through offline simulations and laboratory-scale experiments.
Authors: Shankar Ramharack, Rajiv Sahadeo
Electromagnetic transient (EMT) simulation is essential for analyzing sub-cycle switching phenomena in industrial power systems; however, commercial EMT platforms present significant cost barriers for smaller utilities, consultancies, and academic institutions, particularly in developing regions. This paper validates KESTREL EMT, a free and open-source electromagnetic transient solver with Python integration, through three progressive case studies involving industrial capacitor switching transients. This work investigates energization, switching resonance and VFD interactions with capacitor banks. The results demonstrate that KESTREL, when supported by appropriate circuit modeling techniques, produces EMT responses consistent with analytical predictions and established IEEE benchmarks. This work establishes a validated and reproducible methodology for conducting industrial EMT studies using freely available, open-source tools.
Authors: Alessandro Quattrociocchi, Manisha Talukdar, Pere Izquierdo Gómez, Tomislav Dragicevic
Electrified heating systems with thermal storage, such as electric boilers and heat pumps, represent a major source of demand-side flexibility. Under current electricity market designs, balance responsible parties (BRPs) operating such assets are required to submit binding day-ahead electricity consumption schedules, typically based on forecasts of heat demand and electricity prices. Common scheduling approaches implicitly assume that forecast uncertainty can be well characterized using historical forecast errors. In practice, however, the cumulative effect of uncertainty creates significant exposure to imbalance-price risk when the committed schedule cannot be followed. To address this, we propose a distributionally robust chance-constrained optimization framework for the day-ahead scheduling of a multi-MW electric boiler using only limited residual forecast samples. We derive a tractable convex reformulation of the problem and calibrate the ambiguity set directly from historical forecast-error data through an a priori tunable risk parameter. Numerical results show that enforcing performance guarantees on the heat-demand balance constraint reduces demand violations by 40% compared to a deterministic forecast-based scheduler and by up to 10% relative to a nominal chance-constrained model with a fixed error distribution. Further, we show that modeling the real-time rebound cost of demand violations as a second-stage term can reduce the overall daily operating cost by up to 34% by hedging against highly volatile day-ahead electricity prices.
Authors: Max Bruninx, Seyed Soroush Karimi Madahi, Timothy Verstraeten, Jan Decuyper, Chris Develder, Jan Helsen
In a decentralized balancing model, Balance Responsible Parties (BRPs) are encouraged by the Transmission System Operator (TSO) to deviate from their schedules to help restore system balance, a practice referred to as implicit balancing. This can reduce balancing costs for the grid operator and lower the entry barrier for flexible assets compared to explicit balancing services. However, these implicit reactions may overshoot when their total capacity is high, potentially requiring more explicit activations. This study analyses the effect of increased participation in the decentralized balancing model in Belgium. To this end, we develop a market simulator that produces minute-level price signals and simulate the implicit reactions of battery assets with different risk profiles. Besides the current price formula, we also study two potential near-term candidates presented by the TSO. A simulation study is conducted using Belgian market data for the year 2023. The findings indicate that, while implicit balancing has a significant positive effect on balancing costs at first, the risk of overshoots can outweigh the potential benefits when the total capacity of the implicit reactions becomes too large. Furthermore, even when balancing costs start to increase for the TSO, BRPs were still found to benefit from implicit balancing.
Authors: Saud Alghumayjan, Ming Yi, Bolun Xu
This paper proposes a few-shot classification framework based on Large Language Models (LLMs) to predict whether the next day will have spikes in real-time electricity prices. The approach aggregates system state information, including electricity demand, renewable generation, weather forecasts, and recent electricity prices, into a set of statistical features that are formatted as natural-language prompts and fed to an LLM along with general instructions. The model then determines the likelihood that the next day would be a spike day and reports a confidence score. Using historical data from the Texas electricity market, we demonstrate that this few-shot approach achieves performance comparable to supervised machine learning models, such as Support Vector Machines and XGBoost, and outperforms the latter two when limited historical data are available. These findings highlight the potential of LLMs as a data-efficient tool for classifying electricity price spikes in settings with scarce data.
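The prompt-construction step amounts to formatting aggregated system statistics as text and prepending labeled example days. The feature names, thresholds, and wording below are hypothetical stand-ins for the paper's actual statistical features and instructions:

```python
# Illustrative few-shot prompt builder for next-day spike classification.
# Feature set and phrasing are assumptions, not the paper's exact prompt.
def format_day(features, label=None):
    line = (f"demand_peak={features['demand_peak']:.0f} MW, "
            f"wind_forecast={features['wind']:.0f} MW, "
            f"max_price_yesterday={features['price']:.1f} $/MWh")
    return line + (f" -> {label}" if label is not None else " -> ?")

def build_prompt(examples, query):
    header = ("You classify whether the next day will contain a real-time "
              "electricity price spike. Answer 'spike' or 'no spike' and "
              "report a confidence score.\n")
    shots = "\n".join(format_day(f, lbl) for f, lbl in examples)
    return header + shots + "\n" + format_day(query)

# Two labeled historical days (the few shots) plus the query day.
examples = [({"demand_peak": 71000, "wind": 9000, "price": 210.0}, "spike"),
            ({"demand_peak": 52000, "wind": 21000, "price": 28.5}, "no spike")]
prompt = build_prompt(examples,
                      {"demand_peak": 68000, "wind": 11000, "price": 95.0})
# The resulting string is what gets sent to the LLM.
```

The LLM's free-text answer would then be parsed back into a binary label and a confidence score.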
Authors: Ignasi Ventura Nadal, Mohammad Kazem Bakhshizadeh, Petros Aristidou, Nicolae Darii, Rahul Nellikkath, Spyros Chatzivasileiadis
This paper puts forward a framework to accelerate Electromagnetic Transient (EMT) simulations by replacing individual components with trained Physics-Informed Neural Networks (PINNs). EMT simulations are considered the cornerstone of transient stability assessment of power systems with high shares of Inverter-Based Resources (IBRs), and, although accurate, they are notorious for their slow simulation speed. Taking a deeper dive into the EMT simulation algorithms, this paper identifies the most computationally expensive components of the simulation and replaces them with fast and accurate PINNs. The proposed novel PINN formulation enables a modular and scalable integration into the simulation algorithm. Using a type-4 wind turbine EMT model, we demonstrate a 4--6x simulation speedup by capturing the Phase-Locked Loop (PLL) with a PINN. We validate all our results with PSCAD software.
Authors: Amirhossein Iraniparast, Dominic Groß
In this paper, we propose a projection-free power-limiting droop control for grid-connected power electronics and an associated constrained flow problem. In contrast to projection-based power-limiting droop control, the novel projection-free power-limiting droop control results in networked dynamics that are semi-globally exponentially stable with respect to the set of optimizers of the constrained flow problem. Under a change to edge coordinates, the overall networked dynamics arising from projection-free power-limiting droop control coincide with the projection-free primal-dual dynamics associated with an augmented Lagrangian of the constrained flow problem. Leveraging this result, we (i) provide a bound on the convergence rate of the projection-free networked dynamics, (ii) propose a tuning method for controller parameters to improve the bound on the convergence rate, and (iii) analyze the relationship between the bound on the convergence rate and the connectivity of the network. Finally, the analytical results are illustrated using an electromagnetic transient (EMT) simulation.
Authors: Nelson Salazar-Pena, Alejandra Tabares, Andres Gonzalez-Mancera
This paper proposes implicit cooperation, a framework enabling decentralized agents to approximate optimal coordination in local energy markets without explicit peer-to-peer communication. We formulate the problem as a decentralized partially observable Markov decision process that is solved through a multi-agent reinforcement learning task in which agents use stigmergic signals (key performance indicators at the system level) to infer and react to global states. Through a 3x3 factorial design on an IEEE 34-node topology, we evaluated three training paradigms (CTCE, CTDE, DTDE) and three algorithms (PPO, APPO, SAC). Results identify APPO-DTDE as the optimal configuration, achieving a coordination score of 91.7% relative to the theoretical centralized benchmark (CTCE). However, a critical trade-off emerges between efficiency and stability: while the centralized benchmark maximizes allocative efficiency with a peer-to-peer trade ratio of 0.6, the fully decentralized approach (DTDE) demonstrates superior physical stability. Specifically, DTDE reduces the variance of grid balance by 31% compared to hybrid architectures, establishing a highly predictable, import-biased load profile that simplifies grid regulation. Furthermore, topological analysis reveals emergent spatial clustering, where decentralized agents self-organize into stable trading communities to minimize congestion penalties. While SAC excelled in hybrid settings, it failed in decentralized environments due to entropy-driven instability. This research demonstrates that stigmergic signaling provides sufficient context for complex grid coordination, offering a robust, privacy-preserving alternative to expensive centralized communication infrastructure.
Authors: Nelson Salazar-Pena, Alejandra Tabares, Andres Gonzalez-Mancera
This paper introduces a novel, open-source MARL simulation framework for studying implicit cooperation in LEMs, modeled as a decentralized partially observable Markov decision process and implemented as a Gymnasium environment for MARL. Our framework features a modular market platform with plug-and-play clearing mechanisms, physically constrained agent models (including battery storage), a realistic grid network, and a comprehensive analytics suite to evaluate emergent coordination. The main contribution is a novel method to foster implicit cooperation, where agents' observations and rewards are enhanced with system-level key performance indicators to enable them to independently learn strategies that benefit the entire system and aim for collectively beneficial outcomes without explicit communication. Through representative case studies (available in a dedicated GitHub repository at this https URL), we show the framework's ability to analyze how different market configurations (such as varying storage deployment) impact system performance. This illustrates its potential to facilitate emergent coordination, improve market efficiency, and strengthen grid stability. The proposed simulation framework is a flexible, extensible, and reproducible tool for researchers and practitioners to design, test, and validate strategies for future intelligent, decentralized energy systems.
Authors: Jialin Zheng, Ruhaan Batta, Zhong Liu, Xiaonan Lu
Discovering the unknown governing equations of grid-connected inverters from external measurements is highly attractive for analyzing modern inverter-intensive power systems. However, existing methods struggle to balance the identification of unmodeled nonlinearities with the preservation of physical consistency. To address this, this paper proposes a Physics-Informed Sparse Machine Learning (PISML) framework. The architecture integrates a sparse symbolic backbone that captures dominant model skeletons with a neural residual branch that compensates for complex nonlinear control logic. Meanwhile, a Jacobian-regularized physics-informed training mechanism is introduced to enforce multi-scale consistency, including large- and small-scale behaviors. Furthermore, by performing symbolic regression on the neural residual branch, PISML achieves a tractable mapping from black-box data to explicit control equations. Experimental results on a high-fidelity Hardware-in-the-Loop platform demonstrate the framework's superior performance. It not only achieves high-resolution identification, reducing error by a factor of more than 340 compared to baselines, but also compresses heavy neural networks into compact explicit forms. This restores analytical tractability for rigorous stability analysis and reduces computational complexity by orders of magnitude. It also provides a unified pathway to convert structurally inaccessible devices into explicit mathematical models, enabling stability analysis of power systems with unknown inverter governing equations.
Authors: Abhinav Sharma, Pratyush Chakraborty, Manoj Datta, Kazi N. Hasan
This research paper proposes an efficient methodology for the allocation of multiple photovoltaic (PV)-based distributed generation (DG) units in the radial distribution network (RDN), while considering the loading capacity of the network. The proposed method is structured using a two-stage approach. In the first stage, the additional active power loading capacity of the network and each individual bus is determined using an iterative approach. This analysis quantifies the network's additional active loadability limits and identifies buses with high active power loading capacity, which are considered candidate nodes for the placement of DG units. Subsequently, in the second stage, the optimal locations and sizes of DG units are determined using the Monte Carlo method, with the objectives of minimizing voltage deviation and reducing active power losses in the network. The methodology is validated on the standard IEEE 33-bus RDN to determine the optimal locations and sizes of DG units. The results demonstrate that the optimal allocation of one, two, and three DG units, achieved by the proposed method, reduces network active power losses by 50.37%, 58.62%, and 65.16%, respectively, and also significantly enhances the voltage profile across all buses. When the obtained results are compared with the results of several existing studies, it is found that the proposed method allows for larger DG capacities and maintains better voltage profiles throughout the RDN.
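The second-stage Monte Carlo search can be sketched with a surrogate objective. The loss model, candidate buses, and load values below are illustrative placeholders rather than the paper's IEEE 33-bus power flow:

```python
import random

# Toy stage-2 sketch: Monte Carlo search over candidate buses (from a
# stage-1-style loadability screen) and DG sizes, minimizing a surrogate
# loss objective. The quadratic loss model is a stand-in, not a power flow.
random.seed(1)
candidate_buses = [6, 13, 24, 30]                 # hypothetical stage-1 output
bus_load = {6: 1.2, 13: 0.9, 24: 1.5, 30: 1.1}    # MW, illustrative

def surrogate_loss(bus, size_mw):
    # Losses fall as local generation approaches local load, then rise again;
    # the linear term penalizes oversizing.
    return (bus_load[bus] - size_mw) ** 2 + 0.05 * size_mw

best = None
for _ in range(5000):
    b = random.choice(candidate_buses)
    s = random.uniform(0.0, 2.0)
    trial = (surrogate_loss(b, s), b, s)
    if best is None or trial < best:
        best = trial

loss, bus, size = best
# The sampled optimum sizes the DG close to the local load, minus a small
# offset induced by the linear penalty term.
```

In the paper's setting, each sample would instead trigger a load flow to evaluate the true loss and voltage-deviation objectives.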
Authors: Matthieu Jacobs, Mario Paolone
This paper presents a grid-aware probabilistic approach to compute the aggregated flexibility at the grid connection point (GCP) of active distribution networks (ADNs) to allow the participation of DERs in ancillary services (AS) markets. Specifically, an optimal power flow (OPF) method using a linear network model is used to compute the aggregated capability for the provision of multiple AS. We start from the method proposed in [1] and extend it to allow for optimizing the provision of multiple services simultaneously, ensure cost-effectiveness of the used DERs, and handle uncertainties in a probabilistic way. The allocation of individual DERs' power flexibilities accounts for the operational costs associated with the provision of different services and ensures cost-effectiveness while maximizing the value of the advertised aggregated flexibility, assuming known service prices. Empirical uncertainty sets are obtained to achieve a predefined coverage of the probability distribution, in line with recent developments in the Nordic AS markets. Finally, a feeder-decomposition approach is proposed to ensure the method's applicability to realistic distribution networks with a large number of buses. Different case studies show the effectiveness of the method, highlight the importance of accounting for network constraints, and illustrate its applicability to realistic distribution systems.
Authors: Aron Brenner, Line Roald, Saurabh Amin
Data center electricity use may reach 12% of U.S. demand by 2030, alongside a growing ability to shift workloads geographically in response to prices or carbon signals. We examine the system-level implications of such strategic flexibility using a bilevel two-zone model that couples economic dispatch with consumer cost minimization. Two market failures emerge. First, discontinuous price changes at generator capacity limits can induce flexible consumers to shift load in socially inefficient directions; for example, toward a higher-cost region to trigger a price drop elsewhere. Second, by positioning near capacity boundaries, consumers can counteract the marginal benefit of transmission expansion: although shadow prices suggest additional capacity is valuable, strategic consumers reoptimize to offset resulting flow changes, leaving dispatch and costs unchanged. We derive conditions under which these effects arise and show that conventional price signals can misrepresent system value in the presence of large spatially flexible loads.
Authors: Saumyadip Mukhopadhyay, Kiho Yang, Kasyap Thottasserymana Vasudevan, Mounica Jyothi Divvela, Selim Dogru, Dilip Krishnamurthy, Fergo Treska, Werner Gillijns, Ryan Ryoung han Kim, Kumara Sastry, Vivek Singh
From climate science to drug discovery, scientific computing demands have surged dramatically in recent years -- driven by larger datasets, more sophisticated models, and higher simulation fidelity. This growth rate far outpaces transistor scaling, leading to unsustainably rising costs, energy consumption, and emissions. Semiconductor manufacturing is no exception. Computational lithography -- involving the transfer of circuitry to silicon under diffraction-limited conditions -- is the largest workload in semiconductor manufacturing. It has also grown exceptionally complex as miniaturization has advanced into the angstrom era, requiring more accurate modeling, intricate corrections, and broader solution-space exploration. Accelerated computing (AC) offers a solution by dramatically freeing up the compute and power envelope. AI augments these gains by serving as high-fidelity surrogates for compute-intensive steps. Together, they present a sustainable, next-generation computing platform for scientific workloads. This new paradigm needs a fundamental redesign of the software stack. For computational lithography, NVIDIA cuLitho reinvents the core primitives -- diffractive optics, computational geometry, multi-variant optimization, data processing -- to achieve a transformative 57X end-to-end acceleration. Beyond dramatically faster cycles, this expanded compute envelope enables more rigorous solutions, including curvilinear masks, high-numerical aperture extreme ultraviolet (high-NA EUV) lithography, and subatomic modeling. We reinvest a small fraction of the freed-up compute to include through-focus correction for better process resilience. Silicon experiments at IMEC show significant benefits compared to conventional methods -- 35% better process window and 19% better edge placement error. This is the first quantified chip-scale demonstration of the lithography benefits of AC and AI in silicon.
Authors: Mohamad Chehade, Hao Zhu
Public Safety Power Shutoffs (PSPS) force rapid topology changes that can render standard operating points infeasible, requiring operators to quickly identify corrective transmission switching actions that reduce load shedding while maintaining acceptable voltage behavior. We present a verifiable, multi-stage adaptation pipeline that fine-tunes an instruction-tuned large language model (LLM) to generate \emph{open-only} corrective switching plans from compact PSPS scenario summaries under an explicit switching budget. First, supervised fine-tuning distills a DC-OPF MILP oracle into a constrained action grammar that enables reliable parsing and feasibility checks. Second, direct preference optimization refines the policy using AC-evaluated preference pairs ranked by a voltage-penalty metric, injecting voltage-awareness beyond DC imitation. Finally, best-of-$N$ selection provides an inference-time addition by choosing the best feasible candidate under the target metric. On IEEE 118-bus PSPS scenarios, fine-tuning substantially improves DC objective values versus zero-shot generation, reduces AC power-flow failure from 50\% to single digits, and improves voltage-penalty outcomes on the common-success set. Code and data-generation scripts are released to support reproducibility.
Authors: Ferdinand Geuss, Orcun Karaca, Mario Schweizer, Ognjen Stanojev
We derive a small-signal transfer function for a system comprising a virtual synchronous generator (VSG), a synchronous generator (SG), and a load, capturing voltage and frequency dynamics. Using this model, we analyze the sensitivity of SG dynamics to VSG parameters, highlighting trade-offs in choosing virtual inertia and governor lag, the limited effect of damper-winding emulation, and several others.
Authors: Ngoc-Son Duong, Huyen-Trang Ta, Quang-Tang Ngo, Thi-Hue Duong, Van-Lap Nguyen, Cong-Minh Nguyen, Minh-Tran Nguyen, Thai-Mai Dinh
We study joint transmit-waveform and receive-filter design for a multi-user downlink integrated sensing and communication (ISAC) system under practical constant-modulus and similarity constraints. We cast the design as a unified multi-objective program that balances communication sum rate and sensing signal-to-interference-plus-noise ratio (SINR). To address this, we introduce an efficient algorithm that uses a consensus alternating direction method of multipliers (ADMM) framework to alternately update the transmit waveform and radar filter. The proposed method effectively handles the non-convex fractional sensing SINR formulation and ensures fast convergence. Simulation results demonstrate that the proposed approach achieves better trade-offs between communication sum rate and sensing SINR compared to existing benchmark schemes.
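As a generic reminder of the consensus-ADMM mechanics invoked in this abstract (not the paper's waveform/filter updates, which alternate over non-convex constant-modulus constraints), here is a minimal convex sketch with made-up per-agent data:

```python
import numpy as np

# Minimal consensus-ADMM sketch on a convex toy problem (illustration of
# the framework only; the paper's updates for waveforms and radar filters
# differ). Agents jointly minimize sum_i 0.5*(x - a_i)^2; optimum = mean(a).
a = np.array([1.0, 4.0, 7.0])   # hypothetical per-agent data
rho = 1.0                        # ADMM penalty parameter
x = np.zeros(3)                  # local copies of the shared variable
z = 0.0                          # consensus variable
u = np.zeros(3)                  # scaled dual variables

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)   # local prox step (closed form)
    z = np.mean(x + u)                       # consensus (averaging) step
    u = u + x - z                            # dual update
# all x_i and z converge to mean(a) = 4.0
```

The same split-update-average-dual structure carries over when the local prox steps are replaced by projections onto the constant-modulus and similarity sets.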
Authors: Rishav Sen, Fangqi Liu, Jose Paolo Talusan, Ava Pettet, Yoshinori Suzue, Mark Bailey, Ayan Mukhopadhyay, Abhishek Dubey
The growth of Electric Vehicles (EVs) creates a conflict in vehicle-to-building (V2B) settings between building operators, who face high energy costs from uncoordinated charging, and drivers, who prioritize convenience and a full charge. To resolve this, we propose a negotiation-based framework that, by design, guarantees voluntary participation, strategy-proofness, and budget feasibility. It transforms EV charging into a strategic resource by offering drivers a range of incentive-backed options for modest flexibility in their departure time or requested state of charge (SoC). Our framework is calibrated with user survey data and validated using real operational data from a commercial building and an EV manufacturer. Simulations show that our negotiation protocol creates a mutually beneficial outcome: lowering the building operator's costs by over 3.5\% compared to an optimized, non-negotiating smart charging policy, while simultaneously reducing user charging expenses by 22\% below the utility's retail energy rate. By aligning operator and EV user objectives, our framework provides a strategic bridge between energy and mobility systems, transforming EV charging from a source of operational friction into a platform for collaboration and shared savings.
Authors: Wenxuan Ma, Bin Lin, Hongyang Pan, Geng Sun, Enyu Shi, Jiancheng An, Chau Yuen
With the development of sixth-generation (6G) wireless communication networks, the security challenges are becoming increasingly prominent, especially for mobile users (MUs). As a promising solution, physical layer security (PLS) technology leverages the inherent characteristics of wireless channels to provide security assurance. Particularly, stacked intelligent metasurface (SIM) directly manipulates electromagnetic waves through its multilayer structure, offering significant potential for enhancing PLS performance in an energy-efficient manner. Thus, in this work, we investigate an SIM-assisted secure communication system for MUs under the threat of an eavesdropper, addressing practical challenges such as channel uncertainty in mobile environments, multiple MU interference, and residual hardware impairments. Consequently, we formulate a joint power and phase shift optimization problem (JPPSOP), aiming to maximize the achievable secrecy rate (ASR) of all MUs. Given the non-convexity and dynamic nature of this optimization problem, we propose an enhanced proximal policy optimization algorithm with a bidirectional long short-term memory mechanism, an off-policy data utilization mechanism, and a policy feedback mechanism (PPO-BOP). Through these mechanisms, the proposed algorithm can effectively capture short-term channel fading and long-term MU mobility, improve sample utilization efficiency, and enhance exploration capabilities. Extensive simulation results demonstrate that PPO-BOP significantly outperforms benchmark strategies and other deep reinforcement learning algorithms in terms of ASR. DOI: https://doi.org/10.1109/TWC.2026.3658332
Authors: Henrique O. Caetano, Rahul K. Gupta, Cristhian G. da R. de Oliveira, João B. A. London Jr, Carlos Dias Maciel
Real-world power distribution data are often inaccessible due to privacy and security concerns, highlighting the need for tools that generate realistic synthetic networks. Existing methods typically overlook critical reliability metrics such as the Customer Average Interruption Frequency Index (CAIFI) and the Customer Average Interruption Duration Index (CAIDI). Moreover, these methods often neglect phase consistency during the design stage, necessitating the use of a separate phase assignment algorithm. This work proposes a Bayesian Hierarchical Model (BHM) that generates phase-consistent unbalanced three-phase distribution systems, and incorporates reliability indices. The BHM learns the joint distribution of phase configuration, power demand, and reliability indices from a reference network, conditioning these attributes on topological features. We apply the proposed methodology to generate synthetic power distribution networks in Brazil, and validate it on known Brazilian networks. The results show that the BHM accurately reproduces the distributions of phase allocation, power demand, and reliability metrics on the training system. Furthermore, in out-of-sample validation on unseen data, the model generates phase-consistent networks and accurately predicts the reliability indices for the synthetic systems. The generated networks are also electrically feasible: three-phase power flows converge and voltages remain within typical operating limits, enabling studies of planning, reliability, and resilience.
Authors: Naoki Hashima, Hikaru Hoshino, Luis David Pabón Ospina, Eiko Furutani
Voltage stability in modern power systems involves coupled dynamics across multiple time scales. Conventional methods based on time-scale separation or static stability margins may overlook instabilities caused by the coupling of slow and fast transients. Uncertainty in operating conditions further complicates stability assessment, and the high computational cost of Monte Carlo simulation limits its applicability to multi-scale dynamics. This paper presents a deep reinforcement learning-based framework for probabilistic reachability analysis of multi-scale voltage dynamics. By formulating each instability mechanism as a distinct absorbing state and introducing a multi-critic architecture for mechanism-specific learning, the proposed method enables consistent learning of risk probabilities associated with multiple instability types within a unified framework. The approach is demonstrated on a four-bus system with load tap changers and over-excitation limiters, illustrating the effectiveness of the proposed learning-based reachability analysis in identifying and quantifying the mechanisms leading to voltage collapse.
Authors: Cornelia Skaga, Mahdieh S. Sadabadi, Gilbert Bergna-Diaz
This paper investigates a cyber-physical DC microgrid employing a nonlinear distributed consensus-based control scheme for coordinated integration and management of distributed generating units within an expandable framework. Relying on nested primary and secondary control loops, namely a (distributed) outer loop and a (decentralized) inner loop, the controller achieves proportional current sharing among all distributed generation units, while dynamically operating within predefined voltage limits. A rigorous Lyapunov-based stability analysis establishes a scalable global exponential stability certificate under certain tuning conditions and sufficient time-scale separation between the control loops, based on singular perturbation theory. An optimization-based tuning strategy is then formulated to identify and subsequently diminish unstable operating conditions. In turn, various practical tuning strategies are introduced to provide stable operations while facilitating near-optimal proportional current sharing. The effectiveness of the proposed control framework and tuning approaches is finally supported through time-domain simulations of a case-specific low-voltage DC microgrid.
Authors: Hadi Nemati, Álvaro Ortega, Enrique Lobato, Luis Rouco
With the ongoing transition of electricity markets worldwide from hourly to intra-hourly bidding, market participants--especially Renewable Energy Sources (RES)--gain improved opportunities to adjust energy and reserve schedules and to benefit from more accurate higher-resolution forecasts. However, this shift requires participants to update decision-making frameworks and to strengthen uncertainty management in order to fully exploit the new market potential. In particular, Renewable-Based Virtual Power Plants (RVPPs) aggregating dispatchable and non-dispatchable RES must account for these changes through market-oriented scheduling methods that efficiently address multiple uncertainties, including electricity prices, RES generation, and demand consumption. In this vein, this paper proposes a multi-bound robust optimization framework to simultaneously capture these uncertainties, explicitly incorporate intra-hourly variability, and differentiate the deviation levels (frequent, moderate deviations and rare, extreme ones) of uncertain parameters. The proposed approach yields less conservative and more implementable bidding and scheduling decisions, thus improving RVPP profitability in both energy and reserve markets. Simulation studies compare the proposed method with standard robust optimization and evaluate the operational, market-strategy, and economic impacts of quarter-hourly versus hourly market resolution. Results indicate that the normalized absolute differences, across different uncertainty-handling strategies, between hourly and 15-minute schedules are 18.0--34.2% for day-ahead traded energy, and 28.7--65.6% and 10.1--16.3% for upward and downward reserve traded in the secondary reserve market, respectively. Furthermore, relative to classic robust optimization, the proposed multi-bound approach increases profit by 24.9--49.2% across the considered strategies.
Authors: Sidharthenee Nayak, Victor Sam Moses Babu, Chandrashekhar Narayan Bhende, Pratyush Chakraborty, Mayukha Pal
In recent times, there has been considerable interest in fault detection within electrical power systems, garnering attention from both academic researchers and industry professionals. Despite the development of numerous fault detection methods and their adaptations over the past decade, their practical application remains highly challenging. Given the probabilistic nature of fault occurrences and parameters, certain decision-making tasks could be approached from a probabilistic standpoint. Protective systems are tasked with the detection, classification, and localization of faulty voltage and current line magnitudes, culminating in the activation of circuit breakers to isolate the faulty line. An essential aspect of designing effective fault detection systems lies in obtaining reliable data for training and testing, which is often scarce. Leveraging deep learning techniques, particularly the powerful capabilities of pattern classifiers in learning, generalizing, and parallel processing, offers promising avenues for intelligent fault detection. To address this, our paper proposes an anomaly-based approach for fault detection in electrical power systems, employing deep autoencoders. Additionally, we utilize Convolutional Autoencoders (CAEs) for dimensionality reduction, which, owing to their fewer parameters, require less training time than conventional autoencoders. The proposed method demonstrates superior performance and accuracy compared to alternative detection approaches, achieving accuracies of 97.62% and 99.92% on simulated and publicly available datasets, respectively.
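To make the reconstruction-error idea behind autoencoder anomaly detection concrete, here is a toy numpy sketch in which a rank-4 linear autoencoder (PCA) stands in for the paper's deep and convolutional autoencoders; the synthetic waveforms, rank, and 99th-percentile threshold rule are all hypothetical placeholders:

```python
import numpy as np

# Toy reconstruction-error anomaly detector. A linear autoencoder (PCA)
# stands in for the paper's deep/convolutional autoencoders; signals and
# threshold rule are synthetic placeholders, not the paper's data.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 64)

# "healthy" training windows: noisy sinusoids with random phase
train = np.stack([np.sin(2 * np.pi * t + p) + 0.05 * rng.standard_normal(64)
                  for p in rng.uniform(0.0, 2.0 * np.pi, 200)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
W = Vt[:4]                                  # encoder: top-4 principal axes

def recon_error(x):
    z = (x - mean) @ W.T                    # encode
    return float(np.mean((x - (z @ W + mean)) ** 2))  # decode, then MSE

# anomaly threshold from errors on healthy data only
thr = np.percentile([recon_error(x) for x in train], 99)

healthy = np.sin(2 * np.pi * t + 1.0) + 0.05 * rng.standard_normal(64)
faulty = healthy.copy()
faulty[20:30] += 3.0                        # injected fault transient
# recon_error(faulty) far exceeds thr, so the window is flagged anomalous
```

A deep autoencoder replaces the linear encode/decode with learned nonlinear maps, but the detection rule (reconstruction error above a threshold calibrated on healthy data) is the same.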
Authors: Nicola Ramseyer, Matthieu Jacobs, Mario Paolone
This paper presents an approach for the modelling of dependent random variables using generalised polynomial chaos. This makes it possible to formulate chance-constrained optimization problems with respect to a joint distribution that models dependencies between different stochastic inputs. Arbitrary dependencies are modelled by using Gaussian copulas to construct the joint distribution. The paper exploits the problem structure and develops suitable transformations to ensure tractability. The proposed method is applied to a probabilistic power reserve procurement problem. The effectiveness of the method in capturing dependencies is shown by comparing the approach with a standard approach considering independent random variables.
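As a generic illustration of the Gaussian-copula construction this abstract builds on, the following minimal sketch draws dependent uniform samples; the correlation value and sample size are arbitrary, and this is not the paper's generalised-polynomial-chaos method:

```python
import math
import numpy as np

# Minimal sketch of sampling from a Gaussian copula (illustrative only;
# the correlation matrix below is arbitrary, not taken from the paper).
def gaussian_copula_samples(corr, n, seed=None):
    """Draw n samples of dependent U(0,1) variables linked by the
    Gaussian copula with correlation matrix `corr`."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr))          # corr = L @ L.T
    z = rng.standard_normal((n, L.shape[0])) @ L.T    # correlated normals
    # map each marginal to U(0,1) through the standard-normal CDF
    phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    return phi(z)

corr = [[1.0, 0.8], [0.8, 1.0]]
u = gaussian_copula_samples(corr, 20000, seed=0)
# the two uniform marginals inherit a strong positive dependence
```

Arbitrary marginal distributions can then be imposed by passing each column of `u` through the desired inverse CDF, which is what makes the copula a convenient dependence model.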
Authors: Hyeon Seok Rou, Vincent Savaux, Zeping Sui, Giuseppe Thadeu Freitas de Abreu, Zilong Liu
As the standardization of sixth generation (6G) wireless systems accelerates, there is a growing consensus in favor of evolutionary waveforms that offer new features while maximizing compatibility with orthogonal frequency division multiplexing (OFDM), which underpins the 4G and 5G systems. This article presents affine frequency division multiplexing (AFDM) as a premier candidate for 6G, offering intrinsic robustness for both high-mobility communications and integrated sensing and communication (ISAC) in doubly dispersive channels, while maintaining a high degree of synergy with the legacy OFDM. To this end, we provide a comprehensive analysis of AFDM, starting with a generalized fractional-delay-fractional-Doppler (FDFD) channel model that accounts for practical pulse shaping filters and inter-sample coupling. We then detail the AFDM transceiver architecture, demonstrating that it reuses nearly the entire OFDM pipeline and requires only lightweight digital pre- and post-processing. We also analyze the impact of hardware impairments, such as phase noise and carrier frequency offset, and explore advanced functionalities enabled by the chirp-parameter domain, including index modulation and physical-layer security. By evaluating the reusability across the radio-frequency, physical, and higher layers, the article demonstrates that AFDM provides a low-risk, feature-rich, and efficient path toward achieving high-fidelity communications in the later versions of 6G and beyond (6G+).
Authors: Shunsei Yamagishi, Lei Jing
Attitude and Heading Reference Systems (AHRSs) are broadly applied wherever reliable orientation and motion sensing is required. In this paper, we present an improved Cubature Kalman Filter (CKF) with lower computational cost while maintaining estimation accuracy, which is named "Kaisoku Cubature Kalman Filter (KCKF)". The computationally efficient equations of the KCKF are derived by simplifying those of the CKF, while preserving equivalent mathematical relations. The lightweight prediction equations in the KCKF are derived by expanding the summation terms in the CKF and simplifying the result. This paper shows that the KCKF requires fewer floating-point operations (FLOPs) than the CKF. The controlled experimental results show that the KCKF reduces the computation time by approximately 19% compared to the CKF on a high-performance computer, whereas the KCKF reduces the computation time by approximately 15% compared to the CKF on a low-cost single-board computer. In addition, the KCKF maintains the attitude estimation accuracy of the CKF.
Authors: Xiaowen Tao, Yinuo Wang, Haitao Ding, Yuanyang Qi, Ziyu Song
With the growth of intelligent civil infrastructure and smart cities, operation and maintenance (O&M) increasingly requires safe, efficient, and energy-conscious robotic manipulation of articulated components, including access doors, service drawers, and pipeline valves. However, existing robotic approaches either focus primarily on grasping or target object-specific articulated manipulation, and they rarely incorporate explicit actuation energy into multi-objective optimisation, which limits their scalability and suitability for long-term deployment in real O&M settings. Therefore, this paper proposes an articulation-agnostic and energy-aware reinforcement learning framework for robotic manipulation in intelligent infrastructure O&M. The method combines part-guided 3D perception, weighted point sampling, and PointNet-based encoding to obtain a compact geometric representation that generalises across heterogeneous articulated objects. Manipulation is formulated as a Constrained Markov Decision Process (CMDP), in which actuation energy is explicitly modelled and regulated via a Lagrangian-based constrained Soft Actor-Critic scheme. The policy is trained end-to-end under this CMDP formulation, enabling effective articulated-object operation while satisfying a long-horizon energy budget. Experiments on representative O&M tasks demonstrate 16%-30% reductions in energy consumption, 16%-32% fewer steps to success, and consistently high success rates, indicating a scalable and sustainable solution for infrastructure O&M manipulation.
Authors: Yuanliang Li, Xun Gong, Reza Iravani, Bo Cao, Heng Liu, Ziming Chen
The DC-side ground fault (GF) poses significant risks to three-phase TN-earthed photovoltaic (PV) systems, as the resulting high fault current can directly damage both PV inverters and PV modules. Once a fault occurs, locating the faulty string through manual string-by-string inspection is highly time-consuming and inefficient. This work presents a comprehensive analysis of GF characteristics through fault-current analysis and a simulation-based case study covering multiple fault locations. Building on these insights, we propose an edge-AI-based GF localization approach tailored for three-phase TN-earthed PV systems. A PLECS-based simulation model that incorporates PV hysteresis effects is developed to generate diverse GF scenarios, from which correlation-based features are extracted throughout the inverter's four-stage shutdown sequence. Using the simulated dataset, a lightweight Variational Information Bottleneck (VIB)-based localization model is designed and trained, achieving over 93% localization accuracy at typical sampling rates with low computational cost, demonstrating strong potential for deployment on resource-constrained PV inverters.
Authors: Bikram Panthee, Haoming Yang, Corey D. Harper, Amritanshu Pandey
The paper develops a methodology, Grid-ECO, to optimally allocate electric vehicle charging stations (EVCS) within a distribution feeder, while considering EV charging demand at census-level granularity. The underlying problem is NP-hard and requires satisfying nonlinear, nonconvex, three-phase unbalanced AC network constraints while including integer decision variables. Existing works cannot guarantee AC feasibility or optimality of this problem without either i) relaxing the integer decision variable space or ii) convexifying AC constraints. The proposed Grid-ECO exactly solves the underlying mixed-integer nonlinear program (MINLP) to near-zero optimality gap while prioritizing candidate locations based on grid voltage and current sensitivities. To solve the MINLP exactly, Grid-ECO exactly reformulates it into a mixed-integer bilinear program (MIBLP), enabling global optimization using the spatial branch-and-bound algorithm (sBnB). To ensure computational tractability for large-scale feeders, we develop and include a novel presolving strategy based on Sequential Bound Tightening (SBT) with variable filtering and decomposition. Case studies demonstrate that Grid-ECO outperforms the off-the-shelf Gurobi sBnB solver by solving cases where no feasible solution is found within 167 hours. When a feasible solution is found by the off-the-shelf solver, Grid-ECO reduces solution time by up to 73\% and sBnB node exploration by up to 97\%, while achieving a 0\% optimality gap and guaranteed AC feasibility.
Authors: Jiawei Zhang, Gregor Verbic, Frederik Geth, Mohsen Aldaadi, Rahmat Heidari, Julio Braslavsky
The fast uptake of distributed energy resources (DERs) presents increasing challenges for managing hosting capacity in distribution networks. Existing solutions include direct load control, operating envelopes, and price-based control through dynamic energy prices. Despite their effectiveness, these methods often rely on assumed prosumer behavioural patterns and overlook prosumers' desire to retain control over their devices. Additionally, current fixed or Time-of-Use (ToU) prices are based on spatial and temporal averages, having limited impact on network conditions and DER operation. To address these limitations, this paper proposes a bilevel optimisation framework that explicitly models prosumer decision-making in the design of dynamic network prices. The upper level represents the distribution system operator (DSO), setting network prices under cost-recovery and network constraints, while the lower level models prosumers optimising DER operation in response. The proposed framework preserves customer prerogative, enhances DER flexibility, and offers actionable insights for network hosting capacity management and the evolution of network tariff structures under high DER penetration.
Authors: Huang Zhenyu, Yuan Zhao
While ex-ante screening and static price caps are global standards for mitigating price volatility, Singapore's electricity market employs a unique dual-defense mechanism integrating vesting contracts (VC) with a temporary price cap (TPC). Using high-frequency data from 2021 to 2024, this paper evaluates this mechanism and yields three primary findings. First, a structural trade-off exists within the VC framework: while VC quantity (VCQ) suppresses average prices, it paradoxically exacerbates instability via liquidity squeezes. Conversely, VC price (VCP) functions as a tail-risk anchor, dominating at extreme quantiles where VCQ efficacy wanes. Second, a structural break around the 2023 reform reveals a fundamental re-mapping of price dynamics; the previously positive pass-through from offer ratios to clearing prices was largely neutralized post-reform. Furthermore, diagnostics near the TPC threshold show no systematic evidence of strategic bid shading, confirming the TPC's operational integrity. Third, the dual-defense mechanism exhibits a critical synergy that resolves the volatility trade-off. The TPC reverses the volatility penalty of high VCQ, shifting the elasticity of conditional volatility from a destabilizing 0.636 to a stabilizing -0.213. This synergy enables the framework to enhance tail-risk control while eliminating liquidity-related stability costs. We conclude that this dual-defense mechanism successfully decouples price suppression from liquidity risks, thereby maximizing market stability.
Authors: Ayrton Almada, Laurent Pagnier, Igal Goldshtein, Saif R. Kazi, Michael (Misha) Chertkov
Power system operators routinely perform N-1 contingency analysis, yet conventional tools provide limited guidance on which lines or transformers deserve heightened attention during fast post-fault transients. In particular, static screening does not reveal whether (1) the same faulted line repeatedly triggers severe downstream overloads, or (2) a specific transformer emerges as vulnerable across many distinct fault scenarios. This paper introduces a real-time dynamic N-1 screening framework that addresses this gap by estimating, for each counterfactual single-phase transmission fault, the probability of transient overcurrent on critical grid elements. The output is an operator-facing dashboard that ranks (a) faulted lines whose outages most frequently lead to dangerous transformer overloads, and (b) transformers that consistently overload across top-risk scenarios, both of which are actionable indicators for real-time situational awareness. The approach models post-fault electromechanical dynamics using a linear stochastic formulation of the swing equations with short-lived, fault-localized uncertainty, and combines analytic transient evaluation with cross-entropy based importance sampling to efficiently estimate rare but high-impact events. All N-1 contingencies are evaluated in parallel with linear computational complexity. The framework is demonstrated on the IEEE 118-bus system, where it reveals latent high-risk lines and transformers that remain invisible under deterministic dynamic or static N-1 analysis. Results show orders-of-magnitude computational speedup relative to brute-force Monte Carlo, enabling practical deployment within real-time operational cycles.
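To illustrate the cross-entropy importance-sampling idea invoked in this abstract (on a deliberately simple stand-in, not the paper's stochastic swing-equation model), consider estimating a rare Gaussian tail probability:

```python
import numpy as np

# Cross-entropy importance sampling for the toy rare event P(X > 4),
# X ~ N(0,1), whose true value is about 3.17e-5. The Gaussian family and
# threshold are stand-ins for the paper's overcurrent events.
rng = np.random.default_rng(7)
a = 4.0

mu = 0.0                                     # tilt of the sampler N(mu, 1)
for _ in range(5):                            # cross-entropy iterations
    x = rng.standard_normal(10_000) + mu
    w = np.exp(-mu * x + 0.5 * mu ** 2)       # likelihood ratio N(0,1)/N(mu,1)
    gamma = min(a, np.quantile(x, 0.9))       # elite level, capped at target
    elite = x >= gamma
    mu = np.average(x[elite], weights=w[elite])   # CE update of the tilt

# final importance-sampling estimate under the tilted density
x = rng.standard_normal(200_000) + mu
w = np.exp(-mu * x + 0.5 * mu ** 2)
p_hat = float(np.mean(w * (x > a)))
```

A crude Monte Carlo estimate of the same probability would need on the order of 10^7 samples for comparable relative accuracy, which is the speedup mechanism the abstract refers to.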
Authors: Siying Li, Lang Tong, Timothy D. Mount
We study capacity accreditation of resource-colocated large loads, defined as large demands such as data center and manufacturing loads colocated with behind-the-meter generation and storage resources, synchronously connected to the bulk power system, and capable of participating in the wholesale electricity market as an integrated unit. Because the accredited capacity of a resource portfolio is not equal to the sum of its individual resources' capacity values, we adopt a risk-based capacity accreditation framework to evaluate the combined reliability contribution of colocated resources. Grounded in the effective load carrying capability (ELCC) metric, the proposed capacity accreditation employs a convex optimization engine that jointly dispatches colocated resources to minimize reliability risk. We apply the developed methodology to a hydrogen manufacturing facility with colocated renewable generation, storage, and fuel cell resources.
Authors: Thinh Viet Le, Mark M. Wilde, Vassilis Kekatos
Conic programs arise broadly in physics, quantum information, machine learning, and engineering, many of which are defined over sparse graphs. Although such problems can be solved in polynomial time using classical interior-point solvers, the computational complexity scales unfavorably with graph size. In this context, this work proposes a variational quantum paradigm for solving conic programs, including quadratically constrained quadratic programs (QCQPs) and semidefinite programs (SDPs). We encode primal variables via the state of a parameterized quantum circuit (PQC), and dual variables via the probability mass function of a second PQC. The Lagrangian function can thus be expressed as scaled expectations of quantum observables. A primal-dual solution can be found by minimizing/maximizing the Lagrangian over the parameters of the first/second PQC. We pursue saddle points of the Lagrangian in a hybrid fashion. Gradients of the Lagrangian are estimated using the two PQCs, while PQC parameters are updated classically using a primal-dual method. We propose permuting the primal variables so that related observables are expressed in a banded form, enabling efficient measurement. The proposed framework is applied to the OPF problem, a large-scale optimization problem central to the operation of electric power systems. Numerical tests on the IEEE 57-node power system using Pennylane's simulator corroborate that the proposed doubly variational quantum framework can find high-quality OPF solutions. Although showcased for the OPF, this framework features a broader scope, including conic programs with numerous variables and constraints, problems defined over sparse graphs, and training quantum machine learning models to satisfy constraints.
Authors: Zitian Qiu, Yunjie Gu
With the growing penetration of inverter-based resources (IBRs) in modern power systems, system strength is decreasing. Due to the inherent difference in the short-circuit capacity contributions of synchronous generators and inverters, the short-circuit ratio (SCR) is not a one-size-fits-all metric for assessing system strength. Reflecting the distinct dynamic behavior of IBRs under small- and large-signal disturbances, system strength is separated accordingly. To address large-signal system strength assessment, a control-type-dependent metric, the Power Margin Ratio (PMR), is proposed in this paper. PMR is defined as the ratio between the maximum power that can be injected into the system without causing instability and the nominal power of the IBR. It can be obtained via power flow calculation with a modified algorithm. The theoretical foundation of PMR is established from the viewpoint of dynamical systems. PMR is identical to the SCR for the single-plant-infinite-bus system, while offering advantages for multi-infeed power systems. Comprehensive case studies and discussions validate that PMR reveals large-signal system strength from a static perspective.
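As a toy illustration of the PMR idea (not the paper's modified power-flow algorithm), one can bisect on the injectable power of a two-bus source-behind-reactance model, using loss of power-flow solvability as the instability proxy; all parameter values and function names here are our own assumptions:

```python
import numpy as np

def pcc_voltage(p, e=1.0, x=0.2):
    """|V| at the point of connection of a two-bus system (source e behind
    reactance x, unity-power-factor injection p), in per unit.
    Returns None when the power flow has no real solution."""
    disc = e**4 / 4 - (p * x) ** 2          # solvability condition
    if disc < 0:
        return None
    return float(np.sqrt(e**2 / 2 + np.sqrt(disc)))  # high-voltage branch

def power_margin_ratio(p_nom, e=1.0, x=0.2, tol=1e-6):
    """Bisect for the maximum injectable power, then divide by nominal power."""
    lo, hi = 0.0, e**2 / x                  # crude upper bound on p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pcc_voltage(mid, e, x) is not None:
            lo = mid
        else:
            hi = mid
    return lo / p_nom
```

For this model the maximum transfer is e²/(2x), so with e = 1, x = 0.2, and p_nom = 1 the ratio evaluates to 2.5.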
Authors: Andrej Stankovski, Blazhe Gjorgiev, James Ciyu Qin, Giovanni Sansavini
Power system security assessments, e.g. via cascading outage models, often use operational set-points based on optimal power flow (OPF) dispatch. However, driven by cost minimization, OPF provides an ideal, albeit unrealistic, clearing of the generating units, disregarding the complex interactions among market participants. The security of the system, therefore, may be overestimated. To address this gap, we introduce a market model with a social-welfare-based day-ahead market clearing mechanism. The security implications are analyzed via Cascades, a cascading outage analysis framework. We apply this framework to the IEEE-118 bus system with three independent control zones. The results show that market dispatch leads to an increase in demand not served of up to 80% higher than OPF, highlighting a security overestimation. Operators can use this information to properly allocate reserves and perform efficient expansion planning strategies.
Authors: Kerd Topallaj, Colin McKerrell, Suraj Ramanathan, Ioannis Zografopoulos
In recent years, the evolution of modern power grids has been driven by the growing integration of remotely controlled grid assets. Although Distributed Energy Resources (DERs) and Inverter-Based Resources (IBRs) enhance operational efficiency, they also introduce cybersecurity risks. The remote accessibility of such critical grid components creates entry points for attacks that adversaries could exploit, posing threats to the stability of the system. To evaluate the resilience of energy systems under such threats, this study employs real-time simulation and a modified version of the IEEE 39-bus system that incorporates a Microgrid (MG) with solar-based IBR. The study assesses the impact of remote attacks on MG stability under different levels of IBR penetration through hardware-in-the-loop (HIL) simulations. Namely, we analyze voltage, current, and frequency profiles before, during, and after cyberattack-induced disruptions. The results demonstrate that real-time HIL testing is a practical approach to uncover potential risks and develop robust mitigation strategies for resilient MG operations.
Authors: Ambuj Gupta, Balarko Chaudhuri, Mark O'Malley
The widely accepted definition of grid-forming (GFM) inverter states that it should behave as a (nearly) constant voltage source behind an impedance by maintaining a (nearly) constant internal voltage phasor in the sub-transient to transient time frame. Some system operators further mandate permissible ranges for this effective impedance. However, these specifications do not clearly define the location of the internal voltage source, and no systematic method exists to quantify its effective impedance for a black-box GFM model. To address this, we first compare the transient responses of an ideal voltage source and a GFM to show that an idealistic GFM maintains a (nearly) constant voltage across the filter capacitor, rather than at the inverter switches. Then we propose a systematic method to quantify the effective impedance of a GFM from its black-box model using frequency-domain admittance plots. Using standard PSCAD GFM models developed by NREL, we demonstrate that the GFM's equivalent impedance model captures the sub-transient response and static voltage stability limit reasonably accurately.
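A sketch of the impedance-extraction step: fit an effective R + jωL to sampled admittance data, as one might obtain from frequency scans of a black-box GFM model. The parameter values below are our own assumptions, and measurement noise is omitted for clarity:

```python
import numpy as np

# Assumed "true" effective series impedance behind the GFM's internal source.
R_true, L_true = 0.02, 1.5e-3
w = 2 * np.pi * np.logspace(0, 3, 50)     # angular frequencies, 1 Hz .. 1 kHz
Y = 1.0 / (R_true + 1j * w * L_true)      # sampled small-signal admittance

Z = 1.0 / Y                               # back to impedance: Z = R + j w L
R_est = float(np.real(Z).mean())          # Re(Z) is the constant resistance
L_est = float(np.linalg.lstsq(w[:, None], np.imag(Z), rcond=None)[0][0])
```

With noisy frequency-domain data the same least-squares fit applies, restricted to the sub-transient/transient frequency band where the constant-voltage-source model is expected to hold.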
Authors: Fan Jiang, Xingpeng Li, Pascal Van Hentenryck
To ensure frequency security in power systems, both the rate of change of frequency (RoCoF) and the frequency nadir (FN) must be explicitly accounted for in real-time frequency-constrained optimal power flow (FCOPF). However, accurately modeling system frequency dynamics through analytical formulations is challenging due to their inherent nonlinearity and complexity. To address this issue, deep neural networks (DNNs) are utilized to capture the nonlinear mapping between system operating conditions and key frequency performance metrics. In this paper, a DNN-based frequency prediction model is developed and trained using the high-fidelity time-domain simulation data generated in PSCAD/EMTDC. The trained DNN is subsequently transformed into an equivalent mixed-integer linear programming (MILP) form and embedded into the FCOPF problem as additional constraints to explicitly enforce frequency security, leading to the proposed DNN-FCOPF formulation. For benchmarking, two alternative models are considered: a conventional optimal power flow without frequency constraints and a linearized FCOPF incorporating system-level RoCoF and FN constraints. The effectiveness of the proposed method is demonstrated by comparing the solutions of these three models through extensive PSCAD/EMTDC time-domain simulations under various loading scenarios.
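The DNN-to-MILP transformation typically encodes each ReLU unit with big-M constraints. The small numpy check below uses our own illustrative encoding (not necessarily the paper's exact formulation) and verifies that an actual forward pass satisfies the constraint set:

```python
import numpy as np

def relu_bigm_residuals(a, y, z, M):
    """Residuals (all must be >= 0) of the standard big-M encoding of
    y = max(0, a) with binary indicator z; a is the pre-activation w@x + b."""
    return np.array([
        y - a,                 # y >= a
        a + M * (1 - z) - y,   # y <= a + M(1 - z)
        M * z - y,             # y <= M z
        y,                     # y >= 0
    ])

# Check a random single-layer forward pass against the MILP encoding.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
a = W @ x + b
y = np.maximum(a, 0.0)
z = (a > 0).astype(float)
M = 100.0                      # assumed valid bound on |a|
ok = all(np.all(relu_bigm_residuals(ai, yi, zi, M) >= -1e-9)
         for ai, yi, zi in zip(a, y, z))
```

In the actual DNN-FCOPF, x would be the optimization variables (operating conditions) and the binaries z become MILP decision variables, so the optimizer can only select operating points the trained DNN certifies as frequency-secure.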
Authors: Neha Gupta, Hamed Alimohammadi, Mohammad Shojafar, De Mi, Muhammad N.M. Bhutta
The Open Radio Access Network (O-RAN) offers flexibility and innovation but introduces unique security vulnerabilities, particularly from cryptographically relevant quantum computers. While Post-Quantum Cryptography (PQC) is the primary scalable defence, its computationally intensive handshakes create a significant bottleneck for the RAN control plane, posing sustainability challenges. This paper proposes an energy-aware framework to solve this PQC bottleneck, ensuring quantum resilience without sacrificing operational energy efficiency. The system employs an O-RAN aligned split: a Crypto Policy rApp residing in the Non-Real-Time (Non-RT) RIC defines the strategic security envelope (including PQC suites), while a Security Operations Scheduling (SOS) xApp in the Near-RT RIC converts these into tactical timing and placement intents. Cryptographic enforcement remains at standards-compliant endpoints: the Open Fronthaul utilizes Media Access Control Security (MACsec) at the O-DU/O-RU, while the xhaul (midhaul and backhaul) utilizes IP Security (IPsec) at tunnel terminators. The SOS xApp reduces PQC overhead by batching non-urgent handshakes, prioritizing session resumption, and selecting parameters that meet slice SLAs while minimizing joules per secure connection. We evaluate the architecture via a Discrete-Event Simulation (DES) using 3GPP-aligned traffic profiles and verified hardware benchmarks from literature. Results show that intelligent scheduling can reduce per-handshake energy by approximately 60 percent without violating slice latency targets.
Authors: Shuaijun Liu, Jinqiu Du, Yaxin Zheng, Jiaying Yin, Yuhui Deng, Jingjin Wu
Unmanned Aerial Vehicles (UAVs) have significantly enhanced fog computing by acting as both flexible computation platforms and mobile communication relays. In this paper, we consider four important and interdependent modules: attitude control, trajectory planning, resource allocation, and task assignment, and propose a holistic framework that jointly optimizes the total latency and energy consumption for UAV-assisted fog computing in a three-dimensional spatial domain with varying terrain elevations and dynamic task generations. We first establish a fuzzy-enhanced adaptive reinforcement proportional-integral-derivative control model to control the attitude. Then, we propose an enhanced Ant Colony System (ACS)-based algorithm, which includes a safety value and a decoupling mechanism to overcome the convergence issue in classical ACS, to compute the optimal UAV trajectory. Finally, we design an algorithm based on the Particle Swarm Optimization technique to determine where each offloaded task should be executed. Under our proposed framework, the outcome of one module affects the decision-making in another, providing a holistic perspective of the system and thus leading to improved solutions. We demonstrate by extensive simulation results that our proposed framework can significantly improve the overall performance, measured by latency and energy consumption, compared to existing mainstream approaches.
Authors: Rishav Sen, Fangqi Liu, Jose Paolo Talusan, Ava Pettet, Yoshinori Suzue, Mark Bailey, Ayan Mukhopadhyay, Abhishek Dubey
The growth of Electric Vehicles (EVs) creates a conflict in vehicle-to-building (V2B) settings between building operators, who face high energy costs from uncoordinated charging, and drivers, who prioritize convenience and a full charge. To resolve this, we propose a negotiation-based framework that, by design, guarantees voluntary participation, strategy-proofness, and budget feasibility. It transforms EV charging into a strategic resource by offering drivers a range of incentive-backed options for modest flexibility in their departure time or requested state of charge (SoC). Our framework is calibrated with user survey data and validated using real operational data from a commercial building and an EV manufacturer. Simulations show that our negotiation protocol creates a mutually beneficial outcome: lowering the building operator's costs by over 3.5% compared to an optimized, non-negotiating smart charging policy, while simultaneously reducing user charging expenses by 22% below the utility's retail energy rate. By aligning operator and EV user objectives, our framework provides a strategic bridge between energy and mobility systems, transforming EV charging from a source of operational friction into a platform for collaboration and shared savings.
Authors: Jie Feng, Yuanyuan Shi, Deepjyoti Deka
Reinforcement learning (RL) has shown great potential for designing voltage control policies, but their performance often degrades under changing system conditions such as topology reconfigurations and load variations. We introduce a topology-aware online policy optimization framework that leverages data-driven estimation of voltage-reactive power sensitivities to achieve efficient policy adaptation. Exploiting the sparsity of topology-switching events, where only a few lines change at a time, our method efficiently detects topology changes and identifies the affected lines and parameters, enabling fast and accurate sensitivity updates without recomputing the full sensitivity matrix. The estimated sensitivity is subsequently used for online policy optimization of a pre-trained neural-network-based RL controller. Simulations on both the IEEE 13-bus and SCE 56-bus systems demonstrate over 90 percent line identification accuracy, using only 15 data points. The proposed method also significantly improves voltage regulation performance compared with non-adaptive policies and adaptive policies that rely on regression-based online optimization methods for sensitivity estimation.
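The "fast sensitivity update without recomputing the full matrix" idea can be sketched with a Sherman-Morrison rank-one update, under our simplifying assumption that a single line's susceptance change perturbs the network matrix as B + Δb·aaᵀ along the line's incidence vector a:

```python
import numpy as np

def sm_update(S, a, db):
    """Rank-one update of S = inv(B) after one line's susceptance changes by
    db along incidence vector a (so B' = B + db * a a^T).
    Costs O(n^2) instead of the O(n^3) of a full re-inversion."""
    Sa = S @ a
    return S - np.outer(Sa, Sa) * (db / (1.0 + db * (a @ Sa)))
```

When k lines switch at once, k such updates (or a rank-k block variant) apply; the detection step in the paper supplies which lines changed and by how much.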
Authors: Marc Gillioz, Guillaume Dubuis, Étienne Voutaz, Philippe Jacquod
We apply several machine learning algorithms to the problem of anomaly detection in operational data for large-scale, high-voltage electric power grids. We observe important differences in the performance of the algorithms. Neural networks typically outperform classical algorithms such as k-nearest neighbors and support vector machines, which we explain by the strong contextual nature of the anomalies. We show that unsupervised learning algorithms work remarkably well and that their predictions are robust against simultaneous, concurrent anomalies.
Authors: Mickey M. Rogers, William M. Ota, Nathaniel Burola, Tepring Piquado
The rapid growth of data centers driven by cloud computing and artificial intelligence is reshaping infrastructure planning and environmental governance in the United States. Georgia has emerged as a major market for data center development, particularly in the Atlanta metropolitan region, creating economic opportunity alongside significant challenges. Data centers are water-intensive, energy-intensive, and land-intensive infrastructure whose cumulative impacts strain municipal water systems, electric grids, and local land-use frameworks. Unlike single industrial projects, data centers are often proposed in clusters, amplifying community and infrastructure impacts. This report draws on insights from a Georgia-based expert convening to describe the implications of data center growth for water management, energy reliability, ratepayer equity, zoning, and community engagement, identify potential gaps in transparency and regulatory coordination, and present a policy roadmap to help Georgia balance digital infrastructure growth with sustainability, equity, and community protection.
Authors: Junting Chen, Bowen Li, Hao Sun, Shuguang Cui, Nikolaos Pappas
The emergence of dense, mission-driven aerial networks supporting the low-altitude economy presents unique communication challenges, including extreme channel dynamics and severe cross-tier interference. Traditional reactive communication paradigms are ill-suited to these environments, as they fail to leverage the network's inherent predictability. This paper introduces predictive communication, a novel paradigm transforming network management from reactive adaptation to proactive optimization. The approach is enabled by fusing predictable mission trajectories with stable, large-scale radio environment models (e.g., radio maps). Specifically, we present a hierarchical framework that decomposes the predictive cross-layer resource allocation problem into three layers: strategic (routing), tactical (timing), and operational (power). This structure aligns decision-making timescales with the accuracy levels and ranges of available predictive information. We demonstrate that this foresight-driven framework achieves an order-of-magnitude reduction in cross-tier interference, laying the groundwork for robust and scalable low-altitude communication systems.
Authors: Charles Wood
Imaging systems are commonly described using resolution, contrast, and signal-to-noise ratio, but these quantities do not provide a general account of how physical transformations affect the flow of information. This paper introduces an operator-based formulation of information theory for imaging. The approach models the imaging chain as a composition of bounded operators acting on functions, and characterises information redistribution using the spectral properties of these operators. Three measures are developed. Operator entropy quantifies how an operator distributes energy across its singular spectrum. Operator information capacity describes the number of modes that remain recoverable above a noise-dependent threshold. An irreversibility index measures the information lost through suppression or elimination of modes and captures the accumulation of information loss under operator composition. The framework applies to linear, nonlinear, and stochastic operators and does not depend on the specific imaging modality. Analytical examples show how attenuation, blur, and sampling affect entropy, capacity, and irreversibility in different ways. The results provide a general structure for analysing the physical limits of imaging and form the basis for subsequent work on information geometry, spatiotemporal budgets, nonlinear channels, and reconstruction algorithms.
Authors: Hamed Faghihian, Arman Sargolzaei
Electric vehicles (EVs) are increasingly deployed, yet range limitations remain a key barrier. Improving energy efficiency via advanced control is therefore essential, and emerging vehicle automation offers a promising avenue. However, many existing strategies rely on indirect surrogates because linking power consumption to control inputs is difficult. We propose a neural-network (NN) identifier that learns this mapping online and couples it with an actor-critic reinforcement learning (RL) framework to generate optimal control commands. The resulting actor-critic-identifier architecture removes dependence on explicit models relating total power, recovered energy, and inputs, while maintaining accurate speed tracking and maximizing efficiency. Update laws are derived using Lyapunov stability analysis, and performance is validated in simulation. Compared to a traditional controller, the method increases total energy recovery by 12.84%, indicating strong potential for improving EV energy efficiency.
Authors: Jasper Juchem, Mia Loccufier
Gyroscopic interconnections enable redistribution of energy among degrees of freedom while preserving passivity and total energy, and they play a central role in controlled Lagrangian methods and IDA-PBC. Yet their quantitative effect on transient energy exchange and subsystem performance is not well characterised. We study a conservative mechanical system with constant skew-symmetric velocity coupling. Its dynamics are integrable and evolve on invariant two-tori, whose projections onto subsystem phase planes provide a geometric description of energy exchange. When the ratio of normal-mode frequencies is rational, these projections become closed resonant Lissajous curves, enabling structured analysis of subsystem trajectories. To quantify subsystem behaviour, we introduce the inscribed-radius metric: the radius of the largest origin-centred circle contained in a projected trajectory. This gives a lower bound on attainable subsystem energy and acts as an internal performance measure. We derive resonance conditions and develop an efficient method to compute or certify the inscribed radius without time-domain simulation. Our results show that low-order resonances can strongly restrict energy depletion through phase-locking, whereas high-order resonances recover conservative bounds. These insights lead to an explicit interconnection-shaping design framework for both energy absorption and containment control strategies, while taking responsiveness into account.
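For integer frequency ratios, the inscribed-radius metric of a projected Lissajous trajectory reduces to the minimum distance from the origin to the curve (when the closed curve encloses the origin). A brute-force dense-sampling sketch, with our own function name and parameterisation, rather than the paper's simulation-free certification method:

```python
import numpy as np

def inscribed_radius(p, q, phi, n=200001):
    """Inscribed-radius metric for the Lissajous projection
    (x, y) = (sin(p t), sin(q t + phi)) with integer p, q:
    the radius of the largest origin-centred circle contained in the
    sampled closed curve, i.e. the minimum origin-to-curve distance."""
    t = np.linspace(0.0, 2.0 * np.pi, n)   # one full period for integer p, q
    x, y = np.sin(p * t), np.sin(q * t + phi)
    return float(np.min(np.hypot(x, y)))
```

A 1:1 resonance with a 90-degree phase shift traces the unit circle (radius 1), while a 1:2 figure-eight with zero phase passes through the origin (radius 0), illustrating how resonance order and phase control the attainable subsystem energy bound.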
Authors: Andrey Gorbunov, Youhong Chen, Petr Vorobev, Jin Ma, Gregor Verbic
Ensuring small-signal stability in power systems with a high share of inverter-based resources (IBRs) is hampered by two factors: (i) device and network parameters are often uncertain or completely unknown, and (ii) brute-force enumeration of all topologies is computationally intractable. These challenges motivate plug-and-play (PnP) certificates that verify stability locally yet hold globally. Passivity is an attractive property because it guarantees stability under feedback and network interconnections; however, strict passivity rarely holds for practical controllers such as Grid Forming Inverters (GFMs) employing P-Q droop. This paper extends the passivity condition by constructing a dynamic, frequency-dependent multiplier that enables PnP stability certification of each component based solely on its admittance, without requiring any modification to the controller design. The multiplier is parameterised as a linear filter whose coefficients are tuned under a passivity goal. Numerical results for practical droop gains confirm the PnP rules, substantially enlarging the certified stability region while preserving the decentralised, model-agnostic nature of passivity-based PnP tests.
Authors: Arya Abdollahi
Urban energy systems face increasing challenges due to high penetration of renewable energy sources, extreme weather events, and other high-impact, low-probability disruptions. This project proposes a community-centered, open-access framework to enhance the resilience and reliability of urban power and gas networks by integrating microgrid partitioning, mobile energy storage deployment, and data-driven risk assessment. The approach involves converting passive distribution networks into active, self-healing microgrids using distributed energy resources and remotely controlled switches to enable flexible reconfiguration during normal and emergency operations. To address uncertainties from intermittent renewable generation and variable load, an adjustable interval optimization method combined with a column and constraint generation algorithm is developed, providing robust planning solutions without requiring probabilistic information. Additionally, a real-time online risk assessment tool is proposed, leveraging 25 multi-dimensional indices including load, grid status, resilient resources, emergency response, and meteorological factors to support operational decision-making during extreme events. The framework also optimizes the long-term sizing and allocation of mobile energy storage units while incorporating urban traffic data for effective routing during emergencies. Finally, a novel time-dependent resilience and reliability index is introduced to quantify system performance under diverse operating conditions. The proposed methodology aims to enable resilient, efficient, and adaptable urban energy networks capable of withstanding high-impact disruptions while maximizing operational and economic benefits.
Authors: Philipp C. Böttcher, Carsten Hartmann, Andrea Benigni, Thiemo Pesch, Dirk Witthaut
Deterministic frequency deviations (DFDs) are systematic and predictable excursions of grid frequency that arise from synchronized generation ramps induced by electricity market scheduling. In this paper, we analyze the impact of the European day-ahead market reform of 1 October 2025, which replaced hourly trading blocks with quarter-hourly blocks, on DFDs in the Central European synchronous area. Using publicly available frequency measurements, we compare periods before and after the reform based on daily frequency profiles, indicators characterizing frequency deviations, principal component analysis, Fourier-based functional data analysis, and power spectral density analysis. We show that the reform substantially reduces characteristic hourly frequency deviations and suppresses dominant spectral components at hourly and half-hourly time scales, while quarter-hourly structures gain relative importance. While the likelihood of large frequency deviations decreases overall, reductions for extreme events are less clear and depend on the metric used. Our results demonstrate that market design reforms can effectively mitigate systematic frequency deviations, but also highlight that complementary technical and regulatory measures are required to further reduce large frequency excursions in low-inertia power systems.
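The spectral signature of market-induced deviations can be reproduced on synthetic data: an hourly sawtooth in a 1 Hz-sampled frequency trace yields a dominant spectral line at one cycle per hour. All signal parameters below are our own illustrative choices, not ENTSO-E data:

```python
import numpy as np

# Synthetic grid-frequency trace sampled at 1 Hz: an hourly sawtooth
# mimicking market-scheduling-induced deterministic deviations, plus noise.
fs, hours = 1.0, 24
t = np.arange(hours * 3600)
rng = np.random.default_rng(1)
signal = 0.05 * ((t % 3600) / 3600 - 0.5) + 0.005 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2   # power spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = freqs[1:][np.argmax(spec[1:])]                     # skip the DC bin
```

After a reform that replaces hourly with quarter-hourly ramps, the same analysis would show the dominant line migrating from 1/3600 Hz toward 1/900 Hz, which is exactly the shift the paper quantifies.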
Authors: Shinhoo Kang, Sangwook Kim, Sehyun Yun
The transition toward low-inertia power systems demands modeling frameworks that provide not only accurate state predictions but also physically consistent sensitivities for control. While scientific machine learning offers powerful nonlinear modeling tools, the control-oriented implications of different differentiable paradigms remain insufficiently understood. This paper presents a comparative study of Physics-Informed Neural Networks (PINNs), Neural Ordinary Differential Equations (NODEs), and Differentiable Programming (DP) for modeling, identification, and control of power system dynamics. Using the Single Machine Infinite Bus (SMIB) system as a benchmark, we evaluate their performance in trajectory extrapolation, parameter estimation, and Linear Quadratic Regulator (LQR) synthesis. Our results highlight a fundamental trade-off between data-driven flexibility and physical structure. NODE exhibits superior extrapolation by capturing the underlying vector field, whereas PINN shows limited generalization due to its reliance on a time-dependent solution map. In the inverse problem of parameter identification, while both DP and PINN successfully recover the unknown parameters, DP achieves significantly faster convergence by enforcing governing equations as hard constraints. Most importantly, for control synthesis, the DP framework yields closed-loop stability comparable to the theoretical optimum. Furthermore, we demonstrate that NODE serves as a viable data-driven surrogate when governing equations are unavailable.
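The LQR-synthesis benchmark can be sketched on the linearized SMIB swing equation; the Hamiltonian eigenvector method below is a standard numpy-only route to the Riccati solution, and the per-unit parameter values are our own assumptions:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR gain via the Hamiltonian eigenvector method."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stab = V[:, w.real < 0]                   # the n stable eigenvectors
    P = np.real(stab[n:, :] @ np.linalg.inv(stab[:n, :]))  # Riccati solution
    return Rinv @ B.T @ P                     # u = -K x with K = R^-1 B^T P

# Linearized SMIB swing dynamics:
#   d(delta)/dt = omega,   M_i d(omega)/dt = -K_s delta - D omega + u
M_i, D, K_s = 5.0, 1.0, 1.2
A = np.array([[0.0, 1.0], [-K_s / M_i, -D / M_i]])
B = np.array([[0.0], [1.0 / M_i]])
K = lqr(A, B, np.eye(2), np.array([[1.0]]))
cl_eigs = np.linalg.eigvals(A - B @ K)        # closed loop should be Hurwitz
```

In the DP setting of the paper, A and B would be obtained by differentiating through the governing equations rather than written down by hand, which is what makes the resulting gains physically consistent.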
Authors: Nina Fatehi, Antar Kumar Biswas, Masoud H. Nazari
This paper presents a novel learning-based framework for predicting power outages caused by extreme events. The proposed approach targets low-probability high-consequence outage scenarios and leverages a comprehensive set of features derived from publicly available data sources. We integrate EAGLE-I outage records from 2014 to 2024 with weather, socioeconomic, infrastructure, and seasonal event data. Incorporating social and demographic indicators reveals patterns of community vulnerability and improves understanding of outage risk during extreme conditions. Four machine learning models are evaluated, including Random Forest (RF), Graph Neural Network (GNN), Adaptive Boosting (AdaBoost), and Long Short-Term Memory (LSTM). Experimental validation is performed on a large-scale dataset covering counties in the lower peninsula of Michigan. Among all models tested, the LSTM network achieves the highest accuracy.
Authors: Pujitha Mamillapalli, Shikhar Verma, Tiago Koketsu Rodrigues, Abhinav Kumar
Future 6G networks envision ubiquitous connectivity through space-air-ground integrated networks (SAGINs), where high-altitude platform stations (HAPSs) and satellites complement terrestrial systems to provide wide-area, low-latency coverage. However, the rapid growth of terrestrial devices intensifies spectrum sharing between terrestrial and non-terrestrial segments, resulting in severe cross-tier interference. In particular, frequency sharing between the HAPS satellite uplink and HAPS ground downlink improves spectrum efficiency but suffers from interference caused by the HAPS antenna back-lobe. Existing approaches relying on zero-forcing (ZF) codebooks have limited performance under highly dynamic channel conditions. To overcome this limitation, we employ a reconfigurable intelligent surface (RIS)-aided HAPS-based SAGIN framework with a deep deterministic policy gradient (DDPG) algorithm. The proposed DDPG framework optimizes the HAPS beamforming weights to form spatial nulls toward interference sources while maintaining robust links to the desired signals. Simulation results demonstrate that the DDPG framework consistently outperforms conventional ZF beamforming among different RIS configurations, achieving up to 11.3% throughput improvement for a 4×4 RIS configuration, validating its adaptive capability to enhance spectral efficiency in dynamic HAPS-based SAGINs.
Authors: Siminfar Samakoush Galougah
Fifth-generation (5G) wireless networks introduce new architectural paradigms, spectrum usage models, and optimization challenges to support enhanced mobile broadband, massive machine-type communications, and ultra-reliable low-latency communications. This survey provides a comprehensive overview of key technologies and design challenges in 5G systems, with a focus on spectrum coexistence and interference management, network dimensioning and planning, cell-free massive MIMO architectures, fronthaul-aware user management, and power allocation strategies. Representative analytical, simulation-based, and optimization-driven approaches are reviewed, fundamental trade-offs are highlighted, and open research challenges relevant to 5G-Advanced and beyond are identified.
Authors: Arslan Ahmad, Ian Dobson
The impact of routine smaller outages on distribution system customers in terms of customer minutes interrupted can be tracked using conventional reliability indices. However, the customer minutes interrupted in large blackout events are extremely variable, and this makes it difficult to quantify the customer impact of these extreme events with resilience metrics. We solve this problem with the System Average Large Event Duration Index (SALEDI), which logarithmically transforms the customer minutes interrupted. We explain how this new resilience metric works, compare it with alternatives, quantify its statistical accuracy, and illustrate its practical use with standard outage data from five utilities.
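The rationale for a logarithmic transform can be demonstrated numerically: for heavy-tailed event sizes, the mean of log-transformed customer minutes is far more statistically stable than the raw mean. The lognormal model and parameters below are our illustrative assumption, not the utilities' data or the exact SALEDI formula:

```python
import numpy as np

rng = np.random.default_rng(7)
# 1000 resamples of 50 heavy-tailed "customer minutes interrupted" large events.
events = rng.lognormal(mean=12.0, sigma=2.5, size=(1000, 50))

raw_means = events.mean(axis=1)             # conventional-index style average
log_means = np.log10(events).mean(axis=1)   # log-transformed average

cv_raw = raw_means.std() / raw_means.mean()           # highly variable
cv_log = log_means.std() / abs(log_means.mean())      # statistically stable
```

The coefficient of variation of the raw mean is dominated by a few extreme events, while the log-domain index varies by only a few percent across resamples, which is precisely the statistical-accuracy property the abstract emphasizes.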
Authors: Antonio Alcántara, Spyros Chatzivasileiadis
This work introduces for the first time, to our knowledge, a trustworthiness layer for foundation models in power systems. Using stratified conformal prediction, we devise adaptive, statistically valid confidence bounds for each output of a foundation model. For regression, this allows users to obtain an uncertainty estimate for each output; for screening, it supports conservative decisions that minimize false negatives. We demonstrate our method by enhancing GridFM, the first open-source Foundation Model for power systems, with statistically valid prediction intervals instead of heuristic error margins. We apply it for N-k contingency assessment, a combinatorial NP-Hard problem. We show that trustworthy GridFM can offer richer and more accurate information than DC Power Flow, having 2x-3x higher precision, while running up to 18x faster than AC Power Flow for systems up to 118 buses. Moving a step further, we also examine the ability of trustworthy GridFM to generalize to unseen high-order contingencies: through a rigorous analysis, we assess how a model trained on N-1 or N-2 outages extrapolates to unseen contingencies up to N-5.
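The trustworthiness layer rests on the split-conformal quantile rule; the minimal single-stratum sketch below uses synthetic data and our own variable names (the stratified version applies the same rule separately within each stratum):

```python
import numpy as np

def conformal_halfwidth(cal_err, alpha=0.1):
    """Split-conformal interval half-width: the ceil((n+1)(1-alpha))-th smallest
    absolute calibration residual guarantees >= 1-alpha marginal coverage."""
    n = len(cal_err)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(np.abs(cal_err))[min(k, n) - 1]

rng = np.random.default_rng(3)
truth = rng.normal(size=5000)
pred = truth + 0.3 * rng.standard_normal(5000)   # surrogate with unknown error
q = conformal_halfwidth(pred[:2000] - truth[:2000], alpha=0.1)
covered = np.mean(np.abs(pred[2000:] - truth[2000:]) <= q)  # approx 0.9
```

The guarantee is distribution-free: no assumption on the foundation model's error distribution is needed, only exchangeability of calibration and test points, which is what makes the layer attachable to a pre-trained GridFM.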
Authors: Katharina Kaiser, Gustavo Valverde, Gabriela Hug
Thermostatically controlled loads and electric vehicles offer flexibility to reduce power peaks in low-voltage distribution networks. This flexibility can be maximized if the devices are coordinated centrally, given some level of information about the controlled devices. In this paper, we propose novel optimization-based control schemes with prediction capabilities that utilize limited information from heat pumps, electric water heaters, and electric vehicles. The objective is to flatten the total load curve seen by the distribution transformer by restricting the times at which the available flexible loads are allowed to operate, subject to the flexibility constraints of the loads to preserve customers' comfort. The original scheme was tested in a real-world setup, considering both winter and summer days. The pilot results confirmed the technical feasibility but also informed the design of an improved version of the controller. Computer simulations using the adjusted controller show that, compared to the original formulation, the improved scheme achieves greater peak reductions in summer. Additionally, comparisons were made with an ideal controller, which assumes perfect knowledge of the inflexible load profile, the models of the controlled devices, the hot water and space heating demand, and future electric vehicle charging sessions. The proposed scheme with limited information achieves almost half of the potential average daily peak reduction that the ideal controller with perfect knowledge would achieve.
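A simplified flavor of the load-flattening objective (not the paper's optimization-based scheme with prediction and comfort constraints): greedily water-fill a flexible energy budget, subject to a per-period power cap, into the valleys of the inflexible load seen by the transformer. The function name, quantum granularity, and toy profile are our own assumptions:

```python
import numpy as np

def flatten(base, energy, cap, quantum=0.01):
    """Greedy water-filling: repeatedly add `quantum` of flexible energy to the
    currently lowest-loaded period whose per-period cap is not yet exhausted.
    Returns the resulting total load profile (granularity = `quantum`)."""
    load = np.asarray(base, dtype=float).copy()
    added = np.zeros_like(load)
    for _ in range(int(round(energy / quantum))):
        idx = int(np.argmin(np.where(added < cap, load, np.inf)))
        load[idx] += quantum
        added[idx] += quantum
    return load
```

For an inflexible profile [1, 3, 2, 5] with 3 units of flexible energy and a cap of 2 per period, the schedule fills the valleys to a common level of 3 while leaving the untouched peak period at 5; the paper's controllers pursue the same shape under limited device information.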
Authors: Yuang Chen, Fengqian Guo, Chang Wu, Mingyu Peng, Hancheng Lu, Chang Wen Chen
The rapidly expanding, ubiquitous Internet of Things (IoT) landscape struggles to meet ultra-low latency demands for computation-intensive tasks in massive connectivity scenarios. In this paper, we propose an innovative uplink non-orthogonal multiple access (NOMA)-assisted multi-base station (BS) mobile edge computing (BS-MEC) network tailored for massive IoT connectivity. To fulfill the quality-of-service (QoS) requirements of delay-sensitive and computation-intensive IoT applications, we formulate a joint task offloading, user grouping, and power allocation optimization problem with the overarching objective of minimizing the system's total delay, aiming to address issues of unbalanced subchannel access, inter-group interference, computational load disparities, and device heterogeneity. To effectively tackle this problem, we first reformulate task offloading and user grouping as a non-cooperative game and propose an exact potential game-based joint decision-making (EPG-JDM) algorithm, which dynamically selects optimal task offloading and subchannel access decisions for each IoT device based on its channel conditions, thereby achieving the Nash equilibrium. Then, we propose a majorization-minimization (MM)-based power allocation algorithm, which transforms the original subproblem into a tractable convex optimization problem. Extensive simulation experiments demonstrate that our proposed EPG-JDM algorithm significantly outperforms state-of-the-art decision-making algorithms and classic heuristic algorithms, yielding improvements of up to 19.3% and 14.7% in total delay and power consumption, respectively.
Authors: Nan Gu, Junjie Qin
Electric vehicle charging and geo-distributed datacenters introduce spatially flexible loads (FLs) that couple power, transportation, and datacenter networks. These couplings create a closed-loop feedback between locational marginal prices (LMPs) and decisions of the FL systems, challenging the foundations of conventional competitive equilibrium (CE) in electricity markets. This paper studies a notion of generalized competitive equilibrium (GCE) that aims to capture such price-demand interactions across the interconnected infrastructures. We establish structural conditions under which the GCE preserves key properties of the conventional CE, including existence, uniqueness, and efficiency, without requiring detailed knowledge of decision processes for individual FL systems. The framework generalizes to settings where the grid is coupled with multiple FL systems. Stylized examples and case studies on the New York ISO grid, coupled with the Sioux Falls transportation and distributed datacenter networks, demonstrate the use of our theoretical framework and illustrate the mutual influence among the grid and the studied FL systems.
Authors: Amritha Premkumar, Christian Herglotz
Conventional video encoders typically employ a fixed chroma subsampling format, such as YUV420, which may not optimally reflect variations in chroma detail across different types of content. This can lead to suboptimal chroma quality and inefficiencies in bitrate allocation. We propose an Adaptive Resolution-Chroma Subsampling (ARCS) framework that jointly optimizes spatial resolution and chroma subsampling to balance perceptual quality and decoding efficiency. ARCS selects an optimal (resolution, chroma format) pair for each bitrate by maximizing a composite quality-complexity objective, while enforcing monotonicity constraints to ensure smooth transitions between representations. Experimental results using x265 show that, compared to a fixed-format encoding (YUV444), on average, ARCS achieves a 13.48 % bitrate savings and a 62.18 % reduction in decoding time, which we use as a proxy for the decoding energy, to yield the same colorVideoVDP score. The proposed framework introduces chroma adaptivity as a new control dimension for energy-efficient video streaming.
Authors: Kejun Chen, Bernard Knueven, Wesley Jones
This paper proposes a hard-constrained unsupervised learning framework for rapidly solving the non-linear and non-convex AC optimal power flow (AC-OPF) problem in real-time operation. Without requiring ground-truth AC-OPF solutions, feasibility and optimality are ensured through a properly designed learning environment and training loss. Inspired by residual learning, the neural network (NN) learns the correction mapping from the DC-OPF solution to the active power setpoints of the generators through re-dispatch. A subsequent optimization model is utilized to restore the optimal AC-OPF solution, and the resulting projection difference is employed as the training loss. A replay buffer is utilized to enhance learning efficiency by fully leveraging past data pairs. The optimization model is cast as a differentiable optimization layer, where the gradient is derived by applying the implicit function theorem to the KKT conditions at the optimal solution. Tested on IEEE-118 and PEGASE-9241 bus systems, numerical results demonstrate that the proposed NN can obtain strictly feasible and near-optimal solutions with reduced computational time compared to conventional optimization solvers. In addition, aided by the updated DC-OPF solution under varying topologies, the trained NN, together with the PF solver, can rapidly find the corresponding AC solution. The proposed method achieves a $40\times$ time speedup, while maintaining an average constraint violation on the order of $10^{-4}$ and an optimization gap below $1\%$.
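The differentiable-layer trick rests on the implicit function theorem: a solution defined only implicitly by an optimality condition can still be differentiated. A one-dimensional toy version (our illustration, not the authors' KKT system) makes the mechanics concrete:

```python
def solve(theta, iters=60):
    """Newton's method on the optimality condition g(y) = y^3 + y - theta = 0,
    which has a unique root since g is strictly increasing."""
    y = 0.0
    for _ in range(iters):
        y -= (y**3 + y - theta) / (3 * y**2 + 1)
    return y

def implicit_grad(theta):
    """Implicit function theorem at the root:
    dy/dtheta = -(dg/dtheta) / (dg/dy) = 1 / (3*y^2 + 1)."""
    y = solve(theta)
    return 1.0 / (3 * y**2 + 1)

# sanity check against central finite differences
eps = 1e-6
fd = (solve(2.0 + eps) - solve(2.0 - eps)) / (2 * eps)
assert abs(implicit_grad(2.0) - fd) < 1e-6
```

The same idea, applied to the full KKT system of the projection problem, yields the gradient of the projection layer without unrolling the solver.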
Authors: Venkata Rajesh Chundru, Shreshta Rajakumar Deshpande, Stanislav A Gankov
The existing literature on Battery Energy Storage Systems (BESS) predominantly focuses on two main areas: control system design aimed at achieving grid stability and the techno-economic analysis of BESS dispatch on the power grid. However, with the increasing incorporation of ancillary services into power grids, a more comprehensive approach to energy management systems is required. Such an approach should not only optimize revenue generation from BESS but also ensure the safe, efficient, and reliable operation of lithium-ion batteries. This research seeks to bridge this gap by exploring literature that addresses both the economic and operational dimensions of BESS. Specifically, it examines how economic aspects of grid duty cycles can align with control schemes deployed in BESS systems. This alignment, or synergy, could be instrumental in creating robust digital twins, virtual representations of BESS systems that enhance both grid stability and revenue potential. The literature review is organized into five key categories: (1) ancillary services for BESS, exploring support functions that BESS can provide to power grids; (2) control systems developed for real-time BESS power flow management, ensuring smooth operations under dynamic grid conditions; (3) optimization algorithms for BESS dispatch, focusing on efficient energy allocation strategies; (4) techno-economic analyses of BESS and battery systems to assess their financial viability; and (5) digital twin technologies for real-world BESS deployments, enabling advanced predictive maintenance and performance optimization. This review will identify potential synergies, research gaps, and emerging trends, paving the way for future innovations in BESS management and deployment strategies.
Authors: Andrey Churkin, Wangwei Kong, Pierluigi Mancarella, Eduardo A. Martínez Ceseña
The increasing integration of distributed energy resources (DER) offers new opportunities for distribution system operators (DSO) to improve network operation through flexibility services. To utilise flexible resources, various DER flexibility aggregation methods have been proposed, such as the concept of aggregated P-Q flexibility areas. Yet, many existing studies assume perfect coordination among DER and rely on single-phase power flow analysis, thus overlooking barriers to flexibility aggregation in real unbalanced systems. To quantify the impact of these barriers, this paper proposes a three-phase optimal power flow (OPF) framework for P-Q flexibility assessment, implemented as an open-source Julia tool (this http URL). The framework explicitly accounts for voltage unbalance and imperfect coordination among DER in low voltage (LV) distribution networks. Simulations on an illustrative 5-bus system and a real 221-bus LV network in the UK reveal that over 30% of the theoretical aggregated flexibility potential can be lost due to phase unbalance and lack of coordination across phases. These findings highlight the need for improved flexibility aggregation tools applicable to real unbalanced distribution networks.
Authors: Ahmed S. Alahmed, Audun Botterud, Saurabh Amin, Ali T. Al-Awami
We develop a mathematical framework for the optimal scheduling of flexible water desalination plants (WDPs) as hybrid generator-load resources. WDPs integrate thermal generation, membrane-based controllable loads, and renewable energy sources, offering unique operational flexibility for power system operations. They can simultaneously participate in two markets: selling desalinated water to a water utility, and bidirectionally transacting electricity with the grid based on their net electricity demand. We formulate the scheduling decision problem of a profit-maximizing WDP, capturing operational, technological, and market-based coupling between water and electricity flows. The threshold-based structure we derive provides computationally tractable coordination suitable for large-scale deployment, offering operational insights into how thermal generation and membrane-based loads complementarily provide continuous bidirectional flexibility. The thresholds are analytically characterized in closed form as explicit functions of technology and tariff parameters. We examine how small changes in the exogenous tariff and technology parameters affect the WDP's profit. Extensive simulations illustrate the optimal WDP's operation, profit, and water-electricity exchange, demonstrating significant improvements relative to benchmark algorithms.
Authors: Chao Shen, Zihan Guo, Xu Wan, Zhenghao Yang, Yifan Zhang, Wengi Huang, Jie Song, Zongyan Zhang, Mingyang Sun
Growing renewable penetration introduces substantial uncertainty into power system operations, necessitating frequent adaptation of dispatch objectives and constraints and challenging expertise-intensive, near-real-time modeling workflows. Large Language Models (LLMs) provide a promising avenue for automating this process by translating natural-language (NL) operational requirements into executable optimization models via semantic reasoning and code synthesis. Yet existing LLM datasets and benchmarks for optimization modeling primarily target coarse-grained cross-domain generalization, offering limited, rigorous evaluation in power-system settings, particularly for Optimal Power Flow (OPF). We therefore introduce \textbf{ProOPF-D} and \textbf{ProOPF-B}, a dataset and benchmark for professional-grade OPF modeling: ProOPF-D contains 12K instances pairing NL requests with parameter adjustments and structural extensions to a canonical OPF, together with executable implementations; ProOPF-B provides 121 expert-annotated test cases with ground-truth code, enabling end-to-end evaluation under both concrete and abstract OPF modeling regimes.
Authors: Zhirui Liang, Jae-Won Chung, Mosharaf Chowdhury, Jiasi Chen, Vladimir Dvorkin
While the rapid expansion of data centers poses challenges for power grids, it also offers new opportunities as potentially flexible loads. Existing power system research often abstracts data centers as aggregate resources, while computer system research primarily focuses on optimizing GPU energy efficiency and largely ignores the grid impacts of optimized GPU power consumption. To bridge this gap, we develop a GPU-to-Grid framework that couples device-level GPU control with power system objectives. We study distribution-level voltage regulation enabled by flexibility in LLM inference, using batch size as a control knob that trades off the voltage impacts of GPU power consumption against inference latency and token throughput. We first formulate this problem as an optimization problem and then realize it as an online feedback optimization controller that leverages measurements from both the power grid and GPU systems. Our key insight is that reducing GPU power consumption alleviates violations of lower voltage limits, while increasing GPU power mitigates violations near upper voltage limits in distribution systems; this runs counter to the common belief that minimizing GPU power consumption is always beneficial to power grids.
Authors: Le Minh Triet Tran (IMT Atlantique, LaTIM), Sarah Reynaud (IMT Atlantique, LaTIM), Ronan Fablet (IMT Atlantique, Lab-STICC), Adrien Merlini (IMT Atlantique, Lab-STICC), François Rousseau (IMT Atlantique, LaTIM), Mai Quyen Pham (IMT Atlantique, Lab-STICC)
Inverse problems are often ill-posed and require optimization schemes with strong stability and convergence guarantees. While learning-based approaches such as deep unrolling and meta-learning achieve strong empirical performance, they typically lack explicit control over descent and curvature, limiting robustness. We propose a learned Majorization-Minimization (MM) framework for inverse problems within a bilevel optimization setting. Instead of learning a full optimizer, we learn a structured curvature majorant that governs each MM step while preserving classical MM descent guarantees. The majorant is parameterized by a lightweight recurrent neural network and explicitly constrained to satisfy valid MM conditions. For cosine-similarity losses, we derive explicit curvature bounds yielding diagonal majorants. When analytic bounds are unavailable, we rely on efficient Hessian-vector product-based spectral estimation to automatically upper-bound local curvature without forming the Hessian explicitly. Experiments on EEG source imaging demonstrate improved accuracy, stability, and cross-dataset generalization over deep-unrolled and meta-learning baselines.
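The descent guarantee that the learned majorant must preserve is the classic MM one: if the quadratic surrogate upper-bounds the objective, each surrogate minimization cannot increase the loss. A scalar toy example with a hand-derived curvature bound (our illustration, not the learned majorant):

```python
import math

def mm_minimize(f, grad, L, x0, iters=100):
    """MM with a quadratic majorant: g_k(x) = f(x_k) + grad(x_k)(x - x_k)
    + (L/2)(x - x_k)^2 upper-bounds f whenever L >= sup f''. Minimizing
    g_k gives the update x_{k+1} = x_k - grad(x_k)/L, so f never increases."""
    x = x0
    for _ in range(iters):
        x_new = x - grad(x) / L
        assert f(x_new) <= f(x) + 1e-12   # the MM descent guarantee
        x = x_new
    return x

# toy objective with curvature f''(x) = 2 - cos(x), bounded by L = 3
f = lambda x: x * x + math.cos(x)
grad = lambda x: 2.0 * x - math.sin(x)
x_star = mm_minimize(f, grad, L=3.0, x0=1.5)  # unique minimizer is x = 0
```

In the paper's setting the diagonal bound L is learned (or spectrally estimated) rather than derived by hand, but it must satisfy the same majorization condition for the descent assertion to hold.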
Authors: Yanchao Jiang, Pierluigi Poggiolini
A polynomial closed-form GN model is proposed by expressing the spatial power profile of each channel along a span as a polynomial. In this paper, we present the generic closed-form expression for all contributions of self-, cross-, and multi-channel interference. The full derivation is provided.
Authors: Meng Yuan, Ye Wang, Xinghuo Yu, Torsten Wik, Changfu Zou
The widespread adoption of photovoltaic (PV), electric vehicles (EVs), and stationary energy storage systems (ESS) in households increases system complexity while simultaneously offering new opportunities for energy regulation. However, effectively coordinating these resources under uncertainties remains challenging. This paper proposes a novel home energy management framework based on deep reinforcement learning (DRL) that can jointly minimise energy expenditure and battery degradation while guaranteeing occupant comfort and EV charging requirements. Distinct from existing studies, we explicitly account for the heterogeneous degradation characteristics of stationary and EV batteries in the optimisation, alongside stochastic user behaviour regarding arrival time, departure time, and driving distance. The energy scheduling problem is formulated as a constrained Markov decision process (CMDP) and solved using a Lagrangian soft actor-critic (SAC) algorithm. This approach enables the agent to learn optimal control policies that enforce physical constraints, including indoor temperature bounds and target EV state of charge upon departure, despite stochastic uncertainties. Numerical simulations over a one-year horizon demonstrate the effectiveness of the proposed framework in satisfying physical constraints while eliminating thermal oscillations and achieving significant economic benefits. Specifically, the method reduces the cumulative operating cost substantially compared to two standard rule-based baselines while simultaneously decreasing battery degradation costs by 8.44%.
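The Lagrangian machinery behind constrained SAC can be seen in a deterministic toy problem: the primal step minimizes cost plus a multiplier-weighted constraint, and the multiplier grows while the constraint is violated. The problem and step size below are illustrative, not the paper's CMDP:

```python
def lagrangian_solve(eta=0.5, iters=2000):
    """Dual ascent on: min (a - 2)^2  s.t.  a <= 1.
    Same primal-dual pattern as Lagrangian SAC: the primal step (here a
    closed-form argmin) minimizes cost + lambda * constraint, while the
    multiplier lambda rises whenever the constraint is violated."""
    lam = 0.0
    for _ in range(iters):
        a = 2.0 - lam / 2.0                        # argmin_a (a-2)^2 + lam*(a-1)
        lam = max(0.0, lam + eta * (a - 1.0))      # dual (multiplier) update
    return a, lam

a, lam = lagrangian_solve()   # converges to the constrained optimum a = 1
```

In the actual algorithm the argmin is replaced by a SAC policy-gradient step and the constraint by an expected cost over trajectories, but the multiplier update is the same.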
Authors: Erick Silva, Tadeu Freitas, Rehana Yasmin, Ali Shoker, Paulo Esteves-Verissimo
A notable challenge in Electric Vehicle (EV) charging is the time required to fully charge the battery, which can range from 15 minutes to 2-3 hours. This idle period, however, presents an opportunity to offer time-consuming or data-intensive services such as vehicular software updates. ISO 15118 referred to the concept of Value-Added Services (VAS) in the charging scenario, but it remained underexplored in the literature. Our paper addresses this gap by proposing \acronym, the first EV charger compute architecture that supports secure on-charger universal applications with upstream and downstream communication. The architecture covers the end-to-end hardware/software stack, including standard API for vehicles and IT infrastructure. We demonstrate the feasibility and advantages of \acronym by employing and evaluating three suggested value-added services: vehicular software updates, security information and event management (SIEM), and secure payments. The results demonstrate significant reductions in bandwidth utilization and latency, as well as high throughput, which supports this novel concept and suggests a promising business model for Electric Vehicle charging station operation.
Authors: Lianzhe Hu, Yu Wang, Bikash Pal
This paper presents an LLM-driven, end-to-end workflow that addresses the lack of automation and intelligence in power system transient stability assessment (TSA). The proposed agentic framework integrates large language models (LLMs) with a professional simulator (ANDES) to automatically generate and filter disturbance scenarios from natural language, and employs an LLM-driven Neural Network Design (LLM-NND) pipeline to autonomously design and optimize TSA models through performance-guided, closed-loop feedback. On the IEEE 39-bus system, the LLM-NND models achieve 93.71% test accuracy on four-class TSA with only 4.78M parameters, while maintaining real-time inference latency (less than 0.95 ms per sample). Compared with a manually designed DenseNet (25.9M parameters, 80.05% accuracy), the proposed approach jointly improves accuracy and efficiency. Ablation studies confirm that the synergy among domain-grounded retrieval, reasoning augmentation, and feedback mechanisms is essential for robust automation. The results demonstrate that LLM agents can reliably accelerate TSA research from scenario generation and data acquisition to model design and interpretation, offering a scalable paradigm that is readily extensible to other power system tasks such as optimal power flow, fault analysis, and market operations.
Authors: Jincheng Xie, Yili Deng, Jiguang He, Pengyu Wang, Miaomiao Dong, Rui Tang, Zhongyi Huang
Conventional direction-of-arrival (DoA) estimation methods rely on multi-antenna arrays, which are costly to implement on size-constrained Bluetooth Low Energy (BLE) devices. Virtual antenna array (VAA) techniques enable DoA estimation with a single antenna, making angle estimation feasible on such devices. However, BLE only provides a single-shot two-way channel frequency response (CFR) with a binary phase ambiguity issue, which hinders the direct application of VAA. To address this challenge, we propose a unified model that combines VAA with BLE two-way CFR, and introduce a neural-network-based phase recovery framework that employs row/column predictors with a voting mechanism to resolve the ambiguity. The recovered one-way CFR then enables super-resolution algorithms such as MUSIC for joint time of arrival (ToA) and DoA estimation. Simulation results demonstrate that the proposed method achieves superior performance under non-uniform VAAs, with mean square errors approaching the Cramér-Rao bound at SNR $\geq$ 5 dB.
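Once a one-way CFR is recovered, MUSIC-style subspace estimation is standard. A generic narrowband half-wavelength ULA sketch in NumPy (not the paper's BLE/VAA pipeline) shows the noise-subspace projection at its core:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=3601):
    """MUSIC on a half-wavelength uniform linear array: steering vectors
    orthogonal to the noise subspace of the sample covariance give
    pseudospectrum peaks at the source directions."""
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    _, vecs = np.linalg.eigh(R)                      # eigenvalues ascending
    En = vecs[:, : X.shape[0] - n_sources]           # noise subspace
    angles = np.linspace(-90.0, 90.0, n_grid)
    m = np.arange(X.shape[0])[:, None]
    A = np.exp(1j * np.pi * m * np.sin(np.deg2rad(angles)))   # steering matrix
    P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2    # pseudospectrum
    return angles[np.argmax(P)]

# synthetic single source at 20 degrees, 8 elements, 200 snapshots
rng = np.random.default_rng(1)
m = np.arange(8)
a = np.exp(1j * np.pi * m * np.sin(np.deg2rad(20.0)))
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.01 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
est = music_doa(np.outer(a, s) + noise, n_sources=1)
```

For non-uniform VAAs the steering matrix is built from the actual (irregular) element positions; the subspace projection itself is unchanged.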
Authors: Qinan Zhou, Jing Sun
Estimating cell-to-cell variation (CtCV) and state of health (SoH) for battery modules with parallel-connected cells is challenging when only module-level signals are measurable and individual cell behaviors remain unobserved. Although progress has been made in SoH estimation, CtCV estimation remains unresolved in the literature. This paper proposes a unified framework that accurately estimates both CtCV and SoH for modules using only module-level information extracted from incremental capacity analysis (ICA) and differential voltage analysis (DVA). With the proposed framework, CtCV and SoH estimations can be decoupled into two separate tasks, allowing each to be solved with dedicated algorithms without mutual interference and providing greater design flexibility. The framework also exhibits strong versatility in accommodating different CtCV metrics, highlighting its general-purpose nature. Experimental validation on modules with three parallel-connected cells demonstrates that the proposed framework can systematically select optimal module-level features for CtCV and SoH estimations, deliver accurate CtCV and SoH estimates with high confidence and low computational complexity, remain effective across different C-rates, and be suitable for onboard implementation.
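The module-level features come from IC/DV analysis, i.e., differentiating the charge curve. A minimal ICA sketch on a synthetic sigmoidal charge curve (our illustration; real measured curves need smoothing before differentiation):

```python
import numpy as np

def incremental_capacity(voltage, capacity):
    """Incremental capacity analysis (ICA): dQ/dV along the charge curve.
    Peaks in dQ/dV mark voltage plateaus and shift/shrink as cells age."""
    return np.gradient(capacity, voltage)

# synthetic charge curve: capacity rises steeply around a plateau at 3.6 V
v = np.linspace(3.0, 4.2, 601)
q = 2.0 / (1.0 + np.exp(-(v - 3.6) / 0.05))   # Ah, sigmoidal for illustration
ic = incremental_capacity(v, q)
v_peak = v[np.argmax(ic)]                     # peak location ~ 3.6 V
```

Peak height, location, and area are the kind of module-level features from which CtCV and SoH estimators can then be built.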
Authors: Shashank Shekhar, Abhinav Karn, Kris Keshav, Shivam Bansal, Parikshit Pareek
Generating large-scale, physically consistent AC Optimal Power Flow (ACOPF) datasets is essential for modern data-driven power system applications. The central challenge lies in balancing solution accuracy with computational efficiency. Recent diffusion-based generative models produce high-quality samples; however, their slow sampling procedures limit practical scalability. In this work, we argue that exact physical feasibility is ultimately enforced by power flow solvers or projection steps, and therefore the generative model only needs to produce good initializations rather than perfectly feasible solutions. Based on this insight, we propose a fast diffusion framework using Denoising Diffusion Implicit Models (DDIM) combined with physics-guided corrections during sampling. The proposed method replaces slow stochastic refinement with a small number of deterministic steps and explicit constraint guidance. Experiments on IEEE 6-, 24-, and 118-bus systems show that our approach achieves up to 20 times faster sampling than standard diffusion models while maintaining comparable statistical accuracy and physical consistency. This makes the method well suited for scalable OPF dataset generation and practical power system learning tasks. We release the implementation code at this https URL.
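The speedup comes from DDIM's deterministic update, which predicts x0 from the current noise estimate and re-noises to an earlier timestep without injecting fresh randomness. A scalar sketch of one step (the paper's physics-guided corrections are omitted):

```python
import math

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0): predict x0 from the noise
    estimate, then re-noise to the earlier timestep abar_prev."""
    x0_pred = (x_t - math.sqrt(1 - abar_t) * eps_pred) / math.sqrt(abar_t)
    return math.sqrt(abar_prev) * x0_pred + math.sqrt(1 - abar_prev) * eps_pred

# sanity check: with the exact noise, jumping straight to abar = 1 recovers x0
x0, eps, abar_t = 0.7, -1.3, 0.4
x_t = math.sqrt(abar_t) * x0 + math.sqrt(1 - abar_t) * eps
recovered = ddim_step(x_t, eps, abar_t, 1.0)
```

Because the update is deterministic, a handful of large jumps can replace hundreds of stochastic denoising steps, which is what makes fast dataset generation viable.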
Authors: Nicola Ramseyer, Matthieu Jacobs, Mario Paolone
This paper presents an approach for the modelling of dependent random variables using generalised polynomial chaos. This makes it possible to formulate chance-constrained optimization problems with respect to a joint distribution that models dependencies between different stochastic inputs. Arbitrary dependencies are modelled using Gaussian copulas to construct the joint distribution. The paper exploits the problem structure and develops suitable transformations to ensure tractability. The proposed method is applied to a probabilistic power reserve procurement problem. The effectiveness of the method in capturing dependencies is shown by comparison with a standard approach that assumes independent random variables.
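The Gaussian-copula construction is mechanically simple: correlate standard normals, map them to uniforms with the normal CDF, then apply each marginal's inverse CDF. A two-variable sketch with illustrative marginals of our choosing (exponential and uniform):

```python
import math
import numpy as np

def gaussian_copula_sample(rho, n, inv_cdfs, seed=0):
    """Gaussian copula: correlated N(0,1) draws -> uniforms via the
    standard normal CDF -> arbitrary marginals via their inverse CDFs."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    z = rng.standard_normal((n, 2)) @ L.T                          # correlated normals
    u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))   # normal CDF
    return np.column_stack([f(u[:, j]) for j, f in enumerate(inv_cdfs)])

# illustrative marginals: Exponential(1) and Uniform(0, 10)
x = gaussian_copula_sample(
    0.8, 20000, [lambda u: -np.log(1.0 - u), lambda u: 10.0 * u]
)
```

The marginals are preserved exactly while the rank dependence is inherited from the underlying Gaussian correlation, which is the property the chance-constrained formulation relies on.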
Authors: Manuel Treutlein, Pascal Bothe, Marc Schmidt, Roman Hahn, Oliver Neumann, Ralf Mikut, Veit Hagenmeyer
The last mile of the distribution grid is crucial for a successful energy transition, as more low-carbon technologies such as photovoltaic systems, heat pumps, and electric vehicle chargers connect to the low-voltage grid. Despite considerable challenges in operation and planning, researchers often lack access to suitable low-voltage grid data. To address this, we present the FeederBW dataset with data recorded by the German distribution system operator Netze BW. It offers real-world energy data from 200 low-voltage feeders over two years (2023-2025) with weather information and detailed metadata, including changes in low-carbon technology installations. The dataset includes feeder-specific details such as the number of housing units, installed power of low-carbon technology, and aggregated industrial energy data. Furthermore, high photovoltaic feed-in and one-minute temporal resolution make the dataset unique. FeederBW supports various applications, including load forecasting with machine learning, non-intrusive load monitoring, synthetic data generation, and analysis of the interplay between weather, feeder measurements, and metadata. The dataset reveals insightful patterns and clearly reflects the growing impact of low-carbon technology on low-voltage grids.
Authors: Tingwei Cao, Yan Xu
High penetration of renewable energy sources (RES) introduces significant uncertainty and intermittency into microgrid operations, posing challenges to economic and reliable scheduling. To address this, this paper proposes an end-to-end decision-focused framework that jointly optimizes probabilistic forecasting and robust operation for microgrids. A multilayer encoder-decoder (MED) probabilistic forecasting model is integrated with a two-stage robust optimization (TSRO) model involving direct load control (DLC) through a differentiable decision pathway, enabling gradient-based feedback from operational outcomes to improve forecasting performance. Unlike conventional sequential approaches, the proposed method aligns forecasting accuracy with operational objectives by directly minimizing decision regret via a surrogate smart predict-then-optimize (SPO) loss function. This integration ensures that probabilistic forecasts are optimized for downstream decisions, enhancing both economic efficiency and robustness. Case studies on modified IEEE 33-bus and 69-bus systems demonstrate that the proposed framework achieves superior forecasting accuracy and operational performance, reducing total and net operation costs by up to 18% compared with conventional forecasting and optimization combinations. The results verify the effectiveness and scalability of the end-to-end decision-focused approach for resilient and cost-efficient microgrid management under uncertainty.
Authors: Lasse Kötz, Jonas Sjöberg, Knut Åkesson
We present a falsification framework that integrates learned surrogate dynamics with optimal control to efficiently generate counterexamples for cyber-physical systems specified in signal temporal logic (STL). The unknown system dynamics are identified using neural ODEs, while known a-priori structure is embedded directly into the model, reducing data requirements. The learned neural ODE is converted into an analytical form via symbolic regression, enabling fast and interpretable trajectory optimization. Falsification is cast as minimizing STL robustness over input trajectories; negative robustness yields candidate counterexamples, which are validated on the original system. Spurious traces are iteratively used to refine the surrogate, while true counterexamples are returned as final results. Experiments on ARCH-COMP 2024 benchmarks show that this method requires orders of magnitude fewer experiments of the system under test than optimization-based approaches that do not model system dynamics.
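Quantitative STL semantics reduce, for the basic operators, to min/max over predicate margins: negative robustness is exactly a counterexample. A minimal sketch for G (always) and F (eventually) over a discrete trace:

```python
def rob_pred(trace, c):
    """Per-step robustness of the predicate x > c: the margin x_t - c."""
    return [x - c for x in trace]

def rob_always(r):       # G: robustness is the worst margin in the window
    return min(r)

def rob_eventually(r):   # F: robustness is the best margin in the window
    return max(r)

# spec G(x > 0): negative robustness pinpoints a counterexample trace
trace = [0.5, 0.2, -0.1, 0.4]
rho = rob_always(rob_pred(trace, 0.0))   # violated at t = 2
```

Casting falsification as minimizing this robustness over input trajectories is what turns counterexample search into a trajectory-optimization problem on the surrogate.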
Authors: Muddasir Rahim, Soumaya Cherkaoui
The rapid growth of Internet of Things (IoT) applications necessitates robust resource allocation in future sixth-generation (6G) networks, particularly at the upper mid-band (7-15 GHz, FR3). This paper presents a novel intelligent reconfigurable surface (IRS)-assisted framework combining terrestrial IRS (TIRS) and aerial IRS (AIRS) mounted on low-altitude platform stations, to ensure reliable connectivity under severe line-of-sight (LoS) blockages. Distinguishing itself from prior work restricted to terrestrial IRS and mmWave and THz bands, this work targets the FR3 spectrum, the so-called Golden Band for 6G. The joint beamforming and user association (JBUA) problem is formulated as a mixed-integer nonlinear program (MINLP), solved through problem decomposition, zero-forcing beamforming, and a stable matching algorithm. Comprehensive simulations show our method approaches exhaustive search performance with significantly lower complexity, outperforming existing greedy and random baselines. These results provide a scalable blueprint for real-world 6G deployments, supporting massive IoT connectivity in challenging environments.
Authors: Muddasir Rahim, Soumaya Cherkaoui
The increasing demand for Internet of Things (IoT) applications has accelerated the need for robust resource allocation in sixth-generation (6G) networks. In this paper, we propose a reconfigurable intelligent surface (RIS)-assisted upper mid-band communication framework. To ensure robust connectivity under severe line-of-sight (LoS) blockages, we use a two-tier RIS structure comprising terrestrial RISs (TRISs) and high-altitude platform station (HAPS)-mounted RISs (HRISs). To maximize network sum rate, we formulate a joint beamforming, power allocation, and IoT device association (JBPDA) problem as a mixed-integer nonlinear program (MINLP). The formulated MINLP problem is challenging to solve directly; therefore, we tackle it via a decomposition approach. The zero-forcing (ZF) technique is used to optimize the beamforming matrix, a closed-form expression for power allocation is derived, and a stable matching-based algorithm is proposed for device-RIS association based on achievable data rates. Comprehensive simulations demonstrate that the proposed scheme approaches the performance of exhaustive search (ES) while exhibiting substantially lower complexity, and it consistently outperforms greedy search (GS) and random search (RS) baselines. Moreover, the proposed scheme converges much faster than the ES scheme.
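The ZF step is the standard channel pseudoinverse: precoding with it diagonalizes the effective channel so each device sees no inter-user interference. A NumPy sketch with an illustrative total-power normalization (the paper's exact normalization and power allocation may differ):

```python
import numpy as np

def zf_beamformer(H):
    """Zero-forcing precoder: the channel's right pseudoinverse nulls
    inter-user interference, diagonalizing the effective channel H @ W."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # right pseudoinverse
    return W / np.linalg.norm(W)                     # total-power normalization

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))  # 3 users, 6 antennas
W = zf_beamformer(H)
G = H @ W   # effective channel: diagonal, zero cross-terms
```

With interference nulled, per-user power allocation decouples, which is why a closed-form allocation becomes available after the ZF step.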
Authors: Martin Doff-Sotta, Florian Cech, Rishabh Manjunatha, Costantino Citro, Matthew Williams, Thomas Morstyn
Hybrid distribution transformers (HDTs) integrate conventional transformers with partially rated power electronic converters to improve power quality, enable advanced ancillary services and increase penetration of renewable energy sources in the national power grid. In this paper, we present an averaged mathematical model of a three-phase HDT equipped with two back-to-back voltage source converters connected in a series-shunt configuration. Cascaded PI controllers are designed in the synchronously rotating dq0 reference frame to regulate load voltage, compensate reactive power, achieve grid frequency regulation, and perform load phase balancing. Simulation results implemented in Python confirm that these simple yet effective control mechanisms allow HDTs to offer simultaneous grid services without introducing complexity. The complete model, control architecture, and implementation steps are detailed, enabling further validation and adoption.
Authors: Yousef Abudyak, Mohsen Alizadeh, Wei Sun
The rapid growth of hyperscale data centers driven by Large Language Models and Artificial Intelligence workloads has introduced new challenges for power systems. These facilities experience abrupt power variations during model training and checkpoint-saving events, causing voltage deviations and frequency disturbances. Moreover, they operate as passive loads that draw power without offering any grid support. This paper presents an integrated architecture that combines Battery Energy Storage Systems (BESSs) within data centers using Grid-Forming inverters to provide active grid-support functions. Simulation results in MATLAB/Simulink demonstrate accurate power reference tracking under dynamic loading, with eight coordinated BESS units supplying instantaneous power during training and saving conditions. Under single-phase voltage depression near the data center bus, the BESS delivered reactive power support similar to a Static Synchronous Compensator. During grid disconnection, seamless islanded operation was achieved with stable voltage and frequency and continuous power delivery at the data center bus.
Authors: M. Izadi, D. Fernandez Zapico, M. Salazar, T. Hofman
Electrification of heavy-duty vehicles places substantial stress on distribution grids, and Charging Energy Hubs (CEHs) mitigate these impacts by integrating charging infrastructure with renewable energy sources and battery storage. Optimal sizing of CEH components is therefore a critical investment decision, yet challenging because design choices depend strongly on operational dynamics. This work presents a mixed-integer linear programming model for the optimal sizing of CEH components, using a co-design approach that jointly optimizes component sizing and operational decisions. A case study for a heavy-duty fleet demonstrates the effectiveness of the method for cost-efficient, scalable, and grid-compliant CEH planning.
Authors: Yingrui Fan, Junbo Zhao
Data centers (DCs) are increasingly recognized as flexible loads that can support grid frequency regulation. Yet, most existing methods treat workload scheduling and regulation capacity bidding separately, overlooking how queueing dynamics and spatial-temporal dispatch decisions affect the ability to sustain real-time regulation. As a result, the committed regulation may become infeasible or short-lived. To address this issue, we propose a unified day-ahead co-optimization framework that jointly decides workload distribution across geographically distributed DCs and regulation capacity commitments. We construct a space-time network model to capture workload migration costs, latency requirements, and heterogeneous resource limits. To ensure that the committed regulation remains deliverable, we introduce chance constraints on instantaneous power flexibility based on interactive load forecasts, and apply Value-at-Risk queue-state constraints to maintain sustainable response under cumulative regulation signals. Case studies on a modified IEEE 68-bus system using real data center traces show that the proposed framework lowers system operating costs, enables more viable regulation capacity, and achieves better revenue-risk trade-offs compared to strategies that optimize scheduling and regulation independently.
Authors: Wangkun Xu, Zhongda Chu, Fei Teng
With the increasing penetration of renewable energy, traditional physics-based power system operation faces growing challenges in achieving economic efficiency, stability, and robustness. Machine learning (ML) has emerged as a powerful tool for modeling complex system dynamics to address these challenges. However, existing ML designs are often developed in isolation and lack systematic integration with established operational decision frameworks. To bridge this gap, this paper proposes a holistic framework of Learning-Augmented Power System Operations (LAPSO, pronounced Lap-So). From a native mathematical optimization perspective, LAPSO is centered on the operation stage and aims to unify traditionally siloed power system tasks such as forecasting, operation, and control. The framework jointly optimizes machine learning and physics-based models at both the training and inference stages. Then, a complete set of design metrics is introduced to quantify and evaluate the impact of ML models on existing decision-making. These metrics facilitate a deeper understanding of representative applications such as stability-constrained optimization (SCO) and objective-based forecasting (OBF). Moreover, LAPSO is inherently extensible to emerging learning paradigms that integrate forecasting, operation, and control in a closed loop. It also enables the systematic identification and mitigation of different sources and timings of uncertainty from a Bayesian perspective. Finally, a dedicated Python package \texttt{lapso} is developed to automatically augment existing power system optimization models with learnable components. All source code and datasets are publicly available at: this https URL.
Authors: Sandesh Rao Mattu, Nishant Mehrotra, Robert Calderbank
This letter describes how to improve performance of cellular systems by combining non-equiprobable signaling (shaping) with low-density parity check (LDPC) coding for an orthogonal frequency division multiplexing system. We focus on improving performance at the cell edge, where the 5G standard specifies a suite of LDPC codes with different rates that are applied to 4-QAM. We employ the method of shaping on rings, which adds to the transmission rate as it shapes the input distribution. We double the size of the $4$-QAM constellation by introducing a second shell of signal points, and we implement non-equiprobable signaling through a shaping code which selects the high energy shell less frequently than the low energy shell. We describe how to combine coding and shaping by integrating shaping into the calculation of log-likelihood ratios (LLRs) necessary for decoding LDPC codes. We employ rate $1/2$ LDPC coding and select the rate of the shaping code to match that of rate $3/4$ LDPC coding using $4$-QAM. We present simulation results for a representative Veh-A channel showing gains of $4$ dB at a bit error rate (BER) of $10^{-3}$. When we choose an LDPC code from the 5G suite to match the BER performance of rate $1/2$ LDPC coding with shaping we show that transmission rate can be improved by $20$%.
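The rate/energy trade-off behind shaping on rings can be sketched numerically. The shell energies and outer-shell probability below are illustrative placeholders, not the letter's actual shaping-code parameters: a biased shell bit contributes its entropy to the transmission rate while lowering the average transmit energy relative to equiprobable shells.

```python
import math

# Two-shell constellation: an inner 4-QAM shell (energy E1) and an outer
# shell (energy E2), with the outer shell chosen with probability p < 1/2
E1, E2 = 1.0, 2.0   # illustrative shell energies
p = 0.25            # outer-shell probability imposed by the shaping code

# Extra rate contributed by the biased shell bit, in bits/symbol
h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Average transmit energy under shaping vs. equiprobable shells
e_shaped = (1 - p) * E1 + p * E2
e_uniform = 0.5 * (E1 + E2)

print(f"shell-bit rate: {h:.3f} bits/symbol")
print(f"energy saving vs. uniform shells: {10 * math.log10(e_uniform / e_shaped):.2f} dB")
```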
Authors: Ismaeel Babur, Jane Macfarlane
The widespread adoption of battery electric vehicles (BEVs) holds promise for mitigating emission-related health impacts, particularly for low-income communities disproportionately affected by exposure to traffic-related air pollution. However, designing effective charging infrastructure necessitates a regional modeling approach that accounts for the inherent cross-jurisdictional nature of mobility patterns. This study underscores the importance of regional modeling in optimizing charging station deployment and evaluating the environmental justice implications for equity priority communities. We present a large-scale regional transportation modeling analysis leveraging Mobiliti, a cloud-based platform that employs parallel discrete event simulation to enable rapid computation. Our approach identifies the spatial demand density for charging infrastructure by analyzing over 19 million trips in the San Francisco Bay Area and determining the threshold points where BEVs may require charging across a typical day. By transitioning these trips that originate outside equity priority communities to BEVs, we quantify the potential emission reductions within these vulnerable areas. The regional modeling framework captures the complex interactions between travel behavior, vehicle characteristics, and charging needs, while accounting for the interconnectivity of infrastructure across municipal boundaries. This study demonstrates the critical role of regional modeling in designing equitable BEV charging networks that address environmental justice concerns. The findings inform strategies for deploying charging infrastructure that maximizes accessibility, minimizes range anxiety, and prioritizes the health and well-being of communities disproportionately burdened by transportation emissions.
Authors: Khalid Mahmud Labib, Shabbir Ahmed
The complex electrochemical behavior of lithium-ion batteries results in non-linear dynamics, and appropriate modeling of this non-linear dynamical system is of interest for better management and control. In this work, we propose a family of dynamic mode decomposition (DMD)-based data-driven models that do not require detailed knowledge of the composition of the battery materials but can essentially capture the non-linear dynamics with higher computational efficiency. Only voltage and current data obtained from hybrid pulse power characterization (HPPC) tests were utilized to form the state space matrices and subsequently used for predicting the future terminal voltage at different state of charge (SoC) and aging levels. To construct the system model, 60\% of the data from a single HPPC test was utilized to generate time-delay embedded snapshots, with embedding dimension ranging from 40 to 2000. Among these, an embedding dimension of 1810 resulted in the lowest residual sum of squares (RSS) error of 3.86 for the dynamic mode decomposition with control (DMDc) model and 30 for the standard DMD model. For the DMDc model, delay embeddings (ranging from 1 to 12) were also incorporated into the input current signals. For the input matrix, an embedding dimension of 6 resulted in a minimum RSS error of 1.74. Furthermore, the system matrices A and B, identified from the HPPC test when the cell is in its healthy state, were held fixed and used to simulate the system dynamics for aged batteries by updating only the control input. Despite the presence of nonlinear degradation effects in later cycles, the DMDc model effectively captured key inner dynamics such as voltage dips and transient responses for subsequent charge and discharge cycles.
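The delay-embedded DMDc fit described above can be sketched in a few lines of NumPy. The first-order toy dynamics and the small embedding dimension are stand-ins for the measured HPPC data and the 40-2000 embedding sweep, not the paper's actual setup:

```python
import numpy as np

# Toy "voltage" response to a pulsed current input, standing in for
# measured HPPC voltage/current data (first-order RC-like dynamics)
T = 400
u = np.sign(np.sin(0.05 * np.arange(T)))
v = np.zeros(T)
for t in range(1, T):
    v[t] = 0.95 * v[t - 1] + 0.1 * u[t - 1]

d = 10  # time-delay embedding dimension (the paper sweeps 40 to 2000)

# Delay-embedded snapshots: x_t = [v_t, v_{t-1}, ..., v_{t-d+1}]
X = np.column_stack([v[t - d + 1:t + 1][::-1] for t in range(d - 1, T - 1)])
Xp = np.column_stack([v[t - d + 2:t + 2][::-1] for t in range(d - 1, T - 1)])
U = u[d - 1:T - 1][None, :]

# DMDc: least-squares fit of Xp = A X + B U via the pseudo-inverse
G = Xp @ np.linalg.pinv(np.vstack([X, U]))
A, B = G[:, :d], G[:, d:]

# One-step-ahead prediction error on the terminal-voltage row
pred = A @ X + B @ U
rss = float(np.sum((pred[0] - Xp[0]) ** 2))
print(f"one-step RSS on v: {rss:.3e}")
```

Because the toy dynamics are exactly linear, the fitted A and B reproduce them almost perfectly; on real battery data the residual instead reflects how well the delay embedding linearizes the non-linear electrochemistry.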
Authors: Tanay Raghunandan Srinivasa, Vivek Deulkar, Jia Bhargava, Mohammad Hajiesmaili, Prashant Shenoy
Battery energy storage systems are increasingly deployed as fast-responding resources for grid balancing services such as frequency regulation and for mitigating renewable generation uncertainty. However, repeated charging and discharging induces cycling degradation and reduces battery lifetime. This paper studies the real-time scheduling of a heterogeneous battery fleet that collectively tracks a stochastic balancing signal subject to per-battery ramp-rate and capacity constraints, while minimizing long-term cycling degradation. Cycling degradation is fundamentally path-dependent: it is determined by charge-discharge cycles formed by the state-of-charge (SoC) trajectory and is commonly quantified via rainflow cycle counting. This non-Markovian structure makes it difficult to express degradation as an additive per-time-step cost, complicating classical dynamic programming approaches. We address this challenge by formulating the fleet scheduling problem as a Markov decision process (MDP) with constrained action space and designing a dense proxy reward that provides informative feedback at each time step while remaining aligned with long-term cycle-depth reduction. To scale learning to large state-action spaces induced by fine-grained SoC discretization and asymmetric per-battery constraints, we develop a function-approximation reinforcement learning method using an Extreme Learning Machine (ELM) as a random nonlinear feature map combined with linear temporal-difference learning. We evaluate the proposed approach on a toy Markovian signal model and on a Markovian model trained from real-world regulation signal traces obtained from the University of Delaware, and demonstrate consistent reductions in cycle-depth occurrence and degradation metrics compared to baseline scheduling policies.
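Rainflow cycle counting, the source of the path dependence discussed above, can be sketched with a simplified stack-based counter. This is a minimal illustrative variant, not the exact ASTM procedure or the paper's implementation:

```python
def rainflow_halfcycles(soc):
    """Simplified rainflow counter: returns (depth, weight) pairs, with
    closed cycles at weight 1.0 and residual half-cycles at weight 0.5."""
    # keep only turning points (local extrema) of the SoC trajectory
    pts = [soc[0]]
    for x in soc[1:]:
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x           # same direction: extend the excursion
        else:
            pts.append(x)
    stack, cycles = [], []
    for p in pts:
        stack.append(p)
        while len(stack) >= 3:
            d1 = abs(stack[-2] - stack[-3])
            d2 = abs(stack[-1] - stack[-2])
            if d1 <= d2:          # inner excursion closed: count a full cycle
                cycles.append((d1, 1.0))
                del stack[-3:-1]  # remove the two inner turning points
            else:
                break
    # whatever remains on the stack forms residual half cycles
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(a - b), 0.5))
    return cycles

# SoC trajectory with one small cycle nested inside a large excursion
print(rainflow_halfcycles([0.2, 0.8, 0.5, 0.7, 0.3]))
```

The nested 0.5-0.7 excursion is counted as one shallow full cycle, while the enclosing swings survive as half cycles; it is exactly this nesting that makes the degradation cost non-additive across time steps.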
Authors: Finn Vehlhaber, Mauro Salazar
The deployment of medium-range battery electric aircraft is a promising pathway to improve the environmental footprint of air mobility. Yet such a deployment would be accompanied by significant electric power requirements at airports due to aircraft charging. Given the growing prevalence of electric vehicles and their bi-directional charging capabilities, so-called vehicle-to-grid (V2G), we study energy buffer capabilities of parked electric vehicles to alleviate pressure on grid connections. To this end, we present energy management strategies for airports providing cost-optimal apron and landside V2G charge scheduling. Specifically, we first formulate the optimal energy management problem of joint aircraft charging and landside V2G coordination as a linear program, whereby we use partial differential equations to model the aggregated charging dynamics of the electric vehicle fleet. Second, we consider a shuttle flight network with a single hub of a large Dutch airline, real-world grid prices, and synthetic parking garage occupancy data to test our framework. Our results show that V2G at even a single airport can indeed reduce energy costs to charge the aircraft fleet: Compared to a baseline scenario without V2G, the proposed concept yields cost savings of up to 32%, depending on the schedule and the number of participating vehicles, and has other potential beneficial effects on the local power grid, e.g., the reduction of potential power peaks.
Authors: Ran Yang, Zheng Dong, Yue Xiu, Guangyi Liu, Wanting Lyu, Xiangxin Meng, Yan Li, Ning Wei
Movable antennas (MAs) have demonstrated significant potential in enhancing the performance of dual-functional radar-communication (DFRC) systems. In this paper, we explore an MA-aided DFRC system that utilizes a reconfigurable intelligent surface (RIS) to enhance signal coverage for communications in dead zones. To enhance the radar sensing performance in practical DFRC environments, we propose a unified robust transceiver design framework aimed at maximizing the minimum radar signal-to-interference-plus-noise ratio (SINR) in a cluttered environment. Our approach jointly optimizes transmit beamforming, receive filtering, antenna placement, and RIS reflecting coefficients under imperfect channel state information (CSI) for both sensing and communication channels. To deal with the channel uncertainty constraints, we leverage the convex hull method to transform the primal problem into a more tractable form. We then introduce a two-layer block coordinate descent (BCD) algorithm, incorporating fractional programming (FP), successive convex approximation (SCA), S-Lemma, and penalty techniques to reformulate it into a series of semidefinite program (SDP) subproblems that can be efficiently solved. We provide a comprehensive analysis of the convergence and computational complexity for the proposed design framework. Simulation results demonstrate the robustness of the proposed method, and show that the MA-based design framework can significantly enhance the radar SINR performance while achieving an effective balance between the radar and communication performance.
Authors: Charlotte Muth, Shrinivas Chimmalgi, Laurent Schmalen
We investigate precoding for multi-user (MU) multiple-input multiple-output (MIMO) joint communications and sensing (JCAS) systems, taking into account the potential interference between sensing and communication channels. We derive indicators for the sensing and communication performance, i.e., the detection probability and the communication signal-to-interference-and-noise ratio (SINR) for general input signals. Our results show that the use of the communication signal for sensing can prevent a loss in communication performance if channel interference occurs, while the kurtosis of the transmit alphabet of the communication signal limits the sensing performance. We present simulation results of example setups.
Authors: Jingguan Liu, Cong Chen, Xiaomeng Ai, Jiakun Fang, Jinsong Wang, Jinyu Wen
Distributed energy storage devices can be pooled and coordinated by aggregators to participate in power system operations and market clearings. This requires representing a massive device population as a single, tractable surrogate that is computationally efficient, accurate, and compatible with market participation requirements. However, surrogate identification is challenging due to heterogeneity, nonconvexity, and high dimensionality of storage devices. To address these challenges, this paper develops a mean-field learning framework for storage aggregation. We interpret aggregation as the average behavior of a large storage population and show that, as the population grows, aggregate performance converges to a unique, convex mean-field limit, enabling tractable population-level modeling. This convexity further yields a price-responsive characterization of aggregate storage behavior and allows us to bound the mean-field approximation error. Leveraging these results, we construct a convex surrogate model that approximates the aggregate behavior of large storage populations and can be embedded directly into power system operations and market clearing. Surrogate parameter identification is formulated as an optimization problem using historical market price-response data, and we adopt a gradient-based algorithm for an efficient learning procedure. Case studies validate the theoretical findings and demonstrate the effectiveness of the proposed framework in approximation accuracy, data efficiency, and profit outcomes.
Authors: Sifat Chowdhury, Yihsu Chen, Yu Zhang
The growing prevalence of extreme weather events driven by climate change poses significant challenges to power system resilience. Infrastructure damage and prolonged power outages highlight the urgent need for effective grid-hardening strategies. While some measures provide long-term protection against specific hazards, they can become counterproductive under conflicting threats. In this work, we develop an adaptive two-stage stochastic optimization framework to support dynamic decision-making for hardening critical grid components under multiple hazard exposures. Unlike traditional approaches, our model adapts to evolving climate conditions, enabling more resilient investment strategies. Furthermore, we integrate long-term (undergrounding) and short-term (vegetation management) hardening actions to jointly minimize total system costs. Extensive simulation results validate the effectiveness of the proposed framework in reducing outage and repair costs while enhancing the adaptability and robustness of grid infrastructure planning.
Authors: B. da Costa Paulo, N. Aginako, J. Ugartemendia, I. Landa del Barrio, M. Quartulli, H. Camblong
In recent years, energy efficiency has become a pressing challenge: reducing energy waste not only mitigates environmental impact but also yields financial advantages. Buildings play an important role here, as they are among the biggest energy consumers, so finding ways to reduce their consumption is key, and one technique for doing so is designing Demand Response (DR) strategies. This paper proposes a novel way to decrease the computational effort of simulating the behaviour of a building, using surrogate models based on active learning. Before tackling the building problem itself, which is complex and computationally costly, the paper applies active learning to a smaller one: regressing the voltage-versus-current curve of a thermo-resistor from a reduced number of simulations. The paper then implements a surrogate model of the energy consumption of a building, with the goal of learning the consumption pattern from a limited number of simulations. The result given by the surrogate can be used to set the reference temperature, maximising PV self-consumption and reducing energy usage from the grid. Thanks to the surrogate, the total time needed to map all possible consumption scenarios is reduced by a factor of about 7.
Authors: Xiang Zhu, Xiuqiang He, Hongyang Qing, Hua Geng
This letter proposes a decentralized local gain condition (LGC) to guarantee oscillation damping in inverter-based resource (IBR)-dominated power systems. The LGC constrains the dynamic gain between each IBR and the network at its point of connection. By satisfying the LGC locally, the closed-loop poles are confined to a desired region, thereby yielding system-wide oscillation damping without requiring global information. Notably, the LGC is agnostic to different IBR dynamics, well-suited for systems with heterogeneous IBRs, and flexible to various damping requirements. Moreover, a low-complexity algorithm is proposed to parameterize LGC, providing scalable and damping-constrained parameter tuning guidance for IBRs.
Authors: Savvas Panagi, Chrysovalantis Spanias, Petros Aristidou
The growing electrification of transportation and heating through Electric Vehicles (EVs) and Heat Pumps (HPs) introduces both flexibility and complexity to Active Distribution Networks (ADNs). These resources provide substantial operational flexibility but also create tightly coupled thermal-electrical dynamics that challenge conventional network management. This paper proposes a unified co-optimization framework that integrates a calibrated 3R2C grey-box building thermal model into a network-constrained Optimal Power Flow (OPF). The framework jointly optimizes EVs, HPs, and photovoltaic systems while explicitly enforcing thermal comfort, Distributed Energy Resource (DER) limits, and full power flow physics. To maintain computational tractability, Second-Order Cone Programming (SOCP) relaxations are evaluated on a realistic low-voltage feeder. The analysis shows that, despite network heterogeneity violating some theoretical exactness conditions, the relaxation remains exact in practice. Comparative assessments of convex DistFlow, bus injection, and branch flow formulations reveal that convex DistFlow achieves sub-second runtimes and near-optimal performance even at high DER penetration levels. Simulations confirm the effectiveness of coordinated scheduling, yielding reductions of 41% in transformer aging, 54% in losses, and complete elimination of voltage violations, demonstrating the value of integrated thermal-electrical coordination in future smart grids.
Authors: Qian Zhang, Feng Zhao, Gord Stephen, Chanan Singh, Le Xie
Probabilistic resource adequacy assessment is a cornerstone of modern capacity accreditation. This paper develops a gradient-based framework, in which capacity accreditation is interpreted as the directional derivative of a probabilistic resource adequacy metric with respect to resource capacity, that unifies two widely used accreditation approaches: Effective Load Carrying Capability (ELCC) and Marginal Reliability Impact (MRI). Under mild regularity conditions, we show that marginal ELCC and MRI yield equivalent accreditation factors, while their numerical implementations exhibit markedly different computational characteristics. Building on this framework, we demonstrate how infinitesimal perturbation analysis enables up to a $1000\times$ speedup in gradient estimation for capacity accreditation, and we implement gradient-informed search algorithms that significantly accelerate ELCC computations relative to standard bisection methods. Large-scale Monte Carlo experiments show that MRI achieves substantial runtime reductions compared to ELCC and exhibits greater robustness to perturbation step-size selection. These results provide practical guidance for implementing efficient and scalable capacity accreditation in large-scale power systems.
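The ELCC search that the gradient-informed algorithms accelerate can be sketched with plain bisection on a toy reliability metric. The smooth EUE surrogate below is an artificial stand-in for a Monte Carlo adequacy model, not the paper's setup: ELCC is the extra load the system can carry, after adding the resource, at unchanged reliability.

```python
def eue(load, resource_cap):
    """Artificial expected-unserved-energy surrogate, monotone in load."""
    return max(0.0, load - resource_cap - 90.0) ** 2

def elcc(resource_cap, base_load=100.0, tol=1e-8):
    """Effective Load Carrying Capability via bisection on added load."""
    target = eue(base_load, 0.0)            # reliability before the addition
    lo, hi = 0.0, 2 * resource_cap + 1.0    # bracket for the bisection
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # can the system carry `mid` extra load at the same reliability?
        if eue(base_load + mid, resource_cap) <= target:
            lo = mid
        else:
            hi = mid
    return lo

print(f"ELCC of a 10-unit resource: {elcc(10.0):.3f}")
```

Each bisection step re-evaluates the reliability metric, which is why replacing the full search with a directional derivative (the MRI view) or with perturbation-based gradient estimates yields the large speedups reported above.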
Authors: Qian Zhang, Feng Zhao, Tongxin Zheng, Le Xie
To enhance the efficiency of capacity markets, many electricity markets in the U.S. are adopting or planning to implement marginal capacity accreditation reforms. This paper provides new insights into energy storage capacity accreditation using Marginal Reliability Impact (MRI). We reformulate the commonly used reliability-based storage dispatch model as an optimization problem, enabling direct calculation of the MRI from the Lagrange multipliers, rather than using brute-force perturbation analysis. The analysis demonstrates that the expected unserved energy (EUE) is a piecewise linear function and the storage MRI retains a non-negative property across various system scenarios. We further explore the influence of qualified capacity (QC), storage dispatch rules, and other key factors on storage accreditation, providing practical insights for system operators. Additionally, comparisons of storage capacity accreditation under different reliability criteria offer valuable guidance for policymakers in setting future standards. Numerical results from a modified California system validate our findings and highlight several important phenomena associated with the MRI-based accreditation scheme.
Authors: Shuaicheng Tong, Michael A. Boateng, Mathieu Tanneau, Pascal Van Hentenryck
Voltage (Volt) and reactive-power (VAR) control in transmission networks is critical for reliability and increasingly needs fast, implementable decisions. This paper presents a transmission Volt/VAR Optimization (VVO) framework that co-optimizes discrete control of on-load tap-changing transformers (OLTCs) and capacitor banks (CBs) with AC power flow (ACPF) physics to improve voltage stability and minimize VAR generation. The framework follows a relax-round-resolve pipeline: a continuous relaxation proposes targets, a rounding step selects feasible discrete settings, and a final solve enforces AC power flow physics. Extensive experiments on IEEE, PEGASE, and RTE systems show consistent improvements in voltage and VAR quality metrics with modest generator redispatch while preserving economic operation and achieving runtimes compatible with real-time transmission operations.
Authors: Martin Kittel, Wolf-Peter Schill
Variable renewable energy droughts, so-called Dunkelflaute events, emerge as a challenge for climate-neutral energy systems based on variable renewables. Here we characterize European drought events for on- and offshore wind power, solar photovoltaics, and renewable technology portfolios, using 38 historic weather years and an advanced identification method. Their characteristics heavily depend on the chosen drought threshold, questioning the usefulness of single-threshold analyses. Applying a multi-threshold framework, we quantify how the complementarity of wind and solar power temporally and spatially alleviates drought frequency, return periods, duration, and severity within (portfolio effect) and across countries (balancing effect). We identify the most extreme droughts, which drive major discharging periods of long-duration storage in a fully renewable European energy system, based on a policy-relevant decarbonization scenario. Such events comprise sequences of shorter droughts of varying severity. The most extreme event occurred in winter 1996/97 and lasted 55 days in an idealized, perfectly interconnected setting. The average renewable availability during this period was still 47% of its long-run mean. System planners must consider such events when planning for storage and other flexibility technologies. Methodologically, we conclude that using arbitrary single calendar years is not suitable for modeling weather-resilient energy scenarios.
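The multi-threshold drought identification can be sketched with a minimal run-length scan over a capacity-factor series. The synthetic hourly values and threshold fractions below are illustrative, not the paper's method or reanalysis data:

```python
def droughts(cf, threshold):
    """Return (start, length) of maximal runs with cf[t] < threshold."""
    runs, start = [], None
    for t, x in enumerate(cf):
        if x < threshold and start is None:
            start = t                        # a drought run begins
        elif x >= threshold and start is not None:
            runs.append((start, t - start))  # the run has ended
            start = None
    if start is not None:                    # run reaching the series end
        runs.append((start, len(cf) - start))
    return runs

# Synthetic hourly capacity factors standing in for a weather-year series
cf = [0.4, 0.1, 0.05, 0.1, 0.5, 0.3, 0.02, 0.01, 0.6]
mean_cf = sum(cf) / len(cf)
for thr_frac in (0.2, 0.5):                  # multi-threshold analysis
    thr = thr_frac * mean_cf                 # threshold as fraction of mean
    print(thr_frac, droughts(cf, thr))
```

Raising the threshold fraction merges and lengthens drought runs, which is precisely why the paper argues that single-threshold analyses understate event duration and severity.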
Authors: Verena Häberle, Kehao Zhuang, Xiuqiang He, Linbin Huang, Gabriela Hug, Florian Dörfler
This paper presents preliminary results toward a conceptual foundation for Next Generation Grid Codes (NGGCs) based on decentralized stability and performance certification for dynamic ancillary services. The proposed NGGC framework targets two core outcomes: (i) guaranteed closed-loop stability and (ii) explicit performance assurances for power-system frequency and voltage dynamics. Stability is addressed using loop-shifting and passivity-based methods that yield local frequency-domain certificates for individual devices, enabling fully decentralized verification of the interconnected system. Performance is characterized by deriving quantitative bounds on key time-domain metrics (e.g., nadirs, rate-of-change-of-frequency (RoCoF), steady-state deviations, and oscillation damping) through frequency-domain constraints on local device behavior. The framework is non-parametric and model-agnostic, accommodating a broad class of device dynamics under mild assumptions, and provides an initial unified approach to stability and performance certification without explicit device-model parameterization. As such, these results offer a principled starting point for the development of future grid codes and control design methodologies in modern power systems.
Authors: Zixin Jiang, Weili Xu, Bing Dong
The urgent need for building decarbonization calls for a paradigm shift in future autonomous building energy operation, from human-intensive engineering workflows toward intelligent agents that interact with physics-grounded digital environments. This study proposes an end-to-end agentic AI-enabled Physics-Informed Machine Learning (PIML) environment for scalable building energy modeling, simulation, control, and automation. The framework consists of (1) a modular and physics-consistent PIML digital environment spanning building thermal dynamics, Heating, Ventilation, and Air Conditioning (HVAC), and distributed energy resources (DER) for grid-interactive energy management; and (2) an agentic AI layer with 11 specialist agents and 72 Model Context Protocol (MCP) tools that enable end-to-end execution of multi-step energy analytics. A representative case study demonstrates multi-domain, multi-agent coordination for assessing how system and control upgrades affect energy use, operating cost, thermal comfort, and flexibility. In addition, a large-scale benchmark (about 4000 runs) systematically evaluates workflow performance in terms of accuracy, token consumption, execution time, and inference cost. The results quantify the impacts of intelligence mode design, model size, task complexity, and orchestrator-specialist coordination, and provide key lessons for building future agentic AI systems in real-world building energy applications. This work establishes a scalable, physics-grounded foundation for deploying agentic AI in decarbonized and grid-interactive building operations.
Authors: Gaoze Mu, Yanzhao Hou, Mingjie Chen, Yuanyu Hu, Yongan Zheng, Qimei Cui, Xiaofeng Tao
This paper presents an analytical framework for evaluating the coverage performance of fluid antenna system (FAS)-enhanced LoRa wide-area networks (LoRaWANs). We investigate the effects of large-scale pathloss in LoRaWAN, small-scale fading characterized by FAS, and dense interference (i.e., collision in an ALOHA-based mechanism) arising from randomly deployed end devices (EDs). Both co-spreading factor (co-SF) interference (with the same SF) and inter-SF interference (with different SFs) are introduced into the network, and their differences in physical characteristics are also considered in the analysis. Additionally, simple yet accurate statistical approximations of the FAS channel envelope and power are derived using the extreme-value theorem. Based on the approximated channel expression, the theoretical coverage probability of the proposed FAS-enhanced LoRaWAN is derived. Numerical results validate our analytical approximations by exhibiting close agreement with the exact correlation model. Notably, it is revealed that a FAS with a normalized aperture of $1 \times 1$ can greatly enhance network performance, in terms of both ED numbers and coverage range.
Authors: Yuhua Zhao, Tiejun Lv, Ke Wang
Satellite-based Internet of Things (S-IoT) faces a fundamental trilemma: propagation delay, dynamic fading, and bandwidth scarcity. While Layer-coded Hybrid ARQ (L-HARQ) enhances reliability, its backtracking decoding introduces age ambiguity, undermining the standard Age of Information (AoI) metric and obscuring the critical trade-off between data freshness and transmission efficiency. To bridge this gap, we propose a novel cross-layer optimization framework centered on a new metric, the Cross-layer Age of Error Information (C-AoEI). We derive a closed-form expression for C-AoEI that explicitly links freshness degradation to system parameters and channel dynamics. Building on this, we develop a packet-level encoded L-HARQ scheme for multi-GBS scenarios and an adaptive algorithm that jointly optimizes coding and decision thresholds. Extensive simulations demonstrate the effectiveness of our proposed framework: it achieves 31.8% higher transmission efficiency and 17.2% lower C-AoEI than conventional schemes. The framework also proves robust against inter-cell interference and varying channel conditions, providing a foundation for designing efficient, latency-aware next-generation S-IoT protocols.
Authors: Charlotte Muth, Shrinivas Chimmalgi, Laurent Schmalen
We investigate precoding for multi-user (MU) multiple-input multiple-output (MIMO) joint communications and sensing (JCAS) systems, taking into account the potential interference between sensing and communication channels. We derive indicators for the sensing and communication performance, i.e., the detection probability and the communication signal-to-interference-and-noise ratio (SINR) for general input signals. Our results show that the use of the communication signal for sensing can prevent a loss in communication performance if channel interference occurs, while the kurtosis of the transmit alphabet of the communication signal limits the sensing performance. We present simulation results of example setups.
Authors: Qian Gao, Ruikang Zhong, Yue Liu, Hyundong Shin, Yuanwei Liu
In this paper, a three-dimensional (3D) deployment scheme for a pinching antenna array is proposed, aiming to enhance the performance of integrated sensing and communication (ISAC) systems. To fully realize the potential of 3D deployment, a joint antenna positioning, time allocation, and transmit power optimization problem is formulated to maximize the sum communication rate under constraints on target sensing rates and system energy. To solve the sum-rate maximization problem, we propose a heterogeneous graph neural network based reinforcement learning (HGRL) algorithm. Simulation results show that 3D deployment of the pinching antenna array outperforms its 1D and 2D counterparts in ISAC systems. Moreover, the proposed HGRL algorithm surpasses other baselines in both performance and convergence speed due to its advanced observation construction of the environment.
Authors: Qian Gao, Ruikang Zhong, Yuanwei Liu
In this paper, a general ISAC system where the base station (BS) communicates with multiple users and performs target detection is considered. A sum communication rate maximization problem is then formulated, subject to constraints on transmit power and the minimum sensing rates of users. To solve this problem, we develop a framework that leverages deep learning algorithms to provide a three-stage solution for ISAC beamforming. The three-stage beamforming optimization solution includes three modules: 1) an unsupervised learning based feature extraction algorithm is proposed to extract fixed-size latent features while preserving the essential information of the variable-size channel state information (CSI); 2) a reinforcement learning (RL) based beampattern optimization algorithm is proposed to search for the desired beampattern according to the extracted features; 3) a supervised learning based beamforming reconstruction algorithm is proposed to reconstruct the beamforming vector from the beampattern given by the RL agent. Simulation results demonstrate that the proposed three-stage solution outperforms the baseline RL algorithm by optimizing the intuitive beampattern rather than the beamforming vector directly.
Authors: Ruining Fan, Xingyu Huang, Mouli Chakraborty, Avishek Nag, Anshu Mukherjee
Efficient user scheduling in massive Multiple Input Multiple Output (mMIMO) systems remains a significant challenge in 5G and Beyond 5G (B5G) networks due to high computational complexity, limited scalability, and Channel State Information (CSI) overhead. This paper proposes a novel Grover's search-inspired Quantum Reinforcement Learning (QRL) framework for mMIMO user scheduling. The QRL agent can explore the exponentially large scheduling space effectively by applying Grover's search to the reinforcement learning process. The model is implemented using our designed quantum-gate-based circuit, which imitates the layered architecture of reinforcement learning, where quantum operations act as policy updates and decision-making units. Moreover, the simulation results demonstrate that the proposed method converges properly and significantly outperforms classical Convolutional Neural Network (CNN) and Quantum Deep Learning (QDL) benchmarks.
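As a point of orientation, the amplitude-amplification mechanism that such Grover-inspired exploration builds on can be simulated classically in a few lines. This is a generic Grover iteration on a state vector, not the authors' quantum-gate circuit; the state count and marked index are illustrative:

```python
import numpy as np

def grover_search(n_states, marked, iters):
    """Simulate Grover's amplitude amplification on a state vector:
    oracle phase flip on the marked item, then inversion about the mean."""
    amp = np.ones(n_states) / np.sqrt(n_states)
    for _ in range(iters):
        amp[marked] *= -1.0           # oracle: flip the marked amplitude
        amp = 2.0 * amp.mean() - amp  # diffusion: inversion about the mean
    return amp[marked] ** 2           # probability of measuring the marked item

# ~pi/4 * sqrt(N) iterations suffice to find one item among N
n = 16
k = int(np.floor(np.pi / 4 * np.sqrt(n)))  # 3 iterations for N = 16
print(round(grover_search(n, marked=5, iters=k), 3))  # → 0.961
```

This quadratic speedup in search iterations is what makes exploring an exponentially large scheduling space attractive.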
Authors: Julian Gutierrez, Redouane Silvente
We present two machine learning frameworks for forecasting aggregated curves and optimizing storage in the EPEX SPOT day-ahead market. First, a fast parametric model forecasts hourly demand and supply curves in a low-dimensional and grid-robust representation, with minimum and maximum volumes combined with a Chebyshev polynomial for the elastic segment. The model enables daily use with low error and clear interpretability. Second, for a more comprehensive analysis, though less suited to daily operation, we employ generative models that learn the joint distribution of 24-hour order-level submissions given weather and fuel variables. These models generate synthetic daily scenarios of individual buy and sell orders, which, once aggregated, yield hourly supply and demand curves. Based on these forecasts, we optimize a price-making storage strategy, quantify revenue distributions, and highlight the price-compression effect with lower peaks, higher off-peak levels, and diminishing returns as capacity expands.
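The low-dimensional curve representation described above can be sketched with NumPy's Chebyshev utilities. The function names, the synthetic curve, and the polynomial degree below are illustrative assumptions, not the paper's exact parametrization (which carries the minimum and maximum volumes alongside the polynomial coefficients):

```python
import numpy as np

def fit_elastic_segment(volumes, prices, v_min, v_max, degree=6):
    """Fit a Chebyshev polynomial to the elastic segment of an aggregated
    order curve, mapped onto the canonical [-1, 1] domain so the
    representation is robust to the underlying volume grid."""
    x = 2.0 * (np.asarray(volumes) - v_min) / (v_max - v_min) - 1.0
    return np.polynomial.chebyshev.chebfit(x, prices, degree)

def eval_elastic_segment(coeffs, volumes, v_min, v_max):
    """Evaluate the fitted segment back on the original volume axis."""
    x = 2.0 * (np.asarray(volumes) - v_min) / (v_max - v_min) - 1.0
    return np.polynomial.chebyshev.chebval(x, coeffs)

# Synthetic supply curve: price rises with volume, with a smooth wiggle
v = np.linspace(1000.0, 5000.0, 50)
p = 20.0 + 0.01 * v + 5.0 * np.sin(v / 800.0)
c = fit_elastic_segment(v, p, v.min(), v.max())
p_hat = eval_elastic_segment(c, v, v.min(), v.max())
print(float(np.max(np.abs(p_hat - p))))  # small reconstruction error
```

A degree-6 fit stores a whole hourly curve in seven coefficients, which is what makes the representation cheap to forecast.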
Authors: Minerva Priyadarsini, Zilong Liu, Kuntal Deka, Sujit Kumar Sahoo, Sanjeev Sharma
This paper proposes, for the first time, a hybrid multiple access framework that integrates the principles of rate-splitting (RS) and sparse code multiple access (SCMA) in a SISO downlink scenario. The proposed scheme, termed RS-SCMA, unifies the powerful interference management capability of rate-splitting multiple access (RSMA) with the near-optimal multiuser detection of SCMA. A key feature of RS-SCMA is a tunable splitting factor $\alpha$, which governs the allocation between the generic $M$-ary modulated common messages and SCMA-encoded private messages. This enables dynamic control over the fundamental trade-off between system sum-rate, bit error rate (BER), and the overloading factor. We develop novel transmitter and receiver architectures based on soft successive interference cancellation (SIC), incorporating message passing algorithm (MPA) detection and soft-symbol reconstruction. Furthermore, a unified analytical expression for the achievable sum-rate is derived as a function of the splitting factor $\alpha$. The performance of the proposed RS-SCMA system is evaluated in terms of both BER and sum-rate. Simulation results confirm the superiority of RS-SCMA over conventional SCMA and multi-carrier RSMA, demonstrating its scalability and robustness even in the presence of channel estimation errors.
Authors: Martin Andersson, Anubhab Chowdhury, Erik G. Larsson
We present a framework for joint amplification and phase shift optimization of the repeater gain in dynamic time-division duplex (TDD) repeater-assisted massive MIMO networks. Repeaters, being active scatterers with amplification and phase shift, enhance the received signal strengths for users. However, they inevitably also amplify undesired noise and interference signals, which become particularly prominent in dynamic TDD systems due to the concurrent downlink (DL) and uplink (UL) transmissions, introducing cross-link interference among access points and users operating in opposite transmit directions. This causes a non-trivial trade-off between amplification of desired and undesired signals. To pinpoint the conditions under which such a trade-off can improve performance, we first derive DL and UL spectral efficiencies (SEs), and then develop a repeater gain optimization algorithm for SE maximization. Numerically, we show that our proposed algorithm successfully calibrates the repeater gain to amplify the desired signal while limiting the interference.
Authors: Murat Arda Onsu, Poonam Lohan, Burak Kantarci, Aisha Syed, Matthew Andrews, Sean Kennedy
Intelligent Transportation Systems (ITS) demand real-time collision prediction to ensure road safety and reduce accident severity. Conventional approaches rely on transmitting raw video or high-dimensional sensory data from roadside units (RSUs) to vehicles, which is impractical under vehicular communication bandwidth and latency constraints. In this work, we propose a semantic V2X framework in which RSU-mounted cameras generate spatiotemporal semantic embeddings of future frames using the Video Joint Embedding Predictive Architecture (V-JEPA). To evaluate the system, we construct a digital twin of an urban traffic environment, enabling the generation of diverse traffic scenarios with both safe and collision events. These embeddings of the future frame, extracted from V-JEPA, capture task-relevant traffic dynamics and are transmitted via V2X links to vehicles, where a lightweight attentive probe and classifier decode them to predict imminent collisions. By transmitting only semantic embeddings instead of raw frames, the proposed system significantly reduces communication overhead while maintaining predictive accuracy. Experimental results demonstrate that the framework with an appropriate processing method achieves a 10% F1-score improvement for collision prediction while reducing transmission requirements by four orders of magnitude compared to raw video. This validates the potential of semantic V2X communication to enable cooperative, real-time collision prediction in ITS.
Authors: Dahlia Saba, Dominic Groß
This work presents (i) a framework for certifying small-signal frequency stability of a power system with line dynamics and heterogeneous bus dynamics, (ii) a novel reduced-order model of damper windings in synchronous machines, and (iii) a proportional-derivative (PD) damper winding emulation control for voltage-source converters (VSCs). Damper windings have long been understood to improve the frequency synchronization between machines. However, the dynamics of the damper windings are complex, making them difficult to analyze and directly emulate in the control of VSCs. This paper derives a reduced-order model of the damper windings as a PD term that allows grid-forming controls for VSCs to emulate their effect on frequency dynamics. Next, a framework for certifying small-signal frequency stability of a network with heterogeneous bus dynamics is developed that extends prior results by incorporating line dynamics. Finally, we analytically demonstrate that PD damper winding emulation can improve the stability of grid-forming converter controls. These results are validated with electromagnetic-transient (EMT) simulation.
Authors: Menghan Zhang, Caisheng Wang
As load varies continuously over time, it is essential to provide continuous-time price signals that accurately reflect supply-demand balance. However, conventional discrete-time economic dispatch fails to capture the intra-temporal variations in load and generation. Its dual solution, the marginal price, may distort economic signals, leading to inefficient market incentives. To analyze these issues, this paper develops a continuous-time dispatch model and derives its dual formulation for price analysis. The continuous-time dispatch produces dual variables that can be interpreted as price signals. Piecewise time-indexed generation and price trajectories are then constructed through a parametric programming approach. The resulting price, represented by the Lagrange multiplier of the system-wide power balance constraint, evolves piecewise along the continuous-time load profile. Each segment corresponds to a critical region characterized by a set of active constraints. Furthermore, we discuss the impact of unit-specific ramping constraints on price implications. Results indicate that continuous-time generation and price trajectories provide deeper insights into the price distortions and inefficient incentives inherent in discrete-time formulations. The proposed methodology is validated on an illustrative 5-bus system and a modified IEEE RTS-2019.
Authors: Giacomo Mastroddi, Jan Poland, Mats Larsson, Keith Moffat
We study damping of inter-area oscillations in transmission grids using voltage-source-converter-based high-voltage direct-current (VSC-HVDC) links. Conventional power oscillation damping controllers rely on system models that are difficult to obtain in practice. Data-driven Predictive Control (DPC) addresses this limitation by replacing explicit models with data. We apply AutoRegressive with eXogenous inputs (ARX)-based predictive control and its Transient Predictive Control (TPC) variant, and compare them with Data-enabled Predictive Control (DeePC) and two standard model-based controllers. The methods are evaluated in simulation on a system exhibiting both inter-area and local oscillation modes. ARX-based predictive control and DeePC both achieve effective damping, while the ARX-based methods require less online computation. Using warm-started, pre-factorized operator-splitting solvers, ARX/TPC control actions are computed in less than 1 ms. These results demonstrate that DPC is a viable approach for power-system oscillation damping for the given test case.
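A minimal sketch of the ARX identification-plus-prediction loop underlying such controllers, assuming ordinary least squares for the identification step; the model orders and the simulated second-order plant are illustrative, not the paper's grid model:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Identify ARX coefficients by ordinary least squares:
    y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

def predict_arx(a, b, y_hist, u_hist, u_future):
    """Roll the identified model forward over a horizon, as a predictive
    controller does when evaluating a candidate input sequence."""
    y, u = list(y_hist), list(u_hist) + list(u_future)
    for k in range(len(u_future)):
        t = len(u_hist) + k  # absolute time index of the new output
        y.append(sum(a[i] * y[t - 1 - i] for i in range(len(a)))
                 + sum(b[j] * u[t - 1 - j] for j in range(len(b))))
    return y[len(y_hist):]

# Simulate a lightly damped second-order plant, then re-identify it
rng = np.random.default_rng(0)
u_data = rng.standard_normal(300)
y_data = np.zeros(300)
for k in range(2, 300):
    y_data[k] = 1.5 * y_data[k - 1] - 0.7 * y_data[k - 2] + 0.5 * u_data[k - 1]
a, b = fit_arx(y_data, u_data)
y_pred = predict_arx(a, b, y_data[:100], u_data[:100], u_data[100:105])
print(np.round(a, 3), np.round(b, 3))  # noise-free data: coefficients recovered
```

In a real DPC loop, the prediction above would sit inside an optimizer that searches over `u_future`; pre-factorizing that optimizer is what yields the sub-millisecond solve times reported.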
Authors: Milad Hasanzadeh, Amin Kargarian
\textit{DPLib} is an open-source MATLAB-based benchmark library created to support research and development in distributed and decentralized power system analysis and optimization. Distributed and decentralized methods offer scalability, privacy preservation, and resilience to single points of failure, making them increasingly important for modern power systems. However, unlike centralized tools such as MATPOWER, no general-purpose, reproducible data library package currently exists for distributed power system studies. DPLib, available at \href{this https URL}{GitHub}, fills this gap by providing a standard power system library featuring over 20 multi-region benchmark test cases of varying sizes, along with a graph-based partitioning toolkit that decomposes any MATPOWER test system into multiple electrically coherent regions. The partitioning toolkit, an easy-to-use MATLAB code, generates standardized \texttt{.mat} and \texttt{.m} files, along with region visualizations for intuitive understanding. We also provide modular, easy-to-use distributed optimal power flow (OPF) solvers: an alternating direction method of multipliers (ADMM)-based DC-OPF solver implemented in YALMIP, and an ADMM-based AC-OPF solver leveraging IPOPT. These solvers validate the generated test systems for distributed optimization applications. Numerical results validate the generated test cases, establishing DPLib as a foundation for reproducible distributed power system research.
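The ADMM coordination pattern behind such distributed OPF solvers can be illustrated on a toy consensus problem in which two "regions" agree on a shared boundary variable. This generic Python sketch is for orientation only and is not DPLib's MATLAB DC-OPF code:

```python
import numpy as np

def admm_consensus(costs, rho=1.0, iters=200):
    """Consensus ADMM for min_x sum_i (x - c_i)^2: each 'region' solves a
    local quadratic subproblem, then the shared boundary variable z is
    updated by averaging and the scaled duals u_i are corrected."""
    c = np.asarray(costs, dtype=float)
    x, u, z = np.zeros_like(c), np.zeros_like(c), 0.0
    for _ in range(iters):
        x = (2.0 * c + rho * (z - u)) / (2.0 + rho)  # local (per-region) step
        z = float(np.mean(x + u))                    # coordination step
        u = u + x - z                                # dual update
    return z

print(admm_consensus([1.0, 3.0]))  # converges to the consensus optimum 2.0
```

In a distributed OPF the local step is each region's own OPF subproblem and `z` collects the boundary bus variables; only those boundary values are exchanged, which is the privacy-preserving property highlighted above.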
Authors: Yuxin Yang, Hang Zhou, Chaojie Li, Xin Li, Yingyi Yan, Mingyang Zheng
This paper revisits the classical formulation of the Z-transform and its relationship to the inverse Laplace transform ($\mathcal{L}^{-1}$), originally developed by Ragazzini in sampled-data theory. It identifies a longstanding mathematical oversight in standard derivations, which typically neglect the contribution from the infinite arc in the complex plane during inverse Laplace evaluation. This omission leads to inconsistencies, especially at discontinuities such as $t = 0$. By incorporating the full Bromwich contour, including all boundary contributions, we restore internal consistency between $\mathcal{L}^{-1}$ and the Z-transform, aligning the corrected $\mathcal{L}^{-1}$ with results from Discrete-Time Fourier Transform (DTFT) aliasing theory. Consequently, this necessitates a structural revision of the Z-transform, inverse Laplace transform, and the behavior of the Heaviside step function at discontinuities, providing a more accurate foundation for modeling and analysis of sampled-data systems.
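For orientation, the objects under discussion can be written in standard sampled-data notation. These are the textbook Bromwich inversion and impulse-sampling identities the argument revolves around (with $T$ the sampling period and $\omega_s = 2\pi/T$), not the paper's corrected derivation itself:

```latex
% Bromwich inversion; at a jump the symmetric (DTFT-consistent) value applies
x(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X(s)\,e^{st}\,ds,
\qquad
x(0) = \frac{x(0^+) + x(0^-)}{2}.

% Impulse-sampling identity for a signal with a jump at t = 0: the
% x(0^+)/2 term is the boundary contribution consistent with aliasing theory
X^{*}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(s - jk\omega_s)
           + \frac{x(0^+)}{2}.
```

For a causal unit step these conventions assign the $t = 0$ sample the value $1/2$, which is the DTFT-aliasing-consistent behavior the abstract refers to.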
Authors: Shivanshu Tripathi, Hossein Mohsenzadeh Yazdi, Maziar Raissi, Hamed Mohsenian-Rad
Inverter-based resources (IBRs) exhibit fast transient dynamics during network disturbances, which often cannot be properly captured by phasor and SCADA measurements. This shortcoming has recently been addressed with the advent of waveform measurement units (WMUs), which provide high-resolution, time-synchronized raw voltage and current waveform samples from multiple locations in the power system. However, transient model learning based on synchro-waveform measurements remains constrained by the scarcity of network disturbances and the complexity of the underlying nonlinear dynamics of IBRs. We propose to address these problems by developing a data-efficient physics-informed machine learning (PIML) framework for synchro-waveform analytics that estimates the IBR terminal current response from only a few network disturbance signatures. Here, the physics of the electrical circuits is used to compensate for limited data availability by constraining the learning process through known circuit relationships. Two cases are considered, with known and unknown circuit parameters. In the latter case, the framework jointly learns the transient dynamics of the IBRs and the parameters of the electrical circuit. Case studies using WMU disturbance data across multiple sampling rates show consistently lower current estimation error with substantially fewer training events than a purely data-driven baseline.
Authors: Jack Umenberger, Anna Osguthorpe Rasmussen
This paper studies the problem of maximizing revenue from a grid-scale battery energy storage system, accounting for uncertain future electricity prices and the effect of degradation on battery lifetime. We formulate this task as an online resource allocation problem. We propose an algorithm, based on online mirror descent, that is no-regret in the stochastic i.i.d. setting and attains finite asymptotic competitive ratio in the adversarial setting (robustness). When untrusted advice about the opportunity cost of degradation is available, we propose a learning-augmented algorithm that performs well when the advice is accurate (consistency) while still retaining robustness properties when the advice is poor.
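With a negative-entropy mirror map, the online-mirror-descent machinery referenced above reduces to a multiplicative-weights update on the simplex. The toy allocation below is illustrative and omits the paper's degradation costs and learning-augmented advice:

```python
import numpy as np

def mirror_descent_step(x, grad, eta):
    """One online mirror descent step with the negative-entropy mirror map
    (multiplicative-weights update), which keeps x on the simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

# Toy allocation: split one unit of discharge across 3 price levels.
# Revenue is prices @ x, so the gradient of (negative revenue) is -prices.
prices = np.array([50.0, 80.0, 30.0])
x = np.ones(3) / 3
for _ in range(200):
    x = mirror_descent_step(x, -prices / 100.0, eta=0.1)
print(np.round(x, 3))  # mass concentrates on the highest-price level
```

The full algorithm would couple this update to an evolving opportunity-cost estimate for degradation, so the allocation stays competitive even under adversarial price sequences.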
Authors: Camblong H., Curea O., Ugartemendia J.J., Boussaada Z., Lizarralde I., Etxegarai G
This research study analyses different types of photovoltaic (PV) energy sharing in a real collective self-consumption (CSC) case in the Izarbel technological park in France. The analysis focuses primarily on the self-consumption rate (SCR) and the resulting savings. After explaining the emergence of the self-consumption concept for the integration of renewable energies, the case study is described. The PV energy is produced in the ESTIA1 building and consumed in the ESTIA1, 2 and 4 buildings. The main IoT components used to implement the CSC are smart meters and the Tecsol TICs, devices based on the LoRa protocol that retrieve production and consumption data. Then, the characteristics of PV energy sharing in France are explained, in particular the three possible types of energy sharing/allocation (static, dynamic by default and customised dynamic) and the structure of the electricity bill. Finally, the three types of sharing are compared in four scenarios (without and with a data centre, for low and high solar radiation). The results show that the dynamic allocations increase the SCR and that customised dynamic sharing increases the savings.
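The difference between static and dynamic-by-default allocation can be made concrete with a small sketch; the function and numbers below are illustrative, not the Izarbel data:

```python
def self_consumption_rate(production, consumptions, keys=None):
    """Interval-by-interval allocation of shared PV production.
    keys=None emulates 'dynamic by default': production is split in
    proportion to each building's consumption in that interval, so less
    PV is spilled than with fixed (static) allocation keys."""
    consumed = 0.0
    for p, loads in zip(production, consumptions):
        if keys is None:
            total = sum(loads)
            shares = [p * l / total if total else 0.0 for l in loads]
        else:
            shares = [p * k for k in keys]
        consumed += sum(min(s, l) for s, l in zip(shares, loads))
    return consumed / sum(production)

prod = [10.0, 10.0]               # PV production in two intervals (kWh)
loads = [(8.0, 1.0), (1.0, 8.0)]  # two buildings with opposite load profiles
print(self_consumption_rate(prod, loads, keys=(0.5, 0.5)))  # → 0.6 (static)
print(self_consumption_rate(prod, loads))                   # → 0.9 (dynamic)
```

When load profiles differ across buildings, the dynamic split tracks whoever can absorb the PV in each interval, which is exactly the SCR gain the study reports.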
Authors: Lucu M., Martinez-Laserna E., Gandiaga I., Liu K., Camblong H., Widanage W.D., Marco J
Conventional Li-ion battery ageing models, such as electrochemical, semi-empirical and empirical models, require a significant amount of time and experimental resources to provide accurate predictions under realistic operating conditions. At the same time, there is significant interest from industry in the introduction of new data collection telemetry technology. This implies the forthcoming availability of a significant amount of real-world battery operation data. In this context, the development of ageing models able to learn from in-field battery operation data is an interesting solution to mitigate the need for exhaustive laboratory testing. In a series of two papers, a data-driven ageing model is developed for Li-ion batteries under the Gaussian Process framework. A special emphasis is placed on illustrating the ability of the Gaussian Process model to learn from new data observations, providing more accurate and confident predictions, and extending the operating window of the model. This first paper focusses on the systematic modelling and experimental verification of cell degradation through calendar ageing. A specific covariance function is composed, tailored for use in a battery ageing application. Over an extensive dataset involving 32 cells tested during more than three years, different training possibilities are contemplated in order to quantify the minimal number of laboratory tests required for the design of an accurate ageing model. A model trained with only 18 tested cells achieves an overall mean absolute error of 0.53% in the prediction of the capacity curves, after being validated under a broad window of both dynamic and static temperature and SOC storage conditions.
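The kernel-composition idea can be sketched with a plain NumPy Gaussian-process regression. The particular covariance product, hyperparameters, and toy data below are illustrative assumptions, not the tailored covariance function designed in the paper:

```python
import numpy as np

def composed_kernel(t1, t2, ell=100.0, sigma=1.0):
    """Illustrative composed covariance: a smooth RBF factor times a
    dot-product (linear) factor, so predicted capacity fade can grow
    with storage time while remaining smooth."""
    d = t1[:, None] - t2[None, :]
    rbf = np.exp(-0.5 * (d / ell) ** 2)
    lin = 1.0 + (t1[:, None] * t2[None, :]) / 1e4
    return sigma ** 2 * rbf * lin

def gp_posterior_mean(t_train, y_train, t_test, noise=1e-4):
    """Standard GP posterior mean: k(x*, X) [K + noise*I]^{-1} y."""
    K = composed_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_star = composed_kernel(t_test, t_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy calendar-ageing data: capacity loss (%) vs storage time (days)
t = np.array([0.0, 50.0, 100.0, 200.0, 300.0])
loss = np.array([0.0, 0.8, 1.5, 2.6, 3.4])
mu = gp_posterior_mean(t, loss, np.array([150.0]))
print(float(mu[0]))  # smooth interpolation near the neighbouring measurements
```

Adding new observations simply extends `t_train` and `y_train`, which is the "learning from new data" property emphasised above; the posterior variance (omitted here) shrinks accordingly.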
Authors: Bokan Chen, Raiden Hasegawa, Adriaan Hilbers, Ross Koningstein, Ana Radovanović, Utkarsh Shah, Gabriela Volpato, Mohamed Ahmed, Tim Cary, Rod Frowd
Shaping multi-megawatt loads, such as data centers, impacts generator dispatch on the electric grid, which in turn affects system CO2 emissions and energy cost. Substantiating the effectiveness of prevalent load shaping strategies, such as those based on grid-level average carbon intensity, locational marginal price, or marginal emissions, is challenging due to the lack of detailed counterfactual data required for accurate attribution. This study uses a series of calibrated granular ERCOT day-ahead direct current optimal power flow (DC-OPF) simulations for counterfactual analysis of a broad set of load shaping strategies on grid CO2 emissions and cost of electricity. In terms of annual grid level CO2 emissions reductions, LMP-based shaping outperforms other common strategies, but can be significantly improved upon. Examining the performance of practicable strategies under different grid conditions motivates a more effective load shaping approach: one that "cherry-picks" a daily strategy based on observable grid signals and historical data. The cherry-picking approach to power load shaping is applicable to any large flexible consumer on the electricity grid, such as data centers, distributed energy resources and Virtual Power Plants (VPPs).
Authors: Wenqian Jiang, Aditya Rangarajan, Line Roald
An increasing share of consumers care about the carbon footprint of their electricity. This paper analyzes a method to integrate consumer carbon preferences in the electricity market-clearing by introducing consumer-based carbon costs and a carbon allocation mechanism. Specifically, consumers submit not only bids for power but also assign a cost to the carbon emissions incurred by their electricity use. The carbon allocation mechanism then assigns emissions from generation to consumers to minimize overall carbon costs. Our analysis starts from a previously proposed centralized market clearing formulation that maximizes social welfare under consideration of generation costs, consumer utility, and consumer carbon costs. We then derive an equivalent equilibrium formulation that incorporates a carbon allocation problem and gives rise to a set of carbon-adjusted electricity prices for both consumers and generators. We prove that the carbon-adjusted prices are higher for low-emitting generators and consumers with high carbon costs. Further, we prove that this new paradigm satisfies the same desirable market properties as standard electricity markets based on locational marginal prices, namely revenue adequacy and individual rationality, and demonstrate that a carbon tax on generators is equivalent to imposing a uniform carbon cost on consumers. Using a simplified three-bus system and the RTS-GMLC system, we illustrate that consumer-based carbon costs contribute to greener electricity market clearing both through generation redispatch and demand reductions.
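The redispatch effect of consumer-side carbon costs can be seen in a toy merit-order clearing. This two-generator sketch is illustrative and far simpler than the paper's equilibrium formulation and carbon allocation mechanism:

```python
def clear_with_carbon(demand, gens, carbon_cost):
    """Merit-order clearing where each generator's effective offer is its
    energy cost plus the consumer's carbon cost times its emission rate.
    gens: list of (capacity_MW, cost_per_MWh, emissions_tCO2_per_MWh)."""
    order = sorted(gens, key=lambda g: g[1] + carbon_cost * g[2])
    dispatch, remaining = [], demand
    for cap, cost, em in order:
        q = min(cap, remaining)
        dispatch.append((q, cost, em))
        remaining -= q
    return dispatch

def total_emissions(dispatch):
    return sum(q * em for q, _, em in dispatch)

gens = [(100.0, 20.0, 0.9),   # cheap but high-emitting
        (100.0, 35.0, 0.0)]   # pricier but carbon-free
no_carbon = clear_with_carbon(120.0, gens, carbon_cost=0.0)
with_carbon = clear_with_carbon(120.0, gens, carbon_cost=25.0)
print(total_emissions(no_carbon), total_emissions(with_carbon))  # → 90.0 18.0
```

A consumer carbon cost of $25/tCO2 makes the high-emitting unit's effective offer ($42.5/MWh) exceed the clean unit's, flipping the dispatch order; this is the redispatch channel for greener clearing noted above, alongside demand reduction.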
Authors: Yuanwei Liu, Hao Jiang, Xiaoxia Xu, Zhaolin Wang, Jia Guo, Chongjun Ouyang, Xidong Mu, Zhiguo Ding, Arumugam Nallanathan, George K. Karagiannidis, Robert Schober
Pinching antenna systems (PASS) present a breakthrough among the flexible-antenna technologies, and distinguish themselves by facilitating large-scale antenna reconfiguration, line-of-sight creation, scalable implementation, and near-field benefits, thus bringing wireless communications from the last mile to the last meter. A comprehensive tutorial is presented in this paper. First, the fundamentals of PASS are discussed, including PASS signal models, hardware models, power radiation models, and pinching antenna activation methods. Building upon this, the information-theoretic capacity limits achieved by PASS are characterized, and several typical performance metrics of PASS-based communications are analyzed to demonstrate its superiority over conventional antenna technologies. Next, the pinching beamforming design is investigated. The corresponding power scaling law is first characterized. For the joint transmit and pinching design in the general multiple-waveguide case, 1) a pair of transmission strategies is proposed for PASS-based single-user communications to validate the superiority of PASS, namely sub-connected and fully connected structures; and 2) three practical protocols are proposed for facilitating PASS-based multi-user communications, namely waveguide switching, waveguide division, and waveguide multiplexing. A possible implementation of PASS in wideband communications is further highlighted. Moreover, the channel state information acquisition in PASS is elaborated with a pair of promising solutions. To overcome the high complexity and suboptimality inherent in conventional convex-optimization-based approaches, machine-learning-based methods for operating PASS are also explored, focusing on selected deep neural network architectures and training algorithms. Finally, several promising applications of PASS in next-generation wireless networks are highlighted.
Authors: Zixin Jiang, Ruizhi Song, Guowen Li, Yuhang Zhang, Zheng O'Neill, Xuezheng Wang, Judah Goldfeder, Bing Dong
Modern buildings are increasingly interconnected with occupancy, heating, ventilation, and air-conditioning (HVAC) systems, distributed energy resources (DERs), and power grids. Modeling, control, and optimization of such multi-domain systems play a critical role in achieving building-sector decarbonization. However, most existing tools lack scalability and physical consistency for addressing these complex, multi-scale ecosystem problems. To bridge this gap, this study presents BESTOpt, a modular, physics-informed machine learning (PIML) framework that unifies building applications, including benchmarking, evaluation, diagnostics, control, optimization, and performance simulation. The framework adopts a cluster-domain-system/building-component hierarchy and a standardized state-action-disturbance-observation data typology. By embedding physics priors into data-driven modules, BESTOpt improves model accuracy and physical consistency under unseen conditions. Case studies on single-building and cluster scenarios demonstrate its capability for multi-level centralized and decentralized control. Looking ahead, BESTOpt lays the foundation for an open, extensible platform that accelerates interdisciplinary research toward smart, resilient, and decarbonized building ecosystems.
Authors: Maria Vabson, Muhy Eddin Zater, Amir Sajadi, Kyri Baker, Bri-Mathias Hodge
Data centers are growing rapidly, creating a pressing need for critical infrastructure buildout to support these large, resource-intensive loads. Their immense consumption of electricity and, often, freshwater continues to stress an already constrained and aging power grid and water resources. This paper presents a comprehensive modeling approach to determine optimal locations for constructing such facilities by quantifying their resource use and minimizing associated costs. The interdisciplinary modeling approach incorporates a number of factors, including the power grid, telecommunications, climate, water use, and collocated generation potential. This work establishes the base model, whose functionality is shown through several test cases focusing on carbon-free generation collocation at the county level in the United States. The results suggest that while capital costs are the biggest driver, a longer future outlook and allowing more variable generation collocation lead the model to choose sites with higher renewable potential.
Authors: Xueqing Gao, Jun Zhang, Tao Li, Mingming Zhang
This paper investigates the design and analysis of a novel grid-forming (GFM) control method for grid-connected converters (GCCs). The core novelty lies in a virtual flux observer-based synchronization and load angle control method. The terminal voltage of the converter is directly regulated to provide voltage-source behavior. The control parameters are designed for decoupling and pole placement. The proposed method exhibits strong robustness in stability and dynamic performance across varying and uncertain grid strengths. The robust control performance of the proposed method is first demonstrated by small-signal analysis, then validated by experiments on a 20 kVA power conversion system.
Authors: Antonio Bracale, Pasquale De Falco, Piotr Kuwałek, Grzegorz Wiczyński
The fundamental frequency is one of the parameters that define power quality. Correctly determining this parameter under the conditions that prevail in modern power grids is crucial. Diagnostic purposes often require an efficient estimation of this parameter within short time windows. Therefore, this article presents the results of numerical simulation studies that allow the assessment of errors in various fundamental frequency estimation methods, including the standard IEC 61000-4-30 method, when the analyzed signal has a form similar to that found in modern power grids. For the purposes of this study, a test signal was adopted recreating the states of the power grid, including the simultaneous occurrence of voltage fluctuations and distortions. Conclusions are presented based on conducted research.
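A simple cycle-counting estimator in the spirit of the IEC 61000-4-30 definition (counting whole cycles between zero crossings over a fixed window) can be sketched as follows; the test signal, its distortion level, and the tolerances are illustrative:

```python
import numpy as np

def zero_crossing_frequency(x, fs):
    """Estimate fundamental frequency from rising zero crossings, with
    linear interpolation between samples for sub-sample crossing times."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    if len(idx) < 2:
        raise ValueError("need at least two rising zero crossings")
    # Sub-sample crossing instants via linear interpolation
    t = (idx - x[idx] / (x[idx + 1] - x[idx])) / fs
    return (len(t) - 1) / (t[-1] - t[0])  # cycles / elapsed time

fs = 10000.0
time = np.arange(0.0, 1.0, 1.0 / fs)
# Off-nominal 50.2 Hz fundamental plus a small 5th-harmonic distortion
signal = np.sin(2 * np.pi * 50.2 * time) + 0.05 * np.sin(2 * np.pi * 251.0 * time)
print(round(zero_crossing_frequency(signal, fs), 2))  # ≈ 50.2
```

Even this mild harmonic shifts individual crossing instants, illustrating why the article compares estimation methods under signals that recreate realistic grid states rather than clean sinusoids.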
Authors: Antonio Bracale, Jakub Janowicz, Piotr Kuwałek, Grzegorz Wiczyński
The most common instruments currently measuring active/reactive energy and power quality indicators are smart energy meters. Unfortunately, the verification of such meters is currently performed under ideal conditions or with simple signal models, which do not recreate actual states occurring in the power grid and do not ensure the verification of the properties of their signal chains. This paper presents challenges in the proper metrological verification of smart energy meters. It presents existing legal and normative requirements and scientific research directions regarding these meters. Selected test results are presented, which show that although the tested meters meet the normative and legal requirements because they have been approved for sale, numerous imperfections in the signal and measurement chains of the analyzed instruments are revealed for the selected test signal. On the basis of the presented research results, further directions of research in the field of smart energy meters have been determined.
Authors: Moisés J. B. B. Davi (1), Felipe V. Lopes (2), Vinícius A. Lacerda (3), Mário Oleskovicz (1), Oriol Gomis-Bellmunt (3) ((1) University of São Paulo, Department of Electrical Engineering, São Carlos, Brazil, (2) Federal University of Paraiba, Department of Electrical Engineering, João Pessoa, Brazil, (3) CITCEA-UPC, Universitat Politècnica de Catalunya, Spain)
The ongoing global transition towards low-carbon energy has propelled the integration of offshore wind farms, which, when combined with Modular Multilevel Converter-based High-Voltage Direct Current (MMC-HVDC) transmission, present unique challenges for power system protection. In collector cables connecting wind turbines to the offshore MMC, both ends are supplied by Inverter-Based Resources (IBRs), which modify the magnitude and characteristics of fault currents. In this context, this paper investigates the limitations of conventional differential protection schemes under such conditions and compares them with enhanced strategies that account for sequence components. Using electromagnetic transient simulations of a representative offshore wind farm modeled in PSCAD/EMTDC software, internal and external fault scenarios are assessed, varying fault types and resistances. The comparative evaluation provides insights into the sensitivity and selectivity of differential protection and guides a deeper conceptual understanding of the evolving protection challenges inherent to future converter-dominated grids.
Authors: Francesca Rossi, Juan Carlos Olives-Camps, Eduardo Prieto-Araujo, Oriol Gomis-Bellmunt
This study proposes a control strategy to ensure the safe operation of modern power systems with high penetration of inverter-based resources (IBRs) within an optimal operation framework. The objective is to obtain operating points that satisfy the optimality conditions of a predefined problem while guaranteeing small-signal stability. The methodology consists of two stages. First, an offline analysis of a set of operating points is performed to derive a data-driven regression-based expression that captures a damping-based stability index as a function of the operating conditions. Second, an Online Feedback Optimization (OFO) controller is employed to drive the system toward an optimal operating point while maintaining a secure distance from the instability region. The proposed strategy is evaluated on an academic test case based on a modified version of the IEEE 9-bus system, in which synchronous generators are replaced by IBRs operating under both grid-following and grid-forming control modes. The results demonstrate the effectiveness of the method and are discussed in detail.
Authors: Francesca Rossi, Mauro Garcia Lorenzo, Eduardo Iraola de Acevedo, Elia Mateu Barriendos, Vinicius Albernaz Lacerda, Francesc Lordan-Gomis, Rosa Badia, Eduardo Prieto-Araujo
The increasing penetration of inverter-based resources (IBRs) is fundamentally reshaping power system dynamics and creating new challenges for stability assessment. Data-driven approaches, and in particular machine learning models, require large and representative datasets that capture how system stability varies across a wide range of operating conditions and control settings. This paper presents an open-source, high-performance computing framework for the systematic generation of such datasets. The proposed tool defines a scalable operating space for large-scale power systems, explores it through an adaptive sampling strategy guided by sensitivity analysis, and performs small-signal stability assessments to populate a high-information-content dataset. The framework efficiently targets regions near the stability margin while maintaining broad coverage of feasible operating conditions. The workflow is fully implemented in Python and designed for parallel execution. The resulting tool enables the creation of high-quality datasets that support data-driven stability studies in modern power systems with high IBR penetration.
Authors: Bowen Ou, Bin Wang, Slava Maslennikov, Hanchao Liu, Jim Follum
Phasor Measurement Units (PMUs) convert high-speed waveform data into low-speed phasor data, which are fundamental to wide-area monitoring and control in power systems, with oscillation detection and localization among their most prominent applications. However, representing electrical waveform signals with oscillations using PMU phasors is effective only for low-frequency oscillations. This paper investigates the root causes of this limitation, focusing on errors introduced by Discrete Fourier Transform (DFT)-based signal processing, in addition to the attenuation effects of anti-aliasing filters, and the impact of low reporting rates. To better represent and estimate waveform signals with oscillations, we propose a more general signal model and a multi-step estimation method that leverages one-cycle DFT, the Matrix Pencil Method, and the Least Squares Method. Numerical experiments demonstrate the superior performance of the proposed signal model and estimation method. Furthermore, this paper reveals that the phasor concept itself, not merely its PMU implementation, can become invalid for waveform signals with high-frequency oscillations characterized by asymmetric sub- and super-synchronous components. These findings highlight the fundamental limitations of PMU data and the phasor concept, and emphasize the need to rely on waveform data for analyzing high-frequency oscillations in modern power systems.
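As a rough illustration of the estimation chain's first step, the sketch below computes a one-cycle DFT phasor in Python and shows how an inter-harmonic oscillation contaminates the estimate. The 60 Hz base signal, the 32-sample window, and the 95 Hz component are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed parameters: 60 Hz nominal frequency, 32 samples per cycle.
F0, N = 60.0, 32
fs = F0 * N
t = np.arange(N) / fs

# Pure fundamental: peak amplitude 1.0, phase 30 degrees.
x = np.cos(2 * np.pi * F0 * t + np.deg2rad(30.0))

# One-cycle DFT phasor estimate (peak-amplitude convention).
phasor = (2.0 / N) * np.sum(x * np.exp(-1j * 2 * np.pi * F0 * t))
print(abs(phasor), np.rad2deg(np.angle(phasor)))  # ~1.0, ~30.0 deg

# Adding a 95 Hz inter-harmonic oscillation leaks into the window,
# illustrating why DFT-based phasors degrade for such signals.
x2 = x + 0.1 * np.cos(2 * np.pi * 95.0 * t)
phasor2 = (2.0 / N) * np.sum(x2 * np.exp(-1j * 2 * np.pi * F0 * t))
```

For a signal containing only the fundamental, the one-cycle DFT recovers the phasor exactly; the inter-harmonic term shifts the estimate by a few percent, which is the kind of error the paper's multi-step method is designed to overcome.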
Authors: Eder Baron-Prada, Adolfo Anta, Florian Dörfler
This paper presents a novel approach to stability analysis for grid-connected converters utilizing Scaled Relative Graphs (SRG). Our method effectively decouples grid and converter dynamics, thereby establishing a comprehensive and efficient framework for evaluating closed-loop stability. Our analysis accommodates both linear and non-linear loads, enhancing its practical applicability. Furthermore, we demonstrate that our stability assessment remains unaffected by angular variations resulting from dq-frame transformations, significantly increasing the method's robustness and versatility. The effectiveness of our approach is validated in several simulation case studies, which illustrate its broad applicability in modern power systems.
Authors: Peng Wang, Zhengmao Li, Luis Badesa
In low-carbon grids, system flexibility can be enhanced through mechanisms such as Demand Response (DR), enabling the efficient utilization of renewable energy. However, as Synchronous Generators (SGs) are being replaced by renewable energy sources characterized by Inverter-Based Resources (IBR), system stability is severely affected. Due to the limited overload capability of IBRs, their Short-Circuit Current (SCC) contribution is much smaller than that of SGs. As a result, protection devices may fail to trip during faults. Consequently, the remaining SGs play a key role in providing sufficient SCC. Since the commitment of SGs is closely related to system loading conditions, DR can indirectly affect their SCC provision, a relationship that has not yet been investigated in the literature. Therefore, this paper incorporates both DR and SCC constraints into a unit commitment problem and conducts case studies on an IEEE 30-bus system. The results show that although DR can reduce total costs by adjusting power demand, it may also lead to inadequate SCC levels. Nevertheless, when flexible loads are properly coordinated with SCC requirements, the total cost increases by only 0.3%, which is significantly lower than the cost of system dispatch without DR. This demonstrates that DR can facilitate stable system operation in a cost-effective manner.
Authors: Ruixing Ren
The dense concentration of low-altitude, slow-speed, and small-size targets in the complex low-altitude environment poses significant security challenges, including failures in continuous wide-area sensing and ambiguous target intent, which existing regulatory frameworks struggle to address. Integrated sensing and communication (ISAC), a hallmark of next-generation mobile communication, offers a transformative approach to low-altitude security governance. By leveraging existing cellular infrastructure and spectrum resources, ISAC enables the construction of a seamless wide-area sensing network, supports intelligent feature extraction and intent inference, facilitates real-time collaborative decision-making, and establishes a dynamic trust authentication framework. This article systematically reviews the technical system, analyzes the security challenges, forecasts the enabling value of ISAC, and discusses the resulting open problems and challenges, thereby laying a foundation for future research and industrial implementation.
Authors: Milad Hasanzadeh, Amin Kargarian, Javad Lavaei
This paper presents a fractional approximation of the AC optimal power flow (AC OPF) problem based on an all-pass approximation of the exponential power flow kernel. The classical AC OPF relies on trigonometric coupling between bus voltage phasors, which yields a nonconvex program with oscillatory derivatives that can slow, or in some cases destabilize, interior-point methods. We replace the trigonometric terms with an all-pass fractional (APF) approximation whose real and imaginary components act as smooth surrogates for the cosine and sine functions, and we introduce a pre-rotation to shift the argument of the approximation toward its most accurate region, ensuring that the reformulated power flow model preserves physical loss behavior, maintains the symmetry of the classical kernels, and improves the conditioning of the Jacobian and Hessian matrices. The proposed APF OPF formulation remains nonconvex, as in the classical model, but it eliminates trigonometric evaluations and empirically produces larger and more stable Newton steps under standard interior-point solvers. Numerical results on more than 25 IEEE and PGLib test systems ranging from 9 to 10,000 buses demonstrate that the APF OPF model achieves solutions with accuracy comparable to that of the classical formulation while reducing solver times, indicating a more solver-friendly nonconvex representation of AC OPF. All code, functions, verification scripts, and generated results are publicly available on \href{this https URL}{GitHub}, along with a README describing how to run and reproduce the experiments.
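The abstract does not give the exact APF kernel, but the general idea can be sketched with the first-order all-pass Padé approximant of exp(jθ) as a stand-in: a rational function with exactly unit magnitude for real θ, whose real and imaginary parts serve as smooth, non-trigonometric surrogates for cos θ and sin θ. The paper's actual kernel and pre-rotation may differ.

```python
import numpy as np

# Illustrative first-order all-pass (Pade) surrogate for exp(j*theta):
#   exp(j*theta) ~ (1 + j*theta/2) / (1 - j*theta/2)
# Numerator and denominator are complex conjugates for real theta, so
# the magnitude is exactly 1, mimicking cos^2 + sin^2 = 1.
def apf(theta):
    z = 1j * theta / 2.0
    return (1.0 + z) / (1.0 - z)

# Typical voltage angle differences in OPF are small (here +/- 0.3 rad).
theta = np.linspace(-0.3, 0.3, 7)
exact = np.exp(1j * theta)
approx = apf(theta)

print(np.max(np.abs(approx - exact)))       # small near theta = 0
print(np.max(np.abs(np.abs(approx) - 1.0))) # unit magnitude, as all-pass
```

A pre-rotation, as the abstract describes, would shift θ toward the region where such an approximant is most accurate before evaluating it.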
Authors: Yogesh Pipada Sunil Kumar, S. Ali Pourmousavi, Jon A.R. Liisberg, Julian Lesmos-Vinasco
Reliable forecasting of prosumer flexibility is critical for demand response aggregators participating in frequency-controlled ancillary services markets, where strict reliability requirements such as the P90 standard are enforced. Limited historical data, dependence on exogenous factors, and heterogeneous prosumer behaviour introduce significant epistemic uncertainty, making deterministic or poorly calibrated probabilistic models unsuitable for market bidding. This paper proposes a scalable uncertainty quantification framework that integrates Monte Carlo dropout (MCD) with conformal prediction (CP) to produce calibrated, finite-sample prediction intervals for aggregated prosumer flexibility. The proposed framework is applied to a behind-the-meter aggregator participating in the Danish manual frequency restoration reserve capacity market. A large-scale synthetic dataset is generated using a modified industry-grade home energy management system, combined with publicly available load, solar, price, activation, and device-level data. The resulting machine learning surrogate model captures aggregate prosumer price responsiveness and provides uncertainty-aware estimates suitable for market bidding. Multiple multivariate CP strategies are evaluated and benchmarked against conventional MCD-based methods. Results show that standalone MCD systematically overestimates available flexibility and violates P90 compliance, whereas the proposed MCD-CP framework achieves reliable coverage with controlled conservatism. When embedded in an aggregator bidding model, the conformalised methods substantially reduce overbidding risk and achieve up to 70% of the perfect-information profit while satisfying regulatory reliability constraints, providing a practical, computationally efficient, and market-compliant solution for aggregator flexibility forecasting under uncertainty.
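A minimal sketch of the MCD-plus-split-conformal idea on synthetic data follows; the toy model, noise levels, and P90-style coverage target are assumptions for illustration, not the paper's surrogate or market data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "flexibility" y depends on a feature x plus noise.
def draw(n):
    x = rng.uniform(0, 10, n)
    return x, 2.0 * x + rng.normal(0, 1.0, n)

# Stand-in for MC-dropout: T stochastic forward passes of a noisy model,
# averaged to a point prediction (real MCD would use dropout at test time).
def mcd_predict(x, T=100):
    passes = 2.0 * x[:, None] + rng.normal(0, 0.3, (len(x), T))
    return passes.mean(axis=1)

x_cal, y_cal = draw(500)
x_test, y_test = draw(2000)

# Split conformal calibration at a P90-style level (alpha = 0.10):
# the residual quantile on held-out data widens MCD's point forecast
# into an interval with a finite-sample coverage guarantee.
alpha = 0.10
scores = np.abs(y_cal - mcd_predict(x_cal))
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

covered = np.abs(y_test - mcd_predict(x_test)) <= q
print(covered.mean())  # close to the 0.90 target
```

The point of the combination is visible here: MCD's own spread (0.3 across passes) understates the true residual scale (1.0), and the conformal step repairs the calibration.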
Authors: Sharaf K. Magableh, Caisheng Wang, Oraib Dawaghreh
The rapid growth of artificial intelligence (AI)-driven data centers is reshaping electricity demand patterns, introducing fast, multi-gigawatt load ramps that challenge the stability and resilience of modern power systems. Traditional resilience frameworks focus mainly on physical outages and largely overlook these emerging digital-era disturbances. This paper proposes a unified two-stage, risk-aware distributionally robust optimization (DRO)-MILP framework that coordinates the pre-allocation and post-event dispatch of Flexible Capacity Modules (FCMs), including BESS, fast-ramping generation, demand response, and potential long-duration storage. Stage I optimally positions FCMs using DRO with CVaR to hedge against uncertain AI load surges. Stage II models real-time stabilization following stochastic demand-shock scenarios, minimizing imbalance, unserved energy, and restoration penalties. The framework is designed for application on the IEEE 33-bus system and can be scaled to larger IEEE test feeders capable of representing AI-scale loads. This work contributes a scalable planning tool for resilient, AI-integrated distribution grids.
Authors: Yogesh Pipada Sunil Kumar, S. Ali Pourmousavi, Jon A.R. Liisberg, Julian Lesmos-Vinasco
The ongoing decarbonisation of power systems is driving an increasing reliance on distributed energy resources, which introduces complex and nonlinear interactions that are difficult to capture in conventional optimisation models. As a result, machine learning based surrogate modelling has emerged as a promising approach, but integrating machine learning models such as ReLU deep neural networks (DNNs) directly into optimisation often results in nonconvex and computationally intractable formulations. This paper proposes a linear programming (LP) reformulation for a class of convexified ReLU DNNs with non-negative weight matrices beyond the first layer, enabling a tight and tractable embedding of learned surrogate models in optimisation. We evaluate the method using a case study on learning the prosumer's responsiveness within an aggregator bidding problem in the Danish tertiary capacity market. The proposed reformulation is benchmarked against state-of-the-art alternatives, including piecewise linearisation (PWL), MIP-based embedding, and other LP relaxations. Across multiple neural network architectures and market scenarios, the convexified ReLU DNN achieves solution quality comparable to PWL and MIP-based reformulations while significantly improving computational performance and preserving model fidelity, unlike penalty-based reformulations. The results demonstrate that convexified ReLU DNNs offer a scalable and reliable methodology for integrating learned surrogate models in optimisation, with applicability to a wide range of emerging power system applications.
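A minimal sketch of why the LP embedding can be tight for ReLU networks with non-negative weights beyond the first layer: in a minimization, the ReLU h = max(0, z) can be relaxed to the linear constraints h ≥ z and h ≥ 0, and non-negative downstream weights make it optimal to push h to its lower envelope, recovering the exact network output. The one-hidden-layer network, box domain, and use of SciPy's linprog below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical 1-hidden-layer ReLU net y = w2 @ relu(W1 x + b1) + b2
# with non-negative weights after the first layer.
d, m = 2, 4
W1 = rng.normal(size=(m, d))
b1 = rng.normal(size=m)
w2 = rng.uniform(0.1, 1.0, size=m)  # non-negative beyond first layer
b2 = 0.5

def net(x):
    return w2 @ np.maximum(0.0, W1 @ x + b1) + b2

# LP over variables v = [x, h]: minimize w2 @ h subject to
# h >= W1 x + b1 (rewritten as W1 x - h <= -b1), h >= 0, x in a box.
c = np.concatenate([np.zeros(d), w2])
A_ub = np.hstack([W1, -np.eye(m)])
b_ub = -b1
bounds = [(-1.0, 1.0)] * d + [(0.0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
lp_min = res.fun + b2

# Brute-force check: grid the box [-1, 1]^2 and minimize the true net.
grid = np.linspace(-1, 1, 201)
brute = min(net(np.array([a, b])) for a in grid for b in grid)
print(lp_min, brute)  # the LP matches the true minimum closely
```

Because w2 is strictly positive here, any feasible h above max(0, W1x + b1) only increases the objective, so the LP optimum coincides with the minimum of the actual network, with no integer variables needed.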
Authors: Hyeongon Park, Daniel K. Molzahn, Rahul K. Gupta
Power distribution networks are increasingly hosting controllable and flexible distributed energy resources (DERs) that, when aggregated, can provide ancillary support to transmission systems. However, existing aggregation schemes often ignore the ramping constraints of these DERs, which can render them impractical in real deployments. This work proposes a ramping-aware flexibility aggregation scheme, computed at the transmission-distribution boundary, that explicitly accounts for DER ramp limits and yields flexibility envelopes that are provably disaggregable. To further enhance the attainable flexibility region, we introduce a novel pre-ramping strategy, which proactively adjusts resource operating points to enlarge the aggregated flexibility envelope while preserving both network feasibility and disaggregation guarantees. The proposed method demonstrates a 5.2% to 19.2% improvement in flexibility relative to the baseline model, depending on system conditions. We validate the scheme on an IEEE 33-bus distribution system and provide formal proofs showing that both aggregation strategies are disaggregable for all feasible trajectories within the aggregate flexibility envelope.
Authors: Maiken Borud Omtveit, Qian Long, Valentin Chabaud, Marte Ruud-Olsen, Steinar Halsne, Tor-Christian Ystgaard
This paper presents an innovative offshore solution in which oil & gas platform clusters are powered by a wind farm and a hydrogen hub. The results show a feasible off-grid design as an alternative to conventional electrification solutions. To address the challenges of designing and operating such a system, a power system model of the equipment and control was developed in a power system simulator called Process Power Simulator (PPSim). Power fluctuations in the wind farm are modelled using a state-of-the-art method encompassing turbulence and wakes. Various operation scenarios were used to evaluate the system design and determine appropriate equipment sizes. The battery energy storage system (BESS) is a particularly expensive component to overdimension. The BESS power rating and energy capacity were found by running a combination of scenarios with extreme and natural wind variations, as well as contingencies. The control strategy and ramp rates of the electrolyzers have a significant impact on both system performance and design. A ramp rate on the order of seconds, as opposed to minutes, decreases the required BESS size by 60-70%. Choosing synchronized control of the electrolyzers can further reduce the BESS size by 15-20%. The simulations also revealed challenges in achieving hydrogen self-sufficiency, and potential design improvements are suggested.
Authors: Felix Wald, Amir Sajadi, Barry Mather, Giovanni De Carne
This paper presents an integration study for a power electronic-based fast-frequency response technology: an asynchronous grid connection operating as an aggregator for behind-the-meter resources and distributed generators. Both technical feasibility and techno-economic viability studies are presented. The dynamic performance of the fast-frequency response enabled by the asynchronous grid connection is validated with Power Hardware-in-the-Loop experiments and transferred to an IEEE 9-bus system in DIgSILENT PowerFactory for dynamic stability analysis. We demonstrate that droop-based control enhancements to the local distributed generators could allow their aggregation to provide grid-supporting functionalities and participate in the market for ancillary services. To this end, we performed a long-term simulation embedding the system within the ancillary service market framework of PJM. The fast-frequency response regulation is subsequently used to calculate the potential revenue and project the results over a 15-year investment horizon. Finally, the techno-economic analysis concludes with recommendations for enhancements to unlock the full potential of distributed generators at the technical and regulatory levels.
Authors: Khaled Bin Walid, Feng Ye, Jiaxiang Ji, Ahmed Aziz Ezzat, Travis Miles, Yazhou Leo Jiang
This study investigates the economic and reliability benefits of improved offshore wind forecasting for grid operations along the U.S. East Coast. We introduce and evaluate a state-of-the-art, machine-learning-based offshore wind forecasting model tailored for this region by integrating its improved forecasts into a dynamic reserve procurement framework aligned with New York Independent System Operator (NYISO) practices to evaluate their economic value. To determine system-wide reserve needs, plant-specific reserves are aggregated. However, conventional methods overlook spatial correlation across sites, often leading to over-procurement. To address this, we propose a risk-based reserve aggregation technique that leverages spatial diversification. Additionally, we evaluate the reliability improvements enabled by the enhanced offshore wind forecast. To evaluate the operational impact, we propose an operational resource adequacy framework that captures uncertainty from forecast errors and grid conditions. Using this framework, we quantify key reliability metrics under different offshore wind forecast scenarios. Using New York State as a case study, we find that the improved forecast enables more accurate reserve estimation, reducing procurement costs by 5.53% in the 2035 scenario compared to a well-validated numerical weather prediction model. Applying the risk-based aggregation further reduces total production costs by 7.21%. From a reliability perspective, the improved forecasts lower the system Loss of Load Probability (LOLP) by approximately 19% in the 2035 scenario, highlighting their potential to enhance system reliability during real-time grid operations.
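A minimal sketch of the diversification effect behind risk-based reserve aggregation: sizing reserves on the quantile of the aggregate forecast error, rather than summing per-site quantiles, reduces the total requirement whenever site errors are imperfectly correlated. The three-site Gaussian error model and correlation values below are illustrative assumptions, not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forecast errors (MW) at three offshore sites with partial
# spatial correlation; numbers are illustrative only.
n = 100_000
corr = np.array([[1.0, 0.4, 0.3],
                 [0.4, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])
cov = corr * 25.0  # 5 MW error std per site
err = rng.multivariate_normal(np.zeros(3), cov, size=n)

q = 0.975  # reserve sized to cover 97.5% of under-forecast events

# Conventional: size reserves per plant, then sum the requirements.
per_site = np.quantile(err, q, axis=0).sum()

# Risk-based: size reserves on the aggregate error directly,
# capturing spatial diversification across the sites.
aggregate = np.quantile(err.sum(axis=1), q)

print(per_site, aggregate)  # aggregate requirement is the smaller one
```

The gap between the two numbers is exactly the over-procurement that a correlation-blind aggregation would carry into the market.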