I am open to collaboration and happy to discuss research ideas; if you're interested in my work, feel free to contact me. I am also seeking an industrial internship (open to positions worldwide) for Summer 2026.
My PhD research focuses on signal-informed sensing and learning: incorporating the intrinsic properties of physical signals when building deep learning solutions for sensing tasks. In the VLM era of deep learning, data, operator design, and cross-modality transfer are central; however, directly migrating practices from image, speech, and text to wireless or wearable data often degrades performance and explainability due to temporal dynamics, complex-valued representations, and partially understood sensing artifacts. My work therefore follows the trajectory below, with emphasis on mmWave radar, acoustic sensing, and human-computer interaction.
Data in signal-informed sensing and learning: Wireless and wearable sensing face data scarcity, noise, and privacy constraints. Key questions: How can we use existing open image, speech, and text datasets to help synthesize wireless data in a way that supports multiple downstream tasks? How can we leverage unlabeled or weakly labeled streams while preserving privacy? How can we design self-supervised objectives that align with physical priors (such as phase, Doppler, and array geometry) instead of purely visual heuristics?
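To make "physical priors instead of purely visual heuristics" concrete, here is a minimal, hypothetical PyTorch sketch: the self-supervised views are generated by a global phase rotation, which changes the raw IQ samples but leaves the range-Doppler magnitude unchanged, so the contrastive objective is anchored in radar physics rather than image-style augmentation. All names (`RadarEncoder`, `doppler_preserving_view`, `nce_loss`), shapes, and the synthetic data are illustrative, not taken from any specific paper.

```python
# Hypothetical sketch: a self-supervised objective whose augmentation respects radar
# physics (a global phase rotation preserves the range-Doppler magnitude), rather than
# borrowing visual heuristics such as color jitter or random cropping.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadarEncoder(nn.Module):
    """Tiny CNN over range-Doppler magnitude maps."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):                       # x: (B, 1, doppler, range)
        return F.normalize(self.net(x), dim=-1)

def range_doppler(iq):                          # iq: (B, chirps, samples), complex
    rd = torch.fft.fft(torch.fft.fft(iq, dim=-1), dim=-2)  # range FFT, then Doppler FFT
    return rd.abs().log1p().unsqueeze(1)        # log-magnitude map, (B, 1, chirps, samples)

def doppler_preserving_view(iq):
    """Random global phase rotation: changes raw IQ, leaves |range-Doppler| intact."""
    phase = torch.rand(iq.shape[0], 1, 1) * 2 * torch.pi
    return iq * torch.exp(1j * phase)

def nce_loss(z1, z2, tau=0.1):
    logits = z1 @ z2.t() / tau                  # positives sit on the diagonal
    return F.cross_entropy(logits, torch.arange(z1.shape[0]))

# Toy usage with synthetic IQ frames.
iq = torch.randn(8, 64, 128, dtype=torch.complex64)
enc = RadarEncoder()
z1 = enc(range_doppler(doppler_preserving_view(iq)))
z2 = enc(range_doppler(doppler_preserving_view(iq)))
loss = nce_loss(z1, z2)
loss.backward()
```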
Operators in signal-informed sensing and learning: Generic architectures designed for images, speech, and text often ignore signal intrinsics such as phase continuity, complex arithmetic, and array manifolds. Meanwhile, classical signal processing retains interpretability that end-to-end deep learning may sacrifice for accuracy. Key questions: How can we co-design modules that embed physical operators as learnable yet interpretable blocks? How can we retain interpretability and controllable operating points while improving robustness under challenging conditions such as low SNR, multipath, and hardware drift? How can we make such modules reusable across tasks and sensors rather than task-specific?
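As one illustration of "physical operators as learnable yet interpretable blocks", the sketch below unrolls a classical sparse-recovery solver (in the spirit of LISTA-style deep unrolling) so that every learned parameter, a step size or a soft threshold, still corresponds to a quantity in the original algorithm. This is a generic toy example under assumed shapes, not the RF-LEGO design.

```python
# Hypothetical sketch of deep unrolling: each layer is one ISTA iteration, with the
# step size and soft threshold learned from data while the physical forward model A
# stays fixed. Names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledISTA(nn.Module):
    """Recover a sparse x from y ~ A x by unrolling T ISTA iterations."""
    def __init__(self, A, T=10):
        super().__init__()
        self.register_buffer("A", A)                         # known measurement model
        self.step = nn.Parameter(torch.full((T,), 0.1))      # learnable step sizes
        self.thresh = nn.Parameter(torch.full((T,), 0.05))   # learnable thresholds
        self.T = T

    def forward(self, y):                                    # y: (B, m)
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for t in range(self.T):
            grad = (x @ self.A.t() - y) @ self.A             # gradient of 0.5 * ||A x - y||^2
            x = x - self.step[t] * grad
            x = torch.sign(x) * torch.relu(x.abs() - self.thresh[t])  # soft threshold
        return x

# Toy usage: recover a sparse vector from noisy linear measurements.
torch.manual_seed(0)
A = torch.randn(32, 128) / 32**0.5
x_true = torch.zeros(4, 128)
x_true[:, ::16] = 1.0
y = x_true @ A.t() + 0.01 * torch.randn(4, 32)
model = UnrolledISTA(A)
loss = F.mse_loss(model(y), x_true)
loss.backward()
```

Because the forward model stays a fixed buffer and only scalar step sizes and thresholds are learned, the block remains auditable and can, in principle, be reused across tasks that share the same measurement model.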
Cross-modality problems in signal-informed sensing and learning: Purely data-driven pipelines struggle to bridge sensing modalities with heterogeneous semantics and sampling regimes. Key questions: How can we align representations across RF, acoustic, video, and inertial signals using shared physical events rather than label co-occurrence? How can we transfer knowledge when one modality is missing, occluded, or privacy-restricted? How can we calibrate and adapt models across devices and environments without extensive per-site annotations? How can we perform joint decoding in multi-sensor, multimodal systems?
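A minimal sketch of event-anchored cross-modal alignment, assuming RF and IMU windows have already been paired around the same detected physical event (so row i of each batch covers the same event): a symmetric contrastive loss aligns the two embeddings without any activity labels. Encoders, channel counts, and window lengths are placeholders.

```python
# Hypothetical sketch: align RF and IMU windows that cover the same physical event
# with a symmetric contrastive loss, rather than relying on label co-occurrence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder1D(nn.Module):
    """Small 1-D CNN shared in structure across modalities."""
    def __init__(self, in_ch, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def symmetric_nce(z_rf, z_imu, tau=0.07):
    logits = z_rf @ z_imu.t() / tau
    target = torch.arange(z_rf.shape[0])         # matched event pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, target) + F.cross_entropy(logits.t(), target))

# Toy batch: RF Doppler time series (2 channels) and IMU (6 channels), windowed
# around the same events so corresponding rows describe the same physical moment.
rf_windows = torch.randn(16, 2, 256)
imu_windows = torch.randn(16, 6, 100)
rf_enc, imu_enc = Encoder1D(2), Encoder1D(6)
loss = symmetric_nce(rf_enc(rf_windows), imu_enc(imu_windows))
loss.backward()
```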
LLM-powered capabilities: LLMs can act as controllers or priors for all of the aspects above, yet tailoring them to specific sensors, or even to the signals themselves, remains open. Key questions: How can we tailor LLMs or LLM-based agents on edge devices to specific sensing tasks? How can we enable LLMs to properly understand temporal sensor data?
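One hedged illustration of "letting an LLM understand temporal sensor data": compress a raw stream into a short, unit-annotated textual summary that fits in a prompt, leaving the actual model call out. The prompt template, field names, and the synthetic breathing-rate stream are all assumptions made purely for illustration, not a description of any deployed system.

```python
# Hypothetical sketch: turn a 1-D sensor stream into a compact, unit-annotated text
# summary (statistics plus a coarse downsampled trace) that can be placed in an LLM
# prompt. The LLM call itself is intentionally omitted.
import numpy as np

def summarize_stream(t, x, name, unit, n_points=8):
    """Describe a sensor stream with summary statistics and a coarse trace."""
    idx = np.linspace(0, len(x) - 1, n_points).astype(int)
    coarse = ", ".join(f"{x[i]:.2f}" for i in idx)
    return (
        f"{name} ({unit}), {t[-1] - t[0]:.1f}s window: "
        f"mean={x.mean():.2f}, min={x.min():.2f}, max={x.max():.2f}, "
        f"coarse trace=[{coarse}]"
    )

# Toy breathing-rate stream, as might come from a radar vital-sign pipeline.
t = np.linspace(0, 60, 600)
breath = 15 + 2 * np.sin(2 * np.pi * t / 40) + 0.3 * np.random.randn(600)
prompt = (
    "You are monitoring a wireless vital-sign sensor.\n"
    + summarize_stream(t, breath, "breathing rate", "breaths/min")
    + "\nIs the trend stable, rising, or falling? Answer in one sentence."
)
print(prompt)   # this string would then be sent to the chosen LLM
```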
Publications
RF-LEGO: Modularized Signal Processing-Deep Learning Co-Design for RF Sensing via Deep Unrolling
Luca Jiang-Tao Yu, Chenshu Wu
MobiCom 2026, Austin, TX, USA 🇺🇸
Coming soon...
"Take Me Home, Wi-Fi Drone": A Drone-based Wireless System for Wilderness Search and Rescue
Weiying Hou, Luca Jiang-Tao Yu, Chenshu Wu
MobiCom 2026, Austin, TX, USA 🇺🇸
Coming soon...
OctoNet: A Large-Scale Multi-Modal Dataset for Human Activity Understanding Grounded in Motion-Captured 3D Pose Labels