Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

portfolio

publications

Shepherd - High-Precision Coverage Inference for Response-guided Blackbox Fuzzing (Registered Report)

Published in ISSTA Companion 2025 - 34th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2025

In recent years, fuzzing has gained attention as a primary means for the early detection of vulnerabilities. Although coverage-based greybox fuzzing uses internal coverage information to achieve high exploration efficiency, such fuzzers remain difficult to employ in restricted environments where the program cannot be instrumented, such as firmware or smartphone applications. In contrast, blackbox fuzzing does not require runtime information and is thus more widely applicable, but it suffers from lower efficiency because coverage cannot be measured. To address this issue, there is a growing demand for methods that approximate coverage in blackbox environments to optimize fuzzing. One existing study proposes estimating coverage from the relationship between program responses and strings embedded in the binary. However, this approach faces two challenges: its matching algorithm is ambiguous, and a single string may be shared by multiple basic blocks (non-uniqueness), leading to frequent misestimations. In this research, we propose a new coverage inference method, Shepherd, which combines high-precision string matching with context analysis to resolve these problems. Experimental results show that Shepherd significantly improves estimation accuracy compared to the existing approach.
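The core idea can be sketched as follows. This is an illustrative reconstruction, not Shepherd's actual algorithm: the data structures (`string_to_blocks`, `context_of_block`) and the disambiguation rule are assumptions made for the example.

```python
# Hypothetical sketch of response-guided coverage inference: match strings
# embedded in the target binary against the program's response, then use
# contextual strings to disambiguate a string shared by multiple basic
# blocks. All names and data structures here are illustrative.

def infer_coverage(response, string_to_blocks, context_of_block):
    """Return the set of basic blocks inferred to have executed.

    response         -- text the blackbox target emitted for one input
    string_to_blocks -- maps each binary-embedded string to the basic
                        blocks referencing it (possibly more than one)
    context_of_block -- maps a block to other strings expected in the
                        same response when that block executes
    """
    covered = set()
    for s, blocks in string_to_blocks.items():
        if s not in response:
            continue  # exact (non-fuzzy) matching avoids ambiguous hits
        if len(blocks) == 1:
            covered.add(blocks[0])  # unique string: unambiguous
        else:
            # Non-unique string: keep only candidates whose context
            # strings also appear; skip if the match stays ambiguous.
            candidates = [b for b in blocks
                          if all(c in response
                                 for c in context_of_block.get(b, []))]
            if len(candidates) == 1:
                covered.add(candidates[0])
    return covered
```

For instance, if `"ok"` appears in two blocks but only one of them also prints `"parsed"`, the response `"parsed ok"` is attributed to that block alone.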

Download Paper

Delayed Momentum Aggregation: Communication-efficient Byzantine-robust Federated Learning with Partial Participation

Published in NeurIPS OPT 2025 - Optimization for Machine Learning Workshop, 2025

Federated Learning (FL) allows distributed model training across multiple clients while preserving data privacy, but it remains vulnerable to Byzantine clients that exhibit malicious behavior. While existing Byzantine-robust FL methods provide strong convergence guarantees (e.g., to a stationary point in expectation) under Byzantine attacks, they typically assume full client participation, which is unrealistic given communication constraints and limited client availability. Under partial participation, existing methods fail as soon as the sampled set of clients contains a Byzantine majority, creating a fundamental challenge for sparse communication. First, we introduce delayed momentum aggregation, a novel principle in which the server aggregates the most recently received gradients from non-participating clients alongside fresh momentum from active clients. Our optimizer D-Byz-SGDM (Delayed Byzantine-robust SGD with Momentum) implements this delayed momentum aggregation principle for Byzantine-robust FL with partial participation. Then, we establish convergence guarantees that recover previous full-participation results and match the fundamental lower bounds we prove for the partial participation setting. Experiments on deep learning tasks validate our theoretical findings, showing stable and robust training under various Byzantine attacks.
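The delayed momentum aggregation principle can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's exact D-Byz-SGDM algorithm: the coordinate-wise median stands in for a generic Byzantine-robust aggregator, and the function and variable names are hypothetical.

```python
# Illustrative sketch: the server stores the last momentum received from
# every client and, each round, robustly aggregates fresh momenta from the
# sampled clients together with the stored (delayed) momenta of the
# non-participants. Details here are assumptions for illustration only.
import statistics

def robust_aggregate(vectors):
    """Coordinate-wise median, a simple Byzantine-robust aggregator."""
    return [statistics.median(coords) for coords in zip(*vectors)]

def server_round(last_momentum, fresh, model, lr):
    """One server step with delayed momentum aggregation.

    last_momentum -- dict: client id -> last momentum vector received
    fresh         -- dict: sampled client id -> momentum sent this round
    model         -- current model parameters (list of floats)
    lr            -- server learning rate
    """
    last_momentum.update(fresh)  # refresh entries for participating clients
    # Aggregate fresh and delayed momenta together, robustly.
    update = robust_aggregate(list(last_momentum.values()))
    return [w - lr * g for w, g in zip(model, update)]
```

Because non-participants still contribute their stale momenta, a Byzantine majority within the sampled subset alone cannot dominate the aggregate, which is the failure mode described above.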

Download Paper

talks

teaching

Teaching Assistant - IPA Security Camp

Security Training Camp, IPA Security Camp, 2019

Served as a Teaching Assistant at the prestigious IPA Security Camp, a government-sponsored intensive cybersecurity training program for talented young students in Japan.