ACM CoNEXT Workshop on In-Network Computing and AI for Distributed Systems
December 1-4, 2025
HKUST, Hong Kong
Call for Papers
In-network computing (INC) has emerged as a powerful paradigm that integrates computation directly into the network fabric. By leveraging programmable switches, SmartNICs, and other data plane devices, INC opens up unprecedented opportunities to accelerate distributed systems, data analytics, and AI workloads. As modern applications demand lower latency, higher throughput, and greater resource efficiency, INC is rapidly gaining traction in both academia and industry.
INCAS 2025 aims to bring together researchers and practitioners at the intersection of networking, systems, and AI to explore how INC can reshape the architecture of distributed computing. We invite original research contributions that investigate the design, implementation, and deployment of in-network computing, especially in the context of real-world constraints such as limited hardware resources, system-level integration, and stringent performance requirements. Topics of interest include, but are not limited to:
1. INC for AI workloads
- In-network acceleration of machine learning inference (e.g., CNNs, RNNs, transformers)
- In-network acceleration for distributed model training, e.g., efficient in-network execution of collective operations
- Exploring the deployment of large vision and language models with INC
2. INC for distributed data analytics
- In-network support for real-time data aggregation, transformation, and filtering
- Acceleration of stream and batch processing pipelines
- Integration of INC into distributed query execution and analytics engines
3. AI with INC for network management
- AI-enhanced network traffic engineering using INC, including intelligent traffic prediction, advanced telemetry, and real-time anomaly detection and fault diagnosis
- AI-driven network management, orchestration, and resource optimization leveraging INC, including AI/ML-assisted network control and automated management
- Predictive network maintenance using INC and AI for proactive fault prediction and intelligent maintenance scheduling
- On-path enforcement of routing, access control, and QoS policies
- AI-assisted and INC-enabled smart cloud-edge network management and coordination
- Programmable fault monitoring and distributed failure response mechanisms
- Protocol enhancement via INC, e.g., redesign of end-to-end protocols to take advantage of programmable data planes
4. INC for data storage and caching
- In-network support for key-value lookups, metadata indexing, and content-addressable storage
- Programmable data paths for consistency enforcement, replication, and coordination in distributed storage
- On-path caching, coherence, and eviction mechanisms for latency-sensitive workloads
- Acceleration of storage systems and protocols (e.g., NFS, NVMe-over-Fabrics), including those leveraging RDMA-based transports
- Offloading I/O processing and load balancing in large-scale storage clusters using programmable switches or SmartNICs
5. Protocol and system-level innovations for INC
- INC-aware routing protocol design, including programmable control over path selection, route updates, and fast failover
- In-network support for transport-layer functions, such as congestion control, reliability, and flow scheduling
- Task coordination, pipelining, and batching mechanisms within in-network systems
- Adaptive resource allocation and scheduling for switch/NIC/FPGA-based environments
6. Tooling and development support for INC
- Compilers, debuggers, and developer tools for INC platforms
- Using foundation models (e.g., LLMs or diffusion models) to assist INC system design, including their application to intelligent network operations, management, and AI-powered assistants in INC environments
7. Security and robustness enhancement for INC
- In-network security policies and anomaly detection mechanisms
- Fault isolation, recovery, and robustness in deployed programmable networks
- Experiences and lessons from production-level or testbed-scale INC deployments
Submission Instructions
We will follow the submission format requirements of previous CoNEXT workshops:
Submissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be at most six (6) pages long, excluding references and appendices, in two-column 10pt ACM format. Authors of accepted submissions are expected to present and discuss their work at the workshop.
All submissions will be peer-reviewed, and the review process will be double-blind. Please prepare your paper in accordance with the anonymity guidelines so that it does not reveal the authors' identities. No information will be shared with third parties.
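For reference only, below is a minimal LaTeX skeleton consistent with these requirements, assuming the standard ACM acmart class; the class options and file names shown are our assumption, so please confirm against the official ACM/CoNEXT template before submitting:

    \documentclass[sigconf,10pt,anonymous]{acmart}
    % 'sigconf' produces the two-column ACM conference layout;
    % 'anonymous' hides author information for double-blind review.
    \begin{document}
    \title{Your INCAS 2025 Submission}
    \author{Anonymous Author(s)}
    \affiliation{\institution{Anonymous Institution}}
    \begin{abstract}
    Abstract text.
    \end{abstract}
    \maketitle
    Body text (at most six pages, excluding references and appendices).
    \bibliographystyle{ACM-Reference-Format}
    % \bibliography{references}  % your .bib file (name is illustrative)
    \end{document}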
Please submit your paper via https://incas2025.hotcrp.com/.
Expected Number of Submissions and Participants
We anticipate receiving around 30 paper submissions, with an acceptance rate of 40-50%: roughly eight papers will be selected for oral presentation, and the remaining accepted papers will be presented as posters. In addition to the organizers, we expect around 50 participants.