Jindan Li

Ph.D. student
Department of Electrical and Computer Engineering
Cornell University

Email: jl4767@cornell.edu

2 W Loop Rd, New York, NY 10044

About me

I am a Ph.D. student at Cornell University, where I am fortunate to be advised by Prof. Tianyi Chen. From Sep 2024 to Aug 2025, I worked as a Research Assistant at RPI ECSE with Prof. Chen. In Spring 2025, I also served as a Teaching Assistant for the undergraduate course ECSE 2500 – Engineering Probability at RPI. I received my B.Eng. in Information Engineering from Zhejiang University in July 2024.

Research

Research interests: Analog In-Memory Computing (AIMC), in-memory (in-situ) training algorithms, and wireless communication.

My research focuses on algorithmic training frameworks that operate natively on analog hardware. The goal is to make in-memory training robust to limited precision and device asymmetry while preserving the parallelism and efficiency advantages of AIMC.

Modern deep learning training increasingly suffers from the cost of moving data between memory and compute units, which limits both throughput and energy efficiency in conventional von Neumann architectures. Analog In-Memory Computing (AIMC) offers a promising alternative by performing computation directly inside memory arrays. In AIMC accelerators, neural network parameters are stored as the conductance states of resistive devices arranged in crossbar arrays, and matrix–vector multiplication (MVM) can be executed in a highly parallel and energy-efficient manner by leveraging circuit physics (e.g., Ohm’s and Kirchhoff’s laws).
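To give a rough sense of why a crossbar array performs an MVM in a single analog read, here is a minimal numerical sketch; the matrix sizes and values are made up for illustration and do not correspond to any particular device.

```python
import numpy as np

# Illustrative sketch: a crossbar stores a weight matrix as device conductances G.
# Applying an input voltage vector v to the rows produces, by Ohm's law (per device)
# and Kirchhoff's current law (summing along each column), the column currents
# i = G^T v -- i.e., one analog read of the array yields a full matrix-vector product.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances (4 rows x 3 columns)
v = rng.uniform(-1.0, 1.0, size=4)       # input voltages applied to the rows

i_out = G.T @ v                          # column currents = MVM result
print(i_out)
```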

To fully harness the efficiency of AIMC, it is crucial to enable in-memory training, where weight updates are applied directly on analog arrays via rank-update mechanisms driven by pulse streams. However, accurate training on real analog hardware is challenging due to intrinsic non-idealities. In particular, many practical memristive devices provide only a limited number of stable conductance states (often around 4-bit resolution), and their update behavior is asymmetric and state-dependent. These effects lead to unstable convergence and degraded accuracy.
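To illustrate these non-idealities, the toy device model below applies pulse updates whose magnitude depends on the current state and differs between potentiation and depression, and whose read-out resolves only about 16 (~4-bit) levels. All constants are illustrative assumptions, not measured hardware data or the model used in my work.

```python
import numpy as np

# Toy "soft-bounds" device model: each programming pulse changes the conductance
# by a state-dependent amount, potentiation and depression are asymmetric, and
# the read-out only resolves ~16 (~4-bit) distinct levels.
G_MIN, G_MAX, N_STATES = 0.0, 1.0, 16
DW_UP, DW_DOWN = 0.08, 0.12              # asymmetric nominal step sizes (assumed)

def pulse(g: float, direction: int) -> float:
    """Apply one programming pulse; direction = +1 (potentiate) or -1 (depress)."""
    if direction > 0:
        g += DW_UP * (G_MAX - g)         # update shrinks near the upper bound
    else:
        g -= DW_DOWN * (g - G_MIN)       # update shrinks near the lower bound
    return float(np.clip(g, G_MIN, G_MAX))

def read(g: float) -> float:
    """Read-out snaps to one of the device's limited stable levels."""
    step = (G_MAX - G_MIN) / (N_STATES - 1)
    return round(g / step) * step

g_up = g_down = 0.5
for _ in range(5):
    g_up, g_down = pulse(g_up, +1), pulse(g_down, -1)
print(read(g_up), read(g_down))          # the same number of up/down pulses moves by different amounts
```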

Multi-tile Residual Learning for In-Memory Analog Training. My recent work proposes a multi-tile residual learning strategy to overcome the limited conductance-state bottleneck in AIMC training. The key idea is to represent each weight as a composite sum across multiple analog tiles, where each tile stores a residual component with geometric scaling. This structure expands the effective representable precision beyond what a single low-state device can provide. During training, tiles are coordinated using a multi-timescale schedule: earlier tiles capture coarse updates, and subsequent tiles progressively learn the residual left by preceding tiles, refining the solution over time. This residual decomposition improves robustness under device non-idealities (e.g., discretized updates and asymmetric responses) and supports stable convergence in low-state regimes. I validate the approach on standard image classification benchmarks (e.g., MNIST/Fashion-MNIST and CIFAR-10/100 with ResNet-style models) using an analog training simulator, and analyze the associated efficiency trade-offs when scaling the number of tiles.
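The following sketch shows only the residual-decomposition idea, i.e., how a weight matrix can be represented as a geometrically scaled sum of coarsely quantized per-tile components; it is a conceptual toy, not the training algorithm or the multi-timescale schedule from the paper, and the choices of gamma and the number of tiles are illustrative.

```python
import numpy as np

def quantize(w, n_states=16, w_max=1.0):
    """Mimic a tile that can only hold a few discrete conductance levels."""
    step = 2 * w_max / (n_states - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def residual_decompose(W_target, num_tiles=3, gamma=0.25):
    """Represent W_target as sum_k gamma**k * W_k, each W_k coarsely quantized."""
    tiles, residual = [], W_target.copy()
    for k in range(num_tiles):
        scale = gamma ** k
        tile_k = quantize(residual / scale)    # this tile captures the current residual at its own scale
        tiles.append(tile_k)
        residual = residual - scale * tile_k   # what later tiles must still represent
    W_eff = sum((gamma ** k) * t for k, t in enumerate(tiles))
    return tiles, W_eff

rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, size=(4, 4))
_, W_eff = residual_decompose(W)
print(np.max(np.abs(W - W_eff)))               # representation error shrinks as tiles are added
```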

News

Our paper has been accepted to AISTATS 2026. The full paper is available on arXiv, and the implementation and experimental results are available in my code repository: github.com/Jindanli898/AIMC.

Education