Hi. I'm Shanshi Huang.

I am an assistant professor in the Microelectronics Thrust (Function Hub) at the Hong Kong University of Science and Technology (Guangzhou).

About me

I obtained my Ph.D. from the Georgia Institute of Technology in 2022, where I was advised by Prof. Shimeng Yu. My research interests include cross-layer hardware and software co-design, heterogeneous computing, electronic design automation (EDA), and hardware security for AI chips. To date, I have published 20+ papers in peer-reviewed conferences and journals, including ICCAD, CICC, IEDM, DATE, and TVLSI.

I am actively looking for motivated PhD students and research assistants (Fall 2024) with a background in computer science, software engineering, or electronic engineering to join my lab. Please feel free to email me your CV and transcript if you are interested.

Education

  • B.S. in Communication Engineering, BIT, 2008-2012
  • M.S. in Electrical Engineering, ASU, 2012-2014
  • Ph.D. in Electrical and Computer Engineering, Georgia Tech, 2019-2022

Experience

  • Physical Design Engineer (Marvell Semiconductor Inc., US), 2015-2016
  • Research Assistant (Arizona State University, US), 2017-2018
  • Assistant Professor (HKUST(GZ)), 2023-present

Publications

    [Google Scholar]

    Journal papers:

    1. S. Huang, H. Jiang, S. Yu, “Hardware-aware Quantization/Mapping Strategies for Compute-in-Memory Accelerators,” ACM Transactions on Design Automation of Electronic Systems, vol. 28, no. 34, pp. 1-23, 2023.
    2. W. Li, X. Sun, S. Huang, H. Jiang, S. Yu, “A 40nm MLC-RRAM compute-in-memory macro with sparsity control, on-chip write-verify, and temperature-independent ADC references,” IEEE Journal of Solid-State Circuits, vol. 57, no. 9, pp. 2868-2877, 2022.
    3. H. Jiang, W. Li, S. Huang, S. Cosemans, F. Catthoor, S. Yu, “Analog-to-digital converter design exploration for compute-in-memory accelerators,” IEEE Design & Test, vol. 39, no. 2, pp. 48-55, 2022.
    4. S. Huang, X. Sun, X. Peng, H. Jiang, S. Yu, “Achieving high in-situ training accuracy and energy efficiency with analog non-volatile synaptic devices,” ACM Transactions on Design Automation of Electronic Systems, vol. 27, no. 4, p. 37, 2022.
    5. J.-W. Su, X. Si, Y.-C. Chou, T.-W. Chang, W.-H. Huang, Y.-N. Tu, R. Liu, P.-J. Lu, T.-W. Liu, J.-H. Wang, Y.-L. Chung, J.-S. Ren, H. Jiang, S. Huang, S.-H. Li, S.-S. Sheu, C.-I. Wu, C.-C. Lo, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, S. Yu, M.-F. Chang, “Two-way transpose multibit 6T SRAM computing-in-memory macro for inference-training AI edge chips,” IEEE Journal of Solid-State Circuits, vol. 57, no. 2, pp. 609-624, 2022.
    6. S. Yu, H. Jiang, S. Huang, X. Peng, A. Lu, “Compute-in-memory chips for deep learning: recent trends and prospects”, IEEE Circuits and Systems Magazine, vol. 21, no. 3, pp. 31-56, 2021, invited review.
    7. X. Peng, S. Huang, H. Jiang, A. Lu, S. Yu, “DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 40, no. 11, pp. 2306-2319, 2021.
    8. A. Lu, X. Peng, Y. Luo, S. Huang, S. Yu, “A runtime reconfigurable design of compute-in-memory based hardware accelerator for deep learning inference,” ACM Transactions on Design Automation of Electronic Systems, vol. 26, no. 6, p. 45, 2021.
    9. S. Huang, H. Jiang, X. Peng, W. Li, S. Yu, “Secure XOR-CIM engine: Compute-in-memory SRAM architecture with embedded XOR encryption,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 29, no. 12, pp. 2027-2039, 2021.
    10. H. Jiang, X. Peng, S. Huang, S. Yu, “CIMAT: A compute-in-memory architecture for on-chip training based on transpose SRAM arrays,” IEEE Transactions on Computers, vol. 69, no. 7, pp. 944-954, 2020.

    Conference papers:

    1. H. Jiang, W. Li, S. Huang, S. Yu, “A 40nm analog-input ADC-free compute-in-memory RRAM macro with pulse-width modulation between sub-arrays,” IEEE Symposium on VLSI Technology and Circuits (VLSI) 2022, Hawaii, USA, highlight paper.
    2. W. Li, X. Sun, H. Jiang, S. Huang, S. Yu, “A 40nm RRAM compute-in-memory macro featuring on-chip write-verify and offset-cancelling ADC references,” IEEE European Solid-State Circuits Conference (ESSCIRC) 2021, virtual.
    3. W. Li, S. Huang, X. Sun, H. Jiang, S. Yu, “Secure-RRAM: A 40nm 16kb compute-in-memory macro with reconfigurability, sparsity control, and embedded security,” IEEE Custom Integrated Circuits Conference (CICC) 2021, virtual.
    4. A. Lu, X. Peng, Y. Luo, S. Huang, S. Yu, “A runtime reconfigurable design of compute-in-memory based hardware accelerator,” IEEE/ACM Design, Automation & Test in Europe (DATE) 2021, virtual.
    5. S. Huang, X. Peng, H. Jiang, Y. Luo, S. Yu, “Exploiting process variations to protect machine learning inference engine from chip cloning,” IEEE International Symposium on Circuits and Systems (ISCAS) 2021, virtual.
    6. S. Huang, H. Jiang, S. Yu, “Mitigating adversarial attack for compute-in-memory accelerator utilizing on-chip finetune,” IEEE Non-Volatile Memory Systems and Applications Symposium (NVMSA) 2021, virtual.
    7. J.-W. Su, X. Si, Y.-C. Chou, T.-W. Chang, W.-H. Huang, Y.-N. Tu, R. Liu, P.-J. Lu, T.-W. Liu, J.-H. Wang, Z. Zhang, H. Jiang, S. Huang, S. Yu, K.-T. Tang, C.-C. Hsieh, R.-S. Liu, S.-H. Li, S.-S. Sheu, H.-Y. Lee, S.-C. Chang, M.-F. Chang, “A 28nm 64Kb inference-training two-way transpose multibit 6T SRAM computing-in-memory macro for AI edge chips,” IEEE International Solid-State Circuits Conference (ISSCC) 2020, San Francisco, USA.
    8. H. Jiang, S. Huang, X. Peng, J.-W. Su, Y.-C. Chou, W.-H. Huang, T.-W. Liu, R. Liu, M.-F. Chang, S. Yu, “A two-way SRAM array based accelerator for deep neural network on-chip training,” ACM/IEEE Design Automation Conference (DAC) 2020, virtual (best paper nomination).
    9. S. Yu, X. Sun, X. Peng, S. Huang, “Compute-in-memory with emerging nonvolatile-memories: challenges and prospects,” IEEE Custom Integrated Circuits Conference (CICC) 2020, virtual, invited.
    10. S. Huang, H. Jiang, X. Peng, W. Li, S. Yu, “XOR-CIM: Compute-in-memory SRAM architecture with embedded XOR encryption,” IEEE/ACM International Conference on Computer-Aided Design (ICCAD) 2020, virtual.
    11. S. Huang, X. Sun, X. Peng, H. Jiang, S. Yu, “Overcoming challenges for achieving high in-situ training accuracy with emerging memories,” IEEE/ACM Design, Automation & Test in Europe (DATE) 2020, virtual, invited.
    12. H. Jiang, X. Peng, S. Huang, S. Yu, “MINT: Mixed-precision RRAM-based in-memory training architecture,” IEEE International Symposium on Circuits and Systems (ISCAS) 2020, virtual.
    13. X. Peng, S. Huang, Y. Luo, X. Sun, S. Yu, “DNN+NeuroSim: An end-to-end benchmarking framework for compute-in-memory accelerators with versatile device technologies,” IEEE International Electron Devices Meeting (IEDM) 2019, San Francisco, USA.
    14. H. Jiang, X. Peng, S. Huang, S. Yu, “CIMAT: A transpose SRAM-based compute-in-memory architecture for deep neural network on-chip training,” ACM/IEEE International Symposium on Memory Systems (MEMSYS) 2019, Washington, DC, USA.

    People

    Cong Wang

    PhD student (Fall 2023 - ),
    M.S. from SUSTech, B.S. from ZZU

    Contact

    Email: shanshihuang@hkust-gz.edu.cn

    Location: Rm 607, W2, 1 Duxue Rd, Nansha District, Guangzhou, 510000, China