Wednesday, January 26, 2022

A crossbar array of magnetoresistive memory devices for in-memory computing


  • 1.

    Horowitz, M. Computing's energy problem (and what we can do about it). In Proc. International Solid-State Circuits Conference (ISSCC) 10−14 (IEEE, 2014).

  • 2.

    Keckler, S. W., Dally, W. J., Khailany, B., Garland, M. & Glasco, D. GPUs and the future of parallel computing. IEEE Micro 31, 7–17 (2011).

  • 3.

    Song, J. et al. An 11.5TOPS/W 1024-MAC butterfly structure dual-core sparsity-aware neural processing unit in 8nm flagship mobile SoC. In 2019 IEEE Int. Solid-State Circuits Conference Digest of Technical Papers (ISSCC) 130−131 (IEEE, 2019).

  • 4.

    Sebastian, A. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020).

  • 5.

    Wang, Z. et al. Resistive switching materials for information processing. Nat. Rev. Mater. 5, 173–195 (2020).

  • 6.

    Ielmini, D. & Wong, H.-S. P. In-memory computing with resistive switching devices. Nat. Electron. 1, 333–343 (2018).

  • 7.

    Verma, N. et al. In-memory computing: advances and prospects. IEEE Solid-State Circuits Mag. 11, 43–55 (2019).

  • 8.

    Woo, J. et al. Improved synaptic behavior under identical pulses using AlOx/HfO2 bilayer RRAM array for neuromorphic systems. IEEE Electron Device Lett. 37, 994–997 (2016).

  • 9.

    Yao, P. et al. Face classification using electronic synapses. Nat. Commun. 8, 15199 (2017).

  • 10.

    Wu, H. et al. Device and circuit optimization of RRAM for neuromorphic computing. In 2017 IEEE International Electron Devices Meeting 11.5.1−11.5.4 (IEEE, 2017).

  • 11.

    Li, C. et al. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks. Nat. Commun. 9, 2385 (2018).

  • 12.

    Chen, W. et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors. Nat. Electron. 2, 420–428 (2019).


  • 13.

    Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).

  • 14.

    Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).

  • 15.

    Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).

  • 16.

    Merrikh-Bayat, F. et al. High-performance mixed-signal neurocomputing with nanoscale floating-gate memory cell arrays. IEEE Trans. Neural Netw. Learn. Syst. 29, 4782–4790 (2018).

  • 17.

    Wang, P. et al. Three-dimensional NAND flash for vector-matrix multiplication. IEEE Trans. VLSI Syst. 27, 988–991 (2019).


  • 18.

    Xiang, Y. et al. Efficient and robust spike-driven deep convolutional neural networks based on NOR flash computing array. IEEE Trans. Electron Dev. 67, 2329–2335 (2020).

  • 19.

    Lin, Y.-Y. et al. A novel voltage-accumulation vector-matrix multiplication architecture using resistor-shunted floating gate flash memory device for low-power and high-density neural network applications. In 2018 IEEE International Electron Devices Meeting 2.4.1−2.4.4 (IEEE, 2018).

  • 20.

    Song, Y. J. et al. Demonstration of highly manufacturable STT-MRAM embedded in 28nm logic. In 2018 IEEE International Electron Devices Meeting 18.2.1−18.2.4 (IEEE, 2018).

  • 21.

    Lee, Y. K. et al. Embedded STT-MRAM in 28-nm FDSOI logic process for industrial MCU/IoT application. In 2018 IEEE Symposium on VLSI Technology 181−182 (IEEE, 2018).

  • 22.

    Wei, L. et al. A 7Mb STT-MRAM in 22FFL FinFET technology with 4ns read sensing time at 0.9V using write-verify-write scheme and offset-cancellation sensing technique. In 2019 IEEE Int. Solid-State Circuits Conference Digest of Technical Papers 214−216 (IEEE, 2019).

  • 23.

    LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  • 24.

    Yu, S. Neuro-inspired computing with emerging nonvolatile memory. Proc. IEEE 106, 260–285 (2018).

  • 25.

    Patil, A. D. et al. An MRAM-based deep in-memory architecture for deep neural networks. In 2019 IEEE International Symposium on Circuits and Systems (IEEE, 2019).

  • 26.

    Zabihi, M. et al. In-memory processing on the spintronic CRAM: from hardware design to application mapping. IEEE Trans. Comput. 68, 1159–1173 (2019).

  • 27.

    Kang, S. H. Embedded STT-MRAM for energy-efficient and cost-effective mobile systems. In 2014 IEEE Symposium on VLSI Technology (IEEE, 2014).

  • 28.

    Zeng, Z. M. et al. Effect of resistance-area product on spin-transfer switching in MgO-based magnetic tunnel junction memory cells. Appl. Phys. Lett. 98, 072512 (2011).

  • 29.

    Kim, H. & Kwon, S.-W. Full-precision neural networks approximation based on temporal domain binary MAC operations. US patent 17/085,300.

  • 30.

    Hung, J.-M. et al. Challenges and trends in developing nonvolatile memory-enabled computing chips for intelligent edge devices. IEEE Trans. Electron Dev. 67, 1444–1453 (2020).

  • 31.

    Jiang, Z., Yin, S., Seo, J.-S. & Seok, M. C3SRAM: an in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism. IEEE J. Solid-State Circuits 55, 1888–1897 (2020).

  • 32.

    Hubara, I. et al. Binarized neural networks. In Advances in Neural Information Processing Systems 4107−4115 (NeurIPS, 2016).

  • 33.

    Rastegari, M., Ordonez, V., Redmon, J. & Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In 2016 European Conference on Computer Vision 525−542 (2016).

  • 34.

    Lin, X., Zhao, C. & Pan, W. Towards accurate binary convolutional neural network. In Advances in Neural Information Processing Systems 345−353 (NeurIPS, 2017).

  • 35.

    Zhuang, B. et al. Structured binary neural networks for accurate image classification and semantic segmentation. In 2019 IEEE Conference on Computer Vision and Pattern Recognition 413−422 (IEEE, 2019).

  • 36.

    Shafiee, A. et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture 14−26 (IEEE, 2016).

  • 37.

    Liu, B. et al. Digital-assisted noise-eliminating training for memristor crossbar-based analog neuromorphic computing engine. In 2013 50th ACM/EDAC/IEEE Design Automation Conference 1−6 (IEEE, 2013).

  • 38.

    Wu, B., Iandola, F., Jin, P. H. & Keutzer, K. SqueezeDet: unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In 2017 IEEE Conference on Computer Vision and Pattern Recognition 129−137 (IEEE, 2017).

  • 39.

    Ham, D., Park, H., Hwang, S. & Kim, K. Neuromorphic electronics based on copying and pasting the brain. Nat. Electron. 4, 635–644 (2021).

  • 40.

    Wang, P. et al. Two-step quantization for low-bit neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition 4376−4384 (IEEE, 2018).
