Paper Title
Reliability-Aware Deployment of DNNs on In-Memory Analog Computing Architectures
Paper Authors
Paper Abstract
Conventional in-memory computing (IMC) architectures consist of analog memristive crossbars to accelerate matrix-vector multiplication (MVM), and digital functional units to realize nonlinear vector (NLV) operations in deep neural networks (DNNs). These designs, however, require energy-hungry signal conversion units, which can dissipate more than 95% of the system's total power. In-Memory Analog Computing (IMAC) circuits, on the other hand, remove the need for signal converters by realizing both MVM and NLV operations in the analog domain, leading to significant energy savings. However, they are more susceptible to reliability challenges such as interconnect parasitics and noise. Here, we introduce a practical approach to deploy large matrices in DNNs onto multiple smaller IMAC subarrays to alleviate the impacts of noise and parasitics while keeping the computation in the analog domain.
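The partitioning idea in the abstract can be illustrated with a minimal numerical sketch. The snippet below is not the paper's method; it merely shows the general pattern of tiling a large weight matrix onto smaller subarrays, computing partial MVMs per tile, and accumulating the results. The `tile` size, the additive-Gaussian noise model, and all function names are illustrative assumptions.

```python
import numpy as np

def tiled_mvm(W, x, tile=64, noise_std=0.0, rng=None):
    """Compute y = W @ x by partitioning W into tile x tile subarrays,
    mimicking deployment onto multiple smaller IMAC crossbars.

    Each subarray's partial product is optionally perturbed with
    Gaussian noise as a crude stand-in for analog non-idealities
    (illustrative assumption, not the paper's noise model)."""
    rng = rng or np.random.default_rng(0)
    m, n = W.shape
    y = np.zeros(m)
    # Iterate over the grid of subarray tiles covering W.
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            partial = W[i:i + tile, j:j + tile] @ x[j:j + tile]
            # Per-tile perturbation; zero noise_std recovers exact MVM.
            partial = partial + rng.normal(0.0, noise_std, size=partial.shape)
            y[i:i + tile] += partial
    return y
```

With `noise_std=0.0` the tiled result equals the monolithic product exactly; the intuition behind the approach is that smaller tiles keep each analog column short, so per-subarray parasitic and noise effects accumulate less than in one large crossbar.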