# A Linear Algebraic Framework for Distributed Deep Learning

### Abstract

Training deep neural networks (DNNs) in large-cluster computing environments is increasingly necessary as networks grow in size and complexity. Local memory and processing limitations require robust data and model parallelism for crossing compute-node boundaries. We present a linear-algebraic approach to model parallelism in deep learning, which allows parallel distribution of any tensor in the DNN. Rather than relying on automatic differentiation tools, which do not universally support distributed-memory parallelism models, we exploit the fact that data movement operations on a computer’s memory are linear to build a suite of parallel data movement operations, e.g., broadcast, sum-reduce, and halo exchange, which are also linear operators. Thus, we can develop the adjoint operators required for gradient-based training of DNNs. We build distributed DNN layers using these parallel primitives, composed with sequential layer implementations, and demonstrate their application by building and training a distributed DNN using DistDL, a PyTorch and MPI-based distributed deep learning toolkit.
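The core idea, that a linear data movement operator comes paired with an adjoint usable in back-propagation, can be illustrated with a small sketch. This is a hypothetical NumPy simulation, not DistDL code: it models broadcast to `P` workers as a linear map `B` and checks that sum-reduce satisfies the adjoint identity ⟨Bx, y⟩ = ⟨x, Bᵀy⟩.

```python
import numpy as np

# Broadcasting a vector x to P workers is the linear map B: x -> [x, x, ..., x].
# Its adjoint (transpose) is sum-reduction over the worker copies.
# Verifying <Bx, y> == <x, B^T y> confirms the operator pair, which is what
# permits gradient back-propagation through the data movement.

def broadcast(x, P):
    """Linear operator B: replicate x across P (simulated) workers."""
    return np.tile(x, (P, 1))

def sum_reduce(y):
    """Adjoint B^T: sum the per-worker contributions."""
    return y.sum(axis=0)

P, n = 4, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(n)        # input tensor (flattened)
y = rng.standard_normal((P, n))   # arbitrary per-worker cotangents

lhs = np.vdot(broadcast(x, P), y)  # <Bx, y>
rhs = np.vdot(x, sum_reduce(y))    # <x, B^T y>
assert np.isclose(lhs, rhs)
```

The same adjoint test applies to any of the data movement primitives named above; in a real distributed setting, the simulated worker axis would correspond to ranks in an MPI communicator.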