Multiplication (e.g., in convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications incur high resource costs that challenge DNN deployment on resource-constrained edge devices, motivating several attempts at multiplication-less deep networks. This paper presents ShiftAddNet, whose main inspiration is drawn from a common practice in energy-efficient hardware implementation: multiplication can instead be performed with additions and logical bit-shifts. We leverage this idea to explicitly parameterize deep networks in this way, yielding a new type of deep network that involves only bit-shift and additive weight layers. This hardware-inspired ShiftAddNet immediately leads to both energy-efficient inference and training, without compromising expressive capacity compared to standard DNNs. The two complementary operation types (bit-shift and add) additionally enable finer-grained control of the model’s learning capacity, leading to a more flexible trade-off between accuracy and (training) efficiency, as well as improved robustness to quantization. We conduct extensive experiments and ablation studies, all backed up by our FPGA-based ShiftAddNet implementation and real on-board measurements. Compared to existing DNNs and other multiplication-less models, ShiftAddNet reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracies.
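The hardware practice the abstract alludes to is classic shift-and-add multiplication: any integer multiplier can be decomposed into powers of two, so a product reduces to bit-shifts (for each set bit of the multiplier) plus additions. The following minimal Python sketch illustrates this generic trick only; it is not the paper's layer parameterization, and the function name is ours.

```python
def shift_add_mul(x: int, w: int) -> int:
    """Multiply x by a non-negative integer w using only shifts and adds.

    Each set bit of w at position k contributes x << k to the product,
    so the loop accumulates shifted copies of x instead of multiplying.
    """
    result = 0
    shift = 0
    while w:
        if w & 1:                  # this bit of w is set:
            result += x << shift   # add x scaled by 2**shift
        w >>= 1
        shift += 1
    return result
```

For example, `shift_add_mul(7, 13)` accumulates `7 + (7 << 2) + (7 << 3) = 91`, since 13 = 0b1101. ShiftAddNet's contribution is to expose this decomposition at the network level, as separate learnable shift and add layers, rather than hiding it inside a multiplier circuit.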