With the increasing demand for low-power electronics, nanomagnetic devices have emerged as strong candidates to complement present-day transistor technology. A variety of novel switching effects, such as spin torque and the giant spin Hall effect, offer scalable ways to manipulate nanoscale magnets. However, the low intrinsic energy cost of switching spins is often offset by the energy consumed in the overhead circuitry that generates the necessary switching fields. Scaling brings added concerns, such as the ability to distinguish states (readability) and to write information without spontaneous back-flips (reliability). A viable device must ultimately navigate a complex, multi-dimensional material and design space defined by volume, energy budget, speed, and a target read-write-retention error. In this talk, I will present a multi-scale computational framework to explore possible innovations at different levels (material, device, or circuit), along with a holistic understanding of their overall energy-delay-reliability tradeoff. A particular example we will examine is building networks of stochastic soft magnets that directly implement Reservoir Computing for temporal inference and pattern recognition. Efficient non-von-Neumann hardware implementations of reservoir computers can open a pathway for integrating temporal neural networks into a wide variety of emerging systems, such as the Internet of Things (IoT), industrial controls, bio- and photo-sensors, and self-driving vehicles.
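To make the reservoir-computing idea concrete, the following is a minimal software sketch, not the hardware scheme described in the talk: a fixed random recurrent network of stochastic binary nodes (loosely analogous to thermally fluctuating soft magnets, with a Boltzmann-like flip probability) driven by an input sequence, with only a linear readout trained. All sizes, weights, and the delayed-recall task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                      # reservoir size (illustrative)
W_in = rng.uniform(-1, 1, N)                 # fixed random input weights
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # fixed random recurrent weights

def run_reservoir(u, beta=2.0):
    """Drive the reservoir with input sequence u. Each binary node flips
    stochastically, taking the value +1 with a sigmoidal probability of
    its local field -- a crude stand-in for a fluctuating soft magnet."""
    s = np.ones(N)
    states = []
    for u_t in u:
        h = W @ s + W_in * u_t               # local field on each node
        p = 1.0 / (1.0 + np.exp(-beta * h))  # probability of state +1
        s = np.where(rng.random(N) < p, 1.0, -1.0)
        states.append(s.copy())
    return np.array(states)

# Simple temporal task (hypothetical): reconstruct a delayed copy of the
# input from the reservoir states, exercising the network's short-term memory.
T, delay = 500, 2
u = rng.uniform(-1, 1, T)
X = run_reservoir(u)                         # (T, N) state matrix
y = np.roll(u, delay)                        # target: input delayed by 2 steps

# Train only the linear readout via ridge regression; the reservoir itself
# is left untrained, which is what makes the scheme hardware-friendly.
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
y_hat = X @ W_out
```

The design choice that matters here is that all learning is confined to the readout: the recurrent dynamics can be any sufficiently rich physical system, including stochastic nanomagnets, as long as its state can be observed.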