AI applications require fast local memory and storage to support them while minimizing data movement. This is driving up both the capacity and performance requirements for training AI models and the low-power efficiency needs of AI inference in edge and endpoint applications.