We’re back with a deeper dive into the computing needs of AI. In this episode we explore the technical requirements to build and train an AI application. We chat with our guest, Tony Foster, about things like what kind of compute it takes to train an AI model, and whether we should be concerned about the environmental impact. You don’t want to miss Professor WonderNerd’s explanation!
Show notes – Computing needs of AI
- What architecture is required to support machine learning and deep learning workloads?
- What sort of carbon footprint do these distributed architectures have? Does that footprint taper off once initial training is complete?
- Should tools like private GPT be offered as a service? Should consumers of such a service be concerned about the environmental impact?
Guest – Tony Foster
Our guest for this episode is Tony Foster. He’s a Senior Principal Technical Marketing Engineer at Dell Technologies, an adjunct professor of technology at Kansas State University, or simply the WonderNerd! He describes himself as a VDI/EUC and GPU fanatic bringing deep learning, machine learning, AI, and HPC to the virtual world.
See omny.fm/listener for privacy information.