S1E7: Computing needs of AI

We’re back with a deeper dive into the computing needs of AI. In this episode we explore questions about the technical requirements to build and train an AI application. We chat with our guest, Tony Foster, about things like what kind of compute it takes to train an AI model, and whether we should be concerned about the environmental impact. You don’t want to miss Professor Wondernerd’s explanation!

Show notes – Computing needs of AI

  • What infrastructure is required to support machine learning and deep learning architectures?
  • What sort of carbon footprint are these distributed architectures giving off? Does that footprint taper off after the initial training has been completed?
  • Should services like private GPT be offered as a service? Should consumers of the service be concerned about the environmental impacts?

Guest – Tony Foster

Our guest for this episode is Tony Foster. He’s a Senior Principal Technical Marketing Engineer at Dell Technologies, an adjunct professor of technology at Kansas State University, or simply the WonderNerd! He describes himself as a VDI/EUC and GPU fanatic bringing deep learning, machine learning, AI, and HPC to the virtual world.

You can connect with Tony on LinkedIn, on Twitter, and on his website, Wondernerd.net.

Don’t forget to check out https://techaunties.com and follow us on Instagram too! Have an idea for a show? Email us at hello@techaunties.com.

See omny.fm/listener for privacy information.
