
# ☁️ Cloud, DevOps, Infra

Infrastructure and performance for AI/ML/DL workloads.


This section curates infrastructure, cloud, and DevOps resources for AI/ML/DL workloads. It covers compute/storage options, accelerators (CPU/GPU/TPU/IPU), platforms, tooling, and performance references with practical links.

Home · 📚 Data · 📓 Notebooks · 🧰 Tools · ☁️ Infrastructure



## Table of Contents

↑ Back to top

## System / Infra

## Compute & Storage

## Grid computing / Super computing

## Cloud services

## Tools

## CPU

Thanks to the great minds on the mechanical sympathy mailing list for their responses to my queries on CPU probing.
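The kind of CPU probing discussed on that list can be sketched with Python's standard library alone (a minimal illustration; deeper probes, e.g. cache topology or instruction-set flags, need OS-specific tools such as `lscpu` or `/proc/cpuinfo`):

```python
import os
import platform

def probe_cpu():
    """Collect basic CPU facts available from the Python standard library."""
    return {
        # Number of logical CPUs visible to this process
        "logical_cpus": os.cpu_count(),
        # Architecture string, e.g. "x86_64" or "arm64"
        "machine": platform.machine(),
        # Free-form processor description (may be empty on some OSes)
        "processor": platform.processor(),
    }

if __name__ == "__main__":
    for key, value in probe_cpu().items():
        print(f"{key}: {value}")
```

This only scratches the surface of what the mailing-list discussions cover; it is a starting point, not a substitute for hardware-level probing.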

## FPGA

## GPU

## TPU

## IPU

## Performance

## Related

## Misc

## Contributing

Contributions are very welcome, please share back with the wider community (and get credited for it)!

Please have a look at the CONTRIBUTING guidelines, and also read about our licensing policy.


#cloud-devops-infra · ↑ Back to top · ← Back home

## Data

- [AutoML Core Concepts and Hands-On Workshop](https://www.youtube.com/watch?v=QbqsOcX7KZo&feature=em-lbcastemail)
- [Episode 3: Handling Categorical Features in Machine Learning Problems](https://lnkd.in/e9Qc5fe)

## NLP

- The PlaidML Tensor Compiler - [webinar](https://event.on24.com/eventRegistration/console/EventConsoleApollo.jsp?&eventid=2026509&sessionid=1&username=&partnerref=&format=fhaudio&mobile=false&flashsupportedmobiledevice=false&helpcenter=false&key=B27628973F7FA8B9758983E373E36ED1&text_language_id=en&playerwidth=1000&playerheight=700&overwritelobby=y&eventuserid=246511746&contenttype=A&mediametricsessionid=207230377&mediametricid=2857349&usercd=246511746&mode=launch)

## Infrastructure & Cloud

- [Webinar slides: offload your code to GPU (part 1)](https://event.on24.com/event/23/51/32/1/rt/1/documents/resourceList1590781996922/s_webinarslides1590781995277.pdf)
- [Intel® DevCloud for oneAPI](https://devcloud.intel.com/oneapi/)
- [TVM](https://tvm.apache.org/docs/index.html) - an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators; it aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends

## Tools & Frameworks

- [oneAPI Toolkits](https://software.intel.com/content/www/us/en/develop/tools/oneapi.html#oneapi-toolkits)
- [Intel® Advisor](https://software.intel.com/content/www/us/en/develop/tools/advisor.html)

## Notebooks

- [Hello, TPU in Colab notebook](https://colab.research.google.com/drive/1MefSa2P6UP-gO2S0-dCjIjbvRxOnewZK#scrollTo=llcFb_P_BNPM)
- [Useful TPU and Model example](https://colab.research.google.com/drive/1F8txK1JLXKtAkcvSRQz2o7NSTNoksuU2#scrollTo=mQnZM5JYlRvs)
- [Measure Performance on TPU, in a notebook](https://colab.research.google.com/drive/11VnRHgG_067fwPGhMwBz0SmplLsf9X5h)
- [Web traffic prediction](https://adaptpartners.com/technical-seo/website-traffic-prediction-with-google-colaboratory-and-facebook-prophet/)

## Generative AI

- [GAN example, TPU version](https://colab.research.google.com/drive/1EkZPH6UE_I1a2TQfDDpjjqA7Na0_qd6v)

## Computer Vision