# Train anywhere, Infer on Qualcomm Cloud AI 100

[Click here](https://www.qualcomm.com/developer/blog/2024/01/train-anywhere-infer-qualcomm-cloud-ai-100)

# How to Quadruple LLM Decoding Performance with Speculative Decoding (SpD) and Microscaling (MX) Formats on Qualcomm® Cloud AI 100

# Power-efficient acceleration for large language models – Qualcomm Cloud AI SDK

[Click here](https://www.qualcomm.com/developer/blog/2023/11/power-efficient-acceleration-large-language-models-qualcomm-cloud-ai-sdk)

# Qualcomm Cloud AI 100 Accelerates Large Language Model Inference by ~2x Using Microscaling (Mx) Formats

[Click here](https://www.qualcomm.com/developer/blog/2024/01/qualcomm-cloud-ai-100-accelerates-large-language-model-inference-2x-using-microscaling-mx)

# Qualcomm Cloud AI Introduces Efficient Transformers: One API, Infinite Possibilities

[Click here](https://www.qualcomm.com/developer/blog/2024/05/qualcomm-cloud-ai-introduces-efficient-transformers-one-api)