AMD unveils its first small language model, AMD-135M — AI performance enhanced by speculative decoding

by Pelican Press


As AMD flexes its muscles in the AI game, it is introducing not only new hardware but also betting on software, aiming at market segments not already dominated by Nvidia.

To that end, AMD has unveiled its first small language model, AMD-135M, which belongs to the Llama family and is aimed at private business deployments. It is unclear whether the new model has anything to do with the company’s recent acquisition of Silo AI (the deal still has to be finalized and cleared by various authorities, so probably not), but it is a clear step toward addressing the needs of specific customers with a model pre-trained by AMD, using AMD hardware for inference.

The main reason AMD’s models are fast is that they use so-called speculative decoding. Speculative decoding pairs a smaller ‘draft model’, which quickly proposes several candidate tokens, with a larger, more accurate ‘target model’ that verifies or corrects the whole proposal in a single forward pass. This approach lets multiple tokens be accepted per pass of the large model, but it comes at the cost of higher power consumption due to increased data movement.
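For readers who want to see the technique in practice, here is a minimal sketch of draft/target speculative decoding using Hugging Face transformers’ assisted-generation API. The model IDs are illustrative assumptions, not AMD’s published recipe; any draft/target pair that shares the same tokenizer vocabulary should work.

```python
# Minimal sketch of speculative decoding via Hugging Face transformers'
# assisted generation. Model IDs below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TARGET_ID = "codellama/CodeLlama-7b-hf"  # larger, more accurate target model (assumed)
DRAFT_ID = "amd/AMD-Llama-135M-code"     # small draft model (assumed Hugging Face ID)

tokenizer = AutoTokenizer.from_pretrained(TARGET_ID)
target = AutoModelForCausalLM.from_pretrained(
    TARGET_ID, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    DRAFT_ID, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(target.device)

# The draft model proposes a short run of tokens; the target model then
# scores the whole run in one forward pass, keeping the accepted prefix
# and resampling at the first rejected token.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the draft and target must agree on a vocabulary, a tiny Llama-family model like AMD-135M is a natural draft partner for larger Llama-family targets.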

AMD’s new release comes in two versions: AMD-Llama-135M and AMD-Llama-135M-code, each targeting specific tasks and accelerating inference with speculative decoding, a logical approach for a small-language-model-based AI service. Unsurprisingly, both prevail in performance tests conducted by AMD.

  • The base model, AMD-Llama-135M, was trained from the ground up on 670 billion tokens of general data. This process took six days using four 8-way AMD Instinct MI250-based nodes (in AMD’s nomenclature these are just ‘four AMD MI250 nodes’). 
  • In addition, AMD-Llama-135M-code was fine-tuned with an extra 20 billion tokens specifically focused on coding, a task that took four days on the same hardware (a rough throughput estimate follows this list).
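For a sense of scale, a quick back-of-envelope calculation from the figures above. This is illustrative arithmetic only; real training throughput depends on batch size, sequence length, and parallelism strategy.

```python
# Rough per-GPU throughput implied by AMD's reported figures (illustrative only).
gpus = 4 * 8  # four 8-way MI250 nodes

for name, tokens, days in [
    ("pretraining", 670e9, 6),
    ("fine-tuning", 20e9, 4),
]:
    seconds = days * 24 * 3600
    print(f"{name}: ~{tokens / seconds / gpus:,.0f} tokens/s per GPU")

# pretraining: ~40,390 tokens/s per GPU
# fine-tuning: ~1,808 tokens/s per GPU
```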

AMD believes that further optimizations can lead to even better performance. Yet, as the company only shares benchmark numbers for its previous-generation GPUs, we can only imagine what its current-generation (MI300X) and next-generation (MI325X) accelerators could do.


