correct TP implementation resources (#13248)
fix a few implementation links
commit 066fd047cc
parent 4d10474fa5
@@ -220,9 +220,12 @@ Special considerations: TP requires very fast network, and therefore it's not ad
 This section is based on the original much more [detailed TP overview](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530).
 by [@anton-l](https://github.com/anton-l).
 
-Implementations:
+Alternative names:
 - DeepSpeed calls it [tensor slicing](https://www.deepspeed.ai/features/#model-parallelism)
-- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation.
+
+Implementations:
+- [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) has an internal implementation, as it's very model-specific
+- [parallelformers](https://github.com/tunib-ai/parallelformers) (only inference at the moment)
 
 🤗 Transformers status:
 - core: not yet implemented in the core
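All of the listed implementations share one core trick: each weight matrix of the model is split across devices, and each device computes only its shard of the matmul. Below is a minimal PyTorch sketch of that idea (my own illustration, not code from any of the libraries above; 2 shards simulated on one device, with the cross-device communication omitted):

```python
import torch
import torch.nn as nn

hidden, ffn, world_size = 1024, 4096, 2

# A single dense layer that we pretend is too big for one GPU.
full = nn.Linear(hidden, ffn, bias=False)

# Column-parallel split: each rank keeps ffn // world_size rows of the
# (out_features, in_features) weight, i.e. a slice of the output columns.
shards = torch.chunk(full.weight.data, world_size, dim=0)

x = torch.randn(8, hidden)

# Each rank computes its partial output independently; in a real TP setup
# the shards live on different GPUs and an all-gather reassembles them.
partials = [x @ w.t() for w in shards]
y = torch.cat(partials, dim=-1)

assert torch.allclose(y, full(x), atol=1e-5)
```

That final reassembly step is also why the doc warns that TP requires a very fast network: every sharded layer ends in a collective communication op over the activations.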
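For the newly linked [parallelformers](https://github.com/tunib-ai/parallelformers), usage looks roughly like the snippet below, adapted from its README (`parallelize()` is the library's documented entry point; the model choice and generation arguments here are just placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Shard the model across 2 GPUs; inference only, as noted in the diff above.
parallelize(model, num_gpus=2, fp16=True, verbose="detail")

inputs = tokenizer("Tensor parallelism is", return_tensors="pt")
outputs = model.generate(**inputs.to(0), max_length=20)
print(tokenizer.batch_decode(outputs)[0])
```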