
# Open-Llama

This model is in maintenance mode only, so we won't accept any new PRs changing its code.

If you run into any issues running this model, please reinstall the last version that supported it: v4.31.0. You can do so by running the following command: `pip install -U transformers==4.31.0`.

This model differs from the OpenLLaMA models on the Hugging Face Hub, which primarily use the LLaMA architecture.

## Overview

The Open-Llama model was proposed by community developer s-JoL in the open-source Open-Llama project.

The model is mainly based on LLaMA with some modifications: it incorporates memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM. It is also pre-trained on both Chinese and English, which gives it better performance on Chinese-language tasks.
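The deprecated configuration exposes these architectural choices as flags. Below is a minimal sketch, assuming `transformers==4.31.0` is installed and assuming the flag names `use_memory_efficient_attention`, `use_stable_embedding`, and `shared_input_output_embedding` match that release:

```python
# Sketch only: assumes transformers==4.31.0, the last release shipping Open-Llama.
# The three flag names below are assumptions about that release's OpenLlamaConfig.
from transformers import OpenLlamaConfig, OpenLlamaModel

config = OpenLlamaConfig(
    use_memory_efficient_attention=True,  # memory-efficient attention (Xformers)
    use_stable_embedding=True,            # stable embedding (Bloom)
    shared_input_output_embedding=True,   # shared input-output embedding (PaLM)
)
model = OpenLlamaModel(config)  # randomly initialized model with this architecture
```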

This model was contributed by s-JoL. The original code can be found in the Open-Llama repository. Checkpoints and usage instructions can be found at s-JoL/Open-Llama-V1.
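As a usage sketch (after pinning `transformers==4.31.0` as noted above), loading a checkpoint and generating text might look like the following; the checkpoint name `s-JoL/Open-Llama-V1` is taken from the link above and may no longer be available on the Hub:

```python
# Sketch only: requires `pip install -U transformers==4.31.0` and assumes the
# s-JoL/Open-Llama-V1 checkpoint referenced above is still available on the Hub.
from transformers import AutoTokenizer, OpenLlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("s-JoL/Open-Llama-V1")
model = OpenLlamaForCausalLM.from_pretrained("s-JoL/Open-Llama-V1")

# The model is pre-trained on both Chinese and English.
inputs = tokenizer("你好，世界！Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```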

## OpenLlamaConfig

[[autodoc]] OpenLlamaConfig

## OpenLlamaModel

[[autodoc]] OpenLlamaModel
    - forward

## OpenLlamaForCausalLM

[[autodoc]] OpenLlamaForCausalLM
    - forward

## OpenLlamaForSequenceClassification

[[autodoc]] OpenLlamaForSequenceClassification
    - forward