
python - Does unsloth support cache directory for models? - Stack Overflow


I want to download a model from Hugging Face to be used with unsloth for training:

from unsloth import FastLanguageModel

max_seq_length = 16384
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=False,
)

However, this method doesn't seem to use any sort of local caching: it downloads the whole model from Hugging Face every time.

My question: How can I load unsloth model from local hard drive?


asked Nov 18, 2024 at 20:33 by Matt

1 Answer


It turns out to be quite simple: pass the path of the local model directory to `from_pretrained` instead of a model name:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "/content/model"
)
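As an alternative, the Hugging Face libraries that unsloth builds on honor the standard hub cache environment variables, so (assuming unsloth routes loading through `transformers`, which uses the hub cache) you can also point the cache at a persistent directory before running the script. The drive path below is a hypothetical example, e.g. a mounted Google Drive in Colab:

```shell
# Redirect the Hugging Face cache (models, tokenizers, datasets) to a
# persistent directory so repeated runs reuse previously downloaded files.
# /content/drive/MyDrive/hf_cache is an example path; use any writable dir.
export HF_HOME=/content/drive/MyDrive/hf_cache
```

With `HF_HOME` set, a plain `FastLanguageModel.from_pretrained("unsloth/Llama-3.2-1B-Instruct", ...)` should download the weights once and then hit the cache on subsequent runs.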
