Decryption works fine with no issues.
When running `py chat.py`, I encounter the following error:
```
Loading ./result...
gpu_count 1
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Loading checkpoint shards:  33%|██████████████████▎ | 1/3 [00:07<00:14, 7.28s/it]
Traceback (most recent call last):
  File "/home/dsu/ai/xf/src/transformers/modeling_utils.py", line 415, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "/home/dsu/p3/lib/python3.10/site-packages/torch/serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/home/dsu/p3/lib/python3.10/site-packages/torch/serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dsu/ai/xf/src/transformers/modeling_utils.py", line 419, in load_state_dict
    if f.read(7) == "version":
  File "/usr/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dsu/ai/palpaca/mychat.py", line 41, in <module>
    load_model("./result")
  File "/home/dsu/ai/palpaca/mychat.py", line 27, in load_model
    model = transformers.LlamaForCausalLM.from_pretrained(
  File "/home/dsu/ai/xf/src/transformers/modeling_utils.py", line 2709, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/home/dsu/ai/xf/src/transformers/modeling_utils.py", line 3023, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "/home/dsu/ai/xf/src/transformers/modeling_utils.py", line 431, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for './result/pytorch_model-00002-of-00003.bin' at './result/pytorch_model-00002-of-00003.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
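For what it's worth, the `failed finding central directory` error at the root of this traceback usually means a shard file is truncated or corrupted, because `torch.save()` writes checkpoints as zip archives. A quick sanity check is to verify each shard is a readable zip (a sketch, assuming the shards are in `./result`; the helper name `find_bad_shards` is mine):

```python
import glob
import os
import zipfile

def find_bad_shards(directory):
    """Return shard files in `directory` that are not valid zip archives.

    Modern torch.save() output is a zip archive, so a shard that fails
    zipfile.is_zipfile() is almost certainly truncated or corrupted.
    """
    bad = []
    for path in sorted(glob.glob(os.path.join(directory, "pytorch_model-*.bin"))):
        if not zipfile.is_zipfile(path):
            bad.append(path)
    return bad

if __name__ == "__main__":
    for path in find_bad_shards("./result"):
        print(f"corrupted or truncated shard: {path}")
```

If a shard is flagged, re-generating or re-downloading just that file (and comparing its size against the sizes of the other shards) is a faster fix than redoing the whole conversion.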
I'm running the latest transformers, 4.28.0.dev0 (pulled the code today), which has `LlamaTokenizer`; that is why I got the tokenizer class warning shown above. I also tried the specific transformers pin from requirements.txt (git+https://github.com/zphang/transformers.git@68d640f7c368bcaaaecfc678f11908ebbd3d6176) and got the same error.
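As an aside, the tokenizer warning itself is separate from the load failure: older conversion scripts wrote `"tokenizer_class": "LLaMATokenizer"` into `tokenizer_config.json`, while current transformers spells the class `LlamaTokenizer`. A minimal patch sketch (the `./result` path and the helper name `fix_tokenizer_class` are my assumptions):

```python
import json
import os

def fix_tokenizer_class(config_dir):
    """Rewrite an old 'LLaMATokenizer' entry in tokenizer_config.json
    to the current 'LlamaTokenizer' spelling. Returns True if a change
    was made, False if the file was already up to date."""
    path = os.path.join(config_dir, "tokenizer_config.json")
    with open(path) as f:
        config = json.load(f)
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        with open(path, "w") as f:
            json.dump(config, f, indent=2)
        return True
    return False

if __name__ == "__main__":
    print(fix_tokenizer_class("./result"))
```

This silences the warning but will not fix the `OSError`, which points at the shard file itself rather than the tokenizer.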
torch version: 2.0.0.
Has anyone run into a similar issue, and do you have suggestions for resolving it? Thanks.