LLM Quantization - Activation-aware Weight Quantization (AWQ)

1. Background

The plan is to load a large language model or code model through FastChat. 7B-parameter models work without issue;
this post tries loading quantized models at the 13B or 33B scale.

FastChat supports two kinds of quantized models, AWQ (llm-awq) and GPTQ; this post tries AWQ (llm-awq) first.
https://github.com/lm-sys/FastChat/blob/main/docs/awq.md

There is another implementation of AWQ quantization, AutoAWQ, which has already been integrated into transformers, so that version is the recommended one.
Reference: transformers/src/transformers/integrations/awq.py at main · huggingface/transformers (github.com)
This post also covers the AutoAWQ quantization method.
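
Because of that integration, an AutoAWQ-produced checkpoint can be loaded directly with the regular transformers API. Below is a minimal sketch, assuming autoawq is installed and reusing the Qwen1.5 model path from the experiments later in this post:

# Minimal sketch: load an AWQ-quantized checkpoint through transformers.
# Requires `pip install autoawq`; the path is the one used later in this post.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")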

2. Loading Models

qwen1.5

llm-awq does not support the qwen2 model class (which is what Qwen1.5 checkpoints actually use):

python3 -m fastchat.serve.cli \
    --model-path /data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ \
    --awq-wbits 4 \
    --awq-groupsize 128 
File "/home/jinxiao/code/llm-deploy/llm-awq/awq/quantize/quantizer.py", line 132, in real_quantize_model_weight
    layers = get_blocks(model)
  File "/home/jinxiao/code/llm-deploy/llm-awq/awq/quantize/pre_quant.py", line 43, in get_blocks
    raise NotImplementedError(type(model))
NotImplementedError: <class 'transformers.models.qwen2.modeling_qwen2.Qwen2ForCausalLM'>

With AutoAWQ, the model can be started directly (script below) and goes through the transformers loading path, provided that autoawq is installed (pip install autoawq)
and that the command is not run from inside the llm-awq directory; otherwise it fails with ModuleNotFoundError: No module named 'awq.modules'.
In fact, if transformers can load the AWQ model directly without going through llm-awq, it indicates the model was quantized with AutoAWQ in the first place.
Note: the launch script below omits the --awq-wbits 4 --awq-groupsize 128 flags, so FastChat falls back to loading the pretrained model through the transformers library.

python3 -m fastchat.serve.cli \
    --model-path /data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ
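
How does the transformers path know the checkpoint is AWQ-quantized? An AutoAWQ checkpoint carries a quantization_config entry in its config.json, which the transformers loader keys on. A quick way to inspect it (a sketch; the exact fields can vary by AutoAWQ version):

# Inspect the quantization metadata that transformers uses to detect an AWQ checkpoint.
import json, pathlib

model_path = "/data/shuzhang/models/qwen/Qwen1.5-14B-Chat-AWQ"
config = json.loads((pathlib.Path(model_path) / "config.json").read_text())
print(config.get("quantization_config"))
# Typically contains something like:
# {"quant_method": "awq", "bits": 4, "group_size": 128, "zero_point": True, "version": "gemm"}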

deepseek

The deepseek models use the llama architecture, so llm-awq supports them.
However, loading fails with a puzzling error, presumably a problem with the quantized checkpoint.

Quantizing it myself was not an option either: GPU memory is insufficient, and a single 24GB 3090 runs out of memory (OOM).
llm-awq also does not support splitting the model across two cards, which is disappointing.

$ python3 -m fastchat.serve.cli \
>     --model-path /data/shuzhang/models/deepseek/deepseek-coder-33B-instruct-AWQ \
>     --awq-wbits 4 \
>     --awq-groupsize 128

Loading AWQ quantized model...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
real weight quantization...(init only): 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 62/62 [00:02<00:00, 28.97it/s]

[Warning] The awq quantized checkpoint seems to be in v1 format.
If the model cannot be loaded successfully, please use the latest awq library to re-quantized the model, or repack the current checkpoint with tinychat/offline-weight-repacker.py

Loading checkpoint:   0%|                                                                                                                                   | 0/1 [00:11<?, ?it/s]
Traceback (most recent call last):
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/cli.py", line 304, in <module>
    main(args)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/cli.py", line 227, in main
    chat_loop(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/serve/inference.py", line 361, in chat_loop
    model, tokenizer = load_model(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 294, in load_model
    model, tokenizer = load_awq_quantized(model_path, awq_config, device)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/fastchat/modules/awq.py", line 65, in load_awq_quantized
    model = load_quant.load_awq_model(
  File "/home/jinxiao/code/llm-deploy/llm-awq/tinychat/utils/load_quant.py", line 82, in load_awq_model
    model = load_checkpoint_and_dispatch(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/accelerate/big_modeling.py", line 589, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1645, in load_checkpoint_in_model
    model.load_state_dict(checkpoint, strict=False)
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
      size mismatch for model.layers.34.mlp.up_proj.qweight: copying a param with shape torch.Size([7168, 2400]) from checkpoint, the shape in current model is torch.Size([4800, 7168]).
      size mismatch for model.layers.34.mlp.down_proj.qweight: copying a param with shape torch.Size([19200, 896]) from checkpoint, the shape in current model is torch.Size([1792, 19200]).
      size mismatch for model.layers.34.mlp.down_proj.scales: copying a param with shape torch.Size([150, 7168]) from checkpoint, the shape in current model is torch.Size([152, 7168]).
      ...

3. llm-awq Quantization Process

The current release of llm-awq supports:

  • AWQ search for accurate quantization.
  • Pre-computed AWQ model zoo for LLMs (LLaMA, Llama2, OPT, CodeLlama, StarCoder, Vicuna, VILA, LLaVA; load to generate quantized weights).
  • Memory-efficient 4-bit Linear in PyTorch.
  • Efficient CUDA kernel implementation for fast inference (support context and decoding stage).
  • Examples on 4-bit inference of an instruction-tuned model (Vicuna) and multi-modal LM (VILA).

The llm-awq quantization process is recorded below, to try again later if more GPU memory is available. (This attempt did not succeed due to insufficient GPU memory.)

Environment Setup

mit-han-lab/llm-awq: AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (github.com)

Quantization Steps

llm-awq/scripts/llama2_example.sh at main · mit-han-lab/llm-awq (github.com)

MODEL_NAME=deepseek-coder-6.7b-instruct
MODEL_PATH=/home/shuzhang/ai/deepseek/$MODEL_NAME

CACHE_PATH=/data/models/llm-awq
AWQ_CACHE=$CACHE_PATH/awq_cache
QUANT_CACHE=$CACHE_PATH/quant_cache


# run AWQ search (optional; we provided the pre-computed results)
python -m awq.entry --model_path $MODEL_PATH \
    --w_bit 4 --q_group_size 128 \
    --run_awq --dump_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt 

# evaluate the AWQ quantize model (simulated pseudo quantization)
python -m awq.entry --model_path $MODEL_PATH \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt \
    --q_backend fake

# generate real quantized weights (w4)
python -m awq.entry --model_path $MODEL_PATH \
    --w_bit 4 --q_group_size 128 \
    --load_awq $AWQ_CACHE/$MODEL_NAME-w4-g128.pt \
    --q_backend real --dump_quant $QUANT_CACHE/$MODEL_NAME-w4-g128-awq.pt

# load and evaluate the real quantized model (smaller gpu memory usage)
python -m awq.entry --model_path $MODEL_PATH \
    --tasks wikitext \
    --w_bit 4 --q_group_size 128 \
    --load_quant $QUANT_CACHE/$MODEL_NAME-w4-g128-awq.pt
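
For intuition about what --w_bit 4 --q_group_size 128 means, here is a minimal, generic sketch of group-wise 4-bit (fake) quantization of a weight matrix. The function name pseudo_quantize_group is hypothetical, and this illustrates only the plain group-wise rounding, not llm-awq's actual activation-aware scale search or packed kernel format:

# Generic group-wise 4-bit quantize/dequantize sketch (illustration only).
import torch

def pseudo_quantize_group(w: torch.Tensor, n_bit: int = 4, group_size: int = 128) -> torch.Tensor:
    """Quantize-dequantize a 2D weight matrix with per-group asymmetric scales."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    w = w.reshape(-1, group_size)                      # one row per group of 128 input channels
    w_max = w.amax(dim=1, keepdim=True)
    w_min = w.amin(dim=1, keepdim=True)
    q_max = 2 ** n_bit - 1                             # 15 for 4-bit
    scale = (w_max - w_min).clamp(min=1e-5) / q_max    # one scale per group
    zero = (-w_min / scale).round()                    # one zero-point per group
    w_q = torch.clamp(torch.round(w / scale) + zero, 0, q_max)
    w_dq = (w_q - zero) * scale                        # "fake quantized" weights
    return w_dq.reshape(out_features, in_features)

# Example: quantization error on a random weight matrix
w = torch.randn(4096, 4096)
err = (w - pseudo_quantize_group(w)).abs().mean()
print(f"mean abs error: {err:.4f}")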

Issues Encountered

Issue 1

Traceback (most recent call last):
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/jinxiao/code/llm-deploy/llm-awq/awq/entry.py", line 15, in <module>
    from awq.quantize.pre_quant import run_awq, apply_awq
ModuleNotFoundError: No module named 'awq.quantize.pre_quant'

Solution

  • Create the file /home/jinxiao/code/llm-deploy/llm-awq/awq/__init__.py so that the awq package and its submodules can be imported

Issue 2

File "/home/jinxiao/code/llm-deploy/llm-awq/awq/utils/calib_data.py", line 7, in get_calib_dataset
    dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")

Solution
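
  • Most likely the same network issue as Issue 1 in the AutoAWQ section below (the same calibration dataset download); the proxy workaround shown there should apply.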

4. AutoAWQ Quantization Process

https://github.com/casper-hansen/AutoAWQ?tab=readme-ov-file#examples

  • The quantization script is shown below; it takes roughly 20 minutes (two 24GB 3090 cards).
  • After quantization, the model also loads and runs fine through FastChat, with lower GPU memory usage and faster inference.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = '/data/shuzhang/models/deepseek/deepseek-coder-6.7b-instruct'
quant_path = 'deepseek-coder-6.7b-instruct-AWQ'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
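
As a quick sanity check, the quantized checkpoint can also be loaded back through transformers and asked to generate a few tokens. This is a sketch rather than part of the original run; the prompt is arbitrary:

# Load the quantized checkpoint via transformers (requires autoawq) and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

quant_path = 'deepseek-coder-6.7b-instruct-AWQ'
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))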

Issue 1

  • Loading the calibration dataset fails because of network connectivity; with a proxy it should work fine:
dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")

Solution
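
One way to apply the proxy is to set the standard proxy environment variables before the dataset download runs. A sketch under the assumption that a local proxy is available; the address below is a placeholder:

import os

# Hypothetical local proxy address; replace with one that is actually reachable
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

from datasets import load_dataset
dataset = load_dataset("mit-han-lab/pile-val-backup", split="validation")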

Issue 2

File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1067, in _update_causal_mask
    if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
  File "/home/jinxiao/miniconda3/envs/llm_new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Catcher' object has no attribute 'self_attn'

Solution

  • File: .../miniconda3/envs/llm_new/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py
if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
=> change to
if False:

5. Summary

Since no llm-awq-quantized model was tested here, and quantizing with llm-awq did not succeed either,
it remains unclear how llm-awq-quantized models behave in terms of inference speed and GPU memory usage.
If their resource usage and inference speed turn out to be similar to AutoAWQ's, AutoAWQ is the recommended choice.
