DeepSeek LLM: Answers at Your Fingertips

Technical Background

Large language models in natural language processing are advancing at a remarkable pace. DeepSeek LLM is an advanced language model with 67 billion parameters, trained from scratch on a 2-trillion-token corpus of mixed English and Chinese text. To support further research, its 7B/67B base and chat models have been open-sourced.

Implementation Steps

Model Download

  • Huggingface: The DeepSeek LLM 7B/67B base and chat models can be downloaded from Huggingface (a download sketch follows the S3 example below).
  • Intermediate checkpoints: Intermediate checkpoints can be downloaded from AWS S3 with the AWS CLI, for example:
# DeepSeek-LLM-7B-Base
aws s3 cp s3://deepseek-ai/DeepSeek-LLM/DeepSeek-LLM-7B-Base <local_path> --recursive --request-payer

# DeepSeek-LLM-67B-Base
aws s3 cp s3://deepseek-ai/DeepSeek-LLM/DeepSeek-LLM-67B-Base <local_path> --recursive --request-payer
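
For the Huggingface route, the following is a minimal sketch assuming the huggingface_hub package is installed (pip install huggingface_hub); the local directory name is just an example:

# Sketch: fetch the 7B base weights from Huggingface via huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/deepseek-llm-7b-base",
    local_dir="./deepseek-llm-7b-base",  # example local path
)
print(local_dir)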

Quick Start

Install Dependencies

With Python >= 3.8, run the following command to install the dependencies:

pip install -r requirements.txt

Inference with Huggingface Transformers

  • Text completion
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-llm-67b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
  • Chat completion
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-llm-67b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
{"role": "user", "content": "Who are you?"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)

High-Throughput Inference with vLLM

  • Text completion
from vllm import LLM, SamplingParams

tp_size = 4 # Tensor Parallelism
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=100)
model_name = "deepseek-ai/deepseek-llm-67b-base"
llm = LLM(model=model_name, trust_remote_code=True, gpu_memory_utilization=0.9, tensor_parallel_size=tp_size)

prompts = [
"If everyone in a country loves one another,",
"The research should also focus on the technologies",
"To determine if the label is correct, we need to"
]
outputs = llm.generate(prompts, sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
  • Chat completion
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tp_size = 4 # Tensor Parallelism
sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=100)
model_name = "deepseek-ai/deepseek-llm-67b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, trust_remote_code=True, gpu_memory_utilization=0.9, tensor_parallel_size=tp_size)

messages_list = [
[{"role": "user", "content": "Who are you?"}],
[{"role": "user", "content": "What can you do?"}],
[{"role": "user", "content": "Explain Transformer briefly."}],
]
# Avoid adding bos_token repeatedly
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

sampling_params.stop = [tokenizer.eos_token]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)

Core Code Explained

The code above centers on model inference. With Huggingface Transformers, AutoTokenizer handles text encoding and decoding, while AutoModelForCausalLM loads the model and runs inference. With vLLM, the LLM class loads the model and SamplingParams configures the sampling parameters for high-throughput inference.

Best Practices

  • Data processing: During pretraining, several techniques are used to enrich and diversify the data, such as the distributed, frequently-checkpointing batch processing system "cc_cleaner", deterministic randomization, and data pruning and deduplication.
  • Model training: The AdamW optimizer is used, with a batch size of 2304 and a learning rate of 4.2e-4 for the 7B model, and a batch size of 4608 and a learning rate of 3.2e-4 for the 67B model, together with a multi-step learning-rate schedule (see the sketch after this list).
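
The following is a minimal sketch, not the official training code, of how the quoted optimizer and multi-step learning-rate schedule could be wired up in PyTorch; the placeholder model, step counts, milestones, and decay factor are illustrative assumptions, and only the 7B learning rate of 4.2e-4 comes from the figures above:

import torch

# Placeholder module standing in for the LLM; only the optimizer/schedule wiring matters here.
model = torch.nn.Linear(1024, 1024)

# 7B setting quoted above: AdamW with a learning rate of 4.2e-4.
optimizer = torch.optim.AdamW(model.parameters(), lr=4.2e-4)

# Multi-step schedule: decay the learning rate at fixed step milestones.
# Milestones and decay factor below are assumed for illustration.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 90], gamma=0.316)

for step in range(100):  # skeleton loop; the real run uses far more steps and batch size 2304
    x = torch.randn(4, 1024)
    loss = model(x).pow(2).mean()  # dummy loss for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()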

FAQ

Can a tokenizer.model file be provided for model quantization?

DeepSeek LLM uses the HuggingFace Tokenizer to implement a byte-level BPE algorithm, which currently cannot be converted directly into a SentencePiece tokenizer. Contributions are being made to open-source quantization methods to add support for the HuggingFace Tokenizer.
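
As a quick check, the sketch below assumes the released tokenizer is a fast (Rust-backed) HuggingFace tokenizer and inspects its backend to confirm it is a byte-level BPE rather than a SentencePiece model:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-llm-7b-base")
print(tok.is_fast)                                 # True for a tokenizers-backed (fast) tokenizer
print(type(tok.backend_tokenizer.model).__name__)  # expected "BPE", i.e. no tokenizer.model file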

GGUF (llama.cpp)

A PR has been submitted to support all HuggingFace pre-tokenizers. While it awaits merging, a GGUF model can be generated with the following steps:

git clone https://github.com/DOGEwbx/llama.cpp.git
cd llama.cpp
git checkout regex_gpt2_preprocess
# set up the environment according to README
make
python3 -m pip install -r requirements.txt
# generate GGUF model
python convert-hf-to-gguf.py <MODEL_PATH> --outfile <GGUF_PATH> --model-name deepseekllm
# use q4_0 quantization as an example
./quantize <GGUF_PATH> <OUTPUT_PATH> q4_0
./main -m <OUTPUT_PATH> -n 128 -p <PROMPT>

GPTQ (exllamav2)

exllamav2 already supports the HuggingFace Tokenizer; pull the latest version and try it out.

GPU Memory Usage

For inference under various batch-size and sequence-length settings, the 7B model runs on a single NVIDIA A100-PCIE-40GB GPU and the 67B model on eight NVIDIA A100-PCIE-40GB GPUs; the repository provides detailed memory-usage tables.
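
The tables themselves are not reproduced here, but the following is a minimal sketch of how a peak-memory figure could be measured with PyTorch; the batch size and sequence length are illustrative rather than the repository's exact settings:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-llm-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

batch_size, seq_len = 1, 1024  # assumed settings for illustration
input_ids = torch.randint(0, tokenizer.vocab_size, (batch_size, seq_len)).to(model.device)

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    model.generate(input_ids, max_new_tokens=1)
# Peak allocation on the current CUDA device, in GiB.
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")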

Model Limitations

  • Over-reliance on training data: Biases present in the data may carry over, producing biased or discriminatory responses.
  • Hallucination: The model sometimes generates responses that sound plausible but are factually incorrect.
  • Repetition: Generated responses may repeat themselves, reducing the diversity and appeal of the output.

License and Citation

  • License: The code repository is licensed under the MIT License; use of the models is governed by the Model License, which permits commercial use.
  • Citation
@article{deepseek-llm,
  author  = {DeepSeek-AI},
  title   = {DeepSeek LLM: Scaling Open-Source Language Models with Longtermism},
  journal = {arXiv preprint arXiv:2401.02954},
  year    = {2024},
  url     = {https://github.com/deepseek-ai/DeepSeek-LLM}
}

For questions, open an issue in the repository or contact [email protected].

