
Using the Accelerate Library for LLM Inference on Multiple GPUs

Large language models (LLMs) have revolutionized natural language processing. As these models grow in size and complexity, the computational demands of inference grow significantly as well. Using multiple GPUs becomes essential to meet this challenge.

This article shows how to run inference in parallel on multiple GPUs. It covers: an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks across multiple GPUs.

We scale llama2-7b inference across multiple RTX 3090 GPUs.

Basic Example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate.

from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# each GPU creates a string
message=[ f"Hello this is GPU {accelerator.process_index}" ]

# collect the messages from all GPUs
messages=gather_object(message)

# output the messages only on the main process with accelerator.print()
accelerator.print(messages)

The output looks like this:

['Hello this is GPU 0', 
  'Hello this is GPU 1', 
  'Hello this is GPU 2', 
  'Hello this is GPU 3', 
  'Hello this is GPU 4']
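
To run the script on all GPUs, launch it with Accelerate's launcher, which starts one process per GPU. A minimal sketch (the file name message_passing.py is illustrative, and --num_processes should match the number of GPUs available):

accelerate launch --num_processes 5 message_passing.py

The same launch command is used for the inference scripts below.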

Multi-GPU Inference

Below is a simple, non-batched inference approach. The code stays short because the Accelerate library already does most of the work for us; we can use it directly:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    # store output of generations in dict
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference, prompt by prompt
    for prompt in prompts:
        prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
        output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

        # remove prompt from output
        output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

        # store outputs and number of tokens in results{}
        results["outputs"].append( tokenizer.decode(output_tokenized) )
        results["num_tokens"] += len(output_tokenized)

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")

Using multiple GPUs introduces some communication overhead: in this particular setup, throughput grows roughly linearly up to 4 GPUs and then levels off. Of course, performance depends on many parameters, such as model size and quantization, prompt length, number of generated tokens, and the sampling strategy, so we only discuss the general case here.

1 GPU: 44 tokens/sec, time: 225.5s

2 GPUs: 88 tokens/sec, time: 112.9s

3 GPUs: 128 tokens/sec, time: 77.6s

4 GPUs: 137 tokens/sec, time: 72.7s

5 GPUs: 119 tokens/sec, time: 83.8s

Batched Inference on Multiple GPUs

In real-world use we can speed things up with batched inference, which reduces the per-prompt overhead and the communication between GPUs. We only need to add a prepare_prompts function that feeds the model a batch of prompts instead of a single prompt at a time:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

def write_pretty_json(file_path, data):
    import json
    with open(file_path, "w") as write_file:
        json.dump(data, write_file, indent=4)

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# batch, left pad (for inference), and tokenize
def prepare_prompts(prompts, tokenizer, batch_size=16):
    batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]
    batches_tok=[]
    tokenizer.padding_side="left"
    for prompt_batch in batches:
        batches_tok.append(
            tokenizer(
                prompt_batch,
                return_tensors="pt",
                padding='longest',
                truncation=False,
                pad_to_multiple_of=8,
                add_special_tokens=False).to("cuda")
            )
    tokenizer.padding_side="right"
    return batches_tok

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference in batches
    prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

    for prompts_tokenized in prompt_batches:
        outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

        # remove prompt from gen. tokens
        outputs_tokenized=[ tok_out[len(tok_in):]
            for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ]

        # count and decode gen. tokens
        num_tokens=sum([ len(t) for t in outputs_tokenized ])
        outputs=tokenizer.batch_decode(outputs_tokenized)

        # store in results{} to be gathered by accelerate
        results["outputs"].extend(outputs)
        results["num_tokens"] += num_tokens

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")

As you can see, batching speeds things up considerably:

1 GPU: 520 tokens/sec, time: 19.2s

2 GPUs: 900 tokens/sec, time: 11.1s

3 GPUs: 1205 tokens/sec, time: 8.2s

4 GPUs: 1655 tokens/sec, time: 6.0s

5 GPUs: 1658 tokens/sec, time: 6.0s
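
The batched listing defines a write_pretty_json helper but never calls it. As a minimal sketch of how the gathered outputs could be saved on the main process (the file name outputs.json is only illustrative):

if accelerator.is_main_process:
    # flatten the per-GPU output lists and write them to disk
    outputs_all = [o for r in results_gathered for o in r["outputs"]]
    write_pretty_json("outputs.json", {"outputs": outputs_all})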

Summary

As of this writing, llama.cpp and ctransformers do not support multi-GPU inference. llama.cpp appears to have merged multi-GPU work back in June, but I have not seen it confirmed in an official release, so for now I will treat multi-GPU as unsupported there. If anyone can confirm that it works across multiple GPUs, please leave a comment.

Hugging Face's Accelerate package gives us a very convenient way to use multiple GPUs. Inference on multiple GPUs can improve performance significantly, but the communication overhead between GPUs grows noticeably as more GPUs are added.

Editor: 華軒 | Source: DeepHub IMBA