Paper title: ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU
Paper link: https://arxiv.org/abs/2312.02515
Paper translation: https://arxivtools.blob.core.windows.net/xueshuxiangzi/paper/html/2023-12-6/2312.02515.pdf
Reference code: https://github.com/TUDB-Labs/multi-lora-fine-tune

Abstract: Transformer-based LLMs exhibit strong performance across many domains, particularly when fine-tuned for a specific domain. The resources required to fine-tune an LLM can be reduced by parameter-efficient methods such as Low-Rank Adaptation (LoRA)...
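To make the LoRA idea mentioned in the abstract concrete, here is a minimal NumPy sketch (names like `LoRALinear` are illustrative, not ASPEN's or any library's API): instead of updating a full d_out x d_in weight matrix W, LoRA freezes W and trains two small rank-r factors B and A, so the effective weight is W + (alpha/r) * B @ A and the number of trainable parameters drops from d_out*d_in to r*(d_in+d_out).

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA-adapted linear layer (hypothetical helper, not ASPEN code)."""

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        # Trainable low-rank factors: A is small-random, B starts at zero,
        # so the adapter initially contributes nothing to the output.
        self.A = rng.standard_normal((r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))
        self.scaling = alpha / r

    def forward(self, x):
        # x: (batch, d_in). Base path plus the low-rank update B @ A.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scaling

    def trainable_params(self):
        return self.A.size + self.B.size


layer = LoRALinear(d_in=4096, d_out=4096, r=8)
print("full fine-tune params:", layer.W.size)          # 4096*4096 = 16,777,216
print("LoRA trainable params:", layer.trainable_params())  # 2*8*4096 = 65,536
```

With r=8 on a 4096x4096 projection, LoRA trains roughly 0.4% of the parameters a full fine-tune would, which is the resource saving the abstract refers to.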