Qwen-VL Local Deployment and Fine-Tuning in Practice


  • Create a Virtual Environment
  • Model Deployment
    • Download the Model Files
    • Download the Project Code
    • Install the Python Dependencies
    • Modify Parts of web_demo_mm.py and openai_api.py
    • Launch and Test
  • Model Fine-Tuning
    • Environment Setup
    • Data Preparation
    • Fine-Tuning
  • Problems and Solutions

Create a Virtual Environment

conda create -n vl python=3.10.8

Model Deployment

Download the Model Files

https://huggingface.co/Qwen/Qwen-VL-Chat/tree/main

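If you prefer scripting the download to clicking through the web page, the huggingface_hub client can mirror the repository. A minimal sketch, assuming huggingface_hub is installed and ./Qwen-VL-Chat is the target directory:

from huggingface_hub import snapshot_download

# Mirror the whole Qwen/Qwen-VL-Chat repo into a local directory
snapshot_download(repo_id="Qwen/Qwen-VL-Chat", local_dir="./Qwen-VL-Chat")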

Download the Project Code

https://github.com/QwenLM/Qwen-VL

Install the Python Dependencies

pip3 install -r requirements.txt
pip3 install -r requirements_openai_api.txt
pip3 install -r requirements_web_demo.txt

Modify Parts of web_demo_mm.py and openai_api.py

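The gist of the edits is pointing both scripts at the locally downloaded weights instead of the Hugging Face repo id. A sketch of that kind of change, assuming the stock scripts' DEFAULT_CKPT_PATH constant; the local path is my own placeholder:

# In web_demo_mm.py and openai_api.py: use the local weights instead of the repo id
DEFAULT_CKPT_PATH = "/root/Qwen-VL-Chat"  # directory from the download step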

Launch and Test

Web page: run python web_demo_mm.py and check that the chat UI responds in the browser.

OpenAI-style API: run python openai_api.py and check that the endpoint responds.
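For a quick smoke test from Python, a plain HTTP request against the OpenAI-compatible route works. A sketch that assumes the server kept its default localhost:8000 address and that the model field is informational for this server:

import requests

# Minimal chat request against the locally served OpenAI-style endpoint
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "Qwen-VL-Chat",
        "messages": [{"role": "user", "content": "你好"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])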

Model Fine-Tuning

Environment Setup

pip3 install deepspeed
pip3 install peft
pip3 install optimum
pip3 install auto-gptq

Data Preparation

Create a JSON file:

[{"id": "identity_0","conversations": [{"from": "user","value": "你好"},{"from": "assistant","value": "我是Qwen-VL,一个支持视觉输入的大模型。"}]},{"id": "identity_1","conversations": [{"from": "user","value": "Picture 1: <img>https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg</img>\n图中的狗是什么品种?"},{"from": "assistant","value": "图中是一只拉布拉多犬。"},{"from": "user","value": "框出图中的格子衬衫"},{"from": "assistant","value": "<ref>格子衬衫</ref><box>(588,499),(725,789)</box>"}]},{ "id": "identity_2","conversations": [{"from": "user","value": "Picture 1: <img>assets/mm_tutorial/Chongqing.jpeg</img>\nPicture 2: <img>assets/mm_tutorial/Beijing.jpeg</img>\n图中都是哪"},{"from": "assistant","value": "第一张图片是重庆的城市天际线,第二张图片是北京的天际线。"}]},{"id": "identity_3","conversations": [{"from": "user","value": "Picture 1: <img>assets/mm_tutorial/gongren.png</img>\n解析图片中存在的问题,并返回相关的技术监督细则?"},{"from": "assistant","value": "图片中存在以下问题:工人没有佩戴安全帽。地面上有大量的扬尘,可能对环境和工人的健康造成威胁。电线杆之间的距离过近,可能存在安全隐患。工人站在未铺设完成的路上,可能存在安全风险。相关的技术监督细则包括:《建筑施工安全检查标准》规定,工人进入施工现场时必须佩戴安全帽。《中华人民共和国大气污染防治法》规定,企业应当采取措施防治扬尘污染。《电力设施保护条例》规定,架空电力线路的杆塔、拉线不得跨越道路。《建筑施工安全操作规程》规定,工人应当在已经铺设完成的路上工作,避免站在未完成的路面上。"}]}
]
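A malformed sample only fails once training is already underway, so it is worth sanity-checking the file first. A minimal sketch (train_data.json is a placeholder for your file name):

import json

with open("train_data.json", encoding="utf-8") as f:
    data = json.load(f)

for sample in data:
    assert "id" in sample and "conversations" in sample, sample
    for i, turn in enumerate(sample["conversations"]):
        # Turns must alternate user / assistant, starting with the user
        expected = "user" if i % 2 == 0 else "assistant"
        assert turn["from"] == expected, sample["id"]
        assert isinstance(turn["value"], str), sample["id"]
print(f"{len(data)} samples look structurally OK")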

To cover a variety of VL tasks, special tokens are provided: <img> </img> <ref> </ref> <box> </box>.

Content with image input is written as Picture id: <img>img_path</img>\n{your prompt}, where id is the index of the image within the conversation. img_path can be a local file path or a URL.

A detection box in a conversation is written as <box>(x1,y1),(x2,y2)</box>, where (x1, y1) and (x2, y2) are the top-left and bottom-right corners, normalized to the range [0, 1000). The text caption that a box refers to can be marked with <ref>text_caption</ref>.
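Because the coordinates are normalized to [0, 1000) rather than pixels, drawing a predicted box on the original image means scaling by the image size. A small sketch of the arithmetic:

def box_to_pixels(box, img_width, img_height):
    """Map a <box>(x1,y1),(x2,y2)</box> answer, normalized to [0, 1000), to pixels."""
    (x1, y1), (x2, y2) = box
    return (
        (x1 / 1000 * img_width, y1 / 1000 * img_height),
        (x2 / 1000 * img_width, y2 / 1000 * img_height),
    )

# e.g. the plaid-shirt box from identity_1 on a hypothetical 1024x768 image
print(box_to_pixels(((588, 499), (725, 789)), 1024, 768))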

Fine-Tuning

Because my resources are limited, I chose the Q-LoRA approach. Edit the settings in finetune/finetune_qlora_single_gpu.sh — chiefly the model path and the training-data path (the MODEL and DATA variables in the stock script) — and adjust the remaining parameters as appropriate.

Run sh finetune/finetune_qlora_single_gpu.sh to start fine-tuning:

(vl) [root@iZf8zjfeutx2vqfwk4rqc2Z Qwen-VL-master]# sh finetune/finetune_qlora_single_gpu.sh
[2024-02-23 10:51:48,457] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. disable_exllama, use_cuda_fp16) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:32<00:00,  6.48s/it]
Loading data...
Formatting inputs...Skip in lazy mode
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
Using /root/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py310_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.08002138137817383 seconds
/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py:96: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
  self._dummy_overflow_buf = get_accelerator().IntTensor([0])
  0%|                                                  | 0/1 [00:00<?, ?it/s]
/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
100%|██████████████████████████████████████████████████| 1/1 [00:21<00:00, 21.81s/it]
tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 1.1015, 'learning_rate': 0, 'epoch': 1.0}
{'train_runtime': 21.867, 'train_samples_per_second': 0.183, 'train_steps_per_second': 0.046, 'train_loss': 1.101470947265625, 'epoch': 1.0}
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:21<00:00, 21.86s/it]
/root/miniconda3/envs/vl/lib/python3.10/site-packages/peft/utils/save_and_load.py:148: UserWarning: Could not find a config file in /root/Qwen-VL-Chat-Int4 - will assume that the vocabulary was not modified.
  warnings.warn(

This produces the fine-tuned adapter model in the output directory.

Q-LoRA fine-tuning does not support merging the adapter into the base model; load it with the following code instead:

from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    path_to_adapter,  # path to the output directory
    device_map="auto",
    trust_remote_code=True,
).eval()
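Once loaded, inference looks the same as with the base chat model. A sketch following the standard Qwen-VL chat interface (path_to_base_model, like path_to_adapter, is a placeholder; Q-LoRA leaves the tokenizer unchanged, so it loads from the base model directory):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(path_to_base_model, trust_remote_code=True)

# Build a multimodal query with the same <img> conventions as the training data
query = tokenizer.from_list_format([
    {"image": "assets/mm_tutorial/Chongqing.jpeg"},
    {"text": "这是哪座城市?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)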

Problems and Solutions

1. mpi4py fails to install

Installing from conda sidesteps the pip build by pulling a prebuilt binary along with an MPI runtime:

conda install mpi4py

2. Running the fine-tuning script fails with a build error:

(vl) [root@iZf8zjfeutx2vqfwk4rqc2Z Qwen-VL-master]# sh finetune/finetune_qlora_single_gpu.sh
[2024-02-23 10:33:39,044] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. disable_exllama, use_cuda_fp16) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:30<00:00,  6.00s/it]
Loading data...
Formatting inputs...Skip in lazy mode
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
Using /root/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py:388: UserWarning:

                               !! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++ 4.8.5) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 5.0 and above.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.

See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
for instructions on how to install GCC 5 or higher.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                               !! WARNING !!

  warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py310_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/TH -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/vl/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
FAILED: fused_adam_frontend.o
c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/TH -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/vl/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o
c++: error: unrecognized command line option ‘-std=c++17’
c++: error: unrecognized command line option ‘-std=c++17’
[2/3] /usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/TH -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/vl/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
FAILED: multi_tensor_adam.cuda.o
/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/TH -isystem /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/envs/vl/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -DBF16_AVAILABLE -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -std=c++17 -c /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o
nvcc warning : The -std=c++17 flag is not supported with the configured host compiler. Flag will be ignored.
In file included from /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu:11:0:
/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/ATen.h:4:2: error: #error C++17 or later compatible compiler is required to use ATen.
 #error C++17 or later compatible compiler is required to use ATen.
  ^
In file included from /usr/include/c++/4.8.2/mutex:35:0,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/core/Generator.h:3,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/CPUGeneratorImpl.h:3,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/Context.h:3,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/ATen.h:7,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu:11:
/usr/include/c++/4.8.2/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support is currently experimental, and must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support for the \
  ^
In file included from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/c10/util/StringUtil.h:6:0,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/c10/util/Exception.h:5,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/core/Generator.h:11,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/CPUGeneratorImpl.h:3,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/Context.h:3,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/ATen/ATen.h:7,
                 from /root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu:11:
/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/include/c10/util/string_view.h:10:23: fatal error: string_view: No such file or directory
 #include <string_view>
                       ^
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2096, in _run_ninja_build
    subprocess.run(
  File "/root/miniconda3/envs/vl/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/Qwen-VL-master/finetune.py", line 367, in <module>
    train()
  File "/root/Qwen-VL-master/finetune.py", line 360, in train
    trainer.train()
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in train
    return inner_training_loop(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/transformers/trainer.py", line 1687, in _inner_training_loop
    model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/accelerate/accelerator.py", line 1220, in prepare
    result = self._prepare_deepspeed(*args)
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/accelerate/accelerator.py", line 1605, in _prepare_deepspeed
    engine, optimizer, _, lr_scheduler = deepspeed.initialize(**kwargs)
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/__init__.py", line 176, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 307, in __init__
    self._configure_optimizer(optimizer, model_parameters)
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1231, in _configure_optimizer
    basic_optimizer = self._configure_basic_optimizer(model_parameters)
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1308, in _configure_basic_optimizer
    optimizer = FusedAdam(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 94, in __init__
    fused_adam_cuda = FusedAdamBuilder().load()
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 478, in load
    return self.jit_load(verbose)
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 522, in jit_load
    op_module = load(name=self.name,
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1306, in load
    return _jit_compile(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1823, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/root/miniconda3/envs/vl/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2112, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'fused_adam'

Cause: the build fails because GCC is too old (4.8.5 predates C++17 support).
Fix: upgrade GCC to version 8 or above.

# Upgrade GCC to version 8+ (change the 8 to pick another version)
sudo yum install centos-release-scl
sudo yum install devtoolset-8-gcc*
scl enable devtoolset-8 bash
source /opt/rh/devtoolset-8/enable
# Replace the symlinks (without this, gcc --version shows the new release,
# but the build still complains that c++17 is unsupported)
mv /usr/bin/gcc /usr/bin/gcc-4.8.5
ln -s /opt/rh/devtoolset-8/root/bin/gcc /usr/bin/gcc
mv /usr/bin/g++ /usr/bin/g++-4.8.5
ln -s /opt/rh/devtoolset-8/root/bin/g++ /usr/bin/g++
mv /usr/bin/c++ /usr/bin/c++-4.8.5
ln -s /opt/rh/devtoolset-8/root/bin/c++ /usr/bin/c++
# Check the version (should report 8.3.1)
gcc --version
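To confirm PyTorch now accepts the compiler before re-running the fine-tune, its extension tooling can be asked directly. A quick check, assuming the vl environment is active (check_compiler_abi_compatibility is the same helper that printed the ABI warning above):

from torch.utils.cpp_extension import check_compiler_abi_compatibility

# True once the symlinked c++ is GCC 5+; the old 4.8.5 triggered the warning
print(check_compiler_abi_compatibility("c++"))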
