Table of Contents
- 1. Environment Setup
- 2. Enable Diagnostic Logging
- 3. Configure the Local LLM
- 4. Configure the Local Embedding Model
- 5. LlamaIndex Global Settings
- 6. Create the PGVectorStore
- 7. Load Data from the Database
- 8. Text Splitter: SpacyTextSplitter
- 9. Configure the Ingestion Pipeline
- 10. Create the Vector Store Index
- 11. Specify the Response Mode and Enable Streaming
In modern AI applications, managing and retrieving data effectively is a central challenge. LlamaIndex provides a flexible data framework that lets developers easily build and manage applications around large language models (LLMs). In this article, we walk through how to use LlamaIndex to build and query a knowledge-base index.
1. Environment Setup
pip install llama_index
pip install llama-index-llms-ollama
pip install llama-index-embeddings-ollama
pip install llama-index-readers-database
pip install llama-index-vector-stores-postgres
pip install langchain
pip install langchain-core
pip install langchain-text-splitters
pip install spacy
2. Enable Diagnostic Logging
import logging, sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
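Note that `basicConfig` already attaches a `StreamHandler` to the root logger, so the extra `addHandler` call above makes every message print twice; the LlamaIndex docs use this pattern for extra-verbose diagnostics. A minimal stdlib sketch of how level filtering behaves (logger name is arbitrary):

```python
import logging, sys

# basicConfig both sets the level and attaches a StreamHandler;
# adding a second handler manually would duplicate every message.
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

log = logging.getLogger("llama_demo")
log.setLevel(logging.INFO)

log.debug("filtered: below the INFO threshold")  # not emitted
log.info("emitted once")
```

Switch the level to `logging.DEBUG` to see the raw prompts LlamaIndex sends to the LLM.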
3. Configure the Local LLM
Install Ollama from https://ollama.com/ and pull a model such as Llama 3, Phi 3, Mistral, Gemma, or Qwen. For convenient testing we use qwen2:7b, which is fast and performs well.
from llama_index.llms.ollama import Ollama
llm_ollama = Ollama(base_url='http://127.0.0.1:11434', model="qwen2:7b", request_timeout=600.0)
4. Configure the Local Embedding Model
Here we use the nomic-embed-text embedding model.
from llama_index.embeddings.ollama import OllamaEmbedding
nomic_embed_text = OllamaEmbedding(base_url='http://127.0.0.1:11434', model_name='nomic-embed-text')
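nomic-embed-text maps each text to a 768-dimensional vector, and retrieval later ranks chunks by vector similarity. A pure-Python sketch of that ranking step, using toy 4-dimensional vectors in place of real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity, the usual ranking metric for embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dim vectors standing in for real 768-dim embeddings.
query_vec = [1.0, 0.0, 1.0, 0.0]
doc_vecs = {
    "doc_a": [1.0, 0.1, 0.9, 0.0],  # points the same way as the query
    "doc_b": [0.0, 1.0, 0.0, 1.0],  # orthogonal to the query
}
ranked = sorted(doc_vecs, key=lambda d: cosine_similarity(query_vec, doc_vecs[d]), reverse=True)
print(ranked)  # → ['doc_a', 'doc_b']
```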
5. LlamaIndex Global Settings
from llama_index.core import Settings
# Set the LLM
Settings.llm = llm_ollama
# Customize document chunk size
Settings.chunk_size = 500
# Set the embedding model
Settings.embed_model = nomic_embed_text
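`chunk_size=500` caps how much text goes into each chunk before embedding. A deliberately simplified, character-based sketch of what chunking does (LlamaIndex's real splitter counts tokens and respects sentence boundaries):

```python
def naive_chunks(text, chunk_size):
    """Fixed-size split; the real splitter is token-aware and
    tries to break on sentence boundaries instead."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

doc = "x" * 1200
chunks = naive_chunks(doc, 500)
print([len(c) for c in chunks])  # → [500, 500, 200]
```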
6. Create the PGVectorStore
from llama_index.core import StorageContext
from llama_index.vector_stores.postgres import PGVectorStore

vector_store = PGVectorStore.from_params(
    database="langchat",
    host="syg-node",
    password="AaC43.#5",
    port=5432,
    user="postgres",
    table_name="llama_vector_store",
    embed_dim=768,  # must match nomic-embed-text's 768-dim output
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
7. Load Data from the Database
from llama_index.readers.database import DatabaseReader

db = DatabaseReader(
    scheme="mysql",
    host="syg-node",      # Database Host
    port="3206",          # Database Port
    user="root",          # Database User
    password="AaC43.#5",  # Database Password
    dbname="stock_db",    # Database Name
)
query = """
select concat(title,'。\n',summary,'\n',content) as text from tb_article_info where content_flag =1 order by id limit 0,10
"""
documents = db.load_data(query=query)
print(f"Loaded {len(documents)} Files")
print(documents[0])
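The `concat` in the query stitches each row's title, summary, and content into a single text field, one document per row. The equivalent assembly in plain Python, using made-up sample rows:

```python
rows = [
    {"title": "Market Recap", "summary": "Indexes closed higher.",
     "content": "Full article body..."},
]

# Mirrors concat(title, '。\n', summary, '\n', content) from the SQL.
documents = [f"{r['title']}。\n{r['summary']}\n{r['content']}" for r in rows]
print(documents[0].splitlines()[0])  # → Market Recap。
```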
8. Text Splitter: SpacyTextSplitter
Install the zh_core_web_sm model:
## wheel: https://github.com/explosion/spacy-models/releases/download/zh_core_web_sm-3.7.0/zh_core_web_sm-3.7.0-py3-none-any.whl
python -m spacy download zh_core_web_sm
from llama_index.core.node_parser import LangchainNodeParser
from langchain.text_splitter import SpacyTextSplitter

spacy_text_splitter = LangchainNodeParser(SpacyTextSplitter(
    pipeline="zh_core_web_sm",
    chunk_size=512,
    chunk_overlap=128,
))
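`chunk_size=512` with `chunk_overlap=128` means consecutive chunks share material, so a sentence cut at one boundary still appears whole in the next chunk. A simplified character-level sketch of that sliding window (the real SpacyTextSplitter splits on sentences found by the spaCy pipeline):

```python
def overlapping_chunks(text, chunk_size, overlap):
    """Sliding window: each chunk repeats `overlap` characters
    from the end of the previous one."""
    step = chunk_size - overlap  # the window advances by size minus overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "".join(chr(ord("a") + i % 26) for i in range(1000))
chunks = overlapping_chunks(text, 512, 128)
print([len(c) for c in chunks])  # → [512, 512, 232]
```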
9. Configure the Ingestion Pipeline
from llama_index.core.ingestion import IngestionPipeline

pipeline = IngestionPipeline(
    transformations=[spacy_text_splitter],
    vector_store=vector_store,
)
# Build the index and store it in the vector database
nodes = pipeline.run(documents=documents)
print(f"Ingested {len(nodes)} Nodes")
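Conceptually, `IngestionPipeline.run` feeds the documents through each transformation in order and writes the resulting nodes to the vector store. A toy model of that control flow (class and names are hypothetical, not LlamaIndex's implementation):

```python
class MiniPipeline:
    """Toy sketch of an ingestion pipeline: apply each transformation
    in sequence, then persist the result to a store."""
    def __init__(self, transformations, store):
        self.transformations = transformations
        self.store = store

    def run(self, documents):
        nodes = documents
        for transform in self.transformations:
            nodes = transform(nodes)       # e.g. a text splitter
        self.store.extend(nodes)           # stand-in for vector-store writes
        return nodes

split = lambda docs: [piece for d in docs for piece in d.split("\n")]
store = []
nodes = MiniPipeline([split], store).run(["line1\nline2", "line3"])
print(nodes)  # → ['line1', 'line2', 'line3']
```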
10. Create the Vector Store Index
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex(nodes, storage_context=storage_context)
11. Specify the Response Mode and Enable Streaming
index = VectorStoreIndex.from_vector_store(vector_store=vector_store,embed_model=nomic_embed_text)
query_engine = index.as_query_engine(response_mode='tree_summarize', streaming=True)
res = query_engine.query("孩子连着上七天八天的课,确实挺累的")
res.print_response_stream()
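With `streaming=True`, the answer arrives piece by piece and `print_response_stream()` prints each piece as it comes, instead of blocking until the full response is ready. The consumption pattern resembles this pure-Python sketch (a generator standing in for the query engine, not the actual Ollama protocol):

```python
def stream_tokens(answer):
    """Yield a response piece by piece, like a streaming query engine."""
    for token in answer.split():
        yield token + " "

collected = []
for token in stream_tokens("children need rest after a long school week"):
    collected.append(token)  # a real UI would print(token, end="") here
print("".join(collected).strip())
```

`tree_summarize` builds the answer by recursively summarizing retrieved chunks, which suits broad questions over many documents.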