Build Your Own Database System: Implementing a Small SQL Interpreter (Part 1)


Every database system has one core component: the SQL interpreter. Anyone who has used MySQL knows that we drive the database by writing code made of SQL statements, so the database must contain a SQL interpreter that reads that code and, following its intent, drives the database to perform the corresponding operations. In this section we build a simple SQL interpreter.

An interpreter is grounded in compiler theory. I have a dedicated video series on compiler algorithms on Bilibili, so I won't repeat that material here. The first step in implementing an interpreter is to build a lexer. In my Bilibili compiler series I implemented a small compiler (dragon-compiler), so I will take its lexer, adapt it slightly, and make it tokenize SQL. First copy the lexer package of that project into ours and open token.go. We modify the token definitions: add the SQL keywords and remove the definitions that are irrelevant to SQL. The modified code looks like this:

package lexer

type Tag uint32

const (
	// AND corresponds to the SQL keyword AND; tags start at 256 to leave
	// room below for single-character ASCII tokens
	AND Tag = iota + 256
	EQ
	FALSE
	GE
	ID
	INDEX
	LE
	INT
	FLOAT
	MINUS
	PLUS
	NE
	NUM
	// OR and TRUE stay defined: the token map and keyword table reference them
	OR
	REAL
	TRUE
	LEFT_BRACE    // "{"
	RIGHT_BRACE   // "}"
	LEFT_BRACKET  // "("
	RIGHT_BRACKET // ")"
	AND_OPERATOR
	OR_OPERATOR
	ASSIGN_OPERATOR
	NEGATE_OPERATOR
	LESS_OPERATOR
	GREATER_OPERATOR
	BASIC // type names such as int, float, bool, char
	// SQL keywords added below (INT and INDEX are already defined above,
	// so they are not repeated here)
	SELECT
	FROM
	WHERE
	INSERT
	INTO
	VALUES
	DELETE
	UPDATE
	SET
	CREATE
	TABLE
	VARCHAR
	VIEW
	AS
	ON
	COMMA
	STRING
	// end of SQL keyword definitions
	EOF
	ERROR
)

var token_map = make(map[Tag]string)

func init() {
	// map each tag to a display string, SQL keywords first
	token_map[AND] = "AND"
	token_map[SELECT] = "SELECT"
	token_map[FROM] = "FROM"
	token_map[WHERE] = "WHERE"
	token_map[INSERT] = "INSERT"
	token_map[INTO] = "INTO"
	token_map[VALUES] = "VALUES"
	token_map[DELETE] = "DELETE"
	token_map[UPDATE] = "UPDATE"
	token_map[SET] = "SET"
	token_map[CREATE] = "CREATE"
	token_map[TABLE] = "TABLE"
	token_map[INT] = "INT"
	token_map[VARCHAR] = "VARCHAR"
	token_map[VIEW] = "VIEW"
	token_map[AS] = "AS"
	token_map[INDEX] = "INDEX"
	token_map[ON] = "ON"
	token_map[COMMA] = ","
	token_map[STRING] = "STRING"
	token_map[BASIC] = "BASIC"
	token_map[EQ] = "EQ"
	token_map[FALSE] = "FALSE"
	token_map[GE] = "GE"
	token_map[ID] = "ID"
	token_map[FLOAT] = "FLOAT"
	token_map[LE] = "<="
	token_map[MINUS] = "-"
	token_map[PLUS] = "+"
	token_map[NE] = "!="
	token_map[NUM] = "NUM"
	token_map[OR] = "OR"
	token_map[REAL] = "REAL"
	token_map[TRUE] = "TRUE"
	token_map[AND_OPERATOR] = "&"
	token_map[OR_OPERATOR] = "|"
	token_map[ASSIGN_OPERATOR] = "="
	token_map[NEGATE_OPERATOR] = "!"
	token_map[LESS_OPERATOR] = "<"
	token_map[GREATER_OPERATOR] = ">"
	token_map[LEFT_BRACE] = "{"
	token_map[RIGHT_BRACE] = "}"
	token_map[LEFT_BRACKET] = "("
	token_map[RIGHT_BRACKET] = ")"
	token_map[EOF] = "EOF"
	token_map[ERROR] = "ERROR"
}

type Token struct {
	lexeme string
	Tag    Tag
}

func (t *Token) ToString() string {
	if t.lexeme == "" {
		return token_map[t.Tag]
	}
	return t.lexeme
}

func NewToken(tag Tag) Token {
	return Token{
		lexeme: "",
		Tag:    tag,
	}
}

func NewTokenWithString(tag Tag, lexeme string) *Token {
	return &Token{
		lexeme: lexeme,
		Tag:    tag,
	}
}

In the modification above we removed the original C-language keywords and added a set of keywords for SQL. Next open word_token.go and change it as follows:

package lexer

type Word struct {
	lexeme string
	Tag    Token
}

func NewWordToken(s string, tag Tag) Word {
	return Word{
		lexeme: s,
		Tag:    NewToken(tag),
	}
}

func (w *Word) ToString() string {
	return w.lexeme
}

func GetKeyWords() []Word {
	key_words := []Word{}
	key_words = append(key_words, NewWordToken("||", OR))
	key_words = append(key_words, NewWordToken("==", EQ))
	key_words = append(key_words, NewWordToken("!=", NE))
	key_words = append(key_words, NewWordToken("<=", LE))
	key_words = append(key_words, NewWordToken(">=", GE))
	// SQL keywords (note WHERE: the lexer test below depends on it)
	key_words = append(key_words, NewWordToken("AND", AND))
	key_words = append(key_words, NewWordToken("SELECT", SELECT))
	key_words = append(key_words, NewWordToken("FROM", FROM))
	key_words = append(key_words, NewWordToken("WHERE", WHERE))
	key_words = append(key_words, NewWordToken("INSERT", INSERT))
	key_words = append(key_words, NewWordToken("INTO", INTO))
	key_words = append(key_words, NewWordToken("VALUES", VALUES))
	key_words = append(key_words, NewWordToken("DELETE", DELETE))
	key_words = append(key_words, NewWordToken("UPDATE", UPDATE))
	key_words = append(key_words, NewWordToken("SET", SET))
	key_words = append(key_words, NewWordToken("CREATE", CREATE))
	key_words = append(key_words, NewWordToken("TABLE", TABLE))
	key_words = append(key_words, NewWordToken("INT", INT))
	key_words = append(key_words, NewWordToken("VARCHAR", VARCHAR))
	key_words = append(key_words, NewWordToken("VIEW", VIEW))
	key_words = append(key_words, NewWordToken("AS", AS))
	key_words = append(key_words, NewWordToken("INDEX", INDEX))
	key_words = append(key_words, NewWordToken("ON", ON))
	// the C-language keywords the dragon-compiler lexer registered here
	// (if, else, while, do, break, true, false, ...) have been removed
	return key_words
}

Again the change removes the original C keywords and adds the SQL keyword definitions. Apart from that, the basic logic of the lexer is unchanged; here is its code (lexer.go):

package lexer

import (
	"bufio"
	"strconv"
	"strings"
	"unicode"
)

type Lexer struct {
	Lexeme       string
	lexemeStack  []string
	tokenStack   []Token
	peek         byte
	Line         uint32
	reader       *bufio.Reader
	read_pointer int
	key_words    map[string]Token
}

func NewLexer(source string) Lexer {
	str := strings.NewReader(source)
	source_reader := bufio.NewReaderSize(str, len(source))
	lexer := Lexer{
		Line:      uint32(1),
		reader:    source_reader,
		key_words: make(map[string]Token),
	}
	lexer.reserve()
	return lexer
}

func (l *Lexer) ReverseScan() {
	// bufio can only unread a single byte, so instead of unreading the whole
	// lexeme we step the read pointer back and let Scan replay the token
	// from the stacks below
	if l.read_pointer > 0 {
		l.read_pointer = l.read_pointer - 1
	}
}

func (l *Lexer) reserve() {
	key_words := GetKeyWords()
	for _, key_word := range key_words {
		l.key_words[key_word.ToString()] = key_word.Tag
	}
}

func (l *Lexer) Readch() error {
	char, err := l.reader.ReadByte() // read the next character ahead of time
	l.peek = char
	return err
}

func (l *Lexer) ReadCharacter(c byte) (bool, error) {
	chars, err := l.reader.Peek(1)
	if err != nil {
		return false, err
	}
	peekChar := chars[0]
	if peekChar != c {
		return false, nil
	}
	l.Readch() // consume the peeked character
	return true, nil
}

func (l *Lexer) UnRead() error {
	return l.reader.UnreadByte()
}

// appendToken records the scanned lexeme and token on the two stacks so
// that ReverseScan can replay them later
func (l *Lexer) appendToken(token Token) Token {
	l.lexemeStack = append(l.lexemeStack, l.Lexeme)
	l.tokenStack = append(l.tokenStack, token)
	return token
}

func (l *Lexer) Scan() (Token, error) {
	// replay a token that was given back by ReverseScan
	if l.read_pointer < len(l.lexemeStack) {
		l.Lexeme = l.lexemeStack[l.read_pointer]
		token := l.tokenStack[l.read_pointer]
		l.read_pointer = l.read_pointer + 1
		return token, nil
	} else {
		l.read_pointer = l.read_pointer + 1
	}

	// skip whitespace, counting lines
	for {
		err := l.Readch()
		if err != nil {
			return NewToken(ERROR), err
		}
		if l.peek == ' ' || l.peek == '\t' {
			continue
		} else if l.peek == '\n' {
			l.Line = l.Line + 1
		} else {
			break
		}
	}

	l.Lexeme = ""
	switch l.peek {
	case ',':
		l.Lexeme = ","
		return l.appendToken(NewToken(COMMA)), nil
	case '{':
		l.Lexeme = "{"
		return l.appendToken(NewToken(LEFT_BRACE)), nil
	case '}':
		l.Lexeme = "}"
		return l.appendToken(NewToken(RIGHT_BRACE)), nil
	case '+':
		l.Lexeme = "+"
		return l.appendToken(NewToken(PLUS)), nil
	case '-':
		l.Lexeme = "-"
		return l.appendToken(NewToken(MINUS)), nil
	case '(':
		l.Lexeme = "("
		return l.appendToken(NewToken(LEFT_BRACKET)), nil
	case ')':
		l.Lexeme = ")"
		return l.appendToken(NewToken(RIGHT_BRACKET)), nil
	case '&':
		// "&&" is logical AND, a lone "&" is the AND operator token
		if ok, err := l.ReadCharacter('&'); ok {
			l.Lexeme = "&&"
			return l.appendToken(NewWordToken("&&", AND).Tag), err
		} else {
			l.Lexeme = "&"
			return l.appendToken(NewToken(AND_OPERATOR)), err
		}
	case '|':
		if ok, err := l.ReadCharacter('|'); ok {
			l.Lexeme = "||"
			return l.appendToken(NewWordToken("||", OR).Tag), err
		} else {
			l.Lexeme = "|"
			return l.appendToken(NewToken(OR_OPERATOR)), err
		}
	case '=':
		// "==" is the equality test, a lone "=" is the assignment/compare token
		if ok, err := l.ReadCharacter('='); ok {
			l.Lexeme = "=="
			return l.appendToken(NewWordToken("==", EQ).Tag), err
		} else {
			l.Lexeme = "="
			return l.appendToken(NewToken(ASSIGN_OPERATOR)), err
		}
	case '!':
		if ok, err := l.ReadCharacter('='); ok {
			l.Lexeme = "!="
			return l.appendToken(NewWordToken("!=", NE).Tag), err
		} else {
			l.Lexeme = "!"
			return l.appendToken(NewToken(NEGATE_OPERATOR)), err
		}
	case '<':
		if ok, err := l.ReadCharacter('='); ok {
			l.Lexeme = "<="
			return l.appendToken(NewWordToken("<=", LE).Tag), err
		} else {
			l.Lexeme = "<"
			return l.appendToken(NewToken(LESS_OPERATOR)), err
		}
	case '>':
		if ok, err := l.ReadCharacter('='); ok {
			l.Lexeme = ">="
			return l.appendToken(NewWordToken(">=", GE).Tag), err
		} else {
			l.Lexeme = ">"
			return l.appendToken(NewToken(GREATER_OPERATOR)), err
		}
	case '"':
		// consume characters until the closing quote
		for {
			err := l.Readch()
			if err != nil {
				panic("string not terminated by a closing quote")
			}
			if l.peek == '"' {
				token := NewToken(STRING)
				token.lexeme = l.Lexeme
				return l.appendToken(token), nil
			}
			l.Lexeme += string(l.peek)
		}
	}

	if unicode.IsNumber(rune(l.peek)) {
		// integer part
		var v int
		for {
			num, err := strconv.Atoi(string(l.peek))
			if err != nil {
				if l.peek != 0 { // l.peek == 0 means the input is exhausted
					l.UnRead() // push the character back for the next scan
				}
				break
			}
			v = 10*v + num
			l.Lexeme += string(l.peek)
			l.Readch()
		}
		if l.peek != '.' {
			token := NewToken(NUM)
			token.lexeme = l.Lexeme
			return l.appendToken(token), nil
		}
		// fractional part
		l.Lexeme += string(l.peek)
		x := float64(v)
		d := float64(10)
		for {
			l.Readch() // step past "." and then past each digit we consume
			num, err := strconv.Atoi(string(l.peek))
			if err != nil {
				if l.peek != 0 {
					l.UnRead()
				}
				break
			}
			x = x + float64(num)/d
			d = d * 10
			l.Lexeme += string(l.peek)
		}
		token := NewToken(REAL)
		token.lexeme = l.Lexeme
		return l.appendToken(token), nil
	}

	if unicode.IsLetter(rune(l.peek)) {
		var buffer []byte
		for {
			buffer = append(buffer, l.peek)
			l.Lexeme += string(l.peek)
			l.Readch()
			if !unicode.IsLetter(rune(l.peek)) {
				if l.peek != 0 { // l.peek == 0 means the input is exhausted
					l.UnRead() // push the character back for the next scan
				}
				break
			}
		}
		s := string(buffer)
		// SQL keywords are case-insensitive, so normalize before the lookup;
		// without this, lowercase input like "select" would scan as an ID
		token, ok := l.key_words[strings.ToUpper(s)]
		if ok {
			return l.appendToken(token), nil
		}
		token = NewToken(ID)
		token.lexeme = l.Lexeme
		return l.appendToken(token), nil
	}

	return NewToken(EOF), nil
}
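
Because bufio.Reader can only unread a single byte, the lexer records every lexeme and token it produces on the two stacks, and ReverseScan merely steps read_pointer back; the next Scan then replays the saved token instead of re-reading bytes. A quick illustration of this one-token lookahead (a throwaway snippet, not one of the project files):

package main

import (
	"fmt"
	"lexer"
)

func main() {
	l := lexer.NewLexer("select age")

	tok, _ := l.Scan() // SELECT keyword
	tok, _ = l.Scan()  // identifier "age"
	l.ReverseScan()    // give the identifier back
	tok, _ = l.Scan()  // replayed from the token stacks: still "age"
	fmt.Println(tok.ToString()) // prints "age"
}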

To save space I haven't reproduced the changes to every file here; search for "coding迪斯尼" on Bilibili for the details. Let's now exercise the code above by adding the following test to main.go:

package main

import (
	"fmt"
	"lexer"
)

func main() {
	sqlLexer := lexer.NewLexer("select name , sex from student where age > 20")

	var tokens []*lexer.Token
	tokens = append(tokens, lexer.NewTokenWithString(lexer.SELECT, "select"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "name"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.COMMA, ","))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "sex"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.FROM, "from"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "student"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.WHERE, "where"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "age"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.GREATER_OPERATOR, ">"))
	tokens = append(tokens, lexer.NewTokenWithString(lexer.NUM, "20"))

	pass := true
	for _, tok := range tokens {
		sqlTok, err := sqlLexer.Scan()
		if err != nil {
			fmt.Println("lexer error")
			pass = false
			break
		}
		if sqlTok.Tag != tok.Tag {
			fmt.Printf("token err, expect: %v, but got %v\n", tok, sqlTok)
			pass = false
			break
		}
	}
	if pass {
		fmt.Println("lexer testing pass...")
	}
}

Running this shows the final line "lexer testing pass..." printed, so the basic logic of the lexer is correct. Next we turn to parsing. To keep the length manageable we only handle a small subset of SQL; interested readers can complete the interpreter on their own. First we define the part of the SQL grammar we are going to parse:

FIELD -> ID
CONSTANT -> STRING | NUM
EXPRESSION -> FIELD | CONSTANT
TERM -> EXPRESSION EQ EXPRESSION
PREDICATE -> TERM (AND PREDICATE)?

QUERY -> SELECT SELECT_LIST FROM TABLE_LIST (WHERE PREDICATE)?
SELECT_LIST -> FIELD (COMMA SELECT_LIST)?
TABLE_LIST -> ID (COMMA TABLE_LIST)?

UPDATE_COMMAND -> INSERT_COMMAND | DELETE_COMMAND | MODIFY_COMMAND | CREATE_COMMAND
CREATE_COMMAND -> CREATE_TABLE | CREATE_VIEW | CREATE_INDEX
INSERT_COMMAND -> INSERT INTO ID LEFT_BRACKET FIELD_LIST RIGHT_BRACKET VALUES LEFT_BRACKET CONSTANT_LIST RIGHT_BRACKET
FIELD_LIST -> FIELD (COMMA FIELD_LIST)?
CONSTANT_LIST -> CONSTANT (COMMA CONSTANT_LIST)?

DELETE_COMMAND -> DELETE FROM ID (WHERE PREDICATE)?

MODIFY_COMMAND -> UPDATE ID SET FIELD EQ EXPRESSION (WHERE PREDICATE)?

CREATE_TABLE -> CREATE TABLE ID LEFT_BRACKET FIELD_DEFS RIGHT_BRACKET
FIELD_DEFS -> FIELD_DEF (COMMA FIELD_DEFS)?
FIELD_DEF -> ID TYPE_DEF
TYPE_DEF -> INT | VARCHAR LEFT_BRACKET NUM RIGHT_BRACKET

CREATE_VIEW -> CREATE VIEW ID AS QUERY
CREATE_INDEX -> CREATE INDEX ID ON ID LEFT_BRACKET FIELD RIGHT_BRACKET
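
A note on the notation: a trailing (X)? marks an optional part, so a right-recursive rule such as FIELD_LIST -> FIELD (COMMA FIELD_LIST)? describes a comma-separated list. Under recursive descent, each such rule becomes one function that parses the mandatory head, scans one token of lookahead, and either recurses or gives the token back. A schematic sketch of that shape (the fieldList name here is illustrative; the SelectList function later in this section follows exactly this pattern):

// Schematic translation of FIELD_LIST -> FIELD (COMMA FIELD_LIST)?
func (p *SQLParser) fieldList() []string {
	l := []string{}
	_, field := p.Field() // the mandatory head: FIELD
	l = append(l, field)
	tok, err := p.sqlLexer.Scan() // one token of lookahead
	if err == nil && tok.Tag == lexer.COMMA {
		// the optional tail is present: recurse for the rest of the list
		l = append(l, p.fieldList()...)
	} else {
		p.sqlLexer.ReverseScan() // not a comma: give the token back
	}
	return l
}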

Now let's see how to parse SQL code using the grammar above. We use top-down recursive-descent parsing; the algorithm itself is covered in my Bilibili compiler videos. Create a folder named parser in the project and add a parser.go file inside it. To keep things simple we implement a small piece at a time and run it to verify the result. We begin with the TERM rule; the code is as follows:

package parser

import (
	"fmt" // used by Predicate below to tell EOF apart from real errors
	"lexer"
	"query"
	"strconv"
	"strings"
)

type SQLParser struct {
	sqlLexer lexer.Lexer
}

func NewSQLParser(s string) *SQLParser {
	return &SQLParser{
		sqlLexer: lexer.NewLexer(s),
	}
}

func (p *SQLParser) Field() (lexer.Token, string) {
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag != lexer.ID {
		panic("Tag of FIELD is not ID")
	}
	return tok, p.sqlLexer.Lexeme
}

func (p *SQLParser) Constant() *query.Constant {
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	switch tok.Tag {
	case lexer.STRING:
		// copy the lexeme so the constant does not alias lexer state
		s := strings.Clone(p.sqlLexer.Lexeme)
		return query.NewConstantWithString(&s)
	case lexer.NUM:
		v, err := strconv.Atoi(p.sqlLexer.Lexeme)
		if err != nil {
			panic("string is not a number")
		}
		return query.NewConstantWithInt(&v)
	default:
		panic("token is not string or num when parsing constant")
	}
}

func (p *SQLParser) Expression() *query.Expression {
	// EXPRESSION -> FIELD | CONSTANT
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag == lexer.ID {
		p.sqlLexer.ReverseScan()
		_, str := p.Field()
		return query.NewExpressionWithString(str)
	} else {
		p.sqlLexer.ReverseScan()
		constant := p.Constant()
		return query.NewExpressionWithConstant(constant)
	}
}

func (p *SQLParser) Term() *query.Term {
	// TERM -> EXPRESSION EQ EXPRESSION; only "=" is supported here
	lhs := p.Expression()
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag != lexer.ASSIGN_OPERATOR {
		panic("should have = in middle of term")
	}
	rhs := p.Expression()
	return query.NewTerm(lhs, rhs)
}
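
One caveat: the query package used above (Constant, Expression, Term and, later, Predicate) comes from earlier installments of this series and is not reproduced in this article. If you are following along without it, a minimal stand-in consistent with the constructor calls above could look like the sketch below; the field names and ToString methods are my assumptions, not the series' actual implementation.

package query

import "strconv"

// Constant holds either a string or an int value.
type Constant struct {
	strVal *string
	intVal *int
}

func NewConstantWithString(s *string) *Constant {
	return &Constant{strVal: s}
}

func NewConstantWithInt(v *int) *Constant {
	return &Constant{intVal: v}
}

func (c *Constant) ToString() string {
	if c.strVal != nil {
		return *c.strVal
	}
	return strconv.Itoa(*c.intVal)
}

// Expression is either a field name or a constant.
type Expression struct {
	fldName string
	val     *Constant
}

func NewExpressionWithString(fldName string) *Expression {
	return &Expression{fldName: fldName}
}

func NewExpressionWithConstant(c *Constant) *Expression {
	return &Expression{val: c}
}

func (e *Expression) ToString() string {
	if e.val != nil {
		return e.val.ToString()
	}
	return e.fldName
}

// Term is an equality comparison between two expressions.
type Term struct {
	lhs *Expression
	rhs *Expression
}

func NewTerm(lhs *Expression, rhs *Expression) *Term {
	return &Term{lhs: lhs, rhs: rhs}
}

func (t *Term) ToString() string {
	return t.lhs.ToString() + "=" + t.rhs.ToString()
}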

The TERM rule parses comparisons such as "name = 'jim'" that typically appear after WHERE. Note that the code above only accepts "=" between the two expressions, so a term like "age < 20" is not supported yet. Let's test the parsing code by putting the following in main.go:

package main

import (
	"fmt"
	"parser"
)

func main() {
	sqlParser := parser.NewSQLParser("age = 20")
	term := sqlParser.Term()
	fmt.Printf("term: %v\n", term)
}
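
A small caveat when running this: %v on a *query.Term mostly prints the addresses of the embedded expressions. If your Term type exposes a ToString method (as the stand-in sketch above does), fmt.Println(term.ToString()) prints a readable form such as age=20.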

See the Bilibili video for a step-by-step debugging walkthrough of the code above; it makes the logic much easier to absorb. Next we implement parsing for the following rules:
PREDICATE -> TERM (AND PREDICATE)?
QUERY -> SELECT SELECT_LIST FROM TABLE_LIST (WHERE PREDICATE)?
SELECT_LIST -> FIELD (COMMA SELECT_LIST)?
TABLE_LIST -> ID (COMMA TABLE_LIST)?

Note that PREDICATE corresponds to everything after WHERE. For example, in "where a = b and c = d", the part "a = b and c = d" is the PREDICATE of the grammar. The corresponding code is:


func (p *SQLParser) Predicate() *query.Predicate {
	// PREDICATE is the condition after WHERE: in
	// "where a = b and c = d" the predicate is "a = b and c = d"
	pred := query.NewPredicateWithTerms(p.Term())
	tok, err := p.sqlLexer.Scan()
	// reaching the end of the input just means the predicate is complete
	if err != nil && fmt.Sprint(err) != "EOF" {
		panic(err)
	}
	if tok.Tag == lexer.AND {
		pred.ConjoinWith(p.Predicate())
	} else {
		p.sqlLexer.ReverseScan()
	}
	return pred
}

func (p *SQLParser) Query() *QueryData {
	// QUERY -> SELECT SELECT_LIST FROM TABLE_LIST (WHERE PREDICATE)?
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag != lexer.SELECT {
		panic("token is not select")
	}
	fields := p.SelectList()
	tok, err = p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag != lexer.FROM {
		panic("token is not from")
	}
	// the table names the select statement operates on
	tables := p.TableList()
	// check for an optional where clause; EOF here means there is none
	tok, err = p.sqlLexer.Scan()
	if err != nil && fmt.Sprint(err) != "EOF" {
		panic(err)
	}
	pred := query.NewPredicate()
	if tok.Tag == lexer.WHERE {
		pred = p.Predicate()
	} else {
		p.sqlLexer.ReverseScan()
	}
	return NewQueryData(fields, tables, pred)
}

func (p *SQLParser) SelectList() []string {
	// SELECT_LIST is the list of column names after the select keyword
	l := make([]string, 0)
	_, field := p.Field()
	l = append(l, field)
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag == lexer.COMMA {
		// several columns separated by commas: recurse for the rest
		selectList := p.SelectList()
		l = append(l, selectList...)
	} else {
		p.sqlLexer.ReverseScan()
	}
	return l
}

func (p *SQLParser) TableList() []string {
	// TABLE_LIST is the list of table names after FROM
	l := make([]string, 0)
	tok, err := p.sqlLexer.Scan()
	if err != nil {
		panic(err)
	}
	if tok.Tag != lexer.ID {
		panic("token is not id")
	}
	l = append(l, p.sqlLexer.Lexeme)
	// EOF here means the statement ends right after the table name
	tok, err = p.sqlLexer.Scan()
	if err != nil && fmt.Sprint(err) != "EOF" {
		panic(err)
	}
	if tok.Tag == lexer.COMMA {
		tableList := p.TableList()
		l = append(l, tableList...)
	} else {
		p.sqlLexer.ReverseScan()
	}
	return l
}
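
Predicate is likewise part of the query package from earlier installments. For completeness, a minimal stand-in that matches the NewPredicate, NewPredicateWithTerms, ConjoinWith and ToString calls above might look like this (again an assumption, not the series' real code):

package query

import "strings"

// Predicate is a conjunction (AND) of terms.
type Predicate struct {
	terms []*Term
}

// NewPredicate returns an empty predicate (always true).
func NewPredicate() *Predicate {
	return &Predicate{}
}

// NewPredicateWithTerms builds a predicate from an initial term.
func NewPredicateWithTerms(t *Term) *Predicate {
	return &Predicate{terms: []*Term{t}}
}

// ConjoinWith appends the terms of another predicate (logical AND).
func (p *Predicate) ConjoinWith(other *Predicate) {
	p.terms = append(p.terms, other.terms...)
}

func (p *Predicate) ToString() string {
	parts := make([]string, 0, len(p.terms))
	for _, t := range p.terms {
		parts = append(parts, t.ToString())
	}
	return strings.Join(parts, " and ")
}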

Create a new Go file named query_data.go. We use a QueryData structure to record the information of a parsed select statement; its content is as follows:

package parser

// QueryData describes the components of a parsed select statement
import (
	"query"
)

type QueryData struct {
	fields []string
	tables []string
	pred   *query.Predicate
}

func NewQueryData(fields []string, tables []string, pred *query.Predicate) *QueryData {
	return &QueryData{
		fields: fields,
		tables: tables,
		pred:   pred,
	}
}

func (q *QueryData) Fields() []string {
	return q.fields
}

func (q *QueryData) Tables() []string {
	return q.tables
}

func (q *QueryData) Pred() *query.Predicate {
	return q.pred
}

func (q *QueryData) ToString() string {
	result := "select "
	for _, fldName := range q.fields {
		result += fldName + ", "
	}
	// drop the trailing ", "
	result = result[:len(result)-2]
	result += " from "
	for _, tableName := range q.tables {
		result += tableName + ", "
	}
	// drop the trailing ", "
	result = result[:len(result)-2]
	predStr := q.pred.ToString()
	if predStr != "" {
		result += " where " + predStr
	}
	return result
}

Suppose we have the following SQL statement:

select age, name, sex from student, department where age = 20 and sex = "male" 

We can then call the Query function above to start parsing. The column list after select, i.e. "age, name, sex", is handled by SelectList; the table names after from by TableList; and the content after where by Predicate, which breaks age = 20 and sex = "male" into two terms joined by AND. Add the following to main.go to drive the parser:

package main

import (
	"fmt"
	"parser"
)

func main() {
	sql := "select age, name, sex from student, department where age = 20 and sex = \"male\" "
	sqlParser := parser.NewSQLParser(sql)
	queryData := sqlParser.Query()
	fmt.Println(queryData.ToString())
}
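
If everything is wired up correctly, the program prints the statement as reconstructed by QueryData.ToString. With the stand-in query package sketched earlier, the output would be along the lines of select age, name, sex from student, department where age=20 and sex=male; the exact spacing depends on how your Predicate and Term implement ToString.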

For the full debugging demonstration please see the Bilibili video; stepping through the code in a debugger is the best way to understand the parsing logic. Since this topic covers a lot of ground, we split it into several installments.

