This is part of my AI learning-notes series, a dozen-plus posts so far — feel free to browse the earlier ones, and if they help, a like is appreciated!
This is an introductory post on tokens; I'll follow up with concrete code when I have time. If you learn something here, please give it a thumbs up!
In GPT-4, the questions we ask are billed by the token. So what exactly is a token?
Different language models are trained with different tokenization schemes, depending on the intended application. Take GPT-2 as an example: it has 50,257 tokens in total. The tokens form the vocabulary — the complete set of text units used in training. Let's look at what tokens 1000 through 1300 of GPT-2's vocabulary look like (the 'Ġ' character marks a leading space in GPT-2's byte-level vocabulary):
…… ‘ale’, ‘ĠSe’, ‘ĠIf’, ‘//’, ‘ĠLe’, ‘Ġret’, ‘Ġref’, ‘Ġtrans’,
‘ner’, ‘ution’, ‘ters’, ‘Ġtake’, ‘ĠCl’, ‘Ġconf’, ‘way’, ‘ave’,
‘Ġgoing’, ‘Ġsl’, ‘ug’, ‘ĠAmeric’, ‘Ġspec’, ‘Ġhand’, ‘Ġbetween’,
‘ists’, ‘ĠDe’, ‘oot’, ‘It’, ‘Ġear’, ‘Ġagainst’, ‘Ġhigh’, ‘gan’, ‘az’,
‘ather’, ‘Ġexp’, ‘Ġop’, ‘Ġins’, ‘Ġgr’, ‘Ġhelp’, ‘Ġrequ’, ‘ets’, ‘ins’,
‘ĠPro’, ‘ism’, ‘Ġfound’, ‘land’, ‘ata’, ‘uss’, ‘ames’, ‘Ġperson’,
‘Ġgreat’, ‘pr’, ‘Ġsign’, ‘ĠAn’, “'ve”, ‘Ġsomet’, ‘Ġser’, ‘hip’,
‘Ġrun’, ‘Ġ:’, ‘Ġter’, ‘irect’, ‘Ġfollow’, ‘Ġdet’, ‘ices’, ‘Ġfind’,
‘12’, ‘Ġmem’, ‘Ġcr’, ‘ered’, ‘ex’, ‘Ġext’, ‘uth’, ‘ense’, ‘co’,
‘Ġteam’, ‘ving’, ‘ouse’, ‘ash’, ‘att’, ‘ved’, ‘Ġsystem’, ‘ĠAs’, ‘der’,
‘ives’, ‘min’, ‘Ġlead’, ‘ĠBl’, ‘cent’, ‘Ġaround’, ‘Ġgovern’, ‘Ġcur’,
‘velop’, ‘any’, ‘Ġcour’, ‘alth’, ‘ages’, ‘ize’, ‘Ġcar’, ‘ode’, ‘Ġlaw’,
‘Ġread’, “'m”, ‘con’, ‘Ġreal’, ‘Ġsupport’, ‘Ġ12’, ‘…’, ‘Ġreally’,
‘ness’, ‘Ġfact’, ‘Ġday’, ‘Ġboth’, ‘ying’, ‘Ġserv’, ‘ĠFor’, ‘Ġthree’,
‘Ġwom’, ‘Ġmed’, ‘ody’, ‘ĠThey’, ‘50’, ‘Ġexper’, ‘ton’, ‘Ġeach’,
‘akes’, ‘Ġche’, ‘Ġcre’, ‘ines’, ‘Ġrep’, ‘19’, ‘gg’, ‘illion’, ‘Ġgrou’,
‘ute’, ‘ik’, ‘We’, ‘get’, ‘ER’, ‘Ġmet’, ‘Ġsays’, ‘ox’, ‘Ġduring’,
‘ern’, ‘ized’, ‘ared’, ‘Ġfam’, ‘ically’, ‘Ġhapp’, ‘ĠIs’, ‘Ġchar’,
‘med’, ‘vent’, ‘Ġgener’, ‘ient’, ‘ple’, ‘iet’, ‘rent’, ‘11’, ‘ves’,
‘ption’, ‘Ġ20’, ‘formation’, ‘Ġcor’, ‘Ġoffic’, ‘ield’, ‘Ġtoo’,
‘ision’, ‘Ġinf’, ‘ĠZ’, ‘the’, ‘oad’, ‘Ġpublic’, ‘Ġprog’, ‘ric’, ‘**’,
‘Ġwar’, ‘Ġpower’, ‘view’, ‘Ġfew’, ‘Ġloc’, ‘Ġdifferent’, ‘Ġstate’,
‘Ġhead’, “'ll”, ‘Ġposs’, ‘Ġstat’, ‘ret’, ‘ants’, ‘Ġval’, ‘Ġiss’,
‘Ġcle’, ‘ivers’, ‘anc’, ‘Ġexpl’, ‘Ġanother’, ‘ĠQ’, ‘Ġav’, ‘thing’,
‘nce’, ‘Wh’, ‘Ġchild’, ‘Ġsince’, ‘ired’, ‘less’, ‘Ġlife’, ‘Ġdevelop’,
‘ittle’, ‘Ġdep’, ‘Ġpass’, ‘ãĥ’, ‘Ġturn’, ‘orn’, ‘This’, ‘bers’,
‘ross’, ‘ĠAd’, ‘Ġfr’, ‘Ġresp’, ‘Ġsecond’, ‘oh’, ‘Ġ/’, ‘Ġdisc’, ‘Ġ&’,
‘Ġsomething’, ‘Ġcomple’, ‘Ġed’, ‘Ġfil’, ‘Ġmonth’, ‘aj’, ‘uc’,
‘Ġgovernment’, ‘Ġwithout’, ‘Ġleg’, ‘Ġdist’, ‘Ġput’, ‘Ġquest’, ‘ann’,
‘Ġprot’, ‘20’, ‘Ġnever’, ‘ience’, ‘Ġlevel’, ‘Ġart’, ‘Ġthings’,
‘Ġmight’, ‘Ġeffect’, ‘Ġcontro’, ‘Ġcent’, ‘Ġ18’, ‘Ġallow’, ‘Ġbelie’,
‘chool’, ‘ott’, ‘Ġincre’, ‘Ġfeel’, ‘Ġresult’, ‘Ġlot’, ‘Ġfun’, ‘ote’,
‘Ġty’, ‘erest’, ‘Ġcontin’, ‘Ġusing’, ‘Ġbig’, ‘201’, ‘Ġask’, ‘Ġbest’,
‘Ġ)’, ‘IN’, ‘Ġopp’, ‘30’, ‘Ġnumber’, ‘iness’, ‘St’, ‘lease’, ‘Ġca’,
‘Ġmust’, ‘Ġdirect’, ‘Ġgl’, ‘Ġ<’, ‘Ġopen’, ‘Ġpost’, ‘Ġcome’, ‘Ġseem’,
‘ording’, ‘Ġweek’, ‘ately’, ‘ital’, ‘Ġel’, ‘riend’, ‘Ġfar’, ‘Ġtra’,
‘inal’, ‘Ġpri’, ‘ĠUS’, ‘Ġplace’, ‘Ġform’, ‘Ġtold’, ‘":’, ‘ains’
……
This vocabulary isn't built in — it is trained from a text corpus.
The training starts from the text's UTF-8 encoding.
UTF-8 is how computers represent text, and it can encode essentially every character in the computing world. For English letters and other ASCII characters, one character is a single integer (one byte); more complex characters need 2 to 4 bytes. The following are all UTF-8:
中 [228, 184, 173] — three bytes
¢ [194, 162] — two bytes
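You can check these byte values yourself with Python's built-in UTF-8 encoder:

```python
# Print the UTF-8 bytes of a few characters as lists of integers.
for ch in ["A", "¢", "中"]:
    print(ch, list(ch.encode("utf-8")))
# A [65]              -- ASCII fits in one byte
# ¢ [194, 162]        -- two bytes
# 中 [228, 184, 173]  -- three bytes
```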
Now suppose that while training on a large corpus, an algorithm looks for character sequences that frequently appear together — say the two characters of "骑车" ("ride a bike"):
骑 [233, 170, 145], 车 [232, 189, 166]
Because this pair of characters co-occurs with very high frequency, they get merged into a single vocabulary entry: [233, 170, 145, 232, 189, 166].
If an article contains 100 words, the UTF-8 integer array it converts to will have length >= 100. The algorithm then scans this array,
notices that the integer sequence [233, 170, 145, 232, 189, 166] keeps appearing together, combines it into one token, and adds it to the token vocabulary:
(position: 14430, token: "骑车", UTF-8 bytes: [233, 170, 145, 232, 189, 166]) // hypothetical entry
Repeating this process — find high-frequency sequences, merge them into new entries — over a large corpus is what builds up the 50,257 tokens.
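The find-and-merge loop described above can be sketched in a few lines of pure Python. This is a simplified toy, not GPT-2's actual training code, and the tiny corpus and the five merge rounds are made up for illustration:

```python
from collections import Counter

def most_frequent_pair(ids):
    """Return the adjacent pair of ids that occurs most often."""
    pairs = Counter(zip(ids, ids[1:]))
    return pairs.most_common(1)[0][0]

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

# "骑车" repeats, so its UTF-8 bytes keep showing up next to each other
text = "我骑车上班,他骑车上学,骑车很环保"
ids = list(text.encode("utf-8"))
next_id = 256  # new token ids start after the 256 raw byte values
for _ in range(5):  # five merge rounds, enough for this tiny corpus
    pair = most_frequent_pair(ids)
    ids = merge(ids, pair, next_id)
    next_id += 1
print(len(ids))  # the sequence shrinks as frequent pairs become single tokens
```

Each round picks the most frequent adjacent pair and collapses it into a fresh id — exactly the byte-pair-encoding idea behind GPT-2's 50,257-entry vocabulary, minus all the real-world bookkeeping.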
So when GPT-4 bills by the token, strictly speaking a single token may represent one letter, part of a word, a whole word, or even several words.
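A toy illustration of why token granularity varies: assuming a hypothetical four-entry vocabulary and greedy longest-match splitting (real GPT tokenizers apply learned BPE merge rules instead), a common word becomes one token while an unfamiliar word breaks into pieces:

```python
def greedy_tokenize(text, vocab):
    """Split text by repeatedly taking the longest vocabulary entry
    that matches at the current position; single characters are the
    fallback when nothing longer matches."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# hypothetical vocabulary for illustration only
vocab = {"token", "iz", "er", "ing"}
print(greedy_tokenize("tokenizer", vocab))  # ['token', 'iz', 'er']
```

Here "token" is covered by one vocabulary entry, but "izer" has no entry of its own, so it falls apart into the sub-word pieces 'iz' and 'er' — the same effect you see in your GPT-4 bill.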