Testing the Patched Mahout 0.9 with Naive Bayes Text Classification

Following up on the previous article, "Patching Mahout 0.9 to Support Hadoop 2.2.0".

After the patched Mahout 0.9 compiled successfully, we use Naive Bayes text classification to verify that Mahout 0.9 is compatible with Hadoop 2.2.0.

 

Step 1: Upload the 20news files to HDFS

yarn@singletest:~/Mahout/mahout-distribution-0.7$ hadoop fs -ls /workspace/mahout/week4/data/20news

Found 2 items

drwxr-xr-x   - yarn supergroup          0 2014-09-04 21:52 /workspace/mahout/week4/data/20news/20news-bydate-test

drwxr-xr-x   - yarn supergroup          0 2014-09-04 21:57 /workspace/mahout/week4/data/20news/20news-bydate-train
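
A minimal sketch of the upload itself, assuming the 20news-bydate archive has already been extracted into the local working directory (directory names match the listing above):

# create the target directory on HDFS, then copy both extracted folders into it
hadoop fs -mkdir -p /workspace/mahout/week4/data/20news

hadoop fs -put 20news-bydate-test 20news-bydate-train /workspace/mahout/week4/data/20news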

Step 2: Create sequence files from the data

yarn@singletest:~/Mahout/mahout-distribution-0.7/bin$ ./mahout seqdirectory -i /workspace/mahout/week4/data/20news -o /workspace/mahout/week4/data/20news_seq

 

yarn@singletest:~/Mahout/mahout-distribution-0.7/bin$ hadoop fs -ls /workspace/mahout/week4/data/20news_seq

Found 1 items

-rw-r--r--   1 yarn supergroup   37064977 2014-09-04 22:12 /workspace/mahout/week4/data/20news_seq/chunk-0
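
To confirm the sequence file is readable you can dump a few records; hadoop fs -text understands SequenceFiles (here the keys are document paths and the values are the raw text), so a quick check might look like:

hadoop fs -text /workspace/mahout/week4/data/20news_seq/chunk-0 | head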

Step 3: Convert the sequence files into vectors

yarn@singletest:~/Mahout/mahout-distribution-0.7/bin$ ./mahout seq2sparse -i /workspace/mahout/week4/data/20news_seq/ -o /workspace/mahout/week4/data/20news_vectors -lnorm -nv -wt tfidf

 

yarn@singletest:~/Mahout/mahout-distribution-0.7/bin$ hadoop fs -ls /workspace/mahout/week4/data/20news_vectors

Found 7 items

drwxr-xr-x   - yarn supergroup          0 2014-09-04 22:20 /workspace/mahout/week4/data/20news_vectors/df-count

-rw-r--r--   1 yarn supergroup    1937084 2014-09-04 22:18 /workspace/mahout/week4/data/20news_vectors/dictionary.file-0

-rw-r--r--   1 yarn supergroup    1890053 2014-09-04 22:20 /workspace/mahout/week4/data/20news_vectors/frequency.file-0

drwxr-xr-x   - yarn supergroup          0 2014-09-04 22:19 /workspace/mahout/week4/data/20news_vectors/tf-vectors

drwxr-xr-x   - yarn supergroup          0 2014-09-04 22:21 /workspace/mahout/week4/data/20news_vectors/tfidf-vectors

drwxr-xr-x   - yarn supergroup          0 2014-09-04 22:18 /workspace/mahout/week4/data/20news_vectors/tokenized-documents

drwxr-xr-x   - yarn supergroup          0 2014-09-04 22:18 /workspace/mahout/week4/data/20news_vectors/wordcount
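
If you want to peek at the generated term dictionary, Mahout's seqdumper utility can print it; a sketch using the output path above:

./mahout seqdumper -i /workspace/mahout/week4/data/20news_vectors/dictionary.file-0 | head -n 20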

Step 4: Split the vector set into training and test data

Parameters:

-tr  output path for the training set

-te  output path for the test set

-rp  percentage of the whole data set to hold out as the test set; the command below sets it to 20%

yarn@singletest:~/Mahout/mahout-distribution-0.7/bin$ ./mahout split -i /workspace/mahout/week4/data/20news_vectors/tfidf-vectors -tr /workspace/mahout/week4/data/train-vectors -te /workspace/mahout/week4/data/test-vectors -rp 20 -ow -seq -xm sequential
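
The split writes two new directories; a quick sanity check before training (paths as in the command above):

hadoop fs -ls /workspace/mahout/week4/data/train-vectors

hadoop fs -ls /workspace/mahout/week4/data/test-vectors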

Step 5: Train the model

yarn@singletest:~/Mahout/mahout-distribution-0.9/bin$ ./mahout trainnb -i /workspace/mahout/week4/data/train-vectors -el -o /workspace/mahout/week4/nbmodel -li /workspace/mahout/week4/labindex -ow -c

 

View the generated label index:

yarn@singletest:~$ hadoop fs -text /workspace/mahout/week4/labindex

20news-bydate-test      0

20news-bydate-train     1

 

View the trained model:

yarn@singletest:~$ hadoop fs -ls /workspace/mahout/week4/nbmodel

Found 1 items

-rw-r--r--   1 yarn supergroup    2437874 2014-09-05 23:09 /workspace/mahout/week4/nbmodel/naiveBayesModel.bin

Step 6: Test the model
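
A minimal sketch of the test step, assuming the held-out vectors, model, and label index produced above (the output directory name 20news-testing is only illustrative):

./mahout testnb -i /workspace/mahout/week4/data/test-vectors -m /workspace/mahout/week4/nbmodel -l /workspace/mahout/week4/labindex -ow -o /workspace/mahout/week4/20news-testing -c

On completion, testnb prints a summary with the overall accuracy and a confusion matrix, which is enough to confirm that the patched Mahout 0.9 runs end to end on Hadoop 2.2.0.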
