Installing and Configuring Hive
First, have Java, MySQL, and Hadoop installed. Hive only needs to be installed on a single node; here it is installed on the master.
1. Upload the tar package to /usr/myapp/hadoop
2. Extract the archive
tar -zxvf hive-0.9.0.tar.gz
Rename the extracted directory to hive.
3. Check whether MySQL is running:
service mysql status
Log in to MySQL:
mysql -u root -p123456
4. Configure Hive to use MySQL for metadata storage
(a) Configure the HADOOP_HOME environment variable for Hive
cp conf/hive-env.sh.template conf/hive-env.sh
vi conf/hive-env.sh
Set HADOOP_HOME in this file to your Hadoop installation directory.
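For reference, a minimal conf/hive-env.sh could look like the following sketch; the Hadoop path is an assumption based on the directory layout used elsewhere in this guide, so replace it with your actual Hadoop home:

```shell
# conf/hive-env.sh
# Point Hive at the Hadoop installation. This path is an assumption
# based on this guide's layout -- replace with your actual Hadoop home.
export HADOOP_HOME=/usr/myapp/hadoop-2.6.4
```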
(b) Configure the metastore database settings: vi hive-site.xml (create the file manually if it does not exist)
Add the following content:
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<description>password to use against metastore database</description>
</property>
</configuration>
Delete the metastore_db directory under bin that was generated earlier (when the embedded Derby database was in use).
5. After Hive and MySQL are installed, copy the MySQL connector JAR into the $HIVE_HOME/lib directory.
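The copy itself might look like this; the connector JAR's exact file name depends on the version you downloaded, so the name below is a placeholder, and the Hive lib path is the one used in this guide:

```shell
# Copy the MySQL JDBC driver into Hive's lib directory so the metastore
# can reach MySQL. The jar name/version here is a placeholder -- use the
# file you actually downloaded.
cp mysql-connector-java-x.y.z-bin.jar /usr/myapp/hadoop/hive/lib/
```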
If a permission error occurs, grant privileges in MySQL (run this on the machine where MySQL is installed):
mysql -uroot -p
(Run the statements below. *.* means all tables in all databases; % means connections are allowed from any IP address or host.)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root123' WITH GRANT OPTION;
FLUSH PRIVILEGES;
6. Fix the JLine version incompatibility: in /usr/myapp/hadoop-2.6.4/share/hadoop/yarn/lib, replace the old jline-0.9.94.jar with the newer JLine from Hive's lib directory (/usr/myapp/hadoop/hive/lib/jline-2.12.jar).
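In shell terms, the swap could be performed as follows; the paths are the ones used in this guide, so adjust them to your own layout:

```shell
# Remove the outdated JLine bundled with Hadoop's YARN libraries...
rm /usr/myapp/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar
# ...and copy in the newer JLine that ships with Hive.
cp /usr/myapp/hadoop/hive/lib/jline-2.12.jar /usr/myapp/hadoop-2.6.4/share/hadoop/yarn/lib/
```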
7. Start Hive:
bin/hive
8. Create a table (a managed/internal table by default):
create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';
Create a partitioned table:
create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';
Create an external table:
create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location '/td_ext';
9. Working with partitioned tables
Difference between regular and partitioned tables: when large amounts of data are added continuously, use a partitioned table.
create table book (id bigint, name string) partitioned by (pubdate string) row format delimited fields terminated by '\t';
Load data into the partitioned table:
load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
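The load statement above expects a local tab-separated file matching the book table's schema (id, name, delimited by '\t'). A sample book.txt could be generated like this; the row contents are made up for illustration:

```shell
# Create a two-row, tab-separated data file for the book table;
# the ids and titles below are illustrative sample data.
printf '1\tHadoop in Action\n2\tHive Cookbook\n' > book.txt
cat book.txt
```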
Another example, with a partitioned table named beauties:
load data local inpath '/root/data.am' into table beauties partition (nation="USA");
select nation, avg(size) from beauties group by nation order by avg(size);
To view Hive's log output on the console:
hive -hiveconf hive.root.logger=DEBUG,console
Installation complete.