Accessing HBase in a Kerberos-enabled cluster via the API

I have previously written about using the API to access HDFS with security enabled. HBase works the same way, so I will go straight to the reference code.

package com.test.hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

public class HBaseTestZK {
    public static String master = "192.168.1.120:60000";
    public static String quorum = "192.168.1.121,192.168.1.122,192.168.1.120";
    public static String port = "2181";

    public static void main(String[] args) {
        Connection conn = null;
        try {
            // Point the JVM at the krb5 configuration file (contents shown below).
            // The essential pieces are the KDC server host and the Kerberos realm.
            System.setProperty("java.security.krb5.conf", "E:\\test\\krb5.conf");
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.master", master);
            conf.set("hbase.zookeeper.quorum", quorum);
            conf.set("hbase.zookeeper.property.clientPort", port);
            // Switch both the cluster and HBase authentication to kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hbase.security.authentication", "kerberos");
            //conf.setBoolean("hadoop.security.authorization", true);
            //conf.set("hbase.master.kerberos.principal", "hbase/hadoop120@YOU-REALM.COM"); // optional
            // Set the region server principal. Copy this value from the
            // hbase.regionserver.kerberos.principal property in the cluster's
            // hbase-site.xml after security was enabled.
            conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@YOU-REALM.COM"); // required
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab("user01@YOU-REALM.COM", "E:\\test\\user01.keytab");
            // Test a read.
            conn = ConnectionFactory.createConnection(conf);
            Table table = conn.getTable(TableName.valueOf("hu"));
            try (ResultScanner rs = table.getScanner(new Scan())) {
                for (Result r : rs) {
                    byte[] rk = r.getRow();
                    for (Cell cell : r.rawCells()) {
                        String col = new String(CellUtil.cloneQualifier(cell));
                        String val = new String(CellUtil.cloneValue(cell));
                        System.out.println(Bytes.toString(rk) + "============" + col + "===" + val);
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (conn != null) {
                try {
                    conn.close();
                } catch (Exception ignored) {
                }
            }
        }
    }
}
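
The `_HOST` placeholder in `hbase.regionserver.kerberos.principal` is expanded by the Hadoop security layer into each server's lowercased hostname before authentication, so one configured value covers every region server. As an illustrative sketch of that substitution (a simplified stand-in for what Hadoop's `SecurityUtil` does internally, not the real implementation):

```java
public class PrincipalExpander {
    // Replace the _HOST placeholder in a service principal with a concrete
    // hostname, lowercased, mirroring how Hadoop derives per-server principals.
    public static String expand(String principal, String hostname) {
        return principal.replace("_HOST", hostname.toLowerCase());
    }

    public static void main(String[] args) {
        // Each region server ends up authenticating under its own principal.
        System.out.println(expand("hbase/_HOST@YOU-REALM.COM", "hadoop121"));
        // prints hbase/hadoop121@YOU-REALM.COM
    }
}
```

This is why the client-side value must match the pattern in the cluster's hbase-site.xml exactly: a principal that does not resolve to the server's actual identity will fail the SASL handshake.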

The krb5.conf referenced above looks like this:

[libdefaults]
default_realm = TEST.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 10000
[realms]
TEST.COM = {
    kdc = hadoop1
    admin_server = hadoop1
    max_renewable_life = 7d
}
[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log
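
Before wiring the file into the JVM, it can be worth sanity-checking that the realm the client will log in against matches what the file declares. A minimal sketch that pulls `default_realm` out of krb5.conf-style text (a toy line scanner for illustration only, not a full krb5.conf parser):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class Krb5Check {
    // Scan krb5.conf-style text line by line for the default_realm setting.
    public static String defaultRealm(BufferedReader in) throws IOException {
        String line;
        while ((line = in.readLine()) != null) {
            String[] kv = line.trim().split("=", 2);
            if (kv.length == 2 && kv[0].trim().equals("default_realm")) {
                return kv[1].trim();
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        String conf = "[libdefaults]\ndefault_realm = TEST.COM\ndns_lookup_kdc = false\n";
        System.out.println(defaultRealm(new BufferedReader(new StringReader(conf))));
        // prints TEST.COM
    }
}
```

If the realm here disagrees with the one in the principal passed to `loginUserFromKeytab`, the login will fail before HBase is even contacted.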
