HBase Source Code Analysis (9): The HBase Startup Process
Previous installment: HBase Source Code Analysis (8): The Delete Process in Detail
http://blog.csdn.net/chenfenggang/article/details/75094362
This installment analyzes HBase's startup process through its shell scripts. The process goes like this:
1) Run start-hbase.sh.
2) Load the conf directory and the required lib and class files, both the JDK's and HBase's own.
3) Determine the install mode.
4) In distributed (cluster) mode, start a) ZooKeeper, b) the Master, c) the RegionServers, d) the backup masters. In standalone mode only the master needs to be started: the master creates a ZooKeeper instance and launches a RegionServer internally, but that master and RegionServer run in the same JVM.
Starting the master comes down to running org.apache.hadoop.hbase.master.HMaster; starting a regionserver runs the main function of org.apache.hadoop.hbase.regionserver.HRegionServer; ZooKeeper corresponds to org.apache.hadoop.hbase.zookeeper.HQuorumPeer.
Main takeaways: HLog (WAL) files can be viewed with WALPrettyPrinter, and HFiles with the HFilePrettyPrinter class. Now let's begin the analysis with the first script, start-hbase.sh:
usage="Usage: start-hbase.sh [--autostart-window-size <window size in hours>]\
[--autostart-window-retry-limit <retry count limit for autostart>]\
[autostart|start]"

# resolve the current directory
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin">/dev/null; pwd`

# default autostart args value indicating infinite window size and no retry limit
AUTOSTART_WINDOW_SIZE=0
AUTOSTART_WINDOW_RETRY_LIMIT=0

# load the configuration
. "$bin"/hbase-config.sh

# start hbase daemons
errCode=$?
if [ $errCode -ne 0 ]
then
  exit $errCode
fi

if [ "$1" = "autostart" ]
then
  commandToRun="--autostart-window-size ${AUTOSTART_WINDOW_SIZE} --autostart-window-retry-limit ${AUTOSTART_WINDOW_RETRY_LIMIT} autostart"
else
  commandToRun="start"
fi

# determine the install mode
# HBASE-6504 - only take the first line of the output in case verbose gc is on
distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`

# start the daemons
if [ "$distMode" == 'false' ]
then
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master
else
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" $commandToRun zookeeper
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_REGIONSERVERS}" $commandToRun regionserver
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backup
fi
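The heart of the script is the hbase.cluster.distributed check. Here is a minimal, self-contained sketch of that branch, with the HBaseConfTool lookup stubbed out as a hardcoded value (on a real cluster it would be read from hbase-site.xml):

```shell
# Stub for: $bin/hbase ... HBaseConfTool hbase.cluster.distributed | head -n 1
distMode=false

commandToRun="start"

if [ "$distMode" = "false" ]; then
  # standalone: a single JVM hosts master + regionserver + zookeeper
  echo "hbase-daemon.sh $commandToRun master"
else
  # distributed: four daemon groups, started in this order
  echo "hbase-daemons.sh $commandToRun zookeeper"
  echo "hbase-daemon.sh $commandToRun master"
  echo "hbase-daemons.sh $commandToRun regionserver"
  echo "hbase-daemons.sh $commandToRun master-backup"
fi
```

Note that ZooKeeper is deliberately first and the backup masters last; the regionservers and backup masters are fanned out to the hosts listed in the regionservers and backup-masters files via hbase-daemons.sh.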
Next, the configuration script hbase-config.sh: this is what pins down bin, conf, HBASE_HOME, JAVA_HOME, the master and regionserver host lists, and so on.
The last one is the hbase script itself, which launches a different Java class depending on the arguments it is given.
- ....
# Allow alternate hbase conf dir location.
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
# List of hbase regions servers.
HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"
# List of hbase secondary masters.
HBASE_BACKUP_MASTERS="${HBASE_BACKUP_MASTERS:-$HBASE_CONF_DIR/backup-masters}"

if [ -n "$HBASE_JMX_BASE" ] && [ -z "$HBASE_JMX_OPTS" ]; then
  HBASE_JMX_OPTS="$HBASE_JMX_BASE"
fi

# Thrift JMX opts
if [ -n "$HBASE_JMX_OPTS" ] && [ -z "$HBASE_THRIFT_JMX_OPTS" ]; then
  HBASE_THRIFT_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10103"
fi
# Thrift opts
if [ -z "$HBASE_THRIFT_OPTS" ]; then
  export HBASE_THRIFT_OPTS="$HBASE_THRIFT_JMX_OPTS"
fi

# REST JMX opts
if [ -n "$HBASE_JMX_OPTS" ] && [ -z "$HBASE_REST_JMX_OPTS" ]; then
  HBASE_REST_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10105"
fi
# REST opts
if [ -z "$HBASE_REST_OPTS" ]; then
  export HBASE_REST_OPTS="$HBASE_REST_JMX_OPTS"
fi

# Source the hbase-env.sh. Will have JAVA_HOME defined.
# HBASE-7817 - Source the hbase-env.sh only if it has not already been done. HBASE_ENV_INIT keeps track of it.
if [ -z "$HBASE_ENV_INIT" ] && [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
  . "${HBASE_CONF_DIR}/hbase-env.sh"
  export HBASE_ENV_INIT="true"
fi

# Verify if hbase has the mlock agent
if [ "$HBASE_REGIONSERVER_MLOCK" = "true" ]; then
  MLOCK_AGENT="$HBASE_HOME/lib/native/libmlockall_agent.so"
  if [ ! -f "$MLOCK_AGENT" ]; then
    cat 1>&2 <<EOF
Unable to find mlockall_agent, hbase must be compiled with -Pnative
EOF
    exit 1
  fi
  if [ -z "$HBASE_REGIONSERVER_UID" ] || [ "$HBASE_REGIONSERVER_UID" == "$USER" ]; then
    HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT"
  else
    HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT=user=$HBASE_REGIONSERVER_UID"
  fi
fi

export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}

# Now having JAVA_HOME defined is required
if [ -z "$JAVA_HOME" ]; then
  cat 1>&2 <<EOF
EOF
  exit 1
fi
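The `${VAR:-default}` expansions above are the script's whole mechanism for overridable defaults: a value exported by the caller wins, otherwise the fallback applies. A tiny standalone illustration (all paths are made up for the demo):

```shell
unset HBASE_CONF_DIR            # make the demo deterministic
HBASE_HOME=/opt/hbase           # hypothetical install dir

# Not set by the caller, so the fallback applies — same pattern as above.
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
echo "$HBASE_CONF_DIR"          # → /opt/hbase/conf

# An explicit value is left untouched by the expansion.
HBASE_REGIONSERVERS=/etc/hbase/regionservers
HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"
echo "$HBASE_REGIONSERVERS"     # → /etc/hbase/regionservers
```

This is why setting HBASE_CONF_DIR (or HBASE_REGIONSERVERS, HBASE_BACKUP_MASTERS) in the environment before running the scripts redirects where HBase looks for its configuration.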
The script ends by exec'ing a Java class. COMMAND is the subcommand passed on the command line (master, regionserver, shell, ...); -XX:OnOutOfMemoryError kills the process immediately when the JVM runs out of memory; HEAP_SETTINGS carries the heap-size flags; CLASS is the class to run, e.g. org.apache.hadoop.hbase.regionserver.HRegionServer.
bin=`dirname "$0"`
bin=`cd "$bin">/dev/null; pwd`

# This will set HBASE_HOME, etc.
. "$bin"/hbase-config.sh

# detect cygwin
cygwin=false
case "`uname`" in
  CYGWIN*) cygwin=true;;
esac

# Detect if we are in hbase sources dir
in_dev_env=false
if [ -d "${HBASE_HOME}/target" ]; then
  in_dev_env=true
fi

# (argument parsing omitted)
# get arguments
COMMAND=$1
shift

JAVA=$JAVA_HOME/bin/java

# (classpath loading omitted)
# load the jruby files
# check if the command needs jruby
declare -a jruby_cmds=("shell" "org.jruby.Main")
for cmd in "${jruby_cmds[@]}"; do
  if [[ $cmd == "$COMMAND" ]]; then
    jruby_needed=true
    break
  fi
done

# figure out which class to run
if [ "$COMMAND" = "shell" ] ; then
  # find the hbase ruby sources
  if [ -d "$HBASE_HOME/lib/ruby" ]; then
    HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/lib/ruby"
  else
    HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/hbase-shell/src/main/ruby"
  fi
  HBASE_OPTS="$HBASE_OPTS $HBASE_SHELL_OPTS"
  CLASS="org.jruby.Main -X+O ${JRUBY_OPTS} ${HBASE_HOME}/bin/hirb.rb"
elif [ "$COMMAND" = "hbck" ] ; then
  CLASS='org.apache.hadoop.hbase.util.HBaseFsck'
# TODO remove old 'hlog' version
elif [ "$COMMAND" = "hlog" -o "$COMMAND" = "wal" ] ; then
  CLASS='org.apache.hadoop.hbase.wal.WALPrettyPrinter'
elif [ "$COMMAND" = "hfile" ] ; then
  CLASS='org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter'
elif [ "$COMMAND" = "zkcli" ] ; then
  CLASS="org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServer"
elif [ "$COMMAND" = "backup" ] ; then
  CLASS='org.apache.hadoop.hbase.backup.BackupDriver'
elif [ "$COMMAND" = "restore" ] ; then
  CLASS='org.apache.hadoop.hbase.backup.RestoreDriver'
elif [ "$COMMAND" = "upgrade" ] ; then
  echo "This command was used to upgrade to HBase 0.96, it was removed in HBase 2.0.0."
  echo "Please follow the documentation at http://hbase.apache.org/book.html#upgrading."
  exit 1
elif [ "$COMMAND" = "snapshot" ] ; then
  SUBCOMMAND=$1
  shift
  if [ "$SUBCOMMAND" = "create" ] ; then
    CLASS="org.apache.hadoop.hbase.snapshot.CreateSnapshot"
  elif [ "$SUBCOMMAND" = "info" ] ; then
    CLASS="org.apache.hadoop.hbase.snapshot.SnapshotInfo"
  elif [ "$SUBCOMMAND" = "export" ] ; then
    CLASS="org.apache.hadoop.hbase.snapshot.ExportSnapshot"
  else
    echo "Usage: hbase [<options>] snapshot <subcommand> [<args>]"
    echo "$options_string"
    echo ""
    echo "Subcommands:"
    echo " create   Create a new snapshot of a table"
    echo " info     Tool for dumping snapshot information"
    echo " export   Export an existing snapshot"
    exit 1
  fi
elif [ "$COMMAND" = "master" ] ; then
  CLASS='org.apache.hadoop.hbase.master.HMaster'
  if [ "$1" != "stop" ] && [ "$1" != "clear" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS"
  fi
elif [ "$COMMAND" = "regionserver" ] ; then
  CLASS='org.apache.hadoop.hbase.regionserver.HRegionServer'
  if [ "$1" != "stop" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS"
  fi
elif [ "$COMMAND" = "thrift" ] ; then
  CLASS='org.apache.hadoop.hbase.thrift.ThriftServer'
  if [ "$1" != "stop" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS"
  fi
elif [ "$COMMAND" = "thrift2" ] ; then
  CLASS='org.apache.hadoop.hbase.thrift2.ThriftServer'
  if [ "$1" != "stop" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS"
  fi
elif [ "$COMMAND" = "rest" ] ; then
  CLASS='org.apache.hadoop.hbase.rest.RESTServer'
  if [ "$1" != "stop" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_REST_OPTS"
  fi
elif [ "$COMMAND" = "zookeeper" ] ; then
  CLASS='org.apache.hadoop.hbase.zookeeper.HQuorumPeer'
  if [ "$1" != "stop" ] ; then
    HBASE_OPTS="$HBASE_OPTS $HBASE_ZOOKEEPER_OPTS"
  fi
elif [ "$COMMAND" = "clean" ] ; then
  case $1 in
    --cleanZk|--cleanHdfs|--cleanAll)
      matches="yes" ;;
    *) ;;
  esac
  if [ $# -ne 1 -o "$matches" = "" ]; then
    echo "Usage: hbase clean (--cleanZk|--cleanHdfs|--cleanAll)"
    echo "Options: "
    echo " --cleanZk   cleans hbase related data from zookeeper."
    echo " --cleanHdfs cleans hbase related data from hdfs."
    echo " --cleanAll  cleans hbase related data from both zookeeper and hdfs."
    exit 1;
  fi
  "$bin"/hbase-cleanup.sh --config ${HBASE_CONF_DIR} $@
  exit $?
elif [ "$COMMAND" = "mapredcp" ] ; then
  CLASS='org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool'
elif [ "$COMMAND" = "classpath" ] ; then
  echo $CLASSPATH
  exit 0
elif [ "$COMMAND" = "pe" ] ; then
  CLASS='org.apache.hadoop.hbase.PerformanceEvaluation'
  HBASE_OPTS="$HBASE_OPTS $HBASE_PE_OPTS"
elif [ "$COMMAND" = "ltt" ] ; then
  CLASS='org.apache.hadoop.hbase.util.LoadTestTool'
  HBASE_OPTS="$HBASE_OPTS $HBASE_LTT_OPTS"
elif [ "$COMMAND" = "canary" ] ; then
  CLASS='org.apache.hadoop.hbase.tool.Canary'
  HBASE_OPTS="$HBASE_OPTS $HBASE_CANARY_OPTS"
elif [ "$COMMAND" = "version" ] ; then
  CLASS='org.apache.hadoop.hbase.util.VersionInfo'
else
  CLASS=$COMMAND
fi

# Have JVM dump heap if we run out of memory. Files will be 'launch directory'
# and are named like the following: java_pid21612.hprof. Apparently it doesn't
# 'cost' to have this flag enabled. Its a 1.6 flag only. See:
# http://blogs.sun.com/alanb/entry/outofmemoryerror_looks_a_bit_better
HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"
HBASE_OPTS="$HBASE_OPTS -Dhbase.log.file=$HBASE_LOGFILE"
HBASE_OPTS="$HBASE_OPTS -Dhbase.home.dir=$HBASE_HOME"
HBASE_OPTS="$HBASE_OPTS -Dhbase.id.str=$HBASE_IDENT_STRING"
HBASE_OPTS="$HBASE_OPTS -Dhbase.root.logger=${HBASE_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
  HBASE_OPTS="$HBASE_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
  export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH"
fi

# Enable security logging on the master and regionserver only
if [ "$COMMAND" = "master" ] || [ "$COMMAND" = "regionserver" ]; then
  HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,RFAS}"
else
  HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,NullAppender}"
fi

HEAP_SETTINGS="$JAVA_HEAP_MAX $JAVA_OFFHEAP_MAX"

# Exec unless HBASE_NOEXEC is set.
export CLASSPATH
if [ "${HBASE_NOEXEC}" != "" ]; then
  "$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"
else
  exec "$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"
fi
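Pulling it together, the final exec amounts to a java invocation shaped like the sketch below. All values here are illustrative placeholders, not taken from a real run:

```shell
# Assemble the same flags the hbase script passes; all values hypothetical.
JAVA=java
COMMAND=regionserver
CLASS=org.apache.hadoop.hbase.regionserver.HRegionServer
HEAP_SETTINGS="-Xmx4g"
HBASE_OPTS="-Dhbase.log.dir=/var/log/hbase"

# -Dproc_$COMMAND only labels the process (handy in jps/ps output);
# OnOutOfMemoryError kills the JVM outright instead of letting it limp on.
cmd="$JAVA -Dproc_$COMMAND -XX:OnOutOfMemoryError=\"kill -9 %p\" $HEAP_SETTINGS $HBASE_OPTS $CLASS start"
echo "$cmd"
```

So each daemon is just this one template with a different CLASS and a different per-role OPTS variable spliced in.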
From the above we can see that HFilePrettyPrinter lets you inspect what an HFile contains, and WALPrettyPrinter can open and dump HLog (WAL) files.
Note also that hbase shell is actually launched as a Ruby script (bin/hirb.rb, run under JRuby).
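As a usage sketch, the subcommand-to-class mapping in the case chain above means the two pretty-printers are reachable directly from the command line. The file paths below are hypothetical, and the block is guarded so it is a no-op where HBase is not installed:

```shell
# Only attempt this where an hbase installation is on PATH.
if command -v hbase >/dev/null 2>&1; then
  # "hbase hfile" dispatches to HFilePrettyPrinter:
  #   -p prints each KeyValue, -m prints the file metadata, -f names the file
  hbase hfile -p -m -f /hbase/data/default/mytable/r1/cf/hfile1   # hypothetical path
  # "hbase wal" dispatches to WALPrettyPrinter; -p pretty-prints cell values
  hbase wal -p /hbase/WALs/host1,16020,1500000000000/wal1         # hypothetical path
  msg="ran against local hbase"
else
  msg="hbase not on PATH; commands shown for reference only"
fi
```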
The startup entry point of HMaster:
public static void main(String [] args) {
  VersionInfo.logVersion();
  new HMasterCommandLine(HMaster.class).doMain(args);
}

The startup entry point of HRegionServer:
/**
 * @see org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine
 */
public static void main(String[] args) throws Exception {
  VersionInfo.logVersion();
  Configuration conf = HBaseConfiguration.create();
  @SuppressWarnings("unchecked")
  Class<? extends HRegionServer> regionServerClass =
      (Class<? extends HRegionServer>) conf
          .getClass(HConstants.REGION_SERVER_IMPL, HRegionServer.class);
  new HRegionServerCommandLine(regionServerClass).doMain(args);
}
The next installments will continue with the startup of HMaster and HRegionServer themselves.
To be continued...