HBase


A note before starting: I will translate the HBase documentation first. Hadoop is not necessarily needed in full, but HBase itself is indispensable here (it is built on top of Hadoop's HDFS, so it actually depends on Hadoop; if you only want to test on a single machine you do not need to install and configure Hadoop, but for a distributed setup you still do). I also had a look at Cassandra and Accumulo, and they are all much the same; the main limitation is that I have not gone down to the source-code level.
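To make the standalone-versus-distributed point above concrete, here is a minimal hbase-site.xml sketch. The property names (hbase.rootdir, hbase.cluster.distributed) are the standard HBase ones, but the local path and the NameNode address are placeholders rather than values from any real cluster.

Standalone mode, no Hadoop installation required, data lives on the local filesystem:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///tmp/hbase-data</value>  <!-- placeholder local directory -->
  </property>
</configuration>

Fully distributed mode, data lives in HDFS, so a running Hadoop/HDFS cluster is required:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:8020/hbase</value>  <!-- placeholder NameNode address -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>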

When Would I Use HBase?

Use HBase when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.

 

When would I use HBase?

Use HBase when you need random, real-time read/write access to your Big Data. The goal of this project is to host very large tables (billions of rows by millions of columns) on top of clusters of commodity hardware.

HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable. Just as Bigtable builds on the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
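To show what random, real-time read/write access looks like from client code, here is a small Java sketch against the HBase client API (the ConnectionFactory/Table style used in HBase 1.0 and later). The table name test_table, column family cf, and row key row-42 are made-up examples and are assumed to exist already; connection settings are read from the hbase-site.xml on the classpath.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RandomReadWrite {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("test_table"))) {  // assumed table

            // Random write: store one cell, addressed by row key / column family / qualifier.
            Put put = new Put(Bytes.toBytes("row-42"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
            table.put(put);

            // Random read: fetch that single row straight back by its key.
            Get get = new Get(Bytes.toBytes("row-42"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("cf"),
                                                              Bytes.toBytes("greeting"))));
        }
    }
}

Point lookups and single-row writes like these are served by exactly one RegionServer, which is what keeps them fast regardless of how many billions of rows the table holds.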

Features

 

  • Linear and modular scalability.
  • Strictly consistent reads and writes.
  • Automatic and configurable sharding of tables
  • Automatic failover support between RegionServers.
  • Convenient base classes for backing Hadoop MapReduce jobs with HBase tables.
  • Easy to use Java API for client access.
  • Block cache and Bloom Filters for real-time queries.
  • Query predicate push down via server side Filters (see the sketch after this list)
  • Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options
  • Extensible jruby-based (JIRB) shell
  • Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
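The "Query predicate push down via server side Filters" bullet is worth a concrete example; this is the sketch referred to in that item. The filter below is shipped to the RegionServers along with the Scan, so rows whose cf:status column is not "active" are discarded on the server side instead of being streamed to the client. As before, test_table, cf, status, and "active" are made-up names, and the code assumes the HBase 2.x-style CompareOperator API.

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterPushDown {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("test_table"))) {  // assumed table

            // Predicate evaluated inside each RegionServer: keep rows where cf:status == "active".
            SingleColumnValueFilter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("cf"), Bytes.toBytes("status"),
                    CompareOperator.EQUAL, Bytes.toBytes("active"));

            Scan scan = new Scan();
            scan.setFilter(filter);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            }
        }
    }
}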


How to launch the JIRB (JRuby) shell with a script:

$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT

PATH_TO_SCRIPT is the path to a .rb file. Languages like Ruby and Python really are quite popular these days...
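For a concrete picture of what such a .rb file might contain, here is a hypothetical list_tables.rb that calls the HBase Java client from JRuby (bin/hbase puts the client classes on the classpath, which is why org.jruby.Main can find them). The script is only an illustrative sketch, not something shipped with HBase.

# list_tables.rb, run with: ./bin/hbase org.jruby.Main list_tables.rb
include Java

java_import 'org.apache.hadoop.hbase.HBaseConfiguration'
java_import 'org.apache.hadoop.hbase.client.ConnectionFactory'

# Build a connection from the hbase-site.xml on the classpath and print every table name.
connection = ConnectionFactory.create_connection(HBaseConfiguration.create)
begin
  admin = connection.get_admin
  admin.list_table_names.each { |t| puts t.get_name_as_string }
  admin.close
ensure
  connection.close
end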
