
Hadoop Cluster + Kylin

Source: 华拓网

Note: Quite a few readers have said they want to build a Hadoop platform from open-source components and then deploy Kylin on top of it, but run into all sorts of problems. Here I walk through deploying such an environment for reference. If you still have questions, let's discuss.

System environment and component versions

Linux operating system:

cat /etc/redhat-release

CentOS Linux release 7.2.1511 (Core)

JDK version:

java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Hadoop component versions:

Hive:apache-hive-1.2.1-bin

Hadoop:hadoop-2.7.2

HBase:hbase-1.1.9-bin

Zookeeper:zookeeper-3.4.6

Kylin version:

apache-kylin-1.5.4.1-hbase1.x-bin

The three nodes and the components installed on each (for testing only):

Base component deployment

1. JDK environment setup (all 3 nodes)

Install from the rpm package:

rpm -ivh jdk-8u111-linux-x64.rpm

Configure environment variables:

vi /etc/profile

export JAVA_HOME=/usr/java/default

export JRE_HOME=/usr/java/default/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

source /etc/profile

Verify:

java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

2. ZooKeeper setup (all 3 nodes)

Installation:

tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local/

cd /usr/local/

ln -s zookeeper-3.4.6 zookeeper

Create the data and log directories:

mkdir /usr/local/zookeeper/zkdata

mkdir /usr/local/zookeeper/zkdatalog

Configure the ZooKeeper parameters:

cd /usr/local/zookeeper/conf

cp zoo_sample.cfg zoo.cfg

The finished configuration file looks like this:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper/zkdata

dataLogDir=/usr/local/zookeeper/zkdatalog

clientPort=2181

server.1=ldvl-kyli-a01:2888:3888

server.2=ldvl-kyli-a02:2888:3888

server.3=ldvl-kyli-a03:2888:3888

Create the myid file:

cd /usr/local/zookeeper/zkdata

echo 1 > myid  # on each node, write the value that matches its server.N entry in the configuration above
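Concretely, the other two nodes need the value that matches their server.N line in zoo.cfg:

echo 2 > /usr/local/zookeeper/zkdata/myid   # run on ldvl-kyli-a02
echo 3 > /usr/local/zookeeper/zkdata/myid   # run on ldvl-kyli-a03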

Start ZooKeeper:

zkServer.sh start

Check the status:

Node 192.168.1.129:

zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg

Mode: follower

Node 192.168.1.130:

zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg

Mode: leader

Node 192.168.1.131:

zkServer.sh status

JMX enabled by default

Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg

Mode: follower
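As an extra connectivity check, the ensemble can be probed with ZooKeeper's four-letter commands or the CLI. A sketch, assuming nc is installed and the ZooKeeper bin directory is on PATH (as it was when zkServer.sh was run above):

echo stat | nc ldvl-kyli-a01 2181 | grep Mode   # reports leader/follower role
zkCli.sh -server ldvl-kyli-a01:2181 ls /        # lists the root znodes and exits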

3. MariaDB database

Installation:

yum install MariaDB-server MariaDB-client

Start the service:

systemctl start mariadb

Set the root password and apply basic hardening:

mysql_secure_installation
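Optionally, enable the service at boot and confirm the root login works (a quick check; the password is whatever was set during mysql_secure_installation):

systemctl enable mariadb
mysql -uroot -p -e "select version();"   # should print the MariaDB server version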

4. Disable the firewall

systemctl disable firewalld

systemctl stop firewalld

SELinux also needs to be disabled: edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled.
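The same change can be scripted; a sketch that assumes the stock CentOS 7 config file:

setenforce 0                                                           # take effect immediately (permissive), no reboot needed
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots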

5. Keep time synchronized across the three nodes

This can be set up with the ntp service.
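For example, a minimal ntpd setup (chronyd is another option on CentOS 7; the time servers in the default /etc/ntp.conf are placeholders and may need to be replaced with ones reachable from your network):

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
ntpq -p   # confirm the node is syncing against a time source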

Hadoop component deployment

1. Hadoop

Create the group and user:

groupadd hadoop

useradd -s /bin/bash -d /app/hadoop -m -g hadoop hadoop

passwd hadoop

All of the operations below are performed as the hadoop user.

Switch to the hadoop user and set up SSH trust between the nodes:

ssh-keygen -t rsa

ssh-copy-id -p 22 hadoop@192.168.1.129

ssh-copy-id -p 22 hadoop@192.168.1.130
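A quick way to confirm the trust relationship works, sketched under the assumption that all three nodes (including 192.168.1.131) should be reachable without a password:

for h in 192.168.1.129 192.168.1.130 192.168.1.131; do ssh -o BatchMode=yes -p 22 hadoop@$h hostname; done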

Extract the archive:

$ tar -zxvf hadoop-2.7.2.tar.gz

Create a symlink:

$ ln -s hadoop-2.7.2 hadoop

Configuration:

$ cd /app/hadoop/hadoop/etc/hadoop

core-site.xml:

<configuration>

<property>

   <name>fs.defaultFS</name>

   <value>hdfs://ldvl-kyli-a01:9000</value>

</property>

<property>

   <name>hadoop.tmp.dir</name>

   <value>file:/app/hadoop/hadoop/tmp</value>

</property>

<property>

   <name>io.file.buffer.size</name>

   <value>131702</value>

</property>

</configuration>

hdfs-site.xml:

<configuration>

<property>

   <name>dfs.namenode.name.dir</name>

   <value>file:/app/hadoop/hdfs/name</value>

</property>

<property>

   <name>dfs.datanode.data.dir</name>

   <value>file:/app/hadoop/hdfs/data</value>

</property>

<property>

   <name>dfs.replication</name>

   <value>3</value>

</property>

<property>

  <name>dfs.http.address</name> 

  <value>ldvl-kyli-a01:50070</value> 

</property>

<property>

   <name>dfs.namenode.secondary.http-address</name>

   <value>ldvl-kyli-a01:50090</value>

</property>

<property>

   <name>dfs.webhdfs.enabled</name>

   <value>true</value>

</property>

<property>

   <name>dfs.permissions</name>

   <value>false</value>

</property>

<property>

   <name>dfs.blocksize</name>

   <value>268435456</value>

   <description>HDFS blocksize of 256MB for large file-systems.</description>

</property>

<property>

  <name>dfs.datanode.max.xcievers</name>

  <value>4096</value>

</property>

</configuration>

yarn-site.xml:

<configuration>

<property>

<name>yarn.resourcemanager.address</name>

<value>ldvl-kyli-a01:8032</value>

</property>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>ldvl-kyli-a01:8030</value>

</property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>ldvl-kyli-a01:8031</value>

</property>

<property>

<name>yarn.resourcemanager.admin.address</name>

<value>ldvl-kyli-a01:8033</value>

</property>

<property>

<name>yarn.resourcemanager.webapp.address</name>

<value>ldvl-kyli-a01:8088</value>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

<description>Configuration to enable or disable log aggregation. Shuffle service that needs to be set for Map Reduce applications.</description>

</property>

<property>

<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

<value>org.apache.hadoop.mapred.ShuffleHandler</value>

</property>

</configuration>

mapred-site.xml:

<configuration>

<property>

   <name>mapreduce.framework.name</name>

   <value>yarn</value>

</property>

<property>

   <name>mapreduce.jobhistory.address</name>

   <value>ldvl-kyli-a01:10020</value>

</property>

<property>

   <name>mapreduce.jobhistory.webapp.address</name>

   <value>ldvl-kyli-a01:19888</value>

</property>

</configuration>

slaves:

ldvl-kyli-a01

ldvl-kyli-a02

ldvl-kyli-a03

hadoop-env.sh, mapred-env.sh, and yarn-env.sh:

export JAVA_HOME=/usr/java/default

Environment variable configuration (I configure the environment variables for all components here, so I will not repeat this for each component later):

$ cat .bashrc

export JAVA_HOME=/usr/java/default

export JRE_HOME=/usr/java/default/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

export HIVE_HOME=/app/hadoop/hive

export HADOOP_HOME=/app/hadoop/hadoop

export HBASE_HOME=/app/hadoop/hbase

# added by HCAT

export HCAT_HOME=/app/hadoop/hive/hcatalog

# added by Kylin

export KYLIN_HOME=/app/hadoop/kylin

export KYLIN_CONF=/app/hadoop/kylin/conf

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:${KYLIN_HOME}/bin:$PATH

Create the HDFS data directories:

$ mkdir -p /app/hadoop/hdfs/data

$ mkdir -p /app/hadoop/hdfs/name

$ mkdir -p /app/hadoop/tmp

Assuming all of the Hadoop configuration above is complete, you can copy it to the other nodes.

Format HDFS and start the services:

$ hdfs namenode -format

$ start-dfs.sh

$ start-yarn.sh

$ mr-jobhistory-daemon.sh start historyserver

Then verify the deployment: check the processes with jps, visit the HDFS and YARN web UIs, run the wordcount test program, and so on.
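A minimal wordcount smoke test might look like the sketch below (the examples jar ships with the Hadoop 2.7.2 distribution; the /tmp paths are arbitrary choices for this example). Per the configuration above, the HDFS web UI is at http://ldvl-kyli-a01:50070 and the YARN web UI at http://ldvl-kyli-a01:8088.

$ hdfs dfs -mkdir -p /tmp/wc-in
$ hdfs dfs -put /etc/hosts /tmp/wc-in/
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /tmp/wc-in /tmp/wc-out
$ hdfs dfs -cat /tmp/wc-out/part-r-00000   # word counts from the input file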

Hive component deployment

Installation:

$ tar -zxvf apache-hive-1.2.1-bin.tar.gz

$ ln -s apache-hive-1.2.1-bin hive

Configuration:

$ cd /app/hadoop/hive/conf

hive-env.sh:

export HIVE_HOME=/app/hadoop/hive

HADOOP_HOME=/app/hadoop/hadoop

export HIVE_CONF_DIR=/app/hadoop/hive/conf

hive-site.xml:

<configuration>

<property>

<name>hive.metastore.warehouse.dir</name>

</property>

<property>

<name>hive.exec.scratchdir</name>

</property>

<property>

<name>javax.jdo.option.ConnectionURL</name>

</property>

<property>

<name>javax.jdo.option.ConnectionDriverName</name>

<value>com.mysql.jdbc.Driver</value>

</property>

<property>

<name>javax.jdo.option.ConnectionUserName</name>

<value>hive</value>

</property>

<property>

<name>javax.jdo.option.ConnectionPassword</name>

<value>123456</value>

</property>

<property>

<name>hive.metastore.local</name>

<value>true</value>

</property>

<property>

<name>hive.metastore.uris</name>

</property>

</configuration>

hive-log4j.properties:

hive.log.dir=/app/hadoop/hive/log

hive.log.file=hive.log

Put mysql-connector-java-5.1.38-bin.jar into Hive's lib directory:

$ cp mysql-connector-java-5.1.38-bin.jar /app/hadoop/hive/lib/

Create the Hive metastore database:

MariaDB [(none)]> create database metastore character set latin1;

grant all on metastore.* to hive@"%" identified by "123456" with grant option;

flush privileges;

Start the metastore service:

nohup hive --service metastore -v &

$ tailf nohup.out

Starting Hive Metastore Server

17/03/16 14:10:29 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Starting hive metastore on port 9083
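A simple end-to-end check that Hive can talk to the metastore (a sketch; the hive command is on PATH via HIVE_HOME as configured above, and the table name is arbitrary):

$ hive -e "show databases;"
$ hive -e "create table if not exists smoke_test (id int); show tables;"   # creates a throwaway table and lists it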

HBase component deployment

Installation:

$ tar -zxvf hbase-1.1.9-bin.tar.gz

$ ln -s hbase-1.1.9 hbase

Configuration:

hbase-site.xml:

<configuration>

<property>

   <name>hbase.rootdir</name>

   <value>hdfs://ldvl-kyli-a01:9000/hbaseforkylin</value>

</property>

<property>

   <name>hbase.cluster.distributed</name>

   <value>true</value>

</property>

<property>

   <name>hbase.master.port</name>

   <value>16000</value>

</property>

<property>

   <name>hbase.master.info.port</name>

   <value>16010</value>

</property>

<property>

   <name>hbase.zookeeper.quorum</name>

   <value>ldvl-kyli-a01,ldvl-kyli-a02,ldvl-kyli-a03</value>

</property>

<property>

   <name>hbase.zookeeper.property.clientPort</name>

   <value>2181</value>

</property>

<property>

   <name>hbase.zookeeper.property.dataDir</name>

   <value>/usr/local/zookeeper/zkdata</value>

</property>

</configuration>

regionservers:

ldvl-kyli-a02

ldvl-kyli-a03

hbase-env.sh:

export JAVA_HOME=/usr/java/latest

export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

export HBASE_LOG_DIR=${HBASE_HOME}/logs

export HBASE_PID_DIR=${HBASE_HOME}/logs

export HBASE_MANAGES_ZK=false

If the log directory does not exist, create it in advance.
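For example (the path follows HBASE_HOME and HBASE_LOG_DIR above):

$ mkdir -p /app/hadoop/hbase/logs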

Start the HBase service:

$ start-hbase.sh
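A quick sanity check after startup (a sketch; the hbase command is on PATH via HBASE_HOME as configured above):

$ jps                        # HMaster should be running on this node; HRegionServer on ldvl-kyli-a02/a03
$ echo "status" | hbase shell   # should report the number of live region servers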

Kylin deployment (installed only on the first node, for testing purposes)

Installation:

$ tar -zxvf apache-kylin-1.5.4.1-hbase1.x-bin.tar.gz

$ ln -s apache-kylin-1.5.4.1-hbase1.x-bin kylin

Configuration:

$ cd kylin/conf/

kylin.properties (mostly default values):

kyin.server.mode=all

kylin.rest.servers=192.168.1.129:7070

kylin.rest.timezone=GMT+8

kylin.hive.client=cli

kylin.hive.keep.flat.table=false

kylin.storage.url=hbase

kylin.storage.cleanup.time.threshold=172800000

kylin.hdfs.working.dir=/kylin

kylin.hbase.region.cut=5

kylin.hbase.hfile.size.gb=2

kylin.hbase.region.count.min=1

kylin.hbase.region.count.max=50

Environment variable configuration:

$ cat .bashrc

export JAVA_HOME=/usr/java/default

export JRE_HOME=/usr/java/default/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

export HIVE_HOME=/app/hadoop/hive

export HADOOP_HOME=/app/hadoop/hadoop

export HBASE_HOME=/app/hadoop/hbase

# added by HCAT

export HCAT_HOME=/app/hadoop/hive/hcatalog

# added by Kylin

export KYLIN_HOME=/app/hadoop/kylin

export KYLIN_CONF=/app/hadoop/kylin/conf

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:${KYLIN_HOME}/bin:$PATH

Check the environment variables Kylin relies on:

$ ${KYLIN_HOME}/bin/check-env.sh

KYLIN_HOME is set to /app/hadoop/kylin

$ kylin/bin/find-hbase-dependency.sh

hbase dependency: /app/hadoop/hbase/lib/hbase-common-1.1.9.jar

$ kylin/bin/find-hive-dependency.sh

Logging initialized using configuration in file:/app/hadoop/apache-hive-1.2.1-bin/conf/hive-log4j.properties

HCAT_HOME is set to:/app/hadoop/hive/hcatalog, use it to find hcatalog path:

The environment checks pass, so start the Kylin service:

kylin.sh start
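Before moving on, it is worth confirming the web UI is up. A sketch, assuming curl is available and Kylin is listening on the default port 7070 on this node; startup messages land in $KYLIN_HOME/logs/kylin.log:

$ curl -s http://127.0.0.1:7070/kylin/ | head   # should return the login page HTML
$ tail -f $KYLIN_HOME/logs/kylin.log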

Import the sample data:

$ sample.sh

Then reload the metadata from the Kylin web UI, build the Cube, and you can start querying:


Query:

select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
