
Hadoop 2.6.0 Automated Deployment Scripts (Part 1)


1 Overview

I recently wrote a set of automated deployment scripts for Hadoop, covering both full-cluster deployment and adding a single node to an existing cluster. If you need to stand up a Hadoop cluster quickly, feel free to use them. I have tested the scripts on 5 virtual machines; if you run into any remaining bugs, please point them out. This post covers the cluster deployment scripts; the Hadoop version installed is 2.6.0.


2 Dependencies

Installing a Hadoop 2.6.0 cluster depends on JDK and Zookeeper. The JDK version installed here is jdk-7u60-linux-x64, and the Zookeeper version is zookeeper-3.4.6.

3 Files and configuration

The deployment scripts come in two parts: scripts run as the root user and scripts run as the Hadoop startup user. Both parts only need to be executed on a single server, and the server they are executed on becomes the Hadoop master. Each part is described below.

3.1 The root scripts

The directory structure of the root scripts is as follows:

  • conf — configuration file directory
    • init.conf
  • expect — expect scripts directory
    • password.expect
    • scp.expect
    • otherInstall.expect
  • file — installation packages directory
    • hadoop-2.6.0.tar.gz
    • jdk-7u60-linux-x64.tar.gz
    • zookeeper-3.4.6.tar.gz
  • installRoot.sh — the script to run

3.1.1 The conf directory

The init.conf file in this directory is the configuration file used by the root script; edit it before running the script. Its content is as follows:

#jdk file and version
JDK_FILE_TAR=jdk-7u60-linux-x64.tar.gz

#jdk unpack name
JDK_FILE=jdk1.7.0_60

#java home
JAVAHOME=/usr/java

#Whether to install the dependency packages: 0 means no, 1 means yes
IF_INSTALL_PACKAGE=1

#host conf
ALLHOST="hadoop1master hadoop1masterha hadoop1slave1 hadoop1slave2 hadoop1slave3"
ALLIP="192.168.0.180 192.168.0.184 192.168.0.181 192.168.0.182 192.168.0.183"

#zookeeper conf
ZOOKEEPER_TAR=zookeeper-3.4.6.tar.gz
ZOOKEEPERHOME=/usr/local/zookeeper-3.4.6
SLAVELIST="hadoop1slave1 hadoop1slave2 hadoop1slave3"

#hadoop conf
HADOOP_TAR=hadoop-2.6.0.tar.gz
HADOOPHOME=/usr/local/hadoop-2.6.0
HADOOP_USER=hadoop2
HADOOP_PASSWORD=hadoop2

#root conf: $MASTER_HA $SLAVE1 $SLAVE2 $SLAVE3
ROOT_PASSWORD="hadoop hadoop hadoop hadoop"

Notes on some of the parameters:

  1. ALLHOST holds the hostnames of all servers in the Hadoop cluster, separated by spaces; ALLIP holds their IP addresses, also separated by spaces. ALLHOST and ALLIP must correspond one to one.
  2. SLAVELIST holds the hostnames of the servers where the zookeeper cluster is deployed.
  3. ROOT_PASSWORD holds the root passwords of all servers other than the master, separated by spaces (the script splits it on whitespace, so do not use commas; in practice the servers' root passwords may well differ). A quick sanity check for the host and IP lists is sketched below.
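
Because a length mismatch between ALLHOST and ALLIP would silently write wrong lines into /etc/hosts, here is a minimal pre-flight check (my addition, not part of the original scripts, assuming the space-separated format above and that it is run from the script directory):

#!/bin/bash
# Hypothetical pre-flight check: ALLHOST and ALLIP must have the same number of entries
source conf/init.conf
hostArr=($ALLHOST)
ipArr=($ALLIP)
if [ ${#hostArr[@]} -ne ${#ipArr[@]} ]; then
    echo "[ERROR]: ALLHOST has ${#hostArr[@]} entries but ALLIP has ${#ipArr[@]} entries"
    exit 1
fi
echo "ALLHOST and ALLIP match up."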

3.1.2 The expect directory

This directory contains three files: password.expect, scp.expect and otherInstall.expect. password.expect sets the password of the Hadoop startup user; scp.expect copies files to remote servers; otherInstall.expect runs installRoot.sh on the other servers remotely. All three are called from installRoot.sh.

The content of password.expect is as follows:

#!/usr/bin/expect -f
set user [lindex $argv 0]
set password [lindex $argv 1]
spawn passwd $user
expect "New password:"
send "$password\r"
expect "Retype new password:"
send "$password\r"
expect eof

The values $argv 0 and $argv 1 are passed in by the installRoot.sh script; the $argv * values of the other two files are passed the same way.
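
For reference, this is how installRoot.sh (section 3.1.4) invokes password.expect; the two positional arguments become $argv 0 and $argv 1:

# In installRoot.sh: $argv 0 = $HADOOP_USER, $argv 1 = $HADOOP_PASSWORD
$PROGDIR/expect/password.expect $HADOOP_USER $HADOOP_PASSWORD >/dev/null 2>&1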

The content of scp.expect is as follows:

#!/usr/bin/expect -f
# set dir, host, user, password
set dir [lindex $argv 0]
set host [lindex $argv 1]
set user [lindex $argv 2]
set password [lindex $argv 3]
set timeout -1
spawn scp -r $dir $user@$host:/root/
expect {
    "(yes/no)?" {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:" {
        send "$password\n"
    }
}
expect eof

The content of otherInstall.expect is as follows:

#!/usr/bin/expect -f
# set dir, name, host, user, password
set dir [lindex $argv 0]
set name [lindex $argv 1]
set host [lindex $argv 2]
set user [lindex $argv 3]
set password [lindex $argv 4]
set timeout -1
spawn ssh -q $user@$host "$dir/$name"
expect {
    "(yes/no)?" {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:" {
        send "$password\n"
    }
}
expect eof

3.1.3 The file directory

This directory holds the installation packages for the Hadoop cluster and its dependencies.

3.1.4 The installRoot.sh script

This script is executed as the root user. Its content is as follows:

#!/bin/bash

if [ $USER != "root" ]; then
    echo "[ERROR]: Must run as root"; exit 1
fi
# Get absolute path and name of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
readonly PROGNAME=$(basename $0)
hostname=`hostname`

source /etc/profile
# import init.conf
source $PROGDIR/conf/init.conf
echo "install start..."
# install dependency packages
if [ $IF_INSTALL_PACKAGE -eq 1 ]; then
    yum -y install expect >/dev/null 2>&1
    echo "expect install successful."
    # yum install openssh-clients # scp
fi

# stop iptables or open ports; here we stop iptables
service iptables stop
chkconfig iptables off
FF_INFO=`service iptables status`
if [ -n "`echo $FF_INFO | grep "Firewall is not running"`" ]; then
    echo "Firewall is already stopped."
else
    echo "[ERROR]: Failed to shut down the firewall. Exit shell."
    exit 1
fi
# stop selinux
setenforce 0
SL_INFO=`getenforce`
if [ "$SL_INFO" == "Permissive" -o "$SL_INFO" == "Disabled" ]; then
    echo "selinux is already stopped."
else
    echo "[ERROR]: Failed to shut down selinux. Exit shell."
    exit 1
fi

# host config
hostArr=($ALLHOST)
IpArr=($ALLIP)
for ((i = 0; i < ${#hostArr[@]}; i++)); do
    if [ -z "`grep "${hostArr[i]}" /etc/hosts`" -o -z "`grep "${IpArr[i]}" /etc/hosts`" ]; then
        echo "${IpArr[i]} ${hostArr[i]}" >> /etc/hosts
    fi
done

# user config
groupadd $HADOOP_USER && useradd -g $HADOOP_USER $HADOOP_USER && $PROGDIR/expect/password.expect $HADOOP_USER $HADOOP_PASSWORD >/dev/null 2>&1

# check jdk
checkOpenJDK=`rpm -qa | grep java`
# openJDK is already installed, uninstall it
if [ -n "$checkOpenJDK" ]; then
    rpm -e --nodeps $checkOpenJDK
    echo "uninstall openJDK successful"
fi
# A way of exception handling: if `java -version` fails, the commands after || run.
`java -version` || (
    [ ! -d $JAVAHOME ] && mkdir $JAVAHOME
    tar -zxf $PROGDIR/file/$JDK_FILE_TAR -C $JAVAHOME
    echo "export JAVA_HOME=$JAVAHOME/$JDK_FILE" >> /etc/profile
    echo 'export JAVA_BIN=$JAVA_HOME/bin' >> /etc/profile
    echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
    echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
    echo 'export JAVA_HOME JAVA_BIN PATH CLASSPATH' >> /etc/profile
    echo "sun jdk done"
)

# check zookeeper
slaveArr=($SLAVELIST)
if [[ "${slaveArr[@]}" =~ $hostname ]]; then
    `zkServer.sh status` || [ -d $ZOOKEEPERHOME ] || (
        tar -zxf $PROGDIR/file/$ZOOKEEPER_TAR -C /usr/local/
        chown -R $HADOOP_USER:$HADOOP_USER $ZOOKEEPERHOME
        echo "export ZOOKEEPER_HOME=$ZOOKEEPERHOME" >> /etc/profile
        echo 'PATH=$PATH:$ZOOKEEPER_HOME/bin' >> /etc/profile
        echo "zookeeper done"
    )
fi

# check hadoop2
`hadoop version` || [ -d $HADOOPHOME ] || (
    tar -zxf $PROGDIR/file/$HADOOP_TAR -C /usr/local/
    chown -R $HADOOP_USER:$HADOOP_USER $HADOOPHOME
    echo "export HADOOP_HOME=$HADOOPHOME" >> /etc/profile
    echo 'PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
    echo 'HADOOP_HOME_WARN_SUPPRESS=1' >> /etc/profile
    echo "hadoop2 done"
)
source /etc/profile

# ssh config
sed -i "s/^#RSAAuthentication yes/RSAAuthentication yes/g" /etc/ssh/sshd_config
sed -i "s/^#PubkeyAuthentication yes/PubkeyAuthentication yes/g" /etc/ssh/sshd_config
sed -i "s/^#AuthorizedKeysFile/AuthorizedKeysFile/g" /etc/ssh/sshd_config
sed -i "s/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config
sed -i "s/^#UseDNS yes/UseDNS no/g" /etc/ssh/sshd_config
service sshd restart

# install the other servers
rootPasswdArr=($ROOT_PASSWORD)
if [ $hostname == ${hostArr[0]} ]; then
    i=0
    for node in $ALLHOST; do
        if [ $hostname == $node ]; then
            echo "this server, do nothing"
        else
            # copy the install dir to the other server
            $PROGDIR/expect/scp.expect $PROGDIR $node $USER ${rootPasswdArr[$i]}
            $PROGDIR/expect/otherInstall.expect $PROGDIR $PROGNAME $node $USER ${rootPasswdArr[$i]}
            i=$(($i + 1)) # i++
            echo "$node install successful."
        fi
    done
    # Let the environment variables take effect
    su - root
fi

This script does the following:

1. If IF_INSTALL_PACKAGE=1 is set in the configuration file, install expect; this is the default. If expect is already present on the servers, set IF_INSTALL_PACKAGE=0.
2. Shut down the firewall and disable selinux.
3. Write the host/IP mappings of all cluster machines into /etc/hosts.
4. Create the Hadoop startup user and its group.
5. Install jdk, zookeeper and hadoop, and set the environment variables.
6. Edit the ssh configuration file /etc/ssh/sshd_config.
7. If the machine running the script is the master, copy the root scripts to the other machines and execute them there.

Note: before running this script, make sure every server in the Hadoop cluster can execute the scp command. If it cannot, install openssh-clients on each server with: yum -y install openssh-clients.
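
A small guard along these lines (my addition, not part of the original scripts) installs the client only where scp is actually missing:

# Hypothetical helper: install openssh-clients only if scp is missing
command -v scp >/dev/null 2>&1 || yum -y install openssh-clients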

3.2 The hadoop scripts

The directory structure of the hadoop scripts is as follows:

  • bin — scripts directory
    • config_hadoop.sh
    • config_ssh.sh
    • config_zookeeper.sh
    • ssh_nopassword.expect
    • start_all.sh
  • conf — configuration file directory
    • init.conf
  • template — configuration template directory
    • core-site.xml
    • hadoop-env.sh
    • hdfs-site.xml
    • mapred-site.xml
    • mountTable.xml
    • myid
    • slaves
    • yarn-env.sh
    • yarn-site.xml
    • zoo.cfg
  • installCluster.sh — the script to run

3.2.1 The bin scripts directory

This directory contains all the scripts called by installCluster.sh. They are described one by one below.

3.2.1.1 config_hadoop.sh

This script creates the directories Hadoop needs and fills in the configuration files; all its parameters come from init.conf.

#!/bin/bash

# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf

for node in $ALL; do
    # create dirs
    ssh -q $HADOOP_USER@$node "
        mkdir -p $HADOOPDIR_CONF/hadoop2/namedir
        mkdir -p $HADOOPDIR_CONF/hadoop2/datadir
        mkdir -p $HADOOPDIR_CONF/hadoop2/jndir
        mkdir -p $HADOOPDIR_CONF/hadoop2/tmp
        mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrsys
        mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrlocal
        mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlocal
        mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlogs
    "
    echo "$node create dir done."
    for conffile in $CONF_FILE; do
        # copy
        scp $PROGDIR/../template/$conffile $HADOOP_USER@$node:$HADOOPHOME/etc/hadoop
        # update
        ssh -q $HADOOP_USER@$node "
            sed -i 's%MASTER_HOST%${MASTER_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%MASTER_HA_HOST%${MASTER_HA_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%SLAVE1%${SLAVE1}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%SLAVE2%${SLAVE2}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%SLAVE3%${SLAVE3}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%HDFS_CLUSTER_NAME%${HDFS_CLUSTER_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%VIRTUAL_PATH%${VIRTUAL_PATH}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%DFS_NAMESERVICES%${DFS_NAMESERVICES}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%NAMENODE1_NAME%${NAMENODE1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%NAMENODE2_NAME%${NAMENODE2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%NAMENODE_JOURNAL%${NAMENODE_JOURNAL}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%HADOOPDIR_CONF%${HADOOPDIR_CONF}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%ZOOKEEPER_ADDRESS%${ZOOKEEPER_ADDRESS}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%YARN1_NAME%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%YARN2_NAME%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%HADOOPHOME%${HADOOPHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
            sed -i 's%JAVAHOME%${JAVAHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
            # update yarn.resourcemanager.ha.id for yarn_ha
            if [ $conffile == 'yarn-site.xml' ]; then
                if [ $node == $MASTER_HA_HOST ]; then
                    sed -i 's%YARN_ID%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                else
                    sed -i 's%YARN_ID%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
                fi
            fi
        "
    done
    echo "$node copy hadoop template done."
done
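
The hadoop-side init.conf is not listed in this post, but judging from the template directory in section 3.2, CONF_FILE is presumably a space-separated list of the template file names, along these lines (an assumption; adapt it to the real file):

# Assumed shape of CONF_FILE in conf/init.conf (hypothetical values)
CONF_FILE="core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hadoop-env.sh yarn-env.sh slaves mountTable.xml"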
      

3.2.1.2 config_ssh.sh and ssh_nopassword.expect

These two files set up passwordless ssh login; ssh_nopassword.expect is called from config_ssh.sh.

The config_ssh.sh file is as follows:

#!/bin/bash

# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf
# Get hostname
HOSTNAME=`hostname`

# Config ssh no-password login
echo "Config ssh on master"
# If the directory ~/.ssh does not exist, run mkdir and chmod
[ ! -d ~/.ssh ] && (mkdir ~/.ssh) && (chmod 700 ~/.ssh)
# If the file ~/.ssh/id_rsa.pub does not exist, run ssh-keygen and chmod
[ ! -f ~/.ssh/id_rsa.pub ] && (yes | ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa) && (chmod 600 ~/.ssh/id_rsa.pub)

echo "Config ssh nopassword for cluster"
# For all nodes, including master and slaves
for node in $ALL; do
    # execute bin/ssh_nopassword.expect
    $PROGDIR/ssh_nopassword.expect $node $HADOOP_USER $HADOOP_PASSWORD $HADOOPDIR_CONF/.ssh/id_rsa.pub >/dev/null 2>&1
    echo "$node done."
done
echo "Config ssh successful."

The ssh_nopassword.expect file is as follows:

#!/usr/bin/expect -f

set host [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]
set dir [lindex $argv 3]
spawn ssh-copy-id -i $dir $user@$host
expect {
    "yes/no" {
        send "yes\r"; exp_continue
    }
    -nocase "password:" {
        send "$password\r"
    }
}
expect eof

3.2.1.3 config_zookeeper.sh

This script handles the zookeeper configuration. Its content is as follows:

#!/bin/bash

# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf

# update conf
sed -i "s%ZOOKEEPERHOME%${ZOOKEEPERHOME}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE1%${ZOOKEEPER_SLAVE1}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE2%${ZOOKEEPER_SLAVE2}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE3%${ZOOKEEPER_SLAVE3}%g" $PROGDIR/../template/zoo.cfg

zookeeperArr=("$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3")
myid=1
for node in ${zookeeperArr[@]}; do
    scp $PROGDIR/../template/zoo.cfg $HADOOP_USER@$node:$ZOOKEEPERHOME/conf
    echo $myid > $PROGDIR/../template/myid
    ssh -q $HADOOP_USER@$node "
        [ ! -d $ZOOKEEPERHOME/data ] && (mkdir $ZOOKEEPERHOME/data)
        [ ! -d $ZOOKEEPERHOME/log ] && (mkdir $ZOOKEEPERHOME/log)
    "
    scp $PROGDIR/../template/myid $HADOOP_USER@$node:$ZOOKEEPERHOME/data
    myid=`expr $myid + 1` # myid++
    echo "$node copy zookeeper template done."
done
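
The zoo.cfg template itself is not shown here, but given the substitutions above and the myid values 1 through 3 assigned to ZOOKEEPER_SLAVE1 through ZOOKEEPER_SLAVE3, the generated file would presumably look like this (a sketch using the standard zookeeper ports, which are an assumption):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.6/data
dataLogDir=/usr/local/zookeeper-3.4.6/log
clientPort=2181
server.1=hadoop1slave1:2888:3888
server.2=hadoop1slave2:2888:3888
server.3=hadoop1slave3:2888:3888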
      

3.2.1.4 start_all.sh

This script starts zookeeper and all the Hadoop components. Its content is as follows:

#!/bin/bash

source /etc/profile
# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf

# start zookeeper
zookeeperArr=("$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3")
for znode in ${zookeeperArr[@]}; do
    ssh -q $HADOOP_USER@$znode "
        source /etc/profile
        $ZOOKEEPERHOME/bin/zkServer.sh start
    "
    echo "$znode zookeeper start done."
done

# start journalnodes
journalArr=($JOURNALLIST)
for jnode in ${journalArr[@]}; do
    ssh -q $HADOOP_USER@$jnode "
        source /etc/profile
        $HADOOPHOME/sbin/hadoop-daemon.sh start journalnode
    "
    echo "$jnode journalnode start done."
done

# format zookeeper
$HADOOPHOME/bin/hdfs zkfc -formatZK

# format hdfs
$HADOOPHOME/bin/hdfs namenode -format -clusterId $DFS_NAMESERVICES

# start namenode
$HADOOPHOME/sbin/hadoop-daemon.sh start namenode

# sign in to master_ha, sync from namenode to namenode_ha
ssh -q $HADOOP_USER@$MASTER_HA_HOST "
    $HADOOPHOME/bin/hdfs namenode -bootstrapStandby
"

# start zkfc on master
$HADOOPHOME/sbin/hadoop-daemon.sh start zkfc

# start namenode_ha and datanodes
$HADOOPHOME/sbin/start-dfs.sh

# start yarn
$HADOOPHOME/sbin/start-yarn.sh

# start yarn_ha
ssh -q $HADOOP_USER@$MASTER_HA_HOST "
    source /etc/profile
    $HADOOPHOME/sbin/yarn-daemon.sh start resourcemanager
"
echo "start all done."
      

4 Automated cluster deployment workflow

4.1 Running the root script

Pick one server as the master node of Hadoop 2.6.0 and execute the following as the root user.

1. Make sure every server in the Hadoop cluster can run scp: try scp on each server, and if the command is not found, install it with: yum -y install openssh-clients.
2. Then do the following:
  1. Run cd ~ to enter the /root directory.
  2. Pack the directory containing the root scripts into a tar file (assume it is named root_install.tar.gz), then run rz -y and upload root_install.tar.gz (if rz is not found, install it with: yum -y install lrzsz); a packaging example follows this list.
  3. Run tar -zxvf root_install.tar.gz to unpack it.
  4. Run cd root_install to enter the root_install directory.
  5. Run ./installRoot.sh to start installing jdk, zookeeper and Hadoop, and wait for the installation to finish.
  6. Check the configuration in /etc/hosts and /etc/profile, and run java -version and hadoop version to verify the jdk and Hadoop installations. If the java or hadoop command is not found, log in to the server again and check once more.
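
For step 2 above, assuming the root scripts live in a local directory named root_install (the name is an assumption), the tar package can be produced like this before uploading:

# Pack the root script directory into a tarball for upload
tar -zcvf root_install.tar.gz root_install/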

4.2 Running the hadoop script

On the master node, execute the following as the Hadoop startup user (the user created by the root script; assume it is hadoop2):

1. Switch from the root user directly into the hadoop2 user: su - hadoop2
2. Then do the following:
  1. Run cd ~ to enter the /home/hadoop2 directory.
  2. Pack the directory containing the hadoop scripts into a tar file (assume it is named hadoop_install.tar.gz), then run rz -y and upload hadoop_install.tar.gz (if rz is not found, install it with: yum -y install lrzsz).
  3. Run tar -zxvf hadoop_install.tar.gz to unpack it.
  4. Run cd hadoop_install to enter the hadoop_install directory.
  5. Run ./installCluster.sh to configure and start zookeeper and Hadoop, and wait for the script to finish.
  6. Check the zookeeper and Hadoop startup logs to confirm the installation succeeded, and use Hadoop's own monitoring pages to check the state of the cluster.
  7. Finally, based on the fs.viewfs.mounttable.hCluster.link./tmp setting in mountTable.xml, create the directory that property's value points to:

hdfs dfs -mkdir hdfs://hadoop-cluster1/tmp

If it is not created, hdfs dfs -ls /tmp will report that the directory cannot be found. A sketch of the corresponding mountTable.xml entry follows.
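
The mountTable.xml template is not reproduced in this post, but based on the property name above and the directory just created, the ViewFS entry presumably looks like this (a sketch of the expected shape, not the author's exact file):

<property>
  <name>fs.viewfs.mounttable.hCluster.link./tmp</name>
  <value>hdfs://hadoop-cluster1/tmp</value>
</property>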

5 Summary

These Hadoop 2.6.0 deployment scripts still have shortcomings: the configuration files carry quite a few parameters, some of them redundant, and the scripts themselves leave room for improvement. Take this as a starting point rather than a finished product; if you spot any mistakes, please point them out. Thanks.
