Quickly Building a Hadoop Development Environment with HDP
Author: 灰仔 (reposted)
Build a real cluster environment with two virtual machines running CentOS 6.5.
1. Basic server setup
vim /etc/hosts
192.168.159.145    hdp01.domain
192.168.159.146    hdp02.domain
vim /etc/selinux/config
SELINUX=disabled
service iptables stop
chkconfig iptables off
vim /etc/sysconfig/network
HOSTNAME=hdp01    # the hostname is hdp01 on the first node and hdp02 on the second
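The change in /etc/sysconfig/network only takes effect after a reboot; an optional sketch to apply the new hostname right away (same names as above):
hostname hdp01    # run "hostname hdp02" on the second node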
Disable unneeded services:
chkconfig NetworkManager off
chkconfig abrt-ccpp off
chkconfig abrtd off
chkconfig acpid off
chkconfig atd off
chkconfig bluetooth off
chkconfig cpuspeed off
chkconfig ip6tables off
chkconfig iptables off
chkconfig netconsole off
chkconfig netfs off
chkconfig postfix off
chkconfig restorecond off
chkconfig httpd off
Mine is a minimal ("mini") install, so some of these services do not exist.
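A small sketch that skips services absent from a minimal install (same service list as above; chkconfig --list fails for an unknown service, so it doubles as an existence check):
for svc in NetworkManager abrt-ccpp abrtd acpid atd bluetooth cpuspeed \
           ip6tables iptables netconsole netfs postfix restorecond httpd; do
    chkconfig --list "$svc" &>/dev/null && chkconfig "$svc" off
done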
When finished, reboot.
2. Install Ambari on hdp01
(1) Download the HDP repo
Download the yum repo file provided by HDP and copy it to /etc/yum.repos.d:
[root@hdp01 ~]# wget http://public-repo-/ambari/centos6/1.x/updates/1.4.1.61/ambari.repo
-- 04:57:58--  http://public-repo-/ambari/centos6/1.x/updates/1.4.1.61/ambari.repo
Resolving public-repo-... 54.230.127.224, 205.251.212.150, 54.230.124.207, ...
Connecting to public-repo-|54.230.127.224|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 770 [binary/octet-stream]
Saving to: “ambari.repo”
100%[======================================>] 770         --.-K/s   in 0s
04:58:01 (58.8 MB/s) - “ambari.repo” saved [770/770]
[root@hdp01 ~]# cp ambari.repo /etc/yum.repos.d/
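Optionally confirm that yum can see the new repository before moving on:
yum clean all
yum repolist | grep -i ambari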
(2) Install ambari-server with yum
[root@hdp01 ~]# yum -y install ambari-server
Total download size: 49 M
Installed size: 113 M
Installed:
  ambari-server.noarch 0:1.4.1.61-1
Dependency Installed:
  postgresql.x86_64 0:8.4.20-1.el6_5        postgresql-libs.x86_64 0:8.4.20-1.el6_5        postgresql-server.x86_64 0:8.4.20-1.el6_5
3. Configure passwordless SSH for root
Generate a key pair on both hdp01 and hdp02, then copy the keys to hdp01 and hdp02 with ssh-copy-id.
[root@hdp01 ~]# ssh-keygen -t rsa
[root@hdp02 ~]# ssh-keygen -t rsa
[root@hdp01 .ssh]# ssh-copy-id hdp01.domain
[root@hdp01 .ssh]# ssh-copy-id hdp02.domain
[root@hdp02 .ssh]# ssh-copy-id hdp01.domain
[root@hdp02 .ssh]# ssh-copy-id hdp02.domain
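A quick check, run on each node, that the trust works in both directions (host names as configured in /etc/hosts):
for h in hdp01.domain hdp02.domain; do
    ssh -o BatchMode=yes "$h" hostname
done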
Synchronize the system time:
yum install -y ntp
crontab -e
*/1 * * * * ntpdate && clock -w
service ntpdate restart
service ntpd restart
service crond restart
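For reference, a complete crontab entry could look like the following; the pool server is only a placeholder for whatever time source you actually use:
*/1 * * * * /usr/sbin/ntpdate 0.pool.ntp.org && /sbin/hwclock -w    # hwclock -w syncs the hardware clock, same as clock -w above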
4. Configure the Ambari server
Apache Ambari is a web-based tool for automated deployment, management, and monitoring of Apache Hadoop. Here the Ambari server metastore uses the bundled PostgreSQL database.
[root@hdp01 ~]# ambari-server setup
Using python  /usr/bin/python2.6
Initializing...
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking iptables...
Checking JDK...
To download the Oracle JDK you must accept the license terms found at /technetwork/java/javase/terms/license/index.html and not accepting will cancel the Ambari Server setup.
Do you accept the Oracle Binary Code License Agreement [y/n] (y)?
Downloading JDK from http://public-repo-/ARTIFACTS/jdk-6u31-linux-x64.bin to /var/lib/ambari-server/resources/jdk-6u31-linux-x64.bin
JDK distribution size is
jdk-6u31-linux-x64.bin... 100% (81.6 MB of 81.6 MB)
Successfully downloaded JDK distribution to /var/lib/ambari-server/resources/jdk-6u31-linux-x64.bin
Installing JDK to /usr/jdk64
Successfully installed JDK to /usr/jdk64/jdk1.6.0_31
Downloading JCE Policy archive from http://public-repo-/ARTIFACTS/jce_policy-6.zip to /var/lib/ambari-server/resources/jce_policy-6.zip
Successfully downloaded JCE Policy archive to /var/lib/ambari-server/resources/jce_policy-6.zip
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
==============================================================================
Enter choice (1): 1
Database Name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Default properties detected. Using built-in database.
Checking PostgreSQL...
Running initdb: This may take upto a minute.
About to start PostgreSQL
Configuring local database...
Connecting to the database. Attempt 1...
Configuring PostgreSQL...
Restarting PostgreSQL
Ambari Server 'setup' completed successfully.
Start the Ambari server as the root user:
[root@hdp01 ~]$ ambari-server start
Using python  /usr/bin/python2.6
Starting ambari-server
Unable to check iptables status when starting without root privileges.
Please do not forget to disable or adjust iptables if needed
Unable to check PostgreSQL server status when starting without root privileges.
Please do not forget to start PostgreSQL server.
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Ambari Server 'start' completed successfully.
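To confirm the server really is up, a couple of quick checks (8080 is Ambari's default port):
ambari-server status
netstat -tnlp | grep 8080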
5. Install MySQL
mysql-server is used to store the Hive metastore.
First install the remi repository (so that MySQL 5.5 can be installed via yum):
[root@hdp01 ~]# yum install -y epel-release
Installed:
  epel-release.noarch 0:6-8
[root@hdp01 ~]# rpm -Uvh /enterprise/remi-release-6.rpm
Retrieving /enterprise/remi-release-6.rpm
warning: /var/tmp/rpm-tmp.JSZuMv: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
Preparing...                ########################################### [100%]
   1:remi-release           ########################################### [100%]
[root@hdp01 ~]# yum install -y mysql-server
Total download size: 12 M
[root@hdp01 ~]# yum --enablerepo=remi,remi-test list mysql mysql-server
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
Available Packages
mysql.x86_64                          5.5.36-1.el6.remi
mysql-server.x86_64                   5.5.36-1.el6.remi
[root@hdp01 ~]# yum --enablerepo=remi,remi-test install mysql mysql-server
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
Total download size: 20 M
[root@hdp01 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@hdp01 ~]# chkconfig --level 235 mysqld on
[root@hdp01 ~]# /usr/bin/mysql_secure_installation
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Change the root password? [Y/n] n
 ... skipping.
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] Y
 ... Success!
Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reload privilege tables now? [Y/n] Y
 ... Success!
All done!  If you've completed all of the above steps, your MySQL installation should now be secure.
Thanks for using MySQL!
Next, create the database and user:
[root@hdp01 ~]# mysql -u root -p
Query OK, 1 row affected (0.00 sec)
mysql> create user "hive" identified by "hive123";
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on hive.*
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
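The transcript above only shows the result lines; a minimal sketch of the full statements it implies (database, user, and password follow the transcript, while the '%' host grant is an assumption to adjust to your environment):
mysql -u root -p <<'EOF'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive123';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
EOF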
6. Open the following URL in a browser and log in with admin/admin:
http://hdp01.domain:8080/#/login
Name your cluster: debugo_test
Stack: HDP 2.0.6
Target Hosts: hdp01,hdp02
Host Registration Information:
Since passwordless SSH for root was configured earlier, select the id_rsa private key file under /root/.ssh on hdp01.domain here, then click Register and Confirm to continue:
If host registration fails here with a "Local OS is not compatible with cluster primary OS" error caused by the os_type_check.sh script, this is a known bug; you can edit os_type_check.sh so that RES=0 is set right before it prints its result.
I also updated the system with yum update and rebooted once the update finished (otherwise the compatibility error appears).
Once registration succeeds, ambari-agent is installed and can be controlled with the ambari-agent command:
[root@hdp02 Desktop]# ambari-agent status
ambari-agent currently not running
Usage: /usr/sbin/ambari-agent {start|stop|restart|status}
# Enable ambari-agent at boot on both hdp01 and hdp02
[root@hdp02 Desktop]# chkconfig ambari-agent --level 35 on
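A small loop, run from hdp01, to enable the agent at boot and check its state on every node (it relies on the root SSH trust set up earlier):
for h in hdp01.domain hdp02.domain; do
    ssh "$h" 'chkconfig ambari-agent --level 35 on; ambari-agent status'
done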
Next, choose the components to install; here Nagios, Ganglia, and Oozie are left out. For Hive, we will later choose the mysql-server installed earlier:
I did not get a Hive customization option here, so the default configuration is used.
For Hive, select the mysql-server installed earlier.
Set YARN's yarn.acl.enable to false by removing the check mark behind it, then continue:
Check for errors and proceed to the Deploy step.
This is an extremely long process; whenever a failure occurs, just retry. It takes roughly an hour (it took me much longer). For RPM packages that cannot be downloaded, download them manually, install them with yum install *.rpm, and then retry.
Installation complete:
After clicking Next you finally reach the long-awaited Dashboard; at this point all installed components are up and running.
Hive may fail to start; based on Hive's database connection settings, grant the required privileges on hdp01.domain.
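If the failure is the metastore user being denied access from hdp01.domain, a grant along these lines (password as created earlier) is usually enough; treat it as a sketch and match it to your actual Hive connection settings:
mysql -u root -p <<'EOF'
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'hdp01.domain' IDENTIFIED BY 'hive123';
FLUSH PRIVILEGES;
EOF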
7. Configuring the development environment
Download Eclipse 4.3 (Kepler) and Maven 3.2.1 to /opt, and set the environment variables:
[root@hdp01 opt]# vim /etc/profile
export JAVA_HOME=/usr/jdk64/jdk1.6.0_31
export MAVEN_HOME=/opt/apache-maven-3.2.1
export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
[root@hdp01 opt]# useradd hadoop
[root@hdp01 opt]# echo "hadoop" | passwd --stdin hadoop
[root@hdp01 opt]# chgrp -R hadoop apache-maven-3.2.1/ eclipse/ workspace/
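A quick sanity check that the new profile and user work as expected:
source /etc/profile
su - hadoop -c 'java -version && mvn -version'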
Open Eclipse -> Help -> Install New Software and install the Maven plugin (http://download.eclipse.org/m2e-wtp/releases/kepler/). After the installation, restart Eclipse and the Hadoop journey can officially begin.
8. Compiling WordCount
(1) Create a new Maven project
(2) Create a simple project (skip archetype selection)
(3) If a JRE-related warning appears:
Build path specifies execution environment J2SE-1.5. There are no JREs installed in the workspace that are strictly compatible with this environment.
remove the J2SE-1.5 entry on the project's Properties page, then Add Library -> JRE System Library -> Workspace default JRE.
(4) WordCount.java
Create the WordCount class in the .mapred package:
package com.debugo.hadoop.mapred;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Edit pom.xml and add the dependencies; the coordinates can be looked up in the Maven repository (/artifact/org.apache.hadoop):
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.3.0</version>
  </dependency>
</dependencies>
Note that running the job directly will report that the map task cannot find the inner classes of WordCount, so after mvn install you need to add your own project's jar back into the Maven project as a dependency:
mvn install:install-file -DgroupId=com.debugo.hadoop -DartifactId=mr -Dpackaging=jar -Dversion=0.1 -Dfile=mr-0.0.1-SNAPSHOT.jar -DgeneratePom=true
<dependency>
  <groupId>com.debugo.hadoop</groupId>
  <artifactId>mr</artifactId>
  <version>0.1</version>
</dependency>
In addition, /spork/archive//1717592.html describes another good solution.
Edit the Run Configuration and set the program arguments to "/input /output".
Then create the /input directory: hdfs dfs -mkdir /input
Upload some text into it with hdfs dfs -put a.txt /input.
Finally run the project; on success the results are written to the /output directory in HDFS.
[ 09:52:20,282] INFO 19952[main] - org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1380) - Counters: 49
File System Counters
FILE: Number of bytes read=5263
FILE: Number of bytes written=183603
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=6739
HDFS: Number of bytes written=3827
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=3075
Total time spent by all reduces in occupied slots (ms)=6294
Total time spent by all map tasks (ms)=3075
Total time spent by all reduce tasks (ms)=3147
Total vcore-seconds taken by all map tasks=3075
Total vcore-seconds taken by all reduce tasks=3147
Total megabyte-seconds taken by all map tasks=4723200
Total megabyte-seconds taken by all reduce tasks=9667584
Map-Reduce Framework
Map input records=144
Map output records=960
Map output bytes=10358
Map output materialized bytes=5263
Input split bytes=104
Combine input records=960
Combine output records=361
Reduce input groups=361
Reduce shuffle bytes=5263
Reduce input records=361
Reduce output records=361
Spilled Records=722
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=26
CPU time spent (ms)=2290
Physical memory (bytes) snapshot=
Virtual memory (bytes) snapshot=
Total committed heap usage (bytes)=
Shuffle Errors
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=6635
File Output Format Counters
Bytes Written=3827
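If you prefer to run the packaged job outside Eclipse, a rough command-line equivalent looks like this; the jar name and main class follow the project settings above, so adjust them to your actual build output:
hdfs dfs -mkdir -p /input
hdfs dfs -put a.txt /input
# /output must not exist yet, otherwise the job aborts
hadoop jar mr-0.0.1-SNAPSHOT.jar com.debugo.hadoop.mapred.WordCount /input /output
hdfs dfs -cat /output/part-r-00000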
References:
HDP official documentation: /
Hortonworks HDP:
/get-started/
/hadoop/ambari/
Installing MySQL 5.5 with YUM: /Linux/98.htm
Canon's guide to building Hadoop 1.x projects with Maven: http://blog.fens.me/hadoop-maven-eclipse/
The New HDP Release Makes Hadoop a True Enterprise Data Platform
Overview: At the Hadoop Summit Europe, Hortonworks announced version 2.1 of the Hortonworks Data Platform (HDP). This upgraded Hadoop distribution adds new enterprise features such as data governance, security, stream processing, and search.
At the recently concluded Hadoop Summit Europe, Hortonworks announced Hortonworks Data Platform (HDP) 2.1. This upgraded release of the Hadoop distribution adds new enterprise capabilities including data governance, security, stream processing, and search, and marks a new milestone for the Stinger Initiative's work on interactive SQL queries.
Jim Walker, director of product marketing at Hortonworks, said: "For Hadoop to truly become an enterprise data platform, it must meet certain specific requirements. Practitioners have very clear needs here: data governance, data access, data management, security, and operations. HDP 2.1 brings all of these together, making it enterprise-grade Hadoop."
HDP 2.1 is the latest stable release built from Apache open-source projects. For interactive SQL on Hadoop, the new version ships Apache Hive 0.13, the latest result of the Stinger Initiative community's effort to deliver petabyte-scale interactive SQL queries in Hadoop. Over the past 13 months the Apache Hive community has focused on innovation, with 145 developers from 45 companies including Microsoft, Teradata, and SAP contributing more than 390,000 lines of new code to Hive.
Walker said that with Apache Hive 0.13, Hive's SQL query performance has improved 100x, bringing interactive queries to the petabyte scale. Beyond broad support for complex queries and joins, Hive also extends the range of SQL semantics available to analytic applications on Hadoop.
For data governance and security, HDP 2.1 integrates Apache Falcon and Apache Knox. Falcon provides a data processing framework for governing and orchestrating data flows inside and around Hadoop; it supplies the key governance controls for acquiring and processing datasets, replicating and retaining them, redirecting datasets that live outside Hadoop, and maintaining audit trails and lineage. Knox extends Hadoop's security perimeter, integrating fully with frameworks such as LDAP and Active Directory for credential management, and provides a common authorization service across Hadoop and all related projects.
For data processing, the upgraded platform includes two brand-new processing engines, Apache Storm and Apache Solr. Storm provides real-time event processing for sensors and business activity monitoring; it is a key component of a data lake architecture because it lets users ingest millions of events per second and run fast queries over petabyte-scale data.
At the same time, through a deep technology partnership with LucidWorks, Solr has been integrated with HDP. The integrated Solr provides open-source enterprise search, enabling efficient indexing and sub-second search over billions of documents. In addition, Apache Ambari, the framework for provisioning, managing, and monitoring Apache Hadoop clusters, has been upgraded to 1.5.1 in HDP 2.1, adding support for the new data access engines along with stack extensibility, pluggable views, seamless restarts, and maintenance mode.
A technology preview of HDP 2.1 is available now, and the GA release is expected at the end of April 2014.
Getting Started with Ambari: Configuring, Managing, and Monitoring Hadoop
Source: Linux公社
Author: 杨鑫奇
Ambari is 100% open source and included in HDP, greatly simplifying installation and initial configuration of Hadoop clusters. In this article we'll be running through some installation steps to get started with Ambari. Most of the steps here are covered in the HDP documentation.
The first order of business is getting Ambari Server itself installed. There are different approaches to this, but for the purposes of this short tour, we'll assume Ambari is already installed on its own dedicated node somewhere or on one of the nodes of the (future) cluster itself. Instructions can be found under the installation steps linked above. Once Ambari Server is running, the hard work is actually done. Ambari simplifies cluster install and initial configuration with a wizard interface, taking care of it with but a few clicks and decisions from the end user. Hit http://<server_you_installed_ambari>:8080 and log in with admin/admin. Upon logging in, we are greeted with a user-friendly wizard interface. Welcome to Apache Ambari! Name that cluster and let's get going.
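Before opening the wizard, a quick check against Ambari's REST API can confirm the server is reachable (default credentials admin/admin; replace the host with your own):
curl -s -u admin:admin http://<server_you_installed_ambari>:8080/api/v1/hosts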
Now we can target hosts for installation with a full listing of host names or regular expressions (in situations when there are many nodes with similar names):
The next step is node registration, with Ambari doing all of the heavy lifting for us. An interface to track progress and drill down into log files is made available:
Upon registration completion, a detailed view of host checks run and options to re-run are also available:
Next, we select which high level components we want for the cluster. Dependency checks are all built in, so no worries about knowing which services are pre-requisites for others:
After service selection, node-specific service assignments are as simple as checking boxes:
This is where some minor typing may be required. Ambari allows simple configuration of the cluster via an easy to use interface, calling out required fields when necessary:
Once configuration has been completed, a review pane is displayed. This is a good point to pause and check for anything that requires adjustment. The Ambari wizard makes that simple. Things look fabulous here, though, so onwards!