Install a single-node Hortonworks Data Platform (HDP) version 3.1.4 on Kubuntu 18.04






This post walks you through setting up a single-node Hortonworks Data Platform on a Kubuntu 18.04 workstation.

Of course, you can easily download a Hortonworks Sandbox image here.
That said, some of us prefer to get our hands dirty and build everything from scratch.
So here we go.

Scope

  • Ambari 2.7.3
  • HDP 3.1.4
    • HDFS 3.1.1
    • YARN 3.1.1
    • MapReduce2 3.1.1
    • HBase 2.0.2
    • ZooKeeper 3.4.6
    • Ambari Metrics 0.1.0
    • SmartSense 1.5.1.2.7.3.0-139
We'll first install the Ambari Server.
Then we'll register an Ambari Agent using the Ambari console.
During this registration, we'll choose the services above to be installed. This is a subset: for now we'll leave some important services out of scope, such as Hive, Pig, Spark, Kafka, etc.
But we'll come back to them soon.

Decide : Check the support matrix

The support matrix for Hortonworks Data Platform is here.
Choose your OS version under Operating systems. See which Ambari and HDP versions are supported for your OS.



I have Ubuntu 18.04, and I will install HDP 3.1.4 using Ambari 2.7.3.
Don’t assume a newer version of your favorite OS will be OK. Stick to the versions listed in the support matrix. The author of this blog lost some time trying to install HDP on Ubuntu 19, which is too new to be supported.

Prepare 1/6 : Maximum Open Files Requirements

The upper limit for open file descriptors on your system must be at least 10,000.

First let's check the current settings. The -Sn option reports the soft limit, whereas -Hn reports the hard limit.



ulimit -Sn

ulimit -Hn



And here's how we change them.

ulimit -n 10000

The hard limit may well be higher than 10,000 already; we don't have to worry about that. These changes will be lost at the next reboot, but we only need them during the setup process anyway.
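If you do want the new limit to survive reboots, you can persist it in /etc/security/limits.conf. A minimal sketch, assuming PAM's pam_limits module is active (the Ubuntu default):

echo '* soft nofile 10000' | sudo tee -a /etc/security/limits.conf
echo '* hard nofile 10000' | sudo tee -a /etc/security/limits.conf
# takes effect for new login sessions; verify with: ulimit -Sn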

Check here for some nice details.






Prepare 2/6 : Setup password-less SSH

As you may already know, Ambari helps us administer all the nodes in a Hadoop cluster. Each of these nodes must have an Ambari agent installed.
The Ambari server can install these agents on the nodes, but only if an SSH connection can be established.
Here we have a single node, which is also where the Ambari Server is installed. We still need this SSH connection, though.
Now let's register our laptop as a trusted ssh connection for ... our laptop!
  • Generate SSH keys. Leave the passphrase empty.

oguz@dikanka:~$ sudo -i
root@dikanka:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
...
  • Add the public key to authorized keys file. 
root@dikanka:~# cd .ssh
root@dikanka:~/.ssh# cat id_rsa.pub >> authorized_keys
  • Change permissions of .ssh folder and authorized keys file.
root@dikanka:~/.ssh# cd ..
root@dikanka:~# chmod 700 .ssh  
root@dikanka:~# chmod 600 .ssh/authorized_keys
  •  Check the results
root@dikanka:~# ssh dikanka  
ssh: connect to host dikanka port 22: Connection refused
  •  This happens when :
    •  openssh is not installed. Install it as follows :
sudo apt-get update
sudo apt-get install openssh-server
    •  port 22 is blocked by the firewall. Allow this port as below :
sudo ufw allow 22
Rules updated
Rules updated (v6)
  • Check the results. When asked, type “yes” and press enter to confirm adding the server to known hosts.
root@dikanka:~# ssh dikanka
The authenticity of host 'dikanka (127.0.1.1)' can't be established.
ECDSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'dikanka' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-25-generic x86_64)
...
  • All seems OK. Type “exit” to close the ssh connection.
root@dikanka:~# exit
logout
Connection to dikanka closed.
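By the way, if you ever want to script this, here's a non-interactive equivalent of the steps above. A sketch, assuming you run it as root on a fresh setup and openssh-server is already installed:

mkdir -p /root/.ssh
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa     # empty passphrase, no prompts
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
ssh -o StrictHostKeyChecking=accept-new dikanka exit   # auto-accepts the host key on first contact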

Prepare 3/6 : Enable NTP on the Cluster and on the Browser Host

The clocks of all the nodes in your cluster, and of the machine running the browser you use to access the Ambari Web interface, must be able to synchronize with each other.
First let's check if ntp is running :
oguz@dikanka:~$ ntpstat
Unable to talk to NTP daemon. Is it running?
It's not. The following command will start the ntp service.
sudo service ntp start

And the following command will ensure that it gets automatically started during boot.
sudo update-rc.d ntp defaults

 Now it should be up and running :
oguz@dikanka:~$ ntpstat
synchronised to NTP server (213.136.0.252) at stratum 2
    time correct to within 943 ms
    polling server every 64 s
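On a systemd distribution like Kubuntu 18.04, the same two steps can also be done with systemctl, if you prefer it over the older service/update-rc.d tools:

sudo systemctl enable --now ntp
systemctl status ntp --no-pager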


Prepare 4/6 : Configuring iptables

For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:


sudo -i
ufw disable
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
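To confirm the firewall is really out of the way, list the chains; every policy should read ACCEPT, with no rules left:

iptables -L -n
iptables -t nat -L -n
iptables -t mangle -L -n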


Prepare 5/6 : Umask

The umask command sets the default permissions of newly created files and folders.
umask 022
A umask of 022 yields permissions of 755 for new folders and 644 for new files.
If these numeric codes don't ring a bell, I'd strongly suggest learning more about Linux file permissions. You can try Wikipedia.
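A quick way to see the effect for yourself (listing output abbreviated):

umask 022
mkdir testdir && touch testfile
ls -ld testdir testfile
# drwxr-xr-x ... testdir    -> 777 - 022 = 755
# -rw-r--r-- ... testfile   -> 666 - 022 = 644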


Prepare 6/6 : Repository Connection

Now we'll connect to the Hortonworks software repository in order to install the Ambari server.


oguz@dikanka:~$ sudo -i
root@dikanka:~# wget -O /etc/apt/sources.list.d/ambari.list http://public-repo-1.hortonworks.com/ambari/ubuntu18/2.x/updates/2.7.3.0/ambari.list
--2019-08-26 15:01:44--  http://public-repo-1.hortonworks.com/ambari/ubuntu18/2.x/updates/2.7.3.0/ambari.list
Resolving public-repo-1.hortonworks.com (public-repo-1.hortonworks.com)... 13.224.132.59, 13.224.132.44, 13.224.132.74, ...

 2019-08-26 15:01:44 (10,8 MB/s) - ‘/etc/apt/sources.list.d/ambari.list’ saved [187/187] 


root@dikanka:~# apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD

Executing: /tmp/apt-key-gpghome.DmGQnBiPeG/gpg.1.sh --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD
gpg: key B9733A7A07513CAD: public key "Jenkins (HDP Builds) <jenkin@hortonworks.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1


Let's update our software repository and see if we can locate ambari packages.


apt-get update

apt-cache showpkg ambari-server
apt-cache showpkg ambari-agent
apt-cache showpkg ambari-metrics-assembly
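If showpkg prints a record for each package, the repository is wired up correctly. To double-check which version apt will actually install:

apt-cache policy ambari-server
# the Candidate line should read 2.7.3.0-139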




Ambari 1/3: Install Ambari Server

We'll now install the ambari server, then configure and start it. It should be easier than you think.
apt-get install ambari-server

Success. You can now start the database server using:

   /usr/lib/postgresql/10/bin/pg_ctl -D /var/lib/postgresql/10/main -l logfile start

Ver Cluster Port Status Owner    Data directory              Log file
10  main    5432 down   postgres /var/lib/postgresql/10/main /var/log/postgresql/postgresql-10-main.log
update-alternatives: using /usr/share/postgresql/10/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
Setting up postgresql (10+190) ...
Setting up ambari-server (2.7.3.0-139) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for systemd (237-3ubuntu10.24) ...

Ambari 2/3: Setup Ambari Server

 ambari-server setup

  • When asked, type 1 to download and install Oracle JDK
  • Type y to accept the Oracle license agreement
  • Type n (or leave empty) when asked to download LZO packages
  • Type y when asked to enter advanced database configuration. (The default is the embedded PostgreSQL. We won’t actually change the defaults, but this way we’ll be shown the database name, user name, password, etc. A silent-mode alternative is sketched right after this list.)
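(Side note: ambari-server setup also has a silent mode that accepts the defaults without prompting. Something like the line below should roughly reproduce the choices above non-interactively; it points at an already-installed JDK instead of downloading one, and the JDK path is my assumption based on where the wizard installs it, so treat this as a sketch:)

ambari-server setup -s -j /usr/jdk64/jdk1.8.0_112

Here's how the interactive session plays out: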


Using python  /usr/bin/python
Setup ambari-server
Checking SELinux...
WARNING: Could not run /usr/sbin/sestatus: OK
Customize user account for ambari-server daemon [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
==============================================================================
Enter choice (1): 1
To download the Oracle JDK and the Java Cryptography Extension (JCE) Policy Files you must accept the license terms found at http://www.oracle.com/technetwork/java/javase/terms/license/index.html and not accepting will cancel the Ambari Server setup and you must install the JDK and JCE files manually.
Do you accept the Oracle Binary Code License Agreement [y/n] (y)? y

Downloading JDK from http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-8u112-linux-x64.tar.gz to /var/lib/ambari-server/resources/jdk-8u112-linux-x64.tar.gz
jdk-8u112-linux-x64.tar.gz... 100% (174.7 MB of 174.7 MB)
Successfully downloaded JDK distribution to /var/lib/ambari-server/resources/jdk-8u112-linux-x64.tar.gz
Installing JDK to /usr/jdk64/
Successfully installed JDK to /usr/jdk64/
Downloading JCE Policy archive from http://public-repo-1.hortonworks.com/ARTIFACTS/jce_policy-8.zip to /var/lib/ambari-server/resources/jce_policy-8.zip

Successfully downloaded JCE Policy archive to /var/lib/ambari-server/resources/jce_policy-8.zip
Installing JCE policy...
Check JDK version for Ambari Server...
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement...
GPL License for LZO: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Enable Ambari Server to download and install GPL Licensed LZO packages [y/n] (n)? n
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB
==============================================================================
Enter choice (1):
Database admin user (postgres):
Database name (ambari):
Postgres schema (ambari):
Username (ambari):
Enter Database Password (bigdata):
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Configuring local database...
Configuring PostgreSQL...
Restarting PostgreSQL
Creating schema and user...
done.
Creating tables...
done.
Extracting system views...
....ambari-admin-2.7.3.0.139.jar

Ambari repo file contains latest json url http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json, updating stacks repoinfos with it...
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

The Ambari server will use a PostgreSQL database for its repository. Note down the default database name and user credentials configured for this database.
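In case you forget them later, setup records these choices in ambari.properties; a quick way to pull the database-related entries:

grep '^server.jdbc' /etc/ambari-server/conf/ambari.properties
# expect entries like server.jdbc.database_name=ambari and server.jdbc.user_name=ambari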


Ambari 3/3: Start Ambari Server

ambari-server start 
Using python  /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start...................
Server started listening on 8080

DB configs consistency check: no errors and warnings were found.
Ambari Server 'start' completed successfully.
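You can check on the server at any time:

ambari-server status
# or follow the log while it starts up:
tail -f /var/log/ambari-server/ambari-server.log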

Register a cluster



Log in to Ambari using a browser that runs with root access. That's needed because we'll upload the SSH key from root's home directory during this registration.
 
oguz@dikanka:~$ sudo chromium-browser --no-sandbox


Navigate to address http://localhost:8080

user: admin
password: admin

Click "Launch Install Wizard" to register our first and only cluster.


 
Name your cluster and click Next.

 
HDP 3.1 is selected by default. “Use Public Repository” is also selected. Click Next.


Add your computer name to the list of target hosts.
Click “Choose File”. Locate the file “id_rsa” under the path “/root/.ssh” (to access the hidden “.ssh” folder, you may need to right-click and choose “Show hidden files”).
Click “Register and Confirm”.
Ignore the warning about the computer name not being a valid FQDN.



You may need to do some problem solving here. Remember that your OS must be listed in the support matrix, the client must be reachable through passwordless SSH, and there should be no connectivity issues such as a firewall block or busy ports.
If everything goes fine, you should see the screen below.


Time to install some services.
The following services will be installed. Deselect the others and click Next.

  • YARN + MapReduce2
  • HBase
  • ZooKeeper
  • Ambari Metrics
  • SmartSense


Ignore the warnings about limited functionality due to skipping Apache Ranger and Apache Atlas.

All masters will run on our single host anyway. Click Next.



Only one slave is available. Click Next.



All passwords are set to “admin123”.



The selected services require no database settings, so that section is disabled. We left the directories at their default values; see them listed below.


HDFS
  • DataNode directories : /hadoop/hdfs/data
  • NameNode directories : /hadoop/hdfs/namenode
  • SecondaryNameNode Checkpoint directories : /hadoop/hdfs/namesecondary
  • NFSGateway dump directory : /tmp/.hdfs-nfs
  • NameNode Backup directory : /tmp/upgrades
  • JournalNode Edits directory : /hadoop/hdfs/journalnode
  • NameNode Checkpoint Edits directory : ${dfs.namenode.checkpoint.dir}
  • Hadoop Log Dir Prefix : /var/log/hadoop
  • Hadoop PID Dir Prefix : /var/run/hadoop



YARN
  • YARN NodeManager Local directories : /hadoop/yarn/local
  • YARN Timeline Service Entity Group FS Store Active directory : /ats/active/
  • YARN Node Labels FS Store Root directory : /system/yarn/node-labels
  • YARN NodeManager Recovery directory : {{yarn_log_dir_prefix}}/nodemanager/recovery-state
  • YARN Timeline Service Entity Group FS Store Done directory : /ats/done/
  • YARN NodeManager Log directories : /hadoop/yarn/log
  • YARN NodeManager Remote App Log directory : /app-logs
  • YARN Log Dir Prefix : /var/log/hadoop-yarn
  • YARN PID Dir Prefix : /var/run/hadoop-yarn


MAPREDUCE2
  • Mapreduce JobHistory Done directory : /mr-history/done
  • Mapreduce JobHistory Intermediate Done directory : /mr-history/tmp
  • YARN App Mapreduce AM Staging directory : /user
  • Mapreduce Log Dir Prefix : /var/log/hadoop-mapreduce
  • Mapreduce PID Dir Prefix : /var/run/hadoop-mapreduce


HBASE
  • HBase Java IO Tmpdir : /tmp
  • HBase Bulkload Staging directory : /apps/hbase/staging
  • HBase Local directory : ${hbase.tmp.dir}/local
  • HBase root directory : /apps/hbase/data
  • HBase tmp directory : /tmp/hbase-${user.name}
  • ZooKeeper Znode Parent : /hbase-unsecure
  • HBase Log Dir Prefix : /var/log/hbase
  • HBase PID Dir : /var/run/hbase


ZOOKEEPER
  • ZooKeeper directory : /hadoop/zookeeper
  • ZooKeeper Log Dir : /var/log/zookeeper
  • ZooKeeper PID Dir : /var/run/zookeeper


AMBARI METRICS
  • Aggregator checkpoint directory : /var/lib/ambari-metrics-collector/checkpoint
  • Metrics Grafana data dir : /var/lib/ambari-metrics-grafana
  • HBase Local directory : ${hbase.tmp.dir}/local
  • HBase root directory : file:///var/lib/ambari-metrics-collector/hbase
  • HBase tmp directory : /var/lib/ambari-metrics-collector/hbase-tmp
  • HBase ZooKeeper Property DataDir : ${hbase.tmp.dir}/zookeeper
  • Phoenix Spool directory : ${hbase.tmp.dir}/phoenix-spool
  • Phoenix Spool directory : /tmp
  • Metrics Collector log dir : /var/log/ambari-metrics-collector
  • Metrics Monitor log dir : /var/log/ambari-metrics-monitor
  • Metrics Grafana log dir : /var/log/ambari-metrics-grafana
  • HBase Log Dir Prefix : /var/log/ambari-metrics-collector
  • Metrics Collector pid dir : /var/run/ambari-metrics-collector
  • Metrics Monitor pid dir : /var/run/ambari-metrics-monitor
  • Metrics Grafana pid dir : /var/run/ambari-metrics-grafana
  • HBase PID Dir : /var/run/ambari-metrics-collector/



Accounts are left at their default values. See them listed below.




  • Smoke User : ambari-qa
  • Mapreduce User : mapred
  • Hadoop Group : hadoop
  • Oozie User : oozie
  • Ambari Metrics User : ams
  • Yarn ATS User : yarn-ats
  • HBase User : hbase
  • Yarn User : yarn
  • HDFS User : hdfs
  • ZooKeeper User : zookeeper
  • Proxy User Group : users




 

All settings in the Advanced configuration tab are left at their defaults, but the SSL client password setting under “HDFS / Advanced” might raise an error.





It’s a password setting issue. Type “admin123” in both password fields to fix the issue.





Click Deploy and pray to your preferred God.



Port issue for YARN

Yeah, it failed.
Luckily, all the services were installed without issues; the failure happened while starting them. YARN fails with the following error :


java.net.BindException: Problem binding to [dikanka:53] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
This happens because port 53 is not available. The solution is simple :

Under YARN / Configs / Advanced, locate the setting named “RegistryDNS Bind Port” and change it from 53 to 553.
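If you're curious which process is squatting on port 53: on Ubuntu 18.04 it's typically systemd-resolved's local stub listener. You can check with:

sudo ss -lnp '( sport = :53 )'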



Start Services

Since the services failed to start, we have to start them one by one. To start a service, we'll choose “Restart All” under the “Actions” menu of that service.




Let’s start the services in the following order (a one-shot alternative via the REST API follows this list) :
  • ZooKeeper
  • HDFS
  • HBase
  • YARN
  • MapReduce2
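As promised, the one-shot alternative: Ambari's REST API can start every service in a single request and works out the dependency order itself. A sketch, assuming you named your cluster “mycluster” and kept the admin/admin credentials:

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/mycluster/services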

Check Files View

If all services are up and running, it’s time to check what we have in hand. Click the “Views” menu and choose “Files View”.



This might result in the following error message :

Unauthorized connection for super-user: root from IP 127.0.0.1



In this case, apply the following steps to solve this issue :

  • In Ambari Web, browse to Services > HDFS > Configs.
  • Under the Advanced tab, navigate to the Custom core-site section.
  • Change the values of the following parameters to *

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

After these values are altered, you will need to restart all affected services. Then retry opening Files View and confirm that it looks like the screenshot below :
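If you prefer a terminal over Files View, a quick equivalent smoke test is listing the HDFS root as the hdfs user:

sudo -u hdfs hdfs dfs -ls /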

 


And this marks the end of the scope for this post. Soon we'll continue with other services like Pig, Tez and Hive.
Hope this was helpful for some of you. 

 

2 comments:

  1. Hello, really good article. In the third step, where I have to confirm the host, I got this: E: Failed to fetch https://archive.cloudera.com/p/HDP/ubuntu18/3.x/updates/3.1.5.0/dists/HDP/InRelease 401 Authentication required.
    How can I find the solution?

  2. This can't be done anymore. Re-install Ubuntu on your computer and next time choose 3.1.4 in Ambari; otherwise you have to pay for Cloudera credentials.
