Hadoop single-node installation

1. Prerequisites

Java 1.5.0 or higher

Hadoop requires a working Java 1.5.x installation. However, using Java 1.6.x is recommended for running Hadoop.

I wrote "Java installation" at http://alloe.tistory.com/entry/How-to-Install-java


2. Adding a dedicated Hadoop system user

(now root privilege)
#> groupadd hadoop    <enter>
#> adduser --ingroup hadoop hadoop    <enter>
#> passwd hadoop   <enter>
Enter the new password when prompted...
#> su - hadoop    <enter>
(user level privilege)

This will add the user hadoop and the group hadoop to your local machine.
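To confirm the account and group were created:

$> id hadoop    <enter>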


3. Configuring SSH
Hadoop requires SSH access to manage its nodes.

I wrote "SSH auto login" at http://alloe.tistory.com/entry/ssh-auto-login


4. Hadoop installation
You have to download Hadoop from the Apache download mirrors and extract the contents of the Hadoop package to a location of your choice. I picked /home/hadoop. Make sure to change the owner of all the files to the hadoop user and group.
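For example, the 0.17.1 tarball used below can be fetched from the Apache archive (this mirror path is an assumption; any Apache mirror works):

$> cd /home/hadoop
$> wget http://archive.apache.org/dist/hadoop/core/hadoop-0.17.1/hadoop-0.17.1.tar.gz    <enter>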

$> cd /home/hadoop
$> tar xfzv hadoop-0.17.1.tar.gz
$> mv hadoop-0.17.1 hadoop
$> chown -R hadoop:hadoop hadoop


5. configuration
The goal is a single-node setup of Hadoop.

- Modify conf/hadoop-env.sh.
- Go to the last line and append the following:

export HADOOP_HOME=/home/hadoop/hadoop
export JAVA_HOME=/usr/local/java
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
<save & exit>
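If you are unsure whether /usr/local/java is the correct JAVA_HOME on your machine (the path above is just an example), a quick check is:

$> /usr/local/java/bin/java -version    <enter>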


6. Single-node hadoop-site.xml settings

Add the following properties inside the <configuration> element of conf/hadoop-site.xml. The <description> elements are optional.

<property>

  <name>hadoop.tmp.dir</name>
  <value>/your/path/to/hadoop/tmp/dir/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
 
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
 
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
 
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>


7. Formatting the name node
To format the HDFS filesystem, run the command
$> /home/hadoop/hadoop/bin/hadoop namenode -format    <enter>

If the format succeeds, you will see a message like the following:

 07/09/21 12:00:25 INFO dfs.NameNode: STARTUP_MSG:
/***********************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.0.1
STARTUP_MSG:   args = [-format]
***********************************************************/
07/09/21 12:00:25 INFO dfs.Storage: Storage directory [...] has been successfully formatted.
07/09/21 12:00:25 INFO dfs.NameNode: SHUTDOWN_MSG:
/***********************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.0.1
***********************************************************/


8. Starting the single-node cluster
Run the following command:

$> /home/hadoop/hadoop/bin/start-all.sh    <enter>

If it succeeds, you will see output like the following:

starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-<hostname>.out
localhost: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-<hostname>.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-<hostname>.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-<hostname>.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-<hostname>.out

Run jps to check that all Hadoop daemons are running:

19811 TaskTracker
19674 SecondaryNameNode
19735 JobTracker
19497 NameNode
20879 TaskTracker$Child
21810 Jps
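As a quick smoke test against HDFS, and to stop the daemons when you are done (paths assume the /home/hadoop/hadoop install from above):

$> /home/hadoop/hadoop/bin/hadoop dfs -ls /    <enter>
$> /home/hadoop/hadoop/bin/stop-all.sh    <enter>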

Local -> Remote (passwordless SSH login)

1. Key generating

- In the local server's home directory: $> ssh-keygen -t dsa    <enter>

The following messages are shown:

Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

<all enter>

The key fingerprint is:
34:23:12:e5:d3:53:...  user_name@hostname


2. Fetch the key created on the local server from the remote server.
In the remote server's home directory: $> scp user_name@hostname:~/.ssh/id_dsa.pub .ssh/authorized_keys    <enter>
Enter user_name@hostname's password when prompted.
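Note that this scp overwrites any existing authorized_keys on the remote server. If one already exists there, append instead, for example:

$> scp user_name@hostname:~/.ssh/id_dsa.pub /tmp/id_dsa.pub    <enter>
$> cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys    <enter>
$> rm /tmp/id_dsa.pub    <enter>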


3. SSH login from the local server to the remote server
ssh user_name@hostname    <enter>   (here user_name@hostname refers to the remote server)
If it logs you in without asking for a password, the setup succeeded.


4. If the login still asks for a password, check the directory permissions on the remote server (commands shown below):
1) If the home directory is named "hadoop", its permission should be 755 (chmod 755 hadoop).
2) The .ssh directory permission should be 700.
3) The authorized_keys file permission should be 644.
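The corresponding commands, run on the remote server as that user, would be roughly:

$> chmod 755 ~    <enter>
$> chmod 700 ~/.ssh    <enter>
$> chmod 644 ~/.ssh/authorized_keys    <enter>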

If it still fails, try changing the sshd configuration:

vi /etc/ssh/sshd_config  <enter>

 - change list -

1> StrictModes no
2> RSAAuthentication yes
3> PubkeyAuthentication yes
4> AuthorizedKeysFile    .ssh/authorized_keys

<save & exit>

/etc/init.d/sshd restart     <enter>


Then try the SSH login again.


Apache httpd 2.2.11 installation from source
First
Download the httpd source:
http://apache.tt.co.kr/httpd/httpd-2.2.11.tar.gz 
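For example, with wget:
- wget http://apache.tt.co.kr/httpd/httpd-2.2.11.tar.gz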

Second
Move and uncompress the archive.
- mv httpd-2.2.11.tar.gz /usr/local/
- cd /usr/local
- tar xvfz httpd-2.2.11.tar.gz
- cd httpd-2.2.11

Third
Configure the source tree with ./configure, then build and install with make.
- ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared --enable-speling=shared
- make && make install

Fourth
Set up the configuration, then start the server.
- vim /usr/local/apache2/conf/httpd.conf
  (edit the ServerName directive)
- /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf
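The configuration syntax can also be checked, and the running server verified (if curl is installed), with:
- /usr/local/apache2/bin/apachectl -t
- curl http://localhost/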