
Uncategorized

1. You need php5, gcc, gcc-c++, autoconf, automake, and the Apache web server installed.

2. Install the php-pecl*, php-devel, and php-pear packages.

3. #> yum install php-pecl*     <enter>

4. #> yum install php-devel     <enter>

5. #> yum install php-pear      <enter>

6. #> pecl install Xdebug        <enter>

7. Configure PHP to load the extension:
[Xdebug]
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp
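
Installing through pecl does not enable the extension on its own; the same php.ini section (or a file such as /etc/php.d/xdebug.ini) also needs a zend_extension line pointing at the compiled xdebug.so. A minimal sketch, assuming the default CentOS module directory (pecl prints the actual path at the end of its build, so use that one):

; the path below is an assumption -- use the location pecl reported
zend_extension = /usr/lib/php/modules/xdebug.so

After the restart in step 8, php -i | grep -i xdebug (or a phpinfo() page) should list an Xdebug section if the extension loaded.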


8. Restart the Apache server.
Uncategorized

(Reposted from another source)

// --------------------------------------------------------------------------- 
OS : CentOS release 5.2 

* Package groups selected during OS installation:
- Editors 
- Engineering and Scientific 
- Development Tools 
- Administration Tools 
- System Tools 


MySQL installation
mysql-5.0.24 

Errors during configure
Error message: checking for termcap functions library... configure: error: No curses/termcap library found
Fix: yum -y install ncurses-devel

./configure --prefix=/usr/local/mysql --localstatedir=/usr/local/mysql/data --with-charset=euckr --enable-thread-safe-client 
make && make install 

# cd /usr/local/mysql/bin 
# ./mysql_install_db 
# useradd -M mysql 
# chown -R mysql:mysql /usr/local/mysql/data 
# /usr/local/mysql/bin/mysqld_safe & 
# cd /usr/local/mysql/bin 
# ./mysqladmin -u root password <new-password>
# ./mysql -u root -p mysql 
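
To have MySQL come up at boot, the build ships a SysV init script that can be registered with chkconfig. A sketch, assuming the install prefix used above (the same script also sits in the source tree as support-files/mysql.server):

# cp /usr/local/mysql/share/mysql/mysql.server /etc/init.d/mysqld
# chmod 755 /etc/init.d/mysqld
# chkconfig --add mysqld
# chkconfig mysqld on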

Apache installation
httpd-2.0.63 
Raising the maximum number of clients
# vi server/mpm/prefork/prefork.c
Change #define DEFAULT_SERVER_LIMIT 256
to     #define DEFAULT_SERVER_LIMIT 1280
Save and exit.
# vi server/mpm/worker/worker.c
Change #define DEFAULT_SERVER_LIMIT 16
to     #define DEFAULT_SERVER_LIMIT 20
Save and exit.
./configure --prefix=/usr/local/apache2 --enable-so --enable-modules=so --with-mpm=worker --enable-rewrite 
make && make install 
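
After make install, a quick sanity check is to start the server and confirm the worker MPM was actually compiled in (a sketch using the binaries installed under the prefix above):

# /usr/local/apache2/bin/apachectl start
# /usr/local/apache2/bin/httpd -V | grep -i mpm     (should report the worker MPM)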


PHP installation
php-5.2.6 
Errors during configure
Error message: configure: error: xml2-config not found. Please check your libxml2 installation
Fix: yum install libxml2 libxml2-devel -y

Error message: configure: error: Please reinstall the BZip2 distribution
Fix: yum -y install bzip2-devel

Error message: configure: error: libjpeg.(a|so) not found.
Fix: yum -y install libjpeg-devel

Error message: configure: error: libpng.(a|so) not found.
Fix: yum -y install libpng-devel

Error message: configure: error: freetype.h not found.
Fix: yum -y install freetype-devel

Error message: configure: error: utf8_mime2text() has new signature, but U8T_CANONICAL is missing. This should not happen. Check config.log for additional information.
Fix: yum -y install libc-client-devel

Error message: configure: error: Kerberos libraries not found.
Fix: yum -y install krb5-devel

Error message: configure: error: Cannot find OpenSSL's <evp.h>
Fix: yum -y install openssl-devel

./configure --enable-bcmath --enable-ftp --enable-filepro --enable-libxml2 --enable-memory-limit --enable-sockets --enable-spl --enable-sysvsem --enable-sysvshm --enable-track-vars --enable-versioning --enable-wddx --disable-cli --disable-debug --disable-dmalloc --disable-posix --disable-rpath --with-apxs2=/usr/local/apache2/bin/apxs --with-bz2 --with-freetype-dir --with-gd --with-gettext --with-imap=shared --with-jpeg-dir --with-kerberos --with-libxml-dir --with-mod-charset --with-mysql=/usr/local/mysql --with-png-dir --with-ttf --with-zlib --with-mysqli=/usr/local/mysql/bin/mysql_config --with-imap-ssl=/usr/lib --with-openssl 
When adding SSL: the --with-openssl and --with-imap-ssl options in the line above are what enable it.
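
The notes stop at configure; the usual follow-up is the build itself plus putting a php.ini in place. A sketch, assuming the defaults (with no --with-config-file-path in the line above, PHP 5.2 looks for php.ini under /usr/local/lib, and make install lets apxs add the LoadModule php5_module line to httpd.conf):

make && make install
cp php.ini-dist /usr/local/lib/php.ini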


Apache httpd.conf configuration

NameVirtualHost *:80 
ServerName *:80 

KeepAlive On 
KeepAliveTimeout 2 
(keeps an idle connection open for up to 2 seconds before closing it)

Timeout 30 
(a short timeout helps mitigate DoS attacks)

ServerLimit 20 --> maximum number of server processes
StartServers 20 --> number of server processes created when Apache starts
MaxClients 500 --> ThreadsPerChild x ServerLimit (25 x 20 = 500)
ThreadsPerChild 25 --> number of threads each server process runs (up to the ThreadLimit default of 64)
MinSpareThreads 25 --> minimum number of idle threads kept available; usually set to match ThreadsPerChild
MaxSpareThreads 500 --> maximum number of idle threads across the whole server


<VirtualHost *:80>
    AddType application/x-httpd-php .php .html .inc .htm
    # file extensions to be handled as PHP
    ServerAdmin <admin-email>
    # server administrator's e-mail address
    DocumentRoot /home/test
    # directory the HTML is served from
    ServerName <domain>
    # domain name of this virtual host
#  php_admin_value auto_prepend_file /home/test/move_url/move_page.html
    # page opened first when the domain is accessed (auto_prepend_file)
#  RewriteEngine on
    # whether to use the rewrite engine
#  RewriteRule ^/([a-zA-Z0-9]+)$ /home/test/rewrite.html?rewrite=$1
    # page loaded when the rewrite rule matches
    php_admin_flag register_globals On
    # register_globals setting
</VirtualHost>

In the Apache modules directory:
#> chcon -c -v -R -u system_u -r object_r -t textrel_shlib_t libphp5.so
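
After editing httpd.conf (and the chcon step above, which only matters when SELinux is enforcing), it is worth checking the syntax before reloading so a typo does not take the server down. A sketch with the apachectl installed under the prefix chosen earlier:

# /usr/local/apache2/bin/apachectl configtest
# /usr/local/apache2/bin/apachectl graceful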

Uncategorized

The stable version (hadoop-0.17.*) was used.
Uncategorized
Mounting a Windows (NTFS) drive with ntfs-3g

   1. Install the yum-priorities package before adding the RPMforge repository to yum
     [root@localhost ~]# yum install yum-priorities -y

   2. "priority=N" add to /etc/yum/pluginconf.d/priorities.conf

     [root@localhost ~]# vi /etc/yum/pluginconf.d/priorities.conf
     [main]
      enabled = 1
      check_obsoletes = 1
      priority=2

   3. Install rpmforge 

   [root@localhost ~]# rpm -ivh http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm

   4. update yum 

   [root@localhost ~]# yum check-update

   5. install "fuse", "fuse-ntfs-3g", "dkms", "dkms-fuse"

  [root@localhost ~]# yum install fuse fuse-ntfs-3g dkms dkms-fuse -y

   6. Make a directory /windows to use as the NTFS mount point

   [root@localhost ~]# mkdir /windows

   7. Mount the NTFS filesystem on /windows with type ntfs-3g

   [root@localhost ~]# mount -t ntfs-3g /dev/sda1 /windows
   [root@localhost ~]# ls -al /windows/
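
To make the mount survive a reboot, an /etc/fstab entry along these lines is the usual approach (a sketch; /dev/sda1 is the partition used above, so adjust it to the actual NTFS partition):

/dev/sda1   /windows   ntfs-3g   defaults   0 0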

Uncategorized

1. Download data set
http://people.csail.mit.edu/irennie/20Newsgroups/

Download it to the Hadoop home directory.

2. Go to HADOOP_HOME
$> cd /home/alloe/hadoop-0.18.0
$> tar xvfz 20news-18828.tar.gz
$> bin/hadoop dfs -put 20news-18828 20newsInput
$> bin/hadoop jar apache-mahout-core-0.1-dev.jar org.apache.mahout.classifier.cbayes.CBayesDriver 20newsInput/alt.atheism 20newsOutput
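
When the job finishes, its output lands in the 20newsOutput directory on HDFS; listing it is a quick way to confirm the run completed (assuming the same hadoop-0.18.0 working directory as above):

$> bin/hadoop dfs -ls 20newsOutput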

Uncategorized


<property>
<name>hadoop.tmp.dir</name>
<value>/home/alloe/filesystem/tmp/dir/hadoop-${user.name}</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>/home/alloe/filesystem/name</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>/home/alloe/filesystem/data</value>
</property>

<property>
<name>mapred.system.dir</name>
<value>/home/alloe/filesystem/mapreduce/system</value>
</property>

<property>
<name>mapred.local.dir</name>
<value>/home/alloe/filesystem/mapreduce/local</value>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://alloe:54310</value>
</property>

<property>
<name>mapred.job.tracker</name>
<value>hdfs://alloe:54311</value>
</property>

<property>
<name>dfs.replication</name>
<value>1</value>
</property>


54310 and 54311 are the port numbers.
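
Hadoop creates these directories on first use as long as the user running it can write to the parents, but creating the local paths up front avoids permission surprises. A sketch for the layout above (mapred.system.dir lives on HDFS, so it is not created here):

$> mkdir -p /home/alloe/filesystem/{name,data,mapreduce/local}
$> mkdir -p /home/alloe/filesystem/tmp/dir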

Uncategorized
Machine Learning is a subfield of AI.

1. Neural Networks
2. Unsupervised Learning
3. Supervised Learning
4. Pattern Recognition



Uncategorized

diff -urN a.txt b.txt > c.txt

The result is written to c.txt.
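
-u produces a unified diff, -r recurses into directories, and -N treats files that exist on only one side as empty, so the same invocation works on whole directory trees. The saved output can later be applied with patch; a sketch (this rewrites a.txt in place so it matches b.txt):

patch -p0 < c.txt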


 

Uncategorized



 

export JAVA_HOME=/usr/local/jdk1.6.0_03
export ANT_HOME=/usr/local/apache-ant-1.7.0
export CATALINA_HOME=/usr/local/tomcat
export PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH:$CATALINA_HOME/bin
export CLASSPATH=".:$JAVA_HOME/lib/tools.jar:$CATALINA_HOME/common/lib/jsp-api.jar:$CATALINA_HOME/common/lib/servlet-api.jar"
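
These exports usually go in ~/.bash_profile (or /etc/profile for all users); after adding them, reloading the profile and checking the tools confirms the paths are right:

source ~/.bash_profile
java -version
ant -version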

Uncategorized

1. Prerequisites

Java 1.5.0 or higher.

Hadoop requires a working Java 1.5.x installation; however, Java 1.6.x is recommended for running Hadoop.

I covered Java installation at http://alloe.tistory.com/entry/How-to-Install-java


2. Adding a dedicated Hadoop system user

(as root)
#> groupadd hadoop    <enter>
#> adduser --ingroup hadoop hadoop    <enter>
#> passwd hadoop   <enter>
Enter the new password.
#> su - hadoop    <enter>
(now running as the unprivileged hadoop user)

This will add the user hadoop and the group hadoop to your local machine.


3. Configuring SSH
Hadoop requires SSH access to manage its nodes.

I wrote "SSH auto login" at http://alloe.tistory.com/entry/ssh-auto-login
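
For a single-node setup this comes down to passwordless SSH from the hadoop user to localhost; a minimal sketch of what the linked post covers:

$> ssh-keygen -t rsa -P ""
$> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$> chmod 600 ~/.ssh/authorized_keys
$> ssh localhost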


4. Hadoop installation
Download Hadoop from the Apache download mirrors and extract the package to a location of your choice; I picked /home/hadoop. Make sure to change the owner of all the files to the hadoop user and group.

$> cd /home/hadoop
$> tar xfzv hadoop-0.17.1.tar.gz
$> mv hadoop-0.17.1 hadoop
$> chown -R hadoop:hadoop hadoop


5. Configuration
The goal is a single-node setup of Hadoop.

- Modify conf/hadoop-env.sh
- Go to the last line and append:

export HADOOP_HOME=/home/hadoop/hadoop
export JAVA_HOME=/usr/local/java
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
<save & exit>


6. Single-node hadoop-site.xml settings

<description> is optional. The property blocks below go inside the <configuration> element of conf/hadoop-site.xml.

<property>

  <name>hadoop.tmp.dir</name>
  <value>/your/path/to/hadoop/tmp/dir/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
 
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system. A URI whose
  scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
 
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
 
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>


7. Formatting the name node
To format the filesystem, run the command
$> <HADOOP_INSTALL>/hadoop/bin/hadoop namenode -format    <enter>

If it succeeds, you should see output like the following:

 07/09/21 12:00:25 INFO dfs.NameNode: STARTUP_MSG:
/***********************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.0.1
STARTUP_MSG:   args = [-format]
***********************************************************/
07/09/21 12:00:25 INFO dfs.Storage: Storage directory [...] has been successfully formatted.
07/09/21 12:00:25 INFO dfs.NameNode: SHUTDOWN_MSG:
/***********************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.0.1
***********************************************************/


8. Starting the single-node cluster
Run the command:

$> <HADOOP_INSTALL>/hadoop/bin/start-all.sh

If it succeeds, you should see output like the following:

starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-<hostname>.out
localhost: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-<hostname>.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-<hostname>.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-<hostname>.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-<hostname>.out

Run jps to check the running Java processes:

19811 TaskTracker
19674 SecondaryNameNode
19735 JobTracker
19497 NameNode
20879 TaskTracker$Child
21810 Jps
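
To shut everything down again there is a matching stop script, and while the daemons are running the NameNode and JobTracker expose web UIs on their default ports for the 0.17/0.18 line (50070 and 50030):

$> <HADOOP_INSTALL>/hadoop/bin/stop-all.sh

NameNode UI: http://localhost:50070/
JobTracker UI: http://localhost:50030/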
