How to Register a Linux Client with an RH Satellite/Proxy Server

Method-1

  1. Login to the client and run the below command:

rhnreg_ks --serverUrl=https://<proxy servername> --activationkey=1-rhel5-base

rhnreg_ks --serverUrl=https://<proxy servername> --activationkey=1-rhel6-base

 

 

The command takes a short while to run, after which the system is registered with the Satellite server or the proxy server. Log in to the RH Satellite server to check whether the system is registered. When we use the proxy server as the serverUrl, the machine is registered through the proxy server to the Satellite server. The Proxy acts as a go-between for client systems and Red Hat Network (or an RHN Satellite server).

Note: The "activation key" is a key defined in the main Satellite server's Web UI based on Red Hat versions, software channels, etc.

 

Method-2

  1. Login to the client and download the certificate from the Satellite Server with the below command:

cd /usr/share/rhn

wget http://satellite-servername/pub/RHN-ORG-TRUSTED-SSL-CERT -O RHN-ORG-TRUSTED-SSL-CERT

 

  2. Update the file /etc/sysconfig/rhn/up2date with these lines:

serverURL=https://satellite-servername/XMLRPC

or

serverURL=https://proxy-servername/XMLRPC

sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT

 

  3. Run the rhn_register command and follow the wizard.

audit_log_user_command(): Connection refused when using sudo on CentOS 5

This is due to the stock CentOS 5.3 kernel not being compiled with the proper support for auditing. RedHat offers more advanced auditing support in its version of sudo as a custom patch, but the patch is applied whether or not the kernel has proper support for auditing. If you go to the CentOS RPM repository, you can pick up the source package for sudo. Unpack the SRPM with:

sudo rpm -i sudo-[blah blah].src.rpm

cd over to /usr/src/redhat/SPECS, where you'll find the compilation spec sudo.spec. Either follow the RedHat Bugzilla fix by changing the lines like so:

- if( err <= 0 && !(errno == EPERM && getuid() != 0) )
+ if( err <= 0 && !((errno == EPERM && getuid() > 0) || errno == ECONNREFUSED) )

Or comment out all references to patch5, the audit patch added to sudo by RedHat:

# Patch5: sudo-1.6.9p13-audit.patch
#...
# %patch5 -p1 -b .audit

You can use either method to disable the audit features. YMMV.

Once you're done with that, build the binary RPM:
sudo rpmbuild -bb sudo.spec

And install:
rpm --force -i /usr/src/redhat/RPMS/[arch]/sudo-[blah blah].rpm

Note that this will overwrite your system sudo with your custom compiled version, so keep a root shell open or enable your root user until you’re sure that your new sudo works. Also, keep in mind that system updates to sudo may overwrite your existing installation. YMMV. This is but one solution of many.

SQLite to MySQL Conversion

1) Use the .schema output to construct valid MySQL CREATE TABLE statements. It may be hard to infer the layout of your SQLite database because SQLite is typeless.

You can have a schema like this —

CREATE TABLE yourtable (a,b,c,d,e);

— so you may need to analyze the contents of each field and guess its type from the entries there. It is simpler if your CREATE TABLE already has types assigned.
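Since SQLite is typeless, one way to pick MySQL column types is to scan the stored values. Here is a minimal Python sketch of that idea using the stdlib sqlite3 module; the function names and the INT/DOUBLE/TEXT mapping are my own illustration, not part of the original forum post:

```python
import sqlite3

def guess_mysql_type(values):
    """Guess a MySQL column type from the values stored in a typeless SQLite column."""
    values = [v for v in values if v is not None]
    if not values:
        return "TEXT"
    if all(isinstance(v, int) for v in values):
        return "INT"
    if all(isinstance(v, (int, float)) for v in values):
        return "DOUBLE"
    return "TEXT"

def guess_table_types(db_path, table):
    """Map each column of `table` to a guessed MySQL type."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    rows = cur.fetchall()
    conn.close()
    return {col: guess_mysql_type([row[i] for row in rows])
            for i, col in enumerate(cols)}
```

For a table created as CREATE TABLE yourtable (a,b,c,d,e), this would suggest INT for a column holding only integers, DOUBLE for mixed numerics, and TEXT for everything else. Treat the output as a starting point and adjust by hand.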

Put the edited CREATE TABLE statements into a separate header file.

sqlite> .schema

— and copy&paste the table declaration into an editor of your preference.

2) The next step is to create the INSERT statements:

$ sqlite yourdatabase.db
sqlite> .mode insert yourtable
sqlite> .output yourdatabase2mysql.sql
sqlite> select * from yourtable;

OR:

$ echo ".dump" | sqlite yourdatabase.db > yourdatabase2mysql.sql

3) Load a) your header file (see #1) and b) yourdatabase2mysql.sql into MySQL:

$ mysql -u user -p yourdatabase < your_header.sql
$ mysql -u user -p yourdatabase < yourdatabase2mysql.sql
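If you prefer to script step 2 instead of using the sqlite shell, the same idea can be sketched in Python with the stdlib sqlite3 module (the function name is hypothetical; the quoting handles the common cases, not every MySQL escaping corner):

```python
import sqlite3

def dump_as_mysql_inserts(db_path, table):
    """Emit one MySQL-compatible INSERT statement per row of a SQLite table."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = ", ".join(d[0] for d in cur.description)
    stmts = []
    for row in cur:
        vals = []
        for v in row:
            if v is None:
                vals.append("NULL")
            elif isinstance(v, (int, float)):
                vals.append(str(v))
            else:
                # Escape backslashes and single quotes for a MySQL string literal.
                s = str(v).replace("\\", "\\\\").replace("'", "''")
                vals.append(f"'{s}'")
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({', '.join(vals)});")
    conn.close()
    return stmts
```

Write the returned statements to a file and load it with the mysql client as in step 3.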

Hope this helps!

quoted from mysql forum

HOWTO set up a MySQL Cluster for two servers

Introduction

This HOWTO was designed for a classic setup of two servers behind a load balancer. The aim is to have true redundancy: either server can be unplugged and the site will remain up.

Notes:

You MUST have a third server as a management node, but this can be shut down after the cluster starts. Also note that I do not recommend shutting down the management server (see the extra notes at the bottom of this document for more information). You cannot run a MySQL Cluster with just two servers and have true redundancy.

Although it is possible to set the cluster up on two physical servers, you WILL NOT GET the ability to "kill" one server and have the cluster continue as normal. For this you need a third server running the management node.

I am going to talk about three servers:

mysql1.domain.com 		192.168.0.1
mysql2.domain.com 		192.168.0.2
mysql3.domain.com 		192.168.0.3

Servers 1 and 2 will be the two that end up “clustered”. This would be perfect for two servers behind a loadbalancer or using round robin DNS and is a good replacement for replication. Server 3 needs to have only minor changes made to it and does NOT require a MySQL install. It can be a low-end machine and can be carrying out other tasks.

STAGE 1: Install MySQL on the first two servers:

Complete the following steps on both mysql1 and mysql2:

cd /usr/local/
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
groupadd mysql
useradd -g mysql mysql
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
ln -s mysql-max-4.1.9-pc-linux-gnu-i686 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root  .
chown -R mysql data
chgrp -R mysql .
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server

Do not start mysql yet.

STAGE 2: Install and configure the management server

You need two files from the bin/ directory of the MySQL distribution: ndb_mgm and ndb_mgmd. Download the whole mysql-max tarball and extract them from the bin/ directory:

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz/from/http://www.signal42.com/mirrors/mysql/
tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
cd mysql-max-4.1.9-pc-linux-gnu-i686
mv bin/ndb_mgm .
mv bin/ndb_mgmd .
chmod +x ndb_mg*
mv ndb_mg* /usr/bin/
cd
rm -rf /usr/src/mysql-mgm

You now need to set up the configuration file for the management node:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi [or emacs or any other editor] config.ini

Now, insert the following (changing the bits as indicated):

[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3		# the IP of THIS SERVER
# Storage Engines
[NDBD]
HostName=192.168.0.1		# the IP of the FIRST SERVER
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2		# the IP of the SECOND SERVER
DataDir=/var/lib/mysql-cluster
# 2 MySQL Clients
# I personally leave this blank to allow rapid changes of the mysql clients;
# you can enter the hostnames of the above two servers here, but I suggest you don't.
[MYSQLD]
[MYSQLD]

Now, start the management server:

ndb_mgmd

This is the MySQL management server, not the management console. You should therefore not expect any output (we will start the console later).

STAGE 3: Configure the storage/SQL servers and start mysql

On each of the two storage/SQL servers (192.168.0.1 and 192.168.0.2) enter the following (changing the bits as appropriate):

vi /etc/my.cnf

Press i to enter insert mode and add the following on both servers (changing the IP address to the IP of the management server that you set up in stage 2):

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3	# the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3	# the IP of the MANAGEMENT (THIRD) SERVER

Now, we make the data directory and start the storage engine:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
/usr/local/mysql/bin/ndbd --initial
/etc/rc.d/init.d/mysql.server start

If you have done one server now go back to the start of stage 3 and repeat exactly the same procedure on the second server.

Note: you should ONLY use --initial if you are either starting from scratch or have changed the config.ini file on the management server.

STAGE 4: Check it's working

You can now return to the management server (mysql3) and enter the management console:

/usr/local/mysql/bin/ndb_mgm

Enter the command SHOW to see what is going on. A sample output looks like this:

[root@mysql3 mysql-cluster]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.3  (Version: 4.1.9)

[mysqld(API)]   2 node(s)
id=4   (Version: 4.1.9)
id=5   (Version: 4.1.9)

ndb_mgm>

If you see

not connected, accepting connect from 192.168.0.[1/2/3]

in the first or last two lines, then you have a problem. Please email me with as much detail as you can give and I will try to find out where you have gone wrong and change this HOWTO to fix it.

If everything is OK up to here, it is time to test MySQL. On either server mysql1 or mysql2 enter the following commands (note that we have no root password yet):

mysql
use test;
CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
INSERT INTO ctest () VALUES (1);
SELECT * FROM ctest;

You should see 1 row returned (with the value 1).

If this works, now go to the other server and run the same SELECT and see what you get. Insert from that host and go back to host 1 and see if it works. If it works then congratulations.

The final test is to kill one server to see what happens. If you have physical access to the machine, simply unplug its network cable and see if the other server keeps on going fine (try the SELECT query). If you don't have physical access, do the following:

ps aux | grep ndbd

You get an output like this:

root      5578  0.0  0.3  6220 1964 ?        S    03:14   0:00 ndbd
root      5579  0.0 20.4 492072 102828 ?     R    03:14   0:04 ndbd
root     23532  0.0  0.1  3680  684 pts/1    S    07:59   0:00 grep ndbd

In this case ignore the command “grep ndbd” (the last line) but kill the first two processes by issuing the command kill -9 pid pid:

kill -9 5578 5579

Then try the SELECT on the other server. While you are at it, run a SHOW command on the management node to see that the server has died. To restart it, just issue

ndbd

Note: no --initial!

Further notes about setup

I strongly recommend that you read all of this (and bookmark this page). It will almost certainly save you a lot of searching.

The Management Server

I strongly recommend that you do not stop the management server once it has started. This is for several reasons:

  • The server takes hardly any server resources
  • If a cluster falls over, you want to be able to just ssh in and type ndbd to start it, without having to start messing around with another server
  • If you want to take backups, you need the management server up
  • The cluster log is sent to the management server, so it is an important tool for checking what is going on in the cluster and what has happened since you last looked
  • All commands from the ndb_mgm client are sent to the management server, so there are no management commands without it
  • The management server is required in case of cluster reconfiguration (a crashed server or a network split). If it is not running, a "split-brain" scenario will occur. The management server's arbitration role is required in this type of setup to provide better fault tolerance.

However you are welcome to stop the server if you prefer.

Starting and stopping ndbd automatically on boot

To achieve this, do the following on both mysql1 and mysql2:

echo "ndbd" > /etc/rc.d/init.d/ndbd
chmod +x /etc/rc.d/init.d/ndbd
chkconfig --add ndbd

Note that this is a really quick script. You really ought to write one that at least checks whether ndbd is already running on the machine.

Use of hostnames

You will note that I have used IP addresses exclusively throughout this setup. This is because using hostnames simply increases the number of things that can go wrong. Mikael Ronström of MySQL AB kindly explains: "Hostnames certainly work with MySQL Cluster. But using hostnames introduces quite a few error sources since a proper DNS lookup system must be set-up, sometimes /etc/hosts must be edited and there might be security blocks ensuring that communication between certain machines is not possible other than on certain ports". I strongly suggest that while testing you use IP addresses if you can, then once it is all working change to hostnames.

RAM

Use the following formula to work out the amount of RAM that you need on each storage node:

(Size of database * NumberofReplicas * 1.1) / Number of storage nodes

NumberOfReplicas is set to two by default. You can change it in config.ini if you want. So, for example, to run a 4GB database over two servers with NumberOfReplicas set to two, you need 4.4GB of RAM on each storage node. For the SQL nodes and management nodes you don't need much RAM at all. To run a 4GB database over 4 servers with NumberOfReplicas set to two, you would need 2.2GB per node.

Note: A lot of people have emailed me querying the maths above! Remember that the cluster is fault tolerant, and each piece of data is stored on at least 2 nodes. (2 by default, as set by NumberOfReplicas). So you need TWICE the space you would need just for one copy, multiplied by 1.1 for overhead.
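The formula and the worked examples above can be checked with a small helper (the function and parameter names are mine, not from the HOWTO):

```python
def ram_per_storage_node(db_size_gb, storage_nodes, replicas=2, overhead=1.1):
    """(Size of database * NumberOfReplicas * 1.1) / number of storage nodes."""
    return db_size_gb * replicas * overhead / storage_nodes
```

For a 4GB database this gives roughly 4.4GB per node over two storage nodes, and roughly 2.2GB per node over four, matching the examples above.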

Adding storage nodes

If you decide to add storage nodes, bear in mind that 3 is not an optimal number. If you are going to move up from two (above), move to 4.

Adding SQL nodes

To add storage nodes, you need to add another [NDBD] section to config.ini as per the template above, edit /etc/my.cnf on the new storage node as per the example above, and then create the directory /var/lib/mysql-cluster. You then need to SHUTDOWN the cluster, start the management daemon (ndb_mgmd), start all the ndbd nodes including the new one, and then restart all the MySQL servers.

[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3	# the IP of the MANAGEMENT (THIRD) SERVER
[mysql_cluster]
ndb-connectstring=192.168.0.3	# the IP of the MANAGEMENT (THIRD) SERVER

Then you need to make sure that there is another [MYSQLD] line at the end of config.ini on the management server. Restart the cluster (see below for an important note) and restart mysql on the new API node. It should then be connected.

Important note on changing config.ini

If you ever change config.ini you must stop the whole cluster and restart it to re-read the config file. Stop the cluster with a SHUTDOWN command in the ndb_mgm client on the management server, then restart all the storage nodes.

Some useful configuration options that you will need if you have large tables:

DataMemory: defines the space available to store the actual records in the database. The entire DataMemory is allocated in memory, so it is important that the machine contains enough memory to handle the DataMemory size. Note that DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Default: 80MB

IndexMemory: controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Default: 18MB

MaxNoOfAttributes: defines the number of attributes that can be defined in the cluster. Default: 1000

MaxNoOfTables: obvious (bear in mind that each BLOB field creates another table for various reasons, so take this into account). Default: 128
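As a sketch, these parameters would sit in the [NDBD DEFAULT] section of config.ini alongside NoOfReplicas; the values below are arbitrary examples for illustration, not recommendations:

```ini
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=512M
IndexMemory=64M
MaxNoOfAttributes=5000
MaxNoOfTables=512
```

Remember that any change here requires a full cluster SHUTDOWN and restart to take effect.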

See the MySQL documentation for further information about the things you can put in the [NDBD] section of config.ini.

A note about security

MySQL Cluster is not secure: by default, anyone can connect to your management server and shut the whole thing down. I suggest the following precautions:

  • Install APF and block all ports except those you use (do NOT include any MySQL cluster ports). Add the IPs of your cluster machines to the /etc/apf/allow_hosts file.
  • Run MySQL cluster over a second network card on a second, isolated, network.

——–

Quoted from http://dev.mysql.com. Credit goes to Alex Davies; if you find a mistake in his HOWTO or have any suggestions, please contact him.

Installing Memcached On VPS

mkdir -p /root/source
cd /root/source

wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -ivh rpmforge-release-0.3.6-1.el5.rf.i386.rpm

yum -y install libevent*      # required for memcached
yum -y install memcached*     # memcached binary required for libmemcached

gem source -a http://gems.github.com

————-
The combination of libmemcached-0.25.14 and the memcached-0.13 gem is known to be a stable, working pairing.
————-

wget http://blog.evanweaver.com/files/libmemcached-0.25.14.tar.gz
tar -xzvf libmemcached-0.25.14.tar.gz
cd libmemcached-0.25.14
./configure && make && make install

cd /root/source
wget http://blog.evanweaver.com/files/memcached-0.13.gem
gem install memcached-0.13.gem

gem install memcache-client --version=1.6.3

FFMPEG Installation

A typical ffMPEG installation consists of the following software:

– Essential / MPlayer
– FLVtool2 (Requires a Ruby Core)
– LAME MP3 Encoder
– php-ffMPEG
– ffMPEG
– libOgg
– libvorbis

To start out, change into a temporary source directory and download all the sources:

cd /usr/src
wget http://www3.mplayerhq.hu/MPlayer/releases/codecs/essential-20061022.tar.bz2
wget http://www4.mplayerhq.hu/MPlayer/releases/MPlayer-1.0rc2.tar.bz2
wget http://rubyforge.org/frs/download.php/17497/flvtool2-1.0.6.tgz
wget http://easynews.dl.sourceforge.net/sourceforge/lame/lame-3.97.tar.gz
wget http://superb-west.dl.sourceforge.net/sourceforge/ffmpeg-php/ffmpeg-php-0.5.0.tbz2

*These are the latest stable versions at the time this article was written. If you are unable to download any of the above, you’ll need to visit the distributor’s site and download the latest stable version available.

Now extract everything:

bunzip2 essential-20061022.tar.bz2; tar xvf essential-20061022.tar
tar zxvf flvtool2-1.0.6.tgz
tar zxvf lame-3.97.tar.gz
bunzip2 ffmpeg-php-0.5.0.tbz2; tar xvf ffmpeg-php-0.5.0.tar
bunzip2 MPlayer-1.0rc2.tar.bz2 ; tar -xvf MPlayer-1.0rc2.tar

Create and import the Codecs directory:

mkdir /usr/local/lib/codecs/
mv essential-20061022/* /usr/local/lib/codecs/
chmod -Rf 755 /usr/local/lib/codecs/

Install Subversion and Ruby

yum install subversion
yum install ruby  (If you're on cPanel you can alternatively use /scripts/installruby)
yum install ncurses-devel

Get ffMPEG and MPlayer from SVN:

svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer

Install LAME:

cd /usr/src/lame-3.97
./configure && make && make install

Install libOgg and libVorbis:

yum install libogg.i386
yum install libvorbis.i386
yum install libvorbis-devel.i386

Install flvtool2

cd /usr/src/flvtool2-1.0.6/
ruby setup.rb config
ruby setup.rb setup
ruby setup.rb install

Install MPlayer and then the ffmpeg-devel RPM:

cd /usr/src/MPlayer-1.0rc2
./configure && make && make install
cd /usr/src/
wget ftp://rpmfind.net/linux/dag/redhat/el5/en/i386/dag/RPMS/ffmpeg-devel-0.4.9-0.9.20070530.el5.rf.i386.rpm
rpm -ivh ffmpeg-devel-0.4.9-0.9.20070530.el5.rf.i386.rpm --nodeps

Install ffMPEG:

cd /usr/src/ffmpeg/
./configure --enable-libmp3lame --enable-libvorbis --disable-mmx --enable-shared
make && make install

This is the typical configure line that I use, but you can customize this to what you need. For available configure options, type ./configure --help. Your custom configuration may require the installation of additional software on the server.

ln -s /usr/local/lib/libavformat.so.50 /usr/lib/libavformat.so.50
ln -s /usr/local/lib/libavcodec.so.51 /usr/lib/libavcodec.so.51
ln -s /usr/local/lib/libavutil.so.49 /usr/lib/libavutil.so.49
ln -s /usr/local/lib/libmp3lame.so.0 /usr/lib/libmp3lame.so.0
ln -s /usr/local/lib/libavformat.so.51 /usr/lib/libavformat.so.51

You may get an error about a library path not being found, if so, run

export LD_LIBRARY_PATH=/usr/local/lib

If this is being installed on a dedicated server, you might also get an error about the /tmp directory not being executable, which is common when installing on a dedicated server with a separate /tmp partition mounted noexec. In this case, you will need to create a tmp dir in the ffmpeg folder and use that as the tmp disk for now:

mkdir tmp
chmod 777 tmp
export TMPDIR=./tmp

Then run the configure command again, and once it has finished set the TMPDIR variable back:

export TMPDIR=/tmp

Install ffMPEG-php

cd /usr/src/ffmpeg-php-0.5.0/
phpize
./configure && make && make install
ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg
ln -s /usr/local/bin/mplayer /usr/bin/mplayer

When the installation is complete, it will give you a long path to the shared library. This needs to be added to php.ini like so:

[ffmpeg]
extension=/usr/local/lib/php/extensions/no-debug-non-zts-20020429/ffmpeg.so

or in most cases where the extension_dir variable is set, just do:

extension="ffmpeg.so"

The ‘no-debug-non-zts-xxxxxxxx’ directory will be the one provided during installation. When this is done, restart Apache and check that the module is loaded in PHP:

/etc/init.d/httpd restart
php -r 'phpinfo();' | grep ffmpeg

Look for this:

ffmpeg
ffmpeg support (ffmpeg-php) => enabled
ffmpeg-php version => 0.5.0

ffmpeg.allow_persistent => 0 => 0

If you only get output for the 'PWD' variables, make sure that the extension_dir path is correct in php.ini. Sometimes there are two specified, and if that is the case the incorrect one should be commented out.

Test out ffmpeg for errors just by typing ffmpeg at the command line. The most common error is:

ffmpeg: error while loading shared libraries: libavformat.so.51: cannot open...

To correct this, edit /etc/ld.so.conf and add the line

/usr/local/lib

then save and exit.

Now run this command to reload the library cache:

ldconfig -v

-------------------------------------------------------------------------------

nginx + mongrel installation and configuration for RoR app.

I'm going to compile the nginx server from source. I'm assuming that you have packages like Ruby, Rails, etc. pre-installed on your system/server.

Since nginx is compiled from source, you'll need the gcc compiler.

Download the source.
wget http://sysoev.ru/nginx/nginx-0.5.35.tar.gz

Untar it
tar zxvf nginx-0.5.35.tar.gz

Run.

cd nginx-0.5.35
./configure --prefix=/usr/local/nginx
make
make install

If everything goes fine, nginx will be installed in /usr/local/nginx.

Here is a sample nginx configuration file. Just copy it as nginx.conf (/usr/local/nginx/conf/nginx.conf) and modify it for your application setup.

user  www www;
worker_processes  3;

error_log  /var/log/nginx/error.log;

pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /usr/local/nginx/conf/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] $status '
                      '"$request" $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    tcp_nopush     on;
    tcp_nodelay    on;

        gzip  on;
    upstream mongrel {
     server 127.0.0.1:4000;
     server 127.0.0.1:4001;
    }

        #Rails App here
            server {
        listen       80;
        root /var/www/railsapp/public;
        index index.html index.htm;
        server_name yourdomain.com www.yourdomain.com;
        client_max_body_size 50M;

        access_log  /var/log/nginx/localhost.access.log;

        location / {
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header Host $http_host;
         proxy_redirect false;
         proxy_max_temp_file_size 0;

         if (-f $request_filename) {
            break;
          }
         if (-f $request_filename/index.html) {
            rewrite (.*) $1/index.html break;
         }
         if (-f $request_filename.html) {
            rewrite (.*) $1.html break;
         }
         if (!-f $request_filename) {
            proxy_pass http://mongrel;
            break;
         }

        }
        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /500.html;
        location = /500.html {
            root  /var/www/railsapp/public;
        }
    }

}

You may start the nginx server using the command

/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf

Now you may want an init startup script to start|stop|restart the server. Just copy the below script as /etc/init.d/nginx and set it executable(chmod 755 /etc/init.d/nginx)

#!/bin/sh

# Description: Startup script for nginx webserver on Debian. Place in /etc/init.d and
# run 'sudo update-rc.d nginx defaults', or use the appropriate command on your
# distro.
#
# Author:       Ryan Norbauer
# Modified:     Geoffrey Grosenbach http://topfunky.com

set -e

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="nginx daemon"
NAME=nginx
DAEMON=/usr/local/nginx/sbin/$NAME
CONFIGFILE=/usr/local/nginx/conf/nginx.conf
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME

# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0

d_start() {
  $DAEMON -c $CONFIGFILE || echo -n " already running"
}

d_stop() {
  kill -QUIT `cat $PIDFILE` || echo -n " not running"
}

d_reload() {
  kill -HUP `cat $PIDFILE` || echo -n " can't reload"
}

case "$1" in
  start)
        echo -n "Starting $DESC: $NAME"
        d_start
        echo "."
        ;;
  stop)
        echo -n "Stopping $DESC: $NAME"
        d_stop
        echo "."
        ;;
  reload)
        echo -n "Reloading $DESC configuration..."
        d_reload
        echo "reloaded."
  ;;
  restart)
        echo -n "Restarting $DESC: $NAME"
        d_stop
        # One second might not be time enough for a daemon to stop,
        # if this happens, d_start will fail (and dpkg will break if
        # the package is being upgraded). Change the timeout if needed
        # be, or change d_stop to have start-stop-daemon use --retry.
        # Notice that using --retry slows down the shutdown process somewhat.
        sleep 1
        d_start
        echo "."
        ;;
  *)
          echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
          exit 3
        ;;
esac

exit 0

Also don't forget to add the user www using "useradd -s /sbin/nologin www" if the user doesn't exist.

Now time to setup mongrel for your app. Install the mongrel gem, if it’s not installed already.

gem install mongrel

Then configure the mongrel cluster for your app and start the mongrel server.

cd /var/www/railsapp (your application directory)

mongrel_rails cluster::configure -e production -p 4000 -N 2

mongrel_rails cluster::start

You can of course use the mongrel_rails cluster::start|stop|restart commands to manage your mongrel instance.

Hope that this tutorial is useful…

Ruby on Rails Caching Tutorial

This tutorial is going to show you everything you need to know to use caching in your Rails applications.

Table of Contents

  1. Why for art thou caching?
  2. Configuration
  3. Page Caching
  4. Page caching with pagination
  5. Cleaning up your cache
  6. Sweeping up your mess
  7. Playing with Apache/Lighttpd
  8. Moving your cache
  9. Clearing out your whole/partial cache
  10. Advanced page caching techniques
  11. Testing your page caching
  12. Conclusion

Caching!

Caching, in the web application world, is the art of taking a processed web page (or part of a webpage), and storing it in a temporary location. If another user requests this same webpage, then we can serve up the cached version.

Loading up a cached webpage can not only save us from having to do ANY database queries, it can even allow us to serve up websites without touching our Ruby on Rails Server. Sounds kinda magical doesn’t it? Keep on reading for the good stuff.

Before we get our feet wet, there's one small configuration step you need to take.

Configuration

There’s only one thing you’ll need to do to start playing with caching, and this is only needed if you’re in development mode. Look for the following line and change it to true in your /config/environments/development.rb:


config.action_controller.perform_caching = true

Normally you probably don't want to bother with caching in development mode, but we want to try it out right away!

Page Caching

Page caching is the FASTEST Rails caching mechanism, so you should do it if at all possible. Where should you use page caching?

  • If your page is the same for all users.
  • If your page is available to the public, with no authentication needed.

If your app contains pages that meet these requirements, keep on reading. If it doesn’t, you probably should know how to use it anyways, so keep reading!

Say we have a blog page (Imagine that!) that doesn’t change very often. The controller code for our front page might look like this:

class BlogController < ApplicationController
  def list
    @posts = Post.find(:all, :order => "created_on desc", :limit => 10)
  end
  ...

As you can see, our List action queries the latest 10 blog posts, which we can then display on our webpage. If we wanted to use page caching to speed things up, we could go into our blog controller and do:

class BlogController < ApplicationController
   caches_page :list

   def list
     @posts = Post.find(:all, :order => "created_on desc", :limit => 10)
   end
  ...

The “caches_page” directive tells our application that next time the “list” action is requested, take the resulting html, and store it in a cached file.

If you ran this code using mongrel, the first time the page is viewed your /logs/development.log would look like this:

Processing BlogController#list (for 127.0.0.1 at 2007-02-23 00:58:56) [GET]
 Parameters: {"action"=>"list", "controller"=>"blog"}
SELECT * FROM posts ORDER BY created_on LIMIT 10
Rendering blog/list
Cached page: /blog/list.html (0.00000)
Completed in 0.18700 (5 reqs/sec) | Rendering: 0.10900 (58%) | DB: 0.00000 (0%) | 200 OK [http

See the line where it says “Cached page: /blog/list.html”. This is telling you that the page was loaded, and the resulting html was stored in a file located at /public/blog/list.html. If you looked in this file you’d find plain html with no ruby code at all.

Subsequent requests to the same url will now hit this html file rather than reloading the page. As you can imagine, loading a static html page is much faster than loading and processing an interpreted programming language. Like 100 times faster!

However, it is very important to note that loading page-cached .html files does not invoke Rails at all! This means that if any content on the page is dynamic from user to user, or the page is secured in some fashion, then you can't use page caching. Instead you'd probably want to use action or fragment caching, which I will cover in part 2 of this tutorial.

What if we then say in our controller:


caches_page :show

Where do you think the cached page would get stored when we visited “/blog/show/5” to show a specific blog post?

The answer is /public/blog/show/5.html

Here are a few more examples of where page caches are stored:

http://localhost:3000/blog/list => /public/blog/list.html
http://localhost:3000/blog/edit/5 => /public/blog/edit/5.html
http://localhost:3000/blog => /public/blog.html
http://localhost:3000/ => /public/index.html
http://localhost:3000/blog/list?page=2 => /public/blog/list.html

Hey, wait a minute: notice that the first item above maps to the same file as the last item. Yup, page caching is going to ignore all additional parameters on your url.
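The mapping above can be sketched in a few lines of plain Ruby. This is a simplified illustration of how a request URL becomes a cache file name, not Rails’ actual implementation; note how the query string is simply discarded:

```ruby
require 'uri'

# Sketch: derive the page-cache file path from a request URL,
# mirroring the table above (the query string is discarded).
def cache_path(url)
  path = URI.parse(url).path
  page = (path == "/" || path.empty?) ? "/index" : path.chomp("/")
  "/public" + page + ".html"
end

puts cache_path("http://localhost:3000/blog/list")         # => /public/blog/list.html
puts cache_path("http://localhost:3000/blog/list?page=2")  # => /public/blog/list.html
puts cache_path("http://localhost:3000/")                  # => /public/index.html
```

This is why `?page=2` collides with the unparameterized url: both reduce to the same file on disk.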

But what if I want to cache my pagination pages?

Very interesting question, and a more interesting answer. In order to cache your different pages, you just have to create a differently formed url. So instead of linking “/blog/list?page=2”, which wouldn’t work because caching ignores additional parameters, we would want to link using “/blog/list/2”, but instead of 2 being stored in params[:id], we want that 2 on the end to be params[:page].

We can make this configuration change in our /config/routes.rb

map.connect 'blog/list/:page',
    :controller => 'blog',
    :action => 'list',
    :requirements => { :page => /\d+/},
    :page => nil

With this new route defined, we can now do:


<%= link_to "Next Page", :controller => 'blog', :action => 'list', :page => 2 %>

The resulting url will be “/blog/list/2”. When we click this link, two great things will happen:

  1. Rather than storing the 2 in params[:id] (the default), the application will store it as params[:page].
  2. The page will be cached as /public/blog/list/2.html

The moral of the story: if you’re going to use page caching, make sure all the parameters you require are part of the URL, not after the question mark! Many thanks to Charlie Bowman for inspiration.

Cleaning up the cache

You must be wondering, “What happens if I add another blog post and then refresh /blog/list at this point?”

Absolutely NOTHING!!!

Well, not quite nothing. We would see the /blog/list.html cached file which was generated a minute ago, but it won’t contain our newest blog entry.

To remove this cached file so a new one can be generated we’ll need to expire the page. To expire the two pages we listed above, we would simply run:

# This will remove /blog/list.html
expire_page(:controller => 'blog', :action => 'list')

# This will remove /blog/show/5.html
expire_page(:controller => 'blog', :action => 'show', :id => 5)

We could obviously go and add this to every place where we add/edit/remove a post, and paste in a bunch of expires, but there is a better way!
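Under the hood, expire_page simply maps the url back to its cached file and deletes it. Here is a runnable, pure-Ruby sketch of that behavior; the helper name and directory layout are made up for illustration, and Rails’ real implementation lives inside its page-caching module:

```ruby
require 'fileutils'
require 'tmpdir'

# Simplified stand-in for expire_page: map a url path to a file
# under the page-cache directory and delete it if present.
def expire_cached_page(cache_dir, url_path)
  page = (url_path == "/") ? "/index" : url_path
  FileUtils.rm_f(File.join(cache_dir, page + ".html"))
end

cache_dir = Dir.mktmpdir
FileUtils.mkdir_p(File.join(cache_dir, "blog"))
File.open(File.join(cache_dir, "blog/list.html"), "w") { |f| f << "<html>cached</html>" }

expire_cached_page(cache_dir, "/blog/list")
puts File.exist?(File.join(cache_dir, "blog/list.html"))  # => false
```

Once the file is gone, the next request falls through to Rails, which regenerates and re-caches the page.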

Sweepers

Sweepers are pieces of code that automatically delete stale caches when the data on a cached page changes. To do this, sweepers observe one or more of your models. When an observed model is added/updated/removed, the sweeper gets notified and runs the expire lines listed above.

Sweepers can be created in your controllers directory, but I think they should be separated, which you can do by adding this line to your /config/environment.rb.

Rails::Initializer.run do |config|
   # ...
   config.load_paths += %W( #{RAILS_ROOT}/app/sweepers )
   # ...
end

(don’t forget to restart your server after you do this)

With this code, we can create an /app/sweepers directory and start creating sweepers. So, let’s jump right into it. /app/sweepers/blog_sweeper.rb might look like this:

class BlogSweeper < ActionController::Caching::Sweeper
  observe Post # This sweeper is going to keep an eye on the Post model

  # If our sweeper detects that a Post was created, call this
  def after_create(post)
    expire_cache_for(post)
  end

  # If our sweeper detects that a Post was updated, call this
  def after_update(post)
    expire_cache_for(post)
  end

  # If our sweeper detects that a Post was deleted, call this
  def after_destroy(post)
    expire_cache_for(post)
  end

  private

  def expire_cache_for(record)
    # Expire the list page now that we posted a new blog entry
    expire_page(:controller => 'blog', :action => 'list')

    # Also expire the show page, in case we just edited a blog entry
    expire_page(:controller => 'blog', :action => 'show', :id => record.id)
  end
end

NOTE: We can call “after_save”, instead of “after_create” and “after_update” above, to DRY up our code.
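For reference, that condensed version might look like the sketch below (same class as above, just folding the two callbacks into one; this is an illustration, not code from the original app):

```ruby
class BlogSweeper < ActionController::Caching::Sweeper
  observe Post

  # after_save fires on both create and update, replacing the two
  # separate callbacks above
  def after_save(post)
    expire_cache_for(post)
  end

  def after_destroy(post)
    expire_cache_for(post)
  end

  private

  def expire_cache_for(record)
    expire_page(:controller => 'blog', :action => 'list')
    expire_page(:controller => 'blog', :action => 'show', :id => record.id)
  end
end
```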

We then need to tell our controller when to invoke this sweeper, so in /app/controllers/blog_controller.rb:

class BlogController < ApplicationController
   caches_page :list, :show
   cache_sweeper :blog_sweeper, :only => [:create, :update, :destroy]
   ...

If we then try creating a new post, we would see the following in our log/development.log:

Expired page: /blog/list.html (0.00000)
Expired page: /blog/show/3.html (0.00000)

That’s our sweeper at work!

Playing nice with Apache/Lighttpd

When deploying to production, many Rails applications still use Apache as a front end, with dynamic Ruby on Rails requests forwarded to a Rails server (Mongrel or Lighttpd). However, since page caching writes out pure html, we can tell Apache to check whether the requested page exists as a static .html file. If it does, the page is served without even touching our Ruby on Rails server!

Our httpd.conf might look like this:

<VirtualHost *:80>
  ...
  # Configure mongrel_cluster
  <Proxy balancer://blog_cluster>
    BalancerMember http://127.0.0.1:8030
  </Proxy>

  RewriteEngine On
  # Rewrite index to check for static
  RewriteRule ^/$ /index.html [QSA]

  # Rewrite to check for Rails cached page
  RewriteRule ^([^.]+)$ $1.html [QSA]

  # Redirect all non-static requests to cluster
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ balancer://blog_cluster%{REQUEST_URI} [P,QSA,L]
  ...
</VirtualHost>

In lighttpd you might have:

server.modules = ( "mod_rewrite", ... )
url.rewrite += ( "^/$" => "/index.html" )
url.rewrite += ( "^([^.]+)$" => "$1.html" )

The proxy servers will then look for cached files in your /public directory. However, you may want to change the caching directory to keep things more separated. You’ll see why shortly.

Moving your Page Cache

First you’d want to add the following to your /config/environment.rb:


config.action_controller.page_cache_directory = RAILS_ROOT + "/public/cache/"

This tells Rails to publish all your cached files in the /public/cache directory. You would then want to change your Rewrite rules in your httpd.conf to be:

  # Rewrite index to check for static
  RewriteRule ^/$ cache/index.html [QSA]

  # Rewrite to check for Rails cached page
  RewriteRule ^([^.]+)$ cache/$1.html [QSA]

Clearing out a partial/whole cache

When you start implementing page caching, you may find that when you add/edit/remove one model, almost all of your cached pages need to be expired. This could be the case if, for instance, all of your website pages had a list which showed the 10 most recent blog posts.

One alternative would be to just delete all your cached files. In order to do this you’ll first need to move your cache directory (as shown above). Then you might create a sweeper like this:

class BlogSweeper < ActionController::Caching::Sweeper
  observe Post

  def after_save(record)
    self.class.sweep
  end

  def after_destroy(record)
    self.class.sweep
  end

  def self.sweep
    cache_dir = ActionController::Base.page_cache_directory
    unless cache_dir == RAILS_ROOT + "/public"
      begin
        FileUtils.rm_r(Dir.glob(cache_dir + "/*"))
      rescue Errno::ENOENT
        # Cache directory is already empty; nothing to do
      end
      RAILS_DEFAULT_LOGGER.info("Cache directory '#{cache_dir}' fully swept.")
    end
  end
end

That FileUtils.rm_r simply deletes all the files in the cache, which is really all expire_page does anyway. You could also do a partial cache purge by deleting only a cache subdirectory. If I just wanted to remove all the cached pages under the blog subdirectory, I could do:

cache_dir = ActionController::Base.page_cache_directory
begin
  FileUtils.rm_r(Dir.glob(cache_dir + "/blog/*"))
rescue Errno::ENOENT
  # Nothing cached under blog/ yet
end

If calling these File Utilities feels too hackerish for you, Charlie Bowman wrote up the broomstick plugin which allows you to “expire_each_page” of a controller or action, with one simple call.

Needing something more advanced?

Page caching can get very complex with large websites. Here are a few notable advanced solutions:

Rick Olson (aka Technoweenie) wrote up a Referenced Page Caching Plugin which uses a database table to keep track of cached pages. Check out the Readme for examples.

Max Dunn wrote a great article on Advanced Page Caching where he shows you how he dealt with wiki pages using cookies to dynamically change cached pages based on user roles.

Lastly, there doesn’t seem to be any good way to page cache xml files, as far as I’ve seen. Mike Zornek wrote about his problems and figured out one way to do it. Manoel Lemos figured out a way to do it using action caching. We’ll cover action caching in the next tutorial.

How do I test my page caching?

There is no built-in way to do this in Rails. Luckily, Damien Merenne created a swank plugin for page cache testing. Check it out!

Quoted from  railsenvy.com

A Flickr-based Introduction to Ruby on Rails 2.0

Installation and Basic Setup
The first thing you have to do is install the Rails 2.0 framework and create a basic application scaffold to verify that everything has been setup properly. If the Ruby language and RubyGems, the standard packaging system for Ruby libraries, are not already installed on your system, refer to the Ruby and RubyGems web sites for further installation information. Also, check out Pastie, a tool that checks if your applications based on Rails 1.x are ready to be migrated to the new version.

Once you have these ready to go, you can install Rails 2.0 using the same procedure as for the previous version of the framework. Then open a terminal and enter this command:


gem install rails --include-dependencies

You can also launch the gem command with explicit references to the required libraries and verify that it downloads the correct packages from the Internet:


$ sudo gem update actionmailer actionpack activerecord activesupport
$ sudo gem install activeresource
$ sudo gem update rails
$ ruby -v
ruby 1.8.6 (2007-09-24 patchlevel 111) [universal-darwin9.0]
$ gem -v
1.0.1
$ rails -v
Rails 2.0.2
$ gem list --local

*** LOCAL GEMS ***

actionmailer (2.0.2, 1.3.6, 1.3.3)
actionpack (2.0.2, 1.13.6, 1.13.3)
actionwebservice (1.2.6, 1.2.3)
activerecord (2.0.2, 1.15.6, 1.15.3)
activeresource (2.0.2)
activesupport (2.0.2, 1.4.4, 1.4.2)
rails (2.0.2, 1.2.6, 1.2.3)
... other libraries here ...

As you can see, you can keep the previous version of Rails alongside the new one, to facilitate the transition of existing applications built upon previous Rails versions. The above example has both Rails 2.0.2 and two previous versions of Rails 1.2. (The Ruby on Rails download page describes additional ways of installing it.)

To verify that everything is working correctly, you can generate a scaffold for a new web application with this command:

rails testapp
cd testapp/
script/server

Open your browser at the URL http://localhost:3000 to verify that you are using the latest version of Rails. You should see a welcome screen for your newly created Rails 2.0 application. If you don’t, check the RAILS_GEM_VERSION in the testapp/config/environment.rb file.

The RailTrackr Application
RailTrackr, the visually rich, web-based Flickr photo browser, will demonstrate some notable Rails 2.0 capabilities. You can launch the sample application now by downloading the source code attached to this article and launching it with the traditional script/server command. Since the application uses the Flickr APIs to load photos, you have to request an API key from the Flickr services site and type it into the flickr_helper.rb file bundled with the source code.

The application provides a way to navigate through Flickr users, their photosets, and the photos contained within them. It therefore defines three entities: FlickrUser, Photoset, and Photo. In the application domain, a FlickrUser may have many Photosets, and each Photoset may have many Photos. These will be the Ruby models for RailTrackr.
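Under Rails conventions, those one-to-many relationships would be declared with has_many and belongs_to associations. A sketch of what the three models could look like follows; this is illustrative only (it assumes conventional foreign keys such as photosets.flickr_user_id and photos.photoset_id), and the actual model code ships with the article’s downloadable source:

```ruby
class FlickrUser < ActiveRecord::Base
  has_many :photosets
end

class Photoset < ActiveRecord::Base
  belongs_to :flickr_user
  has_many :photos
end

class Photo < ActiveRecord::Base
  belongs_to :photoset
end
```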

Quoted from http://www.devx.com

SafeErb for Rails 2

You might have noticed that the SafeErb plugin does not work in Rails 2 applications. That is because of old method signatures used in the plugin. The author has put up a blog post (in Japanese) about a new version created by Aaron Bedra, which points to this plugin installer (possibly replace http by svn):

./script/plugin install http://safe-erb.rubyforge.org/svn/plugins/safe_erb

The author has tested it with Rails 2.0.2 and it works fine. On my system, however, it has problems with methods from the FormHelper (text_field and so on), most likely because of the output values in the value parameter. Does this happen on your system as well? I hope to find a fix for that. Apart from that, the plugin works fine for Rails 2 applications.

 

Quoted from rorsecurity.info