Ruby on Rails Caching Tutorial

This tutorial shows you everything you need to know to use caching in your Rails applications.

Table of Contents

  1. Why for art thou caching?
  2. Configuration
  3. Page Caching
  4. Page caching with pagination
  5. Cleaning up your cache
  6. Sweeping up your mess
  7. Playing with Apache/Lighttpd
  8. Moving your cache
  9. Clearing out your whole/partial cache
  10. Advanced page caching techniques
  11. Testing your page caching
  12. Conclusion

Caching!

Caching, in the web application world, is the art of taking a processed web page (or part of a webpage), and storing it in a temporary location. If another user requests this same webpage, then we can serve up the cached version.

Loading up a cached webpage can not only save us from having to do ANY database queries, it can even allow us to serve up websites without touching our Ruby on Rails server. Sounds kinda magical, doesn’t it? Keep on reading for the good stuff.

Before we get our feet wet, there’s one small configuration step you need to take.

Configuration

There’s only one thing you’ll need to do to start playing with caching, and this is only needed if you’re in development mode. Look for the following line and change it to true in your /config/environments/development.rb:


config.action_controller.perform_caching = true

Normally you probably don’t want to bother with caching in development mode, but we want to try it out right away!

Page Caching

Page caching is the FASTEST Rails caching mechanism, so you should do it if at all possible. Where should you use page caching?

  • If your page is the same for all users.
  • If your page is available to the public, with no authentication needed.

If your app contains pages that meet these requirements, keep on reading. If it doesn’t, you should probably still know how page caching works, so keep reading anyway!

Say we have a blog page (Imagine that!) that doesn’t change very often. The controller code for our front page might look like this:

class BlogController < ApplicationController
  def list
    @posts = Post.find(:all, :order => "created_on desc", :limit => 10)
  end
  ...

As you can see, our list action queries the latest 10 blog posts, which we can then display on our webpage. If we wanted to use page caching to speed things up, we could go into our blog controller and do:

class BlogController < ApplicationController
  caches_page :list

  def list
    @posts = Post.find(:all, :order => "created_on desc", :limit => 10)
  end
  ...

The “caches_page” directive tells our application that the next time the “list” action is requested, it should take the resulting HTML and store it in a cached file.

If you ran this code using Mongrel, the first time the page is viewed your log/development.log would look like this:

Processing BlogController#list (for 127.0.0.1 at 2007-02-23 00:58:56) [GET]
 Parameters: {"action"=>"list", "controller"=>"blog"}
SELECT * FROM posts ORDER BY created_on LIMIT 10
Rendering blog/list
Cached page: /blog/list.html (0.00000)
Completed in 0.18700 (5 reqs/sec) | Rendering: 0.10900 (58%) | DB: 0.00000 (0%) | 200 OK [http

See the line that says “Cached page: /blog/list.html”? It tells you that the page was rendered and the resulting HTML was stored in a file located at /public/blog/list.html. If you opened this file you’d find plain HTML with no Ruby code at all.

Subsequent requests to the same URL will now hit this HTML file rather than re-rendering the page. As you can imagine, serving a static HTML page is much faster than loading and processing an interpreted programming language. Like 100 times faster!

However, it is very important to note that serving page-cached .html files does not invoke Rails at all! This means that if any content on the page is dynamic from user to user, or the page is secured in some fashion, then you can’t use page caching. Instead you’d probably want action or fragment caching, which I will cover in part 2 of this tutorial.

What if we then say in our controller:


caches_page :show

Where do you think the cached page would get stored when we visited “/blog/show/5” to show a specific blog post?

The answer is /public/blog/show/5.html

Here are a few more examples of where page caches are stored:

http://localhost:3000/blog/list => /public/blog/list.html
http://localhost:3000/blog/edit/5 => /public/blog/edit/5.html
http://localhost:3000/blog => /public/blog.html
http://localhost:3000/ => /public/index.html
http://localhost:3000/blog/list?page=2 => /public/blog/list.html

Hey, wait a minute: the first item above maps to the same file as the last one. Yup, page caching ignores any query-string parameters on your URL.
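To make the mapping concrete, here’s a toy sketch of the rule in plain Ruby (an illustration only, not the real Rails implementation): strip the query string, special-case the root, and append .html.

```ruby
# Toy sketch of how a request path maps to a cached file
# (an assumption for illustration, not Rails' actual code)
def cached_page_path(url_path, root = "/public")
  path = url_path.split("?").first   # query-string parameters are ignored
  path = "/index" if path == "/"     # the root maps to index.html
  File.join(root, "#{path}.html")
end

puts cached_page_path("/blog/list")         # /public/blog/list.html
puts cached_page_path("/")                  # /public/index.html
puts cached_page_path("/blog/list?page=2")  # /public/blog/list.html
```

Note how the last two calls from the table above collapse onto the same file — exactly the problem the next section solves.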

But what if I want to cache my pagination pages?

A very interesting question, with a more interesting answer. To cache your different pages, you just have to create a differently formed URL. Instead of linking to “/blog/list?page=2”, which wouldn’t work because caching ignores query-string parameters, we want to link to “/blog/list/2”; and instead of that trailing 2 being stored in params[:id], we want it in params[:page].

We can make this change in our /config/routes.rb:

map.connect 'blog/list/:page',
    :controller => 'blog',
    :action => 'list',
    :requirements => { :page => /\d+/},
    :page => nil

With this new route defined, we can now do:


<%= link_to "Next Page", :controller => 'blog', :action => 'list', :page => 2 %>

The resulting URL will be “/blog/list/2”. When we click this link, two great things happen:

  1. Rather than storing the 2 in params[:id], which is the default, the application stores it as params[:page].
  2. The page will be cached as /public/blog/list/2.html.

The moral of the story: if you’re going to use page caching, make sure all the parameters you need are part of the URL path, not after the question mark! Many thanks to Charlie Bowman for the inspiration.
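For illustration, here’s roughly how a list action might turn params[:page] into a query offset. page_offset is a hypothetical helper (the tutorial doesn’t show pagination internals), assuming 10 posts per page as above:

```ruby
# Hypothetical helper: turn params[:page] into a SQL OFFSET,
# assuming 10 posts per page as in the list action above
def page_offset(page, per_page = 10)
  page = (page || 1).to_i   # params values arrive as strings (or nil)
  page = 1 if page < 1      # guard against bogus input
  (page - 1) * per_page
end

puts page_offset(nil)   # 0  -- first page
puts page_offset(2)     # 10
puts page_offset("3")   # 20
```

The list action could then pass this offset to its find call alongside the :limit.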

Cleaning up the cache

You must be wondering, “What happens if I add another blog post and then refresh /blog/list at this point?”

Absolutely NOTHING!!!

Well, not quite nothing. We would see the /blog/list.html cached file which was generated a minute ago, but it won’t contain our newest blog entry.

To remove this cached file so a new one can be generated, we’ll need to expire the page. To expire the two pages we cached above, we would simply run:

# This will remove /blog/list.html
expire_page(:controller => 'blog', :action => 'list')

# This will remove /blog/show/5.html
expire_page(:controller => 'blog', :action => 'show', :id => 5)

We could obviously go and paste a bunch of these expire calls into every place where we add/edit/remove a post, but there is a better way!

Sweepers

Sweepers are pieces of code that automatically expire stale caches when the data on a cached page changes. To do this, a sweeper observes one or more of your models. When a model is created/updated/destroyed, the sweeper gets notified and runs the expire calls I listed above.

Sweepers can be created in your controllers directory, but I think they should live separately, which you can arrange by adding this line to your /config/environment.rb:

Rails::Initializer.run do |config|
   # ...
   config.load_paths += %W( #{RAILS_ROOT}/app/sweepers )
   # ...
end

(don’t forget to restart your server after you do this)

With this code in place, we can create an /app/sweepers directory and start writing sweepers. So, let’s jump right into it. /app/sweepers/blog_sweeper.rb might look like this:

class BlogSweeper < ActionController::Caching::Sweeper
  observe Post # This sweeper is going to keep an eye on the Post model

  # If our sweeper detects that a Post was created call this
  def after_create(post)
    expire_cache_for(post)
  end

  # If our sweeper detects that a Post was updated call this
  def after_update(post)
    expire_cache_for(post)
  end

  # If our sweeper detects that a Post was deleted call this
  def after_destroy(post)
    expire_cache_for(post)
  end

  private

  def expire_cache_for(record)
    # Expire the list page now that we posted a new blog entry
    expire_page(:controller => 'blog', :action => 'list')

    # Also expire the show page, in case we just edited a blog entry
    expire_page(:controller => 'blog', :action => 'show', :id => record.id)
  end
end

NOTE: We can define a single “after_save” callback, instead of separate “after_create” and “after_update” callbacks, to DRY up our code.

We then need to tell our controller when to invoke this sweeper, so in /app/controllers/blog_controller.rb:

class BlogController < ApplicationController
   caches_page :list, :show
   cache_sweeper :blog_sweeper, :only => [:create, :update, :destroy]
   ...

If we then try creating a new post we would see the following in our log/development.log:

Expired page: /blog/list.html (0.00000)
Expired page: /blog/show/3.html (0.00000)

That’s our sweeper at work!

Playing nice with Apache/Lighttpd

When deploying to production, many Rails applications still use Apache as a front end, forwarding dynamic requests to a Rails server (Mongrel or Lighttpd). However, since page caching writes out plain HTML, we can tell Apache to check whether the requested page exists in static .html form. If it does, Apache can serve it without even touching our Ruby on Rails server!

Our httpd.conf might look like this:

<VirtualHost *:80>
  ...
  # Configure mongrel_cluster
  <Proxy balancer://blog_cluster>
    BalancerMember http://127.0.0.1:8030
  </Proxy>

  RewriteEngine On
  # Rewrite index to check for static
  RewriteRule ^/$ /index.html [QSA]

  # Rewrite to check for Rails cached page
  RewriteRule ^([^.]+)$ $1.html [QSA]

  # Redirect all non-static requests to cluster
  RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
  RewriteRule ^/(.*)$ balancer://blog_cluster%{REQUEST_URI} [P,QSA,L]
  ...
</VirtualHost>

With Lighttpd you might have:

server.modules = ( "mod_rewrite", ... )
url.rewrite += ( "^/$" => "/index.html" )
url.rewrite += ( "^([^.]+)$" => "$1.html" )

The web server will then look for cached files in your /public directory. However, you may want to change the caching directory to keep things more separated. You’ll see why shortly.

Moving your Page Cache

First you’d want to add the following to your /config/environment.rb:


config.action_controller.page_cache_directory = RAILS_ROOT + "/public/cache/"

This tells Rails to write all cached files into the /public/cache directory. You would then change the rewrite rules in your httpd.conf to:

  # Rewrite index to check for static
  RewriteRule ^/$ cache/index.html [QSA]

  # Rewrite to check for Rails cached page
  RewriteRule ^([^.]+)$ cache/$1.html [QSA]

Clearing out a partial/whole cache

When you start implementing page caching, you may find that when you add/edit/remove one model, almost all of your cached pages need to be expired. This could be the case if, for instance, all of your website pages had a list which showed the 10 most recent blog posts.

One alternative would be to just delete all your cached files. In order to do this you’ll first need to move your cache directory (as shown above). Then you might create a sweeper like this:

class BlogSweeper < ActionController::Caching::Sweeper
  observe Post

  def after_save(record)
    self.class.sweep
  end

  def after_destroy(record)
    self.class.sweep
  end

  def self.sweep
    cache_dir = ActionController::Base.page_cache_directory
    unless cache_dir == RAILS_ROOT + "/public"
      begin
        FileUtils.rm_r(Dir.glob(cache_dir + "/*"))
      rescue Errno::ENOENT
        # Cache directory was already empty
      end
      RAILS_DEFAULT_LOGGER.info("Cache directory '#{cache_dir}' fully swept.")
    end
  end
end

That FileUtils.rm_r call simply deletes all the files in the cache, which is really all the expire_page calls do anyway. You could also do a partial cache purge by only deleting a cache subdirectory. If I just wanted to remove the cached pages under the blog/ subdirectory I could do:

cache_dir = ActionController::Base.page_cache_directory
begin
  FileUtils.rm_r(Dir.glob(cache_dir + "/blog/*"))
rescue Errno::ENOENT
  # Nothing cached under blog/
end

If calling these FileUtils methods feels too hackish for you, Charlie Bowman wrote the broomstick plugin, which lets you “expire_each_page” of a controller or action with one simple call.
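To see what the sweep actually does, here’s a standalone sketch using a temporary directory as a stand-in for the page cache (no Rails required; the directory and file names are made up):

```ruby
require 'fileutils'
require 'tmpdir'

# Stand-in for ActionController::Base.page_cache_directory (an assumption)
cache_dir = Dir.mktmpdir("page_cache")

# Simulate a couple of cached pages
FileUtils.mkdir_p(File.join(cache_dir, "blog"))
File.write(File.join(cache_dir, "blog", "list.html"), "<html>cached list</html>")
File.write(File.join(cache_dir, "index.html"), "<html>cached home</html>")

# The sweep: remove everything under the cache directory,
# tolerating a directory that is already empty or missing
begin
  FileUtils.rm_r(Dir.glob(File.join(cache_dir, "*")))
rescue Errno::ENOENT
  # nothing to sweep
end

puts Dir.glob(File.join(cache_dir, "*")).size  # 0 -- the cache is empty
```

Rails will happily regenerate the files on the next request, so the worst case after an over-eager sweep is a few slower page loads.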

Needing something more advanced?

Page caching can get very complex with large websites. Here are a few notable advanced solutions:

Rick Olson (aka Technoweenie) wrote up a Referenced Page Caching Plugin which uses a database table to keep track of cached pages. Check out the Readme for examples.

Max Dunn wrote a great article on Advanced Page Caching where he shows you how he dealt with wiki pages using cookies to dynamically change cached pages based on user roles.

Lastly, there doesn’t seem to be any good way to page cache XML files, as far as I’ve seen. Mike Zornek wrote about his problems and figured out one way to do it. Manoel Lemos figured out a way to do it using action caching. We’ll cover action caching in the next tutorial.

How do I test my page caching?

There is no built-in way to do this in Rails. Luckily, Damien Merenne created a swank plugin for page cache testing. Check it out!

Quoted from railsenvy.com

A Flickr-based Introduction to Ruby on Rails 2.0

Installation and Basic Setup
The first thing you have to do is install the Rails 2.0 framework and create a basic application scaffold to verify that everything has been set up properly. If the Ruby language and RubyGems, the standard packaging system for Ruby libraries, are not already installed on your system, refer to the Ruby and RubyGems web sites for further installation information. Also, check out Pastie, a tool that checks if your applications based on Rails 1.x are ready to be migrated to the new version.

Once you have these ready to go, you can install Rails 2.0 using the same procedure as for the previous version of the framework. Then open a terminal and enter this command:


gem install rails --include-dependencies

You can also launch the gem command with explicit references to the required libraries and verify that it downloads the correct packages from the Internet:


$ sudo gem update actionmailer actionpack activerecord activesupport
$ sudo gem install activeresource
$ sudo gem update rails
$ ruby -v
ruby 1.8.6 (2007-09-24 patchlevel 111) [universal-darwin9.0]
$ gem -v
1.0.1
$ rails -v
Rails 2.0.2
$ gem list --local

*** LOCAL GEMS ***

actionmailer (2.0.2, 1.3.6, 1.3.3)
actionpack (2.0.2, 1.13.6, 1.13.3)
actionwebservice (1.2.6, 1.2.3)
activerecord (2.0.2, 1.15.6, 1.15.3)
activeresource (2.0.2)
activesupport (2.0.2, 1.4.4, 1.4.2)
rails (2.0.2, 1.2.6, 1.2.3)
... other libraries here ...

As you can see, you can keep the previous version of Rails alongside the new one, to facilitate the transition of existing applications built upon previous Rails versions. The above example has both Rails 2.0.2 and two previous versions of Rails 1.2. (The Ruby on Rails download page describes additional ways of installing it.)

To verify that everything is working correctly, you can generate a scaffold for a new web application with this command:

$ rails testapp
$ cd testapp
$ script/server

Open your browser at the URL http://localhost:3000 to verify that you are using the latest version of Rails. You should see a welcome screen for your newly created Rails 2.0 application. If you don’t, check the RAILS_GEM_VERSION in the testapp/config/environment.rb file.

The RailTrackr Application
RailTrackr, the visually rich, web-based Flickr photo browser, will demonstrate some notable Rails 2.0 capabilities. You can launch the sample application now by downloading the source code attached to this article and launching it with the traditional script/server command. Since the application uses the Flickr APIs to load photos, you have to request an API key from the Flickr services site and type it into the flickr_helper.rb file bundled with the source code.

The application provides a way to navigate through Flickr users, their photosets, and the photos contained within them. It therefore defines three entities: FlickrUser, Photoset, and Photo. In the application domain, a FlickrUser may have many Photosets, and each Photoset may have many Photos. These will be the Ruby models for RailTrackr.

Quoted from http://www.devx.com

SafeErb for Rails 2

You might have noticed that the SafeErb plugin does not work in Rails 2 applications. That is because of old method signatures used in the plugin. The author has put up a blog post (in Japanese) about a new version created by Aaron Bedra which points to this plugin installer (possibly replace http with svn):

./script/plugin install http://safe-erb.rubyforge.org/svn/plugins/safe_erb

The author has tested it with Rails 2.0.2 and it works fine. On my system, however, it has problems with methods from the FormHelper (text_field and so on), most likely because of the output values in the value parameter. Does this happen on your system as well? I hope to find a fix for that. Apart from that, the plugin works fine for Rails 2 applications.

 

Quoted from rorsecurity.info

InvalidAuthenticityToken for in_place_editing

There is a problem with InvalidAuthenticityToken errors that are raised in the methods for the in_place_editing plugin. This happens in Rails 2.0.2 (and possibly earlier versions). It’s because there is no authenticity_token sent at all. You can apply this patch until there is a new version out.

If you have something like this:

<%= in_place_editor("title", {:url => url_for(:action => "update_title" …)}) %>

the update_title method will throw an error. Apply the patch to make it work.

 

Quoted from rorsecurity.info

Thin : A fast and very simple Ruby web server

What

Thin is a Ruby web server that glues together 3 of the best Ruby libraries in web history:

  • the Mongrel parser, the root of Mongrel speed and security
  • Event Machine, a network I/O library with extremely high scalability, performance and stability
  • Rack, a minimal interface between webservers and Ruby frameworks

Which makes it, with all humility, the most secure, stable, fast and extensible Ruby web server bundled in an easy to use gem for your own pleasure.

Installation & Usage
Minimum requirements include either Ruby 1.8.6 or 1.9. Love that last bit about 1.9.

sudo gem install thin

Using with Rails

After installing the Gem, a thin script should be in your path to easily start your Rails application.

cd to/your/rails/app
thin start

But Thin can also load a Rack config file, so you can use it with any framework that supports Rack. Even your own that is, like, soooo much better than Rails, rly!

test.ru

app = proc do |env|
  [
    200,          # Status code
    {             # Response headers
      'Content-Type' => 'text/html',
      'Content-Length' => '2',
    },
    ['hi']        # Response body
  ]
end

# You can install Rack middlewares
# to do some crazy stuff like logging,
# filtering, auth or build your own.
use Rack::CommonLogger

run app
thin start -r test.ru

See the Rack doc for more.
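Since a Rack application is just an object responding to call(env) and returning a [status, headers, body] triplet, you can exercise the test.ru app above without starting any server. A quick sanity check in plain Ruby (an empty env hash is enough for this toy app, though real servers pass much more):

```ruby
# The same app as in test.ru: any object with #call(env) is a Rack app
app = proc do |env|
  [
    200,                                                      # status code
    { 'Content-Type' => 'text/html', 'Content-Length' => '2' }, # headers
    ['hi']                                                    # body
  ]
end

status, headers, body = app.call({})  # call it directly -- no server needed

puts status                     # 200
puts headers['Content-Length']  # 2
puts body.join                  # hi
```

This is also exactly how Rack middlewares like Rack::CommonLogger work: they wrap an app, call it, and pass the triplet along.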

Deploying

Deploying a cluster of Thins is super easy. Just specify the number of servers you want to launch.

thin start --servers 3

You can also install Thin as a runlevel script (under /etc/init.d/thin) that will start all your servers after boot.

sudo thin install

and set up a config file for each app you want to start:

thin config -C /etc/thin/myapp.yml -c /var/...

Run thin -h to get all options.


Behind Nginx

Check out this sample Nginx config file to proxy requests to a Thin backend.

then start your Thin cluster like this:

thin start -s3 -p 5000

You can also setup a Thin config file and use it to control your cluster:

thin config -C myapp.yml -s3 -p 5000
thin start -C myapp.yml

To connect to Nginx using UNIX domain sockets, edit the upstream block in your nginx config file:

nginx.conf

upstream backend {
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
}

and start your cluster like this:

thin start -s3 --socket /tmp/thin.sock

Quoted from code.macournoyer.com

iPhone on Rails

Creating an iPhone optimised version of your Rails site using iUI and Rails 2

With Rails 2 you can create a mime type specifically for the iPhone and then use that format in a respond_to block (along with views such as index.iphone.erb).

Before you start – iPhoney

iPhoney is an indispensable Mac-only tool for aiding the development of an iPhone specific site.

Looking for a way to see how your web creations will look on iPhone? Look no further. iPhoney gives you a pixel-accurate web browsing environment—powered by Safari—that you can use when developing web sites for iPhone. It’s the perfect 320 by 480-pixel canvas for your iPhone development. And it’s free. iPhoney is not an iPhone simulator but instead is designed for web developers who want to create 320 by 480 (or 480 by 320) websites for use with iPhone. It gives you a canvas on which to test the visual quality of your designs.

Ensure iPhoney’s user agent is set to iPhone User Agent in the menu.

iPhone mime type

Create an iPhone mime type alias using Rails 2 initializers.

config/initializers/mime_types.rb

Mime::Type.register_alias "text/html", :iphone

Detecting iPhone user agents

Apple recommends that rather than redirecting iPhone users to an iPhone-optimised version of your site you should instead show the original site with a link to the alternative.

This can be achieved via user agent sniffing: looking for Mobile Safari (as Apple suggests), rather than iPhone or iPod touch, to allow for future device support.

Adding a helper method to application_helper.rb allows a notification message to be shown for only iPhone users (try accessing www.trawlr.com from an iPhone).

app/helpers/application_helper.rb

# Request from an iPhone or iPod touch? (Mobile Safari user agent)
def iphone_user_agent?
  request.env["HTTP_USER_AGENT"] && request.env["HTTP_USER_AGENT"][/(Mobile\/.+Safari)/] 
end
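To see the sniffing in action, here are two sample user agent strings run against the same regex (both strings are abbreviated approximations, not canonical UA values):

```ruby
# Abbreviated sample UA strings (assumptions for illustration)
iphone_ua  = "Mozilla/5.0 (iPhone; U; CPU like Mac OS X) AppleWebKit/420+ Version/3.0 Mobile/1A543a Safari/419.3"
desktop_ua = "Mozilla/5.0 (Macintosh; U; Intel Mac OS X) AppleWebKit/523.12 Version/3.0.4 Safari/523.12"

# The same pattern used in iphone_user_agent? above
pattern = /(Mobile\/.+Safari)/

puts !!(iphone_ua =~ pattern)   # true  -- "Mobile/... Safari" matched
puts !!(desktop_ua =~ pattern)  # false -- desktop Safari has no "Mobile/" token
```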

In your view, show a message for iPhone user agents directing them to the iPhone version.

<% if iphone_user_agent? # Show message for iPhone users -%>
<div class="message">
    <p>Using an iPhone? <a href="http://iphone.trawlr.com/">Use the optimised version</a>.</p>
</div>
<% end -%>

iPhone subdomain

Instead of forcing users straight to our iPhone version, we offer them the option by using a separate subdomain (iphone.trawlr.com) with a link back to the regular site if they wish. When developing locally I modified my /etc/hosts file as follows so that I could use http://iphone.localhost.com:3000/.

/etc/hosts

127.0.0.1 iphone.localhost.com

You may need to flush the DNS cache after making the changes:

sudo dscacheutil -flushcache

Adjust format for iPhone

I chose to require login for all requests to the iPhone version of the site.

class ApplicationController < ActionController::Base
    before_filter :adjust_format_for_iphone
    before_filter :iphone_login_required

private

  # Set iPhone format if request to iphone.trawlr.com
  def adjust_format_for_iphone    
    request.format = :iphone if iphone_request?
  end

  # Force all iPhone users to login
  def iphone_login_required
    if iphone_request?
      redirect_to login_path unless logged_in?
    end
  end

  # Return true for requests to iphone.trawlr.com
  def iphone_request?
    return (request.subdomains.first == "iphone" || params[:format] == "iphone")
  end
end
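The subdomain half of that check can be illustrated with a toy extraction in plain Ruby (request.subdomains does considerably more than this; an assumption-laden sketch):

```ruby
# Toy subdomain extraction: drop the domain and TLD, keep the rest
# (not the real Rails implementation)
def subdomains(host)
  host.split(".")[0..-3]
end

puts subdomains("iphone.trawlr.com").first  # iphone
puts subdomains("www.trawlr.com").first     # www
```

So for a request to iphone.trawlr.com, subdomains.first is "iphone" and iphone_request? returns true.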

Note that sessions_controller.rb (which handles login) requires skip_before_filter :iphone_login_required.

Using iUI and creating iPhone views

The iUI framework, based on Joe Hewitt’s iPhone navigation work, hugely simplifies iPhone web development. All you need to do is include the iUI JavaScript and CSS files along with included images and create your views in a particular structure to have native iPhone behaviour such as sliding menus and AJAX page loading.

Rails 2 makes it trivial to create different views depending upon the format, including layouts. Our iPhone layout includes a few specifics for iUI and a viewport meta tag for the device.

app/views/layouts/application.iphone.erb

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <meta id="viewport" name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"/>
    <title><%= @page_title -%></title>
  <%= stylesheet_link_tag 'iui' %>
  <%= javascript_include_tag 'iui' %>
</head>
<body>
    <div class="toolbar">
        <h1 id="pageTitle"></h1>
        <a id="backButton" class="button" href="#"></a>
    </div>

    <%= yield %>
</body>
</html>

When creating your iPhone views you should follow the iUI style guide; an example page is given below. Standard hyperlinks are loaded using AJAX and slide into view, and navigating back is handled by iUI. Links may be given target="_self" to replace the entire page or target="_replace" to replace the element with the response (using AJAX).

index.iphone.erb


<ul title="Home" selected="true">
    <li><%= link_to 'Example action', example_path %></li>
    <li><%= link_to 'Logout', logout_path, :method => :delete, :target => '_self' %></li>
</ul>


show.iphone.erb

<div class="panel" title="Example" selected="true">
    <h2>Example Content</h2>
    <p>Here's some content</p>
</div>

It’s important to remember that iUI will load content using AJAX, thus you only need to render a layout (such as application.iphone.erb) for the first request or page of your iPhone site. All following requests should use render :layout => false (unless loaded into a new page with target="_replace"). If you experience any weird rendering issues, this irregularity is the most likely cause.

respond_to do |format|
    format.iphone do  # action.iphone.erb
      render :layout => false
    end
end

References

The following resources on the new Rails 2 iPhone format ability and the iUI library were extremely helpful; the documentation from Apple, not so much!

Quoted from slashdotdash.net

Ruby : Timeout code execution

Just a small tip, if you wish to ensure a snippet of Ruby code doesn’t run for too long you can use the timeout function. You might want to do this when making a request to a remote server with net/http for example.

timeout.rb

A way of performing a potentially long-running operation in a thread, and terminating its execution if it hasn’t finished within a fixed amount of time.
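Before the feed example, here is a minimal self-contained illustration of the pattern, using sleep as a stand-in for a slow network call (fetch_with_timeout is a made-up helper, not part of timeout.rb):

```ruby
require 'timeout'

# Hypothetical helper: run a (fake) slow operation under a time limit,
# returning nil when it takes too long
def fetch_with_timeout(seconds)
  Timeout.timeout(seconds) do
    sleep 0.1       # stand-in for a slow network call
    "parsed feed"
  end
rescue Timeout::Error
  nil               # too slow!!
end

puts fetch_with_timeout(1).inspect     # "parsed feed" -- finished in time
puts fetch_with_timeout(0.01).inspect  # nil           -- timed out
```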

Here’s a quick example using the excellent rFeedParser (Universal Feed Parser in Ruby) to fetch an RSS feed.

require 'timeout'
require 'zlib'
require 'rubygems'
require 'rfeedparser'

fp = nil
begin
  # Don't take longer than 20 seconds to retrieve & parse an RSS feed
  Timeout::timeout(20) do
    fp = FeedParser.parse("http://feeds.feedburner.com/slashdotdash")
  end
rescue Timeout::Error
  # Too slow!!
end

Quoted from slashdotdash.net

GIT Cheatsheet

Setup
-----

git clone <repo>
  clone the repository specified by <repo>; this is similar to "checkout" in
  some other version control systems such as Subversion and CVS

Who doesn't like colors?  Optionally add the following to your ~/.gitconfig
file:

  [color]
    branch = auto
    diff = auto
    status = auto
  [color "branch"]
    current = yellow reverse
    local = yellow
    remote = green
  [color "diff"]
    meta = yellow bold
    frag = magenta bold
    old = red bold
    new = green bold
  [color "status"]
    added = yellow
    changed = green
    untracked = cyan

Configuration
-------------

git config user.email johndoe@example.com
  Sets your email for commit messages.

git config user.name 'John Doe'
  Sets your name for commit messages.

git config branch.autosetupmerge true
  Tells git-branch and git-checkout to setup new branches so that git-pull(1)
  will appropriately merge from that remote branch.  Recommended.  Without this,
  you will have to add --track to your branch command or manually merge remote
  tracking branches with "fetch" and then "merge".

You can add "--global" after "git config" to any of these commands to make it
apply to all git repos (writes to ~/.gitconfig).

Info
----

git diff
  show a diff of the changes made since your last commit

git status
  show files added to the index, files with changes, and untracked files

git log
  show recent commits, most recent on top

git show <rev>
  show the changeset (diff) of a commit specified by <rev>, which can be any
  SHA1 commit ID, branch name, or tag

git blame <file>
  show who authored each line in <file>

git blame <file> <rev>
  show who authored each line in <file> as of <rev> (allows blame to go back in
  time)

Adding / Deleting
-----------------

git add <file1> <file2> ...
  add <file1>, <file2>, etc... to the project

git add <dir>
  add all files under directory <dir> to the project, including subdirectories

git add .
  add all files under the current directory to the project

git rm <file1> <file2> ...
  remove <file1>, <file2>, etc... from the project

Committing
----------

git commit <file1> <file2> ... [-m <msg>]
  commit <file1>, <file2>, etc..., optionally using commit message <msg>,
  otherwise opening your editor to let you type a commit message

git commit -a [-m <msg>]
  commit all files changed since your last commit, optionally using commit
  message <msg>

git commit -v [-m <msg>]
  commit verbosely, i.e. include the diff of the contents being committed in the
  commit message screen

Sharing
-------

git pull
  update the current branch with changes from the server.  Note: .git/config
  must have a [branch "some_name"] section for the current branch.  Git 1.5.3
  and above adds this automatically.

git push
  update the server with your commits across all branches that are *COMMON*
  between your local copy and the server.  Local branches that were never pushed
  to the server in the first place are not shared.

git push origin <branch>
  update the server with your commits made to <branch> since your last push. 
  This is always *required* for new branches that you wish to share.  After the
  first explicit push, "git push" by itself is sufficient.

Branching
---------

git branch
  list all local branches

git branch -r
  list all remote branches

git branch -a
  list all local and remote branches

git branch <branch>
  create a new branch named <branch>, referencing the same point in history as
  the current branch

git branch <branch> <start-point>
  create a new branch named <branch>, referencing <start-point>, which may be
  specified any way you like, including using a branch name or a tag name

git branch --track <branch> <remote-branch>
  create a tracking branch. Will push/pull changes to/from another repository.
  Example: git branch --track experimental origin/experimental

git branch -r -d <remote branch>
  delete a "local remote" branch, used to delete a tracking branch.
  Example: git branch -r -d wycats/master

git branch -d <branch>
  delete the branch <branch>; if the branch you are deleting points to a commit
  which is not reachable from the current branch, this command will fail with a
  warning.

git branch -D <branch>
  even if the branch points to a commit not reachable from the current branch,
  you may know that that commit is still reachable from some other branch or
  tag. In that case it is safe to use this command to force git to delete the
  branch.
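The branch commands above can be tried end-to-end in a throwaway repository;
the following sketch (all names and paths are made up for the demo) creates a
branch, lists it, and safely deletes it:

```shell
#!/bin/sh
# Throwaway repository demonstrating branch creation, listing, and safe deletion.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  Demo
git checkout -q -b main               # name the initial branch explicitly
echo hello > README
git add README
git commit -q -m "initial commit"

git branch experimental               # new branch at the same point as main
git branch                            # lists both branches, * marks the current one
git branch -d experimental            # safe delete: its commit is reachable from main
```

Because "experimental" never diverged from main, the plain `-d` delete
succeeds without needing the forced `-D` form.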

git checkout <branch>
  make the current branch <branch>, updating the working directory to reflect
  the version referenced by <branch>

git checkout -b <new> <start-point>
  create a new branch <new> referencing <start-point>, and check it out.

git remote add <remote> <url>
  add a remote repository to your git config; its branches can then be fetched
  locally.
  Example: git remote add coreteam git://github.com/wycats/merb-plugins.git

git push <repository> :refs/heads/<branch>
  remove a branch from a remote repository.
  Example: git push origin :refs/heads/old_branch_to_be_deleted
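The push/delete cycle can be sketched with a local bare repository standing in
for the server (all paths here are throwaway assumptions, not a real setup):

```shell
#!/bin/sh
# A local bare repository plays the role of the remote server.
set -e
work=$(mktemp -d)
git init -q --bare "$work/server.git"
git init -q "$work/local"
cd "$work/local"
git config user.email demo@example.com
git config user.name  Demo
git checkout -q -b main
echo v1 > file.txt
git add file.txt
git commit -q -m "first commit"

git remote add origin "$work/server.git"   # register the "server" as a remote
git push -q origin main                    # a new branch must be pushed by name once

git checkout -q -b topic
echo v2 >> file.txt
git commit -q -am "topic work"
git push -q origin topic                   # share the topic branch
git push -q origin :refs/heads/topic       # then delete it from the server again
```

After the final push, "topic" still exists locally but is gone from the
server, while "main" remains shared.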

Merging
-------

git merge <branch>
  merge branch <branch> into the current branch; this command is idempotent and
  can be run as many times as needed to keep the current branch up-to-date with
  changes in <branch>

git merge <branch> --no-commit
  merge branch <branch> into the current branch, but do not autocommit the
  result; allows you to make further tweaks

git merge <branch> -s ours
  merge branch <branch> into the current branch, but in the case of any
  conflicts, the files in the current branch win.

Conflicts
---------

If merging resulted in conflicts in file(s) <file1>, <file2>, etc..., resolve
the conflict(s) manually and then do:

  git add <file1> <file2> ...
  git commit -a  
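The whole conflict-and-resolve cycle can be reproduced in a throwaway
repository (file names and contents below are invented for the demo):

```shell
#!/bin/sh
# Provoke a merge conflict, then resolve it by hand and commit the result.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  Demo
git checkout -q -b main
echo "original line" > greeting.txt
git add greeting.txt
git commit -q -m "base"

git checkout -q -b feature
echo "feature version" > greeting.txt
git commit -q -am "feature change"

git checkout -q main
echo "main version" > greeting.txt
git commit -q -am "main change"

git merge feature || true                 # fails: both branches edited the same line
echo "merged version" > greeting.txt      # fix the conflict manually
git add greeting.txt
git commit -q -m "merge feature branch, conflict resolved"
```

Committing after `git add` concludes the merge; git records it as a merge
commit with both branches as parents.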

Reverting
---------

git revert <rev>
  reverse the commit specified by <rev> and commit the result.  This does *not*
  do the same thing as similarly named commands in other VCSs such as "svn
  revert" or "bzr revert", see below

git checkout <file>
  re-checkout <file>, overwriting any local changes

git checkout .
  re-checkout all files, overwriting any local changes.  This is most similar to
  "svn revert" if you're used to Subversion commands
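A minimal sketch of discarding a local edit this way (file name and contents
are made up):

```shell
#!/bin/sh
# Discard an uncommitted edit by re-checking out the committed version.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  Demo
echo "committed content" > notes.txt
git add notes.txt
git commit -q -m "add notes"

echo "scratch edit" > notes.txt   # a local change we decide to abandon
git checkout -- notes.txt         # overwrite it with the committed version
```

The `--` separator makes it unambiguous that notes.txt is a path, not a
branch name.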

Undo
----

git reset --hard
  abandon everything since your last commit; this command can be DANGEROUS.  If
  merging has resulted in conflicts and you'd like to just forget about the
  merge, this command will do that

git reset --hard ORIG_HEAD
  undo your most recent *successful* merge *and* any changes that occurred
  after.  Useful for forgetting about the merge you just did.  If there are
  conflicts (the merge was not successful), use "git reset --hard" (above)
  instead.

git reset --soft HEAD^
  undo your last commit
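The soft reset can be demonstrated in a scratch repository (file names are
invented for the demo); the undone commit's changes remain staged:

```shell
#!/bin/sh
# Undo the most recent commit while keeping its changes in the index.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  Demo
echo one > a.txt
git add a.txt
git commit -q -m "first"

echo two > b.txt
git add b.txt
git commit -q -m "second (the commit we will undo)"

git reset --soft HEAD^            # HEAD moves back one commit; b.txt stays staged
```

A subsequent `git commit` would recreate the second commit, optionally with a
different message.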

------------------
Quoted from errtheblog.com

Rails performance – using YSlow

YSlow from Yahoo! is a Firefox add-on to analyse web pages and tell you why they’re slow based on rules for high performance web sites. YSlow requires the indispensable Firebug extension.

The 13 rules YSlow checks your site against are as follows:

------------------

1. Make Fewer HTTP Requests
2. Use a Content Delivery Network
3. Add an Expires Header
4. Gzip Components
5. Put CSS at the Top
6. Move Scripts to the Bottom
7. Avoid CSS Expressions
8. Make JavaScript and CSS External
9. Reduce DNS Lookups
10. Minify JavaScript
11. Avoid Redirects
12. Remove Duplicate Scripts
13. Configure ETags

------------------

This post will demonstrate that most of these are easily achievable for a
Rails website through a combination of plugins and correct configuration of a
proxy web server (in front of a mongrel cluster), in this case Nginx. This
guide follows experience with improving performance for trawlr.com (an online
RSS reader).

------------------

Make Fewer HTTP Requests, Minify JavaScript, Put CSS at the Top, Move Scripts to the Bottom, Remove Duplicate Scripts

The easiest way to make fewer HTTP requests is to combine all JavaScript and CSS files into one. The asset packager plugin does exactly this, plus it will also compress the source files (in production mode) and correctly handles caching (without query string parameters).

Moving CSS to the top (within the head section) and moving JavaScript to the bottom of the page are both manual tasks that should be done in the layout templates (such as app/views/layouts/application.rhtml). Remember to use stylesheet_link_merged :base and javascript_include_merged :base rather than the default Rails helpers.

By using asset packager you can also verify that scripts are only included once – another performance hit otherwise!

Excluding the Google analytics JavaScript file, trawlr.com now uses a single css and js file (including the entire prototype library). Note: You may need to add a missing semi-colon as per this defect for prototype to work correctly.

Asset Packager can be included as part of a Capistrano deployment with the following recipe:

desc "Compress JavaScript and CSS files using asset_packager"
task :after_update_code, :roles => [:web] do
  run <<-EOF
    cd #{release_path} &&
    rake RAILS_ENV=production asset:packager:build_all
  EOF
end

Add an Expires Header

A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Nginx allows adding arbitrary HTTP headers via the expires and add_header directives. Adding the expires header to static content is done with a regular expression looking for relevant file extensions in the request URL. This example uses the maximum expiry date but could be set to more appropriate values as required (e.g. 24h, 7d, 1M).

# Add expires header for static content
location ~* \.(js|css|jpg|jpeg|gif|png)$ {
  if (-f $request_filename) {
    expires max;
    break;
  }
}

Gzip Components

Nginx can gzip any responses – including those proxied from a mongrel cluster.

gzip on;
gzip_min_length  1100;
gzip_buffers     4 8k;
gzip_proxied any;
gzip_types  text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

Rails searching with Sphinx

------------------

$ wget http://www.sphinxsearch.com/downloads/sphinx-0.9.7.tar.gz
$ tar xvzf sphinx-0.9.7.tar.gz
$ cd sphinx-0.9.7
$ ./configure --with-mysql-includes=/opt/local/include/mysql5/mysql/ --with-mysql-libs=/opt/local/lib/mysql5/mysql/
$ make
$ sudo make install

------------------

$ rake sphinx:index
$ rake sphinx:start

$ time rake sphinx:index

using config file 'sphinx.conf'...
indexing index 'items'...
collected 1455733 docs, 1255.2 MB
sorted 182.4 Mhits, 100.0% done
total 1455733 docs, 1255246639 bytes
total 438.695 sec, 2861316.50 bytes/sec, 3318.32 docs/sec

real    7m25.307s
user    4m28.963s
sys     0m17.578s

------------------

Searching with acts_as_sphinx via the console (ruby script/console) for the term 'Google', sorted by published date:

>> search = Item.find_with_sphinx 'Google', :sphinx => {:sort_mode => [:attr_desc, 'pub_date'], :page => 1}, :order => 'items.pub_date DESC'; 0
=> 0
>> search.total
=> 1000
>> search.total_found
=> 73717
>> search.time
=> "0.000"

Within the Rails controller, search is done via:

@items = Item.find_with_sphinx(params[:query],
      :sphinx => {:sort_mode => [:attr_desc, 'pub_date'], :limit => 50, :page => (params[:page] || 1)},
      :order => 'items.pub_date DESC')

Updating the Sphinx index

There’s another rake task for updating the Sphinx index which can be called via a cron job, rather than ‘live’ updates. The rotate command allows the index to be rebuilt whilst the Sphinx daemon is running, forcing a restart once completed.
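As a sketch, such a cron job might look like the following crontab entry (the
deployment path and hourly schedule are placeholder assumptions, not from the
original post):

```shell
# Hypothetical crontab entry: rebuild and rotate the Sphinx index every hour.
# /var/www/app/current is a placeholder path for the deployed Rails app.
0 * * * * cd /var/www/app/current && RAILS_ENV=production rake sphinx:rotate
```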

$ rake sphinx:rotate

------------------
Quoted from slashdotdash.net/