Friday, December 9, 2011

Oracle finding columns

 


Find all tables that have at least one column whose name matches a specific PATTERN:

SELECT TABLE_NAME,
       COLUMN_NAME
FROM   ALL_TAB_COLUMNS
WHERE  COLUMN_NAME LIKE '%PATTERN%';

Thursday, December 8, 2011

iphone courses online

iOS programming resources online:

http://teamtreehouse.com

Friday, November 25, 2011

who is locking whom (oracle locks)

select s1.username || '@' || s1.machine
    || ' ( SID=' || s1.sid || ' ) is blocking '
    || s2.username || '@' || s2.machine
    || ' ( SID=' || s2.sid || ' )' as blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid and s2.sid = l2.sid
  and l1.block = 1 and l2.request > 0
  and l1.id1 = l2.id1
  and l1.id2 = l2.id2;

Thursday, November 17, 2011

Oracle alter table add or drop column

alter table supplier drop column supplier_id;

alter table cust_table
add cust_sex varchar2(1) NOT NULL;

Here is an example of Oracle "alter table" syntax to add multiple data columns.

ALTER TABLE cust_table
ADD (
  cust_sex            char(1) NOT NULL,
  cust_credit_rating  number
);

Oracle add or drop primary key

ALTER TABLE supplier
add CONSTRAINT supplier_pk PRIMARY KEY (supplier_id);

ALTER TABLE supplier
drop CONSTRAINT supplier_pk;

Monday, October 17, 2011

Installing XCode/iOS Simulator on Mac OS 10.6.1

First you need to upgrade your Mac OS.
Click on the Apple icon -> Software Update and follow the instructions.
In my case it downloaded 1.33 GB of updates.
The first time the download didn't complete correctly, so it asked me to download again.
I did that, restarted, and found myself on Mac OS 10.6.8.

Then I installed the .dmg file for the iOS SDK (4 GB). Instructions.
So, in all, I ended up downloading 7 GB of data just to install XCode on my Mac. Amazing!

Sunday, October 16, 2011

Downloading iOS SDK from command line through wget

1. Install Export Cookies Extension for Firefox.
2. Login to the iOS Dev Center and start downloading the file; after a while, cancel the download (from the Downloads window, grab the download URL).
3. Export cookies to cookies.txt.
4. Upload cookies.txt to the server where you want to use wget.
5. Login to your server and run:
wget -U firefox -ct 0 --timeout=60 --waitretry=60 --load-cookies cookies.txt -c <download_url>
6. My download URL:
http://adcdownload.apple.com/Developer_Tools/xcode_3.2.6_and_ios_sdk_4.3__final/xcode_3.2.6_and_ios_sdk_4.3.dmg
7. This file is roughly 4 GB in size!

Installing Google Command Line

http://code.google.com/p/googlecl/wiki/Install

Friday, October 14, 2011

oracle table row count

SELECT table_name, nvl(num_rows,1)
FROM   dba_tables
WHERE  table_name LIKE 'XYZ';

Note that num_rows comes from the optimizer statistics, so the count is only as fresh as the last statistics gathering.

Sunday, October 9, 2011

Mac OS : Forcing black ink printing for Canon MP280 series printer

System Preferences -> 
Printer & Fax -> 
Select Canon MP280 on left -> 
Options & Supplies ->
Utility ->
Open Printer Utility ->
Ink Cartridge Settings ->
Ink Cartridge : Black Only

Wednesday, October 5, 2011

Objective C Programming on Windows

Source

Steps:
1. Download and install these 3 exes: GNUstep 0.23 (System | Core | Devel)
2. Open Programs -> Gnustep -> Shell
3. Create helloworld.m :

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Hello World!");
    [pool drain];
    return 0;
}

 4. gcc -o helloworld helloworld.m -I /GNUstep/System/Library/Headers -L /GNUstep/System/Library/Libraries -lobjc -lgnustep-base -fconstant-string-class=NSConstantString

Mac OS quitting the frontmost application

Command + Q

Mac OS Keyboard shortcut for spotlight

Command + Space Bar

Tuesday, October 4, 2011

Mac Window maximize keyboard shortcut

System Preferences
-> Keyboard & Mouse
-> Keyboard Shortcuts
-> Click the +
-> Make sure it says All Applications in the dropdown.
-> In the menu title, type "Zoom" (capitalization matters, I believe).
-> Click in the keyboard shortcut area and set the keyboard shortcut
you want. I'd make it alt-space-x.

Useful gmail keyboard shortcuts

g k -> show task window
Esc -> hide task window
tab Enter -> send mail

Monday, October 3, 2011

Pig RegExp Match

recs = load 'a.txt';                          -- each line loads as a single-field tuple
recs1 = filter recs by $0 matches '.*hiv.*';  -- keep records whose first field contains "hiv"
dump recs1;

Tuesday, September 27, 2011

computer programming for kids

http://www.flipkart.com/books/8184890923

oracle removing all locks

SELECT l.inst_id,
       SUBSTR(l.oracle_username,1,8) ora_user,
       SUBSTR(l.session_id,1,3) sid,
       s.serial#,
       SUBSTR(o.owner||'.'||o.object_name,1,40) object,
       p.spid os_pid,
       DECODE(l.locked_mode, 0,'NONE',
                             1,'NULL',
                             2,'ROW SHARE',
                             3,'ROW EXCLUSIVE',
                             4,'SHARE',
                             5,'SHARE ROW EXCLUSIVE',
                             6,'EXCLUSIVE',
                             NULL) lock_mode
FROM   sys.gv_$locked_object l,
       dba_objects o,
       sys.gv_$session s,
       sys.gv_$process p
WHERE  l.object_id = o.object_id
AND    l.inst_id = s.inst_id
AND    l.session_id = s.sid
AND    s.inst_id = p.inst_id
AND    s.paddr = p.addr(+)
ORDER BY l.inst_id;

alter system kill session '247,52553' immediate;
where 247 is the SID and 52553 is the serial#.

Wednesday, August 24, 2011

Oracle explain plan


1. explain plan set statement_id = 'bad6' for
select a from b;

Then:

2. SELECT cardinality "Rows",
          lpad(' ',level-1)||operation||' '||
          options||' '||object_name "Plan"
   FROM PLAN_TABLE
   CONNECT BY prior id = parent_id
          AND prior statement_id = statement_id
   START WITH id = 0
          AND statement_id = 'bad6'
   ORDER BY id;

OR

2. SELECT * FROM TABLE(dbms_xplan.display('PLAN_TABLE','bad6','ALL'));

Friday, August 12, 2011

Perl Dollar Dollar ($$)

Do you know what $$ stands for in Perl?
It's the process id of the current process.

print $$;

will print the process id of the current process.

Friday, July 29, 2011

Making the hand tool the default tool on Adobe Reader for Mac OS X

Source

To set Adobe Reader X to open with the hand tool active, what I did was:

1. Open Adobe Reader X
2. Right-click on the toolbar
3. Mouse down to "Select and Zoom"
4. Click on the Hand Tool
5. The Hand Tool appears on the Tool Bar
6. Click on the Hand Tool
7. Close Adobe Reader X

The next time you open the Reader the Hand Tool is selected.

Thursday, July 28, 2011

PageRank PPT

PageRank

eigenvector nice explanation






Source : http://www.sosmath.com/matrix/markov/markov.html

 Markov Chains

In a previous page, we studied the movement between the city and suburbs. Indeed, if I and S are the initial populations of the inner city and the suburban area, and if we assume that every year 40% of the inner city population moves to the suburbs, while 30% of the suburb population moves to the inner part of the city, then after one year the populations are given by

\begin{displaymath}\left(\begin{array}{c} 0.6 I + 0.3 S\\ 0.4 I + 0.7 S\\ \end{array}\right) = \left(\begin{array}{cc} 0.6&0.3\\ 0.4&0.7\\ \end{array}\right) \left(\begin{array}{c} I\\ S\\ \end{array}\right).\end{displaymath}

The matrix 

\begin{displaymath}P = \left(\begin{array}{cc} 0.6&0.3\\ 0.4&0.7\\ \end{array}\right) \end{displaymath}

is very special. Indeed, the entries of each column vector are positive and their sum is 1. Such vectors are called probability vectors. A matrix for which all the column vectors are probability vectors is called a transition or stochastic matrix. Andrei Markov, a Russian mathematician, was the first one to study these matrices. At the beginning of this century he developed the fundamentals of the Markov Chain theory.
A Markov chain is a process that consists of a finite number of states and some known probabilities p_{ij}, where p_{ij} is the probability of moving from state j to state i. In the example above, we have two states: living in the city and living in the suburbs. The number p_{ij} represents the probability of moving from state j to state i in one year. We may have more than two states. For example, political affiliation: Democrat, Republican, and Independent. For example, p_{ij} represents the probability of a son belonging to party i if his father belonged to party j.
Of particular interest is a probability vector p such that $A \mbox{\bf p} = \mbox{\bf p}$, that is, an eigenvector of A associated to the eigenvalue 1. Such a vector is called a steady state vector. In the example above, the steady state vectors are given by the system

\begin{displaymath}\left(\begin{array}{cc} 0.6&0.3\\ 0.4&0.7\\ \end{array}\right)\cdot X = X \;\;\Longleftrightarrow\;\; \left(\begin{array}{cc} -0.4&0.3\\ 0.4&-0.3\\ \end{array}\right)\cdot X = {\cal O}.\end{displaymath}

This system reduces to the equation -0.4 x + 0.3 y = 0. It is easy to see that, if we set $x = 0.3 \alpha$, then 

\begin{displaymath}X = \left(\begin{array}{c} x\\ y\\ \end{array}\right) = \alpha \left(\begin{array}{c} 0.3\\ 0.4\\ \end{array}\right).\end{displaymath}

So the vector $\mbox{\bf p}_1 = \displaystyle \left(\begin{array}{c} 0.3\\ 0.4\\ \end{array}\right)$ is a steady state vector of the matrix above. So if the populations of the city and the suburbs are given by the vector $\mbox{\bf p}_1$, after one year the proportions remain the same (though the people may move between the city and the suburbs). 
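As a quick numerical check, here is a minimal NumPy sketch (assuming NumPy is available) that recovers this steady state; normalizing the eigenvector to sum to 1 gives (3/7, 4/7), which is proportional to p_1 = (0.3, 0.4):

import numpy as np

# Transition matrix of the city/suburb example.
P = np.array([[0.6, 0.3],
              [0.4, 0.7]])

# A steady state vector is an eigenvector of P for the eigenvalue 1.
vals, vecs = np.linalg.eig(P)
v = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
v = v / v.sum()   # scale the entries so they sum to 1
print(v)          # [0.42857143 0.57142857], i.e. (3/7, 4/7)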

Let us discuss another example on population dynamics. 

Example: Age Distribution of Trees in a Forest 
Trees in a forest are assumed in this simple model to fall into four age groups: b(k) denotes the number of baby trees in the forest (age group 0-15 years) at a given time period k; similarly y(k), m(k) and o(k) denote the number of young trees (16-30 years of age), middle-aged trees (age 31-45), and old trees (older than 45 years of age), respectively. The length of one time period is 15 years.
How does the age distribution change from one time period to the next? The model makes the following three assumptions:

  • A certain percentage of trees in each age group dies.
  • Surviving trees enter into the next age group; old trees remain old.
  • Lost trees are replaced by baby trees. 

Note that the total tree population does not change over time. 

We obtain the following difference equations: 

\begin{eqnarray*}
b(k+1) &=& d_b\cdot b(k) + d_y\cdot y(k) + d_m\cdot m(k) + d_o\cdot o(k) \qquad (1)\\
y(k+1) &=& (1-d_b)\cdot b(k) \qquad (2)\\
m(k+1) &=& (1-d_y)\cdot y(k) \qquad (3)\\
o(k+1) &=& (1-d_m)\cdot m(k) + (1-d_o)\cdot o(k) \qquad (4)
\end{eqnarray*}

Here 0 < d_b, d_y, d_m, d_o < 1 denote the loss rates in each age group in percent.

Let 

\begin{displaymath}{\bf x}(k)=\left(\begin{array}{c}b(k)\\ y(k)\\ m(k)\\ o(k)\end{array}\right)\end{displaymath}

be the ``age distribution vector". Consider the matrix 

\begin{displaymath}A = \left(\begin{array}{cccc} d_b&d_y&d_m&d_o\\ 1-d_b&0&0&0\\ 0&1-d_y&0&0\\ 0&0&1-d_m&1-d_o\\ \end{array}\right).\end{displaymath}

Then we have 

\begin{displaymath}{\bf x}(k+1)=A\cdot {\bf x}(k).\end{displaymath}

Note that the matrix A is a stochastic matrix.
If d_b = 0.1, d_y = 0.2, d_m = 0.3 and d_o = 0.4, then

\begin{displaymath}A = \left(\begin{array}{cccc} 0.1&0.2&0.3&0.4\\ 0.9&0&0&0\\ 0&0.8&0&0\\ 0&0&0.7&0.6\\ \end{array}\right).\end{displaymath}

After easy calculations, we find the steady state vector for the age distribution in the forest: 

\begin{displaymath}\mbox{\bf p} = \frac{1}{3.88}\left(\begin{array}{c} 1\\ 0.9\\ 0.72\\ 1.26\\ \end{array}\right).\end{displaymath}
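A one-line NumPy check (again assuming NumPy) that this p really satisfies A p = p:

import numpy as np

# Age-distribution matrix with d_b = 0.1, d_y = 0.2, d_m = 0.3, d_o = 0.4.
A = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.9, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.6]])

p = np.array([1.0, 0.9, 0.72, 1.26]) / 3.88
print(np.allclose(A @ p, p))   # True: p is a steady state vector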

Assume a total tree population of 50,000 trees. Suppose the forest is newly planted, i.e. 

\begin{displaymath}{\bf x}(0)=\left(\begin{array}{c}50,000\\ 0\\ 0\\ 0\end{array}\right)\end{displaymath}

After 15 years, the age distribution in the forest is given by 

\begin{displaymath}{\bf x}(1) = A\cdot{\bf x}(0) = 50,000\left(\begin{array}{c} 0.1\\ 0.9\\ 0\\ 0\\ \end{array}\right).\end{displaymath}

After 30 years, we have 

\begin{displaymath}{\bf x}(2) = A\cdot{\bf x}(1) = 50,000\left(\begin{array}{c} 0.19\\ 0.09\\ 0.72\\ 0\\ \end{array}\right)\end{displaymath}

and after 45 years 

\begin{displaymath}{\bf x}(3) = A\cdot{\bf x}(2) = 50,000\left(\begin{array}{c} 0.253\\ 0.171\\ 0.072\\ 0.504\\ \end{array}\right)\end{displaymath}

After 15n years, where $n=1,2,\cdots$, the age distribution in the forest is given by 

\begin{displaymath}{\bf x}(n) = A^n\cdot{\bf x}(0) = 50,000\, A^n\left(\begin{array}{c} 1\\ 0\\ 0\\ 0\\ \end{array}\right).\end{displaymath}

So the problem is to find the nth power of the matrix A. We have seen that the diagonalization technique may be helpful to solve this problem. Another problem deals with the long term behavior of the sequence x(n) when n gets large.

The calculations on the example above become tedious. Let us illustrate the problem on a small matrix.

Example. Consider the stochastic matrix 

\begin{displaymath}A = \left(\begin{array}{cccc} 0.8&0.2\\ 0.2&0.8\\ \end{array}\right).\end{displaymath}

Note this is a symmetric matrix. The characteristic polynomial of A is 

\begin{displaymath}p(\lambda) = (0.8 - \lambda)^2-0.2^2 = (1-\lambda)(0.6 - \lambda)\end{displaymath}

An eigenvector associated to 1 is 

\begin{displaymath}\left(\begin{array}{cccc} 1\\ 1\\ \end{array}\right)\end{displaymath}

and an eigenvector associated to 0.6 is 

\begin{displaymath}\left(\begin{array}{cccc} 1\\ -1\\ \end{array}\right).\end{displaymath}

If we set 

\begin{displaymath}P = \left(\begin{array}{rr} 1&1\\ 1&-1\\ \end{array}\right),\end{displaymath}

then we have 

\begin{displaymath}P^{-1}AP = D = \left(\begin{array}{cc} 1&0\\ 0&0.6\\ \end{array}\right).\end{displaymath}

So, we have 

\begin{displaymath}A^n = P D^nP^{-1} = P\left(\begin{array}{cc} 1&0\\ 0&(0.6)^n\\ \end{array}\right)P^{-1}.\end{displaymath}

When n gets large, the matrices A^n get closer to the matrix

\begin{displaymath}P\left(\begin{array}{cc} 1&0\\ 0&0\\ \end{array}\right)P^{-1}.\end{displaymath}

So the sequence of vectors defined by 

\begin{displaymath}X(n+1) = A X(n),\;\;\mbox{given $X(0)$}\end{displaymath}

will get closer to 

\begin{displaymath}X(\infty) = P\left(\begin{array}{cc} 1&0\\ 0&0\\ \end{array}\right)P^{-1}X(0)\end{displaymath}

when n gets large. If $X(0) = \displaystyle \left(\begin{array}{cc} a\\ b\\ \end{array}\right)$, then we have 

\begin{displaymath}X(\infty) = \left(\begin{array}{rr} 1&1\\ 1&-1\\ \end{array}\right)\left(\begin{array}{cc} 1&0\\ 0&0\\ \end{array}\right)\left(\begin{array}{rr} 1&1\\ 1&-1\\ \end{array}\right)^{-1}\left(\begin{array}{c} a\\ b\\ \end{array}\right) = \frac{a+b}{2}\left(\begin{array}{c} 1\\ 1\\ \end{array}\right).\end{displaymath}

Note that the vector $X(\infty)$ is proportional to the unique steady state vector of A 

\begin{displaymath}{\bf p} = \frac{1}{2}\left(\begin{array}{cc} 1\\ 1\\ \end{array}\right).\end{displaymath}

This is not surprising. In fact there is a general result similar to the one above for any stochastic matrix.
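The limit above is easy to confirm numerically; a minimal NumPy sketch (assuming NumPy) shows A^n approaching the matrix with every entry 1/2:

import numpy as np

# Stochastic matrix from the example above (eigenvalues 1 and 0.6).
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])

# Since 0.6^n -> 0, A^n -> P diag(1, 0) P^(-1), whose entries are all 1/2.
print(np.linalg.matrix_power(A, 50))   # ~[[0.5, 0.5], [0.5, 0.5]]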

Author: M.A. Khamsi

pagerank/eigenvector



Source



How Google Finds Your Needle in the Web's Haystack

As we'll see, the trick is to ask the web itself to rank the importance of pages...
David Austin
Grand Valley State University
david at merganser.math.gvsu.edu 

Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it?
Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly-disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one) so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb.
Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.) Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list.
One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias.
Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.

How to tell who's important

If you've ever created a web page, you've probably included links to other pages that contain valuable, reliable information. By doing so, you are affirming the importance of the pages you link to. Google's PageRank algorithm stages a monthly popularity contest among all pages on the web to decide which pages are most important. The fundamental idea put forth by PageRank's creators, Sergey Brin and Lawrence Page, is this: the importance of a page is judged by the number of pages linking to it as well as their importance.
We will assign to each web page P a measure of its importance I(P), called the page's PageRank. At various sites, you may find an approximation of a page's PageRank. (For instance, the home page of The American Mathematical Society currently has a PageRank of 8 on a scale of 10. Can you find any pages with a PageRank of 10?) This reported value is only an approximation since Google declines to publish actual PageRanks in an effort to frustrate those who would manipulate the rankings.
Here's how the PageRank is determined. Suppose that page P_j has l_j links. If one of those links is to page P_i, then P_j will pass on 1/l_j of its importance to P_i. The importance ranking of P_i is then the sum of all the contributions made by pages linking to it. That is, if we denote the set of pages linking to P_i by B_i, then

\[  I(P_i)=\sum_{P_j\in B_i} \frac{I(P_j)}{l_j}  \]

This may remind you of the chicken and the egg: to determine the importance of a page, we first need to know the importance of all the pages linking to it. However, we may recast the problem into one that is more mathematically familiar.
Let's first create a matrix, called the hyperlink matrix, $ {\bf H}=[H_{ij}] $  in which the entry in the ith row and jth column is

\[  H_{ij}=\left\{\begin{array}{ll}1/l_{j} &  	\hbox{if } P_j\in B_i \\ 	0 & \hbox{otherwise} 	\end{array}\right.  \]

Notice that H has some special properties. First, its entries are all nonnegative. Also, the sum of the entries in a column is one unless the page corresponding to that column has no links. Matrices in which all the entries are nonnegative and the sum of the entries in every column is one are called stochastic; they will play an important role in our story.
We will also form a vector $ I=[I(P_i)] $  whose components are PageRanks--that is, the importance rankings--of all the pages. The condition above defining the PageRank may be expressed as

\[  I = {\bf H}I  \]

In other words, the vector I is an eigenvector of the matrix H with eigenvalue 1. We also call this a stationary vector of H.
Let's look at an example. Shown below is a representation of a small collection (eight) of web pages with links represented by arrows.


The corresponding matrix is

with stationary vector

This shows that page 8 wins the popularity contest. Here is the same figure with the web pages shaded in such a way that the pages with higher PageRanks are lighter.


Computing I

There are many ways to find the eigenvectors of a square matrix. However, we are in for a special challenge since the matrix H is a square matrix with one column for each web page indexed by Google. This means that H has about n = 25 billion columns and rows. However, most of the entries in H are zero; in fact, studies show that web pages have an average of about 10 links, meaning that, on average, all but 10 entries in every column are zero. We will choose a method known as the power method for finding the stationary vector I of the matrix H.
How does the power method work? We begin by choosing a vector I^0 as a candidate for I and then producing a sequence of vectors I^k by

\[  I^{k+1}={\bf H}I^k  \]

The method is founded on the following general principle that we will soon investigate.

General principle: The sequence I^k will converge to the stationary vector I.

We will illustrate with the example above.

k      0      1      2       3       4       ...  60      61
I^k    1      0      0       0       0.0278  ...  0.06    0.06
       0      0.5    0.25    0.1667  0.0833  ...  0.0675  0.0675
       0      0.5    0       0       0       ...  0.03    0.03
       0      0      0.5     0.25    0.1667  ...  0.0675  0.0675
       0      0      0.25    0.1667  0.1111  ...  0.0975  0.0975
       0      0      0       0.25    0.1806  ...  0.2025  0.2025
       0      0      0       0.0833  0.0972  ...  0.18    0.18
       0      0      0       0.0833  0.3333  ...  0.295   0.295

(each column is the vector I^k; one row per web page)

It is natural to ask what these numbers mean. Of course, there can be no absolute measure of a page's importance, only relative measures for comparing the importance of two pages through statements such as "Page A is twice as important as Page B." For this reason, we may multiply all the importance rankings by some fixed quantity without affecting the information they tell us. In this way, we will always assume, for reasons to be explained shortly, that the sum of all the popularities is one.
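Since the eight-page matrix appears only in the figure, here is a minimal sketch of the iteration I^(k+1) = H I^k on a made-up three-page web (page 1 links to pages 2 and 3, page 2 links to page 3, page 3 links back to page 1), assuming NumPy:

import numpy as np

# Hypothetical three-page web; column j holds 1/l_j for each page that page j links to.
H = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

I = np.array([1.0, 0.0, 0.0])   # initial candidate I^0
for _ in range(100):
    I = H @ I                   # I^(k+1) = H I^k
print(I)                        # converges to the stationary vector (0.4, 0.2, 0.4)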

Three important questions

Three questions naturally come to mind:
  • Does the sequence I^k always converge?
  • Is the vector to which it converges independent of the initial vector I^0?
  • Do the importance rankings contain the information that we want?
Given the current method, the answer to all three questions is "No!" However, we'll see how to modify our method so that we can answer "yes" to all three.
Let's first look at a very simple example. Consider the following small web consisting of two web pages, one of which links to the other:

with matrix

\[  {\bf H}=\left[\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right]  \]

Here is one way in which our algorithm could proceed:

I^0 = (1, 0), I^1 = (0, 1), I^2 = (0, 0), I^3 = (0, 0) = I

In this case, the importance rating of both pages is zero, which tells us nothing about the relative importance of these pages. The problem is that P2 has no links. Consequently, it takes some of the importance from page P1 in each iterative step but does not pass it on to any other page. This has the effect of draining all the importance from the web. Pages with no links are called dangling nodes, and there are, of course, many of them in the real web we want to study. We'll see how to deal with them in a minute, but first let's consider a new way of thinking about the matrix H and stationary vector I.

A probabilistic interpretation of H

Imagine that we surf the web at random; that is, when we find ourselves on a web page, we randomly follow one of its links to another page after one second. For instance, if we are on page P_j with l_j links, one of which takes us to page P_i, the probability that we next end up on page P_i is then $ 1/l_j $ .
As we surf randomly, we will denote by $ T_j $ the fraction of time that we spend on page P_j. Then the fraction of the time that we end up on page P_i coming from P_j is $ T_j/l_j $ . If we end up on P_i, we must have come from a page linking to it. This means that

\[  T_i = \sum_{P_j\in B_i} T_j/l_j  \]

where the sum is over all the pages Pj linking to Pi. Notice that this is the same equation defining the PageRank rankings and so $  I(P_i) = T_i $  . This allows us to interpret a web page's PageRank as the fraction of time that a random surfer spends on that web page. This may make sense if you have ever surfed around for information about a topic you were unfamiliar with: if you follow links for a while, you find yourself coming back to some pages more often than others. Just as "All roads lead to Rome," these are typically more important pages.
Notice that, given this interpretation, it is natural to require that the sum of the entries in the PageRank vector I be one.
Of course, there is a complication in this description: If we surf randomly, at some point we will surely get stuck at a dangling node, a page with no links. To keep going, we will choose the next page at random; that is, we pretend that a dangling node has a link to every other page. This has the effect of modifying the hyperlink matrix H by replacing the column of zeroes corresponding to a dangling node with a column in which each entry is 1/n. We call this new matrix S.
In our previous example, we now have the matrix

\[  {\bf S}=\left[\begin{array}{cc} 0 & 1/2 \\ 1 & 1/2 \end{array}\right]  \]

with eigenvector

\[  I=\left[\begin{array}{c} 1/3 \\ 2/3 \end{array}\right]  \]

In other words, page P_2 has twice the importance of page P_1, which may feel about right to you.
The matrix S has the pleasant property that the entries are nonnegative and the sum of the entries in each column is one. In other words, it is stochastic. Stochastic matrices have several properties that will prove useful to us. For instance, stochastic matrices always have stationary vectors.
For later purposes, we will note that S is obtained from H in a simple way. If A is the matrix whose entries are all zero except for the columns corresponding to dangling nodes, in which each entry is 1/n, then S = H + A.
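In code, building S from H is just a patch over the dangling columns; a minimal NumPy sketch (assuming NumPy), using the two-page example above:

import numpy as np

# Two-page example: P1 links to P2, P2 is a dangling node.
H = np.array([[0.0, 0.0],
              [1.0, 0.0]])
n = H.shape[0]

# Replace each all-zero column with a column of 1/n entries: S = H + A.
A = np.zeros_like(H)
A[:, H.sum(axis=0) == 0] = 1.0 / n
S = H + A
print(S)   # [[0.  0.5], [1.  0.5]]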

How does the power method work?

In general, the power method is a technique for finding an eigenvector of a square matrix corresponding to the eigenvalue with the largest magnitude. In our case, we are looking for an eigenvector of S corresponding to the eigenvalue 1. Under the best of circumstances, to be described soon, the other eigenvalues of S will have a magnitude smaller than one; that is, $ |\lambda| < 1 $  if $ \lambda $  is an eigenvalue of S other than 1.
We will assume that the eigenvalues of S are $ \lambda_j $  and that

\[  1 = \lambda_1 > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n|   \]

We will also assume that there is a basis v_j of eigenvectors for S with corresponding eigenvalues $ \lambda_j $ . This assumption is not necessarily true, but with it we may more easily illustrate how the power method works. We may write our initial vector I^0 as

\[  I^0 = c_1v_1+c_2v_2 + \ldots + c_nv_n  \]

Then

 \begin{eqnarray*} I^1={\bf S}I^0 &=&c_1v_1+c_2\lambda_2v_2 + \ldots + c_n\lambda_nv_n \\ I^2={\bf S}I^1 &=&c_1v_1+c_2\lambda_2^2v_2 + \ldots + c_n\lambda_n^2v_n \\ \vdots & & \vdots \\ I^{k}={\bf S}I^{k-1} &=&c_1v_1+c_2\lambda_2^kv_2 + \ldots + c_n\lambda_n^kv_n \\ \end{eqnarray*}

Since the eigenvalues $ \lambda_j $  with $ j\geq2 $  have magnitude smaller than one, it follows that $ \lambda_j^k\to0 $  if $ j\geq2 $  and therefore $ I^k\to I=c_1v_1 $  , an eigenvector corresponding to the eigenvalue 1.
It is important to note here that the rate at which $ I^k\to I $  is determined by $ |\lambda_2| $  . When $ |\lambda_2| $  is relatively close to 0, then $ \lambda_2^k\to0 $  relatively quickly. For instance, consider the matrix

\[  {\bf S} = \left[\begin{array}{cc}0.65 & 0.35 \\ 0.35 & 0.65 \end{array}\right].   \]

The eigenvalues of this matrix are $ \lambda_1=1 $ and $ \lambda_2=0.3 $ . In the figure below, we see the vectors I^k, shown in red, converging to the stationary vector I shown in green.


Now consider the matrix

\[  {\bf S} = \left[\begin{array}{cc}0.85 & 0.15 \\ 0.15 & 0.85 \end{array}\right].   \]

Here the eigenvalues are $ \lambda_1=1 $ and $ \lambda_2=0.7 $ . Notice how the vectors I^k converge more slowly to the stationary vector I in this example in which the second eigenvalue has a larger magnitude.
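A small numerical experiment (assuming NumPy) makes the dependence on $ |\lambda_2| $ concrete: starting from the same I^0, the distance to the stationary vector (0.5, 0.5) shrinks like $ |\lambda_2|^k $ for each of the two matrices above:

import numpy as np

for lam2, S in [(0.3, np.array([[0.65, 0.35], [0.35, 0.65]])),
                (0.7, np.array([[0.85, 0.15], [0.15, 0.85]]))]:
    I = np.array([1.0, 0.0])
    for _ in range(20):
        I = S @ I
    # The error after 20 steps is 0.5 * lam2^20: tiny for 0.3, much larger for 0.7.
    print(lam2, abs(I[0] - 0.5))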


When things go wrong

In our discussion above, we assumed that the matrix S had the property that $ \lambda_1=1 $  and $  |\lambda_2|<1 $  . This does not always happen, however, for the matrices S that we might find.
Suppose that our web looks like this:


In this case, the matrix S is


Then we see

I^0 = (1, 0, 0, 0, 0)
I^1 = (0, 1, 0, 0, 0)
I^2 = (0, 0, 1, 0, 0)
I^3 = (0, 0, 0, 1, 0)
I^4 = (0, 0, 0, 0, 1)
I^5 = (1, 0, 0, 0, 0)

In this case, the sequence of vectors I^k fails to converge. Why is this? The second eigenvalue of the matrix S satisfies $ |\lambda_2|=1 $ and so the argument we gave to justify the power method no longer holds.
To guarantee that $ |\lambda_2|<1 $ , we need the matrix S to be primitive. This means that, for some m, S^m has all positive entries. In other words, if we are given two pages, it is possible to get from the first page to the second after following m links. Clearly, our most recent example does not satisfy this property. In a moment, we will see how to modify our matrix S to obtain a primitive, stochastic matrix, which therefore satisfies $ |\lambda_2|<1 $ .
Here's another example showing how our method can fail. Consider the web shown below.


In this case, the matrix S is

with stationary vector

Notice that the PageRanks assigned to the first four web pages are zero. However, this doesn't feel right: each of these pages has links coming to them from other pages. Clearly, somebody likes these pages! Generally speaking, we want the importance rankings of all pages to be positive. The problem with this example is that it contains a smaller web within it, shown in the blue box below.


Links come into this box, but none go out. Just as in the example of the dangling node we discussed above, these pages form an "importance sink" that drains the importance out of the other four pages. This happens when the matrix S is reducible; that is, S can be written in block form as

\[  S=\left[\begin{array}{cc} * & 0 \\ * & * \end{array}\right].  \]

Indeed, if the matrix S is irreducible, we can guarantee that there is a stationary vector with all positive entries.
A web is called strongly connected if, given any two pages, there is a way to follow links from the first page to the second. Clearly, our most recent example is not strongly connected. However, strongly connected webs provide irreducible matrices S.
To summarize, the matrix S is stochastic, which implies that it has a stationary vector. However, we need S to also be (a) primitive so that$ |\lambda_2|<1 $  and (b) irreducible so that the stationary vector has all positive entries.

A final modification

To find a new matrix that is both primitive and irreducible, we will modify the way our random surfer moves through the web. As it stands now, the movement of our random surfer is determined by S: either he will follow one of the links on his current page or, if at a page with no links, randomly choose any other page to move to. To make our modification, we will first choose a parameter $\alpha$  between 0 and 1. Now suppose that our random surfer moves in a slightly different way. With probability $\alpha$  , he is guided by S. With probability $ 1-\alpha $  , he chooses the next page at random.
If we denote by 1 the $ n\times n $  matrix whose entries are all one, we obtain the Google matrix:

\[  {\bf G}=\alpha{\bf S}+ (1-\alpha)\frac{1}{n}{\bf 1}  \]

Notice now that G is stochastic as it is a combination of stochastic matrices. Furthermore, all the entries of G are positive, which implies that G is both primitive and irreducible. Therefore, G has a unique stationary vector I that may be found using the power method.
The role of the parameter $\alpha$ is an important one. Notice that if $ \alpha=1 $ , then G = S. This means that we are working with the original hyperlink structure of the web. However, if $ \alpha=0 $ , then $ {\bf G}=1/n{\bf 1} $ . In other words, the web we are considering has a link between any two pages and we have lost the original hyperlink structure of the web. Clearly, we would like to take $\alpha$ close to 1 so that the hyperlink structure of the web is weighted heavily into the computation.
However, there is another consideration. Remember that the rate of convergence of the power method is governed by the magnitude of the second eigenvalue $ |\lambda_2| $ . For the Google matrix, it has been proven that the magnitude of the second eigenvalue $ |\lambda_2|=\alpha $ . This means that when $\alpha$ is close to 1 the convergence of the power method will be very slow. As a compromise between these two competing interests, Sergey Brin and Larry Page, the creators of PageRank, chose $ \alpha=0.85 $ .
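Putting the pieces together for the two-page example, a minimal sketch (assuming NumPy) that forms G with alpha = 0.85 and runs the power method on it:

import numpy as np

alpha = 0.85
S = np.array([[0.0, 0.5],
              [1.0, 0.5]])          # two-page example from earlier
n = S.shape[0]
G = alpha * S + (1 - alpha) / n * np.ones((n, n))

I = np.ones(n) / n                  # start from the uniform vector
for _ in range(100):
    I = G @ I                       # power method on the Google matrix
print(I)                            # unique stationary vector, all entries positive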

Computing I

What we've described so far looks like a good theory, but remember that we need to apply it to $ n\times n $  matrices where n is about 25 billion! In fact, the power method is especially well-suited to this situation.
Remember that the stochastic matrix S may be written as

\[  {\bf S}={\bf H} + {\bf A}  \]

and therefore the Google matrix has the form

\[  {\bf G}=\alpha{\bf H} + \alpha{\bf A} + \frac{1-\alpha}{n}{\bf 1}  \]

Therefore,

\[  {\bf G}I^k=\alpha{\bf H}I^k + \alpha{\bf A}I^k + \frac{1-\alpha}{n}{\bf 1}I^k  \]

Now recall that most of the entries in H are zero; on average, only ten entries per column are nonzero. Therefore, evaluating H I^k requires only about ten nonzero terms for each entry in the resulting vector. Also, the rows of A are all identical, as are the rows of 1. Therefore, evaluating A I^k and 1 I^k amounts to adding the current importance rankings of the dangling nodes or of all web pages. This only needs to be done once.
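A sketch of one such iteration step (assuming NumPy and SciPy's sparse matrices): H is kept sparse, while the A and 1 terms collapse to scalars, so neither matrix is ever formed explicitly:

import numpy as np
from scipy.sparse import csr_matrix

alpha = 0.85
# Toy two-page H; the real H is huge but has ~10 nonzeros per column.
H = csr_matrix(np.array([[0.0, 0.0],
                         [1.0, 0.0]]))
n = H.shape[0]
dangling = np.asarray(H.sum(axis=0)).ravel() == 0

I = np.ones(n) / n
for _ in range(100):
    # G I = alpha*H I + alpha*(importance sitting on dangling nodes)/n + (1-alpha)/n,
    # using the fact that I sums to 1 at every step.
    I = alpha * (H @ I) + alpha * I[dangling].sum() / n + (1 - alpha) / n
print(I)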
With the value of $\alpha$  chosen to be near 0.85, Brin and Page report that 50 - 100 iterations are required to obtain a sufficiently good approximation to I. The calculation is reported to take a few days to complete.
Of course, the web is continually changing. First, the content of web pages, especially for news organizations, may change frequently. In addition, the underlying hyperlink structure of the web changes as pages are added or removed and links are added or removed. It is rumored that Google recomputes the PageRank vector I roughly every month. Since the PageRank of pages can be observed to fluctuate considerably during this time, it is known to some as the Google Dance. (In 2002, Google held a Google Dance!)

Summary

Brin and Page introduced Google in 1998, a time when the pace at which the web was growing began to outstrip the ability of current search engines to yield useable results. At that time, most search engines had been developed by businesses who were not interested in publishing the details of how their products worked. In developing Google, Brin and Page wanted to "push more development and understanding into the academic realm." That is, they hoped, first of all, to improve the design of search engines by moving it into a more open, academic environment. In addition, they felt that the usage statistics for their search engine would provide an interesting data set for research. It appears that the federal government, which recently tried to gain some of Google's statistics, feels the same way.
There are other algorithms that use the hyperlink structure of the web to rank the importance of web pages. One notable example is the HITS algorithm, produced by Jon Kleinberg, which forms the basis of the Teoma search engine. In fact, it is interesting to compare the results of searches sent to different search engines as a way to understand why some complain of a Googleopoly.

References

  • Michael Berry, Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval. Second Edition, SIAM, Philadelphia. 2005.
  • Sergey Brin, Lawrence Page, The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems, 33: 107-17, 1998. Also available online at http://infolab.stanford.edu/pub/papers/google.pdf
  • Kurt Bryan, Tanya Leise, The $25,000,000,000 eigenvector. The linear algebra behind Google. SIAM Review, 48 (3), 569-81. 2006. Also available at http://www.rose-hulman.edu/~bryan/google.html
  • Google Corporate Information: Technology.
  • Taher Haveliwala, Sepandar Kamvar, The second eigenvalue of the Google matrix.
  • Amy Langville, Carl Meyer, Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, 2006.
    This is an informative, accessible book, written in an engaging style. Besides providing the relevant mathematical background and details of PageRank and its implementation (as well as Kleinberg's HITS algorithm), this book contains many interesting "Asides" that give trivia illuminating the context of search engine design.
David Austin
Grand Valley State University
david at merganser.math.gvsu.edu 

NOTE: Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be accessed via the ACM Portal, which also provides bibliographic services.

