JOHN BACKUS by Shanitha (roll no:31)


John Warner Backus

(1924 – 2007)

Born: December 3, 1924
Field: Programming languages
Focus: Developed the FORTRAN programming language for scientific and mathematical applications.
Country: United States
Era: 1950 to 1969

FORTRAN (an acronym for FORmula TRANslation), the first high-level programming language (HLL), was invented by John Backus for IBM in 1954 and released commercially in 1957. It is still used today for programming scientific and mathematical applications.
Backus headed the IBM team of researchers that invented FORTRAN at the Watson Scientific Laboratory at Columbia University in New York. The IBM team didn’t invent the high-level language or the idea of compiling a programming language into machine code, but FORTRAN was the first successful high-level language, and the FORTRAN I compiler held the record for the efficiency of its translated code for over 20 years.
FORTRAN is now over 45 years old and remains the top language in scientific and industrial programming. It has been constantly updated. FORTRAN has been used for programming video games, air traffic control systems, payroll calculations, numerous scientific and military applications, and parallel computer research. As the first high-level computer programming language, FORTRAN was able to convert standard mathematical formulas and expressions into the binary code used by computers. Thus a non-specialist could write a program in familiar words and symbols, and different computers could use programs generated in the same language. This paved the way for other computer languages such as COBOL, ALGOL and BASIC.

Backus won the 1993 National Academy of Engineering’s Charles Stark Draper Prize, the highest national prize awarded in engineering, and in 1977 he won the A.M. Turing Award for profound, influential, and lasting contributions to the design of practical high-level programming systems, notably through his work on FORTRAN, and for seminal publication of formal procedures for the specification of programming languages.



FORTRAN was also extremely efficient, running as fast as programs painstakingly hand-coded by the programming elite, who worked in arcane machine languages. This was a feat considered impossible before FORTRAN. It was achieved by the masterful design of the FORTRAN compiler, a program that captures the human intent of a program and recasts it in a way that a computer can process.

In the FORTRAN project, Mr. Backus tackled two fundamental problems in computing — how to make programming easier for humans, and how to structure the underlying code to make that possible. Mr. Backus continued to work on those challenges for much of his career, and he encouraged others as well.

“His contribution was immense, and it influenced the work of many, including me,” Frances Allen, a retired research fellow at I.B.M., said yesterday.

Mr. Backus was a bit of a maverick even as a teenager. He grew up in an affluent family in Wilmington, Del., the son of a stockbroker. He had a complicated, difficult relationship with his family, and he was a wayward student.

In a series of interviews in 2000 and 2001 in San Francisco, where he lived at the time, Mr. Backus recalled that his family had sent him to an exclusive private high school, the Hill School in Pennsylvania.


As project leader at IBM, John Backus and his team developed FORTRAN (Formula Translator) in the early 1950s; its specification was published in 1954. It was the first high-level programming language, and it is most widely used in physics and engineering.

He was also responsible for the Backus-Naur Form (or BNF), a standard notation which can be used to describe the syntax of a computer language in a formal and unambiguous way.
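For example, a BNF fragment for simple arithmetic expressions (an illustration, not taken from the ALGOL 60 report) looks like this:

```
<expression> ::= <term>   | <expression> "+" <term>
<term>       ::= <factor> | <term> "*" <factor>
<factor>     ::= <digit>  | "(" <expression> ")"
<digit>      ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
```

Each rule defines the syntactic category on the left as a set of alternatives on the right, so the grammar of an entire language can be pinned down unambiguously.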


1942 Graduates from the Hill School, Pottstown

1942 Enters the University of Virginia; joins the Army

1945 Enters Flower and Fifth Avenue Medical School in New York

1949 Works on IBM's SSEC computer

1950-1952 Watson Lab

1954 Backus and his team publish the FORTRAN specification

1959 Develops the Backus-Naur Form notation in collaboration with Peter Naur

1991 Retires

Honors and awards

1976 Receives the National Medal of Science

1977 Receives the ACM Turing Award

1989 Receives a doctorate honoris causa from the Université Henri Poincaré, Nancy, France, on December 14, 1989

1993 Receives the Charles Stark Draper Prize for his work on FORTRAN

1998 Receives the Fellow Award of the Computer History Museum for his development of FORTRAN and contributions to computer systems theory and software project management




  • Backus, John W., “The IBM 701 Speedcoding System”, IBM, New York (10 Sep 1953), 4pp.
  • Backus, John W., “The IBM Speedcoding System”, The Journal of the Association for Computing Machinery, Vol.1 No.1 (Jan 1954), pp.4-6.
  • Backus, John W., and Harlan Herrick, “IBM 701 Speedcoding and Other Automatic Programming Systems”, Symposium on Automatic Programming for Digital Computers, Office of Technical Services, US Dept of Commerce, Washington DC (May 1954), pp.106-113.
  • Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN, IBM Applied Science Division, New York (10 Nov 1954), 43pp.
  • Amdahl, G.M., and J.W. Backus, The System Design of the IBM Type 704, IBM Engineering Laboratory, Poughkeepsie NY (1955), 11pp.
  • Backus, J.W., et al., The FORTRAN Automatic Coding System, Proceedings of the Western Joint Computing Conference 1957, pp.188-198.
  • Backus, J.W., The Syntax and Semantics of the Proposed International Algebraic Language of Zürich ACM-GAMM Conference, Proceedings of the International Conference on Information Processing, UNESCO, 1959, pp.125-132.
  • J.W. Backus, et al., and P. Naur (ed.), Revised Report on the Algorithmic Language ALGOL 60, CACM, Vol. 6, p. 1; The Computer Journal, Vol. 9, p. 349; Num. Math., Vol. 4, p. 420. (1963)
  • J.W. Backus, “The History of Fortran I, II, and III”, Annals of the History of Computing, Vol.1 No.1 (July-September 1979).

Ivar Jacobson





Ivar Jacobson is the creator of the Objectory method and the founder of Objectory AB in Sweden. He is currently retired and was formerly Vice President of Business Engineering at Rational Software, where he was involved with the development of UML. Ivar Jacobson is well known for his pioneering work and over 25 years of experience using object methods for the design of large, real-time systems. His work on large-scale, architected reuse was a key element of the success of Ericsson’s AXE telecommunications switch.

He is the principal author of two important OO books, Object-Oriented Software Engineering – A Use Case Driven Approach and The Object Advantage: Business Process Reengineering with Object Technology. He also wrote Software Reuse, my personal favorite.


He is most well known for the invention of the use case. Ivar invented the use case as a way of collecting and organizing requirements for a telephone switch. A use case diagram at its simplest is a representation of a user’s interaction with the system, depicting the specification of a use case. A use case diagram can portray the different types of users of a system and the various ways that they interact with the system. This type of diagram is typically used in conjunction with the textual use case and will often be accompanied by other types of diagrams as well.
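At its simplest, a textual use case for a telephone switch, the domain where Jacobson invented the technique, might read like this (a hypothetical, simplified example):

```
Use case:  Place Local Call
Actor:     Subscriber
Main flow:
  1. Subscriber lifts the handset.
  2. System presents a dial tone.
  3. Subscriber dials a local number.
  4. System routes the call and rings the callee.
  5. Callee answers; system connects the parties.
Extension:
  4a. Callee is busy: system plays a busy tone and ends the use case.
```

The numbered main flow captures the normal interaction, while extensions capture the alternative paths that requirements so often forget.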

Another thing Ivar did was to propose separate models for analysis and design. I haven’t seen much about this in UML and RUP – if it’s there then it is well buried. For large scale, complex systems it is very useful to maintain an idealised object model of the system that shows how objects in a fluffy cloud can collaborate to meet functional requirements. You can ignore all the non-functional clutter about concurrency, distribution, persistence, performance, scalability, and so on. Those are tackled in the design model.


I’m applying the analysis model idea to reverse-engineer a very large mobile telecoms network, hoping that such an object model will help us localise changes and identify commonality and abstractions.

Ivar spoke at Software Development West, 2004, giving a keynote on AOP and use cases. He made the point that AOP is a perfect way of retaining the idealized separation of use cases all the way down to the code level. He also gave some excellent evidence of having proposed an early AOP-like mechanism.


Ivar Jacobson was a keynote speaker at OT2004 in Bedfordshire, England in March 2004. – Matt Stephenson

Ivar Jacobson is also behind the Essential Unified Process, which integrates practices from the unified process camp, the agile methods camp, and the process improvement camp.


LUCA CARDELLI—by Kripa (Roll No:19)



Luca Cardelli was born near Montecatini Terme, Italy, studied at the University of Pisa, and has a Ph.D. in computer science from the University of Edinburgh. He worked at Bell Labs, Murray Hill, from 1982-04-05 to 1985-09-20, and at Digital Equipment Corporation’s Systems Research Center in Palo Alto from 1985-09-30 to 1997-10-31. He is currently a Principal Researcher and head of the Programming Principles and Tools and Security groups.

His main interests are in type theory and operational semantics, and in concurrency theory. He implemented the first compiler for ML, one of the most popular typed functional languages. He was a member of the Modula-3 design committee, and has designed a few experimental languages, including Obliq.


He has published over 100 papers, 1 book, and 3 conference proceedings as chair/editor. He has served on over 80 program committees, and as an editor of Theoretical Computer Science.





Type Systems


Luca Cardelli

Microsoft Research



The fundamental purpose of a type system is to prevent the occurrence of execution errors during the running of a program. This statement motivates the study of type systems. Its accuracy depends, first of all, on the rather subtle issue of what constitutes an execution error, which we will discuss in detail. Even when that is settled, the absence of execution errors is a nontrivial property. When such a property holds for all of the program runs that can be expressed within a programming language, we say that the language is type sound. It turns out that a fair amount of careful analysis is required to avoid false and embarrassing claims of type soundness for programming languages. As a consequence, the classification, description, and study of type systems has emerged as a formal discipline.

The formalization of type systems requires the development of precise notations and definitions, and the detailed proof of formal properties that give confidence in the appropriateness of the definitions. Sometimes the discipline becomes rather abstract. One should always remember, though, that the basic motivation is pragmatic: the abstractions have arisen out of necessity and can usually be related directly to concrete intuitions. Moreover, formal techniques need not be applied in full in order to be useful and influential. A knowledge of the main principles of type systems can help in avoiding obvious and not so obvious pitfalls, and can inspire regularity and orthogonality in language design.

When properly developed, type systems provide conceptual tools with which to judge the adequacy of important aspects of language definitions. Informal language descriptions often fail to specify the type structure of a language in sufficient detail to allow unambiguous implementation.

It often happens that different compilers for the same language implement slightly different type systems. Moreover, many language definitions have been found to be type unsound, allowing a program to crash even though it is judged acceptable by a typechecker. Ideally, formal type systems should be part of the definition of all typed programming languages. This way, typechecking algorithms could be measured unambiguously against precise specifications and, if at all possible and feasible, whole languages could be shown to be type sound.

In this introductory section we present an informal nomenclature for typing, execution errors, and related concepts. We discuss the expected properties and benefits of type systems, and we review how type systems can be formalized. The terminology used in the introduction is not completely standard; this is due to the inherent inconsistency of standard terminology arising from various sources. In general, we avoid the words type and typing when referring to run time concepts; for example, we replace dynamic typing with dynamic checking and avoid common but ambiguous terms such as strong typing. The terminology is summarized in the Defining Terms section.

Execution errors and safety

It is useful to distinguish between two kinds of execution errors: the ones that cause the computation to stop immediately, and the ones that go unnoticed (for a while) and later cause arbitrary behavior. The former are called trapped errors, whereas the latter are untrapped errors.

An example of an untrapped error is improperly accessing a legal address, for example, accessing data past the end of an array in the absence of run time bounds checks. Another untrapped error that may go unnoticed for an arbitrary length of time is jumping to the wrong address: memory there may or may not represent an instruction stream. Examples of trapped errors are division by zero and accessing an illegal address: the computation stops immediately (on many computer architectures).

A program fragment is safe if it does not cause untrapped errors to occur. Languages where all program fragments are safe are called safe languages. Therefore, safe languages rule out the most insidious form of execution errors: the ones that may go unnoticed. Untyped languages may enforce safety by performing run time checks. Typed languages may enforce safety by statically rejecting all programs that are potentially unsafe. Typed languages may also use a mixture of run time and static checks. Although safety is a crucial property of programs, it is rare for a typed language to be concerned exclusively with the elimination of untrapped errors. Typed languages usually also aim to rule out large classes of trapped errors, along with the untrapped ones.
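The run-time-check approach can be seen in Python, a safe but dynamically checked language, where out-of-bounds access and division by zero are both trapped rather than allowed to corrupt memory (a sketch added for illustration, not from Cardelli's text; in unsafe languages such as C, the out-of-bounds read could instead go unnoticed):

```python
# Python enforces safety with run time checks: errors are trapped,
# i.e. they stop the computation immediately instead of going unnoticed.

def read_past_end(xs):
    return xs[len(xs)]          # out of bounds: trapped as IndexError

def divide(a, b):
    return a / b                # b == 0: trapped as ZeroDivisionError

trapped = []
for thunk in (lambda: read_past_end([1, 2, 3]), lambda: divide(1, 0)):
    try:
        thunk()
    except (IndexError, ZeroDivisionError) as e:
        trapped.append(type(e).__name__)

print(trapped)                  # both errors were trapped, not silent
```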

Execution errors and well-behaved programs

For any given language, we may designate a subset of the possible execution errors as forbidden errors. The forbidden errors should include all of the untrapped errors, plus a subset of the trapped errors. A program fragment is said to have good behavior, or equivalently to be well behaved, if it does not cause any forbidden error to occur. (The contrary is to have bad behavior, or equivalently to be ill behaved.) In particular, a well behaved fragment is safe. A language where all of the (legal) programs have good behavior is called strongly checked. Thus, with respect to a given type system, the following holds for a strongly checked language:

• No untrapped errors occur (safety guarantee).

• None of the trapped errors designated as forbidden errors occur.

• Other trapped errors may occur; it is the programmer’s responsibility to avoid them.

Typed languages can enforce good behavior (including safety) by performing static (i.e., compile time) checks to prevent unsafe and ill behaved programs from ever running. These languages are statically checked; the checking process is called typechecking, and the algorithm that performs this checking is called the typechecker. A program that passes the typechecker is said to be well typed; otherwise, it is ill typed, which may mean that it is actually ill behaved, or simply that its good behavior could not be guaranteed by the typechecker.
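The typechecking step can be sketched for a toy expression language (a hypothetical example with only int and bool types, far simpler than any real type system; not from Cardelli's text):

```python
# A toy typechecker: expressions are ints, bools, or (op, left, right)
# tuples. Well typed programs get a type; ill typed ones are rejected
# before they ever "run".

def typecheck(expr):
    if isinstance(expr, bool):            # check bool before int:
        return "bool"                     # bools are ints in Python
    if isinstance(expr, int):
        return "int"
    op, left, right = expr
    lt, rt = typecheck(left), typecheck(right)
    if op == "+" and lt == rt == "int":
        return "int"
    if op == "<" and lt == rt == "int":
        return "bool"
    raise TypeError(f"ill typed: ({op} {lt} {rt})")

print(typecheck(("+", 1, 2)))             # int
print(typecheck(("<", ("+", 1, 2), 4)))   # bool
# typecheck(("+", 1, True)) would be rejected as ill typed
```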

The issue of whether programming languages should have types is still subject to some debate. There is little doubt that production code written in untyped languages can be maintained only with great difficulty. From the point of view of maintainability, even weakly checked unsafe languages are superior to safe but untyped languages (e.g., C vs. LISP). Here are the arguments that have been put forward in favor of typed languages, from an engineering point of view:

Economy of execution: Type information was first introduced in programming to improve code generation and run time efficiency for numerical computations, for example, in FORTRAN. In ML, accurate type information eliminates the need for nil-checking on pointer dereferencing. In general, accurate type information at compile time leads to the application of the appropriate operations at run time without the need for expensive tests.

Economy of small-scale development: When a type system is well designed, typechecking can capture a large fraction of routine programming errors, eliminating lengthy debugging sessions. The errors that do occur are easier to debug, simply because large classes of other errors have been ruled out. Moreover, experienced programmers adopt a coding style that causes some logical errors to show up as typechecking errors: they use the typechecker as a development tool. (For example, by changing the name of a field when its invariants change even though its type remains the same, so as to get error reports on all its old uses.)
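The field-renaming trick can be sketched in Python (a hypothetical Account example; Python reports the stale uses at run time, while a static checker such as mypy would report them before the program runs):

```python
# Rename a field when its invariants change: every stale use of the old
# name then fails loudly, pointing at exactly the code to revisit.

from dataclasses import dataclass

@dataclass
class Account:
    # was: balance (dollars); renamed when the invariant changed to cents
    balance_cents: int

acct = Account(balance_cents=1250)

stale_use_failed = False
try:
    print(acct.balance)      # stale use of the old name
except AttributeError:
    stale_use_failed = True  # the error report leads us to the old use

assert stale_use_failed
```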

Economy of compilation: Type information can be organized into interfaces for program modules, for example as in Modula-2 and Ada. Modules can then be compiled independently of each other, with each module depending only on the interfaces of the others. Compilation of large systems is made more efficient because, at least when interfaces are stable, changes to a module do not cause other modules to be recompiled.


Economy of large-scale development: Interfaces and modules have methodological advantages for code development. Large teams of programmers can negotiate the interfaces to be implemented, and then proceed separately to implement the corresponding pieces of code. Dependencies between pieces of code are minimized, and code can be locally rearranged without fear of global effects. (These benefits can be achieved also by informal interface specifications, but in practice typechecking helps enormously in verifying adherence to the specifications.)


Economy of development and maintenance in security areas: Although safety is necessary for security, it is not by itself sufficient; a typed language must also eliminate other catastrophic security breaches. Here is a typical one: if there is any way at all, no matter how convoluted, to cast an integer into a value of pointer type (or object type), then the whole system is compromised. If that is possible, then an attacker can access any data anywhere in the system, even within the confines of an otherwise typed language, according to any type the attacker chooses to view the data with. A related hole is to convert a given typed pointer into an integer, and then into a pointer of a different type, as above. The most cost-effective way to eliminate these security problems, in terms of maintenance and probably also of overall execution efficiency, is to employ typed languages. Still, security is a problem at all levels of a system: typed languages are an excellent foundation, but not a complete solution.


Economy of language features: Type constructions are naturally composed in orthogonal ways. For example, in Pascal an array of arrays models two-dimensional arrays; in ML, a procedure with a single argument that is a tuple of n parameters models a procedure of n arguments. Thus, type systems promote orthogonality of language features, question the utility of artificial restrictions, and tend to reduce the complexity of programming languages.
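The same orthogonal compositions can be mimicked in Python (an illustrative sketch mirroring the Pascal and ML examples above, not from Cardelli's text):

```python
# An "array of arrays" models a two-dimensional array:
matrix = [[1, 2, 3],
          [4, 5, 6]]
assert matrix[1][2] == 6      # row 1, column 2

# A function with a single tuple parameter models an n-argument function:
def distance2(point):
    (x, y) = point            # one argument that happens to be a pair
    return x * x + y * y

assert distance2((3, 4)) == 25
```

No special two-dimensional-array or multi-argument machinery is needed: the features compose.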



The ClearDesk Project(or), 1990-1997

Luca Cardelli and Ken Beckman, Digital Equipment Corporation, Systems Research Center

The ClearDesk projector currently rests in our Sub-Forum, where it can still be marveled at. Right above it, in the Forum, a modern large-screen projector dominates the room. Pull up a chair in front of it and pretend, just for a second, to be sitting behind your desk.

1. Science

The ClearDesk project was formulated during a time when many people at SRC acquired three displays on their desks, therefore making their desks completely useless for any other purposes. Well, we said, why not place one or more displays behind the desk, so as to clear up the desk surface? Hence the name of the project.

A display suitable for this use had to be big enough to be readable from behind one’s desk. To preserve the customary solid view angle, such a display had to be about 40″ diagonal. The only conceivable technology at the time was either front or back projection onto a large piece of glass. Front projection would have required attaching a massive harness to the ceiling, right over my head. After the Loma Prieta earthquake, this was somewhat unappealing. Therefore, we chose back projection. Never mind that this design would consume vast amounts of floor space; the follow-up project, ClearFloor, would fix all that.

A suitable display also had to be bright enough to be viewable under ordinary office lighting conditions, including coping with the summer light shining through our floor-to-ceiling windows. Mighty photons are required under those conditions, much mightier than one can get out of an overhead slide projector or a back-projection TV. Therefore, we chose a bright auditorium projector. It was a small step, from the personal computer to the personal auditorium.

2. Engineering

Our chosen projector (ElectroHome ECP 3000) was no doubt cutting-edge technology. Even without any knowledge of electronics, this fact could be inferred from the frequency with which the electronics failed. Moreover, the manual contained an interesting little footnote: if the light output was cranked up ever so slightly beyond specs, the projector could start emitting soft X-rays! Fortunately, our design provided for a vertically mounted projector with a 45-degree mirror to deflect the light towards the viewer. Soft X-rays go straight through optical mirrors (yes, we asked) and away from the viewer. In any case, we did it for Science.

The next pieces of technology we required were the materials along the optical path. It turns out that a 99% reflectance mirror is just ordinary stuff you can buy in almost any size. More problematic was the choice of the viewing surface. We tested several fancy Fresnel-like sheets of plastic, designed to concentrate light in nice parallel wavefronts instead of scattering it in all directions. Usually, this technique sends most light towards the viewers, making the picture brighter. However, at our unusual viewing distance these materials created a bright hot-spot in the middle of the picture, with most of the light zipping by our ears. We also tested a little piece of F-16 anti-glare cockpit glass (honest!), but this was just too dark and probably beyond our budget. We ended up using the cheapest piece of plain frosted glass we could find. This worked great, giving us equally good viewing from any angle.

So, now we had a state-of-the-art projector, an optical-quality mirror, a piece of cheap frosted glass, and a light path. All we needed was something to hold them in place. Enter the amazing UNISTRUT. This miracle of modern mechanical engineering beats Lego and Fisher-Price hands down. This is an industrial-strength erector set. My early experience with the kiddie version finally paid off. After much cutting and fastening (but no welding!), and much assistance from Chuck Needham, we had a sturdy frame to screw all the rest onto.

Eventually, one ClearDesk projector was assembled. It was designed so that if multiple units were built, they could be adjoined horizontally without any viewing gaps. Three units next to each other would have been fantastic. But I had had enough of mechanical engineering. Moreover, it turned out that the assembled unit was precisely large enough to pass through the door of my office, and I decided not to push my luck any further.

3. Social Impact

A ClearDesk-style display is obviously great for personal enjoyment, for giving demos to large groups of people, and for impressing random visitors. Just as obvious are the facts that focusing at a distance is easier on the eyes, and that a large display is easier to read for people with not-so-good eyesight. However, the ClearDesk projector had other unexpected uses.

First of all, if a job candidate did not comment on the display within 5 minutes of entering my office, that was a clear indication of a non-forward-thinking person. We have little use for such people here.

Second, the ClearDesk display was incredibly useful for cooperative work. A coauthor could sit by my side and comfortably view the paper I was editing, so we could work interactively instead of working on paper or squinting at a regular monitor. In fact, this is the way Martin Abadi and I spent much of our time while writing our book. Martin would use a laser pointer to indicate on the big screen where I should make corrections. Unless, that is, the correction happened to be in the lower left corner of the display, because then the laser beam would bounce off the glass into my eyes. Martin was well trained to never, ever point there.

Most of all, I am proud of the fact that, well ahead of the current Carpal Tunnel Syndrome pandemic, we built the ultimate ergonomic computing environment. It pleases me to think that if just one person (i.e., me) was saved from CTS and back pain, then my company probably saved millions of dollars in legal fees, thereby compensating for the cost of the project many times over.

4. Decline and Fall

As a research project ClearDesk was very successful, being functional and well over 5 years ahead of its time. After frequent failures in the early years, the projector proved surprisingly reliable. In fact, it was eventually discovered that the electronics failed only when not being used. That is, the projector would work flawlessly for several months, during which it would never be turned off. Then a general power failure would cause it to lose power. If promptly powered up, everything would be fine. But if the power failure happened over a weekend or while I was on vacation, then, after more than a day of inactivity, the projector would be D.O.P. (dead-on-power-up). After that discovery, prompt re-power-ups ensured years of uninterrupted service.

What finally retired the ClearDesk projector was not failing electronics, but rather the relentless march of technology, as well as the relentless stupidity of Apple Computer. For most of its life the projector was connected to a Macintosh, but Apple eventually made it necessary to connect it to a PC. Modern PCs, however, switch resolution at least three times during booting, and this is not a pretty thing to watch on the ElectroHome projector. Moreover, the higher resolutions in use nowadays result in fuzzier pixels on older-technology projectors. So, it eventually came time to rearrange my furniture.







Books and proceedings:

  • A Theory of Objects, by Martin Abadi and Luca Cardelli
  • DNA Computing and Molecular Programming
  • ECOOP 2003 – Object-Oriented Programming


Sergey Brin-Roll no:28


Sergey Brin was born on August 21, 1973 in Moscow, Russia. He is an internet entrepreneur and computer scientist. His family emigrated to the United States in 1979 to escape the persecution of Jews. He met Larry Page at Stanford University, and the two created a search engine called Google. Sergey Mihailovich Brin is the cofounder of Google, where he long served as President of Technology, and has a net worth estimated at 11 billion US dollars.


Sergey Brin co-founded Google Inc. in 1998. Today, he directs special projects. From 2001 to 2011, Sergey served as president of technology, where he shared responsibility for the company’s day-to-day operations with Larry Page and CEO Eric Schmidt.




He is the son of a Soviet mathematician and economist. Brin had an interest in computers from an early age, receiving his first computer, a Commodore 64, from his father for his ninth birthday. Sergey’s natural talent for mathematics and computing was soon apparent; he surprised a teacher by submitting a project printed from the computer, at a time before computers were commonplace. Brin also credits attending Montessori schools for part of his success. In 1990, after he finished high school, Brin enrolled in the University of Maryland to study computer science and mathematics, receiving his Bachelor of Science in 1993 with high honors. After graduating he received a graduate fellowship from the National Science Foundation, which he used to study for a master’s degree in computer science at Stanford University, completing it ahead of schedule in August 1995.




After receiving his degree in mathematics and computer science from the University of Maryland at College Park, Brin entered Stanford University, where he met Larry Page. Both students were completing doctorates in computer science. Meeting Page, the future co-president of Google, was a defining moment in Brin’s life: from there started a partnership that changed the face of the World Wide Web. Brin was assigned to show Larry around the university. However, they did not get on well in the beginning, arguing about every topic they discussed. The pair soon found a shared interest in retrieving information from large data sets, and later wrote what is widely considered their seminal contribution, a paper called “The Anatomy of a Large-Scale Hypertextual Web Search Engine”. The paper has since become the tenth most accessed scientific paper at Stanford University.

The homepage for the Google News web site.

Soon after, they started working on a project that later became the Google search engine. After attempts to sell the idea failed, they wrote up a business plan and brought in a total initial investment of almost $1 million to start their own company. In September 1998 Google Inc. opened in Menlo Park, California. The company grew so quickly and gained so many employees that several office relocations were made due to lack of space, with Google Inc. finally settling in its current home in Mountain View, California. Over the next few years, headed by Larry and Sergey, Google made many innovations and added to its list of products and employees (nearly 5,000 by 2006). In October 2004 Google announced its first quarterly results as a publicly traded company, with record revenues of $805.9 million. As of 2005 Brin has been estimated to be worth US$11 billion, placing him sixteenth on the Forbes 400 list and making him the second richest American under the age of 40.


Despite Brin’s success, he has remained fairly unknown to the public. He is not known to live a lavish lifestyle, driving an inexpensive car and still renting a two-bedroom flat.


As a research project at Stanford University, Brin and Page created a search engine that listed results according to the popularity of the pages, after concluding that the most popular result would often be the most useful. They called the search engine Google after the mathematical term “Googol,” which is a 1 followed by 100 zeros, to reflect their mission to organize the immense amount of information available on the Web.


Today Google is worth US$150 billion and is the biggest media corporation in the world. Sergey Brin himself has garnered a personal fortune of US$19.8 billion, and was ranked the fifth most powerful man in the world by Forbes in 2009. He married Anne Wojcicki in the summer of 2007 in the Bahamas.


Brin and Page launched the company in 1998. Google has since become the world’s most popular search engine, receiving more than 200 million queries each day. Headquartered in the heart of California’s Silicon Valley, Google held its initial public offering in August 2004, making Brin and Page billionaires. Brin continues to share the company’s day-to-day responsibilities with Larry Page and CEO Eric Schmidt. In 2006, Google purchased the most popular Web site for user-submitted streaming videos, YouTube, for $1.65 billion in stock.


The search engine built on Page and Brin’s unique algorithm was initially named “BackRub”; the ranking algorithm itself was later called “PageRank,” after Page. The engine soon caught on with other Stanford users when Page and Brin let them try it out. The two set up a simple search page for users, because they did not have a web page developer to create anything very impressive. They also began stringing together the computing power needed to handle searches from multiple users, using any computer parts they could find. As the search engine grew in popularity among Stanford users, it needed more and more servers to process the queries.
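The core idea behind ranking pages by popularity can be sketched with a short program. This is an illustrative simplification in the spirit of PageRank, not Google's actual implementation; the `pagerank` function and the tiny example graph are invented here for illustration.

```python
# Illustrative sketch (not Google's code): a page's score is fed by the
# scores of the pages that link to it, so well-linked pages rank higher.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform score
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # each page shares its current score among its outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # a page with no outlinks spreads its score evenly
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# "C" is linked from both A and B, so it ends up with the highest score
```

The scores always sum to 1, so they can be read as the probability that a "random surfer" who follows links (and occasionally jumps to a random page) is on a given page.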


In their first years in business, Brin served as president. The company continued to grow exponentially during 2001. Google even became a verb—to “Google” someone or something meant to search for it via the engine, but it was most commonly used in reference to checking out the Web presence of potential dates. Page and Brin’s company was the subject of articles in mainstream publications, but they continually rejected offers to go public—make their company a publicly traded one on Wall Street. They did, however, hire Eric Schmidt as chief executive officer and board chair in 2001. Schmidt was a veteran of Sun, where he had served as chief technology officer. As Brin explained to Betsy Cummings in Sales & Marketing Management, “Larry and I have done a good job,” but conceded that “the probability of doing something dumb” was still likely. “It’s clear we need some international strategy, and Eric brings that.”


Sergey received a bachelor’s degree with honors in mathematics and computer science from the University of Maryland at College Park. He is currently on leave from the Ph.D. program in computer science at Stanford University, where he received his master’s degree. Sergey is a member of the National Academy of Engineering and a recipient of a National Science Foundation Graduate Fellowship.


Sergey’s research interests include search engines, information extraction from unstructured sources, and data mining of large text collections and scientific data. He has published more than a dozen academic papers, including Extracting Patterns and Relations from the World Wide Web; Dynamic Data Mining: A New Architecture for Data with High Dimensionality, which he published with Larry Page; Scalable Techniques for Mining Causal Structures; Dynamic Itemset Counting and Implication Rules for Market Basket Data; and Beyond Market Baskets: Generalizing Association Rules to Correlations.


Sergey has been a featured speaker at several international academic, business and technology forums, including the World Economic Forum and the Technology, Entertainment and Design Conference. He has shared his views on the technology industry and the future of search on the Charlie Rose Show, CNBC, and CNNfn. In 2004, he and Larry Page were named “Persons of the Week” by ABC World News Tonight.








In May 2007, Brin married Anne Wojcicki in the Bahamas. Wojcicki is a biotech analyst and a 1996 graduate of Yale University with a B.S. in biology. She has an active interest in health information, and together she and Brin are developing new ways to improve access to it. As part of their efforts, they have brainstormed with leading researchers about the human genome project.




Brin is working on other, more personal projects that reach beyond Google. For example, he and Page are trying to help solve the world’s energy and climate problems at Google’s philanthropic arm, which invests in the alternative energy industry to find wider sources of renewable energy. The company acknowledges that its founders want “to solve really big problems using technology.”




In 2012, Brin became involved with the Project Glass program and demoed eyeglass prototypes. Project Glass is a research and development program by Google to develop an augmented reality head-mounted display (HMD). The intended purpose of Project Glass products is the hands-free display of information currently available to most smartphone users, allowing interaction with the Internet via natural language voice commands.




Brin was also involved in the Google driverless car project. In September 2012, at the signing of the California Driverless Vehicle Bill, Brin predicted that within five years robotic cars would be available to the general public.




Page and Brin strove to keep Google’s corporate culture relaxed in other ways, which they felt benefited the company in the long run. Its perks were legendary. There was free Ben and Jerry’s ice cream, an on-site masseuse, a ping-pong table, yoga classes, and even a staff physician. Employees could bring their dogs to work, and the company cafeteria was run by a professional chef who used to work for the rock band the Grateful Dead. Brin discussed his management philosophy with Cummings. “Since we started the company, we’ve grown twenty percent per month. Our employees can do whatever they want.”







Douglas Engelbart-Roll no:24

1 Comment



Douglas Carl Engelbart was born on 30 January 1925 and grew up in Portland, Oregon. He graduated from Portland’s Franklin High School in 1942 and went on to study Electrical Engineering at Oregon State University.

Midway through his college studies at Oregon State University, in 1944 he was drafted into the US Navy, serving two years as an electronic/radar technician in the Philippines. It was there, on a small island in a tiny hut up on stilts, that he first read Vannevar Bush’s famous article “As We May Think”, which greatly inspired him.

He returned to Oregon State and completed his B.S. in Electrical Engineering at Oregon State University in 1948, then took a position as an electrical engineer at the NACA Ames Laboratory in Mountain View, CA (now NASA Ames), where he worked until 1951.

However, within three years he grew restless, feeling there was something more important he should be working on, something to dedicate his career to. He thought about the world’s problems, and what he as an engineer might possibly be able to do about them. He had read about the development of the computer, and even assisted in the construction of the California Digital Computer project (CALDIC), and seriously considered how it might be used to support mankind’s efforts to solve these problems. As a radar technician during the war he had seen how information could be displayed on a screen. He began to envision people sitting in front of displays, “flying around” in an information space where they could formulate and organize their ideas with incredible speed and flexibility. So he applied to the graduate program in Electrical Engineering at U.C. Berkeley to launch his new crusade.

Engelbart obtained a M.S. in Electrical Engineering in 1952, and a Ph.D. in Electrical Engineering with a specialty in Computers in 1955, along with a half dozen patents in “bi-stable gaseous plasma digital devices”, and then stayed on as Acting Assistant Professor. However, within a year he was tipped off by a colleague that if he kept talking about his “wild ideas” he’d be an Acting Assistant Professor forever. So he ventured back down the Peninsula in search of a more suitable outpost to pursue his vision.

He then formed a startup company, Digital Techniques, to commercialize some of his doctorate research on storage devices, but after a year decided instead to find a venue where he could pursue the research he had been dreaming of since 1951.

In 1959 began the most productive period of Engelbart’s life, when he was appointed Director of the Augmentation Research Center (ARC) at Stanford Research Institute, a position he kept until 1977. He recruited a research team (up to 47 people) in his new center, and became the driving force behind the design and development of the oN-Line System, or NLS. He and his team developed computer-interface elements such as bit-mapped screens, the first computer mouse, hypertext, collaborative tools, and precursors to the graphical user interface and groupware (including shared-screen teleconferencing and a computer-supported meeting room).
He initiated ARPANet’s Network Information Center (NIC). On October 29, 1969, the world’s first electronic computer network, the ARPANET, was established between nodes at Leonard Kleinrock’s lab at UCLA and Engelbart’s lab at SRI. Interface Message Processors at both sites served as the backbone of the first Internet.



Engelbart slipped into relative obscurity after 1976 due to various misfortunes and misunderstandings. Several of his best researchers became alienated from him and left his organization for Xerox PARC, in part due to frustration, and in part due to differing views of the future of computing. Engelbart saw the future in collaborative, networked, timeshare (client-server) computers, which younger programmers rejected in favor of the personal computer. The conflict was both technical and social: the younger programmers came from an era where centralized power was highly suspect, and personal computing was just barely on the horizon.
From 1977 until 1984 Engelbart worked as a Senior Scientist at Tymshare, Inc., Cupertino, CA. Tymshare had bought the commercial rights to NLS, renamed it AUGMENT, and set the system up as a principal line of business in their newly formed Office Automation Division.


Douglas Engelbart held over 45 patents, e.g. seven relating to bi-stable gaseous plasma digital devices and twelve relating to all-magnetic digital devices (both resulting from work in 1954-58), patents for magnetic-core logic devices and circuits, the 1970 patent for the computer mouse, etc.

Douglas Engelbart is a holder of over forty awards and honors, including the National Medal of Technology, the Certificate of Special Congressional Recognition, the Lemelson-MIT Prize, the IEEE John von Neumann Medal, the ACM Turing Award and the American Ingenuity Award.



Human Computer Interaction (HCI)


The term human-computer interaction is often used interchangeably with man-machine interaction or interfacing. Broadly, HCI is concerned with designing how a computing device delivers its required functionality through the relationship between the user and the device itself.

The basis of human-computer interaction lies in the core concept of usability. Machines were created, above all, to be useful to people. The best computing machines are those that interact with people in the manner their tasks require: the better a machine’s usability, the better its interaction with its users, and the better it serves its purpose. The input a computing machine receives from its users can in turn be used to improve the human-computer interaction.

The nature of human-computer interaction has taken a new turn with the extensive use of the Internet and ever-increasing advances in technology-based devices. This new turn is characterized by networks and the social connections established through them. The ‘social’ nature of human-computer interaction emerged when users became connected to each other via networks.

Computer Mouse


Stephen R. Bourne-Bini P.B,Roll no:11

Leave a comment

Stephen R. Bourne

Stephen Richard Bourne (Steve) (born 7 January 1944) is a computer scientist, originally from the United Kingdom and based in the United States for most of his career. He is most famous as the author of the Bourne shell (sh), which is the foundation for the standard command line interfaces to Unix.
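To illustrate Bourne's legacy, here is a short script using constructs the Bourne shell introduced, which remain the basis of POSIX shell syntax today. It is a generic example written for this article, not code from Bourne himself.

```shell
#!/bin/sh
# Constructs from the Bourne shell that any modern POSIX /bin/sh still
# accepts: variables, for loops, if/fi tests, and case/esac matching.
greeting="hello"

for name in unix shell; do
    echo "$greeting, $name"
done

if [ "$greeting" = "hello" ]; then
    echo "greeting is set"
fi

case "$greeting" in
    hello) echo "matched hello" ;;
    *)     echo "no match" ;;
esac
```

The `if ... fi` and `case ... esac` keyword pairs, borrowed from ALGOL 68 (which Bourne had worked on at Cambridge), are a recognizable signature of his design.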
Bourne has a Bachelor’s degree in mathematics from King’s College London, England. He has a Diploma in Computer Science and a Ph.D. in mathematics from Trinity College, Cambridge. Subsequently he worked on an ALGOL 68 compiler at the University of Cambridge Computer Laboratory (see ALGOL 68C).
After Cambridge, Bourne spent nine years at Bell Labs with the Seventh Edition Unix team. As well as the Bourne shell, he wrote the adb debugger and The UNIX System, the second book on the UNIX system, intended for a general readership.
After Bell Labs, Bourne worked in senior engineering management positions at Silicon Graphics, Digital Equipment Corporation, Sun Microsystems and Cisco Systems.
From 2000 to 2002 he was President of the Association for Computing Machinery.
He is presently chief technology officer at ICON Ventures, a Menlo Park-based venture capital group in California. He is also the chair of the Editorial Advisory Board for ACM Queue, a magazine he helped found when he was President of the ACM. Additionally, he is a Fellow of the Association for Computing Machinery and of the Royal Astronomical Society.

Peter J Denning-Roll no:30

1 Comment



Peter J. Denning (born 1942) is an American computer scientist and prolific writer. He is best known for pioneering work in virtual memory, especially for inventing the working-set model of program behavior, which addressed thrashing in operating systems and became the reference standard for all memory management policies. He is also known for his work on principles of operating systems, operational analysis of queueing network systems, the design and implementation of CSNET, the ACM digital library, codifying the great principles of computing, and most recently for his book The Innovator’s Way, on innovation as a set of learnable practices.


Denning was born January 6, 1942, in Queens, NY, and raised in Darien, CT. He was interested in science from an early age and began building electronic circuits as a teenager. His computer built from pinball machine parts won the science fair in 1959, launching him into the new field of computing. He attended Manhattan College for a Bachelor’s in EE (1964). At MIT, where he earned his doctorate in 1968, he worked on prototypes of computer utilities, precursors of today’s “cloud computing”. He became an educator and taught computer science at Princeton, Purdue, George Mason University, and the Naval Postgraduate School. He was a pioneer in operating systems and computer networks and invented the “working set”, a way of automatically managing data flows in memory that is widely used in modern operating systems from desktops to smartphones. A strong advocate of computing as a domain of science on par with the traditional physical, life, and social sciences, he has codified the Great Principles of Computing. In the 1980s, while directing a research lab at NASA Ames Research Center, he became interested in how he could teach his students and researchers to be successful innovators, broadening his attention to the human practices of technology adoption. From 1980 to 1982 he wrote 24 columns as ACM President, focusing on technical and political issues of the field. From 1985 to 1993 he wrote 47 columns on “The Science of Computing” for American Scientist magazine, focusing on scientific principles from across the field. Since 2001 he has written quarterly “IT Profession” columns for Communications of the ACM, focusing on principles of value to practicing professionals.


ACM honored Peter J. Denning, Naval Postgraduate School (who served as President of ACM from 1980-82), with a special award “for his exceptional vision, devotion, and commitment to excellence. His 40 years of dedication and guidance have been an inspiration to the Association and all those who have served with him.”

In 1970 he published a classic paper that displayed a scientific framework for virtual memory and the validating scientific evidence, putting to rest a controversy over virtual memory stability and performance.

In 1966 he proposed the working set as a dynamic measure of memory demand and explained why it worked using the locality idea introduced by Les Belady of IBM. His working-set paper became a classic.
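The working-set idea can be sketched in a few lines: the working set W(t, tau) is the set of distinct pages a program has referenced in its last tau memory references, and a memory manager tries to keep exactly those pages resident. The function below is an illustrative toy written for this article, not Denning's formulation or any real OS code.

```python
# Illustrative sketch of the working-set model: the pages a program
# touched in the last tau references are the ones it is likely to need.
def working_set(references, t, tau):
    """references: sequence of page numbers in the order they were touched;
    t: current time (index into the sequence); tau: window size.
    Returns the set of distinct pages referenced in the window."""
    start = max(0, t - tau + 1)          # window covers times start..t
    return set(references[start:t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
ws = working_set(refs, t=6, tau=4)       # pages touched at times 3..6
# -> {2, 3, 4}
```

Because programs exhibit locality, the working set tends to change slowly; keeping it resident is what prevents the thrashing Denning's model explained.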

In 1999, he expanded the search for fundamental principles to cover all of computing. The discovery of natural information processes in biology, physics, economics, materials, and other fields convinced him that the basic definitions of computation had to be modified to encompass natural information processes as well as artificial.


Denning has been a major influence in computing education. In the early 1970s he led a task force that designed the first core course on operating systems (OS) principles. OS became the first non-math CS core course. In the mid 1980s he led a joint ACM/IEEE committee that described computing as a discipline with nine functional areas and three cognitive processes, the basis of ACM Curriculum 1991. In the 1990s he set out on a quest to codify the great principles of computing. He maintains that computing is a science both of natural and artificial information processes. NSF designated him a Distinguished Education Fellow in 2007 to launch a movement to use the Great Principles framework for innovations in education and research. In 2009, ACM’s SIGCSE (Special Interest Group on Computer Science Education) recognized his contributions with its lifetime service award.




