PETER G. GYARMATI

Personal facts

 

  • Known as: Peter Gyarmati

  • Date of birth: 14-07-1941

  • Nationality: USA

  • Profession: Computer Scientist


Peter G. Gyarmati is a software engineer and computer scientist. He is best known for the development of OS/360+HASP for the System/360. He later developed OS/VS for the System/370, in particular its resource allocation system, where he introduced an adaptive allocation strategy based on his earlier engineering work.

He was born in Budapest, Hungary, where he received a BSc (Eng) from the Budapesti Muszaki Egyetem. He went on to receive an MSc from Manchester University, England, in 1972, and a PhD in Applied Computer Science from ELTE in 1981.


From Manchester University he joined IBM for research, from 1972 until 1981, working in Poughkeepsie and Yorktown, New York, and at Delft University in the Netherlands. In his PhD work, “Adaptive Controls in Operating System”, he proposed the so-called ADIOS solution, an extension to the System/370 family with the OS/VS2 software.

He actively studied ALOHA-type networks, suggested solutions to the collision problem, and gave a proof of the radio communication channel capacity, results that are essential to Ethernet (i.e. the CSMA/CD protocol).
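
For background (these are classic textbook results, not figures taken from this post), the throughput of an ALOHA-type channel at offered load G is S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA. A minimal Python sketch of those curves:

```python
import math

def pure_aloha_throughput(g: float) -> float:
    """Throughput of pure ALOHA at offered load g (frames per frame time)."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g: float) -> float:
    """Throughput of slotted ALOHA at offered load g."""
    return g * math.exp(-g)

if __name__ == "__main__":
    # Maxima occur at g = 0.5 (pure, about 18.4%) and g = 1.0 (slotted, about 36.8%).
    print(pure_aloha_throughput(0.5))
    print(slotted_aloha_throughput(1.0))
```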


He then moved into PC and micro computing, where he introduced a forerunner of the portable computer, a portable data collector machine called MOBI-X, and later MOBI-2000. He also designed, created, and patented a portable OS. Later he worked on networking reliability and security in Vienna, Stuttgart, and Budapest for BSB and TCC, and worked at Stanford University, Palo Alto, U.S., as a guest professor. As an emeritus he returned to Szentendre, where he lives now. He is a member of the PhD body of the HAS (Hungarian Academy of Sciences) and of the Bolyai Society of Mathematics.


 

PRANAV MISTRY by ANUSHA P, ROLL NO 8


Pranav Mistry, the inventor of the revolutionary SixthSense technology, is a well-known name among the youth of India. He took the Nasscom Leadership Forum attendees by storm recently. Science fiction hardly seems reserved for the big screen any more, but rather achievable at one’s fingertips. Mistry is passionate about integrating the digital information experience with our real-world interactions. The idea behind all his innovations is to move ahead of technological limitations and give technology a humane touch. He also mentions that his main inspirations are Hindu mythological characters.

Pranav Mistry was born on 17 December 1982 in Palanpur (Gujarat), India. He has worked as a visiting researcher at the JST ERATO Igarashi Design Interface Project, a research intern at Microsoft Research, and a research intern at the Global Connection Project (NASA, CMU, Google, UNESCO). He then worked as a research assistant and PhD candidate at the MIT Media Lab, and recently joined Samsung Electronics as its Director of Research.

Education

  • Bachelor of Computer Engineering at Gujarat University
  • Master of Design at IDC, IIT Bombay
  • Master of Science at MIT Media Lab, Massachusetts Institute of Technology
  • PhD at MIT Media Lab, Massachusetts Institute of Technology

Pranav’s Magic – ‘The Sixth Sense Technology’

The exceptional SixthSense technology is a lively example of Pranav Mistry’s designing genius. This super technology enables us to use the whole world as an information source. ‘SixthSense’ is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. Now we can make a newspaper show live video news or dynamic information, a gesture of drawing a circle on the user’s wrist projects an analog watch, and someone’s personal details can be displayed at first sight, right on his face.
A simple mobile-like device and a pendant can turn the whole world into a canvas for scribbling. Click pictures with our gestures. Display details on a wall or anywhere we wish to. The camera, projector, and processor empower us with the SixthSense technology.


Third Eye – His Forthcoming Revolutionary Technology

The designing genius is currently working on a project called Third Eye, which will enable multiple viewers to see different things on the same display screen at the same time. A single sign board will be able to display information in different languages at the same time. Two people watching TV can watch their favourite channels on a single TV screen. The world is awaiting this exceptional invention.

Pranav is already famous for the legendary SixthSense technology. This time around, the latest in his quiver was the SPARSH project. He demonstrated how it would be possible to transfer data from one device with just a touch, and then copy it to another by touching the other device.


Additionally, every device that’s connected to a network essentially has an IP address. It is that IP address that helps uniquely identify any object, which can then be controlled with just a flick of a switch on an electronic device. For example, if a lamp in the house can be identified using an IP address, then it can easily be switched off and on by invoking a relevant application on one’s mobile phone. Simple as that. Mistry calls this technology TeleTouch.
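
As a rough illustration of the idea (not Mistry’s actual implementation), a networked lamp exposing a hypothetical HTTP interface could be toggled from a phone app with a few lines of Python; the address and the /on and /off paths here are assumptions made up for this sketch:

```python
from urllib.request import urlopen

# Hypothetical lamp on the home network; the address and endpoints are illustrative.
LAMP_ADDRESS = "http://192.168.1.42"

def set_lamp(on: bool) -> None:
    """Toggle the lamp by requesting its (assumed) /on or /off endpoint."""
    path = "/on" if on else "/off"
    with urlopen(LAMP_ADDRESS + path, timeout=5) as response:
        print("lamp replied with HTTP status", response.status)

if __name__ == "__main__":
    set_lamp(True)   # switch the lamp on
    set_lamp(False)  # switch it back off
```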


He also showcased a pair of HD glasses that could project any object on the wall and could be used to translate any piece of text into one’s native language, similar to a project currently being developed by Google.

Previous projects from Mistry’s work at MIT include Quickies, intelligent sticky notes that can be searched and can send reminders; a pen that draws in 3D; and TaPuMa, a tangible public map that can act as a Google of the physical world. His research interests also include gestural and tangible interaction, ubiquitous computing, AI, machine vision, collective intelligence, and robotics.

Pranav Mistry’s Achievements

Patents:

  • Digital user interface for inputting Indic scripts (20070174771)
  • Radio frequency control of computing system (1416/DEL/2006)
  • Start menu display model (1417/DEL/2006)
  • Multi-mode multimedia device and computing system (WO/2007/124083) (1418/DEL/2006)
  • Task-oriented start menu (1419/DEL/2006)
  • Hardware control initiated task switching (1420/DEL/2006)
  • Single hardware control initiated switching between application and utilities (1422/DEL/2006)

Awards:

  • Winner of the ‘TR35 2009’ award, Technology Review
  • Winner of the ‘Invention of the Year 2009’ award, Popular Science
  • Winner of the ‘Young Indian Innovator 2009’ award, Digit Magazine
  • Speaker at TED 2009, with a talk on ‘SixthSense’, Long Beach, CA
  • Second in the SPACE competition at SIGGRAPH 2004, Los Angeles
  • First in the Innovation Fair at the India level, for project MARBO
  • Best Paper at USID2007 for ‘Akshar’
  • All-India third in the National Open Hardware Contest at IIT Bombay for project DATAG2.02
  • Third in Model Presentation at INGENIUM 2002
  • Third in a creative art competition organized by ISRO
  • First in a design competition organized by IEEE, India chapter
  • Second in website designing organized by ACES
  • Selected for the prestigious Dhirubhai Ambani Foundation Award for securing first rank in the district
  • Second in the on-the-spot model making contest at Techfest 2001 at IIT Bombay

Recent Projects

  • SixthSense
  • Third Eye
  • Inktuitive
  • QUICKiES
  • TaPuMa
  • Invent
  • DATAG2.02
  • Marbo
  • ProjectCHILD
  • SunFlower
  • Sandesh
  • Ghost in the machine
  • RoadRunner
  • VET
  • Sthiti
  • Akshar

Edwin Catmull – Ammu Archa, Roll no: 04


EDWIN CATMULL

Dr. Edwin Catmull (born 1945 in West Virginia) was a vice president of the Computer Division of Lucasfilm Ltd. He is a computer scientist and the co-founder and currently the president of Pixar Animation Studios. As a computer scientist, Catmull has contributed to many important developments in computer graphics.

Early in life, Catmull found inspiration in Disney movies such as Peter Pan and Pinocchio and dreamed of becoming a feature film animator. However, he assessed his chances realistically and decided that his talents lay elsewhere. Instead of pursuing a career in the movie industry, he enrolled in the physics and computer science programs at the University of Utah. It was there that he made three fundamental computer graphics discoveries: Z-buffering, texture mapping, and bicubic patches. While at the university, he invented algorithms for anti-aliasing and rendering subdivision surfaces and created, in 1974, his earliest contribution to the film industry, an animated version of his left hand for Futureworld, the science fiction sequel to the film Westworld and the first film to use 3D computer graphics.
After leaving the university, Catmull founded the Computer Graphics Lab at the New York Institute of Technology. In 1979 he went to work for George Lucas at Lucasfilm. It was at Lucasfilm that he helped develop digital image compositing technology used to combine multiple images in a convincing way. Later, in 1986, Catmull founded Pixar with Alvy Ray Smith. At Pixar, Catmull was a key developer of the RenderMan rendering system used in films such as Toy Story and Finding Nemo.
In 1993, the Academy of Motion Picture Arts and Sciences presented Catmull with his first Academy Award “for the development of PhotoRealistic RenderMan software which produces images used in motion pictures from 3D computer descriptions of shape and appearance.” Again in 1996, he received an Academy Award “for pioneering inventions in Digital Image Compositing”. Finally, in 2001, he received an Oscar “for significant advancements to the field of motion picture rendering as exemplified in Pixar’s RenderMan.”

EDGAR F. CODD



Edgar Frank “Ted” Codd (August 23, 1923 – April 18, 2003) was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He made other valuable contributions to computer science, but the relational model, a very influential general theory of data management, remains his most mentioned achievement.

 


Edgar Frank Codd was born on the Isle of Portland in England. Codd attended Oxford University, where he earned degrees in mathematics and chemistry, and flew in the Royal Air Force during World War II. He then moved to the United States and joined IBM as a mathematical programmer in 1949, working on the Selective Sequence Electronic Calculator, a huge tube-based computer that had the speed and flexibility to solve many of the largest scientific problems of its day.


 

He then invented a novel “multiprogramming” method for IBM’s pioneering STRETCH computer. This method enabled STRETCH, the forerunner to modern mainframe computers, to run several programs at the same time. Stretch, the industry’s most powerful computer when first delivered in 1961, had 150,000 transistors and could perform 100 billion computations a day. It pioneered various advanced systems concepts, such as look-ahead, overlapping/pipelining of instructions, error checking and correction, control-program operating systems, and the 8-bit byte.

After earning his doctorate in computer science at the University of Michigan in 1967 under a full scholarship from IBM, Codd moved to IBM’s San Jose Research Laboratory, where he conceived his relational model. Codd was named an IBM Fellow in 1976, and in 1981 he received the Turing Award, the highest technical honor in the computing profession. In 2002, Forbes magazine listed Codd’s relational model of data as one of the most important innovations of the previous 85 years.


His longtime collaborator, Chris Date, said “Codd’s biggest overall achievement was to make database management into a science. He put the field on solid scientific footing by providing a theoretical framework — the relational model — within which a variety of important problems could be attacked in a scientific manner.”

Janet Perna, general manager of Data Management Solutions for IBM’s Software Group, expressed her admiration for the inventor of the product for which she is now responsible. She remarked that “Ted Codd will forever be remembered as the father of relational database. His remarkable vision and intellectual genius ushered in a whole new realm of innovation that has shaped the world of technology today — but perhaps his greatest achievement is inspiring generations of people who continue to build upon the foundations he laid. Database professionals all over the world mourn his passing”.


To access data using this relational model, Codd envisioned a relatively easy-to-use query language based on a foundation of relational set theory. Codd also believed that a database management system should provide a standard access approach so that an application program did not have to be aware of how the data was organized.
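
To make the idea concrete, here is a small illustrative sketch (a toy example, not Codd's notation or any actual DBMS) of three basic relational operations (selection, projection, and join) over tables represented as Python lists of dictionaries:

```python
# Tables as lists of rows; each row is a dict of column -> value.
employees = [
    {"emp_id": 1, "name": "Ada",  "dept_id": 10},
    {"emp_id": 2, "name": "Alan", "dept_id": 20},
]
departments = [
    {"dept_id": 10, "dept_name": "Research"},
    {"dept_id": 20, "dept_name": "Sales"},
]

def select(table, predicate):
    """Selection: keep only the rows satisfying the predicate."""
    return [row for row in table if predicate(row)]

def project(table, columns):
    """Projection: keep only the named columns of every row."""
    return [{c: row[c] for c in columns} for row in table]

def join(left, right, key):
    """Natural join on a shared key column."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

# "Which department does Ada work in?" expressed with the three operators.
result = project(
    join(select(employees, lambda r: r["name"] == "Ada"), departments, "dept_id"),
    ["name", "dept_name"],
)
print(result)  # [{'name': 'Ada', 'dept_name': 'Research'}]
```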


In 1977, Oracle became the first commercial relational database management system. IBM followed in 1982 with the first version of what later became the Structured Query Language (SQL). Retiring from IBM after a serious injury in the early 1980s, Codd ran his own consulting group until 1999. He died on April 18, 2003 at his home on Williams Island, Florida.


What do Codd’s rules mean?

Codd’s rules refer to a set of 13 database management system rules (numbered 0–12) developed by E. F. Codd in 1985. He designed these rules as the prerequisites for considering a database management system (DBMS) a relational database management system (RDBMS). Although the rules were not initially widely popular in commercial use, later DBMSs were based on Codd’s rules.

Codd’s rules are also referred to as Codd’s law, Codd’s 12 rules or Codd’s 12 commandments.

Codd’s 12 rules define an ideal relational database and are used as a guideline for designing relational database systems today. Though no commercial database system completely conforms to all 12 rules, they do interpret the relational approach. Here are Codd’s 12 rules:

  • Rule 0: The foundation rule: The system must qualify as relational both as a database and as a management system.
  • Rule 1: The information rule: All information in the database must be represented in one and only one way (that is, as values in a table).
  • Rule 2: The guaranteed access rule: All data should be logically accessible through a combination of table name, primary key value, and column name.
  • Rule 3: Systematic treatment of null values: A DBMS must support null values to represent missing and inapplicable information in a systematic manner, independent of data types.
  • Rule 4: Active online catalog based on the relational model: The database must support an online relational catalog that is accessible to authorized users through their regular query language.
  • Rule 5: The comprehensive data sublanguage rule: The database must support at least one language that has a linear syntax and supports data definition, data manipulation, data integrity, and database transaction control.
  • Rule 6: The view updating rule: Data can be presented through different logical combinations called views. All views that are theoretically updatable must also be updatable by the system.
  • Rule 7: High-level insert, update, and delete: The system must support set-at-a-time insert, update, and delete operators.
  • Rule 8: Physical data independence: Changes made at the physical level must not require a change to be made in the application program.
  • Rule 9: Logical data independence: Changes made at the logical level must not require a change to be made in the application program.
  • Rule 10: Integrity independence: Integrity constraints must be defined separately from the application programs. It must be possible to change constraints without affecting the applications.
  • Rule 11: Distribution independence: The user should be unaware of the database’s location, i.e. whether or not the database is distributed across multiple locations.
  • Rule 12: The nonsubversion rule: If a system provides a low-level language, there should be no way to use it to subvert or bypass the integrity rules of the high-level language.

Of all the rules, rule 3 is the most controversial. This is due to a debate about three-valued, or ternary, logic. Codd’s rules and SQL use ternary logic, where null is used to represent missing data and comparing anything to null results in an unknown truth state. However, an operation involving a null can still yield a definite result (for example, false AND null is false), so not everything involving missing data is unknown, hence the controversy.
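
A small illustrative sketch of how three-valued AND and OR behave, using Python's None to stand in for SQL NULL (an illustration, not any particular DBMS's implementation):

```python
# Three-valued (ternary) logic: True, False, or None (unknown / NULL).
def and3(a, b):
    """SQL-style AND: False dominates; otherwise None makes the result unknown."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a, b):
    """SQL-style OR: True dominates; otherwise None makes the result unknown."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

print(and3(False, None))  # False -- known even though one operand is missing
print(or3(True, None))    # True  -- likewise
print(and3(True, None))   # None  -- genuinely unknown
```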

Edgar F. Codd first proposed the process of normalization, and what came to be known as the first normal form, in his paper “A Relational Model of Data for Large Shared Data Banks”. Database normalization is the process of removing redundant data from your tables in order to improve storage efficiency, data integrity, and scalability.

In the relational model, methods exist for quantifying how efficient a database is. These classifications are called normal forms (or NF), and there are algorithms for converting a given database between them. Normalization generally involves splitting existing tables into multiple ones, which must be re-joined or linked each time a query is issued. In Codd’s words:

“There is, in fact, a very simple elimination procedure which we shall call normalization. Through decomposition nonsimple domains are replaced by ‘domains whose elements are atomic (nondecomposable) values.’”

Edgar F. Codd originally established three normal forms: 1NF, 2NF and 3NF. There are now others that are generally accepted, but 3NF is widely considered to be sufficient for most applications. Most tables when reaching 3NF are also in BCNF (Boyce-Codd Normal Form).
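
As a toy illustration (with made-up example data, not from Codd's paper), normalization splits a table that repeats department details into two tables that are re-joined at query time:

```python
# Unnormalized: the department name is repeated on every employee row.
flat = [
    {"emp_id": 1, "name": "Ada",   "dept_id": 10, "dept_name": "Research"},
    {"emp_id": 2, "name": "Alan",  "dept_id": 10, "dept_name": "Research"},
    {"emp_id": 3, "name": "Grace", "dept_id": 20, "dept_name": "Sales"},
]

# Normalized: facts about departments live in exactly one place.
employees = [{"emp_id": r["emp_id"], "name": r["name"], "dept_id": r["dept_id"]} for r in flat]
departments = {r["dept_id"]: r["dept_name"] for r in flat}  # one entry per department

# A query re-joins the two tables to recover the original view.
for e in employees:
    print(e["name"], "works in", departments[e["dept_id"]])
```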

Rear Admiral Grace Murray Hopper (Roll No: 18)


(December 9, 1906 – January 1, 1992)


Amazing Grace was in at the start of the modern computing revolution and dedicated her life to making computers more distributed, easier to use, and more efficient. She invented the first code compiler, was pivotal in the development of COBOL, popularized the term “bug”, and was so good at what she did that the US Navy couldn’t let her go, recalling her to duty twice before old age finally won out.

AMAZING CHILDHOOD:

Hopper was blessed with parents who insisted that she receive as good an education as her brother, and she was accepted at Vassar at the tender age of 17 to study mathematics and physics. She joined the faculty there, but carried on studying at Yale to earn her MA in 1930 and a PhD in 1934. She was one of the first people to program the Automatic Sequence Controlled Calculator, better known as the Mark I computer. She wrote the instruction book on how to use the system, and never stopped working with computers after this introduction.

The Mark I was the first large computer

As a young girl Grace met her great-grandfather, who was an admiral in the Navy, and was dazzled by his distinguished appearance (Orlando 26). Grace joined the Navy in 1943, after acquiring waivers for the weight and age requirements. She rapidly climbed through the ranks, and by the time she went to work on the Mark I, she was a lieutenant.

Grace left the Navy in 1949 to continue working on computers, then returned in 1967 to teach younger people about the rapidly growing field of computing. Grace loved teaching as much as working with computers. She was a firm believer in children being the future, and a favorite phrase of hers, quoted by Karwatka, is “Ships are safe in port, but that’s not what ships are built for.” The ships are young people, and what they are built for is going out into the world and making a difference.

The clock in Grace’s office ran backwards

BUGS & COBOL:

It was while working on the Mark II that, on September 9, 1947, Hopper began popularizing the term “bug” to describe a computer error. Back then the bug in question was an actual moth, which had fallen into one of the computer’s mechanical relays and jammed it. Hopper never claimed she invented the term, but she did popularize it, along with the term “debugging” to describe the process of cleaning up such errors.

The first computer ‘bug’

In 1949, Hopper transferred to work on the Binary Automatic Computer (BINAC) and the UNIVersal Automatic Computer I (UNIVAC I), a commercial computer designed by the Eckert–Mauchly Computer Corporation, which later became Unisys. There she worked with other important female programmers: Betty Holberton, Kay McNulty, Marlyn Wescoff, Ruth Lichterman, Betty Jean Jennings, and Fran Bilas.

In 1952 she invented the first compiler, A-0, which translated mathematical symbols into machine code, and updated the system with A-1 and A-2 the following year. “Nobody believed that,” she said. “I had a running compiler and nobody would touch it. They told me computers could only do arithmetic; they could not do programs.”

At the same time she was becoming concerned that computer languages were needlessly complex and began to push for standardization. In 1954 her department introduced the FLOW-MATIC programming language, which used limited English phrases. It was this that led her to play a pivotal role in developing the COmmon Business-Oriented Language (COBOL) in 1959.


COBOL was one of the most successful computer languages and is still in use today. Research by Datamonitor found that in 2008 there were between 1.5 and 2 million developers still working with the 50-year-old programming language, adding five billion lines of code to the 200 billion already running on live systems.

NAVY PLAYS PUSH-ME, PULL-YOU      


In 1966 Hopper was retired from the Naval Reserve on the grounds of age, but the military found they couldn’t live without her and she was reactivated less than a year later – the first woman to be so engaged. Five years later, at the age of 65, she was let go again and then promptly rehired the following year.


RETIREMENT

In 1986 she was involuntarily retired from the Naval Reserve (she was then its oldest member) after being promoted to the rank of Rear Admiral, the first woman to achieve such a high rank. She was also presented with the Defense Distinguished Service Medal, the Department of Defense’s highest non-combat medal. It was not her only award. In 1969 she won the first “computer sciences man of the year” award from the Data Processing Management Association, and in 1973 she became the first woman to be made a Distinguished Fellow of the British Computer Society.

The Navy loved her back, and in 1995 she became only the second woman to have a fighting ship named after her. The guided missile destroyer USS Hopper is still on active duty, and the ship’s coat of arms carries her motto “Aude et Effice” – “Dare and Do”.

Hopper carried on teaching, this time as an ambassador for DEC. She still wore her naval uniform to lectures and provided continuing inspiration, particularly to women. Female staff at Microsoft formed a group calling itself Hoppers, and set up a scholarship in her name.

Hopper always said she wanted to see the new century roll over, but sadly it was not to be. She passed away on New Year’s Day 1992 and was buried with full naval honors at Arlington National Cemetery.

THE GRACE MURRAY HOPPER AWARD

While many awards added Grace Hopper’s name after her death in 1992, the Grace Murray Hopper Award has been presented by the Association for Computing Machinery (ACM) since 1971. The award is given to a young specialist (under 35 years of age) who has made a significant contribution to the field of computing.

Award Winners

1971 Donald Knuth
1972 Paul H. Dirksen, Paul H. Cress
1973 Lawrence M. Breed, Richard H. Lathwell, Roger Moore
1974 George N. Baird
1975 Allen L. Scherr
1976 Edward H. Shortliffe
1977 not awarded
1978 Raymond Kurzweil
1979 Steve Wozniak
1980 Robert M. Metcalfe
1981 Daniel S. Bricklin
1982 Brian K. Reid
1983 not awarded
1984 Daniel Henry Holmes Ingalls, Jr.
1985 Cordell Green
1986 Bill Joy
1987 John Ousterhout
1988 Guy L. Steele
1989 W. Daniel Hillis
1990 Richard Stallman
1991 Feng-hsiung Hsu
1992 not awarded
1993 Bjarne Stroustrup
1994 not awarded
1995 not awarded
1996 Shafrira Goldwasser
1997 not awarded
1998 not awarded
1999 Wen-mei Hwu
2000 Lydia Kavraki
2001 George Necula
2002 Ramakrishnan Srikant
2003 Stephen W. Keckler
2004 Jennifer Rexford
2005 Omer Reingold
2006 Daniel Klein
2007 Vern Paxson
2008 Dawson Engler
2009 Tim Roughgarden
2010 Craig Gentry


Scientist: Jeffrey D. Ullman – Vishnupriya Shaji, Roll no: 36


Jeffrey D Ullman
Jeffrey David Ullman is a renowned computer scientist who was born on November 22, 1942. His textbooks on compilers (various editions are popularly known as the Dragon Book), data structures, theory of computation, and databases are regarded as standards in their fields. Ullman received a Bachelor of Science degree in Engineering Mathematics from Columbia University in 1963 and his Ph.D. in Electrical Engineering from Princeton University in 1966. He worked for several years at Bell Labs, and from 1969 to 1979 he was a professor at Princeton.

Scientist: Paul Mockapetris – Vishnupriya Shaji, Roll no: 36


PAUL MOCKAPETRIS

Paul Mockapetris was born on 18 November 1948 in Boston, Massachusetts. He received BS degrees in Physics and Electrical Engineering from MIT in 1971, and a PhD in Information and Computer Science from the University of California, Irvine in 1982.


Paul’s earliest professional work came while he was an MIT student: an early multiprocessor operating system for the Architecture Machine Group, virtual machine operating systems for IBM, and simulation work at Draper Labs. At UC Irvine, for his PhD, Paul worked on the Distributed Computer System, where he built one of the earliest ring LAN hardware systems and a matching network operating system.

At USC’s Information Sciences Institute, Paul started as a research assistant and eventually headed the Communications division. During this time his research included work on many of the fundamental internet protocols, including development of the first SMTP server and, later, the invention of the Domain Name System and the deployment of early root servers and DNS operations. The DNS is an essential part of all web and email addresses and of essentially every application on the internet. Paul has been active in internet community service, spending three years as program manager for networking at ARPA and two years as IETF chair, as well as serving in numerous other roles. In 1995, Paul left academia and took leadership roles at startups, including cable internet at @Home, email at Software.com/Openwave, and integrated SONET and IP products at Fiberlane/Cerent/Siara.

At present, he is Chairman and Chief Scientist at Nominum, where he has returned to his interest in DNS, advancing naming and directory systems for the internet. He also serves as an advisor and board member for various other startups. Paul continues to believe that the internet’s future is ahead of it.
Paul Mockapetris expanded the Internet beyond its academic origins by inventing the Domain Name System (DNS) in 1983.


At USC’s Information Sciences Institute, Mockapetris recognized the problems with the early Internet’s (then ARPAnet’s) system of holding name-to-address translations in a single table on a single host (HOSTS.TXT). Instead, he proposed a distributed and dynamic naming system, essentially the DNS of today. Rather than simply looking up host names, DNS created easily identifiable names for IP addresses, making the Internet far more accessible for everyday use. After the formal creation of the Internet Engineering Task Force (IETF) in 1986, DNS became one of the original Internet Standards.
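
As a minimal sketch of what DNS does for an application today, the following uses only Python's standard library to ask the system resolver for the addresses behind a human-readable name (the host name is just an example):

```python
import socket

def resolve(hostname: str) -> list:
    """Ask the system's DNS resolver for the IPv4/IPv6 addresses of a name."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # string is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("example.com"))  # prints the resolved IP addresses
```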

T-Shirt OS – wearable, shareable, programmable clothing – Roll no: 28


T-Shirt OS – wearable, shareable, programmable clothing

The T-shirt OS is an internet-enabled, 100% cotton t-shirt with a 1024-pixel LED screen, currently at the prototype stage. You use your smartphone to choose what is displayed on the t-shirt, and there is naturally potential to broadcast your Twitter and Facebook posts as well. The t-shirt also has a headphone jack to share songs from iTunes, and it seems you can plug other devices into it too. The potential is exciting, and we hope the technology gets pushed further and further.


T-shirts have long been vehicles of personal expression. Be it your faded old Slayer T, your snippet of snark brashly emblazoned across your chest, your favorite comic book character set to fabric, or just your plain old white vest replete with a few coffee stains; your t-shirt tells people something about who you are. You could change it, constantly, by making it digitally interactive.

Sounds like an idea one could only come up with while drinking? Well, perhaps that’s what the folks at Cute Circuit were doing when they teamed up with whisky-maker Ballantine’s to create T-Shirt OS, the world’s first connected clothing concept that actually looks cool and worth wearing.

London based Cute Circuit has previously made a name for itself with flashy (think LEDs) fabrics and creepy concepts like shirts that can hug you via text message, but this latest project combines every wacky idea into one.

The firm wants to turn the t-shirt into the most creative canvas it can, made up of a large LED screen, camera, microphone, accelerometer and speakers for sound.

The T-shirt itself would act as a thin client with a small electrical brain that can be paired with the much larger processor in a person’s cell phone, making it the most “wearable, shareable, programmable” piece of clothing ever created.


What could you do with such an adaptable shirt? Connect it to Twitter, display your photos, status updates, play your music, take snaps of people on the go, the options are almost endless.

The current version of the T-shirt is controlled by iOS but an Android version will be available later, the company says.

Of course, right now, it’s just a prototype, and not a cheap item to buy by any means, but Cute Circuit believes that could change. The firm is asking for feedback on its idea, and claims it will look into producing the shirts in volume if demand reaches a certain level.

Actually, the question it really begs is: can you wash it? Cute Circuit says the T-shirt is hand-washable if the battery is removed.


Online T-Shirt Design Software – LiveArt Publisher’s Description

Online Design Software, Online Lettering Design, Online T-Shirt Design and Online Boat Sign Design Tool – LiveArt

NewtonIdeas Live Art is WYSIWYG software for creating natural-looking designs for your decal, t-shirt embroidery, or other kind of sign (vinyl, for boats) with our online Flash lettering, t-shirt, and boat sign design software.

To create a preview you need to do just a few steps:

* Write your message.

* Choose the fonts, colors, effects: make it arc, apply shadow or stroke any color you like.

* Add a picture from the gallery.

* Modify size, quantity of your sign or embroidery, add comments.

That’s all you need to do to get a nice custom product (custom online lettering design, custom online t-shirt design, custom online sign design, custom online vinyl or decal design)!

Benefits for visitors

* Ability to preview design before buying it, testing various combinations of texts, fonts, background, colors etc

* Ability to get more information about products, see samples gallery etc

* Ability to obtain all necessary information about your services

* Ability to provide feedback and to contact company representative

Supporting features

* Powerful Live Art flash component with intuitive interface to compose the desired design

* Easy-to-use interfaces to find sample works, managed gallery, possible promotional and educational content

* Very usable navigation, clear content and site structure, involving demonstrations, impressive Flash movies (that could be inserted as intro or as the parts of web site)

* Easily available contact and feedback forms to quickly reach company staff


Autonomic Computing – Roll no: 24


AUTONOMIC COMPUTING

Autonomic computing is a self-managing computing model named after, and patterned on, the human body’s autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system’s complexity invisible to the user.

Autonomic computing is one of the building blocks of pervasive computing, an anticipated future computing model in which tiny – even invisible – computers will be all around us, communicating through increasingly interconnected networks. Many industry leaders, including IBM, HP, Sun, and Microsoft are researching various components of autonomic computing. IBM’s project is one of the most prominent and developed initiatives. In an effort to promote open standards for autonomic computing, IBM recently distributed a document that it calls “a blueprint for building self-managing systems,” along with associated tools to help put the concepts into practice. Net Integration Technologies advertises its Nitix product as “the world’s first autonomic server operating system.”

Autonomic Computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users. An autonomic system makes decisions on its own, using high-level policies; it will constantly check and optimize its status and automatically adapt itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge and planner/adapter for exploiting policies based on self- and environment awareness.

A general problem of modern distributed computing systems is that their complexity, and in particular the complexity of their management, is becoming a significant limiting factor in their further development. Large companies and institutions are employing large-scale computer networks for communication and computation. The distributed applications running on these computer networks are diverse and deal with many tasks, ranging from internal control processes to presenting web content and to customer support.

Additionally, mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. They do so by using laptops, personal digital assistants, or mobile phones with diverse forms of wireless technologies to access their companies’ data.

This creates an enormous complexity in the overall computer network which is hard to control manually by human operators. Manual control is time-consuming, expensive, and error-prone. The manual effort needed to control a growing networked computer-system tends to increase very quickly.

A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.

In a self-managing autonomic system, the human operator takes on a new role: instead of controlling the system directly, he/she defines general policies and rules that guide the self-management process. For this process, IBM defined the following four functional areas:

  • Self-configuration: Automatic configuration of components;
  • Self-healing: Automatic discovery and correction of faults;
  • Self-optimization: Automatic monitoring and control of resources to ensure the optimal functioning with respect to the defined requirements;
  • Self-protection: Proactive identification and protection from arbitrary attacks.

CHARACTERISTICS:

1. Automatic: This essentially means being able to self-control its internal functions and operations.

2. Adaptive: An autonomic system must be able to change its operation (i.e., its configuration, state, and functions).

3. Aware: An autonomic system must be able to monitor (sense) its operational context as well as its internal state in order to assess whether its current operation serves its purpose.

MODEL


A fundamental building block of an autonomic system is the sensing capability (Sensors Si), which enables the system to observe its external operational context. Inherent to an autonomic system is the knowledge of the Purpose (intention) and the Know-how to operate itself (e.g., bootstrapping, configuration knowledge, interpretation of sensory data, etc.) without external intervention. The actual operation of the autonomic system is dictated by the Logic, which is responsible for making the right decisions to serve its Purpose, and is influenced by the observation of the operational context (based on the sensor input).

This model highlights the fact that the operation of an autonomic system is purpose-driven. This includes its mission (e.g., the service it is supposed to offer), the policies (e.g., those that define the basic behaviour), and the “survival instinct”. Seen as a control system, this would be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm combined with a set of heuristics bounding its operational space.
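
A minimal sketch of this model, with assumed names and a simulated load rather than any standard autonomic-computing API: a sensor feeds observations to the logic, which, guided by a declared purpose, tells an effector what to do:

```python
import random
import time

TARGET_LOAD = 0.7  # the "Purpose": keep utilisation near 70%

def sensor() -> float:
    """Observe the operational context (here, a simulated load reading)."""
    return random.uniform(0.0, 1.0)

def logic(observed_load: float) -> str:
    """Decide, in light of the purpose, what the effector should do."""
    if observed_load > TARGET_LOAD + 0.1:
        return "add_capacity"
    if observed_load < TARGET_LOAD - 0.1:
        return "remove_capacity"
    return "hold"

def effector(action: str) -> None:
    """Act on the decision (a real system would adjust resources here)."""
    print("action:", action)

if __name__ == "__main__":
    for _ in range(5):  # the autonomic control loop
        effector(logic(sensor()))
        time.sleep(0.1)
```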

Autonomic Cloud Bursts on Amazon EC2

Cluster-based data centers have become dominant computing platforms in industry and research for enabling complex and compute intensive applications. However, as scales, operating costs, and energy requirements increase, maximizing efficiency, cost-effectiveness, and utilization of these systems becomes paramount. Furthermore, the complexity, dynamism, and often time critical nature of application workloads makes on-demand scalability, integration of geographically distributed resources, and incorporation of utility computing services extremely critical. Finally, the heterogeneity and dynamics of the system, application, and computing environment require context-aware dynamic scheduling and runtime management.

An autonomic cloud burst is the dynamic deployment of a software application that normally runs on internal organizational compute resources to a public cloud, in order to address a spike in demand. Provisioning data center resources to handle sudden and extreme spikes in demand is a critical requirement, and this can be achieved by combining both private data center resources and remote on-demand cloud resources such as Amazon EC2, which provides resizable computing capacity in the cloud.


This project envisions a computational engine that can enable autonomic cloud bursts capable of: (1) Supporting dynamic utility-driven on-demand scale-out of resources and applications, where organizations incorporate computational resources based on perceived utility. These include resources within the enterprise and across virtual organizations, as well as from emerging utility computing clouds. (2) Enabling complex and highly dynamic application workflows consisting of heterogeneous and coupled tasks/jobs through programming and runtime support for a range of computing patterns (e.g., master-slave, pipelined, data-parallel, asynchronous, system-level acceleration). (3) Integrated runtime management (including scheduling and dynamic adaptation) of the different dimensions of application metrics and execution context. Context awareness includes system awareness to manage heterogeneous resource costs, capabilities, availabilities, and loads, application awareness to manage heterogeneous and dynamic application resources, data and interaction/coordination requirements, and ambient-awareness to manage the dynamics of the execution context.

 

The Comet service model has three kinds of clouds. The first is a highly robust and secure cloud; nodes in this cloud can be masters. In most applications the data is critical and should stay in the secure space, so only masters in this cloud can handle the whole data set for the application. The second is a secure but not robust cloud. Nodes in this cloud can be workers and provide the Comet shared coordination space. Robust/secure masters and secure workers together construct a global virtualized Comet space. A master generates tasks, which are small units of work for parallelization, and inserts them into the Comet shared coordination space. Each task is mapped to a node on the overlay using its keyword and stored in the storage space of the mapped node. Hence, robust/secure masters and secure workers have the Comet shared space in their architecture substrate. The master provides a management agent for scheduling and monitoring tasks, and it also provides a computing agent because it can contribute computing capability. A secure worker gets one task at a time from the space, so it has a computing agent in its architecture. The workers consume the tasks and return the results to the master through a direct connection.

The third cloud is for unsecured workers. Unsecured workers cannot access the Comet shared space directly and cannot provide their storage to store tasks, but they do provide computing capability; hence they have only a computing agent in their architecture. An unsecured worker requests a task from one of the masters in the robust/secure network. The master then accesses the Comet shared space, gets a task, and forwards it to the unsecured worker. When the worker finishes its task, it sends the result back to the master.
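
A small illustrative sketch of the master/worker pattern described above, under simplified assumptions (a thread-safe queue standing in for the Comet shared coordination space, not the actual Comet implementation):

```python
import queue
import threading

task_space = queue.Queue()  # stand-in for the Comet shared coordination space
results = queue.Queue()

def master(num_tasks: int) -> None:
    """Generate small units of work and insert them into the space."""
    for i in range(num_tasks):
        task_space.put(i)

def worker(worker_id: int) -> None:
    """Consume tasks one at a time and return results to the master."""
    while True:
        try:
            task = task_space.get(timeout=1)
        except queue.Empty:
            return
        results.put((worker_id, task, task * task))  # pretend "work": square the task
        task_space.task_done()

if __name__ == "__main__":
    master(10)
    threads = [threading.Thread(target=worker, args=(w,)) for w in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    while not results.empty():
        print(results.get())
```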


Can you CHOP up autonomic computing?                    

 

The autonomic computing architecture provides a foundation on which self-managing information technology systems can be built. Self-managing autonomic systems exhibit the characteristics of self-configuring, self-healing, self-optimizing, and self-protecting; these characteristics are sometimes described with the acronym CHOP. This article discusses the self-CHOP attributes and, in particular, explains why they are not independent of each other and how self-managing autonomic systems can integrate the CHOP functions.

 

The acronym CHOP is shorthand for configure, heal, optimize, and protect, the fundamental aspects of autonomic computing technology. Autonomic systems are designed to address one or more of these aspects.


The attributes are defined as:

  • Self-configuring – Can dynamically adapt to changing environments. Self-configuring components adapt dynamically to changes in the environment, using policies provided by the IT professional. Such changes could include the deployment of new components or the removal of existing ones, or dramatic changes in the system characteristics. Dynamic adaptation helps ensure continuous strength and productivity of the IT infrastructure, resulting in business growth and flexibility.
  • Self-healing – Can discover, diagnose and react to disruptions. Self-healing components can detect system malfunctions and initiate policy-based corrective action without disrupting the IT environment. Corrective action could involve a product altering its own state or effecting changes in other components in the environment. The IT system as a whole becomes more resilient because day-to-day operations are less likely to fail.
  • Self-optimizing – Can monitor and tune resources automatically. Self-optimizing components can tune themselves to meet end-user or business needs. The tuning actions could mean reallocating resources — such as in response to dynamically changing workloads — to improve overall utilization, or ensuring that particular business transactions can be completed in a timely fashion. Self-optimization helps provide a high standard of service for both the system’s end users and a business’s customers.
  • Self-protecting – Can anticipate, detect, identify and protect against threats from anywhere. Self-protecting components can detect hostile behaviors as they occur and take corrective actions to make themselves less vulnerable. The hostile behaviors can include unauthorized access and use, virus infection and proliferation, and denial-of-service attacks. Self-protecting capabilities allow businesses to consistently enforce security and privacy policies.

A  self-healing autonomic manager can detect disruptions in a system and perform corrective actions to alleviate problems. One form that those corrective actions might take is a set of operations that reconfigure the resource that the autonomic manager is managing. For example, the autonomic manager might alter the resource’s maximum stack size to correct a problem that is caused by erroneous memory utilization. In this respect, the self-healing autonomic manager might be considered to be performing self-configuration functions by reconfiguring the resource to accomplish the desired corrective action.

Self-healing and self-optimizing management could involve self-configuration functions (so, too, could self-protection). Indeed, it often may be the case that actions associated with healing, optimizing, or protecting IT resources are performed by configuration operations. Although self-configuration itself is a broader topic that includes dynamic adaptation to changing environments, perhaps involving adding or removing system components, self-configuration is also fundamental for realizing many self-CHOP functions.

Autonomic Manager


The figure illustrates that an autonomic manager might include only some of the four control loop functions. Consider two such partial autonomic managers: a self-healing partial autonomic manager that performs the monitor and analyze functions, and a self-configuring partial autonomic manager that performs the plan and execute functions, as depicted in the following figure.

 

Integrating self-healing and self-configuring autonomic management functions


The first autonomic manager could monitor data from managed resources and correlate that data to produce a symptom; the symptom in turn is analyzed, and the autonomic manager determines that some change to the managed resource is required. This desired change is captured in the form of Change Request knowledge. The change request is passed to the self-configuring partial autonomic manager that performs the plan function to produce a change plan that is then carried out by the execute function. This scenario details the integration of self-healing and self-configuring autonomic management functions that was introduced earlier.
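
An illustrative sketch of that integration, with an assumed structure rather than IBM's actual autonomic computing toolkit: a monitor/analyze stage produces a change request, which a separate plan/execute stage turns into a change plan and carries out:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """Knowledge passed from the self-healing manager to the self-configuring one."""
    resource: str
    problem: str
    desired_change: dict

def monitor_and_analyze(metrics: dict) -> Optional[ChangeRequest]:
    """Self-healing half: correlate monitored data into a symptom and analyze it."""
    if metrics["heap_used_mb"] > 0.9 * metrics["heap_max_mb"]:
        return ChangeRequest(
            resource="app-server-1",
            problem="erroneous memory utilization",
            desired_change={"max_heap_mb": metrics["heap_max_mb"] * 2},
        )
    return None

def plan_and_execute(request: ChangeRequest) -> None:
    """Self-configuring half: turn the change request into a plan and carry it out."""
    plan = [f"set {key}={value} on {request.resource}"
            for key, value in request.desired_change.items()]
    for step in plan:
        print("executing:", step)  # a real manager would reconfigure the resource here

if __name__ == "__main__":
    request = monitor_and_analyze({"heap_used_mb": 950, "heap_max_mb": 1024})
    if request is not None:
        plan_and_execute(request)
```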

Self-CHOP describes important attributes of a self-managing autonomic system. Self-CHOP is a useful way to characterize the aspects of autonomic computing, but the four disciplines should not be considered in isolation. Instead, a more integrated approach to self-CHOP, such as this article describes, offers a more holistic view of self-managing autonomic systems.

 

 

 

 

 

 

 

 

COMPUTABLE DOCUMENT FORMAT (CDF) by AHALYA, ROLL NO 1


Introducing the Computable Document Format (CDF)

            Today’s online documents are like yesterday’s paper—flat, lifeless, inactive. Instead, CDF puts easy-to-author interactivity at its core, empowering readers to drive content and generate results live.

Launched by the Wolfram Group, the CDF standard is a computation-powered knowledge container—as everyday as a document, but as interactive as an app.

Adopting CDF gives ideas a broad communication pipeline—accelerating research, education, technical development, and progress.


Why Use the Computable Document Format (CDF)?

CDF offers content creators easy-to-author interactivity and convenient deployment options—empowering their readers to drive content and generate results live.

 

Key Advantages of CDF:

 Broader communication pipeline: Create content as everyday as a document, but as interactive as an app.

Built-in computation: Let the reader drive new discovery—live.

Easy-to-author interactivity: Use automated functions and plain English input instead of specialist programming skills for a wide range of applications.

Deployment flexibility: Create once—deploy as slide shows, reports, books, applications, and web objects.

Integrated knowledge: Access specialized algorithms, data, and visualizations for hundreds of subjects.

 

Features of CDF Documents

From bloggers, students, and teachers to business consultants, scientists, engineers, or publishers, CDF delivers features that far surpass those of traditional document formats.

Live Interactive Content


Any element in CDF can be transformed into interactive content easily—true interactivity, not pre-generated or scripted. With the computing power of Mathematica technology, dynamic content in CDF can be driven by real-time computation or prompt live computation for new results, which deeply immerses readers in the content.

All-in-One Format


All elements of a project—calculations, visualizations, data, code, documentation, and even interactive applications—stay together in a uniquely flexible format. That means working on a problem with CDF automatically creates a document that can deliver knowledge to readers and let them drive content live.

Dynamic Math Typesetting


CDF makes mathematical typesetting semantic-faithful, unlike traditional typography. In addition to publication-quality typesetting, a formula can be input in a fully typeset form and then immediately evaluated to produce typeset output that can be edited and re-evaluated. It is no wonder that Wolfram was the major force behind the MathML standard.

Integrated Computational Knowledge


Powered by Mathematica and Wolfram|Alpha technology, CDF brings trillions of pieces of expert-level data and the world’s largest collection of algorithms together in a single platform. Authors in a wide range of fields can instantly create subject-specific content without requiring additional tools.

 

The Power behind CDF

CDF is built on the same technologies that are behind Mathematica—the world’s leading computation platform—and Wolfram|Alpha—the world’s first computational knowledge engine. Every CDF comes with the technology innovations that Wolfram has brought to the world for decades.

Automation by Design


Automation is the key to productive creation. CDF technology applies intelligent automation in every part of the system, from algorithm selection to plot layouts to user interface design. You get reliable, high-quality results without needing expertise—and even if you’re an expert, you get results faster.

Free-Form Linguistic Input


At the core of CDF technology lies the Mathematica language, a powerful and versatile language for content creation. With free-form linguistic input, programming in the Mathematica language can be as easy as entering plain English. Type in your idea and let the system transform it—whether it is a simple plot or a complex image processing operation.

Built-in Knowledge: Algorithms


CDF technology builds in specialized algorithms for many scientific and technical areas, from financial engineering to computational biology, making CDFs on almost any topic easy to create. Specialist functionality is tightly integrated with the core of CDF, providing a smooth workflow for authors and delivering unprecedented computational power to readers.

Built-in Knowledge: Computable Data


With CDF, you have full access to a vast collection of computable data across hundreds of fields, from economy to life science to geography. Real-time access to frequently updated and meticulously maintained computable data makes CDF documents as live and as accurate as possible.

Integrated Graphics & Visualization


CDF and its underlying Mathematica platform provide the world’s most sophisticated graphics and visualization functionality by any measure. Interactive 3D graphics, complex scientific plots, expansive business charts, and automatic graph visualization—everything is fully built-in and ready to use.

Symbolic Documents


With CDF technology, everything is an expression, even whole documents. That allows them to be operated on programmatically. The symbolic basis of CDFs underlies many features, from cascading stylesheets to immediate deployment of CDFs as presentations, for print or the web, and as applications.

Platform Support for CDF

For Windows and Mac OS X, Wolfram CDF Player offers desktop and web plugin functionality. On Linux systems, CDF Player currently supports desktop functionality only.


The web plugin has been tested with the following browsers:

Windows 7/Vista/XP: Internet Explorer, Firefox, Chrome, Opera, Safari

Mac OS X 10.5+: Safari, Firefox, Chrome(4.0+), Opera(10.5+)

Linux 2.4+: Desktop functionality only

System requirements:
Processor: Intel Pentium III 650 MHz or equivalent
System Memory (RAM): 512 MB required; 1 GB+ recommended

 

CDF on Mobile Devices

We are actively pursuing solutions for mobile devices, including cloud-based services, to make CDF available to anyone, anywhere. The iPad is an important part of our CDF strategy for accessing educational apps, business reports, and other interactive computational material.
