VISIBLE LIGHT COMMUNICATION (Roll No: 39)


Visible Light Communication


Visible light is only a small portion of the electromagnetic spectrum. Visible light communication (VLC) is a data communications medium that uses visible light between 400 THz (780 nm) and 800 THz (375 nm). Visible light is not injurious to vision.
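Frequency and wavelength are related by f = c / λ. As a quick check of the figures quoted above, here is a minimal Python sketch using only those numbers:

```python
# Relate the wavelengths and frequencies quoted above via f = c / lambda.
C = 299_792_458  # speed of light, m/s

def wavelength_nm_to_freq_thz(wavelength_nm: float) -> float:
    """Return the optical frequency in THz for a free-space wavelength in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (780, 375):  # band edges of the visible light used for VLC
    print(f"{nm} nm  ->  {wavelength_nm_to_freq_thz(nm):.0f} THz")
# 780 nm -> ~384 THz and 375 nm -> ~800 THz, i.e. roughly the 400-800 THz band above.
```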

 


VLC systems are presently being developed by scientists seeking to create ultra-high-speed, high-security, biologically friendly communications networks. Such networks would allow the creation and expansion of seamless computing applications using very-large-bandwidth, high-frequency pulsed light instead of radio waves and microwaves.

Their use may help provide partial or full solutions to a number of technological problems, such as the increasingly limited availability of conventional radio bandwidth for electronic equipment.

VLC appears to be an important potential component in expanding usable bandwidth, protecting sensitive electrical equipment and data, creating more biologically friendly communications technology, and helping develop seamless computing applications.

The Visible Light Communications Consortium (VLCC), which mainly comprises Japanese technology companies, was founded in November 2003. It promotes the use of visible light for data transmission through public relations and works to establish consistent standards. The work done by the VLCC is split among four committees:

1. Research Advancement and Planning Committee

This committee is concerned with all organizational and administrative tasks, such as budget management and supervising the different working groups. It also researches questions such as intellectual property rights in relation to VLC.

2. Technical Committee

The Technical Committee is concerned with technological matters such as data transmission via LEDs and fluorescent lights.

3. Standardization Committee

The Standardization Committee is concerned with standardization efforts and with proposing new suggestions and additions to existing standards.

4. Popularization Committee

The Popularization Committee aims to raise public awareness of VLC as a promising technology with widespread applications. It also conducts market research for that purpose.

LED (Light Emitting Diode) VLC technology

LED (Light Emitting Diode) Visible Light Communication (VLC) systems are recognized as a potentially valuable addition to future generations of technology, with the potential to use light for advanced communication at ultra-high speeds surpassing those of current wireless systems. One goal of researchers is to allow data transfer at 100 megabits per second (Mbps) in offices and homes by modulating the light from upgraded lighting systems.
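The idea above amounts to switching the LED's output fast enough that flicker is invisible while a photodiode can still decode the pattern. As an illustration only, the sketch below shows the simplest possible scheme, on-off keying (OOK), where each bit maps to the LED being driven on or off for one symbol period; the bit rate and sampling figures are arbitrary assumptions, not parameters of any VLC standard.

```python
import numpy as np

def ook_modulate(bits, bit_rate_hz=1_000_000, samples_per_bit=10):
    """Map a bit sequence to an on-off keyed LED drive waveform.

    Returns (time_axis_seconds, drive_levels) where 1.0 means LED on and
    0.0 means LED off. Values are illustrative assumptions only.
    """
    drive = np.repeat(np.asarray(bits, dtype=float), samples_per_bit)
    dt = 1.0 / (bit_rate_hz * samples_per_bit)
    t = np.arange(drive.size) * dt
    return t, drive

def ook_demodulate(drive, samples_per_bit=10, threshold=0.5):
    """Recover bits by averaging each symbol period and thresholding."""
    symbols = drive.reshape(-1, samples_per_bit).mean(axis=1)
    return (symbols > threshold).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
t, waveform = ook_modulate(bits)
assert ook_demodulate(waveform) == bits  # round-trip check
```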

If it is developed correctly, many of the problems associated with present-day infrared, radio-wave and microwave communications systems (and lighting technology) could be at least partially resolved, and a more biologically friendly system made available to industry and the general public.

A further advantage is that VLC systems can transmit data more securely over short distances than radiofrequency/microwave communications devices whose signals can be easily detected outside the rooms and buildings they originate in.

Lighting Types and VLC

At present, incandescent and fluorescent lamps are the predominant sources of artificial lighting, with incandescent units being phased out under a strong drive by many governments worldwide to reduce energy wastage. They are generally being replaced with energy-efficient alternatives such as fluorescent lights, compact fluorescents and LEDs.

In one Visible Light Communication project, the short transient time of the LED on/off switching process was investigated further. A high-speed wireless communication system embedded in an LED lighting system was built. The duplex communication system provides both downlink and uplink over different frequencies of light. Several experiments were conducted on the system: off-the-shelf components were used to build the driver circuit, and the performance of the system was evaluated in terms of data transmission rate, transmission distance and the field of view of the transmitter.

With preparations well under way for a societal shift to solid-state lighting based on high-output LEDs, a proverbial light bulb has appeared above the heads of some forward-looking engineers. Their proposal: With enough advance work, every new LED light fixture could also be wired into the network backbone, accomplishing ubiquitous wireless communications to any device in a room without burdening the already crowded radio-frequency bands. Visible light communications (VLC) is being refined by industry, standards groups and well-funded government initiatives. And the stakes are enormous, since the traditional lighting market is measured in trillions of dollars and the transition to solid-state has already begun.

Visible Light Communication uses light emitting diodes (LEDs) for the dual role of illumination and data transmission. Using the visible light spectrum, which is free and less crowded than other frequencies, wireless services can be piggy-backed onto existing lighting installations. With this leading-edge technology, data including video, audio and internet traffic can be transmitted at high speed using LED light.

VLC technology has the potential to deliver data transfer rates in excess of hundreds of megabits per second. Light radiation neither constitutes nor suffers from electromagnetic interference (EMI), making VLC a very attractive technology in environments where EMI is an issue, such as hospitals and aircraft. In addition, where security of local communication is important, e.g. in defence and finance applications, D-Light technology offers a secure medium for communication in an office/building environment.

 


VLC Applications:

A wide range of applications would benefit from using novel visible light communications:

Wi-Fi Spectrum Relief – Providing additional bandwidth in environments where licensed and/or unlicensed communication bands are congested

Smart Home Network – Enabling smart domestic/industrial lighting; home wireless communication including media streaming and internet access

Commercial Aviation – Enabling wireless data communications such as in-flight entertainment and personal communications

Hazardous Environments – Enabling data communications in environments where RF is potentially dangerous, such as oil & gas, petrochemicals and mining

Hospital and Healthcare – Enabling mobility and data communications in hospitals

Defence and Military Applications – Enabling high data rate wireless communications within military vehicles and aircraft

Corporate and Organisational Security – Enabling the use of wireless networks in applications where (WiFi) presents a security risk

Underwater Communications – Enabling communications between divers and/or remote operated vehicles

Location-Based Services – Enabling navigation and tracking inside buildings.


Biologically friendly Visible Light Communication technology

Most wireless communications today are based on radio-frequency waves generated, transmitted and received by electronic devices; Wi-Fi, 3G and Bluetooth are examples of this widely used technology. Radio-frequency-based systems suffer from the increasingly limited availability of conventional bandwidth for electronic equipment. Even the fastest of these radio-frequency data transmission networks cannot compete with the communication potential of visible light transmission at higher speeds. The fastest networks today are equipped with lasers, fiber-optic cabling, network appliances and adaptive equipment. Next-generation wireless networks are expected to use light as a transmission medium, with the potential to deliver data transfer rates in excess of hundreds of megabits per second.

Light radiation neither constitutes nor suffers from electromagnetic interference, making visible light modulation a very attractive technology in environments where electromagnetic interference is an issue, such as hospitals and ambulances. It also increases safety by avoiding interference with GPS navigation systems on board marine vessels and aircraft.

The future of wireless communications requires biologically friendly networks that provide a less harmful electromagnetic environment. Currently used communication signals in the microwave spectrum may in many cases lead to negative health consequences when humans are exposed to raised radio-frequency and microwave levels. Visible Light Communication systems are presently being developed by major telecommunication companies, government agencies and scientific research institutions. The IEEE Wireless Personal Area Networks working group has worked out a unified standard proposal for Visible Light Communication technologies. The integration of the new systems promises to be seamless, as they use modulated light wavelengths in the visible spectrum emitted and received by a variety of suitably adapted and widely used light sources.

Commonly used devices such as indoor and outdoor lighting, car lights, displays, illuminated signs, televisions, computer screens, digital cameras and mobile phones contain light-emitting diodes that can be used for this new type of communication. A biologically friendly light-modulation system can provide basic networking capabilities in offices and homes by upgrading the lighting systems incorporated into home and outdoor networks. The use of light modulation provides solutions to a number of biological and technological problems, avoiding negative health consequences as well as interference with sensitive electrical equipment.

Car makers are working on incorporating visible light modulation principles for increased car and driver safety. Car lights built with light-emitting diodes may use infrared and visible light frequencies to transmit signals between cars. Just as brake lights send a message to the driver to stop, the modulated signals of emitting and receiving light diodes establish a photonic communication channel that sends information to the VLC-aware vehicle's onboard control system, which interacts with the driver and the braking system to help avoid collisions.



3D GESTURE APPLICATION IN SMARTPHONES (Roll No: 38)



Microchip has announced its patented GestIC technology, which could enable people to use simple hand gestures to control their smartphone. The configurable MGC3130 is the world’s first electrical-field-based 3D gesture controller, and could lead to intuitive user interfaces for a range of devices.

It uses sensors to track changes in the electrical field around the phone instead of a camera, using 90% less power. As its power consumption is only 150 microwatts in its active sensing state, the MGC3130 enables always-on 3D gesture recognition.

GestIC technology utilizes thin sensing electrodes made of any conductive material to enable invisible integration behind the device’s housing. This allows for visually appealing industrial designs at very low total system cost. It also provides 100% surface coverage (eliminating “angle of view” blind spots) and has a detection range of up to 15 cm. Mass production of this technology is expected in April 2013.


Microchip Technology Inc., a leading provider of microcontroller, analog and Flash-IP solutions, today announced its patented GestIC® technology, which enables the next dimension in intuitive, gesture-based, non-contact user interfaces for a broad range of end products.  The configurable MGC3130 is the world’s first electrical-field (E-field)-based 3D gesture controller, offering low-power, precise, fast and robust hand-position tracking with free-space gesture recognition.


With power consumption as low as 150 microwatts in its active sensing state, the MGC3130 enables always-on 3D gesture recognition—even for battery-powered products where power budgets are extremely tight.  In fact, the MGC3130’s low-power design and variety of configurable power modes provide the lowest power consumption of any 3D sensing technology—up to 90% lower than camera-based gesture systems.


GestIC technology achieves the exceptionally high gesture-recognition rates required by today’s consumer products through its on-chip library—called the Colibri Suite—of intuitive and natural human gestures.  The Colibri Suite combines a stochastic Hidden Markov model and x/y/z hand-position vectors to provide designers with a reliable set of recognized 3D hand and finger gestures that can be easily employed in their products.  Examples include Wake-Up on Approach, Position Tracking, Flick Gestures, Circle Gestures and Symbol Gestures to perform functions such as on/off, open application, point, click, zoom, scroll, free-space mouseover and many others.  Designers can use this library to get to market quickly and reduce development risks, by simply matching their system commands to Microchip’s extensive set of predetermined and proven gestures.  Additionally, the chip provides developers the flexibility to utilize pre-filtered electrode signals for additional functionality in their applications.
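To make the idea of "matching system commands to predetermined gestures" concrete, here is a hypothetical application-side dispatch table. The gesture names loosely mirror the Colibri Suite examples listed above, but the event labels and handler functions are invented for illustration and are not Microchip's API.

```python
# Hypothetical mapping of recognised gestures to application commands.
# Event labels and handlers are illustrative only, not a real GestIC driver API.

def wake_screen():
    print("waking screen")

def open_application():
    print("opening application")

def scroll(direction):
    print(f"scrolling {direction}")

def zoom(factor):
    print(f"zooming x{factor}")

GESTURE_COMMANDS = {
    "wake_up_on_approach":      wake_screen,
    "flick_east":               lambda: scroll("right"),
    "flick_west":               lambda: scroll("left"),
    "circle_clockwise":         lambda: zoom(1.2),
    "circle_counterclockwise":  lambda: zoom(0.8),
    "symbol_open":              open_application,
}

def on_gesture(event_name: str) -> None:
    """Dispatch a recognised gesture to the matching system command."""
    handler = GESTURE_COMMANDS.get(event_name)
    if handler:
        handler()

on_gesture("flick_east")   # -> scrolling right
```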


GestIC technology utilizes thin sensing electrodes made of any conductive material, such as Printed Circuit Board (PCB) traces or a touch sensor’s Indium Tin Oxide (ITO) coating, to enable invisible integration behind the device’s housing.  This allows for visually appealing industrial designs at very low total system costs.  Additionally, the technology provides 100% surface coverage, eliminating “angle of view” blind spots found in other technologies.  With a detection range of up to 15 cm, the MGC3130 is the ideal technology for products designed to be used in close proximity for direct user-to-device interaction.  With its range of configurable, smart features, the MGC3130 uniquely enables the next breakthrough in human-machine-interface design across various industries.  Microchip is already working with input-device and other product manufacturers to implement exciting and efficient user-input controls.  Example applications include keyboards that take advantage of the advanced interface capabilities in the new Windows® 8 operating system, using hovering motions and free-space gesture controls, instead of reaching over to touch a screen.


The MGC3130 provides a sophisticated, precise and robust 3D gesture-interface and hand-position tracking solution, with features such as:

150 DPI, mouse-like resolution, and a 200 Hz sampling rate to sense even the fastest hand and finger motions

Super-low-noise analog front end for high-accuracy interpretation of electrode sensor inputs

Configurable Auto Wake-Up on Approach at 150 microwatts current consumption, enabling always-on gesture sensing in power-constrained mobile applications

Automated self calibration, for continued high accuracy over a product’s lifetime

32-bit digital signal processing, for real-time processing of x/y/z positional data and the Colibri Suite gesture library

Integrated Flash memory for the easy upgrading of deployed products

70–130 kHz E-field with frequency hopping to eliminate RF interference, plus resistance to ambient light and sound interference

Microchip’s Sabrewing Single Zone Evaluation Kit (part # DM160217), also announced today, is available now for $169 via any Microchip sales representative. It enables development with the MGC3130 by providing a selectable electrode size of 5” or 7”. The Colibri Suite is an extensive library of proven and natural 3D gestures for hands and fingers that is pre-programmed into the MGC3130.

Pricing & Availability

Samples of Microchip’s MGC3130, featuring GestIC technology, are also available today in a 5×5 mm 28-pin QFN package.  Volume production is expected in April 2013, at $2.26 each in high volumes.

This week, Microchip Technology, a large U.S. semiconductor manufacturer, says it is releasing the first controller that uses electrical fields to make 3-D measurements.

The low-power chip makes it possible to interact with mobile devices and a host of other consumer electronics using hand gesture recognition, which today is usually accomplished with camera-based sensors. A key limitation is that it only recognizes motions, such as a hand flick or circular movement, within a six-inch range.

“That’s the biggest drawback,” says University of Washington computing interface researcher Sidhant Gupta. “But I think, still, it’s a pretty big win, especially when compared to a camera system. It’s low-cost and low-power. I can completely see it going into phones.”

Gesture recognition technology has advanced in recent years with efforts to create more natural user interfaces that go beyond touch screens, keyboards, and mice (see “What Comes after the Touchscreen?”). Microsoft’s Kinect made 3-D gesture recognition popular for game consoles, for example. But while creative uses of the Kinect have proliferated, the concept hasn’t become mainstream in desktops, laptops, or mobile devices quite yet.


Today, Microsoft, along with other companies such as Leap Motion and Flutter, is working to improve upon and expand camera-based technology to new markets (see “Leap 3D Out-Kinects Kinect” and “Hold Your Hand Up to Play Some Music”). For smartphones and tablets, Qualcomm’s newest Snapdragon mobile device chip includes gesture recognition abilities via its camera, but few mobile devices make use of gesture control.

Despite the six-inch distance limitation, the electrical-field controller could have some interesting advantages compared to camera sensors. Power consumption is a key issue for battery-powered devices. Microchip’s controller uses 90 percent less power than camera-based gesture systems, the company says, and it can be left always on, so that it could be used to, say, wake up a smartphone screen from sleep mode when a person’s hand nears.

The controller works by transmitting an electrical signal and then calculating the three-coordinate position of a hand based on the disturbances to the field the hand creates. Whereas many camera systems have “blind spots” for close-up hand gestures and can fail in low light, the Microchip controller works well under these conditions and doesn’t require an external sensor (its sensing electrodes can sit behind a device’s housing).
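As a toy illustration of the idea of locating a hand from disturbances measured at several electrodes (this is not Microchip's algorithm; the electrode layout and the simple signal-weighted centroid below are assumptions made purely for illustration), one could estimate a 2D position like this:

```python
# Toy estimate of hand position from per-electrode disturbance readings.
# NOT Microchip's algorithm: the electrode coordinates and the weighted-centroid
# rule are assumptions chosen only to illustrate the general idea.

ELECTRODES = {            # (x, y) positions in cm on an assumed sensing frame
    "north": (0.0,  5.0),
    "south": (0.0, -5.0),
    "east":  (5.0,  0.0),
    "west":  (-5.0, 0.0),
}

def estimate_xy(readings: dict) -> tuple:
    """Weight each electrode position by its disturbance reading."""
    total = sum(readings.values())
    x = sum(ELECTRODES[name][0] * r for name, r in readings.items()) / total
    y = sum(ELECTRODES[name][1] * r for name, r in readings.items()) / total
    return x, y

# A hand nearer the east electrode disturbs it most:
print(estimate_xy({"north": 0.2, "south": 0.2, "east": 0.5, "west": 0.1}))
```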

Perhaps most interesting, the controller could easily go into electronics that don’t have a camera, including car dashboards, keyboards, light switches, or a music docking station. In fact, Microchip Technology already sells components to 70,000 customers that make these products.

The controller comes with the ability to recognize 10 predefined gestures, including wake-up on approach, position tracking, and various hand flicks, but it can also be programmed to respond to custom movements. Similar to the programming of voice recognition software, Microchip Technology built the gesture library using algorithms that learned from how different people make the same movements. These gestures can then be translated to functions on a device, such as on/off, open application, point, click, zoom, or scroll.

The precision is about the same as using a mouse, but the system has limitations. It can’t yet distinguish between, say, an open hand and a closed fist, or simultaneous movements of different fingers, an area the company wants to improve.


Today, less than a year after acquiring the German startup that developed the technology, the company is making a development kit available for sale.

3D gesture breakthrough for tablets and smartphones 


US-based semiconductor developer Microchip claims its new GestIC MGC3130 chip will prove revolutionary for mobile devices.

Using an electrical field to allow for 3D gesture control, Eric Lawson of Microchip says that while inserting the 5 mm x 5 mm chip into a smartphone may prove to be a “difficult design job”, use within tablet devices will be “far more straightforward”.

The director of the firm’s human-machine interface division, Fanie Duvenhage, said at its launch that GestIC technology is likely to appear within tablets or e-readers by Christmas 2013.

GestIC uses thin sensing electrodes to enable “invisible integration” behind a device’s design, as marketing manager Lawson puts it, while Duvenhage has invited comparisons with the gesture technology seen in the 2002 movie Minority Report, joking that GestIC is “pretty much in line” with what viewers saw in the movie “except without the ugly gloves” worn by Tom Cruise’s character, John Anderton.

Power

While GestIC is limited to a six-inch range, Sutherland notes that one of the previous complaints about camera-based gesture control is that it has a “blind spot” when users are within a few inches of the sensor. Lawson foresees GestIC being used “in tandem” with camera devices in the near future as well.

An estimated power consumption as low as 150 microwatts in its active sensing state will allow GestIC to offer always-on 3D gesture recognition, and Sutherland is impressed by this element of the product, with the chip said to consume 90 per cent less power than camera-based 3D sensing technology.

“Previously it’s been difficult to achieve gesture control with mobile devices because the environment in which they’re being used is changed rapidly – the phone itself changes its position, the distance from the camera changes and the lighting as well. Then there’s lots of shaking that you have to deal with,” says Sutherland.

“All of those are problems that have to be solved. That’s obviously why [people] are looking at alternatives to the camera sensors.”

Intended to fit a broad range of products –  Microchip’s customer base numbers 70,000 companies and individuals – GestIC is based on technology acquired through the company’s purchase this year of Germany-based Ident Technology.

Recognised gestures will include possibilities such as “wake-up on approach”, says Lawson, as well as position tracking, flick gestures, circle gestures and symbol gestures, all of which can be applied to turning something on or off, opening an application, pointing, clicking, zooming and scrolling.

Tablets, keyboards and mice and most types of “peripheral interface devices” were pinpointed by Lawson as immediate development areas.

XYZ Interactive enables touchless 3D gesture recognition for mobile, automotive and consumer electronics. Our low-cost, low-power technology delivers touchless interactions such as scrolling through menus, photos, and maps without touching the device. Unlike camera-based solutions, our technology can be always-on and performs right up to the surface of the screen without blind spots. XYZ Interactive is enabling next generation touchless and gesture input for 3D games & user interfaces.


INDIAN SUPER COMPUTERS – by FATHIMA MEHSINA


India's supercomputer programme was started in the late 1980s because the import of Cray supercomputers was denied under an arms embargo imposed on India: as dual-use technology, they could also be used for developing nuclear weapons.

Our normal computers have a single processor with multiple cores. Supercomputers are designed with a large number of processors.

Supercomputers, combined with artificial intelligence, are gradually emulating the human brain. But will they ever match the power of the brain?

Supercomputers are being used to understand the brain even, in some cases, by reverse engineering it. We can simulate the brain using supercomputers but the brain remains a mystery. Many functions of the brain will be mimicked by supercomputers but the brain will continue to remain a mystery for a long time to come.

What are supercomputers primarily used for in India?

Supercomputers are primarily used for weather forecasting, which requires a lot of computing power. They are also being used for oil exploration by companies like Indian Oil Corp. Ltd. Climate modelling to detect trends like global warming is another area. Supercomputers are also needed for space programmes, nuclear reaction simulations, biotechnology and gene sequencing, and a whole range of scientific applications (highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modelling and physical simulations). All these applications are connected by C-DAC on the National Knowledge Network (NKN) and use the grid computing model to launch applications from anywhere while combining the computing power of these different groups. This has been taking place since 2005, when the grid was introduced.

 PARAM:

Vijay Pandurang Bhatkar, one of India’s most acclaimed scientists, is best known as the architect of India’s first supercomputer, the PARAM 8000. The PARAM series of supercomputers has been designed and assembled by the Pune-based Centre for Development of Advanced Computing (C-DAC), of which Bhatkar was the founder executive director. He is also credited with the creation of several national institutions, including the Electronics Research and Development Centre (ER&DC) in Thiruvananthapuram, the ETH Research Laboratory in Pune, the International Institute of Information Technology (I2IT), also in Pune, and the India International Multiversity. With the help of this Pune-based varsity, Bhatkar aims to resurrect India’s ancient ‘Gurukul’ system of learning that originated in Vedic times.

The Param Yuva, the predecessor of the Yuva II, has a peak speed of 54 teraflops. Param Yuva-II, claimed to be India’s fastest supercomputer, has been unveiled in Pune. It was developed by the Centre for Development of Advanced Computing (C-DAC).

The entire project is said to have involved around 300 CDAC engineers. The Param Yuva-II is also said to be a milestone in the Indian Information Technology industry.

SAGA 220:

The Indian Space Research Organisation unveiled a supercomputer on 2 May 2011 that was, at the time, India’s fastest in terms of theoretical peak performance: 220 teraflops (220 trillion floating-point operations per second). The supercomputing facility, named the Satish Dhawan Supercomputing Facility, is located at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram. The supercomputer, SAGA-220, was inaugurated by Dr K Radhakrishnan, Chairman, ISRO, at VSSC.


 

The new Graphic Processing Unit (GPU) based supercomputer named SAGA-220 (Supercomputer for Aerospace with GPU Architecture-220 TeraFLOPS) will be used by space scientists for solving complex aerospace problems. 

 

EKA:
 
EKA is a supercomputer built by the Computational Research Laboratories (a subsidiary of Tata Sons) with technical assistance and hardware provided by Hewlett-Packard. EKA uses 14,352 cores based on Intel quad-core Xeon processors. The primary interconnect is InfiniBand 4x DDR. EKA occupies an area of about 4,000 sq. ft.


With India making a mark in every sector of the technology field, the country has shown its importance in the supercomputing race too. Eight of the top 500 supercomputers are from India, with Tata Group’s EKA, a Hewlett-Packard (HP)-based system, leading the Indian entries at rank 13.

 

VIRGO:

TOP500, a global project that ranks the most powerful known computer systems in the world, has placed the new IBM Virgo supercluster at IIT-Madras at 224th in the top 500. Where a normal desktop CPU draws around 300 watts of power, this cluster consumes about 120 kilowatts and is spread across 36 sq m.


PRITHVI:

The Indian Institute of Tropical Meteorology, Pune, has a 45 teraflop/s machine called Prithvi, which is being used for climate research and operational forecasting.

India’s Top 15 super computers

Pune has 3 of the top 6 computers in India, while IBM leads in the number of supercomputers in the list with 6 systems, followed by HP with 5 systems and SGI with 3 systems.


 

EKA is also nearly 4 times faster than the second-placed supercomputer, developed by C-DAC, Pune.

As of November 2012, India has 8 systems on the Top500 list ranking 82, 127, 186, 199, 200, 288, 364 and 386.

Rank | Site                                                       | Name     | Rmax (TFlop/s) | Rpeak (TFlop/s)
82   | Centre for Mathematical Modelling and Computer Simulation | –        | 303.9          | 360.8
127  | Vikram Sarabhai Space Centre, ISRO                         | SAGA-220 | 188.7          | 394.8
186  | Computational Research Laboratories                        | EKA      | 132.8          | 172.6
199  | Semiconductor Company                                      | –        | 129.2          | 182.0
200  | Semiconductor Company                                      | –        | 129.2          | 182.0
288  | IT Services Provider                                       | –        | 104.2          | 199.7
364  | Indian Institute of Technology, Madras                     | Virgo    | 91.1           | 97.8
386  | IT Services Provider                                       | –        | 88.5           | 168.1

LED DISPLAY TECHNOLOGY (Roll No: 15)


LED DISPLAY TECHNOLOGY

About LED Displays:


                                      Unlike traditional media such as newspaper and magazines, LED displays quickly capture attention with a combination of light, color, motion and graphics that get noticed. They also offer infinite options for creating a brand that gets remembered and, in turn, gets results. For most advertisers, the most important market is the local one. Think of LED displays as local advertising and sales partners that are “on” 24/7, working night and day to attract attention, deliver marketing messages and drive sales. LED displays also cost 40 to 60 percent less per thousand viewers than most other forms of advertising. In fact, businesses can communicate with thousands of people each day for just a few dollars. When purchasing an LED display, it is important to understand a few basic principles that will help you select the right product for your application. The following sections present a brief overview of some of these basic principles.


Pitch (Resolution)

Resolution, or the total number of pixels in a display, is a very important factor that affects the performance of the sign. More resolution means more LED diodes and more circuits, which usually means better picture quality.

Pitch is the distance (usually in millimeters) between pixels. Pitch is always measured from the center of one pixel to the center of an adjacent pixel. The smaller the pitch number, the higher the resolution quality. Large  pitch numbers indicate a lower resolution. A pixel can be one single diode, or a pixel can be a cluster of many diodes running off the same circuit.

Viewing Distance and Speed

The distance between your sign and its viewers is the number one factor in determining the type of LED display you will need. Longer distances require less resolution and shorter distances require higher resolutions. In addition, if you are traveling at 55 MPH on a freeway, and the sign is 600 feet away at a truck stop, the text letters must be at least 20 inches tall to be legible. Likewise, if you are standing 60 feet away from a street level sign, the letters need only be two inches tall to be legible.

The rule of thumb is that you need 1 inch of character height for every 30 feet of viewing distance.
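A minimal sketch of that rule of thumb, using the two distances quoted above:

```python
def min_letter_height_inches(viewing_distance_ft: float) -> float:
    """Rule of thumb from the text: 1 inch of letter height per 30 ft of distance."""
    return viewing_distance_ft / 30.0

print(min_letter_height_inches(600))  # 20.0 inches, the freeway example above
print(min_letter_height_inches(60))   # 2.0 inches, the street-level example
```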

LED Diode Density

LED diode density and pixel pitch are the two most critical factors to determine the quality of resolution and display brightness. LED density of a display is the total number of LED diodes in one square meter. It is calculated by multiplying the number of pixels per square meter by the number of LED diodes per pixel.
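Following the definition above, LED density is simply pixels per square meter times diodes per pixel. The pitch and diode counts below are illustrative assumptions, not any particular product's specification:

```python
def led_density(pitch_mm: float, diodes_per_pixel: int) -> float:
    """LEDs per square meter: (pixels per m^2) x (diodes per pixel).

    With a pitch of `pitch_mm`, each square meter holds (1000 / pitch_mm)^2 pixels.
    """
    pixels_per_m2 = (1000.0 / pitch_mm) ** 2
    return pixels_per_m2 * diodes_per_pixel

# Illustrative values only: a 16 mm pitch display with 3 diodes (R, G, B) per pixel.
print(round(led_density(pitch_mm=16, diodes_per_pixel=3)))  # ~11,719 LEDs per m^2
```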

Optec Digital Billboards’ LED display systems have always featured more LEDs per pixel than other LED display manufacturers’.  Our higher LED diode density provides significantly higher brightness and longer display life, since each individual diode can be driven at a lower level of intensity, reserving capacity to extend the display’s life without sacrificing proper viewing brightness.

Virtual Pixel vs True Pixel

Some LED display manufacturers use “virtual pixel” technology. They claim that a “virtual pixel” doubles the actual resolution of the screen, i.e., a screen with a physical (true) resolution of 320×240 pixels is presented as having a “virtual” resolution of 640×480. In a “virtual pixel” display, in an attempt to smooth out the digital image, each pixel of the image corresponds not to an actual module pixel but to a light/data source that is part of the group of pixels forming the “virtual pixel”. In this mode of pixel sharing, one module pixel carries the image information of the whole “virtual group” of pixels (the 2 or 4 pixels that are combined to form the “virtual” effect). Virtual pixels are also known as “pixel sharing” or “dynamic pixels”. Some claim that with “virtual pixels” the displayed image has twice the resolution of the “physical” resolution. This is not true, since one module pixel cannot hold and display the majority of the information from the initial pixels. The majority of the original information vanishes, resulting in distortion of important details and other elements, such as colors, that are part of the initial image.

In actual, physical or true pixel technology, the image is displayed such that each pixel of the original image corresponds to a pixel on the screen. It takes the actual color of each pixel as it is, balanced in brightness and contrast, and no additional corrections are required. Optec Digital Billboards only uses true pixel technology.

Virtual Pixel

Uses pixel sharing to achieve a “virtual” pitch. Attempts to achieve equal resolution using 1/4 the diodes of a true pixel display.

True Pixel

Each pixel is distinct, using an individual group of LEDs. The display has 4x the diode density of a virtual-pitch display, resulting in 4x the brightness, greater color depth and better color accuracy.
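The contrast between the two approaches can be made concrete by counting diodes. In the sketch below, the pitch and diodes-per-pixel figures are assumptions chosen only to illustrate the roughly 4x difference described above:

```python
# Compare physical diode counts for a "virtual pixel" display and a true-pixel
# display advertising the same apparent pitch. Figures are illustrative assumptions.

def diodes_per_m2(physical_pitch_mm: float, diodes_per_pixel: int) -> float:
    pixels_per_m2 = (1000.0 / physical_pitch_mm) ** 2
    return pixels_per_m2 * diodes_per_pixel

APPARENT_PITCH_MM = 10   # the pitch both displays advertise
true_pixel    = diodes_per_m2(APPARENT_PITCH_MM, diodes_per_pixel=3)
virtual_pixel = diodes_per_m2(APPARENT_PITCH_MM * 2, diodes_per_pixel=3)  # shared pixels

print(true_pixel / virtual_pixel)   # 4.0 -- a true-pixel display uses ~4x the diodes
```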

Brightness

The brightness of an LED display is generally expressed by a numerical value in nits. A nit is a unit of luminance equal to one candela per square meter (cd/m²). The higher the number of nits, the brighter the display. In general, 1,500 nits provides readable text in outdoor daylight, while grayscale and outdoor video require up to 5,000 nits for acceptable color depth.

Optec Digital Billboards displays are built with a high-density of super-bright, high quality LED diodes, so our displays typically exceed this standard by a long shot. Contrast ratio is another important factor in overall brightness, and refers to the difference between levels of blacks compared to the levels of whites in the display. Things like reflective surfaces, glare from the sun, and dimming all affect contrast ratio. To optimize the contrast ratio and overall brightness, Optec Digital Billboards displays feature a unique louver system to shade each individual diode from the glare of the sun. The louvers were computer modeled to optimize the view from onlookers below while blocking the maximum amount of sun from all sides. No other system on the market offers this unique, individual diode shading. Together with the high-density array, Optec Digital Billboards displays are truly the brightest, longest lasting product on the market today.

Viewing Angle

Diodes can put out a single, narrow beam of light like a flashlight, or they can output a wide array across a room like a light bulb. Diodes output about the same amount of light no matter what type they are – but the “high-beam” diodes with a narrow angle focus more light into one small spot, whereas the “wide angle” diodes spread their light across the horizon. So, if you were to stand in front of a sign made from “high-beam” diodes with narrow viewing angles, you would see an extremely bright sign if you stood directly in front of it, but the minute you walked away from the small spot light of its focus, you would see nothing but black.

With wide-viewing-angle LEDs, the image is visible with consistent brightness and uniform colors throughout the entire viewing range of the display. Optec Digital Billboards only uses wide-angle 140° LEDs to maximize audience exposure, maintain the highest color accuracy and extend reading times. Maintaining brightness across a wide viewing range also requires greater light output, which is why Optec Digital Billboards uses a higher LED diode density than any other manufacturer in the industry.

LED Overview

LEDs differ from traditional light sources in the way they produce light. In an incandescent lamp, a tungsten filament is heated by electric current until it glows or emits light. In a fluorescent lamp, an electric arc excites mercury atoms, which emit ultraviolet (UV) radiation.

A light emitting diode, in contrast, is made from a chip of semiconducting material in such a way that, when connected to a power source, current flows in one direction but not the other. When current flows through the diode, energy is released in the form of a photon (light). The specific wavelength or color emitted by the LED depends on the materials used to make the diode.

Red LEDs are based on aluminum gallium arsenide (AlGaAs). Blue LEDs are made from indium gallium nitride (InGaN) and green LEDs from aluminum gallium phosphide (AlGaP). White light is created by combining the light from red, green, and blue (RGB) LEDs or by coating a blue LED with yellow phosphor. OCS displays in use today utilize red LEDs.

To make a readable display from LEDs, they are arranged in a matrix. The matrix is configured with a sufficient number of LEDs to allow alphanumeric characters to be formed by illuminating specific patterns, as in the sketch below.
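A minimal sketch of the idea: a 5×7 dot-matrix pattern (a common layout, assumed here purely for illustration), where each 1 marks a diode switched on to form the letter "A":

```python
# Render one character of an assumed 5x7 LED dot matrix: each 1 marks a diode
# that is switched on to form the glyph. The pattern below is illustrative only.
LETTER_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

for row in LETTER_A:
    print("".join("#" if bit == "1" else "." for bit in row))
```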


Wide Temperature Operation – LEDs have wider temperature operating limits; however, their brightness and expected lifetime fall off significantly with heat. Red LEDs, utilized in OCS products on the market today, perform worst, falling off to 40% of brightness at 100°C.


Pixel Pitch – Due to the relatively large size of individual LEDs, they inherently have far fewer pixels per inch than LCDs (a 17:1 ratio of LCD pixels to LED pixels for the same size display area). Accordingly, they are not optimal for displaying pictures, international language characters or complex figures when viewed at close range, as in OCS applications.

Number of Colors – As mentioned above, red LEDs are typically used in OCS equipment in the field today. A single color limits the amount of information that can be conveyed to the customer. Additionally, red is not the optimum color for high-contrast readability in direct sunlight.

Graphic Display Capability – LEDs (as used in OCS equipment today) are limited to displaying simple alphanumeric characters and have no graphic display capability.

International Characters – LEDs used in OCS equipment today cannot support complex multi-byte international language character fonts due to their inherent limitation in resolution.

Flexibility of Display Layout – OCS LED displays are arranged as a fixed number of rows and columns of fixed size characters, allowing no flexibility in the layout of the display.

Cost – Due to the simplicity of the design of LED based OCS displays, they are typically lower cost  than LCD based OCS displays.

 

 

 

 

 

CLUSTER COMPUTING



 INTRODUCTION

                                 Computing is an evolutionary process. Five generations of development history—with each generation improving on the previous one’s technology, architecture, software, applications, and representative systems—make that clear. As part of this evolution, computing requirements driven by applications have always outpaced the available technology. So, system designers have always needed to seek faster, more cost-effective computer systems. Parallel and distributed computing provides the best solution, by offering computing power that greatly exceeds the technological limitations of single-processor systems. Unfortunately, although the parallel and distributed computing concept has been with us for over three decades, the high cost of multiprocessor systems has blocked commercial success so far. Today, a wide range of applications are hungry for higher computing power, and even though single-processor PCs and workstations can now provide extremely fast processing, the even faster execution that multiple processors can achieve by working concurrently is still needed. Now, finally, costs are falling as well. Networked clusters of commodity PCs and workstations using off-the-shelf processors and communication platforms such as Myrinet, Fast Ethernet, and Gigabit Ethernet are becoming increasingly cost-effective and popular. This concept, known as cluster computing, will surely continue to flourish: clusters can provide enormous computing power that a pool of users can share or that can be collectively used to solve a single application. In addition, clusters do not incur a very high cost, a factor that led to the sad demise of massively parallel machines.

                                     Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, are playing a major role in solving large-scale science, engineering, and commercial applications. Cluster computing has emerged as a result of the convergence of several trends, including the availability of inexpensive high-performance microprocessors and high-speed networks, the development of standard software tools for high-performance distributed computing, and the increasing need of computing power for computational science and commercial applications.

What is Clustering?

Clustering is the use of multiple computers, typically PCs or UNIX workstations, multiple storage devices, and redundant interconnections, to form what appears to users as a single highly available system. Cluster computing can be used for load balancing as well as for high availability. It is used as a relatively low-cost form of parallel processing machine for scientific and other applications that lend themselves to parallel operations.

Computer cluster technology puts clusters of systems together to provide better system reliability and performance. Cluster server systems connect a group of servers together in order to jointly provide processing service for the clients in the network.

Cluster operating systems divide the tasks amongst the available servers. Clusters of systems or workstations, on the other hand, connect a group of systems together to jointly share a critically demanding computational task. Theoretically, a cluster operating system should provide seamless optimization in every case.

At the present time, cluster server and workstation systems are mostly used in High Availability applications and in scientific applications such as numerical computations.
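As a minimal sketch of how a scientific job might be split across the nodes of such a cluster, the example below uses MPI (through the mpi4py binding; MPI appears again in the list of parallel programming tools later in this post) to sum partial results computed on each process. It assumes an MPI installation is available and would be launched with something like `mpiexec -n 4 python sum.py`.

```python
# Minimal MPI sketch: each process (one per cluster node or core) computes a
# partial sum, and rank 0 combines the results. Requires MPI and mpi4py;
# run with e.g.:  mpiexec -n 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of processes in the job

N = 1_000_000
# Each rank sums its own interleaved slice of 0..N-1.
partial = sum(range(rank, N, size))

total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 0..{N-1} computed on {size} processes:", total)
```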

Advantages of clustering

  • High performance
  • Large capacity
  • High availability
  • Incremental growth

Applications of Clustering

  • Scientific computing
  • Making movies
  • Commercial servers (web/database/etc)

CLUSTER HISTORY 

                                  The first commodity clustering product was ARCnet, developed by Datapoint in 1977. ARCnet wasn’t a commercial success, and clustering didn’t really take off until DEC released its VAXcluster product in the 1980s for the VAX/VMS operating system. The ARCnet and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. They were supposed to give you the advantage of parallel processing while maintaining data reliability and uniqueness. VAXcluster, now VMScluster, is still available on OpenVMS systems from HP running on Alpha and Itanium systems. The history of cluster computing is intimately tied up with the evolution of networking technology. As networking technology has become cheaper and faster, cluster computers have become significantly more attractive.

CLUSTER COMPUTING

 How to run applications faster?

 There are 3 ways to improve performance:

 ✓  Work Harder

✓  Work Smarter

✓   Get Help

 Era of Computing

✓  Rapid technical advances

✓  The recent advances in VLSI technology

✓  Software technology grand challenge applications have become the main driving force

✓  Parallel computing

COMPONENTS OF CLUSTER COMPUTER 

1.  Multiple High Performance Computers

a. PCs                                                   b. Workstations                                                   c. SMPs (CLUMPS)

2.  State of the art  Operating Systems

a. Linux (Beowulf)                          b.  Microsoft NT (Illinois HPVM)                 c. SUN Solaris (Berkeley NOW)

d. HP UX (Illinois – PANDA)       e. OS gluing layers (Berkeley Glunix)

 3.  High Performance Networks/Switches

a. Ethernet (10Mbps)                     b. Fast Ethernet (100Mbps)                          c. Gigabit Ethernet (1Gbps)

d. Myrinet     (1.2Gbps)                  e. Digital Memory Channel                             f. FDDI

4.  Network Interface Card

 a. Myrinet has NIC                          b. User-level access support

5.  Fast Communication Protocols and Services

a. Active Messages (Berkeley)     b. Fast Messages (Illinois)                             c. U-net (Cornell)

d. XTP (Virginia)

6.  Cluster Middleware

a. Single System Image (SSI)         b. System Availability (SA) Infrastructure

 7.  Hardware

a. DEC Memory Channel, DSM (Alewife, DASH), SMP Techniques

8.  Operating System Kernel/Gluing Layers

a. Solaris MC, Unixware,   GLUnix

9.  Applications and Subsystems

a. Applications (system management and electronic forms)                            b. Runtime systems (software DSM, PFS etc.)

c. Resource management and scheduling software (RMS)

10. Parallel Programming Environments and Tools

a.Threads (PCs, SMPs, NOW..)                              b. MPI                                            c. PVM

d. Software DSMs                                                       e. Compilers                                f. RAD (rapid application development tools)

g. Debuggers                                                                h. Performance   Analysis Tools

i. Visualization Tools

11. Applications

a. Sequential                                                                b. Parallel / Distributed (Cluster-aware app.)

COMPARING  OLD  AND  NEW

                                      Today, open standards-based HPC systems are being used to solve problems ranging from high-end, floating-point-intensive scientific and engineering problems to data-intensive tasks in industry. Some of the reasons why HPC clusters outperform RISC-based systems include:

 1. Collaboration

                                      Scientists can collaborate in real time across dispersed locations – bridging isolated islands of scientific research and discovery – when HPC clusters are based on open source and building-block technology.

 2. Scalability

                                      HPC clusters can grow in overall capacity because processors and nodes can be added as demand increases.

 3. Availability

                                      Because single points of failure can be eliminated, if any one system component goes down, the system as a whole, or the solution (multiple systems), stays highly available.

 4. Ease of technology refresh

                                      Processor, memory, disk or operating system (OS) technology can be easily updated. New processors and nodes can be added or upgraded as needed.

 5. Affordable service and support

                                      Compared to proprietary systems, the total cost of ownership can be much lower. This includes service, support and training.

 6. Vendor lock-in

                                     The age-old problem of proprietary vs. open systems that use industry-accepted standards is eliminated.

 7. System manageability

                                    The installation, configuration and monitoring of key elements of proprietary systems is usually accomplished with proprietary technologies, complicating system management. The servers of an HPC cluster can be easily managed from a single point using readily available network infrastructure and enterprise management software.

 8. Reusability of components

                                     Commercial components can be reused, preserving the investment. For example, older nodes can be deployed as file/print servers, web servers or other infrastructure servers.

 9. Disaster recovery

                                     Large SMPs are monolithic entities located in one facility. HPC systems can be collocated or geographically dispersed to make them less susceptible to disaster.

 CLUSTER CLASSIFICATIONS

Clusters are classified into several categories based on factors such as:

♣  Application target.

♣  Node ownership.

♣  Node Hardware.

♣  Node operating System.

♣  Node configuration.

Clusters based on Application Target are again classified into two:

♣  High Performance (HP) Clusters

♣  High Availability (HA) Clusters

 Clusters based on Node Ownership are again classified into two:

♣  Dedicated clusters

♣  Non-dedicated clusters

Clusters based on Node Hardware are again classified into three:

♣  Clusters of PCs (CoPs)

♣  Clusters of Workstations (COWs)

♣  Clusters of SMPs (CLUMPs)

 Clusters based on Node Operating System are again classified into:

♣  Linux Clusters (e.g., Beowulf)

♣  Solaris Clusters (e.g., Berkeley NOW)

♣  Digital VMS Clusters

♣  HP-UX clusters

♣  Microsoft Wolf pack clusters

 Clusters based on Node Configuration are again classified into:

♣  Homogeneous Clusters – All nodes have similar architectures and run the same OS

♣  Heterogeneous Clusters – Nodes have different architectures and run different operating systems

ARCHITECTURE

                                  A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource. A node:

 ♠  A single or multiprocessor system with memory, I/O facilities, and an OS.

♠  Generally 2 or more computers (nodes) connected together.

♠  In a single cabinet, or physically separated and connected via a LAN.

♠  Appears as a single system to users and applications.

♠  Provides a cost-effective way to gain features and benefits.


Three principle features usually provided by cluster computing are:

1. Availability    2. Scalability   3. Simplification.

                                      Availability is provided by the cluster of computers operating as a single system by continuing to provide services even when one of the individual computers is lost due to a hardware failure or other reason.

                                      Scalability is provided by the inherent ability of the overall system to allow new components, such as computers, to be added as the overall system’s load increases.

                                      Simplification comes from the ability of the cluster to allow administrators to manage the entire group as a single system. This greatly simplifies the management of groups of systems and their applications. The goal of cluster computing is to facilitate sharing a computing load over several systems without either the users of the system or the administrators needing to know that more than one system is involved.

 

FUTURE TRENDS – GRID COMPUTING


                                        As computer networks become cheaper and faster, a new computing paradigm, called the Grid has evolved. The Grid is a large system of computing resources that performs tasks and provides to users a single point of access, commonly based on the World Wide Web interface, to these   distributed resources. Users consider the Grid as a single computational resource. Resource management software, frequently referenced as   middleware, accepts jobs submitted by users and schedules them for execution on appropriate systems in the Grid, based upon resource   management policies.

                                        Users can submit thousands of jobs at a time without being concerned about where they run. The Grid may scale from single systems to supercomputer-class compute farms that utilize thousands of   processors. Depending on the type of applications, the interconnection between the Grid parts can be performed using dedicated high speed networks or the Internet. By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, the Grid promises to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed   groups to work together in ways that were previously impossible.

                                        Some examples of new applications that benefit from using Grid technology include the coupling of advanced scientific instrumentation or desktop computers with remote supercomputers; collaborative design of complex systems via high-bandwidth access to shared resources; ultra-large virtual supercomputers constructed to solve problems too large to fit on any single computer; and rapid, large-scale parametric studies. Grid technology is currently under intensive development. Major Grid projects include NASA’s Information Power Grid, two NSF Grid projects (the NCSA Alliance’s Virtual Machine Room and NPACI), the European Data Grid Project and the ASCI Distributed Resource Management project. The first Grid tools are also already available to developers. The Globus Toolkit [20] represents one such example and includes a set of services and software libraries to support Grids and Grid applications.

HOW CLUSTERS WORK?


                                          A cluster is a network of queue managers that are logically associated in some way. The queue managers in a cluster might be physically remote. For example, they might represent the branches of an international chain store and be physically located in different countries. Each cluster within an enterprise must have a unique name.

Typically a cluster contains queue managers that are logically related in some way and need to share some data or applications. For example you might have one queue manager for each department in your company, managing data and applications specific to that department. You could group all these queue managers into a cluster so that they all feed into the payroll application. Or you might have one queue manager for each branch of your chain store, managing the stock levels and other information for that branch. If you group these queue managers into a cluster, they can all access the same set of sales and purchases applications. The sales and purchases application might be held centrally on the head-office queue manager.


A  LOGICAL VIEW FOR  CLUSTERS


A Beowulf cluster uses a multi-computer architecture. It features a parallel computing system that usually consists of one or more master nodes and one or more compute nodes, or cluster nodes, interconnected via widely available network interconnects. All of the nodes in a typical Beowulf cluster are commodity systems – PCs, workstations, or servers – running commodity software such as Linux.

The master node acts as a server for the Network File System (NFS) and as a gateway to the outside world. As an NFS server, the master node provides user file space and other common system software to the compute nodes via NFS. As a gateway, the master node allows users to gain access through it to the compute nodes. Usually, the master node is the only machine that is also connected to the outside world, using a second network interface card (NIC). The sole task of the compute nodes is to execute parallel jobs. In most cases, therefore, the compute nodes do not have keyboards, mice, video cards, or monitors.

ISSUES TO BE CONSIDERED

 Cluster Networking

                                           If you are mixing hardware that has different networking technologies, there will be large differences in the speed with which data will be accessed and how individual nodes can communicate. If it is in your budget make sure that all of the machines you want to include in your cluster have similar networking capabilities, and if at all possible, have  network adapters from the same manufacturer.

 Cluster Software

                                           We have to build a version of the clustering software for each kind of system included in the cluster.

 Programming

                                           Our code will have to be written to support the lowest common denominator of data types supported by the least powerful node in our cluster. With mixed machines, the more powerful machines will have capabilities that cannot be exploited by the less powerful ones.

 Timing

                                           This is the most problematic aspect of a heterogeneous cluster. Since the machines have different performance profiles, our code will execute at different rates on the different kinds of nodes. This can cause serious bottlenecks if a process on one node is waiting for the results of a calculation on a slower node. The second kind of heterogeneous cluster is made from different machines in the same architectural family: for example, a collection of Intel boxes where the machines are of different generations, or machines of the same generation from different manufacturers.
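
A toy simulation makes the bottleneck visible; ordinary Python threads stand in for nodes of different speeds, and the sleep times are invented purely for illustration.

```python
import threading, time

def node(name, seconds, results):
    # Pretend the calculation takes 'seconds' of work on this node.
    time.sleep(seconds)
    results[name] = seconds

results = {}
t0 = time.time()
fast = threading.Thread(target=node, args=("fast-node", 0.2, results))
slow = threading.Thread(target=node, args=("slow-node", 1.0, results))
fast.start(); slow.start()
fast.join(); slow.join()   # the job cannot finish before the slowest node does
print("wall time ~", round(time.time() - t0, 1), "s; per-node:", results)
```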

 Network Selection

                                           There are a number of different kinds of network topologies, including buses, cubes of various degrees, and grids/meshes. These network topologies are implemented using one or more network interface cards (NICs) installed in the head node and compute nodes of our cluster.

 Speed Selection

                                           No matter what topology you choose for your cluster, you will want the fastest network your budget allows. Fortunately, the availability of high-speed computers has also driven the development of high-speed networking. Examples include 10 Mbit Ethernet, 100 Mbit Ethernet, gigabit networking, and channel bonding.

CONCLUSION

                                            Clusters are being used to solve many scientific, engineering, and commercial problems. We have discussed a sample of these application areas and how they benefit from the use of clusters. The applications studied include a Web server, an audio processing system (voice-based email), data mining, network simulations, and image processing. Many large international Web portals and e-commerce sites use clusters to process customer requests quickly and to maintain high availability 24 hours a day, throughout the year. The capability of clusters to deliver high performance and availability on a single platform is empowering many existing and emerging applications and making clusters the platform of choice.

REFERENCES

http://www.scfbio-iitd.res.in/doc/clustering.pdf

http://sparkscoop.com/cluster-computing

http://www.adarshpatil.com

DNA computing-Roll no:30


Adleman

Today’s computers are millions of times more powerful than their crude ancestors of the 1940s and 1950s. Roughly every two years, computers have become twice as fast while their components have shrunk to half the space. One of the recently introduced unconventional paradigms that promises to have a tremendous influence on the theoretical and practical progress of computer science is DNA computing. The concept of DNA computing was born in 1993, when Professor Leonard Adleman, a mathematician specializing in computer science and cryptography at the Laboratory of Molecular Science, Department of Computer Science, University of Southern California, stumbled upon the similarities between conventional computers and DNA while reading the book “Molecular Biology of the Gene,” written by James Watson, who co-discovered the structure of DNA in 1953. Adleman came to the conclusion that DNA had the computational potential to solve complex mathematical problems, and in 1994 he introduced the idea of using DNA to solve such problems. In fact, DNA is very similar to a computer hard drive in how it stores permanent information about your genes.

 

WHAT IS DNA?

DNA is the master molecule of every cell. It contains vital information that gets passed on to each successive generation. It coordinates the making of itself as well as other molecules (proteins). If it is changed slightly, serious consequences may result. If it is destroyed beyond repair, the cell dies. Changes in the DNA of cells in multicellular organisms produce variations in the characteristics of a species. DNA is one of the nucleic acids, the information-containing molecules in the cell (ribonucleic acid, or RNA, is the other nucleic acid). DNA is found in the nucleus of every human cell. The information in DNA:

* guides the cell (along with RNA) in making new proteins that determine all of our biological traits

* gets passed (copied) from one generation to the next


WHAT IS DNA COMPUTING

A DNA computer, as the name implies, uses DNA strands to store information and taps the recombinative properties of DNA to perform operations. A small test tube of DNA strands suspended in a solution could yield millions to billions of simultaneous interactions at speeds that are, in theory, faster than those of today’s fastest supercomputers. The main benefit of using DNA computers to solve complex problems is that all the different possible solutions are created at once. This is known as parallel processing. Humans and most electronic computers attempt to solve a problem one step at a time (linear processing). DNA itself provides the added benefit of being a cheap, energy-efficient resource. To put it in perspective, more than 10 trillion DNA molecules can fit into an area no larger than 1 cubic centimetre. With this, a DNA computer could hold 10 terabytes of data and perform 10 trillion calculations at a time.
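
For intuition, the following plain Python sketch shows the same generate-every-candidate-then-filter strategy applied to a tiny path-finding problem of the kind Adleman tackled. A DNA computer performs the generation and filtering chemically, on all candidates at once, whereas this sequential program merely mimics the logic; the example graph is invented for illustration.

```python
from itertools import permutations

# A small directed graph (invented for the example), given as an edge set.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
nodes = ["A", "B", "C", "D"]

# Step 1: "generate" every candidate ordering of the nodes
# (in Adleman's experiment, DNA strands encoding random paths form in the test tube).
candidates = permutations(nodes)

# Step 2: "filter" out candidates that use a missing edge,
# keeping only orderings that are genuine paths from A to D.
def is_path(order):
    return order[0] == "A" and order[-1] == "D" and \
           all((a, b) in edges for a, b in zip(order, order[1:]))

solutions = [order for order in candidates if is_path(order)]
print(solutions)   # e.g. [('A', 'B', 'C', 'D')]
```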

COMPARISON WITH CONVENTIONAL COMPUTERS

DNA’s key advantage is that it will make computers smaller than any computer that has come before, while at the same time holding more data. One pound of DNA has the capacity to store more information than all the electronic computers ever built, and the computing power of a teardrop-sized DNA computer, using DNA logic gates, would exceed that of the world’s most powerful supercomputer. More than 10 trillion DNA molecules can fit into an area no larger than 1 cubic centimetre (0.06 cubic inches). With this small amount of DNA, a computer would be able to hold 10 terabytes of data and perform 10 trillion calculations at a time; by adding more DNA, more calculations could be performed. Unlike conventional computers, which operate linearly and take on tasks one at a time, DNA computers perform calculations in parallel. It is this parallelism that allows DNA to solve complex mathematical problems in hours, whereas it might take electronic computers hundreds of years to complete them. The first DNA computers are unlikely to feature word processing, e-mail, or solitaire programs. Instead, their computing power will be used by national governments for cracking secret codes, or by airlines wanting to map more efficient routes. Studying DNA computers may also lead us to a better understanding of a more complex computer: the human brain. DNA computers would be capable of storing billions of times more data than your personal computer (say, at a density of 1 bit per cubic nanometer, a trillion times less space). A DNA computer also has very low energy consumption, so if it were put inside a cell it would not require much energy to work.

  • Speed: Combining DNA strands as demonstrated by Dr Adleman made computations equivalent to 10^9 operations or better, arguably over 100 times faster than the fastest conventional computer.
  • Minimal storage requirements: DNA stores memory at a density of about one bit per cubic nanometer, whereas conventional storage media require about 10^12 cubic nanometers to store one bit (see the worked comparison after this list).
  • Minimal power requirements: No power is required for DNA computing while the computation is taking place. The chemical bonds that are the building blocks of DNA form without any outside power source.
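
A quick back-of-the-envelope check of the density figures quoted above (the inputs are the claims from the text, not independent measurements):

```python
# Back-of-the-envelope comparison of the storage densities quoted above.
GB_IN_BITS = 8e9

dna_nm3_per_bit = 1              # DNA: ~1 cubic nanometre per bit (claimed above)
conventional_nm3_per_bit = 1e12  # conventional media: ~10^12 cubic nanometres per bit (claimed above)

print("density ratio:", conventional_nm3_per_bit / dna_nm3_per_bit)  # ~10^12

# Volume needed for one gigabyte, in cubic millimetres (1 mm = 1e6 nm, so 1 mm^3 = 1e18 nm^3).
for name, nm3_per_bit in [("DNA", dna_nm3_per_bit), ("conventional", conventional_nm3_per_bit)]:
    mm3 = GB_IN_BITS * nm3_per_bit / 1e18
    print(f"{name}: ~{mm3:g} mm^3 per gigabyte")
```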

ADVANTAGES

  • Perform millions of operations simultaneously (Parallel Computing).
  • Generate a complete set of potential solutions and conduct large parallel searches.
  • Capable of storing billions of times more data
  • Over 100 times faster than the fastest conventional computer
  • Minimal storage requirements.
  • Minimal power requirements
  • They are inexpensive to build, being made of common biological materials.
  • The clear advantage is that we have a distinct memory block that encodes bits.
  • Using one template strand as a memory block also allows us to use its complement as another memory block, thus effectively doubling our capacity to store information.
  • More powerful than the world’s most powerful supercomputer
  • DNA computers smaller than any computer

 DISADVANTAGES

  • Generating solution sets, even for some relatively simple problems, may require impractically large amounts of memory (lots and lots of DNA strands are required)
  • Many empirical uncertainties, including those involving: actual error rates, the generation of optimal encoding techniques, and the ability to perform necessary bio-operations conveniently in vitro (for every correct answer there are millions of incorrect paths generated that are worthless).
  • DNA computers could not (at this point) replace traditional computers.
  • They are not programmable, and the average user cannot sit down at a familiar keyboard and get to work.
  • They require human assistance.

THE FUTURE OF DNA COMPUTING

The significance of this research is two-fold: it is the first demonstrable use of DNA molecules for representing information, and also the first attempt to use DNA to deal with an NP-complete problem. But much more work needs to be done to develop error-resistant and scalable laboratory computations. Designing experiments that are likely to be successful in the laboratory, and algorithms that proceed through polynomial-sized volumes of DNA, is the need of the hour. It is unlikely that DNA computers will be used for tasks like word processing, but they may ultimately find a niche market for solving large-scale intractable combinatorial problems. The goal of automating, miniaturizing, and integrating them into a general-purpose desktop DNA computer may take much longer.

REFERENCES

ORGANIC USER INTERFACE(roll no:33)


 


Organic User Interfaces (OUIs) are an emerging vision for future user interfaces that attempts to map out a future where these technologies are commonplace. It is based on the understanding that the physical shape of displays and computing devices will become an important design variable for future interfaces. Indeed, not only will future displays be able to take any arbitrary shape, but the shape itself will be dynamic, either modified by the user or self-actuated.

OUIs are flexible, non-planar displays that act both as output and input devices. “When flexible, OUIs have the ability to become the data on display through deformation, either via manipulation or actuation. Their fluid physics-based graphics are shaped through multitouch and bi-manual gestures.” OUI design principles include that they should be created so that the function of the device equals its form, for intuitive interaction. They should be made from transitive, shape-shifting materials that allow the form of the display to follow its changing function and flow.

The WIMP interaction style, introduced by D. Engelbart in 1968, was a major milestone for the design of graphical user interfaces and is still predominant among current operating systems. Since then, many design guidelines and rules have been proposed.

With the notion of Organic User Interfaces (OUI), we try to capture the essence of these principles in a new design metaphor. OUIs respect and are inspired by the natural laws of physics, biology, and human cognition. They must follow the principles of fluidity, intuitiveness, robustness, and calmness.

 


FLUIDITY

An interface is fluid if it is governed by a set of simple rules, easily understood by the user. This principle is manifested in our seamless interactions with the physical world: a typical desk activity like thumbing through a pile of paper is a graceful transition from awareness to focus. Because the laws of physics are always in effect, the pile never acts in an unexpected manner. Similarly, complex biological systems, like bird flight paths, emerge from simple rules that make the overall system appear self-organizing. OUIs support this notion by providing clear rules that enforce consistent constraints throughout the system.

INTUITIVENESS

OUIs appear familiar by making use of clear affordances, natural mappings, and constraints. Analogies are found in biological systems: a leaf avoids being weighed down by manifesting an appropriate form and texture to repel water. This can be considered an evolutionary affordance. For the user, we must understand the abilities and limitations of human cognition, and eventually find the most natural interactions and representations to support the task. OUIs convey a natural understanding of their underlying functionality.

ROBUSTNESS

The underlying system of an OUI must be as robust as possible. Like biological systems, it should avoid hazards, recover from errors, and operate with degraded functionality until repair is available. Although this notion has been explored in biologically inspired algorithms, it has not been adapted to the user interface.

CALMNESS

A calm interface will never interfere with the user’s natural flow of work. Information output is represented in a non-intrusive way: it is immediately available if needed but otherwise not distracting. In nature, a forest conveys a great deal of information in a very calm and soothing way. The visitor can concentrate on the information provided or simply ignore it and focus on something else. Similarly, OUIs must allow the user to decide how much attention to focus on the interface.

CURRENT PROJECTS

Fly is an organic presentation tool that adds a spatial structure to a presentation. It uses the concept of Mind Maps to organize that structure. Using color associations, spatial relations, and fluid movement, Fly creates a meaningful overview of the underlying content.

FIG:OUI


The three tightly knit themes that define what we refer to as an Organic User Interface (OUI) are:

1. Input Equals Output: Where the display is the input device. The development of flexible display technologies will result in computer displays that are curved, flexible, or of any other form, printed on couches, clothing, credit cards, or paper. How will we interact with displays that come in any shape imaginable? What new interaction principles and visual designs become possible when curved computers are a reality? One thing is clear: current point-and-click interfaces designed for fixed planar coordinate systems, and controlled via some separate special-purpose input device like a mouse or a joystick, will not be adequate. Rather, input in this brave new world of computing will depend on multi-touch gestures and 3D surface deformations that are performed directly on and with the display surface itself. In future interfaces, the input and output design spaces are thus merged: the input device is the output device.

Figure: D20 is a concept for a multifaceted handheld display device, shaped as a regular icosahedron. The user interacts with it by rotating it and touching its faces [5]. The visual interface is structured to take advantage of the device’s shape.


2. Function Equals Form: Where the display can take on any shape. Today’s planar, rectangular displays, such as LCD panels, will eventually become secondary when any object, from a credit card to a building, no matter how large, complex, dynamic, or flexible, can be wrapped with high-resolution, full-color, interactive graphics. Several pioneering projects are already exploring this future, such as the D20 concept, which proposed an interface for an icosahedral display [5] (see the figure above). One important observation that emerges from such experimentation is that the form of the display equals its function. In other words, designers should tightly coordinate the physical shape of the display with the functionality that its graphics afford.

3. Form Follows Flow: Where displays can change their shape. In the foreseeable future, the physical shape of computing devices will no longer necessarily be static. On the one hand, we will be able to bend, twist, pull, and tear apart digital devices just like a piece of paper or plastic. We will be able to fold displays like origami, allowing the construction of complex 3D structures with continuous display surfaces. On the other hand, augmented with new actuating devices and materials, future computing devices will be able to actively alter their shape. Form will be able to follow the flow of user interactions when the display, or entire device, is able to dynamically reconfigure, move, or transform itself to reflect data in physical shapes. The 3D physical shape itself will be a form of display, and its kinetic motion will become an important variable in future interactions.

These three general directions together comprise what we refer to in this section as Organic User Interfaces: user interfaces with non-planar displays that may actively or passively change shape via analog physical inputs. We chose the term “organic” not only because of the technologies that underpin some of the most important developments in this area, that is, organic electronics, but also because of the inspiration provided by the millions of organic shapes that we can observe in nature: forms of amazing variety, forms that are often transformable and flexible, naturally adaptable and evolvable, while extremely resilient and reliable at the same time. We see the future of computing flourishing with thousands of shapes of computing devices that will be as scalable, flexible, and transformable as organic life itself. We should note that the OUI vision is strongly influenced by highly related areas of user interface research, most notably Ubiquitous and Context-aware Computing, Augmented Reality, Tangible User Interfaces, and Multi-touch Input. Naturally, OUIs incorporate some of the most important concepts that have emerged in the previous decade of HCI, in particular embodied interaction; haptic, robotic, and physical interfaces; computer vision; the merging of digital and physical environments; and others. At the same time, OUIs extend and develop those concepts by placing them in a framework where our environment is not only embedded with billions of tiny networked computers, but where that environment is the interface: physically and virtually reactive, malleable, and adaptable to the needs of the user.


This diagram shows how OUI interaction styles might eventually relate to those found in traditional GUIs. In OUIs, simple pointing will be supplanted by multi-touch manipulations. Although menus will still serve a purpose, many functions may be triggered through manipulations of shape. OUIs will take the initiative in user dialogue through active shape-changing behaviors. Finally, OUIs’ superior multitasking abilities will be based on the use of multiple displays with different shapes for different purposes. These will appear in the foreground when picked up or rolled out, and they will be put away when no longer needed.

Designing Kinetic Interactions for Organic User Interfaces

We are surrounded by a sea of motion: everything around us is in a state of continuous movement. We experience numerous and varied kinds of motion: voluntary motions of our own body as we walk; passive motion induced by natural forces, such as the rotation of windmill blades in the wind or the fall of a leaf from a tree due to gravity; physical transformations such as the growth of a flower or the inflation of a balloon; and the mechanical motion of the machines and mechanisms that populate our living spaces.

Kinetic interaction design forms part of the larger framework of Organic User Interfaces (OUI) discussed in this special issue: interfaces that can have any shape or form. We define Kinetic Organic Interfaces (KOIs) as organic user interfaces that employ physical kinetic motion to embody and communicate information to people. Shape-changing inherently involves some form of motion, since any body transformation can be represented as motion of its parts. Thus kinetic interaction and kinetic design are key components of the OUI concept. With KOIs, the entire real world, rather than a small computer screen, becomes the design environment for future interaction designers.

FIG:EMERGING DISPLAY TECHNOLOGY


FIG:DESIGNING KINETIC

 

MODERN ORGANIC ARCHITECTURE

REFERENCES

http://www.organicui.org

http://www.humanmedialab.org
