T-Shirt OS – wearable, shareable, programmable clothing – Roll No: 28



T-Shirt OS is an internet-enabled, 100% cotton t-shirt with a 1,024-pixel LED screen, currently at the prototype stage. You use your smartphone to choose what the t-shirt displays. There is naturally potential to broadcast your Twitter and Facebook posts as well. The t-shirt also has a headphone jack for sharing songs from iTunes, and it appears you can plug other devices into it. The potential is exciting, and we hope the technology gets pushed further and further.


T-shirts have long been vehicles of personal expression. Be it your faded old Slayer T, your snippet of snark brashly emblazoned across your chest, your favorite comic book character set to fabric, or just your plain old white vest replete with a few coffee stains, your t-shirt tells people something about who you are. And you could change it, constantly, by making it digitally interactive.

Sounds like an idea one could only come up with while drinking? Well, perhaps that’s what the folks at Cute Circuit were doing when they teamed up with whisky maker Ballantine’s to create T-Shirt OS, the world’s first connected-clothing concept that actually looks cool and seems worth wearing.

London-based Cute Circuit has previously made a name for itself with flashy (think LEDs) fabrics and creepy concepts like shirts that can hug you via text message, but this latest project combines every wacky idea into one.

The firm wants to turn the t-shirt into the most creative canvas it can, fitting it with a large LED screen, a camera, a microphone, an accelerometer, and speakers for sound.

The t-shirt itself would act as a thin client with a small electronic brain that can be paired with the much larger processor in a person’s cell phone, making it the most “wearable, shareable, programmable” piece of clothing ever created.


What could you do with such an adaptable shirt? Connect it to Twitter, display your photos and status updates, play your music, take snaps of people on the go; the options are almost endless.

The current version of the t-shirt is controlled via iOS, but an Android version will be available later, the company says.

Of course, right now, it’s just a prototype, and not a cheap item to buy by any means, but Cute Circuit believes that could change. The firm is asking for feedback on its idea, and claims it will look into producing the shirts in volume if demand reaches a certain level.

Of course, the question it really begs is: can you wash it? Cute Circuit says the t-shirt is hand-washable once the battery is removed.


Online T-Shirt Design Software – LiveArt Publisher’s Description

Online Design Software, Online Lettering Design, Online T-Shirt Design and Online Boat Sign Design Tool – LiveArt

NewtonIdeas LiveArt is WYSIWYG software for creating natural-looking designs for your decal, t-shirt embroidery, or other kind of sign (e.g., vinyl signs for boats) with its online Flash-based lettering, t-shirt, and boat-sign design tool.

To create a preview, you follow just a few steps:

* Write your message.

* Choose the fonts, colors, and effects: make the text arc, or apply a shadow or stroke in any color you like.

* Add a picture from the gallery.

* Modify the size and quantity of your sign or embroidery, and add comments.

That’s all you need to do to get a nice custom product (custom online lettering, t-shirt, sign, vinyl, or decal design)!

Benefits for visitors

* Ability to preview a design before buying it, testing various combinations of text, fonts, backgrounds, colors, etc.

* Ability to get more information about products, see the samples gallery, etc.

* Ability to obtain all necessary information about the company’s services

* Ability to provide feedback and to contact a company representative

Supporting features

* Powerful LiveArt Flash component with an intuitive interface for composing the desired design

* Easy-to-use interfaces for finding sample works, a managed gallery, and optional promotional and educational content

* Highly usable navigation, clear content and site structure, and engaging demonstrations and impressive Flash movies (which can be inserted as an intro or as parts of the website)

* Easily accessible contact and feedback forms for quickly reaching company staff




Autonomic Computing – Roll No: 24



Autonomic computing is a self-managing computing model named after, and patterned on, the human body’s autonomic nervous system. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system’s complexity invisible to the user.

Autonomic computing is one of the building blocks of pervasive computing, an anticipated future computing model in which tiny – even invisible – computers will be all around us, communicating through increasingly interconnected networks. Many industry leaders, including IBM, HP, Sun, and Microsoft, are researching various components of autonomic computing. IBM’s project is one of the most prominent and developed initiatives. In an effort to promote open standards for autonomic computing, IBM recently distributed a document that it calls “a blueprint for building self-managing systems,” along with associated tools to help put the concepts into practice. Net Integration Technologies advertises its Nitix product as “the world’s first autonomic server operating system.”

Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness.
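The sensor/effector/knowledge/planner roles in such a control loop can be made concrete with a small sketch. Everything here (the component class, the `sensor` and `effector` callables, and the load-shedding policy) is hypothetical and illustrative, not part of any real autonomic framework:

```python
# Minimal sketch of an autonomic component (AC) with a single control loop.
# The sensor, effector, and policy are hypothetical placeholders.

class AutonomicComponent:
    def __init__(self, sensor, effector, policy):
        self.sensor = sensor        # self-monitoring: observes current state
        self.effector = effector    # self-adjustment: applies corrections
        self.policy = policy        # high-level policy: maps state -> action
        self.knowledge = []         # accumulated observations

    def control_loop(self):
        state = self.sensor()            # monitor the environment/self
        self.knowledge.append(state)     # record in the knowledge base
        action = self.policy(state)      # analyze/plan against the policy
        if action is not None:
            self.effector(action)        # execute the adjustment

# Example: keep a hypothetical "load" metric below a policy threshold.
current = {"load": 0.9}

def sensor():
    return dict(current)

def effector(action):
    if action == "shed_load":
        current["load"] = 0.5

ac = AutonomicComponent(sensor, effector,
                        policy=lambda s: "shed_load" if s["load"] > 0.8 else None)
ac.control_loop()
```

In a real system the sensor would poll hardware or service metrics, the knowledge base would persist across iterations, and the policy would be derived from operator-defined rules rather than a hard-coded threshold.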

A general problem of modern distributed computing systems is that their complexity, and in particular the complexity of their management, is becoming a significant limiting factor in their further development. Large companies and institutions are employing large-scale computer networks for communication and computation. The distributed applications running on these computer networks are diverse and deal with many tasks, ranging from internal control processes to presenting web content and to customer support.

Additionally, mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. They do so by using laptops, personal digital assistants, or mobile phones with diverse forms of wireless technologies to access their companies’ data.

This creates an enormous complexity in the overall computer network which is hard to control manually by human operators. Manual control is time-consuming, expensive, and error-prone. The manual effort needed to control a growing networked computer-system tends to increase very quickly.

A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.

In a self-managing autonomic system, the human operator takes on a new role: instead of controlling the system directly, he/she defines general policies and rules that guide the self-management process. For this process, IBM defined the following four functional areas:

  • Self-configuration: Automatic configuration of components;
  • Self-healing: Automatic discovery and correction of faults;
  • Self-optimization: Automatic monitoring and control of resources to ensure optimal functioning with respect to the defined requirements;
  • Self-protection: Proactive identification of, and protection from, arbitrary attacks.


1. Automatic: The system is able to control its internal functions and operations on its own.

2. Adaptive: An autonomic system must be able to change its operation (i.e., its configuration, state, and functions).

3. Aware: An autonomic system must be able to monitor (sense) its operational context as well as its internal state, in order to assess whether its current operation serves its purpose.



A fundamental building block of an autonomic system is the sensing capability (Sensors Si), which enables the system to observe its external operational context. Inherent to an autonomic system is knowledge of its Purpose (intention) and the Know-how to operate itself (e.g., bootstrapping, configuration knowledge, interpretation of sensory data, etc.) without external intervention. The actual operation of the autonomic system is dictated by the Logic, which is responsible for making the right decisions to serve its Purpose, and is influenced by observation of the operational context (based on the sensor input).

This model highlights the fact that the operation of an autonomic system is purpose-driven. This includes its mission (e.g., the service it is supposed to offer), the policies (e.g., those that define the basic behaviour), and the “survival instinct”. Seen as a control system, this would be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm combined with a set of heuristics bounding its operational space.
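The feedback-error view can be sketched in a few lines. The gain, setpoint, and values are purely illustrative; the point is only that the system repeatedly measures the error between its observed state and its purpose and corrects toward it:

```python
# One view of an autonomic system as a feedback controller: each loop
# iteration measures the error between the observed state and the
# setpoint (the system's "purpose") and corrects a fraction of it.
# Gain and values are illustrative only.

def feedback_step(observed, setpoint, gain=0.5):
    """One control-loop iteration: return the corrected state."""
    error = setpoint - observed
    return observed + gain * error

state = 10.0        # observed operational value
setpoint = 20.0     # desired value encoded by the system's purpose
for _ in range(8):
    state = feedback_step(state, setpoint)
# with gain 0.5, each iteration halves the remaining error
```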

Autonomic Cloud Bursts on Amazon EC2

Cluster-based data centers have become dominant computing platforms in industry and research for enabling complex and compute-intensive applications. However, as scales, operating costs, and energy requirements increase, maximizing efficiency, cost-effectiveness, and utilization of these systems becomes paramount. Furthermore, the complexity, dynamism, and often time-critical nature of application workloads makes on-demand scalability, integration of geographically distributed resources, and incorporation of utility computing services extremely critical. Finally, the heterogeneity and dynamics of the system, application, and computing environment require context-aware dynamic scheduling and runtime management.

Autonomic cloud bursting is the dynamic deployment of a software application that normally runs on internal organizational compute resources to a public cloud, in order to address a spike in demand. Provisioning data center resources to handle sudden and extreme spikes in demand is a critical requirement, and this can be achieved by combining private data center resources with remote on-demand cloud resources such as Amazon EC2, which provides resizable computing capacity in the cloud.


This project envisions a computational engine that can enable autonomic cloud bursts capable of: (1) Supporting dynamic utility-driven on-demand scale-out of resources and applications, where organizations incorporate computational resources based on perceived utility. These include resources within the enterprise and across virtual organizations, as well as from emerging utility computing clouds. (2) Enabling complex and highly dynamic application workflows consisting of heterogeneous and coupled tasks/jobs through programming and runtime support for a range of computing patterns (e.g., master-slave, pipelined, data-parallel, asynchronous, system-level acceleration). (3) Integrated runtime management (including scheduling and dynamic adaptation) of the different dimensions of application metrics and execution context. Context awareness includes system awareness to manage heterogeneous resource costs, capabilities, availabilities, and loads, application awareness to manage heterogeneous and dynamic application resources, data and interaction/coordination requirements, and ambient-awareness to manage the dynamics of the execution context.
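As a rough illustration of capability (1), the scale-out decision can be reduced to a small policy. The capacity figures, node size, and function name are invented for this sketch; a real engine would weigh perceived utility and cost, and would acquire the nodes through a provider API such as EC2’s:

```python
# Sketch of an autonomic cloud-burst policy: when demand exceeds the
# private data center's capacity, compute how many public-cloud nodes
# to acquire. All numbers and names are illustrative.

PRIVATE_CAPACITY = 100   # jobs the private data center can run at once

def plan_burst(demand, private_capacity=PRIVATE_CAPACITY, node_size=10):
    """Return the number of cloud nodes to acquire for the current demand."""
    overflow = demand - private_capacity
    if overflow <= 0:
        return 0                      # private resources suffice: no burst
    return -(-overflow // node_size)  # ceiling division: round nodes up

assert plan_burst(80) == 0     # normal load stays in-house
assert plan_burst(125) == 3    # 25 overflow jobs -> 3 nodes of size 10
```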


The Comet service model has three kinds of clouds. The first is a highly robust and secure cloud; nodes in this cloud can be masters. In most applications the data is critical and should stay in the secure space, so only masters in this cloud may handle the whole dataset for the application. The second is a secure but not robust cloud. Nodes in this cloud can be workers, and they provide the Comet shared coordination space. Robust/secure masters and secure workers together construct a globally virtualized Comet space. A master generates tasks, which are small units of work for parallelization, and inserts them into the Comet shared coordination space. Each task is mapped to a node on the overlay using its keyword and stored in the storage space of the mapped node. Hence, robust/secure masters and secure workers include the Comet shared space in their architecture substrates. The master provides a management agent for scheduling and monitoring tasks; it also provides a computing agent, since it can contribute computing capability. A secure worker gets tasks from the space one at a time, so it too has a computing agent in its architecture. The workers consume the tasks and return the results to the master over a direct connection.

The third cloud is for unsecured workers. Unsecured workers cannot access the Comet shared space directly, and they cannot provide their storage to hold tasks, but they do provide computing capability; hence they have only a computing agent in their architecture. An unsecured worker requests a task from one of the masters in the robust/secure network; the master then accesses the Comet shared space, gets a task, and forwards it to the unsecured worker. When the worker finishes its task, it sends the result back to the master.
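These roles can be summarized in a toy model. The in-memory “space”, the class names, and the doubling “computation” are all stand-ins; the real Comet maps tasks onto a distributed overlay rather than a local queue:

```python
# Toy model of the Comet service model: a master inserts tasks into a
# shared coordination space; secure workers take tasks directly, while
# unsecured workers must request tasks through a master.

from collections import deque

class CometSpace:
    """Stand-in for the distributed shared coordination space."""
    def __init__(self):
        self.tasks = deque()

    def insert(self, task):
        self.tasks.append(task)

    def take(self):
        return self.tasks.popleft() if self.tasks else None

class Master:
    def __init__(self, space):
        self.space = space

    def generate_tasks(self, work_items):
        for item in work_items:
            self.space.insert(item)

    def forward_task(self):
        # Unsecured workers cannot touch the space; the master
        # fetches a task on their behalf.
        return self.space.take()

class SecureWorker:
    def __init__(self, space):
        self.space = space          # direct access to the shared space

    def run_one(self):
        task = self.space.take()
        return None if task is None else task * 2   # stand-in computation

class UnsecuredWorker:
    def __init__(self, master):
        self.master = master        # only channel to the space

    def run_one(self):
        task = self.master.forward_task()
        return None if task is None else task * 2

space = CometSpace()
master = Master(space)
master.generate_tasks([1, 2, 3])
results = [SecureWorker(space).run_one(), UnsecuredWorker(master).run_one()]
```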


Can you CHOP up autonomic computing?                    


The autonomic computing architecture provides a foundation on which self-managing information technology systems can be built. Self-managing autonomic systems exhibit the characteristics of self-configuring, self-healing, self-optimizing, and self-protecting; these characteristics are sometimes described with the acronym CHOP. This article discusses the self-CHOP attributes and, in particular, explains why they are not independent of each other and how self-managing autonomic systems can integrate the CHOP functions.


The acronym CHOP is shorthand for configure, heal, optimize, and protect, the fundamental aspects of autonomic computing technology. Autonomic systems are designed to address one or more of these aspects.


The attributes are defined as:

  • Self-configuring – Can dynamically adapt to changing environments. Self-configuring components adapt dynamically to changes in the environment, using policies provided by the IT professional. Such changes could include the deployment of new components or the removal of existing ones, or dramatic changes in the system characteristics. Dynamic adaptation helps ensure continuous strength and productivity of the IT infrastructure, resulting in business growth and flexibility.
  • Self-healing – Can discover, diagnose and react to disruptions. Self-healing components can detect system malfunctions and initiate policy-based corrective action without disrupting the IT environment. Corrective action could involve a product altering its own state or effecting changes in other components in the environment. The IT system as a whole becomes more resilient because day-to-day operations are less likely to fail.
  • Self-optimizing – Can monitor and tune resources automatically. Self-optimizing components can tune themselves to meet end-user or business needs. The tuning actions could mean reallocating resources — such as in response to dynamically changing workloads — to improve overall utilization, or ensuring that particular business transactions can be completed in a timely fashion. Self-optimization helps provide a high standard of service for both the system’s end users and a business’s customers.
  • Self-protecting – Can anticipate, detect, identify and protect against threats from anywhere. Self-protecting components can detect hostile behaviors as they occur and take corrective actions to make themselves less vulnerable. The hostile behaviors can include unauthorized access and use, virus infection and proliferation, and denial-of-service attacks. Self-protecting capabilities allow businesses to consistently enforce security and privacy policies.

A  self-healing autonomic manager can detect disruptions in a system and perform corrective actions to alleviate problems. One form that those corrective actions might take is a set of operations that reconfigure the resource that the autonomic manager is managing. For example, the autonomic manager might alter the resource’s maximum stack size to correct a problem that is caused by erroneous memory utilization. In this respect, the self-healing autonomic manager might be considered to be performing self-configuration functions by reconfiguring the resource to accomplish the desired corrective action.

Self-healing and self-optimizing management could involve self-configuration functions (so, too, could self-protection). Indeed, it often may be the case that actions associated with healing, optimizing, or protecting IT resources are performed by configuration operations. Although self-configuration itself is a broader topic that includes dynamic adaptation to changing environments, perhaps involving adding or removing system components, self-configuration is also fundamental for realizing many self-CHOP functions.

Autonomic Manager


The figure illustrates that an autonomic manager might include only some of the four control-loop functions. Consider two such partial autonomic managers: a self-healing partial autonomic manager that performs the monitor and analyze functions, and a self-configuring partial autonomic manager that performs the plan and execute functions, as depicted in the following figure.


Integrating self-healing and self-configuring autonomic management functions


The first autonomic manager could monitor data from managed resources and correlate that data to produce a symptom; the symptom in turn is analyzed, and the autonomic manager determines that some change to the managed resource is required. This desired change is captured in the form of Change Request knowledge. The change request is passed to the self-configuring partial autonomic manager that performs the plan function to produce a change plan that is then carried out by the execute function. This scenario details the integration of self-healing and self-configuring autonomic management functions that was introduced earlier.
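A minimal sketch of this hand-off, with an invented resource and change-request shape (the real architecture expresses these as shared knowledge passed between autonomic managers):

```python
# Sketch of the integration scenario: a self-healing partial autonomic
# manager (monitor + analyze) emits a change request, which a
# self-configuring partial autonomic manager (plan + execute) turns
# into a change plan and carries out. Resource fields are illustrative.

resource = {"max_stack_size": 256, "healthy": False}

def monitor_and_analyze(res):
    """Self-healing side: correlate data into a symptom, decide a change."""
    if not res["healthy"]:
        symptom = "erroneous memory utilization"
        return {"symptom": symptom, "requested": {"max_stack_size": 512}}
    return None   # no symptom: no change request

def plan_and_execute(res, change_request):
    """Self-configuring side: build a change plan and apply it."""
    plan = list(change_request["requested"].items())   # trivial change plan
    for key, value in plan:
        res[key] = value                               # execute each step
    res["healthy"] = True

request = monitor_and_analyze(resource)
if request is not None:
    plan_and_execute(resource, request)
```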

Self-CHOP describes important attributes of a self-managing autonomic system. Self-CHOP is a useful way to characterize the aspects of autonomic computing, but the four disciplines should not be considered in isolation. Instead, a more integrated approach to self-CHOP, such as this article describes, offers a more holistic view of self-managing autonomic systems.











Introducing the Computable Document Format (CDF)

Today’s online documents are like yesterday’s paper—flat, lifeless, inactive. Instead, CDF puts easy-to-author interactivity at its core, empowering readers to drive content and generate results live.

Launched by the Wolfram Group, the CDF standard is a computation-powered knowledge container—as everyday as a document, but as interactive as an app.

Adopting CDF gives ideas a broad communication pipeline—accelerating research, education, technical development, and progress.


Why Use the Computable Document Format (CDF)?

CDF offers content creators easy-to-author interactivity and convenient deployment options—empowering their readers to drive content and generate results live.


Key Advantages of CDF:

Broader communication pipeline: Create content as everyday as a document, but as interactive as an app.

Built-in computation: Let the reader drive new discovery—live.

Easy-to-author interactivity: Use automated functions and plain English input instead of specialist programming skills for a wide range of applications.

Deployment flexibility: Create once—deploy as slide shows, reports, books, applications, and web objects.

Integrated knowledge: Access specialized algorithms, data, and visualizations for hundreds of subjects.


Features of CDF Documents

From bloggers, students, and teachers to business consultants, scientists, engineers, or publishers, CDF delivers features that far surpass those of traditional document formats.

Live Interactive Content


Any element in CDF can be transformed into interactive content easily—true interactivity, not pre-generated or scripted. With the computing power of Mathematica technology, dynamic content in CDF can be driven by real-time computation or prompt live computation for new results, which deeply immerses readers in the content.

All-in-One Format


All elements of a project—calculations, visualizations, data, code, documentation, and even interactive applications—stay together in a uniquely flexible format. That means working on a problem with CDF automatically creates a document that can deliver knowledge to readers and let them drive content live.

Dynamic Math Typesetting


CDF makes mathematical typesetting semantically faithful, unlike traditional typography. In addition to publication-quality typesetting, a formula can be input in fully typeset form and then immediately evaluated to produce typeset output that can be edited and re-evaluated. It is no wonder that Wolfram was the major force behind the MathML standard.

Integrated Computational Knowledge


Powered by Mathematica and Wolfram|Alpha technology, CDF brings trillions of pieces of expert-level data and the world’s largest collection of algorithms together in a single platform. Authors in a wide range of fields can instantly create subject-specific content without requiring additional tools.


The Power behind CDF

CDF is built on the same technologies that are behind Mathematica—the world’s leading computation platform—and Wolfram|Alpha—the world’s first computational knowledge engine. Every CDF comes with the technology innovations that Wolfram has brought to the world for decades.

Automation by Design


Automation is the key to productive creation. CDF technology applies intelligent automation in every part of the system, from algorithm selection to plot layouts to user interface design. You get reliable, high-quality results without needing expertise—and even if you’re an expert, you get results faster.

Free-Form Linguistic Input


At the core of CDF technology lies the Mathematica language, a powerful and versatile language for content creation. With free-form linguistic input, programming in the Mathematica language can be as easy as entering plain English. Type in your idea and let the system transform it—whether it is a simple plot or a complex image processing operation.

Built-in Knowledge: Algorithms


CDF technology builds in specialized algorithms for many scientific and technical areas, from financial engineering to computational biology, making CDFs on almost any topic easy to create. Specialist functionality is tightly integrated with the core of CDF, providing a smooth workflow for authors and delivering unprecedented computational power to readers.

Built-in Knowledge: Computable Data


With CDF, you have full access to a vast collection of computable data across hundreds of fields, from economics to the life sciences to geography. Real-time access to frequently updated and meticulously maintained computable data makes CDF documents as live and as accurate as possible.

Integrated Graphics & Visualization


CDF and its underlying Mathematica platform provide the world’s most sophisticated graphics and visualization functionality by any measure. Interactive 3D graphics, complex scientific plots, expansive business charts, and automatic graph visualization—everything is fully built in and ready to use.

Symbolic Documents


With CDF technology, everything is an expression, even whole documents. That allows them to be operated on programmatically. The symbolic basis of CDFs underlies many features, from cascading stylesheets to immediate deployment of CDFs as presentations, for print or the web, and as applications.

Platform Support for CDF

For Windows and Mac OS X, Wolfram CDF Player offers desktop and web plugin functionality. On Linux systems, CDF Player currently supports desktop functionality only.


The web plugin has been tested with the following browsers:

Windows 7/Vista/XP: Internet Explorer, Firefox, Chrome, Opera, Safari

Mac OS X 10.5+: Safari, Firefox, Chrome (4.0+), Opera (10.5+)

Linux 2.4+: Desktop functionality only

System requirements:
Processor: Intel Pentium III 650 MHz or equivalent
System Memory (RAM): 512 MB required; 1 GB+ recommended


CDF on Mobile Devices

We are actively pursuing solutions for mobile devices, including cloud-based services, to make CDF available to anyone, anywhere.
The iPad is an important part of our CDF strategy for accessing educational apps, business reports, and other interactive computational material.

Google Cloud Print – Roll No: 14 (Christie)



Google Cloud Print is a new technology that connects printers to the internet. Using Google Cloud Print, you can make your home and work printers available to yourself and to anyone you choose. Google Cloud Print works from your phone, tablet, PC, and any other web-connected device you want to print from, and it makes life easier for both system administrators and users.

In Google Chrome OS, all applications are web apps, a fact that shaped the design of the printing experience for Google Chrome OS. Additionally, with the proliferation of web-connected mobile devices such as those running Google Chrome OS and other mobile operating systems, it is not feasible to build and maintain complex print subsystems and print drivers for each platform. Apps no longer rely on the local operating system (and drivers) to print. Instead, apps (whether native desktop/mobile apps or web apps) use Google Cloud Print to submit and manage print jobs. Google Cloud Print is then responsible for sending the print job to the appropriate printer, with the particular options the user selected, and for providing job status to the app.


Replicating the complex printing architectures of traditional PC operating systems on this new class of devices is not desirable and often not feasible. This is accomplished instead through the use of a cloud print service: as shown in the diagram below, apps (whether native desktop/mobile apps or web apps) use GCP to submit and manage print jobs rather than relying on the local operating system and its drivers.


Google Cloud Print Components


Any type of application can use Google Cloud Print, including web apps (such as Gmail and certain third-party apps) and native apps (such as a desktop word processor or an app on an Android/iOS device). These apps call Google Cloud Print APIs. They can use these APIs to collect the necessary data to show their own user interface for custom print options, or simply use the common print dialog that Google Cloud Print provides. Third-party app developers can use Google Cloud Print in their web, desktop, and mobile apps as well.


Google Cloud Print is a web service offered by Google. Users associate printers with their Google Account. Printers are treated in much the same way as documents are in Google Docs: it is very easy to share printers with your coworkers, friends, and family anywhere in the world. No need for complex network setups to make print sharing work! Once the service receives a print job, it sends it to the printer. The service also receives regular updates on the status of the print job from the printer and makes this status available to the app.


Google Cloud Print distinguishes between two types of printers: Cloud Ready and non-cloud printers.

Cloud Ready Printers

Cloud Ready printers are a new generation of printers with native support for connecting to cloud print services. A Cloud Ready printer has no need for a PC connection of any kind or for a print driver. The printer is simply registered with one or more cloud print services and awaits print jobs. Cloud printing has tremendous benefits for end users and for the industry, and will increasingly come to be expected from users given the rapid shift to cloud-based apps and data storage, and to mobile computing. The only way that the benefits of cloud printing can be realized is if the protocols are open, freely implementable, and  based on existing industry standards.

Non-cloud Printers

Most printers in existence today fall into this category. This category includes printers connected directly to PCs  as well as networked printers (Ethernet or WiFi). This category also includes the recent crop of “web-connected” printers that provide users with access to certain web services  directly from the on-printer LCD. While these are “web connected,” they are not Cloud Ready printers as described above, because they do not have the ability to directly communicate with a cloud print service to fetch print jobs.

We want users to be able to print to non-cloud printers via Google Cloud Print. This is accomplished through the use of a connector, a small piece of software that runs on a PC where the printer is installed. The connector takes care of registering the printer with Google Cloud Print and waiting for print jobs from the service. When a job arrives, the connector submits it to the printer using the PC operating system’s native printer software, and sends job statuses back to the service.
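The connector’s job can be outlined in a short sketch. The service object, printer id, and job format here are simulated stand-ins; the real connector talks to Google Cloud Print over the network and to the PC’s native print subsystem:

```python
# Sketch of a Google Cloud Print connector for a non-cloud printer:
# register the printer with the service, poll for queued jobs, hand
# each job to the local print system, and report status back.
# The service and the local spooler are simulated stand-ins.

class CloudPrintService:
    """Stand-in for the cloud side: registered printers, jobs, statuses."""
    def __init__(self):
        self.printers = {}
        self.jobs = []
        self.statuses = {}

    def register(self, printer_id):
        self.printers[printer_id] = "online"

    def fetch_job(self, printer_id):
        for job in self.jobs:
            if job["printer"] == printer_id and job["status"] == "queued":
                job["status"] = "in_progress"
                return job
        return None

    def report(self, job_id, status):
        self.statuses[job_id] = status

def local_print(document):
    """Stand-in for the OS's native printer software."""
    return True   # pretend the page came out fine

def connector_poll(service, printer_id):
    """One polling cycle of the connector."""
    job = service.fetch_job(printer_id)
    if job is None:
        return                       # nothing queued for this printer
    ok = local_print(job["document"])
    service.report(job["id"], "done" if ok else "error")

service = CloudPrintService()
service.register("home-laserjet")
service.jobs.append({"id": 1, "printer": "home-laserjet",
                     "document": "boarding-pass.pdf", "status": "queued"})
connector_poll(service, "home-laserjet")
```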

Thus far, Google has developed a connector for Mac and Windows and is currently developing one for Linux. To help users avoid the trouble of having to install yet another piece of software on their PCs, these connectors are distributed with Google Chrome.

Google Chrome OS printing

When users print from a web app that directly integrates with Google Cloud Print, the print operation is standalone and does not involve Chrome OS. When users print a web page that is not directly integrated with Google Cloud Print, the app responsible for printing is the Google Chrome browser on Chrome OS. Here, Google Chrome on Chrome OS is a native app that uses Google Cloud Print and the common print dialog. The content to be printed is uploaded to the Google Cloud Print service along with the job ticket information and then sent to the selected printer.

The only problem is that no printer currently supports Google Cloud Print, which is why Google has revealed some details about the service’s interfaces, hoping that printer manufacturers will update their software to support it. If a printer doesn’t support Google’s service, you’ll need proxy software on the computer where the printer is installed. Google says that the proxy software will be bundled with Google Chrome.

It may seem that Google’s solution is complicated and difficult to implement: we need an open standard for cloud printing, cloud-aware printers and users need to associate printers with an online service. Instead of sending the printing job directly to the printer, you’ll send it to the online service, which forwards it to the printer. Despite all these hurdles, Google Cloud Print allows you to print documents from a mobile phone, tablet, notebook or any other mobile device. You’ll be able to print files from an Android phone or tablet, from a Chrome OS computer, from any computer that runs Google Chrome and from other devices that will support Google Cloud Print.

How Does Google Cloud Print Work?

 When you print through Google Cloud Print, your file is securely sent to your printer over the web. Because it’s the web, Google Cloud Print works whether you’re in the same room as your printer, or on another continent. It also doesn’t matter whether you’re on a phone, a traditional desktop, or anything in between (like a tablet).




Google Cloud Print takes the security of your files very seriously. Documents are transferred over a secure HTTPS connection. After a job is completed, the associated document is deleted from Google's servers. In addition, you can delete jobs and their history at any time.


Google Cloud Print allows you to share printers with friends, family, or coworkers as easily as you would share a Google Doc file – perfect for visiting guests looking to print a flight boarding pass.


3>>Chromebook Ready

Google Cloud Print is the standard printing technology used by Chrome OS on Chromebooks. Your Chromebook and the web live hand in hand and, for that reason, it didn't feel right tying you down with local printer software.


4>>Enterprise Ready

Google Cloud Print is used internally by Google on over a thousand printers. It can scale to meet your organization's needs, and can either complement or replace your existing printing infrastructure.


Applications that work with Google Cloud Print

The following applications allow you to print to Google Cloud Print.

On all devices

Chrome Browser

You can print any of the open tabs in Chrome to Google Cloud Print by pressing Ctrl + P, or by choosing “Print” from the wrench menu, and selecting “Print with Google Cloud Print” from the destination dropdown. On a Chromebook, Cloud Print is the default print option. Made by Google.

On your Android device

Cloud Print BETA

Cloud Print is an Android application that allows you to print files off of your Android device, including emails and attachments, text messages, contacts, web pages, documents, and more.

Cloud Printer

Cloud Printer is an add-on to the Mozilla Firefox mobile browser.


Easy Print

Easy Print is an Android application that allows you to manage your printers and print jobs, and print documents and emails.


Fiabee

Fiabee is a cloud-sync application that allows you to print documents via Google Cloud Print.


A wireless mobile printing app for your email, Microsoft® Office documents, PDF files, photos, web pages and more. Print via Google Cloud Print to a range of Wi-Fi enabled laser printers/MFPs.

On your iOS device

PrintCentral Pro

PrintCentral Pro is an iOS application that allows you to print to a number of services, including Google Cloud Print.

On any mobile device

Mobile Google Apps

If you access Gmail or Google Docs through your phone’s browser, you can print any email, document, spreadsheet, or other Docs file through Google Cloud Print. Using the new print2docs feature, you can also “print” any file you wish to your Docs account for safe keeping.

The Web

On any web page, if you see a “Print” button with the Google Cloud Print logo, you can print without leaving your browser.

KODAK Email Print

Send emails and attachments to your KODAK all-in-one printer from anywhere, using any email capable, web connected device.

On your Mac or Windows PC

Cloud Printer

Cloud Printer allows you to print from any application on your Mac (Leopard and up), through the regular Mac print menu.

The Web

On any web page, if you see a “Print” button with the Google Cloud Print logo, you can print without leaving your browser.

Paperless Printer

Paperless Printer® is a Windows virtual printer that allows you to print from a desktop application and have the print job sent to a remote Google cloud printer.

Cloud Print for Windows

Cloud Print for Windows by Software Devices LLC registers your Windows printers as cloud printers in Google Cloud Print. Print jobs sent to those cloud printers are then automatically printed on the corresponding local printers. You can easily submit documents from your PC to any of your cloud printers.


Wappwolf Automator is an easy tool that helps you save a lot of time when processing files. It does this by providing powerful automations that you can add to a folder in your favorite cloud-storage service. Whenever you add one or more files to that folder, those files are processed automatically.




Intel's Wi-Fi chip is code-named Rosepoint. The Rosepoint design puts a digital 2.4 GHz Wi-Fi radio and a dual-core Atom processor onto a single chip, a big step in the digitization of communication. The Wi-Fi radio used here is digital in nature, and its main aim is to increase power efficiency by removing unnecessary circuitry. Using this chip, mobile devices such as phones, tablets and laptops can be slimmer, use smaller batteries and cost less.


Putting a 2.4 GHz Wi-Fi radio and a low-power Atom CPU onto the same chip sounds simple, but embedding the two components on one die is far from easy:

Wi-Fi chips are difficult to miniaturize because they are based on complex analog circuitry.

Both the Wi-Fi radio and the CPU can emit disruptive electromagnetic radiation; these emissions sweep into the RF module and corrupt the data.

Traditional modem designs incorporate a large number of analog components such as synthesizers and amplifiers, which allows wireless modems to operate over a large range of supply voltages.

Electromagnetic interference is the core problem: the speed of Wi-Fi communications is close to the CPU's clock speed, so the two portions of the chip could interfere with each other, leading to data corruption.

Radio-wave emissions that cause interference between the two components therefore have to be eliminated. Intel's approach includes:



– Introducing a revised silicon modem that uses only two voltages.

– Incorporating new anti-radiation and noise-cancelling shielding to prevent the components from corrupting each other.

– Introducing Wi-Fi circuitry that is digital in nature.

– Creating digital frequency synthesizers, sigma-delta analog-to-digital converters, digital phase modulators and digital RF power amplifiers.


FIG: Intel's “PC on-chip” includes dual Atom processor cores and a complete RF Wi-Fi transceiver.

The Intel on-chip system connects to various semiconductor blocks such as the asynchronous receiver/transmitter and general-purpose input/output. It also provides testing, debugging and validation capabilities with visibility into individual IP blocks.

The Atom cores support two-way simultaneous multi-threading. To minimize power, the SoC supports a burst mode that raises the clock speed when higher performance is required. The architecture supports both Windows and Linux operating systems.


Intel has redesigned the Wi-Fi transceiver. Wi-Fi transceivers are generally analog in nature and therefore hard to miniaturize.

Analog components such as inductors cannot work efficiently when the chip size is reduced, so Wi-Fi radios typically sit on a separate connectivity chip, often combined with other radios such as Bluetooth and FM.

To overcome these drawbacks, the digital Wi-Fi transceiver was introduced. It consumes less power and can run much faster.

Information that would usually be processed as RF signals is kept in the digital domain until the signal is amplified and transmitted over the air. The strength of a digital radio is that it can switch to different radio protocols simply by changing software settings, which is especially beneficial for mobile phones. Even though Intel's new transceiver closely matches a fully digital radio design, it still includes some analog filtering in the receiver.


FIG: Intel puts the CPU and Wi-Fi radio together on the same chip

The aim of Rosepoint is to deliver “state-of-the-art power efficiency” by removing unnecessary circuitry. Intel embeds the Wi-Fi radio and a dual-core Atom CPU on the same piece of silicon. This chip promises three things:

  1. More electronic devices will be able to network wirelessly.
  2. Devices could be more energy efficient.
  3. Multiple digital radios can be combined on a single chip, which can make mobile phones cheaper.

The unique feature of the PC-on-chip is spread-spectrum clocking (SSC). SSC spreads the frequency of the CPU, DDR3 and other block clocks in order to reduce the clock noise energy at any single frequency, which improves isolation of the RF block.


FIG: Intel’s wireless-enabled processor is shown here being attached to a motherboard.

Radios are technically called transceivers and are made up of a number of components. A transceiver is composed of:

RECEIVER – brings in the signal from the outside world.

TRANSMITTER – sends signals out to the world.

AMPLIFIERS – make small signals larger.

FILTERS AND MIXERS – select and fine-tune the signals.

BASEBAND – modulates and demodulates, encodes and decodes data.
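To make the baseband role concrete, here is a minimal BPSK modulate/demodulate round trip. This is a textbook illustration, not Intel's design; the carrier frequency and samples-per-symbol values are arbitrary choices.

```python
import math

def modulate(bits, fc=0.125, sps=16):
    # map each bit to +/-1 and multiply by a sampled cosine carrier (BPSK)
    samples = []
    for i, b in enumerate(bits):
        level = 1.0 if b else -1.0
        for k in range(sps):
            t = i * sps + k
            samples.append(level * math.cos(2 * math.pi * fc * t))
    return samples

def demodulate(samples, fc=0.125, sps=16):
    # correlate each symbol period with the carrier and take the sign
    bits = []
    for i in range(len(samples) // sps):
        acc = 0.0
        for k in range(sps):
            t = i * sps + k
            acc += samples[t] * math.cos(2 * math.pi * fc * t)
        bits.append(1 if acc > 0 else 0)
    return bits
```

In an all-digital radio, everything up to (and partly including) the power amplifier stays in this numeric domain; only the final amplified signal goes out over the air.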


FIG: The “Moore’s Law Radio” chip, on a test board used for the demonstration at the Intel Developer Forum.

“Moore's Law Radio” is Intel's name for its claimed first digital Wi-Fi radio, fabricated on a 32 nm process. The test radio was built from the radio-on-a-chip plus a field-programmable gate array, and was used to stream data digitally from one computer to another.

Reducing the process size from 90 nm to 32 nm improves performance and roughly halves power consumption.

Intel Demos Atom “Rosepoint” SoC with Built-In All-Digital Wi-Fi Transceiver.

The Intel Rosepoint system-on-chip features two Intel Atom cores (presumably based on the Saltwell micro-architecture), an integrated DDR3 memory controller, a built-in PCI Express 2.0 x4 controller, miscellaneous input/output capabilities, and a digital 802.11g Wi-Fi transceiver. Such a chip could potentially power various netbooks, but since it lacks integrated video and graphics processing, it will hardly ever make it to the mass market. In fact, Rosepoint will likely be a test vehicle for Intel's all-digital wireless radio, or a base for certain special-purpose devices.


Virtually all modern devices have cables, whether for charging, for transferring data, or both. Intel believes that in the future all equipment will have to be completely wireless, whether it is a laptop, a display or something else. To achieve that, Intel wants to integrate radio into every applicable chip it makes, which essentially adds wireless technology to any client chip, given the trend towards highly integrated system-on-chip devices.


FIG: The Intel chip consists of digital silicon radios, Wi-Fi docking, and centralized LTE base-station computation.

The key to enabling radio and wireless data transfer cost-efficiently in every possible device, whether a notebook or a TV remote control, is to implement it using the same common building blocks used to make microprocessors. The thinner the manufacturing technology, the less expensive the wireless radio blocks will be.


The boards that are the radio

The chip, called Rosepoint, is a dual-core Atom with a fully digital Wi-Fi radio. Rosepoint is a technical demonstration, not a product. Intel has identified four major parts to be converted from analog to digital: the sigma-delta ADC, the digital frequency synthesizer, the digital RF power amplifier, and the digital phase modulator.

More important than sheer cost and die area is flexibility. Analog radios have a limited range of frequencies; a digital radio should be able to cover far greater ranges, possibly even handling multiple frequencies at once. Not only is the chip smaller, but one digital radio could potentially replace a half-dozen analog devices.

The same idea can put cell-tower computation in a centralized location to save power and boost performance. Instead of a CPU crunching LTE packets at each tower, the data is sent to a central location where a bank of servers does the number-crunching. Pushing all that data across a MAN or WAN and processing it in a single rack can beat bespoke silicon at each tower, but moving data like that isn't free; it adds cost to the network.





NANO COMPUTING (roll no: 32)




Nanocomputing is an emerging technology at an early stage of its development. It is the technology in which computing is done using extremely small, or nanoscale, devices.

Nanocomputing shows great potential, but there are significant technical barriers and obstacles to overcome. Worldwide initiatives are in progress to develop the technology. Japan, Europe, and the United States are in a race to the finish line. Governments are beginning to see the potential and are investing heavily in research and development programs. This interest and investment will accelerate progress.


Computing with nanoscale devices:

  • 1 nm = 10⁻³ µm = width of 10 hydrogen atoms = diameter of a sugar molecule
  • 10¹¹–10¹² devices/cm²
  • 100–1000 billion-device chips
  • 1–50 nm device features
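As a sanity check on the devices/cm² figure, assume a simple square grid of devices at a given pitch (an idealization added here, not a claim from the text):

```python
def devices_per_cm2(pitch_nm):
    # 1 cm = 1e7 nm, so a square grid packs (1e7 / pitch)^2 devices per cm^2
    per_cm = 1e7 / pitch_nm
    return per_cm ** 2

# a 10 nm pitch yields 1e12 devices/cm^2, and a 30 nm pitch about 1.1e11,
# bracketing the 10^11 - 10^12 range quoted above
```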


Until the mid-1990s, the term “nanoscale” generally denoted circuit features smaller than 100 nm. As the IC industry began building commercial devices at such size scales in the early 2000s, the term “nanocomputing” came to be reserved for device features well below 50 nm, down to the size of individual molecules, which are only a few nm across. In 2001, state-of-the-art electronic devices could be as small as about 100 nm, roughly the size of a virus. Scientists and engineers are only beginning to conceive new ways to approach computing using extremely small devices and individual molecules.


All computers must operate by basic physical processes. Contemporary digital computers use currents and voltages in tens of millions of complementary metal oxide semiconductor (CMOS) transistors covering a few square centimeters of silicon. If device dimensions could be scaled down by a factor of 10 or even 100, then circuit functionality would increase 100 to 10,000 times. Today’s transistors operate with microampere currents and only a few thousand electrons generating the signals, but as they are scaled down, fewer electrons are available to create the large voltage swings required of them. This compels scientists and engineers to seek new physical phenomena that will allow information processing to occur using other mechanisms than those currently employed for transistor action.
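The 100-to-10,000-fold figure is simply area scaling: shrinking linear dimensions by a factor k packs k² as many devices into the same area. A one-line sketch:

```python
def functionality_gain(linear_shrink_factor):
    # device density (and hence circuit functionality per unit area)
    # scales as the square of the linear shrink factor
    return linear_shrink_factor ** 2

# a 10x shrink gives 100x functionality; a 100x shrink gives 10,000x
```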


Note: CMOS device, circa 2016

• Cost: 10⁻¹¹ $/gate

• Size: 8 nm/device

• Speed: 0.2 ps/operation

• Energy: 10⁻¹⁸ J/operation

Future nanocomputers could be evolutionary, scaled-down versions of today’s computers, working in essentially the same ways and with similar but nanoscale devices. Or they may be revolutionary, being based on some new device or molecular structure not yet developed. Current nanocomputing research involves the study of very small electronic devices and molecules, their fabrication, and architectures that can benefit from their inherent electrical properties.  Nanostructures that have been studied include semiconductor quantum dots, single electron structures, and various molecules. Very small particles of material confine electrons in ways that large ones do not, so that the quantum mechanical nature of the electrons becomes important.
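To see why the quantum mechanical nature of electrons becomes important at these sizes, a particle-in-a-box estimate (a standard textbook model, not specific to any device above) compares the confinement energy with thermal energy at room temperature:

```python
import math

def confinement_energy_eV(box_nm, n=1):
    # infinite-square-well ("particle in a box") energy level:
    # E_n = n^2 * h^2 / (8 * m * L^2), converted to electron-volts
    h = 6.626e-34      # Planck constant, J*s
    m_e = 9.109e-31    # electron mass, kg
    q = 1.602e-19      # joules per eV
    L = box_nm * 1e-9
    return (n ** 2) * h ** 2 / (8 * m_e * L ** 2) / q

# for a 1 nm box the level spacing (~0.38 eV) dwarfs thermal energy (~0.025 eV),
# while for a 10 nm box it falls below it, so confinement effects wash out
```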




• CMOS scaling will continue for the next 12–15 years

• Alternative new technologies will emerge and begin to be integrated on CMOS by 2015

• Nanoscience research is needed to facilitate radical new scalable technologies beyond 2020


Quantum dots behave like artificial atoms and molecules in that the electrons inside them can have only certain values of energy, which can be used to represent logic information robustly. Another area is that of “single electron devices,” which, as the name implies, represent information by the behavior of a single electron. The ultimate scaled-down electronic devices are individual molecules on the size scale of a nm. Chemists can synthesize molecules easily and in large quantities; these can be made to act as switches or charge containers of almost any desirable shape and size. One molecule that has attracted considerable interest is the common deoxyribonucleic acid (DNA), best known from biology. Ideas for attaching smaller molecules, called “functional groups,” to the molecules and creating larger arrays of DNA for computing are under investigation. These are but a few of the many approaches being considered.

As the size of computer chips gets smaller and smaller, companies continue to invest research dollars to reduce the size. The near future of nanocomputing could bring powerful computers that are smaller than the head of a pin.

In addition to discovering new devices on the nanoscale, it is critically important to devise new ways to interconnect these devices for useful applications. One potential architecture is called cellular neural networks (CNN) in which devices are connected to neighbors, and as inputs are provided at the edge, the interconnects cause a change in the devices to sweep like a wave across the array, providing an output at the other edge.

An extension of the CNN concept is that of quantum-dot cellular automata (QCA). This architecture uses arrangements of single electrons that communicate with each other by Coulomb repulsion over large arrays.
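The edge-to-edge sweep described for CNN and QCA arrays can be caricatured with a toy one-dimensional model in which each cell simply copies its left neighbour on every step. This is only an illustration of nearest-neighbour propagation invented here, not a physical CNN or QCA simulation:

```python
def propagate(n_cells, input_bit):
    # toy nearest-neighbour array: on each step every cell copies its left
    # neighbour, so a value applied at the left edge sweeps across like a wave
    cells = [0] * n_cells
    cells[0] = input_bit
    history = [cells[:]]
    for _ in range(n_cells - 1):
        cells = [cells[0]] + cells[:-1]
        history.append(cells[:])
    return cells[-1], history
```

After n−1 steps the input applied at one edge appears at the opposite edge, which is the essential point: computation happens through local neighbour coupling, with no long interconnect in the signal path.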

Emerging Research Architectures

• CMOS with dissimilar material systems. Advantages: less interconnect delay; enables mixed-technology solutions. Challenges: heat removal; no design tools; difficult test and measurement.

• Arrays of quantum dots. Advantages: high functional density; no interconnects in the signal path. Challenges: limited fan-out; dimensional control (low-temperature operation); sensitive to background charge.

• Intelligently assembled nanodevices. Advantage: supports hardware with defect densities >50%. Challenge: requires pre-computing test.

• Molecular switches and memories. Advantage: supports memory-based computing. Challenge: limited functionality.

• Single-electron array architectures. Advantage: enables utilization of single-electron devices at room temperature. Challenges: subject to background noise; tight tolerances.

• Spin resonance. Advantages: exponential performance scaling; enables unbreakable cryptography. Challenges: extreme application limitation; extreme technology.
Another potential architecture is that of “crossbar switching” in which molecules are placed at the intersections of nanometer-scale wires. These molecules provide coupling between the wires and provide computing functionality.

In summary, nanocomputing technology has the potential to revolutionize the way computers are used. However, to achieve this goal, major progress in device technology, computer architectures, and IC processing must first be made. It may take decades before revolutionary nanocomputing technology becomes commercially feasible.

A nanocomputer is similar in many respects to the modern personal computer, but on a very much smaller scale. With access to several thousand (or millions) of nanocomputers, depending on your needs, you may be able to gain a lot more power for less money, giving a whole new meaning to the expression “unlimited computing”.


Nanocomputing is evolving along two distinct paths:

New nanoproducts, techniques, and enhancements will be integrated into current technology such as the PC, the mainframe, and servers of all types. Mass storage will change significantly as thousands of cheap storage devices become available; storage need never be a problem, or a major cost, again.


Research and development are working toward making entirely new nanocomputers that run software—similar to that on today’s PC.








plastic electronic is an internationally networked technology company which develops, produces and markets products with intelligent multi-layer surfaces. These products are based on multiskin, a trademarked technology developed by plastic electronic. As with touchskin, multiskin's sensors and electronics are hidden underneath its surface. Just like skin, multiskin is robust and flexible, allowing full design freedom.

Touchskin


Touchskin is a touch-sensitive intelligent surface which completely replaces mechanical switches, sliders and wheels with capacitive electronics. Touchskin can be combined with any moulded surface, and also with other materials like wood or fabric. It can be shaped into any 3D surface, opening up an entirely new world of design freedom. Its seamless surface makes touchskin devices dirt- and water-resistant with low wear and abrasion.


Supply Chain Optimisation

storeskin is a unique, highly reliable tool for optimising your customers' purchase orders. Wherever you have products on your customers' shelves, storeskin acts as an automated standing re-order.




Marketing and Research

storeskin is a unique tool for reading out your product conversion rate directly at the point of sale (POS), and it will give you exclusive information about your customers' behaviour there.



Retail Loss Prevention

storeskin's sensors not only give you information about goods' movement, position and stock; storeskin also detects suspicious patterns, helping to prevent theft and the financial loss it causes.




Carbon nanotube films for transparent and plastic electronics

A two-dimensional network – often referred to as a thin film – of carbon nanotubes can be regarded as a novel transparent electronic “material” with excellent – and tunable – electrical, optical and mechanical properties. The films display high conductivity, high carrier mobility and optical transparency, in addition to flexibility, robustness and environmental resistance.


Examples of carbon nanotubes:





Different types of plastic electronics technology:

  • Flexible plastic electronic displays

  • Home plastic electronics

  • Coloured plastic displays

  • Plastic polymer granules

  • Plastic Logic taxi display






While silicon processing requires temperatures above 1000 °C and clean-room conditions, “plastic electronics” merely requires room temperature. Comparatively speaking, the manufacturing methods are environmentally friendly and save resources. In contrast to the current time-consuming and thus expensive technology, organic semiconductors can be mass-produced at low cost. The availability of raw materials is practically unlimited. The electronic components can be applied to different carrier materials in a wafer-thin layer. They adapt easily and flexibly to surfaces, require only little space, and are virtually unbreakable. It is even possible to produce coatings with ink-jet printers using electronic ink, or with classic printing methods; a complete test printer already exists. It is possible to print transistors, LEDs, solar cells, sensors, batteries and displays. Another option is to integrate barely visible electronic circuits into fabrics and wallpapers.


The technology has already reached market maturity and is used efficiently in organic LEDs (OLEDs). Efficiencies are still low, at about 8%, which suggests even more potential: transparent films with solar cells on mobile phones, laptops and other mobile devices could considerably extend battery life. Besides marketable RFID chips and price tags, a team of researchers has even created microprocessors made of polymer films. The first organic lasers for optical measuring and for batteries have left the research lab. With the consistent refinement of organic electronics, numerous application possibilities for everyday use will arise.



Organic Electronics


Organic Electronics is a new field of electronics in which the structures used are based on organic materials: dielectric, conductive or semiconducting polymers, or small organic molecules, deposited mainly on flexible substrates.

Organic semiconductors offer a low-cost alternative to established semiconductors for large-area, low-cost applications. They provide new materials for building next-generation electronic and photonic devices. With a focus on organic semiconductors and dielectrics, these materials will enable products such as organic thin-film transistors (TFTs), notably for radio-frequency identification (RFID) and display backplanes. Although these materials may be applied by a variety of techniques, including evaporation and thermal transfer, they are primarily designed for solution processing when creating organic electronic (OE) systems. The advent of high-volume, cost-effective OE print manufacturing will allow manufacturers of OE products to turn their existing materials into “inks” for printing.

Two different groups:

  • Small-molecule materials: mainly prepared by thermal evaporation. Examples: pentacene, anthracene.

  • Polymers: prepared by solution processing (spin-coating, inkjet printing). Examples: polythiophene (PT), polyphenylene-vinylene (PPV).



Organic Electronics            Classical Electronics

reduced costs                  high manufacturing costs
simple process                 complex process
flexible substrates            rigid substrates
small integration density      extremely high density
high switching times           very small switching times
reduced performance            high performance
large areas                    small areas


Organic Thin Film Transistors


  • Possible uses: active-matrix flat-panel displays, “electronic paper” displays, sensors, and radio-frequency identification (RFID) tags.

  • In competition with a-Si:H, which is normally used in active-matrix displays.




Polycrystalline Pentacene


When diffusing from one grain to another, charge carriers get scattered at the defects introduced by the grain boundaries. These boundaries hence reduce the effective mobility.
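A common textbook way to capture this (added here as an illustration; not necessarily the model used for the pentacene films above) treats grain-boundary-limited transport as thermally activated over an energy barrier, so that the effective mobility falls exponentially with barrier height:

```python
import math

def effective_mobility(mu_grain, barrier_eV, temp_K=300.0):
    # grain-boundary-limited transport: mu_eff = mu_grain * exp(-E_b / kT),
    # where E_b is the energy barrier carriers must surmount at a boundary
    k_B = 8.617e-5  # Boltzmann constant in eV/K
    return mu_grain * math.exp(-barrier_eV / (k_B * temp_K))
```

With no barrier the in-grain mobility is recovered, and even a modest 0.1 eV barrier cuts the room-temperature effective mobility by well over an order of magnitude, which is why grain boundaries dominate transport in polycrystalline films.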


Organic Photovoltaic Cells


The photovoltaic effect in single-layer organic molecules was first observed in the 1970s, and later also in polymers. Cells consisting of a single material reach only very low efficiencies; a combination of at least two materials is necessary.






Advantages of organic electronics


  • The possibility of manufacturing components and circuits over large areas, while silicon chips are restricted to circular wafers of limited size.
  • They can be fabricated on plastic substrates, making them thin and flexible.
  • Only the flash-memory transistor, the silicon component found in pen drives, digital cameras and MP3 players, has continued to resist the advantages of plastic.

The low electrical conductivity is a disadvantage and currently limits possible applications. Research and development of new polymer combinations to increase conductivity costs money and time. There is also too little detailed evidence on long-term durability.

