Showing posts with label nt.

Wednesday, 26 October 2011

Second-generation XO laptop computer in the works

One Laptop per Child (OLPC), the non-profit organization focused on providing educational tools to help children in developing countries "learn learning," is working on a second-generation version of its XO laptop computer. Leveraging new advances in technology, the primary goal of the "XO-2" will be to advance new concepts of learning as well as to further drive down the cost of the laptop so that it is affordable for volume purchase by developing nations.

Google Android 2.0

Google Android is a mobile operating system built on the Linux kernel. It was originally developed by Android Inc., which Google later acquired. Developers write applications in Java, controlling the device through Google-developed Java libraries.

Android 2.0 introduces the following features:

# Optimized hardware speed
# Support for more screen sizes and resolutions
# Revamped UI
# New browser UI and HTML5 support
# New contact lists
# Better white/black ratio for backgrounds


# Improved Google Maps 3.1.2
# Microsoft Exchange support
# Built-in flash support for the camera
# Digital Zoom
# Improved virtual keyboard
# Bluetooth 2.1

# Ability to record and watch videos with the camcorder mode
# Uploading videos to YouTube and pictures to Picasa directly from the phone
# A new soft keyboard with an "Autocomplete" feature
# Bluetooth A2DP support
# Ability to automatically connect to a Bluetooth headset within a certain distance
# New widgets and folders that can populate the desktop
# Animations between screens.
# Expanded ability of Copy and paste to include web pages
# An integrated camera, camcorder, and gallery interface.
# Gallery now enables users to select multiple photos for deletion.
# Updated Voice Search, with faster response and deeper integration with native applications, including the ability to dial contacts.

# Updated search experience to allow searching bookmarks, history, contacts, and the web from the home screen.
# Updated technology support for CDMA/EVDO, 802.1x, VPNs, gestures, and a text-to-speech engine
# Speed improvements in searching and in the camera.

SmartBook mobility computing device

Concept laptops are appearing everywhere at the moment, and one of the most interesting of them is the SmartBook mobile computing device. It looks much like a conventional laptop but offers considerably more. This next-generation concept aims to make the dream of owning a truly unique computer a reality.


This device was designed by Roland Cernat, an award-winning designer whose work has been recognized around the world. The SmartBook is a concept design, and it is intended to support all of the software commonly used on laptops.

The SmartBook's screen doubles as a digital writing pad: you can scribble, write and draw on it with a digital pen that stows away under the keyboard. The SmartBook offers all of the multimedia features you would expect from a laptop, and both Wi-Fi and Bluetooth are built in.

The SmartBook is a mobile computing device with a multifunctional design that lets you rearrange its parts to suit your needs and style. It has two display panels; the panel not currently in use can serve as a second monitor, or one of the panels can be used as a keyboard.


Meeting your Company's Storage Needs with Directory Virtualization

Virtualization

by Don MacVittie

The one constant with enterprise storage is that you always need more of it. Since the 1990s, storage has seen a steady double-digit growth in most activities, and a much faster growth in enterprises that rely heavily on video for communications. Even the worldwide economic turmoil of 2008 and 2009 did not stop this need for more storage; it merely forced IT management to make tough decisions about targeting their limited dollars wisely.

Of course, with the growth of storage comes the growth of security concerns. Seventeen racks of storage present a larger threat surface than 10 do, and managing access rights to an ever growing pool of storage can be intimidating and fraught with room for error.

Enter directory virtualization, the technology that places a device between users and the various NAS devices on the network. While directory virtualization has been around for a good long while, the growing pressure of budget controls and increasing storage demands are just now bringing this technology to the fore.

The purpose of directory virtualization is to put a strategic point of control between users and the storage that they require for daily operations. By doing so, there is a platform that allows several things to happen. First, resource utilization can be greatly normalized because on the user side, directory virtualization devices present a single directory tree for all the various devices behind it. Thus, while the user stores to the same place in the directory tree, the actual physical location of the file can be moved to meet the needs of IT. Second, this movement can be automated by setting criteria to move files between storage tiers (storage tiering). For example, IT can say “if a file is used a lot, keep it on the really fast storage; if it is never accessed, roll it off to the slowest.” Furthermore, if the device behind a directory is reaching capacity, some or all of the files in that directory can be moved to a new device while the user sees no change – the file appears in the same place while IT was able to “expand” the space behind the directory tree.
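To make the tiering policy idea concrete, here is a minimal sketch in Python of how such a rule set might be expressed. It is not tied to any vendor's product; the tier names and thresholds are hypothetical.

from datetime import datetime, timedelta

# Hypothetical storage tiers, ordered fastest (and most expensive) to slowest.
TIERS = ["ssd", "fast-sas", "nearline", "cloud"]

def choose_tier(last_accessed: datetime, accesses_last_30d: int) -> str:
    """Pick a tier from simple, illustrative access-frequency rules."""
    age = datetime.now() - last_accessed
    if accesses_last_30d > 100:      # hot data stays on the fastest storage
        return "ssd"
    if age < timedelta(days=30):     # recently touched
        return "fast-sas"
    if age < timedelta(days=180):    # rarely touched
        return "nearline"
    return "cloud"                   # never accessed: roll it off to the slowest tier

# Example: a file untouched for a year lands on the cheapest tier.
print(choose_tier(datetime.now() - timedelta(days=365), accesses_last_30d=0))

A directory virtualization device applying rules like these can move a file between tiers without changing where it appears in the directory tree.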

Finally, utilizing a directory virtualization device that recognizes your cloud storage provider of choice, or a cloud storage gateway that is presented to the directory virtualization device as just another NAS device makes it possible to roll the least frequently accessed files out to cloud storage, which does not increase your storage infrastructure expense (although it will cost you a monthly fee - usually small - per gigabyte). This moves rarely accessed files completely out of your building, but since they still show up in the directory tree, they can be retrieved relatively quickly if needed. And once retrieved, they can work their way back up the hierarchy of tiers.

The other major benefit of this strategic point of control is one of security. Since everything is presented to users as a single directory tree, security can be moved from the large number of NAS devices into the directory virtualization device. The effect is that the various NAS devices can be locked down so that they only communicate through the directory virtualization device, and all security can happen in one place. A group does not have to be maintained at multiple locations/levels/devices, and storage/security administrators need only learn a single UI for day-to-day operations.

With the growing amount of data being transferred to remote locations, there is also a security risk in Internet communications. Knowing that the cloud storage communication device you deploy – be it part of a directory virtualization tool, a WAN optimization tool, or a stand-alone cloud storage gateway – can encrypt the data it is sending to the cloud will resolve many issues for you. First and foremost, it will allow IT to send any data it needs to cloud storage without concern for how well protected it is. Though of nearly equal importance, encryption of outgoing data will allow you to offload encryption from your servers. Since encryption is a heavily CPU-intensive operation, such offloading extends the useful life of servers by allowing them to service applications instead of encryption. While machines have continued to follow Moore’s law and gotten faster, virtualization has added to the encryption burden by multiplying the number of requests for encryption that software could be sending to the hardware CPUs. Moving this functionality off of VMs and physical servers and onto a device designed to handle encryption saves CPU cycles and either increases VM density or allows your server to do more.

Even though storage growth continues practically unabated, the options for how to deal with both increasing volume and securing the resulting storage against prying unauthorized eyes are expanding also. The advent of directory virtualization for NAS has improved security by enabling the ability to lock down storage access on the virtualization appliance, before the physical storage is even accessed. Directory virtualization also allows organizations to put off costly storage purchases by spreading the storage more evenly across the available infrastructure. Cloud storage brought with it enhanced encryption to protect data sent to the cloud, and the various deduplication/compression schemes implemented at every layer of the storage network by default obfuscate the data they are acting upon. While not the same as security, it can protect against casual prying eyes, and when used on data at-rest, can make deciphering data on physically stolen disks more difficult for would-be data compromisers.

The same is true with SAN encryption and compression of course; offloading these functions from the servers and onto purpose-built hardware available from most SAN vendors and many third-party vendors allows servers to focus on what they’re best at: serving up applications and data. While SAN virtualization has more caveats than NAS virtualization, it does allow for a certain amount of optimization and load balancing between SAN devices, which theoretically improves performance.

There is a lot going on in the storage space that can help resolve some of the more problematic issues of today – protecting data at rest, locking down data access, securely transferring some data to cloud storage vendors, and simplifying NAS infrastructure, to name a few. Taking advantage of some or all of this functionality will help you to better serve your customers, making your storage, and by extension your applications, more secure, fast, and available.

Nexenta Systems and Zmanda debut next generation backup and recovery offering

Zmanda Inc., a provider of open source based back-up solutions, and Nexenta Systems Inc., a provider of OpenStorage solutions, announced Wednesday availability of jointly developed and certified back-up solutions. Amanda Enterprise (AE), Zmanda’s network back-up product, now natively supports backup of NexentaStor powered storage appliances, as well as back-up of heterogeneous environments to NexentaStor.

Newly released Zmanda NexentaStor Client provides fast and efficient back-ups of data stored on NexentaStor. Data ships directly from NexentaStor to the Amanda Enterprise backup server, saving valuable network bandwidth. Zmanda Nexenta Client leverages various ZFS and zVol backup methods and is configurable to meet the needs of a specific workload. Zmanda NexentaStor Client also provides direct backups of raw volumes (zvols). For example, if VMware based VMs are using volumes stored on Nexenta (accessed via iSCSI or Fibre Channel), Zmanda Nexenta Client provides raw backups of these volumes (while maintaining thin provisioning) without installing any client software in the VMs.
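As background, the general ZFS pattern such a client builds on is a snapshot followed by a send of the resulting stream. The sketch below illustrates only that underlying mechanism, not Zmanda's actual implementation; the pool and volume names are hypothetical.

import subprocess
from datetime import datetime

def backup_zvol(zvol: str, backup_path: str) -> None:
    """Snapshot a ZFS volume and stream the snapshot to a backup file."""
    snap = f"{zvol}@backup-{datetime.now():%Y%m%d-%H%M%S}"
    # Take a point-in-time snapshot of the volume.
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # 'zfs send' writes the snapshot stream to stdout; capture it in a file
    # (a real backup client would stream it to the backup server instead).
    with open(backup_path, "wb") as out:
        subprocess.run(["zfs", "send", snap], stdout=out, check=True)

# Example with a hypothetical volume backing a VM disk:
# backup_zvol("tank/vm-disk01", "/backups/vm-disk01.zfsstream")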
In addition, NexentaStor has been formally certified as the destination of back-up files for the Amanda Enterprise server. Amanda Enterprise can perform network-wide back-up of heterogeneous systems, including Linux, Windows, Solaris and OS X-based systems onto NexentaStor. NexentaStor, built on ZFS technology, eliminates redundant back-ups, reduces business downtime and ensures critical information is always protected and can be restored in seconds. With it, corporate compliance and rules of governance for data retention are managed easily. Users can reuse information across lines of business and industry. As well, the entire back-up environment can be managed from a single console.
“Zmanda provides extremely cost-effective and simple to use back-up solutions for our customers who prize openness and efficiency,” said Evan Powell, CEO of Nexenta. “Optimized back-ups of NexentaStor storage from Zmanda will enable our users to realize even more value from our platform.”
“NexentaStor provides open, enterprise-class storage based on ZFS, making it a perfect storage repository for Amanda Enterprise,” said Chander Kant, CEO of Zmanda. “With this integration, our customers will find NexentaStor a scalable and robust choice for storing their back-up files.”
Amanda Enterprise is an enhanced and supported version of the world’s most popular open source back-up and recovery software. Amanda Enterprise allows users to back-up, archive, and recover servers, workstations, desktops and business-critical applications across a network. The back-up data and archives can be stored on disks, tape, optical devices, or storage clouds such as Amazon S3.
The Nexenta Client for Amanda Enterprise is available immediately for $300.

Wednesday, 19 October 2011

Engineers Create Touchscreen Braille Writer

Some of the undergraduates are gathered into teams. Some work alone. All are assigned mentors and tasked with a challenge. They compete, American Idol-style, for top honors at the end of the summer.
The competition is made possible in part by a collaboration between the U.S. Army and several university and industry partners that makes up the AHPCRC.
Adam Duran is one such undergraduate, a student both lucky and good. He is now in his senior year at New Mexico State University. Last June, he came to Stanford at the suggestion of one of his professors. His mentors were Adrian Lew, an assistant professor of mechanical engineering, and Sohan Dharmaraja, a doctoral candidate at Stanford studying computational mathematics.
"Originally, our assignment was to create a character-recognition application that would use the camera on a mobile device -- a phone or tablet -- to transform pages of Braille into readable text," said Duran. "It was a cool challenge, but not exactly where we ended up."
Bigger fish
Even before Duran arrived for the summer, Lew and Dharmaraja began to talk to the Stanford Office of Accessible Education, people whose profession is helping blind and visually impaired students negotiate the world of higher learning. It became clear that there were bigger fish to fry.
While a Braille character reader would be helpful to the blind, Lew and Dharmaraja learned, there were logistics that were hard to get around.
"How does a blind person orient a printed page so that the computer knows which side is up? How does a blind person ensure proper lighting of the paper?" said Duran. "Plus, the technology, while definitely helpful, would be limited in day-to-day application."
"It was a nice-to-have, not a must-have," said Dharmaraja.
So, the three began to ask questions. That is when they stumbled upon a sweet spot.
"The killer app was not a reader, but a writer," said Dharmaraja.
"Imagine being blind in a classroom, how would you take notes?" said Lew. "What if you were on the street and needed to copy down a phone number? These are real challenges the blind grapple with every day."
There are devices that help the blind write Braille, to send email and so forth, but they are essentially specialized laptops that cost, in some cases, $6,000 or more. All for a device of limited functionality, beyond typing Braille, of course.
"Your standard tablet has more capability at a tenth the price," said Duran.
"So, we put two and two together. We developed a tablet Braille writer," said Dharmaraja, "A touchscreen for people who can't see."
First, however, the student-mentor team had to learn Braille. Originally developed for the French military, Braille is a relatively simple code with each character made up of variations of six dots -- or bumps, really -- arranged in a 2-by-3 matrix. The blind read by feeling the bumps with their fingertips.
As any computational mathematician will tell you, such a matrix yields two-to-the-sixth minus one variations, or 63 possible characters. These 63 characters are enough for a Western alphabet plus 10 numerical digits, with several left over for punctuation and some special characters.
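That count is easy to verify; a short sketch in Python enumerates the non-empty cells.

from itertools import product

# A Braille cell is a 2-by-3 matrix of dots, each either raised or flat.
# Excluding the all-flat (blank) cell leaves 2**6 - 1 = 63 characters.
cells = [dots for dots in product((0, 1), repeat=6) if any(dots)]
print(len(cells))  # 63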
Over the years, however, those 63 characters got quickly gobbled up -- through the addition of character-modification keystrokes, the total grew and now includes chemical, mathematical and other symbols.
Challenge
A modern Braille writer looks like a laptop with no monitor and an eight-key keyboard -- six to create the character, plus a carriage return and a delete key.
Duplicating the Braille keypad on a touch-based tablet seemed simple enough, but there was at least one significant challenge: How does a blind person find the keys on a flat, uniformly smooth glass panel?
Dharmaraja and Duran mulled their options before arriving at a clever and simple solution. They did not create virtual keys that the fingertips must find; they made keys that find the fingertips. The user simply touches eight fingertips to the glass, and the keys orient themselves to the fingers. If the user becomes disoriented, a reset is as easy as lifting all eight fingers off the glass and putting them down again.
"Elegant, no?" said Lew. "The solution is so simple, so beautiful. It was fun to see."
Beyond the price difference, touchscreens offer at least one other significant advantage over standard Braille writers: "They're customizable," Dharmaraja noted. "They can accommodate users whose fingers are small or large, those who type with fingers close together or far apart, even to allow a user to type on a tablet hanging around the neck with hands opposed as if playing a clarinet."
"No standard Braille writer can do this," said Professor Charbel Farhat, the chair of the Aeronautics and Astronautics Department and executive director of the summer program. "This is a real step forward for the blind."
Showing off
In a demo, Duran donned a blindfold and readied himself before the touchscreen. He typed out an email address and a simple subject line. Then he typed one of the best-known mathematical formulas in the world, the Burgers Equation, and followed with the chemical equation for photosynthesis -- complex stuff -- all as if writing a note to his mother.
For Duran, who has an uncle who is blind, the greatest joy was in seeing a blind person using his creation for the first time. "That was so awesome," he said. "I can't describe the feeling. It was the best."
In the immediate future, there are technical and legal hurdles to address, but someday, perhaps soon, the blind and visually impaired may find themselves with a more cost-effective Braille writer that is both portable and blessed with greater functionality than any device that went before.
"AHPCRC is an excellent model for outreach, which not only trains undergraduate students in computational sciences but also exposes students to real-world research applications," said Raju Namburu, the cooperative agreement manager for AHPCRC.
The center addresses the Army's most difficult scientific and engineering challenges using high-performance computing. Stanford University is the AHPCRC lead organization with oversight from the Army Research Laboratory.
As for his summer courses, Farhat is optimistic. "Let's remember," he points out, "This was a two-month summer project that evolved because a few smart people asked some good questions. I'm always amazed by what the students accomplish in these courses, but this was something special. Each year it seems to get better and more impressive."

'Robot Biologist' Solves Complex Problem from Scratch

An interdisciplinary team of scientists at Vanderbilt University, Cornell University and CFD Research Corporation, Inc., has taken a major step toward this goal by demonstrating that a computer can analyze raw experimental data from a biological system and derive the basic mathematical equations that describe the way the system operates. According to the researchers, it is one of the most complex scientific modeling problems that a computer has solved completely from scratch.
The paper that describes this accomplishment is published in the October issue of the journal Physical Biology and is currently available online.
The work was a collaboration between John P. Wikswo, the Gordon A. Cain University Professor at Vanderbilt, Michael Schmidt and Hod Lipson at the Creative Machines Lab at Cornell University and Jerry Jenkins and Ravishankar Vallabhajosyula at CFDRC in Huntsville, Ala.
The "brains" of the system, which Wikswo has christened the Automated Biology Explorer (ABE), is a unique piece of software called Eureqa developed at Cornell and released in 2009. Schmidt and Lipson originally created Eureqa to design robots without going through the normal trial and error stage that is both slow and expensive. After it succeeded, they realized it could also be applied to solving science problems.
One of Eureqa's initial achievements was identifying the basic laws of motion by analyzing the motion of a double pendulum. What took Sir Isaac Newton years to discover, Eureqa did in a few hours when running on a personal computer.
In 2006, Wikswo heard Lipson lecture about his research. "I had a 'eureka moment' of my own when I realized the system Hod had developed could be used to solve biological problems and even control them," Wikswo said. So he started talking to Lipson immediately after the lecture and they began a collaboration to adapt Eureqa to analyze biological problems.
"Biology is the area where the gap between theory and data is growing the most rapidly," said Lipson. "So it is the area in greatest need of automation."
Software passes test
The biological system that the researchers used to test ABE is glycolysis, the primary process that produces energy in a living cell. Specifically, they focused on the manner in which yeast cells control fluctuations in the chemical compounds produced by the process.
The researchers chose this specific system, called glycolytic oscillations, to perform a virtual test of the software because it is one of the most extensively studied biological control systems. Jenkins and Vallabhajosyula used one of the process' detailed mathematical models to generate a data set corresponding to the measurements a scientist would make under various conditions. To increase the realism of the test, the researchers salted the data with a 10 percent random error. When they fed the data into Eureqa, it derived a series of equations that were nearly identical to the known equations.
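The noise-salting step is simple to reproduce in spirit. The sketch below adds up to plus or minus 10 percent random error to a set of simulated measurements; the values are made up, and uniform (rather than Gaussian) error is an assumption here.

import random

def salt_with_noise(values, relative_error=0.10):
    """Perturb each simulated measurement by up to +/-10% random error."""
    return [v * (1 + random.uniform(-relative_error, relative_error))
            for v in values]

# Illustrative (made-up) metabolite concentrations from a model run:
clean = [0.52, 1.34, 2.01, 1.87, 0.95]
print(salt_with_noise(clean))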
"What's really amazing is that it produced these equations a priori," said Vallabhajosyula. "The only thing the software knew in advance was addition, subtraction, multiplication and division."
Beyond Adam
The ability to generate mathematical equations from scratch is what sets ABE apart from Adam, the robot scientist developed by Ross King and his colleagues at the University of Wales at Aberystwyth. Adam runs yeast genetics experiments and made international headlines two years ago by making a novel scientific discovery without direct human input. King fed Adam with a model of yeast metabolism and a database of genes and proteins involved in metabolism in other species. He also linked the computer to a remote-controlled genetics laboratory. This allowed the computer to generate hypotheses, then design and conduct actual experiments to test them.
"It's a classic paper," Wikswo said.
In order to give ABE the ability to run experiments like Adam, Wikswo's group is currently developing "laboratory-on-a-chip" technology that can be controlled by Eureqa. This will allow ABE to design and perform a wide variety of basic biology experiments. Their initial effort is focused on developing a microfluidics device that can test cell metabolism.
"Generally, the way that scientists design experiments is to vary one factor at a time while keeping the other factors constant, but, in many cases, the most effective way to test a biological system may be to tweak a large number of different factors at the same time and see what happens. ABE will let us do that," Wikswo said.
The project was funded by grants from the National Science Foundation, National Institute on Drug Abuse, the Defense Threat Reduction Agency and the National Academies Keck Futures Initiative.

Could a Computer One Day Rewire Itself? New Nanomaterial 'Steers' Electric Currents in Multiple Dimensions

Scientists at Northwestern University have developed a new nanomaterial that can "steer" electrical currents. The development could lead to a computer that can simply reconfigure its internal wiring and become an entirely different device, based on changing needs.

As electronic devices are built smaller and smaller, the materials from which the circuits are constructed begin to lose their properties and instead become governed by quantum mechanical phenomena. Approaching this physical barrier, many scientists have begun building circuits in multiple dimensions, such as stacking components on top of one another.
The Northwestern team has taken a fundamentally different approach. They have made reconfigurable electronic materials: materials that can rearrange themselves to meet different computational needs at different times.
"Our new steering technology allows use to direct current flow through a piece of continuous material," said Bartosz A. Grzybowski, who led the research. "Like redirecting a river, streams of electrons can be steered in multiple directions through a block of the material -- even multiple streams flowing in opposing directions at the same time."
Grzybowski is professor of chemical and biological engineering in the McCormick School of Engineering and Applied Science and professor of chemistry in the Weinberg College of Arts and Sciences.
The Northwestern material combines different aspects of silicon- and polymer-based electronics to create a new classification of electronic materials: nanoparticle-based electronics.
The study, in which the authors report making preliminary electronic components with the hybrid material, will be published online Oct. 16 by the journal Nature Nanotechnology. The research also will be published as the cover story in the November print issue of the journal.
"Besides acting as three-dimensional bridges between existing technologies, the reversible nature of this new material could allow a computer to redirect and adapt its own circuitry to what is required at a specific moment in time," said David A. Walker, an author of the study and a graduate student in Grzybowski's research group.
Imagine a single device that reconfigures itself into a resistor, a rectifier, a diode and a transistor based on signals from a computer. The multi-dimensional circuitry could be reconfigured into new electronic circuits using a varied input sequence of electrical pulses.
The hybrid material is composed of electrically conductive particles, each five nanometers in width, coated with a special positively charged chemical. (A nanometer is a billionth of a meter.) The particles are surrounded by a sea of negatively charged atoms that balance out the positive charges fixed on the particles. By applying an electrical charge across the material, the small negative atoms can be moved and reconfigured, but the relatively larger positive particles are not able to move.
By moving this sea of negative atoms around the material, regions of low and high conductance can be modulated; the result is the creation of a directed path that allows electrons to flow through the material. Old paths can be erased and new paths created by pushing and pulling the sea of negative atoms. More complex electrical components, such as diodes and transistors, can be made when multiple types of nanoparticles are used.
The title of the paper is "Dynamic Internal Gradients Control and Direct Electric Currents Within Nanostructured Materials." In addition to Grzybowski and Walker, other authors are Hideyuki Nakanishi, Paul J. Wesson, Yong Yan, Siowling Soh and Sumanth Swaminathan, from Northwestern, and Kyle J. M. Bishop, a former member of the Grzybowski research group, now with Pennsylvania State University.

3-D Gesture-Based Interaction System Unveiled

Touch screens such as those found on the iPhone or iPad are the latest form of technology allowing interaction with smart phones, computers and other devices. However, scientists at Fraunhofer FIT have developed a next-generation non-contact gesture and finger recognition system. The novel system detects hand and finger positions in real time and translates these into appropriate interaction commands. Furthermore, the system does not require special gloves or markers and is capable of supporting multiple users.
With touch screens becoming increasingly popular, classic interaction techniques such as the mouse and keyboard are being used less frequently. One example of a breakthrough is the Apple iPhone, which was released in summer 2007. Since then many other devices featuring touch screens and similar characteristics have been successfully launched, with more advanced devices, such as the Microsoft Surface table, even supporting multiple users simultaneously on an entire surface that can be used for input. However, this form of interaction is specifically designed for two-dimensional surfaces.
Fraunhofer FIT has developed the next generation of multi-touch environment, one that requires no physical contact and is entirely gesture-based. This system detects multiple fingers and hands at the same time and allows the user to interact with objects on a display. The users move their hands and fingers in the air and the system automatically recognizes and interprets the gestures accordingly.
Cinemagoers will remember the science-fiction thriller Minority Report from 2002 which starred Tom Cruise. In this film Tom Cruise is in a 3-D software arena and is able to interact with numerous programs at unimaginable speed, however the system used special gloves and only three fingers from each hand.
The FIT prototype provides the next generation of gesture-based interaction, well beyond the Minority Report system. It tracks the user's hand in front of a 3-D camera. The camera uses the time-of-flight principle: for each pixel, it measures how long emitted light takes to travel to the tracked object and back, which allows the distance between the camera and the object to be calculated.
"A special image analysis algorithm was developed which filters out the positions of the hands and fingers. This is achieved in real-time through the use of intelligent filtering of the incoming data. The raw data can be viewed as a kind of 3-D mountain landscape, with the peak regions representing the hands or fingers." said Georg Hackenberg, who developed the system as part of his Master's thesis. In addition plausibility criteria are used, these are based around: the size of a hand, finger length and the potential coordinates.
A user study was conducted and found the system both easy to use and fun. However, work remains to be done on removing elements that confuse the system, for example reflections caused by wristwatches and palms positioned orthogonal to the camera.
"With Microsoft announcing Project Natal, it is likely that similar techniques will very soon become standard across the gaming industry. This technology also opens up the potential for new solutions in the range of other application domains, such as the exploration of complex simulation data and for new forms of learning," predicts Prof. Dr. Wolfgang Broll of the Fraunhofer Institute for Applied Information Technology FIT.

Wearable Depth-Sensing Projection System Makes Any Surface Capable of Multitouch Interaction

OmniTouch employs a depth-sensing camera, similar to the Microsoft Kinect, to track the user's fingers on everyday surfaces. This allows users to control interactive applications by tapping or dragging their fingers, much as they would with touchscreens found on smartphones or tablet computers. The projector can superimpose keyboards, keypads and other controls onto any surface, automatically adjusting for the surface's shape and orientation to minimize distortion of the projected images.
"It's conceivable that anything you can do on today's mobile devices, you will be able to do on your hand using OmniTouch," said Chris Harrison, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute. The palm of the hand could be used as a phone keypad, or as a tablet for jotting down brief notes. Maps projected onto a wall could be panned and zoomed with the same finger motions that work with a conventional multitouch screen.
Harrison was an intern at Microsoft Research when he developed OmniTouch in collaboration with Microsoft Research's Hrvoje Benko and Andrew D. Wilson. Harrison will describe the technology Oct. 19 at the Association for Computing Machinery's Symposium on User Interface Software and Technology (UIST) in Santa Barbara, Calif.
A video demonstrating OmniTouch and additional downloadable media are available at: http://www.chrisharrison.net/index.php/Research/OmniTouch
The OmniTouch device includes a short-range depth camera and laser pico-projector and is mounted on a user's shoulder. But Harrison said the device ultimately could be the size of a deck of cards, or even a matchbox, so that it could fit in a pocket, be easily wearable, or be integrated into future handheld devices.
"With OmniTouch, we wanted to capitalize on the tremendous surface area the real world provides," said Benko, a researcher in Microsoft Research's Adaptive Systems and Interaction group. "We see this work as an evolutionary step in a larger effort at Microsoft Research to investigate the unconventional use of touch and gesture in devices to extend our vision of ubiquitous computing even further. Being able to collaborate openly with academics and researchers like Chris on such work is critical to our organization's ability to do great research -- and to advancing the state of the art of computer user interfaces in general."
Harrison previously worked with Microsoft Research to develop Skinput, a technology that used bioacoustic sensors to detect finger taps on a person's hands or forearm. Skinput thus enabled users to control smartphones or other compact computing devices.
The optical sensing used in OmniTouch, by contrast, allows a wide range of interactions, similar to the capabilities of a computer mouse or touchscreen. It can track three-dimensional motion on the hand or other commonplace surfaces, and can sense whether fingers are "clicked" or hovering. What's more, OmniTouch does not require calibration -- users can simply wear the device and immediately use its features. No instrumentation of the environment is required; only the wearable device itself is needed.

Juniper Networks unveils cloud-based environment for network operators

Juniper Networks has announced the availability of Junosphere Lab, a new virtual environment that revolutionizes the way service providers and enterprises design, test and operate networks. Junosphere is a cloud offering that allows network operators to create and run networks on demand, enabling network modeling, testing and planning at a scale that is practically impossible to achieve with physical equipment. Using Junosphere Lab, companies can "rent" networks for as little as USD 50 per day, enabling them to speed modeling projects by over 30 percent and lower total cost of ownership by as much as 90 percent compared to the alternative of building a physical lab.

The Junosphere Lab overcomes traditional challenges associated with network modeling and design by harnessing the power of virtualization to reduce reliance on overburdened physical network labs and dramatically improving time efficiency by transforming how organizations approach network modeling. It can be used to speed service introduction, plan more effectively and reduce the risk of network changes. Using Junosphere Lab, network personnel can create and model virtual networks running the Junos operating system as a substitute for or supplement to physical test labs, eliminating significant power, cooling and space requirements.  It also provides a simple, cost effective tool for training and maintaining the skills of technicians and NOC personnel.

 “Junosphere appears to be a powerful tool to test new architectures and evaluate new ways to enhance our network operations, security and convergence capabilities for both current and future needs,” said David Roy, network engineer, France Telecom / Orange. “Junosphere Lab allows us to test new prototypes in a virtual environment and enable modeling at a level of scale that is often impossible in the physical world, while significantly reducing risk and costs.”  

“Energy efficiency and environmental responsibility is very important to us at NTT Communications, and we highly value technologies that can help us reduce power demand by delivering a highly scalable, realistic environment at less cost,” said Dr. Shin Miyakawa, director, Network System and Technologies, Innovative IP Architecture Center at NTT Communications. “We are excited about Junosphere Lab because it could enable us to perform many testing and modeling exercises in the cloud, rather than with physical test labs, which in turn could significantly reduce the amount of power, space and cooling our labs require.”

“Service providers and large enterprises struggle to implement lab environments that emulate their operational networks. Reproducing the scale of the network is simply not feasible in a lab,” said Michael Kennedy, principal analyst, ACG Research. “Because the Junosphere virtual environment is hosted in the cloud and generates no demand for power, space or cooling resources, it is capable of testing a full scale network without any capital expenditures and in less time than a physical test lab. The full scale network test has greater power, enabling it to discover the actual behavior of the operational network and reduce the number of risks and unknowns because no compromises are made in the test scale. All of these factors make Junosphere Lab a very compelling offer.”

“Junosphere Lab leverages the power of cloud networking to help customers plan and operate their networks more efficiently,” said Manoj Leelanivas, executive vice president, Junos Application Software Business Group, Juniper Networks. “The economic and operational benefits of cloud-based network modeling are astounding, and the innovation of Junosphere Lab brings these benefits to our customers so they can shape and scale their next generation networks with agility.”

Storage maps the future of digital data

The one consistent theme in the digital world is that growth is a constant. It is estimated that from 2009 to 2020, the size of the digital universe will have increased 44-fold; that is a 41 per cent increase in capacity every year. Storing, locating and extracting value from high volumes of data will become increasingly complex.
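Those two figures are consistent: 41 per cent compound annual growth over the eleven years from 2009 to 2020 works out to roughly a 44-fold increase, as a quick check shows.

# 41 per cent compound annual growth over 11 years (2009 to 2020)
print(f"{1.41 ** 11:.1f}x")   # prints roughly 43.8x, i.e. about 44-fold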
As the digitally-enabled business world evolves, the mix of data and its anticipated usages are going to change as well. Already, there is an increased diversity of data types, with 80 per cent of today's data being unstructured, and the reuse of data is shrinking, with 80 per cent of data never being used again after 90 days. However, regulation and compliance dictate that data is adequately archived for long periods of time, sometimes for a hundred years or more.
The fallout from the way data storage is currently handled is massive – the impact on the environment is one of these factors. Storage already consumes 40 per cent of datacentre power, and it is predicted that within ten years the total energy consumed by storage solutions could increase to more than six times what it is today. Based on these predictions, storage could represent over 75 per cent of the energy consumed within the datacentre, and if you consider that 80 per cent of data is never looked at again after three months, storage is a major IT trigger for energy burn-out.
Another fallout is cost and the added expense of managing growing volumes of data. The business-critical nature of data is driving up storage management costs by 25 per cent per year, so in the long term it will become the number one cost within many datacentres. Therefore, it's becoming increasingly important to align the value of data with the capabilities and cost of the storage it sits on.
Looking forwards, the future of storage management must be simple, easily accessible, cost efficient, environmentally friendly and streamlined, so organizations can function and perform quicker and better.
Striving for nirvana
There are three essential elements that must be considered when formulating a storage strategy to meet growing data demands – the evolving function of the datacentre, business drivers, and the ‘nirvana’ storage solution.
Today’s typical datacentre is migrating from a physical, static and heterogeneous set-up to a grid-based virtualised infrastructure, and from there to a cloud computing environment that enables self-service, policy-based resource management and capacity planning. Along the way, the storage solution must be able to support this style of datacentre, so it is critical that the storage system is dynamic enough to support the difficult-to-predict demands of these application environments through a tiered approach.

Reducing cost was at the top of the CIO’s agenda yesterday; now business growth and profitability are. The storage strategy must fall in line with these objectives. So, regardless of an organisation’s size, the storage solution must be able to scale to solve larger, more complex business problems, and it has to perform in real time so organisations can react and make business decisions immediately. Likewise, the infrastructure has to be efficient so complex business problems can be solved effectively at reduced cost and improved speed, and data integrity must be built in to meet long-term business and regulatory compliance.
Finally, there is the liberating act of creating a ‘storage nirvana’, should cost and incumbent infrastructure be no object. For a CIO, this would probably include on-demand secure data access, application-aware storage optimisation, unlimited capacity, scalable performance, appliance-like rapid deployment, and integrated application, system and storage management. Although this nirvana is some distance away, these ideas must be taken into consideration to guide organisations onto a path of accelerated performance, profitability and lower IT costs.

World’s data more than doubling every two years, says study

EMC recently announced results of the EMC-sponsored IDC Digital Universe study, “Extracting Value from Chaos,” which found that the world’s information is more than doubling every two years, with a colossal 1.8 zettabytes to be created and replicated in 2011 -- a rate of growth that outpaces Moore's Law.

Now in its fifth year, the study measures and forecasts the amount of digital information created and copied annually and analyzes the implications for individuals, enterprises and IT professionals. Its findings point to huge economic, social and technological implications around big data and other opportunities.

In terms of sheer volume, the 1.8 zettabytes of data created in the 2011 global Digital Universe is equivalent to:

  • Every person in India tweeting 3 tweets per minute for 6,883 years non-stop.
  • 32 days of non-stop download of that data (1.8 zettabytes) by the entire population of India (approximately 1.21 billion people).
  • Every person in the world having over 215 million high-resolution MRI scans per day.
  • Over 200 billion HD movies (each 2 hours in length)—would take 1 person 47 million years to watch every movie 24x7.
  • The amount of information needed to fill 57.5 billion 32GB Apple iPads.
With these many iPads, we could:
  1. Create a wall of iPads, 4,005-miles long and 61-feet high extending from Anchorage, Alaska to Miami, Florida.
  2. Build the Great iPad Wall of China—at twice the average height of the original.
  3. Build a 20-foot high wall around South America.
  4. Cover 86 percent of Mexico City.
  5. Build a mountain 25-times higher than Mt. Fuji.
The forces behind this relentless growth are driven by technology and money. New “information taming” technologies are driving the cost of creating, capturing, managing and storing information down to one-sixth of what it was in 2005. Additionally, since 2005 annual enterprise investments in the Digital Universe—cloud, hardware, software, services, and staff to create, manage, store and generate revenue from the information—have increased 50 percent to USD 4 trillion.

Study Highlights:
Massive server, data management and file growth outpacing staffing: IDC notes that the skills, experience, and resources to manage the deluge of data and resources simply aren’t keeping pace with all areas of growth. Over the next decade (by 2020), IT departments worldwide will experience:
  • 50X the amount of information to be managed.
  • 10X the number of servers (virtual and physical).
  • 75X the number of files or containers that encapsulate the information in the digital universe, which is growing even faster than the information itself as more and more embedded systems, such as sensors in clothing, in bridges or in medical devices, come online.
  • 1.5X the number of IT professionals available to manage it all.
Cloud computing cost and operational efficiency: While cloud computing accounts for less than 2 percent of IT spending today, IDC estimates that by 2015 nearly 20 percent of the information will be "touched" by cloud computing service providers — meaning that somewhere in a byte's journey from originator to disposal it will be stored or processed in a cloud. Perhaps as much as 10 percent will be maintained in a cloud.

The digital shadow has a mind of its own: The amount of information individuals create themselves—writing documents, taking pictures, downloading music, etc.—is far less than the amount of information being created about them in the digital universe.
The liability and responsibility is with enterprises: While 75 percent of the information in the digital universe is generated by individuals, enterprises have some liability for 80 percent of information in the digital universe at some point in its digital life.
“The chaotic volume of information that continues growing relentlessly presents an endless amount of opportunity—driving transformational societal, technological, scientific, and economic changes,” said Jeremy Burton, Chief Marketing Officer, EMC Corporation. “Big Data  is forcing change in the way businesses manage and extract value from their most important asset – information."

Can storage virtualization ease vendor lock-in?

One of the benefits of some storage virtualization systems is that they allow you to use any vendor's hardware and bring it under a single storage services umbrella. The basic idea is that you not be locked into any one vendor in particular. This sounds like nirvana, but so far it hasn't really lived up to expectations. That may change thanks to server virtualization.
The concept of abstracting the services that a storage controller provides, like LUN management, snapshots and thin provisioning, has been around for more than a decade. Most storage systems today are not really a tight integration between hardware and software. Vendors, with a few exceptions, are software developers first, and they often use off-the-shelf hardware. You are really buying the software, or what I call the storage services, and with those services comes hardware the vendor selected but probably did not design.
The goal of vendor-agnostic storage virtualization is to break that model. This traditionally meant buying a relatively powerful set of servers, clustering them for availability and running the storage software vendor's product. From there you could essentially attach any vendor's disk system, giving you leverage when it came time to buy. Again, this sounds like nirvana, and while some users bought into the idea, most did not.
The reason for the lack of adoption was the "kit" nature of this approach. You had to assemble the products, connect them to the servers running the storage services software, and get it all working. When implemented, these systems are impressive. They can migrate between storage vendors, replicate to different ones, and even stripe volumes across different manufacturers' systems.
If something went wrong though, you had to go to your hardware vendors and ask for help. This was sometimes difficult to do since you were not using their software. Basically the lights were on, so they thought their job was done. While the storage software companies tried to help out, there was only so much they could do and often the customer was left to figure it out on his or her own.
This led to the systems that currently dominate the storage virtualization market: single-manufacturer systems that provide the software complement of virtualization, like abstracting volume creation from disk spindle management, thin provisioning, snapshots, replication and so on. The hardware, though, comes from the same manufacturer to eliminate the "kit" nature described above. The user community has voted with its dollars that this was an acceptable compromise, and vendor-agnostic storage virtualization is a relatively niche market today.
I've said many times that we have only scratched the surface of how server virtualization will change the way IT operates. One of those changes may be at the storage layer. The hypervisor may end up virtualizing storage just as it virtualizes CPUs and network connectivity.
The hypervisor may make the "kit" nature of vendor-agnostic storage virtualization seem more manageable. Just as users are becoming less concerned about what brand of server they use, they may become less concerned about the brand of storage they use. You will get to focus on the reliability and performance of the storage system instead of who has the best snapshot capabilities.
In fairness, today's hypervisors lack the complete capabilities needed to perform all the storage service functions like replication, snapshots and clones, but as we discuss in our article "The VDI Storage Trade Off," software is now available to fill those gaps.
Letting the hypervisor handle macro storage services like data location and then using software to provide the more granular services like scalable snapshots may be a viable alternative. For many, this may be an ideal path to making storage a more cost effective part of a server or desktop virtualization project.

End of Microsoft XP support accelerating desktop virtualization

With less than a thousand days to go until Microsoft no longer supports Microsoft® Windows XP, organizations across the globe report that they are accelerating their migration to modern desktops powered by Microsoft Windows 7. In addition, the high level of awareness among these organizations of desktop virtualization’s potential to simplify the move to a new operating system such as Windows 7 is driving their decision to invest.


These are some of the key findings of a commissioned study conducted by Forrester Consulting on behalf of Dimension Data on the desktop virtualization market. Of the 546 organizations that were surveyed, close to half (46 percent) said that they had begun ‘aggressive efforts’ to migrate to Windows 7, with a further 17 percent planning to deploy within the next year.
While 13 percent of companies said they had completed their enterprise-wide migrations, 51 percent of IT managers surveyed said they have linked their Windows 7 migrations to their organization’s PC refresh cycle. Around 21 percent of enterprises are prioritizing desktop and application virtualization over their Windows 7 upgrade, and 29 percent are deliberately timing their investments in Windows 7 and desktop virtualization to coincide.

Neville Burdan, General Manager of Microsoft Solutions, Dimension Data Asia Pacific said, “The Forrester research tells us that organizations are under pressure to beat the Windows XP end-of-support deadline.  Of those 124 Enterprise IT decision-makers surveyed in Asia which included Singapore, India, Hong Kong and China, the respondents confirmed that they still support a large population of Windows XP and Vista users (40.6 percent and 9.5 percent respectively) compared to 36.5 percent of users already on Windows 7. However, most of the organizations are aggressively upgrading their end users to Windows 7 desktop.  16 percent of the respondents have already completed their Windows 7 migration, 48 percent are in the process of deploying Windows 7 and 16 percent planning to start deploying Windows 7 within 6 to 12 months.”
With the use of desktop virtualization predicted to grow significantly in the next two years, Burdan believes Windows 7 is an ideal opportunity for organizations to implement a more modern, next-generation desktop that is more secure and less time- and labor-intensive to deliver, giving end users the functionality, interface and access they desire. However, he warned that desktop virtualization is not a silver bullet for all desktop-related challenges.

“Organizations must first understand their business drivers, workforce demands, and the state of their application ecosystem before they define their next generation desktop roadmap. Many of our clients are grappling with complex issues relating to their applications ecosystems.  And while the research indicates that the major drivers behind desktop virtualization are cost reduction and security, 56 percent of participants said that they recognized that applications virtualization will help them to migrate to Windows 7.  To reduce complexity, organizations would do well to tie virtualization investments into their Windows 7 migration plan,” Burdan said.  

Cisco, Citrix do video via virtual desktop

Cisco Systems Wednesday announced new technology to deliver high-definition video and voice through a virtual desktop infrastructure which sends the signals from one endpoint to the other, bypassing the data center and reducing the high CPU processing and bandwidth that makes for subpar video. Cisco is also entering into a "strategic alliance" with virtual desktop provider Citrix to tightly integrate the Cisco technology, called Virtual Experience Infrastructure (VXI) with the Citrix XenDesktop virtual desktop platform.

The VXI product line, being rolled out this quarter and early in 2012, consists of the VXC 6215 thin client, a small tower that plugs into a desktop computer, and the VXC 4000 software appliance that runs on a computer with a Microsoft Windows XP or Windows 7 operating system. The VXC 4000 also integrates with Citrix's HDX desktop virtualization that delivers enterprise applications on any device or any network. The alliance will also involve wide area application services (WAAS) technology being optimized for Citrix XenDesktop.

Cisco says existing video and voice communications on virtual desktops are hampered by the fact that the signal travels from one endpoint to the data center where it makes a "hairpin turn" back out of the data center to the receiving endpoint. In a demonstration video, Cisco showed that this technology gobbles up 45-50 percent of CPU cycles and 100 megabits per second (Mbps) of network bandwidth and still results in inferior video and audio quality. With VXI, CPU usage drops to the range of 3-5 percent and bandwidth consumption drops to kilobits instead of megabits.

With VXI, the only information about the call that goes to the data center is a few kilobytes of signaling data, explained Phil Sherburne, VP of enterprise architecture and systems at Cisco, who hosted a TelePresence videoconference with reporters scattered throughout North America. The voice and video are encoded and transmitted over the network using native media protocols such as the Real-time Transport Protocol (RTP), directly to the receiving endpoint. The result is a significant reduction in bandwidth and CPU consumption because the encoding and decoding take place at the endpoints.
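The split Sherburne describes — a small amount of signaling to the data center, media flowing endpoint to endpoint — can be illustrated with a minimal sketch. The packet layout, addresses and session record below are purely illustrative assumptions, not Cisco's implementation; the sketch simply shows an RTP-style packet being sent directly between two peers over UDP, while the only thing a data center would ever see is a tiny signaling record.

```python
import socket
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int, payload_type: int = 96) -> bytes:
    """Build a minimal 12-byte RTP fixed header (RFC 3550): version 2, no padding/extension/CSRC."""
    byte0 = 2 << 6                      # version=2, P=0, X=0, CC=0
    byte1 = payload_type & 0x7F         # M=0, payload type
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF, timestamp, ssrc)

# Illustrative signaling record: in a deployment, only something this small
# (peer address, codec, stream identifier) would traverse the data center.
signaling_record = {"peer": ("127.0.0.1", 5004), "codec": "H.264", "ssrc": 0x1234ABCD}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = b"\x00" * 1200                  # stand-in for one encoded video packet
for seq in range(5):
    packet = rtp_header(seq, timestamp=seq * 3000, ssrc=signaling_record["ssrc"]) + frame
    # The media itself goes straight to the other endpoint -- it never hairpins
    # through the data center, which is where the bandwidth and CPU savings come from.
    sock.sendto(packet, signaling_record["peer"])
sock.close()
```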

"What this leads to, and what we're really excited about, is enterprise-grade voice and video based on Cisco UC for that virtual desktop environment," said Sherburne.

While Cisco's VXI technology will also run in a VMware View desktop virtualization environment, Cisco's relationship with Citrix is deeper, involving joint technology development for the VXI-Citrix platform, a closer go-to-market partnership between the two companies with channel partners and the alignment of Cisco Wide-Area Application Services (WAAS) with XenDesktop. Cisco will also gain access to the Citrix high-definition user experience (HDX) protocols to integrate with its technology.

"Customers have given us direct feedback that what they would like is for Citrix to do everything that it does really well and preserve the opportunity for Cisco to do everything it does extremely well," said Dave Frampton, VP and general manager of the application delivery business unit at Cisco, who is in charge of the WAAS application optimization portfolio. "By opening up and getting access to the HDX protocols, we can improve the user experience across all this range of end user devices."

The TelePresence news conference included comments from Cisco customers, including Brian Kachel, director of global network services and core IT at Quintiles, a medical, biotechnology and pharmaceutical research organization based in Virginia.

"This [VXI] capability will allow me to leverage the investment in WAAS and further optimize this network traffic and provide a consistent experience for the end users without having to do costly bandwidth upgrades," Kachel said.

Cisco said the VXC 6215 thin client will be available for order in the current quarter and will begin shipping in the first quarter of 2012. The VXC 4000 software appliance will be available as a voice-only system in the fourth quarter, with video capability to be added sometime in 2012.

While the software appliance will initially run only on desktops running Windows XP or Windows 7, Cisco plans to add support for mobile operating systems such as Apple iOS for iPads and iPhones, and Google Android, though a Cisco spokesperson was vague about timing and other specifics.

TapSense touchscreen technology distinguishes taps by parts of finger


Smartphone and tablet computer owners have become adept at using finger taps, flicks and drags to control their touchscreens. But Carnegie Mellon University researchers have found that this interaction can be enhanced by taking greater advantage of the finger's anatomy and dexterity.
By attaching a microphone to a touchscreen, the CMU scientists showed they can tell the difference between the tap of a fingertip, the pad of the finger, a fingernail and a knuckle. This technology, called TapSense, enables richer touchscreen interactions. While typing on a virtual keyboard, for instance, users might capitalize letters simply by tapping with a fingernail instead of a fingertip, or might switch to numerals by using the pad of a finger, rather than toggling to a different set of keys.
Another possible use would be a painting app that uses a variety of tapping modes and finger motions to control a palette of colors, or to switch between drawing and erasing without having to press buttons.

"TapSense basically doubles the input bandwidth for a touchscreen," said Chris Harrison, a Ph.D. student in Carnegie Mellon's Human-Computer Interaction Institute (HCII). "This is particularly important for smaller touchscreens, where screen real estate is limited. If we can remove mode buttons from the screen, we can make room for more content or can make the remaining buttons larger." TapSense touchscreen technology distinguishes taps by parts of finger (w/ video)
TapSense was developed by Harrison, fellow Ph.D. student Julia Schwarz, and Scott Hudson, a professor in the HCII. Harrison will discuss the technology today (Oct. 19) at the Association for Computing Machinery's Symposium on User Interface Software and Technology in Santa Barbara, Calif.
"TapSense can tell the difference between different parts of the finger by classifying the sounds they make when they strike the touchscreen," Schwarz said. An inexpensive microphone could be readily attached to a touchscreen for this purpose. The microphones already in devices for phone conversations would not work well for the application, however, because they are designed to capture voices, not the sort of noise that TapSense needs to operate.
The technology also can use sound to discriminate between passive tools (i.e., no batteries) made from such materials as wood, acrylic and polystyrene foam. This would enable people using styluses made from different materials to collaboratively sketch or take notes on the same surface, with each person's contributions appearing in a different color or otherwise noted.
The researchers found that their proof-of-concept system was able to distinguish between the four types of finger inputs with 95 percent accuracy, and could distinguish between a pen and a finger with 99 percent accuracy.
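As a rough sketch of the approach Schwarz describes — classifying taps by the sound they make when they strike the screen — the snippet below extracts coarse spectral features from a short microphone recording of a tap and matches them against per-class templates. The feature set and the nearest-centroid matching are illustrative assumptions, not the CMU team's actual classifier.

```python
import numpy as np

def tap_features(samples: np.ndarray) -> np.ndarray:
    """Summarize a short tap recording as normalized log-energy in 16 coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bands = np.array_split(spectrum, 16)
    energy = np.array([band.sum() for band in bands])
    return np.log1p(energy / (energy.sum() + 1e-9))

def train_centroids(labeled_taps: dict) -> dict:
    """Average the features of example taps for each input type (tip, pad, nail, knuckle)."""
    return {label: np.mean([tap_features(s) for s in samples], axis=0)
            for label, samples in labeled_taps.items()}

def classify(samples: np.ndarray, centroids: dict) -> str:
    """Return the input type whose template spectrum is closest to this tap."""
    feats = tap_features(samples)
    return min(centroids, key=lambda label: np.linalg.norm(feats - centroids[label]))
```

A production system would presumably use richer acoustic features and a trained machine-learning model, but the pipeline is the same: capture the tap through the attached microphone, extract features, and match against labeled examples.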

Friday, 14 October 2011

Intel focuses on many-core computing

At his keynote at the Intel Developer Forum (IDF) this morning, Intel CTO Justin Rattner discussed the move to many-core computing. The shift is important not only for high-performance computing (HPC), but also for many standard tasks as well. The long-awaited Knights Corner chip will launch with more than 50 cores on 22nm, he said. Highlights of the keynote featured new parallel extensions for JavaScript and demonstrations of both many-core applications and future designs that use much less power for both processing and memory.
Five years ago, Rattner introduced the Core architecture and the company's first multi-core processors. Now, he said, we are "just beginning the age of many-core processors," an era that will often involve heterogeneous cores.

General-purpose many-core products are coming soon as Intel pushes its Many Integrated Core (MIC) architecture, starting with Knights Corner. (A development version known as Knights Ferry is already available.) The MIC architecture shares its memory model and instruction set with existing Xeon processors and adds enhanced floating-point capability. Intel's Tera-Scale Computing Research Program is currently testing a 48-core Single-chip Cloud Computer (SCC).
Many-core will not just be for HPC applications, Rattner said, showing a wide range of applications with performance improvements of 30 times or more as the number of cores increases to 64.
Andrzej Nowak of CERN openlab talked about using many-core computing at the Large Hadron Collider, which generates 15-25 petabytes of data per year. Physics analysis at CERN relies on distributed computing across roughly 250,000 Intel cores.
CERN has worked with Northeastern University to parallelize its software. The lab has seen a fortyfold performance improvement on a 40-core Xeon implementation and also uses the compatible MIC architecture. Nowak ran an application on both a single core and a 32-core MIC part, noting that its heavily vectorized applications scaled almost perfectly.
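As an illustration of the kind of near-linear scaling Nowak described for independent, heavily vectorized event processing, the sketch below times a synthetic per-event computation over increasing numbers of worker processes. The workload and event counts are invented for the example; this is not CERN's software.

```python
import time
import numpy as np
from multiprocessing import Pool

def analyze_event(seed: int) -> float:
    """Stand-in for an independent, vectorized per-event physics computation."""
    rng = np.random.default_rng(seed)
    hits = rng.normal(size=200_000)
    return float(np.sqrt((hits ** 2).sum()))

if __name__ == "__main__":
    events = list(range(2_000))
    baseline = None
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(analyze_event, events, chunksize=50)
        elapsed = time.perf_counter() - start
        baseline = baseline or elapsed
        # With no shared state between events, speedup tracks the worker count closely.
        print(f"{workers:2d} workers: {elapsed:6.2f}s  speedup ~{baseline / elapsed:.1f}x")
```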
Programming for many-core chips is no longer forbiddingly difficult. "You don't need to be a ninja programmer to do it," Rattner said.
He demoed multi- and many-core computing for "mega data centers," web apps, wireless communications, and PC security.
The first demonstration dealt with content in the cloud: a 48-core rack running an in-memory database (using MemCache) handled 800,000 queries per second.
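The workload in that demo — huge numbers of small key-value lookups served from RAM — looks, in miniature, something like the sketch below. It assumes the in-memory store is memcached (or a memcached-compatible cache) listening on the default local port and uses the pymemcache client; the keys and values are invented for illustration.

```python
from pymemcache.client.base import Client

# Assumes a memcached daemon is running locally on the default port.
cache = Client(("127.0.0.1", 11211))

# Each "query" in a benchmark like this is just a tiny get; a warm cache answers from RAM.
cache.set("user:42:profile", b'{"name": "Ada", "plan": "free"}', expire=300)

for _ in range(10_000):                     # stand-in for the benchmark's query loop
    profile = cache.get("user:42:profile")  # served entirely from memory

cache.close()
```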
Mozilla CTO Brendan Eich, who created JavaScript in 1995, then joined Rattner on stage to show off "River Trail," a set of parallel extensions to JavaScript. They demoed a 3D n-body simulation that ran at three frames per second in sequential mode but at 45 frames per second with the parallel extensions. The extensions are available now at github.com/rivertrail.
Rattner then showed an LTE base station implemented on multi-core Intel hardware. Built in conjunction with China Mobile as part of the Cloud Radio Access Network (CRAN) effort, the setup keeps only the actual radio at the physical base station site; all of the base station processing happens on a "base station in the cloud."
Following that, a PC security demo addressed how much confidential information is now distributed in the cloud. All the photos on a Web site were encrypted individually and then, when a user appeared before a web camera, decrypted differently for each viewer. (Some people could see all of the photos, some could see only photos of themselves, and some could see none at all.) Each picture is encrypted separately, a workload that exercises multiple parts of the chip, including the processor graphics and the AES instructions.
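A minimal sketch of the per-photo encryption idea — each image sealed under its own key, so different viewers can be handed different subsets of keys — using the AES-GCM primitive from Python's cryptography package. The key-distribution and camera/face-recognition steps are omitted, and none of this is the demo's actual code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_photo(photo_bytes: bytes):
    """Encrypt one photo under its own fresh AES-256-GCM key."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, photo_bytes, None)
    return key, nonce, ciphertext

def decrypt_photo(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Only viewers who were handed this photo's key can recover it."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Each picture gets its own key, so a gallery can hand different subsets of keys
# to different users and every viewer decrypts only the photos meant for them.
photos = [b"photo-1-bytes", b"photo-2-bytes"]
sealed = [encrypt_photo(p) for p in photos]
assert decrypt_photo(*sealed[0]) == photos[0]
```

This kind of bulk per-item encrypt/decrypt loop is exactly the work that hardware AES instructions accelerate, which is why the demo spread it across several parts of the chip.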
Looking ahead, Rattner talked about "extreme-scale computing." Intel's ten-year goal, he said, is a 300-fold improvement in energy efficiency, bringing power down to 20 picojoules per floating-point operation (FLOP) at the system level.
Intel's Shekhar Borkar, who works on the DARPA Ubiquitous High Performance Computing project, said a 100-gigaFLOPS computer today uses about 200 watts. By 2019, it should need only about 2 watts, thanks to reductions in the power required not only by the cores but by the whole system, including memory and storage. A key technique he cited is running processors at near-threshold voltage.
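Taking those figures at face value, the implied energy per operation works out as follows (a quick back-of-envelope check, not a calculation from the talk itself):

```python
# Energy per floating-point operation implied by the figures above.
FLOPS = 100e9            # 100 gigaFLOPS sustained
PICO = 1e-12

def picojoules_per_flop(system_watts: float) -> float:
    return system_watts / FLOPS / PICO

print(f"today:  {picojoules_per_flop(200.0):.0f} pJ/FLOP")  # ~2000 pJ/FLOP at ~200 W
print(f"target: {picojoules_per_flop(2.0):.0f} pJ/FLOP")    # ~20 pJ/FLOP at ~2 W, the stated system-level goal
```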
Next up was a concept chip called Claremont, which can run at near-threshold voltage and ramp from full performance down to low power; it ran on less than ten milliwatts. The chip was also powered by a small solar cell. Earlier in the week, Intel CEO Paul Otellini showed it running Windows; today's demo ran both Windows and Linux. The chip can scale to over ten times the frequency when running at nominal voltage, so it can be both very fast and very low power. Even when it runs at full power, Rattner said, it uses less power than a current Atom chip on standby.
In conjunction with Micron, Rattner also showed a "hybrid memory cube" that demonstrated both the lowest energy yet for DRAM, at 8 picojoules per bit, and the fastest throughput, at about 128GB/sec.
“Technology is no longer the limiting factor,” Rattner concluded. “If you can imagine it, we can create it.”

The Social-Network Chip

Looking at friends' pictures on Facebook or searching résumés on LinkedIn are relatively simple computing tasks in which information is called up, retrieved, and then shipped to a user's screen from a distant data center. Yet such tasks are handled mostly by powerful microprocessors designed for more complex jobs like number crunching and running operating systems.
That means a waste of electrical power, says Ihab Bishara, director of cloud computing products at Tilera, a chip startup in San Jose, California. Microprocessors serving the cloud are too powerful, he says; in the future, he believes, many tasks carried out in data centers will be handled by cheaper, low-power chips like those his company makes.
Currently, the chips inside data-center servers are nearly all manufactured by Intel, which commands roughly 90 percent of the server market with its family of Xeon microprocessors. Xeon chips have up to 10 processing centers, known as cores, that work in parallel to do hefty computational lifting. In contrast, Tilera's chips contain up to 100 smaller, lower-power cores. When networked together, the cores are capable of handling common cloud applications like retrieving user data while consuming about half as much electrical power, Bishara claims.
Electrical power use is an increasing economic concern for companies such as Facebook, Salesforce.com, and Google. Data centers now consume about 1.5 percent of the world's electrical power. Electric bills currently account for one-third of the cost of running a data center, according to recent estimates from Amazon, and that percentage is expected to rise steadily as the price of computer equipment falls.
Some cloud operators are already starting to put computationally intensive jobs on servers that can handle them while shifting simpler tasks to low-power servers, says Reuben Miller, a senior research analyst with IDC. "Large companies [need] processors that are more power efficient," he says. "It's creating opportunities."
Low-power contenders include Tilera as well as SeaMicro, which makes servers using Intel's Atom processors (and sells them to buyers like France Telecom and Mozilla), and Calxeda, a company that builds low-power servers using mobile-phone chips from ARM Holdings.
Intel is likely to remain dominant, not least because of the large amount of software that's already designed to run on the company's chips. Intel executives also say that performance still matters more than power consumption for many cloud applications, such as data mining and financial services. "It's about the most useful work done per watt per dollar," says Raejeanne Skillern, director of cloud computing marketing for Intel.
However, IDC's Miller says that as simple cloud computing tasks proliferate, the market for other chip designs will expand. In the next few years, he says, "I think Intel has the potential to see its market share come down."
Bishara believes that changes in the market for servers could speed the adoption of new chip designs. Ten years ago, he says, no company bought more than 10,000 servers annually, but today companies like Amazon, Google, Apple, and Baidu collectively buy hundreds of thousands every year. "You're getting a little bit of a Walmart effect in the supply chain," he says. Today big buyers can demand new types of less expensive chips custom-designed for the cloud. "Before, the supply chain was controlled by Intel," Bishara says. "Now companies can make a choice."

Facebook Shares Its Cloud Designs

If you invented something cheaper, more efficient, and more powerful than what came before, you might want to keep the recipe a closely guarded secret. Yet Facebook took the opposite approach after opening a 147,000-square-foot computing center in rural Oregon this April. It published blueprints for everything from the power supplies of its computers to the super-efficient cooling system of the building. Other companies are now cherry-picking ideas from those designs to cut the costs of building similar facilities for cloud computing.
The Open Compute Project, as the effort to open-source the technology in Facebook's vast data center is known, may sound altruistic. But it is an attempt to manipulate the market for large-scale computing infrastructure in Facebook's favor. The company hopes to encourage hardware suppliers to adopt its designs widely, which could in turn drive down the cost of the server computers that deal with the growing mountain of photos and messages posted by its 750 million users. Just six months after the project's debut, there are signs that the strategy is working and that it will lower the costs of building—and hence using—cloud computing infrastructure for other businesses, too.
Facebook's peers, such as Google and Amazon, maintain a tight silence about how they built the cloud infrastructure that underpins their businesses. But that stifles the flow of ideas needed to make cloud technology better, says Frank Frankovsky, Facebook's director of technical operations and one of the founding members of the Open Compute Project. He's working to encourage other companies to contribute improvements to Facebook's designs.
Among the partners: chip makers Intel and AMD, which helped Facebook's engineers tweak the design of the custom motherboards in its servers to get the best computing performance for the least electrical power use. Chinese Web giants Tencent and Baidu are also involved; after touring Facebook's Oregon facility, Tencent's engineers shared ideas about how to distribute power inside a data center more efficiently. Even Apple, which recently launched its iCloud service, is testing servers based on Facebook's designs. Eventually the Open Compute Project could exist independently of the company that started it, as a shared resource for the industry.
Facebook's project may be gaining traction because companies that manufacture servers, such as Hewlett-Packard and Dell, face a threat as business customers stop buying their own servers and instead turn to enormous third-party cloud operations like those offered by Amazon. "IT purchasing power is being consolidated into a smaller number of very large data centers," Frankovsky says. "The product plans and road maps of suppliers haven't been aligned with that." Being able to study the designs of one of the biggest cloud operators around can help suppliers reshape their product lines for the cloud era.
However, not everyone wants servers to run just like Facebook's, which are designed specifically for the demands of a giant online social network. That's why Nebula, which offers a cloud computing platform derived from one originally developed at NASA, is tweaking Facebook's designs and contributing them back to the Open Compute project. Nebula CEO Chris Kemp says this work will help companies that need greater memory and computing resources, such as biotech companies running simulations of drug mechanisms.
Larry Augustin, CEO of SugarCRM, which sells open-source cloud software to help businesses manage customer relations, sees challenges for Facebook's project. "There have always been efforts on open hardware, but it is much harder to collaborate and share ideas than with open software," he says. Nevertheless, Augustin expects the era of super-secret data center technology to eventually fade, simply because the secrecy is a distraction for businesses. "Many Internet companies today think that the way they run a data center is what differentiates them, but it is not," he says. "Facebook has realized that opening up will drive down data centers' costs so they can focus on their product, which is what really sets them apart."