Robots With Emotions on Display at the ICT'08 Event in Lyon
University of Hertfordshire (11/21/08) Roberts, Emma
A European project is developing robots that are capable of growing emotionally, responding to humans, and expressing their own emotional states during interaction with people. The researchers behind the FEELIX GROWING project will display their mid-term results at ICT 2008, which takes place in Lyon from November 25-27, 2008. "The aim is to develop robots that grow up and adapt to humans in everyday environments," says University of Hertfordshire professor Lola Canamero, coordinator of FEELIX GROWING. "If robots are to be truly integrated in humans' everyday lives as companions or care-givers, they cannot be just taken off the shelf and put into a real-life setting, they need to live and grow interacting with humans, to adapt to their environment." The FEELIX GROWING project plans to offer live demonstrations of a baby pet robot learning to control its stress as it explores a new environment, and robotic heads responding with facial expressions to human faces and voices. Other prototypes to be shown include humanoid robots learning to execute simple tasks by observing and imitating humans, and an interactive floor responding to human touch and movement with different light and sound patterns.
http://www.innovations-report.de/html/berichte/messenachrichten/robots_emotions_display_ict_039_08_event_lyon_122819.html
First SIGGRAPH Asia Sees Significant Participation From the Region's Talents
TAXI Design Network (11/19/08)
The first ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia opens in Singapore on December 10, 2008, reflecting the area's growing importance in computer graphics and digital media. From 1998 to 2005, the number of SIGGRAPH technical paper submissions from Asia increased by 300 percent. This year, 30 percent of the accepted material came from Asia, and 14 universities from across Asia will be represented. SIGGRAPH Asia will feature special sessions focusing on some of the most important developments in the industry, including a panel discussion on the issues and challenges of establishing a new production studio in Asia. The Computer Animation Festival (CAF), a popular attraction at SIGGRAPH, will present a selection of screenings drawn from feature films, games, and visual effects. Asia, and specifically China, Japan, and Korea, contributed almost half of the CAF entries this year. Another highlight of the conference will be a workshop in the courses program hosted by Animation Options LLC CEO Kevin Geiger, director of the nonprofit Animation Co-op, who will share his organizational insight on better planning and management of the production pipeline and workflow.
http://www.designtaxi.com/news.jsp?id=22180&monthview=0&month=11&year=2008
Real-Time Beethoven
Norwegian University of Science and Technology (11/21/08) Oksholen, Tore
A student at the Norwegian University of Science and Technology has developed a computer instrument that takes the skills of jazz musicians to the next level. Oyvind Brandtsegg has developed a computer program and a musical instrument for improvisation and variation for his Ph.D. research. The computer instrument is capable of taking recorded music and splitting the sound into sound particles that last between one and 10 milliseconds, infinitely reshuffling the fragments, and making it possible to vary the music without changing its fundamental theme. "It's easy to change a bit of music into something that can't be recognized," Brandtsegg says. "It's the opposite that is the challenge: To create variations in which the musical theme remains clear." The instrument allows composers to add new tonal variations and timbres to the musical palette and work in real time. Brandtsegg worked with the Department of Computer and Information Science to develop the software architecture, and the acoustics group at the Department of Electronics and Telecommunications to create the particle synthesizer.
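The particle technique described resembles granular synthesis, which can be sketched in a few lines. The toy Python sketch below (the function name and parameters are illustrative assumptions, not Brandtsegg's actual software) chops a recorded signal into grains a few milliseconds long and reshuffles them; at these grain lengths the overall timbre survives the reshuffling, which is what makes variation without losing the theme plausible.

```python
import numpy as np

def granulate(signal, sample_rate=44100, grain_ms=5, seed=0):
    """Split a signal into short 'sound particles' (grains) and reshuffle them.

    Toy illustration only: grain_ms plays the role of the 1-10 ms
    particle length mentioned in the article.
    """
    grain_len = max(1, int(sample_rate * grain_ms / 1000))
    # Trim the tail so the signal divides evenly into whole grains
    usable = len(signal) - (len(signal) % grain_len)
    grains = signal[:usable].reshape(-1, grain_len)
    # Reshuffle the grains with a seeded generator for repeatability
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(grains))
    return grains[order].ravel()

# A 1-second 440 Hz sine tone stands in for recorded music
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
shuffled = granulate(tone, grain_ms=5)
```

Because the operation only permutes grains, the output contains exactly the same samples as the (trimmed) input, just in a new order; varying `grain_ms` trades recognizability against novelty.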
http://www.alphagalileo.org/index.cfm?_rss=1&fuseaction=readrelease&releaseid=534087
Carnegie Mellon Theory of Visual Computation Reveals How Brain Makes Sense of Natural Scenes
Carnegie Mellon News (11/19/08) Spice, Byron; Watzman, Anne
A new computational model from researchers at Carnegie Mellon University helps explain how the brain processes images in the foreground and the background to interpret natural scenes. Michael S. Lewicki in the Computer Science Department and the Center for the Neural Basis of Cognition worked with graduate student Yan Karklin to build a model that uses an algorithm to analyze the patterns that compose natural scenes and determine which patterns are most likely associated with each other. "Our model takes a statistical approach to making these generalizations about each patch in the image," says Lewicki, a computational neuroscientist. Although the model was built on the output of simple line-detecting visual neurons, its behavior matched well with that of neurons involved in more complex visual processing. "We were astonished that the model reproduced so many of the properties of these cells just as a result of solving this computational problem," says Lewicki. Computer vision systems should improve as researchers learn more about how the brain processes contours and surfaces. Better computer vision algorithms could make it easier for computers to understand the three-dimensional nature of objects, and recognize what they see around them as a complete picture. Karklin earned a Ph.D. in computational neuroscience, machine learning, and computer science last year, and is now a post-doctoral fellow at New York University.
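The full hierarchical model is beyond the scope of a news item, but the idea of making statistical generalizations about each image patch can be sketched. In this hedged Python toy (the feature choices and names are illustrative assumptions, not the Karklin-Lewicki model), every patch is reduced to a few local statistics, and correlating those statistics across patches gives a crude picture of which patterns tend to go together in a scene:

```python
import numpy as np

def patch_stats(image, size=8):
    """Summarize each non-overlapping patch by simple local statistics.

    Toy stand-in for the model's per-patch generalization step:
    each patch becomes [contrast, vertical-edge energy, horizontal-edge energy].
    """
    h, w = image.shape
    feats = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            p = image[i:i + size, j:j + size]
            gy, gx = np.gradient(p)          # finite-difference gradients
            feats.append([p.var(),           # local contrast
                          np.abs(gx).mean(), # vertical-edge energy
                          np.abs(gy).mean()])# horizontal-edge energy
    return np.array(feats)

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64))
scene[:, 32:] *= 4.0        # right half: a higher-contrast "textured" region
F = patch_stats(scene)      # 64 patches x 3 statistics
corr = np.corrcoef(F.T)     # which patch statistics co-vary across the scene
```

Patches from the high-contrast region share similar statistics, so even this crude summary separates the two regions of the scene, loosely mimicking the foreground/background generalization the article describes.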
http://www.cmu.edu/news/archive/2008/November/nov19_visualcomputation.shtml
IBM Tries to Bring Brain Power to Computers
IDG News Service (11/19/08) Shah, Agam
IBM Research has been working on a project to give computers the same processing capabilities as the human brain. The goal is to integrate brain-related senses such as perception and interaction into hardware and software to enable computers to process and understand data faster while consuming less power, says IBM researcher Dharmendra Modha. Modha says neuroscience, nanotechnology, and supercomputing are all being combined as part of the effort to create a new computing platform. "If we could design computers that could be in real-world environments and sense and respond in an intelligent way, it would be a tremendous step forward," Modha says. A typical computing process starts out by defining the objectives and then creates algorithms to achieve those objectives. Modha says the brain works the opposite way, with a pre-established algorithm that is applied to problems as they arise, creating a platform that can address a wider variety of problems. For example, the brain-based approach could help manage the world's water supplies using real-time analysis of data gathered by a network of sensors that monitor variables to discover new patterns. Such an approach could also be applied to world markets. The researchers are not working on concrete applications yet, but rather an understanding of what the brain does and how it can be implemented in computing, Modha says.
http://www.pcworld.com/article/154222/.html