Artificial Intelligence Development

News

Controlling brain waves to improve vision

Have you ever accidentally missed a red light or a stop sign? Or have you heard someone mention a visible event that you passed by but totally missed seeing?

"When we have different things competing for our attention, we can only be aware of so much of what we see," said Kyle Mathewson, Beckman Institute Postdoctoral Fellow. "For example, when you're driving, you might really be concentrating on obeying traffic signals."

But say there's an unexpected event: an emergency vehicle, a pedestrian, or an animal running into the road -- will you actually see the unexpected, or will you be so focused on your initial task that you don't notice?

"In the car, we may see something so brief or so faint, while we're paying attention to something else, that the event won't come into our awareness," says Mathewson. "If you present this scenario hundreds of times to someone, sometimes they will see the unexpected event, and sometimes they won't because their brain is in a different preparation state."

By using a novel technique to test brain waves, Mathewson and colleagues are discovering how the brain processes external stimuli that do and don't reach our awareness. A paper about their results, "Dynamics of Alpha Control: Preparatory Suppression of Posterior Alpha Oscillations by Frontal Modulators Revealed with Combined EEG and Event-related Optical Signal," published this month in the Journal of Cognitive Neuroscience, reveals how alpha waves, typically thought of as your brain's electrical activity while it's at rest, can actually influence what we see or don't see.

The researchers used both electroencephalography (EEG) and the event-related optical signal (EROS), a technique developed in the Cognitive Neuroimaging Laboratory of Gabriele Gratton and Monica Fabiani, professors of psychology, members of the Beckman Institute's Cognitive Neuroscience Group, and authors of the study.

While EEG records the electrical activity along the scalp, EROS uses infrared light passed through optical fibers to measure changes in optical properties in the active areas of the cerebral cortex. Because of the hard skull between the EEG sensors and the brain, it can be difficult to find exactly WHERE signals are produced. EROS, which examines how light is scattered, can noninvasively pinpoint activity within the brain.

"EROS is based on near-infrared light," explained Fabiani and Gratton via email. "It exploits the fact that when neurons are active, they swell a little, becoming slightly more transparent to light: this allows us to determine when a particular part of the cortex is processing information, as well as where the activity occurs."

This allowed the researchers not only to measure activity in the brain, but also to map where the alpha oscillations were originating. Their discovery: the alpha waves are produced in the cuneus, located in the part of the brain that processes visual information.

The alpha waves can inhibit what is processed visually, making it hard for you to see something unexpected.

By focusing your attention and concentrating more fully on what you are experiencing, however, the executive function of the brain can come into play and provide "top-down" control -- putting a brake on the alpha waves, thus allowing you to see things that you might have missed in a more relaxed state.

"We found that the same brain regions known to control our attention are involved in suppressing the alpha waves and improving our ability to detect hard-to-see targets," said Diane Beck, a member of the Beckman's Cognitive Neuroscience Group, and one of the study's authors.

"Knowing where the waves originate means we can target that area specifically with electrical stimulation" said Mathewson. "Or we can also give people moment-to-moment feedback, which could be used to alert drivers that they are not paying attention and should increase their focus on the road ahead, or in other situations alert students in a classroom that they need to focus more, or athletes, or pilots and equipment operators."

The study examined 16 subjects and mapped the electrical and optical data onto individual MRI brain images.

 

Ancient Maya and virtual worlds: Different perspectives on material meanings

If Facebook had been around 1,400 years ago, the ancient Maya might have been big fans of the virtual self.

The Maya believed that part of your identity could inhabit material objects, like a courtier's mirror or a sculptor's carving tool. The Maya might even name these objects, talk to them or take them to special events. They considered these items to be alive.

The practice of sharing your identity with material possessions might seem unusual in a modern context.

But is it that different from today's selfie-snapping, candy-crushing online culture, where social media profiles can be as important to a person's identity as his or her real-world interactions? Even money is virtual now, as digital currency such as Bitcoin gains popularity.

Research by University of Cincinnati assistant professor Sarah Jackson is beginning to uncover some interesting parallels between ancient Maya and modern-day views on materiality.

"This relates to a lot of things that people are feeling out right now about virtual realities and dealing with computers and social lives online," says Jackson, an anthropological archaeologist. "These things start to occupy this uncomfortable space where we question, 'Is it real, or is it not real?' I look at the Maya context and consider, 'How different is that from some of the concerns we have now?' There are some parallels in terms of preoccupation with roles that objects play and how attached we are to things."

Jackson will present her research "Classic Maya Material Meanings (and Modern Archaeological Consequences)" on April 25 at the Society for American Archaeology's (SAA) annual meeting, which runs through April 27 in Austin, Texas. More than 3,000 scientists from around the world attend the event to learn about research covering a broad range of topics and time periods.

THE MAYA PERSPECTIVE

For her research, Jackson uses hieroglyphic textual evidence to help her understand how the Maya might have viewed the material world. She's building a database of Maya material terminology and tracking certain property qualifiers -- visual markings on glyphs indicating what material an object is made of, like wood or stone.

Key to the process is trying to look at these property qualifiers from the Maya perspective. Jackson has found that the Maya applied property qualifiers in a broad manner, including some unexpected areas of divergence from literal interpretation.

For example, to the Maya, a temple might have "stony" qualities, but so might a calendar or other things related to time. Other known Maya behaviors suggest belief in the concepts of object agency and partible personhood, meaning that objects have the power to act in their own right and that a person's identity can be split into parts that can live outside the body.

So when Jackson analyzes a glyph that appears to show a Maya ruler having a conversation with his mirror or another that depicts a sculptor carving a "living" statue, it's important for her to overcome her own material assumptions.

"There are some really interesting possibilities if we can try to incorporate at least some kind of reconstructed understanding of how the Maya would have seen these materials, not just how we see them," Jackson says.

TRANSFORMING ARCHAEOLOGY

Jackson envisions potentially major changes in some fundamental aspects of archaeology, including the excavation process itself. She says even standard paperwork can encode certain assumptions and direct an archaeologist's interpretation in certain ways.

"It's really important to me that this isn't just abstract," Jackson says. "Let's see if we can think about how the Maya think, but let's also think about how this can transform what we're doing archaeologically."

Jackson plans to return to Belize next spring for additional field work, and she intends to test some experimental techniques. She's working with Christopher Motz, a doctoral student in UC's Department of Classics, to develop a database and interface for mobile tablet use in field work. The new technology is intended to allow researchers to catalog field data in a way that conveniently integrates traditional and new recording methods, similar to the innovative methods in use at UC's archaeological research project at Pompeii.

"Some of these things I'm thinking about could really shift how we characterize objects, how we record them, what is our vision of what they look like. And then how we construct ideas of assemblages, like how objects are relating to each other in a particular context and how we document them," Jackson says.

Story Source:

The above story is based on materials provided by University of Cincinnati. The original article was written by Tom Robinette. Note: Materials may be edited for content and length.

 

iPad users explore data with their fingers: Kinetica converts tabular data into touch-friendly format

Spreadsheets may have been the original killer app for personal computers, but data tables don't play to the strengths of multi-touch devices such as tablets. So researchers at Carnegie Mellon University have developed a visualization approach that allows people to explore complex data with their fingers.

Called Kinetica, this proof-of-concept system for the Apple iPad converts tabular data, such as Excel spreadsheets, so that data points appear as colored spheres on the touchscreen. People can directly manipulate this data, using natural gestures to sort, filter, stack, flick and pull data points as needed to help them answer questions or explore hidden relationships.

"The interactions are intuitive, so people quickly figure out how to explore the data with minimal training," said Jeffrey Rzeszotarski, a Ph.D. student in the Human-Computer Interaction Institute (HCII) who developed Kinetica with Aniket Kittur, assistant professor in the HCII. They will present their findings April 29 at the CHI Conference on Human Factors in Computing Systems in Toronto.

"People often try to make sense of data where you have to balance many dimensions against each other, such as deciding what model of car to buy," Kittur said. "It's not enough to see single points -- you want to understand the distribution of the data so you can balance price vs. gas mileage vs. horsepower vs. head room."

Kinetica solves this problem by taking advantage of the multi-touch capabilities of tablets. Someone sorting through data on car models, for instance, could pull all of the different models into a chart that graphs their gas mileage and horsepower. Afterward, they could put two fingers on the touchscreen to create a virtual sieve and pull it through a field of spherical data points, screening out models that don't meet a certain criterion, such as those costing more than $20,000. Or, they could use one finger to draw a transparent lens that highlights inexpensive models.

"It's not about giving you one way to do things, but giving you a sandbox in which to play," Rzeszotarski said. A video showing some of the potential interactions possible with Kinetica is available on the project website, http://getkinetica.com/. Data points don't just pop into place after they have been manipulated with Kinetica, as they do in a traditional spreadsheet. Seeing where data points come from as they are sorted can give the user deeper insights into relationships, according to Kittur and Rzeszotarski. For instance, when a user drags a virtual sieve across points to filter them, they can watch as the points are screened out. Outliers -- data points that don't fit with most of the others -- also can be readily identified.

In user studies, people using an Excel spreadsheet to analyze data typically made about the same number of observations within a 15-minute time span as did Kinetica users, Rzeszotarski noted. But the Kinetica users had a better understanding across multiple dimensions of data. For instance, Excel users analyzing data on Titanic shipwreck passengers might extract facts such as the passengers' average age, while Kinetica users would note relationships, such as the association between age and survival.

Approaches such as Kinetica could expand the functionality of tablets, they contend.

"Web browsing and book reading are among the most popular uses for tablet computers, at least in part, because the tools available for many apps simply aren't good enough," Rzeszotarski said. "A mouse might be superior when you're working with a desktop computer, but tablets can accommodate much more natural gestures and apps need to play to that strength."

Though Kinetica was developed initially for the iPad, the researchers also are exploring versions adapted to other devices.

Story Source:

The above story is based on materials provided by Carnegie Mellon University. Note: Materials may be edited for content and length.

   

Computer-assisted accelerator design

If you walk by room 201 in Building 911 at the U.S. Department of Energy's Brookhaven National Laboratory, you might think Stephen Brooks is playing a cool new video game. But Brooks is doing important, innovative work. He's using his own custom designed software to create a 3-D virtual model of the electron accelerator Brookhaven physicists hope to build inside the tunnel currently housing the Relativistic Heavy Ion Collider (RHIC). His mission is to put the virtual pieces together and help test out designs for eRHIC -- a proposed machine that would provide unforeseen insight into the inner structure of protons and heavy ions.

"Once the eRHIC layout is in my code, I put beams through it to verify it works," Brooks said. "But I can also add errors in the alignment of the magnets, beams, and so on to verify it will work in a practical setting."

By "work," he means producing extremely focused, high-energy electron beams that pierce into the very heart of RHIC's counter-circulating protons or heavy ions to create precision 3-D images of gluons -- the particles that bind quarks within protons and neutrons, giving visible matter 99 percent of its mass. This proposed electron-ion collider would open a new window into nuclear matter, ensuring U.S. leadership in the field for the next several decades. And building such a machine by adding an electron accelerator to the existing RHIC complex would be a cost-effective strategy for achieving this goal.
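Brooks' software itself isn't shown in the article, but the general idea he describes -- putting a beam through a sequence of magnets and adding alignment errors to see whether it stays focused -- can be illustrated with a toy one-dimensional tracking loop. Every parameter below (drift lengths, magnet strengths, error sizes) is an assumed placeholder, not an eRHIC value.

```python
# Toy illustration (not Brooks' actual software): track a particle's transverse
# position through alternating focusing/defocusing magnets, optionally adding
# random alignment errors, and report the largest excursion of the orbit.
import random

def track(n_turns=100, n_magnets=12, drift=1.0, strength=0.3,
          misalignment_rms=0.0, seed=1):
    random.seed(seed)
    # Random transverse offset of each magnet (alignment error), in metres.
    offsets = [random.gauss(0.0, misalignment_rms) for _ in range(n_magnets)]
    x, xp = 0.001, 0.0      # initial position (m) and angle (rad)
    max_x = 0.0
    for _ in range(n_turns):
        for i in range(n_magnets):
            x += drift * xp                            # drift between magnets
            sign = 1 if i % 2 == 0 else -1             # alternating-gradient focusing
            xp += -sign * strength * (x - offsets[i])  # thin-lens kick about the magnet centre
            max_x = max(max_x, abs(x))
    return max_x

print("max excursion, perfect alignment:", track())
print("max excursion, 0.5 mm rms errors:", track(misalignment_rms=5e-4))
```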

But keeping the cost down and ensuring functionality of the hundreds of different accelerator components takes planning to be sure things go right.

Designing a subatomic particle racetrack

While there are many codes that can track particles through accelerators, the fully 3-D, interactive nature of Brooks' code and its ability to incorporate complex accelerators the size of RHIC make it unique.

Using a mouse to navigate from a bird's-eye view to a close-up, 3-D, edge-on view of the magnets and the beams circulating inside the machine, he explains, "We can use this code to test that the individual accelerator components in the machine are compatible with each other when they are assembled together." And to be sure those components will fit within the existing RHIC tunnel, the model incorporates a conventional architectural drawing including physical constraints like concrete walls.

Even more innovative, Brooks' program incorporates an "evolutionary algorithm optimization feature" -- essentially an artificial intelligence mode that can vary any aspect of the accelerator and search for the best design to achieve a particular objective by running repeated simulations.
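The article doesn't describe the optimizer's internals, so the following is only a generic sketch of how an evolutionary algorithm searches a design space: mutate candidate parameter sets, score each with a simulation, and keep the best performers. The simulate function here is a hypothetical stand-in for a real beam-tracking run.

```python
# Generic evolutionary-algorithm loop (a hedged sketch, not Brooks' optimizer).
import random

def simulate(params):
    # Placeholder cost function: pretend the optimum sits at (0.25, 2.0).
    k, length = params
    return (k - 0.25) ** 2 + (length - 2.0) ** 2

def evolve(pop_size=20, generations=50, mutation=0.05, seed=0):
    random.seed(seed)
    population = [(random.uniform(0, 1), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by simulated cost and keep the better half.
        population.sort(key=simulate)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = [(k + random.gauss(0, mutation), l + random.gauss(0, mutation))
                    for k, l in survivors]
        population = survivors + children
    return min(population, key=simulate)

print("best parameters found:", evolve())
```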

One goal is to track and minimize the amount of synchrotron radiation emitted by the electron beam. That's energy that spews off tangent to the charged particles' circular path, like water droplets flying off a wet towel swung around in a circle, gradually depleting the beam's energy.
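For scale, a standard accelerator-physics rule of thumb (not quoted in the article) relates the energy an electron loses per turn to its energy and the bending radius; the numbers plugged in below are illustrative assumptions, not eRHIC design values.

```python
# Rule of thumb: an electron of energy E (GeV) bending with radius rho (m) loses
# roughly U0 [keV per turn] ~ 88.5 * E**4 / rho to synchrotron radiation.
def energy_loss_per_turn_kev(energy_gev, bend_radius_m):
    return 88.5 * energy_gev ** 4 / bend_radius_m

for e in (5, 10, 15):                      # assumed electron energies in GeV
    u0 = energy_loss_per_turn_kev(e, 380)  # assumed ~380 m bending radius
    print(f"{e} GeV: about {u0 / 1000:.1f} MeV lost per turn")
```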

"The design tool also determines, for a given layout of magnets and sequence of beam energies, whether each beam will be focused in a stable way and not spread out in size and become unuseable," Brooks said.

Two rings are better (and cheaper) than six

Testing different designs and parameters, Brooks and other accelerator scientists arrived at a plan that circulates multiple beams of electrons at a range of energies within each of two electron accelerator rings. It incorporates an innovative "non-scaling, fixed field, alternating gradient" (FFAG) accelerator design originally developed by Brookhaven physicist Dejan Trbojevic, who supervises Brooks.

The "alternating gradient" -- alternating directions of the magnetic field -- keeps the design relatively compact. "Fixed field" means that beams don't have to be injected periodically and ramped up to reach higher energy. Instead, the beam can be on continuously as it is brought up to "speed." And because non-scaling FFAG accelerators can be made out of fairly standard accelerator magnets, such a design would achieve high collision rates while controlling costs.

"Trbojevic realized that you can build magnet channels with stronger focusing than normal that can tolerate a large range of beam energies, with the beams of different energy transported side-by-side of each other within the same ring," Brooks said. "At eRHIC, the beams would spiral through the machine with the external linear accelerator adding energy each turn, and the beam then following the next path farther out but still within the same beam pipe."

Brooks' optimization software tool helped the team identify the ideal design: five electron beams in a low-energy ring, spanning a factor of five in energy, and up to 11 electron beams in a high-energy ring, spanning a factor of 2.7. This design, which fits all these beams within two stacked accelerator rings instead of the six called for in an earlier design, represents a significant cost savings.

So, with its results pointing to fewer rings, relatively low-cost magnets, continuous beam, minimized energy loss, and a plan for how to absorb that lost energy, Brooks' "gaming" with the eRHIC accelerator design seems to be paying off.

 

Augmented reality: Bringing history and the future to life

Have you ever wished you had a virtual time machine that could show you how your street looked last century? Or have you wanted to see how your new furniture might look, before you've even bought it? Thanks to VENTURI, an EU research project, you can now do just that.

Très Cloîtres Numérique, due to be launched this summer, is a 'living memorial' to a neglected quarter of Grenoble, says VENTURI Project Coordinator Paul Chippendale. The project was designed to appeal "to people familiar with the neighbourhood as well as those who are interested in Grenoble's rich cultural heritage and human history."

Participants can use a tablet or smartphone to look at the city through a virtual lens. The modern-day scene that they can see through their device's camera is overlaid with historical photographs and 3D reconstructions of ancient buildings, allowing the users to look at their surroundings, going backwards through time. Local schoolchildren have collected photographs and memories from their parents and grandparents in order to preserve their memories for future generations.

Beyond Smartphones and Tablets: Wearable AR

Whilst Très Cloîtres Numérique is ambitious, it still relies on the user looking through the screen of their smart device. "But rather than having to view the world through your device," says Mr Chippendale, "it should be possible to experience an augmented environment seamlessly through smart glasses, watches and earpieces.

"The customary 'letterbox' paradigm of AR -- holding up your Smartphone and using it as a magic looking glass -- certainly makes AR accessible to the masses, but in my opinion it is not a comfortable experience. Even though I work in this field I still do not use AR Apps in my everyday life," says Mr Chippendale. "They are just too generic and do not give me the information that I need according to where I am, what I am doing and what I enjoy.

"However, I do believe that this is about to change. In VENTURI, we have been exploring cutting edge 'reality sensing' through computer-vision and sensor fusion, and have tied this together with intuitive 'world augmentation' through 3D audio, Smartwatch interaction and HMDs like GoogleGlass.

"It's the aim of the VENTURI project to create augmented reality applications that blend seamlessly with the user's interaction with the real world." Rather than needing to stop to look at their smartphone or tablet, users would receive information that would enhance their experience of the world around them through an earpiece or smart glasses.

Using AR to Help Customers

It's not only virtual history galleries that can be created using the VENTURI project's augmented reality systems. Companies like Volkswagen, Audi and IKEA are working with project partner Metaio to create exciting new tools. For example, Audi customers can take a virtual tour of their new vehicle to learn its features and Volkswagen allows users to customise a car before ordering. IKEA and Mitsubishi both allow clients to see how their products would look in their homes or offices, before buying.

By working with Metaio and Sony, the VENTURI project is creating what they believe will be the first generation of ubiquitous AR tools. Says Paul Chippendale:

"Thanks to Sony's participation in VENTURI, we have had privileged access to their future vision of wearable devices, ranging from smart life logging bands (wrist-worn devices that log a user's activity) to advanced head mounted displays. We have been using this insight together with Metaio's strong market knowledge, to create personalised AR content according to a user's social profile, the current environmental and what it is that they're currently doing."

Story Source:

The above story is based on materials provided by European Commission, CORDIS. Note: Materials may be edited for content and length.

   
