Robot


Human Brain’s Light Processing Ability Could Lead to Better Robotic Sensing

The human brain often serves as inspiration for artificial intelligence (AI), and that is the case once again: a team of Army researchers has improved robotic sensing by looking at how the human brain processes bright and contrasting light. The new development could help pave the way for collaboration between autonomous agents and humans.

According to the researchers, machine sensing must remain effective across changing environments, which in turn drives developments in autonomy.

The research was published in the Journal of Vision.

100,000-to-1 Display Capability

Andre Harrison is a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. 

“When we develop machine learning algorithms, real-world images are usually compressed to a narrower range, as a cellphone camera does, in a process called tone mapping,” Harrison said. “This can contribute to the brittleness of machine vision algorithms because they are based on artificial images that don’t quite match the patterns we see in the real world.” 

The team of researchers developed a system with 100,000-to-1 display capability, which enabled them to gain insight into the brain’s computing process in the real world. According to Harrison, this allowed the team to build biological resilience into sensors.

Current vision algorithms still have a long way to go before becoming ideal. They are based on human and animal studies conducted with computer monitors, which limits them to a luminance range of around 100-to-1. That ratio is far from ideal in the real world, where the variation can reach 100,000-to-1. This high ratio is termed high dynamic range, or HDR.
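As a rough sketch of what that tone mapping does, the snippet below applies an illustrative global operator (a Reinhard-style curve, chosen here only for demonstration; the article does not say which operator cellphone pipelines or the Army team use) and then quantises the result to 8 bits, collapsing a synthetic 100,000-to-1 scene into a range of at most a few hundred to one.

```python
import numpy as np

def tone_map_to_8bit(luminance):
    """Illustrative global tone mapping (a Reinhard-style curve) followed by
    8-bit quantisation -- roughly what a cellphone pipeline does before a
    machine-learning model ever sees the image. Not the Army team's method,
    just a sketch of the compression the article describes."""
    key = np.exp(np.mean(np.log(luminance)))       # log-average "key" of the scene
    scaled = 0.18 * luminance / key                # 0.18 = conventional middle grey
    compressed = scaled / (1.0 + scaled)           # squeezes highlights toward 1.0
    compressed /= compressed.max()                 # brightest pixel -> code 255
    return np.round(compressed * 255).astype(np.uint8)

# Synthetic scene spanning roughly 100,000-to-1 in luminance
# (deep shade up to direct sunlight), drawn log-uniformly in arbitrary units.
rng = np.random.default_rng(0)
hdr = np.exp(rng.uniform(np.log(0.01), np.log(1000.0), size=(256, 256)))
ldr = tone_map_to_8bit(hdr)

print(f"input dynamic range : ~{hdr.max() / hdr.min():,.0f} : 1")
nonzero = ldr[ldr > 0]
print(f"output dynamic range: {nonzero.max()} : {nonzero.min()} in 8-bit codes")
print(f"{(ldr == 0).mean():.0%} of pixels collapse to pure black after compression")
```

The darkest pixels collapsing to pure black is exactly the kind of detail loss the researchers argue makes machine vision brittle outside the laboratory.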

Dr. Chou Po Hung is an Army researcher. 

“Changes and significant variations in light can challenge Army systems — drones flying under a forest canopy could be confused by reflectance changes when wind blows through the leaves, or autonomous vehicles driving on rough terrain might not recognize potholes or other obstacles because the lighting conditions are slightly different from those on which their vision algorithms were trained,” Hung said.

The Human Brain’s Compressing Capability

The human brain is capable of automatically compressing the 100,000-to-1 input into a narrower range, and this is what allows humans to interpret shape. The team of researchers set out to understand this process by studying early visual processing under HDR. The team looked toward simple features such as HDR luminance. 

“The brain has more than 30 visual areas, and we still have only a rudimentary understanding of how these areas process the eye’s image into an understanding of 3D shape,” Hung continued. “Our results with HDR luminance studies, based on human behavior and scalp recordings, show just how little we truly know about how to bridge the gap from laboratory to real-world environments. But, these findings break us out of that box, showing that our previous assumptions from standard computer monitors have limited ability to generalize to the real world, and they reveal principles that can guide our modeling toward the correct mechanisms.” 

By discovering how light and contrast edges interact in the brain’s visual representation, the researchers hope to make algorithms more effective at reconstructing the 3D world under real-world luminance. Estimating 3D shape from 2D information always involves ambiguities, but this new discovery helps correct for them.

“Through millions of years of evolution, our brains have evolved effective shortcuts for reconstructing 3D from 2D information,” Hung said. “It’s a decades-old problem that continues to challenge machine vision scientists, even with the recent advances in AI.”

The team’s discovery is also important for the development of AI devices that rely on wide dynamic range sensing, such as radar and remote speech understanding.

“The issue of dynamic range is not just a sensing problem,” Hung said. “It may also be a more general problem in brain computation because individual neurons have tens of thousands of inputs. How do you build algorithms and architectures that can listen to the right inputs across different contexts? We hope that, by working on this problem at a sensory level, we can confirm that we are on the right track, so that we can have the right tools when we build more complex AIs.”

Commercial developers are preparing for the rise of the robot worker

Robots taking over from humans – and then usually turning on them – has long been a popular theme in fiction and movie-making, including Terminator, 2001: A Space Odyssey, The Transformers and The Matrix.

So it’s startling to learn that the future is now. Robots are already cleaning some of Australia’s Woolworths stores, Google is patenting a robot with personality and the first fully automated shipping port has now opened in China, with everything controlled by artificial intelligence.

It means massive changes for the workplace, and already commercial real estate developers are getting ready for the rise of the machines.

“It’s fascinating to think what the future of work is going to look like,” said Paul Edwards, general manager of workplace experiences at developer Mirvac, which has just co-authored a discussion paper with The WORKTECH Academy.

“There’ll be less task-oriented work as that will be automated, and humans will focus much more on human skills and problem-solving and critical thinking and collaboration.

“COVID-19 has accelerated this trend of digital working and machines will complement this shift. This whole new world of technology is now starting to impact the office and industrial spaces, and will impact the design of workplaces into the future.”

The white paper, Augmented Work: how new technologies are reshaping the global workplace, argues that the advent of so many robots is nothing to be feared; they will work with humans to improve efficiency, eliminate tedious repetitive tasks for us, and do them far more quickly than people ever could.

They enable companies such as Visa to process 24,000 transactions a second, far in excess of what any human could do in a month. Automation can also produce vast quantities of data – with more generated in the past two years than in the whole of the rest of history – that can be used to predict changes that need to be made in our lives.

Report editor Jeremy Myerson, director of WORKTECH and research professor at the Royal College of Art in London, said the change that’s on its way should be welcomed.

“Robots will serve to create more imaginative and higher quality human jobs because they’ll take out some of the drudgery with automation and machine intelligence,” he said. “Everyone has gone on for years about robots taking human jobs, but a lot of those jobs are much better done by robots.

“In theory, that will release time for humans to do the things that humans are good at, like using human intelligence, intuition and judgment, the things you can’t program a machine to do and humans can do a hell of a lot better.”

The implications for the workplace will be far-reaching.

With more machines in the workplace, the way offices are designed will change dramatically to accommodate robots and the tasks they have been designed to do. There’ll also be more focus on the space for human-centred skills such as creativity, collaboration, empathy, integrity and adaptive thinking.

The discussion paper predicts that future buildings will be built with designated machine spaces such as robotic service tunnels and dark offices, where machines carry out physical work that does not need human-visible light.

These can run without the need for services like airconditioning and lighting, which means logistical sorting centres, for example – which can run for 24 hours straight – are perfect for office basements while humans work in more pleasant surroundings up above.

And for those humans, there will be facilities like digital ceilings, a single integrated network or digital backbone running along ceilings.

Elsewhere, there are already examples of automation completely transforming business. The Qingdao Ghost Port in China is fully computerised, with electric trucks driving containers between cranes, while loading machines use laser scanning to assess containers, leading to a 30 per cent increase in efficiency.

Deutsche Bahn, the largest railway operator in Europe, also uses smart sensors and predictive maintenance technology to forecast when trains will need repairs, reducing costs by 25 per cent.

Mirvac is trialling technologies to collect data on how staff are interacting, to review workplace design and increase productivity, efficiency and collaboration among staff.

“Industrial business will also have a lot more focus on creating smart sheds and smart industrial spaces that are capable of managing mass orders,” said Mr Edwards.

“We have images of Amazon robots sorting and delivering incredible numbers of packages and products, and being able to carry 340 kilograms of product on their heads.

“Humans will always need to check and manage and pick the next level of that operation and ensure the system is delivering the right products to the right people, and co-existence is part of that process. We need to have the right infrastructure in place to enable humans and robots to co-exist in the same location without incident and deliver the best outcomes.”

So far, the experts have identified five different models of human-robot augmented work: assigned, where humans input instructions to machines; supervised, where human operators retain a degree of monitoring, like a plane’s autopilot; co-existent, where machines work alongside humans in parallel work streams, as with wearable technologies; assistive, where robots help us complete tasks faster and more accurately; and symbiotic, the most advanced form of human-machine collaboration, where humans can input high-level objectives for the machines to deliver.

“It’s a very, very fast-moving subject,” said Mr Myerson. “People still worry that robots will take their jobs, but other jobs will be created. The Industrial Revolution put men out of work, but it did create one job for every one lost.

“For instance, there’ll be automated buses that don’t need drivers, but then those drivers will be freed up to become tour guides on those buses and talk to passengers, help people on and off, and serve coffee. So our workplaces and the work we do will change, but enable us to be much more community-focused instead.”

Driving behavior less 'robotic' thanks to new model

Researchers from TU Delft have now developed a new model that describes driving behaviour on the basis of one underlying 'human' principle: managing the risk below a threshold level. This model can accurately predict human behaviour during a wide range of driving tasks. In time, the model could be used in intelligent cars, to make them feel less 'robotic'. The research conducted by doctoral candidate Sarvesh Kolekar and his supervisors Joost de Winter and David Abbink will be published in Nature Communications on Tuesday 29 September 2020.

Risk threshold

Driving behaviour is usually described using models that predict an optimum path. But this is not how people actually drive. 'You don't always adapt your driving behaviour to stick to one optimum path,' says researcher Sarvesh Kolekar from the Department of Cognitive Robotics. 'People don't drive continuously in the middle of their lane, for example: as long as they are within the acceptable lane limits, they are fine with it.'

Models that predict an optimum path are not only popular in research, but also in vehicle applications. 'The current generation of intelligent cars drive very neatly. They continuously search for the safest path: i.e. one path at the appropriate speed. This leads to a "robotic" style of driving,' continues Kolekar. 'To get a better understanding of human driving behaviour, we tried to develop a new model that used the human risk threshold as the underlying principle.'

Driver's Risk Field

To get to grips with this concept, Kolekar introduced the so-called Driver's Risk Field (DRF). This is an ever-changing two-dimensional field around the car that indicates how high the driver considers the risk to be at each point. Kolekar devised these risk assessments in previous research. The gravity of the consequences of the risk in question is then taken into account in the DRF. For example, having a cliff on one side of the road boundary is much more dangerous than having grass. 'The DRF was inspired by a concept from psychology, put forward a long time ago (in 1938) by Gibson and Crooks. These authors claimed that car drivers 'feel' the risk field around them, as it were, and base their traffic manoeuvres on these perceptions.' Kolekar managed to turn this theory into a computer algorithm.
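As a sketch of how such a risk field might be computed, the snippet below combines an assumed Gaussian 'tube' of event probability around the predicted path with a severity map in which a cliff counts far more than grass, then keeps only the steering headings whose total perceived risk stays below a threshold. The field shape, cost values and threshold are illustrative assumptions, not Kolekar's published implementation.

```python
import numpy as np

def drivers_risk_field(x, y, heading=0.0, sigma0=1.0, growth=0.15):
    """Toy risk field: a Gaussian 'tube' along the predicted path that widens
    with look-ahead distance. Shape and parameters are assumptions for
    illustration, not the published TU Delft formulation."""
    # Rotate ground coordinates into the car's frame (+x = direction of travel).
    xr = np.cos(heading) * x + np.sin(heading) * y
    yr = -np.sin(heading) * x + np.cos(heading) * y
    sigma = sigma0 + growth * np.clip(xr, 0.0, None)   # field widens ahead of the car
    return (xr > 0.0) * np.exp(-0.5 * (yr / sigma) ** 2)

# Severity map for a 40 m x 12 m road patch: lane = 1, grass verge = 5,
# cliff edge = 100 -- echoing the article's point that a cliff weighs far
# more heavily than grass.
xs, ys = np.meshgrid(np.linspace(0.0, 40.0, 200), np.linspace(-6.0, 6.0, 60))
severity = np.ones_like(xs)
severity[ys > 3.5] = 5.0      # grass on the left
severity[ys < -3.5] = 100.0   # cliff on the right

def perceived_risk(heading):
    """Perceived risk = event probability (the field) times consequence severity,
    summed over the surroundings."""
    return float(np.sum(drivers_risk_field(xs, ys, heading) * severity))

# Threshold logic: any heading whose perceived risk stays below the threshold is
# acceptable -- the driver is not forced onto a single optimal path.
threshold = 1.2 * perceived_risk(0.0)
candidates = np.linspace(-0.2, 0.2, 41)   # steering headings in radians
acceptable = [h for h in candidates if perceived_risk(h) <= threshold]
print(f"{len(acceptable)} of {len(candidates)} candidate headings stay under the risk threshold")
```

The point of the threshold formulation is visible in the output: a whole band of headings is acceptable, rather than a single "safest" path, which is what makes the predicted behaviour look less robotic.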

Predictions

Kolekar then tested the model in seven scenarios, including overtaking and avoiding an obstacle. 'We compared the predictions made by the model with experimental data on human driving behaviour taken from the literature. Luckily, a lot of information is already available. It turned out that our model only needs a small amount of data to 'get' the underlying human driving behaviour and could even predict reasonable human behaviour in previously unseen scenarios. Thus, driving behaviour rolls out more or less automatically; it is 'emergent'.'

Elegant

This elegant description of human driving behaviour has huge predictive and generalising value. Apart from the academic value, the model can also be used in intelligent cars. 'If intelligent cars were to take real human driving habits into account, they would have a better chance of being accepted. The car would behave less like a robot.'

Can Science Explain What Makes Robots Creepy?

Life-size dolls. Amusement park automatons. Realistic robots. Why do some of these humanlike forms elicit positive responses, while others seem disturbing and downright creepy? Are humans hard-wired to feel more uneasy and fearful the more realistic robots become? Last week, Emory University psychologists published a study in the journal Perception that provides insight into the cognitive mechanisms of this phenomenon, known as the uncanny valley.

Half a century ago, Masahiro Mori, a professor of robotics at the Tokyo Institute of Technology, published his uncanny valley hypothesis sans fanfare in the esoteric Japanese journal Energy. Mori put forth the theory that robots with humanlike features are more likeable than purely machine-like ones — up to a certain point. As a humanlike object becomes more realistic, it starts to approach the so-called uncanny valley, where it no longer elicits positive emotional responses from the viewer and starts to become disturbing and off-putting.

Mathematically, this can be expressed as a graph with viewer affinity on the y-axis, and human likeness on the x-axis. As an object such as a robot becomes more humanlike, the line that represents the relationship between the viewer’s affinity and human likeness of the object eventually dips significantly to form a valley. On the graph, likeable toy robots would be on the positive slope and prosthetic hands in the uncanny valley.
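A schematic version of that curve can be sketched in a few lines of Python; the particular function below is an illustrative assumption, since Mori drew his curve qualitatively rather than from a fitted equation.

```python
import numpy as np

# Schematic uncanny-valley curve: affinity rises with human likeness, dips
# sharply just before full realism, then recovers for real humans. The
# functional form is purely illustrative -- Mori drew his curve by hand,
# not from data.
likeness = np.linspace(0.0, 1.0, 101)   # 0 = purely machine-like, 1 = healthy human
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) / 0.07) ** 2)

valley_at = likeness[np.argmin(affinity)]
print(f"schematic valley bottoms out near human likeness ~ {valley_at:.2f}")
# Likeable toy robots sit on the rising left slope; near-realistic prosthetic
# hands and androids fall into the dip; healthy humans sit at the far right.
```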

The Emory University research team of Wang Shensheng, Yuk Fai Cheong, Daniel Dilks and Philippe Rochat tested the hypothesis that the human brain is doing more than just anthropomorphizing when looking at androids.

Anthropomorphizing is attributing human traits, behavior, emotion or form to non-human beings or objects. Anthropomorphism can be commonly found in books, animation, and movies. Examples of anthropomorphized fictional characters include Pinocchio — a wily wooden marionette, C-3PO — an affable and quite courteous fictional robot from Star Wars, Thomas the Tank Engine — a cheeky, over-excitable steam engine of Sodor, and HAL 9000 — a conflicted sentient artificial intelligence (AI) computer from the epic film 2001: A Space Odyssey inspired by Arthur C. Clarke’s novels.

First, 62 participants provided feedback on emotional responses to and the human likeness of 89 synthetic and real human faces. Next, a different set of 62 participants performed a visual looming task with the same 89 faces so the team could measure the perceived threat. Then the study enlisted 36 participants to sort faces as synthetic or real in a timed task, allowing researchers to measure categorical uncertainty associated with the perception of how alive or animate the image seemed.

The researchers observed that the “perceived animacy decreased as a function of exposure time only in android but not in human or mechanical-looking robot faces.” When it was apparent that an object was human or mechanical, the perceived animacy (the quality of being alive) did not decline over time. The uncanniness of androids is connected to the “temporal dynamics of face animacy perception.” The study suggests that this drop in perceived animacy is what drives the uncanny valley phenomenon.

In recent decades, numerous research studies have attempted to explain the uncanny valley phenomenon. These theories vary in approach and explanation.

From a social neuroscience perspective, is the uncanny valley a window into the social brain’s predictive processing? Scientists at the University of California San Diego (UCSD) monitored the brain activity of 20 participants, using electroencephalogram (EEG), while they viewed video clips in two modes (motion only and still-then-motion) with a humanoid robot, a realistic robot, and a human performing actions such as drinking from a cup, hand-waving, throwing a piece of paper, and so forth. In their study published in 2018 in Neuropsychologia, the researchers attribute the negative reaction of the uncanny valley to violations of a person’s predictions about human norms when confronted with an artificial but realistic human form.

Can the turning point to negativity for realistic robots be due to underlying neurological mechanisms? A team of psychology and neuroscience researchers from the University of Cambridge and the University of Duisburg-Essen published last summer in the Journal of Neuroscience their functional MRI (fMRI) study of the neural activity of 26 participants who were shown pictures of different robots and humans. The researchers posit that the uncanny valley response is due to nonlinear value-coding in a key area of the brain’s reward system, the ventromedial prefrontal cortex.

“A distinct amygdala signal predicted rejection of artificial agents,” wrote the UK and German researchers. “Our data suggest that human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (humanlikeness in nonhuman agents).”

Is it possible that the uncanny valley phenomenon is not related at all to the human-likeness factor, but rather is due to something like cognitive dissonance? Psychology researchers from the University of Guelph, Canada and the Yale University School of Medicine propose that the uncanny valley reflects a general form of stimulus devaluation that happens when inhibition is triggered to resolve conflicting stimulus-related representations. In their study published in Frontiers in Psychology, the authors attribute the uncanny valley to conflicting perceptual cues that elicit psychological discomfort. The researchers studied the reactions from 69 participants who viewed various computer-generated morph images such as human-robot, human-stag, human-tiger, human-lion, and human-bird.

“Negative affect for stimuli within an Uncanny Valley context may therefore occur to the extent that selecting one stimulus interpretation over the other requires inhibition of visual-category information associated with the non-selected interpretation,” wrote the researchers. “The greater the inhibition during identification, the greater the negative affect for the associated stimulus.”

What does the uncanny valley theory, a concept that emerged half a century ago, hold for the future? The worldwide robotics market is projected to increase at a compound annual growth rate of 26 percent to reach nearly $210 billion by 2025, according to figures from Statista. Accounting for the uncanny valley phenomenon may be growing in importance in the personal and service robotics segment, where there are high levels of interaction between humans and robots, more so than in the industrial robot sector. Going beyond aesthetics and taking into account the numerous research studies of the uncanny valley may prove a key competitive advantage in robotics design in the not-so-distant future.
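For readers who want to sanity-check that projection, the short calculation below back-computes the market size such a trajectory implies, assuming a 2020 base year; only the 26 per cent CAGR and the roughly $210 billion 2025 figure come from the article.

```python
# Back-of-envelope check of the quoted growth figures. Only the 26 per cent
# CAGR and the ~$210 billion 2025 figure come from the article; the 2020 base
# year is an assumption made here for illustration.
CAGR = 0.26
TARGET_2025 = 210.0   # billions of USD

implied_2020 = TARGET_2025 / (1 + CAGR) ** 5
print(f"implied 2020 market size: ~${implied_2020:.0f}B")   # roughly $66B

# Forward projection from that implied base, year by year.
size = implied_2020
for year in range(2021, 2026):
    size *= 1 + CAGR
    print(f"{year}: ~${size:.0f}B")
```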

 
