Autonomy
July 11-13, 2016
Brooklyn, New York

Library Selection
Our Robots, Ourselves: Robotics and the Myths of Autonomy (Viking, 2015)
By David A. Mindell





Agenda


Monday, July 11
6:00 PM
First-Timers Reception
6:30 PM
Reception
7:00 PM
Welcome Dinner

Tuesday, July 12
7:30 AM

Breakfast (Grand Ballroom, Salon E)

8:30 AM
Len Kleinrock, TTI/Vanguard Advisory Board
Conference Welcome
8:50 AM
Marc Raibert, Founder, Boston Dynamics
Animal-Inspired Robots
Dynamic robots with advanced control systems and high-performance mechanical designs are leaving the laboratory and entering the world. They can operate in rough terrain where wheeled and tracked vehicles can't go. Examples of such robots include AlphaDog, the follow-on to BigDog; Spot; and Atlas, an anthropomorphic robot designed to explore mobile manipulation and other real-world tasks. These robots move dynamically, balance as they go, and rely on sensing and computing for their behavior.
9:45 AM
Robert Howe, Professor, School of Engineering, Harvard University
Design, Sensing, and Motor Control in Biological and Robotic Systems
The commercial opportunities for a robotic hand that can perform human tasks are as great as the engineering challenges in creating one. The human hand is a remarkable device with 25 degrees of freedom and complex feedback loops, guided by the most marvelous computer that nature can create. A truly usable robotic hand could pick and pack goods at a warehouse, outperform Baxter on the factory floor, and help grandma in the kitchen. But the reliability requirements are off the scale—it needs to be able to manipulate a wide range of objects, but also never drop one. The Harvard BioRobotics Lab and RightHand Robotics, a startup spun out of it, have developed working hands that meet these requirements.
10:25 AM
Coffee Break (Northside Foyer)
10:55 AM
Julie Shah, Head, Interactive Robotics Group, Computer Science and Artificial Intelligence Laboratory, MIT
Robots for High-Intensity, Safety-Critical Applications
Every team has top performers—people who excel at working in a team to find the right solutions in complex, difficult situations. They include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. This is because people and machines don’t think the same way. Recent work enables machines to learn from the best human team members how to extract information from human conversation and to participate in real time to improve human team decision-making. The MIT Interactive Robotics Lab has conducted experiments in which intelligent agents that use these models assist people in classification and planning tasks. Studies demonstrate statistically significant improvements in people’s performance on decision-making tasks when aided by the intelligent agents.
11:35 AM
Pam Mueller, Department of Psychology, Princeton University 
The Hazards of Mechanized Notetaking
Technology is often implemented to make things easier. However, some level of difficulty can actually be desirable, particularly for learning and memory. Recent research shows that students who take notes on laptops tend to transcribe lectures verbatim. Students who take longhand notes are forced to process the content and be more selective about what they write, because their note-taking speed is slower. This "difficulty" leads to improved learning and performance on conceptual questions. Consequences of these "desirable difficulties" outside the classroom will also be discussed.
12:05 PM
Amit Zoran, Senior Lecturer at the School of Engineering and Computer Science, The Hebrew University
Smart Tools and Hybrid Human–Computer Interaction
CAD modeling has come a long way, and so have additive and subtractive manufacturing methods. The more we can use them to automate the process of turning out, say, 100,000 identical automobile bumpers, the better. But for many processes, the man-machine integration needs to be more subtle. When crafting a unique work of art or artisanal product, we want the human to stay in control, with the machine acting as a safety net that prevents errors. FreeD is a hand-held digital milling device that is monitored by a computer while preserving the maker’s freedom to manipulate the work in many creative ways. Relying on a pre-designed 3-D model, the computer steps in only when the milling bit risks the object’s integrity, preventing damage by slowing down the spindle speed; the rest of the time it allows complete gestural freedom. The same model of man-machine interaction, leaving the human in control, is important in landing an airplane, cooking a meal, or learning to dance.
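To make the division of labor concrete, here is a minimal Python sketch of such a guard loop, assuming a simple distance-based rule; the names, thresholds, and toy simulation are invented for illustration and do not describe the actual FreeD implementation.

# Minimal sketch of "human in control, machine as safety net":
# the spindle runs at full speed until the bit nears the designed surface,
# then slows to protect the object. Hypothetical names and thresholds.

FULL_SPEED = 1.0   # fraction of maximum spindle speed
SAFE_SPEED = 0.1   # reduced speed once the bit nears the designed surface
MARGIN_MM = 0.5    # distance at which the safety net engages

def spindle_speed(distance_to_surface_mm):
    """Slow the spindle only when the bit risks the object's integrity."""
    if distance_to_surface_mm <= MARGIN_MM:
        return SAFE_SPEED   # the computer steps in
    return FULL_SPEED       # otherwise, complete gestural freedom

if __name__ == "__main__":
    # Simulated readings: millimeters of stock left above the designed surface.
    for remaining_mm in [5.0, 2.0, 0.8, 0.4, 0.1]:
        print(f"{remaining_mm:4.1f} mm to surface -> spindle at {spindle_speed(remaining_mm):.0%}")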
12:45 PM
Members’ Working Lunch (Grand Ballroom, Salon E)
1:55 PM
Duncan Wass, Research Director, University of Bristol School of Chemistry
Self-Healing Materials
Fiber-reinforced composites are the modern materials of choice where high performance must be combined with low weight, such as airplanes and wind turbines. However, impact damage that drastically reduces the mechanical properties of a composite structure can be difficult to detect. The Wass Research Group at the University of Bristol has adopted a bio-inspired approach to effect damage detection and self-healing. The essential chemistry relies on catalysts that are “triggered” by damage events to cause the polymerization of a reservoir of monomer held within the composite structure.
2:35 PM
Ian Glenn, Chief Executive Officer and Chief Technology Officer, Ing Robotics
Robotic Aviation Services in Harsh Conditions
Over one million consumer drones were sold in North America in the last year, but they are also making inroads in the commercial sector, with oil and gas, clean energy, and the environment leading the way. Turbine inspection, powerline monitoring, situational awareness, and emergency response mapping are just a few of the use-cases benefiting from ultra-high-resolution aerial data gathered by drones that can survey remote and harsh environments.  
3:10 PM
Coffee Break (Northside Foyer)
3:40 PM
Hod Lipson, Professor of Mechanical Engineering, Columbia University
Curious and Creative Machines  
Where are the robots? For 40 years we’ve been told robots will be cooking, cleaning, shopping, and building. Our current robots aren’t even as functional and adaptable as a beetle. Can we harness evolution to create better robots? Can machines be curious and creative? Can robots ultimately design and make other robots? Experiments with models and physical robots suggest that self-awareness may be not only a sufficient condition for effective robots but a necessary one.
4:15 PM
Robin Hanson, Associate Professor of Economics, George Mason University
The Age of Em: Work, Love, and Life when Robots Rule the Earth
The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another such radical transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in the next century. Drawing on academic consensus from many disciplines, a baseline scenario set modestly far into a post-em-transition world can be outlined. The scenario takes into account computer architecture, energy use, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war.
6:00 PM

Buses Depart for Brooklyn Winery (213 N 8th St, Brooklyn, N.Y.)

6:30 PM

Reception & Dinner


Wednesday, July 13
7:30 AM

Breakfast (Grand Ballroom, Salon E)

8:30 AM
Rodolphe Gelin, Chief Scientific Officer, Aldebaran
Emotion for Better Robot Interactions 
In 2007, SoftBank Robotics commercialized Nao, the first in a series of companion robots designed to improve human quality of life. More than 9000 units of this 58-cm-tall robot have been sold. In 2014, the company presented Pepper, a 1.2-m service robot that offers intuitive human–robot interaction. To be accepted at first sight, especially by non-technophile users, a robot should have a humanoid appearance and be capable of natural interaction, preferably speech. But more is needed: non-verbal communication, which carries emotion, turns out to be at least as important. The robot’s body language adds real value, and its ability to understand human gestures and other non-verbal signals can dramatically improve its effectiveness.
9:20 AM
Antoine Blondeau, Chief Executive Officer, Sentient AI
The Autonomous Hedge Fund and Other Intelligent Systems
If there’s a core technology for corporations that will see them through the 21st century and into the 22nd, it’s artificial intelligence—the basis of autonomous cars, voice interaction, and so much more, including e-commerce, financial investing, and biotechnology. Sentient has created a system of massively distributed AI to breed better and better programs across a wide variety of verticals, from hedge fund trading to detecting prostate cancer and sepsis. Of course, as Moore’s Law slows down, computation is still an issue, so the system harvests dark cycles from thousands of sites around the world, in data centers and game centers alike.
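As a rough illustration of what “breeding” programs means in practice, the sketch below runs a generic evolutionary loop over toy candidate solutions; it is a textbook pattern with an invented fitness function and parameters, not Sentient’s distributed system.

# Generic evolutionary "breeding" loop (illustrative only).
import random

POP_SIZE, GENERATIONS, GENOME_LEN = 30, 50, 16

def fitness(genome):
    """Toy stand-in for scoring a candidate program (e.g., a trading strategy)."""
    return sum(genome)  # reward genomes with more 1s

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                      # keep the best performers
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]       # breed the next generation
    population = parents + children
print("best fitness:", max(fitness(g) for g in population))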
10:00 AM
Sonia Chernova, Director, Robot Autonomy and Interactive Learning Lab, Georgia Institute of Technology
Human-in-the-Loop Deployment of Scalable Autonomous Systems
Over the past two decades, robotics has been undergoing an exciting transition from factory automation to the deployment of autonomous systems in less structured environments, including warehouses, hospitals, and homes. One of the critical barriers to the wider adoption of autonomous robotic systems in commercial applications is the challenge of achieving 100%-reliable autonomy in complex human environments. Research at the Georgia Tech Robot Autonomy and Interactive Learning Lab has leveraged innovations in cloud computing, crowdsourcing, and remote access technologies to gain broader access to data and users, fundamentally altering the way in which interactive robotic systems are developed and deployed. This talk will discuss the current state of autonomous robotic systems designed to operate alongside and in collaboration with human partners, present applications of this research paradigm to robot learning from demonstration, object manipulation, and semantic reasoning, and explore some exciting avenues for future research in this area.
10:45 AM
Coffee Break (Northside Foyer)
11:15 AM
TTI/Vanguard Announcements
11:25 AM
Jeff Legault, Director of Strategic Business Development, National Robotics Engineering Center, Carnegie Mellon University
The Business Case of Autonomy
How much does autonomy cost? Should you invest in robotics technology development or simply wait for the market to come up with new products? Three case studies of commercializing robotics technology, including one failure, will be discussed. It’s easy to underestimate the costs of automation, but on the other hand there is low-hanging fruit that companies often overlook.
12:05 PM
Chad Bouton, Managing Director, Center for Bioelectronic Medicine, Feinstein Institute for Medical Research
Conquering Paralysis through Bioelectronic Medicine
At the core of bioelectronic medicine is the electrical signal used by the nervous system to communicate information. Virtually every cell of the body is directly or indirectly controlled by these neural signals. Recently, an experimental device, developed at Battelle Memorial Institute and tested at Ohio State University, allowed a paralyzed man to move his hand for the first time, using his thoughts to achieve complex motor functions. It opens a wide array of possibilities for millions of patients recovering from spinal cord injury, stroke, and brain injury. It also has potential applications to the partially disabled and able-bodied—as an assistive device for the elderly, or to train better motor skills in civilians and the military.
12:45 PM
Members' Working Lunch (Grand Ballroom, Salon E)
2:00 PM
Jeremy Heffner, Product Manager for HunchLab, Azavea
Prescriptive Policing: How Man and Machine Interact to Improve Public Safety 
The point of automation is to let machines do what they’re good at, and let people do what they’re good at. An ideal methodology for policing would allow commanders (or the public) to set priorities—how important is preventing an assault compared with preventing a burglary?—and then let predictive policing systems advise them on how best to deploy resources. Still, questions abound. Can software motivate officers to do good police work? Can such systems help to overcome biases and reflect a sense of fairness? What does fairness mean in this context? And how can we make these systems transparent, given that policing is traditionally secretive, yet never more in the public eye than now?
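As a rough sketch of how commander-set priorities could combine with model predictions, the Python snippet below weights predicted incident counts by severity and ranks patrol areas; the weights, data, and function names are hypothetical and do not represent HunchLab’s actual methodology.

# Toy priority-weighted prescriptive scoring (invented weights and data).

# Commander-set priorities: how much preventing each crime type matters.
PRIORITY = {"assault": 10.0, "burglary": 3.0, "vehicle_theft": 1.5}

# Hypothetical model output: predicted incidents per shift for each beat.
PREDICTIONS = {
    "beat_1": {"assault": 0.20, "burglary": 1.10, "vehicle_theft": 0.40},
    "beat_2": {"assault": 0.45, "burglary": 0.30, "vehicle_theft": 0.10},
    "beat_3": {"assault": 0.05, "burglary": 0.90, "vehicle_theft": 1.20},
}

def weighted_harm(predicted):
    """Combine predicted incidents with the priorities set by commanders."""
    return sum(PRIORITY[crime] * rate for crime, rate in predicted.items())

def rank_beats(predictions):
    """Rank patrol areas by expected priority-weighted harm, highest first."""
    return sorted(predictions, key=lambda beat: weighted_harm(predictions[beat]), reverse=True)

if __name__ == "__main__":
    for beat in rank_beats(PREDICTIONS):
        print(beat, round(weighted_harm(PREDICTIONS[beat]), 2))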
2:40 PM
Ian Stewart, Partner and Chief UK Economist, Deloitte
The Future of Work
A positive narrative about technology and progress has dominated history. An accumulation of life-changing innovations—from the steam engine and antibiotics to mass travel and mass communication—has improved living standards for most, generation after generation. Yet beginning at least with the Luddites of the early nineteenth century, this narrative has been punctuated by fears about the job-destroying effects of technology. The mood now is once again one of caution. Advances in machine learning and enormous increases in storage, processing, and communication capacity are enabling machines to tackle complex tasks involving thought and judgement, which were once the sole preserve of humans. Jobs are being automated more quickly or made less labor intensive. If the effects of technology on employment are so unpredictable, are there any durable lessons that can be gleaned from history? We have examined two major data sources for clues: census records on employment in England and Wales since 1871 and Labor Force Survey data from 1992. The short answer: machines will take on more repetitive and laborious tasks, but seem no closer to eliminating the need for human labor than at any time in the last 150 years.
3:20 PM
Bob Lucky, TTI/Vanguard Advisory Board
Conference Reflections
4:00 PM
Close of Conference

