Intelligence, Natural and Artificial
June 11–12, 2018
Brooklyn, NY

Steven Cherry previews the conference (download MP3)

FIELD TRIP
NYU’S TANDON SCHOOL OF ENGINEERING AND THE CENTER FOR URBAN SCIENCE + PROGRESS (CUSP)
June 13, 2018
8:30 AM–1:00 PM



AGENDA


Sunday, June 10
6:00 PM
First-Timers Reception — Northside Foyer
6:30 PM
Welcome Reception — Northside Foyer
7:00 PM
Welcome Dinner — Northside Ballroom

Monday, June 11
7:30 AM

Breakfast — Grand Ballroom, Salon E

8:30 AM
Len Kleinrock, TTI/Vanguard Advisory Board
Conference Welcome
8:50 AM

TTI/Vanguard Announcements

9:00 AM

Noam Brown, Ph.D. Student, Computer Science, Carnegie Mellon University (@polynoamial)
From Poker AI to Negotiation AI: Dealing with Hidden Information

Despite AI successes in perfect-information games, poker's private information and massive game tree have made no-limit poker difficult to tackle. Libratus is an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in Heads-Up No-Limit Texas Hold'em, the leading benchmark in imperfect-information game solving. Its game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy. Libratus's success augurs well for real-world multi-agent imperfect-information settings such as marketing, negotiations, and cybersecurity.
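The blueprint computation Brown describes builds on counterfactual regret minimization (CFR), and the regret-matching update at its core fits in a few lines. The toy below is a sketch only, not Libratus's code: it uses regret matching in self-play to solve rock-paper-scissors, where the average strategy converges to the uniform Nash equilibrium of playing each action one third of the time.

```python
# Regret matching: the core update inside counterfactual regret
# minimization (CFR), the algorithm family behind Libratus's
# blueprint strategy. Here it solves rock-paper-scissors by
# self-play; the AVERAGE strategy converges to the uniform Nash
# equilibrium. (Toy sketch; Libratus works on an abstracted
# no-limit hold'em game tree.)

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1],   # payoff to the row action vs. the column action
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive cumulative regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

def solve(iterations=100000):
    # Start with slightly asymmetric regrets so play is not a fixed point.
    regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        for p in (0, 1):
            opp = strats[1 - p]
            # Expected utility of each action against the opponent's mix
            util = [sum(PAYOFF[a][b] * opp[b] for b in range(ACTIONS))
                    for a in range(ACTIONS)]
            node_util = sum(s * u for s, u in zip(strats[p], util))
            for a in range(ACTIONS):
                regrets[p][a] += util[a] - node_util  # accumulate regret
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # player 0's average strategy

avg = solve()  # each entry approaches 1/3
```

The same loop, run over the information sets of a game tree rather than a single matrix game, is what scales up into the blueprint computation for poker.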

9:50 AM

Bo Zhu, Research Fellow in Radiology, Harvard Medical School
Artificial Intelligence from Sensor to Image

Our eyes are very different from cameras—they're not great sensors. Despite that, or perhaps because of it, human visual perception can capture important features, even with low signal-to-noise. MRI also has a low signal-to-noise ratio. Can we train an MRI system the same way we train our vision? A new artificial-intelligence-based approach to image reconstruction, called AUTOMAP, yields higher-quality images from less data, reducing radiation doses for CT and PET and shortening scan times for MRI. The same techniques could also improve regular photography.
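AUTOMAP learns the full mapping from raw sensor-domain data to the final image with a deep network. A linear stand-in conveys the core idea: rather than hand-deriving the inverse of the measurement process, learn it from example (measurement, image) pairs. Everything below, including the array sizes and the random measurement matrix, is an illustrative assumption, not material from the talk.

```python
import numpy as np

# Toy version of learned image reconstruction in the spirit of AUTOMAP:
# learn the sensor-to-image mapping from training pairs instead of
# deriving the inverse transform analytically. AUTOMAP uses a deep
# network; a linear least-squares map is enough to show the idea.

rng = np.random.default_rng(0)
n_pixels = 16                              # tiny 4x4 "images", flattened
A = rng.normal(size=(n_pixels, n_pixels))  # unknown sensor encoding

# Training set: images and their sensor-domain measurements
train_images = rng.normal(size=(500, n_pixels))
train_meas = train_images @ A.T

# Learn a reconstruction matrix R such that measurements @ R ~= images
R, *_ = np.linalg.lstsq(train_meas, train_images, rcond=None)

# Reconstruct a held-out image from its measurements alone
test_image = rng.normal(size=n_pixels)
recon = (test_image @ A.T) @ R
```

Because the mapping here is linear and noiseless, least squares recovers it exactly; AUTOMAP's contribution is doing the analogous thing for nonlinear, noisy, undersampled acquisitions, which is where the dose and scan-time savings come from.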

10:35 AM
Coffee Break — Ballroom Foyer
11:05 AM

Anthony Zador, M.D., Professor of Neurosciences, Cold Spring Harbor Laboratory
Understanding the Connectome to Build Better Brain-like Algorithms

We need new methods for determining the complete wiring instructions of the brain at single-neuron resolution (the “Connectome”). Previous methods, making use of microscopy, are insufficient to penetrate the tangle of neurons in the cortex and map many neurons at once. A new method exploits high-throughput DNA sequencing. Because the costs of DNA sequencing are plummeting, these methods have the potential to yield the complete wiring diagram of an entire human brain for just thousands of dollars. With an adequate mapping, it then becomes possible to apply the results to work on neuropsychiatric disorders that are disorders of the wiring, such as autism, schizophrenia, and depression. Ultimately, it may be possible to generate improved algorithms that process data more like the brain does.

11:50 AM

Jeff Jonas, Founder, Chief Executive Officer, and Chief Scientist, Senzing
Big Data and the General Data Protection Regulation (GDPR)

The European Union's General Data Protection Regulation (GDPR) goes into effect in May 2018. With it come big responsibilities for organizations. And the more data you have, the bigger the responsibility.

12:35 PM
Members’ Working Lunch — Grand Ballroom, Salon E
1:50 PM

David Gunning, Program Manager, DARPA
The Explainable AI Project

Dramatic success in machine learning has led to a torrent of AI applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users—a concern for military as well as civilian applications. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners. The Explainable AI program aims to create a suite of machine-learning techniques that produce more explainable models while maintaining a high level of learning performance, enabling human users to do exactly that.

2:35 PM

Topical Breakout Groups

3:05 PM
Coffee Break — Ballroom Foyer
3:35 PM

David Prior, Founder and Chief Technology Officer, Xuvasi Ltd, and Chief Executive Officer, Resilio Ltd (@davidprior)
Everyday Superpowers Through Applied Intelligence

Why—in a world in which what we want to achieve is often well understood and derived from decades of experience—do we continue to stifle the creative capacity of human beings with mundane, repetitive tasks? An approach called “Applied Intelligence” can enable human-computer teams to deliver effects that are greater than the sum of their parts, a result that can be seen in a variety of real-world examples.

4:20 PM

Kenneth Stanley, Senior Research Manager, Uber AI Labs, and Professor of Computer Science, University of Central Florida
Neuroevolution: How Evolving Neural Networks Contributes to the Quest for AI

The human brain, with its trillions of internal connections, ranks among the crowning achievements of evolution on Earth, and the capabilities of the brain stand as the central motivation for the field of AI as a whole. How is it that such an unguided and unplanned process as evolution yields an artifact so incredibly complex and powerful? And should we not be able to provoke a similar complexity explosion within a computer program, possibly lighting a path to general AI? The history of neuroevolution as a field, which draws its inspiration from the evolution of brains on Earth, parallels the rise of deep learning within the broader field of machine learning. Whereas deep learning focuses on the algorithm for training the neural network, neuroevolution searches for the origin of the neural network itself, which can include its structure, weights, and even its learning algorithm. Ultimately, neuroevolution and deep learning are likely complementary, and their gradual convergence and hybridization in recent years have sparked renewed interest in neuroevolution and its potential. Our ever-growing computational power is revolutionizing the field, which has matured as our understanding of evolution, open-endedness, and increasing complexity has improved.
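A minimal flavor of the weight-search half of neuroevolution: evolve a population of candidate weight vectors for a tiny fixed-topology network until it computes XOR. Full systems such as NEAT also evolve the topology itself, as Stanley's work describes; every hyperparameter below is an arbitrary illustration.

```python
import math
import random

# Minimal neuroevolution sketch: evolve the weights of a fixed
# 2-2-1 network to solve XOR. Real neuroevolution systems also
# evolve the network's structure and even its learning rules;
# this toy keeps the topology fixed and searches weights only.

random.seed(1)
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w: 9 weights = two hidden units plus one output unit, with biases
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better)
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(pop_size=60, generations=300, sigma=0.4):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]          # truncation selection + elitism
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()  # best evolved weight vector; classifies all four XOR cases
```

Replacing the fixed `forward` with a genome that also encodes which connections exist is the step from this sketch toward topology-evolving methods.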

5:10 PM

End of First Conference Day


Reception & Dinner Cruise around the Statue of Liberty
6:15 PM
Buses begin boarding
7:00 PM
Board boat
7:30 PM
Cruise begins

Tuesday, June 12
7:30 AM

Breakfast — Grand Ballroom, Salon E

8:30 AM

Gary Marcus (@GaryMarcus), Professor of Psychology and Director, New York University Infant Language Learning Center
Why AI is Harder Than You Think

Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular as recently as five years ago, before the field was reignited by papers such as Krizhevsky, Sutskever, and Hinton's now-classic 2012 deep-net model of ImageNet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, at least ten deep concerns remain for deep learning. They suggest that if we are to reach artificial general intelligence, deep learning must be supplemented by other techniques.

9:15 AM

Doug Lenat, TTI/Vanguard Advisory Board
Compensating for Cognitive Brittleness

The obstacles to solving important problems—for us as individuals, as organizations, as nations, and even as a species—stem from limitations baked into our genetics. True, over the past thousands of years, we’ve invented a handful of ways to help us think better: language; writing; the scientific method; the Internet and its “appsphere.” Nature has done her part, too, empowering us in several ways, notably with a cerebral cortex and differently-abled brain hemispheres. Yet some of our capabilities are double-edged, such as the Faustian bargain we make by slipping into a paradigm or ideology: The tremendous power it gives us comes at a tremendous cost, blinding us to other ways of perceiving the world and dealing with it. The current AI paradigm (training multi-layer neural nets on big data) in particular is one such sword, yielding correlations that can be useful in the near term but leaving us bereft of the explanatory power that we need in the long term. So it’s important to understand that there are other ways that people and computers can draw useful inferences and solve problems. These aren't competing alternatives, so much as other tools in our human and AI cognitive toolbox—you can’t build a house with a saw as your only tool, but you wouldn't want to build a house without one. This toolbox approach also casts a glimmer of light against the dark singularity scenarios predicted by Elon Musk, the late Stephen Hawking, and others.

10:00 AM

Ben Kuipers, Professor of Computer Science and Engineering, and Director, Intelligent Robotics Lab, University of Michigan
How Can We Trust a Robot?

Trust is essential for the successful functioning of society. Trust is necessary for cooperation, which produces the resources society needs. Morality, ethics, and other social norms encourage individuals to act in trustworthy ways, avoiding selfish decisions that exploit vulnerability, violate trust, and discourage cooperation. As we contemplate robots and other AIs that perceive the world and select actions to pursue their goals in that world, we must design them to follow the social norms of our society. (Doing this does not require them to be true moral agents, capable of genuinely taking responsibility for their actions.) Self-driving cars may well be the first widespread examples of trustworthy robots, designed to earn trust by demonstrating how well they follow social norms—yet the design focus for self-driving cars should not be on the Deadly Dilemma, but on how a robot’s everyday behavior can demonstrate its trustworthiness.

10:40 AM
Coffee Break — Ballroom Foyer
11:10 AM

Simon Tong, FiveAI
Autonomous Vehicles Without Maps

There are many, many companies today working on autonomous-car systems, and many plan to build their own fleets. By and large, however, these companies are based in the United States and China, yet dense European cities present totally different technical, behavioral, regulatory, and infrastructure challenges for safe urban driverless technologies. UK-based FiveAI uses deep neural networks with unsupervised learning to reliably estimate scene segmentation, object classification, depth, state, and position, allowing a vehicle to operate without GPS. Finally, a behavioral-modeling layer assigns probabilities to the likely actions of other cars so the vehicle can interact with them safely on the road.

11:50 AM

Rohini Rewari, Director, Market Trends and Innovation, Intel; Robie Samanta Roy, Vice President, Technology Strategy & Innovation, Lockheed Martin; Vijay Sankaran, Chief Innovation Officer, TD Ameritrade

Member Panel: AI in Use

12:35 PM
Members’ Working Lunch — Grand Ballroom, Salon E
1:50 PM

Elias Bareinboim, Assistant Professor, Purdue University
Causal Inference and Data-Fusion

Causal inference is usually dichotomized into two categories—experimental and observational—that, by and large, are studied separately. Reality is more demanding. Experimental and observational studies are but two extremes of a rich spectrum of research designs that generate the bulk of the data available in practical, large-scale situations. In typical medical explorations, for example, data from multiple observations and experiments are collected, coming from distinct experimental setups, different sampling conditions, and heterogeneous populations. The data-fusion problem concerns piecing together multiple datasets collected under heterogeneous conditions so as to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities. However, the biases that emerge in heterogeneous environments—including confounding, sampling selection, and cross-population biases—require new analytical tools.
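One concrete tool in this space is the back-door adjustment, which recovers an interventional quantity from purely observational data when the confounders are measured; Bareinboim's data-fusion framework generalizes such results to combinations of heterogeneous datasets. The numbers below are invented for a binary toy example.

```python
# Back-door adjustment on a toy binary model with confounding:
# Z -> X, Z -> Y, X -> Y. By construction, X raises P(Y=1) by
# exactly 0.1 at every level of Z, so the true causal effect of
# X on Y is 0.1. The naive observational contrast is biased
# because Z drives both X and Y. All probabilities are made up.

p_z = 0.5                       # P(Z=1)
p_x_given_z = {0: 0.2, 1: 0.8}  # P(X=1 | Z=z): Z strongly drives treatment
p_y = {                         # P(Y=1 | X=x, Z=z)
    (0, 0): 0.1, (0, 1): 0.5,
    (1, 0): 0.2, (1, 1): 0.6,
}

def p_joint(x, z):
    pz = p_z if z == 1 else 1 - p_z
    px = p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]
    return pz * px

# Naive observational contrast P(Y=1 | X=1) - P(Y=1 | X=0)
def p_y_given_x(x):
    num = sum(p_y[(x, z)] * p_joint(x, z) for z in (0, 1))
    den = sum(p_joint(x, z) for z in (0, 1))
    return num / den

naive = p_y_given_x(1) - p_y_given_x(0)   # 0.34: badly biased

# Back-door adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z)
def p_y_do_x(x):
    return sum(p_y[(x, z)] * (p_z if z == 1 else 1 - p_z) for z in (0, 1))

causal = p_y_do_x(1) - p_y_do_x(0)        # 0.1: the true effect
```

The data-fusion problem asks the harder question of when such adjustments remain valid once the conditional probabilities come from different studies, populations, or sampling regimes.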

2:30 PM
Erik Andrejko, Chief Technology Officer, Wellio
The Intelligent Assistant as Personal Chef

In recent years, machine learning has proliferated, both in the number of individual models and in their performance. As a result, across many specific tasks, the new models can operate on a par with human experts—a trend that will only accelerate. Using a food-and-nutrition app as an example, we will explore strategies for effectively leveraging existing models to bridge the gap to new domains, as well as practical techniques for building domain-specific AI systems.

3:10 PM

Guy Hoffman (@guyhoffman), Assistant Professor, Mechanical and Aerospace Engineering, Cornell University
A Handmade Approach to Social Robotics

As robots come closer to being consumer products, we may witness the emergence of a new technology sector, "robotic experience design." In contrast to existing computer interfaces, which focus on screens, keyboards, and trackpads, a robot can use its whole body to interact. This requires an embodied approach to AI and a multidisciplinary approach to human-robot interaction. Already, designers are drawing from psychology, animation, theater, and music to produce a robotic body language that can connect with users on a visceral level. The latest robot to come out of this process is Blossom, a design that rejects almost all common wisdom about how a robot should look and move. Blossom is the first robot to be soft both inside and out, using a compliant internal structure to enable movements that give it a somewhat imperfect personality. Its exterior is handcrafted; building it is a slow, inefficient, one-of-a-kind process, making each robot more deeply personal—more a cherished robotic rag doll than a computer on wheels.

3:50 PM

TTI/Vanguard Announcements

4:00 PM
Meeting Closes

Wednesday, June 13
8:30 AM
to
1:00 PM

Field trip to NYU Tandon School of Engineering

MAGNET (2 Metrotech, Brooklyn, N.Y.)
Ability Lab; Motion Capture Studio (gaming/AR/VR); Mobile AR Lab (demos and apps developed by students and faculty); NYU Wireless (the reality of 5G; demos)

Center for Urban Science + Progress (CUSP) (370 Jay Street)
Urban Observatory Control Room; Motion Capture Lab (yes, a second one—this for LiDAR detector, etc.); Sounds of New York (SoNYC) Lab


