Risk, Security, & Privacy
September 11–13, 2017
Washington, D.C.


FIELD TRIP
Johns Hopkins University Applied Physics Laboratory
11100 Johns Hopkins Road
Laurel, Maryland 20723
September 14, 2017
7:30 AM–12:30 PM

AGENDA


Monday, September 11
6:00 PM
First-Timers Reception
6:30 PM
Welcome Reception
7:00 PM
Welcome Dinner

Tuesday, September 12
7:30 AM

Breakfast

8:30 AM
Len Kleinrock, TTI/Vanguard Advisory Board
Conference Welcome
8:50 AM

Serge Leef, Vice President, New Ventures, and General Manager, System Level Engineering, Mentor Graphics
Chip-Level Security: Challenges and Opportunities

Attacks targeting hardware range from piracy to data theft to intentional hardware compromise and sabotage. Of particular concern are systemic, or combined, attacks that impact both software and hardware elements: As secure computing architectures become more common, their hardware-based trust anchors will become attractive targets for hardware-based attacks that aim to create vulnerabilities in the supposedly secure software. Designing trusted chips requires awareness of security needs at every step of the design flow to combat a wide range of threats. Some of these needs can be served by design tools specifically targeting threat detection and verification of design security and integrity; other threats can be resisted only through special design elements and properties. This talk will discuss common attack vectors and emerging technologies that reduce the attack surface by building countermeasures into chip design and manufacturing processes.

9:35 AM

Arvind Narayanan, Assistant Professor of Computer Science, Princeton University
De-anonymization of Sparse Datasets

For years, the key ethic for safe, sustainable data sharing was anonymization. As long as a researcher or organization took steps to anonymize datasets, they could be freely used and shared. This notion was even embedded in such laws and policies as the HIPAA Privacy Rule and the European Union’s Data Protection Directive, which facilitate sharing of anonymized datasets. But it turns out that anonymization is not foolproof. Researchers have shown that individuals can be identified in many different datasets once thought to have been fully protected. We need new methods and policies to protect individual privacy in the big-data/machine-learning era.

10:20 AM
Coffee Break
10:50 AM

Arwen P. Mohun, Professor of History, University of Delaware
Learning from Rollercoasters: Risk Perception, Tech Fixes, and the Madness of Crowds

History is inescapable. All new technological risks are connected to what came before by analogy, psychology, and culture. The example of one 20th-century technology—rollercoasters—can help us think through some of the challenges of managing 21st-century cyber-risk. The conversation will focus on issues of risk perception, user agency, and the limitations of technological fixes for managing risk.

11:30 AM

John Nay, Co-founder and Chief Executive Officer of Skopos Labs
Policy + Machine Learning = Real-Time Prediction of Congressional Legislation

It’s an interesting exercise—and potentially a useful one—to predict which of the thousands of bills introduced during a Congressional session will pass both houses of Congress. As it turns out, a machine-learning algorithm, plus a dozen other variables, can determine the odds of a bill becoming law. The system makes effective predictions and scales across millions of pages of legal text. One concern with A.I. models is that those with the most predictive power are typically black boxes—there is no coherent explanation behind their predictions. In this case, different explanations can be automatically extracted so the automated system can explain its decisions.

12:10 PM
Members’ Working Lunch
1:25 PM

Jason Hong, Associate Professor, School of Computer Science, Human Computer Interaction Institute, Carnegie Mellon University
Toward a Safe and Secure Internet of Things

There may well be over 20 billion connected IoT devices by 2020, due to advances in processors, sensing, displays, storage, wireless networking, and battery life. However, these same technologies pose many new and daunting challenges for cybersecurity. And while IoT is often talked about as a single monolithic concept, it is more useful to think of it as a three-tier pyramid. Each tier represents a different class of device, based on the computational power of the device, as well as the amount of interaction and attention a person needs to devote to each device. Each tier also poses different kinds of security challenges due to the nature of the devices in that tier.

2:05 PM

Julie Ancis, Associate Vice President, Institute Diversity, Georgia Institute of Technology
The Social and Algorithmic Risks of Implicit Bias

As we automate more and more processes, we need to guard against the way biases and other implicit associations weaken the algorithms we construct, just as they impact human decision-making. These unconscious, automatic mental processes affect, for example, how we evaluate candidates when hiring or promoting personnel. Faulty assumptions, including hypothesized predictors of behavior, often result in suboptimal decisions at best and harmful decisions at worst. Studies have shown an empirical relationship between appearance-related associations and perceptions of competence, reliability, and intelligence when considering such critical documents as vitae and letters of recommendation. Fortunately, there are empirically based strategies and best practices for avoiding or minimizing cognitive shortcuts and making better, more equitable decisions. These practices can also be used in the construction of the algorithms that replace or augment human decision-making processes.

2:45 PM

Douglas Guilbeault, Doctoral Student, Annenberg School for Communication, University of Pennsylvania
Hacking Politics—and Marketing—with Automated Botnets

The Computational Propaganda Project at Oxford University found that automated accounts had a significant influence on political communication over Twitter, in the weeks leading up to the 2016 U.S. election. Governmental, corporate, and citizen populations have used bots, which are easily created through the Twitter API, to create an illusion of popularity around fringe issues or political candidates. What does this mean for democracy, marketing, and the future of the Internet?

3:25 PM
Coffee Break
3:55 PM

Russ Warner, Vice President, Marketing and Operations, Converus
The Eyes Are the Window to the Soul—Especially for Lie Detection

Lie detector technologies used today for employee and job-applicant screening include polygraph, voice stress analysis, integrity tests, and EEG. A new approach, which measures deception by analyzing involuntary eye behaviors, threatens to disrupt this 90-year-old industry and radically improve credibility-assessment solutions for businesses and government. It turns out that the eyes truly are the window to the soul.

4:35 PM

Matt Alsdorf, Vice President of Criminal Justice, Laura and John Arnold Foundation
Who’s Really in Jail? A Case Study Using Data and Analytics to Assess Risk

The decisions made at the earliest phase of a criminal case—including who is booked into jail and who remains there awaiting trial—have enormous impacts on public safety, government spending, and fundamental fairness. Yet, in most jurisdictions, judges and other decision-makers typically have virtually no information about the risks a given defendant poses or how best to mitigate those risks. Key decisions, such as who is released and who is detained, are typically made without the aid of rigorous research or objective metrics, and instead are based on a judge’s gut instinct or a fixed bail schedule. As a result, in many jurisdictions, large proportions of the highest-risk defendants are released before trial, often with minimal or no supervision, while significant numbers of low-risk, non-violent defendants remain in jail for weeks and sometimes months. The Laura and John Arnold Foundation’s Public Safety Assessment was designed to help remedy this situation by providing judges with a data-driven assessment of each defendant’s risk. It was created using a database of over 1.5 million cases drawn from more than 300 U.S. jurisdictions. It is an example of how, by leveraging data, analytics, and technology, we can help jurisdictions improve decision-making at a critical point in the criminal justice process.

5:15 PM

End of First Conference Day

5:30 PM
Bar Opens in Ballroom Foyer
6:00 PM
Reception at Hotel
6:30 PM
Dinner at Hotel

Wednesday, September 13
7:30 AM

Breakfast

8:30 AM

Andrew Bud, Founder and Chief Executive Officer, iProov
Meeting the Challenges of Strong Face Verification

Biometric authentication of users is fast becoming a necessity for logon, physical access security, and ID checks. Face verification could be the simplest, easiest, most ubiquitous solution—if a range of replica and replay attacks can be reliably repelled. This is a hard problem, one that Apple seeks to solve in the iPhone 8 with costly, proprietary hardware. A successful solution must work across platforms, be simple enough for the old and infirm to use, and still resist even the most sophisticated attacks while safeguarding users’ privacy.

9:15 AM

Richard Ford, Chief Scientist, Forcepoint
Protecting the Human Point

In Philip K. Dick’s 1956 “The Minority Report,” the Precrime Division relied on precogs—mutants who foresaw crimes in a trance-like state—to anticipate and prevent crime before it happened. In 2017, we still haven’t found any precogs, but similar results can be achieved by looking at user behavior through the lens of predictive analytics. By focusing on the point at which humans touch a company’s data, we can avoid the fragmentation that results from treating symptoms instead of behaviors. Perhaps the greatest challenge is doing this while still respecting user privacy.

9:50 AM

Suzanne Barber, Professor, Electrical and Computer Engineering, and Director, Center for Identity, University of Texas at Austin
A Scorecard for Identity Management and Your Ability to Combat Fraud  

The Center for Identity at The University of Texas (UT CID) is developing computational cradle-to-grave "maps" of identity for people, organizations, and devices. These Identity Maps will provide a foundation for defining the mathematically describable value and risks of dynamic transactions. The resulting observations can be used to gain a deeper understanding of the basic nature of digital identities. Leveraging empirical data from the UT CID national repository of identity theft and fraud cases and trends, these Identity Maps offer statistically predictive insights, solutions, and recommendations for improved best practices in identity security and privacy, fraud protection, and trust management in cyberspace.

10:30 AM
Coffee Break
11:00 AM

Vincent Weafer, Senior Vice President, McAfee
What to Do When the Black Hats Discover Machine Learning

The availability of machine-learning toolkits, documentation, and tutorials has exploded in recent years. In as little as an hour, an individual can be training complex models on large datasets on a distributed architecture. This has given the white hats a new arsenal of defensive weapons, but it has similarly armed the black hats with new offensive ones. The cyberthreat landscape is reaching an inflection point, driven by structural changes in attacker platforms, changing motivations, the ability to profit, and the changing attack surface. For example, one of several persistent threats we track today is the Business Email Compromise (BEC) scam, in which threat actors target individuals with financial responsibility within a business and, through skillful social engineering, dupe them into transferring funds into a fraudulent bank account. Although it remains unclear how victims are selected, a considerable amount of research is conducted before the attacks are initiated. We believe that cybercriminals are leveraging data analysis and data aggregation models to target victims for BEC and similar scams. It’s useful to look at these attacks from the adversary’s vantage point and to walk through the motivations, ML tools, and market enablers that drive them.

11:45 AM

Jason Blackstock, Founding Head, Department of Science, Technology, Engineering and Public Policy, University College London, and Madeline Carr, Associate Professor, International Relations and Cyber Security, University College London
Risk and Liability in the Internet of Things

Managing the risks and opportunities associated with the emerging Internet of Things goes far beyond understanding technical issues. Questions of responsibility, liability, privacy and consent pervade emerging IoT applications, and are closely coupled with deep questions of law, regulation, politics, economics, and social value. In the short term, these issues are entwined in rapidly evolving discussions of IoT standards frameworks and government policies, and tightly coupled with cybersecurity concerns. In the medium and long term, however, these issues have far-reaching strategic and societal implications that corporate and governmental leadership need to be carefully tracking today.

12:30 PM

TTI/Vanguard Announcements

12:40 PM
Members’ Working Lunch
1:55 PM

David McGrew, Fellow, Cisco Systems
Encrypted Traffic Analytics

With the move towards public and private clouds and other distributed security architectures, network encryption has become increasingly important and widespread. While encryption greatly benefits application security, it also provides a place for malware to hide. Organizations need to understand how encryption is used on their networks, so they can ensure that critical data and systems are fully protected, and can detect potentially malicious communications. Both of these needs can be met by using deep visibility into session metadata, along with a deep knowledge of malware communication obtained by applying machine learning to threat intelligence. By leaving encrypted sessions intact, this strategy strikes a balance between security and privacy, creating a useful alternative to more invasive techniques, such as decryption proxies.

2:40 PM

Nikhil Naik, Prize Fellow, Harvard University
Remote and Near Sensing with Artificial Intelligence

Corporations photograph the world at the street level, from the air, and from space. Advances in artificial intelligence are providing the capability to analyze these photographs and collect data on people, places, and firms at an unprecedented resolution and scale. Using Google’s Street View as a collection of images, we can explore opportunities to track changes in cities over time, and assess the risks and privacy implications that arise when we combine remote and near sensing with artificial intelligence.

3:20 PM

Bob Lucky, TTI/Vanguard Advisory Board
Conference Reflections

4:00 PM
Meeting Closes

Thursday, September 14
Field Trip to Johns Hopkins University Applied Physics Laboratory
11100 Johns Hopkins Road, Laurel, Maryland 20723
7:30 AM
Buses depart from hotel
8:30 AM

Tour begins. Tour will have a threefold focus on cybersecurity, space, and robotics/prosthetics; recurring themes will be visualization and collaboration.

12:30 PM
Tour ends; buses depart for airports / return to hotel

