11 must-watch RAAIS 2018 talks and commentary
Greetings from London! I'm Nathan Benaich. This issue is about the 4th Research and Applied AI Summit we ran a couple of weeks ago.
Here you'll find all the videos of our speakers' presentations, an overview of our new RAAIS Foundation to advance education and research in AI for the common good, as well as Air Street Capital, a specialist venture capital partnership for founders building AI-driven technology companies that solve substantive global problems.
ASK: If you have an open source AI project or research proposal that you're keen to work on but lack the resources to get started, just hit reply (or email info@raais.org)!
1️⃣ Opening remarks: Community and Foundation
We're extremely proud to share the achievements of RAAIS alumni in the last 12 months!
Financings: Graphcore partners with Sequoia, Darktrace becomes a UK-based unicorn, Mapillary raises $15M Series B, Starship closes $25M financing, Benevolent.AI is now worth >$2B.
Exits: Adyen going public in Amsterdam, Bloomsbury.AI acquired by Facebook, BlueYonder acquired by JDA Software, Matrix Mill acquired by Niantic.
In my talk, I make the case that AI has been elevated from a priority for the major public technology companies to a priority at the national level. You can find a well-argued elaboration of this point in Ian Hogarth's essay, AI Nationalism.
What's more, progress in AI systems today is rate-limited to a meaningful extent by the availability of machines. In turn, access to machines is rate-limited by financial resources. It's not surprising that academic labs, new entrants, and the less economically fortunate find it hard to compete.
Against this backdrop, we decided to set up The RAAIS Foundation. We pool philanthropic contributions from our community at RAAIS and London.AI to advance education and research in AI for the common good. Our RAAIS alumni speakers form an Advisory Board that helps us promote, select and support impactful open source projects and research. We focus on Fellows who would otherwise have limited or no opportunity to participate. Apply for a grant here! We will start with 10 grants of $4k cash plus cloud credits per year.
2️⃣ Friederike Schüür, Cloudera Fast Forward Labs
In her presentation, Friederike demonstrates how FF Labs builds fully functioning enterprise prototypes of ML-driven software. She focuses on new work on multi-task learning, which is an approach to training ML systems on more than one task. Learn more about this work in an official blog post and report she wrote here. I'm particularly interested in FF Labs as a case study for how to structure teams and workflows around ML R&D where the goal is to rapidly transfer research into technology products in the enterprise.
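To make the idea concrete, here's a minimal sketch of "hard parameter sharing", one common flavour of multi-task learning: a shared encoder feeds two task-specific heads and the task losses are summed for a joint update. The architecture, dimensions and tasks are illustrative only, not FF Labs' actual prototype.

```python
# Minimal multi-task learning sketch: a shared encoder ("hard parameter sharing")
# feeds two task-specific heads, and the losses are summed for a joint update.
# Dimensions and tasks are illustrative only.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.clf_head = nn.Linear(hidden, n_classes)   # task 1: classification
        self.reg_head = nn.Linear(hidden, 1)           # task 2: regression

    def forward(self, x):
        h = self.shared(x)
        return self.clf_head(h), self.reg_head(h)

model = MultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 32)                     # toy batch
y_cls = torch.randint(0, 3, (16,))          # toy labels for task 1
y_reg = torch.randn(16, 1)                  # toy targets for task 2

logits, preds = model(x)
loss = nn.functional.cross_entropy(logits, y_cls) + nn.functional.mse_loss(preds, y_reg)
opt.zero_grad()
loss.backward()
opt.step()
```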
3️⃣ Shakir Mohamed, DeepMind
Shakir takes us through a search for the principles of reasoning and intelligence. In particular, he focuses on generative models, which can allow us to learn a simulator of high-dimensional data. He makes the point that probabilistic inference is the central question of AGI. Shakir runs through a variety of generative models that are continuously improving our ability to generate new methods and to more accurately represent probabilities. I'm particularly excited about the application of generative models for empirical computation. The idea is to develop a computer system that generates hypotheses, tests them empirically, uses the results to update its hypothesis and completes this loop recursively in an autonomous fashion. James Field of LabGenius provides a glimpse into how this framework applies to protein evolution.
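As a rough illustration of that loop (not James's or Shakir's actual system), here's a hypothetical sketch in which a toy "generative model" proposes candidates, a placeholder experiment scores them, and the results feed back to update the model, closing the loop autonomously.

```python
# Hypothetical sketch of the closed "empirical computation" loop described above:
# a generative model proposes candidates, an experiment scores them, and the
# results are fed back to refine the model. All functions are placeholders.
import random

def propose(model, n=8):
    """Sample candidate hypotheses (e.g. protein sequences) from the model."""
    return [[random.gauss(m, 1.0) for m in model] for _ in range(n)]

def run_experiment(candidate):
    """Stand-in for an empirical test; returns a measured score."""
    return -sum((c - 3.0) ** 2 for c in candidate)  # toy objective

def update_model(model, candidates, scores):
    """Shift the model towards the best-scoring candidate (a crude update)."""
    best = candidates[scores.index(max(scores))]
    return [0.5 * m + 0.5 * b for m, b in zip(model, best)]

model = [0.0, 0.0, 0.0]  # parameters of a trivial "generative model"
for generation in range(20):
    candidates = propose(model)
    scores = [run_experiment(c) for c in candidates]
    model = update_model(model, candidates, scores)

print("final model parameters:", model)
```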
4️⃣ Panel: Kenn Cukier, Ian Hogarth, Nathan Benaich
In this panel, Ian and I introduce our collaboration on The State of AI Report 2018. Kenn drives a conversation around major breakthroughs in AI, the nature of AI nationalism and its impact on geopolitics and the economy, centralisation of AI power, and how business will evolve in an AI-first era. If you haven't already, I suggest you read Ian's AI Nationalism piece as a starter and then comb through The State of AI Report as your main course. They're best consumed together.
5️⃣ Justin Gilmer, Google Brain
Justin dives into the subject of adversarial example research in ML. He argues that adversarial example defence papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defence papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, he establishes a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, he provides a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation. Read their paper here. This work is really fascinating and Justin's frameworks certainly help reason through the true implications of certain types of attacks.
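For readers new to the topic, the fast gradient sign method (FGSM) is the canonical illustration of an adversarial perturbation; it isn't necessarily the attack Justin analyses, but it shows how a small, loss-maximising nudge to the input can flip a model's prediction. The model and data below are toys.

```python
# A minimal FGSM (fast gradient sign method) sketch: perturb an input in the
# direction that increases the loss, under an L-infinity budget epsilon.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)    # "clean" input (toy)
y = torch.tensor([1])                         # true label
epsilon = 0.1                                 # perturbation budget

loss = loss_fn(model(x), y)
loss.backward()

x_adv = (x + epsilon * x.grad.sign()).detach()  # adversarial example
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```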
6️⃣ François Chollet, Google Brain
In his talk, François presents the current limitations of deep learning and suggests routes to overcome them. Specifically, he makes the point that DL models are extremely sensitive to both adversarial perturbations and any input change not seen in the training data, as explored in the prior talk by Justin. A DL model can only make sense of what it has seen before and does not approximate what happens in the brain. To achieve extreme generalisation, François argues that we need better evaluation metrics, richer models and stronger priors. What I find particularly interesting is the proposed blending of symbolic AI (programming) with geometric AI (deep learning). Here, you'd have a modular task-level programme that learns on the fly to solve a specific task using a library of reusable symbolic and geometric modules shared across many tasks and many systems.
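A hypothetical sketch of what such a hybrid might look like: a learned ("geometric") perception module composed with an exact, programmed ("symbolic") module inside a single task-level program. The specific modules and task are mine for illustration, not François's proposal verbatim.

```python
# Hypothetical sketch of mixing "geometric" (learned) and "symbolic" (programmed)
# modules in one task-level program. The modules and task are illustrative.
import torch
import torch.nn as nn

class LearnedPerception(nn.Module):
    """Geometric module: maps raw features to a digit-like score vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 10))

    def forward(self, x):
        return self.net(x).softmax(dim=-1)

def symbolic_adder(digit_a: int, digit_b: int) -> int:
    """Symbolic module: exact, reusable arithmetic that needs no training."""
    return digit_a + digit_b

perception = LearnedPerception()

def program(image_a, image_b):
    """Task-level program composed from the two module libraries."""
    a = perception(image_a).argmax(dim=-1).item()
    b = perception(image_b).argmax(dim=-1).item()
    return symbolic_adder(a, b)

print(program(torch.randn(1, 8), torch.randn(1, 8)))
```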
7️⃣ Blake Richards, University of Toronto
Blake approaches a similar topic to François, albeit from the perspective of a computational neuroscientist. Describing deep learning as end-to-end optimisation, he argues that the brain does in fact show evidence of performing deep learning. It's quite striking how neural networks trained end-to-end on images can fit the neural activity of the visual cortex better than longstanding neuroscience-inspired models. He presents supporting experiments and reconciles the 'biologically problematic' features of backpropagation in very interesting ways. Together, this work has implications for energy-efficient hardware, new methods for regularisation and new types of units for neural networks.
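One well-known example of this kind of reconciliation is feedback alignment (Lillicrap et al.), which sidesteps backpropagation's biologically implausible "weight transport" by propagating errors through fixed random feedback weights rather than the transposed forward weights. The sketch below illustrates that idea on a toy problem; it is not Blake's specific model.

```python
# Sketch of feedback alignment: the backward pass uses a fixed random matrix B
# instead of the transposed forward weights W2.T, avoiding the biologically
# implausible "weight transport" step of vanilla backprop. Toy problem only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(20, 10)) * 0.1   # input -> hidden
W2 = rng.normal(size=(10, 1)) * 0.1    # hidden -> output
B = rng.normal(size=(1, 10)) * 0.1     # fixed random feedback weights

x = rng.normal(size=(64, 20))
y = (x[:, :1] > 0).astype(float)       # toy target

for step in range(200):
    h = np.maximum(0, x @ W1)          # ReLU hidden layer
    y_hat = h @ W2                     # linear output
    err = y_hat - y                    # output error for squared loss

    # Backward pass: feedback alignment propagates the error through B, not W2.T.
    delta_h = (err @ B) * (h > 0)

    W2 -= 0.01 * h.T @ err / len(x)
    W1 -= 0.01 * x.T @ delta_h / len(x)
```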
8️⃣ Phil Keslin, Niantic
Phil gives us a tour-de-force overview of Niantic's popular game franchise, Pokémon Go, and the technical requirements for creating and running a world-scale immersive AR game. He announces the release of the Niantic Real World Platform and the acquisition of Matrix Mill (whose founder spoke at RAAIS 2015). Phil also presents new work on learning depth from single images (Matrix Mill's tech) and shows how it significantly improves the fidelity of a Pikachu running in front of and behind people in AR. We're shown a glimpse of a new multi-player AR game built on Niantic's platform as a teaser for what developers might be creating very soon!
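The core trick behind that occlusion effect can be sketched in a few lines: given a per-pixel depth map predicted from a single camera frame, the virtual character is only drawn where it is closer to the camera than the real scene. The arrays, depth values and "predictor" below are placeholders, not Niantic's pipeline.

```python
# Hypothetical sketch of depth-aware AR occlusion: draw the virtual character
# only where it is nearer to the camera than the predicted scene depth, so it
# can convincingly pass behind people. All inputs are placeholders.
import numpy as np

H, W = 120, 160
frame = np.random.rand(H, W, 3)            # camera frame (placeholder)
scene_depth = np.random.rand(H, W) * 5.0   # predicted per-pixel depth, metres

character_rgb = np.zeros((H, W, 3)); character_rgb[..., 0] = 1.0   # flat red sprite
character_alpha = np.zeros((H, W)); character_alpha[40:80, 60:100] = 1.0
character_depth = np.full((H, W), 2.0)     # virtual character 2 m from camera

# Occlusion test: the character is visible only where it is nearer than the scene.
visible = character_alpha * (character_depth < scene_depth)
composite = frame * (1 - visible[..., None]) + character_rgb * visible[..., None]
```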
9️⃣ Chris Ré, Stanford University
Chris presents work that changed the arc of his lab: Software 2.0, the idea that ML is eating more and more of the software stack. Given that training data is the input to Software 2.0 (vs. engineer-encoded rules), his lab has worked on leveraging many sources of training data of varying quality via higher-level abstractions. To that end, he presents work on overcoming label paucity using Snorkel, an open-source project his lab has built. Briefly, the pipeline consists of 1) users writing labelling functions that generate noisy labels from unlabelled data, 2) feeding these into a generative model that learns the labelling functions' behaviour in order to de-noise them, and 3) using the resulting probabilistic training labels to train a noise-aware discriminative model that effectively labels the original training data.
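Here's a minimal sketch of that pipeline using the open-source snorkel package's labelling API (the v0.9-era interface; the version Chris's lab used at the time may differ). The labelling functions and toy spam/ham data are illustrative only.

```python
# Minimal Snorkel-style sketch of the pipeline above: write labelling functions,
# apply them to unlabelled examples, fit the generative LabelModel to de-noise
# them, and emit probabilistic labels for a downstream discriminative model.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_offer(x):
    return SPAM if "offer" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df_train = pd.DataFrame({"text": [
    "limited time offer, click now",
    "see you at lunch",
    "special offer just for you",
    "thanks again",
]})

# 1) Apply labelling functions to get a (noisy) label matrix.
applier = PandasLFApplier([lf_contains_offer, lf_short_message])
L_train = applier.apply(df_train)

# 2) Fit the generative label model that learns the LFs' behaviour.
label_model = LabelModel(cardinality=2)
label_model.fit(L_train, n_epochs=200, seed=0)

# 3) Probabilistic labels to train a noise-aware discriminative model downstream.
probs = label_model.predict_proba(L_train)
print(probs)
```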
🔟 Luc Vincent, Lyft
Luc kicks off with the story of Street View at Google and his transition to Lyft to build and lead the Level 5 Engineering Center, Lyft's ambitious programme to build self-driving vehicles. He demonstrates recent milestones of this programme, the evolution of their autonomy stack, and makes the case for Lyft's compelling position in the market.
Closing remarks!
I announced my next chapter: Air Street Capital, a specialist venture capital partnership for founders building technology companies that solve substantive global problems. We're a team of investors, engineers, researchers, and operators with deep technology and AI experience from Google, Facebook, Niantic, Lyft, DeepMind, Mapillary, ElementAI and more. We'll be investing from the earliest stages in Europe and the US starting later this year. If you're interested in learning more, drop me a line.
Register your interest for RAAIS 2019!
The Research and Applied Artificial Intelligence Summit (RAAIS) explores the frontiers of AI research and applications on the world's most exciting problems. The London-based AI conference is a one-day event covering machine learning, deep learning, reinforcement learning, data science, healthcare, life sciences, logistics, manufacturing, robotics, fintech, and self-driving cars. Learn more at raais.co.
--
Signing off,
Nathan Benaich, 5 August 2019
Air Street Capital | Twitter | LinkedIn | RAAIS | London.AI