Your back to school guide to AI, by nathan.ai
Greetings from London 🇬🇧! I'm Nathan Benaich. Welcome to the back to school guide to AI, covering Q2 and the summer. As usual, I'll synthesise a narrative that analyses and links important news, data, research and startup activity from the AI world. Grab your beverage of choice and enjoy the read!
Do hit reply if you're up for a brainstorming session on building AI-first products or new research papers, or if you're considering a career move in the startup world.
London.AI #13 reminder (Request 🎫 here): If you're in London this coming Thursday 26th, do check out our London.AI #13 with a focus on healthcare and life sciences. Thanks to Facebook London, we're able to host founders and data scientists presenting on Project Sapiens, Oxford Nanoimaging, Visulytix and Edge Health. Request 🎫 here.
Referred by a friend? Sign up here. Help share by giving it a tweet :)
Technology news, trends and opinions
🚗 Department of Driverless Cars
SoftBank's Vision Fund announced a $2.25B investment into GM-owned Cruise Automation hours before Waymo announced it would 100x its order of Fiat Chrysler vehicles to 62,000. GM, whose position in Cruise is now worth $9.2B, agreed to invest another $1.1B into the company. New employees at Cruise are offered options directly in the company rather than in the GM parent. Meanwhile, Morgan Stanley analysts upgraded their rating on Alphabet because they believe that Waymo could grow to $175B in enterprise value if its business encompasses ride sharing, logistics and technology/product licensing.
Mapbox launched their Vision SDK, which powers mobile AR-based driving navigation and feature detection at the street level. The company is working with ARM to optimise on-device processing and with Microsoft Azure for streaming incremental data updates to the cloud. This is part of Mapbox's push into automotive following SoftBank's $164M capital injection last year.
On the topic of maps, TechCrunch ran a long piece on Apple's efforts to rebuild Maps using first-party data captured from iPhones as well as from vehicles equipped with a full-blown suite of AV sensors. Rather than being explicitly for self-driving, the piece says these HD maps will serve real-world AR and enable much more frequent map updates as the physical world changes. I think this move shows that, for Apple, Maps means more than cartography for navigation: it is live data infrastructure upon which the company can publish applications that require granular real-world understanding, e.g. AR games. Or perhaps it's all to power the killer camera-based ruler feature in iOS 12.
Voyage, the SF-based AV company focused on retirement communities, has built its 2nd generation vehicle on the Chrysler Pacifica Hybrid minivan platform. Along with Waymo's boosted order, it sounds like Chrysler is getting a new lease of life by providing picks and shovels for the AV wave! Voyage also entered a partnership with Enterprise Fleet Management, which will procure, lease and service Voyage's fleet of G2 autonomous vehicles. As such, auto companies focus on what they're good at while technology-focused AV companies like Voyage focus on building best-of-breed software. Commercial terms unknown…
In a wild turn of events, Zoox has gone from being held up as an ambitious bet to reinvent the car and autonomous mobility (as profiled by Bloomberg here and WIRED here) to another major management shakeup that saw the Board-level ousting of its co-founder and resolute visionary CEO, Tim Kentley-Klay. This firing comes just a month after Zoox closed a massive $500M financing round at a $3.2B post-money valuation. The company has raised $800M to date and has 450+ employees. I really hope they make it to market even so.
Uber shut down its self-driving trucking business line (meanwhile trucking company Convoy just raised $185M at a $1B valuation) and its self-driving passenger cars are still off the road post-Arizona. This comes as Uber refocuses on its mission to take you from "A to B" using multi-modal transport, ranging from bikes, cars and scooters to potentially public transport (as Lyft has recently announced). Uber has also accepted a $500M injection from Toyota, which appears to focus on the development of Uber's self-driving technology platform and the potential operation of that fleet.
Aurora is starting to open up a bit, publishing on its approach to building autonomous technology and how it practically handles testing, the design of learning systems, and product engineering.
Nuro, too, has published its approach to building a safe autonomous on-road delivery service.
💪 The giants
Google ultimately decided not to renew its Maven contract with the US DoD. According to emails obtained from the company, the contract was worth at least $15M and could have grown to $250M. The scope included creating a "Google-Earth-like" surveillance tool that enabled users to click on a building and "see everything associated with it", as well as monitor assets of interest (vehicles, people…). A week after this news, Sundar Pichai published "AI at Google: our principles", a set of 7 standards that will actively govern Google's research and product development and impact its business decisions. These include building for safety, avoiding unfair bias, being socially beneficial and accountable to people, observing privacy, upholding scientific rigor and supporting uses that accord with the principles. Sundar also adds that Google won't design or deploy AI in application areas that cause or are likely to cause overall harm, in weapons or technologies designed to injure people, or for surveillance that contravenes human rights and international law. In contrast to the same post available on Google AI's microsite, Sundar adds that Google will nonetheless continue to work with the military and governments in areas such as cybersecurity, training, military recruitment, and search and rescue. The line between supporting cybersecurity for the government/military and not working on surveillance that contravenes human rights is unclear…
Separately, both Google and Facebook announced expansions of their research teams (Google Brain and FAIR, respectively) around the world. Notably, Google chose Ghana, which is a hotbed of ambitious talent eager to work in technology. Just have a poke around Andela to see for yourself! FAIR London is now open, due in part to the acquisition of Bloomsbury.AI (congrats, team!). What's more, DeepMind is supporting machine learning professorships at the founders' alma maters, Cambridge and UCL. Seeing successful alumni return to support future generations is, in my view, far more valuable than the capital gains that many universities attempt to generate by imposing onerous equity ownership/licensing fees on spinouts.
DeepMind's data center cooling project has made it into production at Google data centers.
The US's DARPA announced a $2B investment in AI over the next five years, adding to its 20 existing research programs on the topic. This move indicates that nation states' defense budgets are feeling the gravitational pull of AI as it escalates to a national priority.
💪 Hardware
Google released sparse details about its TPU v3 chip at I/O earlier this year. In this piece, TheNextPlatform drills down into the design and performance. They note that the "TPU v3 is more of a TPU v2.5 than a new generation of chips because most of the new hardware development appears to be happening at a system level around the TPU v3 chip." This blog post is a really neat description of how CPUs, GPUs and TPUs differ in how they run computations and access memory.
The view has traditionally been that GAFAMBAT are focused on data center workloads, which leaves room for new players to compete at the edge. No more: Google has announced the Edge TPU, which runs small models for rapid inference on IoT devices.
Since 2013, China has held the title of hosting the world's most powerful supercomputer. Now, a team at Oak Ridge National Lab in the US has unveiled Summit, a supercomputer capable (at peak performance) of 200 petaflops. This makes it 60% faster than the TaihuLight in China. The Summit machine has over 27,000 NVIDIA GPUs (!) and fills an area the size of two tennis courts. Unsurprisingly, keeping Summit cool is quite a feat: the system pumps 4,000 gallons of water a minute through its cooling loops to carry away about 13 megawatts of heat.
It's clear that access to computational resources (namely the GPU) has driven lots of progress in applied machine learning. But what are the implications of ML hardware for society, governance, surveillance, geopolitics and technological unemployment? In a paper entitled Computational power and the social impact of artificial intelligence, Tim Hwang of MIT Media Lab digs into these issues. Specifically, he examines how changes in computing architectures, technical methodologies and supply chains might influence the future of AI. The paper shines a spotlight on how hardware can exacerbate a range of concerns around ubiquitous surveillance (especially in China), technological unemployment and geopolitical conflict. It notes that trained models implemented directly on custom ASICs operating at the edge make potential bugs or biases less easily rectifiable. As such, entities creating and providing such platforms will see their set of responsibilities grow.
We've previously explored China's ambitions to on-shore a significant semiconductor industry, given the vital role it plays in AI progress and national security. The report here states that in 2014 China accounted for 57% of worldwide semiconductor consumption, while in 2015 it possessed only 6% of the most advanced semiconductor fabrication companies globally. To narrow this gap, the total volume of China's completed overseas semiconductor M&A deals has exceeded $11B. One reason this is a particularly touchy issue for the US is that American chip companies such as NVIDIA depend on the manufacturing capability of China's neighbor, Taiwan. More here.
Facebook continued its AI hardware team build-up by hiring Shahriar Rabii as VP and Head of Silicon. Rabii previously worked at Google, where he (according to LinkedIn) "headed and scaled silicon engineering, product/program management, production and Technology Engineering. He released many products to mass production including Pixel Visual Core for ML and computational photography, Titan family of secure elements, VP9 and AV1 video transcoders and others".
A study in Science (paper here) showed that it is possible to construct a physical artificial neural network made of stacked layers of optical elements.
Intel has been working on a new chip architecture that moves it away from x86 and general-purpose processors. This architecture, termed the Configurable Spatial Accelerator, is a dataflow engine (not a serial processor or vector coprocessor) that can work directly on the graphs that programs create before they are compiled down for a traditional CPU. In this way, the design is reminiscent of Graphcore's Intelligence Processing Unit. Intel is developing the CSA in conjunction with the Department of Defense.
🏥 Healthcare
Making the cover of Nature Medicine, DeepMind and London's Moorfields Eye Hospital published the results of their study predicting eye disease from routine OCT clinical scans (paper here). What's notable about this work is that it is designed to integrate into existing clinical pathways and required significant data collection, labelling and patient outcome tracking to generate ground truth. The study uses a two-stage deep learning approach. First, a raw 3D OCT scan (of multiple slices) is analysed by a U-Net CNN (originally proposed by researchers in Freiburg in 2015) to produce a semantic segmentation map of the eye. This segmentation map mimics how an ophthalmologist first identifies the micro-structures of the eye on an OCT scan before figuring out whether any structures look abnormal and what to do about it. Using this segmentation map, the authors then train a second neural network to predict the appropriate clinical referral path: urgent care (a doctor must see the patient within days), semi-urgent (weeks), routine or just observation. While the two-stage strategy performs as well as a single model learned end-to-end from OCT scan to referral prediction, it affords two advantages: 1) clinical interpretability of the segmentation map and 2) an "intermediate representation" of the OCT scan data that is independent of the device used to generate the scan. This means that a clinic wanting to implement this system would only need to retrain the first segmentation network to adjust for the peculiarities of its scanner. The team is now progressing this work through clinical validation, with further results expected next year. Separately, the DeepMind Health team reported early results from their deep learning-based radiotherapy planning system, which seeks to accelerate the path from diagnosis to radiotherapy administration at UCL Hospital (paper here). Here too they use a 3D U-Net architecture and a significant hand-segmented dataset covering 21 organs in the head and neck region.
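To make the two-stage idea concrete, here is a minimal PyTorch-style sketch. The layer counts, shapes and class counts are illustrative assumptions rather than DeepMind's architecture; the point is how the device-specific segmentation stage decouples from the device-independent referral stage.

```python
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Stage 1: map a raw 3D OCT volume to per-voxel tissue-type logits.
    Stands in for the paper's 3D U-Net; layers here are toy placeholders."""
    def __init__(self, in_channels=1, n_tissue_types=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, n_tissue_types, kernel_size=1),
        )
    def forward(self, scan):            # scan: (B, 1, D, H, W)
        return self.net(scan)           # logits: (B, T, D, H, W)

class ReferralNet(nn.Module):
    """Stage 2: map the device-independent segmentation map to one of four
    referral decisions (urgent / semi-urgent / routine / observation)."""
    def __init__(self, n_tissue_types=15, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_tissue_types, 32, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classify = nn.Linear(32, n_classes)
    def forward(self, seg_logits):
        x = self.features(seg_logits.softmax(dim=1)).flatten(1)
        return self.classify(x)

# To deploy on a new OCT device, only SegmentationNet is retrained;
# ReferralNet keeps operating on the shared intermediate representation.
seg_net, ref_net = SegmentationNet(), ReferralNet()
scan = torch.randn(1, 1, 16, 64, 64)    # toy OCT volume
referral_logits = ref_net(seg_net(scan))
```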
London-based Kheiron Medical Technologies has conducted a trial of its deep learning-based mammogram analysis system on 5,000 patients with 1-2 year follow-up, and is due to release data demonstrating human-level performance on diagnostic assistance tasks. The data is not yet public, but the company is said to have been awarded regulatory approval from European agencies.
The FDA permitted the marketing of computer-aided detection and diagnosis software designed to detect wrist fractures in adult patients from 2D X-rays. The software, called OsteoDetect, is produced by Imagen, a 40-person NYC-based startup. The company submitted two bodies of evidence to the FDA: 1) a retrospective study of 1,000 radiographs comparing its software against three board-certified hand surgeons in detecting and localising wrist fractures (note: three human experts sounds like a small comparison group), and 2) a retrospective study of 24 providers who reviewed 200 patient cases.
On the topic of detecting fractures on X-rays, researchers from the University of Adelaide and the University of Queensland present a model-agnostic interpretability method that generates textual explanations for deep learning-based X-ray fracture detection software. They show evidence that doctors prefer the location highlights and textual descriptions together rather than either method alone.
🇨🇳 AI in China
Many governments have now published national AI plans and this living blog post lists resources and summaries that describe them all.
Now, let's focus on China. This piece suggests that, in contrast to Europe, China is throwing extreme funding behind new companies, heavily promoting local winners and developing a clear industrial policy for the digital sector. Take note!
Tencent, Alibaba and JD.com are separately giving brick-and-mortar retail stores a technology facelift to boost sales and weave them into their commerce ecosystems. The view is that consumers will no longer distinguish between online and offline commerce because stores in both worlds will fall under the same umbrella company. Alibaba, for example, has refitted one million mom-and-pop stores with in-store sensors and analytics in the last year. These stores become part of the Tmall brand and must procure at least $1,500 of goods per month from the Tmall platform. Here's a cool walkthrough of the in-store experience at an Alibaba concept store.
JD.com, which offers same-day delivery across the country as long as an order is placed before 11am, runs a multi-modal automation system for warehousing, order processing, packing and delivery (e.g. with these robots). A JD.com facility can automatically process 200k orders a day. Scale in technology investing is everything; the cost to get there doesn't matter, claims their CTO.
Abacus News released a China Internet Report 2018 that is very much worth your time. It reinforces the view that the US and China are parallel universes with regards to technology, where almost every layer of the stack is owned by local megaplayers. What's more, there are many innovative, locally-tailored products that are massively successful in China but haven't even been conceived in the US yet.
The story in the media is often about China investing in or attempting to buy US technology companies working in AI. The opposite happened recently when the US-based programmable logic devices supplier, Xilinx, purchased DeePhi Technology, a Chinese startup (and Xilinx portfolio investment) working on ML solutions using the Xilinx platform.
China has also made several moves over the last few years to deploy its hardware and software solutions for public security use cases in Africa, with Zimbabwe being the latest point of focus.
Meanwhile, back home, reports suggest that industrial factories in the Zhejiang, Jiangsu and Guangdong provinces have lost up to 40% of their labor force to automation over the last three years.
🔮 Where AI is heading next
McKinsey have published several simulations on the effects of early or late adoption of AI and the resulting economic gains, as well as how AI could widen gaps between various countries. Useful charts inside.
🔬 On research directions
Turing Award winner Judea Pearl and his work on Bayesian networks in the 80s are profiled in The Atlantic. He believes that "all the impressive achievements of deep learning amount to just curve fitting". To achieve major breakthroughs, Pearl argues that machines must move beyond reasoning by association (curve fitting) towards causal reasoning. This means a machine must genuinely understand the drivers of cause and effect, as well as be able to ask counterfactual questions of a causal relationship. For machines to invoke causal models, Pearl says we must equip them with a model of the environment: "Without a model of reality, you cannot expect the machine to behave intelligently in that reality." Machines must then proactively posit world models and iterate over them with experience. This feels intuitively correct (the toy example below shows one thing curve fitting alone cannot express).
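To make the association-versus-intervention distinction concrete, here is a tiny structural causal model in Python. The rain/sprinkler setup is a standard illustration in Pearl's writing; the probabilities are made up.

```python
import random

# Toy structural causal model: rain influences the sprinkler, and both wet the grass.
def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    # do() surgery: an intervention overrides the sprinkler's natural mechanism.
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Association: P(rain | sprinkler on) under passive observation...
obs = [sample() for _ in range(100_000)]
p_rain_given_sprinkler = (
    sum(r for r, s, w in obs if s) / max(1, sum(1 for r, s, w in obs if s))
)
# ...versus intervention: P(rain | do(sprinkler := on)). Forcing the sprinkler
# on tells us nothing about rain, a distinction curve fitting cannot capture.
intv = [sample(do_sprinkler=True) for _ in range(100_000)]
p_rain_do_sprinkler = sum(r for r, s, w in intv) / len(intv)

print(p_rain_given_sprinkler)  # ~0.0: the sprinkler only runs when it is dry
print(p_rain_do_sprinkler)     # ~0.3: intervening leaves rain untouched
```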
In a series of blog posts that caught fire on Twitter, Filip Piekniewski opines on the hype of deep learning and its limitations. In Part 1 and Part 2, he argues that achievements in deep learning have come at great computational expense without solving key problems of generalisation and robustness. In Part 3 (worth a read), he suggests that the AI field should be focused on Moravec's Paradox, which posits that the apparently simplest real-world tasks (low-level sensorimotor skills that babies quickly learn) are actually far more complex than we think (and more computationally demanding than high-level reasoning).
Several groups are moving in this direction. For example, François Chollet's talk at RAAIS 2018 offers a pragmatic overview (YouTube link) of how stronger priors, richer models (both geometric and symbolic) and better evaluation metrics will help us expand the capabilities of today's intelligent systems. Furthermore, PROWLER.io's work on industrial-grade, data-efficient decision-making systems, which combine the predictive power of probabilistic modelling with model-based RL decision-making, can help too. Meanwhile, researchers at Google Brain, DeepMind, MIT and Edinburgh explore how today's AI systems could express combinatorial generalization (arXiv paper), a hallmark of human intelligence that allows us to construct new inferences, predictions and behaviors from known building blocks. In particular, they present a general framework for entity- and relation-based reasoning, which they term graph networks, for unifying and extending existing methods that operate on graphs. They also describe key design principles for building powerful architectures using graph network blocks (one such block is sketched below).
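As a rough sketch of what a single graph network block computes, following the edge/node/global update structure described in the paper. The update functions here are random placeholder projections standing in for trained MLPs, and the sum/mean aggregations are one of several valid choices.

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, globals_, phi_e, phi_v, phi_u):
    """One graph-network block in the spirit of Battaglia et al. (2018).
    nodes: (N, dv), edges: (E, de), senders/receivers: (E,) int indices,
    globals_: (du,). phi_* are arbitrary update functions (e.g. MLPs)."""
    # 1) Update every edge from its features plus both endpoint nodes.
    new_edges = phi_e(edges, nodes[senders], nodes[receivers], globals_)
    # 2) Update every node from its aggregated incoming edges.
    agg = np.zeros((nodes.shape[0], new_edges.shape[1]))
    np.add.at(agg, receivers, new_edges)          # sum-aggregate per receiver
    new_nodes = phi_v(agg, nodes, globals_)
    # 3) Update the global attribute from aggregated edges and nodes.
    new_globals = phi_u(new_edges.mean(0), new_nodes.mean(0), globals_)
    return new_nodes, new_edges, new_globals

# Toy usage with concatenate-and-project updates standing in for MLPs:
rng = np.random.default_rng(0)
proj = lambda x, d: x @ rng.normal(size=(x.shape[-1], d))
phi_e = lambda e, s, r, u: proj(np.concatenate([e, s, r], -1), 8)
phi_v = lambda a, v, u: proj(np.concatenate([a, v], -1), 4)
phi_u = lambda e, v, u: proj(np.concatenate([e, v, u], -1), 2)
nodes, edges = rng.normal(size=(5, 4)), rng.normal(size=(6, 3))
senders, receivers = rng.integers(0, 5, 6), rng.integers(0, 5, 6)
out = gn_block(nodes, edges, senders, receivers, rng.normal(size=2),
               phi_e, phi_v, phi_u)
```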
🎨 On product design
I think we're still in the very early days of writing best practice for product design and development around the kernel of AI technology. In an excellent piece entitled Building AI-first products, David Bessis makes this case clearly: "You're building 'AI-first' when you're taking AI as the starting point of the design process. It's no longer about adding cool AI-powered features, it's about removing pre-AI legacy features and creating an entirely new, AI-centric product experience. AI-first products are products that just would not make sense without AI...AI-first design is about renegotiating the deal between what humans do and what machines do." Indeed, "any AI-first product changes its users' life by taking away something that used to be part of their job. Identifying the right something is the most important AI-first product design question." I think this is a good working filter to determine whether a product or company is really AI-first or is using a sprinkling of AI to make existing (legacy) functionality a bit better. What's more, he rightly points out that "there is no established methodology for building AI-first products." Indeed, the book on AI-first product management is still being written.
🔨 On tooling for AI-first products
Lukas Biewald, ex-CEO and co-founder of Figure Eight (the original data labelling company), has set up his second ML tooling business and shares an insightful piece on why he's done so. In particular, he writes: "Ten years ago training data was the biggest problem holding back real world machine learning. Today, the biggest pain is a lack of basic software and best practices to manage a completely new style of coding." For more detail on how software 2.0 (programming a machine to learn rules from data) differs from software 1.0 (explicitly programming rules into a machine), watch Andrej Karpathy's talk on Building the software 2.0 stack and Chris Ré's talk at RAAIS 2018 on Software 2.0.
Finally, François Chollet of Google Brain published a widely shared and valuable list of learnings on software development, API design and careers.
🔬 Research
Here's a selection of impactful work that caught my eye:
MolGAN: An implicit generative model for small molecular graphs, University of Amsterdam. In the last year, I've noted quite an uptick in the number of papers applying ML techniques to various steps of the drug discovery and development pipeline. Here the authors address the molecule generation and search space problem. Specifically, how do we interrogate the vast search space of drug-like molecules to determine which subset is likely to be potent in the real world against a specific target? Prior methods used SMILES (string-based) representations of molecules and likelihood-based methods that were prone to mode collapse. This paper shows that a generative adversarial network can operate on the molecular graph representation (Lewis structure) and learn to generate compounds that are almost 100% valid against the QM9 chemical dataset. Moreover, they add a reinforcement learning objective to encourage the generation of molecules with specific desired chemical properties.
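A hedged sketch of the combined objective: MolGAN trains the generator against both a discriminator (graph plausibility) and a learned reward network that approximates desired chemical properties, mixed by a coefficient λ. The modules below are toy stand-ins, not the paper's relational-GCN architectures.

```python
import torch

lam = 0.5   # trade-off between the GAN term and the RL term (paper's lambda)

def generator_loss(fake_graphs, discriminator, reward_net):
    # WGAN-style term: fool the discriminator into scoring generated graphs highly.
    gan_term = -discriminator(fake_graphs).mean()
    # RL term (deterministic policy gradient style): maximise approximated reward.
    rl_term = -reward_net(fake_graphs).mean()
    return lam * gan_term + (1 - lam) * rl_term

# Toy stand-ins: a "graph" here is just a dense feature tensor.
discriminator = torch.nn.Linear(16, 1)
reward_net = torch.nn.Linear(16, 1)
fake_graphs = torch.randn(32, 16)
loss = generator_loss(fake_graphs, discriminator, reward_net)
loss.backward()
```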
Planning chemical syntheses with deep neural networks and symbolic AI, Westfälische Wilhelms-Universität Münster, Shanghai University, Benevolent.AI. In this paper, the authors ask whether a deep learning approach can recapitulate the ability of chemists to design efficient synthesis plans for drug-like molecules of interest. I hope my organic chemistry friends will forgive me for suggesting that you think of this problem as recipe generation for a complex, exotic meal. Given a finished meal and a set of starting ingredients, each of which has implicit rules about how they can be manipulated and validly combined, what is the optimal recipe to follow? As a starting point, the authors use transformation rules from a database of 12.4 million single-step reactions. Their system combines three neural networks with Monte Carlo Tree Search ("3N-MCTS", related to AlphaGo). The first neural network (the expansion policy) guides the search in promising directions by proposing a restricted number of automatically extracted transformations. A second neural network then predicts whether the proposed reactions are actually chemically feasible. Finally, to estimate the position value, transformations are sampled from a third neural network during the rollout phase. 3N-MCTS solved more than 80% of the test set with a time limit of 5 seconds per target molecule, whereas the existing state-of-the-art search methods, neural BFS and heuristic BFS, solved 40% and 0% of the test set, respectively. Even when given a 20-minute runtime per molecule, MCTS outperformed by large margins. Interestingly, however, when provided with infinite runtime the algorithms converge to the same performance. Thus, this new method is both much quicker at generating synthesis plans and far less dependent on tedious, biased expert encoding or curation of datasets.
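A toy skeleton of how the three networks divide the labour inside the search. Molecules, rules and purchasable stock are stand-in integers, the networks are random stubs, and a real implementation would maintain a proper UCT tree with value backup along the visited path.

```python
import random

def expansion_policy(state, k=10):
    """Network 1: propose a shortlist of promising reaction templates."""
    return random.sample(range(1000), k)

def in_scope_filter(state, rule):
    """Network 2: predict whether a proposed reaction is chemically feasible."""
    return random.random() < 0.3

def rollout_value(state, purchasable, depth=5):
    """Network 3 (rollout phase): estimate how close a state is to
    purchasable starting materials by sampling cheap transformations."""
    for _ in range(depth):
        if state <= purchasable:
            return 1.0
        state = {max(m - 1, 0) for m in state}        # toy simplification step
    return 1.0 if state <= purchasable else 0.0

def mcts_step(state, purchasable):
    """One selection/expansion/evaluation pass of the search."""
    for rule in expansion_policy(state):
        if in_scope_filter(state, rule):              # veto dubious expansions
            child = {max(m - rule % 3, 0) for m in state}  # toy rule application
            return child, rollout_value(child, purchasable)
    return state, 0.0

# 'Molecules' are toy integers; the purchasable stock is anything <= 2.
state, purchasable = {9, 7}, set(range(3))
for _ in range(20):
    state, value = mcts_step(state, purchasable)
print(state, value)
```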
Playing hard exploration games by watching YouTube, DeepMind. Learning tasks using deep RL in complex environments with sparse rewards is a challenge. One can use either imitation learning (a human demonstrates good behaviours) or intrinsic motivation methods that provide an auxiliary reward encouraging the agent to explore states or action trajectories that are "novel" or "informative" with respect to some measure. The challenge the authors take on in this paper is to develop a learning system that relies on noisy, unaligned footage without direct access to data from the simulator (note: most game-playing AIs have this direct access). First, they learn to map unaligned videos from multiple sources to a common representation using self-supervised objectives constructed over both time and modality (i.e. vision and sound). Second, they embed a single YouTube video in this representation to construct a reward function that encourages an agent to imitate human gameplay. This method of one-shot imitation allows their agent to convincingly exceed human-level performance on Montezuma's Revenge, Pitfall! and Private Eye even though the agent is not presented with any environment rewards.
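A sketch of the checkpoint-based imitation reward, loosely following the paper's scheme of rewarding the agent for matching embedded checkpoints along the demonstration in order. The `embed` function is a placeholder for the learned cross-modal encoder, and the checkpoint spacing, bonus and threshold here are illustrative.

```python
import numpy as np

def make_checkpoints(demo_frames, embed, every_n=16):
    """Embed every Nth frame of the single demonstration video."""
    return [embed(f) for f in demo_frames[::every_n]]

def imitation_reward(agent_frame, checkpoints, next_idx, embed, threshold=0.5):
    """Return (reward, updated checkpoint index). The agent earns a sparse
    bonus the first time its observation embedding is close enough to the
    next unvisited checkpoint, encouraging it to follow the demo in sequence."""
    if next_idx >= len(checkpoints):
        return 0.0, next_idx
    z, c = embed(agent_frame), checkpoints[next_idx]
    similarity = z @ c / (np.linalg.norm(z) * np.linalg.norm(c) + 1e-8)
    if similarity > threshold:
        return 0.5, next_idx + 1
    return 0.0, next_idx

# Toy usage with a random stand-in 'encoder' that ignores its input frame.
rng = np.random.default_rng(0)
embed = lambda frame: rng.normal(size=8)
demo = [None] * 128
ckpts = make_checkpoints(demo, embed)
reward, idx = imitation_reward(None, ckpts, 0, embed)
```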
The Limits and Potentials of Deep Learning for Robotics, Queensland University of Technology, Technical University of Berlin, University of Notre Dame. In this review, the authors discuss some current research questions (focused on perception or acting) and challenges for deep learning in robotics. The paper motivates work in several new directions: "a) Robots that could utilize their embodiment to reduce the uncertainty in perception, decision making, and execution; b) Robots that learn complex multi-stage tasks, while incorporating prior model knowledge or heuristics, and exploiting a semantic understanding of their environment; c) Robots that learn to discover and exploit the rich semantic regularities and geometric structure of the world, to operate more robustly in realistic environments with open-set characteristics."
Blind Justice: Fairness with Encrypted Sensitive Attributes, Cambridge, Tübingen, UCL. Excerpt from the paper that provides a great overview: "Real world fair learning has suffered from a dilemma: in order to enforce fairness, sensitive attributes must be examined; yet in many situations, users may feel uncomfortable in revealing these attributes, or modelers may be legally restricted in collecting and utilizing them. By introducing recent methods from multi-party computation and extending them to handle linear constraints as required for various notions of fairness, the authors have demonstrated that it is practical on real-world datasets to: (i) certify and sign a model as fair; (ii) learn a fair model; and (iii) verify that a fair-certified model has indeed been used; all while maintaining cryptographic privacy of all users' sensitive attributes. Connecting concerns in privacy, algorithmic fairness and accountability, our proposal empowers regulators to provide better oversight, modelers to develop fair and private models, and users to retain control over data they consider highly sensitive."
Other highlights include:
Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net, Uber. Most self-driving software pipelines consist of scene understanding, object tracking, motion forecasting and motion planning modules. Each of these problems is engineered or learned separately, even though tracking and prediction often help inform object detection (i.e. cars and people move in noticeably different ways). Here, researchers at Uber ATG Toronto propose a novel deep neural network that jointly reasons about 3D detection, tracking and motion forecasting given data captured by a 3D bird's-eye-view sensor. They also present a large real-world driving dataset captured in the US and show that joint training yields very good accuracy on each individual task.
Plenty has been written about OpenAI Five, so I'll direct you to their results blog post here. Briefly: they won lots of games :)
MnasNet: Platform-Aware Neural Architecture Search for Mobile, Google Brain. This work proposes an automated neural architecture search approach for designing resource-constrained mobile CNN models. The search includes latency in the reward function to balance the tradeoff between accuracy and inference latency on the Pixel phone platform (the reward is sketched below).
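The latency-aware reward itself is compact enough to write down. The functional form follows the paper; the target latency and exponent below are illustrative values rather than a faithful reproduction of the experimental setup.

```python
def mnas_reward(accuracy, latency_ms, target_ms=75.0, w=-0.07):
    """Latency-aware NAS reward: accuracy scaled by (latency / target)^w,
    softly penalising candidate models slower than the target latency."""
    return accuracy * (latency_ms / target_ms) ** w

print(mnas_reward(0.75, 75))    # on target: reward equals accuracy (0.75)
print(mnas_reward(0.76, 120))   # slightly more accurate but slower: ~0.735
```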
Unsupervised machine translation: A novel approach to provide fast, accurate translations for more languages, Facebook. This work demonstrates major improvements to unsupervised machine translation.
BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling, UC Berkeley, Georgia Tech, Peking University, Uber AI Labs. This work presents a new driving database of 100k street-level video recordings of 40 seconds each, drawn from 50k rides. It provides 40 object classes for classification and semantic segmentation. While it lacks geographic diversity (roads in NYC, Berkeley, SF and the Bay Area), the dataset is video, so single image frames come with a temporal dimension that is useful for modelling driving behaviours. The dataset can be accessed here.
Resources
Frameworks for approaching the machine learning process: a simple stepwise guide that is helpful to keep in mind when evaluating proposed approaches to using ML to solve problems.
OpenAI expanded their game-based RL training environments from 70 Atari games and 30 Sega games to over 1,000 games across a variety of backing emulators (minimal usage below).
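Getting started takes a few lines with the gym-retro package. Airstriker-Genesis is the freely bundled ROM used in the project's quickstart; other games require importing ROMs you own.

```python
import retro  # pip install gym-retro

# Create a Retro environment and run a random policy for one episode.
env = retro.make(game='Airstriker-Genesis')
obs = env.reset()
done = False
while not done:
    # Sample a random button combination each frame.
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```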
RunwayML is a toolkit that adds AI capabilities to design and creative platforms. It provides an intuitive visual interface and pre-trained ML models to allow creatives to experiment client-side. Blog post motivating the work is here.
Here's a series of blog posts with code examples for how to implement data science into a business such that the function provides key input for product development.
Using Apple's ARKit 2 to control your mobile interface with your eyes. Pretty weird!
This blog post explains how knowledge distillation works in machine learning: key learnings are condensed in one network and transferred to another network (or task).
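The core trick fits in a few lines of PyTorch: soften both networks' outputs with a temperature T and train the student to match the teacher's distribution alongside the usual hard-label loss. The values of T and alpha below are typical choices, not canon.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: KL between temperature-softened teacher and
    student distributions, plus standard cross-entropy on the hard labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients so the soft term's magnitude is T-independent.
    kd = F.kl_div(soft_student, soft_targets, reduction='batchmean') * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)   # frozen teacher outputs
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```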
Machine learning models can suffer from erroneous assumptions baked into the learning algorithm (bias) or from sensitivity to small fluctuations in the training set (variance). This beautifully crafted web app explains why.
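A quick numerical illustration of the trade-off, fitting polynomials of increasing degree to noisy samples of a sine wave and measuring error on fresh test points:

```python
import numpy as np

rng = np.random.default_rng(0)
x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)

for degree in (1, 4, 15):
    errors = []
    for _ in range(100):                       # many resampled training sets
        x = rng.uniform(0, 1, 20)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 20)
        coeffs = np.polyfit(x, y, degree)
        errors.append(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))
    # Degree 1 underfits (high bias); degree 15 tracks noise (high variance).
    print(degree, round(float(np.mean(errors)), 3))
```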
Sick of searching for open source datasets? Both Google and Microsoft have released portals that index public datasets for you, and another resource is Academic Torrents.
A nice overview discussion of how inverse reinforcement learning works.
NLP-progress is a repository to track the progress of natural language processing, including the datasets and the current state-of-the-art for the most common NLP tasks.
A visualisation of accepted papers at NIPS and a comparison with ICML. Google wins both :-/
Success stories for deep reinforcement learning, a summary by David Silver here.
💰 Venture capital financings and exits
Quite a few big ticket investments, including:
SenseTime, the Chinese maker of facial recognition software, raised a $620M Series C two months after a $600M round led by Alibaba. The business employs 413 people (+54% in the past year), over 50% of whom are in engineering and research. This makes SenseTime 2x larger than its competitor Face++ (Megvii).
Automation Anywhere, a US-based RPA company founded in 2003, raised a $250M round from NEA and Goldman Sachs at a $1.8B valuation. The company says revenue has grown 100% YoY.
UiPath, a European RPA company born a couple of years after Automation Anywhere, raised a $225M Series C led by Sequoia Capital and CapitalG. The company announced that it has grown from $0 to $100M ARR in under 21 months. Yikes!
Starship Technologies, the Estonian company offering an on-demand autonomous ground delivery robot service, raised an additional $25M and brought on a five-year Airbnb executive as CEO. Robots managed by Starship have covered >100k miles across 20 countries and 200 cities.
Tractable, the London-based company offering AI-based repair cost predictions for auto and disaster damage insurance, raised a $20M Series B financing led by Insight Venture Partners.
GreyOrange, the US maker of automated warehouse robotics (think Kiva-esque), closed a $140M financing round as the market continues to grow under pressure of consumer demand for immediate delivery satisfaction.
A couple of M&A deals, including:
Vertex.AI was acquired by Intel to roll into the company's AI Products Group, specifically Intel Movidius. The sub-10-person team at Vertex.AI developed PlaidML, a platform for deploying deep learning models on any device, especially those running macOS or Windows. It supports ONNX and Keras, and claims to be often 10x faster (or more) than popular platforms (like TensorFlow-CPU) because it supports all GPUs, independent of make and model. The business was founded in 2015 in Seattle; the deal price was undisclosed.
FeatureX was acquired by Orbital Insight to boost the latter's computer vision-based analysis of satellite imagery. FeatureX, based in Boston and started in 2016, was backed by Tudor Investment Corporation. In fact, FeatureX's CEO had experience applying ML to the hedge fund business, and the company focused on deriving insights from satellite imagery for trading financial markets; hence the fit with Orbital Insight. The deal price was undisclosed.
Lobe.ai was acquired by Microsoft to help non-technical users build ML models and pipelines. Lobe launched on Hacker News earlier this year and raised funding from Lowercase Capital and Tony Fadell. Its CEO, Mike Matas, is recognised as a very gifted user interface designer, with experience from Apple, Facebook (via the acquisition of his company Push Pop Press in 2011) and Nest. The price of the deal was undisclosed.
----
Congrats on making it to the end!
Speak soon,
Nathan