❤️ Your guide to AI: January 2021
Dear readers,
Welcome to Your guide to AI. Here you’ll find an analytical (and opinionated) narrative covering key developments in AI tech, geopolitics, health/bio, startups, research, and blogs over the month of January 2021.
Two quick shout outs before we get started:
On Thursday, I announced Air Street’s latest investment into Intenseye, the AI-first workplace safety company. Their software already keeps over 30,000 industrial professionals out of harm's way while at work in their facilities.
We’re back with London.AI! The next edition runs 6-8pm GMT on Thursday 25 February and is all about NLP. Come hang out with Neal Lathia of Monzo (the coral-colored UK bank), Aaro Isosaari of Flowrite (AI writing assistant), and Raza Habib of Humanloop (active learning for NLP).
As always, if you’d like to chat about something you’re working on, share some news before it happens, or have feedback on the issue, just hit reply! I'm looking to add 5 seed stage AI-first companies to the Air Street Capital portfolio this year - thanks for last time's awesome referrals 💎
If you enjoyed the read, I’d appreciate you hitting forward to a couple of friends.
🆕 Technology news and opinions
🏥 Life (and) science
The last newsletter issue discussed how AI models in healthcare must learn clinically-relevant tasks. To continue this thread, Xiao (who works on SPIRIT-AI and CONSORT-AI) raised an important point that is easy to forget: improved disease detection does not automatically equal better patient outcomes. In a striking review of melanoma, it turns out that despite a six-fold increase in incidence (driven in part by better detection of the disease), patient mortality has remained totally flat. So when you consider the next paper or startup claiming improved detection of disease with AI, ask whether there is any evidence for improved outcomes.
The gut is your second brain: new evidence from a large international study found that the specific composition of gut microbe species is strongly linked to diet and health. The collaborative project involving King’s, Harvard, MGH, Trento and ZOE (an Air Street portfolio company) found a group of 15 “good” and 15 “bad” gut microbes that are linked to inflammation, blood sugar control and overall body weight. With trillions of bacteria in your gut and a precise understanding of your biology (e.g. blood fat and blood sugar control), it’s now possible to use machine learning to personalize food recommendations.
2020’s darling life science technology is undoubtedly mRNA vaccines. This technology makes use of chemical synthesis to rapidly manufacture arbitrary mRNA sequences that encode for proteins of interest. Combined with lipid nanoparticles, mRNAs can be systemically delivered to kickstart the immune system to fight infections. Personally, I see exciting ways that machine learning can be integrated into rational and predictive mRNA vaccine design. In a new Science paper entitled Learning the language of viral evolution and escape, MIT scientists make strides in this direction. They consider today’s topical challenge: SARS-CoV-2 is rapidly mutating to become more infectious and potentially less susceptible to vaccines developed against earlier strains. Today, we wait until these mutations are detected in the wild and then scramble to a) determine whether existing vaccines are still effective and b) produce new ones. Instead of being reactive, why not go on the offensive by modeling viral escape using unsupervised language models? This paper does just that by training models to predict whether sequence mutations to viral spike proteins would lead to structural escape. We can then empirically test these mutants and stockpile vaccine doses specific to these mutations in case they arise. AI-first mRNA vaccines 🔥
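To make the idea concrete, here is a minimal sketch of how one might rank candidate escape mutations with a language model, loosely following the paper's combination of "semantic change" and "grammaticality". The protein language model is assumed: `sequence_log_likelihood` and `sequence_embedding` are random placeholders standing in for a real pre-trained model, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical stand-ins for a pre-trained protein language model. In the
# paper's setting these would come from a model trained on viral protein
# sequences; here they are random placeholders so the sketch runs end to end.
rng = np.random.default_rng(0)

def sequence_log_likelihood(seq: str) -> float:
    """Placeholder 'grammaticality': how plausible the mutated sequence looks."""
    return float(rng.normal())

def sequence_embedding(seq: str) -> np.ndarray:
    """Placeholder semantic embedding of the full sequence."""
    return rng.normal(size=64)

def rank_escape_candidates(wild_type: str, top_k: int = 5):
    """Score every single-residue mutant by combining semantic change
    (embedding distance from wild type) with grammaticality."""
    wt_emb = sequence_embedding(wild_type)
    candidates = []
    for pos, wt_aa in enumerate(wild_type):
        for aa in AMINO_ACIDS:
            if aa == wt_aa:
                continue
            mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
            semantic_change = np.linalg.norm(sequence_embedding(mutant) - wt_emb)
            grammaticality = sequence_log_likelihood(mutant)
            candidates.append((f"{wt_aa}{pos + 1}{aa}", semantic_change, grammaticality))
    # Rank by the sum of ranks on both axes: high semantic change AND high plausibility.
    sem_rank = np.argsort(np.argsort([-c[1] for c in candidates]))
    gram_rank = np.argsort(np.argsort([-c[2] for c in candidates]))
    order = np.argsort(sem_rank + gram_rank)
    return [candidates[i][0] for i in order[:top_k]]

print(rank_escape_candidates("MFVFLVLLPLVSSQ"))  # toy spike fragment
```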
🌎 The (geo)politics of AI
The UK’s Office for AI, DCMS and BEIS published their AI Roadmap. The document sets recommendations for a potential UK National AI strategy across R&D, skills and diversity, data, infrastructure and public trust. I’m particularly interested in solutions for increasing the magnetism of the UK as a center for AI R&D: massively funding our universities, boosting training programs, incentivizing startup creation and, especially, revamping our spinout playbook to make it the most permissive in the world. The stakes here are very high. Looking at recent data from NeurIPS 2020 (thanks, Sergei @ Criteo!), US-based organizations published almost 500% more papers than their UK peers. At ICML 2020, the US was again ahead of the UK, this time by 672%. The UK certainly has the raw potential to do far better, but it continues to lose talent to the US. Post-Brexit immigration policy is so far unlikely to stem the bleeding unless we take a radical approach.
On a related topic, Dealroom published a report on European deep technology, which I had the pleasure of contributing to. It’s fascinating to see funding into deep technology themes (e.g. AI, biotech, quantum, energy) grow almost 15x in the last decade to now capture one-quarter of all European venture capital funding. We clearly have a huge opportunity to be a world leader in deep technology.
In the US, Congress approved the National Defense Authorization Act for 2021, which includes $741 billion for defense spending and many provisions with consequences for AI. These are summarized here. I thought it was neat to see that the Secretary of Energy is directed to focus on integrating AI systems into energy simulations and control systems to enhance decision making. The NSF is also authorized to establish a network of AI-focused research institutes with funding for 5 years.
President Joe Biden also set up his White House Office of Science and Technology Policy. In a big win for science, he nominated Eric Lander, a truly outstanding scientist, to lead it. Lander is the MIT mathematician and geneticist who played a big part in the Human Genome Project and founded the powerhouse Broad Institute. In addition, Frances Arnold, a Nobel laureate in Chemistry and a world leader in protein engineering, was chosen to co-chair the President’s Council of Advisors on Science and Technology.
🍪 Hardware
European carmakers including Renault, Daimler, and Volkswagen are suffering from a shortage of semiconductors that is forcing them to cut vehicle production. While these chips aren’t yet used to run AI workloads in cars, the growth of electric vehicles is driving up demand for chips in automotive. At the same time, smartphones are using more chips too, and it appears that semiconductor fabs are prioritizing those shipments. Although the largest auto chip suppliers, e.g. Infineon and NXP, are both European companies with European fabs, we clearly do not have enough domestic manufacturing capacity. With landmark European auto companies suffering, this news hits where it hurts. It will add more urgency to Europe’s ambition to invest $145B to reach technological sovereignty across the semiconductor supply chain.
International companies are dialing up the heat on NVIDIA’s acquisition of Arm after the UK’s CMA opened a consultation for market participants to comment. Because it’s Valentine’s Day, I’m featuring the best fanmail of the month. This one is a beautifully topical poem:
Roses are red
Violets are blue
If NVIDIA doesn’t acquire ARM
Your prediction will have been true
-- Neal from London (winner of ‘your guide to AI fanmail’ 14 Feb, 2021)
Big tech
As predicted in our recent State of AI Report, a major corporate AI lab appears to have shut its doors. Jeff Ding reported that Alibaba’s AI lab fizzled out and its staff was absorbed into Alibaba’s Cloud Intelligence organization.
Amazon has implemented bias detection methods developed by Brent Mittelstadt, Chris Russell and Sandra Wachter into SageMaker Clarify. As discussed in their RAAIS 2020 talk here, their conditional demographic disparity test helps assess fairness in algorithmic modeling. Bias and ethics of AI really hit prime time in 2020 after progressing for many years without due attention in academia.
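For the curious, the metric itself is straightforward. Below is a minimal pandas sketch of demographic disparity and its conditional variant on a toy loan-approval table; this is the underlying statistic only, not the SageMaker Clarify API, and the data and column names are made up.

```python
import pandas as pd

def demographic_disparity(df, facet_col, facet_value, outcome_col):
    """DD for one group: share of rejected outcomes coming from the group
    minus the share of accepted outcomes coming from the group."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    p_rejected = (rejected[facet_col] == facet_value).mean() if len(rejected) else 0.0
    p_accepted = (accepted[facet_col] == facet_value).mean() if len(accepted) else 0.0
    return p_rejected - p_accepted

def conditional_demographic_disparity(df, facet_col, facet_value, outcome_col, strata_col):
    """Weighted average of DD within each stratum of a conditioning attribute."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(strata_col):
        dd = demographic_disparity(stratum, facet_col, facet_value, outcome_col)
        cdd += len(stratum) / total * dd
    return cdd

# Toy example: loan decisions, conditioned on an income band (illustrative data).
data = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "f", "m", "f", "m"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
    "income_band": ["low", "low", "low", "high", "high", "high", "low", "high"],
})
print(conditional_demographic_disparity(data, "gender", "f", "approved", "income_band"))
```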
🔬Research & Development
Here’s a selection of impactful work that caught my eye:
Making pre-trained language models better few-shot learners, Princeton and MIT. This work proposes techniques to make few-shot learning work with smaller pre-trained language models such as RoBERTa. They introduce a prompt-based fine-tuning step and incorporate demonstrations into each training context. On classification and regression tasks, they demonstrate up to 30% outperformance (11% on average) compared to vanilla fine-tuning.
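As a rough illustration of the prompt-based setup, here's a sketch of how labelled demonstrations and a cloze-style template can be combined into a single input for a masked language model. The template and label words below are illustrative choices; the paper searches for good templates and label words automatically and then fine-tunes the model.

```python
# Minimal sketch of prompt construction for prompt-based few-shot fine-tuning.
LABEL_WORDS = {"positive": "great", "negative": "terrible"}  # assumed mapping
TEMPLATE = "{sentence} It was [MASK]."

def build_prompt(query: str, demonstrations: list[tuple[str, str]]) -> str:
    """Concatenate labelled demonstrations with the query, each rendered
    through the same template; the model is trained to put the right
    label word in the final [MASK]."""
    parts = []
    for sent, label in demonstrations:
        parts.append(TEMPLATE.format(sentence=sent).replace("[MASK]", LABEL_WORDS[label]))
    parts.append(TEMPLATE.format(sentence=query))
    return " ".join(parts)

demos = [
    ("A gripping, beautifully shot film.", "positive"),
    ("Two hours of my life I will never get back.", "negative"),
]
print(build_prompt("The plot was thin but the acting carried it.", demos))
```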
CLIP: Connecting text and images, OpenAI. This paper introduces Contrastive Language-Image Pre-training (CLIP), which is based on the idea of using natural language as a flexible prediction space to enable generalization and transfer of neural networks across tasks and domains. The approach uses text paired with images found on the internet and tasks a model with predicting which of a set of 32,768 randomly sampled text snippets was actually paired with a given image in the dataset. In simple terms, the model learns to associate an image with the text that describes it. The authors show that CLIP is useful for overcoming the costly dataset generation and narrow object classes that benchmark datasets are bound by.
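The training objective itself is pleasingly simple. Here's a minimal PyTorch sketch of the symmetric contrastive loss at the heart of this kind of approach, with random tensors standing in for the image and text encoder outputs, so it's an illustration of the idea rather than OpenAI's code.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text embeddings:
    the i-th image should match the i-th text and nothing else in the batch.
    (Temperature is fixed here for simplicity; CLIP learns it.)"""
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    logits = image_features @ text_features.t() / temperature  # [batch, batch]
    targets = torch.arange(logits.size(0))
    loss_i = F.cross_entropy(logits, targets)       # image -> text
    loss_t = F.cross_entropy(logits.t(), targets)   # text -> image
    return (loss_i + loss_t) / 2

# Toy usage with random "encoder outputs" in place of real image/text encoders.
images = torch.randn(8, 512)
texts = torch.randn(8, 512)
print(clip_contrastive_loss(images, texts).item())
```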
DALL-E: Creating images from text, OpenAI. Another neat paper from OpenAI. DALL-E is a 12-billion parameter language model (like GPT-3) that is trained to generate images from text descriptions. The results are super impressive. The model also offers some level of controllability over the attributes and positions of small objects.
DeepCell Kiosk: scaling deep learning-enabled cellular image analysis with Kubernetes, Caltech. This paper demonstrates how quickly computer vision is making its way into microscopy, almost at a price point that is cheaper than grad students’ time :)
Deep learning-enabled medical computer vision, Salk, Google, Salesforce AI. A useful review on how deep learning has evolved in medical imaging, from images to video, and challenges to real-world clinical deployment.
A deep learning framework for drug repurposing via emulating clinical trials on real-world patient data, Ohio State University. This paper uses causal reasoning and deep learning to emulate randomized clinical trials using insurance claims data on nearly 1.2 million heart disease patients. They evaluated 55 candidate drugs and identified 6 that had not been tested for heart disease but appear to improve outcomes.
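Stripped of the deep sequence models over patient histories, the trial-emulation idea rests on standard causal adjustment. Here's a toy sketch using inverse propensity weighting on simulated claims-like data; the variables and data are invented, and this shows the general technique rather than the paper's exact architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulate observational data: who gets the drug depends on confounders,
# and the outcome depends on both the drug and the confounders.
rng = np.random.default_rng(0)
n = 5000
confounders = rng.normal(size=(n, 3))  # e.g. age, comorbidities (illustrative)
treated = rng.binomial(1, 1 / (1 + np.exp(-(confounders @ [0.8, -0.5, 0.3]))))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treated - confounders @ [0.6, 0.2, -0.4]))))

# 1) Model the propensity to receive the drug given confounders.
propensity = LogisticRegression().fit(confounders, treated).predict_proba(confounders)[:, 1]

# 2) Re-weight patients so treated and untreated groups resemble a randomized trial.
w = treated / propensity + (1 - treated) / (1 - propensity)
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"Estimated treatment effect on outcome rate: {ate:.3f}")
```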
Meta Pseudo Labels, Google AI. A new top-1 accuracy record of 90.2% is set on ImageNet. This work uses a semi-supervised approach called Pseudo Labels, in which a teacher network generates pseudo labels on unlabelled images that are then combined with labeled images to train the student network. This paper introduces a method to overcome the problem of wrongly predicted pseudo labels: Meta Pseudo Labels uses feedback from the student network to inform the teacher, which is continually trained to produce better pseudo labels as the student learns.
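Here's a heavily simplified schematic of that teacher-student feedback loop in PyTorch, with tiny linear "networks", random data, and a finite-difference stand-in for the paper's meta-gradient; it illustrates the structure of the method, not the actual recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher = torch.nn.Linear(16, 10)
student = torch.nn.Linear(16, 10)
opt_t = torch.optim.SGD(teacher.parameters(), lr=0.1)
opt_s = torch.optim.SGD(student.parameters(), lr=0.1)

for step in range(100):
    unlabeled = torch.randn(32, 16)
    labeled_x, labeled_y = torch.randn(32, 16), torch.randint(0, 10, (32,))

    # 1) Teacher produces hard pseudo labels on unlabeled data.
    pseudo_y = teacher(unlabeled).argmax(dim=-1)

    # Student's labeled loss before its update (baseline for the teacher's reward).
    with torch.no_grad():
        loss_before = F.cross_entropy(student(labeled_x), labeled_y)

    # 2) Student trains on the pseudo-labelled batch.
    loss_s = F.cross_entropy(student(unlabeled), pseudo_y)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    # 3) Teacher is rewarded when the updated student improves on labeled data:
    #    a finite-difference stand-in for the meta-gradient used in the paper.
    with torch.no_grad():
        loss_after = F.cross_entropy(student(labeled_x), labeled_y)
    h = loss_before - loss_after
    loss_t = h * F.cross_entropy(teacher(unlabeled), pseudo_y)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
```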
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, Google Brain. We’re now only 1 order of magnitude away from the 10 trillion parameter model we predicted in the State of AI Report 2020. These new sparse language models are significantly more sample efficient and give rise to 4-7x speed-ups over popular models like Google’s T5.
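The key ingredient is top-1 ("switch") routing: each token is sent to exactly one expert feed-forward network chosen by a learned router, so parameter count grows with the number of experts while per-token compute stays roughly constant. A minimal PyTorch sketch, omitting the capacity limits, load-balancing loss and distributed dispatch of the real implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFeedForward(nn.Module):
    """Minimal top-1 routing layer: each token goes to exactly one expert FFN."""
    def __init__(self, d_model=64, d_ff=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: [tokens, d_model]
        gate = F.softmax(self.router(x), dim=-1)
        expert_idx = gate.argmax(dim=-1)       # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Scale by the gate value so the router still receives gradients.
                out[mask] = gate[mask, i].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(SwitchFeedForward()(tokens).shape)
```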
TT-Rec: Tensor train compression for deep learning recommendation models, Facebook AI. Here is a method for significantly shrinking memory-intensive deep learning recommendation models so that they’re easier to deploy at scale.
Enforcing robust control guarantees within neural network policies, CMU, Johns Hopkins, Bosch. This paper explores how to design neural network controllers for safety-critical systems that come with robustness guarantees. To do so, they project the output of the neural network onto a space of certifiably stabilizing actions and train the network and projection end-to-end via reinforcement learning.
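Here's a toy sketch of that "propose, then project" structure. The certified safe set is stood in for by a simple norm ball, whereas the paper derives it from robust control conditions; the point is only that the projection is differentiable, so policy gradients can flow through it.

```python
import torch

def project_to_safe_set(u, u_max=1.0):
    """Project proposed actions onto the ball ||u|| <= u_max, a stand-in for a
    certified set of stabilizing actions. Differentiable, so RL training can
    backpropagate through the projection."""
    norm = u.norm(dim=-1, keepdim=True)
    scale = torch.clamp(u_max / (norm + 1e-8), max=1.0)
    return u * scale

# A small policy network proposes actions; the projection makes them "safe".
policy = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
state = torch.randn(16, 4)
safe_action = project_to_safe_set(policy(state))
print(safe_action.norm(dim=-1).max())  # never exceeds u_max
```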
Off-the-shelf deep learning is not enough, and requires parsimony, Bayesianity, and causality, Oak Ridge. Science seeks to be hypothesis-driven and causal, whereas modern deep learning by its nature is more correlative. This piece argues that “the broad adoption of Bayesian methods incorporating prior knowledge, development of solutions with incorporated physical constraints and parsimonious structural descriptors and generative models, and ultimately adoption of causal models, offers a path forward for fundamental and applied research.”
📑 Resources
Mapillary, the crowdsourced street-level imaging company now owned by Facebook, released an upgraded Vistas 2.0 dataset. With 25,000 images, the dataset has twice as many semantic categories as before, as well as approximate depth information.
Seeing the forest from the trees: a more disciplined approach for AI. My friend David rightly makes the case for designing, developing and operating AI “from a system’s perspective” rather than a “model’s perspective”.
Mark Saroufim from Graphcore walks us through a simple video explanation of AlphaFold 2.
Google AI reported their annual tome of research.
Action-directed GPT-2: an ML system that predicts the next best action for sales agents and generates a good response for that action.
Top applications of graph neural networks in 2021.
Naver AI lab researchers relabeled 1.28 million ImageNet training images.
RxR: A multilingual benchmark for navigation instruction following. This is a neat new benchmark for vision-and-language navigation of AI agents.
💰Startups and exits
Here’s a financing round highlight reel:
Cruise, the GM-owned autonomous vehicle company, raised $2B at a $30B post-money valuation. Kyle Vogt (CEO) shared the news: “A handful of front-runner AV companies that look most likely to win are attracting substantially all of the best human capital and a huge chunk of the financial capital. That is how a company without millions of customers can be valued at $30 billion.”
Paige, the AI-first histopathology image analysis company, raised a $100M Series C led by Casdin Capital and Johnson & Johnson Innovation. Paige is also developing biomarkers and diagnostics.
Oqton, which makes a factory operating system for additive manufacturing, raised a $40M Series A led by Fortino Capital.
Lumiata, a predictive analytics company for health care costs, raised a $14M Series B led by Defy.vc and AllegisNL.
Flexiv, a Chinese maker of adaptive robots for the manufacturing industry, raised a $100M Series B led by Meituan.
Starburst Data, the SQL-based data query engine based on Trino (fka Presto SQL) raised a $100M round at a $1.2B valuation led by a16z. This triples the company’s valuation from June 2020.
Oxbotica, the autonomous vehicle software company, raised a $47M Series B led by bp ventures (British Petroleum). What sets Oxbotica apart from most of its peers is its lack of focus on public-road autonomy. Instead, it has deployments for customers in mining and port logistics.
K Health, a mobile virtual healthcare provider, raised a $132M round at a $1.5B valuation. Consumers buy a subscription for $9/month for unlimited access to its service and physicians.
Aerobotics, a South African provider of robotics and AI solutions for fruit and tree farmers, raised a $17M Series B led by Naspers Foundry.
Starship Technologies, the OGs of sidewalk delivery robots, raised a $17M round from strategic investors Goodyear Ventures and TDK Ventures after seeing growth during the pandemic.
M&A and IPOs
FLIR Systems, a leading public company that makes thermal imaging sensors often used in cars, was acquired by Teledyne for $8 billion in cash and stock.
Aeva, a LiDAR startup that agreed to go public via a SPAC with InterPrivate, added a further $200M investment by Hong Kong hedge fund, Sylebra Capital. This brings the overall proceeds of the transaction to $560M.
Ariel AI, a London-based startup building on-device AR technology, was quietly acquired by Snap.
Signavio, a Berlin-based enterprise process mining company, was acquired by SAP for $1.2B. By June 2019, the company had over 1,300 customers. SAP entered the RPA market in December 2020, and a process mining product is the perfect addition.
Kount, a fraud prevention and digital identity solution, was acquired by Equifax for $640M. Kount has an “Identity Trust Global Network”, which they say uses AI to link trust and fraud signals from 17 billion unique devices and 5 billion annual transactions across 200 countries and territories. Over 9,000 companies use this network.
Nuvia, a 2-year-old startup that designs CPUs, was acquired by Qualcomm for $1.4B in its bid to expand its 5G capabilities.
---
Signing off,
Nathan Benaich, 14 February 2021
Air Street Capital | Twitter | LinkedIn | State of AI Report | RAAIS | London.AI
Air Street Capital is a venture capital firm investing in AI-first technology and life science companies. We’re an experienced team of investors and founders based in Europe and the US with a shared passion for working with entrepreneurs from the very beginning of their company building journey.